This post is a summary of “Anomaly Detection: A Survey” [2]. Anomaly detection refers to the problem of finding patterns in data that do not conform to expected behavior. These non-conforming patterns are often referred to as anomalies, outliers, discordant observations, exceptions, aberrations, surprises, peculiarities, or contaminants in different application domains.
Anomalies are patterns in data that do not conform to a well-defined notion of normal behavior, and they are often the most interesting instances to analyze. They should be distinguished from unwanted noise, which can also be found in the data, and from novelty detection, which aims at detecting previously unobserved (emergent, novel) patterns in the data.
Challenges for Anomaly Detection
Drawing the boundary between normal and anomalous behavior
Availability of labeled data
Noisy data
Type of Anomaly
Anomalies can be classified into the following three categories:
1. Point Anomalies - An individual data instance can be considered as anomalous with respect to the rest of the data.
2. Contextual Anomalies - If a data instance is anomalous in a specific context (but not otherwise), it is termed a contextual anomaly (also referred to as a conditional anomaly). Each data instance is defined using the following two sets of attributes:
Contextual attributes. The contextual attributes are used to determine the context (or neighborhood) for that instance, e.g., in time-series data, time is a contextual attribute that determines the position of an instance in the entire sequence.
Behavioral attributes. The behavioral attributes define the non-contextual characteristics of an instance, e.g., in a spatial data set describing the average rainfall of the entire world, the amount of rainfall at any location is a behavioral attribute.
To illustrate this, we will look at the "Exchange Rate History For Converting United States Dollar (USD) to Sri Lankan Rupee (LKR)" [1].
Contextual anomaly t2 in an exchange rate time series. Note that the exchange rate at time t1 is the same as that at time t2, but it occurs in a different context and hence is not considered an anomaly.
3. Collective Anomalies - A collection of related data instances is anomalous with respect to the entire data set
Data Labels
The labels associated with a data instance denote whether that instance is normal or anomalous. Depending on the availability of labels, anomaly detection techniques can operate in one of the following three modes:
Supervised anomaly detection - Techniques trained in supervised mode assume the availability of a training data set with labeled instances for both the normal and the anomaly class.
Semi-supervised anomaly detection - Techniques that operate in a semi-supervised mode assume that the training data has labeled instances for only the normal class; since they do not require labels for the anomaly class, they are more widely applicable than supervised techniques.
Unsupervised anomaly detection - Techniques that operate in unsupervised mode do not require training data, and thus are most widely applicable. These techniques implicitly assume that normal instances are far more frequent than anomalies in the test data. If this assumption does not hold, such techniques suffer from a high false alarm rate.
Output of Anomaly Detection
Anomaly detection techniques produce one of two types of output (a brief scoring sketch follows the list below):
Scores. Scoring techniques assign an anomaly score to each instance in the test data depending on the degree to which that instance is considered an anomaly
Labels. Techniques in this category assign a label (normal or anomalous) to each test instance
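To make the two output types concrete, here is a minimal sketch (not from the survey) that scores each instance by its z-score and then thresholds the score into a label; the data values and the cutoff of 2.0 are assumptions for illustration.

import java.util.Arrays;

public class ZScoreDetector {
    public static void main(String[] args) {
        double[] data = {10.1, 9.8, 10.3, 10.0, 9.9, 42.0, 10.2};
        double mean = Arrays.stream(data).average().orElse(0);
        double std = Math.sqrt(Arrays.stream(data)
                .map(v -> (v - mean) * (v - mean)).average().orElse(0));
        double threshold = 2.0; // assumed cutoff for turning a score into a label
        for (double v : data) {
            double score = Math.abs(v - mean) / std;                       // anomaly score
            String label = score > threshold ? "anomalous" : "normal";     // anomaly label
            System.out.printf("%.1f -> score %.2f, %s%n", v, score, label);
        }
    }
}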
Applications of Anomaly Detection
Intrusion Detection
Intrusion detection refers to the detection of malicious activity. The key challenge for anomaly detection in this domain is the huge volume of data, so semi-supervised and unsupervised anomaly detection techniques are preferred. Denning [3] classifies intrusion detection systems into host-based and network-based intrusion detection systems.
Host-Based Intrusion Detection Systems - These systems deal with operating system call traces.
Network Intrusion Detection Systems - These systems deal with detecting intrusions in network data. The intrusions typically occur as anomalous patterns (point anomalies), though certain techniques model the data in a sequential fashion [4] and detect anomalous subsequences (collective anomalies). A challenge faced by anomaly detection techniques in this domain is that the nature of anomalies keeps changing over time as intruders adapt their network attacks to evade existing intrusion detection solutions.
Fraud Detection
Fraud detection refers to the detection of criminal activities occurring in commercial organizations such as banks, credit card companies, insurance agencies, cell phone companies, the stock market, etc. These organizations are interested in immediate detection of such fraud to prevent economic losses. Similar detection techniques are used for credit card fraud and network intrusion detection.
Medical and Public Health Anomaly Detection
Anomaly detection techniques in the medical and public health domains typically work with patient records. The data can have anomalies due to several reasons, such as an abnormal patient condition, instrumentation errors, or recording errors. Anomaly detection is thus a very critical problem in this domain and requires a high degree of accuracy.
Industrial Damage Detection
Such damage needs to be detected early to prevent further escalation and losses. Sub-domains include fault detection in mechanical units and structural defect detection.
Image Processing
Anomaly detection techniques dealing with images are interested either in changes in an image over time (motion detection) or in regions that appear abnormal in a static image. This domain includes satellite imagery.
Anomaly Detection in Text Data
Anomaly detection techniques in this domain primarily detect novel topics, events, or news stories in a collection of documents or news articles. The anomalies are caused by a new interesting event or an anomalous topic.
Sensor Networks
Anomaly detection in sensor networks is challenging because the sensor data collected from the various wireless sensors has several unique characteristics.
References
[1] http://themoneyconverter.com/USD/LKR.aspx
[2] Varun Chandola, Arindam Banerjee, and Vipin Kumar. 2009. Anomaly detection: A survey. ACM Comput. Surv. 41, 3, Article 15 (July 2009), 58 pages. DOI=10.1145/1541880.1541882 http://doi.acm.org/10.1145/1541880.1541882
[3] Denning, D. E. 1987. An intrusion detection model. IEEE Transactions on Software Engineering 13, 2, 222–232.
[4] Gwadera, R., Atallah, M. J., and Szpankowski, W. 2004. Detection of significant sets of episodes in event sequences. In Proceedings of the Fourth IEEE International Conference on Data Mining. IEEE Computer Society, Washington, DC, USA, 3–10.
We use the Singleton design pattern in our applications whenever it is needed. As we know, in the Singleton design pattern only one instance is created, and it can be accessed throughout the whole application. But in some cases this singleton behavior can be broken.
There are mainly three techniques that can break the singleton property of a singleton class in Java. In this post, we will discuss how each of them breaks the pattern and how to prevent it.
Here is a sample Singleton class and a SingletonTest class.
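The original listings did not survive the export; here is a minimal sketch of what they could look like (eager initialization is an assumed implementation choice).

Singleton.java

package demo1;

public class Singleton {

    // Single shared instance, created eagerly when the class is loaded
    private static final Singleton instance = new Singleton();

    private Singleton() {
    }

    public static Singleton getInstance() {
        return instance;
    }
}

SingletonTest.java

package demo1;

public class SingletonTest {
    public static void main(String[] args) {
        Singleton objOne = Singleton.getInstance();
        Singleton objTwo = Singleton.getInstance();
        // Both calls return the same instance, so both hashcodes are identical
        System.out.println("Hashcode of Object 1 - " + objOne.hashCode());
        System.out.println("Hashcode of Object 2 - " + objTwo.hashCode());
    }
}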
Now we will break this pattern. First, we will use Java reflection.
Reflection
Java Reflection is an API used to examine or modify the behavior of methods, classes, and interfaces at runtime. Using the Reflection API, we can create multiple instances of a singleton class. Consider the following example.
ReflectionSingleton.java
package demo1;

import java.lang.reflect.Constructor;

public class ReflectionSingleton {
    public static void main(String[] args) {
        Singleton objOne = Singleton.getInstance();
        Singleton objTwo = null;
        try {
            // Grab the private constructor and force it accessible
            Constructor<Singleton> constructor = Singleton.class.getDeclaredConstructor();
            constructor.setAccessible(true);
            // Invoking it creates a second, separate instance
            objTwo = constructor.newInstance();
        } catch (Exception ex) {
            System.out.println(ex);
        }
        System.out.println("Hashcode of Object 1 - " + objOne.hashCode());
        System.out.println("Hashcode of Object 2 - " + objTwo.hashCode());
    }
}
This example shows how reflection can break the singleton pattern. Running it prints two different hashcodes, as below, which means two instances exist and the singleton pattern is broken.
There are many ways to protect a singleton from the Reflection API, but one of the best solutions is to throw a runtime exception in the constructor if an instance already exists. Then a second instance can never be created.
private Singleton() {
    if (instance != null) {
        throw new InstantiationError("Creating of this object is not allowed.");
    }
}
Deserialization
Serialization lets us save an object as a byte stream to a file or send it over a network. If you serialize the Singleton class and then deserialize that object, a new instance is created; hence deserialization breaks the Singleton pattern.
The code below illustrates how the Singleton pattern breaks with deserialization.
First, implement the Serializable interface in the Singleton class.
DeserializationSingleton.java
package demo1;

import java.io.*;

public class DeserializationSingleton {
    public static void main(String[] args) throws Exception {
        Singleton instanceOne = Singleton.getInstance();

        // Serialize the singleton instance to a file
        ObjectOutput out = new ObjectOutputStream(new FileOutputStream("file.text"));
        out.writeObject(instanceOne);
        out.close();

        // Deserializing creates a brand-new instance
        ObjectInput in = new ObjectInputStream(new FileInputStream("file.text"));
        Singleton instanceTwo = (Singleton) in.readObject();
        in.close();

        System.out.println("hashCode of instance 1 is - " + instanceOne.hashCode());
        System.out.println("hashCode of instance 2 is - " + instanceTwo.hashCode());
    }
}
The output is below and you can see two hashcodes.
To overcome this issue, we need to implement the readResolve() method in the Singleton class and return the same Singleton instance. Update Singleton.java with the method below.
protected Object readResolve() {
    return instance;
}
Now run the DeserializationSingleton class above and see the output; this time both hashcodes are the same.
Using the "clone" method we can create a copy of original object, samething if we applied clone in singleton pattern, it will create two instances one original and another one cloned object. In this case will break Singleton principle as shown in below code.
Implement the "Cloneable" interface and override the clone method in the above Singleton class.
If we look at the output above, the two instances have different hashcodes, which means they are not the same instance.
Prevent Singleton Pattern from Cloning
The code above breaks the Singleton principle, i.e., it creates two instances. To overcome this issue, we need to override the clone() method and throw a CloneNotSupportedException from it. If anyone tries to clone the Singleton, an exception is thrown, as in the code below.
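A sketch of the preventing override in Singleton.java:

@Override
protected Object clone() throws CloneNotSupportedException {
    // Refuse cloning so no second instance can ever be created this way
    throw new CloneNotSupportedException("Cloning of this Singleton is not allowed.");
}

Alternatively, the method can simply return the existing instance instead of throwing.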
Microservices can have a positive impact on your enterprise, so it is worth knowing how to handle Microservice Architecture (MSA) and some design patterns for microservices. There are general goals or principles for a microservice architecture; here are four goals to consider in the MSA approach [1].
Reduce Cost: MSA will reduce the overall cost of designing, implementing, and maintaining IT services.
Increase Release Speed: MSA will increase the speed from idea to deployment of services.
Improve Resilience: MSA will improve the resilience of our service network.
Enable Visibility: MSA supports better visibility of your services and network.
You also need to understand the principles on which microservice architecture is built:
Scalability
Availability
Resiliency
Flexibility
Independent, autonomous
Decentralized governance
Failure isolation
Auto-Provisioning
Continuous delivery through DevOps
Adhering to the above principles brings several challenges and issues while bringing your solution or system live. Those problems are common to many solutions and can be overcome by using the correct, matching design patterns. Design patterns for microservices can be divided into five groups, each containing many patterns, as the diagram below shows.
Design Patterns for Microservices
Decomposition Patterns
Decompose by Business Capability
Microservices are all about making services loosely coupled and applying the single responsibility principle. This pattern decomposes by business capability: define services corresponding to business capabilities. A business capability is a concept from business architecture modeling [2]. It is something that a business does in order to generate value. A business capability often corresponds to a business object, e.g.:
Order Management is responsible for orders
Customer Management is responsible for customers
Decompose by Subdomain
Decomposing an application using business capabilities might be a good start, but you will come across so-called "God Classes" which will not be easy to decompose. These classes will be common among multiple services. Instead, define services corresponding to Domain-Driven Design (DDD) subdomains. DDD refers to the application's problem space, the business, as the domain. A domain consists of multiple subdomains, and each subdomain corresponds to a different part of the business.
Subdomains can be classified as follows:
Core — key differentiator for the business and the most valuable part of the application
Supporting — related to what the business does but not a differentiator. These can be implemented in-house or outsourced
Generic — not specific to the business and are ideally implemented using off the shelf software
The subdomains of an order management application include:
Product catalog service
Inventory management services
Order management services
Delivery management services
Decompose by Transactions / Two-Phase Commit (2PC) Pattern
You can also decompose services over transactions; then there will be multiple transactions in the system. One of the important participants in a distributed transaction is the transaction coordinator [3]. A distributed transaction consists of two steps:
Prepare phase — during this phase, all participants of the transaction prepare for commit and notify the coordinator that they are ready to complete the transaction
Commit or Rollback phase — during this phase, either a commit or a rollback command is issued by the transaction coordinator to all participants
The problem with 2PC is that it is quite slow compared to the time for operation of a single microservice. Coordinating the transaction between microservices, even if they are on the same network, can really slow the system down, so this approach isn’t usually used in a high load scenario.
Strangler Pattern
The three design patterns above decompose applications for greenfield projects, but a large share of the work you do is with brownfield applications, which are big, monolithic applications (legacy codebases). The Strangler pattern comes to the rescue. It creates two separate applications that live side by side in the same URI space. Over time, the newly refactored application "strangles", or replaces, the original application until finally you can shut off the monolithic application. The Strangler Application steps are Transform, Coexist, and Eliminate [4]:
Transform — Create a parallel new site with modern approaches.
Coexist — Leave the existing site where it is for a time. Redirect from the existing site to the new one so the functionality is implemented incrementally.
Eliminate — Remove the old functionality from the existing site.
Bulkhead Pattern
Isolate elements of an application into pools so that if one fails, the others will continue to function. This pattern is named Bulkhead because it resembles the sectioned partitions of a ship’s hull. Partition service instances into different groups, based on consumer load and availability requirements. This design helps to isolate failures, and allows you to sustain service functionality for some consumers, even during a failure.
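As a rough illustration of the idea, here is a minimal sketch assuming plain JDK thread pools: each consumer group gets its own fixed pool, so exhaustion or slowdown in one pool cannot starve the other. The pool names and sizes are assumptions.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BulkheadDemo {
    // Separate, fixed-size pools: a failure or slowdown in one consumer group
    // can exhaust only its own pool, never the other one
    private static final ExecutorService criticalPool = Executors.newFixedThreadPool(10);
    private static final ExecutorService reportingPool = Executors.newFixedThreadPool(2);

    public static void main(String[] args) {
        criticalPool.submit(() -> System.out.println("order placed"));
        reportingPool.submit(() -> System.out.println("report generated"));
        criticalPool.shutdown();
        reportingPool.shutdown();
    }
}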
Sidecar Pattern
Deploy components of an application into a separate process or container to provide isolation and encapsulation. This pattern can also enable applications to be composed of heterogeneous components and technologies. It is named Sidecar because it resembles a sidecar attached to a motorcycle: the sidecar is attached to a parent application and provides supporting features for it. The sidecar also shares the same lifecycle as the parent application, being created and retired alongside it. The sidecar pattern is sometimes referred to as the sidekick pattern, and it is the last decomposition pattern that we show in this post.
Integration Patterns
API Gateway Pattern
When an application is broken down into smaller microservices, there are a few concerns that need to be addressed:
There are multiple calls for multiple microservices by different channels
There is a need to handle different types of protocols
Different consumers might need a different format of the responses
An API Gateway helps to address many concerns raised by the microservice implementation, not limited to the ones above.
An API Gateway is the single point of entry for any microservice call.
It can work as a proxy service to route a request to the concerned microservice.
It can aggregate the results to send back to the consumer.
This solution can create a fine-grained API for each specific type of client.
It can also convert protocol requests and responses.
It can also offload the authentication/authorization responsibility of the microservice. A small routing sketch follows this list.
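A minimal sketch of the single-entry-point routing idea, using only the JDK's built-in HTTP server and client; the ports, paths, and GET-only handling are assumptions for illustration, not a production gateway.

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MiniGateway {
    private static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // Single point of entry for all microservice calls
        HttpServer gateway = HttpServer.create(new InetSocketAddress(8080), 0);
        gateway.createContext("/orders", exchange -> forward(exchange, "http://localhost:8081"));
        gateway.createContext("/customers", exchange -> forward(exchange, "http://localhost:8082"));
        gateway.start();
    }

    // Proxy the incoming request to the concerned microservice
    private static void forward(HttpExchange exchange, String backend) {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(backend + exchange.getRequestURI()))
                    .GET()
                    .build();
            HttpResponse<byte[]> response = client.send(request, HttpResponse.BodyHandlers.ofByteArray());
            exchange.sendResponseHeaders(response.statusCode(), response.body().length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(response.body());
            }
        } catch (Exception e) {
            try {
                exchange.sendResponseHeaders(502, -1); // bad gateway, no body
            } catch (Exception ignored) {
            }
        }
    }
}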
Aggregator Pattern
When breaking the business functionality into several smaller logical pieces of code, it becomes necessary to think about how to combine the data returned by each service. This responsibility cannot be left with the consumer. The Aggregator pattern helps address this: it describes how we can aggregate data from different services and then send the final response to the consumer. This can be done in two ways [6]:
A composite microservice will make calls to all the required microservices, consolidate the data, and transform the data before sending back.
An API Gateway can also partition the request to multiple microservices and aggregate the data before sending it to the consumer.
If any business logic needs to be applied, it is recommended to choose a composite microservice; otherwise, the API Gateway is the established solution.
Proxy Pattern
In the Proxy pattern, we simply expose microservices over the API gateway, which allows us to add API features such as security and categorization of APIs in the gateway. In this example, the API gateway has three API modules:
Mobile API, which implements the API for the FTGO mobile client
Browser API, which implements the API to the JavaScript application running in the browser
Public API, which implements the API for third-party developers
Gateway Routing Pattern
The API gateway is responsible for request routing. An API gateway implements some API operations by routing requests to the corresponding service. When it receives a request, the API gateway consults a routing map that specifies which service to route the request to. A routing map might, for example, map an HTTP method and path to the HTTP URL of a service. This function is identical to the reverse proxying features provided by web servers such as NGINX.
Chained Microservice Pattern
A single service or microservice may have multiple dependencies, e.g., a sale microservice depends on the product microservice and the order microservice. The chained microservice design pattern helps you provide a consolidated outcome to your request. A request is received by microservice-1, which communicates with microservice-2, which in turn may communicate with microservice-3. All these service calls are synchronous.
Branch Pattern
A microservice may need to get data from multiple sources, including other microservices. The Branch microservice pattern is a mix of the Aggregator and Chain design patterns and allows simultaneous request/response processing from two or more microservices. The invoked microservice can itself be a chain of microservices. The Branch pattern can also be used to invoke different chains of microservices, or a single chain, based on your business needs.
Client-Side UI Composition Pattern
When services are developed by decomposing business capabilities/subdomains, the services responsible for user experience have to pull data from several microservices. In the monolithic world, there used to be only one call from the UI to a backend service to retrieve all data and refresh/submit the UI page. Now it won't be the same: with microservices, the UI has to be designed as a skeleton with multiple sections/regions of the screen/page, and each section makes a call to an individual backend microservice to pull the data. Frameworks like AngularJS and ReactJS help to do that easily. These screens are known as Single Page Applications (SPAs). Each team develops a client-side UI component, such as an AngularJS directive, that implements the region of the page/screen for their service, and a UI team is responsible for implementing the page skeletons that build pages/screens by composing multiple, service-specific UI components.
Database Patterns
When defining the database architecture for microservices, we need to consider the points below.
Services must be loosely coupled. They can be developed, deployed, and scaled independently.
Business transactions may enforce invariants that span multiple services.
Some business transactions need to query data that is owned by multiple services.
Databases must sometimes be replicated and sharded in order to scale.
Different services have different data storage requirements.
Database per Service
To solve the above concerns, one database per microservice must be designed; it must be private to that service only. It should be accessed by the microservice API only. It cannot be accessed by other services directly. For example, for relational databases, we can use private-tables-per-service, schema-per-service, or database-server-per-service.
Shared Database per Service
We have talked about one database per service being ideal for microservices; a shared database is considered an anti-pattern. But if the application is a monolith that is being broken into microservices, denormalization is not that easy. In a later phase we can move to the database-per-service pattern; until then we may follow this one. A shared database per service is not ideal, but it is a working solution for the above scenario. Most people consider this an anti-pattern for microservices, yet for brownfield applications it is a good start toward breaking the application into smaller logical pieces. It should not be applied to greenfield applications.
Command Query Responsibility Segregation (CQRS)
Once we implement database-per-service, some queries require joining data owned by multiple services, which is no longer possible with a single query. CQRS suggests splitting the application into two parts: the command side and the query side.
The command side handles the Create, Update, and Delete requests
The query side handles the query part by using the materialized views
The event sourcing pattern is generally used along with it to create events for any data change. Materialized views are kept updated by subscribing to the stream of events.
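A minimal in-memory sketch of the split; the names OrderCommandService and OrderQueryService and the string-encoded events are assumptions for illustration.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class CqrsDemo {
    // Command side: handles creates/updates and emits events
    static class OrderCommandService {
        private final List<Consumer<String>> subscribers = new ArrayList<>();

        void subscribe(Consumer<String> s) { subscribers.add(s); }

        void placeOrder(String orderId) {
            // ... write to the command-side store here ...
            subscribers.forEach(s -> s.accept("OrderPlaced:" + orderId)); // publish event
        }
    }

    // Query side: keeps a materialized view updated from the event stream
    static class OrderQueryService {
        private final Map<String, String> view = new HashMap<>();

        void onEvent(String event) {
            String[] parts = event.split(":");
            view.put(parts[1], parts[0]); // update the read-optimized view
        }

        String status(String orderId) { return view.get(orderId); }
    }

    public static void main(String[] args) {
        OrderCommandService commands = new OrderCommandService();
        OrderQueryService queries = new OrderQueryService();
        commands.subscribe(queries::onEvent);
        commands.placeOrder("o-1");
        System.out.println(queries.status("o-1")); // prints OrderPlaced
    }
}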
Event Sourcing
Most applications work with data, and the typical approach is for the application to maintain the current state. For example, in the traditional create, read, update, and delete (CRUD) model, a typical data process is to read data from the store, modify it, and update the current state. This approach has limitations: transactions are often needed, which lock the data.
The Event Sourcing pattern [8] defines an approach to handling operations on data that’s driven by a sequence of events, each of which is recorded in an append-only store. Application code sends a series of events that imperatively describe each action that has occurred on the data to the event store, where they’re persisted. Each event represents a set of changes to the data (such as AddedItemToOrder).
The events are persisted in an event store that acts as the system of record. Typical uses of the events published by the event store are to maintain materialized views of entities as actions in the application change them, and for integration with external systems. For example, a system can maintain a materialized view of all customer orders that are used to populate parts of the UI. As the application adds new orders, adds or removes items on the order, and adds shipping information, the events that describe these changes can be handled and used to update the materialized view. The figure shows an overview of the pattern.
Event Sourcing pattern [8]
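To make the append-only idea concrete, here is a minimal sketch assuming string-encoded events such as AddedItemToOrder; a real event store would persist these durably and publish them to subscribers.

import java.util.ArrayList;
import java.util.List;

public class EventSourcingDemo {
    // Append-only store acting as the system of record
    static class EventStore {
        private final List<String> events = new ArrayList<>();
        void append(String event) { events.add(event); }
        List<String> all() { return List.copyOf(events); }
    }

    public static void main(String[] args) {
        EventStore store = new EventStore();
        store.append("AddedItemToOrder:book");
        store.append("AddedItemToOrder:pen");
        store.append("RemovedItemFromOrder:pen");

        // Current state (a materialized view) is derived by replaying the events
        List<String> order = new ArrayList<>();
        for (String event : store.all()) {
            String[] parts = event.split(":");
            if (parts[0].equals("AddedItemToOrder")) order.add(parts[1]);
            else if (parts[0].equals("RemovedItemFromOrder")) order.remove(parts[1]);
        }
        System.out.println("Materialized order: " + order); // [book]
    }
}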
Saga Pattern
When each service has its own database and a business transaction spans multiple services, how do we ensure data consistency across services? Each request has a compensating request that is executed when the request fails. It can be implemented in two ways:
Choreography — When there is no central coordination, each service produces and listens to other services' events and decides whether an action should be taken. Choreography is a way of specifying how two or more parties, none of which has any control over the other parties' processes, or perhaps any visibility of those processes, can coordinate their activities and processes to share information and value. Use choreography when coordination across domains of control/visibility is required. In a simple scenario, you can think of choreography as being like a network protocol: it dictates acceptable patterns of requests and responses between parties.
Saga pattern — Choreography
Orchestration — An orchestrator (object) takes responsibility for a saga's decision making and sequencing of business logic. Use orchestration when you have control over all the actors in a process, when they are all in one domain of control and you can control the flow of activities. This is most often the case when you are specifying a business process that will be enacted inside one organization you have control over.
Saga pattern — Orchestration
Observability Patterns
Log Aggregation
Consider a use case where an application consists of multiple services. Requests often span multiple service instances, and each service instance generates a log file in a standardized format. We need a centralized logging service that aggregates logs from each service instance so that users can search and analyze the logs, and configure alerts that are triggered when certain messages appear in them. For example, PCF has a log aggregator that collects logs from each component (router, controller, Diego, etc.) of the PCF platform along with the applications; AWS CloudWatch does the same.
Performance Metrics
When the service portfolio increases due to a microservice architecture, it becomes critical to keep a watch on the transactions so that patterns can be monitored and alerts sent when an issue happens.
A metrics service is required to gather statistics about individual operations. It should aggregate the metrics of an application service, which provides reporting and alerting. There are two models for aggregating metrics:
Push — the service pushes metrics to the metrics service e.g. NewRelic, AppDynamics
Pull — the metrics service pulls metrics from the service e.g. Prometheus
Distributed Tracing
In a microservice architecture, requests often span multiple services. Each service handles a request by performing one or more operations across multiple services. While troubleshooting, it is worth having a trace ID so that we can trace a request end-to-end.
The solution is to introduce a transaction ID. The following approach can be used (a sketch follows the list):
Assigns each external request a unique external request id.
Passes the external request id to all services.
Includes the external request id in all log messages.
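A minimal sketch of the approach above, with method calls standing in for the downstream services; the X-Request-ID header is named only as a common convention, not a requirement.

import java.util.UUID;

public class TracingDemo {
    public static void main(String[] args) {
        // At the edge: assign each external request a unique id
        String requestId = UUID.randomUUID().toString();
        handleInServiceA(requestId);
    }

    static void handleInServiceA(String requestId) {
        log(requestId, "service-a", "received request");
        // Pass the id downstream, e.g. as an X-Request-ID header on the outgoing call
        handleInServiceB(requestId);
    }

    static void handleInServiceB(String requestId) {
        log(requestId, "service-b", "processing");
    }

    // Include the external request id in every log message
    static void log(String requestId, String service, String message) {
        System.out.printf("[%s] %s: %s%n", requestId, service, message);
    }
}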
Health Check
When a microservice architecture has been implemented, there is a chance that a service is up but not able to handle transactions. Each service needs an endpoint, such as /health, that can be used to check the health of the application. This API should check the status of the host, the connection to other services/infrastructure, and any specific logic.
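A minimal sketch of such an endpoint with the JDK's built-in HTTP server; the port, JSON shape, and the databaseReachable() helper are assumptions for illustration.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthCheckDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            // Check host status, downstream connections, and any specific logic here
            boolean healthy = databaseReachable();
            byte[] body = (healthy ? "{\"status\":\"UP\"}" : "{\"status\":\"DOWN\"}").getBytes();
            exchange.sendResponseHeaders(healthy ? 200 : 503, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }

    // Placeholder for a real connectivity check (assumed helper)
    static boolean databaseReachable() {
        return true;
    }
}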
Cross-Cutting Concern Patterns
External Configuration
A service typically calls other services and databases as well. For each environment, such as dev, QA, UAT, and prod, the endpoint URLs or some configuration properties might be different, and a change in any of those properties might require a re-build and re-deploy of the service.
To avoid code modification, externalized configuration can be used: externalize all the configuration, including endpoint URLs and credentials. The application should load these values either at startup or on the fly, and they should be refreshable without a server restart.
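A minimal sketch of loading externalized settings at startup; the file name service.properties and the orders.service.url key are assumptions for illustration.

import java.io.FileInputStream;
import java.util.Properties;

public class ExternalConfigDemo {
    public static void main(String[] args) throws Exception {
        // Load environment-specific settings at startup instead of hard-coding them
        Properties config = new Properties();
        config.load(new FileInputStream("service.properties"));
        String ordersUrl = config.getProperty("orders.service.url");
        System.out.println("Calling orders service at " + ordersUrl);

        // Environment variables are another common source of external configuration
        String dbUrl = System.getenv("DB_URL");
        System.out.println("Database at " + dbUrl);
    }
}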
Service Discovery Pattern
When microservices come into the picture, we need to address a few issues in terms of calling services.
With container technology, IP addresses are dynamically allocated to service instances, so every time an address changes, a consumer service can break and need manual changes.
Each service URL has to be remembered by the consumer, which makes the two tightly coupled.
A service registry needs to be created that keeps the metadata and specification of each producer service. A service instance should register to the registry when starting and de-register when shutting down. There are two types of service discovery (an in-memory registry sketch follows below):
Client-side: e.g., Netflix Eureka
Server-side: e.g., AWS ALB
service discovery [9]
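An in-memory sketch of the registry idea; a real registry such as Eureka adds heartbeats, health checks, and replication, and the random instance pick here is just naive load balancing for illustration.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class ServiceRegistry {
    // Service name -> currently registered instance addresses
    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    // Called by a service instance when it starts
    public void register(String service, String address) {
        instances.computeIfAbsent(service, s -> new CopyOnWriteArrayList<>()).add(address);
    }

    // Called when the instance shuts down
    public void deregister(String service, String address) {
        instances.getOrDefault(service, new CopyOnWriteArrayList<>()).remove(address);
    }

    // Client-side discovery: the consumer looks up a live address before calling
    public String lookup(String service) {
        List<String> addrs = instances.getOrDefault(service, List.of());
        if (addrs.isEmpty()) throw new IllegalStateException("no instance of " + service);
        return addrs.get((int) (Math.random() * addrs.size()));
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("orders", "10.0.0.5:8081");
        System.out.println(registry.lookup("orders"));
    }
}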
Circuit Breaker Pattern
A service generally calls other services to retrieve data, and there is the chance that the downstream service may be down. There are two problems with this: first, the request will keep going to the down service, exhausting network resources, and slowing performance. Second, the user experience will be bad and unpredictable.
The consumer should invoke the remote service via a proxy that behaves in a similar fashion to an electrical circuit breaker. When the number of consecutive failures crosses a threshold, the circuit breaker trips, and for the duration of a timeout period all attempts to invoke the remote service fail immediately. After the timeout expires, the circuit breaker allows a limited number of test requests to pass through; if those requests succeed, the circuit breaker resumes normal operation, and otherwise the timeout period begins again. This pattern is suited to preventing an application from trying to invoke a remote service or access a shared resource when the operation is highly likely to fail.
Circuit Breaker Pattern [10]
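A minimal sketch of the proxy described above; a dedicated half-open state is folded into the "let one test request through" branch, and the thresholds are illustrative, not prescriptive.

public class CircuitBreaker {
    enum State { CLOSED, OPEN }

    private final int failureThreshold;
    private final long timeoutMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;
    private State state = State.CLOSED;

    CircuitBreaker(int failureThreshold, long timeoutMillis) {
        this.failureThreshold = failureThreshold;
        this.timeoutMillis = timeoutMillis;
    }

    public String call(java.util.function.Supplier<String> remoteCall) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < timeoutMillis) {
                // Fail fast while the breaker is open
                throw new IllegalStateException("circuit open, call rejected");
            }
            state = State.CLOSED; // timeout expired: let a test request through
        }
        try {
            String result = remoteCall.get();
            consecutiveFailures = 0; // success resets the breaker
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                state = State.OPEN;           // trip the breaker
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }
}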
Blue-Green Deployment Pattern
With a microservice architecture, one application can have many microservices. If we stop all the services and then deploy an enhanced version, the downtime will be huge and can impact the business; the rollback will also be a nightmare. The Blue-Green Deployment pattern avoids this.
The blue-green deployment strategy can be implemented to reduce or remove downtime. It achieves this by running two identical production environments, Blue and Green. Let’s assume Green is the existing live instance and Blue is the new version of the application. At any time, only one of the environments is live, with the live environment serving all production traffic. All cloud platforms provide options for implementing a blue-green deployment.
Blue-Green Deployment Pattern
References
[1] "Microservice Architecture: Aligning Principles, Practices, and Culture" Book by Irakli Nadareishvili, Matt McLarty, and Michael Amundsen
The last few years have been great for API gateways and API companies. APIs (Application Programming Interfaces) are allowing businesses to expand beyond their enterprise boundaries to drive revenue through new business models. Larger enterprises are adopting the API paradigm, developing many internal and external services that developers connect to in order to create user-facing products. The number of API management products kept increasing in 2018, and APIs and API management gateways can be found in a lot of enterprises today. Those enterprise-level API management solutions allow external companies and external users to use these APIs. Enlightened businesses recognize APIs as a new channel to market.
Companies with API-M and End-users
As the figure above shows, Companies A, B, and C are interconnected through API-M solutions, and end-users are also connected to the APIs. Developers are rapidly driving new opportunities for businesses, developers, and customers from these APIs.
API Economy
The API Economy allows access to business assets and digital services through a simple-to-use API. As software companies see the economic advantages of integration, many large, monolithic software systems currently supported on premises will decompose into highly organized sets of microservices available in the cloud.
The ultimate goal of the API economy is to facilitate the creation of user-focused apps that support line-of-business goals. Enterprises use APIs to bring together ecosystem partners and unlock new sources of value. Successful companies will see APIs not just as technical tools but as sources of strategic value in today's digital economy, as managers look for creative ways to monetize services and assets through APIs.
API Monetization
Providing access to assets or services via APIs enables new and innovative usage of those assets, driving additional revenue. This is referred to as API monetization. APIs can be used in direct or indirect monetization models: either someone pays you for the use of the API (e.g., APIs for banks, news, or telcos), or you pay them to use it (e.g., APIs for advertising and marketing).
API Monetization Models
Free
The Free model is typically used for low-valued assets. While no money is exchanged, there must clearly be a business purpose; the Free model is tried when a company wants to drive brand or loyalty and enter new channels. Many portals use such APIs, for example rapidapi and any-api. Some companies use the free model to identify user usage patterns; an example is the Facebook APIs.
Subscriber Pays
Subscriber Pays model
The API must be of value to the subscriber, who may obtain downstream revenue through its use. The subscriber may, for example, develop an application or mobile app using those APIs.
Pay As You Go: The developer/subscriber pays for what has been used. There are no minimums and no tiers. It is usually billed periodically (e.g. monthly / weekly).
Freemium: The basic API is free, with higher-value APIs priced.
Tiered: There are multiple tiered options; a developer/subscriber chooses the tier they believe they need and pays for that tier. Each tier defines a level of access.
Unit Based: Different API features or APIs have different values and are assigned a number of units. The developer buys units before using the API, and the balance is reduced with usage.
Point (Box) Based: Similar to Unit Based; points are bought before API usage and are reduced per call, and calls can fall into categories, as in Freemium.
Transaction Fee: A fixed amount or a percentage of each transaction is paid to the API provider.
Subscriber Gets Paid
Subscriber gets paid
The API gateway holder provides a monetary incentive for a developer to leverage your web API. Basic scenarios include selling an asset or service through an agent. This payment method is seen a lot in the marketing industry.
Revenue Share: The consumer acts as an agent helping to sell the provider's product/asset. A fixed amount or a percentage of each transaction is paid to the API consumer.
Affiliate: In this model, a partner includes your content/advertisements to drive potential customer traffic to you. There are several possible sub-models:
Cost Per Action (CPA): The developer earns an affiliate commission based on a successful conversion, generally a flat rate per user who subscribes to the merchant's API/service. There can also be a commission structure.
Cost Per Click (CPC): The developer/API subscriber is paid for every click they send to the merchant's site or API.
Sign-up Referral: The developer gets paid once they onboard an API consumer (completing the process). There are two sub-models: in the first, a pre-defined amount is paid for each completed onboarding; the second is 'Recurring', where the developer keeps getting paid for each consumer after the third party completes the process of API calls.
Indirect Payment
Indirect Payment Methods
With indirect payment, the API achieves some goal that drives the business model.
Content Acquisition: APIs allow content submission by third parties, which attracts customers to you.
Content Syndication: APIs allow third parties to distribute your content. Multiple financial models may surround this; you might create a contract between the parties.
Software as a Service (SaaS): More than one lever should drive SaaS pricing; API-based pricing alone makes things one-dimensional. It is easy to set up additional user parameters to define pricing as an add-on, and pricing helps attract a different set of users. This model can be seen in many places, e.g., Salesforce (upsell model). Software as a service is a software licensing and delivery model in which software is licensed on a subscription basis; it helps reduce the licensing price, with cost also depending on the features in the software.
Internal Use Consumer: APIs are used by the same company's employees to build customer-facing capabilities for the company. Typical scenarios include creating mobile apps and web commerce sites.
Internal Nonconsumer: APIs are used internally to assist in productivity and to align cross-lines-of-business and business units in the company. Typical scenarios include providing simplified, secure access to systems of record and managing assets; these help handle charge-back of company assets across business units.
B2B Customer: APIs are used by your customers to integrate into your enterprise. Customer value is provided through the use of the API, so customers are incented to use it. In the same way, these APIs are used to expand into new geographies or new demographics, offer new products, or upsell new capabilities to existing clients.
B2B Partner: APIs are used by your partners to integrate into your enterprise. This is used to increase existing partner relationships or expand to new partners.
kubectl (the Kubernetes command-line tool) is used to deploy and manage applications on Kubernetes. Using kubectl, you can inspect cluster resources and create, delete, and update components.
NOTE
You must use a kubectl version that is within one minor version difference of your cluster. If not, you may see errors like the one below.
WSO2 Enterprise Integrator is shipped with a separate message broker profile (WSO2 MB). In this post I will be using the message broker profile in EI 6.3.0.
1) Setting up the message broker profile
1.1) Copy the following JAR files from the <EI_HOME>/wso2/broker/client-lib/ directory to the <EI_HOME>/lib/ directory.
andes-client-3.2.13.jar
geronimo-jms_1.1_spec-1.1.0.wso2v1.jar
org.wso2.securevault-1.0.0-wso2v2.jar
1.2) Open the <EI_HOME>/conf/jndi.properties file and add the following line after the queue.MyQueue = example.MyQueue line:
queue.JMSMS=JMSMS
1.3) Open the <EI_HOME>/conf/axis2/axis2.xml file and uncomment the configuration for JMS transport support with the WSO2 EI broker profile.
There you will find transportReceiver and transportSender.
1.4) Add the transport.jms.SessionTransacted parameter for each transportReceiver.
2.4) Then improve the sequence to send the value (XML) to the endpoint.
If the endpoint has an issue, the message should be retried and then moved to the dead letter channel.
The Dead Letter Channel (DLC) is a subset of a queue, specifically designed to persist messages that are typically marked for deletion, providing you with a choice of whether to delete, retrieve, or reroute the messages from the DLC.
When two or more applications want to exchange data, they do so by sending the data through a channel that connects them. The application sending the data may not know which application will receive it, but by selecting a particular channel to send the data on, the sender knows that the receiver will be one that is looking for that sort of data on that channel.
When designing an application, a developer has to know where to put what types of data to share that data with other applications, and likewise where to look for what types of data coming from other applications. These paths of communication cannot be dynamically created and discovered at runtime; they need to be agreed upon at design time so that the application knows where its data is coming from and where the data is going to. One exception is the reply channel in Request-Reply. The requestor can create or obtain a new channel the replier knows nothing about, specify it as the Return Address of a request message, and then the replier can make use of it. Another exception is messaging system implementations that support hierarchical channels. A receiver can subscribe to a parent in the hierarchy, then a sender can publish to a new child channel the receiver knows nothing about, and the subscriber will still receive the message.
First the applications determine the channels the messaging system will need to provide. Subsequent applications will try to design their communication around the channels that are available, but when this is not practical, they will require that additional channels be added. When a set of applications already use a certain set of channels, and new applications wish to join in, they too will use the existing set of channels. When existing applications add new functionality, they may require new channels.
Another common source of confusion is whether a Message Channel is unidirectional or bidirectional. Technically, it is neither; a channel is more like a bucket that some applications add data to and other applications take data from. But because the data is in messages that travel from one application to another, that gives the channel direction, making it unidirectional. If a channel were bidirectional, an application would both send messages to and receive messages from the same channel, so it would tend to keep consuming its own messages, the very messages it is supposed to be sending to other applications. So for all practical purposes, channels are unidirectional. As a consequence, for two applications to have a two-way conversation, they need two channels, one in each direction.
Therefore, different types of channels are used in a messaging system. A message channel is a basic architectural pattern of a messaging system and is used fundamentally for exchanging data between applications.
One-to-one or one-to-many
When an application shares data with just one other application, or with whichever single application is interested in that data, you can use a Point-to-Point Channel. This does not guarantee that every piece of data sent on the channel will necessarily go to the same receiver, because the channel might have multiple receivers; it does ensure that any one piece of data will be received by only one of the applications.
If all of the receivers need to receive the data, use a Publish-Subscribe Channel: the channel effectively copies each piece of data and passes it to each of the receivers. Put simply, the sender broadcasts an event to all interested receivers. A sketch of both channel types follows.
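A minimal in-process sketch of the two delivery styles, using a JDK queue for point-to-point and plain listeners for publish-subscribe; a real messaging system does this across processes and machines.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

public class ChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        // Point-to-point: each message is taken by exactly one receiver
        BlockingQueue<String> pointToPoint = new ArrayBlockingQueue<>(10);
        pointToPoint.put("order-123");
        System.out.println("single receiver got: " + pointToPoint.take());

        // Publish-subscribe: the channel hands a copy of each message to every subscriber
        List<Consumer<String>> subscribers = new ArrayList<>();
        subscribers.add(m -> System.out.println("billing got: " + m));
        subscribers.add(m -> System.out.println("shipping got: " + m));
        subscribers.forEach(s -> s.accept("order-123"));
    }
}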
Type of data (Datatype Channel)
The message contents must conform to some type so that the receiver understands the data’s structure. Datatype Channel is the principle that all of the data on a channel has to be of the same type. This is the main reason why messaging systems need lots of channels; if the data could be of any type, the messaging system would only need one channel (in each direction) between any two applications.
Invalid and dead messages
The messaging system can ensure that a message is delivered properly, but it cannot guarantee that the receiver will know what to do with it; the receiver has expectations about the data's type and meaning. What the receiver can do, though, is put the strange message on a specially designated Invalid Message Channel, in the hope that some utility monitoring the channel will pick up the message and figure out what to do with it.
Many messaging systems have a similar built-in feature, a Dead Letter Channel for messages which are successfully sent but ultimately cannot be successfully delivered. Again, hopefully some utility monitoring the channel will know what to do with the messages that could not be delivered.
Crash proof
What happens if the messaging system crashes or is shut down for maintenance: when it is back up and running, will the messages still be in its channels? By default, no; channels store their messages in memory. However, Guaranteed Delivery makes channels persistent so that their messages are stored on disk. This hurts performance but makes messaging more reliable.
Non-messaging clients
An application may be unable to connect to a messaging system but still want to participate in messaging. If the messaging system can connect to the application somehow, through its user interface, its business services API, its database, or through a network connection such as TCP/IP or HTTP, then a Channel Adapter on the messaging system can be used to connect a channel (or set of channels) to the application without having to modify the application, and perhaps without having to have a messaging client running on the same machine as the application.
Sometimes the "non-messaging client" really is a messaging client, just for a different messaging system. In that case, an application that is a client on both messaging systems can build a Messaging Bridge between the two, effectively connecting them into one composite messaging system.
2) Open it in your IDE (IDEA or Eclipse, respectively) by going into the directory and running:
mvn idea:idea
mvn eclipse:eclipse
3) Change the context to ‘/automobile’ from ‘/service’
4) Create a new Java class called "Automobile" that will contain the blueprint for our microservice. You can use the IDE's generator to build the getters, setters, and constructor, as in the sketch below.
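A minimal sketch of such a class; the fields make, model, and year are assumptions for illustration.

public class Automobile {
    private String make;
    private String model;
    private int year;

    public Automobile() {
    }

    public Automobile(String make, String model, int year) {
        this.make = make;
        this.model = model;
        this.year = year;
    }

    public String getMake() { return make; }
    public void setMake(String make) { this.make = make; }

    public String getModel() { return model; }
    public void setModel(String model) { this.model = model; }

    public int getYear() { return year; }
    public void setYear(int year) { this.year = year; }
}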
WSO2 API Manager includes five main components: the Publisher, Store, Gateway, Traffic Manager, and Key Manager.
API Gateway - responsible for securing, protecting, managing, and scaling API calls. It intercepts API requests and applies policies such as throttling and security checks. It is also instrumental in gathering API usage statistics.
API Store - provides a space for consumers to self-register, discover API functionality, subscribe to APIs, evaluate them, and interact with API publishers.
API Publisher - enables API providers to easily publish their APIs, share documentation, provision API keys, and gather feedback on API features, quality, and usage.
API Key Manager Server - responsible for all security and key-related operations. When an API call is sent to the Gateway, it calls the Key Manager server and verifies the validity of the token provided with the API call.
API Traffic Manager - regulates API traffic, makes APIs and applications available to consumers at different service levels, and secures APIs against security attacks. The Traffic Manager features a dynamic throttling engine to process throttling policies in real time, including rate limiting of API requests.
LB (load balancers) - A distributed deployment requires two load balancers. The first load balancer (NGINX Plus) manages the cluster internally. The second load balancer is set up externally to handle the requests sent to the clustered server nodes and to provide failover and autoscaling; it may be NGINX Plus or any other third-party product.
RDBMS (shared databases) - The distributed deployment setup depicted above shares the following databases among the APIM components set up in separate server nodes.
- User Manager Database : Stores information related to users and user roles. This information is shared among the Key Manager Server, Store, and Publisher
- API Manager Database : Stores information related to the APIs along with the API subscription details. The Key Manager Server uses this database
- Registry Database : Shares information between the Publisher and Store
Note
It is recommended to separate the worker and manager nodes in scenarios where you have multiple Gateway nodes
Message Flow
The three main use cases of API Manager are API publishing, subscribing, and invoking.
WSO2 API Manager deployment patterns
Pattern 1 (Single node) All-in-one deployment
Pattern 2 (Partially Distributed Deployment) Deployment with a separate Gateway and separate Key Manager
Pattern 3 (Fully distributed setup) It provides scalability at each layer and higher flexibility for each component
Pattern 4 (Internal and external / on-premise API Management) This pattern requires separate internal and external API Management with separated Gateway instances
Pattern 5 (Internal and external / public and private cloud API Management) It maintains a cloud deployment as an external API Gateway layer
Database Configuration for Distributed Deployment
API Manager Profiles
The following are the different profiles available in WSO2 API Manager.
Gateway manager: Acts as a manager node in a cluster. This profile starts frontend/UI features such as login as well as backend services that allow the product instance to communicate with other nodes in the cluster.
Gateway worker: Acts as a worker node in a cluster. This profile starts the backend features for data processing and communicating with the manager node.
Key Manager: Handles features relevant to the Key Manager component of the API Manager.
Traffic Manager: Handles features relevant to the Traffic Manager component. The Traffic Manager helps users to regulate API traffic, make APIs and applications available to consumers at different service levels, and secure APIs against security attacks. The Traffic Manager features a dynamic throttling engine to process throttling policies in real time, including rate limiting of API requests.
API Publisher: Only starts the front end/backend features relevant to the API Publisher.
Developer Portal: Only starts the front end/backend features relevant to the Developer Portal (API Store).
SMPP (Short Message Peer-to-Peer) is an open, industry-standard protocol designed to provide a flexible data communications interface for the transfer of short message data between a Message Center, such as a Short Message Service Centre (SMSC), a GSM Unstructured Supplementary Services Data (USSD) server, or another type of message center, and an SMS application system, such as a WAP proxy server, e-mail gateway, or other messaging gateway. The advantage of supporting the SMPP protocol with the Axis2 SMS transport is that it can be used to send and receive high volumes of short messages very fast. SMPP is an application-layer protocol that can be used over TCP. There are many SMPP gateways available in the world, and now almost all message centers support SMPP.
Use case 01
There is an HTTP SMS API that the user can invoke with an HTTP call carrying JSON. The API sends an SMS with the message given in the JSON request to the number given in the JSON request.
An SMSC simulator is an application that can act like an SMSC. Using a simulator, we can test our scenario without having access to a real SMSC; for real production servers we have to use a real SMSC. Here we will be using OpenSmpp (https://github.com/OpenSmpp/opensmpp).