What is Helix?

Apache Helix is used for the automatic management of partitioned, replicated, and distributed resources hosted on a cluster of nodes. Helix automates the reassignment of resources in the face of node failure and recovery, cluster expansion, and reconfiguration. It does this by modeling a distributed system as a state machine with constraints on states and transitions.

Terminology

  • Node: A single machine
  • Cluster: A set of nodes
  • Resource: A logical entity (e.g. a database, index, or task)
  • Partition: A subset of the resource (each subtask is referred to as a partition)
  • Replica: A copy of a partition (e.g. Master, Slave). Replicas increase the availability of the system
  • State: Describes the role of a replica (each node in the cluster has its own Current State)
  • State Machine and Transitions: A transition is an action that moves a replica from one state to another, thus changing its role (e.g. Slave --> Master)
  • Spectators: The external clients. Helix provides an External View, which is an aggregated view of the current state across all nodes.
  • Current State: Represents a resource's actual state at a participating node.
    - INSTANCE_NAME: Unique name representing the process
    - SESSION_ID: ID that is automatically assigned every time a process joins the cluster
  • Rebalancer: The core component of Helix is the Controller, which runs the rebalancing algorithm on every cluster event.
  • Dynamic Ideal State: Helix's power comes from the fact that the Ideal State can be changed dynamically; whenever a cluster event occurs, the controller adjusts the Ideal State. Helix can operate in one of three rebalancing modes:
  1. FULL_AUTO
  2. SEMI_AUTO
  3. CUSTOMIZED

Cluster events can be one of the following:

  • Nodes start and/or stop
  • Nodes experience soft and/or hard failures
  • New nodes are added/removed
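
To make the state-machine idea concrete, here is a small, plain-Java illustration of a Master-Slave style state model with constrained transitions (this is only an illustration of the concept, not the Helix API):

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;
    import java.util.Set;

    public class MasterSlaveStateModel {

        // States a replica can be in.
        enum State { OFFLINE, SLAVE, MASTER }

        // Allowed transitions: a replica may only move along these edges.
        private static final Map<State, Set<State>> ALLOWED = new EnumMap<>(State.class);
        static {
            ALLOWED.put(State.OFFLINE, EnumSet.of(State.SLAVE));
            ALLOWED.put(State.SLAVE, EnumSet.of(State.MASTER, State.OFFLINE));
            ALLOWED.put(State.MASTER, EnumSet.of(State.SLAVE));
        }

        private State current = State.OFFLINE;

        // Apply a transition only if the state machine's constraints allow it.
        public void transitionTo(State target) {
            if (!ALLOWED.get(current).contains(target)) {
                throw new IllegalStateException(current + " -> " + target + " is not allowed");
            }
            current = target;
        }
    }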


[1] http://helix.apache.org/Concepts.html


  1. We use the Singleton design pattern in our applications whenever it is needed. As we know, the singleton pattern allows only one instance to be created, and that instance can be accessed throughout the application. In some cases, however, this singleton behavior can be broken.

    There are mainly three techniques that can break the singleton property of a singleton class in Java. In this post, we will discuss how each of them breaks it and how to prevent that.

    Here are a sample Singleton class and a SingletonTest class.

    Singleton.java

    package demo1;
    
    public final class Singleton {
    
        private static volatile Singleton instance = null;
    
        private Singleton() {
        }
    
        public static Singleton getInstance() {
            if (instance == null) {
                synchronized (Singleton.class) {
                    if (instance == null) {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }

    SingletonTest.java


    package demo1;
    
    public class SingletonTest {
        public static void main(String[] args) {
            Singleton object1 = Singleton.getInstance();
            Singleton object2 = Singleton.getInstance();
            System.out.println("Hashcode of Object 1 - " + object1.hashCode());
            System.out.println("Hashcode of Object 2 - " + object2.hashCode());
        }
    }

    Here is the output; you can see the same hashcode for object1 and object2:

    Hashcode of Object 1 - 1836019240
    Hashcode of Object 2 - 1836019240
    
    
    Now we will break this pattern. First, we will use Java reflection.

    Reflection

    Java Reflection is an API used to examine or modify the behavior of methods, classes, and interfaces at runtime. Using the Reflection API, we can create multiple instances of a singleton class. Consider the following example.

    ReflectionSingleton.java

    package demo1;
    
    import java.lang.reflect.Constructor;
    
    public class ReflectionSingleton {
        public static void main(String[] args)  {
    
            Singleton objOne = Singleton.getInstance();
            Singleton objTwo = null;
            try {
                // Access the private constructor via reflection and create a second instance
                Constructor<Singleton> constructor = Singleton.class.getDeclaredConstructor();
                constructor.setAccessible(true);
                objTwo = constructor.newInstance();
            } catch (Exception ex) {
                System.out.println(ex);
            }
    
            System.out.println("Hashcode of Object 1 - "+objOne.hashCode());
            System.out.println("Hashcode of Object 2 - "+objTwo.hashCode());
    
        }
    }
    
    
    This example shows how reflection can break the singleton pattern. You will get two different hashcodes, as shown below, which means the singleton pattern has been broken.

    Hashcode of Object 1 - 1836019240
    Hashcode of Object 2 - 325040804
    
    
    Prevent Singleton Pattern from Reflection

    There are many ways to protect a singleton from the Reflection API, but one of the best solutions is to throw a runtime exception in the constructor if an instance already exists. That way, a second instance can never be created.

        private Singleton() {
            if( instance != null ) {
               throw new InstantiationError( "Creating of this object is not allowed." );
            }
        }
    
    

    Deserialization

    In serialization, we can save an object as a byte stream to a file or send it over a network. If you serialize the Singleton instance and then deserialize it, the deserialized object will be a new instance; hence deserialization breaks the Singleton pattern.

    The code below illustrates how the Singleton pattern breaks with deserialization.

    First, implement the Serializable interface in the Singleton class.
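
    For reference, a minimal sketch of the change to the class declaration (the fields, private constructor, and getInstance() stay exactly as before):

    import java.io.Serializable;

    public final class Singleton implements Serializable {
        // ... existing fields, private constructor, and getInstance() remain unchanged ...
    }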

    DeserializationSingleton.java

    package demo1;
    
    import java.io.*;
    
    public class DeserializationSingleton {
    
        public static void main(String[] args) throws Exception {
    
            Singleton instanceOne = Singleton.getInstance();
            ObjectOutput out = new ObjectOutputStream(new FileOutputStream("file.text"));
            out.writeObject(instanceOne);
            out.close();
    
            ObjectInput in = new ObjectInputStream(new FileInputStream("file.text"));
            Singleton instanceTwo = (Singleton) in.readObject();
            in.close();
    
            System.out.println("hashCode of instance 1 is - " + instanceOne.hashCode());
            System.out.println("hashCode of instance 2 is - " + instanceTwo.hashCode());
        }
    
    }
    The output is below, and you can see two different hashcodes:

    hashCode of instance 1 is - 2125039532
    hashCode of instance 2 is - 381259350

    Prevent Singleton Pattern from Deserialization

    To overcome this issue, we need to implement the readResolve() method in the Singleton class and return the existing Singleton instance. Update Singleton.java with the method below.

        protected Object readResolve() {
            return instance;
        }

    Now run the DeserializationSingleton class above again and see the output.

    hashCode of instance 1 is - 2125039532
    hashCode of instance 2 is - 2125039532

    Cloning

    Using the clone() method, we can create a copy of the original object. Similarly, if we apply clone() to a singleton, it creates two instances: the original and the cloned object. This breaks the Singleton principle, as shown in the code below.

    Implement the "Cloneable" interface and override the clone method in the above Singleton class.

    Singleton.java


        @Override
        protected Object clone() throws CloneNotSupportedException  {
            return super.clone();
        }

    Then test breaking the singleton with cloning.
    CloningSingleton.java


    public class CloningSingleton {
        public static void main(String[] args) throws CloneNotSupportedException, Exception {
            Singleton instanceOne = Singleton.getInstance();
            Singleton instanceTwo = (Singleton) instanceOne.clone();
            System.out.println("hashCode of instance 1 - " + instanceOne.hashCode());
            System.out.println("hashCode of instance 2 - " + instanceTwo.hashCode());
        }
    
    }

    Here is the output

    hashCode of instance 1 - 1836019240
    hashCode of instance 2 - 325040804

    As the output above shows, the two instances have different hashcodes, which means they are not the same instance.


    Prevent Singleton Pattern from Cloning

    The above code breaks the Singleton principle, i.e. it creates two instances. To overcome this issue, we need to override the clone() method and throw a CloneNotSupportedException from it. If anyone tries to clone the Singleton object, an exception will be thrown, as shown in the code below.

        @Override
        protected Object clone() throws CloneNotSupportedException  {
            throw new CloneNotSupportedException();
        }

    Now, when we run the CloningSingleton class, it will throw a CloneNotSupportedException while trying to clone the Singleton object.



    • Reduce Cost: MSA will reduce the overall cost of designing, implementing, and maintaining IT services.
    • Increase Release Speed: MSA will increase the speed from idea to deployment of services.
    • Improve Resilience: MSA will improve the resilience of our service network.
    • Enable Visibility: MSA supports better visibility of your services and network.
    You need to understand the principles on which microservice architecture has been built:
    • Scalability
    • Availability
    • Resiliency
    • Flexibility
    • Independent, autonomous
    • Decentralized governance
    • Failure isolation
    • Auto-Provisioning
    • Continuous delivery through DevOps
    Adhering to the above principles brings several challenges and issues when bringing your solution or system to life. Those problems are common to many solutions and can be overcome by using the correct, matching design patterns. Microservice design patterns can be divided into five groups, each containing many patterns. The diagram below shows them.
    Design Patterns for Microservices

    Decomposition Patterns

    Decompose by Business Capability
    Microservices are all about making services loosely coupled and applying the single responsibility principle. One approach is to decompose by business capability: define services corresponding to business capabilities. A business capability is a concept from business architecture modeling [2]. It is something that a business does in order to generate value. A business capability often corresponds to a business object, e.g.
    • Order Management is responsible for orders
    • Customer Management is responsible for customers
    Decompose by Subdomain
    Decomposing an application using business capabilities might be a good start, but you will come across so-called “God Classes” which will not be easy to decompose. These classes will be common among multiple services. Instead, define services corresponding to Domain-Driven Design (DDD) subdomains. DDD refers to the application’s problem space — the business — as the domain. A domain consists of multiple subdomains, and each subdomain corresponds to a different part of the business.
    Subdomains can be classified as follows:
    • Core — key differentiator for the business and the most valuable part of the application
    • Supporting — related to what the business does but not a differentiator. These can be implemented in-house or outsourced
    • Generic — not specific to the business and are ideally implemented using off the shelf software
    The subdomains of an order management application include:
    • Product catalog service
    • Inventory management services
    • Order management services
    • Delivery management services
    Decompose by Transactions / Two-phase commit (2pc) pattern
    You can also decompose services along transaction boundaries, in which case there will be transactions that span multiple services. One of the important participants in a distributed transaction is the transaction coordinator [3]. The distributed transaction consists of two steps:
    • Prepare phase — during this phase, all participants of the transaction prepare for commit and notify the coordinator that they are ready to complete the transaction
    • Commit or Rollback phase — during this phase, either a commit or a rollback command is issued by the transaction coordinator to all participants
    The problem with 2PC is that it is quite slow compared to the time for operation of a single microservice. Coordinating the transaction between microservices, even if they are on the same network, can really slow the system down, so this approach isn’t usually used in a high load scenario.
    Strangler Pattern
    The three design patterns above decompose applications for greenfield projects, but 80% of the work you do is with brownfield applications, which are big, monolithic applications (legacy codebases). The Strangler pattern comes to the rescue. It creates two separate applications that live side by side in the same URI space. Over time, the newly refactored application “strangles” or replaces the original application until finally you can shut off the monolithic application. The Strangler Application steps are transform, coexist, and eliminate [4]:
    • Transform — Create a parallel new site with modern approaches.
    • Coexist — Leave the existing site where it is for a time. Redirect from the existing site to the new one so the functionality is implemented incrementally.
    • Eliminate — Remove the old functionality from the existing site.
    Bulkhead Pattern
    Isolate elements of an application into pools so that if one fails, the others will continue to function. This pattern is named Bulkhead because it resembles the sectioned partitions of a ship’s hull. Partition service instances into different groups, based on consumer load and availability requirements. This design helps to isolate failures, and allows you to sustain service functionality for some consumers, even during a failure.
    Sidecar Pattern
    Deploy components of an application into a separate process or container to provide isolation and encapsulation. This pattern can also enable applications to be composed of heterogeneous components and technologies. It is named Sidecar because it resembles a sidecar attached to a motorcycle: the sidecar is attached to a parent application and provides supporting features for it. The sidecar also shares the same lifecycle as the parent application, being created and retired alongside it. The sidecar pattern is sometimes referred to as the sidekick pattern and is the last decomposition pattern covered in this post.

    Integration Patterns

    API Gateway Pattern
    When an application is broken down into smaller microservices, there are a few concerns that need to be addressed:
    • There are multiple calls for multiple microservices by different channels
    • There is a need to handle different types of protocols
    • Different consumers might need a different format of the responses
    An API Gateway helps to address many concerns raised by the microservice implementation, not limited to the ones above.
    • An API Gateway is the single point of entry for any microservice call.
    • It can work as a proxy service to route a request to the concerned microservice.
    • It can aggregate the results to send back to the consumer.
    • This solution can create a fine-grained API for each specific type of client.
    • It can also convert the protocol request and respond.
    • It can also offload the authentication/authorization responsibility of the microservice.
    Aggregator Pattern
    When breaking the business functionality into several smaller logical pieces of code, it becomes necessary to think about how to combine the data returned by each service. This responsibility cannot be left to the consumer.
    The Aggregator pattern helps to address this. It talks about how we can aggregate the data from different services and then send the final response to the consumer. This can be done in two ways [6]:
    • A composite microservice will make calls to all the required microservices, consolidate the data, and transform the data before sending back.
    • An API Gateway can also partition the request to multiple microservices and aggregate the data before sending it to the consumer.
    If any business logic needs to be applied, a composite microservice is recommended; otherwise, the API Gateway is the established solution.
    Proxy Pattern
    In the Proxy pattern, we simply expose microservices through the API gateway, which lets us apply API features such as security and API categorization at the gateway. In this example, the API gateway has three API modules:
    • Mobile API, which implements the API for the FTGO mobile client
    • Browser API, which implements the API to the JavaScript application running in the browser
    • Public API, which implements the API for third-party developers
    Gateway Routing Pattern
    The API gateway is responsible for request routing. An API gateway implements some API operations by routing requests to the corresponding service. When it receives a request, the API gateway consults a routing map that specifies which service to route the request to. A routing map might, for example, map an HTTP method and path to the HTTP URL of a service, as sketched below. This function is identical to the reverse proxying features provided by web servers such as NGINX.
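
    As a rough, product-agnostic illustration, a routing map can be thought of as a lookup from method and path to a backend service URL; the entries below are invented examples:

    import java.util.HashMap;
    import java.util.Map;

    public class GatewayRoutingMap {

        // Hypothetical routing map: "HTTP_METHOD /path" -> backend service URL.
        private static final Map<String, String> ROUTES = new HashMap<>();
        static {
            ROUTES.put("GET /orders", "http://order-service.internal/orders");
            ROUTES.put("POST /orders", "http://order-service.internal/orders");
            ROUTES.put("GET /customers", "http://customer-service.internal/customers");
        }

        // The gateway consults the map to decide where to forward an incoming request.
        public static String resolve(String method, String path) {
            String target = ROUTES.get(method + " " + path);
            if (target == null) {
                throw new IllegalArgumentException("No route for " + method + " " + path);
            }
            return target;
        }
    }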
    Chained Microservice Pattern
    A single service or microservice can have multiple dependencies, e.g. a Sale microservice depends on the Products and Order microservices. The chained microservice design pattern helps you provide a consolidated outcome for a request. The request is received by microservice-1, which communicates with microservice-2, which in turn may communicate with microservice-3. All these service calls are synchronous.
    Branch Pattern
    A microservice may need to get data from multiple sources, including other microservices. The Branch microservice pattern is a mix of the Aggregator and Chain design patterns and allows simultaneous request/response processing from two or more microservices. The invoked microservice can itself be a chain of microservices. The Branch pattern can also be used to invoke different chains of microservices, or a single chain, based on your business needs.
    Client-Side UI Composition Pattern
    When services are developed by decomposing business capabilities/subdomains, the services responsible for the user experience have to pull data from several microservices. In the monolithic world, there used to be only one call from the UI to a backend service to retrieve all data and refresh/submit the UI page. With microservices, this changes: the UI has to be designed as a skeleton with multiple sections/regions of the screen/page, and each section makes a call to an individual backend microservice to pull its data. Frameworks like AngularJS and ReactJS help to do that easily. These screens are known as Single Page Applications (SPAs). Each team develops a client-side UI component, such as an AngularJS directive, that implements the region of the page/screen for their service. A UI team is responsible for implementing the page skeletons that build pages/screens by composing multiple, service-specific UI components.

    Database Patterns

    When defining the database architecture for microservices, we need to consider the points below.
    • Services must be loosely coupled. They can be developed, deployed, and scaled independently.
    • Business transactions may enforce invariants that span multiple services.
    • Some business transactions need to query data that is owned by multiple services.
    • Databases must sometimes be replicated and shared in order to scale.
    • Different services have different data storage requirements.
    Database per Service
    To solve the above concerns, one database per microservice must be designed; it must be private to that service only. It should be accessed by the microservice API only. It cannot be accessed by other services directly. For example, for relational databases, we can use private-tables-per-service, schema-per-service, or database-server-per-service.
    Shared Database per Service
    We have talked about one database per service being ideal for microservices, and a shared database being an anti-pattern. But if the application is a monolith that is being broken into microservices, denormalization is not that easy. In a later phase we can move to the database-per-service pattern; until then we may follow this one. A shared database is not ideal, but it is a working solution for the above scenario. Most people consider it an anti-pattern for microservices, yet for brownfield applications it is a good start for breaking the application into smaller logical pieces. It should not be applied to greenfield applications.
    Command Query Responsibility Segregation (CQRS)
    Once we implement database-per-service, some queries will require joining data from multiple services, which is not possible against a single service's database. CQRS suggests splitting the application into two parts — the command side and the query side.
    • The command side handles the Create, Update, and Delete requests
    • The query side handles the query part by using the materialized views
    The event sourcing pattern is generally used along with it to create events for any data change. Materialized views are kept updated by subscribing to the stream of events.
    Event Sourcing
    Most applications work with data, and the typical approach is for the application to maintain the current state. For example, in the traditional create, read, update, and delete (CRUD) model, a typical data process is to read data from the store, modify it, and update the current state. This approach has limitations: it often requires locking the data and using transactions.
    The Event Sourcing pattern [8] defines an approach to handling operations on data that’s driven by a sequence of events, each of which is recorded in an append-only store. Application code sends a series of events that imperatively describe each action that has occurred on the data to the event store, where they’re persisted. Each event represents a set of changes to the data (such as AddedItemToOrder).
    The events are persisted in an event store that acts as the system of record. Typical uses of the events published by the event store are to maintain materialized views of entities as actions in the application change them, and for integration with external systems. For example, a system can maintain a materialized view of all customer orders that are used to populate parts of the UI. As the application adds new orders, adds or removes items on the order, and adds shipping information, the events that describe these changes can be handled and used to update the materialized view. The figure shows an overview of the pattern.
    Event Sourcing pattern[8]
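
    A minimal Java sketch of the idea follows; the event type names and the order view are hypothetical, but it shows the append-only event store and a materialized view that is updated by applying each event:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class OrderEventStoreSketch {

        // An event describes something that happened, e.g. "AddedItemToOrder".
        static class OrderEvent {
            final String orderId;
            final String type;
            final String item;
            OrderEvent(String orderId, String type, String item) {
                this.orderId = orderId; this.type = type; this.item = item;
            }
        }

        // Append-only event store acting as the system of record.
        private final List<OrderEvent> eventStore = new ArrayList<>();

        // Materialized view (order id -> items), kept up to date by applying events.
        private final Map<String, List<String>> orderView = new HashMap<>();

        public void append(OrderEvent event) {
            eventStore.add(event);   // events are only ever appended, never updated in place
            apply(event);            // keep the materialized view current
        }

        private void apply(OrderEvent event) {
            List<String> items = orderView.get(event.orderId);
            if (items == null) {
                items = new ArrayList<>();
                orderView.put(event.orderId, items);
            }
            if ("AddedItemToOrder".equals(event.type)) {
                items.add(event.item);
            } else if ("RemovedItemFromOrder".equals(event.type)) {
                items.remove(event.item);
            }
        }
    }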
    Saga Pattern
    When each service has its own database and a business transaction spans multiple services, how do we ensure data consistency across services? The Saga pattern addresses this: each request has a compensating request that is executed when the request fails. It can be implemented in two ways:
    • Choreography — When there is no central coordination, each service produces and listens to other services’ events and decides whether an action should be taken. Choreography is a way of specifying how two or more parties, none of which has any control over (or perhaps any visibility of) the others’ processes, can coordinate their activities and processes to share information and value. Use choreography when coordination across domains of control/visibility is required. In a simple scenario, you can think of choreography as being like a network protocol: it dictates acceptable patterns of requests and responses between parties.
    Saga pattern — Choreography
    • Orchestration — An orchestrator (object) takes responsibility for a saga’s decision making and for sequencing the business logic. Use orchestration when you have control over all the actors in a process, i.e. when they are all in one domain of control and you can control the flow of activities. This is most often the case when you are specifying a business process that will be enacted inside one organization that you have control over.
    Saga pattern — Orchestration

    Observability Patterns

    Log Aggregation
    Consider a use case where an application consists of multiple services. Requests often span multiple service instances, and each service instance generates a log file in a standardized format. We need a centralized logging service that aggregates logs from each service instance. Users can search and analyze the logs, and they can configure alerts that are triggered when certain messages appear in the logs. For example, PCF has a log aggregator that collects logs from each component of the PCF platform (router, controller, Diego, etc.) along with applications. AWS CloudWatch does the same.
    Performance Metrics
    When the service portfolio increases due to a microservice architecture, it becomes critical to keep a watch on the transactions so that patterns can be monitored and alerts sent when an issue happens.
    A metrics service is required to gather statistics about individual operations. It should aggregate the metrics of an application service, which provides reporting and alerting. There are two models for aggregating metrics:
    • Push — the service pushes metrics to the metrics service e.g. NewRelic, AppDynamics
    • Pull — the metrics service pulls metrics from the services, e.g. Prometheus
    Distributed Tracing
    In a microservice architecture, requests often span multiple services. Each service handles a request by performing one or more operations across multiple services. When troubleshooting, it is worth having a trace ID so that a request can be traced end-to-end.
    The solution is to introduce a transaction (correlation) ID. The following approach can be used, as sketched below:
    • Assign each external request a unique external request ID.
    • Pass the external request ID to all services.
    • Include the external request ID in all log messages.
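
    One common way to carry the ID in Java is to put it in the logging context; the sketch below uses SLF4J's MDC (the field name and the idea of taking the ID from an incoming header are illustrative):

    import java.util.UUID;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class RequestIdExample {

        private static final Logger log = LoggerFactory.getLogger(RequestIdExample.class);

        // Hypothetical entry point of a service: reuse the incoming request id
        // (e.g. from an HTTP header) or assign a new one for external requests.
        public void handle(String incomingRequestId) {
            String requestId = incomingRequestId != null ? incomingRequestId : UUID.randomUUID().toString();
            MDC.put("requestId", requestId);   // the log pattern can then include %X{requestId}
            try {
                log.info("processing request");
                // ... pass requestId along as a header when calling downstream services ...
            } finally {
                MDC.remove("requestId");
            }
        }
    }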
    Health Check
    When a microservice architecture has been implemented, there is a chance that a service might be up but not able to handle transactions. Each service needs to have an endpoint that can be used to check the health of the application, such as /health. This API should check the status of the host, the connections to other services/infrastructure, and any service-specific logic. A minimal sketch of such an endpoint is shown below.
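
    The sketch uses JAX-RS annotations (the same style as the MSF4J example later in this blog); the individual checks are placeholders you would replace with real ones:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.Response;

    @Path("/health")
    public class HealthCheckResource {

        @GET
        @Produces("application/json")
        public Response health() {
            // Placeholder checks: host status, downstream connections, service-specific logic.
            boolean databaseUp = checkDatabaseConnection();
            boolean dependenciesUp = checkDownstreamServices();

            if (databaseUp && dependenciesUp) {
                return Response.ok("{\"status\":\"UP\"}").build();
            }
            return Response.status(Response.Status.SERVICE_UNAVAILABLE)
                    .entity("{\"status\":\"DOWN\"}").build();
        }

        private boolean checkDatabaseConnection() { return true; }   // hypothetical check
        private boolean checkDownstreamServices() { return true; }   // hypothetical check
    }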

    Cross-Cutting Concern Patterns

    External Configuration
    A service typically calls other services and databases as well. For each environment like dev, QA, UAT, prod, the endpoint URL or some configuration properties might be different. A change in any of those properties might require a re-build and re-deploy of the service.
    To avoid code modification, externalized configuration can be used. Externalize all the configuration, including endpoint URLs and credentials. The application should load the configuration either at startup or on the fly, and it should be possible to refresh it without a server restart. A simple sketch is shown below.
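
    One simple way to externalize configuration is to read it from environment variables (or an external config server) at startup; a minimal sketch, with made-up variable names:

    public class ExternalConfig {

        // Hypothetical settings resolved from the environment instead of being hard-coded.
        public static String endpointUrl() {
            return getOrDefault("ORDER_SERVICE_URL", "http://localhost:8080/orders");
        }

        public static String dbPassword() {
            return getOrDefault("DB_PASSWORD", "");
        }

        private static String getOrDefault(String name, String defaultValue) {
            String value = System.getenv(name);   // resolved per environment, no rebuild needed
            return value != null ? value : defaultValue;
        }
    }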
    Service Discovery Pattern
    When microservices come into the picture, we need to address a few issues in terms of calling services.
    With container technology, IP addresses are dynamically allocated to the service instances. Every time an address changes, a consumer service can break and require manual changes.
    Each service URL has to be remembered by the consumer, which creates tight coupling.
    A service registry needs to be created which will keep the metadata of each producer service and specification for each. A service instance should register to the registry when starting and should de-register when shutting down. There are two types of service discovery:
    • Client-side: e.g. Netflix Eureka
    • Server-side: e.g. AWS ALB
    service discovery [9]
    Circuit Breaker Pattern
    A service generally calls other services to retrieve data, and there is the chance that the downstream service may be down. There are two problems with this: first, the request will keep going to the down service, exhausting network resources, and slowing performance. Second, the user experience will be bad and unpredictable.
    The consumer should invoke a remote service via a proxy that behaves in a similar fashion to an electrical circuit breaker. When the number of consecutive failures crosses a threshold, the circuit breaker trips, and for the duration of a timeout period all attempts to invoke the remote service fail immediately. After the timeout expires, the circuit breaker allows a limited number of test requests to pass through. If those requests succeed, the circuit breaker resumes normal operation; otherwise, if there is a failure, the timeout period begins again. This pattern is suited to preventing an application from repeatedly trying to invoke a remote service or access a shared resource when the operation is highly likely to fail.
    Circuit Breaker Pattern [10]
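
    A minimal, hand-rolled sketch of the behavior described above (failure threshold, open-state timeout, then a trial call) is shown below; production implementations such as resilience4j or Hystrix are more complete:

    import java.util.function.Supplier;

    public class SimpleCircuitBreaker {

        private final int failureThreshold;
        private final long openTimeoutMillis;

        private int consecutiveFailures = 0;
        private long openedAt = 0;

        public SimpleCircuitBreaker(int failureThreshold, long openTimeoutMillis) {
            this.failureThreshold = failureThreshold;
            this.openTimeoutMillis = openTimeoutMillis;
        }

        public <T> T call(Supplier<T> remoteCall) {
            if (isOpen()) {
                // Circuit is open: fail immediately without hitting the remote service.
                throw new IllegalStateException("circuit open - failing fast");
            }
            try {
                T result = remoteCall.get();
                consecutiveFailures = 0;                     // success closes the circuit again
                return result;
            } catch (RuntimeException e) {
                consecutiveFailures++;
                if (consecutiveFailures >= failureThreshold) {
                    openedAt = System.currentTimeMillis();   // trip the breaker
                }
                throw e;
            }
        }

        private boolean isOpen() {
            boolean tripped = consecutiveFailures >= failureThreshold;
            boolean timeoutExpired = System.currentTimeMillis() - openedAt > openTimeoutMillis;
            return tripped && !timeoutExpired;   // after the timeout, let a trial request through
        }
    }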
    Blue-Green Deployment Pattern
    With a microservice architecture, one application can have many microservices. If we stop all the services and then deploy an enhanced version, the downtime will be huge and can impact the business. Also, the rollback will be a nightmare. The Blue-Green Deployment pattern avoids this.
    The blue-green deployment strategy can be implemented to reduce or remove downtime. It achieves this by running two identical production environments, Blue and Green. Let’s assume Green is the existing live instance and Blue is the new version of the application. At any time, only one of the environments is live, with the live environment serving all production traffic. All cloud platforms provide options for implementing a blue-green deployment.
    Blue-Green Deployment Pattern
    References
    [1] “Microservice Architecture: Aligning Principles, Practices, and Culture” Book by Irakli Nadareishvili, Matt McLarty, and Michael Amundsen

  2. The last few years have been great for API gateways and API companies. APIs (Application Programming Interfaces) allow businesses to expand beyond their enterprise boundaries and drive revenue through new business models. Larger enterprises are adopting the API paradigm — developing many internal and external services that developers connect to in order to create user-facing products. The number of API management products keeps increasing, and APIs and API management/gateway products can be found in a lot of enterprises today. These enterprise-level API management solutions allow external companies and external users to use these APIs. Enlightened businesses are recognizing a new channel to market through APIs.
    Companies with API-M and End-users
    As the figure above shows, companies A, B, and C are interconnected through API-M solutions, and end users are also connected to the APIs. Developers are rapidly driving new opportunities for businesses, developers, and customers from these APIs.

    API Economy

    The API economy allows access to business assets and digital services through simple-to-use APIs. As software companies see the economic advantages of integration, many large, monolithic software systems currently supported on premises will decompose into highly organized sets of microservices available in the cloud.
    The ultimate goal of the API economy is to facilitate the creation of user-focused apps that support line-of-business goals. Enterprises use APIs to bring together ecosystem partners and unlock new sources of value. Successful companies will see APIs not just as technical tools but as sources of strategic value in today’s digital economy, as managers look for creative ways to monetize services and assets through APIs.

    API Monetization

    Providing access to assets or services via APIs enables new and innovative usage of those assets to drive additional revenue. This is referred to as API monetization. APIs can be used in either a direct or an indirect monetization model. The monetization model either leads to someone paying you for the use of the API (e.g. banking, news, or telco APIs) or to you paying them to use it (e.g. advertising or marketing APIs).
    API Monetization Models

    Free

    The Free model is typically used for low-valued assets. While no money is exchanged, there must clearly be a business purpose. The Free model is tried when a company wants to drive brand awareness or loyalty and enter new channels. Many portals use such APIs, for example rapidapi and any-api. Some companies use the free model to identify usage patterns; an example is the Facebook APIs.

    Subscriber Pays

    Subscriber Pays model
    The API must be of value to the subscriber. The subscriber may obtain downstream revenue through its use of the API, for example by developing an application or mobile app using those APIs.
    • Pay As You Go: The developer/subscriber pays for what has been used. There are no minimums and no tiers. It is usually billed periodically (e.g. monthly or weekly).
    • Freemium: The basic API is free, with higher-value APIs priced.
    • Tiered: Multiple tiered options. A developer/subscriber chooses the tier they believe they need and pays for that tier. The tier defines the level of access.
    • Unit Based: Different API features or APIs have different values and are assigned a number of units. The developer buys units before using the API, and the balance is reduced with usage.
    • Point (Box) Based: Similar to Unit Based; points are bought before API usage and are reduced per call, and calls can fall into categories, as in Freemium.
    • Transaction Fee: A fixed amount or a percentage of a transaction is paid to the API provider.

    Subscriber Gets Paid

    Subscriber gets paid
    The API gateway owner provides a monetary incentive for a developer to leverage your web API. Basic scenarios include selling an asset or service through an agent. This payment method is seen a lot in the marketing industry.
    • Revenue Share: The consumer acts as an agent helping to sell a provider’s product/asset. A fixed amount or a percentage of each transaction is paid to the API consumer.
    • Affiliate: In this model, a partner includes your content/advertisements to drive potential customer traffic to you. There are several possible sub-models:
    • Cost Per Action (CPA): The developer (affiliate) earns a commission based on a successful conversion, generally a flat rate per user who subscribes to the merchant’s API/service. There can also be a commission structure.
    • Cost Per Click (CPC): The developer/API subscriber is paid for every click they send to the merchant’s site/API.
    • Sign-up Referral: The developer gets paid for onboarding API consumers (completing the sign-up process). There are two sub-models: in the first, a pre-defined amount is paid for each completed onboarding; the second is recurring, where the developer gets paid for each consumer every time the third party completes the API call process.

    Indirect Payment

    Indirect Payment Methods
    With indirect payment, the API achieves some other goal that drives the business model.
    • Content Acquisition: APIs allow content submission by third parties, which attracts customers to you.
    • Content Syndication: APIs allow third parties to distribute your content. Multiple financial models may surround this, and you might create a contract between the parties.
    • Software as a Service (SaaS): More than one lever should drive SaaS pricing; API-based pricing alone makes things one-dimensional. It is easy to set up additional user parameters and use API access as a parameter to define pricing as an add-on. Pricing helps attract different groups of users. This model can be seen in many places, e.g. Salesforce (an upsell model). Software as a Service is a software licensing and delivery model in which software is licensed on a subscription basis; it helps reduce the licensing price, with the cost also depending on the features in the software.
    • Internal-use Consumer: APIs are used by the same company’s employees to build customer-facing capabilities for the company. Typical scenarios include creating mobile apps and web commerce sites.
    • Internal Non-consumer: APIs are used internally to assist productivity and to align lines of business and business units in the company. Typical scenarios include providing simplified, secure access to systems of record and managing assets. These help handle charge-back billing for company assets across business units.
    • B2B Customer: APIs are used by your customers to integrate with your enterprise. Customer value is provided through the use of the API, so customers are incentivized to use it. In the same way, these APIs are used to expand into new geographies or new demographics, offer new products, or upsell new capabilities to existing clients.
    • B2B Partner: APIs are used by your partners to integrate into your enterprise. This is used to increase existing partner relationships or expand to new partners.

  3. kubectl (the Kubernetes command-line tool) is used to deploy and manage applications on Kubernetes. Using kubectl, you can inspect cluster resources and create, delete, and update components.
    NOTE
    You must use a kubectl version that is within one minor version of your cluster’s version. If not, you may see errors like the one below.

    1. Download release v1.13.0 using curl with this command:
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/windows/amd64/kubectl.exe


    2. Configure your kubeconfig file, which is located at “C:\Users\<UserName>\.kube\config”.
    3. Check your kubectl version, for example by running kubectl version.


  4. WSO2 Enterprise Integrator ships with a separate message broker profile (WSO2 MB). In this post I will be using the message broker profile in EI 6.3.0.

    1) Setting up the message broker profile

    1.1) Copy the following JAR files from the <EI_HOME>/wso2/broker/client-lib/ directory to the <EI_HOME>/lib/ directory.
    andes-client-3.2.13.jar
    geronimo-jms_1.1_spec-1.1.0.wso2v1.jar
    org.wso2.securevault-1.0.0-wso2v2.jar

    1.2) Open the <EI_HOME>/conf/jndi.properties file and add the following line after the queue.MyQueue = example.MyQueue line:

    queue.JMSMS=JMSMS

    1.3) Open the axis2.xml file at <EI_HOME>/conf/axis2/axis2.xml and uncomment the JMS transport configuration for the WSO2 EI Broker profile.
    There you will find the transportReceiver and transportSender sections.

    1.4) Add the transport.jms.SessionTransacted parameter to each transportReceiver connection factory:

    <parameter name="transport.jms.SessionTransacted">true</parameter>

    Eg:

    <parameter name="myQueueConnectionFactory" locked="false">
        <parameter name="java.naming.factory.initial" locked="false">org.wso2.andes.jndi.PropertiesFileInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">conf/jndi.properties</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">QueueConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">queue</parameter>
        <parameter name="transport.jms.SessionTransacted">true</parameter>
    </parameter>

    1.5) Start EI and Broker profiles

    2) Implementation 

    Use case: An API puts a message on a WSO2 MB queue and the message is passed on to an endpoint; when the endpoint returns an error, the message needs to be sent to the dead letter channel.


    HLD - Page 2 (1)

    2.1) First, we will put a message on a queue in WSO2 MB using an API:

    <?xml version="1.0" encoding="UTF-8"?>
    <api context="/dl-test" name="dl-test" xmlns="http://ws.apache.org/ns/synapse">
         <resource methods="POST">
             <inSequence>
                 <log level="custom">
                     <property name="property_name" value="DL-test API is called"/>
                 </log>
                 <property xmlns="http://ws.apache.org/ns/synapse" name="HEADER" value="VALUE" scope="transport" type="STRING"/>
                 <property description="xml" name="ContentType" scope="axis2" type="STRING" value="application/xml"/>
                 <property name="messageType" scope="axis2" type="STRING" value="application/xml"/>
                 <property name="OUT_ONLY" scope="default" type="STRING" value="true"/>
                 <property name="FORCE_SC_ACCEPTED" scope="axis2" type="STRING" value="true"/>
                 <send>
                     <endpoint>
                         <address uri="jms:/jmsms?transport.jms.ConnectionFactoryJNDIName=QueueConnectionFactory&amp;java.naming.factory.initial=org.wso2.andes.jndi.PropertiesFileInitialContextFactory&amp;java.naming.provider.url=conf/jndi.properties&amp;transport.jms.DestinationType=queue"/>
                     </endpoint>
                 </send>
             </inSequence>
             <outSequence/>
             <faultSequence>
                 <log level="custom">
                     <property name="property_name" value="faultSequence - test API is hitted"/>
                 </log>
             </faultSequence>
         </resource>
    </api>


    2.2) Then create an inbound endpoint for this queue:

    <?xml version="1.0" encoding="UTF-8"?>
    <inboundEndpoint name="abc-inbound-ep" onError="c2b-1-integration-v1-common-fault-sequence" protocol="jms" sequence="test-seq" suspend="false" xmlns="http://ws.apache.org/ns/synapse">
         <parameters>
             <parameter name="interval">1000</parameter>
             <parameter name="sequential">true</parameter>
             <parameter name="coordination">true</parameter>
             <parameter name="transport.jms.Destination">jmsms</parameter>
             <parameter name="transport.jms.CacheLevel">3</parameter>
             <parameter name="transport.jms.ConnectionFactoryJNDIName">QueueConnectionFactory</parameter>
             <parameter name="java.naming.factory.initial">org.wso2.andes.jndi.PropertiesFileInitialContextFactory</parameter>
             <parameter name="java.naming.provider.url">conf/jndi.properties</parameter>
             <parameter name="transport.jms.SessionAcknowledgement">AUTO_ACKNOWLEDGE</parameter>
             <parameter name="transport.jms.SessionTransacted">true</parameter>
             <parameter name="transport.jms.SubscriptionDurable">false</parameter>
             <parameter name="transport.jms.ConnectionFactoryType">queue</parameter>
             <parameter name="transport.jms.SharedSubscription">false</parameter>
         </parameters>
    </inboundEndpoint>

    2.3) Here is a sample sequence to be used by the inbound endpoint:

    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="test-seq" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
         <log level="custom">
             <property name="property_name" value="test-seq ABC Queue is called"/>
             <property expression="$trp:HEADER" name="property_name"/>
         </log>
         <log level="full"/>
    </sequence>

    2.4) Then improve the sequence to send the value (XML) to the endpoint.

    If the endpoint has an issue, the message should be retried and then moved to the dead letter channel.

    The Dead Letter Channel (DLC) is a sub-set of a queue, specifically designed to persist messages that are typically marked for deletion, providing you with a choice on whether to delete, retrieve or reroute the messages from the DLC.

    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="test-seq" onError="test-fault-sequence" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
         <log level="custom">
             <property name="property_name" value="test-seq ABC Queue is hitted"/>
             <property expression="$trp:HEADER" name="property_name"/>
         </log>
         <log level="full"/>
         <log level="custom">
             <property name="Message" value="test-proxy is hitted"/>
         </log>
         <property name="OUT_ONLY" scope="default" type="STRING" value="true"/>
         <call blocking="true">
             <endpoint key="dl-test-ep"/>
         </call>
    </sequence>

    2.4.2) Add a new onError sequence that sends the message to the DLC:

    <?xml version="1.0" encoding="UTF-8"?>
    <sequence name="test-fault-sequence" trace="disable"
         xmlns="http://ws.apache.org/ns/synapse">
         <log level="full">
             <property name="MESSAGE" value="Executing default &quot;fault&quot; sequence" />
             <property expression="get-property('ERROR_CODE')" name="ERROR_CODE" />
             <property expression="get-property('ERROR_MESSAGE')" name="ERROR_MESSAGE" />
         </log>
         <property name="SET_ROLLBACK_ONLY" scope="axis2" type="STRING"
             value="true" />
         <log level="custom">
             <property name="Transaction Action" value="Rollbacked" />
         </log>
    </sequence>


    Alternatively, you can have a proxy listen on the queue (the inbound endpoint is the better approach):

    <?xml version="1.0" encoding="UTF-8"?>
    <proxy name="test-proxy" startOnLoad="true" transports="http https jms" xmlns="http://ws.apache.org/ns/synapse">
         <target>
             <inSequence>
                 <log level="custom">
                     <property value="test-proxy is hitted" name="Message"/>
                 </log>
                 <property name="OUT_ONLY" scope="default" type="STRING" value="true"/>
                 <call blocking="true">
                     <endpoint key="dl-test-ep"/>
                 </call>
             </inSequence>
             <outSequence/>
             <faultSequence>
                 <log level="full">
                     <property name="MESSAGE" value="Executing default &quot;fault&quot; sequence"/>
                     <property expression="get-property('ERROR_CODE')" name="ERROR_CODE"/>
                     <property expression="get-property('ERROR_MESSAGE')" name="ERROR_MESSAGE"/>
                 </log>
                 <property name="SET_ROLLBACK_ONLY" scope="axis2" type="STRING" value="true"/>
                 <log level="custom">
                     <property name="Transaction Action" value="Rollbacked"/>
                 </log>
             </faultSequence>
         </target>
         <parameter name="transport.jms.ContentType">
             <rules>
                 <jmsProperty>contentType</jmsProperty>
                 <default>application/xml</default>
             </rules>
         </parameter>
    </proxy>

    3) Test the use case

    image

    Make a small change to the endpoint for testing:

    <?xml version="1.0" encoding="UTF-8"?>
    <endpoint name="dl-test-ep" xmlns="http://ws.apache.org/ns/synapse">
         <http method="get" uri-template="http://bacde.com/175"/>
    </endpoint>

    Wrong URI

    Then you will see it retrying 10 times.

    image 

    If you go to the MB Dead Letter Channel, you will find the message.

    Home -> Manage -> Dead Letter Channel -> List

    image

    You can restore, delete, or reroute any message.

    Once you restore the message, you will see that it works (after fixing the endpoint URL).

    image


  5. When two or more applications want to exchange data, they do so by sending the data through a channel that connects them. The application sending the data may not know which application will receive it, but by selecting a particular channel to send the data on, the sender knows that the receiver will be one that is looking for that sort of data on that channel.

    When designing an application, a developer has to know where to put what types of data to share that data with other applications, and likewise where to look for what types of data coming from other applications. These paths of communication cannot be dynamically created and discovered at runtime; they need to be agreed upon at design time so that the application knows where its data is coming from and where the data is going to. One exception is the reply channel in Request-Reply. The requestor can create or obtain a new channel the replier knows nothing about, specify it as the Return Address of a request message, and then the replier can make use of it. Another exception is messaging system implementations that support hierarchical channels. A receiver can subscribe to a parent in the hierarchy, then a sender can publish to a new child channel the receiver knows nothing about, and the subscriber will still receive the message.
    First the applications determine the channels the messaging system will need to provide. Subsequent applications will try to design their communication around the channels that are available, but when this is not practical, they will require that additional channels be added. When a set of applications already use a certain set of channels, and new applications wish to join in, they too will use the existing set of channels. When existing applications add new functionality, they may require new channels.
    Another common source of confusion is whether a Message Channel is unidirectional or bidirectional. Technically, it’s neither; a channel is more like a bucket that some applications add data to and other applications take data from. But because the data is in messages that travel from one application to another, that gives the channel direction, making it unidirectional. If a channel were bidirectional, an application would both send messages to and receive messages from the same channel, which would not be very useful because the application would tend to keep consuming its own messages, the messages it is supposed to be sending to other applications. So for all practical purposes, channels are unidirectional. As a consequence, for two applications to have a two-way conversation, they need two channels, one in each direction.

    Therefore, different types of channels are used in a messaging system. A message channel is a basic architectural pattern of a messaging system and is fundamentally used for exchanging data between applications.

    One-to-one or one-to-many
    When an application wants to share each piece of data with just one other application that is interested in that data, you can use a Point-to-Point Channel. This does not guarantee that every piece of data sent on that channel will necessarily go to the same receiver, because the channel might have multiple receivers; but it does ensure that any one piece of data will only be received by one of the applications.

    If all of the receivers need to receive the data, use a Publish-Subscribe Channel. Each piece of data is effectively copied by the channel and passed to each of the receivers. Put simply, the sender broadcasts an event to all interested receivers. A brief JMS sketch of both channel types follows.
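
    The sketch assumes a JMS ConnectionFactory obtained from your messaging provider (for example via JNDI), and the destination names are made up:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.Topic;

    public class ChannelTypesExample {

        public void send(ConnectionFactory factory) throws Exception {
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Point-to-Point Channel: each message is consumed by exactly one receiver.
            Queue orders = session.createQueue("OrderQueue");
            MessageProducer queueProducer = session.createProducer(orders);
            queueProducer.send(session.createTextMessage("new order"));

            // Publish-Subscribe Channel: every subscriber gets its own copy of the message.
            Topic priceUpdates = session.createTopic("PriceUpdates");
            MessageProducer topicProducer = session.createProducer(priceUpdates);
            topicProducer.send(session.createTextMessage("price changed"));

            connection.close();
        }
    }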

    Type of data (Datatype Channel)
    The message contents must conform to some type so that the receiver understands the data’s structure. Datatype Channel is the principle that all of the data on a channel has to be of the same type. This is the main reason why messaging systems need lots of channels; if the data could be of any type, the messaging system would only need one channel (in each direction) between any two applications.

    Invalid and dead messages
    The messaging system can ensure that a message is delivered properly, but it cannot guarantee that the receiver will know what to do with it. The receiver has expectations about the data’s type and meaning; if a message does not meet those expectations, there is little the receiver can do with it. What it can do, though, is put the strange message on a specially designated Invalid Message Channel, in hopes that some utility monitoring the channel will pick up the message and figure out what to do with it.
    Many messaging systems have a similar built-in feature, a Dead Letter Channel for messages which are successfully sent but ultimately cannot be successfully delivered. Again, hopefully some utility monitoring the channel will know what to do with the messages that could not be delivered.

    Crash proof
    If the messaging system crashes or is shut down for maintenance, will its messages still be in its channels when it is back up and running? By default, no; channels store their messages in memory. However, Guaranteed Delivery makes channels persistent so that their messages are stored on disk. This hurts performance but makes messaging more reliable.

    Non-messaging clients
    An application may not be able to connect to a messaging system but may still want to participate in messaging. If the messaging system can connect to the application somehow—through its user interface, its business services API, its database, or a network connection such as TCP/IP or HTTP—then a Channel Adapter on the messaging system can be used to connect a channel (or set of channels) to the application without having to modify the application, and perhaps without having to have a messaging client running on the same machine as the application.

    Sometimes the "non-messaging client" really is a messaging client, just for a different messaging system. In that case, an application that is a client on both messaging systems can build a Messaging Bridge between the two, effectively connecting them into one composite messaging system.


  6. Microservices have swept across the enterprise and changed the way people write software within an enterprise ecosystem.

    Let’s build a microservice for automobiles with MSF4J.

    1) Create the MSF4J project using the Maven archetype with the command below:

    mvn archetype:generate -DarchetypeGroupId=org.wso2.msf4j -DarchetypeArtifactId=msf4j-microservice -DarchetypeVersion=1.0.0 -DgroupId=org.example -DartifactId=automobile -Dversion=0.1-SNAPSHOT -Dpackage=org.example.service -DserviceClass=AutomobileService
    

    image

    2) Open it in your IDE (IntelliJ IDEA or Eclipse) by running one of the following from inside the project directory:

    mvn idea:idea

    mvn eclipse:eclipse

    image

    3) Change the context to ‘/automobile’ from ‘/service’

    image

    4) Create a new Java class called “Automobile” that will act as the data model for our microservice. You can use your IDE’s generator to build the getters, setters, and constructor, as shown in the sketch after the class.

    public class Automobile {

        private String brand;
        private String name;
        private int engineeSize;
        private double price;
    }
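
    For the service class in the next step to compile, the Automobile class also needs a matching constructor and at least a getBrand() accessor; here is a minimal sketch (generate the remaining getters and setters with your IDE):

    public Automobile(String brand, String name, int engineeSize, double price) {
        this.brand = brand;
        this.name = name;
        this.engineeSize = engineeSize;
        this.price = price;
    }

    public String getBrand() {
        return brand;
    }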

    5) Implement the GET and POST resources as below:

    import java.util.HashMap;
    import java.util.Map;

    import javax.ws.rs.Consumes;
    import javax.ws.rs.GET;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.Response;

    @Path("/automobile")
    public class AutomobileService {

        private Map<String, Automobile> automobiles = new HashMap<>();

        public AutomobileService() {
            automobiles.put("toyota", new Automobile("Toyota", "Prado", 2800, 21.3));
        }

        @GET
        @Path("/{brand}")
        @Produces("application/json")
        public Response get(@PathParam("brand") String brand) {
            Automobile automobile = automobiles.get(brand);
            return automobile == null ?
                    Response.status(Response.Status.NOT_FOUND)
                            .entity("{\"result\":\"brand not found = " + brand + "\"}").build() :
                    Response.status(Response.Status.OK).entity(automobile).build();
        }

        @POST
        @Consumes("application/json")
        public Response addStock(Automobile automobile) {
            if (automobiles.get(automobile.getBrand()) != null) {
                return Response.status(Response.Status.CONFLICT).build();
            }
            automobiles.put(automobile.getBrand(), automobile);
            return Response.status(Response.Status.OK)
                    .entity("{\"result\":\"Updated the automobile with brand = " + automobile.getBrand() + "\"}")
                    .build();
        }
    }


    6) Build it from maven

    mvn clean install


    7) Run it and test

    java -jar target/automobile-0.1-SNAPSHOT.jar


    8) Go to Postman and test GET and POST

    image

    POST request

    image

    Then check that the POST has recorded the value.

    image

    Now you can try DELETE and PUT yourself; a sketch of a possible DELETE resource follows as a starting point.
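
    Here is a minimal sketch of what a DELETE resource method might look like, following the same style as the GET and POST methods above (remember to add the javax.ws.rs.DELETE import; the method name and messages are illustrative):

    @DELETE
    @Path("/{brand}")
    @Produces("application/json")
    public Response delete(@PathParam("brand") String brand) {
        // Remove the automobile for the given brand if it exists.
        Automobile removed = automobiles.remove(brand);
        return removed == null ?
                Response.status(Response.Status.NOT_FOUND)
                        .entity("{\"result\":\"brand not found = " + brand + "\"}").build() :
                Response.status(Response.Status.OK)
                        .entity("{\"result\":\"Deleted the automobile with brand = " + brand + "\"}").build();
    }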


  7. The SMPP inbound endpoint allows you to consume messages from an SMSC via WSO2 ESB or EI.

    image

    1.  Start SMSC

    2. Create a custom inbound endpoint with the parameters below. (Make sure you pick the correct system-id and password for your SMSC.)

    image

    3. Create a sequence for the inbound endpoint.

    4. Once the ESB or EI starts, you will begin to see the SMSC log:

    image 

    The ESB log will contain:

    image


  8. WSO2 APIM Components

    WSO2 API Manager includes five main components: the Publisher, Store, Gateway, Traffic Manager, and Key Manager.

    • API Gateway - responsible for securing, protecting, managing, and scaling API calls. It intercepts API requests and applies policies such as throttling and security checks. It is also instrumental in gathering API usage statistics.
    • API Store - provides a space for consumers to self-register, discover API functionality, subscribe to APIs, evaluate them, and interact with API publishers.
    • API Publisher - enables API providers to easily publish their APIs, share documentation, provision API keys, and gather feedback on API features, quality, and usage.
    • API Key Manager Server - responsible for all security and key-related operations. When an API call is sent to the Gateway, it calls the Key Manager server and verifies the validity of the token provided with the API call.
    • API Traffic Manager - regulates API traffic, makes APIs and applications available to consumers at different service levels, and secures APIs against security attacks. The Traffic Manager features a dynamic throttling engine to process throttling policies in real time, including rate limiting of API requests.
    • LB (load balancers) - A distributed deployment requires two load balancers.
      - The first load balancer (e.g., NGINX Plus) manages the cluster internally.
      - The second load balancer is set up externally to handle the requests sent to the clustered server nodes, and to provide failover and autoscaling. It may be NGINX Plus or any other third-party product.
    • RDBMS (shared databases) - The distributed deployment setup depicted above shares the following databases among the APIM components set up in separate server nodes.
      - User Manager Database : Stores information related to users and user roles. This information is shared among the Key Manager Server, Store, and Publisher
      - API Manager Database : Stores information related to the APIs along with the API subscription details. The Key Manager Server uses this database
      - Registry Database : Shares information between the Publisher and Store

    Note

    • It is recommended to separate the worker and manager nodes in scenarios where you have multiple Gateway nodes

    Message Flow

    The three main use cases of API Manager are API publishing, subscribing and invoking.


    APIMDeployment


    WSO2 API Manager deployment patterns

    • Pattern 1 (Single node)
      All-in-one deployment

    • Pattern 2 (Partially Distributed Deployment)
      Deployment with a separate Gateway and separate Key Manager
    • Pattern 3 (Fully distributed setup)
      It provides scalability at each layer and higher flexibility for each component

    • Pattern 4  (Internal and external / on-premise API Management)
      This pattern requires separate internal and external API Management deployments with separate Gateway instances
    • Pattern 5 (Internal and external /public and private cloud API Management)
      It maintains a cloud deployment as an external API Gateway layer


    Database Configuration for Distributed Deployment

    image


    API Manager Profiles

    The following are the different profiles available in WSO2 API Manager.

    • Gateway manager: Acts as a manager node in a cluster. This profile starts frontend/UI features such as login as well as backend services that allow the product instance to communicate with other nodes in the cluster.
    • Gateway worker: Acts as a worker node in a cluster. This profile starts the backend features for data processing and communicating with the manager node.
    • Key Manager: Handles features relevant to the Key Manager component of the API Manager.
    • Traffic Manager: Handles features relevant to the Traffic Manager component, which regulates API traffic and processes throttling policies as described above.
    • API Publisher: Only starts the front end/backend features relevant to the API Publisher.
    • Developer Portal: Only starts the front end/backend features relevant to the Developer Portal (API Store).
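
    A node can be started with only one of these profiles enabled by passing the profile name at startup, for example (the profile name values are taken from the WSO2 profile-naming convention and should be verified against your API Manager version):

    sh <APIM_HOME>/bin/wso2server.sh -Dprofile=gateway-worker

    Other profile values include gateway-manager, key-manager, traffic-manager, api-publisher, and api-store.
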
    0

    Add a comment

  9. 1. Introduction to SMPP and SMSC

    SMPP (Short Message Peer-to-Peer) is an open, industry-standard protocol designed to provide a flexible data communications interface for transferring short message data between a Message Center, such as a Short Message Service Centre (SMSC), a GSM Unstructured Supplementary Services Data (USSD) server, or another type of Message Center, and an SMS application system, such as a WAP proxy server, e-mail gateway, or other messaging gateway. The advantage of supporting the SMPP protocol with the Axis2 SMS transport is that it can be used to send and receive high volumes of short messages very fast. SMPP is an application-layer protocol that runs over TCP. There are many SMPP gateways available, and almost all message centers now support SMPP.

    Use case 01

    There is an HTTP SMS API that a user can invoke with an HTTP call carrying a JSON payload. The API sends an SMS to the number given in the JSON request, with the message text also taken from the JSON request. An illustrative payload is shown below.
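
    For illustration only, the request body could look like the following; the field names here are hypothetical placeholders and must be replaced with whatever the ESB API configuration actually expects:

    {
        "destination": "+94771234567",
        "message": "Hello from the HTTP SMS API"
    }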


    2. Pre-requirement

    • JSMPP lib

    JSMPP is a Java implementation of the SMPP protocol. It provides an API to communicate with an SMSC. JSMPP can be downloaded from here (http://central.maven.org/maven2/com/googlecode/jsmpp/jsmpp/).

    • A SMSC Simulator

    An SMSC simulator is an application that acts like an SMSC. Using a simulator, we can test our scenario without having access to a real SMSC; for production servers we have to use a real SMSC. Here we will be using OpenSmpp (https://github.com/OpenSmpp/opensmpp).

    • SMPP connector for ESB

    It can be downloaded from here (https://store.wso2.com/store/assets/esbconnector/details/1f5ca0e2-3fe0-42e5-ae9b-05af1f8e361b).

    3. Setting up the SMSC Simulator

    3.1. Clone the OpenSmpp Git repository and build it.
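
    Assuming Git and Maven are installed, that is roughly:

    git clone https://github.com/OpenSmpp/opensmpp.git
    cd opensmpp
    mvn clean install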

    3.2. Create a ‘users.txt’ file in the ‘etc’ directory.

    3.3. Add the following content to the file:

    name=wso2esb
    password=passesb
    timeout=unlimited

    3.4. Start the simulator with the command below (the ‘;’ classpath separator is for Windows; use ‘:’ on Linux/macOS):

    java -cp opensmpp-core-3.0.3-SNAPSHOT.jar;opensmpp-sim-3.0.3-SNAPSHOT.jar;opensmpp-charset-3.0.3-SNAPSHOT.jar org.smpp.smscsim.Simulator

    image

    3.5. Enter ‘1’ to ‘start simulation’ and give the port number as ‘2775’.

    It will then start a listener, which is indicated by a log message as below:

    image

    4. Setting up the ESB

    4.1 Add the ‘jsmpp_2.1.0.jar’ to the lib directory of the ESB or EI.

    4.2 Add the SMPP connector to WSO2 ESB from the web console or Developer Studio.

    5. Creating API

    5.1 Create an API in WSO2 ESB with the “POST” method.

    5.2 You can get the full WSO2 ESB API configuration from here.

    5.3 Here is the JSON request to pass to the newly created API.

    6. Testing API with OpenSmpp

    6.1 Send the JSON request from Postman (you will get a 202 status response).

    image

    6.2 You will be able to see some logs in your simulator as below, such as ‘Connection accepted’.

    image

    6.3 In the simulator, when you press ‘5’, you should see the SMS you sent from the API on the ESB.

    image

    References

    [1] https://docs.wso2.com/display/ESBCONNECTORS/Sending+SMS+Message#SendingSMSMessage-Overview

    5

    View comments
