WSO2 AM - Calling web service for enrichment from an out sequence - wso2-api-manager

This is what I am trying to solve:
The proxy in-sequence receives a request message and passes it to the backend service
The proxy out-sequence receives the response message and iterates over each returned item
For each item, another service is called and the original message is enriched with some parts of the result from the second service
The enriched result is returned from the proxy to the caller
What would be the proper mediators to use in this scenario?

This is a case of service chaining, where you can use the Iterate and Aggregate mediators.
In the link below you can find a similar case:
http://dakshithar.blogspot.com/2012/07/routing-and-service-chaining-with-wso2_23.html
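As a rough illustration of that pattern, below is a minimal Synapse-style out-sequence sketch (not taken from the linked post). The XPath expressions, element names, endpoint URL, and sequence name are placeholders, it assumes the enrichment service returns the enriched item in its response, and the exact attributes can vary between ESB/API Manager versions, so treat it as a starting point only.

<outSequence>
   <!-- Split the backend response: one cloned message per returned item (XPath is a placeholder) -->
   <iterate expression="//items/item" preservePayload="true" attachPath="//items">
      <target>
         <sequence>
            <!-- Send each item to the enrichment service; its response is handed to the named sequence below -->
            <send receive="enrichAggregateSeq">
               <endpoint>
                  <address uri="http://enrichment.example.com/service"/>
               </endpoint>
            </send>
         </sequence>
      </target>
   </iterate>
</outSequence>

<sequence name="enrichAggregateSeq">
   <!-- Collect all enrichment responses and return one combined message to the caller -->
   <aggregate>
      <completeCondition>
         <messageCount min="-1" max="-1"/>
      </completeCondition>
      <onComplete expression="//item">
         <send/>
      </onComplete>
   </aggregate>
</sequence>

Depending on how much of the original item must be preserved, an Enrich mediator in the receiving sequence (or the ForEach mediator in newer versions) may be a better fit for merging individual fields rather than replacing whole items.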

Related

Pact Request That Depends on the Response from A Previous Request

I am using the Pact framework to test some APIs from a service. I have one API that initiates some backend execution; let's call it request A, and its response returns a unique execution ID. The second API (request B) sends the execution ID returned from request A to poll the execution status. How do I set up the Pact test in this case? The problem is the execution ID, which is generated dynamically. I know a provider can inject some provider state into the consumer, so potentially the execution ID could be injected. But I am not sure how to make the injection from the provider side. It requires access to the response from request A and injecting the execution ID (with the provider state callback, perhaps) for the second request.
You need to have a lot of control over what is happening in your provider for Pact to work for you.
Each interaction is verified individually (and in some frameworks, in a random order), and all state should be cleared in between interactions, so you need to use provider states to set up any data that would have been created by the initial request. With regard to something like the execution IDs, you could use a different implementation of the code that generates the IDs, one that you only use for Pact tests.
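One way to read that last suggestion: make the execution-ID generation swappable, so the provider can be started with a deterministic generator while Pact verification runs (and the known ID used in the provider state and the interaction). The sketch below is plain Java and not part of the Pact API; all names are made up for illustration.

// Production code asks this abstraction for IDs instead of generating them inline.
public interface ExecutionIdGenerator {
    String nextId();
}

// Implementation used in production: random, unique IDs.
public class RandomExecutionIdGenerator implements ExecutionIdGenerator {
    public String nextId() {
        return java.util.UUID.randomUUID().toString();
    }
}

// Implementation wired in only for Pact verification runs, e.g. from the setup code for a
// provider state such as "an execution with ID abc-123 has been started".
public class FixedExecutionIdGenerator implements ExecutionIdGenerator {
    private final String fixedId;

    public FixedExecutionIdGenerator(String fixedId) {
        this.fixedId = fixedId;
    }

    public String nextId() {
        return fixedId;
    }
}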

Calling external service from corda

There is a bank which creates a contract, which is then accepted by the lender and the borrower. After signing the contract, the lender provides funds to the borrower. The bank then creates an obligation state based on the data received by automatically calling an external service.
And now:
1) In the API layer, I call the first flow, which creates one state.
2) In the API layer itself, on success of the first flow, I make the HTTP request to the external service and get the data.
3) I then pass the HTTP response to the second flow to create the other state.
Can you please let me know if there is any issue with this approach?
The requirement is that I want to trigger the first flow manually, but calling the external service and initiating the second flow should happen automatically.
I have referred to the link given below:
Making asynchronous HTTP calls from flows
You'll be making calls to an external service during the running of flows.
The best place to get started would be looking at the CorDapp samples here. In particular, take a look at the Accessing External Data section.
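A rough Java sketch of the API-layer orchestration described in the question (start the first flow, call the external service on success, then start the second flow with the result), using Corda's RPC client and the JDK HTTP client. FirstFlow, SecondFlow, and the external URL are hypothetical placeholders for the scenario above, not code from the linked samples.

import net.corda.core.messaging.CordaRPCOps;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ObligationOrchestrator {

    private final CordaRPCOps rpcOps;                              // RPC proxy to the node
    private final HttpClient httpClient = HttpClient.newHttpClient();

    public ObligationOrchestrator(CordaRPCOps rpcOps) {
        this.rpcOps = rpcOps;
    }

    // Triggered manually (step 1); steps 2 and 3 then happen automatically.
    public void createObligation(Object... firstFlowArgs) throws Exception {
        // 1) Start the first flow and block until it finishes (FirstFlow is the CorDapp's own flow).
        Object firstResult = rpcOps
                .startFlowDynamic(FirstFlow.class, firstFlowArgs)
                .getReturnValue()
                .get();

        // 2) On success, call the external service for the extra data (placeholder URL).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://external.example.com/fund-data"))
                .GET()
                .build();
        String externalData = httpClient
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();

        // 3) Pass the HTTP response into the second flow, which creates the obligation state.
        rpcOps.startFlowDynamic(SecondFlow.class, firstResult, externalData)
                .getReturnValue()
                .get();
    }
}

Since the HTTP call is made in the API layer rather than inside a flow, the flows themselves perform no external I/O in this sketch.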

Microservices async operation HTTP response

We're building a microservice app where clients can create projects. The following diagram shows the technical flow of this process:
My question: what HTTP response should the API gateway return to the client (step 1.)?
My initial idea was to give back a 202, but the problem there is that I don't know the Location yet (/projects/{id}), because the id of the project will be created at the Project Management Service.
Considering that the ID of the newly created project entity is not known at request time (i.e. it is generated after the insertion into the database), you indeed cannot generate the URL to the project resource.
Instead, you could assign an ID (e.g. 1234-abcd-5678-efgh) to the command before sending it to the bus and keep track of its execution status on the API gateway itself. Then you can respond to the client with a command execution status endpoint like /commands/1234-abcd-5678-efgh, which it can query by polling.
An alternative would be to use another service that reserves and delivers unique IDs, but you must make a blocking call to it, and this hurts scalability. You could host this service inside the API gateway itself (on the same node) to minimize latency. Also, there is a risk of losing some IDs in case of project creation failures, but this can be compensated for by releasing those IDs in those situations (thus increasing the architecture's complexity).
A third solution could be the use of a project surrogate ID, like a GUID, assigned as a property of the project and included in the command, serving as an alternate identity that is used only in the pre-creation phase of the process. Then the response to the client could be something like /projects/by-guid/1234-abcd-5678-efgh, and after the project is created, a GET to this URL would permanently redirect to the final project URL.
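A minimal sketch of the command-status approach, assuming a Spring MVC based gateway; the controller, the paths, and the in-memory status map are illustrative stand-ins for the real command bus and a persistent store.

import java.net.URI;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class ProjectCommandController {

    // Stands in for gateway-side command status tracking; use a real store in practice.
    private final Map<String, String> commandStatus = new ConcurrentHashMap<>();

    @PostMapping("/projects")
    public ResponseEntity<Void> createProject(@RequestBody String projectPayload) {
        String commandId = UUID.randomUUID().toString();
        commandStatus.put(commandId, "PENDING");
        // ... publish the create-project command (carrying commandId) to the bus here ...

        // 202 Accepted plus a Location the client can poll, since /projects/{id} is not known yet.
        return ResponseEntity.accepted()
                .location(URI.create("/commands/" + commandId))
                .build();
    }

    @GetMapping("/commands/{commandId}")
    public ResponseEntity<String> getCommandStatus(@PathVariable String commandId) {
        String status = commandStatus.get(commandId);
        if (status == null) {
            return ResponseEntity.notFound().build();
        }
        return ResponseEntity.ok(status);
    }
}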

Track status of multiple async requests when using Spring @Async

I am using Spring Boot for developing services in my application.
I have a scenario wherein a request submitted to the back-end would take some time to complete.
To avoid keeping the client waiting, I want to return a response immediately with a message that the request has been accepted, while the request is processed in a background thread.
I see Spring provides the @Async annotation, which can be used to run processing in a thread separate from the main thread, and using that I am able to offload the processing to a separate thread.
What I want to do is, when I return the initial response as accepted, also provide the client with a tracking key/token which the client can later use to check the status of the request.
Since there can be multiple clients accessing the service, there should be a way of uniquely identifying each client's request from another.
I see there is no such feature in Spring Async or Future which can return a tracking ID as such.
One possibility I see is to put the returned Future in the HttpSession and later use it to check the status for the client. But I prefer not to use HttpSession and want my services to be stateless.
Is there any way/approach by which I can accomplish this requirement?
Thanks,
BS
Generate the key before calling the @Async method, and pass it to the method:
String key = generateUniqueKey();
callAsyncMethod(key);
return key;
The @Async method will have to persist the status of the execution somewhere (let's call it the dataStore). When the client asks for the status using the key, you look it up in the dataStore and return it.
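To make that concrete, here is a minimal sketch assuming a Spring Boot application with @EnableAsync configured; the class names, endpoints, and the in-memory map standing in for the dataStore are illustrative only.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.*;

@Service
class RequestProcessingService {

    // Stands in for the "dataStore"; use a database or cache so status survives restarts.
    private final Map<String, String> statusStore = new ConcurrentHashMap<>();

    // Runs on a separate thread via @Async.
    @Async
    public void process(String trackingKey) {
        statusStore.put(trackingKey, "IN_PROGRESS");
        // ... long-running work ...
        statusStore.put(trackingKey, "COMPLETED");
    }

    public String statusOf(String trackingKey) {
        return statusStore.getOrDefault(trackingKey, "UNKNOWN");
    }
}

@RestController
class RequestController {

    private final RequestProcessingService service;

    RequestController(RequestProcessingService service) {
        this.service = service;
    }

    @PostMapping("/requests")
    public ResponseEntity<String> submit() {
        String trackingKey = UUID.randomUUID().toString();   // generated before calling the @Async method
        service.process(trackingKey);                        // returns immediately
        return ResponseEntity.accepted().body(trackingKey);  // client keeps the key for later polling
    }

    @GetMapping("/requests/{trackingKey}/status")
    public String status(@PathVariable String trackingKey) {
        return service.statusOf(trackingKey);
    }
}

Because @Async is applied through a proxy, the processing method has to live in a different bean than its caller, which is why the service and controller are separate here.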

Getting a list of target endpoints

Can I get a list of target endpoints in a JavaScript policy?
Let's say I have a proxy endpoint that connects to multiple target endpoints. Can I write a JavaScript policy so that if a request is made to a specific URL on that proxy, it will make a call to all the target endpoints and aggregate the results?
Yes, that's possible. The API proxy definition itself holds all the target endpoints defined for it.
For example:
curl -v https://api.enterprise.apigee.com/v1/o/{org}/apis/{api}/revisions/{rev}/targets
would give you the list of all targets.
Then you can get each target URL from the list by calling:
curl -v https://api.enterprise.apigee.com/v1/o/{org}/apis/{api}/revisions/{rev}/targets/{target}
You can parse out each URL in a for loop and then make a request to each of these URLs. If your requests are simple GET calls without any variation in the request object (headers, body, etc.), then a simple for loop would be good enough.
For example:
// httpClient.get() is asynchronous: it returns an exchange object that is stored in the session
// so a later step can wait for and read the response (e.g. with waitForComplete()).
var geocoding = httpClient.get(URL);
context.session["geocoding"] = geocoding;
This piece of code can be called in a loop for all the target endpoints that you might have.
The only catch here is that, to get the target endpoints, you are making a management API call from the runtime layer. This means that if at any point the Apigee management layer is down or experiencing degraded service due to scheduled maintenance, your runtime calls would tend to fail. The other solution could be to isolate the two scripts:
Get the list of endpoints in one JavaScript policy and maybe store the URLs in a cache (PopulateCache policy) or a key value map (given that the proxy endpoint URLs won't change too often)
Read the list of endpoints from the cache or KVM and then trigger another JavaScript policy that makes calls to these endpoints and aggregates the responses.
There is no way to call a target endpoint from JavaScript. In fact, you can only call zero or one target endpoints for a single call to the proxy, not multiple target endpoints.
You can make multiple HTTP requests from within JavaScript using httpClient and aggregate the results, but these are not target endpoints. An example of this is found here.
