Getting a list of target endpoints - apigee

Can I get a list of target endpoints in a JavaScript policy?
Let's say I have a proxy endpoint that connects to multiple target endpoints. Can I write a JavaScript policy so that if a request is made to a specific URL on that proxy, it makes a call to all the target endpoints and aggregates the results?

Yes, that's possible. The apiproxy definition itself holds all the target endpoints defined for it.
For example:
curl -v https://api.enterprise.apigee.com/v1/o/{org}/apis/{api}/revisions/{rev}/targets
would give you the list of all targets.
Then you can get each target URL from the list by calling:
curl -v https://api.enterprise.apigee.com/v1/o/{org}/apis/{api}/revisions/{rev}/targets/{target}
You can parse out each URL in a for loop and then make a request to each of these URLs. If your requests are simple GET calls without any variation in the request object (headers, body, etc.), then a simple for loop is good enough.
For example:
var geocoding = httpClient.get(URL);
context.session["geocoding"] = geocoding;
This piece of code can be called in a loop for all the target endpoints that you might have.
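A minimal sketch of that loop, assuming the target URLs were already parsed out of the management API response and stored in a flow variable (the variable name here is hypothetical):

// "targetUrlsJson" is a hypothetical flow variable populated by an earlier step
var targetUrls = JSON.parse(context.getVariable("targetUrlsJson"));
for (var i = 0; i < targetUrls.length; i++) {
    // httpClient.get() is asynchronous and returns an exchange object
    var exchange = httpClient.get(targetUrls[i]);
    // stash each in-flight exchange so a later step can collect the responses
    context.session["target_" + i] = exchange;
}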
The only catch here is that, to get the target endpoints, you are making a management API call from the runtime layer. That means that if the Apigee management layer is ever down or degraded due to scheduled maintenance, your runtime calls will tend to fail. The safer solution is to split the work into two scripts:
1. Get the list of endpoints in one JavaScript policy and store the URLs in a cache (PopulateCache policy) or in a key value map (given that target endpoint URLs won't change too often).
2. Read the list of endpoints from the cache or KVM and then trigger another JavaScript policy that makes the calls to these endpoints and aggregates the responses.
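The second script's aggregation step could look roughly like this (a sketch, assuming the calls were fired and stored in context.session as in the loop above, and that the count is known):

var results = [];
var targetCount = 3; // hypothetical; in practice derive this from the cached list
for (var i = 0; i < targetCount; i++) {
    var exchange = context.session["target_" + i];
    // block until this call completes, then collect the response body
    exchange.waitForComplete();
    if (exchange.isSuccess()) {
        results.push("" + exchange.getResponse().content);
    }
}
// expose the aggregate to the rest of the flow
context.setVariable("aggregated.responses", "[" + results.join(",") + "]");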

There is not a way to call a target endpoint from JavaScript. In fact, a single call to the proxy can invoke either zero or one target endpoints, not multiple.
You can make multiple HTTP requests from within JavaScript using httpClient and aggregate the results, but those are plain HTTP calls, not target endpoints. An example of this is found here.

Related

How to properly mock API results for testing with NextJS and MSW?

First of all, if I should ask this question elsewhere please let me know, as I'm not sure where it belongs.
I have a working application with NextJS and MSW, where I can capture the server-side API requests with MSW and return mocked JSON results.
The issue I have is that there are probably about 15 API calls that I need to mock in order to test my application properly.
While I could call each one of these manually, copy and paste the results into a file, and then return that data from the file when I capture the API call, this is not a very scalable solution, especially if the back-end API changes.
Question: What are your best methods for automating the generation of these results?
In the past I have created JSON files that had all of the URL paths and query parameters explicitly listed; I would parse through this file and query every endpoint, and then use template files to re-populate my fixtures directory with all of the mocked responses. However, this was also very cumbersome.
(For reference, the API has a somewhat similar structure to this one: https://api.gouv.fr/documentation/api-geo, where there are multiple endpoints for fetching data, and each endpoint supports a number of different query parameters to tweak the call.)
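A rough sketch of that manifest-driven approach (the manifest format, file paths, and base URL here are illustrative assumptions), using Node 18+ for its built-in fetch:

const fs = require("fs/promises");
// e.g. [{ "path": "/communes", "params": { "codePostal": "75001" } }, ...]
const endpoints = require("./endpoints.json");

async function generateFixtures(baseUrl) {
  for (const { path, params } of endpoints) {
    const url = new URL(path, baseUrl);
    url.search = new URLSearchParams(params).toString();
    const data = await (await fetch(url)).json();
    // derive a file name from path and params so each variant gets its own fixture
    const name = `${path.replace(/\//g, "_")}${url.search.replace(/[?&=]/g, "-")}.json`;
    await fs.writeFile(`./fixtures/${name}`, JSON.stringify(data, null, 2));
  }
}

generateFixtures("https://geo.api.gouv.fr"); // assumed base URL for the API linked above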

Asynchronous communication between Microservices

For the last week I've been researching the microservice architecture pattern and its requirements and constraints.
The majority of resources suggest using event buses/message brokers (asynchronous communication) for communication between microservices rather than REST API endpoints.
Synchronous calling results in a higher response time and may cause cascading failures if a particular microservice in the chain fails.
Question:
Let's say the user requests a particular functionality or page on a website/mobile app which then needs to fetch data from multiple microservices and use their respective functionalities to produce the desired outcome. But to achieve the desired outcome (the response to the client), ALL the services need to do their work before the backend sends the response back to the client (website/mobile app).
But if we use asynchronous service requests - meaning the calling service doesn't wait for a response and would send its own response back to the client without getting the data from the asynchronously called service - the outcome might not be complete if an asynchronously called service doesn't respond in time (service unavailable or network issues). This would mean that the backend sends an incomplete response back to the client, which is not acceptable.
How can I deal with this issue or did I get the concept wrong?
I'm thankful for every answer.
If it's absolutely essential that a request gets a full response (i.e. that the request is synchronous), that's a strong argument in favor of the service stitching together synchronous requests and responses (and potentially needing to handle rollback in cases of partial success etc.).
Many requests don't fall into that pattern, though. For instance, a response might well be interpretable as "we've received your request and the operation will be performed. You can track the progress of your operation by using this request ID"; such an approach fits well with asynchronous messaging.
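For instance, the client side of that pattern might look roughly like this (a sketch; the URL shapes and field names are illustrative, not from the original):

async function submitAndPoll(baseUrl, payload) {
  // fire the request; the service replies immediately with a tracking ID
  const res = await fetch(`${baseUrl}/orders`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  const { requestId } = await res.json();
  // poll the status endpoint until the operation leaves the pending state
  while (true) {
    const status = await (await fetch(`${baseUrl}/operations/${requestId}`)).json();
    if (status.state !== "pending") return status;
    await new Promise((resolve) => setTimeout(resolve, 1000)); // wait between polls
  }
}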

Microservices async operation HTTP response

We're building a microservice app where clients can create projects. The following diagram shows the technical flow of this process:
My question: what HTTP response should the API gateway return to the client (step 1.)?
My initial idea was to give back a 202, but the problem there is that I don't know the Location yet (/projects/{id}), because the id of the project will be created at the Project Management Service.
Considering that the ID of the newly created project entity is not known at request time (i.e., it is generated after the insertion into the database), you indeed cannot generate the URL to the project resource.
Instead, you could assign an ID (e.g. 1234-abcd-5678-efgh) to the command before sending it to the bus and keep track of its execution status on the API gateway itself. Then you can respond to the client with a command execution status endpoint like /commands/1234-abcd-5678-efgh, which it can poll.
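A minimal sketch of that gateway behavior, assuming an Express-style app, an in-memory status store, and a stand-in for the message bus (a real gateway would persist the statuses):

const express = require("express");
const crypto = require("crypto");

const app = express();
const commands = new Map(); // commandId -> execution status
const bus = { publish(topic, msg) { /* stand-in for the real message bus */ } };

app.post("/projects", express.json(), (req, res) => {
  const commandId = crypto.randomUUID();
  commands.set(commandId, { status: "pending" });
  bus.publish("project.create", { commandId, ...req.body });
  // 202 Accepted plus a status endpoint the client can poll
  res.status(202).location(`/commands/${commandId}`).json({ commandId });
});

app.get("/commands/:id", (req, res) => {
  const command = commands.get(req.params.id);
  command ? res.json(command) : res.sendStatus(404);
});

app.listen(8080);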
The alternative would be to use another service that reserves and delivers unique IDs, but you would have to make a blocking call to it, which hurts scalability. Or you could host this service inside the API gateway itself (on the same node) to minimize latency. Also, there is a risk of losing some IDs in case of project creation failures, but this can be compensated for by releasing those IDs in those situations (at the cost of increased architecture complexity).
A third solution could be the use of a project surrogate ID, like a GUID, assigned as a property of the project and included in the command, serving as an alternate identity that can be used only in the pre-creation phase of the process. Then the response to the client could be a URL like /projects/by-guid/1234-abcd-5678-efgh, and after the project is created a GET to this URL would permanently redirect to the final project URL.
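Continuing the Express sketch above, the by-guid lookup could be handled roughly like this (guidToId is a hypothetical map, filled in when the creation event arrives):

const guidToId = new Map(); // surrogate GUID -> final project id

app.get("/projects/by-guid/:guid", (req, res) => {
  const projectId = guidToId.get(req.params.guid);
  projectId
    ? res.redirect(301, `/projects/${projectId}`) // permanent redirect to the final URL
    : res.status(202).json({ status: "creation in progress" });
});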

Duplicate REST requests to second endpoint

In order to test how a REST service performs under load I would like to send production data to two service endpoints, one production service and one test service. I would like the responses from the test service to be ignored so the client has no idea its request is being sent to two services. I would like something in between the client and the service so that I do not need to make any changes to either; just some sort of additional 'filter' service that the traffic goes through, which passes the request on untouched but also sends a duplicate request to the test service. Someone suggested that I might be able to do this using Apache Camel, but it's a bit overwhelming and I don't want to start learning Camel if my aim is not possible. Has anyone achieved anything similar, either using Camel or some other method?
This can be done by defining a jetty or a cxf endpoint that acts as a proxy in front of the two other HTTP endpoints, i.e., the REST services. Note that simply chaining two to() steps would pipeline them (the production response would become the test request, and the client would receive the test response), so a wireTap fits the requirement better: it sends a fire-and-forget copy of the incoming request to the test service, while the client receives only the production response.
from("jetty:http://somehost:8282/xxx").
    // fire-and-forget copy of the incoming request to the test service
    wireTap("http://test:8182/rest/service/xyz").
    // the original request goes to production; its response returns to the client
    to("http://prod:8181/rest/service/xyz");
The client can fire the load at http://somehost:8282/xxx and will not be aware that the request is also being routed to a second service.
Note: the test will not yield a real load result if you route the client request to two services via the above route or other proxy/router methodologies, because the proxy/route itself introduces some latency. Instead, I suggest using a load test tool to generate the load and firing it directly at the prod and test endpoints separately.
For example, the Apache ab benchmark tool. Take a look at this and this.
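An illustrative ab invocation (the numbers are arbitrary; -n is the total number of requests, -c the concurrency):
ab -n 1000 -c 50 http://test:8182/rest/service/xyz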

Can I check if a SOAP web service supports a certain WebMethod?

Our web services are distributed across different servers for various reasons (such as decreasing latency to the client), and they're not always all up-to-date. Rather than throwing an exception when a method doesn't exist because the particular web service is too old, it would be nicer if we could have the client check if the service responds to a given method before calling it, and otherwise disable the feature (or work around it).
Is there a way to do that?
Get the WSDL (append ?wsdl to the URL) - you can parse that any way you like.
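For instance, a rough sketch of such a check (assuming Node 18+ for the built-in fetch; the service URL and method name are hypothetical):

async function serviceSupports(serviceUrl, methodName) {
  const wsdl = await (await fetch(`${serviceUrl}?wsdl`)).text();
  // naive check: look for an operation element carrying the method name;
  // a real implementation would parse the XML properly
  return new RegExp(`<(?:\\w+:)?operation[^>]*name="${methodName}"`).test(wsdl);
}

// usage
serviceSupports("http://services.example.com/Service.asmx", "GetQuote")
  .then((ok) => console.log(ok ? "supported" : "not supported"));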
Unit test the web service to ensure its signatures don't break. When you write code that breaks the method signature, you'll know and can adjust the other applications accordingly.
Or just don't break the web services, and publish them in a way that enables you to version them, as in http://services.domain.com/MyService/V1.1/Service.asmx (for .NET). That way your applications that use v1.1 won't break when you publish v1.2 with breaking changes.
I would also check out using an internal UDDI server if it's really that big of a hassle to manage your web services. Using the Green Pages of UDDI will tell you what you want to know about the service.
When you are making a SOAP request you are just sending an HTTP request to a server. If the server understands it, it will respond with HTTP 200 and some XML back; if it doesn't, it will send you an HTTP error code (404, 500, ...).
There is no general way to ask for the existence of a "method" exposed by a web service. Try to use the WSDL if it is exposed automatically, or just try to use the "method" and check for an error in the response (you don't have to surface the exception to the user...).
Also, I don't know if I understood you well, but are you thinking of querying the server twice, once to check whether the method exists and a second time to make the actual call if it does? I would just check for the error if it doesn't, and proceed normally if it does.
