Karate HTTP request listener

Is there any way to listen to incoming requests during a Karate test?
I saw that there is the Async option, but it only covers queue listeners.
Is there anything similar for HTTP requests?

First, this is a rare use-case, so there is not much support for it.
You can try using a RuntimeHook; it has a beforeHttpCall() handler.
Also refer to the new API that allows you to inspect an HTTP response "after the fact": https://github.com/karatelabs/karate/issues/1962
Finally, if you really have some exotic use-case, refer to this: https://twitter.com/getkarate/status/1417023536082812935
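For reference, a minimal sketch of the RuntimeHook idea, assuming Karate 1.x where com.intuit.karate.RuntimeHook exposes beforeHttpCall() and afterHttpCall() default methods; the logging below is just an example of what you could do with the intercepted calls:

import com.intuit.karate.RuntimeHook;
import com.intuit.karate.core.ScenarioRuntime;
import com.intuit.karate.http.HttpRequest;
import com.intuit.karate.http.Response;

// intercepts the HTTP calls that Karate itself makes while running a test
public class HttpCallLoggingHook implements RuntimeHook {

    @Override
    public void beforeHttpCall(HttpRequest request, ScenarioRuntime sr) {
        System.out.println("request: " + request.getMethod() + " " + request.getUrl());
    }

    @Override
    public void afterHttpCall(HttpRequest request, Response response, ScenarioRuntime sr) {
        System.out.println("response status: " + response.getStatus());
    }
}

The hook can then be registered on the Runner builder, e.g. Runner.builder().hook(new HttpCallLoggingHook()). Note that this observes the requests Karate sends, not requests arriving at a test double; for truly listening to incoming traffic, the links above are the place to start.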

Related

REST API: return result from callback request of another endpoint

I want to stand up an endpoint /foo which is synchronous for clients, but whose response depends on a callback /foo_callback being called on the app as a result of the request to the synchronous endpoint.
To elaborate the workflow:
Flow diagram
I haven't decided on a technology to use, so ideally I'm looking for a recommendation.
At a high level, I'm thinking of starting an async thread in the request handler and checking a singleton map for an update to see if the server has responded, but I'm wondering if there is a better way.
I don't have control over the client and cannot really use WebSockets or long polling.
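The "async thread plus singleton map" idea is usually expressed with a CompletableFuture per in-flight request that the callback handler completes. A rough, framework-agnostic Java sketch; the requestId correlation field, the String payloads, and the timeout are assumptions for illustration:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class CallbackCorrelator {

    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // the /foo handler registers the future, triggers the downstream call,
    // then blocks here until /foo_callback completes it (or the timeout hits)
    public String awaitCallback(String requestId, Runnable triggerDownstreamCall, long timeoutSeconds) throws Exception {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(requestId, future);
        try {
            triggerDownstreamCall.run();
            return future.get(timeoutSeconds, TimeUnit.SECONDS);
        } finally {
            pending.remove(requestId);
        }
    }

    // called by the handler for /foo_callback
    public void complete(String requestId, String payload) {
        CompletableFuture<String> future = pending.get(requestId);
        if (future != null) {
            future.complete(payload); // releases the waiting /foo request
        }
    }
}

Registering the future before triggering the downstream call avoids the race where a very fast callback arrives before anyone is waiting for it.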

Does gRPC resend messages

A question related to the idempotence of server-side code, or the necessity of it, either for gRPC in general or specifically the Java implementation.
Is it possible that when we send a message once from a client, it is handled twice by our service implementation?
Maybe this would be related to retries when the service seems unavailable, or could it be configured by some policy?
Right now, when you send a message from a client it will be seen (at most) once by the server. If you have idempotent methods, you will soon be able to specify a policy for automatic retries (design doc) but these retry attempts will not be enabled by default. You do not have to worry about the gRPC library sending your RPCs more than once unless you have this configured in your retry policy.
According to grpc-java/5724, the retry logic has already been implemented. The OP does it using a Map, which is not type safe. A better way would be as follows:
NettyChannelBuilder builder = NettyChannelBuilder.forAddress(host, port)
        .enableRetry()
        .maxRetryAttempts(3);
There are other retry configurations available on the NettyChannelBuilder.
There's also an example here, although it's pretty hard to find.
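For completeness, in the map-based approach referred to above the retry policy itself lives in the channel's default service config, which is also how the official grpc-java retrying example is set up. A rough sketch, with a placeholder service name and backoff values (note that numeric values go into the map as Double):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class RetryPolicySketch {

    public static ManagedChannel channelWithRetries(String host, int port) {
        // retryPolicy block following the gRPC service config schema
        Map<String, Object> retryPolicy = new HashMap<>();
        retryPolicy.put("maxAttempts", 3.0);
        retryPolicy.put("initialBackoff", "0.5s");
        retryPolicy.put("maxBackoff", "10s");
        retryPolicy.put("backoffMultiplier", 2.0);
        retryPolicy.put("retryableStatusCodes", List.of("UNAVAILABLE"));

        // placeholder service name; omitting "method" applies the policy to all methods
        Map<String, Object> methodConfig = new HashMap<>();
        methodConfig.put("name", List.of(Map.of("service", "my.package.MyService")));
        methodConfig.put("retryPolicy", retryPolicy);

        return ManagedChannelBuilder.forAddress(host, port)
                .defaultServiceConfig(Map.of("methodConfig", List.of(methodConfig)))
                .enableRetry()
                .build();
    }
}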

Guzzle vs ReactPHP vs Amphp for parallel requests

What's the difference between:
GuzzleHttp
ReactPHP
Amphp
How do they differ, and what would be a typical use case for each?
The main difference between those is that Guzzle is an HTTP client, while Amp and ReactPHP are generic async / event loop libraries. Both of them offer HTTP clients built on top of the core event loop they provide: amphp/artax and reactphp/http-client.
Now, the difference between those and Guzzle is that they can do other things concurrently that are not HTTP requests, because the user has full control over the event loop and can register their own I/O watchers and timers, while the event loop that Guzzle uses is hidden from the user inside cURL.
If you just want to make a few concurrent HTTP requests, the decision mainly boils down to which API you prefer, and maybe performance. If you want to do other I/O-related things concurrently, use Amp or ReactPHP. If you want to stream request or response bodies, I'd suggest against using Guzzle, too.
Hey, ReactPHP core team member here. Both ReactPHP and Amp assume you're building an app around an event loop. If you just want to do a bunch of async requests and then continue, I would suggest using Guzzle's async requests: http://docs.guzzlephp.org/en/stable/quickstart.html#async-requests
If however you want to dive deeper into async requests, I suggest https://github.com/clue/php-buzz-react, which gives you more control over the process and also supports PSR-7.

How should a synchronous public api be integrated with message-based services?

I've been reading about microservices, and have found a lot of interesting advice in Jonas Bonér's Reactive Microservices Architecture (available to download free here). He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
I've been trying to think about how asynchronous response messages sent back from microservices should best be routed to the waiting client. To me the most obvious way would be to record something like a request id in all messages sent while processing the request, and then copy this id into the response messages sent by the services. The public API would block while processing the request, collecting all expected response messages with the matching id, before finally sending the response to the client.
Am I on the right lines here? Are there better approaches? Do any frameworks take the work of doing this routing away from the developer? (I'm looking at Spring Cloud Stream etc., but others would be interesting too.)
He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
When dealing with client-backend communications you have a couple of types of operations, and they should be handled separately (look at the idea of CQS):
State-changing operations: these should be one-way, fire-and-forget, using messaging (it can be the client calling an HTTP API and the API dispatching the message); see the sketch below.
Read operations: synchronous (request-response) operations (using an HTTP API); this does not involve any messaging whatsoever.
Does that make sense?
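A minimal, Spring-flavoured sketch of that split, purely for illustration; OrderCommandBus and OrderReadModel are hypothetical stand-ins for the messaging layer and the read store:

import java.util.Optional;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
public class OrderApi {

    private final OrderCommandBus commandBus; // publishes messages, fire and forget
    private final OrderReadModel readModel;   // synchronous query side, no messaging

    public OrderApi(OrderCommandBus commandBus, OrderReadModel readModel) {
        this.commandBus = commandBus;
        this.readModel = readModel;
    }

    // state-changing operation: dispatch a message and acknowledge immediately
    @PostMapping("/orders")
    public ResponseEntity<Void> placeOrder(@RequestBody String order) {
        commandBus.publish(order);
        return ResponseEntity.accepted().build(); // 202, processing happens asynchronously
    }

    // read operation: plain request/response against the read model
    @GetMapping("/orders/{id}")
    public ResponseEntity<String> getOrder(@PathVariable String id) {
        return ResponseEntity.of(readModel.findById(id)); // 200 with body, or 404
    }

    interface OrderCommandBus { void publish(String order); }
    interface OrderReadModel { Optional<String> findById(String id); }
}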

Need to validate approach: Spring Integration + AMQP + Async

I've recently been investigating Spring Integration and AMQP (RabbitMQ), as I need to communicate between two applications (middleware and backend) in an async way, so that the middleware doesn't block when receiving client calls.
I first followed the simpler approach of implementing this in a synchronous way, meaning that I have a gateway interface and an outbound gateway (with requiresReply=true) on the middleware, and then an inbound gateway and a service activator on the backend. This initial approach works well (I've used Spring Integration XML config).
Now I need clarification on the approach to follow to make this work in an async way.
Looking at RabbitMQ Tutorial 6, it's better to work with a callback queue and a correlationId; from what I understood, this would be similar to calling Spring RabbitTemplate's convertAndSend() and then receive(), instead of convertSendAndReceive() (which blocks until the response is received).
I've checked the Spring Integration docs, where I see that I need to replace the gateway interface on the middleware so that it returns Future or ListenableFuture.
Async Gateway
Once that's done, I also looked at the documentation for the outbound gateway, where it says that it can work together with RabbitTemplate to manage the correlationId and replyTo message attributes.
My questions are:
In order to make this work with an async approach, should I keep working with outbound/inbound gateways, instead of outbound/inbound message converters?
If I follow the outbound/inbound message converter approach (which sounds similar to what the RabbitMQ tutorial shows), how do I associate the Future on the gateway interface with the result coming back from the inbound channel adapter?
To be honest, you don't provide the original business requirement. It may well be that there is really no reason to deal with this async hand-off, because you have a @Gateway as an entry point which is thread-free, and even if it is blocked waiting for the reply it doesn't impact other threads which may perform a similar sendAndReceive operation. In most cases it is really enough to do everything within the same requestor thread and not lose performance by shifting work to a shared ThreadPoolExecutor.
Right, the Future allows you to free the caller a bit so that it is ready to accept new requests on the same thread.
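In code that just means the middleware's gateway contract declares an async return type; a minimal sketch with annotation-style configuration (the interface, method, and channel names are made up, and with XML config the same interface would be referenced from an <int:gateway> element):

import java.util.concurrent.Future;

import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;

// hypothetical middleware gateway; Spring Integration completes the Future
// when the reply arrives on the gateway's reply path
@MessagingGateway
public interface BackendGateway {

    @Gateway(requestChannel = "toBackendChannel")
    Future<String> send(String payload);
}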
Since it is a MessagingGateway and you want a reply anyway, there is a hook associated with the request: the TemporaryReplyChannel header. That's why the <outbound-gateway> works properly: it places its blocking reply on that channel for the gateway to return (or for FutureTask#set()).
I'd say we can achieve the same TemporaryReplyChannel gain with your async reply requirement.
You should use an inbound/outbound channel adapter pair.
Before sending the message to the <int-amqp:outbound-channel-adapter>, you should apply <header-channels-to-string> via a <header-enricher>.
The server side may stay the same: <int-amqp:inbound-gateway>.
You should use a fixed replyQueue as a header for the messages sent through the <int-amqp:outbound-channel-adapter>.
The <int-amqp:inbound-channel-adapter> should be configured for that fixed replyQueue.
Both the <int-amqp:outbound-channel-adapter> on the client side and the <int-amqp:inbound-gateway> must be configured with mapped-request-headers="*" to allow that reply-channel header to propagate to the server and vice versa.
The <int-amqp:inbound-channel-adapter> on the client side will just send the reply to the reply-channel, as it is for the <int-amqp:outbound-gateway>.
You may need to take care of the correlationId manually, since the <int-amqp:inbound-gateway> may require it to produce a reply properly.
Well, something like that...
HTH
Feel free to ask more questions. Or correct me if I misunderstood your question.
