Fibers and multiple http requests in Sinatra - asynchronous

I have problems understanding what happens when calling external APIs using the fibers model with EventMachine. I have this code in Sinatra:
get '/' do
  conn = Faraday.new 'http://slow-api-call' do |con|
    con.adapter :em_http
  end
  resp = conn.get
  resp.on_complete {
    request.env['async.callback'].call(resp)
  }
  throw :async
end
Also, I am booting the Rainbows server with the :EventMachine concurrency option and 2 worker connections (that means 2 fibers handling 2 HTTP requests at a time).
Now, if I make 4 concurrent requests, the app should handle 2 at first, and while the external API calls are in flight, those fibers should be free to handle 2 new HTTP requests, right?
This is not happening: no new HTTP requests are accepted until the slow API call returns and frees up the fiber.
Is this the correct behavior? Am I missing something?
Thanks.

Actually, this was the correct behaviour.
When configuring Rainbows to handle 2 HTTP requests using 2 fibers, the number of incoming HTTP connections is actually being limited to 2.
So the resources used by the fibers while the slow API is being called are freed (memory, file handles, database connections, etc.), but the server does not accept more than 2 HTTP connections, and those idle fibers have nothing to process.
Rainbows should point this out more clearly in the documentation. Will send them an email.
Hope this helps somebody.
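For reference, a Rainbows configuration along the lines described above would look something like this (a sketch; worker_connections is the setting that caps how many connections the server will accept at once):

```ruby
# rainbows.conf -- minimal sketch of the setup described above
Rainbows! do
  use :EventMachine      # fiber-per-connection on top of EventMachine
  worker_connections 2   # caps *accepted* connections at 2, which is the limit hit here
end
```

With this concurrency model, worker_connections is effectively the per-worker concurrency limit, so raising it is what allows more requests in flight during slow external calls.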

Related

How top-level async web requests handle the response

I have a fundamental question about how async requests work at the top level.
Imagine we have a top-level route called HomePage(). This route is async, and within it we call 10 different APIs before sending the response (say the whole thing takes about 5 seconds; this is an example to understand the concept, and the numbers are for learning purposes). All of these API requests are awaited, so the request handler releases the thread handling this request and goes on to handle other requests until the responses from these APIs come back. Now let's add a constraint: our network card can handle only 1 connection, and that one is held open until the response to HomePage is ready. Therefore we cannot make any other requests to the server, so what's the difference from the whole thing being sync from the beginning? We cannot drop the connection for the first HomePage request, because then how would we ever send back its response, and we cannot handle new requests because the connection is kept open.
I suspect that my problem is how the response is sent back on top-level async routes.
Can anybody give a deep-dive explanation of how these requests are handled, such that the server can take more requests and still send back the response (because if it can send back a response, the connection HAS to have been kept alive)? Examples would be much appreciated.
So let's add a constraint: our network card can handle only 1 connection
That constraint cannot exist. Network cards handle packets, not connections. Connections are a virtual construct that exist in the host computer.
Can anybody give a deep-dive explanation of how these requests are handled, such that the server can take more requests and still send back the response (because if it can send back a response, the connection HAS to have been kept alive)?
Of course the connection is kept alive. The top-level async method will return the thread to the thread pool, where it is available to handle any other requests.
If you have some artificial constraint on your web app that prevents it from having more than one connection, then there won't be any other requests to handle, and the thread pool threads will do nothing.
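The thread-versus-connection distinction can be sketched with a short asyncio example (Python purely for illustration; a thread-pool-based server behaves analogously). Three "connections" stay open concurrently while a single loop awaits the slow downstream calls, so the total time is roughly one call's latency rather than three:

```python
import asyncio
import time

async def home_page(request_id: int) -> str:
    # Stand-in for the awaited downstream API calls: while this handler
    # is suspended at the await, the loop (or a pool thread) is free to
    # service other open connections -- this connection stays open the whole time.
    await asyncio.sleep(0.1)
    return f"response for request {request_id}"

async def main():
    start = time.monotonic()
    # Three concurrently open "connections", all handled by one event loop.
    responses = await asyncio.gather(*(home_page(i) for i in range(3)))
    elapsed = time.monotonic() - start
    return responses, elapsed

responses, elapsed = asyncio.run(main())
print(responses)
```

If the three handlers ran synchronously the elapsed time would be about 0.3 s; here it stays near 0.1 s, because releasing the worker is independent of keeping each connection open.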

How to get gRPC retry logs on client | java gRPC

I'm using gRPC for internal communication between 2 Java services.
I configured gRPC retries using the service config. I am able to get the retry count on the server via the "grpc-previous-rpc-attempts" metadata header. However, I can't find any logs being printed in the client app while retries are happening.
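For context, retries of this kind are enabled through a service config along these lines (a sketch; the service name and retry parameters here are placeholders, not taken from the question):

```json
{
  "methodConfig": [{
    "name": [{ "service": "com.example.MyService" }],
    "retryPolicy": {
      "maxAttempts": 3,
      "initialBackoff": "0.5s",
      "maxBackoff": "5s",
      "backoffMultiplier": 2.0,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}
```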
Why is gRPC not logging retry attempts, which ideally it should do when retries are configured? Is there any way to log each retry attempt in the client app? This is needed for better observability.
Thanks
At this moment there is no logging support for retries, but it would be a reasonable thing to add.
You can either file a feature request to get that added, or better yet make a pull request for the change. If you decide to make the change, it should be localized to just RetriableStream.java. Feel free to tag me (@temawi) on it and I'll review it for you.

limit number of concurrent_rpcs in grpc c++

I am using gRPC for a bidirectional streaming service. Because this is an online real-time service and we don't want our clients to wait too long, we want to limit the number of concurrent RPCs and reject the extra requests.
We are able to do it in Python with following code:
grpcServer = grpc.server(futures.ThreadPoolExecutor(max_workers=10), maximum_concurrent_rpcs=10)
In that case, only 10 requests are processed at a time; other requests are rejected, and the clients get a RESOURCE_EXHAUSTED error.
However, we find it hard to do in C++. Following grpc sync server limit handle thread, we use:
ServerBuilder builder;
builder.SetSyncServerOption(ServerBuilder::SyncServerOption::NUM_CQS, 10);
grpc::ResourceQuota quota;
quota.SetMaxThreads(10 * 2);
builder.SetResourceQuota(quota);
We are using grpc in many services. Some using kaldi, some using libtorch.
In some cases, the above code behaves normally: it processes 10 requests at a time and rejects the others, and the processing speed (our service requires a lot of CPU computation) is fine.
In some cases, it only accepts 9 requests at a time.
In some cases, it accepts 10 requests at a time, but the processing speed is significantly lower than before.
We also tried
builder.AddChannelArgument(GRPC_ARG_MAX_CONCURRENT_STREAMS, 10);
But that is no use, because GRPC_ARG_MAX_CONCURRENT_STREAMS only limits the concurrent RPCs on a single HTTP/2 connection.
Could someone please point out the equivalent C++ version of the Python code? We want our service to handle 10 requests at a time and reject the others; we do not want any request to wait in a queue.

Rebus HTTP gateway and MSMQ health state

Let's say we have
Client node with HTTP gateway outbound service
Server node with HTTP gateway inbound service
Consider the situation where MSMQ itself stops for some reason on the client node. In the current implementation, the Rebus HTTP gateway will catch the exception.
What do you think about idea that instead of just catching, the MessageQueueException exception could be also sent to server node and put on error queue? (name of error queue could be gathered from headers)
So without additional infrastructure server would know that client has a problem so someone could react.
UPDATE:
I guessed problems described in the answer would be raised. I should have explained my scenario deeper :) Sorry about it. Here it is:
I'm going to modify the HTTP gateway so that InboundService can do both - send and receive messages. The OutboundService would then be the only one to initiate the connection (periodically, e.g. once per 5 minutes) in order to get new messages from the server and send its own messages to the server. That is because the client node is not considered a server but one of many clients sitting behind NAT.
Indeed, the server itself is not interested in client health, but I thought that instead of creating a separate alerting service on the client side which would reuse the HTTP gateway code, the HTTP gateway itself could do this, since having both sides running is quite within the HTTP gateway's business.
What if the client can't reach the server at all?
Since MSMQ would be dead I thought about using in-process standalone persistent queue object like that http://ayende.com/blog/4540/building-a-managed-persistent-transactional-queue
(just an example implementation, I'm not sure what kind of license it has)
to aggregate exceptions on client side until server is reachable.
And how often will the client notify the server that it has experienced an error?
I'm not sure about that part - I thought it could be tied to the scheduled time of message synchronization, like once per 5 minutes, but what if there is no scheduled time, just like in the current implementation (a while(true) loop)? Maybe it could simply be set by config?
I like to have a consistent strategy about handling errors which usually involves plain old NLog logging
Since the client nodes will be on the Internet behind NAT, standard monitoring techniques won't work. I thought about using a queue as the NLog transport, but since MSMQ would be dead, it wouldn't work.
I also thought about using HTTP as the NLog transport, but on the server side it would require a queue (not strictly, but I would like to store it in a queue), so we are back to sbus and the HTTP gateway... that kind of NLog transport would be a de facto clone of the HTTP gateway.
UPDATE2: HTTP as an NLog transport (by transport I mean target) would also require a client-side queue like the one I described in the "What if the client can't reach the server at all?" section. It would be a clone of the HTTP gateway embedded into NLog. Madness :)
The thing is that the client is unreliable, so I want to have all the information about the client on the server side and log it there.
UPDATE3
Alternative solution could be creating separate service, which would however be part of HTTP gateway (e.g. OutboundAlertService). Then three goals would be fulfilled:
shared sending loop code
no additional server infrastructure required
no negative impact on OutboundService (no complexity of adding in-process queue to it)
It wouldn't take exceptions from OutboundService; instead it would check MSMQ periodically itself.
Yet another alternative would be simply using a queue other than MSMQ as the NLog target, but that's ugly overkill.
Regarding your scenario, my initial thought is that it should never be the server's problem that a client has a problem, so I probably wouldn't send a message to the server when the client fails.
As I see it, there would be multiple problems/obstacles/challenges with that approach - e.g. what if the client can't reach the server at all? And how often will the client notify the server that it has experienced an error?
Of course I don't know the details of your setup, so it's hard to give specific advice, but in general I like to have a consistent strategy about handling errors which usually involves plain old NLog logging and configuring WARN and ERROR levels to go the Windows Event Log.
This allows for setting up various tools (like e.g. System Center Operations Manager or similar) to monitor all of your machines' event logs and raise error flags when something goes wrong.
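The logging strategy described above could be sketched in an NLog.config like this (a sketch using NLog's standard EventLog target; the source name is a placeholder):

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Route WARN and ERROR to the Windows Event Log for monitoring tools to pick up -->
    <target name="eventlog" xsi:type="EventLog"
            source="MyRebusClient" log="Application"
            layout="${message}${newline}${exception:format=tostring}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Warn" writeTo="eventlog" />
  </rules>
</nlog>
```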
I hope I've said something you can use :)
UPDATE
After thinking about it some more, I think I'm beginning to understand your problem, and I think that I would prefer a solution where the client lets the HTTP listener in the other end know that it's having a problem, and then the HTTP listener in the other end could (maybe?) log that as an error.
Another option is that the HTTP listener in the other end could have an event, ReceivedClientError or something, that one could attach to and then do whatever is right in the given situation.
In your case, you might put a message in an error queue. I would just avoid putting anything in the error queue as a general solution because I think it confuses the purpose of the error queue - the "thing" in the error queue wouldn't be a message, and as such it would not be retryable etc.

Random "Connection reset" error when calling 2 or more RemoteObject functions

I know this question is difficult to resolve, but I have been walking through sources, googling, etc., and haven't found anything clear about the problem I'm going to describe.
I have an application that uses PHP as the backend and AMF as the transport protocol. The problem is that when I make several requests to the server backend through remote objects, I randomly receive a connection reset on all or several of the requests. This does not happen when just one remote service request is made, or at least it hasn't happened to me. The problem is more visible as more concurrent calls are executed at once.
Could anybody guess what is happening from this little info? At first I thought it was an Apache problem resetting the connection because of a flood of requests, but I only make 3 or 4 concurrent requests, not more.
Thanks in advance for your time.
