How to get gRPC retry logs on client | java gRPC

I'm using gRPC for internal communication between two Java services.
I configured gRPC retries using the service config. I can see the retry count on the server via the "grpc-previous-rpc-attempts" metadata header. However, I don't see any logs printed in the client app while retries are happening.
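For context, I set the retry policy through the channel's service config, roughly like this (the service name, address, and values are illustrative, not my exact config):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.*;

public class RetryChannelFactory {
  // Builds a channel with a retry policy supplied via the service config.
  // Per the service config schema, counts are doubles and durations are
  // strings like "0.5s".
  public static ManagedChannel build() {
    Map<String, Object> retryPolicy = new HashMap<>();
    retryPolicy.put("maxAttempts", 4.0);
    retryPolicy.put("initialBackoff", "0.5s");
    retryPolicy.put("maxBackoff", "5s");
    retryPolicy.put("backoffMultiplier", 2.0);
    retryPolicy.put("retryableStatusCodes", Arrays.asList("UNAVAILABLE"));

    Map<String, Object> methodConfig = new HashMap<>();
    methodConfig.put("name", Collections.singletonList(
        Collections.singletonMap("service", "my.package.MyService"))); // illustrative service
    methodConfig.put("retryPolicy", retryPolicy);

    return ManagedChannelBuilder.forAddress("other-service", 443) // illustrative address
        .defaultServiceConfig(Collections.singletonMap(
            "methodConfig", Collections.singletonList(methodConfig)))
        .enableRetry() // make sure retries are enabled on the channel
        .build();
  }
}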
Why is gRPC not logging retry attempts? Ideally it should, given that retries are configured.
Is there any way to log each retry attempt in the client app? This is needed for better observability.
Thanks

At this moment there is no logging support for retries, but it would be a reasonable thing to add.
You can either file a feature request to get that added or, better yet, make a pull request with the change. If you decide to make the change, it should be localized to just RetriableStream.java. Feel free to tag me (@temawi) on it and I'll review it for you.
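In the meantime, since the server already receives the "grpc-previous-rpc-attempts" header you mentioned, you can at least log retried attempts on that side with a server interceptor. A minimal sketch (the class name and log wording are mine):

import io.grpc.Metadata;
import io.grpc.ServerCall;
import io.grpc.ServerCallHandler;
import io.grpc.ServerInterceptor;
import java.util.logging.Logger;

public class RetryAttemptLoggingInterceptor implements ServerInterceptor {
  private static final Logger log =
      Logger.getLogger(RetryAttemptLoggingInterceptor.class.getName());
  // Set by the client on retried attempts; the value is the number of prior attempts.
  private static final Metadata.Key<String> PREVIOUS_ATTEMPTS =
      Metadata.Key.of("grpc-previous-rpc-attempts", Metadata.ASCII_STRING_MARSHALLER);

  @Override
  public <ReqT, RespT> ServerCall.Listener<ReqT> interceptCall(
      ServerCall<ReqT, RespT> call, Metadata headers, ServerCallHandler<ReqT, RespT> next) {
    String previousAttempts = headers.get(PREVIOUS_ATTEMPTS);
    if (previousAttempts != null) {
      log.info("Retried call " + call.getMethodDescriptor().getFullMethodName()
          + ", previous attempts: " + previousAttempts);
    }
    return next.startCall(call, headers);
  }
}

Register it with ServerBuilder#intercept when building the server. It won't give you client-side logs, but it does make every retried attempt visible.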

Related

Rebus retry policy when RabbitMQ is temporarily down

I have a dockerized microservice architecture where I am using Rebus with RabbitMQ as message bus.
One container is running RabbitMQ. Other containers are running services that communicate with each other via Rebus/RabbitMQ.
I want my solution to be resilient to container restarts, so if, for example, the RabbitMQ container restarts, I expect the other services to be unaffected by it.
I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus in the sending service, and that they are delivered when the RabbitMQ connection is restored.
To verify that, I run this test scenario:
1. Service A sends a message to service B via Rebus and RabbitMQ. That works fine.
2. I stop the RabbitMQ container.
3. Service A sends a message to service B via Rebus and RabbitMQ. That fails because RabbitMQ is unavailable.
4. I start the RabbitMQ container again.
I can see that Rebus in my services automatically reconnects to RabbitMQ when it is up. That is as expected.
Now that the RabbitMQ connection is restored, I would expect Rebus to send the pending message from Service A to service B, but it does not.
Is this not expected behaviour of Rebus? If not, can I enable this feature?
I have read this topic https://github.com/rebus-org/Rebus/wiki/Automatic-retries-and-error-handling
and tried to configure Rebus like this:
Configure.With(...)
    .Options(b => b.SimpleRetryStrategy(maxDeliveryAttempts: 10))
    .(...)
but with no luck.
The "delivery attempts" you're configuring is how you configure how many Rebus should try to consume a received message before giving up (i.e. moving it to the error queue).
If Rebus loses its connection to the broker, it will not be able to receive anything for the entire duration of the outage, so stopping RabbitMQ effectively pauses all message processing (apart from possible exceptions for messages being handled at the instant RabbitMQ goes away).
Since no Rebus handlers will be running while RabbitMQ is down, you will have to deal with outgoing messages sent from other places, e.g. messages sent/published from a web request.
(...) I expect that messages sent while RabbitMQ is down are queued up for delivery by Rebus (...)
...but Rebus cannot queue anything up, because RabbitMQ is down(*).
The natural thing to do for Rebus in this situation is to give you, the caller, the responsibility of deciding what to do about the problem.
In .NET, that is usually done by throwing an exception back at you. 🙂
This leaves you with the option of
performing some alternative action, or
retrying some more times, or
whatever makes sense in that particular situation
A simple approach to building some resilience into your system in this case would be to use something like Polly to try sending outgoing messages multiple times when the send fails (see the sketch at the end of this answer).
I hope that makes sense. Please let me know if anything needs to be elaborated on. 🙂
(*) Of course Rebus could have "cheated" and queued outgoing messages up in memory, but that would make it very hard for you to write resilient code, because you would not know whether an outgoing message had been safely delivered to the broker, or whether it was just sitting in memory waiting to be saved somewhere.
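Polly is .NET-specific, but what it buys you here is essentially retry-with-backoff around the send call. A minimal sketch of that pattern (written in Java purely for illustration; the Send interface is a hypothetical stand-in for bus.Send):

import java.time.Duration;

public final class RetrySend {
  // Hypothetical stand-in for the real send operation, e.g. bus.Send(...).
  interface Send { void send() throws Exception; }

  static void sendWithRetry(Send op, int maxAttempts, Duration initialDelay) throws Exception {
    Duration delay = initialDelay;
    for (int attempt = 1; ; attempt++) {
      try {
        op.send();
        return; // the broker accepted the message
      } catch (Exception e) {
        if (attempt >= maxAttempts) throw e; // out of attempts: surface the error to the caller
        Thread.sleep(delay.toMillis());
        delay = delay.multipliedBy(2); // exponential backoff between attempts
      }
    }
  }
}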

Establishing a Persistent Connection to Firebase Using TCP/IP on the Backend Instead of Using RESTful HTTP Requests?

I am currently using PHP / CURL on the back-end to update values in Firebase. We use Firebase primarily as a JavaScript layer so we can show browser and app clients real time status progression of jobs we process.
We've gotten to the point where we're doing quite a bit of status updating using CURL from our back-end and I feel we are close to the threshold where establishing a persistent connection between Firebase and our server would be more efficient than opening and closing dozens of HTTP requests per minute.
Is there any way to do this with Firebase right now?
Firebase has server-side SDKs for Java and Node.js. If you can't use those, the REST API is your only alternative.
If you'd like to listen for data over REST, you can use Firebase's REST Streaming API, which uses a long-lived HTTP connection to return a stream of events. It is similar to the Firebase SDKs, but it can only attach a single listener per connection, and you'll still need separate requests for write operations.
That last part seems to be the crux of your problem, so I'm afraid there really aren't any alternatives to using the SDKs, as I mentioned. In my testing, using HTTP requests for frequent small operations (admittedly reads, in my case) was quite fast.
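If you can move those backend updates onto the Java SDK, the connection handling is done for you: the SDK keeps one persistent connection open and multiplexes all reads and writes over it. A minimal sketch using the Firebase Admin SDK for Java (the credentials path, database URL, and data path are placeholders):

import com.google.auth.oauth2.GoogleCredentials;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import java.io.FileInputStream;

public class StatusUpdater {
  public static void main(String[] args) throws Exception {
    // Initialize once per process; the SDK then maintains a single
    // persistent connection to the Realtime Database.
    FirebaseOptions options = FirebaseOptions.builder()
        .setCredentials(GoogleCredentials.fromStream(
            new FileInputStream("serviceAccount.json"))) // placeholder path
        .setDatabaseUrl("https://your-project.firebaseio.com") // placeholder URL
        .build();
    FirebaseApp.initializeApp(options);

    // Every write after this reuses the same connection, so there is no
    // per-update HTTP setup cost.
    DatabaseReference status = FirebaseDatabase.getInstance()
        .getReference("jobs/123/status"); // placeholder path
    status.setValueAsync("processing").get(); // block until the write is acknowledged
  }
}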

Rebus HTTP gateway and MSMQ health state

Let's say we have:
Client node with HTTP gateway outbound service
Server node with HTTP gateway inbound service
I'm considering the situation where MSMQ itself stops for some reason on the client node. In the current implementation, the Rebus HTTP gateway will catch the exception.
What do you think about the idea that, instead of just being caught, the MessageQueueException could also be sent to the server node and put in the error queue? (The name of the error queue could be gathered from the headers.)
That way, without additional infrastructure, the server would know that the client has a problem and someone could react.
UPDATE:
I guessed the problems described in the answer would be raised. I should have explained my scenario in more depth :) Sorry about that. Here it is:
I'm going to modify the HTTP gateway so that the InboundService can do both: send and receive messages. The OutboundService would then be the only one to initiate the connection (periodically, e.g. once every 5 minutes) in order to get new messages from the server and send its own messages to the server. That is because the client node is not considered a server but one of many clients sitting behind NAT.
Indeed, the server itself is not interested in client health, but I thought that instead of creating a separate alerting service on the client side which would reuse HTTP gateway code, the HTTP gateway itself could do this, since having both sides running is quite within the HTTP gateway's business.
What if the client can't reach the server at all?
Since MSMQ would be dead, I thought about using an in-process, standalone persistent queue object like this one: http://ayende.com/blog/4540/building-a-managed-persistent-transactional-queue
(just an example implementation; I'm not sure what kind of license it has)
to aggregate exceptions on the client side until the server is reachable.
And how often will the client notify the server that it has experienced an error?
I'm not sure about that part. I thought it could be tied to the scheduled time of message synchronization, like once every 5 minutes, but what if there is no scheduled time, just like in the current implementation (a while(true) loop)? Maybe it could simply be set by config?
I like to have a consistent strategy for handling errors, which usually involves plain old NLog logging
Since the client nodes will be on the Internet behind NAT, standard monitoring techniques won't work. I thought about using a queue as the NLog transport, but since MSMQ would be dead, that wouldn't work.
I also thought about using HTTP as the NLog transport, but on the server side that would require a queue (not strictly, but I would like to store it in a queue), so we are back to the service bus and the HTTP gateway... that kind of NLog transport would be a de facto clone of the HTTP gateway.
UPDATE2: HTTP as an NLog transport (by transport I mean target) would also require a client-side queue like the one I described in the "What if the client can't reach the server at all?" section. It would be a clone of the HTTP gateway embedded into NLog. Madness :)
The whole point is that the client is unreliable, so I want to have all the information about the client on the server side and log it there.
UPDATE3
An alternative solution could be to create a separate service which would nevertheless be part of the HTTP gateway (e.g. OutboundAlertService). Then three goals would be fulfilled:
shared sending-loop code
no additional server infrastructure required
no negative impact on OutboundService (no complexity of adding an in-process queue to it)
It wouldn't take exceptions from OutboundService; instead it would check MSMQ periodically itself.
Yet another alternative would be simply using a queue other than MSMQ as the NLog target, but that's an ugly overkill.
Regarding your scenario, my initial thought is that it should never be the server's problem that a client has a problem, so I probably wouldn't send a message to the server when the client fails.
As I see it, there would be multiple problems/obstacles/challenges with that approach, e.g. what if the client can't reach the server at all? And how often will the client notify the server that it has experienced an error?
Of course I don't know the details of your setup, so it's hard to give specific advice, but in general I like to have a consistent strategy for handling errors, which usually involves plain old NLog logging and configuring the WARN and ERROR levels to go to the Windows Event Log.
This allows for setting up various tools (e.g. System Center Operations Manager or similar) to monitor all of your machines' event logs and raise error flags when something goes wrong.
I hope I've said something you can use :)
UPDATE
After thinking about it some more, I think I'm beginning to understand your problem, and I would prefer a solution where the client lets the HTTP listener at the other end know that it's having a problem, and then the HTTP listener at the other end could (maybe?) log that as an error.
Another option is that the HTTP listener at the other end could have an event, ReceivedClientError or something, that one could attach to and then do whatever is right in the given situation.
In your case, you might put a message in an error queue. I would just avoid putting anything in the error queue as a general solution, because I think it confuses the purpose of the error queue: the "thing" in the error queue wouldn't be a message, and as such it would not be retryable, etc.

Fibers and multiple http requests in Sinatra

I have trouble understanding what happens when calling external APIs using the fiber model with EventMachine. I have this code in Sinatra:
get '/' do
  conn = Faraday.new 'http://slow-api-call' do |con|
    con.adapter :em_http
  end
  resp = conn.get
  resp.on_complete {
    request.env['async.callback'].call(resp)
  }
  throw :async
end
I'm also booting the Rainbows server using the :EventMachine connector with 2 connections (that means 2 fibers handling 2 HTTP requests at a time).
Now, if I make 4 concurrent requests, the app should handle 2 at first, and while the external API calls are in flight, those fibers should be able to pick up the 2 new HTTP requests while waiting for the external calls to finish, right?
This is not happening. No new HTTP requests are accepted until the slow API call returns and frees up the fiber.
Is this the correct behavior? Am I missing something?
Thanks.
Actually, this is the correct behaviour.
Configuring Rainbows to handle 2 HTTP requests using 2 fibers actually means that the number of incoming HTTP connections is limited to 2.
So although the resources used by the fibers are freed while the slow API call is in progress (memory, files, database connections, etc.), the server does not accept more than 2 HTTP connections, and those free fibers cannot actually process anything.
Rainbows should point this out more clearly in the documentation. I will send them an email.
Hope this helps somebody.

SOAP messages over a single HTTP(S) connection

Can anyone give me an answer to the following question:
I have a remote web service and a requirement of about 100 TPS (transactions per second). As far as I know, creating an HTTP connection is quite an expensive operation, so I want to create just one HTTP connection to the web service and send many SOAP messages (envelopes) through it: not one SOAP message per HTTP connection, but many SOAP messages over one HTTP connection. Of course I can create as many HTTP connections as I need, but each of them should carry multiple SOAP messages.
Maybe there is a design pattern or technique for this that I don't know about.
I would very much appreciate any kind of help!
SOAP does not have to be over HTTP. It just happens that it is nearly always implemented over HTTP.
If you really want to use SOAP, you can use a socket or a message queue as well as HTTP. For an example, see: http://msdn.microsoft.com/en-us/library/51f6ye7k.aspx
However, I think if you need 100 TPS, SOAP is probably not the right technology to use.
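That said, if you do stay on HTTP, you usually get this behaviour for free: HTTP/1.1 keep-alive (and HTTP/2 multiplexing) lets a single TCP connection carry many request/response pairs, and most HTTP clients pool and reuse connections automatically. A minimal Java sketch, assuming a placeholder endpoint and envelope:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SoapSender {
  public static void main(String[] args) throws Exception {
    // One HttpClient = one connection pool; keep-alive reuses the
    // underlying TCP connection across sequential requests.
    HttpClient client = HttpClient.newHttpClient();

    String envelope =
        "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        + "<soap:Body><!-- request payload --></soap:Body></soap:Envelope>";

    for (int i = 0; i < 100; i++) {
      HttpRequest request = HttpRequest.newBuilder()
          .uri(URI.create("https://example.com/soap-endpoint")) // placeholder endpoint
          .header("Content-Type", "text/xml; charset=utf-8")
          .POST(HttpRequest.BodyPublishers.ofString(envelope))
          .build();
      HttpResponse<String> response =
          client.send(request, HttpResponse.BodyHandlers.ofString());
      System.out.println("status=" + response.statusCode());
    }
  }
}

Whether one connection actually sustains 100 TPS depends on the service's response time, so you would still size a small connection pool accordingly.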
