I'm implementing a hot update for my service behind a proxy. During the update the service goes into a quiescent state for some time, so I want to buffer the gRPC messages received during that window and process them later (after the update). Does Envoy or Nginx provide such a feature?
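As far as I know, neither Envoy nor NGINX buffers requests while an upstream is quiescent; the closest built-in approximation is a retry policy that replays failed calls once the upstream is back. A hedged Envoy route sketch (the cluster name, retry count, and timeouts are assumptions, not anyone's real config):

```yaml
# Hypothetical Envoy route fragment: instead of buffering, retry gRPC
# calls that fail while the upstream is restarting.
route_config:
  virtual_hosts:
    - name: my_service                  # assumed virtual host name
      domains: ["*"]
      routes:
        - match: { prefix: "/" }
          route:
            cluster: my_grpc_backend    # assumed cluster name
            retry_policy:
              retry_on: "unavailable,reset,connect-failure"
              num_retries: 5
              per_try_timeout: 2s
```

This only papers over short quiescent windows; true store-and-forward buffering needs a message queue in front of the service, not a plain L7 proxy.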
From time to time I still receive the following error:
c.v.f.s.communication.ServerRpcHandler : Resynchronizing UI by client's request. A network message was lost before reaching the client and the client is reloading the full UI state. This typically happens because of a bad network connection with packet loss or because of some part of the network infrastructure (load balancer, proxy) terminating a push (websocket or long-polling) connection. If you are using push with a proxy, make sure the push timeout is set to be smaller than the proxy connection timeout
I use NGINX as a proxy in front of a Spring Boot application with Vaadin.
Could you please explain which properties in NGINX and in Vaadin are responsible for this part:
If you are using push with a proxy, make sure the push timeout is set to be smaller than the proxy connection timeout
Which property in the NGINX configuration is responsible for the push timeout or the proxy connection timeout, and what is the equivalent for the Vaadin application?
In the Vaadin application I use:
@Push(transport = Transport.LONG_POLLING)
Right now I'm playing with different properties but without any success, so I'd really appreciate your guidance on this. Thanks!
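For what it's worth, the NGINX side of that advice is usually the proxied read timeout, and the Vaadin side is the long-polling suspend timeout. A hedged sketch (the upstream name, path, and values are assumptions; verify the Vaadin parameter name against your version):

```nginx
# Illustrative NGINX location for a Vaadin push endpoint.
location /myapp/ {
    proxy_pass http://backend;    # assumed upstream name
    proxy_http_version 1.1;
    # The "proxy connection timeout" from the warning: nginx closes the
    # upstream connection if no data arrives within this window.
    proxy_read_timeout 60s;       # must be LARGER than Vaadin's push timeout
    proxy_send_timeout 60s;
}
```

On the Vaadin side, the long-polling suspend timeout (in milliseconds) would then be set below that, e.g. via the `pushLongPollingSuspendTimeout` servlet parameter (exposed as `vaadin.pushLongPollingSuspendTimeout` in a Spring Boot setup) with a value such as 30000, so Vaadin re-establishes the push channel before nginx kills it.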
I have a gRPC client and server exchanging messages in gRPC unary mode. I want to log all the messages the client sends to the server without changing a single line of code in either the client or the server. I came across Nginx and its native gRPC support. Is it possible to route gRPC messages from client to server via Nginx while sending a copy of them to a remote logging service? If not, please let me know if there are any other tools that do the same thing.
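One thing worth testing (a sketch under assumptions, not a confirmed recipe): NGINX's `mirror` directive duplicates each request as a fire-and-forget subrequest whose response is discarded, and it can be combined with `grpc_pass`. Note that the mirror subrequest carries the mirror location's URI, so the logging service must accept that path or you must rewrite it from `$request_uri`; both upstream names below are hypothetical:

```nginx
server {
    listen 50051 http2;

    location / {
        mirror /grpc_log;                  # send a copy of every request
        grpc_pass grpc://real_backend;     # assumed real server upstream
    }

    location = /grpc_log {
        internal;
        grpc_pass grpc://logging_service;  # assumed remote logging upstream
    }
}
```

Mirroring is request-only, so this can capture what the client sends but not the server's replies.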
Is there an easy way to enable TCP connectivity alongside the default Puma Rails server?
My use case is to start my REST API Puma server and accept TCP connections as well, so that when I call a specific endpoint the REST API can use a TCP service and keep the connection alive.
Is this possible? I am not talking about WebSockets, but pure TCP sockets.
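Puma itself only serves HTTP, but nothing stops the same Ruby process from opening a plain TCP listener in a background thread (e.g. from a Rails initializer). A minimal sketch with a made-up one-line protocol; the helper name and behaviour are illustrative only:

```ruby
require "socket"

# Hypothetical helper: start a raw TCP listener alongside Puma.
# Pass 0 to let the OS pick a free port; returns the TCPServer.
def start_tcp_listener(port)
  server = TCPServer.new(port)
  Thread.new do
    loop do
      client = server.accept
      Thread.new(client) do |sock|
        sock.puts "hello"   # toy protocol: greet and hang up
        sock.close
      end
    end
  end
  server
end
```

Keep in mind that such threads die with the worker on restarts, and in a preloading/forking Puma setup the listener must be started after fork (e.g. in an `on_worker_boot` hook), not in the master.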
I configured an nginx instance as a reverse proxy for a websocket server and established a websocket connection between client and server, following the official tutorial https://www.nginx.com/blog/websocket-nginx/.
Then I ran nginx -s quit to gracefully shut down nginx.
I found that a worker process stays in shutting down.. status, and I can still send messages over the established websocket connection; the nginx master and worker processes hang until the connection times out.
I'd like to know whether nginx can tell both client and server to close the socket connection at the transport level and exit normally, instead of waiting for the websocket to time out.
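To my knowledge nginx cannot send a clean WebSocket close frame on shutdown, but since version 1.11.11 the `worker_shutdown_timeout` directive caps the graceful-shutdown wait by force-closing any remaining connections. A minimal sketch (the 10s value is an arbitrary assumption):

```nginx
# main context of nginx.conf: after `nginx -s quit`, give lingering
# (e.g. websocket) connections at most 10s before closing them.
worker_shutdown_timeout 10s;
```

Note that clients then see an abrupt TCP close rather than a graceful WebSocket close handshake, so they should be prepared to reconnect.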
Hi, I am using nginx stream mode to proxy TCP connections. If I restart my app on the upstream side, can nginx automatically reconnect to the upstream without losing the TCP connection on the downstream side?
I found a clue in a HiveMQ blog post comment; hope it helps. I've copied it below:
Hi Sourav,

the load balancer doesn’t have any knowledge of MQTT; at least I don’t know of any MQTT-aware load balancer.

HiveMQ replicates its state automatically in the cluster. If a cluster node goes down and the client reconnects (and is assigned to another broker instance by the LB), it can resume its complete session. The client does not need to resubscribe.

Hope this helps, Dominik from the HiveMQ Team