I have a reverse proxy setup where Nginx works as the proxy server and load balancer. My biggest problem is that I have 2 app backends which I sometimes need to shut down. When I mark a backend as down and shut it down, its sessions are lost. How can I gracefully shut down one of my app servers, so that Nginx waits until all sessions are completed, or at least for some amount of time?
My simple config:
upstream loadbalancer {
    ip_hash;
    server 192.168.0.1:443;
    server 192.168.0.2:443;
}
OK, the issue is that each server has its own session manager, and when a server dies its session data is lost with it. A good solution is to set up centralized session storage, for example on the same machine that does the load balancing, and have the two app servers connect to it for session data. If one server goes down and the other server picks up a connection that the first was serving, the session data will still be found, because it is stored elsewhere. A common way to do this is to use memcached as the session storage.
As for the pros, you can add and remove as many app servers as you want and the users won't even notice any change.
But as for the cons, if that single storage server dies, all session data is lost, because the data is centralized.
You haven't tagged your question with the language you are using, but if you search for this on Google you'll easily find useful posts to help you set it up.
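Once the sessions live in a shared store, you also no longer need ip_hash to pin each client to one backend, so any server can answer any request. A minimal sketch, reusing the addresses from your config:

upstream loadbalancer {
    server 192.168.0.1:443;
    server 192.168.0.2:443;
}

That in turn makes taking one backend down much less painful, because the surviving server can pick the session up from the shared store.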
I am testing a Shiny script on an instance in GCP. The instance resides behind a load-balancer that serves as a front end with a static IP address and an SSL certificate to secure connections. I configured the GCP instance as part of a backend service to which the load-balancer forwards requests. The connection between the load-balancer and the instance is not secured!
The issue:
accessing the Shiny script via the load-balancer works, but the browser screen grays out (times out) on the client side a short time after the connection is initiated. When the screen grays out, I have to start over.
If I try to access the Shiny script on the GCP instance directly (not through the load-balancer), the script works fine. I suppose that the problem is in the load-balancer, not the script.
I appreciate any help with this issue.
Context: Shiny uses a websocket (RFC 6455) for its constant client-server communication. If, for whatever reason, this websocket connection gets disconnected, the user experiences the described "greying out". Fortunately, GCP supports websockets.
However, it seems that your load balancer has an unexpectedly low HTTP timeout value set.
Depending on what type of load balancer you are using (TCP, HTTPS) this can be configured differently. For their HTTPS offering:
The default value for the backend service timeout is 30 seconds. The full range of timeout values allowed is 1-2,147,483,647 seconds.
Consider increasing this timeout under any of these circumstances:
[...]
The connection is upgraded to a WebSocket.
Answer:
You should be able to increase the timeout for your backend service with the help of this support document.
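For example, with an external HTTP(S) load balancer the backend service timeout can be raised from the command line. A minimal sketch, assuming a global backend service named shiny-backend (the name is a placeholder; the value is in seconds):

# raise the backend service timeout so long-lived WebSocket connections are not cut off at the 30 s default
gcloud compute backend-services update shiny-backend --global --timeout=3600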
Mind you, depending on your configuration there could be more proxies involved which might complicate things.
Alternatively you can try to prevent any timeout by adding a heartbeat mechanism to the Shiny application. Some ways of doing this have been discussed in this issue on GitHub.
I have two servers A and B.
Most of the time, both servers are not running at the same time.
So when A is running, it is most likely that B is not running.
When B is running, it is most likely that A is not running.
This switch between A and B can happen after some weeks.
So I would like nginx to redirect to the running server and keep using that server until it goes down.
I have tried this configuration:
upstream backend {
    server SERVER_A;
    server SERVER_B;
}
server {...}
This is working but I can see in the logs that it is periodically trying to connect to the "down" server.
Then I tried that:
upstream backend {
    server SERVER_A;
    server SERVER_B backup;
}
server {...}
This works correctly if SERVER_A is up. But when only SERVER_B is up, nginx frequently tries to access SERVER_A.
Actually, in that case, the correct configuration would be "server SERVER_A backup;" but I don't think we can do dynamic reconfiguration ;-)
Actually, it is not a very big deal that nginx is periodically trying to access the down server, but if I can avoid that using a correct configuration, it would be better.
I know about the fail_timeout parameter, but I don't think it will really solve my issue, and it might even add some downtime during switching.
Thanks in advance.
For a controlled server switch, all that is required is a hook that marks the individual server as down:
sed -i 's/server SERVER_A;/server SERVER_A down;/' /etc/nginx/nginx.conf
nginx -s reload
A configuration reload is a standard procedure that is handled gracefully and is safe: http://nginx.org/en/docs/beginners_guide.html#control
Once the master process receives the signal to reload configuration,
it checks the syntax validity of the new configuration file and tries
to apply the configuration provided in it. If this is a success, the
master process starts new worker processes and sends messages to old
worker processes, requesting them to shut down. Otherwise, the master
process rolls back the changes and continues to work with the old
configuration. Old worker processes, receiving a command to shut down,
stop accepting new connections and continue to service current
requests until all such requests are serviced. After that, the old
worker processes exit.
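Switching back later works the same way in reverse. A sketch, assuming the same nginx.conf layout as above with both servers listed as regular (non-backup) entries:

# re-enable SERVER_A and drain SERVER_B before taking it down
sed -i 's/server SERVER_A down;/server SERVER_A;/' /etc/nginx/nginx.conf
sed -i 's/server SERVER_B;/server SERVER_B down;/' /etc/nginx/nginx.conf
nginx -s reload

Because the reload keeps the old worker processes around until their current requests are finished, the switch itself does not drop in-flight connections.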
My question is about ASP.NET session management. In the current web application we have "sticky sessions" (the user is always redirected to the server it started talking to). Below is my problem statement.
One of our clients sends a huge number of requests to our servers, and somehow all of them come from one or at most two IPs. We have 5 servers running to serve those requests. The problem here is that 1-2 servers may get hit heavily while the other servers sit idle, because sticky sessions will not allow a request to be processed by serverB when the conversation was initially answered by serverA.
What we need is exactly the opposite: any server should be able to process an incoming request while maintaining the continued conversation.
I have put my problem in very plain words. Any pointers will be appreciated.
Why not just store the session state on a SQL server?
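For example, ASP.NET can be pointed at out-of-process SQL Server session state with a few lines in web.config. A sketch, assuming a SQL Server instance named SESSIONDB that has been prepared with aspnet_regsql.exe (the names and values are placeholders):

<configuration>
  <system.web>
    <!-- placeholder connection string; point it at your prepared session-state database -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=SESSIONDB;Integrated Security=SSPI;"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>

With the session data in SQL Server, any of the 5 servers can answer any request, so the sticky-session requirement (and the uneven load it causes) goes away.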
I'm new to web.config and ASP.NET. I want my client to dynamically point to whichever server is free.
Is it possible?
Is there a way to have multiple entries in the web.config file that we can choose from at run time?
Let me be more specific. I have multiple clients that contact a server for a resource, but due to excess load on that server I want to have multiple servers, and each client should contact whichever server is free.
Thanks for the help in advance.
You have to have a separate load balancer in front of your servers.
Also, if you need sessions, then you will need to move the session state out of process, to SQL Server or to the ASP.NET State Service, so that the different servers share the session state.
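For instance, pointing all servers at a shared ASP.NET State Service is a small web.config change. A sketch, assuming the state service runs on a host 10.0.0.5 with its default port 42424 (placeholder values):

<configuration>
  <system.web>
    <!-- placeholder host/port of the shared ASP.NET State Service -->
    <sessionState mode="StateServer"
                  stateConnectionString="tcpip=10.0.0.5:42424"
                  timeout="20" />
  </system.web>
</configuration>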
You can read about your options for sharing sessions between servers here: https://web.archive.org/web/1/http://articles.techrepublic%2ecom%2ecom/5100-10878_11-1049585.html
Redirect might work for you:
http://www.w3schools.com/asp/met_redirect.asp
Also you could try using a proxy server:
http://www.iisproxy.net/
or load balancing server:
http://www.c-sharpcorner.com/UploadFile/gopenath/Page107182007032219AM/Page1.aspx
Problem: I have a server farm which uses non-sticky IPs and a Session Server to maintain sessions for all the servers. So it doesn't matter which server a client comes back to, because the server will always go to the Session Server to get that client's session data. When I take the Session Server down, all the servers lose their session data.
One of the solutions to this problem is to use SQL Server as the Session Server. This unfortunately is not possible.
So I'm thinking in terms of Memcached. If I managed my sessions using memcached, I would still have the problem that sessions are lost if I take one of the memcached servers down. However, if I could issue a call against that server saying "redistribute your cache to the other servers", then this should solve the problem.
How would you redistribute memcached's cache from a server being taken down to other servers?
I'm not sure there's such a feature in memcached. But the point is, it's a cache. The worst that can happen if a server goes down or restarts is that you get some cache misses until the cache is rebuilt. Memcached isn't supposed to be a reliable DB (though there is a DB based on memcached); your data should also be kept at a lower, persistent level, and you shouldn't sweat it when a cache server goes down every once in a while, as long as it doesn't invalidate your entire cache and doesn't happen too often.
Have you considered using SQL Server to maintain the session state?
Using SQL Server for ASP.Net session state
ASP.NET Session State (MSDN)