nginx load balancing - keep last working server

I have two servers A and B.
Most of the time, both servers are not running at the same time.
So when A is running, it is most likely that B is not running.
When B is running, it is most likely that A is not running.
This switch between A and B can happen after some weeks.
So I would like nginx to redirect to the running server and keep using that server until it goes down.
I have tried that solution:
upstream backend {
server SERVER_A;
server SERVER_B;
}
server {...}
This is working but I can see in the logs that it is periodically trying to connect to the "down" server.
Then I tried that:
upstream backend {
server SERVER_A;
server SERVER_B backup;
}
server {...}
This is working correctly if SERVER_A is up. But if SERVER_B is the one that is up, then nginx frequently tries to access SERVER_A.
Actually, in that case, the correct configuration would be "server SERVER_A backup;" but I don't think we can do dynamic reconfiguration ;-)
Actually, it is not a very big deal that nginx is periodically trying to access the down server, but if I can avoid that using a correct configuration, it would be better.
I know about the fail_timeout parameter. But I don't think it will really solve my issue, and it might even add some downtime during switching.
Thanks in advance.

Since the switch between servers is a controlled event, the only thing required is a hook that marks the individual server as down:
sed -i 's/server SERVER_A;/server SERVER_A down;/' /etc/nginx/nginx.conf
nginx -s reload
Reloading the configuration is a standard procedure that nginx handles gracefully, and it is safe: http://nginx.org/en/docs/beginners_guide.html#control
Once the master process receives the signal to reload configuration,
it checks the syntax validity of the new configuration file and tries
to apply the configuration provided in it. If this is a success, the
master process starts new worker processes and sends messages to old
worker processes, requesting them to shut down. Otherwise, the master
process rolls back the changes and continues to work with the old
configuration. Old worker processes, receiving a command to shut down,
stop accepting new connections and continue to service current
requests until all such requests are serviced. After that, the old
worker processes exit.
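The switch back is symmetric. A minimal sketch of the reverse hook, assuming the upstream block lives in /etc/nginx/nginx.conf exactly as shown above (run it when SERVER_A comes back and SERVER_B goes away):
# re-enable SERVER_A, mark SERVER_B as down, then reload
sed -i 's/server SERVER_A down;/server SERVER_A;/' /etc/nginx/nginx.conf
sed -i 's/server SERVER_B;/server SERVER_B down;/' /etc/nginx/nginx.conf
nginx -t && nginx -s reload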

Related

Is it normal for my router to have activity on port 111?

What are typical results of nmap 198.168.1.1 for an average Joe? What would be a red flag?
PORT STATE SERVICE
111/tcp filtered rpcbind
What does this mean in context and is it something to worry about?
Basically, rpcbind is a service that enables file sharing over NFS. The rpcbind utility is a server that converts RPC program numbers into universal addresses; it must be running on the host to be able to make RPC calls on a server on that machine. When an RPC service is started, it tells rpcbind the address at which it is listening and the RPC program numbers it is prepared to serve. So if you have a use for file sharing, it's fine; otherwise it is unneeded and a potential security risk.
You can disable them by running the following commands as root:
update-rc.d nfs-common disable
update-rc.d rpcbind disable
That will prevent them from starting at boot, but if they are already running they will keep running until you reboot or stop them yourself.
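To stop them right away as well, a sketch (assuming the same Debian/Ubuntu service names as above):
# stop the NFS helper first, then rpcbind itself
service nfs-common stop
service rpcbind stop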
And if you are looking to get into a system through this service, there is plenty of reading material available on Google.

Nginx: redirect to a different URL if the Flask app is not running

Sometimes the Flask app server may not be running, in which case the page just says the server cannot be reached. Is there any way to have Nginx redirect to a different URL if the Flask app cannot be reached?
This kind of dynamic change of proxying is not possible in Nginx directly. One way you could do it is by having a dedicated service (application) that takes care of this by polling your primary Flask endpoint at regular intervals.
If there is a negative response, your service could simply change the nginx config and then send a HUP signal to the nginx master process, which in turn reloads nginx with the new config. This method is pretty efficient and fast.
If you write this service in Python, you could use the signal module to send the signal to the nginx master process and the nginxparser library to manipulate the nginx config.
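As a rough illustration of that watchdog idea, here is a minimal shell sketch (the health URL, the config file names, and the polling interval are all assumptions; a Python version using the signal module and nginxparser would follow the same pattern):
#!/bin/sh
# Hypothetical watchdog: poll the Flask app and swap the active nginx config
# whenever its state changes, then reload nginx (same effect as sending HUP).
state=up
while true; do
    if curl -sf -o /dev/null --max-time 5 http://127.0.0.1:5000/health; then
        new=up
    else
        new=down
    fi
    if [ "$new" != "$state" ]; then
        # app-up.conf / app-down.conf are assumed, pre-written config variants
        cp "/etc/nginx/conf.d/app-$new.conf" /etc/nginx/conf.d/app.conf
        nginx -t && nginx -s reload
        state=$new
    fi
    sleep 30
done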

Nginx backend shutdown without losing sessions

So I have a reverse proxy setup where Nginx works as the proxy server and load balancer. My biggest problem is that I have 2 app backends which I sometimes need to shut down. When I mark a backend with down in the config and then shut it down, the sessions on it are lost. How can I gracefully shut down one of my app servers, so that Nginx waits until all sessions have completed, or at least for some time?
My simple config:
upstream loadbalancer {
ip_hash;
server 192.168.0.1:443;
server 192.168.0.2:443;
}
OK, the issue is that each server has its own session manager, and when a server dies its session data is lost with it. A good solution is to set up centralized session storage, for example on the same server that does the load balancing, and have the other 2 servers connect to it to get the session data. If one server goes down and the other server tries to serve a connection that was being handled by it, the data will still be found because it is stored elsewhere. A common way to do this is to use memcached as the session store.
As for the pros, you can add and remove as many app servers as you want and the users won't even notice any change.
But for the cons, if that single server dies, all session data is lost, because the data is centralized.
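A rough sketch of the infrastructure side (the load balancer address 192.168.0.254 is an assumption, and the commands assume a Debian/Ubuntu machine; wiring your application's session handler to memcached is language specific and not shown here):
# on the load balancer: install memcached and make it listen on the LAN address
apt-get install -y memcached
sed -i 's/^-l .*/-l 192.168.0.254/' /etc/memcached.conf
service memcached restart
# from each app server: check that the session store is reachable on the default port
printf 'stats\nquit\n' | nc 192.168.0.254 11211 | head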
You haven't really tagged your question with the language you are using, but if you search for it on Google you'll easily find useful posts to help you.

How to enable a maintenance page on the frontend server while you are restarting the backend service?

I am trying to improve the user experience while a backend service is down for maintenance, having been shut down manually.
We do have a frontend web proxy, which happens to be nginx but could also be something else like a NetScaler instance. An important note is that the frontend proxy is running on a different machine than the backend application.
Now, for the backend service it takes a lot of time to start, even more than 10 minutes in some cases.
Note, I am asking this question on StackOverflow, as opposed to ServerFault because providing a solution for this problem is more likely to require writing some bash code inside the daemon startup script.
What we want to achieve:
service mydaemon stop should enable the maintenance page on the frontend proxy
service mydaemon start should disable the maintenance page on the frontend proxy
In the past we used to create a maintenance.html page and had nginx configured to check for the existence of this page using try_files, before falling back to the backend.
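Roughly, that old setup looked like this (the paths and backend address here are just placeholders):
server {
    listen 80;
    root /var/www/maintenance;

    location / {
        # if maintenance.html exists under the root, serve it for every request,
        # otherwise fall through to the backend
        try_files /maintenance.html @backend;
    }

    location @backend {
        proxy_pass http://127.0.0.1:8080;
    }
}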
Still, because we decided to move nginx to another machine, we can no longer do this, and doing it over ssh raises security concerns.
We already considered writing this file to an NFS drive which would be accessible by both machines, but even this solution does not scale for a service that has a lot of traffic: Nginx would end up checking for the file on every request, slowing down responses quite a lot.
We are looking for another solution for this problem, one that would ideally be more flexible.
As a note, we still want to be able to trigger this behaviour from inside the daemon script of the backend application. So if the backend application stops responding for other reasons, we expect to see the same behaviour from the frontend.

Is there a way to make nginx start a uwsgi process at the first request?

I was wondering whether there is a way to make nginx start a uWSGI process at the first request, so I can save a lot of memory with idle sites.
Does someone know how to do this?
Thanks!
Nginx doesn't start uWSGI processes at all. That's the uWSGI server's job.
Probably, you're looking for "cheap" mode:
http://projects.unbit.it/uwsgi/wiki/Doc#cheap
Nginx (by design) cannot generate new processes (this is why you do not have cgi support in nginx).
You can use the cheap+idle modes of uWSGI to start with only the master and reap workers after a specified time of inactivity (set by --idle).
If even starting only the master is too much for you (I suppose you want minimal memory usage), you can look at the old-school inetd/xinetd, or the newer upstart socket bridge and systemd socket activation, to start uWSGI only when a connection arrives.
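A minimal sketch of the cheap+idle combination (the socket path and module name are assumptions):
# --cheap: start with only the master; workers are spawned on the first request
# --idle 60: tear the workers down again after 60 seconds without requests
uwsgi --master --processes 2 \
      --socket /run/app.sock --module myapp:app \
      --cheap --idle 60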