Docker restart not showing the desired effect - nginx

I have a small nginx-based test application that I want to run inside a Docker container, so I followed the example given here: docker installation
I have a folder named restartTest containing an index.html file with a single line that says Docker Test 1. I mount this as a volume when I run the Docker container. The command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application responds as expected. Then I change the content of the index.html file (on my host) to Docker Test 2 and restart the container. I run the following command to verify that the content has indeed changed inside the container:
docker exec engine2 cat /usr/share/nginx/html/index.html
As expected, the file reads Docker Test 2. However, when I use curl (or the browser) to check whether the webpage also reflects the change, I still get Docker Test 1 as the response, even though index.html itself has changed. I have tried the following, to no avail:
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.

So I found that this is a known bug in the VirtualBox VM that is used for running Docker on a Mac.
The bug only appears when content is shared between the host machine and the VirtualBox VM. Web servers like nginx and apache (and apparently vertx) have an optimisation: whenever a static file is requested, they use sendfile to serve it. The bug is that with VirtualBox shared folders we always get the first version of the file, no matter what we try. The workaround for nginx and apache is to turn sendfile off (see the example config after the steps below). For vertx, however, there is a hack that we use:
Rename the file, say login.html, to login.html.moved (anything).
curl :/….../login.html (we won’t get anything).
Rename the file back from login.html.moved to login.html.
Hard refresh the page (Command + Shift + R).
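For nginx, the sendfile workaround is a one-line config change; a minimal sketch (the root path here matches the container setup from the question):

server {
    listen 80;
    root /usr/share/nginx/html;
    sendfile off;   # bypass sendfile so VirtualBox shared folders always serve the current file contents
}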
For further reading about this bug, consult the following:
Link1
Link2
Link3
Link4

I assume it is a caching problem. Did you try setting expires -1 in the location block that serves index.html, to disable caching of the static files?
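For example, a minimal sketch of such a location block (the exact location match is illustrative):

location = /index.html {
    expires -1;   # puts Expires in the past and sends Cache-Control: no-cache, so clients revalidate on every request
}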

Related

Ubuntu + nginx - trying to install GeoIP module

I'm using Vagrant (VVV, actually) to run local WordPress installs. I want to test different behaviors for different GEOs on my local machine instead of uploading to the server every time, which is annoying.
So I've tried to install the GeoIP nginx module on the local machine with the following guide: https://piwik.org/faq/how-to/faq_166/ (and a bit more googling, but it doesn't matter at the moment).
When I run ./configure, the following appears:
checking for GeoIP library ... found
checking for GeoIP IPv6 support ... found
I've also set the .dat files in my conf file and set the $_SERVER (fastcgi_param) parameters, so they are displayed when I print the $_SERVER var.
But those GeoIP vars are empty. I'm not sure about the reason, but two things are bothering me. First, when I run nginx -V in the terminal, the argument --with-http_geoip_module is missing. Second, could it actually work if the REMOTE_ADDR (IP) is not my real IP (192.168.1.50, for example)?
nginx is a bit strange to me, so sorry if something isn't exact.
--
Operating system - macOS, nginx version - 1.3.15, running with VVV (vagrant box)
If there is a reverse proxy in front of your nginx, use geoip_proxy to define the addresses whose X-Forwarded-For header can be trusted.
You can also use this without actually having a reverse proxy while developing: add your local IP to the geoip_proxy list and set the X-Forwarded-For header to your public IP in your browser (with a plugin like Modify Headers).
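A sketch of the relevant directives (the .dat path and the CIDR are placeholders for your environment):

http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;  # the GeoIP database you downloaded
    geoip_proxy   192.168.1.0/24;              # trust X-Forwarded-For from these source addresses

    # pass the resolved value through to PHP, e.g. alongside your other fastcgi_param lines:
    fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code;
}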

Hosting multiple meteor apps on one server

I have 2 meteor apps running on one Ubuntu server on DO. I have also set up nginx to serve them.
Config files:
sailsadria.conf : http://pastebin.com/eCicpNxK
ytp.klancir.work.conf : http://pastebin.com/cNKtA0dV
Now...
http://sailsadria.com/, which is on port 3000, works smoothly as expected, while http://ytp.klancir.work/ lands on the nginx root. On the other hand, http://ytp.klancir.work:3010 goes to the right app running on that port (though I suppose that any URL or IP I request with the port appended will end up in the right place).
Symlinks are also set up
The domains are configured:
sailsadria: http://screencast.com/t/iqKUlQlDgj8
ytp.klancir.work: http://screencast.com/t/DJJdLfqna
I don't know how to set it up so that http://ytp.klancir.work/ goes directly to port 3010, in other words, directly to the app...
The SOLUTION: sudo service nginx restart....
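For reference, a server block for the ytp.klancir.work app would typically look something like this (a sketch only; the actual conf is in the pastebin above, and Meteor needs the websocket headers):

server {
    listen 80;
    server_name ytp.klancir.work;

    location / {
        proxy_pass http://127.0.0.1:3010;        # the port the Meteor app listens on
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # required for Meteor's websocket (DDP) connection
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}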

Docker dynamic load balancing with Nginx

I'm doing an internship focused on Docker and I have to load-balance an application which has a client, a server and a database. I use Nginx as a load balancer, and my goal is to dynamically scale the number of server containers according to their CPU usage. For instance, if the CPU usage is over 60% I want to add a new container on the fly, without restarting Nginx, to spread the load.
I have to modify the nginx.conf file to add a new container, but then I have to restart the Nginx container to apply the changes, which is very slow.
So my question is: is there a (free) way to do this dynamically?
Tell me if you want further information, and forgive my poor English.
Thanks.
EDIT: I did as @Konstantin Azizov told me:
# copy the new config into the running nginx container
docker cp ./new.conf $(docker ps -f "name=dockerizedrubis_nginx" -q):/etc/nginx/nginx.conf
# signal the master process to reload its configuration
docker exec $(docker ps -f "name=dockerizedrubis_nginx" -q) bash -c 'kill -HUP $(cat /run/nginx.pid)'
# equivalent reload through the init script
docker exec $(docker ps -f "name=dockerizedrubis_nginx" -q) bash -c '/etc/init.d/nginx reload'
The configuration file is copied correctly into the container running Nginx; I send the HUP signal to reconfigure the Nginx process and then reload to apply my changes. There are no errors, and the on-the-fly reload works fine, but my new nodes are not taken into account by Nginx: the requests are still directed only to the first node created...
EDIT 2: I found the origin of the problem. It seems that in order to update the /etc/hosts of a container after a 'docker-compose scale', the container needs to be stopped, removed and restarted. In my case, I really don't want to stop the container running Nginx.
Question: does anyone have an idea of how to update the /etc/hosts of a container after a re-scale without having to restart the container (besides a dirty script)?
Thanks.
I used the nginx-proxy image from Jason Wilder for a while to load balance between containers, and it works for more than one scalable service. It monitors the Docker daemon and, when an event happens, rebuilds the nginx config file, adding new container instances when you scale out or removing them when you scale in.
Since Docker 1.10 (not sure if this is the correct version) there has been an internal DNS embedded in the Docker daemon, so since then I have been using its round-robin feature. Now I use the official nginx image to proxy-pass the requests to a domain that I define as an alias in the network options. I don't know if I was clear, due to my poor English, but I believe my GitHub example may help.
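A sketch of that pattern (assuming the scaled service has the network alias app; 127.0.0.11 is the address of Docker's embedded DNS):

server {
    listen 80;

    location / {
        resolver 127.0.0.11 valid=10s;  # Docker's embedded DNS; re-resolve every 10s
        set $upstream http://app;       # "app" is the service's network alias (an assumption here)
        proxy_pass $upstream;           # using a variable forces per-request DNS resolution
    }
}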
Unfortunately there is no easy (free) way to change the configuration without restarting; the only way to achieve zero-downtime scaling is a graceful restart: when you restart Nginx gracefully, it spawns a new instance with the new configuration, waits until it boots up, and then kills the old instance with the previous configuration.
See the official guide.

Xdebug with PHPStorm and a Docker container

Setup: Windows 10; Docker running with Boot2Docker on Hyper-V; PHPStorm 9
The web server on the VM is Nginx. I've configured the xdebug.ini for php5-fpm as:
zend_extension=xdebug.so
xdebug.remote_enable=on
xdebug.remote_port=9000
xdebug.remote_connect_back=On
xdebug.remote_handler=dbgp
xdebug.profiler_enable=0
If I set a breakpoint and reload the page, I get an incoming connection from Xdebug in PHPStorm.
I wonder why only one file is shown and not the entire project, which is much bigger. If I accept the connection I can debug the very first line, but it does not stop on my breakpoint, and it creates a server entry.
What is very strange is that the host field is empty.
I already added the server with the correct mapping but it got ignored.
So how to get Xdebug to stop on breakpoints?
What is very strange is that the host field is empty.
PhpStorm requires this field to be filled, as it uses it to recognize which server entry (and therefore which path mappings) to use -- the IDE supports debugging the same code base running on different domains / remote servers.
In this particular case, the server_name directive in your nginx configuration is empty. You can fix this by providing a value in the nginx config file.
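For example (the name itself is a placeholder; it just has to be non-empty and match the host of the server entry in PhpStorm):

server {
    listen 80;
    server_name myproject.local;  # placeholder name; fills the host field PhpStorm needs for mapping
    root /var/www/html;
}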

Ensure nginx master process stays running

I am currently trying to set up a Docker container using ubuntu:14.04 as my base image, with nginx and gunicorn/django/celery running inside. I am using supervisor to start all of the processes, and have tested that gunicorn is relaunched when it goes down. However, I can't figure out how to do the same for nginx.
My supervisord.conf for nginx is as follows:
[program:nginx]
command=nginx
autorestart=false
I have autorestart set to false because, from what I can tell, the nginx command simply starts the master and worker processes and then exits with status code 0. If I set autorestart to true, supervisor keeps trying to restart the nginx command, which fails on subsequent retries because the master/worker processes are already running and bound to the port.
On the surface this seems okay, because if I kill a worker process, the master starts another worker to take its place. But how do I ensure the master process itself stays running?
You need to add daemon off; to your nginx.conf, instructing nginx to run in the foreground.
Then modify your supervisor stanza to be:
[program:nginx]
command=nginx
autorestart=true
It will still spawn the master and worker processes, and it can be used this way in production setups just fine. In this case it's supervisor that controls and supervises the foreground nginx process.
See this FAQ entry
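Alternatively, you can pass the directive on the command line via nginx's -g flag instead of editing nginx.conf:

[program:nginx]
command=nginx -g 'daemon off;'
autorestart=true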
