Docker container's http response slower when not detached

I am using Docker version 17.06.2-ce, build cec0b72 on CentOS 7.2.1511.
I am going through the docker getting started tutorial. I have played around with docker a bit beyond this, but not much.
I have built the friendlyhello image by copy-pasting from the website. When running with
docker run -d -p 8080:80 friendlyhello
I can curl localhost:8080 and get a response in ~20ms. However, when I run
docker run -p 8080:80 friendlyhello
i.e., without detaching from the container, a curl to localhost:8080 takes over 50 seconds. This makes no sense to me.
EDIT: it seems that killing containers repeatedly may have something to do with this. Either that, or it's random whether a given container can serve quickly or not. After stopping and starting a bunch of identical containers with the -d flag as the only change, I have only seen quick responses from detached containers, though detached containers can also be slow to respond. I also think it's worth mentioning that 95%+ of the slow response times have been either 56s or 61s.
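(If you want to reproduce the timings above, something like the following works from the host; the port is the one from the run commands:)
# print only the total request time, in seconds
curl -s -o /dev/null -w '%{time_total}\n' http://localhost:8080/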
Trying to research this error gives me plenty of responses about curl being slower when run inside a container, but that is about all I can find.
If it matters, I'm working on a VM, have no access to the host, am always root, and am behind a network firewall and proxy, but I don't think this should matter when only dealing with localhost.

I'm dumb.
The getting started tutorial says that HTTP responses from this app can take a long time because of an unmet dependency that they have you add later in the tutorial. Unfortunately, they say this on the next page, so if you're on part 2 and a beginner, it's not clear why this problem occurs until you give up and move on to part 3. They claim the response may take "up to 30 seconds"; mine is double that, but it's clear that this is the root cause.

Related

Can I configure Celery Flower to run after I close my Unix shell?

I have inherited a corporate server and application that consists of several Python scripts, HTML files, and Unix services from an IT employee who recently left my company. He left absolutely no documentation, so I'm struggling to support this application for my work group--I am not an IT professional (though I can read/write Python, HTML, and a few other languages). I'm extremely unfamiliar with servers in general and Unix specifically.
From what I can tell from digging around, our application uses the following:
nginx
circus / gunicorn
rabbitmq-server
celery
celery flower
I've finally got most of these services running, but I'm struggling with Celery Flower. I've been able to launch Flower from my PuTTY SSH connection with the command:
/miniconda3/envs/python2/bin/flower start
but it appears to stop whenever I disconnect (server:5555 no longer shows the monitoring web page). Is it possible to configure it to run in the background so I don't have to keep my SSH connection open 24/7? I saw in the Flower documentation that there is a persistence mode, but I'm not sure what it does.
Thanks for any suggestions!
Tom,
I assume you are using a Linux platform. If this is the case I suggest you use screen (or even tmux) to run Flower. It will keep the application running in the background as well as offer the additional benefit of allowing you to connect back to the process if you need to inspect output, stop the process, etc.
To start the application use screen -S Flower -d -m /miniconda3/envs/python2/bin/flower start.
To see if the process is still running, use screen -ls, which will list the sessions like:
There is a screen on:
17256.Flower (02/09/16 08:01:16) (Detached)
1 Socket in /var/run/screen/S-hooligan.
To connect back to it, use screen -r Flower.
If you have connected back to the screen session, disconnect again with ^a ^d, assuming the escape character has not been changed from the default. To see a full list of key bindings, look at the man page; it's pretty straightforward.
You might also consider adding this command to the system crontab with an @reboot directive so that it starts when the system boots.
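For example, an entry along these lines in /etc/crontab (the flower path is taken from the question; the rest is a sketch, so adjust the user and the screen path for your system):
# run once at boot, inside a detached screen session named Flower
@reboot root /usr/bin/screen -S Flower -d -m /miniconda3/envs/python2/bin/flower start
Note that the system crontab takes a user field (root here); a per-user crontab edited with crontab -e would omit it.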

Docker overlay network doesn't clean up removed containers

We're running Docker across two hosts, with overlay networking enabled and configured. It's version 1.12.1, with Consul as the KV store. We aren't using Swarm, largely because we didn't feel it gave us enough control over ensuring availability and minimising resource use.
Our setup is micro service based, and we run quite a lot of containers which get restarted fairly frequently. Our model uses nginx as a "reverse proxy" for service discovery, for various reasons, and so we start multiple containers which share a --host of "nginx-lb". This works fine, and other containers on the network can connect to nginx-lb, which gives them a random one of the containers' IP addresses.
The problem we have is that when killing containers and creating new ones, sometimes (I don't know under what specific circumstances) the overlay network does not remove the old container from its records, so other containers then try to connect to the dead ones, causing problems.
The only way to resolve this is to manually run docker network disconnect -f overlay_net [container], having first run docker network inspect overlay_net to find the errant containers.
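For anyone scripting that cleanup, something along these lines lists the containers the network still has endpoints for, so the dead ones can then be force-disconnected (the network name comes from this question; jq is an assumption and isn't required if you read the inspect output by hand):
# list the container names the overlay network still knows about (assumes jq is installed)
docker network inspect overlay_net | jq -r '.[0].Containers[].Name'
# force-disconnect a stale endpoint by name
docker network disconnect -f overlay_net <stale-container-name>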
Is there a known issue with the overlay networking not removing dead containers from the KV data, or any ideas of a fix?
Yes, it's a known issue. Follow it here: https://github.com/docker/docker/issues/26244

reload nginx with monit

I'm looking to reload, not restart, nginx with monit. The docs say that valid service methods are start, stop and restart but not reload.
Does anyone have a workaround for how to reload nginx rather than restart it?
Edit - I should have pointed out that I still require the ability to restart nginx but I also need, under certain conditions, to reload nginx only.
An example might be that if nginx goes down it needs to be restarted but if it has an uptime > 3 days (for example) it should be reloaded.
I'm trying to achieve this: https://mmonit.com/monit/documentation/monit.html#UPTIME-TESTING
...but with nginx reloading, not restarting.
Thanks.
I solved this issue using the exec command when my conditions are met. For example:
check system localhost
if memory > 95%
for 4 cycles
then exec "/etc/init.d/nginx reload"
I've found that nginx memory issues can be resolved in the short term by reloading rather than restarting.
You can pass the reload signal which should do the job:
nginx -s reload
"Use the docs. Luke!"
According to the documentation, sending the HUP signal will cause nginx to re-read its configuration file(s), check it, and apply the new configuration.
See for details: http://nginx.org/en/docs/control.html#reconfiguration
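If the nginx binary isn't available in Monit's environment, the same reload can be triggered by signalling the master process directly (the pid file path below matches the config later in this thread and is otherwise an assumption):
# send HUP to the nginx master process to trigger a configuration reload
kill -HUP "$(cat /usr/local/var/run/nginx.pid)"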
Here's a config that will achieve what you wanted:
check process nginx with pidfile /usr/local/var/run/nginx.pid
start program = "/usr/local/bin/nginx"
stop program = "/usr/local/bin/nginx -s stop"
if uptime > 3 days then exec "/usr/local/bin/nginx -s reload"
I've tried this on my configuration. The only problem I'm seeing is that Monit assumes you're defining an error condition when you check the uptime like this. The nginx -s reload command, as I see it on my machine, does not reset the process's uptime back to 0. Since Monit treats uptime > 3 days as an error condition to be remedied by the command you give it, and that command doesn't bring the uptime back under 3 days, Monit will report Uptime failed as the status of the process, and you'll see this in the logs:
error : 'nginx' uptime test failed for /usr/local/var/run/nginx.pid -- current uptime is 792808 seconds
You'll see hundreds of these, actually (my config has Monit run every 30 seconds, so I get one of these every 30 seconds).
One question: I'm not sure what reloading nginx after a long time, like 3 days, will do for it - is it helpful to do that for nginx? If you have a link to info on why that would be good for nginx to do, that might help other readers getting to this page via search. Maybe you accepted the answer you did because you saw that it would only make sense to do this when there is an issue, like memory usage being high?
(old post, I know, but I got here via Google and saw that the accepted answer was incomplete, and also don't fully understand the OP's intent).
EDIT: ah, I see you accepted your own answer. My mistake. So it seems that you did in fact see that it was pointless to do what you initially asked, and instead opted for a memory check! I'll leave my post up to give this clarity to any other readers with the same confusion.

Make uWSGI use all workers

My application is very heavy (it downloads some data from the internet and puts it into a zip file), and sometimes it takes even more than a minute to respond (please note, this is a proof of concept). The CPU has 2 cores and internet bandwidth is at 10% utilization during a request. I launch uWSGI like this:
uwsgi --processes=2 --http=:8001 --wsgi-file=app.py
When I start two requests, they queue up. How do I make them get handled simultaneously instead? I tried adding --lazy, --master and --enable-threads in all combinations; none of them helped. Creating two separate instances does work, but that doesn't seem like good practice.
Are you sure you are not trying to make the two connections from the same browser (that is generally blocked)? Try with curl or wget.
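For example, two parallel requests from a shell take the browser out of the picture entirely (the port matches the uwsgi command above; the exact figures are only illustrative):
# fire two concurrent requests and print each one's total time;
# if both workers are serving, the two times should be roughly equal
curl -s -o /dev/null -w '%{time_total}\n' http://localhost:8001/ &
curl -s -o /dev/null -w '%{time_total}\n' http://localhost:8001/ &
wait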

php5-fpm crashes

I have a web server (nginx) running Debian, and php5-fpm randomly seems to crash; it replies with 504 Bad Gateway when I request PHP files.
When it is in this crashed state and I check it with sudo /etc/init.d/php5-fpm, it says that it is running, but it still gives 504 Bad Gateway until I restart php5-fpm.
I'm thinking it may have to do with one of my PHP files, which runs in an infinite loop until a certain event occurs (a change in the MySQL database) or until it is timed out. I don't know whether that is generally a good idea, or whether I should make the loop quit itself before a timeout occurs.
Thanks in advance!
First look at the nginx error.log for the actual error. I don't think PHP crashed; more likely your loop is using all available php-fpm processes, so there is none free to serve the next request from nginx. That should produce a timeout error in the logs (nginx will wait for some time for an available php-fpm process).
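For example, something along these lines shows recent upstream timeouts (the log path is the usual Debian default and is an assumption here; yours may differ):
# look for upstream timeouts from php-fpm in the recent nginx error log
tail -n 100 /var/log/nginx/error.log | grep -i "timed out"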
Regarding your second question: you should not use infinite loops for this. And if you do, insert a sleep() call inside the loop; otherwise you will overload your CPU with that loop, and also the database with queries.
Also, I guess it is enough to have one PHP process in that loop waiting for the event. In that case, use some type of semaphore (a file, or a flag in the database) to let other processes know that one is already waiting for that event. Otherwise you will always eat up all available PHP processes.
