Flask application not taking all responses - nginx

I'm building an app to track changes to contacts in HubSpot via a webhook.
I deployed the service on Cloud Run (GCP) using the following services:
A single Docker container listening to port 8080
Nginx + uWSGI for production purposes
Flask as a framework to update some external data
The problem is that everything builds correctly, but when the container receives more than one request (for example, 100 contact deletions), only a few of them are processed.
I thought that nginx handled the requests and passed them one by one to my Flask app.
If this is not the case, how can I handle this situation?
Thanks in advance :)

You can use Gunicorn to handle more requests; Gunicorn runs your Flask app with more than one worker.
Just add gunicorn to your requirements.txt, then put this CMD line in your Dockerfile:
CMD ["gunicorn", "-t", "30", "-w", "3", "-b", "0.0.0.0:8080", "app:app"]
-t max timeout (in seconds)
-w number of workers (Gunicorn's docs recommend (2 x CPU cores) + 1)
-b bind address
app:app means the app object in app.py, so edit this to match your Flask app's module and variable names.
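As a sketch of the worker-count rule of thumb, here is how you might compute it at container start (the function name is illustrative; the (2 x cores) + 1 formula is the one Gunicorn's documentation suggests as a starting point):

```python
import multiprocessing

def recommended_workers():
    """Gunicorn's rule of thumb: (2 x CPU cores) + 1 workers."""
    return multiprocessing.cpu_count() * 2 + 1

# Example: on a 2-core instance this suggests 5 workers, i.e.
#   gunicorn -w 5 -t 30 -b 0.0.0.0:8080 app:app
```

On Cloud Run, size this against the CPU allocated to the container rather than the underlying host.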

Related

Seeing error logging for a Flask application in digitalocean

I am running a Flask application in digitalocean via a Gunicorn and NGINX set up.
I have SSH access to my digitalocean droplet and am able to log in via the terminal.
Gunicorn, NGINX and Flask are already running and this is a production server.
Now, I'd like to SSH into my droplet and run a terminal command to see a printout of any errors that occur from my Flask application. I guess these would be Gunicorn errors.
Is such a thing possible? Or would I have to print things out to an error log? If so, I'll probably have questions about how to do that too! :D
Thank you in advance!!
Have a look at this DigitalOcean tutorial for Flask, Gunicorn, and NGINX. It includes a section on obtaining logs at the end of step 5 that should be helpful.
A common approach with cloud-based deployments is to centralize logs by aggregating them automatically from multiple resources (e.g. Droplets), which saves ssh'ing (or scp'ing) into machines to query logs.
With a single Droplet it's relatively straightforward to ssh in and query the logs, but as the number of resources grows this can become burdensome. Some cloud providers offer logging-as-a-service; with DigitalOcean you may want to look at third-party solutions. I'm not an expert, but the ELK stack, Splunk, and Datadog are often used.
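If you also want the Flask application to keep its own error log (independent of whatever Gunicorn writes), the standard library's logging module is enough. A minimal sketch, assuming a log file in the working directory and an illustrative logger name (in a real Flask app you would attach the handler to app.logger instead):

```python
import logging

# Attach a file handler that records only ERROR and above.
logger = logging.getLogger("myflaskapp")
handler = logging.FileHandler("flask-errors.log")
handler.setLevel(logging.ERROR)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.error("Something went wrong")  # written to flask-errors.log
```

You can then tail that file over SSH with tail -f flask-errors.log.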

How do web servers stay alive?

I am wondering how web servers and frameworks, e.g. Nginx, Flask, Django, stay alive and wait for requests, and how I can write my own program which stays alive and waits for a request before launching an action.
The short answer, for the overwhelming majority of cases involving nginx, is: a systemd service. When you install nginx, it sets itself up as a systemd service configured to start nginx on boot and keep it running.
You can adapt systemd to load and keep your own services (like Flask, etc.) alive and waiting for requests as well. Here is an article that explains the basics.
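As an illustration, a minimal systemd unit for a Gunicorn-served Flask app might look like this (the paths, service name, and user are assumptions; adjust them to your setup):

```ini
# /etc/systemd/system/myflaskapp.service
[Unit]
Description=Gunicorn instance serving my Flask app
After=network.target

[Service]
User=www-data
WorkingDirectory=/srv/myflaskapp
ExecStart=/srv/myflaskapp/venv/bin/gunicorn -w 3 -b 127.0.0.1:8000 app:app
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myflaskapp`; systemd will then start it on boot and restart it if it crashes.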
An alternative to systemd (which is built into most of the Linux systems you would be using on a server) is supervisord. Like systemd, supervisord can be configured to monitor, start, and keep your service running in the background, waiting for a request.
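To see the underlying idea of "staying alive and waiting" without any framework, here is a minimal standard-library sketch: the program blocks in accept() inside an endless loop, which is, in essence, what nginx and Gunicorn do (in a far more sophisticated, concurrent way):

```python
import socket

def serve_forever(port, host="127.0.0.1"):
    """Block in an endless accept() loop, answering each connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:                    # this loop is what keeps the server "alive"
        conn, _ = srv.accept()     # blocks here until a request arrives
        with conn:
            conn.recv(1024)        # read (and ignore) the request
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
```

Run under systemd or supervisord, this loop would be restarted automatically if the process ever died.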

Nginx if flask app not running redirect to different url

Sometimes the Flask app server may not be running, in which case the page just says the server cannot be reached. Is there any way to have Nginx redirect to a different URL if the Flask app cannot be reached?
This kind of dynamic change of proxying is not possible in Nginx directly. One way you could do it is with a dedicated service (application) that polls your primary Flask endpoint at regular intervals.
If there is a negative response, your service could change the nginx config and then send a HUP signal to the nginx master process, which reloads nginx with the new config. This method is efficient and fast.
If you are writing this service in Python, you could use the standard signal module (via os.kill) to send the signal to the nginx master process, and the nginxparser library to manipulate the nginx config.
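A sketch of such a polling service, using only Python's standard library (the URL, timeout, and pid-file path are assumptions, and rewriting the nginx config itself is left out; SIGHUP is Unix-only):

```python
import os
import signal
import urllib.error
import urllib.request

def is_healthy(url, timeout=5):
    """True if the primary Flask endpoint answers with an HTTP 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False

def reload_nginx(pid_file="/var/run/nginx.pid"):
    """Send SIGHUP to the nginx master so it reloads its configuration."""
    with open(pid_file) as f:
        os.kill(int(f.read().strip()), signal.SIGHUP)
```

The service would call is_healthy() on a timer and, when it flips to False, rewrite the config (e.g. with nginxparser) and call reload_nginx().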

Bind docker container port to path

Docker noob here. I have set up a dev server with Docker containers and am able to run basic containers.
For example
docker run --name node-test -it -v "$(pwd)":/src -p 3000:3000 node bash
Works as expected. Since I have many small projects, I would like to bind/listen to an actual HTTP localhost path instead of a port. Something like this:
docker run --name node-test -it -v "$(pwd)":/src -p 3000:80/node-test node bash
Is it possible? Thanks.
EDIT. Basically I want to type localhost/node-test instead of localhost:3000 in my browser window
It sounds like what you want is for your Docker container to respond to a URL like http://localhost/some/random/path by somehow specifying that path in the Docker --port option.
The short answer to that is no, that is not possible. The reason is that a port is not related to a path in any way - an HTTP server listens on a port, and serves resources that are found at a path. Note that there are many different types of servers and all of them listen on some port, but many (most?) of them have no concept of a path at all. For example, consider an SMTP (mail transfer) server - it often listens on port 25, but what does a path mean to it? All it does is transfer mail from one server to another.
There are two ways to accomplish what you're trying to do:
write your application to respond to particular paths. For example, if you're using the Express framework in your node application, create a route for the path you want.
use a proxy server to accept requests on one path and relay them to a server that's listening to another path.
Note that this has nothing to do with Docker - you'd be faced with the same two options if you were running your application on any server.
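For the proxy-server option, the usual tool is nginx in front of the containers. A sketch of a config that maps the path from the question onto the container's published port (the file path and proxy details are assumptions):

```nginx
# /etc/nginx/conf.d/node-test.conf
server {
    listen 80;

    # http://localhost/node-test/ -> container published on port 3000
    location /node-test/ {
        proxy_pass http://127.0.0.1:3000/;  # trailing slash strips the /node-test/ prefix
        proxy_set_header Host $host;
    }
}
```

Each additional small project gets its own location block pointing at its own published port.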

How can I use nginx as a dynamic load balancing proxy server on Bluemix?

I am using docker-compose to run an application on the bluemix container service. I am using nginx as a proxy webserver and load balancer.
I have found an image that uses docker events to automatically detect new web servers and adds those to the nginx configuration dynamically:
https://github.com/jwilder/nginx-proxy
But for this to work, I think the container needs to connect to a Docker socket. I am not very familiar with Docker and I don't know exactly what this does, but essentially it is necessary so that the image can listen to Docker events.
The run command from the image documentation is the following:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
I have not been able to run this in the container service, as it does not find the /var/run/docker.sock file on the host.
The bluemix documentation has a tutorial explaining how to do load balancing with nginx. But it requires a "hard coded" list of web servers in the nginx configuration.
I was wondering how I could run the nginx-proxy image so that web instances are detected automatically?
The containers service on Bluemix doesn't expose that docker socket (not surprising, it would be a security risk to the compute host). A couple of alternate ways to accomplish what you want:
something like Amalgam8 or Consul, which basically does just that
similar, but self-written: have a shared volume, and each container on startup adds a file to that shared volume saying what it is, plus its private IP. The nginx container has a watch on the shared volume and reloads when those files change. (More work than Amalgam8 or Consul, but perhaps more control.)
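A sketch of that shared-volume idea in Python (the file layout is an assumption: each container drops a file into the shared directory whose content is its private ip:port, and the watcher renders an nginx upstream block from them):

```python
import os

def discover_backends(shared_dir):
    """Read every registration file; each contains one 'ip:port' line."""
    backends = []
    for fname in sorted(os.listdir(shared_dir)):
        with open(os.path.join(shared_dir, fname)) as f:
            backends.append(f.read().strip())
    return backends

def render_upstream(name, backends):
    """Render an nginx upstream block for the discovered containers."""
    lines = [f"upstream {name} {{"]
    for addr in backends:
        lines.append(f"    server {addr};")
    lines.append("}")
    return "\n".join(lines)

# A watcher would re-run these on a timer (or inotify event), write the
# result into the nginx config, and signal the nginx master to reload.
```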
