Upload large files with Flask, Nginx and uWSGI

I have a Flask app running on a remote Ubuntu server with Nginx and uWSGI.
I can't upload files larger than ~200 KB.
To set up my app I followed this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uswgi-and-nginx-on-ubuntu-18-04
At first I got a 413 error, which I fixed by adding client_max_body_size 5M; in /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
client_max_body_size 5M;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Now I don't get any errors, but I still can't upload larger files. I also changed my app's uWSGI configuration file:
[uwsgi]
module = wsgi:app
master = true
processes = 10
limit-post = 20000000
ignore-sigpipe=true
ignore-write-errors=true
disable-write-exception=true
socket = main.sock
chmod-socket = 660
vacuum = true
die-on-term = true
logto= /var/log/uwsgi/%n.log
What else can I do to fix it?

Thank you Frank for your answer; it has been some time since I opened this post. If I recall correctly, the error was a bit more subtle: Flask doesn't have retry logic built in by default. This, in combination with the cheapest server I could find for testing, contributed to the failure of some of the most expensive requests. Implementing retry logic with a retry decorator solved my issue, roughly along the lines of the sketch below.
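This is a hand-rolled sketch rather than the exact retry-decorator package I used, and the attempt/delay values are only illustrative:
import time
import functools

def retry(attempts=3, delay=2, exceptions=(Exception,)):
    """Retry a function a few times before giving up (illustrative values)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == attempts:
                        raise          # out of attempts, re-raise the last error
                    time.sleep(delay)  # brief pause before retrying
        return wrapper
    return decorator

@retry(attempts=3, delay=2)
def expensive_request():
    ...  # the slow upload/processing call that occasionally failed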

It looks like you have set the variables correctly. Perhaps you have the config setting MAX_CONTENT_LENGTH set in your Flask app? It would look something like this:
app.config['MAX_CONTENT_LENGTH'] = 50 * 1024 * 1024 # 50 MB
By default, Flask shouldn't complain about large requests, but if this variable is set then it will throw 413 errors when the request exceeds the specified value. Check here for more info.
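For illustration, a minimal (hypothetical) app showing how that setting interacts with uploads; the route and the 50 MB limit are made up:
from flask import Flask, request, jsonify

app = Flask(__name__)
# Requests with bodies above this size are rejected with a 413 before your view runs.
app.config['MAX_CONTENT_LENGTH'] = 50 * 1024 * 1024  # 50 MB

@app.errorhandler(413)
def too_large(err):
    # Return a clearer message instead of the default 413 page.
    return jsonify(error="Uploaded file is too large"), 413

@app.route('/upload', methods=['POST'])  # hypothetical route
def upload():
    f = request.files['file']
    f.save('/tmp/' + f.filename)  # simplified; sanitize filenames in real code
    return jsonify(status="ok")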
Lastly, make sure you're reloading Nginx and your uWSGI service with these commands:
sudo systemctl restart nginx.service
sudo systemctl restart <yourservice>.service
The .service suffixes are optional.

Related

Nginx not serving on the domain

I have installed nginx on a VM (OS: Ubuntu 18). I am following this tutorial but the issue is that I am not able to see the content getting served on your_domain.com. Here's my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
Files in the sites-enabled and sites-available directories: default, your_domain.
Here is your_domain (present in both sites-enabled and sites-available):
server {
    listen 80;
    listen [::]:80;
    root /var/www/your_domain/html;
    index index.html index.htm index.nginx-debian.html;
    server_name your_domain.com www.your_domain.com;
    location / {
        try_files $uri $uri/ =404;
    }
}
index.html file in /var/www/your_domain/html
<html>
<head>
<title>Welcome to your_domain!</title>
</head>
<body>
<h1>Success! The your_domain server block is working!</h1>
</body>
</html>
Lastly, this is my /etc/hosts
127.0.0.1 localhost
127.0.0.1 your_domain.com (trying out)
35.188.213.229 your_domain.com (trying out)
10.128.0.48 your_domain.com (trying out)
I am not sure where the issue is, because whenever I open your_domain.com, the Chrome browser says the following:
This site can’t be reached
your_domain.com’s server IP address could not be found.
I have tried doing traceroute your_domain.com as well:
traceroute: unknown host your_domain.com
I tried nginx on macOS and it works there, but I need to set it up on the Ubuntu VM for my project.
Given that traceroute is unable to resolve the host name into an IP address, I suppose the problem is caused by your /etc/hosts or some other issue with the name-resolution process on the client side.
Most probably the Linux resolver library is unhappy with the () in those lines. Try removing them, keeping each statement as clean as possible, e.g.:
127.0.0.1 your_domain.com
Note: this may be cached, so you may also need to restart your browser after making changes.
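On a Linux client you can double-check what actually resolves with standard tools (getent is not available on macOS):
getent hosts your_domain.com   # consults /etc/hosts via NSS, the same path most programs use
ping -c 1 your_domain.com      # should reach the address you expect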
On macOS you may even need to flush the system-level DNS cache:
dscacheutil -flushcache && killall -HUP mDNSResponder

NGINX *7060 upstream timed out (110: Connection timed out)

I had this error today with my NGINX server on Vultr, with PHP 7:
2017/08/22 07:46:09 [error] 19191#19191: *7060 upstream timed out
(110: Connection timed out) while reading response header from
upstream, client: 111.11.11.111, server: somedomain.com, request: "GET
/com$
I did a hard reset of the server and now everything is working fine. But what was it, why did it happen, and will it happen again?
This is my nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
#custom added 22.08.2017
proxy_read_timeout 300;
#end custom
client_max_body_size 800m;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json
application/javascript text/xml application/xml appli$
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Finally, I found the issue!
I had too many processes running at the same time; the problem was in the php-fpm settings.
The error showed up in the php logs, so if you have the same problem, check the php logs.
On the web I found this article.
I then changed my www.conf file:
pm.max_children = 40
Then I restarted php-fpm:
systemctl restart php7.0-fpm.service
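For context, the process-manager block in www.conf (typically under /etc/php/7.0/fpm/pool.d/) ends up looking something like this; apart from pm.max_children = 40, the numbers here are only illustrative:
pm = dynamic
pm.max_children = 40       ; hard cap on worker processes (the value raised above)
pm.start_servers = 6       ; illustrative
pm.min_spare_servers = 4   ; illustrative
pm.max_spare_servers = 10  ; illustrative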

R Plumber API: Prevent "504 Gateway Time-out"

I have written and deployed an R API using plumber to a Digital Ocean droplet as in the instructions.
I am posting .json data and expecting .json data back. To do this I use curl from the command line, for example:
curl --data @data/data.json http://[API ADDRESS] > results/output.json
This works fine when I post a small dataset but as the dataset gets bigger I start to get an HTTP error as follows:
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.10.0 (Ubuntu)</center>
</body>
</html>
I tried editing /etc/nginx/nginx.conf to allow for longer timeouts and larger files, but still no luck. The nginx.conf file is as follows:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 3000;
types_hash_max_size 2048;
# server_tokens off;
##
# Allow for longer jobs
##
client_header_timeout 3000;
client_body_timeout 3000;
fastcgi_read_timeout 3000;
client_max_body_size 100M;
fastcgi_buffers 8 128k;
fastcgi_buffer_size 128k;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I then restarted nginx with sudo service nginx restart but still got the timeout error.
The /var/log/nginx/error.log line reads as follows:
*4 upstream timed out (110: Connection timed out) while reading response header from upstream, client: [MY IP], server: _, request: "POST [API]", upstream: "http://127.0.0.1:8000/[API]", host: "[HOST ADDRESS]"
Any help or tips you can give on how plumber works under the hood would be very useful indeed. Many thanks!
I have fixed this now by adding the following lines to /etc/nginx/sites-available/[my site]/mysite.conf:
location / {
    # timeout settings
    proxy_connect_timeout 3000s;
    proxy_send_timeout 3000;
    proxy_read_timeout 3000;
}
I also commented out the keepalive_timeout directive in nginx.conf and specified the HTTP version as in this article, but I am not sure exactly which change made the difference. If I find out I will update the answer.
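Putting those pieces together, the proxy block probably ends up looking roughly like this (the 127.0.0.1:8000 upstream comes from the error log above; treat it as a sketch rather than my exact config):
location / {
    proxy_pass http://127.0.0.1:8000;   # plumber listening locally
    proxy_http_version 1.1;
    proxy_connect_timeout 3000s;
    proxy_send_timeout 3000;
    proxy_read_timeout 3000;
    client_max_body_size 100M;
}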

Why does nginx default to my proxypass?

I am using nginx as a proxy server to serve two web apps on a single server that are running on separate ports (for local development purposes). Below is the full nginx.conf file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
server {
server_name news.mysite;
location / {
proxy_pass http://localhost:3001;
}
}
server {
server_name blog.mysite;
location / {
proxy_pass http://localhost:3002;
}
}
# include /etc/nginx/conf.d/*.conf;
# include /etc/nginx/sites-enabled/*;
}
When accessing the subdomains from the browser I see the expected content based on the web apps that are running. However, when I access the main domain through the browser (http://mysite), it displays the content from the first proxy_pass (news.mysite at localhost:3001). I would have expected one of the following two scenarios:
Serve content from the default directory at /var/www/html
The typical "This site can’t be reached" error in the browser.
Why does nginx fall back to the first proxy_pass it finds, and how can I change that?
The first server block that nginx encounters for a given socket is treated as the default unless you create another one that you explicitly mark as the default.
So in your case, you would want to add an additional server block as a catch-all:
server {
    listen 80 default_server;
    root /var/www/html;
}

docker nginx stream balancer 404

I have Docker and nginx version nginx/1.10.0 on Ubuntu 16.04.
My nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 1024;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
include /etc/nginx/tcpconf.d/*;
#mail {
# # See sample authentication script at:
# # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
# # auth_http localhost/auth.php;
# # pop3_capabilities "TOP" "USER";
# # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
# server {
# listen localhost:110;
# protocol pop3;
# proxy on;
# }
#
# server {
# listen localhost:143;
# protocol imap;
# proxy on;
# }
#}
This is the default nginx.conf, and I added include /etc/nginx/tcpconf.d/*;.
tcpconf.d contains one file:
stream {
    upstream docker {
        server localhost:8182;
        server localhost:8183;
    }
    server {
        listen 443;
        proxy_pass docker;
    }
}
So basically I have a GlassFish 4 server on Docker, and when I start a container on port 8182 I want nginx to balance requests over to port 8183 if 8182 is not responding, and vice versa.
This works perfectly, except for one thing. When I start a container, the GlassFish server starts and then the web application on it starts too. GlassFish is up after 1-5 seconds, but the web application takes 30 seconds to 1 minute, so once GlassFish is up (for example on port 8182) nginx sends requests to that port and I get a 404, because GlassFish is up but the web application is not. In that case I want to be redirected to port 8183, because a 404 is not what I want to see.
So my question is: how do I tell nginx not to show me the 404 and to try the other port instead?
Is there any reason you are using the stream module for this? If NGINX is proxying to a regular HTTP server, then use a regular http{} configuration and proxy_next_upstream to define the behavior on a 404 error:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream
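For example, a minimal http-level sketch of that idea (ports taken from your stream config; plain HTTP on port 80 for illustration, since terminating TLS at nginx would also require certificates to be configured):
# Goes inside the http{} context, e.g. as a file in /etc/nginx/conf.d/
upstream docker {
    server localhost:8182;
    server localhost:8183;
}
server {
    listen 80;
    location / {
        proxy_pass http://docker;
        # If one backend answers with a 404 (GlassFish up, app not yet deployed),
        # retry the request on the next server in the upstream group.
        proxy_next_upstream error timeout http_404;
    }
}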
