I am using nginx + uWSGI in front of a Flask app. In my nginx settings the server block has server_name *.mydomain.com; and the location block for uwsgi looks like:
location /api/ {
include uwsgi_params;
uwsgi_pass unix:///var/uwsgi/app.sock;
.........
}
The issue is that I can access app.mydomain.com, but when I try app1.mydomain.com the uwsgi log does not show any request. The nginx error log shows:
upstream timed out (110: Connection timed out) while reading response header from upstream, client: 122.166.94.231, server: *.mydomain.com, request: "GET /api/client/generic/ping HTTP/1.1", upstream: "uwsgi://unix:///var/uwsgi/app.sock", host: "app1.mydomain.com
I have another test setup where all these settings are the same, and it works. Any pointers? When I restart uwsgi and nginx, app1.mydomain.com works until I load app.mydomain.com (the initial load of app.mydomain.com fails, but if I keep refreshing it eventually loads; then app1.mydomain.com returns a 504 Gateway Timeout and the log shows "Connection timed out while reading response header from upstream").
It worked when I added single-interpreter = true to my uwsgi.ini settings.
A newly added Python library was causing the issue.
I don't know whether this will help others.
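For reference, a minimal sketch of where this option goes in the ini file (only the relevant line is shown; everything else in your uwsgi.ini stays as it was):

[uwsgi]
; run the app in the default Python interpreter instead of a sub-interpreter;
; this helps when a C-extension library misbehaves under multiple sub-interpreters
single-interpreter = true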
I also ran into the same issue. uWSGI has "http", "http-socket" and "socket" options. When putting uWSGI behind a full web server like Nginx, we should spawn uWSGI so that it natively speaks the uwsgi protocol:
uwsgi --socket 127.0.0.1:3031 --wsgi-file foobar.py --master --processes 4 --threads 2 --stats 127.0.0.1:9191
More details from uwsgi documentation: https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#putting-behind-a-full-webserver
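For reference, the same invocation expressed as an ini file would look roughly like this (a sketch translating the command above; the file name and addresses are the ones from that command):

[uwsgi]
socket = 127.0.0.1:3031
wsgi-file = foobar.py
master = true
processes = 4
threads = 2
stats = 127.0.0.1:9191

On the nginx side, the matching directive is then simply uwsgi_pass 127.0.0.1:3031; inside the location block.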
Looking at the uwsgi error logs and understanding what the actual problem was helped me. The issue was not related to the Nginx configuration at all: my email host had changed, and the code threw an error when calling the send-email code.
I have a very simple Flask app that is deployed on GKE and exposed via a Google external load balancer, and I am getting random 502 responses from the backend service (I added custom headers on the backend service and on nginx to identify the source, and I can see the backend service's header but not nginx's).
The setup is:
LB -> backend-service -> neg -> pod (nginx -> uwsgi) where pod is the application built using flask and deployed via uwsgi and nginx.
The scenario is handling image uploads in a simple, secured way. The sender sends me a token with the upload request.
My Flask app:
receives the request and checks the sent token via another service using "requests",
if the token is valid, proceeds to handle the image and returns 200,
if the token is not valid, stops and sends back a 401 response.
First, I got suspicious about the 200s and 401s, so I reverted all responses to 200. After some of the expected responses, the server starts to respond with 502 and keeps sending it. Some of the requests at the very beginning succeeded.
The nginx error log contains lines like the one below:
2023/02/08 18:22:29 [error] 10#10: *145 readv() failed (104: Connection reset by peer) while reading upstream, client: 35.191.17.139, server: _, request: "POST /api/v1/imageUpload/image HTTP/1.1", upstream: "uwsgi://127.0.0.1:21270", host: "example-host.com"
My uwsgi.ini file is as follows:
[uwsgi]
socket = 127.0.0.1:21270
master
processes = 8
threads = 1
buffer-size = 32768
stats = 127.0.0.1:21290
log-maxsize = 104857600
logdate
log-reopen
log-x-forwarded-for
uid = image_processor
gid = image_processor
need-app
chdir = /server/
wsgi-file = image_processor_application.py
callable = app
py-auto-reload = 1
pidfile = /tmp/uwsgi-imgproc-py.pid
My nginx.conf is as follows:
location ~ ^/api/ {
client_max_body_size 15M;
include uwsgi_params;
uwsgi_pass 127.0.0.1:21270;
}
Lastly, my app has a healthcheck method with a simple JSON response. It does nothing extra and simply returns, and it never fails.
Edit: my nginx access logs in the pod show the response as 401 while the client receives 502.
For those who run into the same issue: the problem was reading (or rather, not reading) the POST data.
nginx expects the POST data to be read by the proxied app (uwsgi in our case), but in my logic I was not reading it in some cases and was returning the response right away.
Setting uwsgi post-buffering solved the issue.
post-buffering = %(16 * 1024 * 1024)
This led me to the following solution:
https://stackoverflow.com/a/26765936/631965
Nginx uwsgi (104: Connection reset by peer) while reading response header from upstream
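For completeness, a minimal sketch of the pattern described above, assuming a Flask handler shaped like the one in the question (the route comes from the error log; the token helper is hypothetical). Reading the request body even on the early-reject path is the alternative to post-buffering:

from flask import Flask, jsonify, request

app = Flask(__name__)

def token_is_valid(token):
    # hypothetical stand-in for the external token-check service call
    return bool(token)

@app.route('/api/v1/imageUpload/image', methods=['POST'])
def upload_image():
    if not token_is_valid(request.headers.get('X-Token', '')):
        # drain the unread POST data before returning early, so nginx does not
        # hit "Connection reset by peer" while it is still sending the upload
        request.get_data()
        return jsonify(error='invalid token'), 401
    image_bytes = request.get_data()
    # ... handle image_bytes ...
    return jsonify(status='ok'), 200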
Linux installation of Phusion Passenger(R) 6.0.14 on nginx/1.14.1. I'm not able to load a site (using a simplified nginx.conf with the conf.d/passenger.conf and conf.d/site1.conf includes). The error:
2022/08/03 19:24:34 [alert] 55601#0: *3 Error opening '/home/user3/sites/Passengerfile.json' for reading: Permission denied (errno=13);
This error means that the Nginx worker process (PID 55601, running as UID 992) does not have permission to access this file.
Please read this page to learn how to fix this problem: https://www.phusionpassenger.com/library/admin/nginx/troubleshooting/?a=upon-accessing-the-web-app-nginx-reports-a-permission-denied-error;
Extra info, client: 192.168.1.4, server: domain1.com, request: "GET / HTTP/1.1", host: "server_f.local"
I don't even know what to ask, other than: how can I get this to work? That file doesn't exist. I've restarted nginx many times, and this is the only feedback I get. I've checked that page, which just tells me to look at the error log the problem is already reported in. The host and server are correct, and I am on my LAN.
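One way to verify what that message is describing, using only the path and UID from the log above (a diagnostic sketch, not a fix):

# show ownership and permissions of every component of the path
namei -l /home/user3/sites/Passengerfile.json

# see which account UID 992 belongs to (the nginx worker user from the message)
getent passwd 992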
I can't get Nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never fetch the remote endpoint again until the backend invalidates the cache. I have two containers with memcached v1.4.35 and one with Nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
server 172.17.0.6:11211;
server 172.17.0.7:11211;
}
upstream remote {
server api.example.com:443;
keepalive 16;
}
server {
listen 80;
location / {
set $memcached_key "$uri?$args";
memcached_pass http_memcached;
error_page 404 502 504 = @remote;
}
location @remote {
internal;
proxy_pass https://remote;
proxy_http_version 1.1;
proxy_set_header Connection "";
}
}
I tried setting the memcached upstream incorrectly to test, but I get HTTP 499 instead, along with warnings:
*3 upstream server temporarily disabled while connecting to upstream
With the described configuration it seems Nginx can reach memcached successfully but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me please?
My guesses on what's going on with your configuration
1. 499 codes
HTTP 499 is nginx's custom code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it; just run
nc -k -l 172.17.0.6 11211
and curl your resource. curl will hang for a while; then press Ctrl+C and you'll see this message in your access logs.
2. upstream server temporarily disabled while connecting to upstream
It means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll constantly see this message in your error logs (I see it every time with error_log ... info).
Since you do see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to hold.
Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind),
and use the -b option with telnet to make sure you're testing the memcached servers' availability from the right source address.
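A sketch of both suggestions (the bind address here is an assumption; use whatever address your nginx should reach the containers from):

location / {
    set $memcached_key "$uri?$args";
    memcached_bind 172.17.0.1;      # source address nginx uses towards memcached
    memcached_pass http_memcached;
    error_page 404 502 504 = @remote;
}

# and from the shell, test availability from that same source address:
telnet -b 172.17.0.1 172.17.0.6 11211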
3. nginx can reach memcached successfully but can't write or read from it
Nginx can only read from memcached via its built-in module
(http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
The ngx_http_memcached_module module is used to obtain responses from
a memcached server. The key is set in the $memcached_key variable. A
response should be put in memcached in advance by means external to
nginx.
4. overall architecture
It's not fully clear from your question how the overall schema is supposed to work.
nginx's upstream uses weighted round-robin by default.
That means only one of your memcached servers is queried for each request.
You can change this by setting memcached_next_upstream not_found, so a missing key is treated as an error and all of your servers are polled. That's probably fine for a farm of 2 servers, but it's unlikely to be what you want for 20 servers.
The same ordinarily goes for memcached client libraries: they pick a server out of the pool according to some hashing scheme, so a given key ends up on only one server of the pool.
5. what to do
I managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To simplify debugging, I'd get rid of the Docker containers to avoid networking overcomplication, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening in your case.
6. working solution
nginx config:
(https and http/1.1 are not used here, but it doesn't matter)
upstream http_memcached {
server 127.0.0.1:11211;
server 127.0.0.1:11212;
}
upstream remote {
server 127.0.0.1:8080;
}
server {
listen 80;
server_name server.lan;
access_log /var/log/nginx/server.access.log;
error_log /var/log/nginx/server.error.log info;
location / {
set $memcached_key "$uri?$args";
memcached_next_upstream not_found;
memcached_pass http_memcached;
error_page 404 = @remote;
}
location @remote {
internal;
access_log /var/log/nginx/server.fallback.access.log;
proxy_pass http://remote;
proxy_set_header Connection "";
}
}
server.py:
this is my dummy server (python):
from random import randint
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (you just need to install Flask first):
FLASK_APP=server.py flask run -p 8080
filling in my first memcached server:
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
Checking (note that we get a result every time, although we stored data only in the first server):
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
this one is not in the cache so we'll get a response from server.py
$ curl http://server.lan/?q=1 && echo
Hello: 32337
whole picture:
the 2 windows on the right are running
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv
I am running GitLab on Debian using the package from the repository. Most of the time GitLab runs very fast, but after longer idle times GitLab becomes very slow or even times out (error 502). One time I also had a timeout on a remote git access (I could not reproduce the issue; it was a timeout on the internal API).
In my setup the Debian machine is behind another nginx proxy which also serves some other services just fine. I did the gitlab-cli checks and everything seems fine.
In the error log of my reverse proxy I only see connection timeouts:
[error] 8643#0: *4139 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.1.1.10, server: gitlab.mydomain.tld, request: "GET / HTTP/1.1", upstream: "http://{SERVER-IP}:80/", host: "gitlab.mydomain.tld"
I can see some errors in my unicorn_stderr.log
E, [2016-03-30T19:40:20.183991 #783] ERROR -- : worker=1 PID:16798 timeout (61s > 60s), killing
E, [2016-03-30T19:40:20.194969 #783] ERROR -- : reaped #<Process::Status: pid 16798 SIGKILL (signal 9)> worker=1
I, [2016-03-30T19:40:20.197554 #16871] INFO -- : worker=1 spawned pid=16871
I, [2016-03-30T19:40:20.197909 #16871] INFO -- : worker=1 ready
E, [2016-03-30T20:08:42.911429 #783] ERROR -- : worker=0 PID:16866 timeout (61s > 60s), killing
E, [2016-03-30T20:08:43.191151 #783] ERROR -- : reaped #<Process::Status: pid 16866 SIGKILL (signal 9)> worker=0
I, [2016-03-30T20:08:43.758363 #18728] INFO -- : worker=0 spawned pid=18728
I, [2016-03-30T20:08:44.108244 #18728] INFO -- : worker=0 ready
What I am a bit curious about is the fact that there are no errors in the log of the nginx delivered with gitlab.
Some more system information:
#sudo gitlab-rake gitlab:env:info
System information
System: Debian 8.3
Current User: git
Using RVM: no
Ruby Version: 2.1.8p440
Gem Version: 2.5.1
Bundler Version: 1.10.6
Rake Version: 10.5.0
Sidekiq Version: 4.0.1
GitLab information
Version: 8.5.0
Revision: a513e09
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: postgresql
URL: http://gitlab.mydomain.tld
HTTP Clone URL: http://gitlab.mydomain.tld/some-group/some-project.git
SSH Clone URL: git@gitlab.mydomain.tld:some-group/some-project.git
Using LDAP: no
Using Omniauth: no
GitLab Shell
Version: 2.6.10
Repositories: /var/opt/gitlab/git-data/repositories
Hooks: /opt/gitlab/embedded/service/gitlab-shell/hooks/
Git: /opt/gitlab/embedded/bin/git
Edit:
My nginx config on the "external" reverse proxy looks like this:
server {
listen 443;
ssl on;
server_name gitlab.mydomain.tld;
access_log /var/log/nginx/gitlab.mydomain.tld.access.log;
error_log /var/log/nginx/gitlab.mydomain.tld.error.log;
ssl_certificate /etc/nginx/ssl/gitlab.mydomain.tld_unified.crt;
ssl_certificate_key /etc/nginx/ssl/mydomain.tld.key;
location / {
proxy_pass http://gitlab:80;
proxy_redirect default;
proxy_set_header Host $http_host;
proxy_set_header X_FORWARDED_PROTO "https";
satisfy any;
}
}
Edit2:
I took the suggested answer into account and also considered this source: https://github.com/gitlabhq/gitlabhq/blob/master/doc/install/requirements.md
I assigned 2GB RAM to the VM now, and also added one additional unicorn worker.
Edit3:
The problem seems to be solved by adding more memory and using 3 unicorn workers.
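For reference, with the omnibus package the worker count is set in /etc/gitlab/gitlab.rb and applied with a reconfigure (a sketch; 3 is the value from the edit above, the timeout shown is the default):

# /etc/gitlab/gitlab.rb
unicorn['worker_processes'] = 3
unicorn['worker_timeout'] = 60

# apply the change
sudo gitlab-ctl reconfigure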
Jan,
I have a similar setup, although our box is dedicated to GitLab. Without knowing the specs of your server (GitLab likes memory) or the load on that box, I would suggest the following diagnostics:
Does your upstream nginx use the same parameters as the GitLab nginx configuration? They have tweaked a number of things, including timeouts (see the sketch after this list).
What kinds of requests result in timeouts? Some operations (like generating diffs) can take some time to render.
If you run the requests via SSH, do you also experience timeouts?
Have you checked global logs in /var/log?
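Regarding the first point, a sketch of what raising the timeouts on the external proxy could look like (standard nginx directives; the 300-second values are assumptions, not taken from the GitLab package):

location / {
    proxy_pass http://gitlab:80;
    proxy_redirect default;
    proxy_set_header Host $http_host;
    proxy_set_header X_FORWARDED_PROTO "https";
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
}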
FYI: I had to enlarge my small GitLab installation to 4 GB RAM to stop it from throwing OOM errors.
Now I think I'd better go with Gogs or another alternative.
I'm having problems serving large file downloads/uploads (3 GB+).
As I'm using Django, I guess the problem serving the file could come from Django or Nginx.
In my Nginx-enabled site configuration I have:
server {
...
client_max_body_size 4G;
...
}
And in Django I'm serving the files in chunks:
import mimetypes
import os
from django.http import StreamingHttpResponse
from wsgiref.util import FileWrapper

def return_file(path):
    filename = os.path.basename(path)
    chunk_size = 8192
    # stream the file in 8 KB chunks instead of loading it all into memory
    response = StreamingHttpResponse(FileWrapper(open(path, 'rb'), chunk_size),
                                     content_type=mimetypes.guess_type(path)[0])
    response['Content-Length'] = os.path.getsize(path)
    response['Content-Disposition'] = 'attachment; filename={0}'.format(filename)
    return response
This method allowed me to go from downloads of ~600 MB to 2.6 GB, but the downloads seem to be getting truncated at 2.6 GB. I traced the error:
2015/09/04 11:31:30 [error] 831#0: *553 upstream prematurely closed connection while reading upstream, client: 127.0.0.1, server: localhost, request: "GET /chat/download/photorec.zip/ HTTP/1.1", upstream: "http://unix:/web/rsmweb/run/gunicorn.sock:/chat/download/photorec.zip/", host: "localhost", referrer: "http://localhost/chat/2/"
After reading some posts I added the following to my NGinx conf:
proxy_read_timeout 300;
proxy_connect_timeout 300;
proxy_redirect off;
But I got the same error, with *1 instead of *553.
I also thought that it could be a Django database timeout, so I added:
DATABASE_OPTIONS = {
'connect_timeout': 14400,
}
But that is not working either (the download over the development server takes about 30 seconds).
Thanks for any help!
For large files, try using Nginx itself with X-Accel. Nginx is intended to serve static content, while Django is for your application logic.
For more information, see the NGINX X-Accel wiki and this answer.
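A minimal sketch of that approach (the /protected/ prefix and the alias path are assumptions, not taken from the question): Django only sets a header, and nginx serves the bytes from an internal location.

# nginx: internal location that actually serves the files
location /protected/ {
    internal;
    alias /web/rsmweb/files/;   # hypothetical directory holding the real files
}

# Django view: hand the file off to nginx instead of streaming it from the worker
import mimetypes
import os
from django.http import HttpResponse

def return_file(path):
    filename = os.path.basename(path)
    response = HttpResponse()
    response['Content-Type'] = mimetypes.guess_type(path)[0] or 'application/octet-stream'
    response['Content-Disposition'] = 'attachment; filename={0}'.format(filename)
    # nginx strips this header and serves the file itself from the internal location
    response['X-Accel-Redirect'] = '/protected/{0}'.format(filename)
    return response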
The error from nginx indicates that the upstream closed the connection, so it's a problem with Django. I'd recommend looking for errors and debugging information in the Django logs.