I installed uWSGI in a Docker container running ubuntu:16.04 using the following commands:
apt-get update
apt-get install -y build-essential python-dev python-pip
pip install uwsgi
I then created a single static file:
cd /root
mkdir static
dd if=/dev/zero bs=1 count=1024 of=static/data
...and finally started uWSGI with the following command:
uwsgi \
--buffer-size 32768 \
--http-socket 0.0.0.0:80 \
--processes 4 \
--http-timeout 60 \
--http-keepalive \
--static-map2=/static=./
I was able to access the static file without any problems. However, despite passing the --http-keepalive option, issuing multiple requests with cURL resulted in the following output:
# curl -v 'http://myserver/static/data' -o /dev/null 'http://myserver/static/data' -o /dev/null
* Trying 192.168.1.101...
...
> GET /static/data HTTP/1.1
> Host: 192.168.1.101:8100
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 1024
< Last-Modified: Sat, 03 Dec 2016 22:06:49 GMT
<
{ [1024 bytes data]
100 1024 100 1024 0 0 577k 0 --:--:-- --:--:-- --:--:-- 1000k
* Connection #0 to host 192.168.1.101 left intact
* Found bundle for host 192.168.1.101: 0x563fbc855630 [can pipeline]
* Connection 0 seems to be dead!
* Closing connection 0
...
Of particular interest is the line:
* Connection 0 seems to be dead!
This was confirmed with Wireshark: the capture shows two completely separate TCP connections, the first of which is closed by uWSGI (packet #10, the [FIN, ACK]).
What am I doing wrong? Why doesn't uWSGI honor the --http-keepalive flag instead of immediately closing the connection?
In my case, I was getting random 502 responses from an AWS ALB/ELB.
I had provided the configuration via an .ini file like:
http-keepalive = true
but after hours of debugging I saw a similar picture in Wireshark: after each response, the connection was closed by the server, so the keep-alive option was being ignored.
The discussion in uWSGI#2018 points out that the value should be an integer (doc here), but unfortunately I can't find exact information on whether it represents the socket lifetime in seconds or can simply be '1'. After this change, the random 502s disappeared and uWSGI started working as expected.
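For reference, a minimal sketch of the .ini after the change; everything except the http-keepalive line is illustrative rather than my real config:
[uwsgi]
http = 0.0.0.0:8080
# an integer value; per uWSGI#2018 a plain 1 (or a number of seconds) enables keep-alive
http-keepalive = 1
processes = 4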
Hope this is also helpful for somebody.
I was finally able to get keepalive working by switching from --http-socket to simply --http. According to uWSGI docs:
If your web server does not support the uwsgi protocol but is able to speak to upstream HTTP proxies, or if you are using a service like Webfaction or Heroku to host your application, you can use http-socket. If you plan to expose your app to the world with uWSGI only, use the http option instead, as the router/proxy/load-balancer will then be your shield.
In my particular case, it was also necessary to load the http plugin.
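For reference, a rough sketch of the adjusted invocation, mirroring the original options; the --plugin line is only needed if your uWSGI build ships the HTTP router as a separate plugin:
uwsgi \
--plugin http \
--http 0.0.0.0:80 \
--http-keepalive \
--http-timeout 60 \
--processes 4 \
--buffer-size 32768 \
--static-map2=/static=./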
Related
I am developing an ASP.NET application on macOS with F# (.NET 6.0.301). While writing code I run dotnet watch:
dotnet watch run -v --project Server/Server.fsproj
and send a curl message to one of the api endpoints of the server
curl -k -i -d "#loginInfo.json" -H "Accept: application/json" -H "Content-Type: application/json" -v 'https://localhost:5001/services/IAdminApi/login'
* Trying 127.0.0.1:5001...
* Connected to localhost (127.0.0.1) port 5001 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
[...] // More handshake data
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
which returns the expected result. It worked seamlessly until a few months ago, when I started receiving the following error:
* Trying 127.0.0.1:5001...
* Connected to localhost (127.0.0.1) port 5001 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
* Closing connection 0
curl: (35) error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
However, when I run the server directly without the watch command:
/usr/local/share/dotnet/dotnet Server/bin/Debug/net6.0/Server.dll
everything works perfectly, and the API sends back the proper info. The server uses a self-signed certificate that is read from file.
Everything is running locally on a macOS machine. I have tried on two machines with different macOS versions, and the problems started after updating to Monterey 12.6 and Ventura 13. Now both machines run updated versions (Monterey 12.6.2 and Ventura 13.1), but the problem persists.
However, dotnet watch works as expected on Windows 10. The code is run from a terminal, without any intervention from the IDE (Rider in my case). Even though I lean towards something at the OS level, I also tried sending the curl command with the --tlsv1.x and --tls-max 1.x options (x = 0, 1, 2, 3), with no luck. The version of curl is 7.79.1.
Any pointer to keep investigating will be greatly appreciated.
I think this might be something related to how the dotnet watch command handles encrypted traffic. As per this page: https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-watch
"As part of dotnet watch, the browser refresh server mechanism reads this value to determine the WebSocket host environment. The value 127.0.0.1 is replaced by localhost, and the http:// and https:// schemes are replaced with ws:// and wss:// respectively."
So perhaps, while the HTTPS traffic (when you run the application without dotnet watch) works fine because it uses appropriate ciphers and TLS versions, there is a bug or some omission in the implementation of the wss protocol, where TLS is fixed to version 1.
It would appear that you have two options:
run your application on localhost without HTTPS (see the sketch below), or
configure your operating system to allow TLS 1.0.
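A minimal sketch of the first option, assuming the default ASP.NET Core host (which reads the ASPNETCORE_URLS variable); the port number is illustrative:
ASPNETCORE_URLS="http://localhost:5000" dotnet watch run -v --project Server/Server.fsproj
# then hit the endpoint over plain HTTP (@ tells curl to read the body from the file)
curl -i -d "@loginInfo.json" -H "Accept: application/json" -H "Content-Type: application/json" -v 'http://localhost:5000/services/IAdminApi/login'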
Problem
I receive a 502 Bad Gateway when I try to execute a Django management command via Gunicorn.
My reasoning
I think the problem is about permissions, something like Gunicorn not being able to call the command. I say that because I can run it locally, where I don't use Gunicorn.
I can run it in these two ways:
python manage.py runserver and then firing the request with Postman, and that's OK.
The second is calling python manage.py command_name from the terminal, and that's OK as well.
In production I'm also able to run python manage.py command_name, but not via Postman, because it returns 502 (the main problem).
PS: if I remove call_command it returns 200 OK, so it seems like the core problem is the execution of this command.
The code
from django.core.management import call_command
from rest_framework import views
from rest_framework.response import Response

class TestCommandView(views.APIView):
    def post(self, request):
        id = request.data['id']
        try:
            call_command('command_name', target_id=id)
            return Response({"status": "success"})
        except Exception as error:
            return Response({"status": "error: " + str(error)})
Return sample
<html>
<head>
<title>502 Bad Gateway</title>
</head>
<body bgcolor="white">
<center>
<h1>502 Bad Gateway</h1>
</center>
<hr>
<center>nginx/1.14.0 (Ubuntu)</center>
</body>
</html>
Gunicorn Conf
[Unit]
Description=gunicorn daemon
After=network.target
[Service]
User=ubuntu
Group=www-data
RuntimeDirectory=gunicorn
WorkingDirectory=/var/www/project
ExecStart=/var/www/project/venv/bin/ddtrace-run /var/www/project/venv/bin/guni$
Environment="DJANGO_SETTINGS_MODULE=project.settings.prod"
[Install]
WantedBy=multi-user.target
Nginx error log
2019/03/13 13:43:38 [error] 27552#27552: *3128 upstream prematurely closed connection while reading response header from upstream, client: IP, server: api.project.com, request: "POST /api/project/endpoint/ HTTP/1.1", upstream: "http://unix:/tmp/project.sock:/api/project/endpoint/", host: "api.project.com"
What I've tried
sudo chown -R www-data:www-data /var/www/project
sudo chown -R ubuntu:ubuntu /var/www/project
Changing my Environment value in the Gunicorn config, based on this question's solution: Django call_command permissions nginx+gunicorn+supervisord. That answer adds PYTHONPATH, but in a Supervisor config; this project doesn't use Supervisor, so I tried putting it in the Gunicorn file. It was just a try.
I realized it was a timeout problem.
The default timeout of Gunicorn is 30 seconds, based on its documentation:
Doc: "Workers silent for more than this many seconds are killed and restarted."
My request took more than 30 seconds, so Gunicorn killed the process and Nginx returned 502.
Solution
Change gunicorn default timeout
Change nginx timeout
Gunicorn
I added the timeout option to the Gunicorn ExecStart line:
--timeout 300
ExecStart=/var/www/project/venv/bin/ddtrace-run /var/www/project/venv/bin/gunicorn --bind unix:/tmp/project.sock project.wsgi:application --access-logfile /home/ubuntu/gunicorn.log --error-logfile /home/ubuntu/gunicorn.error.log --timeout 720 --workers 3
Nginx
I added this option to the http block of the Nginx conf:
proxy_read_timeout 300s;
I restarted Nginx and Gunicorn and it worked like a charm.
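For completeness, a rough sketch of that reload/restart step, assuming the systemd unit is named gunicorn as in the unit file above:
sudo systemctl daemon-reload
sudo systemctl restart gunicorn
sudo nginx -t && sudo systemctl restart nginx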
I'm trying to figure out how safe curl -u is to use with a real username and password. Investigating the header of such a request, it seems the user name and password are turned into some kind of hash.
In the example below, it seems jujuba:lalalala is being turned into anVqdWJhOmxhbGFsYWxh.
Is this encryption or compression? Is it safe? How does the recipient decode this data?
curl -u jujuba:lalalala -i -X Get http://localhost:80/api/resource -v
* timeout on name lookup is not supported
* Trying 127.0.0.1...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to localhost (127.0.0.1) port 80 (#0)
* Server auth using Basic with user 'jujuba'
> Get /api/resource HTTP/1.1
> Host: localhost
> Authorization: Basic anVqdWJhOmxhbGFsYWxh
If you run the command:
echo anVqdWJhOmxhbGFsYWxh | base64 -d
You will get jujuba:lalalala, showing that the content is just Base64-encoded, which is the standard for Basic authentication.
You should use HTTPS for any site that requires authentication.
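You can reproduce the encoding direction yourself; the -n flag stops echo from appending a newline before the value is encoded:
echo -n 'jujuba:lalalala' | base64
anVqdWJhOmxhbGFsYWxh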
I am getting this error in my nginx-error.log file:
2014/02/17 03:42:20 [crit] 5455#0: *1 connect() to unix:/tmp/uwsgi.sock failed (13: Permission denied) while connecting to upstream, client: xx.xx.x.xxx, server: localhost, request: "GET /users HTTP/1.1", upstream: "uwsgi://unix:/tmp/uwsgi.sock:", host: "EC2.amazonaws.com"
The browser also shows a 502 Bad Gateway error. The output of a curl request is the same Bad Gateway HTML.
I've tried to fix it by changing the permissions of /tmp/uwsgi.sock to 777. That didn't work. I also added myself to the www-data group (a couple of questions that looked similar suggested that). Also, no dice.
Here is my nginx.conf file:
nginx.conf
worker_processes 1;
worker_rlimit_nofile 8192;
events {
worker_connections 3000;
}
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
I am running a Flask application with Nginx and uWSGI, just to be thorough in my explanation. If anyone has any ideas, I would really appreciate them.
EDIT
I have been asked to provide my uWSGI config file. I never personally wrote my nginx or uWSGI files; I followed the guide here, which sets everything up using ansible-playbook. The nginx.conf file was generated automatically, but there was nothing in /etc/uwsgi except a README file in both the apps-enabled and apps-available folders. Do I need to create my own config file for uWSGI? I was under the impression that Ansible took care of all of those things.
I believe that ansible-playbook figured out my uWSGI configuration, since when I run this command:
uwsgi -s /tmp/uwsgi.sock -w my_app:app
it starts up and outputs this:
*** Starting uWSGI 2.0.1 (64bit) on [Mon Feb 17 20:03:08 2014] ***
compiled with version: 4.7.3 on 10 February 2014 18:26:16
os: Linux-3.11.0-15-generic #25-Ubuntu SMP Thu Jan 30 17:22:01 UTC 2014
nodename: ip-10-9-xxx-xxx
machine: x86_64
clock source: unix
detected number of CPU cores: 1
current working directory: /home/username/Project
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 4548
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
Python version: 2.7.5+ (default, Sep 19 2013, 13:52:09) [GCC 4.8.1]
*** Python threads support is disabled. You can enable it with --enable-threads ***
Python main interpreter initialized at 0x1f60260
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72760 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
WSGI app 0 (mountpoint='') ready in 3 seconds on interpreter 0x1f60260 pid: 26790 (default app)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 26790, cores: 1)
The permission issue occurs because uWSGI resets the ownership of /tmp/uwsgi.sock to the user running uWSGI, and its permissions to 755, every time it starts.
The correct way to solve the problem is to make uWSGI change the ownership and/or permissions of /tmp/uwsgi.sock so that nginx can write to this socket. There are three possible solutions.
Run uwsgi as the www-data user so that this user owns the socket file created by it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --uid www-data --gid www-data
Change the ownership of the socket file so that www-data owns it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --chown-socket=www-data:www-data
Change the permissions of the socket file, so that www-data can write to it.
uwsgi -s /tmp/uwsgi.sock -w my_app:app --chmod-socket=666
The first two commands need to be run as root; the third does not.
The first command leaves uWSGI running as the www-data user. The second and third commands leave uWSGI running as the actual user that ran the command.
The first and second commands allow only the www-data user to write to the socket. The third command allows any user to write to the socket.
I prefer the first approach because it does not leave uWSGI running as root and does not make the socket file world-writable.
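If you drive uWSGI from an ini file rather than the command line, a rough equivalent of the first approach looks like this (the module and socket values simply mirror the command above):
[uwsgi]
socket = /tmp/uwsgi.sock
module = my_app:app
uid = www-data
gid = www-data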
While the accepted solution is true, SELinux might also be blocking the access. If you did set the permissions correctly and still get permission denied messages, try:
sudo setenforce Permissive
If it works, then SELinux was at fault - or rather, was working as expected! To add the permissions nginx needs, do:
# to see what permissions are needed.
sudo grep nginx /var/log/audit/audit.log | audit2allow
# to create a nginx.pp policy file
sudo grep nginx /var/log/audit/audit.log | audit2allow -M nginx
# to apply the new policy
sudo semodule -i nginx.pp
After that reset the SELinux Policy to Enforcing with:
sudo setenforce Enforcing
Anyone who lands here from the Googles and is trying to run Flask on AWS using the default Ubuntu image after installing nginx and still can't figure out what the problem is:
Nginx runs as user "www-data" by default, but the most common Flask WSGI tutorial from DigitalOcean has you use the logged-in user for the systemd service file. Change the user that Nginx runs as from "www-data" (the default) to "ubuntu" in /etc/nginx/nginx.conf if your Flask/WSGI user is "ubuntu", and everything will start working. You can do this with one line in a script:
sudo sed -i 's/user www-data;/user ubuntu;/' /etc/nginx/nginx.conf
Trying to make Flask and uWSGI run as www-data did not work off the bat, but making Nginx run as ubuntu worked just fine, since all I'm running on this instance is Flask anyhow.
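After the edit, a quick sanity check and restart (assuming a systemd-based Ubuntu image):
sudo nginx -t
sudo systemctl restart nginx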
You have to set these permissions (chmod/chown) in the uWSGI configuration.
The relevant options are chmod-socket and chown-socket:
http://uwsgi-docs.readthedocs.org/en/latest/Options.html#chmod-socket
http://uwsgi-docs.readthedocs.org/en/latest/Options.html#chown-socket
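For example, in an ini-style uWSGI config the two options look roughly like this; the socket path mirrors the question, and you would normally pick one of the two:
[uwsgi]
socket = /tmp/uwsgi.sock
chown-socket = www-data:www-data
# or, more permissively:
# chmod-socket = 666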
Nginx connect to .sock failed (13:Permission denied) - 502 bad gateway
Change the name of the user on the first line of the /etc/nginx/nginx.conf file.
The default user is www-data; change it to root or your username.
I know it's late, but it might help others. I suggest following Running flask with virtualenv, uwsgi, and nginx, which is very simple and sweet documentation.
You must activate your environment if you run your project in a virtualenv.
Here is yolo.py:
from config import application

if __name__ == "__main__":
    application.run(host='127.0.0.1')
Then create a uwsgi.sock file in the /tmp/ directory and leave it blank.
As @susanpal's answer says, the permission issue occurs because uWSGI resets the ownership and permissions of /tmp/uwsgi.sock every time it starts, and that is correct.
So you have to give permissions to the sock file whenever uWSGI starts.
So now follow the command below:
uwsgi -s /tmp/uwsgi.sock -w yolo:application -H /var/www/yolo/env --chmod-socket=666
A slightly different command from @susanpal's.
And for a persistent connection, simply add "&" at the end of the command:
uwsgi -s /tmp/uwsgi.sock -w yolo:application -H /var/www/yolo/env --chmod-socket=666 &
In my case, changing some PHP permissions did the trick:
sudo chown user:group -R /run/php
I hope this helps someone.
You should post both the nginx and uWSGI configuration files for your application (the ones in /etc/nginx/sites-enabled/ and /etc/uwsgi/, or wherever you put them).
Typically check that you have a line similar to the following one in your nginx app configuration:
uwsgi_pass unix:///tmp/uwsgi.sock;
and the same socket name in your uwsgi config file:
socket=/tmp/uwsgi.sock
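For illustration, here is roughly how the two sides usually line up; the file paths and module name below are assumptions, not taken from the question:
# nginx side (inside your server block), e.g. /etc/nginx/sites-enabled/myapp
location / {
    include uwsgi_params;
    uwsgi_pass unix:///tmp/uwsgi.sock;
}
# uWSGI side, e.g. /etc/uwsgi/apps-enabled/myapp.ini
[uwsgi]
socket = /tmp/uwsgi.sock
module = my_app:app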
I ssh to the dev box where I am supposed to set up Redmine. Or rather, downgrade Redmine: in January I was asked to upgrade Redmine from 1.2 to 2.2, but the plugins we wanted did not work with 2.2. So now I'm being asked to set up Redmine 1.3.3. We figure we can upgrade from 1.2 to 1.3.3.
In January I had trouble getting Passenger to work with Nginx. This was on a CentOS box. I tried several installs of Nginx. I'm left with different error logs:
This:
whereis nginx.conf
gives me:
nginx: /etc/nginx
but I don't think that is in use.
This:
find / -name error.log
gives me:
/opt/nginx/logs/error.log
/var/log/nginx/error.log
When I tried to start Passenger again, I was told something was already running on port 80. But if I did "passenger stop", I was told that Passenger was not running.
So I did:
passenger start -p 81
If I run netstat I see something is listening on port 81:
netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 localhost:81 localhost:42967 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:51874 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62993 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62905 ESTABLISHED
tcp 0 0 10.0.1.253:ssh 10.0.1.91:50886 ESTABLISHED
tcp 0 0 localhost:81 localhost:42966 TIME_WAIT
tcp 0 0 10.0.1.253:ssh 10.0.1.91:62992 ESTABLISHED
tcp 0 0 localhost:42967 localhost:81 ESTABLISHED
but if I point my browser here:
http: // 10.0.1.253:81 /
(Stack Overflow does not want me to publish the IP address, so I have to malform it. There is no harm here, as it is an internal IP that no one outside my company could reach.)
In Google Chrome all I get is "Oops! Google Chrome could not connect to 10.0.1.253:81".
I started Phusion Passenger at the command line, and the output is verbose, so I expect to see any error messages in the terminal. But I'm not seeing anything. It's as if my browser request is not being heard, even though netstat seems to indicate the app is listening on port 81.
A lot of other things could be wrong with this app (I still need to reverse migrate the database schema) but I'm not seeing any of the error messages that I expect to see. Actually, I'm not seeing any error messages, which is very odd.
UPDATE:
If I do this:
ps aux | grep nginx
I get:
root 20643 0.0 0.0 103244 832 pts/8 S+ 17:17 0:00 grep nginx
root 23968 0.0 0.0 29920 740 ? Ss Feb13 0:00 nginx: master process /var/lib/passenger-standalone/3.0.19-x86_64-ruby1.9.3-linux-gcc4.4.6-1002/nginx-1.2.6/sbin/nginx -c /tmp/passenger-standalone.23917/config -p /tmp/passenger-standalone.23917/
nobody 23969 0.0 0.0 30588 2276 ? S Feb13 0:34 nginx: worker process
I tried to cat the file /tmp/passenger-standalone.23917/config but it does not seem to exist.
I also killed every session of "screen" and every terminal window where Phusion Passenger might be running, but looking at ps aux, clearly something is still running.
Could Nginx still be running even if Passenger is killed?
This:
ps aux | grep phusion
brings back nothing
and this:
ps aux | grep passenger
only brings back the line with nginx.
If I do this:
service nginx stop
I get:
nginx: unrecognized service
and:
service nginx start
gives me:
nginx: unrecognized service
This is a CentOS machine, so if I had Nginx installed normally, this would work.
The answer is here - Issue Uploading Files from Rails app hosted on Elastic Beanstalk
You probably have /etc/cron.daily/tmpwatch removing the /tmp/passenger-standalone* files every day, and causing you all this grief.
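A rough way to confirm and work around this; the -X/--exclude-pattern flag assumes a tmpwatch version that supports it, so check your man page:
ls -d /tmp/passenger-standalone.*
# if the directory is gone, tmpwatch has cleaned it up; add an exclusion to /etc/cron.daily/tmpwatch, e.g.:
#   /usr/sbin/tmpwatch ... -X '/tmp/passenger-standalone*' 10d /tmp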