I am trying to set up a really simple uWSGI + nginx project.
I ran into the following problems:
wsgi.input appears to be empty (which I found out while trying to read POST variables).
curl http://localhost:8080/parsings works fine, but curl http://localhost:8080/parsings --header "Content-Length:1" hangs and then returns curl: (52) Empty reply from server
Nothing appears in nginx's error.log, and uWSGI does not log anything either. It looks as if the request never reaches the server.
Here are the configs
nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
proxy_buffers 8 32k;
proxy_buffer_size 64k;
proxy_connect_timeout 10;
proxy_send_timeout 15;
proxy_read_timeout 20;
}
sites-enabled/mysite
server {
listen 127.0.0.1:8080;
server_name myproject.local;
client_max_body_size 2M;
location / {
include uwsgi_params;
uwsgi_read_timeout 300;
uwsgi_pass unix:///tmp/my_project.sock;
}
}
uwsgi.ini
[uwsgi]
module = wsgi:application
master = true
processes = 5
static-map = /static/=/home/breln/projects/myproject/static
socket = /tmp/my_project.sock
chmod-socket = 777
chown-socket = www-data
vacuum = true
die-on-term = true
Nginx is 1.11.8 (the same problem appeared with 1.10.2); uWSGI is 2.0.14.
Any hints appreciated.
Probably related to uWSGI not buffering/reading the POST data from nginx unless your WSGI app actually consumes the POST data in the request handler; see:
Nginx connection reset, response from uWsgi lost
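For what it's worth, a minimal sketch of a WSGI handler that does consume wsgi.input, which is the situation that comment describes (the function body and names are mine, not taken from the question):

# sketch of a wsgi.py that reads the request body from wsgi.input
def application(environ, start_response):
    try:
        length = int(environ.get('CONTENT_LENGTH') or 0)
    except ValueError:
        length = 0
    body = environ['wsgi.input'].read(length) if length > 0 else b''
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('received %d bytes\n' % len(body)).encode()]

If the app reads the body like this, adding post-buffering = 4096 to uwsgi.ini (the 4096 is an arbitrary example value) tells uWSGI to buffer the request body itself before invoking the app, which is often suggested as a workaround for this kind of hang.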
I have inherited an nginx instance. I am barely more than a newbie when it comes to nginx.
When I navigate to the IP address of the box, http://192.168.1.10, I get back a custom page that the previous dev set up somehow.
The nginx.conf looks like this:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
The conf.d directory is empty.
The sites-enabled directory contains a symlink default, which points to default in sites-available. Here is that file:
server {
location / {
proxy_pass http://localhost:3002;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
listen 80;
listen [::]:80;
}
I have looked in /var/www/html and found the default index.nginx-debian.html, but that is not the page being served.
So my question is: where is the page I'm getting back being served from? I'm running an Ubuntu server distro.
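Since location / proxies everything to http://localhost:3002, the page you see is generated by whatever process is listening on that port, not served from a file under /var/www. One way to check, assuming ss and curl are available on the box:

# which process owns port 3002
sudo ss -ltnp | grep ':3002'
# fetch the page straight from the backend, bypassing nginx
curl -v http://localhost:3002/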
I'm trying to set up nginx as a reverse proxy to an application.
When I send the same request over plain HTTP it works fine, but over HTTPS I get a 400 error.
I think I've done everything right and I still have the 400 error. Any help would be really appreciated.
My nginx configuration file:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
large_client_header_buffers 4 16k;
client_max_body_size 10M;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log debug;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
My site configuration:
server {
listen 80;
server_name example.com;
location /eai {
proxy_pass http://192.168.44.128:8000;
}
}
server {
listen 443 ssl;
ssl_certificate /etc/nginx/certificates/myssl.crt;
ssl_certificate_key /etc/nginx/certificates/myssl.key;
server_name example.com;
location /eai {
proxy_set_header Host $host;
proxy_http_version 1.1;
proxy_pass http://192.168.44.128:8000;
}
}
My Python code to call the application behind the proxy:
import requests
url = 'https://example.com/eai/request/import'
file_list = [
('file', ('test.csv', open('test.csv', 'rb'), 'text/html')),
]
r = requests.post(url, files=file_list, proxies={"https":"https://192.168.44.241","http":"http://192.168.44.241"}, verify=False)
The info line in error.log:
client sent invalid request while reading client request line, client: 192.168.44.1, server: example.com, request: "CONNECT example.com:443 HTTP/1.0"
Thanks in advance for any help
Regards
Here is your problem:
proxies={"https":"https://192.168.44.241","http":"http://192.168.44.241"}
Your client connection is not actually going through a forward proxy, so the proxies argument should not be present at all. Because of it, requests issues a CONNECT request (exactly the one shown in the error log), which nginx rejects with a 400. You are just making a normal HTTPS request to a normal HTTPS server.
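A sketch of the same call with the proxies argument dropped (everything else kept as in the question):

import requests

url = 'https://example.com/eai/request/import'
file_list = [
    ('file', ('test.csv', open('test.csv', 'rb'), 'text/html')),
]
# no proxies= here: talk to the nginx server directly
r = requests.post(url, files=file_list, verify=False)
print(r.status_code, r.text)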
Using Ubuntu Server 20.04, nginx/1.17.10 (Ubuntu), on localhost:
when I shut down my .NET application, "502 Bad Gateway" followed by "nginx" appears (the version is hidden successfully). I did this to hide nginx:
sudo apt-get update -y
sudo apt-get install -y nginx-extras
sudo nano /etc/nginx/nginx.conf
http {
more_set_headers "Server: Your_New_Server_Name";
server_tokens off; }
sudo service nginx restart
Reference
I even tried:
more_clear_headers Server;
more_set_input_headers -r 'Server: howdy';
but I still have the problem. Everything on the web suggests the same approach, but it is not working for me.
My current nginx.conf:
user ehsan1362;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
server_tokens off;
# more_clear_headers Server;
#more_set_input_headers -r 'Server: howdy';
add_header X-Frame-Options "SAMEORIGIN";
server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/proxy.conf;
include /etc/nginx/mime.types;
default_type application/octet-stream;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
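Note that server_tokens off and more_set_headers only touch the response headers; the word nginx shown under "502 Bad Gateway" comes from the body of the built-in error page. A minimal sketch of replacing that page, added to the server block in sites-enabled that proxies to the .NET app (the path and file name below are assumptions):

error_page 502 /custom_502.html;
location = /custom_502.html {
    root /var/www/errors;   # put your own plain 502 page here
    internal;
}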
After updating my Let's Encrypt certificate for my domain, I can't start nginx. I get this message:
nginx: [emerg] zero size shared memory zone "one".
I did not find any solution; has anyone solved this?
I saw that this memory is used by my 4 workers; I added a proxy cache, but my server still does not restart.
Thank you
user nginx;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1.2;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
#proxy
proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=app_cache:10m max_size=5g inactive=45m use_temp_path=off;
}
Eventually I found this line in one of my config files and commented it out:
# limit_req zone=one burst=1 nodelay;
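For the record, the zero size shared memory zone error shows up when limit_req references a zone that was never declared with limit_req_zone. If the rate limit is actually wanted, declaring the zone in the http block also fixes it (the rate and size here are assumptions, not values from the question):

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;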
Thank you Shawn
Jean
Hi, I am trying to set up nginx to work as a reverse proxy to an application that I am running on a Tomcat server. When I access my application over HTTP it works fine, but when I access it over HTTPS I get a 502 error.
Here is my nginx config file:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log notice;
gzip on;
gzip_disable "msie6";
rewrite_log on;
server{
ssl on;
listen 80;
listen 443 ssl;
server_name myapp.local;
ssl_certificate max.local.crt;
ssl_certificate_key server.key;
#ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
#ssl_ciphers RC3:HIGH:!aNULL:!MD5;
#ssl_prefer_server_ciphers on;
ssl_session_timeout 5m;
keepalive_timeout 60;
error_log /var/log/nginx/hybris.log;
rewrite_log on;
set $my_port 9001;
set $my_protocol "http";
if ($scheme = https){
set $myport 9002;
set $my_protocol "https";
}
location / {
if ( $http_user_agent ~ "Chrome"){
#just a proof of concept
return 301 http://$host/AE/en;
}
if ( $http_user_agent ~ "Firefox"){
#just a proof of concept
return 301 http://google.com/;
}
}
location /AE/en {
proxy_pass $scheme://10.0.2.2:$my_port;
proxy_set_header Host $host;
}
location ~(?:/..)?/_ui/(.*) {
proxy_pass http://10.0.2.2:9001/_ui/$1;
proxy_set_header Host $host;
}
}
}
When using HTTPS you are changing both the port and the scheme for connecting to the Tomcat server, which does not really make sense. You would normally only use HTTPS to a backend server if it is in another datacenter, not within a local network. It should work fine if you remove the $my_port and $my_protocol definitions and change your /AE/en location block to:
location /AE/en {
proxy_pass http://10.0.2.2:9001;
proxy_set_header Host $host;
}
I think you need to create two server sections: one listening on port 80 and the other listening on port 443, which is for HTTPS.
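A minimal sketch of that split, reusing the certificate and backend details from the question (the HTTP-to-HTTPS redirect and everything else are assumptions):

server {
    listen 80;
    server_name myapp.local;
    # send plain-HTTP traffic to the HTTPS server below
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name myapp.local;
    ssl_certificate max.local.crt;
    ssl_certificate_key server.key;

    location /AE/en {
        proxy_pass http://10.0.2.2:9001;
        proxy_set_header Host $host;
    }
}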