I've spent many days trying to set up nginx + Socket.IO + Flask. After fixing many different problems, I've hit one I can't even find on Google (maybe I'm just too dumb, but still :) ).
After starting all the services (uWSGI + nginx), my app becomes available and everything looks OK. Socket.IO completes the handshake and gets a 200 response. Still OK. After that, the long-polling (XHR) requests start returning 504 errors. In the nginx error log I see that a ping was sent but no pong was received, and from then on every request gets a 504.
Please help, I'm out of ideas about where I've gone wrong.
My settings:
/etc/nginx/sites-available/myproject
server {
    listen 80;
    server_name mydomen.ru;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }

    location /socket.io/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        include uwsgi_params;
        uwsgi_pass unix:/home/myproject/ftp/files/myproject.sock;
    }
}
/etc/systemd/system/myproject.service
[Unit]
Description=myproject description
After=network.target
[Service]
User=myuser
Group=www-data
WorkingDirectory=/home/myproject/ftp/files
Environment="PATH=/home/myproject/ftp/files/venv/bin"
ExecStart=/home/myproject/ftp/files/venv/bin/uwsgi --ini /home/myproject/ftp/files/uwsgi.ini
[Install]
WantedBy=multi-user.target
/home/myproject/ftp/files/uwsgi.ini
[uwsgi]
module = my_module:application
master = true
gevent = 500
buffer-size = 32768
http-websockets = true
socket = myproject.sock
chmod-socket = 660
vacuum = true
die-on-term = true
I have a very weird problem with socket.io and I was hoping someone can help me out.
For some reason, a few clients cannot connect to the server no matter what when I am using https.
I am getting the following error code: ERR_CRYPTO_OPERATION_FAILED (see the detailed log below)
Again, most of the time the connection is perfectly fine, only some (random) clients seem to have this problem.
I have created a super simple server.js and client.js to make it easy to test.
I am using socket.io#2.4.1, and socket.io-client#2.4.0
Unfortunately version 3.x.x is not an option.
The OS is Ubuntu 18.04, both on the server, and the client side.
Nginx:
server {
listen 80;
server_name example.domain.com;
return 301 https://example.domain.com$request_uri;
}
server {
listen 443 ssl http2;
server_name example.domain.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/cert.key;
ssl_protocols TLSv1.2 TLSv1.3;
location /
{
proxy_pass http://127.0.0.1:8000;
include /etc/nginx/proxy_params;
}
location /socket.io {
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 30s;
proxy_read_timeout 30s;
proxy_send_timeout 30s;
proxy_pass http://127.0.0.1:8000/socket.io;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
internal;
}
}
client.js:
// socket.io-client#2.4.0; `token` is defined elsewhere in the original file (not shown)
const io = require('socket.io-client');

const client = io.connect("https://example.domain.com", {
    origins: '*:*',
    transportOptions: {
        polling: {
            extraHeaders: {
                'Authorization': token
            }
        }
    },
});
I tried adding secure: true, reconnect: true, and rejectUnauthorized: false, but it made no difference.
Also, I tested it with and without the transportOptions.
server.js:
// socket.io#2.4.1 with express
const express = require('express');
const socket = require('socket.io');

const port = 5000;
const app = express();
const server = app.listen(port, () => {
    console.log(`Listening on port: ${port}`);
});
const io = socket(server);

io.on("connection", (socket) => {
    console.log("Client connected", socket.id);
});
Of course, when I remove the redirect in nginx and use plain old http to connect, then everything is fine.
When I run DEBUG=* node client.js, I get the following:
socket.io-client:url parse https://example.domain.com/ +0ms
socket.io-client new io instance for https://example.domain.com/ +0ms
socket.io-client:manager readyState closed +0ms
socket.io-client:manager opening https://example.domain.com/ +1ms
engine.io-client:socket creating transport "polling" +0ms
engine.io-client:polling polling +0ms
engine.io-client:polling-xhr xhr poll +0ms
engine.io-client:polling-xhr xhr open GET: https://example.domain.com/socket.io/?EIO=3&transport=polling&t=NVowV1t&b64=1 +2ms
engine.io-client:polling-xhr xhr data null +2ms
engine.io-client:socket setting transport polling +61ms
socket.io-client:manager connect attempt will timeout after 20000 +66ms
socket.io-client:manager readyState opening +3ms
engine.io-client:socket socket error {"type":"TransportError","description":{"code":"ERR_CRYPTO_OPERATION_FAILED"}} +12ms
socket.io-client:manager connect_error +9ms
socket.io-client:manager cleanup +1ms
socket.io-client:manager will wait 1459ms before reconnect attempt +3ms
engine.io-client:socket socket close with reason: "transport error" +6ms
engine.io-client:polling transport not open - deferring close +74ms
socket.io-client:manager attempting reconnect +1s
...
Searching for the ERR_CRYPTO_OPERATION_FAILED error only leads me to the Node.js errors page,
which has only the following description:
Added in: v15.0.0
A crypto operation failed for an otherwise unspecified reason.
I am using Let's Encrypt certificate.
I don't get it. If it is an SSL issue, why am I getting this error only for a few clients?
Maybe I am missing something in nginx?
Any help is much appreciated.
I've seen a similar error with node-apn. My solution was to downgrade to Node.js v14. Maybe give that a try?
Two steps:
1. The Node version must be 14.x.
2. Add rejectUnauthorized: false to the options when you connect.
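For example, a minimal sketch of that client-side change, assuming socket.io-client 2.x on Node 14 (the URL is the one from the question; the event handlers are just illustrative):
const io = require('socket.io-client');

const client = io.connect('https://example.domain.com', {
    // Skip TLS certificate verification in the Node client.
    // Only appropriate for testing or certificates you control.
    rejectUnauthorized: false
});

client.on('connect', () => console.log('connected', client.id));
client.on('connect_error', (err) => console.error('connect_error', err));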
I am having a problem with my Nginx configuration.
I have an Nginx server (A) that adds custom headers and then proxy_passes to another server (B), which then proxy_passes to my Flask app (C) that reads the headers. If I go from A -> C, the Flask app can read the headers that are set, but if I go through B (A -> B -> C), the headers seem to be removed.
Config
events {
worker_connections 512;
}
http {
# Server B
server {
listen 127.0.0.1:5001;
server_name 127.0.0.1;
location / {
proxy_pass http://127.0.0.1:5000;
}
}
# Server A
server {
listen 4999;
server_name domain.com;
location / {
proxy_pass http://127.0.0.1:5001;
proxy_set_header X-Forwarded-User 'username';
}
}
}
Flask app running on 127.0.0.1:5000
If I change the server A config to proxy_pass http://127.0.0.1:5000, then the Flask app can see X-Forwarded-User, but if I go through server B the headers are "lost".
I am not sure what I am doing wrong. Any suggestions?
Thanks
I cannot reproduce the issue. Sending the custom header X-custom-header: custom, this is what I get in my netcat server:
nc -l -vvv -p 5000
Listening on [0.0.0.0] (family 0, port 5000)
Connection from localhost 41368 received!
GET / HTTP/1.0
Host: 127.0.0.1:5000
Connection: close
X-Forwarded-User: username
User-Agent: curl/7.58.0
Accept: */*
X-custom-header: custom
(See? The X-custom-header is on the last line.)
That is what I get when I run this curl command:
curl -H "X-custom-header: custom" http://127.0.0.1:4999/
against an nginx server running this exact config:
events {
worker_connections 512;
}
http {
# Server B
server {
listen 127.0.0.1:5001;
server_name 127.0.0.1;
location / {
proxy_pass http://127.0.0.1:5000;
}
}
# Server A
server {
listen 4999;
server_name domain.com;
location / {
proxy_pass http://127.0.0.1:5001;
proxy_set_header X-Forwarded-User 'username';
}
}
}
Thus I can only assume that the problem is in the part of your config that you aren't showing us. (You said it yourself: it's not the real config you're showing us but a replica; specifically, a replica that doesn't show the problem.)
So I have voted to close this question as "cannot reproduce" - at least I can't reproduce it.
Looking for some insight into a problem I have been unable to solve, as I am inexperienced with server workings.
Main question: how can I see more detailed error information?
I have a web application that allows a user to upload a CSV file; this CSV is altered and returned to the user as a download. This web app worked just fine in my localhost development environment, but as I try to use it on a separate server, I have run into an issue when the file is uploaded. I am getting a 500 Internal Server Error (111 connection refused).
Since I am inexperienced, I don't know how to diagnose this error beyond checking error logs at /var/log/myapp-error.log
2018/10/04 09:24:53 [error] 24401#0: *286 connect() failed (111: Connection refused) while connecting to upstream, client: 128.172.245.174, server: _, request: "GET / HTTP/1.1", upstream: "http://[::1]:8000/", host: "ip addr here"
I would like to know how I can learn more about what is causing this error. I have put Flask into debug mode but can't get a stack trace from that either. I will put some of the Flask app and nginx configuration below in case that is helpful.
# Excerpt - app setup and helpers (allowed_file, preprocess_table, classify, UPLOAD_FOLDER) are defined elsewhere
import os
from os.path import dirname, realpath
from flask import flash, redirect, render_template, request, send_from_directory
from werkzeug.utils import secure_filename

@app.route('/<path:filename>')
def get_download(filename):
    return send_from_directory(UPLOAD_FOLDER, filename, as_attachment=True)

@app.route('/', methods=['GET', 'POST'])
@app.route('/index', methods=['GET', 'POST'])
def index():
    path = os.path.join(dirname(realpath(__file__)), 'uploads')
    uploaded = ''
    if request.method == 'POST':
        if 'file' not in request.files:
            flash('no file part')
            return redirect(request.url)
        file = request.files['file']
        if file.filename == '':
            flash('no selected file')
            return redirect(request.url)
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
            uploaded = str(path + '/' + filename)
            df = preprocess_table(uploaded)
            fp = classify(df, filename)
            print(fp)
            head, tail = os.path.split(fp)
            print(tail)
            get_download(tail)
            return send_from_directory(UPLOAD_FOLDER, tail, as_attachment=True)
    return render_template('index.html')
And here are my nginx settings.
server {
#listen on port 80 http
listen 80;
server_name _;
location / {
#redirect requests to same URL but https
return 301 https://$host$request_uri;
}
}
server {
#listen on port 443 https
listen 443 ssl;
server_name _;
#location of the self sign ssl cert
ssl_certificate /path/to/certs/cert.pem;
ssl_certificate_key /path/to/certs/key.pem;
#write access and error logs to /var/log
access_log /var/log/myapp_access.log;
error_log /var/log/myapp_error.log;
location / {
#forward application requests to the gunicorn server
proxy_pass http://localhost:8000;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location /static {
#handle static files directly, without forwarding to the application
alias /path/to/static;
expires 30d;
}
}
Appreciate any insight/advice. Thank you!
(If needed, please see my last question for some more background info.)
I'm developing an app that uses a decoupled front- and backend:
The backend is a Rails app (served on localhost:3000) that primarily provides a REST API.
The frontend is an AngularJS app, which I'm building with Gulp and serving locally (using BrowserSync) on localhost:3001.
To get the two ends to talk to each other, while honoring the same-origin policy, I configured nginx to act as a proxy between the two, available on localhost:3002. Here's my nginx.conf:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 3002;
root /;
# Rails
location ~ \.(json)$ {
proxy_pass http://localhost:3000;
}
# AngularJS
location / {
proxy_pass http://localhost:3001;
}
}
}
Basically, any requests for .json files, I'm sending to the Rails server, and any other requests (e.g., for static assets), I'm sending to the BrowserSync server.
The BrowserSync task from my gulpfile.coffee:
gulp.task 'browser-sync', ->
browserSync
server:
baseDir: './dist'
directory: true
port: 3001
browser: 'google chrome'
startPath: './index.html#/foo'
This all basically works, but with a couple of caveats that I'm trying to solve:
When I run the gulp task, based on the configuration above, BrowserSync loads a Chrome tab at http://localhost:3001/index.html#/foo. Since I'm using the nginx proxy, though, I need the port to be 3002. Is there a way to tell BrowserSync, "run on port 3001, but start on port 3002"? I tried using an absolute path for startPath, but it only expects a relative path.
I get a (seemingly benign) JavaScript error in the console every time BrowserSync starts: WebSocket connection to 'ws://localhost:3002/browser-sync/socket.io/?EIO=3&transport=websocket&sid=m-JFr6algNjpVre3AACY' failed: Error during WebSocket handshake: Unexpected response code: 400. Not sure what this means exactly, but my assumption is that BrowserSync is somehow confused by the nginx proxy.
How can I fix these issues to get this running seamlessly?
Thanks for any input!
To get more control over how the page is opened, use opn instead of BrowserSync's built-in mechanism. Something like this (in JS - sorry, my CoffeeScript is a bit rusty):
browserSync({
server: {
// ...
},
open: false,
port: 3001
}, function (err, bs) {
// bs.options.urls.local contains the original URL, so
// replace the port with the correct one:
var url = bs.options.urls.local.replace(':3001', ':3002');
require('opn')(url);
console.log('Started browserSync on ' + url);
});
I'm unfamiliar with Nginx, but according to this page, the solution to the second problem might look something like this:
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# ...
# BrowserSync websocket
location /browser-sync/socket.io/ {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
I only succeeded by appending /browser-sync/socket.io to the proxy_pass URL.
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# ...
# BrowserSync websocket
location /browser-sync/socket.io/ {
proxy_pass http://localhost:3001/browser-sync/socket.io/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
}
You can also do this from the gulp/browsersync side very simply by using its proxy option:
gulp.task('browser-sync', function() {
browserSync({
...
proxy: 'localhost:3002'
});
});
This means your browser connects to BrowserSync directly, as it normally would via gulp, except that BrowserSync now proxies nginx. As long as your front end isn't hard-coding hosts and ports in URLs, requests to Rails go through the proxy and share the same origin, so you can still POST and so on. This may be preferable for some, since the change lives in the development part of your code (gulp + BrowserSync) rather than in a conditionalized or modified nginx config that also runs in production.
Setup for BrowserSync to work with a Python (Django) app that runs on uWSGI, via websocket. The Django app is prefixed with /app so that generated URLs look like http://example.com/app/admin/.
server {
listen 80;
server_name example.com;
charset utf-8;
root /var/www/example/htdocs/static;
index index.html index.htm;
try_files $uri $uri/ /index.html?$args;
location /app {
## uWSGI setup
include /etc/nginx/uwsgi_params;
uwsgi_pass unix:///var/run/example/uwsgi.sock;
uwsgi_param SCRIPT_NAME /app;
uwsgi_modifier1 30;
}
location /media {
alias /var/www/example/htdocs/storage;
}
location /static {
alias /var/www/example/htdocs/static;
}
}
I need to keep the connections between nginx and my upstream Node.js servers alive.
I just compiled and installed nginx 1.2.0.
My configuration file:
upstream backend {
ip_hash;
server dev:3001;
server dev:3002;
server dev:3003;
server dev:3004;
keepalive 128;
}
server {
listen 9000;
server_name dev;
location / {
proxy_pass http://backend;
error_page 404 = 404.png;
}
}
My programs (dev:3001-3004) detect that the connection is closed by nginx after each response.
The documentation states that for HTTP keepalive to upstream servers, you should also set proxy_http_version 1.1 and clear the Connection header with proxy_set_header Connection "".
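A minimal sketch of how those directives might fit into the config above (based on the nginx upstream keepalive documentation; trimmed to the relevant parts and untested):
upstream backend {
    ip_hash;
    server dev:3001;
    server dev:3002;
    # Keep up to 128 idle connections per worker open to the upstream servers
    keepalive 128;
}

server {
    listen 9000;
    server_name dev;

    location / {
        proxy_pass http://backend;
        # Required for upstream keepalive: HTTP/1.1 and an empty Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}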