I have already looked at the answers to "RoR 5.0.0 ActionCable wss WebSocket handshake: Unexpected response code: 301", but they were not applicable to my case.
I use nginx-proxy as a front for several web servers running in Docker containers, with the nginx config template from https://github.com/jwilder/nginx-proxy
Inside my Docker container I have another nginx with the following config:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server my-websocket-docker-container:8080;
}

server {
    root /src/html;

    location /websocket/ {
        resolver 127.0.0.11 ipv6=off;
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    location / {
        # try to serve file directly, fallback to index.php
        try_files $uri /index.php$is_args$args;
    }

    ...
}
When trying to connect to wss://example.com/websocket I get the aforementioned error about unexpected response code 301. When I curl the WebSocket URL manually I can see the nginx response telling me "301 Moved Permanently". But why? Where is this coming from?
Can anybody help? Thanks!
I was facing a similar issue today. I was using the "classic" WebSocket API and NGINX to test the connection between some services. I came up with the following solution:
WebSocket instance creation:
const websocket = new WebSocket('ws://' + window.location.host + '/path');
Nginx config:
location /path {
    proxy_pass http://websocket:port;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
Where:
path is the path to be proxied
websocket is the hostname or IP of the upstream host
port is the port on which the application is served on that host
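As for where the 301 in the original question comes from: nginx itself most likely generates it. When a prefix location ends with a slash and contains proxy_pass, a request for the prefix without the trailing slash (here wss://example.com/websocket hitting location /websocket/) is answered with a permanent redirect to the slashed URI. A hedged sketch of how the container's config could avoid that, assuming the same upstream and the map block from the question:

# Prefix without the trailing slash: matches both /websocket and /websocket/,
# so nginx proxies instead of issuing its automatic 301 redirect.
location /websocket {
    resolver 127.0.0.11 ipv6=off;
    proxy_pass http://websocket;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}

Alternatively, connecting to wss://example.com/websocket/ (with the trailing slash) should avoid the redirect without any config change.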
I have a server that I'm trying to set up. A Flask server needs to run on api.domain.com, while other subdomains point to the same machine. The problem: two of the three subdomains work fine through nginx, but my Flask script tries to bind to port 80 on the same machine and therefore fails. Is there a way I can bind my Flask REST script to port 80 ONLY for the subdomain 'api'?
My current config is:
server {
    server_name api.domain.me;

    location / {
        error_page 404 /404.html;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_max_temp_file_size 0;
        proxy_pass http://127.0.0.1:5050/;
        proxy_cache off;
        proxy_read_timeout 240s;
    }
}
There's a little problem though: nginx likes to turn all POST requests into GET requests. Any ideas?
Thanks!
There is no way to bind two different applications to port 80 at the same time.
I would set up your API like this:
Bind your Flask app to port 8080.
In nginx, configure your subdomain to proxy to the Flask application:
upstream flask_app {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name api.domain.com;

    location / {
        proxy_pass http://flask_app/;
        proxy_set_header Host $host;
    }
}
I actually found the cause after a bit of diagnosis.
server {
    if ($host = api.domain.me) {
        return 301 https://$host;
    }
    # managed by Certbot
had to become:
server {
    if ($host = api.domain.me) {
        return 497 '{"code":"497", "text": "The client has made a HTTP request to a port listening for HTTPS requests"}';
    }
The Certbot-managed block redirects the request to HTTPS, but because of the 301 response code the client repeats the request as a GET, which is why the POSTs were being turned into GETs.
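A side note on that behaviour: a 301 (like a 302) lets clients replay the redirected request as a GET, which is why the POSTs were degraded. If the HTTP-to-HTTPS redirect is still wanted, a 308 Permanent Redirect preserves the method; a hedged sketch of what the Certbot-managed block could look like in that case:

server {
    if ($host = api.domain.me) {
        # 308 keeps the original method and body, so a POST is repeated
        # as a POST against the HTTPS URL instead of becoming a GET.
        return 308 https://$host$request_uri;
    }
    # managed by Certbot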
Issue: I have an nginx reverse proxy installed on an Ubuntu server with a private IP only. The purpose of this reverse proxy is to route incoming requests to various third-party WebSockets and REST APIs. Furthermore, to distribute the load, I have an HTTP load balancer sitting in front of the nginx proxy server.
So this is how it looks technically:
IncomingRequest --> InternalLoadBalancer(Port:80) --> NginxReverseProxyServer(80) --> ThirdPartyAPIs(Port:443) & WebSockets(443)
The problem I have is that nginx does not reverse-proxy correctly to the REST APIs and gives a 502 error, but it does work successfully for the WebSockets.
Below is my /etc/nginx/sites-available/default config file: (No changes done elsewhere)
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 80;

    location /binance-ws/ {
        # Web Socket Connection
        ####################### THIS WORKS FINE
        proxy_pass https://stream.binance.com:9443/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }

    location /binance-api/ {
        # Rest API Connection
        ##################### THIS FAILS WITH 502 ERROR
        proxy_pass https://api.binance.com/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
I have even tried adding https://api.binance.com:443/ but no luck.
The websocket connection works fine:
wscat -c ws://LOADBALANCER-DNS/binance-ws/ws/btcusdt#aggTrade
However, the below one fails:
curl http://LOADBALANCER-DNS/binance-api/api/v3/time
When I check the nginx logs for the 502 error, this is what I see:
[error] 14339#14339: *20 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 10.5.2.187, server: , request: "GET /binance-api/api/v3/time HTTP/1.1", upstream: "https://52.84.150.34:443/api/v3/time", host: "internal-prod-nginx-proxy-xxxxxx.xxxxx.elb.amazonaws.com"
This is the actual REST API call which I am trying to proxy through nginx:
curl https://api.binance.com/api/v3/time
I have gone through many similar posts but am unable to find where I am going wrong. Appreciate your help!
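One thing worth checking with an HTTPS upstream like this: nginx does not send SNI to the upstream by default (proxy_ssl_server_name defaults to off), and the config above forwards the load balancer's hostname as the Host header instead of api.binance.com. A hedged sketch of the directives that commonly matter for a third-party TLS upstream, not verified against this exact setup:

location /binance-api/ {
    proxy_pass https://api.binance.com/;
    proxy_http_version 1.1;
    # Send SNI for the upstream TLS handshake.
    proxy_ssl_server_name on;
    proxy_ssl_name api.binance.com;
    # Let the API see its own hostname rather than the ELB DNS name.
    proxy_set_header Host api.binance.com;
}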
I am trying to make WebSocket requests go through two nginx proxies, as part of a server-side rendering (SSR) setup.
My stack is like:
External World <-> nginx <-> rendora(SSR) <-> nginx <-> daphne <-> django
When I was doing
External World <-> nginx <-> daphne <-> django
WebSocket connections were established just fine. But when I implemented rendora (SSR) and added another nginx proxy, it does not work: the WebSocket connection fails during the handshake. I guess the "Upgrade" requests fail somewhere in my nginx servers. My nginx conf is like below:
server {
    listen 8000;

    location / {
        include proxy_params;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    }
}

server {
    server_name mydomain;
    charset utf-8;

    location /static {
        alias /home/ubuntu/myproject/apps/web-build/static;
    }

    location / {
        include proxy_params;
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    listen 443 ssl; # managed by Certbot
    ...
}
So location / on port 443 sends all requests from the external world to 127.0.0.1:3001, where rendora (the SSR server) is listening. Rendora then forwards them to 127.0.0.1:8000, where the second nginx proxies to the daphne unix socket.
SSR itself works well, except for the WebSocket requests.
But I have no idea why the WebSocket upgrade is not performed when requests have to go through two nginx proxies.
I solved it by making the /graphql WebSocket requests go directly to the daphne server. I still have no idea why the WebSocket connection does not work when I route it through the second nginx proxy.
location /graphql {
    include proxy_params;
    proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location / {
    include proxy_params;
    proxy_pass http://127.0.0.1:3001;
}
I have a server with Ubuntu 16.04, Kestrel, and nginx as a proxy that forwards to localhost, where my app runs. The app is ASP.NET Core 2, and I'm trying to add push notifications using SignalR Core. On localhost everything works well, and on a free Windows host with IIS as well. But when I deploy the app to the Linux server I get this error:
signalr-clientES5-1.0.0-alpha2-final.min.js?v=kyX7znyB8Ce8zvId4sE1UkSsjqo9gtcsZb9yeE7Ha10:1
WebSocket connection to 'ws://devportal.vrweartek.com/chat?id=210fc7b3-e880-4d0e-b2d1-a37a9a982c33' failed: Error during WebSocket handshake: Unexpected response code: 204
But this error occurs only when I request my site from a different machine via the site name; when I request the site from the server itself via localhost:port, everything is fine. So I think there is a problem in nginx. I read that I need to configure it for the WebSockets that SignalR uses to establish the connection, but I didn't succeed. Maybe there is just some dumb mistake?
I was able to solve this by using $http_connection instead of "keep-alive" or "upgrade":
server {
    server_name example.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I did this because SignalR also sends plain POST and GET requests to my hubs, so just doing an Upgrade on the connection in a separate server configuration wasn't enough.
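To spell out what that variable does (an explanatory sketch, not extra required configuration): $http_connection simply echoes back whatever Connection header the client sent.

# For a WebSocket handshake the browser sends "Connection: Upgrade",
# so the proxied request keeps its upgrade semantics.
# For SignalR's plain negotiate/GET/POST calls, the client's own value
# (e.g. "keep-alive") is forwarded instead of forcing "upgrade".
proxy_set_header Connection $http_connection;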
The problem is in the nginx configuration file. If you are using the default settings from the ASP.NET Core deployment guide, then the problem is one of the proxy headers: WebSocket requires the Connection header to be "upgrade".
You have to add a dedicated location for the SignalR hub in the nginx configuration file, such as:
location /api/chat {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
You can read my full blog post:
https://medium.com/@alm.ozdmr/deployment-of-signalr-with-nginx-daf392cf2b93
For SignalR in my case, besides the proxy_set_header settings, there is another critical setting: proxy_buffering off.
So, a full example now looks like this:
http {
    map $http_upgrade $connection_upgrade {
        default Upgrade;
        '' close;
    }

    server {
        server_name some_name;
        listen 80 default_server;
        root /path/to/wwwroot;

        # Configure the SignalR Endpoint
        location /hubroute {
            proxy_pass http://localhost:5000;

            # Configure WebSockets
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_cache_bypass $http_upgrade;

            # Configure ServerSentEvents
            proxy_buffering off;

            # Configure LongPolling
            proxy_read_timeout 100s;

            proxy_set_header Host $host;
        }
    }
}
See reference: Document reverse proxy usage with SignalR
I'm working on a Google Cloud Compute Engine instance. Ubuntu 12.04.
I have a Tornado app installed on the server, listening on port 8888, and my nginx configuration is shown below:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream chat_servers {
    server 127.0.0.1:8888;
}

server {
    listen 80;
    server_name chat.myapp.com;

    access_log /home/ubuntu/logs/nginx_access.log;
    error_log /home/ubuntu/logs/nginx_error.log;

    location /talk/ {
        proxy_set_header X-Real-IP $remote_addr;  # http://wiki.nginx.org/HttpProxyModule
        proxy_set_header Host $host;              # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
        proxy_http_version 1.1;                   # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version

        # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
        proxy_connect_timeout 7d;
        proxy_send_timeout 7d;
        proxy_read_timeout 7d;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_pass http://chat_servers;
    }
}
When I try to connect to ws://chat.myapp.com/talk/etc/ via JavaScript, the Tornado app's open() method on the WebSocketHandler gets called and I can log it on the server successfully, but on the client side the code never enters onopen(), and after some time I get error code 1006, "WebSocket opening handshake timed out".
This app was working fine on an Amazon (AWS) EC2 server with the same configuration, but after moving to Google Cloud the handshake somehow cannot be completed.
Is there any configuration specific to Google Cloud, or any update needed in the nginx file?
I am confused; I spent two days on this but couldn't solve the problem.
The default nginx version on Ubuntu was nginx/1.1.19. I updated it to nginx/1.8.0 and the problem is solved. (nginx only gained WebSocket proxying support in version 1.3.13, so the old 1.1.19 could not forward the Upgrade handshake at all.)