SignalR in ASP.NET Core behind Nginx

I have an Ubuntu 16.04 server running Kestrel, with Nginx as a reverse proxy that forwards requests to localhost, where my app listens. The app is built on ASP.NET Core 2. I'm trying to add push notifications using SignalR Core. On localhost everything works well, and it also works on a free Windows host with IIS. But when I deploy the app to the Linux server I get this error:
signalr-clientES5-1.0.0-alpha2-final.min.js?v=kyX7znyB8Ce8zvId4sE1UkSsjqo9gtcsZb9yeE7Ha10:1
WebSocket connection to 'ws://devportal.vrweartek.com/chat?id=210fc7b3-e880-4d0e-b2d1-a37a9a982c33' failed: Error during WebSocket handshake: Unexpected response code: 204
This error occurs only when I request the site from a different machine via its domain name; when I request it from the server itself via localhost:port, everything is fine. So I think the problem is in Nginx. I read that Nginx needs to be configured for the WebSockets that SignalR uses to establish the connection, but I haven't succeeded. Maybe there is just some dumb mistake?

I was able to solve this by using $http_connection instead of a hard-coded keep-alive or upgrade value:
server {
    server_name example.com;

    location / {
        proxy_pass         http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection $http_connection;
        proxy_set_header   Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I did this because SignalR also sends plain GET and POST requests to my hubs, so forcing the Connection header to "upgrade" in a separate location block wasn't enough. $http_connection forwards whatever Connection header the client actually sent, so WebSocket requests keep "Upgrade" while ordinary requests keep "keep-alive".

The problem is in the Nginx configuration file. If you are using the default settings from the ASP.NET Core deployment guide, the issue is one of the proxy headers: WebSocket requires the Connection header to be set to "upgrade".
You have to add a separate location block for the SignalR hub path in the Nginx configuration file, such as:
location /api/chat {
    proxy_pass         http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection "upgrade";
    proxy_set_header   Host $host;
    proxy_cache_bypass $http_upgrade;
}
You can read my full blog post: https://medium.com/@alm.ozdmr/deployment-of-signalr-with-nginx-daf392cf2b93

In my case, besides the proxy_set_header settings, SignalR needed another critical setting: proxy_buffering off;. A full example now looks like this:
http {
    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        server_name some_name;
        listen 80 default_server;
        root /path/to/wwwroot;

        # Configure the SignalR endpoint
        location /hubroute {
            proxy_pass         http://localhost:5000;
            proxy_http_version 1.1;   # required so the Upgrade header can be proxied

            # Configure WebSockets
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection $connection_upgrade;
            proxy_cache_bypass $http_upgrade;

            # Configure ServerSentEvents
            proxy_buffering off;

            # Configure LongPolling
            proxy_read_timeout 100s;

            proxy_set_header Host $host;
        }
    }
}
See reference: Document reverse proxy usage with SignalR

Related

No messages are being sent in Blazor Server behind Nginx

I have a Blazor Server application that works correctly in all cases except when running behind a reverse proxy (I've only tested with Nginx).
The browser is able to connect to the /_blazor?id=xyz endpoint and successfully sends and receives heartbeat messages, but events such as button clicks don't work at all. There are no errors or warnings in the console or in the application's logs.
The Nginx config is written according to the .NET docs. Here is my setup:
map $http_connection $connection_upgrade {
    "~*Upgrade" $http_connection;
    default     keep-alive;
}

server {
    listen 80;
    server_name service.example.com;

    location / {
        # App server URL
        proxy_pass http://localhost:9000;

        # Configuration for WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache off;
        # WebSockets were implemented after http/1.0
        proxy_http_version 1.1;

        # Configuration for ServerSentEvents
        # proxy_buffering off; # Removed according to docs

        # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
        proxy_read_timeout 100s;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I think the problem is caused by Nginx, as the app works fine on my local machine.
If you need any more information, please comment below and I will provide it. Thanks in advance.
Apparently, Cloudflare was also configured to proxy my requests, and this somehow interfered.
Disabling the Cloudflare proxy and configuring HTTPS with Let's Encrypt solved the issue.
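For reference, a minimal sketch of what the HTTPS server block might look like after setting up Let's Encrypt; the certificate paths follow Certbot's usual defaults and are an assumption, not part of the original answer, and the block reuses the same map and proxy settings shown in the question's configuration:
server {
    listen 443 ssl;
    server_name service.example.com;

    # Assumed default Certbot certificate paths
    ssl_certificate     /etc/letsencrypt/live/service.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/service.example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:9000;

        # Same WebSocket / SignalR settings as the port 80 block above
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache off;
        proxy_read_timeout 100s;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}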

How to forward all requests to .net core app from nginx

I've deployed my .NET Core web application to an Ubuntu 16.04 server with Nginx, and I want to forward all incoming requests to the application. I followed the tutorial from here. My sites-available/default file:
server {
    listen 80;
    server_name example.com *.example.com;

    location / {
        proxy_pass http://localhost:5000;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Everything works fine except for one action where I pass query parameters to resize an image on the fly:
http://example.com/api/files/get/5beffcb65a8e8f1c700a1a22/image?w=400&h=400
In that case I receive a 404 error, and that error is returned by Nginx. I tested it locally with curl by making a direct request to my .NET Core app, and it works fine.
So how do I configure Nginx to pass all requests, with all their parameters, unchanged to my .NET Core application?
Don't set proxy_redirect to off. Refer to this link for an explanation:
https://unix.stackexchange.com/questions/290141/nginx-reverse-proxy-redirection
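For illustration, here is a minimal sketch of the question's location block with that directive removed and everything else left unchanged, so Nginx falls back to its default proxy_redirect behaviour:
location / {
    proxy_pass http://localhost:5000;
    # proxy_redirect off; removed: the default setting rewrites Location headers based on proxy_pass
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}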

NGINX - Connecting the webSocket to the same initial http connection

I have a scenario I'm trying to configure in Nginx where I have a number of processes, each listening on ports 8000, 8001, ... Upon establishing an HTTP connection to one of these ports, the client (in JavaScript) then establishes a WebSocket connection. All the listening processes expose the same /SS WebSocket endpoint. However, if the HTTP connection is initially made to port 8000, the WebSocket connection needs to go to port 8000 too. I have the following Nginx configuration:
upstream backends {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

include /etc/nginx/conf.d/*.conf;

server {
    location / {
        proxy_pass_header Server;
        proxy_set_header  Host $http_host;
        proxy_redirect    off;
        proxy_set_header  X-Real-IP $remote_addr;
        proxy_set_header  X-Scheme $scheme;
        proxy_pass        http://backends;
    }

    location /SS {
        proxy_set_header   Host $host;
        proxy_pass         http://backends;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection "upgrade";
    }
}
However, this doesn't route the WebSocket to the same backend as the initial connection.
One way I thought of doing this is to set up a different endpoint for each process and pass it to the client in the initial HTTP response; the client would then use that endpoint for the WebSocket connection. This would, however, require me to configure all the different endpoints in Nginx. Is there a better way to solve this just within the Nginx configuration?
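The thread leaves this question unanswered. One commonly used option within Nginx alone, sketched here as a suggestion rather than a tested solution from the thread, is to make the upstream sticky with ip_hash, so a given client is routed to the same backend for both the initial request and the later /SS WebSocket:
upstream backends {
    # ip_hash keeps a given client IP on the same backend,
    # so the /SS WebSocket lands on the same port as the first request
    ip_hash;
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}
This only holds while the client's IP is stable and visible to Nginx; it breaks if another proxy in front hides the real client address.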

Use nginx as proxy for websocket connection

How do I use Nginx as a proxy for a WebSocket?
I need to connect to socketSite.com:port from clientSite.com (JavaScript), and I don't want to show the user the link "socketSite.com:port".
Can I use an Nginx proxy to forward requests to and from the WebSocket server?
Absolutely, you can! Use the following configuration:
location /myHandler {
    proxy_http_version 1.1;   # required so the Upgrade header reaches the backend
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://localhost:8880;
    proxy_redirect default;
    client_max_body_size 1000m;
}
I use Spring WebSocket. /myHandler is the URL used to create the WebSocket connection, and http://localhost:8880 is my Tomcat server address. Nginx and Tomcat are running on the same machine.
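For completeness, this location block has to sit inside a server block. A minimal sketch, assuming the public site is served on port 80 under the question's hypothetical clientSite.com name:
server {
    listen 80;
    server_name clientSite.com;   # hypothetical public host from the question

    location /myHandler {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://localhost:8880;
    }
}
The browser then connects to ws://clientSite.com/myHandler and never sees socketSite.com:port.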

WebSocket opening handshake timed out

I'm working on a Google Cloud Compute Engine instance running Ubuntu 12.04.
I have a Tornado app on the server listening on port 8888, and my Nginx configuration is shown below:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream chat_servers {
    server 127.0.0.1:8888;
}

server {
    listen 80;
    server_name chat.myapp.com;

    access_log /home/ubuntu/logs/nginx_access.log;
    error_log  /home/ubuntu/logs/nginx_error.log;

    location /talk/ {
        proxy_set_header X-Real-IP $remote_addr;  # http://wiki.nginx.org/HttpProxyModule
        proxy_set_header Host $host;              # pass the host header - http://wiki.nginx.org/HttpProxyModule#proxy_pass
        proxy_http_version 1.1;                   # recommended with keepalive connections - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version

        # WebSocket proxying - from http://nginx.org/en/docs/http/websocket.html
        proxy_connect_timeout 7d;
        proxy_send_timeout    7d;
        proxy_read_timeout    7d;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        proxy_pass http://chat_servers;
    }
}
When I try to connect to ws://chat.myapp.com/talk/etc/ via JavaScript, the Tornado app's open() method on the WebSocketHandler gets called and my log line prints on the server, but on the client side the code never enters onopen(), and after some time I get error code 1006, "WebSocket opening handshake timed out".
The app was working fine on an Amazon (AWS) EC2 server with the same configuration, but after moving to Google Cloud the handshake somehow cannot be completed.
Is there any configuration specific to Google Cloud, or any change needed in the Nginx file? I'm confused; I've spent two days on this and couldn't solve the problem.
The default Nginx version on Ubuntu was nginx/1.1.19. I updated it to nginx/1.8.0 and the problem was solved. (nginx/1.1.19 predates WebSocket proxying support, which was added in nginx 1.3.13.)
