My requests to the upstream are timing out after 60 seconds.
I have configured the proxy as follows:
location /myapp/ {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://aws-elb:80/myapp/;
    proxy_read_timeout 300s;
}
Is there any other way to increase the timeout, or to wait until I get a response from my upstream?
To configure the connection timeout you can change proxy_connect_timeout, which is 60 seconds by default.
This most likely won't solve your problem, however - have you confirmed that you receive a response if you curl your backend service?
Is your ELB successfully forwarding requests to your application? Your application would have to be listening on a port defined under your load balancer's listeners.
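For reference, the relevant timeout directives can be raised together (a sketch; the values are illustrative, and each directive defaults to 60 seconds):

```nginx
location /myapp/ {
    proxy_pass http://aws-elb:80/myapp/;
    proxy_connect_timeout 75s;  # time to establish the TCP connection to the upstream
    proxy_send_timeout 300s;    # timeout between two successive writes to the upstream
    proxy_read_timeout 300s;    # timeout between two successive reads from the upstream
}
```

Note that proxy_read_timeout applies between successive reads, not to the whole response, so a slow upstream that sends nothing at all for longer than this value will still time out.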
I have a Blazor Server application, which works correctly in all cases except when running behind a reverse proxy (I've only tested with NGINX).
The browser is able to connect to the /_blazor?id=xyz endpoint and successfully send/receive heartbeat messages, but events such as button clicks do not work at all. There are no errors or warnings in the console or in the application's logs.
The Nginx config is written according to the .NET docs.
Here is my setup:
map $http_connection $connection_upgrade {
    "~*Upgrade" $http_connection;
    default keep-alive;
}

server {
    listen 80;
    server_name service.example.com;

    location / {
        # App server url
        proxy_pass http://localhost:9000;

        # Configuration for WebSockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_cache off;

        # WebSockets were implemented after http/1.0
        proxy_http_version 1.1;

        # Configuration for ServerSentEvents
        # proxy_buffering off; # Removed according to docs

        # Configuration for LongPolling or if your KeepAliveInterval is longer than 60 seconds
        proxy_read_timeout 100s;

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
I think the problem is caused by nginx, as the app works fine on my local machine.
If you need any more data, please comment below and I will provide it.
Thanks in advance.
Apparently, Cloudflare was also configured to proxy my requests, and this somehow affected them.
Disabling the Cloudflare proxy and configuring HTTPS with Let's Encrypt solved the issue.
I have a mosquitto_sub running in the background on serverA, let's say with topic "TEST" on port 1883.
I followed this to use nginx as a stream proxy to Mosquitto on serverB.
Testing the setup by sending a message to serverB with mosquitto_pub, the message is received and displayed correctly on serverA.
Now I'd like a webapp running on serverC to receive the MQTT messages I send, using a WebSocket; as far as I understand, that nginx setup is made exactly for this purpose, because a browser can't use the MQTT protocol directly.
I did two tests:
pointing the websocket to ServerB stream (wss://serverB:1883)
pointing the websocket to nginx reverse proxy with this config:
...
server {
    listen 443 ssl;
    ...
    location /webapp/websocket {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass_request_headers on;
        proxy_pass http://serverB:1883/;
        proxy_http_version 1.0;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_read_timeout 1800s;
    }
}
With both, the websocket doesn't work, failing with a 502 Bad Gateway error.
My questions are: did I misunderstand, or can this be done?
Does it return a 502 error just because the webapp must be programmed to specify the topic to listen to?
The Mosquitto broker supports MQTT over WebSockets, but it has to be on a separate port from native MQTT over TCP.
So if Mosquitto is normally listening on port 1883, you need to pick a different port to run the MQTT over WebSockets listener, e.g. in mosquitto.conf:
listener 1883

listener 8083
protocol websockets
You would then need to update the port in the proxy_pass entry to match.
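The adjusted location block could then look something like this (a sketch, assuming the 8083 WebSockets listener from the example above; note that the WebSocket upgrade also requires HTTP/1.1 rather than 1.0):

```nginx
location /webapp/websocket {
    # Point at Mosquitto's WebSockets listener, not the native MQTT port
    proxy_pass http://serverB:8083/;
    # The WebSocket handshake requires HTTP/1.1
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_read_timeout 1800s;
}
```

The browser-side client would then connect with an MQTT-over-WebSockets library rather than a raw WebSocket, since it still has to speak MQTT (subscribe to the topic, etc.) over that connection.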
proxy_pass http://myserver;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
Above is my nginx configuration; myserver requires NTLM authentication.
I access myserver through the nginx proxy and provide the correct auth info, but the browser prompts for authentication again.
Is there anything wrong with my configuration?
EDIT:
Referring to this, I used a stream proxy and the problem was solved!
Thanks to @Tarun Lalwani
According to nginx documentation:
Allows proxying requests with NTLM Authentication. The upstream connection is bound to the client connection once the client sends a request with the “Authorization” header field value starting with “Negotiate” or “NTLM”. Further client requests will be proxied through the same upstream connection, keeping the authentication context.
upstream http_backend {
    server 127.0.0.1:8080;
    ntlm;
}
The "ntlm" directive is available only in NGINX Plus.
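In the same documentation, the ntlm upstream is paired with HTTP/1.1 keepalive connections on the server side, since the authentication context lives on a persistent upstream connection. A sketch of the corresponding server block:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://http_backend;
        # NTLM needs a persistent upstream connection, so use HTTP/1.1
        # and clear the Connection header to enable keepalive
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```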
I created a custom module that provides similar functionality:
gabihodoroaga/nginx-ntlm-module
There is also a blog post about it at hodo.dev
I have a scenario I'm trying to configure in nginx where a number of processes each listen on ports 8000, 8001, and so on. Upon establishing an HTTP connection to one of these ports, the client (in JavaScript) then establishes a WebSocket connection. All the listening processes expose the same /SS WebSocket endpoint; however, if the HTTP connection was initially made to port 8000, the WebSocket connection needs to go to port 8000 as well. I have the following nginx configuration:
upstream backends {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
}

include /etc/nginx/conf.d/*.conf;

server {
    location / {
        proxy_pass_header Server;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
        proxy_pass http://backends;
    }

    location /SS {
        proxy_set_header Host $host;
        proxy_pass http://backends;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
However, this doesn't route the WebSocket to the same backend as the initial connection.
I thought of a way to do this by setting up a different endpoint for each process and passing it to the client in the initial HTTP response; the client would then use that endpoint for the WebSocket connection. This would, however, require configuring all the different endpoints in nginx. Is there a better way to solve this within the nginx configuration alone?
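The per-endpoint idea described in the question could be sketched like this (a sketch; the /ws8000-style location names are illustrative, one block per backend process, with the initial HTTP response telling the client which path to use):

```nginx
# Each location pins the WebSocket to one specific backend process.
location /ws8000/SS {
    proxy_pass http://127.0.0.1:8000/SS;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

location /ws8001/SS {
    proxy_pass http://127.0.0.1:8001/SS;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The trade-off is exactly the one the question notes: every backend needs its own location block, so the nginx config grows with the number of processes.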
How do I use nginx as a proxy for a WebSocket?
If I need to connect to socketSite.com:port from clientSite.com (JavaScript),
and I don't want to show the user the link "socketSite.com:port",
can I use an nginx proxy to redirect requests to/from the WebSocket server?
Absolutely, you can! Use the following configuration:
location /myHandler {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://localhost:8880;
    proxy_redirect default;
    client_max_body_size 1000m;
}
I use Spring WebSocket. /myHandler is the URL used to create the WebSocket connection, and http://localhost:8880 is my Tomcat server address. The nginx server and Tomcat run on the same machine.