I have a website at www.mydomain.com and would like to proxy traffic to another server when the URL path starts with /blog. That other server runs a WordPress blog, and WORDPRESS_URL and SITE_URL both point to its IP address.
My current setup: DNS points to mydomain.com, where nginx acts as a reverse proxy. Every request is sent via proxy_pass to a web application running on localhost:3000, except those matching /blog. For those requests I have the following in my nginx conf, inside the single server block:
location /blog {
    rewrite ^/blog/(.*)$ /$1 break;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;
    proxy_pass http://<blog-ip>;
    proxy_redirect off;
}
The proxy works well and requests do reach the blog server, but page resources like fonts and theme assets are not returned properly due to CORS. I assumed I had to change WORDPRESS_URL and SITE_URL to www.mydomain.com/blog, and at first everything does function properly, but about 10 minutes after the URL change it stops working completely and www.mydomain.com/blog starts returning Bad Gateway.
The strangest part is that, before the URL change, ping and curl work just fine when run from the mydomain server:
$ ping <blog-ip> -W 2 -c 3
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.391/0.507/0.690/0.132 ms
$ curl -I <blog-ip>
HTTP/1.1 200 OK
After Bad Gateway begins, ping still works:
$ ping <blog-ip> -W 2 -c 3
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.391/0.507/0.690/0.132 ms
but curl does not:
$ curl -Iv <blog-ip>
* Hostname was NOT found in DNS cache
* Trying <blog-ip>...
* connect to <blog-ip> port 80 failed: Connection refused
* Failed to connect to <blog-ip> port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to <blog-ip> port 80: Connection refused
Interestingly, running curl or ping from my local machine still works just fine, but the WordPress server becomes curl-invisible to the mydomain server. The only way to stop the Bad Gateway is to change WORDPRESS_URL and SITE_URL back to the server's IP, and even then it only starts working again after some time.
I am completely clueless about what is going on. Both servers are DigitalOcean droplets. I have had issues before with undocumented blocks on their side (they do not allow sending email from a droplet by default, so you have to contact support for that) and wondered whether this might be something similar. Their support, however, doesn't seem to know what is happening either, so I decided to post the question here.
Any thoughts or suggestions are much appreciated.
I’m working on an iOS app that needs to communicate with a server. As part of that communication, the app sends a private cookie that must be transferred **securely**.
After a ton of research and frustration, I’ve successfully managed to set up my webserver in the following manner:
My entire setup runs on an AWS EC2 machine running Linux.
My routes are defined with FastAPI
The webserver is deployed with Gunicorn launching multiple Uvicorn workers, as recommended by the official Uvicorn docs:
gunicorn -w 4 -k uvicorn.workers.UvicornWorker example:app
The webserver is launched on port 8080 inside a Docker container:
Dockerfile
... docker setup ...
EXPOSE 8080
CMD ["gunicorn", "-b", "0.0.0.0:8080", "-w", "2", "-k", "uvicorn.workers.UvicornWorker", "main:app"]
My webserver runs behind an NGINX reverse proxy. The proxy listens on ports 80 and 443 and forwards requests to my webserver (which sits on port 8080).
My NGINX .conf file is very minimal and looks like this:
server {
    server_name example.*;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
NGINX uses certbot-generated certificates so that only HTTPS communication is supported.
The certificates were generated using python-certbot-nginx, with the following command:
sudo certbot --nginx -d example.club -d www.example.club
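For reference, the certbot nginx plugin edits the server block in place; the certbot-managed part typically ends up looking roughly like this (standard Let's Encrypt default paths, shown schematically rather than copied verbatim from my config):

server {
    server_name example.club www.example.club;

    location / {
        proxy_pass http://127.0.0.1:8080/;
        # ... proxy_set_header lines as above ...
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.club/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.club/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}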
Finally, to ensure that no one bypasses my proxy and sends requests directly to my webserver, I’ve configured my machine to only allow connections to port 8080 from the machine’s own IP address.
Ports 80 and 443 are obviously open to any IP address.
Since I’m a newbie to webservers in general and webserver deployment in particular, I would like to know: how efficient and secure is this setup?
Do you have recommendations or anything else I should implement to make sure no private data leaks out, while also being able to handle the request load?
Thanks!
Without knowing the exact configuration in detail, here are some things to think about. Overall, the setup seems about right.
"I’ve configured my machine to only allow communication to port 8080 from the machine’s IP address." -> Have you really used the external IP of the machine, or are you using localhost/127.0.0.1 there? Since you proxy_pass to 127.0.0.1, it is fine to only allow connections over the loopback interface.
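For example, if the app runs in Docker as described, publishing the container port on the loopback interface only achieves the same thing without an extra firewall rule (the image name below is just a placeholder):

# publish port 8080 on 127.0.0.1 only, so only nginx on this host can reach the container
docker run -d -p 127.0.0.1:8080:8080 my-fastapi-image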
I don't know which SSL parameters you use in nginx; there is a lot you can configure. Try https://www.ssllabs.com/ssltest/ to see whether the config is good enough. A+ is hard to reach without imposing a lot of restrictions on your user base, but an A is definitely a good grade to aim for.
You might want to set up an HTTP -> HTTPS redirect for everything except the path certbot needs, and then set the HSTS header: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
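A rough sketch of both pieces (the ACME webroot path and the max-age value are common defaults, not taken from your config):

server {
    listen 80;
    server_name example.club www.example.club;

    # leave plain HTTP open for certbot renewals
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # redirect everything else to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

# and inside the HTTPS server block:
add_header Strict-Transport-Security "max-age=31536000" always;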
Hope this helps a little bit. As I said, overall I don't see a security hole there. The entry points I see are your nginx, certbot and your dockerized web app. You have to trust nginx and certbot, so make sure you automatically install security updates for them, for example with unattended-upgrades, so you don't "forget" them. The same goes for Docker and the rest of the OS software that comes from your package manager.
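On a Debian/Ubuntu image that is usually just the following (assuming apt; adjust for your distribution):

sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades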
Greetings
I'm using java-websocket for my WebSocket needs, inside a Wowza application, and nginx for SSL, proxying the requests to Java.
The problem is that the connection seems to be cut after exactly 1 hour, server-side. The client side doesn't even know it was disconnected for quite some time. I don't want to just adjust the timeout on nginx; I want to understand why the connection is being terminated, as the socket functions as usual right up until it doesn't.
EDIT:
Forgot to post the configuration:
location /websocket/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    include conf.d/proxy_websocket;
    proxy_connect_timeout 1d;
    proxy_send_timeout 1d;
    proxy_read_timeout 1d;
}
And that included config:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass http://127.0.0.1:1938/;
Nginx/1.12.2
CentOS Linux release 7.5.1804 (Core)
Java WebSocket 1.3.8 (GitHub)
The timeout could be coming from the client, nginx, or the back-end. When you say that it is being cut "server side" I take that to mean that you have demonstrated that it is not the client. Your nginx configuration looks like it shouldn't timeout for 1 day, so that leaves only the back-end.
Test the back-end directly
My first suggestion is that you try connecting directly to the back-end and confirm that the problem still occurs (taking nginx out of the picture for troubleshooting purposes). Note that you can do this with command line utilities like curl, if using a browser is not practical. Here is an example test command:
time curl --trace-ascii curl-dump.txt -i -N \
-H "Host: example.com" \
-H "Connection: Upgrade" \
-H "Upgrade: websocket" \
-H "Sec-WebSocket-Version: 13" \
-H "Sec-WebSocket-Key: BOGUS+KEY+HERE+IS+FINE==" \
http://127.0.0.1:8080
In my (working) case, running the above example stayed open indefinitely (I stopped with Ctrl-C manually) since neither curl nor my server was implementing a timeout. However, when I changed this to go through nginx as a proxy (with default timeout of 1 minute) as shown below I saw a 504 response from nginx after almost exactly 1 minute.
time curl -i -N --insecure \
-H "Host: example.com" \
https://127.0.0.1:443/proxied-path
HTTP/1.1 504 Gateway Time-out
Server: nginx/1.14.2
Date: Thu, 19 Sep 2019 21:37:47 GMT
Content-Type: text/html
Content-Length: 183
Connection: keep-alive
<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
real 1m0.207s
user 0m0.048s
sys 0m0.042s
Other ideas
Someone mentioned trying proxy_ignore_client_abort but that shouldn't make any difference unless the client is closing the connection. Besides, although that might keep the inner connection open I don't think it is able to keep the end-to-end stream intact.
You may want to try proxy_socket_keepalive, though that requires nginx >= 1.15.6.
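A minimal sketch of that, merged into the existing location block (the directive comes straight from the nginx docs; the rest of your configuration stays unchanged):

location /websocket/ {
    # send TCP keepalive probes on the connection to the proxied server (nginx >= 1.15.6)
    proxy_socket_keepalive on;
    include conf.d/proxy_websocket;
    proxy_read_timeout 1d;
    # ... other proxy_* directives as before ...
}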
Finally, there's a note in the WebSocket proxying doc that hints at a good solution:
Alternatively, the proxied server can be configured to periodically send WebSocket ping frames to reset the timeout and check if the connection is still alive.
If you have control over the back-end and want connections to stay open indefinitely, periodically sending "ping" frames to the client should prevent the connection from being closed due to inactivity (making a long proxy_read_timeout unnecessary), no matter how long the connection stays open or how many middle-boxes are involved. If the client is a web browser, no change is needed on its side, since responding to pings is part of the WebSocket spec.
Most likely it's because your configuration for the websocket proxy needs tweaking a little, but since you asked:
There are some challenges that a reverse proxy server faces in supporting WebSocket. One is that WebSocket is a hop‑by‑hop protocol, so when a proxy server intercepts an Upgrade request from a client it needs to send its own Upgrade request to the backend server, including the appropriate headers. Also, since WebSocket connections are long lived, as opposed to the typical short‑lived connections used by HTTP, the reverse proxy needs to allow these connections to remain open, rather than closing them because they seem to be idle.
Within the location directive that handles your websocket proxying you need to include the Upgrade and Connection headers; this is the example NGINX gives:
location /wsapp/ {
    proxy_pass http://wsbackend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}
This should now work because:
NGINX supports WebSocket by allowing a tunnel to be set up between a client and a backend server. For NGINX to send the Upgrade request from the client to the backend server, the Upgrade and Connection headers must be set explicitly, as in this example.
I'd also recommend you have a look at the NGINX Nchan module, which adds WebSocket functionality directly into NGINX. It works well.
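For illustration only, a minimal Nchan setup looks roughly like this (it requires nginx built with the Nchan module; the endpoint paths and the use of $arg_id as the channel id are just examples):

# your application publishes messages here
location = /pub {
    nchan_publisher;
    nchan_channel_id $arg_id;
}

# clients subscribe here (Nchan handles WebSocket, EventSource and long-polling)
location = /sub {
    nchan_subscriber;
    nchan_channel_id $arg_id;
}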
I am trying to set up a Jenkins master and a Jenkins slave node, where the Jenkins master is behind an nginx reverse proxy on a different server with SSL termination. The nginx configuration is as follows:
upstream jenkins {
    server <server ip>:8080 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name jenkins.mydomain.com;

    ssl_certificate /etc/nginx/certs/mydomain.crt;
    ssl_certificate_key /etc/nginx/certs/mydomain.key;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect http:// https://;
        proxy_pass http://jenkins;
    }
}

server {
    listen 80;
    server_name jenkins.mydomain.com;
    return 301 https://$server_name$request_uri;
}
The TCP port for JNLP agents is set as 50000 in Jenkins master Global Security configuration. Port 50000 is set to be accessible from anywhere on the host machine.
The JNLP slave is launched with the following command:
java -jar slave.jar -jnlpUrl https://jenkins.mydomain.com/computer/slave-1/slave-agent.jnlp -secret <secret>
The JNLP slave fails to connect to the configured JNLP port on the master:
INFO: Connecting to jenkins.mydomain.com:50000 (retrying:4)
java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at hudson.remoting.Engine.connect(Engine.java:400)
at hudson.remoting.Engine.run(Engine.java:298)
What is the configuration required for the JNLP slave to connect to the Jenkins master?
The JNLP port seems to use a binary protocol, not a text-based HTTP protocol, so unfortunately it can't be reverse-proxied through NGINX like the normal Jenkins pages can be.
Instead, you should:
1. Configure Global Security: check "Enable security" and set a fixed "TCP port for JNLP slave agents". This will cause all Jenkins pages to emit extra HTTP headers specifying this port: X-Hudson-CLI-Port, X-Jenkins-CLI-Port, X-Jenkins-CLI2-Port.
2. Allow your fixed TCP JNLP port through any firewall(s) so CLI clients and JNLP agents can directly reach the Jenkins server on the backend.
3. Set the system property hudson.TcpSlaveAgentListener.hostName to the hostname or IP address of your Jenkins server on the backend (see the example after this list). This will cause all pages to emit an extra HTTP header (X-Jenkins-CLI-Host) containing the specified hostname. This tells CLI clients where to connect, but supposedly not JNLP agents.
4. For each of your build slave machines in the list of nodes at jenkins.mydomain.com/computer/ that uses the launch method "Launch slave agents via Java Web Start", click the computer, click Configure, click the Advanced... button on the right side under Launch method, and set the "Tunnel connection through" field appropriately. Read the question-mark help; you probably just need the "HOST:" syntax, where HOST is the hostname or IP address of your Jenkins server on the backend.
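For step 3, one way to pass the property is on the Jenkins master's Java command line, for example (the hostname below is a placeholder; packaged installs usually take it via JAVA_ARGS in something like /etc/default/jenkins instead):

# hypothetical example: tell Jenkins which host to advertise for the JNLP port
java -Dhudson.TcpSlaveAgentListener.hostName=jenkins-backend.internal -jar jenkins.war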
References:
https://issues.jenkins-ci.org/browse/JENKINS-11982
https://support.cloudbees.com/hc/en-us/articles/218097237-How-to-troubleshoot-JNLP-slaves-connection-issues-with-Jenkins
https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
It's been almost 4 years since the OP asked this question; nevertheless, if you reached this page looking for a proper solution, it is now possible.
I use Traefik as a reverse proxy to Jenkins, with inbound TCP port connections completely disabled.
The only thing you need to make sure of is that your agent/slave trusts the Jenkins server certificate (as WebSocket cannot be used with -disableHttpsCertValidation or -noCertificateCheck).
If this is a Windows agent, use:
C:\Program Files (x86)\Java\jre1.8.0_251\bin\keytool.exe -import -storepass "changeit" -keystore "C:\Program Files (x86)\Java\jre1.8.0_251\lib\security\cacerts" -alias <cert_alias> -file "<path_to_cert>"
(Change the path according to your Java version.)
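For completeness, a WebSocket agent launch then looks roughly like this (this assumes a Jenkins/remoting version new enough to support the -webSocket flag; the agent name matches the node from the question and the secret is the one shown on the node's page):

java -jar agent.jar -url https://jenkins.mydomain.com/ -name slave-1 -secret <secret> -webSocket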
I have 2 subdomains I want to catch and forward from one server running nginx: foo.acme.com, bar.acme.com
In my nginx.conf file I have set up 2 server blocks:
server {
    listen 80;
    server_name foo.acme.com;

    location / {
        proxy_pass http://<my_ip_server_1>:80;
    }
}

server {
    listen 80;
    server_name bar.acme.com;

    location / {
        proxy_pass http://<my_ip_server_2>:80;
    }
}
My 2 subdomains point to the same IP (the one with nginx running on it).
I'm getting 502 Bad Gateway errors on both servers in this configuration.
The 502 code means Bad Gateway: the server was acting as a gateway or proxy and received an invalid response from the upstream server.
It usually means the backend servers are not reachable, which could be a problem with them, not with your front-end configuration.
On the machine running Nginx, you should test that you can reach the backend servers. Using w3m or another HTTP client on that machine, check these URLs. Do they load what you expect?
http://<my_ip_server_1>:80
http://<my_ip_server_2>:80
If not, you may have some work to do to make sure that your Nginx server can reach the backend servers.
I should add that you may need to send the Host: header to get the backend servers to serve the expected content, if they each host multiple virtual domains. I like to use the GET and HEAD tools from the libwww-perl distribution:
GET -H 'Host: bar.acme.com' http://<my_ip_server_1>:80
It's important to run the test from the machine hosting Nginx, as running it from your desktop could produce a different result.
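If libwww-perl isn't available, curl can send an equivalent request (this is just a curl version of the GET example above, not part of the original test):

curl -I -H 'Host: bar.acme.com' http://<my_ip_server_1>:80/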
I'm using nginx as the front end for my Mongrel server. Mongrel is listening on port 3001, and nginx is listening on port 3000.
In my application, there is a redirect after creating a model. Let's say I post a request to http://xxxx:3000/users; it should redirect to http://xxxx:3000/users/1 (1 is the id of the new user), but it is actually redirected to http://xxxx/users/1, which causes a 404 error.
Why is port 3000 missing?
Are you using proxy_pass? You should add this line:
proxy_set_header Host $host:3000;
You need to post your nginx config here.
====
better solution:
proxy_set_header Host $http_host;
$host does not include the port, while $http_host is the value taken from the Host header sent by the browser, which includes the port.
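Putting that into context, a sketch of the relevant part of the nginx config might look like this (ports 3000/3001 are taken from the question; the rest is illustrative):

server {
    listen 3000;

    location / {
        # $http_host is the Host header sent by the browser, including the :3000 port
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:3001;
    }
}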