With NGINX upstreams, is it possible to proxy pass to both HTTP and HTTPS backends in the same upstream? - nginx

Suppose I want to proxy some portion of my traffic to a remote backend instead of the local listener on the server. For example:
upstream backends {
server 127.0.0.1:8080 weight=20; # local process (HTTP)
server other-remote-backend.company-internal.com:443; # remote server (HTTPS)
}
location / {
# ...other stuff...
proxy_pass http://backends;
}
In the above configuration, roughly one in every 20 requests NGINX will try to route to http://other-remote-backend.company-internal.com:443, which is only listening for SSL.
Is there a way for an upstream server to define its own protocol scheme? Right now this seems undoable without changing the local listener process to be SSL as well (which is a less than desirable change to make).
Thanks

As is usually the case, I've figured out my own problem, and it's quite obvious. If you're trying to accomplish the above, the trick is quite simple.
First, create a new NGINX virtual host that listens on HTTP and proxy_passes to your remote HTTPS backend, like so:
/etc/nginx/sites-available/remote_proxy
upstream remote {
server other-remote-backend.company-internal.com:443;
}
server {
# other-remote-backend.company-internal.com:443;
listen 8181;
server_name my_original_server_name;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_pass https://remote;
}
}
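If the remote backend expects SNI or you want nginx to verify its certificate, the proxy SSL directives can be added to the location above (a sketch; the CA bundle path is hypothetical, adjust to your environment):
proxy_ssl_server_name on; # send SNI matching proxy_ssl_name to the upstream
proxy_ssl_name other-remote-backend.company-internal.com;
# optional verification of the upstream certificate (hypothetical CA bundle path):
# proxy_ssl_verify on;
# proxy_ssl_trusted_certificate /etc/nginx/ssl/internal-ca.pem;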
You can now use just http for your upstreams in the original configuration listening on 443:
/etc/nginx/sites-available/default
upstream backends {
server 127.0.0.1:8080 weight=20; # local process (HTTP)
server 127.0.0.1:8181; # local nginx proxying to the HTTPS remote
}
location / {
# ...other stuff...
proxy_pass http://backends;
}
Now just enable your new site and restart nginx:
$ ln -s /etc/nginx/sites-available/remote_proxy /etc/nginx/sites-enabled/ && systemctl restart nginx
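Optionally (a habit rather than a requirement), you can validate the configuration first and use a reload instead of a full restart:
$ nginx -t && systemctl reload nginx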

Related

Kubernetes (k3s) kubectl access through reverse proxy

I'm running a Kubernetes (k3s) server on a Raspberry Pi cluster locally, which is connected to a VM on DigitalOcean via a VPN (Tailscale). I've successfully managed to reverse proxy to my services on the cluster using nginx, but when I want to point a domain to my kube API server I just keep getting unauthorized responses.
In my Nginx config I've set it up something like this:
server {
server_name kube.domain.com;
location / {
proxy_pass https://xx.xx.xx.xx:6433;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
I'm using kubectl with the server set to: kube.domain.com
And here I get the 401, but if I set the server to my IP on the localhost it works fine, so I'm wondering why I get a 401, since I clearly reach my Kube API Server.
An Nginx reverse proxy by default strips the client certificate data that kubectl sends with the request, and therefore the request ends up with a 401.
The solution is to create a raw TCP stream targeting the IP of the Kubernetes API server; a minimal example looks like this:
nginx.conf
stream {
upstream api {
server <kube-api-server-ip>:6443;
}
server {
listen <port>; # this is the port exposed by nginx on your proxy server
proxy_pass api;
proxy_timeout 20s;
}
}
Doing the above proxies the raw TLS connection directly to the kube-apiserver.
A more detailed answer covering both the TCP (L4) and HTTP (L7) solutions can be found here: https://www.henryxieblogs.com/2021/12/how-to-expose-kube-api-server-via-nginx.html
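As a rough sketch of what the L7 alternative can look like: instead of passing the client certificate through (which nginx cannot do at the HTTP layer), nginx presents a client certificate of its own to the apiserver. The certificate and key paths below are hypothetical and would typically be extracted from your kubeconfig; kubectl would also need to trust the certificate nginx serves.
server {
listen 443 ssl;
server_name kube.domain.com;
ssl_certificate /etc/nginx/ssl/kube.domain.com.crt; # cert presented to kubectl (hypothetical path)
ssl_certificate_key /etc/nginx/ssl/kube.domain.com.key;
location / {
proxy_pass https://<kube-api-server-ip>:6443;
# client certificate nginx presents to the kube-apiserver (hypothetical paths)
proxy_ssl_certificate /etc/nginx/ssl/kube-client.crt;
proxy_ssl_certificate_key /etc/nginx/ssl/kube-client.key;
}
}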

Nginx preserve $request_uri

I'm not sure if the behavior I want is actually possible natively with nginx but here goes.
I have a server running on port 81 with the following nginx config:
CONFIGURATION OF SERVER1 NGINX
server {
listen 81;
server_name SERVER_DNS_NAME;
location /server1 {
proxy_pass http://127.0.0.1:8084/;
proxy_set_header Host $host;
}
location / {
proxy_pass http://127.0.0.1:8084;
proxy_set_header Host $host:$server_port;
}
}
I have another server running on port 82 with a similar configuration. Now what I'd like to do is be able to visit them both on port 80 with just different URIs.
For example: URL/server1 would take me to the first server, and URL/server2 would take me to the second.
CONFIGURATION OF NGINX LISTENING ON PORT 80
server {
listen SERVER_IP:80;
location /server1 {
proxy_set_header Host $host;
proxy_pass http://SERVER_IP:81;
}
location /server2 {
proxy_pass http://SERVER_IP:82;
proxy_set_header Host $host;
}
}
This works fine when I go to URL/server1: I am successfully routed to the main page on server1. However, as soon as I click any of the links on the server1 page I get a 404. This is because the site tries to go to URL/some_subdir_of_server1 (for which there is no mapping) rather than URL/server1/some_subdir_of_server1. Is this behavior doable? If so, how?
Thanks!
Be careful with trailing slashes: a proxy_pass such as http://SERVER_IP:81/ (note the trailing slash, i.e. a URI part) replaces the matched location prefix with /, so the upstream no longer sees the /server1 prefix, whereas proxy_pass without a URI part passes the request URI through unchanged.
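For illustration, a minimal sketch of the two behaviours (the backend addresses are hypothetical):
# with a URI part on proxy_pass, the matched prefix is replaced:
location /server1/ {
proxy_pass http://127.0.0.1:8084/; # /server1/page is sent upstream as /page
}
# without a URI part, the request URI is passed through unchanged:
location /server2/ {
proxy_pass http://127.0.0.1:8085; # /server2/page is sent upstream as /server2/page
}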

NGINX Reverse Proxy for 2 jenkins servers. How?

I would like to run two Jenkins servers behind an nginx reverse proxy, but I cannot find the proper way to configure it.
The config below is working fine
location /jenkins {
proxy_pass https://contoso.com/jenkins;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
If I try to change the location to /jenkins_test, then it does not work anymore.
What am I doing wrong?
You will need to define each Jenkins instance in its own server section.
Then depending on the url that you are calling on nginx, the right jenkins server will respond.
Your nginx config could have a structure like this:
http{
# application server for first jenkins instance
upstream app_servers_first_jenkins_instance {
# upstream entries are host:port only (no scheme or path);
# if jenkins is running on the same server this should be something like 127.0.0.1 ...
server contoso.com:443;
}
# application server for second jenkins instance
upstream app_servers_second_jenkins_instance {
server contoso.com:443;
}
# JENKINS SERVER 1
server{
listen 80;
server_name jenkinsfirstinstance.yourdomain.com;
location / {
proxy_pass https://app_servers_first_jenkins_instance;
}
}
# JENKINS SERVER 2
server{
listen 80;
server_name jenkinssecondinstance.yourdomain.com;
location / {
proxy_pass https://app_servers_second_jenkins_instance;
}
}
} # END OF HTTP SECTION
In this example both server names proxy to the same Jenkins backend (contoso.com:443); if you want them to be different Jenkins instances, you will have to modify the server entry in one of the upstream sections.
If you want to run two servers behind the nginx proxy, that means you need two location contexts (also called "blocks").
In your configuration file, which is probably located in /etc/nginx/sites-available, you should add the locations:
server {
listen 80;
location /jenkins1 {
proxy_pass http://jenkins1-local-ip-address:8000;
include /etc/nginx/proxy_params;
}
location /jenkins2 {
proxy_pass http://jenkins2-local-ip-address:8001;
include /etc/nginx/proxy_params;
}
}
One thing you should note: I assume your Jenkins servers are on the same LAN (Local Area Network) as the proxy; otherwise it would not make much sense to have a proxy in front, because the servers would already be reachable over the internet.
If your Jenkins servers are accessible via HTTPS, you should change http to https in the proxy_pass directives, change the listen port to 443 and add the SSL certificate configuration.
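For example (a minimal sketch, assuming certificate files at hypothetical paths and Jenkins instances serving HTTPS on port 8443):
server {
listen 443 ssl;
server_name jenkins.yourdomain.com; # hypothetical name
ssl_certificate /etc/nginx/ssl/jenkins.crt;
ssl_certificate_key /etc/nginx/ssl/jenkins.key;
location /jenkins1 {
proxy_pass https://jenkins1-local-ip-address:8443;
include /etc/nginx/proxy_params;
}
location /jenkins2 {
proxy_pass https://jenkins2-local-ip-address:8443;
include /etc/nginx/proxy_params;
}
}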

nginx - Forward requests to another proxy

So, I have a third-party proxy (probably running Squid) which will only accept connections from one of my IPs, but I need to be able to access it from a variety of IPs.
So I'm trying to put nginx in front to forward requests to this proxy. I know nginx can forward requests like this:
location / {
proxy_pass http://$http_host$uri$is_args$args;
}
This would work if I needed nginx to forward requests directly to the target site, but I need it to pass it to proxy X first. I tried this:
upstream myproxy {
server X.X.X.X:8080;
}
location / {
proxy_pass http://myproxy$uri$is_args$args; # also tried: http://myproxy$http_host$uri$is_args$args
}
But I get "(104) Connection reset by peer". I guess because nginx is proxying like this:
GET /index.html HTTP/1.1
Host: www.targetdomain.com.br
But I need it to proxy like this:
GET http://www.targetdomain.com.br/index.html HTTP/1.1
I found out that this works:
http {
# resolver 8.8.8.8; # Needed if you use a hostname for the proxy
server {
listen 80;
server_name ~(?<subdomain>.+)\.domain\.com$;
location / {
proxy_redirect off;
proxy_set_header Host $subdomain;
proxy_set_header X-Forwarded-Host $http_host;
proxy_pass "http://X.X.X.X:8080$request_uri";
}
}
}
You need to use resolver if X.X.X.X is a hostname and not an IP.
Check https://github.com/kawanet/nginx-forward-proxy/blob/master/etc/nginx.conf for more tricks.
EDIT: also check nginx server_name wildcard or catch-all and http://nginx.org/en/docs/http/ngx_http_core_module.html#var_server_name
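As a side note, the resolver directive matters here because proxy_pass contains variables, so the hostname is resolved at runtime; a hedged sketch with a hostname in place of X.X.X.X (the hostname below is made up):
resolver 8.8.8.8;
set $upstream_proxy "proxy.example.com"; # hypothetical hostname of the third-party proxy
proxy_pass "http://$upstream_proxy:8080$request_uri";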

ActiveMQ and NGINX

I can't get my head wrapped around this ... We have a requirement to use ActiveMQ hidden behind an NGINX proxy, but I have no idea how to set it up.
For ActiveMQ I've set up different ports for all protocols:
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:62716?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5782?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:62713?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1993?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:62714?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
And the nginx configuration like this:
server {
listen *:61616;
server_name 192.168.210.15;
index index.html index.htm index.php;
access_log /var/log/nginx/k1.access.log combined;
error_log /var/log/nginx/k1.error.log;
location / {
proxy_pass http://localhost:62716;
proxy_read_timeout 90;
proxy_connect_timeout 90;
proxy_redirect off;
proxy_method stream;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Proxy "";
}
}
(same for all other five redefined ports)
I thought that this would expose the default ActiveMQ ports and Nginx would map them to the new definitions, but this doesn't work.
For communication, we're using the Node.js library amqp10, version 3.1.4.
And all the ports are open on the server ... when using the standard ports without the nginx proxy, it works.
Any idea what I'm missing? Thanks for any thoughts.
You can hide ActiveMQ behind an nginx proxy, even if you are trying to proxy OpenWire for an AMQP client.
If you are adding your configuration inside the http block, it is bound to fail.
But note that nginx supports not only the http block, it also has a stream block for raw TCP.
If you proxy ActiveMQ over TCP, then what happens at the HTTP level won't matter and you will still be able to proxy.
Of course, you lose the flexibility that comes along with HTTP.
Open your nginx.conf (at /etc/nginx/nginx.conf).
It will have an http block, which in turn contains some include statements.
Outside this http block, add another include statement.
$ pwd
/etc/nginx
$ cat nginx.conf | tail -1
include /etc/nginx/tcpconf.d/*;
The include statement is directing nginx to look for additional configurations in directory "/etc/nginx/tcpconf.d/".
Add desired configuration in this directory. Let's call it amq_stream.conf.
$ pwd
/etc/nginx/tcpconf.d
$ cat amq_stream.conf
stream {
upstream amq_server {
# activemq server
server <amq-server-ip>:<port, e.g. 61616>;
}
server {
listen 61616;
proxy_pass amq_server;
}
}
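One thing to check (an assumption about your build, not part of the original answer): the stream block requires nginx to be compiled with the stream module, which most distribution packages ship, sometimes as a dynamic module that has to be loaded with load_module. You can check with, for example:
$ nginx -V 2>&1 | grep -o with-stream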
Restart your nginx service.
$ sudo service nginx restart
You are done
Nginx is an HTTP server that is capable of proxying WebSocket and HTTP.
But you are trying to proxy OpenWire for an AMQP client, which does not work with Nginx or Node.js.
So, if you really need to use Nginx, you need to change the client protocol to STOMP or MQTT over WebSocket, then set up a WebSocket proxy in Nginx.
An Nginx example with TLS follows; more details at https://www.nginx.com/blog/websocket-nginx/
upstream websocket {
server amqserver.example.com:62714;
}
server {
listen 8883 ssl;
ssl_certificate /etc/nginx/ssl/certificate.cer;
ssl_certificate_key /etc/nginx/ssl/key.key;
location / {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 120s;
}
}
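If the same endpoint also has to serve ordinary HTTP requests, the map pattern from the nginx documentation sets the Connection header only when the client actually asks for an upgrade (this goes in the http block, outside the server block above):
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
and then use proxy_set_header Connection $connection_upgrade; inside the location.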
However, since you would have to rewrite all the client code, I would rethink the Nginx idea. There is other software and hardware that can front TCP-based servers and do TLS termination and whatnot.
