I would like to use nginx to serve files from
https://example.com/documents/*
but to forward all other requests to localhost:4443. I need to use stream to pass the client certificate through to the application listening on 4443, since otherwise nginx performs the TLS handshake itself and it fails.
I have tried this, but nginx claims (of course) that the 443 port is in use:
http {
    server {
        listen 443;
        location /documents/ {
            sendfile on;
        }
    }
}

stream {
    server {
        listen 443;
        proxy_pass localhost:4443;
    }
}
I cannot use another port; it has to be the same port.
How can I make this work?
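For context, a minimal sketch of the usual way around the port clash, assuming the documents can live on their own hostname (docs.example.com below is hypothetical, as is the internal port 8445): let the stream block own the public 443 and dispatch by SNI with ssl_preread, without terminating TLS. Note that ssl_preread only sees the SNI hostname, never the URL path, so a split on /documents/ alone is not possible unless nginx terminates TLS itself.

stream {
    # Route by SNI without terminating TLS; the URL path is not visible here.
    map $ssl_preread_server_name $backend {
        docs.example.com 127.0.0.1:8445;   # hypothetical hostname handled by the http block below
        default          127.0.0.1:4443;   # pass-through; the app on 4443 does the client-cert handshake
    }

    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
    }
}

http {
    server {
        listen 127.0.0.1:8445 ssl;                  # internal port, so it no longer clashes with the stream listener
        ssl_certificate     /etc/ssl/example.crt;   # placeholder paths
        ssl_certificate_key /etc/ssl/example.key;
        location /documents/ {
            sendfile on;
        }
    }
}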
Related
I'm new to nginx.
I have a machine behind my router that runs a server and correctly handles requests on ports 80 and 443 over HTTPS.
The problem is that I want to host a second website on another device, but I have only one IP address. I bought a Raspberry Pi Zero to use as a reverse proxy behind my router. I installed nginx and want to redirect all requests to my other machines. Both the RPi Zero and the old machine have local IP addresses.
To redirect requests from my router to the RPi Zero and then on to my old machine, I used proxy_pass. On port 80 everything works fine, but on port 443 I get a certificate error in my browser.
Is it possible to pass the whole request on to the old machine and let it handle the HTTPS certificate as before? Or is it mandatory to have the certificate processed by nginx?
Diagram of the old, functional installation
Diagram of the current installation with the certificate error
My configuration:
upstream backend_a {
    server 192.168.0.20:80;
}

upstream backend_a_s {
    server 192.168.0.20:443;
}

server {
    listen 80;
    server_name mydomain;

    location / {
        include proxy_params;
        proxy_pass http://backend_a;
    }
}

server {
    listen 443 ssl;
    server_name mydomain;

    location / {
        include proxy_params;
        proxy_pass https://backend_a_s;
    }
}
I found a solution: I need to use port forwarding. To do this in nginx, I need to use the stream keyword.
stream {
    server {
        listen 443;
        proxy_pass 192.168.0.20:443;
    }
}
The stream block needs to be at the same level as http, so I had to edit /etc/nginx/nginx.conf itself. Another option is to compile nginx manually with the --with-stream parameter.
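For anyone else hitting this, a rough sketch of how /etc/nginx/nginx.conf ends up looking with stream as a sibling of http rather than nested inside it (the include paths are the Debian/Ubuntu package defaults and may differ elsewhere):

# /etc/nginx/nginx.conf (abridged)
user www-data;
include /etc/nginx/modules-enabled/*.conf;   # loads ngx_stream_module on packaged builds

events {
}

http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;      # the existing port-80 proxy lives here
}

stream {                                     # same level as http, not inside it
    server {
        listen 443;
        proxy_pass 192.168.0.20:443;         # TLS is still terminated on the old machine
    }
}

With this layout the http side must not also listen on 443, since the stream listener owns that port.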
I have a Debian server with MySQL and Meilisearch, and I want to use Nginx as a reverse proxy for future load balancing and for TLS.
I'm following Meilisearch's documentation to set up Nginx with Meilisearch and Let's Encrypt, which works, but it proxies everything to port 7700. I want to proxy to 3306 for MySQL, 7700 for Meilisearch, and 80 for an error page or a fallback web server. But after modifying /etc/nginx/nginx.conf, the website redirects too many times.
This is the configuration I'm trying at /etc/nginx/nginx.conf:
user www-data;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

stream {
    upstream mysql {
        server 127.0.0.1:3306;
    }
    upstream meilisearch {
        server 127.0.0.1:7700;
    }

    server {
        listen 6666;
        proxy_pass mysql;
    }
    server {
        listen 9999;
        proxy_pass meilisearch;
    }
}

http {
    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name example.com;
        return 301 https://$server_name$request_uri;
    }

    server {
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:80;
        }

        listen [::]:443 ssl ipv6only=on;
        listen 443 ssl;
        # ssl_certificate managed by Certbot
        # ssl_certificate_key managed by Certbot
    }
}
The only difference is that example.com is replaced by my actual domain, which already points to the server's IP.
As ippi pointed out, it isn't necessary to reverse proxy MySQL in this particular case.
I used proxy_pass http://127.0.0.1:7700, as the Meilisearch docs suggest.
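For completeness, a sketch of roughly what the fixed 443 server block ends up looking like: it proxies to Meilisearch on 7700 instead of back to port 80, which is what was re-entering the 80-to-443 redirect and causing the loop (the proxy_set_header lines are my assumption of the usual set, not taken from the thread):

server {
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:7700;   # Meilisearch, not port 80
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    # ssl_certificate managed by Certbot
    # ssl_certificate_key managed by Certbot
}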
For future DB load balancing, I'd use MySQL clusters and point to them from another Nginx instance that proxies everything (i.e. HTTPS to the web server, DB access to the list of cluster nodes, and so on).
Also, in this particular case I don't actually need encrypted connections to the DB, but if I did, I'd use a self-signed certificate for MySQL and a CA-signed certificate for the website, since my front ends talk to a "central" Nginx proxy that could encrypt the traffic between the backend servers and the proxy.
If I actually wanted to use the Let's Encrypt certificates for MySQL, I've found a good recipe, but instead of copy-pasting the certificates (which is painful) I'd mount the Let's Encrypt certificate directory into MySQL's, chown it to the mysql user and chmod the keys to 600. Then again, this is probably bad practice, and it would be better to use self-signed certificates for MySQL, as its own docs suggest.
I am attempting to put a load balancer in front of a TURN server for use with WebRTC. I am using a single TURN server in the examples below until I get the load balancer working. The TURN server requires multiple ports, including one UDP port, as listed below:
TCP 80
TCP 443
TCP 3478
TCP 3479
UDP 3478
I have attempted to place an Amazon Elastic Load Balancer (AWS ELB) in front of the TURN server, but it does not support the UDP port. So I am now running Ubuntu on an EC2 instance with all these ports open, and I have installed NGINX.
I've edited the /etc/nginx/nginx.conf file and added a "stream" section to it with an upstream and a server for each port. However, it does not appear to be passing the traffic correctly.
stream {
    # IPv4 section
    upstream turn_tcp_3478 {
        server 192.168.1.100:3478;
    }
    upstream turn_tcp_3479 {
        server 192.168.1.100:3479;
    }
    upstream turn_udp_3478 {
        server 192.168.1.100:3478;
    }

    # IPv6 section
    upstream turn_tcp_ipv6_3478 {
        server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3478;
    }
    upstream turn_tcp_ipv6_3479 {
        server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3479;
    }
    upstream turn_udp_ipv6_3478 {
        server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:3478;
    }

    server {
        listen 3478; # tcp
        proxy_pass turn_tcp_3478;
    }
    server {
        listen 3479; # tcp
        proxy_pass turn_tcp_3479;
    }
    server {
        listen 3478 udp;
        proxy_pass turn_udp_3478;
    }
    server {
        listen [::]:3478;
        proxy_pass turn_tcp_ipv6_3478;
    }
    server {
        listen [::]:3479;
        proxy_pass turn_tcp_ipv6_3479;
    }
    server {
        listen [::]:3478 udp;
        proxy_pass turn_udp_ipv6_3478;
    }
}
I have also created a custom load balancer configuration file at /etc/nginx/conf.d/load-balancer.conf and placed the following in it.
upstream turn_http {
    server 192.168.1.100;
}
upstream turn_https {
    server 192.168.1.100:443;
}
upstream turn_status {
    server 192.168.1.100:8080;
}
upstream turn_ipv6_http {
    server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:80;
}
upstream turn_ipv6_https {
    server [2600:myaw:esom:e:ipv6:addr:eswo:ooot]:443;
}

server {
    listen 80;
    location / {
        proxy_pass http://turn_http;
    }
}

server {
    listen 443 ssl;
    server_name turn.awesomedomain.com;
    ssl_certificate /etc/ssl/private/nginx.ca-bundle;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    location / {
        proxy_pass https://turn_https;
    }
}

server {
    listen 8080;
    location / {
        proxy_pass http://turn_status;
    }
}

server {
    listen [::]:80;
    location / {
        proxy_pass http://turn_ipv6_http;
    }
}

server {
    listen [::]:443 ssl;
    server_name turn.awesomedomain.com;
    ssl_certificate /etc/ssl/private/nginx.ca-bundle;
    ssl_certificate_key /etc/ssl/private/nginx.key;
    location / {
        proxy_pass https://turn_ipv6_https;
    }
}
The http and https traffic appear to be working fine based on the custom load-balancer.conf file.
I am unsure why the TCP/UDP ports I have configured in the nginx.conf file are not working as intended.
Your configuration of the NGINX Load Balancer is fine.
I suggest verifying the following:
The security groups on your Amazon EC2 TURN server instance should have inbound ports matching your load balancer configuration.
Check the configuration files on your TURN server and verify that the ports it listens on are the same ports you are forwarding on your load balancer. For example, you have TCP 3479 being forwarded in your NGINX config; you need to make sure the TURN server is actually listening on that port.
Lastly, you may also need to set up some iptables rules similar to what you have on your TURN server. Review your TURN server's configuration and see whether you need any iptables or ip6tables configuration on the load balancer.
Take a look at this config method link
I'm trying to create an nginx configuration in which nginx receives the PROXY protocol header on the incoming connection and passes it on with the outgoing connection, essentially to propagate the real IP address to the final target. I'm using the following configuration:
stream {
    upstream some_backend {
        server some_host:8090;
    }

    server {
        listen 8090 proxy_protocol;
        proxy_pass some_backend;
        proxy_protocol on;
    }
}
However, the PROXY protocol header I receive on some_backend contains nginx's IP address and not the source IP address.
Is something wrong with the configuration I'm using?
Can this be done at all?
Oops, I did it again...
It seems that the missing part is adding the set_real_ip_from directive with the IP range you use to access nginx:
http://nginx.org/en/docs/stream/ngx_stream_realip_module.html
stream {
    upstream some_backend {
        server some_host:8090;
    }

    server {
        listen 8090 proxy_protocol;
        proxy_pass some_backend;
        proxy_protocol on;
        set_real_ip_from 172.17.0.0/24;
    }
}
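One usage note: the upstream has to expect the PROXY protocol header too, otherwise it sees a garbled byte stream. If some_backend were itself nginx (an assumption; the thread never says what it is), the receiving side might look roughly like this:

http {
    server {
        listen 8090 proxy_protocol;              # expect the PROXY header sent by the proxy above
        set_real_ip_from 172.17.0.0/24;          # trust only the proxy's address range
        real_ip_header proxy_protocol;           # take the client address from the PROXY header
        location / {
            return 200 "client: $remote_addr\n"; # now the original client IP, not the proxy's
        }
    }
}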
I have run into an annoying issue with an Nginx load balancer; please see the following configuration:
http {
    server {
        listen 3333;
        server_name localhost;

        location / {
            proxy_pass http://node;
            proxy_redirect off;
        }
    }

    server {
        listen 7777;
        server_name localhost;

        location / {
            proxy_pass http://auth;
            proxy_redirect off;
        }
    }

    upstream node {
        server localhost:3000;
        server localhost:3001;
    }

    upstream auth {
        server localhost:8079;
        server localhost:8080;
    }
}
So what I want is two load balancers: one sends requests on port 3333 to internal ports 3000 and 3001, and the second sends requests on port 7777 to internal ports 8079 and 8080.
When I test this setup, I notice that all requests to http://localhost:3333 work great and the URL in the address bar always stays the same, but when I visit http://localhost:7777, all requests get redirected to the internal URLs, http://localhost:8080 or http://localhost:8079.
I don't know why there are two different behaviours for the two load balancers. I just want visitors to see only http://localhost:3333 or http://localhost:7777; they should never see the internal ports 8080 or 8079.
Why does the node server on ports 3000 and 3001 work fine, while the Java server on ports 8080 and 8079 does not get its URLs rewritten and only issues redirects?
If you look at the configuration, the two blocks are exactly the same.
Thanks.
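For what it's worth, a hedged sketch of the directives that usually keep internal ports out of the address bar in a setup like this: the Java backend most likely builds absolute redirect URLs from the Host header it receives, so forwarding the original host and port, and mapping any Location headers that still carry the internal ports back to the public one, instead of proxy_redirect off, would look roughly like this (the exact mappings are assumptions about what that backend sends):

server {
    listen 7777;
    server_name localhost;

    location / {
        proxy_pass http://auth;
        proxy_set_header Host $host:$server_port;             # backend sees localhost:7777, not its own port
        # rewrite absolute redirects the backend may still emit with its internal ports
        proxy_redirect http://localhost:8079/ http://$host:7777/;
        proxy_redirect http://localhost:8080/ http://$host:7777/;
    }
}

The node app presumably never issues redirects, which is why the 3333 side looks fine even with proxy_redirect off.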