I'm having an issue with my nginx configuration. We're using a stream block for SSL passthrough, but we plan to point multiple URLs at this nginx instance, and we want to route to different load balancers depending on the requested hostname.
What's currently happening:
Both site1.example.com and site2.example.com show site1.example.com's content.
And if the site1.example.com load balancer stops working, both site1.example.com and site2.example.com end up showing site2.example.com's content.
/etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }
}
stream {
    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/access.log basic;
    error_log /var/log/nginx/error.log;

    map $ssl_preread_server_name $name {
        site1.example.com site1_example_com;
        site2.example.com site2_example_com;
    }

    upstream site1_example_com {
        server site1.amazonaws.com:443 max_fails=3 fail_timeout=10s;
    }

    upstream site2_example_com {
        server site2.amazonaws.com:443 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 443;
        proxy_pass $name;
        ssl_preread on;
    }
}
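For what it's worth, the map directive accepts a default entry; without one, a client that sends no SNI (or an unmatched name) leaves $name empty and the connection simply fails. A sketch of the map with an explicit fallback (routing the fallback to site1_example_com is just an example choice, not necessarily the fix for the behavior above):

```nginx
map $ssl_preread_server_name $name {
    site1.example.com site1_example_com;
    site2.example.com site2_example_com;
    # fallback for clients that send no SNI or an unknown name;
    # site1_example_com here is only an example target
    default           site1_example_com;
}
```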
We have one OpenLDAP server currently active on port 389. Using nginx, we want to proxy TCP port 389 to a TCP-based ingress. Can anyone please share the nginx.conf details for this?
So far I'm left with the incomplete config below:
upstream rtmp_servers {
    server acme.example.com:389;
}

server {
    listen 389;
    server_name localhost:389;
    proxy_pass rtmp_servers;
    proxy_protocol on;
}
I'm getting the error below; any recommendation is appreciated.
2021/03/02 09:45:39 [emerg] 1#1: "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/nginx-auth-tunnel.conf:9
nginx: [emerg] "proxy_pass" directive is not allowed here in /etc/nginx/conf.d/nginx-auth-tunnel.conf:9
Your configuration should be in a stream block.
You don't need server_name localhost:389;
You are including the configuration from the /etc/nginx/conf.d folder, which is included inside the http block of the main nginx.conf file. The stream block must sit at the same level as the http block. Check /etc/nginx/nginx.conf for that include, and you may have to add a separate one for the stream section.
This is a sample nginx.conf,
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf; # This include is your problem
}

stream {
    upstream rtmp_servers {
        server acme.example.com:389;
    }

    server {
        listen 389;
        proxy_pass rtmp_servers;
        proxy_protocol on;
    }
}
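If you'd rather keep the stream servers in their own files, the same fix can be expressed with a second include at the stream level. The /etc/nginx/streams.d/ path here is a hypothetical directory, not an nginx default:

```nginx
# top level of /etc/nginx/nginx.conf, alongside the http block
stream {
    # hypothetical directory holding stream-level server configs
    include /etc/nginx/streams.d/*.conf;
}
```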
I'm building a website that I'm running through a docker container. I am receiving this error
webserver_1 | 2019/10/01 17:50:24 [emerg] 1#1: unexpected "}" in /etc/nginx/nginx.conf:36
webserver_1 | nginx: [emerg] unexpected "}" in /etc/nginx/nginx.conf:36
All of the curly brackets are accounted for, and removing this one (it's the third-to-last brace) would leave me with an uneven number.
This is my nginx.conf. Does anyone know what's going on?
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        location /api {
            auth_basic 'closed site';
            auth_basic_user_file /etc/nginx/conf.d/.htpasswd
        }
    }
}
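For reference, nginx typically reports unexpected "}" when the directive just before the brace is missing its terminating semicolon: the parser keeps consuming tokens until it finds a ; and then trips over the closing brace. In the config above, the auth_basic_user_file line has no semicolon. A sketch of the corrected location block:

```nginx
server {
    location /api {
        auth_basic 'closed site';
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;  # note the semicolon
    }
}
```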
My design is:
Media Server -> edge servers (multiple Nginx cache servers -> Nginx load balancer)
This is my private CDN system (for live content delivery). I have a content source and multiple edges; in each edge there are multiple cache servers and a load balancer.
I started step by step, and at this stage I've hit a problem with the Nginx load balancer.
In this configuration I am balancing between two servers, s1 and s2, but when I check traffic with nload, I see heavy traffic on the primary server (the load balancer): for example, nload shows s1 = 1 Gbps, s2 = 1 Gbps, load balancer = 2 Gbps.
Note: my content is HLS (.m3u8).
user www-data;
worker_processes 5; ## Default: 1
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
worker_rlimit_nofile 8192;

events {
    worker_connections 4096; ## Default: 1024
}

http {
    include mime.types;
    include /etc/nginx/proxy.conf;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;

    # upstream
    upstream origins {
        server s1.ip;
        server s2.ip;
    }

    # default route
    server {
        listen 80;
        server_name example.com;
        access_log /var/log/nginx/example.com main;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://origins;
        }
    }
}
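As a side note on the traffic numbers: because the load balancer proxies every request, it necessarily carries the combined bandwidth of s1 and s2 (1 Gbps + 1 Gbps = 2 Gbps), once inbound from the origins and once outbound to clients. One common way to reduce upstream traffic for HLS is to cache segments on the balancer itself with proxy_cache. This is only a sketch with assumed paths, zone names, and TTLs, not a tuned config:

```nginx
# in the http block; the cache path and zone name are assumptions
proxy_cache_path /var/cache/nginx/hls levels=1:2 keys_zone=hls_cache:10m
                 max_size=10g inactive=1m use_temp_path=off;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header Host $host;
        proxy_cache hls_cache;
        # live playlists change constantly, so keep the TTL very short
        proxy_cache_valid 200 2s;
        proxy_pass http://origins;
    }
}
```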
I installed nginx with virtual hosting enabled and a single site currently hosted.
My nginx.conf:
user nginxsite;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
    #server_names_hash_bucket_size 64;
}
I assume that user nginxsite; refers to the user (created with useradd) that owns the root directory; the default is just nginx.
My virtual.conf inside /etc/nginx/conf.d/:
server {
    listen 80;
    #listen *:80;
    server_name www.nginxsite.domain.com *.nginxsite.domain.com;
    #access_log /var/log/nginx/access.log

    location / {
        root /var/www/nginxsite.com/public_html/;
        index index.html index.htm;
    }
}
The server name and IP have already been added to my hosts file:
XX.XX.XX.XX www.nginxsite.domain.com
I'm pretty sure the issue lies in my conf files, but I can't seem to pinpoint where. I checked the logs, but there's nothing there.
Please help. Thanks so much!
Below is my nginx.conf file. I have two upstreams, http_upstream and tcp_upstream. I duplicate HTTP traffic and send it to load-balancer2.example.com:80 using post_action; now I am wondering if I could duplicate TCP/UDP traffic using something similar to post_action.
daemon on;
user root;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    upstream tcp_upstream {
        server server1.example.com:2000;
        server server2.example.com:2000;
        server server3.example.com:2000;
    }

    server {
        listen 2000;
        proxy_pass tcp_upstream;
    }
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    client_header_buffer_size 512k;
    large_client_header_buffers 4 512k;
    client_max_body_size 1000M;
    client_body_buffer_size 1000M;

    upstream http_upstream {
        server server1.example.com:8088;
        server server2.example.com:8088;
        server server3.example.com:8088;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://http_upstream/;
            post_action @post_action;
        }

        location @post_action {
            proxy_pass http://load-balancer2.example.com:80;
        }
    }

    include /etc/nginx/conf.d/*.conf;
}
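Since nginx 1.13.4, the http mirror module is the usual replacement for post_action-style duplication on the HTTP side; as far as I know, there is no equivalent directive in the open-source stream module, so TCP/UDP mirroring would need an external tool. A sketch of the HTTP location rewritten with mirror (same upstreams as above):

```nginx
location / {
    mirror /mirror;               # duplicate each request as a subrequest
    proxy_pass http://http_upstream/;
}

location = /mirror {
    internal;                     # not reachable from outside
    proxy_pass http://load-balancer2.example.com$request_uri;
}
```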