I am trying to use the nginx proxy_bind directive to make upstream traffic to a gRPC server use a specific network interface. For some reason, nginx seems to be completely ignoring the directive and just using the default network interface. I tried using the proxy_bind directive for a different server that doesn't use gRPC (it is HTTP/1.1, I believe) and that worked fine, so I am led to believe that nginx is ignoring the proxy_bind directive because of something related to the server being a gRPC server. I have confirmed that it works for the normal server and not the gRPC server by running ss and looking for traffic originating from the IP I am trying to bind to. There was traffic, but it was only ever going to the normal server. All traffic to the gRPC server had the default local IP.
This is the server config block for the gRPC server where proxy_bind is not working:
server {
listen 8980 http2;
# set client body size to 128M to prevent HTTP 413 errors
client_max_body_size 128M;
# set client buffer size to 128M to prevent writing temp files
client_body_buffer_size 128M;
# We have plenty of RAM, up the output_buffers
output_buffers 256 128M;
# Allow plenty of connections
http2_max_concurrent_streams 100000;
keepalive_requests 100000;
keepalive_timeout 120s;
# Be forgiving with grpc
grpc_connect_timeout 240;
grpc_read_timeout 2048;
grpc_send_timeout 2048;
grpc_socket_keepalive on;
proxy_bind <local ip>;
location / {
proxy_set_header Host $host;
grpc_pass grpc://my8980;
}
}
and this is the server config block for a normal server where proxy_bind is working:
server {
listen 4646;
# set client body size to 16M to prevent HTTP 413 errors
client_max_body_size 64M;
# set client buffer size to 32M to prevent writing temp files
client_body_buffer_size 64M;
# We have plenty of RAM, up the output_buffers
output_buffers 64 64M;
# Fix Access-Control-Allow-Origin header
proxy_hide_header Access-Control-Allow-Origin;
add_header Access-Control-Allow-Origin $http_origin;
proxy_bind <local ip>;
location / {
proxy_set_header Host $host;
proxy_pass http://myapp1;
proxy_buffering off;
}
}
The grpc_pass directive belongs to ngx_http_grpc_module, while the proxy_set_header, proxy_hide_header and proxy_bind directives come from ngx_http_proxy_module. Those are two different modules. The ngx_http_grpc_module has its own grpc_set_header, grpc_hide_header and grpc_bind analogs, which should be used instead.
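As a minimal sketch (keeping the question's <local ip> placeholder and upstream name), the relevant parts of the gRPC server block would become something like:

server {
    listen 8980 http2;
    # ... keepalive, buffer and grpc timeout settings as in the question ...

    # ngx_http_grpc_module analog of proxy_bind: make upstream gRPC
    # connections originate from the given local address
    grpc_bind <local ip>;

    location / {
        # proxy_set_header has no effect with grpc_pass; use grpc_set_header
        # here instead if a request header actually needs to be overridden
        grpc_pass grpc://my8980;
    }
}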
I'm building a proxy server that should receive HTTPS requests on port 9700 and send them as-is to another webserver on another machine, also on port 9700, where the requests will be processed by the relevant application.
I have tried multiple Nginx configurations; here is the last configuration I tried:
On the proxy machine:
server {
listen 9700 ssl;
ssl_certificate /etc/nginx/cert/example.crt;
ssl_certificate_key /etc/nginx/cert/example.key;
ssl_client_certificate /etc/nginx/cert/example.crt;
ssl_verify_client on;
location / {
proxy_pass https://example.myhost.com:9700/;
proxy_set_header User-Agent "";
set $max_chunk_size 10485760;
set $max_body_size 10485760;
proxy_http_version 1.1;
client_max_body_size 10M;
}
}
On the second machine that should process the requests:
upstream receiver {
server reciverIP:PORT;
}
server {
listen 9700 ssl;
ssl_certificate /etc/nginx/cert/example.crt;
ssl_certificate_key /etc/nginx/cert/example.key;
ssl_client_certificate /etc/nginx/cert/example.crt;
ssl_verify_client on;
location / {
proxy_set_header User-Agent "";
proxy_pass http://receiver/;
set $max_chunk_size 10485760;
set $max_body_size 10485760;
proxy_http_version 1.1;
client_max_body_size 10M;
}
}
The result is that the proxy server seems to succeed in forwarding the requests, but the receiver server replies with a 400 error.
In the error log, I see an error about the certificate, even though the certificate is configured for the example.myhost.com DNS name and is present in both configurations. This is the error message:
2022/06/06 18:08:23 [info] 8484#8484: *677 client sent no required SSL certificate while reading client request headers, client: IP, server: , request: "POST /SOMEINFO?key=902e6d820cb84ytdaaa618ae74f677e0&token=3af69f74db7872f89f67b5154c41f4de HTTP/1.0", host: "example.myhost.com:9700"
Any ideas on how I can make this work would be deeply appreciated.
I tried to follow this solution but it didn't work at all. Also, thanks to this article, I managed to get to where I am now.
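Not an answer from the thread, but judging from the error message: since the receiver has ssl_verify_client on, the proxy machine also has to present a client certificate on its upstream connection. A minimal sketch of that part of the proxy's location block, reusing the certificate paths from the question (whether that certificate is acceptable as a client certificate is an assumption):

location / {
    proxy_pass https://example.myhost.com:9700/;
    proxy_http_version 1.1;

    # client certificate that the proxy presents to the receiver
    proxy_ssl_certificate /etc/nginx/cert/example.crt;
    proxy_ssl_certificate_key /etc/nginx/cert/example.key;

    # optionally verify the receiver's certificate and send SNI
    proxy_ssl_trusted_certificate /etc/nginx/cert/example.crt;
    proxy_ssl_verify on;
    proxy_ssl_server_name on;
}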
I have 3 docker containers in the same network:
Storage (golang) - it provides API for uploading video files.
Streamer (nginx) - it streams uploaded files
Reverse Proxy (let's call it just Proxy)
I have HTTPS protocol between User and Proxy.
Let's assume that there is a file with id=c14de868-3130-426a-a0cc-7ff6590e9a1f and the User wants to see it. So the User makes a request to https://stream.example.com/hls/master.m3u8?id=c14de868-3130-426a-a0cc-7ff6590e9a1f. The Streamer knows the video id (from the query param), but it doesn't know the path to the video, so it makes a request to the Storage and exchanges the video id for the video path. Actually it does a proxy_pass to http://docker-storage/getpath?id=c14de868-3130-426a-a0cc-7ff6590e9a1f.
docker-storage is an upstream server, and the protocol is http because I have no SSL connection between docker containers on the local network.
After the Streamer gets the path to the file it starts streaming. But the User's browser starts throwing Mixed Content errors, because the first request was over HTTPS and after proxy_pass it became HTTP.
Here is nginx.conf file (it is a Streamer container):
worker_processes auto;
events {
use epoll;
}
http {
error_log stderr debug;
default_type application/octet-stream;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
vod_mode local;
vod_metadata_cache metadata_cache 16m;
vod_response_cache response_cache 512m;
vod_last_modified_types *;
vod_segment_duration 9000;
vod_align_segments_to_key_frames on;
vod_dash_fragment_file_name_prefix "segment";
vod_hls_segment_file_name_prefix "segment";
vod_manifest_segment_durations_mode accurate;
open_file_cache max=1000 inactive=5m;
open_file_cache_valid 2m;
open_file_cache_min_uses 1;
open_file_cache_errors on;
aio on;
upstream docker-storage {
# There is a docker container called storage on the same network
server storage:9000;
}
server {
listen 9000;
server_name localhost;
root /srv/static;
location = /exchange-id-to-path {
proxy_pass $auth_request_uri;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Original-URI $request_uri;
# I tried to experiment with this header
proxy_set_header X-Forwarded-Proto https;
set $filepath $upstream_http_the_file_path;
}
location /hls {
# I use auth_request module just to get the path from response header (The-File-Path)
set $auth_request_uri "http://docker-storage/getpath?id=$arg_id";
auth_request /exchange-id-to-path;
auth_request_set $filepath $upstream_http_the_file_path;
# Here I provide path to the file I want to stream
vod hls;
alias $filepath/$arg_id;
}
}
}
Here is a screenshot from the browser console:
Here is the response from a successful (200) request:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1470038,RESOLUTION=1280x720,FRAME-RATE=25.000,CODECS="avc1.4d401f,mp4a.40.2"
http://stream.example.com/hls/index-v1-a1.m3u8
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=171583,RESOLUTION=1280x720,CODECS="avc1.4d401f",URI="http://stream.example.com/hls/iframes-v1-a1.m3u8"
The question is how to save https protocol after proxy_pass to http?
p.s. I use Kaltura nginx-vod-module for streaming video files.
I think proxy_pass isn't the problem here. When the vod module returns the index path it uses an absolute URL with HTTP protocol. A relative URL should be enough since the index file and the chunks are under the same domain (if I understood it correctly).
Try setting vod_hls_absolute_index_urls off; (and vod_hls_absolute_master_urls off; as well), so your browser will send requests relative to the stream.example.com domain over HTTPS.
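For reference, a sketch of where those flags would sit, next to the other vod_* directives already in the http block of the Streamer's nginx.conf (the iframe variant is an addition here, on the assumption that iframe playlists are also served):

http {
    # ... existing vod_* configuration ...
    vod_hls_absolute_master_urls off;
    vod_hls_absolute_index_urls off;
    vod_hls_absolute_iframe_urls off;
    # ...
}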
We use NGINX in docker swarm, as a reverse proxy. NGINX sits within the overlay network and relays external requests on to the relevant swarm service.
However, we have an issue where every time we restart, update, or otherwise take down a swarm service, NGINX returns 502 Bad Gateway. NGINX then continues to serve a 502 even after the service is restarted, and this is not corrected until we restart the NGINX service, which obviously defeats the whole point of having a load balancer and services running in multiple places.
Here is our NGINX CONF:
events {}
http {
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
client_max_body_size 20M;
large_client_header_buffers 8 256k;
client_header_buffer_size 256k;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
map $host $client {
default clientname;
}
#Healthcheck
server {
listen 443;
listen 444;
location /is-healthy {
access_log off;
return 200;
}
}
#Example service:
server {
listen 443;
server_name scheduler.clientname.com;
location / {
resolver 127.0.0.11 ipv6=off;
proxy_pass http://$client-scheduler:60911;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
#catch-all
server {
listen 443;
listen 444;
server_name _;
location / {
return 404 'Page not found';
}
}
}
We use the $client placeholder as otherwise we can't even start nginx when one of the services is down.
The other alternative is to use an upstream directive that has health checks, which can work well. Issue with this is that if any of the services are unavailable, NGINX won't even start!
What are we doing wrong?
UPDATE
It appears what we want here is impossible (please prove me wrong though!). Seems crazy to miss such a feature in the world of docker and micro-services!
We are currently looking at HAPROXY as an alternative, as this can be setup with default-server init-addr none to stop failure on startup.
Here is how I do it: create an upstream with max_fails=0
upstream docker-api {
server docker.api:80 max_fails=0;
}
# load configs
server {
listen 80;
listen [::]:80;
server_name localhost;
location /api {
proxy_pass http://docker-api;
proxy_http_version 1.1;
proxy_cache_bypass $http_upgrade;
# Others config...
}
}
I had the same problem using docker-compose. The Nginx container could not connect to the web service after docker-compose restart.
Finally I figured out that two circumstances cause this glitch. First, docker-compose restart does not follow depends_on, which should restart nginx after the web container is restarted. Second, docker-compose restart assigns new internal IP addresses to the containers, and nginx does not refresh the web container's IP address after it starts up.
My solution is to define a variable to force nginx to resolve the IP every time:
location /api {
set $web_service "http://web_container_name:13579";
proxy_pass $web_service;
}
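One caveat not stated above: when proxy_pass uses a variable, nginx resolves the name at request time, so a resolver directive is usually needed as well. A sketch using Docker's embedded DNS (the valid= interval is an assumption):

location /api {
    resolver 127.0.0.11 valid=10s ipv6=off;  # Docker's embedded DNS
    set $web_service "http://web_container_name:13579";
    proxy_pass $web_service;
}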
Short version:
I want to use NGINX as a reverse proxy so that a client accessing the public facing URL gets served API data from the internal Gunicorn server sitting behind the proxy:
external path (proxy) => internal app
<static IP>/ABC/data => 127.0.0.1:8001/data
I'm not getting the location mapping correct.
Long version:
I am setting up NGINX for the first time and am attempting to use it as a reverse proxy for a REST API served by Gunicorn. The API is served at 127.0.0.1:8001 and I can access it from the server and get the appropriate responses, so I believe that piece is working correctly. It's running persistently using Supervisord.
I'd like to access one of the API endpoints externally at <static IP>/ABC/data. On the Gunicorn server, this endpoint is available at localhost:8001/data. Eventually I'd like to serve other web apps through NGINX with roots like <static IP>/foo, <static IP>/bar, etc., each backed by an independent Python app. But currently, when I try to access the endpoint externally, I get a 444 error code, so I think I am not configuring NGINX correctly.
I put together my first attempt at an NGINX config from the config posted on the Gunicorn site. Instead of a single config, I've split it into a global config and a site-specific one. My global config at /etc/nginx/nginx.conf looks like:
user ops;
worker_processes 1;
pid /run/nginx.pid;
error_log /tmp/nginx.error.log;
events {
worker_connections 1024; # increase if you have lots of clients
accept_mutex off; # set to 'on' if nginx worker_processes > 1
use epoll;
# 'use epoll;' to enable for Linux 2.6+
# 'use kqueue;' to enable for FreeBSD, OSX
}
http {
include mime.types;
# fallback in case we can't determine a type
default_type application/octet-stream;
access_log /tmp/nginx.access.log combined;
sendfile on;
server_tokens off;
server {
# if no Host match, close the connection to prevent host spoofing
listen 80 default_server;
return 444;
}
gzip on;
gzip_disable "msie6";
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Then my site specific configuration that is in /etc/nginx/sites-available (and is symlinked in /etc/nginx/sites-enabled) is:
upstream app_server {
# fail_timeout=0 means we always retry an upstream even if it failed
# to return a good HTTP response
# for UNIX domain socket setups
# server unix:/tmp/gunicorn_abc_api.sock fail_timeout=0;
# for a TCP configuration
server 127.0.0.1:8001 fail_timeout=0;
}
server {
# use 'listen 80 deferred;' for Linux
# use 'listen 80 accept_filter=httpready;' for FreeBSD
listen 80 deferred;
client_max_body_size 4G;
# set the correct host(s) for your site
server_name _;
keepalive_timeout 100;
# path for static files
#root /path/to/app/current/public;
location /ABC {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS
# proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
proxy_pass http://app_server;
}
# error_page 500 502 503 504 /500.html;
# location = /500.html {
# root /path/to/app/current/public;
# }
}
The configs pass service nginx checkconfig, but I end up seeing the following in my access log:
XXX.XXX.X.XXX - - [09/Sep/2016:01:03:18 +0000] "GET /ABC/data HTTP/1.1" 444 0 "-" "python-requests/2.10.0"
I think I've somehow not configured the routes properly. Any suggestions would be appreciated.
UPDATE:
I have it working now with a few changes. I commented out the following block:
server {
# if no Host match, close the connection to prevent host spoofing
listen 80 default_server;
return 444;
}
I can't figure out how to get the behavior of returning 444 unless there is a valid route. I'd like to, but I'm still stuck on this part. This block seems to eat all incoming requests. I've also changed the app config to:
upstream app_server {
server 127.0.0.1:8001 fail_timeout=0;
}
server {
# use 'listen 80 deferred;' for Linux
# use 'listen 80 accept_filter=httpready;' for FreeBSD
listen 80 deferred;
client_max_body_size 100M;
# set the correct host(s) for your site
server_name $hostname;
keepalive_timeout 100;
location /ABC {
# checks for static file, if not found proxy to app
try_files $uri @proxy_to_app;
}
location @proxy_to_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# enable this if and only if you use HTTPS
# proxy_set_header X-Forwarded-Proto https;
proxy_set_header Host $http_host;
# we don't want nginx trying to do something clever with
# redirects, we set the Host: header above already.
proxy_redirect off;
rewrite ^/ABC/(.*) /$1 break;
proxy_pass http://app_server;
}
}
Basically I seem to have had to explicitly set server_name and also use rewrite to get the correct mapping to the app server.
This works fine for me, returns 444 (hangs up connection) only if no other server name is matched:
server {
listen 80;
server_name "";
return 444;
}
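A sketch of how that catch-all could be combined with the app config above, so that unmatched hosts still get 444 while requests for the real hostname reach the app (the hostname is a placeholder):

# default: no server_name matched, close the connection
server {
    listen 80 default_server;
    server_name "";
    return 444;
}

# the app, matched by its hostname
server {
    listen 80;
    server_name myapp.example.com;  # placeholder for the server's actual hostname

    location /ABC {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        rewrite ^/ABC/(.*) /$1 break;
        proxy_pass http://app_server;
    }
}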
I need some help from some linux gurus. I am working on a webapp that includes a comet server. The comet server runs on localhost:8080 and exposes the url localhost:8080/long_polling for clients to connect to. My webapp runs on localhost:80.
I've used nginx to proxy requests to the comet server (localhost:80/long_polling proxied to localhost:8080/long_polling); however, I have two gripes with this solution:
nginx gives me a 504 Gateway Timeout after a minute, even though I changed EVERY single timeout setting to 600 seconds
I don't really want nginx to have to proxy to the comet server anyway - the nginx proxy is not built for long lasting connections (up to half an hour possibly). I would rather allow the clients to directly connect to the comet server, and let the comet server deal with it.
So my question is: is there any linux trick that allows me to expose localhost:8080/long_polling to localhost:80/long_polling without using the nginx proxy? There must be something. That's why I think this question can probably be best answered by a linux guru.
The reason I need /long_polling to be exposed on port 80 is so I can use AJAX to connect to it (ajax same-origin-policy).
This is my nginx proxy.conf for reference:
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 600;
proxy_buffering off;
Here's my nginx.conf and my proxy.conf. Note however that the proxy.conf is way overkill - I was just setting all these settings while trying to debug my program.
/etc/nginx/nginx.conf
worker_processes 1;
user www-data;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/proxy.conf;
include /etc/nginx/mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
tcp_nopush on;
keepalive_timeout 600;
tcp_nodelay on;
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
/etc/nginx/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 6000;
proxy_send_timeout 6000;
proxy_read_timeout 6000;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 6000;
proxy_buffering off;
proxy_next_upstream error;
I actually managed to get this working now. Thank you all. The reason nginx was 504 timing out was a silly one: I hadn't included proxy.conf in my nginx.conf like so:
include /etc/nginx/proxy.conf;
So, I'm keeping nginx as a frontend proxy to the COMET server.
I don't think that is possible ...
localhost:8080/long_polling is a URI ... more exactly, it should be http://localhost:8080/long_polling ... in HTTP the URI would be resolved as requesting /long_polling, on port 80, from the server at the domain 'localhost' ... that is, opening a TCP connection to 127.0.0.1:80 and sending
GET /long_polling HTTP/1.1
Host: localhost:8080
plus some additional HTTP headers ... I haven't heard that ports can be bound across processes ...
Actually, if I understand correctly, nginx was designed to be a scalable proxy ... also, they claim it needs 2.5 MB for 10000 idle HTTP connections ... so that really shouldn't be a problem ...
What comet server are you using? Could you maybe let the comet server proxy to a webserver? Normal HTTP requests should be handled quickly ...
greetz
back2dos
Try
proxy_next_upstream error;
The default is
proxy_next_upstream error timeout;
The connect timeout (proxy_connect_timeout) cannot be more than 75 seconds.
http://wiki.nginx.org/NginxHttpProxyModule#proxy_next_upstream
http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout
There is now a Comet plugin for Nginx. It will probably solve your issues quite nicely.
http://www.igvita.com/2009/10/21/nginx-comet-low-latency-server-push/
Without doing some serious TCP/IP munging, you can't expose two applications on the same TCP port on the same IP address. Once nginx has started to service the connection, it can't pass it to another application; it can only proxy it.
So, either use another port, another IP address (which could be on the same physical machine), or live with the proxy.
Edit: I guess nginx is timing out because it doesn't see any activity for a long time. Maybe sending a null message every few minutes could keep the connection from failing.
You might want to try listen(80) on the node.js server instead of 8080 (I presume you are using it as an async server?) and potentially leave out Nginx altogether. I use Connect middleware and Express to serve static files and deal with caching that would normally be handled by Nginx. If you want to have multiple instances of node running (which I would advise), you might want to look into node.js itself as a proxy / load balancer to other node instances rather than Nginx as your gateway. I ran into a problem with this when I was serving too many static image files at once, but after I put the images on S3 it stabilized. Nginx MAY be overkill for what you are doing. Try it and see. Best of luck.