This is my nginx config:
worker_processes auto;
user nginx;

events {
    worker_connections 1024;
    use epoll;
}

http {
    tcp_nodelay on;
    add_header Cache-Control no-cache;

    upstream servers {
        server 127.0.0.1:9999;
        server 127.0.0.1:9998;
    }

    proxy_request_buffering on;

    server {
        listen 80;
        client_max_body_size 200M;
        client_body_buffer_size 200M;
        server_name localhost;

        location / {
            try_files $uri @proxy_upload;
        }

        location @proxy_upload {
            proxy_pass_request_body on;
            proxy_pass http://servers;
        }
    }
}
I am trying to upload files with something like a chunked upload: files are larger than 2 GB and the client sends them in chunks. The nginx config and the script are working, but nginx is not behaving as expected.

You see, I turned proxy_request_buffering on, so I expect nginx to buffer all 200M of the file and then pass it to the back-end (which is Tornado/Python) at once. Instead, nginx passes it to the back-end in 1M or 2M chunks. This behavior leads to much higher CPU usage, higher system load, and lower upload speed; it is not much different from setting proxy_request_buffering to off, so I think I am doing something wrong here.

Why is nginx not buffering correctly, and how am I supposed to make nginx buffer the whole request and then pass it on at once?

I tried to use post_action, but I couldn't pass the request body to the back-end.

UPDATE: nginx is buffering correctly, in the sense that it passes the request body to the back-end as soon as the whole file has been uploaded by the client. But it passes the body to the back-end in smaller chunks rather than all at once: it has the whole body, yet it won't pass the whole body in one piece. How can I tell nginx to pass the request body to the back-end at once?
Looks like you need to turn on proxy_buffering, set proxy_buffers to set the number and size of the buffers, and probably proxy_buffer_size to ensure it doesn't overuse memory:
location @proxy_upload {
    proxy_buffering on;
    proxy_buffers 10 200m;
    proxy_buffer_size 1000m;
    proxy_pass_request_body on;
    proxy_pass http://servers;
}
Not sure how much traffic your upstream server gets, but I could imagine buffers this large aren't that efficient... What web server are you running upstream that requires such a large buffer? Perhaps try incrementing the buffer 1m at a time until you find a decent sweet spot.
From: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers
And: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size
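As a starting point for that tuning, a hedged sketch with more modest values might look like this (the numbers are illustrative only, not tuned for any particular workload):

location @proxy_upload {
    proxy_buffering on;
    # start small and grow the buffer sizes until a sweet spot is found
    proxy_buffers 8 1m;
    proxy_buffer_size 64k;
    proxy_pass_request_body on;
    proxy_pass http://servers;
}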
I am trying to use the nginx proxy_bind directive to have upstream traffic to a gRPC server use a specific network interface. For some reason, nginx seems to be completely ignoring the directive and just using the default network interface. I tried using proxy_bind for a different server that doesn't use gRPC (it is HTTP/1.1, I believe) and that worked fine, so I am led to believe that nginx is ignoring proxy_bind because of something related to the server being a gRPC server.

I have confirmed that it works for the normal server and not the gRPC server by running ss and looking for traffic originating from the IP I am trying to bind. There was traffic, but it was only ever going to the normal server. All traffic to the gRPC server had the default local IP.
This is the server config block for the gRPC server where proxy_bind is not working:
server {
    listen 8980 http2;

    # set client body size to 128M to prevent HTTP 413 errors
    client_max_body_size 128M;

    # set client buffer size to 128M to prevent writing temp files
    client_body_buffer_size 128M;

    # We have plenty of RAM, up the output_buffers
    output_buffers 256 128M;

    # Allow plenty of connections
    http2_max_concurrent_streams 100000;
    keepalive_requests 100000;
    keepalive_timeout 120s;

    # Be forgiving with grpc
    grpc_connect_timeout 240;
    grpc_read_timeout 2048;
    grpc_send_timeout 2048;
    grpc_socket_keepalive on;

    proxy_bind <local ip>;

    location / {
        proxy_set_header Host $host;
        grpc_pass grpc://my8980;
    }
}
And this is the server config block for a normal server where proxy_bind is working:
server {
    listen 4646;

    # set client body size to 64M to prevent HTTP 413 errors
    client_max_body_size 64M;

    # set client buffer size to 64M to prevent writing temp files
    client_body_buffer_size 64M;

    # We have plenty of RAM, up the output_buffers
    output_buffers 64 64M;

    # Fix Access-Control-Allow-Origin header
    proxy_hide_header Access-Control-Allow-Origin;
    add_header Access-Control-Allow-Origin $http_origin;

    proxy_bind <local ip>;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://myapp1;
        proxy_buffering off;
    }
}
The grpc_pass directive belongs to ngx_http_grpc_module, while the proxy_set_header, proxy_hide_header and proxy_bind directives come from ngx_http_proxy_module. Those are two different modules. The ngx_http_grpc_module has its own grpc_set_header, grpc_hide_header and grpc_bind analogs to be used instead.
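For illustration, a minimal sketch of the gRPC server block with the grpc_* analogs swapped in (the <local ip> placeholder is kept from the question, and only the directives relevant to the problem are shown):

server {
    listen 8980 http2;

    # grpc_bind is the grpc module's analog of proxy_bind
    grpc_bind <local ip>;

    location / {
        # grpc_set_header is the grpc module's analog of proxy_set_header
        grpc_set_header Host $host;
        grpc_pass grpc://my8980;
    }
}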
I added a cache to my Nginx reverse proxy so it can cache all the API responses. But after restarting my Nginx server and confirming that there are no errors in my configuration file, my cache directory is still empty, and I'm not noticing any performance improvements.

Am I expecting something that is not supposed to happen right away, or am I doing something wrong?

Here is my configuration file. It's stored in /etc/nginx/sites-available/api.example.com.conf and I made sure to symlink it inside /etc/nginx/sites-enabled.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m max_size=30g inactive=60m use_temp_path=off;

server {
    server_name api.example.com;

    client_max_body_size 10G;
    proxy_read_timeout 86400;
    proxy_connect_timeout 86400;
    proxy_send_timeout 86400;

    location / {
        proxy_cache api_cache;
        proxy_pass http://127.0.0.1:4040;
    }
}
Please confirm that your cache directory has write permission for the nginx worker user, and also make sure that the port is not already occupied by another service.
I have 3 docker containers in the same network:

Storage (golang) - it provides an API for uploading video files.
Streamer (nginx) - it streams the uploaded files.
Reverse Proxy (let's call it just Proxy)

I have HTTPS between User and Proxy.

Let's assume that there is a file with id=c14de868-3130-426a-a0cc-7ff6590e9a1f and User wants to see it. So User makes a request to https://stream.example.com/hls/master.m3u8?id=c14de868-3130-426a-a0cc-7ff6590e9a1f. Streamer knows the video id (from the query param), but it doesn't know the path to the video, so it makes a request to Storage and exchanges the video id for the video path. Actually it does a proxy_pass to http://docker-storage/getpath?id=c14de868-3130-426a-a0cc-7ff6590e9a1f.

docker-storage is an upstream server, and the protocol is HTTP because there is no SSL connection between the docker containers on the local network.

After Streamer gets the path to the file it starts streaming. But the User's browser starts throwing a Mixed Content error, because the first request went over HTTPS and after proxy_pass it became HTTP.
Here is nginx.conf file (it is a Streamer container):
worker_processes auto;

events {
    use epoll;
}

http {
    error_log stderr debug;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    vod_mode local;
    vod_metadata_cache metadata_cache 16m;
    vod_response_cache response_cache 512m;
    vod_last_modified_types *;
    vod_segment_duration 9000;
    vod_align_segments_to_key_frames on;
    vod_dash_fragment_file_name_prefix "segment";
    vod_hls_segment_file_name_prefix "segment";
    vod_manifest_segment_durations_mode accurate;

    open_file_cache max=1000 inactive=5m;
    open_file_cache_valid 2m;
    open_file_cache_min_uses 1;
    open_file_cache_errors on;
    aio on;

    upstream docker-storage {
        # There is a docker container called storage on the same network
        server storage:9000;
    }

    server {
        listen 9000;
        server_name localhost;
        root /srv/static;

        location = /exchange-id-to-path {
            proxy_pass $auth_request_uri;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Original-URI $request_uri;
            # I tried to experiment with this header
            proxy_set_header X-Forwarded-Proto https;
            set $filepath $upstream_http_the_file_path;
        }

        location /hls {
            # I use the auth_request module just to get the path from a response header (The-File-Path)
            set $auth_request_uri "http://docker-storage/getpath?id=$arg_id";
            auth_request /exchange-id-to-path;
            auth_request_set $filepath $upstream_http_the_file_path;
            # Here I provide the path to the file I want to stream
            vod hls;
            alias $filepath/$arg_id;
        }
    }
}
Here is a screenshot from the browser console:

Here is the response from a successful (200) request:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1470038,RESOLUTION=1280x720,FRAME-RATE=25.000,CODECS="avc1.4d401f,mp4a.40.2"
http://stream.example.com/hls/index-v1-a1.m3u8
#EXT-X-I-FRAME-STREAM-INF:BANDWIDTH=171583,RESOLUTION=1280x720,CODECS="avc1.4d401f",URI="http://stream.example.com/hls/iframes-v1-a1.m3u8"
The question is: how do I keep the HTTPS protocol after proxy_pass to HTTP?

P.S. I use the Kaltura nginx-vod-module for streaming video files.
I think proxy_pass isn't the problem here. When the vod module returns the index path, it uses an absolute URL with the HTTP protocol. A relative URL should be enough, since the index file and the chunks are under the same domain (if I understood it correctly).

Try setting vod_hls_absolute_index_urls off; (and vod_hls_absolute_master_urls off; as well), so your browser sends requests relative to the stream.example.com domain using HTTPS.
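For example, a minimal sketch of where those directives could go in the Streamer's http block (directive names are from the nginx-vod-module documentation; the iframe variant is an assumption, included because the sample playlist above contains an EXT-X-I-FRAME-STREAM-INF URI):

http {
    # emit relative playlist URLs so the browser keeps the original (HTTPS) scheme
    vod_hls_absolute_master_urls off;
    vod_hls_absolute_index_urls off;
    vod_hls_absolute_iframe_urls off;

    # ... the rest of the Streamer config from above ...
}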
As a preface, I have been using the following stack for some time with great success:

NGINX - web proxy
SSL - configured in nginx
Pyramid web application, served by gunicorn

The above combo works great; here is a working configuration.
server {
    # listen on port 80
    listen 80;
    server_name portalapi.example.com;

    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

server {
    # listen on port 80
    listen 80;
    server_name www.portalapi.example.com;

    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

# ssl server
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /usr/local/etc/letsencrypt/live/portalapi.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/portalapi.example.com/privkey.pem;
    server_name www.portalapi.example.com;

    client_max_body_size 10M;
    client_body_buffer_size 128k;

    location ~ /.well-known/acme-challenge/ {
        root /usr/local/www/nginx/portalapi;
        allow all;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:8005;
        #proxy_intercept_errors on;
        allow all;
    }

    error_page 404 500 502 503 504 /index.html;

    location = / {
        root /home/luke/ecom2/dist;
    }
}
Now, this is how I serve my public-facing apps, and it works very well. For all my internal applications, I used to simply direct users to an internal domain, for example http://subdomain.company.domain; again, this worked well for a long time.

Now, in the wake of the KRACK attack, although we have some very thorough firewall rules to prevent a lot of attacks, I want to force all internal traffic through SSL, and I don't want to use a self-signed certificate. I want to use Let's Encrypt so I can auto-renew certificates, which makes administration much easier (and cheaper).

In order to use Let's Encrypt, I need to have a public-facing DNS entry and server to perform the ACME challenge (for auto-renewal). Again, this was a very easy thing to set up in nginx, and the below config works perfectly for serving static content.

What it does: if a user from the internet accesses intranet.example.com, they simply see a forbidden message. However, if a local user tries, they get forwarded to intranet.example.com:8002, and port 8002 is only available locally, so there is no way external users can access a webpage on this site.
geo $local_user {
    192.168.155.0/24 0;
    172.16.10.0/28 1;
    172.16.155.0/24 1;
}

server {
    listen 80;
    server_name intranet.example.com;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    # Space for Let's Encrypt to perform challenges
    location ~ /\.well-known/ {
        root /usr/local/www/nginx/intranet;
    }

    if ($local_user) {
        # If user is local, redirect them to SSL proxy only available locally
        return 301 https://intranet.example.com:8002$request_uri;
    }

    # Default block that all non-local users see
    location / {
        root /home/luke/forbidden_html;
        index index.html;
    }
}

# This server block is only available to local users inside geo $local_user
# this block listens on an internal port only, so it is never available to
# external networks
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        root /home/luke/ecom2/dist;
        index index.html;
        deny all;
    }
}
Now, here comes the problem of marrying pyramid and nginx: if I use the same configuration as above, but with the below settings for my server on 8002:
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;
    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        # Forward all requests to python application server
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:6543;
        proxy_intercept_errors on;
        deny all;
    }
}
I run into all sorts of problems. First off, inside pyramid I was using the following code in my views/templates:

request.route_url # get route url for desired function

Using request.route_url with the above settings should cause https://intranet.example.com:8002/login to route to https://intranet.example.com:8002/welcome, but in reality this setup forwards the user to http://intranet.example.com/welcome. Again, this is not correct.
And if I use route_url with the NGINX proxy setting:

proxy_set_header Host $http_host;

NGINX returns a 400 error:

400: The plain HTTP request was sent to HTTPS port

And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Then I kept the same nginx setting (Host $http_host), but thought I would change to using:

request.route_path

My theory was that this should force everything to stay on the same URL prefix, and just forward a user from https://intranet.example.com:8002/login to https://intranet.example.com:8002/welcome, but in reality this setup performed exactly the same way as using route_url: the same 400 error ("The plain HTTP request was sent to HTTPS port") when navigating to https://intranet.example.com:8002, and the same reversion of https://intranet.example.com:8002/ to http://intranet.example.com/login (omitting the port and https).
Can anyone assist with the correct setup so that I can serve my application on https://intranet.example.com:8002?

EDIT:

I have also tried:
location / {
    allow 192.168.155.0/24;
    allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
    allow 172.16.155.0/24;
    # Forward all requests to python application server
    proxy_set_header Host $host:$server_port;
    proxy_pass http://10.1.1.16:8002;
    proxy_intercept_errors on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # root /home/luke/ecom2/dist;
    # index index.html;
    deny all;
}
Which gives the same result.
I’ve checked a similar configuration and your last example seems correct, at least for a simplistic gunicorn/pyramid app combination. It seems something is missing in your puzzle :)

Here’s my code (I’m new to Pyramid, so some things might be done better).
helloworld.py
from pyramid.config import Configurator
from pyramid.renderers import render_to_response

def main(request):
    return render_to_response('templates:test.pt', {}, request=request)

with Configurator() as config:
    config.add_route('main', '/')
    config.add_view(main, route_name='main')
    config.include('pyramid_chameleon')
    app = config.make_wsgi_app()
templates/test.pt
<html>
<body>
    Route url: ${request.route_url('main')}
</body>
</html>
My nginx config
server {
    listen 80;
    server_name pyramid.lan;

    location / {
        return 301 https://$server_name:8002$request_uri;
    }
}

server {
    listen 8002;
    server_name pyramid.lan;

    ssl on;
    ssl_certificate /usr/local/etc/nginx/cert/server.crt;
    ssl_certificate_key /usr/local/etc/nginx/cert/server.key;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This is how I run gunicorn:
gunicorn -w 1 -b 127.0.0.1:5678 helloworld:app
And yes, it works:
$ curl --insecure https://pyramid.lan:8002/
<html>
<body>
Route url: https://pyramid.lan:8002/
</body>
</html>
$ curl -D - http://pyramid.lan
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Date: Thu, 02 Nov 2017 20:41:50 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://pyramid.lan:8002/
Let's figure out what might go wrong in your case:

An HTTP 400 usually pops up when you go over HTTP instead of HTTPS to a server awaiting HTTPS requests. If there's no typo in the post and it indeed occurs when you navigate to https://intranet.example.com:8002, it would be nice to see a curl request showing this and a tcpdump showing what's happening. Actually, you can easily reproduce it by simply typing http://intranet.example.com:8002.

Another idea is that you're doing a redirect from your app and the link gets broken when the redirect occurs. A better description of how the user may navigate from https://intranet.example.com:8002/login to .../welcome would be helpful.

One more idea is that your app is not that simple and you use some middleware / customization that makes the default logic work differently, so your X-Forwarded-Proto header gets ignored - in this case the behavior would be just as you described.
The issue here is, obviously, the missing port within the Location headers that your backend produces.

Now, why is the port missing? Most certainly because of the following code:

proxy_set_header Host $host;

Note that $host itself does not contain $server_port, unlike $http_host, so your backend has no way of knowing which port you meant if you just use $host all by itself.

Note also that the default proxy_redirect setting of default expects Location to correspond with the value from proxy_pass in order to do its magic (according to the documentation), so your explicit header setting likely interferes with that logic.
As such, from the nginx point of view, I see multiple possible independent solutions:
remove proxy_set_header Host, and let proxy_redirect do its magic;
set proxy_set_header Host appropriately, to include the port number, e.g., using $host:$server_port or $http_host as you see fit (if that doesn't work, then perhaps the deficiency is actually within your upstream app itself, but fear not -- read below);
provide a custom proxy_redirect setting, e.g., proxy_redirect https://pyramid.lan/ / (equivalent to proxy_redirect https://pyramid.lan/ https://pyramid.lan:8002/), which will ensure that all the Location responses will have the proper port; the only way this wouldn't work is if your upstream does non-HTTP redirects with the missing port.
This should be a quick fix.
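To make the third option concrete, here is a hedged sketch against the intranet setup from the question (hostname and upstream address are copied from the question; treat it as an illustration rather than a drop-in fix):

location / {
    proxy_pass http://10.1.1.16:6543;
    # rewrite backend redirects that omit the port, so that
    # Location: https://intranet.example.com/welcome
    # becomes  Location: https://intranet.example.com:8002/welcome
    proxy_redirect https://intranet.example.com/ https://intranet.example.com:8002/;
}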
So for some reason I still can't get a request greater than 1 MB to succeed without returning 413 Request Entity Too Large.

For example, with the following configuration file and a request of size ~2 MB, I get the following error message in my nginx error.log:

*1 client intended to send too large body: 2666685 bytes,

I have tried setting the configuration below and then restarting my nginx server, but I still get the 413 error.

Is there anything I am doing wrong?
server {
    listen 8080;
    server_name *****/api; (*omitted*)

    client_body_in_file_only clean;
    client_body_buffer_size 32K;
    charset utf-8;
    client_max_body_size 500M;
    sendfile on;
    send_timeout 300s;

    listen 443 ssl;

    location / {
        try_files $uri @(*omitted*);
    }

    location @parachute_server {
        include uwsgi_params;
        uwsgi_pass unix:/var/www/(*omitted*)/(*omitted*).sock;
    }
}
Thank you in advance for the help!
I'm surprised you haven't received a response, but my hunch is you already have it set somewhere else in another config file.

Take a look at nginx - client_max_body_size has no effect

Weirdly, it works after adding the same thing, client_max_body_size 100M;, in all the blocks: http, server, and location.
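For reference, a minimal sketch of that layout (the 100M value is taken from the comment above; normally a single level should suffice, so duplicating it at every level mostly helps rule out an override in another include):

http {
    client_max_body_size 100M;

    server {
        client_max_body_size 100M;

        location / {
            client_max_body_size 100M;
        }
    }
}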