I'm trying to deploy an application in Kubernetes which connects to an S3 object bucket. The bucket is exposed by a storage API secured with a self-signed certificate.
The application that is supposed to connect to this bucket is already packaged in a container, and it's not easy to modify it to include the CA of the S3 bucket. The only thing I can change is the S3 endpoint the application points at.
Since all of this is deployed in Kubernetes, I thought it would be possible to deploy an nginx pod to act as a proxy, negotiate SSL with the bucket, and expose it via a Kubernetes service.
I searched on Google and found an article which explains how to use nginx as a proxy.
This is my nginx configuration.
http {
    default_type text/html;
    #access_log /;
    sendfile on;
    keepalive_timeout 65;

    proxy_cache_path /tmp/ levels=1:2 keys_zone=s3_cache:10m max_size=500m
                     inactive=60m use_temp_path=off;

    server {
        listen 80;

        # Configure your domain name here:
        server_name _;

        # Configure your Object Storage bucket URL here:
        set $bucket "myobjectstoragebucket.int.company.net";

        # This configuration provides direct access to the Object Storage bucket:
        location / {
            resolver 1.1.1.1;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header Connection "";
            proxy_set_header Authorization '';
            proxy_set_header Host $bucket;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_hide_header x-amz-id-2;
            proxy_hide_header x-amz-request-id;
            proxy_hide_header x-amz-meta-server-side-encryption;
            proxy_hide_header x-amz-server-side-encryption;
            proxy_hide_header Set-Cookie;
            proxy_ignore_headers Set-Cookie;
            proxy_intercept_errors on;
            add_header Cache-Control max-age=31536000;
            proxy_ssl_verify off;
            proxy_pass http://$bucket;
        }
    }
}
This doesn't seem to be working. The client logs show that it is trying to connect to the upstream endpoint (myobjectstoragebucket.int.company.net) and failing to authenticate SSL.
In the client I've set the endpoint to the Kubernetes service for this nginx proxy, and that part seems to work, since requests do reach the S3 bucket.
Is the idea even possible? Sorry if this is nonsense; I don't know much about NGINX or S3.
Thanks for the help.
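One thing stands out in the configuration above: proxy_pass http://$bucket; talks plain HTTP to the upstream, so nginx never actually negotiates TLS with the storage API, and proxy_ssl_verify off has no effect, since the proxy_ssl_* directives only apply when the upstream scheme is https://. As a minimal sketch of the location block (assuming the storage API listens on 443; the CA path is a placeholder), TLS toward the self-signed upstream would look roughly like this:

    location / {
        resolver 1.1.1.1;
        proxy_http_version 1.1;
        proxy_set_header Host $bucket;

        # Negotiate TLS with the upstream instead of plain HTTP:
        proxy_pass https://$bucket;

        # Either skip verification of the self-signed certificate:
        proxy_ssl_verify off;
        # ...or, preferably, verify it against its own CA:
        # proxy_ssl_verify on;
        # proxy_ssl_trusted_certificate /etc/nginx/certs/storage-ca.pem;

        # Send SNI so the storage API presents the expected certificate:
        proxy_ssl_server_name on;
    }

The in-cluster application can then keep talking plain HTTP to the nginx Service while nginx handles the self-signed TLS on the way to the bucket.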
I am trying to use the nginx proxy_bind directive to have upstream traffic to a gRPC server use a specific network interface. For some reason, nginx seems to be completely ignoring the directive and just using the default network interface. I tried using the proxy_bind directive for a different server that doesn't use gRPC (it is HTTP/1.1, I believe) and that worked fine, so I am led to believe that nginx is ignoring proxy_bind because of something related to the server being a gRPC server. I have confirmed that it works for the normal server and not the gRPC server by running ss and looking for traffic originating from the IP I am trying to bind. There was traffic, but it was only ever going to the normal server; all traffic to the gRPC server had the default local IP.
This is the server config block for the gRPC server where proxy_bind is not working:
server {
    listen 8980 http2;

    # set client body size to 128M to prevent HTTP 413 errors
    client_max_body_size 128M;
    # set client buffer size to 128M to prevent writing temp files
    client_body_buffer_size 128M;

    # We have plenty of RAM, up the output_buffers
    output_buffers 256 128M;

    # Allow plenty of connections
    http2_max_concurrent_streams 100000;
    keepalive_requests 100000;
    keepalive_timeout 120s;

    # Be forgiving with grpc
    grpc_connect_timeout 240;
    grpc_read_timeout 2048;
    grpc_send_timeout 2048;
    grpc_socket_keepalive on;

    proxy_bind <local ip>;

    location / {
        proxy_set_header Host $host;
        grpc_pass grpc://my8980;
    }
}
and this is the server config block for a normal server where proxy_bind is working:
server {
    listen 4646;

    # set client body size to 64M to prevent HTTP 413 errors
    client_max_body_size 64M;
    # set client buffer size to 64M to prevent writing temp files
    client_body_buffer_size 64M;

    # We have plenty of RAM, up the output_buffers
    output_buffers 64 64M;

    # Fix Access-Control-Allow-Origin header
    proxy_hide_header Access-Control-Allow-Origin;
    add_header Access-Control-Allow-Origin $http_origin;

    proxy_bind <local ip>;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://myapp1;
        proxy_buffering off;
    }
}
The grpc_pass directive belongs to ngx_http_grpc_module, while the proxy_set_header, proxy_hide_header and proxy_bind directives come from ngx_http_proxy_module. Those are two different modules. ngx_http_grpc_module has its own grpc_set_header, grpc_hide_header and grpc_bind analogs to be used instead.
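For illustration, here is the gRPC server block from the question with the proxy_* directives swapped for their grpc_* analogs (<local ip> remains a placeholder):

    server {
        listen 8980 http2;

        client_max_body_size 128M;
        client_body_buffer_size 128M;
        output_buffers 256 128M;

        http2_max_concurrent_streams 100000;
        keepalive_requests 100000;
        keepalive_timeout 120s;

        grpc_connect_timeout 240;
        grpc_read_timeout 2048;
        grpc_send_timeout 2048;
        grpc_socket_keepalive on;

        # grpc_bind is the ngx_http_grpc_module analog of proxy_bind
        grpc_bind <local ip>;

        location / {
            # likewise, grpc_set_header replaces proxy_set_header for grpc_pass upstreams
            grpc_set_header Host $host;
            grpc_pass grpc://my8980;
        }
    }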
As a preface, I have been using the following stack for some time with great success:
NGINX - web proxy
SSL - configured in nginx
Pyramid web application, served by gunicorn
The above combo works great, here is a working configuration.
server {
    # listen on port 80
    listen 80;
    server_name portalapi.example.com;

    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

server {
    # listen on port 80
    listen 80;
    server_name www.portalapi.example.com;

    # Forward all traffic to SSL
    return 301 https://www.portalapi.example.com$request_uri;
}

#ssl server
server {
    listen 443 ssl;
    ssl on;
    ssl_certificate /usr/local/etc/letsencrypt/live/portalapi.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/portalapi.example.com/privkey.pem;
    server_name www.portalapi.example.com;

    client_max_body_size 10M;
    client_body_buffer_size 128k;

    location ~ /.well-known/acme-challenge/ {
        root /usr/local/www/nginx/portalapi;
        allow all;
    }

    location / {
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:8005;
        #proxy_intercept_errors on;
        allow all;
    }

    error_page 404 500 502 503 504 /index.html;

    location = / {
        root /home/luke/ecom2/dist;
    }
}
Now, this is how I serve my public-facing apps, and it works very well. For all my internal applications, I used to simply direct users to an internal domain, for example http://subdomain.company.domain; again, this worked well for a long time.
Now, in the wake of the KRACK attack, and although we have some very thorough firewall rules to prevent a lot of attacks, I want to force all internal traffic through SSL. I don't want to use a self-signed certificate; I want to use Let's Encrypt so I can auto-renew certificates, which makes administration much easier (and cheaper).
In order to use Let's Encrypt, I need a public-facing DNS entry and server to perform the ACME challenge (for auto-renewal). Again, this was a very easy thing to set up in nginx, and the below config works perfectly for serving static content.
What it does: if a user from the internet accesses intranet.example.com, they simply see a forbidden message. However, if a local user tries, they get forwarded to intranet.example.com:8002, and port 8002 is only available locally, so there is no way external users can access a webpage on this site.
geo $local_user {
    192.168.155.0/24 0;
    172.16.10.0/28 1;
    172.16.155.0/24 1;
}

server {
    listen 80;
    server_name intranet.example.com;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    # Space for lets encrypt to perform challenges
    location ~ /\.well-known/ {
        root /usr/local/www/nginx/intranet;
    }

    if ($local_user) {
        # If user is local, redirect them to SSL proxy only available locally
        return 301 https://intranet.example.com:8002$request_uri;
    }

    # Default block all non local users see
    location / {
        root /home/luke/forbidden_html;
        index index.html;
    }
}

# This server block is only available to local users inside geo $local_user
# this block listens on an internal port only, so it is never available to
# external networks
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;

    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        root /home/luke/ecom2/dist;
        index index.html;
        deny all;
    }
}
Now here comes the problem of marrying Pyramid and nginx: if I use the same configuration as above, but with the below settings for my server on 8002:
server {
    listen 8002 default ssl; # listen on a port only accessible locally
    server_name intranet.example.com;

    ssl_certificate /usr/local/etc/letsencrypt/live/intranet.example.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/intranet.example.com/privkey.pem;

    client_max_body_size 4M;
    client_body_buffer_size 128k;

    location / {
        allow 192.168.155.0/24;
        allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
        allow 172.16.155.0/24;
        # Forward all requests to python application server
        proxy_set_header Host $host;
        proxy_pass http://10.1.1.16:6543;
        proxy_intercept_errors on;
        deny all;
    }
}
I run into all sorts of problems. First off, inside Pyramid I was using the following code in my views/templates:
request.route_url # get route url for desired function
Using request.route_url with the above settings, https://intranet.example.com:8002/login should route to https://intranet.example.com:8002/welcome, but in reality this setup forwards the user to http://intranet.example.com/welcome. This is not correct.
And if I use route_url with the NGINX proxy setting:
proxy_set_header Host $http_host;
NGINX returns a 400 error:
400: The plain HTTP request was sent to HTTPS port
And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Then I used the same nginx settings (Host $http_host), but thought I would change to using:
request.route_path
My theory was that this should force everything to stay on the same URL prefix, and just forward a user from https://intranet.example.com:8002/login to https://intranet.example.com:8002/welcome, but in reality this setup performed the same way as using route_url. With
proxy_set_header Host $http_host;
I again get an error when navigating to https://intranet.example.com:8002:
400: The plain HTTP request was sent to HTTPS port
And a request to https://intranet.example.com:8002/ gets reverted to http://intranet.example.com/login (omitting the port and https).
Can anyone assist with the correct setup so that I can serve my application on https://intranet.example.com:8002?
EDIT:
Have also tried:
location / {
    allow 192.168.155.0/24;
    allow 172.16.10.0/28; # also add in allow/deny rules in this block (extra security)
    allow 172.16.155.0/24;
    # Forward all requests to python application server
    proxy_set_header Host $host:$server_port;
    proxy_pass http://10.1.1.16:8002;
    proxy_intercept_errors on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # root /home/luke/ecom2/dist;
    # index index.html;
    deny all;
}
Which gives the same result.
I’ve checked a similar configuration, and your last example seems correct, at least for a simplistic gunicorn/pyramid app combination. It seems something is missing in your puzzle )
Here’s my code (I’m new to Pyramid, so something might be done better):
helloworld.py
from pyramid.config import Configurator
from pyramid.renderers import render_to_response

def main(request):
    return render_to_response('templates:test.pt', {}, request=request)

with Configurator() as config:
    config.add_route('main', '/')
    config.add_view(main, route_name='main')
    config.include('pyramid_chameleon')
    app = config.make_wsgi_app()
templates/test.pt
<html>
<body>
Route url: ${request.route_url('main')}
</body>
</html>
My nginx config
server {
    listen 80;
    server_name pyramid.lan;

    location / {
        return 301 https://$server_name:8002$request_uri;
    }
}

server {
    listen 8002;
    server_name pyramid.lan;

    ssl on;
    ssl_certificate /usr/local/etc/nginx/cert/server.crt;
    ssl_certificate_key /usr/local/etc/nginx/cert/server.key;

    location / {
        proxy_set_header Host $host:$server_port;
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
This is how I run gunicorn:
gunicorn -w 1 -b 127.0.0.1:5678 helloworld:app
And yes, it works:
$ curl --insecure https://pyramid.lan:8002/
<html>
<body>
Route url: https://pyramid.lan:8002/
</body>
</html>
$ curl -D - http://pyramid.lan
HTTP/1.1 301 Moved Permanently
Server: nginx/1.12.2
Date: Thu, 02 Nov 2017 20:41:50 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://pyramid.lan:8002/
Let's figure out what might go wrong in your case:
HTTP 400 usually pops up when you go over HTTP instead of HTTPS to a server awaiting HTTPS requests. If there's no typo in the post and it indeed occurs when you navigate to https://intranet.example.com:8002, it would be nice to see a curl request showing this and a tcpdump showing what's happening. Actually, you can easily reproduce it by simply typing http://intranet.example.com:8002 (a curl sketch for this follows below).
Another idea is that you're doing a redirect from your app and the link gets broken when the redirect occurs. A better description of how the user may navigate from https://intranet.example.com:8002/login to .../welcome would be helpful.
One more idea is that your app is not that simple and you use some middleware / customization that makes the default logic work differently, so your X-Forwarded-Proto header gets ignored; in this case the behavior would be just as you described.
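A quick way to capture the first case (hostnames are the example ones from the question):

    # If plain HTTP hits the TLS-only port, this should reproduce the 400:
    curl -D - http://intranet.example.com:8002/

    # Compare with HTTPS; -D - dumps response headers, -k skips certificate
    # verification while testing:
    curl -D - -k https://intranet.example.com:8002/login

If the HTTPS response carries a Location header that drops the port or the scheme, the redirect is being built from an incomplete Host value, which matches the behavior described in the question.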
The issue here is, obviously, the missing port within the Location headers that your backend produces.
Now, why is the port missing? Most certainly, because of the following code:
proxy_set_header Host $host;
Note that $host itself does not contain $server_port, unlike $http_host, so your backend has no way of knowing which port you meant if you just use $host all by itself.
Note that the default value of proxy_redirect (proxy_redirect default) expects Location to correspond with the value from proxy_pass in order to do its magic (according to the documentation), so your explicit header setting likely interferes with such logic.
As such, from the nginx point of view, I see multiple possible independent solutions (the second and third are sketched after this list):
remove proxy_set_header Host, and let proxy_redirect do its magic;
set proxy_set_header Host appropriately, to include the port number, e.g., using $host:$server_port or $http_host as you see fit (if that doesn't work, then perhaps the deficiency is actually within your upstream app itself, but fear not -- read below);
provide a custom proxy_redirect setting, e.g., proxy_redirect https://pyramid.lan/ / (equivalent to proxy_redirect https://pyramid.lan/ https://pyramid.lan:8002/), which will ensure that all the Location responses will have the proper port; the only way this wouldn't work is if your upstream does non-HTTP redirects with the missing port.
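As a minimal sketch, here are the second and third options applied to the intranet example from the question (use one or the other, not both; the proxy_redirect left-hand side should match the Location values the backend actually emits):

    location / {
        # Option 2: include the port in the Host header, so URLs the
        # backend generates keep the :8002 suffix:
        proxy_set_header Host $host:$server_port;

        proxy_pass http://10.1.1.16:6543;

        # Option 3 (alternative): map port-less redirects from the backend
        # back onto the public scheme and port:
        # proxy_redirect http://intranet.example.com/ https://intranet.example.com:8002/;
    }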
Short version:
I want to use NGINX as a reverse proxy so that a client accessing the public facing URL gets served API data from the internal Gunicorn server sitting behind the proxy:
external path (proxy) => internal app
<static IP>/ABC/data => 127.0.0.1:8001/data
I'm not getting the location mapping correct.
Long version:
I am setting up NGINX for the first time and am attempting to use it as a reverse proxy for a REST API served by Gunicorn. The API is served at 127.0.0.1:8001, and I can access it from the server and get the appropriate responses, so that piece I believe is working correctly. It's running persistently using Supervisord.
I'd like to access one of the API endpoints externally at <static IP>/ABC/data. On the Gunicorn server, this endpoint is available at localhost:8001/data. Eventually I'd like to serve other web apps through NGINX with roots like <static IP>/foo, <static IP>/bar, etc. Each of these web apps would be an independent Python app. But currently, when I try to access the endpoint externally, I get a 444 error code, so I think I am not configuring NGINX correctly.
I put together my first attempt at an NGINX config from the config posted on the Gunicorn site. Instead of a single config, I've split it into a global config and a site-specific one. My global config at /etc/nginx/nginx.conf looks like:
user ops;
worker_processes 1;
pid /run/nginx.pid;
error_log /tmp/nginx.error.log;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # set to 'on' if nginx worker_processes > 1
    use epoll;
    # 'use epoll;' to enable for Linux 2.6+
    # 'use kqueue;' to enable for FreeBSD, OSX
}

http {
    include mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;

    access_log /tmp/nginx.access.log combined;
    sendfile on;
    server_tokens off;

    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 80 default_server;
        return 444;
    }

    gzip on;
    gzip_disable "msie6";

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Then my site-specific configuration, which is in /etc/nginx/sites-available (and is symlinked in /etc/nginx/sites-enabled), is:
upstream app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response

    # for UNIX domain socket setups
    # server unix:/tmp/gunicorn_abc_api.sock fail_timeout=0;

    # for a TCP configuration
    server 127.0.0.1:8001 fail_timeout=0;
}

server {
    # use 'listen 80 deferred;' for Linux
    # use 'listen 80 accept_filter=httpready;' for FreeBSD
    listen 80 deferred;
    client_max_body_size 4G;

    # set the correct host(s) for your site
    server_name _;

    keepalive_timeout 100;

    # path for static files
    #root /path/to/app/current/public;

    location /ABC {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        # proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        proxy_pass http://app_server;
    }

    # error_page 500 502 503 504 /500.html;
    # location = /500.html {
    #     root /path/to/app/current/public;
    # }
}
The configs pass service nginx checkconfig, but I end up seeing the following in my access log:
XXX.XXX.X.XXX - - [09/Sep/2016:01:03:18 +0000] "GET /ABC/data HTTP/1.1" 444 0 "-" "python-requests/2.10.0"
I think I've somehow not configured the routes properly. Any suggestions would be appreciated.
UPDATE:
I have it working now with a few changes. I commented out the following block:
server {
    # if no Host match, close the connection to prevent host spoofing
    listen 80 default_server;
    return 444;
}
I can't figure out how to get the behavior of returning 444 unless there is a valid route. I'd like to, but I'm still stuck on this part. This block seems to eat all incoming requests. I've also changed the app config to:
upstream app_server {
    server 127.0.0.1:8001 fail_timeout=0;
}

server {
    # use 'listen 80 deferred;' for Linux
    # use 'listen 80 accept_filter=httpready;' for FreeBSD
    listen 80 deferred;
    client_max_body_size 100M;

    # set the correct host(s) for your site
    server_name $hostname;

    keepalive_timeout 100;

    location /ABC {
        # checks for static file, if not found proxy to app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS
        # proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        rewrite ^/ABC/(.*) /$1 break;
        proxy_pass http://app_server;
    }
}
Basically, I seem to have had to explicitly set server_name and also use rewrite to get the correct mapping to the app server.
This works fine for me, returns 444 (hangs up connection) only if no other server name is matched:
server {
    listen 80;
    server_name "";
    return 444;
}
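For completeness, a sketch combining this catch-all with the /ABC app block from the update (203.0.113.10 stands in for the static IP clients actually put in the URL):

    server {
        listen 80 default_server;
        server_name "";
        return 444;
    }

    server {
        listen 80;
        server_name 203.0.113.10;

        location /ABC {
            # strip the /ABC prefix before handing the request to Gunicorn
            rewrite ^/ABC/(.*) /$1 break;
            proxy_set_header Host $http_host;
            proxy_pass http://127.0.0.1:8001;
        }
    }

The original 444 block most likely ate every request because server_name _ never matches a real Host header, so the default_server block won for each incoming request; once the site's server block names the host (or IP) clients really send, the catch-all only sees unmatched traffic.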
I'm trying to test simple ping & pong responses with WebSockets by using Ratchet. My WebSocket server is not seeing any responses from my web browser's WebSocket client, but packets sent by the WebSocket server are seen fine by the browser. I'm trying to find out why.
My current guesses are:
I'm missing some HTTP header(s)
I have to encode packets on the browser: wsclient.send(encodeFrame("{'action': 'pong'}"));
CloudFlare is not recognizing a packet in the WS stream as valid and trashes it
CloudFlare or nginx in the EC2 instance is doing some weird buffering
Ratchet is not recognizing the packet on the lowest IOServer level and trashes it
But I never get any errors or Exceptions from this level
Setup:
Linux server # Amazon EC2
DNS # CloudFlare with free plan and acceleration on + forced HTTPS redirect on
HTTP server is nginx
No HTTPS on nginx (Cloudflare redirects HTTPS -> EC2 HTTP)
WebSocket server (Ratchet) running at 127.0.0.1:65000 on EC2
Nginx proxies /api to 127.0.0.1:65000 (Ratchet)
Tested with own WebSocket client at EC2 instance:
127.0.0.1:65000 works fine
<Amazon NAT IP of instance>:80 works fine
<Amazon public IP of instance>:80 works fine
<CloudFlare IP of Amazon public IP>:80 Connects to the WebSocket server on the Application implementation level, but packets never show up in the onMessage method on any level (App, WsServer, HTTPServer)
<CloudFlare IP of Amazon public IP>:443 Gives 400 Bad Request because the test client is just a simple TCP stream
Tested from local machine:
Directly connect to the host offered by CloudFlare's cached IP. Dojox.Socket connects to wss://host/api. The connection is again seen on the Application implementation level in Ratchet (onOpen is fired). My browser sees the ping packets fine, so sending from Ratchet works.
But then I try to send a pong reply to the ping from the browser, and the onMessage method is never fired on any level in Ratchet. The connection stays open, and if I watch with Fiddler, both pings and pongs are constantly sent, but the WebSocket server never receives those pongs (onMessage).
Following Fiddler's WebSocket stream shows that the pongs from the browser have "Data masked by key: <some hex>" but the pings from Ratchet are not masked. (The masking itself should be expected: RFC 6455 requires client-to-server frames to be masked, while server-to-client frames are not.)
Connection summary:
Page load:
Local machine http://host/ → CloudFlare HTTPS redirect https://host/ → http://host-with-amazon-public-ip/ → http://host-with-amazon-NAT-ip/ → HTML + JS page that loads wss WebSocket Connection to /api
WebSocket connection to /api:
CloudFlare HTTPS redirected wss://host/api → http://host-with-amazon-public-ip/api → http://host-with-amazon-NAT-ip/api → local server nginx redirect /api → 127.0.0.1:65000 → Connection upgrade → WebSocket stream → WebSocket stream for web browser
nginx.conf
server {
    listen 80;
    server_name test.example.com;

    root /home/raspi/test/public;
    autoindex off;
    index index.php;

    access_log /home/raspi/test/http-access.log;
    error_log /home/raspi/test/http-error.log notice;

    location / {
        index index.php;
        try_files $uri $uri/ /index.php?$args;
    }

    location /api {
        access_log /home/raspi/test/api-access.log;
        error_log /home/raspi/test/api-error.log;

        expires epoch;
        proxy_ignore_client_abort on;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_cache off;
        proxy_pass http://127.0.0.1:65000/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Connection "keep-alive, Upgrade";
        proxy_set_header Upgrade "websocket";
        proxy_set_header Accept-Encoding "gzip, deflate";
        proxy_set_header Sec-WebSocket-Extensions "permessage-deflate";
        proxy_set_header Sec-WebSocket-Protocol "game";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location ^~ /js/app/ {
        try_files $uri /;
        expires epoch;
        add_header Cache-Control "no-cache" always;
    }

    location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ {
        try_files $uri /;
        access_log off;
        expires max;
    }

    location = /robots.txt { access_log off; log_not_found off; }
    location = /favicon.ico { access_log off; log_not_found off; }
    location ~ /\. { access_log off; log_not_found off; deny all; }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
WebSockets are currently only available at the Business and Enterprise levels. You mentioned you're on the free plan; that's the issue here.
The problem is that your provider rejects the Connection and Upgrade headers. You can go to Ratchet/WebSocket/Version/RFC6455/HandshakeVerifier.php and modify the code to:
...
public function verifyAll(RequestInterface $request) {
    $passes = 0;

    $passes += (int)$this->verifyMethod($request->getMethod());
    $passes += (int)$this->verifyHTTPVersion($request->getProtocolVersion());
    $passes += (int)$this->verifyRequestURI($request->getPath());
    $passes += (int)$this->verifyHost((string)$request->getHeader('Host'));
    $passes += (int)$this->verifyUpgradeRequest((string)$request->getHeader('Upgrade'));
    $passes += (int)$this->verifyConnection((string)$request->getHeader('Connection'));

    // temporary debugging aid: dump the raw handshake request and stop
    die((string)$request);
...
And you will see that the resulting request doesn't have the required fields.
The workaround: override the WsServer::onOpen function and add those fields to the request manually. But this is unsafe...
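As a side note (an assumption on my part, not something the answer above proposes): before patching Ratchet, it may be worth letting nginx forward the client's own handshake headers instead of the hardcoded values in the question's config, since the standard WebSocket proxying pattern uses the $http_upgrade variable:

    location /api {
        proxy_pass http://127.0.0.1:65000/;
        proxy_http_version 1.1;
        # pass through whatever the browser sent, rather than fixed strings
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
    }

This does not help if CloudFlare itself strips the headers, but it rules the nginx hop out.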