How to reverse proxy a site which uses SSL with nginx? - nginx

For example:
I want to reverse proxy https://tw.godaddy.com through my own domain. Is this possible?
My config does not work:
location ~ /
{
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass https://tw.godaddy.com;
proxy_set_header Host "tw.godaddy.com";
proxy_set_header Accept-Encoding "";
proxy_set_header User-Agent $http_user_agent;
#more_clear_headers "X-Frame-Options";
sub_filter_once off;
}

Yes. It is possible.
Requirements:
Compiled with --with-stream
Compiled with --with-stream_ssl_module
You can check that with nginx -V
Configuration example:
stream {
upstream backend {
server backend1.example.com:12345;
server backend2.example.com:12345;
server backend3.example.com:12345;
}
server {
listen 12345;
proxy_pass backend;
proxy_ssl on;
proxy_ssl_certificate /etc/nginx/nginxb.crt;
proxy_ssl_certificate_key /etc/nginx/nginxb.key;
proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
proxy_ssl_ciphers HIGH:!aNULL:!MD5;
proxy_ssl_trusted_certificate /etc/ssl/certs/trusted_ca_cert.crt;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
proxy_ssl_session_reuse on;
}
}
Explanation:
Turn on SSL to the backend:
proxy_ssl on;
Specify the path to the SSL client certificate required by the upstream server and the certificate’s private key:
proxy_ssl_certificate /etc/nginx/nginxb.crt;
proxy_ssl_certificate_key /etc/nginx/nginxb.key;
This client key/certificate pair is what nginx uses to start the SSL session to the backend. You can create a self-signed pair via:
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/nginx/nginxb.key -out /etc/nginx/nginxb.crt
If the backend uses a self-signed certificate as well, turn off proxy_ssl_verify and remove proxy_ssl_verify_depth.
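If you want to stay at the HTTP level, as in the original location block, rather than proxying raw TCP with the stream module, the piece most often missing when the upstream sits behind SNI-based hosting is proxy_ssl_server_name. Below is a minimal sketch under these assumptions: nginx was built with ngx_http_sub_module, proxy.example.com and the front.crt/front.key paths are hypothetical placeholders for your own domain and certificate, and tw.godaddy.com is just the example upstream.
server {
    listen 443 ssl;
    server_name proxy.example.com;                  # hypothetical front-end name
    ssl_certificate     /etc/nginx/front.crt;       # hypothetical front-end certificate
    ssl_certificate_key /etc/nginx/front.key;

    location / {
        proxy_pass https://tw.godaddy.com;
        proxy_set_header Host tw.godaddy.com;
        proxy_ssl_server_name on;                   # send SNI when connecting to the upstream
        proxy_ssl_name tw.godaddy.com;              # name used for SNI and certificate verification
        proxy_set_header Accept-Encoding "";        # keep responses uncompressed so sub_filter can rewrite them
        sub_filter_once off;
        sub_filter "tw.godaddy.com" "proxy.example.com";   # rewrite absolute links in HTML responses
    }
}
Whether the target site actually behaves behind such a proxy also depends on its own redirects, cookies, and bot protection, so treat this as a starting point rather than a guaranteed fix.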

Related

Unable to reverse proxy requests to Nifi running in the backend using client cert as auth mechanism

I have configured Nginx as a reverse proxy server for my Nifi application running in the backend on port 9443.
Here is my Nginx conf:
worker_processes 1;
events { worker_connections 1024; }
http {
map_hash_bucket_size 128;
sendfile on;
large_client_header_buffers 4 64k;
upstream nifi {
server cloud-analytics-test2-nifi-a.insights.io:9443;
}
server {
listen 443 ssl;
#ssl on;
server_name nifi-test-nginx.insights.io;
ssl_certificate /etc/nginx/cert1.pem;
ssl_certificate_key /etc/nginx/privkey1.pem;
ssl_client_certificate /etc/nginx/nifi-client.pem; # this contains the server cert and CA cert
ssl_verify_client on;
ssl_verify_depth 2;
error_log /var/log/nginx/error.log debug;
proxy_ssl_certificate /etc/nginx/cert1.pem;
proxy_ssl_certificate_key /etc/nginx/privkey1.pem;
proxy_ssl_trusted_certificate /etc/nginx/nifi-client.pem;
location / {
proxy_pass https://nifi;
proxy_set_header X-ProxyScheme https;
proxy_set_header X-ProxyHost nifi-test-nginx.insights.io;
proxy_set_header X-ProxyPort 443;
proxy_set_header X-ProxyContextPath /;
proxy_set_header X-ProxiedEntitiesChain "<$ssl_client_s_dn>";
proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
}
}
}
Whenever I try to access Nifi through the Nginx reverse proxy address/hostname, I get the error below:
client SSL certificate verify error: (2:unable to get issuer certificate) while reading client request headers,
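The (2:unable to get issuer certificate) error is raised while nginx verifies the browser's client certificate against ssl_client_certificate, and it usually means that file does not contain the complete issuing CA chain for that client certificate; it should hold the CA certificates (intermediate plus root), not the server certificate. A minimal sketch of the relevant directives, assuming hypothetical chain files /etc/nginx/client-ca-chain.pem and /etc/nginx/nifi-ca-chain.pem built by concatenating the intermediate and root CA certificates in PEM form:
# CAs used to verify certificates presented by clients (browsers).
# This file needs the full issuing chain of the client certs,
# not the server certificate itself.
ssl_client_certificate /etc/nginx/client-ca-chain.pem;   # hypothetical chain file
ssl_verify_client on;
ssl_verify_depth 2;

# CAs used to verify the NiFi backend's certificate on the proxied connection.
proxy_ssl_trusted_certificate /etc/nginx/nifi-ca-chain.pem;   # hypothetical chain file for the backend CA
proxy_ssl_verify on;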

Google App Engine Docker Container 502 Bad Gateway

I am trying to deploy my Docker image to Google App Engine. I successfully managed to build the image, push it to GCR, and deploy it using gcloud app deploy --image 'link-to-image-on-gcr'.
But when accessing the application I get a 502 Bad Gateway. I SSHed into the server, checked the logs of the nginx container in Docker, and discovered the log below:
2020/05/04 00:52:50 [error] 33#33: *127 connect() failed (111: Connection refused) while connecting to upstream, client: 74.125.24.153, server: , request: "GET /wp-login.php HTTP/1.1", upstream: "http://172.17.0.1:8080/wp-login.php", host: "myappengineservice-myrepo.ue.r.appspot.com"
My Docker image only runs one container (it's a WordPress image). When deployed to App Engine, I suppose App Engine starts my Docker container and exposes the frontend via an Nginx proxy, so all requests are routed through the Nginx proxy.
After playing around for a while, I edited the Nginx configuration file and came across these lines:
location / {
proxy_pass http://app_server;
I edited this and replaced the upstream with my WordPress container's internal IP address:
proxy_pass http://172.17.0.6;
And voilà, it seemed to work; requests are now being routed to my Docker container.
This is obviously a temporary fix. How can I make it permanent, and does anyone have an idea why this is happening?
app.yaml
runtime: custom
service: my-wordpress
env: flex
nginx.conf (inside the Nginx container)
daemon off;
worker_processes auto;
events {
worker_connections 4096;
multi_accept on;
}
http {
include mime.types;
server_tokens off;
variables_hash_max_size 2048;
# set max body size to 32m as appengine supports.
client_max_body_size 32m;
tcp_nodelay on;
tcp_nopush on;
underscores_in_headers on;
# GCLB uses a 10 minutes keep-alive timeout. Setting it to a bit more here
# to avoid a race condition between the two timeouts.
keepalive_timeout 650;
# Effectively unlimited number of keepalive requests in the case of GAE flex.
keepalive_requests 4294967295;
upstream app_server {
keepalive 192;
server gaeapp:8080;
}
geo $source_type {
default ext;
127.0.0.0/8 lo;
169.254.0.0/16 sb;
35.191.0.0/16 lb;
130.211.0.0/22 lb;
172.16.0.0/12 do;
}
map $http_upgrade $ws_connection_header_value {
default "";
websocket upgrade;
}
# ngx_http_realip_module gets the second IP address from the last of the X-Forwarded-For header
# X-Forwarded-For: [USER REQUEST PROVIDED X-F-F.]USER-IP.GCLB_IP
set_real_ip_from 0.0.0.0/0;
set_real_ip_from 0::/0;
real_ip_header X-Forwarded-For;
iap_jwt_verify off;
iap_jwt_verify_project_number 96882395728;
iap_jwt_verify_app_id my-project-id;
iap_jwt_verify_key_file /iap_watcher/iap_verify_keys.txt;
iap_jwt_verify_iap_state_file /iap_watcher/iap_state;
iap_jwt_verify_state_cache_time_sec 300;
iap_jwt_verify_key_cache_time_sec 43200;
iap_jwt_verify_logs_only on;
server {
iap_jwt_verify on;
# self signed ssl for load balancer traffic
listen 8443 default_server ssl;
ssl_certificate /etc/ssl/localcerts/lb.crt;
ssl_certificate_key /etc/ssl/localcerts/lb.key;
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AES256:!SHA1;
ssl_prefer_server_ciphers on;
ssl_session_timeout 3h;
proxy_pass_header Server;
gzip on;
gzip_proxied any;
gzip_types text/html text/plain text/css text/xml text/javascript application/json application/javascript application/xml application/xml+rss application/protobuf application/x-protobuf;
gzip_vary on;
# Allow more space for request headers.
large_client_header_buffers 4 32k;
# Allow more space for response headers. These settings apply for response
# only, not requests which buffering is disabled below.
proxy_buffer_size 64k;
proxy_buffers 32 4k;
proxy_busy_buffers_size 72k;
# Explicitly set client buffer size matching nginx default.
client_body_buffer_size 16k;
# If version header present, make sure it's correct.
if ($http_x_appengine_version !~ '(?:^$)|(?:^my-wordpress:20200504t053100(?:\..*)?$)') {
return 444;
}
set $x_forwarded_for_test "";
# If request comes from sb, lo, or do, do not care about x-forwarded-for header.
if ($source_type !~ sb|lo|do) {
set $x_forwarded_for_test $http_x_forwarded_for;
}
# For local health checks only.
if ($http_x_google_vme_health_check = 1) {
set $x_forwarded_for_test "";
}
location / {
proxy_pass http://app_server;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $server_name;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $ws_connection_header_value;
proxy_set_header X-AppEngine-Api-Ticket $http_x_appengine_api_ticket;
proxy_set_header X-AppEngine-Auth-Domain $http_x_appengine_auth_domain;
proxy_set_header X-AppEngine-BlobChunkSize $http_x_appengine_blobchunksize;
proxy_set_header X-AppEngine-BlobSize $http_x_appengine_blobsize;
proxy_set_header X-AppEngine-BlobUpload $http_x_appengine_blobupload;
proxy_set_header X-AppEngine-Cron $http_x_appengine_cron;
proxy_set_header X-AppEngine-Current-Namespace $http_x_appengine_current_namespace;
proxy_set_header X-AppEngine-Datacenter $http_x_appengine_datacenter;
proxy_set_header X-AppEngine-Default-Namespace $http_x_appengine_default_namespace;
proxy_set_header X-AppEngine-Default-Version-Hostname $http_x_appengine_default_version_hostname;
proxy_set_header X-AppEngine-Federated-Identity $http_x_appengine_federated_identity;
proxy_set_header X-AppEngine-Federated-Provider $http_x_appengine_federated_provider;
proxy_set_header X-AppEngine-Https $http_x_appengine_https;
proxy_set_header X-AppEngine-Inbound-AppId $http_x_appengine_inbound_appid;
proxy_set_header X-AppEngine-Inbound-User-Email $http_x_appengine_inbound_user_email;
proxy_set_header X-AppEngine-Inbound-User-Id $http_x_appengine_inbound_user_id;
proxy_set_header X-AppEngine-Inbound-User-Is-Admin $http_x_appengine_inbound_user_is_admin;
proxy_set_header X-AppEngine-QueueName $http_x_appengine_queuename;
proxy_set_header X-AppEngine-Request-Id-Hash $http_x_appengine_request_id_hash;
proxy_set_header X-AppEngine-Request-Log-Id $http_x_appengine_request_log_id;
proxy_set_header X-AppEngine-TaskETA $http_x_appengine_tasketa;
proxy_set_header X-AppEngine-TaskExecutionCount $http_x_appengine_taskexecutioncount;
proxy_set_header X-AppEngine-TaskName $http_x_appengine_taskname;
proxy_set_header X-AppEngine-TaskRetryCount $http_x_appengine_taskretrycount;
proxy_set_header X-AppEngine-TaskRetryReason $http_x_appengine_taskretryreason;
proxy_set_header X-AppEngine-Upload-Creation $http_x_appengine_upload_creation;
proxy_set_header X-AppEngine-User-Email $http_x_appengine_user_email;
proxy_set_header X-AppEngine-User-Id $http_x_appengine_user_id;
proxy_set_header X-AppEngine-User-Is-Admin $http_x_appengine_user_is_admin;
proxy_set_header X-AppEngine-User-Nickname $http_x_appengine_user_nickname;
proxy_set_header X-AppEngine-User-Organization $http_x_appengine_user_organization;
proxy_set_header X-AppEngine-Version "";
add_header X-AppEngine-Flex-AppLatency $request_time always;
}
include /var/lib/nginx/extra/*.conf;
}
server {
# expose /nginx_status but on a different port (8090) to avoid
# external visibility / conflicts with the app.
listen 8090;
location /nginx_status {
stub_status on;
access_log off;
}
location / {
root /dev/null;
}
}
server {
# expose health checks on a different port to avoid
# external visibility / conflicts with the app.
listen 10402 ssl;
ssl_certificate /etc/ssl/localcerts/lb.crt;
ssl_certificate_key /etc/ssl/localcerts/lb.key;
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AES256:!SHA1;
ssl_prefer_server_ciphers on;
ssl_session_timeout 3h;
location = /liveness_check {
if ( -f /tmp/nginx/lameducked ) {
return 503 'lameducked';
}
if ( -f /var/lib/google/ae/unhealthy/sidecars ) {
return 503 'unhealthy sidecars';
}
if ( !-f /var/lib/google/ae/disk_not_full ) {
return 503 'disk full';
}
if ( -f /tmp/nginx/app_lameducked ) {
return 200 'ok';
}
return 200 'ok';
}
location = /readiness_check {
if ( -f /tmp/nginx/lameducked ) {
return 503 'lameducked';
}
if ( -f /var/lib/google/ae/unhealthy/sidecars ) {
return 503 'unhealthy sidecars';
}
if ( !-f /var/lib/google/ae/disk_not_full ) {
return 503 'disk full';
}
if ( -f /tmp/nginx/app_lameducked ) {
return 503 'app lameducked';
}
return 200 'ok';
}
}
# Add session affinity entry to log_format line i.i.f. the GCLB cookie
# is present.
map $cookie_gclb $session_affinity_log_entry {
'' '';
default sessionAffinity=$cookie_gclb;
}
# Output nginx access logs in the standard format, plus additional custom
# fields containing "X-Cloud-Trace-Context" header, the current epoch
# timestamp, the request latency, and "X-Forwarded-For" at the end.
# If you make changes to the log format below, you MUST validate this against
# the parsing regex at:
# GoogleCloudPlatform/appengine-sidecars-docker/fluentd_logger/managed_vms.conf
# (In general, adding to the end of the list does not require a change if the
# field does not need to be logged.)
log_format custom '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'tracecontext="$http_x_cloud_trace_context" '
'timestampSeconds="${msec}000000" '
'latencySeconds="$request_time" '
'x-forwarded-for="$http_x_forwarded_for" '
'uri="$uri" '
'appLatencySeconds="$upstream_response_time" '
'appStatusCode="$upstream_status" '
'upgrade="$http_upgrade" '
'iap_jwt_action="$iap_jwt_action" '
'$session_affinity_log_entry';
access_log /var/log/nginx/access.log custom;
error_log /var/log/nginx/error.log warn;
}
/etc/hosts (inside Nginx container)
root@f9c9cb5df8e2:/etc/nginx# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.1 gaeapp
172.17.0.5 f9c9cb5df8e2
docker ps result
I was able to solve the issue by exposing my WordPress site on port 8080 from my Docker container; it was exposed on port 80 before. It does not make much sense to me, but if anyone knows the root cause, please go ahead and explain.
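For what it's worth, the behaviour is consistent with the generated nginx.conf shown above: the sidecar proxy does not discover the container's address dynamically, it always forwards to a fixed upstream that the app container is expected to serve. The relevant excerpt, with the /etc/hosts mapping noted as a comment:
upstream app_server {
    keepalive 192;
    server gaeapp:8080;   # /etc/hosts maps gaeapp to 172.17.0.1, so the app is expected to answer on port 8080
}
So serving WordPress on port 8080 simply matches the port the proxy is hard-wired to call, which would explain why the change made it work.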

Webpack-serve hmr not working with nginx proxy for websocket

I've set up nginx as a proxy for my local dev environment. I'm using webpack-serve for local dev as well, with a local SSL cert I've set up. I have the site working, but I'm having issues with HMR.
I see this error when the WebSocket tries to connect:
WebSocket connection to 'wss://local.way.com:7879/' failed: Error in connection establishment: net::ERR_SSL_PROTOCOL_ERROR
I can't tell if it's an issue with the certificate or the nginx setup.
server {
listen 7879 ssl;
server_name local.way.com;
ssl_certificate /usr/local/way-fe/config/proxy/ssl/certificate.crt;
ssl_certificate_key /usr/local/way-fe/config/proxy/ssl/certificate.key;
ssl_session_timeout 5m;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA;
ssl_prefer_server_ciphers on;
location / {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header X-Forwarded-Proto $proxy_protocol_port;
}
}
upstream websocket {
server local.way.com:7879;
}
and the webpack-serve config
module.exports = {
clipboard: true,
host: 'local.way.com',
port: 7878,
'https-cert':
'/usr/local/way-fe/config/proxy/ssl/certificate.crt',
'https-key':
'/usr/local/way-fe/config/proxy/ssl/certificate.key',
hotClient: {
port: 7879,
https: true,
},
};
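One thing worth noting about this setup: nginx listens on 7879 with SSL, hotClient is also configured for port 7879 with https: true, and the websocket upstream points back at local.way.com:7879, i.e. at the same port nginx itself terminates. A hedged sketch of one consistent arrangement, assuming the HMR client is moved to a hypothetical plain-HTTP port 7880 so that nginx is the only thing speaking TLS on 7879:
# nginx terminates TLS for wss://local.way.com:7879 and forwards plain WebSocket
# traffic to webpack's HMR socket on an internal port.
upstream hmr {
    server 127.0.0.1:7880;   # hypothetical hotClient port, plain HTTP
}
server {
    listen 7879 ssl;
    server_name local.way.com;
    ssl_certificate     /usr/local/way-fe/config/proxy/ssl/certificate.crt;
    ssl_certificate_key /usr/local/way-fe/config/proxy/ssl/certificate.key;

    location / {
        proxy_pass http://hmr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
with hotClient changed to something like { port: 7880, https: false }, matching the options already shown in the webpack-serve config above.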

How do I fix this Nginx configuration to properly proxy WebSocket requests instead of returning a 301?

Nginx noob. Trying to configure Nginx to act as an SSL proxy server in front of another web server running at http://localhost:8082. That is, I want all requests to http://localhost to be redirected to https://localhost. That part is working just fine.
Problem is, the app on port 8082 also uses WebSocket connections at ws://localhost:8082/public-api/repossession-requests-socket. I'm trying to redirect any connections to ws://localhost/public-api/repossession-requests-socket to wss://localhost/public-api/repossession-requests-socket and have Nginx proxy those WebSocket requests to ws://localhost:8082/public-api/repossession-requests-socket.
Instead, the WebSocket connections are failing because Nginx is returning a 301 for both ws://localhost/public-api/repossession-requests-socket & wss://localhost/public-api/repossession-requests-socket. My configuration is below; I'm using the Docker image nginx:alpine in my tests ($PWD is mapped to /app).
How do I need to change this so that I no longer see 301s?
events {
worker_connections 1024;
}
http {
server {
listen 80;
return 301 https://$host$request_uri;
}
server {
listen 443;
server_name localhost;
ssl_certificate /app/docker/public.pem;
ssl_certificate_key /app/docker/private.pem;
ssl on;
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
ssl_prefer_server_ciphers on;
access_log /app/access-443.log;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://localhost:8082;
proxy_read_timeout 90;
proxy_redirect http://localhost:8082 https://localhost;
}
location /public-api/repossession-requests-socket/ {
proxy_pass http://localhost:8082;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
}
Found the problem: the trailing slash at the end of the location stanza.
location /public-api/repossession-requests-socket/ should have been location /public-api/repossession-requests-socket.
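For reference, this is the WebSocket location block from the configuration above with that fix applied:
location /public-api/repossession-requests-socket {
    proxy_pass http://localhost:8082;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}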

Artifactory bad gateway error

I am trying to use Artifactory as a Docker registry, but pushing Docker images gives a Bad Gateway error.
The following is my nginx configuration:
upstream artifactory_lb {
server artifactory01.mycomapany.com:8081;
server artifactory01.mycomapany.com:8081 backup;
server myLoadBalancer.mycompany.com:8081;
}
log_format upstreamlog '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request upstream_response_time $upstream_response_time msec $msec request_time $request_time';
server {
listen 80;
listen 443 ssl;
client_max_body_size 2048M;
location / {
proxy_set_header Host $host:$server_port;
proxy_pass http://artifactory_lb;
proxy_read_timeout 90;
}
access_log /var/log/nginx/access.log upstreamlog;
location /basic_status {
stub_status on;
allow all;
}
}
# Server configuration
server {
listen 2222 ssl default_server;
ssl_certificate /etc/nginx/ssl/self-signed/self.crt;
ssl_certificate_key /etc/nginx/ssl/self-signed/self.key;
ssl_protocols TLSv1.1 TLSv1.2;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
server_name myloadbalancer.mycompany.com;
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
rewrite ^/(v1|v2)/(.*) /api/docker/docker_repo/$1/$2;
client_max_body_size 0;
chunked_transfer_encoding on;
location / {
proxy_read_timeout 900;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://myloadbalancer.company.com:8081/artifactory/;
}
}
The docker command I use to push images is
docker push myloadbalancer:2222/image_name
Nginx error logs show the following error:
24084 connect() failed (111: Connection refused) while connecting to upstream, client: internal_ip, server: , request: "GET /artifactory/inhouse HTTP/1.0", upstream: "http:/internal_ip:8081/artifactory/repo"
What am I missing?
This can be fixed by changing the proxy_pass in the second server block to point at the upstream group:
proxy_pass http://artifactory_lb;
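A sketch of that Docker registry location block with the change applied, everything else kept as in the original:
location / {
    proxy_read_timeout 900;
    proxy_pass_header Server;
    proxy_cookie_path ~*^/.* /;
    proxy_set_header X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://artifactory_lb;   # point at the upstream group instead of the load balancer host
}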
