I want to run several services/pages behind nginx. Each service should be available through a subdirectory instead of a subdomain.
I'm using jwilder/nginx-proxy as the proxy container:
nginx_proxy:
  image: jwilder/nginx-proxy
  container_name: nginx-proxy
  ports:
    - 80:80
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
And the owncloud container:
web:
  image: owncloud:8.1
  container_name: my_owncloud
  environment:
    - VIRTUAL_HOST=www.example.com
  ports:
    - 8081:80
The modified nginx config:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
upstream cloud {
server 172.17.0.3:80;
}
server {
server_name domain.org www.domain.org;
listen 80;
access_log /var/log/nginx/access.log vhost;
index index.html index.htm index.php;
root /var/www/main;
location /cloud/ {
proxy_pass http://cloud/;
}
location / {
proxy_pass http://www.some-domain.com/;
}
location /sub/ {
alias /var/www/sub/;
}
}
The problem is that ownCloud tries to load styles, images, etc. from / instead of /cloud. ownCloud itself is working and is reachable at domain.org:8081. Do I have to add a rewrite, a proxy_redirect, or something else?
You can use 'overwritewebroot' as described in the manual. Edit your owncloud's 'config/config.php' and include:
<?php
$CONFIG = array (
...
'overwritewebroot' => '/cloud',
...
);
ownCloud will then treat '/cloud/' as the base for every internal URL (images, styles, and so on).
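For reference, a minimal sketch of the matching block on the nginx side, reusing the upstream name 'cloud' and the location from the question (nothing beyond what is already shown there is needed once overwritewebroot is set):
location /cloud/ {
    # the trailing slash strips the /cloud prefix before the request reaches ownCloud
    proxy_pass http://cloud/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}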
Related
I am trying to set up a homeserver with Jenkins and JFrog Artifactory OSS using docker-compose and nginx as a reverse proxy.
The docker host has "homeserver" as hostname and my goal is to reach the Jenkins over http://homeserver/jenkins and artifactory over http://homeserver/artifactory.
While setting this up for Jenkins was no problem, I can't get artifactory to run as I want it to.
My docker-compose.yml is as follows:
version: '3.8'
services:
  reverseproxy:
    image: nginx
    container_name: homeserver-reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
  jenkins:
    image: jenkins/jenkins:lts
    container_name: homeserver-jenkins
    environment:
      - JENKINS_OPTS="--prefix=/jenkins"
  artifactory:
    image: docker.bintray.io/jfrog/artifactory-oss:latest
    container_name: homeserver-artifactory
    volumes:
      - artifactory-data:/var/opt/jfrog/artifactory
    ports:
      - "8081:8081"
      - "8082:8082"
volumes:
  artifactory-data:
I started off with the following nginx configuration:
events {}
http {
upstream jenkins {
server homeserver-jenkins:8080;
}
server {
server_name homeserver;
location /jenkins {
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://jenkins;
proxy_read_timeout 90;
proxy_http_version 1.1;
proxy_request_buffering off;
}
rewrite ^/$ /ui/ redirect;
rewrite ^/ui$ /ui/ redirect;
location / {
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
chunked_transfer_encoding on;
client_max_body_size 0;
proxy_read_timeout 2400s;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://homeserver-artifactory:8082;
proxy_next_upstream error timeout non_idempotent;
proxy_next_upstream_tries 1;
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~ ^/artifactory/ {
proxy_pass http://homeserver-artifactory:8081;
}
}
}
}
This lets me access Jenkins as intended over http://homeserver/jenkins.
Artifactory is running (and accessible) with this configuration, but only directly at /.
Note that the Artifactory parts are exactly the configuration the JFrog website suggests.
That is not what I want, though.
In order to switch Artifactory to http://homeserver/artifactory I see two possible options:
1. Configure Artifactory to include a URL prefix (like I configured Jenkins with JENKINS_OPTS="--prefix=/jenkins"). However, I could not figure out how to do this. Artifactory's system.yml lets me configure the ports, but not the URL. Setting the Base URL over the web interface does not seem to work either; it only affects redirect URLs and generated links (precisely as stated in the tooltip on the website).
2. Change the nginx configuration and rewrite the request URIs. I tried, but it just doesn't work the way I hoped it would.
Here is what I tried for the second option:
events {}
http {
upstream jenkins {
server homeserver-jenkins:8080;
}
server {
server_name homeserver;
location /jenkins {
# rewrite ^/jenkins(.*)$ $1 break;
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://jenkins;
proxy_read_timeout 90;
# proxy_redirect http://homeserver-jenkins:8080
proxy_http_version 1.1;
proxy_request_buffering off;
}
rewrite ^/jfrog/$ /jfrog/ui/ redirect;
rewrite ^/jfrog/ui$ /jfrog/ui/ redirect;
location /jfrog/ {
if ($http_x_forwarded_proto = '') {
set $http_x_forwarded_proto $scheme;
}
rewrite ^/jfrog(.*)$ $1 break;
chunked_transfer_encoding on;
client_max_body_size 0;
proxy_read_timeout 2400s;
proxy_pass_header Server;
proxy_cookie_path ~*^/.* /;
proxy_pass http://homeserver-artifactory:8082;
proxy_next_upstream error timeout non_idempotent;
proxy_next_upstream_tries 1;
proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
location ~ ^/artifactory/ {
proxy_pass http://homeserver-artifactory:8081;
}
}
}
}
In the configuration above I already changed the path from http://homeserver/artifactory to http://homeserver/jfrog to avoid clashes with the nested location block; I don't know whether that is important or not.
When trying to run this configuration, I get a lot of error messages from my reverseproxy like
"GET /ui/css/chunk-vendors.4485dbea.css HTTP/1.1" 404 154
and the JFrog logo bitmap on the loading screen does not show; the site itself does not load correctly either.
After checking the network traffic with Firefox, it seems to me that the problem comes from the <link/> elements in the HTML document that is loaded when visiting http://homeserver/jfrog/ui/.
For example the CSS which led to the error message above comes from:
<link href="/ui/css/chunk-vendors.4485dbea.css" rel="preload" as="style">
Firefox then tries to load http://homeserver/ui/css/chunk-vendors.4485dbea.css. This does not work; the correct URL would be http://homeserver/jfrog/ui/css/chunk-vendors.4485dbea.css.
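To make the failure mode explicit (a commentary sketch over the second configuration above, not a fix): the rewrite inside location /jfrog/ only changes the incoming request URI before proxying; it does not touch the URLs inside the HTML that Artifactory returns, so the browser's follow-up requests miss every location block:
# Browser follows <link href="/ui/css/chunk-vendors.4485dbea.css" ...>
# and requests GET /ui/css/chunk-vendors.4485dbea.css
#   location /jenkins          -> no match
#   location /jfrog/           -> no match (the prefix is /ui/, not /jfrog/)
#   location ~ ^/artifactory/  -> no match
# There is no location / in this configuration, so nginx answers 404,
# which is exactly the "GET /ui/css/... 404" seen in the proxy log.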
I am trying to deploy my docker image to Google App Engine. I successfully managed to build the image, push it to GCR, and deploy it using gcloud app deploy --image 'link-to-image-on-gcr'.
But when accessing the application I'm getting a 502 Bad Gateway. I SSHed into the server, checked the logs of the nginx container in docker, and discovered the log below:
2020/05/04 00:52:50 [error] 33#33: *127 connect() failed (111: Connection refused) while connecting to upstream, client: 74.125.24.153, server: , request: "GET /wp-login.php HTTP/1.1", upstream: "http://172.17.0.1:8080/wp-login.php", host: "myappengineservice-myrepo.ue.r.appspot.com"
By default, my docker image only has one container (it's a WordPress image). When deployed to App Engine, I suppose App Engine starts my container and exposes the frontend via an nginx proxy, so all requests are routed through that nginx proxy.
After playing around for a while, I edited the Nginx configuration file and came across this line
location / {
proxy_pass http://app_server;
I edited this and replaced it with my WordPress docker container's internal IP address:
(proxy_pass http://172.17.0.6;)
And voilà, it seems to have worked; requests are now being routed to my docker container.
This is obviously a temporary fix, though. How can I make it permanent, and does anyone have an idea why this is happening?
app.yaml
runtime: custom
service: my-wordpress
env: flex
nginx.conf (inside the Nginx container)
daemon off;
worker_processes auto;
events {
worker_connections 4096;
multi_accept on;
}
http {
include mime.types;
server_tokens off;
variables_hash_max_size 2048;
# set max body size to 32m as appengine supports.
client_max_body_size 32m;
tcp_nodelay on;
tcp_nopush on;
underscores_in_headers on;
# GCLB uses a 10 minutes keep-alive timeout. Setting it to a bit more here
# to avoid a race condition between the two timeouts.
keepalive_timeout 650;
# Effectively unlimited number of keepalive requests in the case of GAE flex.
keepalive_requests 4294967295;
upstream app_server {
keepalive 192;
server gaeapp:8080;
}
geo $source_type {
default ext;
127.0.0.0/8 lo;
169.254.0.0/16 sb;
35.191.0.0/16 lb;
130.211.0.0/22 lb;
172.16.0.0/12 do;
}
map $http_upgrade $ws_connection_header_value {
default "";
websocket upgrade;
}
# ngx_http_realip_module gets the second IP address from the last of the X-Forwarded-For header
# X-Forwarded-For: [USER REQUEST PROVIDED X-F-F.]USER-IP.GCLB_IP
set_real_ip_from 0.0.0.0/0;
set_real_ip_from 0::/0;
real_ip_header X-Forwarded-For;
iap_jwt_verify off;
iap_jwt_verify_project_number 96882395728;
iap_jwt_verify_app_id my-project-id;
iap_jwt_verify_key_file /iap_watcher/iap_verify_keys.txt;
iap_jwt_verify_iap_state_file /iap_watcher/iap_state;
iap_jwt_verify_state_cache_time_sec 300;
iap_jwt_verify_key_cache_time_sec 43200;
iap_jwt_verify_logs_only on;
server {
iap_jwt_verify on;
# self signed ssl for load balancer traffic
listen 8443 default_server ssl;
ssl_certificate /etc/ssl/localcerts/lb.crt;
ssl_certificate_key /etc/ssl/localcerts/lb.key;
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AES256:!SHA1;
ssl_prefer_server_ciphers on;
ssl_session_timeout 3h;
proxy_pass_header Server;
gzip on;
gzip_proxied any;
gzip_types text/html text/plain text/css text/xml text/javascript application/json application/javascript application/xml application/xml+rss application/protobuf application/x-protobuf;
gzip_vary on;
# Allow more space for request headers.
large_client_header_buffers 4 32k;
# Allow more space for response headers. These settings apply for response
# only, not requests which buffering is disabled below.
proxy_buffer_size 64k;
proxy_buffers 32 4k;
proxy_busy_buffers_size 72k;
# Explicitly set client buffer size matching nginx default.
client_body_buffer_size 16k;
# If version header present, make sure it's correct.
if ($http_x_appengine_version !~ '(?:^$)|(?:^my-wordpress:20200504t053100(?:\..*)?$)') {
return 444;
}
set $x_forwarded_for_test "";
# If request comes from sb, lo, or do, do not care about x-forwarded-for header.
if ($source_type !~ sb|lo|do) {
set $x_forwarded_for_test $http_x_forwarded_for;
}
# For local health checks only.
if ($http_x_google_vme_health_check = 1) {
set $x_forwarded_for_test "";
}
location / {
proxy_pass http://app_server;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $server_name;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $ws_connection_header_value;
proxy_set_header X-AppEngine-Api-Ticket $http_x_appengine_api_ticket;
proxy_set_header X-AppEngine-Auth-Domain $http_x_appengine_auth_domain;
proxy_set_header X-AppEngine-BlobChunkSize $http_x_appengine_blobchunksize;
proxy_set_header X-AppEngine-BlobSize $http_x_appengine_blobsize;
proxy_set_header X-AppEngine-BlobUpload $http_x_appengine_blobupload;
proxy_set_header X-AppEngine-Cron $http_x_appengine_cron;
proxy_set_header X-AppEngine-Current-Namespace $http_x_appengine_current_namespace;
proxy_set_header X-AppEngine-Datacenter $http_x_appengine_datacenter;
proxy_set_header X-AppEngine-Default-Namespace $http_x_appengine_default_namespace;
proxy_set_header X-AppEngine-Default-Version-Hostname $http_x_appengine_default_version_hostname;
proxy_set_header X-AppEngine-Federated-Identity $http_x_appengine_federated_identity;
proxy_set_header X-AppEngine-Federated-Provider $http_x_appengine_federated_provider;
proxy_set_header X-AppEngine-Https $http_x_appengine_https;
proxy_set_header X-AppEngine-Inbound-AppId $http_x_appengine_inbound_appid;
proxy_set_header X-AppEngine-Inbound-User-Email $http_x_appengine_inbound_user_email;
proxy_set_header X-AppEngine-Inbound-User-Id $http_x_appengine_inbound_user_id;
proxy_set_header X-AppEngine-Inbound-User-Is-Admin $http_x_appengine_inbound_user_is_admin;
proxy_set_header X-AppEngine-QueueName $http_x_appengine_queuename;
proxy_set_header X-AppEngine-Request-Id-Hash $http_x_appengine_request_id_hash;
proxy_set_header X-AppEngine-Request-Log-Id $http_x_appengine_request_log_id;
proxy_set_header X-AppEngine-TaskETA $http_x_appengine_tasketa;
proxy_set_header X-AppEngine-TaskExecutionCount $http_x_appengine_taskexecutioncount;
proxy_set_header X-AppEngine-TaskName $http_x_appengine_taskname;
proxy_set_header X-AppEngine-TaskRetryCount $http_x_appengine_taskretrycount;
proxy_set_header X-AppEngine-TaskRetryReason $http_x_appengine_taskretryreason;
proxy_set_header X-AppEngine-Upload-Creation $http_x_appengine_upload_creation;
proxy_set_header X-AppEngine-User-Email $http_x_appengine_user_email;
proxy_set_header X-AppEngine-User-Id $http_x_appengine_user_id;
proxy_set_header X-AppEngine-User-Is-Admin $http_x_appengine_user_is_admin;
proxy_set_header X-AppEngine-User-Nickname $http_x_appengine_user_nickname;
proxy_set_header X-AppEngine-User-Organization $http_x_appengine_user_organization;
proxy_set_header X-AppEngine-Version "";
add_header X-AppEngine-Flex-AppLatency $request_time always;
}
include /var/lib/nginx/extra/*.conf;
}
server {
# expose /nginx_status but on a different port (8090) to avoid
# external visibility / conflicts with the app.
listen 8090;
location /nginx_status {
stub_status on;
access_log off;
}
location / {
root /dev/null;
}
}
server {
# expose health checks on a different port to avoid
# external visibility / conflicts with the app.
listen 10402 ssl;
ssl_certificate /etc/ssl/localcerts/lb.crt;
ssl_certificate_key /etc/ssl/localcerts/lb.key;
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AES256:!SHA1;
ssl_prefer_server_ciphers on;
ssl_session_timeout 3h;
location = /liveness_check {
if ( -f /tmp/nginx/lameducked ) {
return 503 'lameducked';
}
if ( -f /var/lib/google/ae/unhealthy/sidecars ) {
return 503 'unhealthy sidecars';
}
if ( !-f /var/lib/google/ae/disk_not_full ) {
return 503 'disk full';
}
if ( -f /tmp/nginx/app_lameducked ) {
return 200 'ok';
}
return 200 'ok';
}
location = /readiness_check {
if ( -f /tmp/nginx/lameducked ) {
return 503 'lameducked';
}
if ( -f /var/lib/google/ae/unhealthy/sidecars ) {
return 503 'unhealthy sidecars';
}
if ( !-f /var/lib/google/ae/disk_not_full ) {
return 503 'disk full';
}
if ( -f /tmp/nginx/app_lameducked ) {
return 503 'app lameducked';
}
return 200 'ok';
}
}
# Add session affinity entry to log_format line i.i.f. the GCLB cookie
# is present.
map $cookie_gclb $session_affinity_log_entry {
'' '';
default sessionAffinity=$cookie_gclb;
}
# Output nginx access logs in the standard format, plus additional custom
# fields containing "X-Cloud-Trace-Context" header, the current epoch
# timestamp, the request latency, and "X-Forwarded-For" at the end.
# If you make changes to the log format below, you MUST validate this against
# the parsing regex at:
# GoogleCloudPlatform/appengine-sidecars-docker/fluentd_logger/managed_vms.conf
# (In general, adding to the end of the list does not require a change if the
# field does not need to be logged.)
log_format custom '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'tracecontext="$http_x_cloud_trace_context" '
'timestampSeconds="${msec}000000" '
'latencySeconds="$request_time" '
'x-forwarded-for="$http_x_forwarded_for" '
'uri="$uri" '
'appLatencySeconds="$upstream_response_time" '
'appStatusCode="$upstream_status" '
'upgrade="$http_upgrade" '
'iap_jwt_action="$iap_jwt_action" '
'$session_affinity_log_entry';
access_log /var/log/nginx/access.log custom;
error_log /var/log/nginx/error.log warn;
}
/etc/hosts (inside Nginx container)
root@f9c9cb5df8e2:/etc/nginx# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.1 gaeapp
172.17.0.5 f9c9cb5df8e2
docker ps result
I was able to solve the issue by exposing my WordPress site on port 8080 from my docker container; it was exposed on port 80 before. It does not make much sense to me, so if anyone knows the root cause, please go ahead and explain.
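A plausible reason, pieced together from the files posted above rather than from official documentation I can cite: the App Engine sidecar proxies every request to an upstream that points at the app container on port 8080, so if nothing is listening on that port, the result is exactly the "connection refused" 502 seen in the log.
# from the posted nginx.conf
upstream app_server {
    keepalive 192;
    server gaeapp:8080;        # /etc/hosts maps gaeapp to 172.17.0.1
}
# from the posted server block
location / {
    proxy_pass http://app_server;   # 502 if nothing is listening on 172.17.0.1:8080
}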
I've deployed two Node.js apps on Amazon Beanstalk: one is the frontend developed with React and running with serve, the other is an MQTT broker with a web socket handler.
The load balancer is nginx 1.12.1, with this configuration (written inside the .ebextensions folder of the backend project):
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
upstream websocket {
server 127.0.0.1:5000;
}
server {
listen 8080;
if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
set $year $1;
set $month $2;
set $day $3;
set $hour $4;
}
access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;
access_log /var/log/nginx/access.log main;
large_client_header_buffers 8 32k;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass "http://127.0.0.1:3003";
proxy_redirect off;
# Socket.IO Support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass_request_headers on;
}
location /subscriptions {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
gzip on;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
The frontend should be able to consume wss://mqtt_url/subscriptions, but with whatever configuration I use, I always get "WebSocket is closed before the connection is established". This configuration does seem to work for non-secure WebSocket connections.
The server waiting for connections is just a simple HTTP server like this:
const server = createServer();
server.listen(5000, '0.0.0.0', () =>
new SubscriptionServer({
execute,
subscribe,
schema,
onConnect: async (connectionParams) => {
},
}, {
server,
path: '/subscriptions',
}));
Beanstalk load balancing is configured as follows:
Port: 80, protocol TCP
Secure port: 443, protocol SSL
Health check: TCP ping on port 80
Cross-zone load balancing is enabled
Connection draining is enabled with a 200 second timeout
Searching around for information, I was only able to find that it's good to select TCP/SSL as the protocol, but apart from that it's not very clear how to configure WSS here.
Any suggestion will be appreciated!
Thanks.
Try adding this to your /subscriptions location block:
proxy_read_timeout 86400;
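In context, the /subscriptions block from the question would then look roughly like this (a sketch; everything except the added read timeout is taken from the question's configuration):
location /subscriptions {
    proxy_pass http://websocket;
    proxy_http_version 1.1;
    proxy_set_header Host $http_host;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # keep long-lived WebSocket connections from being closed by the proxy
    proxy_read_timeout 86400;
}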
I set up docker-nginx with docker-gen in a docker-compose file:
version: '2'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: unless-stopped
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: unless-stopped
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
networks:
  default:
    external:
      name: nginx-proxy
Everything works fine; a default.conf file is generated based on my other containers. Here it is:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# bnbkeeper.thibautduchene.fr
upstream bnbkeeper.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# bnbkeeper
server 172.20.0.12:8080;
}
server {
server_name bnbkeeper.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://bnbkeeper.thibautduchene.fr;
}
}
# gags.thibautduchene.fr
upstream gags.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# gogs
server 172.20.0.7:3000;
}
server {
server_name gags.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://gags.thibautduchene.fr;
}
}
# portainer.thibautduchene.fr
upstream portainer.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# portainer
server 172.20.0.9:9000;
}
server {
server_name portainer.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://portainer.thibautduchene.fr;
}
}
However, when I try to reach any of these proxied addresses, the server doesn't exist and nginx doesn't even catch the request...
It looks like nginx is not even aware of my subdomain.
OK, for those who are as silly as me: don't forget to add the subdomain at your DNS provider; nginx doesn't handle that by itself, so without the DNS record the request never even reaches the proxy.
I'm getting an error when using the docker image for setting up an nginx proxy server: nginx-proxy. If I hit an endpoint on my site, the response is in some instances incredibly slow to come back. This happens pretty much immediately if I hit an endpoint three times, for example, in relatively quick succession. The nginx log shows the following error:
2017/05/14 09:24:26 [warn] 26#26: *29 upstream server temporarily
disabled while connecting to upstream, client: 10.255.0.2, server: [ip
removed], request: "GET
/documents/5918206a-8da0-4deb-86b2-6b627867e0d5 HTTP/1.1", upstream:
"http://10.255.0.4:8080/documents/5918206a-8da0-4deb-86b2-6b627867e0d5",
host: "[ip removed]"
The log for my back end service doesn't show any errors, so I'm not sure what may be going on. I am guessing it is a configuration issue with nginx, which could be fixed by changing the settings, but I am not sure where to start. Does anyone have any ideas?
This is what my configuration looks like once the docker instance is running:
nginx.conf:
# cat nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
server_names_hash_bucket_size 128;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
conf.d/default.conf:
daemon off;
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
upstream [ip removed] {
## Can be connect with "ingress" network
# datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
server 10.255.0.6:8080;
## Can be connect with "datemo_default" network
# datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
server 10.0.0.5:8080;
}
server {
server_name [ip removed];
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://[ip removed];
}
}