docker-nginx with docker-gen doesn't catch any of the declared subdomains - nginx

I set up docker-nginx with docker-gen in a docker-compose file:
version: '2'
services:
  nginx:
    image: nginx
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    container_name: nginx
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
  nginx-gen:
    image: jwilder/docker-gen
    command: -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf
    container_name: nginx-gen
    restart: unless-stopped
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    restart: unless-stopped
    volumes:
      - ${NGINX_FILES_PATH}/conf.d:/etc/nginx/conf.d
      - ${NGINX_FILES_PATH}/vhost.d:/etc/nginx/vhost.d
      - ${NGINX_FILES_PATH}/html:/usr/share/nginx/html
      - ${NGINX_FILES_PATH}/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_DOCKER_GEN_CONTAINER: "nginx-gen"
      NGINX_PROXY_CONTAINER: "nginx"
networks:
  default:
    external:
      name: nginx-proxy
Everything works fine: a default.conf file is generated based on my other containers. Here it is:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# bnbkeeper.thibautduchene.fr
upstream bnbkeeper.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# bnbkeeper
server 172.20.0.12:8080;
}
server {
server_name bnbkeeper.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://bnbkeeper.thibautduchene.fr;
}
}
# gags.thibautduchene.fr
upstream gags.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# gogs
server 172.20.0.7:3000;
}
server {
server_name gags.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://gags.thibautduchene.fr;
}
}
# portainer.thibautduchene.fr
upstream portainer.thibautduchene.fr {
## Can be connect with "nginx-proxy" network
# portainer
server 172.20.0.9:9000;
}
server {
server_name portainer.thibautduchene.fr;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://portainer.thibautduchene.fr;
}
}
However, when I browse to any of these proxied addresses, the server doesn't exist and nginx doesn't even catch the request.
It looks like nginx is not even aware of my subdomains.
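(For anyone debugging the same symptom: a quick check, sketched here with a placeholder IP, is to send the request straight to the proxy with the Host header set, bypassing name resolution. If this returns the app while the normal URL does not, the generated config is fine and the hostname simply doesn't resolve.)
curl -H "Host: gags.thibautduchene.fr" http://<server-ip>/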

OK, for those who are as silly as me: don't forget to declare the subdomains at your DNS provider. nginx doesn't handle that part by itself.
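For illustration, the records at the DNS provider might look like this (zone-file style; the IP is a placeholder, not the real server address):
bnbkeeper.thibautduchene.fr.   IN  A  203.0.113.10
gags.thibautduchene.fr.        IN  A  203.0.113.10
portainer.thibautduchene.fr.   IN  A  203.0.113.10
; or a single wildcard record covering every subdomain:
*.thibautduchene.fr.           IN  A  203.0.113.10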

Related

Google App Engine Docker Container 502 Bad Gateway

I am trying to deploy my Docker image to Google App Engine. I successfully managed to build the image and push it to GCR, and deployed it using gcloud app deploy --image 'link-to-image-on-gcr'.
But when accessing the application I get a 502 Bad Gateway. I SSHed into the server, checked the logs of the nginx container running in Docker, and found the log entry below:
2020/05/04 00:52:50 [error] 33#33: *127 connect() failed (111: Connection refused) while connecting to upstream, client: 74.125.24.153, server: , request: "GET /wp-login.php HTTP/1.1", upstream: "http://172.17.0.1:8080/wp-login.php", host: "myappengineservice-myrepo.ue.r.appspot.com"
My deployment only has one container (it's a WordPress image). When deployed to App Engine, I assume App Engine starts my container and exposes the frontend via an nginx proxy, so all requests are routed through that proxy.
After playing around for a while, I edited the Nginx configuration file and came across this line
location / {
proxy_pass http://app_server;
I edited this and replaced it with my WordPress container's internal IP address:
proxy_pass http://172.17.0.6;
And voilà, it seemed to work; the requests are now routed to my Docker container.
This was obviously a temporary fix. How can I make it permanent, and does anyone have an idea why this is happening?
app.yaml
runtime: custom
service: my-wordpress
env: flex
nginx.conf (inside the Nginx container)
daemon off;
worker_processes auto;
events {
worker_connections 4096;
multi_accept on;
}
http {
include mime.types;
server_tokens off;
variables_hash_max_size 2048;
# set max body size to 32m as appengine supports.
client_max_body_size 32m;
tcp_nodelay on;
tcp_nopush on;
underscores_in_headers on;
# GCLB uses a 10 minutes keep-alive timeout. Setting it to a bit more here
# to avoid a race condition between the two timeouts.
keepalive_timeout 650;
# Effectively unlimited number of keepalive requests in the case of GAE flex.
keepalive_requests 4294967295;
upstream app_server {
keepalive 192;
server gaeapp:8080;
}
geo $source_type {
default ext;
127.0.0.0/8 lo;
169.254.0.0/16 sb;
35.191.0.0/16 lb;
130.211.0.0/22 lb;
172.16.0.0/12 do;
}
map $http_upgrade $ws_connection_header_value {
default "";
websocket upgrade;
}
# ngx_http_realip_module gets the second IP address from the last of the X-Forwarded-For header
# X-Forwarded-For: [USER REQUEST PROVIDED X-F-F.]USER-IP.GCLB_IP
set_real_ip_from 0.0.0.0/0;
set_real_ip_from 0::/0;
real_ip_header X-Forwarded-For;
iap_jwt_verify off;
iap_jwt_verify_project_number 96882395728;
iap_jwt_verify_app_id my-project-id;
iap_jwt_verify_key_file /iap_watcher/iap_verify_keys.txt;
iap_jwt_verify_iap_state_file /iap_watcher/iap_state;
iap_jwt_verify_state_cache_time_sec 300;
iap_jwt_verify_key_cache_time_sec 43200;
iap_jwt_verify_logs_only on;
server {
iap_jwt_verify on;
# self signed ssl for load balancer traffic
listen 8443 default_server ssl;
ssl_certificate /etc/ssl/localcerts/lb.crt;
ssl_certificate_key /etc/ssl/localcerts/lb.key;
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AES256:!SHA1;
ssl_prefer_server_ciphers on;
ssl_session_timeout 3h;
proxy_pass_header Server;
gzip on;
gzip_proxied any;
gzip_types text/html text/plain text/css text/xml text/javascript application/json application/javascript application/xml application/xml+rss application/protobuf application/x-protobuf;
gzip_vary on;
# Allow more space for request headers.
large_client_header_buffers 4 32k;
# Allow more space for response headers. These settings apply for response
# only, not requests which buffering is disabled below.
proxy_buffer_size 64k;
proxy_buffers 32 4k;
proxy_busy_buffers_size 72k;
# Explicitly set client buffer size matching nginx default.
client_body_buffer_size 16k;
# If version header present, make sure it's correct.
if ($http_x_appengine_version !~ '(?:^$)|(?:^my-wordpress:20200504t053100(?:\..*)?$)') {
return 444;
}
set $x_forwarded_for_test "";
# If request comes from sb, lo, or do, do not care about x-forwarded-for header.
if ($source_type !~ sb|lo|do) {
set $x_forwarded_for_test $http_x_forwarded_for;
}
# For local health checks only.
if ($http_x_google_vme_health_check = 1) {
set $x_forwarded_for_test "";
}
location / {
proxy_pass http://app_server;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $server_name;
proxy_send_timeout 3600s;
proxy_read_timeout 3600s;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $ws_connection_header_value;
proxy_set_header X-AppEngine-Api-Ticket $http_x_appengine_api_ticket;
proxy_set_header X-AppEngine-Auth-Domain $http_x_appengine_auth_domain;
proxy_set_header X-AppEngine-BlobChunkSize $http_x_appengine_blobchunksize;
proxy_set_header X-AppEngine-BlobSize $http_x_appengine_blobsize;
proxy_set_header X-AppEngine-BlobUpload $http_x_appengine_blobupload;
proxy_set_header X-AppEngine-Cron $http_x_appengine_cron;
proxy_set_header X-AppEngine-Current-Namespace $http_x_appengine_current_namespace;
proxy_set_header X-AppEngine-Datacenter $http_x_appengine_datacenter;
proxy_set_header X-AppEngine-Default-Namespace $http_x_appengine_default_namespace;
proxy_set_header X-AppEngine-Default-Version-Hostname $http_x_appengine_default_version_hostname;
proxy_set_header X-AppEngine-Federated-Identity $http_x_appengine_federated_identity;
proxy_set_header X-AppEngine-Federated-Provider $http_x_appengine_federated_provider;
proxy_set_header X-AppEngine-Https $http_x_appengine_https;
proxy_set_header X-AppEngine-Inbound-AppId $http_x_appengine_inbound_appid;
proxy_set_header X-AppEngine-Inbound-User-Email $http_x_appengine_inbound_user_email;
proxy_set_header X-AppEngine-Inbound-User-Id $http_x_appengine_inbound_user_id;
proxy_set_header X-AppEngine-Inbound-User-Is-Admin $http_x_appengine_inbound_user_is_admin;
proxy_set_header X-AppEngine-QueueName $http_x_appengine_queuename;
proxy_set_header X-AppEngine-Request-Id-Hash $http_x_appengine_request_id_hash;
proxy_set_header X-AppEngine-Request-Log-Id $http_x_appengine_request_log_id;
proxy_set_header X-AppEngine-TaskETA $http_x_appengine_tasketa;
proxy_set_header X-AppEngine-TaskExecutionCount $http_x_appengine_taskexecutioncount;
proxy_set_header X-AppEngine-TaskName $http_x_appengine_taskname;
proxy_set_header X-AppEngine-TaskRetryCount $http_x_appengine_taskretrycount;
proxy_set_header X-AppEngine-TaskRetryReason $http_x_appengine_taskretryreason;
proxy_set_header X-AppEngine-Upload-Creation $http_x_appengine_upload_creation;
proxy_set_header X-AppEngine-User-Email $http_x_appengine_user_email;
proxy_set_header X-AppEngine-User-Id $http_x_appengine_user_id;
proxy_set_header X-AppEngine-User-Is-Admin $http_x_appengine_user_is_admin;
proxy_set_header X-AppEngine-User-Nickname $http_x_appengine_user_nickname;
proxy_set_header X-AppEngine-User-Organization $http_x_appengine_user_organization;
proxy_set_header X-AppEngine-Version "";
add_header X-AppEngine-Flex-AppLatency $request_time always;
}
include /var/lib/nginx/extra/*.conf;
}
server {
# expose /nginx_status but on a different port (8090) to avoid
# external visibility / conflicts with the app.
listen 8090;
location /nginx_status {
stub_status on;
access_log off;
}
location / {
root /dev/null;
}
}
server {
# expose health checks on a different port to avoid
# external visibility / conflicts with the app.
listen 10402 ssl;
ssl_certificate /etc/ssl/localcerts/lb.crt;
ssl_certificate_key /etc/ssl/localcerts/lb.key;
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AES256:!SHA1;
ssl_prefer_server_ciphers on;
ssl_session_timeout 3h;
location = /liveness_check {
if ( -f /tmp/nginx/lameducked ) {
return 503 'lameducked';
}
if ( -f /var/lib/google/ae/unhealthy/sidecars ) {
return 503 'unhealthy sidecars';
}
if ( !-f /var/lib/google/ae/disk_not_full ) {
return 503 'disk full';
}
if ( -f /tmp/nginx/app_lameducked ) {
return 200 'ok';
}
return 200 'ok';
}
location = /readiness_check {
if ( -f /tmp/nginx/lameducked ) {
return 503 'lameducked';
}
if ( -f /var/lib/google/ae/unhealthy/sidecars ) {
return 503 'unhealthy sidecars';
}
if ( !-f /var/lib/google/ae/disk_not_full ) {
return 503 'disk full';
}
if ( -f /tmp/nginx/app_lameducked ) {
return 503 'app lameducked';
}
return 200 'ok';
}
}
# Add session affinity entry to log_format line i.i.f. the GCLB cookie
# is present.
map $cookie_gclb $session_affinity_log_entry {
'' '';
default sessionAffinity=$cookie_gclb;
}
# Output nginx access logs in the standard format, plus additional custom
# fields containing "X-Cloud-Trace-Context" header, the current epoch
# timestamp, the request latency, and "X-Forwarded-For" at the end.
# If you make changes to the log format below, you MUST validate this against
# the parsing regex at:
# GoogleCloudPlatform/appengine-sidecars-docker/fluentd_logger/managed_vms.conf
# (In general, adding to the end of the list does not require a change if the
# field does not need to be logged.)
log_format custom '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'tracecontext="$http_x_cloud_trace_context" '
'timestampSeconds="${msec}000000" '
'latencySeconds="$request_time" '
'x-forwarded-for="$http_x_forwarded_for" '
'uri="$uri" '
'appLatencySeconds="$upstream_response_time" '
'appStatusCode="$upstream_status" '
'upgrade="$http_upgrade" '
'iap_jwt_action="$iap_jwt_action" '
'$session_affinity_log_entry';
access_log /var/log/nginx/access.log custom;
error_log /var/log/nginx/error.log warn;
}
/etc/hosts (inside Nginx container)
root@f9c9cb5df8e2:/etc/nginx# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.1 gaeapp
172.17.0.5 f9c9cb5df8e2
docker ps result (screenshot not reproduced here)
I was able to solve the issue by exposing my WordPress site on port 8080 from my Docker container; it was exposed on port 80 before. It does not make much sense to me, so if anyone knows the root cause, please go ahead and explain.
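For a more permanent fix, one option (a sketch, assuming the stock Debian-based WordPress/Apache image; not an official App Engine recipe) is to bake the port change into your own Dockerfile so Apache listens on 8080, the port the App Engine flex nginx sidecar proxies to (the gaeapp:8080 upstream above):
FROM wordpress:latest
# Make Apache listen on 8080 instead of 80; these are the standard Debian Apache paths.
RUN sed -i 's/Listen 80/Listen 8080/' /etc/apache2/ports.conf \
 && sed -i 's/<VirtualHost \*:80>/<VirtualHost *:8080>/' /etc/apache2/sites-available/000-default.conf
EXPOSE 8080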

Nginx: (111: Connection refused) while connecting to upstream wordpress & docker

Like many people, I have a problem with the following error when I open the website (blog.mydomain.de):
502 Bad Gateway
nginx/1.14.2
2020/03/14 23:59:08 [error] 7#7: *1 connect() failed (111: Connection refused) while connecting to upstream, client: $IP, server: blog.mydomain.de, request: "GET / HTTP/2.0", upstream: "https://192.168.160.5:443/", host: "blog.mydomain.de"
So my problem is with WordPress. I'm also showing the NextCloud config because that one works without any problems. I know the WordPress nginx config should contain more (fastcgi and so on), but I wanted to see whether I get this error even with a minimal config.
worker_processes auto;
include /etc/nginx/modules-enabled/*.conf;
events {
worker_connections 1024;
}
http {
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml
application/javascript application/json application/xml application/rss+xml image/svg+xml;
server_names_hash_bucket_size 64;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
more_clear_headers 'server';
ssl_certificate /etc/letsencrypt/live/mydomain.de-0001/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mydomain.de-0001/privkey.pem;
ssl_trusted_certificate /etc/letsencrypt/live/mydomain.de-0001/chain.pem;
ssl_dhparam /etc/ssl/dhparam.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_ciphers "EECDH-AESGCM:EDH+ESGCM:AES256+EECDH:AES256+EDH";
ssl_prefer_server_ciphers on;
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
server {
listen 80;
listen [::]:80;
server_name blog.mydomain.de cloud.mydomain.de;
return 301 https://$host$request_uri;
#return 301 https://$server_name$request_uri;
}
# NextCloudPi
server {
server_name cloud.mydomain.de;
listen 443 ssl http2;
listen [::]:443 ssl http2;
client_max_body_size 100G;
underscores_in_headers on;
location / {
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header Front-End-Https on;
proxy_pass https://nextcloudpi;
}
}
# NextCloudPi Konfiguration Web-Interface
server {
server_name cloud.mydomain.de;
listen 4443 ssl http2;
listen [::]:4433 ssl http2;
location / {
more_clear_headers 'upgrade';
more_clear_headers 'Strict-Transport-Security';
proxy_ssl_verify off;
proxy_pass https://nextcloudpi:4443;
proxy_pass_header Authorization;
proxy_set_header 'X-Forwarded-Host' cloud.mydomain.de;
proxy_set_header 'X-Forwarded-Proto' https;
proxy_set_header 'X-Forwarded-For' $remote_addr;
proxy_set_header 'X-Forwarded-IP' $remote_addr;
}
}
# WordPress
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name blog.mydomain.de;
client_max_body_size 200m;
underscores_in_headers on;
location / {
proxy_pass http://wordpress;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
}
}
}
I'm using this with docker containers. The compose file looks like this:
version: "3"
networks:
nextcloudpi:
services:
nginx:
restart: always
container_name: nginx
image: cptdaydreamer/nginx:latest
ports:
- 80:80
- 443:443
- 4443:4443
- 6800:6800
volumes:
- /media/storage/nginx:/var/log/nginx
- /etc/ssl:/etc/ssl
- /etc/letsencrypt/live:/etc/letsencrypt/live
- /etc/letsencrypt/archive:/etc/letsencrypt/archive
links:
- wordpress
depends_on:
- nextcloudpi
networks:
- nextcloudpi
- default
nextcloudpi:
restart: always
container_name: nextcloudpi
image: cptdaydreamer/nextcloudpi:latest
expose:
- 80
- 443
- 4443
- 6800
volumes:
- /media/storage/data:/data
- /etc/localtime:/etc/localtime:ro
networks:
- nextcloudpi
portainer:
image: portainer/portainer
command: -H unix:///var/run/docker.sock
restart: always
ports:
- 9001:9000
- 8000:8000
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /media/storage/portainer:/data
container_name: portainer
db:
container_name: mariadb
image: mariadb:latest
#ports:
# - 3306:3306
volumes:
- /media/storage/mariadb:/var/lib/mysql
restart: always
environment:
MYSQL_ROOT_PASSWORD: $PRIVATE
MYSQL_DATABASE: $PRIVATE
MYSQL_USER: $PRIVATE
MYSQL_PASSWORD: $PRIVATE
wordpress:
container_name: wordpress
links:
- db
#ports:
# - 9000:9000
depends_on:
- db
image: wordpress:latest
expose:
- "80"
restart: always
volumes:
- /media/storage/wordpress:/var/www/html
environment:
WORDPRESS_DB_HOST: db:3306
#WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: $PRIVATE
WORDPRESS_DB_PASSWORD: $PRIVATE
WORDPRESS_DB_NAME: $PRIVATE
WORDPRESS_TABLE_PREFIX: $PRIVATE
I don't know what the exact problem is. The logs of the WordPress Docker container show:
[15-Mar-2020 00:50:24] NOTICE: fpm is running, pid 1
[15-Mar-2020 00:50:24] NOTICE: ready to handle connections
Any ideas?
Update on request:
The WordPress image is now latest instead of 7.3-fpm. The nginx.conf above is the one currently in use.
Try pointing nginx at wordpress:9000 instead of using proxy_pass, and change the nginx config to this:
location / {
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass wordpress:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
}
There's a mistake in your understanding. The WordPress php-fpm image only exposes the PHP-FPM service on port 9000; nothing is running at https://wordpress:443, so nginx throws the 502. That's why you should use fastcgi_pass to connect to PHP-FPM instead of proxy_pass (unlike NextCloud, which already exposes HTTPS on port 4443).
When you split your stacks into two docker-compose.yml files, everything becomes clearer and more cleanly separated:
-- wordpress/
--- docker-compose.yml
--- data/
-- nextcloud/
--- docker-compose.yml
--- data/
This is how Docker works best. From my experience, keep each stack standalone: Nextcloud goes with its own database, and then you make another stack with WordPress and another database instance. That takes full advantage of Docker and keeps the applications separate from each other.
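A minimal sketch of what the standalone wordpress/docker-compose.yml could look like under that layout (assuming the php-fpm variant of the image that the fastcgi_pass wordpress:9000 snippet above expects; image tags, paths, and credentials are placeholders):
version: "3"
services:
  wordpress:
    image: wordpress:fpm                  # PHP-FPM only, listening on 9000
    volumes:
      - ./data/wordpress:/var/www/html
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: example          # placeholder
      WORDPRESS_DB_PASSWORD: example      # placeholder
      WORDPRESS_DB_NAME: wordpress
  db:
    image: mariadb:latest
    volumes:
      - ./data/mariadb:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: example        # placeholder
      MYSQL_DATABASE: wordpress
      MYSQL_USER: example                 # placeholder
      MYSQL_PASSWORD: example             # placeholder
  nginx:
    image: nginx:latest
    ports:
      - "8081:80"
    volumes:
      # nginx needs the same files as the fpm container so that try_files and
      # SCRIPT_FILENAME ($document_root$fastcgi_script_name) resolve correctly.
      - ./data/wordpress:/var/www/html:ro
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
The NextCloud stack would then live in its own nextcloud/docker-compose.yml with its own database, as described above.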

kubernetes, port omitted when ingress-nginx redirects

I have a small cluster with three externally exposed services. I use ClusterIP services for internal pod communication and ingress-nginx as a reverse proxy; the ingress connects the internet to the cluster.
When nginx redirects traffic, it does not keep the port in the redirect URL. For example, ykt:31080/workflow redirects to ykt/workflow/login, omitting port 31080, so the server cannot find the page.
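To make the symptom concrete, the redirect looks roughly like this (host and paths are the ones above; the response is illustrative, not captured from the cluster):
$ curl -I http://ykt:31080/workflow
HTTP/1.1 302 Found
Location: http://ykt/workflow/login   <-- the :31080 NodePort has been dropped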
My ingress resource is configured as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: ykt
    http:
      paths:
      - path: /business
        backend:
          serviceName: core-ykt
          servicePort: 80
  - host: ykt
    http:
      paths:
      - path: /pre
        backend:
          serviceName: pre-core-ykt
          servicePort: 80
  - host: ykt
    http:
      paths:
      - path: /workflow
        backend:
          serviceName: virtual-apply-ykt
          servicePort: 80
And part of my ingress controller is configured as follows:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    nodePort: 31080
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      serviceAccount: lb
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
        name: ingress-nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
================================================
Second edit
The datalook_virtual_apply_pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: virtual-apply-ykt
  labels:
    app: virtual-apply-ykt
    purpose: ykt_production
spec:
  containers:
  - name: virtual-apply-ykt
    image: app
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    volumeMounts:
    - name: volume-virtual-apply-ykt
      mountPath: /usr/application
    env:
    - name: spring.config.location
      value: application.properties
  volumes:
  - name: volume-virtual-apply-ykt
    hostPath:
      path: /opt/docker/datalook-virtual-apply
      type: Directory
The datalook_virtual_apply_svc.yaml file is as follows; I have no Deployment file for the datalook_virtual_apply service.
apiVersion: v1
kind: Service
metadata:
  name: virtual-apply-ykt
  labels:
    name: virtual-apply-ykt
spec:
  selector:
    app: virtual-apply-ykt
  type: ClusterIP
  ports:
  - port: 80
    name: tcp
The generated backend nginx configuration is as follows. Instead of making the application listen on 80, can I make 31080 the default port of ingress-nginx?
daemon off;
worker_processes 1;
pid /run/nginx.pid;
worker_rlimit_nofile 1047552;
worker_shutdown_timeout 10s ;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/x86_64-linux-gnu/lua/5.1/?.so;;";
lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";
init_by_lua_block {
require("resty.core")
collectgarbage("collect")
local lua_resty_waf = require("resty.waf")
lua_resty_waf.init()
}
real_ip_header X-Forwarded-For;
real_ip_recursive on;
set_real_ip_from 0.0.0.0/0;
geoip_country /etc/nginx/geoip/GeoIP.dat;
geoip_city /etc/nginx/geoip/GeoLiteCity.dat;
geoip_org /etc/nginx/geoip/GeoIPASNum.dat;
geoip_proxy_recursive on;
aio threads;
aio_write on;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout 75s;
keepalive_requests 100;
client_header_buffer_size 1k;
client_header_timeout 60s;
large_client_header_buffers 4 8k;
client_body_buffer_size 8k;
client_body_timeout 60s;
http2_max_field_size 4k;
http2_max_header_size 16k;
types_hash_max_size 2048;
server_names_hash_max_size 1024;
server_names_hash_bucket_size 32;
map_hash_bucket_size 64;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
variables_hash_bucket_size 128;
variables_hash_max_size 2048;
underscores_in_headers off;
ignore_invalid_headers on;
limit_req_status 503;
include /etc/nginx/mime.types;
default_type text/html;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
gzip_proxied any;
gzip_vary on;
# Custom headers for response
server_tokens on;
# disable warnings
uninitialized_variable_warn off;
# Additional available variables:
# $namespace
# $ingress_name
# $service_name
log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
map $request_uri $loggable {
default 1;
}
access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log /var/log/nginx/error.log notice;
resolver 10.96.0.10 valid=30s;
# Retain the default nginx handling of requests without a "Connection" header
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
map $http_x_forwarded_for $the_real_ip {
default $remote_addr;
}
# trust http_x_forwarded_proto headers correctly indicate ssl offloading
map $http_x_forwarded_proto $pass_access_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}
# validate $pass_access_scheme and $scheme are http to force a redirect
map "$scheme:$pass_access_scheme" $redirect_to_https {
default 0;
"http:http" 1;
"https:http" 1;
}
map $http_x_forwarded_port $pass_server_port {
default $http_x_forwarded_port;
'' $server_port;
}
map $pass_server_port $pass_port {
443 443;
default $pass_server_port;
}
# Obtain best http host
map $http_host $this_host {
default $http_host;
'' $host;
}
map $http_x_forwarded_host $best_http_host {
default $http_x_forwarded_host;
'' $this_host;
}
# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
# If no such header is provided, it can provide a random value.
map $http_x_request_id $req_id {
default $http_x_request_id;
"" $request_id;
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols TLSv1.2;
# turn on session caching to drastically improve performance
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
# allow configuring ssl session tickets
ssl_session_tickets on;
# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;
# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
ssl_ecdh_curve auto;
proxy_ssl_session_reuse on;
upstream upstream-default-backend {
least_conn;
keepalive 32;
server 192.168.38.188:8080 max_fails=0 fail_timeout=0;
}
upstream default-gearbox-rack-api-gateway-5555 {
least_conn;
keepalive 32;
server 192.168.38.22:5555 max_fails=0 fail_timeout=0;
}
## start server _
server {
server_name _ ;
listen 80 default_server backlog=511;
listen [::]:80 default_server backlog=511;
set $proxy_upstream_name "-";
listen 443 default_server backlog=511 ssl http2;
listen [::]:443 default_server backlog=511 ssl http2;
# PEM sha: c1e1519ef05c8531e334ee947817a2ad495fe83a
ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem;
location / {
log_by_lua_block {
}
if ($scheme = https) {
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
}
access_log off;
port_in_redirect off;
set $proxy_upstream_name "upstream-default-backend";
set $namespace "";
set $ingress_name "";
set $service_name "";
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering "off";
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_next_upstream_tries 0;
proxy_pass http://upstream-default-backend;
proxy_redirect off;
}
# health checks in cloud providers require the use of port 80
location /healthz {
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
allow 127.0.0.1;
allow ::1;
deny all;
access_log off;
stub_status on;
}
}
## end server _
## start server master8g
server {
server_name master8g ;
listen 80;
listen [::]:80;
set $proxy_upstream_name "-";
location / {
log_by_lua_block {
}
port_in_redirect off;
set $proxy_upstream_name "default-gearbox-rack-api-gateway-5555";
set $namespace "default";
set $ingress_name "my-ingress";
set $service_name "gearbox-rack-api-gateway";
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering "off";
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_next_upstream_tries 0;
proxy_pass http://default-gearbox-rack-api-gateway-5555;
proxy_redirect off;
}
}
## end server master8g
# default server, used for NGINX healthcheck and access to nginx stats
server {
# Use the port 18080 (random value just to avoid known ports) as default port for nginx.
# Changing this value requires a change in:
# https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
listen 18080 default_server backlog=511;
listen [::]:18080 default_server backlog=511;
set $proxy_upstream_name "-";
location /healthz {
access_log off;
return 200;
}
location /is-dynamic-lb-initialized {
access_log off;
content_by_lua_block {
local configuration = require("configuration")
local backend_data = configuration.get_backends_data()
if not backend_data then
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
return
end
ngx.say("OK")
ngx.exit(ngx.HTTP_OK)
}
}
location /nginx_status {
set $proxy_upstream_name "internal";
access_log off;
stub_status on;
}
location / {
set $proxy_upstream_name "upstream-default-backend";
proxy_pass http://upstream-default-backend;
}
}
}
stream {
log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
access_log /var/log/nginx/access.log log_stream;
error_log /var/log/nginx/error.log;
# TCP services
# UDP services
}

Nginx proxy server error: "upstream server temporarily disabled while connecting to upstream"

I'm getting an error when using the Docker image for setting up an nginx proxy server: nginx-proxy. If I hit an endpoint on my site, the response is incredibly slow to come back in some instances. This happens pretty much immediately if I hit an endpoint three times, for example, in relatively quick succession. The nginx log shows the following error:
2017/05/14 09:24:26 [warn] 26#26: *29 upstream server temporarily disabled while connecting to upstream, client: 10.255.0.2, server: [ip removed], request: "GET /documents/5918206a-8da0-4deb-86b2-6b627867e0d5 HTTP/1.1", upstream: "http://10.255.0.4:8080/documents/5918206a-8da0-4deb-86b2-6b627867e0d5", host: "[ip removed]"
The log for my back end service doesn't show any errors, so I'm not sure what may be going on. I am guessing it is a configuration issue with nginx, which could be fixed by changing the settings, but I am not sure where to start. Does anyone have any ideas?
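For reference, and without claiming this is the cause here, these are the nginx directives that usually govern when an upstream is marked "temporarily disabled". The values below are illustrative knobs to experiment with, not part of the generated nginx-proxy config:
upstream backend_example {                 # hypothetical upstream name
    # After max_fails failed attempts within fail_timeout, nginx marks the
    # server as temporarily disabled for fail_timeout seconds.
    server 10.255.0.4:8080 max_fails=3 fail_timeout=30s;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend_example;
        proxy_connect_timeout 5s;   # how long a single connect attempt may take
        proxy_read_timeout 60s;     # how long to wait for the upstream response
        # Which failures make nginx try the next server / disable this one.
        proxy_next_upstream error timeout http_502 http_503 http_504;
    }
}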
My configuration ends up looking like this when the Docker instance runs:
nginx.conf:
# cat nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
server_names_hash_bucket_size 128;
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
conf.d/default.conf:
daemon off;
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
upstream [ip removed] {
## Can be connect with "ingress" network
# datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
server 10.255.0.6:8080;
## Can be connect with "datemo_default" network
# datemo_datemo.1.dean8edsp7ytoevagjnemb8bb
server 10.0.0.5:8080;
}
server {
server_name [ip removed];
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://[ip removed];
}
}

Owncloud 8 with Docker behind nginx available through subdirectory

I want to run several services/pages behind nginx. Each service should be available through a subdirectory instead of a subdomain.
I'm using jwilder/nginx-proxy as the proxy container:
nginx_proxy:
  image: jwilder/nginx-proxy
  container_name: nginx-proxy
  ports:
    - 80:80
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
And the owncloud container:
web:
  image: owncloud:8.1
  container_name: my_owncloud
  environment:
    - VIRTUAL_HOST=www.example.com
  ports:
    - 8081:80
The modified nginx config:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
upstream cloud {
server 172.17.0.3:80;
}
server {
server_name domain.org www.domain.org;
listen 80;
access_log /var/log/nginx/access.log vhost;
index index.html index.htm index.php;
root /var/www/main;
location /cloud/ {
proxy_pass http://cloud/;
}
location / {
proxy_pass http://www.some-domain.com/;
}
location /sub/ {
alias /var/www/sub/;
}
}
The problem is that ownCloud tries to load styles, images, etc. from / instead of /cloud. ownCloud itself is working and is reachable at domain.org:8081. Do I have to add some rewrite, proxy_redirect, or other directives?
You can use 'overwritewebroot' as described in the manual. Edit your owncloud's 'config/config.php' and include:
<?php
$CONFIG = array (
...
'overwritewebroot' => '/cloud',
...
);
Then owncloud will consider '/cloud/' as the base for any internal URL (e.g., images, styles, etc.).
