docker-compose error with ghost and an nginx proxy - nginx

So, I'm getting started with docker-compose. Right now I'm having an issue with nginx proxying requests.
I have a container which uses the ghost image and is exposed on port 2368:
ghostblog:
  container_name: ghostblog
  image: ghost
  restart: always
  ports:
    - 2368:2368
  env_file:
    - ./config.env
  volumes:
    - "./petemsGhost/content/themes:/usr/src/ghost/content/themes"
    - "./petemsGhost/content/apps:/usr/src/ghost/content/apps"
    - "./petemsGhost/content/images:/usr/src/ghost/content/images"
    - "./petemsGhost/content/data:/usr/src/ghost/content/data"
    - "./petemsGhost/config:/var/lib/ghost"
And I'm linking that to an nginx container that is proxying requests to the container:
ghost_nginx:
  restart: always
  build: ./ghostNginx/
  ports:
    - 80:80
    - 443:443
  links:
    - 'ghostblog:ghostblog'
Inside that build, I copy over a bunch of stuff: keys, config, etc.:
Dockerfile
FROM centos:centos6
RUN yum install epel-release -y
RUN yum install -y nginx curl
# Delete defaults
RUN rm /etc/nginx/nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/nginx.conf
COPY sites-enabled/petersouter.co.uk.conf /etc/nginx/sites-available/petersouter.co.uk.conf
COPY conf.d/ghost_blog_petersouter.co.uk-upstream.conf /etc/nginx/conf.d/ghost_blog_petersouter.co.uk-upstream.conf
COPY petersouter.co.uk.crt /etc/nginx/petersouter.co.uk.crt
COPY petersouter.co.uk.key /etc/nginx/petersouter.co.uk.key
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
/etc/nginx/conf.d/ghost_blog_petersouter.co.uk-upstream.conf
upstream ghost_blog_petersouter.co.uk {
    server ghostblog:2368 fail_timeout=10s;
}
/etc/nginx/sites-enabled/petersouter.co.uk.conf
# Redirect all non-SSL to SSL
server {
    listen 0.0.0.0:80;
    return 301 https://$server_name$request_uri;
}

# Main SSL Config Block
server {
    listen 0.0.0.0:443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/petersouter.co.uk.crt;
    ssl_certificate_key /etc/nginx/petersouter.co.uk.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA;
    ssl_prefer_server_ciphers on;
    index index.html index.htm index.php;
    access_log /var/log/nginx/ssl-petersouter.co.uk.access.log combined;
    error_log /var/log/nginx/ssl-petersouter.co.uk.error.log;

    location / {
        proxy_pass http://ghost_blog_petersouter.co.uk;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_buffering off;
    }
}
And the linking is working, because I can see it in the nginx container:
$ docker exec -i -t petersouterblogcompose_ghost_nginx_1 bash
$ curl ghostblog:2368
Moved Permanently. Redirecting to https://petersouter.co.uk/
And outside of the container I can curl the ghost instance directly:
$ curl 0.0.0.0:2368
Moved Permanently. Redirecting to https://petersouter.co.uk/
But when I try to go to port 80, which should redirect correctly, I get no response:
$ curl 0.0.0.0:80
curl: (52) Empty reply from server
I'm guessing that I've messed something up in the nginx config somewhere, as everything else seems to be working as intended.

Worked it out, it's always the simple things!
Note this line of the nginx Dockerfile:
COPY sites-enabled/petersouter.co.uk.conf /etc/nginx/sites-available/petersouter.co.uk.conf
I'm copying into the sites-available folder, so the conf is never getting loaded! Fixed that:
COPY sites-enabled/petersouter.co.uk.conf /etc/nginx/sites-enabled/petersouter.co.uk.conf
And everything worked! :)
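Worth spelling out why the destination directory mattered: on a Debian-style layout like the one this Dockerfile sets up, nginx.conf only pulls vhosts in from sites-enabled, so a file left in sites-available is never read. A minimal sketch of the relevant include lines (assuming that layout):

```nginx
# Inside the http { } block of /etc/nginx/nginx.conf
http {
    include /etc/nginx/conf.d/*.conf;    # the upstream definition is loaded from here
    include /etc/nginx/sites-enabled/*;  # vhost configs must live (or be symlinked) here
}
```

Running nginx -T inside the container dumps the fully resolved configuration, which makes a silently unloaded vhost easy to spot.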

Related

docker-compose nginx/certbot website does not load

I'm setting up a very simple docker-compose script. It should set up nginx, create a Let's Encrypt certificate, and then serve the nginx default website to the browser over HTTPS.
However, when I go to the website it loads for a long time and then doesn't give me any useful error message other than "yourfootprint.dk took too long to respond".
Creating certificates works, so I know that the certbot part is fine.
I also know that the server and the domain work: if I run a plain nginx container without the docker-compose setup and the nginx.dev.conf, the nginx default website is served fine.
I have a hunch that my nginx.dev.conf file is wrong and incoming requests run into an infinite redirect loop.
./docker-compose.yml
version: '3'
services:
  webserver:
    image: nginx:stable
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    volumes:
      - ./data/nginx/nginx.dev.conf:/etc/nginx/conf.d/default.conf:ro
      - ./data/certbot/www:/var/www/certbot/:ro
      - ./data/certbot/conf/:/etc/nginx/ssl/:ro
  certbot:
    image: certbot/certbot:latest
    volumes:
      - ./data/certbot/www/:/var/www/certbot/:rw
      - ./data/certbot/conf/:/etc/letsencrypt/:rw
./data/nginx/nginx.dev.conf
server {
    listen 80;
    listen [::]:80;
    server_name yourfootprint.dk www.yourfootprint.dk;
    server_tokens off;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://yourfootprint.dk$request_uri;
    }
}

server {
    listen 443 default_server ssl http2;
    listen [::]:443 ssl http2;
    server_name yourfootprint.dk;
    ssl_certificate /etc/nginx/ssl/live/yourfootprint.dk/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/yourfootprint.dk/privkey.pem;

    location / {
        # ...
    }
}
If you already have the certificate in /data/certbot/conf then the solution is easy:
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}
  {{other_services...}}:
    {{other_services_configurations}}
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - /data/nginx/templates:/etc/nginx/templates:ro
      - /data/certbot/www:/var/www/certbot/
      - /data/certbot/conf/:/etc/letsencrypt/
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - /data/certbot/conf/:/etc/letsencrypt/
      - /data/certbot/www:/var/www/certbot/
/data/nginx/templates/default.conf.template
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN; # $DOMAIN must be defined in the environment
    return 301 https://$host$request_uri;
}
./etc/nginx/templates/default.conf.template
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $DOMAIN;
    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app:80;
    }
}
If not, then I think you need to split the process into two phases: an initiation phase and a production phase. I describe that in detail here. The idea is to have one docker-compose file to initiate the letsencrypt certificate, and another docker-compose file to run the system and renew the certificate.
So without further ado, here is the file structure and content that is working really well for me (you still need to adapt the files locations and content to suit your needs):
./setup.sh
./docker-compose-initiate.yaml
./docker-compose.yaml
./etc/nginx/templates/default.conf.template
./etc/nginx/templates-initiate/default.conf.template
The setup in 2 phases:
In the first phase, "the initiation phase", we run an nginx container and a certbot container just to obtain the ssl certificate for the first time and store it in the host's ./etc/letsencrypt folder.
In the second phase, "the operation phase", we run all necessary services for the app, including nginx, which this time uses the letsencrypt folder to serve https on port 443; a certbot container also runs (on demand) to renew the certificate. We can add a cron job for that. The setup.sh script is a simple convenience script that runs the commands one after another:
#!/bin/bash
# the script expects two arguments:
# - the domain name for which we are obtaining the ssl certificate
# - the Email address associated with the ssl certificate
echo DOMAIN=$1 >> .env
echo EMAIL=$2 >> .env
# Phase 1 "Initiation"
docker-compose -f ./docker-compose-initiate.yaml up -d nginx
docker-compose -f ./docker-compose-initiate.yaml up certbot
docker-compose -f ./docker-compose-initiate.yaml down
# Phase 2 "Operation"
crontab ./etc/crontab
docker-compose -f ./docker-compose.yaml up -d
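As an illustration, the ./etc/crontab installed above could hold a single renewal line; the schedule and the project path here are placeholders, not taken from the original post:

```
# Re-run the certbot service weekly and reload nginx to pick up the renewed certificate
0 3 * * 0 cd /opt/myproject && docker-compose -f ./docker-compose.yaml up certbot && docker-compose -f ./docker-compose.yaml exec -T nginx nginx -s reload
```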
Phase 1: The ssl certificate initiation phase:
./docker-compose-initiate.yaml
version: "3"
services:
  nginx:
    container_name: nginx
    image: nginx:latest
    environment:
      - DOMAIN
    ports:
      - 80:80
    volumes:
      - ./etc/nginx/templates-initiate:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt:ro
      - ./certbot/data:/var/www/certbot
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
./etc/nginx/templates-initiate/default.conf.template
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/certbot;
    }
}
Phase 2: The operation phase
./docker-compose.yaml
services:
  app:
    {{your_configurations_here}}
  {{other_services...}}:
    {{other_services_configurations}}
  nginx:
    container_name: nginx
    image: nginx:latest
    restart: always
    environment:
      - DOMAIN
    depends_on:
      - app
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./etc/nginx/templates:/etc/nginx/templates:ro
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
      - /var/log/nginx:/var/log/nginx
  certbot:
    container_name: certbot
    image: certbot/certbot:latest
    depends_on:
      - nginx
    command: >-
      certonly --reinstall --webroot --webroot-path=/var/www/certbot
      --email ${EMAIL} --agree-tos --no-eff-email
      -d ${DOMAIN}
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
      - ./certbot/data:/var/www/certbot
server {
    listen [::]:80;
    listen 80;
    server_name $DOMAIN;
    return 301 https://$host$request_uri;
}
./etc/nginx/templates/default.conf.template
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $DOMAIN;
    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://app:80;
    }
}

How can I make nginx reverse proxy for localhost when connected with an IP address?

I made a reverse proxy in my nginx config like this:
server {
    listen 80;
    server_name localhost;
    return 301 https://[my domain]$request_uri;
}
This works well: when I access http://xxx.xxx.xxx.xxx/index.html, my nginx redirects to https://[my domain]/index.html.
But when I access https://xxx.xxx.xxx.xxx/index.html, Chrome shows a "Your connection is not private" error.
Self-signed certificates do not help avoid this error; a CA-signed certificate is required.
In this case, how do I get an SSL certificate for localhost? It is localhost, so no one can issue a certificate for it, I think.
Does anyone know a good way to solve this problem?
Use mkcert.
Install mkcert
sudo apt install libnss3-tools
Check the mkcert releases page for the latest version. As of this writing, the latest release is v1.4.3.
export VER="v1.4.3"
wget -O mkcert https://github.com/FiloSottile/mkcert/releases/download/${VER}/mkcert-${VER}-linux-amd64
chmod +x mkcert
sudo mv mkcert /usr/local/bin
Install certificate
Generate locally trusted SSL certificates:
mkcert -install
ls -1 ~/.local/share/mkcert
mkdir ~/cert && cd ~/cert
mkcert crm.site '*.crm.site' localhost 127.0.0.1 ::1
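If you want to confirm which names a certificate actually covers, you can inspect its subjectAltName with openssl. A sketch using a throwaway self-signed certificate (mkcert's output file names, like crm.site+4.pem below, encode the number of extra names):

```shell
# Generate a throwaway cert with SANs, then list the names it covers
# (requires OpenSSL 1.1.1+ for -addext / -ext)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1"
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```

The same -ext subjectAltName invocation works against the mkcert-generated crm.site+4.pem.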
Add to nginx
sudo nano /etc/nginx/sites-available/crm.site
server {
    listen *:443 ssl http2;
    index index.php;
    root /home/andrey/crm.site;
    server_name crm.site *.crm.site;
    ssl_certificate /home/andrey/cert/crm.site+4.pem;
    ssl_certificate_key /home/andrey/cert/crm.site+4-key.pem;
    client_max_body_size 128M;
    client_body_buffer_size 128k;

    location / {
        try_files $uri $uri/ /index.php?$args;
        add_header Strict-Transport-Security "max-age=31536000; includeSubdomains" always;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    }
}
and restart nginx
sudo service nginx restart
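One assumption baked into this setup: crm.site must resolve to the local machine, which mkcert does not handle (it only deals with trust, not DNS). That typically means a hosts entry such as (hypothetical):

```
# /etc/hosts
127.0.0.1   crm.site
```

Note that /etc/hosts cannot express the *.crm.site wildcard; each subdomain you actually use needs its own line, or a local DNS resolver like dnsmasq.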

Nginx 1.14 and php7.2-fpm returns php as application/octet-stream to download

I have a clean, freshly installed Ubuntu 18.04 on Windows 10 WSL; here's what I did with it:
sudo add-apt-repository ppa:nginx/stable
sudo add-apt-repository ppa:ondrej/php
sudo apt update
sudo apt install composer npm
sudo apt install nginx postgresql-10
sudo apt install php7.2 php7.2-cli php7.2-fpm php7.2-curl php7.2-gd php7.2-mysql php7.2-mbstring
sudo apt upgrade
There's my nginx config for project:
server {
    #listen 80;
    #listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    root /var/www/domains/laravel/public;
    index index.php index.html index.htm;
    server_name laravel.loc;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        try_files $uri $uri/ /index.php$query_string =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_pass 127.0.0.1:9000;
    }

    location ~ /\.ht {
        deny all;
    }
}
There are no changes in nginx.conf except user user;, and
user = user
group = user
listen = 127.0.0.1:9000
in www.conf of php7.2-fpm's pool.d directory.
So... I have an SPA project on Laravel 5.6 and Vue.js that worked properly on Nginx 1.10 and php7.0-fpm: it returns the page on / and works with Vue routes as well, but if I try to request /login or some api route (or any other url) from the browser, it gives me public/index.php as an application/octet-stream download. I've tried adding the php mime type to the nginx configs and changing default_type application/octet-stream; in nginx.conf to default_type text/html; as I read in some advice, but it didn't do the trick. This has already broken my brain, anybody please help!
This is probably related to the HTTP/2 support in this version of Nginx.
HTTP/2 should always be used over SSL.
Handling HTTP/1 and HTTP/2 transparently over SSL/clear TCP is actually really hard and not well handled by Nginx.
Try removing any site using http2 from your Nginx enabled sites and restarting Nginx.
You can also try making your requests over HTTPS; your pages should then be served without any problem.
Move to HTTP/2 over SSL only, or forget about HTTP/2 for the moment.
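Concretely, the advice amounts to enabling http2 only on the TLS listener and leaving the plaintext listener as plain HTTP/1.x. A minimal sketch (server name and certificate paths are placeholders):

```nginx
server {
    listen 80;                  # plain HTTP/1.x only: no http2 here
    server_name example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;       # HTTP/2 is negotiated via ALPN over TLS only
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;
}
```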

Docker set nginx https connection refused

I have tried in many ways to set up the nginx https configuration in a docker environment.
But no logs show up in the docker nginx output.
The website just returns "connection refused" or "website refused to connect".
docker compose file:
version: "3"
services:
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: ./nginx/Dockerfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - .:/work
    depends_on:
      - django
  django:
    container_name: django
    build:
      context: .
      dockerfile: ./Dockerfile
    expose:
      - "8000"
    volumes:
      - .:/work
    command: uwsgi --ini ./uwsgi.ini
In nginx conf:
server {
    listen 80;
    server_name www.canarytechnologies.com;
    rewrite ^(.*)$ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    ssl on;
    server_name www.canarytechnologies.com;
    ssl_certificate /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    charset utf-8;
    client_max_body_size 10m;

    location / {
        include uwsgi_params;
        uwsgi_pass django:8000;
    }
}
If I don't set up https, it works fine on port 80. But when I add the https 443 port, the connection is refused and there are no logs in the docker nginx output.
I have successfully set up the server without docker; all the configuration works outside the docker environment.
I wonder why adding https on the 443 port makes the server refuse the connection.
I made a mistake.
I should have added the nginx config file with a .conf extension.
So ADD ./nginx/conf/web.conf /etc/nginx/conf.d/web.conf instead of
ADD ./nginx/conf/web /etc/nginx/conf.d/web
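The extension matters because the stock nginx.conf shipped in the nginx image only includes files matching *.conf from conf.d, so a file named just web is never read:

```nginx
# From the default /etc/nginx/nginx.conf in the nginx image
http {
    # ...
    include /etc/nginx/conf.d/*.conf;  # web.conf matches this glob, web does not
}
```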

Docker network issues with nginx proxy container

I am currently trying to set up a docker-based jira and confluence platform proxied by nginx and am running into some kind of routing and network problem.
The basic setup consists of three docker containers: the nginx container handles the https requests for specific domain names (e.g. jira.mydomain.com, confluence.mydomain.com) and forwards (proxy_pass) the requests to the specific containers for jira and confluence.
This setup is generally working - I can access the jira instance by opening https://jira.mydomain.com and the confluence instance by opening https://confluence.mydomain.com in my browser.
The problem I am running into becomes visible when logging into jira, and when following the Find-out-more link (screenshots omitted here).
The suggested resolutions from the provided JIRA health check link unfortunately did not help me identify and solve the problem. Instead, some exceptions in the log file led to more hints on the problem:
2017-06-07 15:04:26,980 http-nio-8080-exec-17 ERROR christian.schlaefcke 904x1078x1 eqafq3 84.141.114.234,172.17.0.7 /rest/applinks/3.0/applicationlinkForm/manifest.json [c.a.a.c.rest.ui.CreateApplicationLinkUIResource] ManifestNotFoundException thrown while retrieving manifest
ManifestNotFoundException thrown while retrieving manifest
com.atlassian.applinks.spi.manifest.ManifestNotFoundException: java.net.NoRouteToHostException: No route to host (Host unreachable)
...
Caused by: java.net.NoRouteToHostException: No route to host (Host unreachable)
And when I follow the hint from this Atlassian knowledge base article and running this curl statement from inside of the JIRA container:
curl -H "Accept: application/json" https://jira.mydomain.com/rest/applinks/1.0/manifest -v
I finally get this error:
* Trying <PUBLIC_IP>...
* connect to <PUBLIC_IP> port 443 failed: No route to host
* Failed to connect to jira.mydomain.com port 443: No route to host
* Closing connection 0
curl: (7) Failed to connect to jira.mydomain.com port 443: No route to host
EDIT:
The external URL jira.mydomain.com can be pinged from inside of the container:
root@c9233dc17588:~# ping jira.mydomain.com
PING jira.mydomain.com (<PUBLIC_IP>) 56(84) bytes of data.
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=2 ttl=64 time=0.138 ms
64 bytes from rs226736.mydomain.com (<PUBLIC_IP>): icmp_seq=3 ttl=64 time=0.181 ms
From outside of the JIRA container (e.g. docker host or other machine) the curl statement works fine!
I have quite good experience with linux in general, but my knowledge about networks, routing and iptables is rather limited. Docker is running the current 17.03.1-ce version in combination with docker-compose on a centos 7 system:
~]# uname -a
Linux rs226736 3.10.0-514.21.1.el7.x86_64 #1 SMP Thu May 25 17:04:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
At the moment I don't even understand what kind of problem this actually is (iptables? routing? docker?) or how to debug it :-(
I played around with some iptables- and nginx-related hints found via google, all without success. Any hint pointing me in the right direction would be very much appreciated.
REQUESTED CONFIGS:
NGINX docker-compose.yml
nginx:
  image: nginx
  container_name: nginx
  ports:
    - 80:80
    - 443:443
  external_links:
    - my_domain-jira
    - my_domain-confluence
  volumes:
    - /opt/docker/logs/nginx:/var/log/nginx
    - ./nginx.conf:/etc/nginx/nginx.conf
    - ./certs/jira.mydomain.com.crt:/etc/ssl/certs/jira.mydomain.com.crt
    - ./certs/jira.mydomain.com.key:/etc/ssl/private/jira.mydomain.com.key
    - ./certs/confluence.mydomain.com.crt:/etc/ssl/certs/confluence.mydomain.com.crt
    - ./certs/confluence.mydomain.com.key:/etc/ssl/private/confluence.mydomain.com.key
JIRA docker-compose.yml (Confluence similar):
jira:
  container_name: my_domain-jira
  build: .
  external_links:
    - postgres
  volumes:
    - ./inst/conf/server.xml:/opt/jira/conf/server.xml
    - ./inst/bin/setenv.sh:/opt/jira/bin/setenv.sh
    - /home/jira:/opt/atlassian-home
    - /opt/docker/logs/jira:/opt/jira/logs
    - /etc/localtime:/etc/localtime:ro
NGINX - nginx.conf
upstream jira {
    server my_domain-jira:8080;
}

# begin jira configuration
server {
    listen 80;
    server_name jira.mydomain.com;
    client_max_body_size 500M;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name jira.mydomain.com;
    ssl on;
    ssl_certificate /etc/ssl/certs/jira.mydomain.com.crt;
    ssl_certificate_key /etc/ssl/private/jira.mydomain.com.key;
    ssl_session_timeout 5m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:RC4-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK';
    server_tokens off;
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";
    client_max_body_size 500M;

    location / {
        proxy_pass http://jira/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }
}
Ideas (nginx / proxy_pass / upstream) mostly picked up from:
https://www.digitalocean.com/community/tutorials/docker-explained-how-to-containerize-and-use-nginx-as-a-proxy
http://blog.nbellocam.me/2016/03/01/nginx-serving-multiple-sites-docker/
https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+behind+an+NGinX+reverse+proxy
After some discussion with the provider of the virtual server, it turned out that conflicting firewall rules between the plesk firewall and iptables caused this problem. After the provider fixed the conflict, the container could be accessed.
This problem is solved now. Thanks to everyone who participated!
