In the web browser I get the error "502 Bad Gateway nginx".
And the nginx error log shows: [connect() failed (111: Connection refused) while connecting to upstream, client: 206.189.90.189, server: abc.xxx.xyz, request: "POST / HTTP/1.1", upstream: "http://10.245.21.96:244/", host: "188.166.204.10"].
Can you tell me how to solve it? I have spent a month on this but can't fix it. :(((
1/ My nginx config:
server {
    listen 80;
    listen [::]:80;
    server_name abc.xxx.xyz;
    sendfile off;

    # Add stdout logging
    error_log /dev/stdout info;
    access_log /dev/stdout;

    error_page 404 /404.html;

    location / {
        proxy_pass http://abc-service.default:244/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}
2/ My abc-service config:
Name: abc-service
Namespace: default
Labels: io.kompose.service=abc
Annotations:
Selector: io.kompose.service=abc
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.245.21.96
IPs: 10.245.21.96
Port: 244-tcp 244/TCP
TargetPort: 244/TCP
Endpoints: 10.244.0.211:244
Port: 18081-tcp 18081/TCP
TargetPort: 18081/TCP
Endpoints: 10.244.0.211:18081
Session Affinity: None
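For what it's worth, one way to narrow this down (a debugging sketch, not a fix; the /debug-direct/ location name is made up for illustration): temporarily point nginx straight at the pod endpoint shown in the Endpoints line above instead of the Service name. If this also returns "connection refused", the container itself is not listening on port 244 and the Service/DNS side can be ruled out.
# Debug-only sketch: bypass the Service and hit the endpoint directly.
location /debug-direct/ {
    proxy_pass http://10.244.0.211:244/;
}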
I currently have Nginx running on the same machine as the rest of my servers, none of which are running IPv6. Relatively frequently I get hang-ups when loading content while testing, and I find error messages in the error.log file.
My current config:
http {
    include mime.types;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    resolver 1.1.1.1 ipv6=off;

    #keepalive_timeout 0;
    keepalive_timeout 60s;

    upstream master_process {
        server localhost:40088;
    }

    upstream http_worker {
        hash $remote_addr consistent;
        server localhost:40089;
        server localhost:40090;
        server localhost:40091;
        server localhost:40092;
    }

    # http server
    server {
        listen 88;

        location / {
            lingering_close on;
            lingering_time 15s;
            lingering_timeout 2s;
            proxy_pass http://http_worker;
            proxy_http_version 1.1;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location ~ ^/(Main|Monitor|Chart|chartfeed|getchartdata()|Live|Log$) {
            proxy_pass http://master_process;
            proxy_http_version 1.1;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header X-Real-IP $remote_addr;
        }

        location ~ .*.(gif|jpg|jpeg|png|ico|wmv|avi|asf|asx|mpg|mpeg|mp4|pls|mp3|mid|wav|swf|flv|txt|js|css|zip|tar|rar|gz|tgz|bz2|uha|7z|doc|docx|xls|xlsx|pdf|iso|woff|ttf|svg|eot|htm)$ {
            proxy_pass http://master_process;
            gzip_static on;
            expires 7d;
        }
    }
}
The errors I am currently receiving:
2022/01/28 11:42:27 [error] 23732#17404: *1 connect() failed (10061: No connection could be made because the target machine actively refused it) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /Main?_SID=1*479985359 HTTP/1.1", upstream: "http://[::1]:40088/Main?_SID=1*479985359", host: "localhost:88", referrer: "http://localhost:88/login()"
2022/01/28 11:42:52 [error] 23732#17404: *1 connect() failed (10061: No connection could be made because the target machine actively refused it) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /Main?_SID=1*479985359 HTTP/1.1", upstream: "http://[::1]:40088/Main?_SID=1*479985359", host: "localhost:88", referrer: "http://localhost:88/login()"
Note that I have specified the resolver in the http section so that it applies globally. I have also tried moving that resolver into the server and location sections, to no avail.
I have also tried adding { server { listen 88 default_server; listen [::]:88 ipv6only=on; ... } ... }, as others have suggested after a quick search online, but that didn't solve the issue either.
Any help would be greatly appreciated!
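One thing the error log points at (a minimal sketch, assuming the worker processes only listen on IPv4): the upstream is being contacted at [::1]:40088, i.e. the IPv6 loopback. Hostnames in upstream "server" directives are resolved once at startup by the system resolver, so "resolver 1.1.1.1 ipv6=off" does not affect them; spelling out 127.0.0.1 avoids the IPv6 loopback entirely.
# Sketch: pin the upstreams to the IPv4 loopback so nginx never tries [::1].
upstream master_process {
    server 127.0.0.1:40088;
}
upstream http_worker {
    hash $remote_addr consistent;
    server 127.0.0.1:40089;
    server 127.0.0.1:40090;
    server 127.0.0.1:40091;
    server 127.0.0.1:40092;
}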
I have the old nginx-based OSM tile caching proxy configured according to https://coderwall.com/p/--wgba/nginx-reverse-proxy-cache-for-openstreetmap, but since the source tile server migrated to HTTPS this solution no longer works: 421 Misdirected Request.
I based my fix on the article https://kimsereyblog.blogspot.com/2018/07/nginx-502-bad-gateway-after-ssl-setup.html. Unfortunately, after days of experiments, I'm still getting a 502 error.
My theory is that the root cause is the upstream servers' SSL certificate, which uses a wildcard (*.tile.openstreetmap.org), but all attempts to use $http_host, $host, proxy_ssl_name and proxy_ssl_session_reuse in different combinations didn't help: 421 or 502 every time.
My current nginx config is:
worker_processes auto;

events {
    worker_connections 768;
}

http {
    access_log /etc/nginx/logs/access_log.log;
    error_log /etc/nginx/logs/error_log.log;

    client_max_body_size 20m;

    proxy_cache_path /etc/nginx/cache levels=1:2 keys_zone=openstreetmap-backend-cache:8m max_size=500000m inactive=1000d;
    proxy_temp_path /etc/nginx/cache/tmp;

    proxy_ssl_trusted_certificate /etc/nginx/ca.crt;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    proxy_ssl_session_reuse on;
    proxy_ssl_name *.tile.openstreetmap.org;

    sendfile on;

    upstream openstreetmap_backend {
        server a.tile.openstreetmap.org:443;
        server b.tile.openstreetmap.org:443;
        server c.tile.openstreetmap.org:443;
    }

    server {
        listen 80;
        listen [::]:80;
        server_name example.com www.example.com;
        include /etc/nginx/mime.types;
        root /dist/browser/;

        location ~ ^/osm-tiles/(.+) {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X_FORWARDED_PROTO http;
            proxy_set_header Host $http_host;
            proxy_cache openstreetmap-backend-cache;
            proxy_cache_valid 200 302 365d;
            proxy_cache_valid 404 1m;
            proxy_redirect off;
            if (!-f $request_filename) {
                proxy_pass https://openstreetmap_backend/$1;
                break;
            }
        }
    }
}
But it still produces an error when accessing https://example.com/osm-tiles/12/2392/1188.png:
2021/02/28 15:05:47 [error] 23#23: *1 upstream SSL certificate does not match "*.tile.openstreetmap.org" while SSL handshaking to upstream, client: 172.28.0.1, server: example.com, request: "GET /osm-tiles/12/2392/1188.png HTTP/1.0", upstream: "https://151.101.2.217:443/12/2392/1188.png", host: "localhost:3003"
The host OS is Ubuntu 20.04 (HTTPS is handled there), nginx is running in Docker from the nginx:latest image, and ca.crt is Ubuntu's default CA bundle.
Please help.
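One direction that may be worth testing (a sketch, not a confirmed fix): nginx does not send SNI to the upstream unless proxy_ssl_server_name is on, so the server at 151.101.2.217 may be presenting a certificate for a different site, which then fails verification. Using a concrete hostname for SNI, certificate verification and the Host header would look roughly like this, trimmed to the relevant directives (a.tile.openstreetmap.org stands in for any of the three tile hosts):
# Sketch only: send SNI and verify/identify with a concrete tile hostname.
proxy_ssl_server_name on;                         # pass SNI on the upstream TLS handshake
proxy_ssl_name a.tile.openstreetmap.org;          # verify against a real hostname, not the literal "*."
proxy_ssl_trusted_certificate /etc/nginx/ca.crt;
proxy_ssl_verify on;

location ~ ^/osm-tiles/(.+) {
    proxy_set_header Host a.tile.openstreetmap.org;   # a matching Host may also avoid the 421
    proxy_cache openstreetmap-backend-cache;
    proxy_pass https://openstreetmap_backend/$1;
}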
I have problems getting GitLab to work behind an nginx reverse proxy.
nginx version: nginx/1.14.0 (Ubuntu)
gitlab-ce 11.3.6-ce.0
In /etc/gitlab/gitlab.rb I have set (according to the documentation):
external_url 'https://gitlab.mydomain.io'
nginx['listen_port'] = 81
nginx['listen_https'] = false
I used port 81 so the reverse proxy can bind to 80, which makes it easier to get Let's Encrypt certificates. This is my virtual host for the GitLab subdomain:
upstream gitlab {
    server localhost:81 fail_timeout=0;
}

server {
    listen 82;
    listen [::]:82;
    server_name gitlab.mydomain.io;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name gitlab.mydomain.io;

    ssl_certificate /etc/nginx/ssl/gitlab.mydomain.io.crt;
    ssl_certificate_key /etc/nginx/ssl/gitlab.mydomain.io.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

    location / {
        proxy_read_timeout 300;
        proxy_connect_timeout 300;
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Frame-Options SAMEORIGIN;
        proxy_pass https://gitlab;
    }
}
When I navigate to my subdomain, I get a 502 Bad Gateway with the following error in the nginx log:
[error] 6301#6301: *6 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream, client: 88.217.180.123, server: gitlab.mydomain.io, request: "GET / HTTP/1.1", upstream: "https://127.0.0.1:81/", host: "gitlab.mydomain.io"
I tried using different protocols with nginx but to no avail. Does anyone have an idea?
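For reference: with nginx['listen_https'] = false, GitLab's bundled nginx on port 81 speaks plain HTTP, so an SSL handshake error like the one above is what you would expect when the proxy dials it with https://. A minimal sketch of the location block with only the upstream scheme changed (everything else as in the original):
location / {
    proxy_read_timeout 300;
    proxy_connect_timeout 300;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Frame-Options SAMEORIGIN;
    # plain HTTP to the backend: listen_https is false, so port 81 does not speak TLS
    proxy_pass http://gitlab;
}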
I have a small cluster with three externally exposed services. I use ClusterIP services for internal pod communication, and an nginx ingress as the reverse proxy that connects the internet to the cluster.
When nginx redirects traffic, it does not include the port with the domain. For example, ykt:31080/workflow redirects to ykt/workflow/login, omitting port 31080, so the server cannot find the page.
My ingress resource is configured as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: ykt
    http:
      paths:
      - path: /business
        backend:
          serviceName: core-ykt
          servicePort: 80
  - host: ykt
    http:
      paths:
      - path: /pre
        backend:
          serviceName: pre-core-ykt
          servicePort: 80
  - host: ykt
    http:
      paths:
      - path: /workflow
        backend:
          serviceName: virtual-apply-ykt
          servicePort: 80
And part of my ingress controller is configured as follows:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    nodePort: 31080
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      serviceAccount: lb
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
        name: ingress-nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
================================================
Second edit
datalook_virtual_apply_pod.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: virtual-apply-ykt
  labels:
    app: virtual-apply-ykt
    purpose: ykt_production
spec:
  containers:
  - name: virtual-apply-ykt
    image: app
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    volumeMounts:
    - name: volume-virtual-apply-ykt
      mountPath: /usr/application
    env:
    - name: spring.config.location
      value: application.properties
  volumes:
  - name: volume-virtual-apply-ykt
    hostPath:
      path: /opt/docker/datalook-virtual-apply
      type: Directory
The datalook_virtual_apply_svc.yaml file is as follows; I have no Deployment file for the datalook_virtual_apply service.
apiVersion: v1
kind: Service
metadata:
  name: virtual-apply-ykt
  labels:
    name: virtual-apply-ykt
spec:
  selector:
    app: virtual-apply-ykt
  type: ClusterIP
  ports:
  - port: 80
    name: tcp
The nginx configuration generated by the ingress controller is shown below. Instead of making the application listen on port 80, may I make 31080 the default port of ingress-nginx?
daemon off;
worker_processes 1;
pid /run/nginx.pid;
worker_rlimit_nofile 1047552;
worker_shutdown_timeout 10s ;
events {
multi_accept on;
worker_connections 16384;
use epoll;
}
http {
lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/x86_64-linux-gnu/lua/5.1/?.so;;";
lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";
init_by_lua_block {
require("resty.core")
collectgarbage("collect")
local lua_resty_waf = require("resty.waf")
lua_resty_waf.init()
}
real_ip_header X-Forwarded-For;
real_ip_recursive on;
set_real_ip_from 0.0.0.0/0;
geoip_country /etc/nginx/geoip/GeoIP.dat;
geoip_city /etc/nginx/geoip/GeoLiteCity.dat;
geoip_org /etc/nginx/geoip/GeoIPASNum.dat;
geoip_proxy_recursive on;
aio threads;
aio_write on;
tcp_nopush on;
tcp_nodelay on;
log_subrequest on;
reset_timedout_connection on;
keepalive_timeout 75s;
keepalive_requests 100;
client_header_buffer_size 1k;
client_header_timeout 60s;
large_client_header_buffers 4 8k;
client_body_buffer_size 8k;
client_body_timeout 60s;
http2_max_field_size 4k;
http2_max_header_size 16k;
types_hash_max_size 2048;
server_names_hash_max_size 1024;
server_names_hash_bucket_size 32;
map_hash_bucket_size 64;
proxy_headers_hash_max_size 512;
proxy_headers_hash_bucket_size 64;
variables_hash_bucket_size 128;
variables_hash_max_size 2048;
underscores_in_headers off;
ignore_invalid_headers on;
limit_req_status 503;
include /etc/nginx/mime.types;
default_type text/html;
gzip on;
gzip_comp_level 5;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
gzip_proxied any;
gzip_vary on;
# Custom headers for response
server_tokens on;
# disable warnings
uninitialized_variable_warn off;
# Additional available variables:
# $namespace
# $ingress_name
# $service_name
log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
map $request_uri $loggable {
default 1;
}
access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log /var/log/nginx/error.log notice;
resolver 10.96.0.10 valid=30s;
# Retain the default nginx handling of requests without a "Connection" header
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
map $http_x_forwarded_for $the_real_ip {
default $remote_addr;
}
# trust http_x_forwarded_proto headers correctly indicate ssl offloading
map $http_x_forwarded_proto $pass_access_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}
# validate $pass_access_scheme and $scheme are http to force a redirect
map "$scheme:$pass_access_scheme" $redirect_to_https {
default 0;
"http:http" 1;
"https:http" 1;
}
map $http_x_forwarded_port $pass_server_port {
default $http_x_forwarded_port;
'' $server_port;
}
map $pass_server_port $pass_port {
443 443;
default $pass_server_port;
}
# Obtain best http host
map $http_host $this_host {
default $http_host;
'' $host;
}
map $http_x_forwarded_host $best_http_host {
default $http_x_forwarded_host;
'' $this_host;
}
# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
# If no such header is provided, it can provide a random value.
map $http_x_request_id $req_id {
default $http_x_request_id;
"" $request_id;
}
server_name_in_redirect off;
port_in_redirect off;
ssl_protocols TLSv1.2;
# turn on session caching to drastically improve performance
ssl_session_cache builtin:1000 shared:SSL:10m;
ssl_session_timeout 10m;
# allow configuring ssl session tickets
ssl_session_tickets on;
# slightly reduce the time-to-first-byte
ssl_buffer_size 4k;
# allow configuring custom ssl ciphers
ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
ssl_prefer_server_ciphers on;
ssl_ecdh_curve auto;
proxy_ssl_session_reuse on;
upstream upstream-default-backend {
least_conn;
keepalive 32;
server 192.168.38.188:8080 max_fails=0 fail_timeout=0;
}
upstream default-gearbox-rack-api-gateway-5555 {
least_conn;
keepalive 32;
server 192.168.38.22:5555 max_fails=0 fail_timeout=0;
}
## start server _
server {
server_name _ ;
listen 80 default_server backlog=511;
listen [::]:80 default_server backlog=511;
set $proxy_upstream_name "-";
listen 443 default_server backlog=511 ssl http2;
listen [::]:443 default_server backlog=511 ssl http2;
# PEM sha: c1e1519ef05c8531e334ee947817a2ad495fe83a
ssl_certificate /ingress-controller/ssl/default-fake-certificate.pem;
ssl_certificate_key /ingress-controller/ssl/default-fake-certificate.pem;
location / {
log_by_lua_block {
}
if ($scheme = https) {
more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
}
access_log off;
port_in_redirect off;
set $proxy_upstream_name "upstream-default-backend";
set $namespace "";
set $ingress_name "";
set $service_name "";
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering "off";
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_next_upstream_tries 0;
proxy_pass http://upstream-default-backend;
proxy_redirect off;
}
# health checks in cloud providers require the use of port 80
location /healthz {
access_log off;
return 200;
}
# this is required to avoid error if nginx is being monitored
# with an external software (like sysdig)
location /nginx_status {
allow 127.0.0.1;
allow ::1;
deny all;
access_log off;
stub_status on;
}
}
## end server _
## start server master8g
server {
server_name master8g ;
listen 80;
listen [::]:80;
set $proxy_upstream_name "-";
location / {
log_by_lua_block {
}
port_in_redirect off;
set $proxy_upstream_name "default-gearbox-rack-api-gateway-5555";
set $namespace "default";
set $ingress_name "my-ingress";
set $service_name "gearbox-rack-api-gateway";
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering "off";
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
proxy_next_upstream_tries 0;
proxy_pass http://default-gearbox-rack-api-gateway-5555;
proxy_redirect off;
}
}
## end server master8g
# default server, used for NGINX healthcheck and access to nginx stats
server {
# Use the port 18080 (random value just to avoid known ports) as default port for nginx.
# Changing this value requires a change in:
# https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
listen 18080 default_server backlog=511;
listen [::]:18080 default_server backlog=511;
set $proxy_upstream_name "-";
location /healthz {
access_log off;
return 200;
}
location /is-dynamic-lb-initialized {
access_log off;
content_by_lua_block {
local configuration = require("configuration")
local backend_data = configuration.get_backends_data()
if not backend_data then
ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
return
end
ngx.say("OK")
ngx.exit(ngx.HTTP_OK)
}
}
location /nginx_status {
set $proxy_upstream_name "internal";
access_log off;
stub_status on;
}
location / {
set $proxy_upstream_name "upstream-default-backend";
proxy_pass http://upstream-default-backend;
}
}
}
stream {
log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
access_log /var/log/nginx/access.log log_stream;
error_log /var/log/nginx/error.log;
# TCP services
# UDP services
}
I'm having a problem trying to get Nginx to proxy a path to another server that is also running in Docker.
To illustrate, I'm using a Nexus server as an example.
This is my first attempt...
docker-compose.yml:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
nginx.conf:-
worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location /nexus/ {
            proxy_pass http://localhost:8081/;
        }
    }
}
When I hit http://localhost/nexus/, I get 502 Bad Gateway with the following log:-
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://[::1]:8081/", host: "localhost"
nginx_1 | 2017/05/29 02:20:50 [error] 7#7: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET /nexus/ HTTP/1.1", upstream: "http://127.0.0.1:8081/", host: "localhost"
nginx_1 | 172.18.0.1 - - [29/May/2017:02:20:50 +0000] "GET /nexus/ HTTP/1.1" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36"
In my second attempt, I added links to the nginx service in docker-compose.yml:-
version: '2'
services:
  nexus:
    image: "sonatype/nexus3"
    ports:
      - "8081:8081"
    volumes:
      - ./nexus:/nexus-data
  nginx:
    image: "nginx"
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    links:
      - nexus:nexus
nginx.conf - instead of using http://localhost:8081/, I now use http://nexus:8081/:-
worker_processes 4;

events { worker_connections 1024; }

http {
    server {
        listen 80;

        location /nexus/ {
            proxy_pass http://nexus:8081/;
        }
    }
}
Now, when I hit http://localhost/nexus/, it gets proxied properly, but the web content is only partially rendered. Inspecting the HTML source of that page, the JavaScript, stylesheet and image links point to http://nexus:8081/[path]... hence the 404s.
What should I change to get this to work properly?
Thank you very much.
The following additional options are what I have used:
http {
    server {
        listen 80;

        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
            proxy_pass http://nexus:8081;
        }

        location /nexus/ {
            proxy_pass http://nexus:8081/;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            server_name_in_redirect on;
        }
    }
}
My solution is to also proxy the '/' path in the nginx config, because the Nexus app requests its resources from '/', which would otherwise not work.
However, this is not ideal and will not work with an Nginx configuration serving multiple apps.
The docs cover this configuration and indicate that you need to configure Nexus to serve on /nexus. That would let you configure nginx as follows (from the docs), without the hack above.
location /nexus {
    proxy_pass http://localhost:8081/nexus;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
I would recommend using that configuration.
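Tying that back to the docker-compose setup from the question (a sketch, assuming Nexus has been reconfigured to serve under the /nexus context path and that the compose service is still named nexus):
location /nexus {
    # upstream addressed by its compose service name; the path matches the Nexus context path
    proxy_pass http://nexus:8081/nexus;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}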