Docker Nginx stopped: [emerg] 1#1: host not found in upstream - nginx

I am running docker-nginx on an ECS server. My nginx service suddenly stopped because the proxy_pass host of one of the servers became unreachable. The error is as follows:
[emerg] 1#1: host not found in upstream "dev-example.io" in /etc/nginx/conf.d/default.conf:988
My config file is as follows:
server {
    listen 80;
    server_name test.com;

    location / {
        proxy_pass http://dev-example.io:5016/;
        proxy_redirect off;
        ##proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
server {
    listen 80 default_server;
    server_name localhost;

    #charset koi8-r;
    #access_log /var/log/nginx/log/host.access.log main;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
I have many servers in the config file, and I need nginx to keep running even when one of them is down. Is there any way to fix this?
Any suggestions would be appreciated.

Just adding a resolver did not fix the issue in my case, but I was able to work around it by using a variable for the host.
Also, I think it makes more sense to use Docker's embedded DNS at 127.0.0.11 (this is a fixed IP).
Example:
server {
    listen 80;
    server_name test.com;

    location / {
        resolver 127.0.0.11;
        set $example dev-example.io:5016;
        proxy_pass http://$example;
    }
}
I found the variable workaround on this page.

To prevent Nginx from crashing when a site is down, include a resolver directive, as follows:
server {
    listen 80;
    server_name test.com;

    location / {
        resolver 8.8.8.8;
        proxy_pass http://dev-example.io:5016/;
        proxy_redirect off;
        ...
WARNING! Using a public DNS server creates a security risk in your backend, since your DNS requests can be spoofed. If this is an issue, you should point the resolver to a DNS server you trust.
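For example, on AWS (the question runs on ECS) you could point at the VPC's private resolver instead. A minimal sketch, assuming a 10.0.0.0/16 VPC, whose DNS server sits at the CIDR base address plus two; substitute the address for your own network:
location / {
    # Assumed VPC resolver address (CIDR base + 2); not reachable from outside the VPC.
    resolver 10.0.0.2 valid=10s;
    proxy_pass http://dev-example.io:5016/;
}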

This usually means that the DNS name you provided as the upstream server cannot be resolved. To test it, log on to the nginx server and try pinging the upstream host to see whether name resolution completes correctly. If nginx runs in a Docker container, try docker exec -it to get a shell inside it, then ping the upstream from there. If the upstream container is stopped, try using an IP address instead of a DNS name in your server block:
proxy_pass http://<IP ADDRESS>:5016/;
You can also use the resolver directive if you want this location to use a different DNS server than the host system's:
resolver 8.8.8.8;
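To run the check described above, something like this should work, assuming the nginx container is named nginx-proxy (a hypothetical name; take the real one from docker ps):
# Get a shell inside the nginx container.
docker exec -it nginx-proxy sh

# Inside the container: test name resolution against the container's DNS.
getent hosts dev-example.io   # or: nslookup dev-example.io
ping -c 3 dev-example.io
If the lookup fails here, nginx will fail with the same "host not found in upstream" error.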

When using NGINX Plus, you can also get around this by adding a zone to your upstream together with the resolve parameter, then using this upstream in your proxy_pass. When the server some-server starts resolving, nginx will start passing traffic to it.
Make sure, as stated above, to put a resolver in the relevant parts of your config. For Docker, I use:
resolver 127.0.0.11 valid=1s;

upstream test {
    zone test-zone 64k;
    server some-server:1234 resolve;
}
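A server block that uses this upstream might then look as follows (a sketch; the test and some-server names come from the snippet above):
server {
    listen 80;
    server_name test.com;

    location / {
        # Traffic starts flowing once some-server begins resolving.
        proxy_pass http://test;
    }
}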

Related

NGINX Forward HTTPS request to HTTP Backend Server and Back

On a single server instance, I have an NGINX web server that works without any problems over HTTPS, and I have a backend server in Spring Boot running on port 8080. I do not want to open this port to the internet, so I would like to set up a reverse proxy with NGINX that forwards requests starting with /api to my backend and returns the response.
When I open the domain in the browser, my frontend application, which runs in the browser, sends some requests to my backend (starting with /api); my frontend uses the following base URL:
http://my-ip:8080/api
And the nginx configuration is as follows:
server {
    listen 80;
    ssl_certificate /cert/cert.pem;
    ssl_certificate_key /cert/privkey.pem;
    server_name www.mydomain.com;
    rewrite ^(.*) https://$server_name$1 permanent;
}
server {
    listen 443 ssl;
    server_name www.mydomain.com mydomain.com;
    ssl_certificate /cert/cert.pem;
    ssl_certificate_key /cert/privkey.pem;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    error_page 404 /index.html;

    location = / {
        root /usr/share/nginx/html;
        internal;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location /api {
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://127.0.0.1:8080;
    }
}
I keep getting a Mixed Content error, and my backend requests are being blocked by the browser since my frontend uses http for the requests.
If I try to use https in the Frontend URL, such as:
https://my-ip:8080/api
Then I get a different error:
GET https://my-ip/api/... net::ERR_CERT_COMMON_NAME_INVALID
This is probably because my certificate is generated for my domain name and not for the IP.
Solution: in the frontend, we should send the request over https and use the actual domain instead of the IP address, because the hostname has to match the domain on the certificate. And since port 8080 is not open to the internet, the request has to go through nginx on the default HTTPS port:
The request: https://my-domain/api
Then nginx forwards this request to the backend properly.

Nginx: How to deploy front end & backend apps on same machine with same domain but different ports?

I have two apps, a frontend built with ReactJS and a backend built with FastAPI, and I have a server machine where I have deployed both. Now I want to use Nginx (because of SSL) to host both applications on the same machine with the same domain name but different ports. I know how to do it for different domains or subdomains, but I don't have another domain/subdomain right now. So I want to ask how I can achieve this in Nginx.
For example, my FE is using port 5000 and my BE is using port 8000. I am able to configure Nginx to serve my FE, but I am getting this error:
Blocked loading mixed active content
because my FE, which is served over https, is trying to connect to the backend on port 8000, which is not https.
Here is my nginx config file:
server {
    listen 443 ssl;
    ssl_certificate /opt/ssl/bundle.crt;
    ssl_certificate_key /opt/ssl/custom.key;

    # add here the ip address of your server
    # or a domain pointing to that ip (like example.com or www.example.com)
    server_name something-c11.main0.auto.qa.use1.mydomain.net;

    keepalive_timeout 5;
    client_max_body_size 100M;

    access_log /opt/MY_FE/nginx-access.log;
    error_log /opt/MY_FE/nginx-error.log;

    # checks for static file, if not found proxy to app
    location / {
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://localhost:5000;
        proxy_redirect off;
    }
}
server {
    if ($host = something-c11.main0.auto.qa.use1.mydomain.net) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name something-c11.main0.auto.qa.use1.mydomain.net;
    return 404;
}
Any help would be appreciated....
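A minimal sketch of one common approach: serve both apps from the same server block and route the backend under a path prefix, so the browser only ever talks HTTPS to port 443. The /api prefix is an assumption; the FE would then use a relative base URL like /api instead of a port:
server {
    listen 443 ssl;
    server_name something-c11.main0.auto.qa.use1.mydomain.net;
    ssl_certificate /opt/ssl/bundle.crt;
    ssl_certificate_key /opt/ssl/custom.key;

    # Everything else goes to the React app on port 5000.
    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:5000;
    }

    # Assumed /api prefix routed to FastAPI on port 8000,
    # so the FE never makes a plain-http request.
    location /api {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8000;
    }
}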

NGINX requires restart when service reset

We use NGINX in Docker Swarm as a reverse proxy. NGINX sits within the overlay network and relays external requests on to the relevant swarm service.
However we have an issue: every time we restart, update, or otherwise take down a swarm service, NGINX returns 502 Bad Gateway. NGINX then continues to serve a 502 even after the service is restarted, and this is not corrected until we restart the NGINX service, which obviously defeats the whole point of having a load balancer and services running in multiple places.
Here is our NGINX CONF:
events {}

http {
    fastcgi_buffers 16 16k;
    fastcgi_buffer_size 32k;
    client_max_body_size 20M;
    large_client_header_buffers 8 256k;
    client_header_buffer_size 256k;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;

    map $host $client {
        default clientname;
    }

    # Healthcheck
    server {
        listen 443;
        listen 444;

        location /is-healthy {
            access_log off;
            return 200;
        }
    }

    # Example service:
    server {
        listen 443;
        server_name scheduler.clientname.com;

        location / {
            resolver 127.0.0.11 ipv6=off;
            proxy_pass http://$client-scheduler:60911;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
        }
    }

    # Catch-all
    server {
        listen 443;
        listen 444;
        server_name _;

        location / {
            return 404 'Page not found';
        }
    }
}
We use the $client placeholder because otherwise we can't even start nginx when one of the services is down.
The other alternative is to use an upstream directive with health checks, which can work well. The issue with this is that if any of the services are unavailable, NGINX won't even start!
What are we doing wrong?
UPDATE
It appears that what we want here is impossible (please prove me wrong though!). It seems crazy to be missing such a feature in the world of Docker and micro-services!
We are currently looking at HAProxy as an alternative, as it can be set up with default-server init-addr none to stop the failure on startup.
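For reference, a minimal HAProxy sketch of that idea (the resolvers section and the backend/service names are assumptions, reusing Docker's embedded DNS from above):
resolvers docker
    nameserver dns1 127.0.0.11:53

backend scheduler
    # Don't resolve the address at startup; wait until DNS answers.
    default-server init-addr none
    server s1 clientname-scheduler:60911 check resolvers docker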
Here is how I do it: create an upstream with max_fails=0.
upstream docker-api {
    server docker.api:80 max_fails=0;
}

# load configs
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    location /api {
        proxy_pass http://docker-api;
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        # Other config...
    }
}
I had the same problem using docker-compose: the nginx container could not connect to the web service after docker-compose restart.
Finally I figured out two circumstances that cause this glitch. First, docker-compose restart does not follow depends_on, which should restart nginx after the web container restarts. Second, docker-compose restart assigns a new internal IP address to each container, and nginx does not refresh the web container's IP address after it starts up.
My solution is to define a variable to force nginx to resolve the IP every time:
location /api {
    # Resolve via Docker's embedded DNS each time the variable is used,
    # so a restarted container's new IP is picked up.
    resolver 127.0.0.11 valid=10s;
    set $web_service http://web_container_name:13579;
    proxy_pass $web_service;
}

Nginx doesn't listen on specified port

I've set up and deployed a Rails application using unicorn and nginx on Linode. However, when I go to the URL set in the application's config in sites-enabled, I am served the default site that is listening on port 80.
If I go to the IP and port I don't get anything at all.
If I set the site to listen on 80 it works.
The URL points from the registrar to the Linode IP and is working correctly.
xxxx.co.uk file in sites-enabled
upstream unicorn {
    server unix:/tmp/unicorn.xxxx.co.uk.sock fail_timeout=0;
}

server {
    server_name xxxx xxxx;
    listen 3001 default deferred;
    root /var/www/apps/nocn.org.uk/current/public;

    location ^~ /assets/ {
        gzip_static on;
        #try $uri /old$uri
        expires max;
        add_header Cache-Control public;
    }

    try_files $uri/index.html $uri @unicorn;

    location @unicorn {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass http://unicorn;
    }

    error_page 500 502 503 504 /500.html;
    client_max_body_size 4G;
    keepalive_timeout 10;
}
Make sure you have enabled remote access to port 3001 in Linode's firewall or config (probably via iptables).
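For instance, with iptables (a sketch; rule ordering and persistence depend on your distribution):
# Allow inbound TCP on port 3001.
iptables -I INPUT -p tcp --dport 3001 -j ACCEPT

# Confirm nginx is actually listening on the port.
ss -tlnp | grep 3001

# Test from another machine.
curl -I http://<linode-ip>:3001/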

Why is Nginx routing all traffic to one subdomain?

I am new to nginx. I am trying to install GitLab alongside an existing PHP project which is currently being served by Apache on port 80. My plan is to get them both working side by side on port 90 and then turn off Apache, switching both projects to Nginx on port 80.
Okay. The problem is that both subdomains are being captured by the server block for my PHP project, which should only be served for requests to db.mydomain.com. For the PHP project I have a file called ccdb symlinked into /etc/nginx/sites-enabled. It contains:
server {
    server_name db.mydomain.com;
    listen 90; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default ipv6only=on; ## listen for ipv6
    root /var/www/ccdb;
    index index.html index.htm index.php;
}
However, for some reason, traffic to git.mydomain.com is being served from /var/www/ccdb, even though I have another file symlinked alongside that one, called gitlab, with this content:
# GITLAB
# Maintainer: @randx
# App Version: 5.0

upstream gitlab {
    server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
}

server {
    listen 90; # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
    server_name git.mydomain.com; # e.g., server_name source.example.com;
    server_tokens off; # don't show the version number, a security best practice
    root /home/git/gitlab/public;

    # individual nginx logs for this gitlab vhost
    access_log /var/log/nginx/gitlab_access.log;
    error_log /var/log/nginx/gitlab_error.log;

    location / {
        # serve static files from defined root folder;
        # @gitlab is a named location for the upstream fallback, see below
        try_files $uri $uri/index.html $uri.html @gitlab;
    }

    # if a file which is not found in the root folder is requested,
    # then the proxy passes the request to the upstream (gitlab unicorn)
    location @gitlab {
        proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694
        proxy_redirect off;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://gitlab;
    }
}
NOTE: I am accessing the two domains from an OSX machine on the same local network, which has entries in its /etc/hosts file like so:
192.168.1.100 db.mydomain.com
192.168.1.100 git.mydomain.com
Try to use:
server_name git.mydomain.com:90;
... and:
server_name db.mydomain.com:90;
