Sometimes 502 Bad Gateway, sometimes 504 Gateway Timeout, sometimes website loads successfully - nginx

I have a website hosted on Google Cloud Platform behind an NGINX server. A few hours ago it was working fine, but then it suddenly started returning a 502 Bad Gateway error.
The NGINX server runs on one instance and the main project on another. The server configuration is as follows:
server {
    listen 443 ssl;
    server_name www.domain.com;

    ssl_certificate /path-to/fullchain.pem;
    ssl_certificate_key /path-to/privkey.pem;

    # REST API Redirect
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://internal-ip:3000;
    }

    # Server-side CMS Redirect
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://internal-ip:4400;
    }
}
When I restarted the NGINX instance, the website loaded successfully, but after three or four refreshes it started returning Bad Gateway again, and from then on it returned that error every time I opened the site.
Sometimes it recovers on its own, but then it goes down again.
I checked the NGINX server's error log. Sometimes this common error is logged:
and sometimes this one:
Regarding the first error, I tried some of the recommended fixes, such as increasing the proxy send and read timeouts to higher values in the server configuration, as suggested here.
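For reference, a minimal sketch of that kind of timeout increase inside a proxied location block (the 300-second values are illustrative, not taken from the post):

location /api {
    # Illustrative timeout increases; the nginx defaults are 60s each.
    proxy_connect_timeout 300s;
    proxy_send_timeout    300s;
    proxy_read_timeout    300s;
    proxy_set_header Host $host;
    proxy_pass http://internal-ip:3000;
}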
The backend code itself is also fine: I can reach the deployed backend services locally during development, but the hosted website cannot reach any backend service.
But nothing worked and sadly my website is down.
Please suggest any solution.

By default nginx allows 1024 worker connections; you can change this with:
events {
    worker_connections 4096;
}
You can also try increasing the number of worker processes, since workers * worker_connections gives the total number of connections you can handle. All of this matters only if your site is receiving traffic and you are simply running out of connections.
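As a rough sketch of what that might look like at the top level of nginx.conf (the numbers are only examples, not taken from the answer):

# Illustrative top-level nginx.conf settings.
worker_processes auto;          # roughly one worker per CPU core

events {
    worker_connections 4096;    # per worker; total capacity = workers * worker_connections
}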

Related

How to use multiple NGINX reverse proxy servers

NGINX works great as a reverse proxy for my virtual servers to host applications using a single IP address and multiple domain names.
I have one particular VM that runs several node apps that work together as one web application. That server runs its own NGINX reverse proxy to handle everything and works great when exposed to the internet with a unique IP.
Since I want to use my single IP to serve this and the rest of my things, I'd like to configure my primary NGINX server to pass everything off to that server's NGINX instance to handle, which seemed pretty straightforward but isn't working as expected.
On my primary NGINX server (that all traffic is sent to from the firewall) I have configured a site file like this:
server {
    listen 80;
    listen [::]:80;
    server_name example.domain.com *.example.domain.com;
    client_max_body_size 1G;

    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://10.10.10.10;
    }
}
This results in a 502 error. The applications do require WebSockets, and each has a unique subdomain, some of which are pointed at directories on that server.
Am I missing some config or thinking about this incorrectly?
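One guess, not confirmed in the thread: since the applications require WebSockets, the front proxy may also need the HTTP/1.1 upgrade headers (the same directives that appear in the Socket.io example further down). A minimal sketch of that location block:

# Sketch only: pass WebSocket upgrade requests through the chained proxy.
location / {
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_pass http://10.10.10.10;
}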

Google OAuth2 OmniAuth Provider callback not working with GitLab behind reverse proxy

I've installed GitLab 8.0.2 on a VM, and I have an nginx reverse proxy set up to direct HTTP traffic to the VM. I am able to view the main login page for GitLab, but when I try to login using the Google OAuth2 method, the callback fails to log me in after entering my correct credentials. I simply get directed back to the GitLab login page.
Where might the problem be? The reverse proxy settings? GitLab settings (ie. Google OAuth config)?
Below is my nginx conf:
upstream gitlab {
    server 192.168.122.134:80;
}

server {
    listen 80;
    server_name myserver.com;

    access_log /var/log/nginx/gitlab.access.log;
    error_log /var/log/nginx/gitlab.error.log;

    root /dev/null;

    ## send request back to gitlab ##
    location / {
        proxy_pass http://gitlab;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Interestingly, the old setup I had used iptables to redirect port 81 on the host machine to port 80 on the GitLab VM, and, in that case, the Google OAuth callback worked. I'd prefer to have people simply use standard port 80 for accessing my GitLab instance, though, so I want this reverse proxy method to work.
GitLab 8.x has quite a few new things. Although I don't see anything specifically wrong with your nginx.conf file, it is pretty short compared to the example in the GitLab repository. Look through https://gitlab.com/gitlab-org/gitlab-ce/blob/master/lib/support/nginx/gitlab-ssl to get an idea of the configuration you should consider adding.
Once your nginx.conf file is updated, read through GitLab OmniAuth documentation and the Google OAuth2 integration documentation under 'Providers' on that OmniAuth page. Make sure you provide the correct callback URL to Google when registering.

Spawning node.js server with nginx

So this is something new to me. I have a node.js application running on my server on port 3000. I have an nginx proxy:
upstream websli_nodejs_app_2 {
    server 127.0.0.1:3000;
}

# this is the dynamic websli server
server {
    listen 80;
    server_name test.ch;
    #root /var/www/websli/public;

    # pass the request to the node.js server with the correct headers
    # (much more can be added; see nginx config options)
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://websli_nodejs_app_2/;
        proxy_redirect off;
    }
}
This works like a charm. Now I'm spawning (at least I believe that's what it's called) more node applications. That's where Wintersmith comes into play: I'm running wintersmith preview
On my localhost that results in another node.js server on localhost:8000. When I then go to localhost:8000 in my browser I get the expected result, but on my localhost I don't have the nginx proxy set up.
The issue:
Now on my production setup with nginx, I'm a bit stuck because I obviously cannot access localhost:8000.
I have tried to add another upstream server, but that didn't really work out. I have also tried to spawn on something like dev.test.ch:8000, but that resulted in an error like listen EADDRNOTAVAIL.
What I'm looking for
The goal is to start another server from inside my main node.js server and make it accessible from a browser. Any input is highly welcomed.
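One possible approach, sketched here as an assumption rather than an answer from the thread: keep the preview process bound to 127.0.0.1:8000 and add a second upstream and server block on a separate hostname that proxies to it. The name dev.test.ch and port 8000 below are only illustrative, and the DNS record for the subdomain would still need to exist.

# Sketch only: a second vhost proxying to the Wintersmith preview server.
upstream websli_wintersmith_preview {
    server 127.0.0.1:8000;
}

server {
    listen 80;
    server_name dev.test.ch;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_pass http://websli_wintersmith_preview/;
    }
}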

Proxy cache timeout/delay with nginx?

To give you a bit of background on my problem:
I have an app that installs software and serves it publicly. When it installs:
Installs base files to root path (i.e. /var/www/{site})
Creates a new nginx configuration file and places it in /etc/nginx/sites-available/{site}
Creates a symlink to that file from /etc/nginx/sites-enabled/{site}
Reloads the nginx configuration: service nginx reload
Sends an API call to CloudFlare to direct all requests from {site}.mydomain.com to the server's IP address
After that, {site}.mydomain.com should work, except... it doesn't!
... Until you wait ~5 minutes, and then it magically starts working. Is there a delay before proxying comes into effect with nginx?
If I delete {site} and re-add it (same process as above), even if it was working before, it stops working for a while before starting to work again.
I'm at a loss to explain what's happening!
nginx config (where {site} is foobar)
upstream mydomain_foobar {
    server 127.0.0.1:4567;
}

server {
    listen 80;
    server_name foobar.mydomain.com;
    root /var/www/mydomain/foobar/;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://mydomain_foobar/;
        proxy_redirect off;

        # Socket.io Support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
After looking at this issue on and off over the past month, it turns out the issue was not related to nginx at all.
When a rec_new API call is sent to CloudFlare, it takes roughly five minutes (300s TTL) for their records to update. The same can be said for any other DNS related API call to CloudFlare.
This explains the five minute gap.
From Cloudflare:
Hi,
DNS updates should take place after about five minutes (the ttl is 300 seconds). It may take a little longer to propagate out everywhere on the web (recursive DNS caching by your ISP, for example).

nginx reverse proxy to backend running on localhost

EDIT: It turns out that my setup below actually works. Previously, I was getting redirections to port 36000, but that was due to some configuration settings in my backend application.
I am not entirely sure, but I believe I might be wanting to set up a reverse proxy using nginx.
I have an application running on a server at port 36000. By default, port 36000 is not publicly accessible and my intention is for nginx to listen to a public url, direct any request to the url to an application running on port 36000. During this entire process, the user should not know that his/her request is being sent to an application running on my server's port 36000.
To put it in more concrete terms, assume that my url is http://domain.somehost.com/
Upon visiting http://domain.somehost.com/, nginx should pick up the request and redirect it to an application already running on the server on port 36000; the application does some processing and passes the response back. Port 36000 is not publicly accessible and should not appear as part of any URL.
I've tried a setup that looks like:
server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_pass http://127.0.0.1:36000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
and including that inside my main nginx.conf
However, it requires me to make port 36000 publicly accessible, and I'm trying to avoid that. The port 36000 also shows up as part of the forwarded url in the web browser.
Is there any way that I can do the same thing, but without making port 36000 accessible?
Thank you.
EDIT: The config below is from a working nginx config, with the hostname and port changed.
You may be able to set the server listening on port 36000 as an upstream server (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html).
server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://localhost:36000/;
        proxy_redirect http://localhost:36000/ https://$server_name/;
    }
}
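Since the answer mentions the upstream module but the example above proxies to localhost directly, here is a hedged sketch of the upstream-block variant; the name backend_36000 is made up for illustration:

# Upstream-module variant of the config above; "backend_36000" is a
# hypothetical name, not from the original answer.
upstream backend_36000 {
    server 127.0.0.1:36000;
}

server {
    listen 80;
    server_name domain.somehost.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend_36000;
    }
}

Either way, as long as the backend process binds only to 127.0.0.1:36000, nginx on the same host can reach it but outside clients cannot, which is what the question asks for.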
