To give you a bit of background on my problem:
I have an app that installs software and serves it publicly. When it installs a site, it:
Installs base files to the root path (e.g. /var/www/{site})
Creates a new nginx configuration file and places it in /etc/nginx/sites-available/{site}
Creates a symlink to that file from /etc/nginx/sites-enabled/{site}
Reloads the nginx configuration: service nginx reload
Sends an API call to CloudFlare to point {site}.mydomain.com at the server's IP address
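In shell terms, the middle steps look roughly like this (a sketch, with {site} expanded to foobar):

ln -s /etc/nginx/sites-available/foobar /etc/nginx/sites-enabled/foobar
nginx -t && service nginx reload
# followed by the CloudFlare API call (rec_new) to create the A record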
After that, {site}.mydomain.com should work, except... it doesn't!
... Until you wait ~5 minutes, and then it magically starts working. Is there a delay before proxying comes into effect with nginx?
If I delete {site} and re-add it (same process as above), even if it was working before, it stops working for a while before starting to work again.
I'm at a loss to explain what's happening!
nginx config (where {site} is foobar)
upstream mydomain_foobar {
    server 127.0.0.1:4567;
}

server {
    listen 80;
    server_name foobar.mydomain.com;
    root /var/www/mydomain/foobar/;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        # must match the upstream name declared above
        proxy_pass http://mydomain_foobar/;
        proxy_redirect off;

        # Socket.io support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
After looking at this issue on and off over the past month, it turns out the issue was not related to nginx at all.
When a rec_new API call is sent to CloudFlare, it takes roughly five minutes (the 300-second TTL) for their records to update. The same goes for any other DNS-related API call to CloudFlare.
This explains the five minute gap.
From Cloudflare:
Hi,
DNS updates should take place after about five minutes (the ttl is 300 seconds). It may take a little longer to propagate out everywhere on the web (recursive DNS caching by your ISP, for example).
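One way to confirm the delay is DNS propagation rather than nginx is to query the record directly until the new IP shows up (hostname taken from the example above):

dig +short foobar.mydomain.com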
Related
I have a website hosted on Google Cloud Platform behind an NGINX server. A few hours ago it was working well, but then it suddenly started returning a 502 Bad Gateway error.
The NGINX server is hosted on one instance and the main project on another. The server configuration is as follows:
server {
    listen 443 ssl;
    server_name www.domain.com;

    ssl_certificate /path-to/fullchain.pem;
    ssl_certificate_key /path-to/privkey.pem;

    # REST API redirect
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        # note: originally written as "http:/internal-ip:3000", missing a slash
        proxy_pass http://internal-ip:3000;
    }

    # Server-side CMS redirect
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://internal-ip:4400;
    }
}
When I restart the nginx instance, the website loads successfully, but after three or four refreshes it starts returning Bad Gateway, and from then on it gives the Bad Gateway error every time I open it.
Sometimes it recovers on its own, only to go down again.
I checked the nginx error log; it alternates between two recurring errors.
[error log screenshots]
For the first error I tried some of the recommended fixes, such as raising the proxy send and read timeouts to higher values in the server configuration, as suggested here (roughly the change sketched below):
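A reconstruction of that change (the original was posted as a screenshot, so the exact values here are illustrative):

proxy_send_timeout 300s;
proxy_read_timeout 300s;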
The backend code itself is fine: I can reach the deployed backend services locally during development, but the hosted website cannot reach any backend service.
Nothing has worked so far, and sadly my website is down.
Please suggest a solution.
By default nginx has a limited number of worker connections (1024 in many configurations); you can change it with:
events {
    worker_connections 4096;
}
You can also increase the number of worker processes: workers × worker_connections gives the total number of connections you can handle. All of this matters in the context where your site is receiving traffic and you are simply running out of connections. A combined sketch follows.
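Putting both knobs together in nginx.conf (a sketch; the values are illustrative, not from the original answer):

worker_processes auto;  # one worker per CPU core

events {
    worker_connections 4096;  # per-worker limit
}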
I have an application running in Kubernetes with the following topology:
some ingress controller --> nginx reverse proxy --> dynamically generated services
I have set the NGINX reverse proxy with the following test configuration
location /mysite1/ {
    proxy_set_header Host $host;
    proxy_set_header Referer $http_referer;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto http;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $remote_addr;
    proxy_pass http://myservice1.default.svc:9000/;
}
So far everything works fine: when I go to http://example.com/mysite1/ I see what I expect from the myservice1 application hosted at http://myservice1.default.svc:9000/. However, myservice1 issues requests to various internal resources (internal meaning they are part of the same container) under /get_resourceX. When myservice1 tries to access these resources, they are requested at http://example.com/get_resourceX and not at http://example.com/mysite1/get_resourceX as they should be. That is my problem.
What could work is to simply reverse proxy all the relevant resource names as well. However, then I would need to do the same for http://example.com/mysite2, http://example.com/mysite3 etc. which is impractical since these are generated dynamically.
Another possible solution is to check the HTTP Referer header and see whether the request originates from mysite1, but that seems awfully hackish.
How can I easily have myservice1 requests issued to /get_resourceX served by itself? Is there a generic way to set the root path for the myservice1 application to myservice1?
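One workaround not mentioned above: nginx's sub_filter module can rewrite absolute resource paths inside the proxied HTML on the way out. A sketch, assuming the resources are referenced through href/src attributes:

location /mysite1/ {
    proxy_pass http://myservice1.default.svc:9000/;
    # ask the upstream for uncompressed responses so sub_filter can edit them
    proxy_set_header Accept-Encoding "";
    # prefix absolute resource paths with the site prefix
    sub_filter 'href="/get_' 'href="/mysite1/get_';
    sub_filter 'src="/get_' 'src="/mysite1/get_';
    sub_filter_once off;
}

This only covers references in HTML; paths built in JavaScript would still miss the prefix, which is why having the application honor a configurable base path is the cleaner fix.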
I ran into a memory allocation error in nginx. I have a reverse proxy configured for a number of sites, which I use as a simple load balancer between two backend nodes. A typical site config looks like this:
upstream backend {
    ip_hash;
    server <node-ip>;
    server <another-node-ip>;
}

server {
    server_name domain.subdomain.com;

    # a HUGE bunch of redirection rules
    include /etc/nginx/sites-available/root;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I have 12 sites with configurations like the above. As you can see, the config includes another file, sites-available/root, which in turn consists of a number of include directives for further files:
include /etc/nginx/sites-available/rules1;
include /etc/nginx/sites-available/rules2;
include /etc/nginx/sites-available/rules3;
...
include /etc/nginx/sites-available/rules16;
Every rules file contains a number of redirection rules like:
if ($request_uri ~* ^/some-url$) {
    return 302 /some-another-url/;
}

or

location ~ some-url {
    return 302 "some-another-url";
}
The total count of redirection rules is around 2300, and the root file is included in the configurations of all 12 sites. Since then, this info message shows up in /var/log/nginx/error.log from time to time:
[info] 23721#23721: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:66
The main problem is that sometimes service nginx reload fails, with errors like these in the log:
[alert] 22164#22164: fork() failed while spawning "worker process" (12: Cannot allocate memory)
2018/10/09 03:10:06 [alert] 22164#22164: sendmsg() failed (9: Bad file descriptor)
The issue goes away if I exclude the redirection rules from the config. Nginx runs on a simple AWS t2.small instance with Ubuntu 16.04; it has 1 GB of RAM, and free -m shows at least half of it free. I have the default nginx.conf. So the question is: how do I avoid the cannot-allocate-memory error caused by the huge number of redirection rules?
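For what it's worth, one standard way to shrink configs like this is nginx's map directive, which turns thousands of if/location blocks into a single hash lookup. A sketch with hypothetical URLs:

# in the http block
map $request_uri $redirect_target {
    default           "";
    /some-url         /some-another-url/;
    /some-other-url   /yet-another-url/;
}

# in each server block
if ($redirect_target) {
    return 302 $redirect_target;
}

Note that $request_uri includes the query string, so exact-match keys only fire on bare URIs.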
I'm currently using nginx 1.9.11. My application is served from a dynamic port via a load balancer, but the port on the app itself is fixed. It looks like this:
Port [4000-4999] on load balancer -> instances of nginx all on port 80
I need to redirect /myapp to /myapp/ without losing the port information. But I don't know the port at the application level, because the apps are dynamically created and destroyed!
Here's the syntax of my rewrite rule:
rewrite ^/myapp /myapp/
What I get is:
http://example.com:4000/myapp -> http://example.com/myapp # no port 4000!
I can't hardcode the port because there are a thousand possibilities and they change all the time. I need to stay on the same port, but I don't know which port the request came in on. How do I get nginx to leave the port alone?
What I've tried already...
In http, server, and location sections:
port_in_redirect off;
In the server section:
proxy_set_header Host $host:$server_port;
Also in the server section:
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_redirect off;
Classic. After two days of effort I write up a nice Stack Overflow post, and immediately afterwards something occurs to me; I try it, and it works.
Here's how I solved the problem: I changed the rewrite rule to look like this:
rewrite ^/myapp $scheme://$http_host/myapp/;  # $scheme:// makes it an external redirect; $http_host keeps the port
And it worked!
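For context, a minimal sketch of the rule in place (assumed server block; $http_host is the client's Host header, which keeps the nonstandard port, while $host strips it):

server {
    listen 80;

    location = /myapp {
        # a replacement beginning with $scheme makes nginx return a 302
        rewrite ^ $scheme://$http_host/myapp/;
    }
}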
See the image below of Vaadin 7 behind nginx (a session expired error). What could be wrong?
[screenshot and web.xml omitted]
Sample config:
server {
    listen 80;
    server_name crm.komrus.com;
    root /home/deploy/apache-tomcat-7.0.57/webapps/komruscrm;

    proxy_cache one;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080/komruscrm/;
    }
}
It seems (you don't provide much information about your problem) that you are using nginx as a reverse proxy for Tomcat/JBoss/Jetty and deploying a Vaadin application behind it.
Just when you enter the application, a session expired message appears.
I had this problem 3 months ago. In my scenario it was nginx 1.0 and Vaadin 7.0+. The issue comes from the cookies: nginx must set or rewrite something in them, and you have to configure this manually in the nginx config file, or you will get that error.
Sadly, in my nginx version I wasn't able to pass the cookies the right way, so I couldn't deploy my application under that scenario.
After some attempts I decided to use Apache's reverse proxy instead and never saw the issue again. I hope you can write a rule that passes the cookies the right way.
EDIT: I remembered this post: How to rewrite the domain part of Set-Cookie in a nginx reverse proxy? That is exactly this case!
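For completeness: on nginx 1.1.15 and later the cookie path can be rewritten directly. Given the config above, the app is proxied from / to the /komruscrm/ context, so Tomcat's JSESSIONID cookie comes back with Path=/komruscrm and the browser never returns it for /, which looks exactly like an expired session. A sketch (values inferred from the sample config, not from the original answer):

location / {
    proxy_pass http://127.0.0.1:8080/komruscrm/;
    # rewrite the servlet context's cookie path to the proxied root
    proxy_cookie_path /komruscrm /;
}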