I configured nginx as a load balancer, and as long as the nginx server's IP is requested directly everything runs perfectly. But the proxy_pass is not working.
Here is the crucial config part:
upstream discover {
    hash $remote_addr consistent;
    server <ipOfAppInstance01>:80;
    server <ipOfAppInstance02>:80;
}
server {
    listen 80;
    server_name localhost;
    location /discover/ {
        proxy_pass http://discover;  # <-- upstream group name
    }
}
In some cases the configured proxy_pass path ("discover/discover/...") is requested instead of the nginx server IP ("10.55.22.13/discover/..."), and that is when I get the DNS resolution error. Did I get the config wrong? Or is this a DNS server issue rather than an nginx one?
Regards
A
I'll need to test some more, but I think I solved this in the nginx configuration by doing something like this:
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://main;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
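Applied to the upstream from the original question, the same header fix might look like the sketch below (the upstream name and the bracketed hosts are the question's own placeholders):

```nginx
upstream discover {
    hash $remote_addr consistent;
    server <ipOfAppInstance01>:80;
    server <ipOfAppInstance02>:80;
}

server {
    listen 80;
    server_name localhost;

    location /discover/ {
        proxy_pass http://discover;
        # Preserve the original Host header so backends build their
        # links/redirects against the public name, not the upstream
        # group name "discover" (which is not DNS-resolvable).
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```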
I installed Nginx on my server (it uses WHM), and the server has two accounts. Each account runs a Next.js site, and each account has its own domain.
Site1 will run on port 3000
Site2 will run on port 3004
What I want to do is:
I want to access domain1 I see the content of my site1 in NextJS that runs on localhost:3000
And when I access domain2 I see the content of my site2 on NextJS running on localhost:3004
I tried an Nginx configuration for site1, but when I accessed it I saw a cPanel screen, and the URL was domain1/cgi-sys/defaultwebpage.cgi.
Here's the Nginx implementation I tried to do:
server {
    listen 80;
    server_name computadorsolidario.tec.br www.computadorsolidario.tec.br;
    location / {
        proxy_pass http://localhost:3004;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}
So how do I configure nginx to get this behavior? And am I changing the correct file?
Note: I created the configuration file at /etc/nginx/conf.d/users/domain1/domio1.conf, and /etc/nginx/conf.d/users contains several configuration files named after the accounts on the server. (They are already in place.)
Try the following. Each domain listens on the same port and reverse-proxies to the local service on the port you specify; to differentiate between hosts, set the server_name directive. Note that proxy_pass must be placed inside a location block, not directly in the server block:
server {
    listen 80;
    server_name www.domain1.com;
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
server {
    listen 80;
    server_name www.domain2.com domain2.com;
    location / {
        proxy_pass http://127.0.0.1:3004;
    }
}
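Since the question's own snippet already sets WebSocket upgrade headers (Next.js hot reload uses WebSockets in development), you may want to carry those over as well; a sketch for one of the two domains, using the question's placeholder names:

```nginx
server {
    listen 80;
    server_name www.domain1.com domain1.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # WebSocket upgrades require HTTP/1.1 and explicit
        # Upgrade/Connection headers on the proxied request.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
    }
}
```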
Can't connect to application through External IP.
I started the Gerrit code review application on a GCP VM instance (CentOS 7).
It works on http://localhost:8080, but I can't connect to it through the external IP. I also tried to create an NGINX reverse proxy, but my configuration is probably wrong. By the way, after installing NGINX, the default start page was shown on the external IP.
# nginx configuration /etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name localhost;
    auth_basic "Welcome to Gerrit Code Review Site!";
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
gerrit.config
[httpd]
listenUrl = proxy-http://127.0.0.1:8080/
You use localhost as the server_name. That may cause a conflict, because you connect to your server externally. You don't need server_name at all, since you are going to connect to the server by IP. I also recommend enabling logs in your nginx config; they will help with debugging.
I recommend you try this config:
server {
    listen 80;
    access_log /var/log/nginx/gerrit_access.log;
    error_log /var/log/nginx/gerrit_error.log;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
    }
}
Add a line to /etc/hosts:
127.0.0.1 internal.domain
Then update the proxy config:
proxy_pass http://internal.domain:8080;
This works for me.
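One more thing that usually matters when Gerrit sits behind a reverse proxy: Gerrit needs to know its public URL so the links it generates point at the proxy rather than at 127.0.0.1. A sketch of the relevant gerrit.config entries, alongside the listenUrl already shown in the question (the hostname is a placeholder):

```
[gerrit]
    canonicalWebUrl = http://example.com/

[httpd]
    listenUrl = proxy-http://127.0.0.1:8080/
```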
I've been trying to wrap my head around load balancing over the past few days and have hit somewhat of a snag. I thought I'd set everything up correctly, but it appears almost all of my traffic still goes through my primary server, even though the weights I've set should split the traffic 10:1.
My current load balancer config:
upstream backend {
    least_conn;
    server 192.168.x.xx weight=10 max_fails=3 fail_timeout=5s;
    server 192.168.x.xy weight=1 max_fails=3 fail_timeout=10s;
}
server {
    listen 80;
    server_name somesite.somesub.org www.somesite.somesub.org;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend$request_uri;
    }
}
server {
    listen 443;
    server_name somesite.somesub.org www.somesite.somesub.org;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend$request_uri;
    }
}
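A small aside on the proxy_pass lines in the config above: when proxy_pass references an upstream group without a URI part, nginx forwards the client's original request URI unchanged, so appending $request_uri is normally unnecessary; a minimal sketch of the simpler form:

```nginx
location / {
    # The original request URI is passed through to the upstream
    # group as-is; no need to append $request_uri explicitly.
    proxy_pass http://backend;
}
```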
And my current site config is as follows:
server {
    listen 192.168.x.xx:80;
    server_name somesite.somesub.org;
    index index.php index.html;
    root /var/www/somesite.somesub.org/html;
    access_log /var/www/somesite.somesub.org/logs/access.log;
    error_log /var/www/somesite.somesub.org/logs/error.log;
    include snippets/php.conf;
    include snippets/security.conf;
    location / {
        #return 301 https://$server_name$request_uri;
    }
}
server {
    listen 192.168.x.xx:443 ssl http2;
    server_name somesite.somesub.org;
    index index.php index.html;
    root /var/www/somesite.somesub.org/html;
    access_log /var/www/somesite.somesub.org/logs/access.log;
    error_log /var/www/somesite.somesub.org/logs/error.log;
    include snippets/php.conf;
    include snippets/security.conf;
    include snippets/self-signed-somesite.somesub.org.conf;
}
The other node's configuration is exactly the same, aside from a different IP address.
A small detail that may or may not matter: one of the nodes is hosted on the same machine as the load balancer.
Both machines have the correct firewall config and can be accessed separately.
No error logs show anything of use.
The only thing I can think of is that the nginx site config is being matched before the load balancer config, and I'm not sure how to fix that.
Taking another look at the configuration, I realized I could just as easily have the site config on the load balancer machine listen on 127.0.0.1 and list that among the available servers in the load balancer.
Having the site on the load balancer machine listen on localhost:80/443 solved this issue.
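The fix described above might look something like this sketch (keeping the question's placeholder addresses, and showing only the HTTP side):

```nginx
upstream backend {
    least_conn;
    # The local node now listens on loopback instead of the LAN address,
    # so it no longer competes with the load balancer for the public port.
    server 127.0.0.1 weight=10 max_fails=3 fail_timeout=5s;
    server 192.168.x.xy weight=1 max_fails=3 fail_timeout=10s;
}

# Load balancer: owns the public port 80.
server {
    listen 80;
    server_name somesite.somesub.org www.somesite.somesub.org;
    location / {
        proxy_set_header Host somesite.somesub.org;
        proxy_pass http://backend;
    }
}

# Site config on the same machine: bound to loopback only.
server {
    listen 127.0.0.1:80;
    server_name somesite.somesub.org;
    root /var/www/somesite.somesub.org/html;
    index index.php index.html;
}
```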
I'm just getting started with Nginx and am trying to set up a server block to forward all requests on the subdomain api.mydomain.com to port 8080.
Here's what I've got:
UPDATED:
server {
    server_name api.mydomain.com;
    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
server {
    server_name www.mydomain.com;
    return 301 $scheme://mydomain.com$request_uri;
}
server {
    server_name mydomain.com;
    root /var/www/mydomain.com;
    index index.html index.htm;
    location / {
        try_files $uri $uri/ =404;
    }
}
The server block exists in /etc/nginx/sites-available and I have created a symlink in /etc/nginx/sites-enabled.
What I expect:
I'm running deployd on port 8080. When I go to api.mydomain.com/users I expect to get a JSON response from the deployd API, but I get no response instead.
Also, my 301 redirect for www.mydomain.com is not working. That block was code I copied from Nginx Pitfalls.
What I've tried:
Confirmed that mydomain.com:8080/users and $ curl http://127.0.0.1:8080/users both return the expected response.
Restarted the nginx service after making changes to the server block.
Tried removing the proxy_set_header lines.
Any idea what I'm missing here?
You shouldn't need to explicitly capture the URL for your use case. The following should work for your location block:
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
}
As it turns out, my problem was not with the Nginx configuration but rather with my DNS settings. I had to create an A record for each of my subdomains (www and api). Rookie mistake.
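In zone-file terms, that fix might look like the sketch below, with both subdomains pointing at the server running nginx (the IP is a hypothetical placeholder):

```
; hypothetical zone entries for the two subdomains
www  IN  A  203.0.113.10
api  IN  A  203.0.113.10
```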
A colleague of mine actually helped me troubleshoot the issue. We discovered the problem when using telnet from my local machine to connect to the server via IP address and saw that Nginx was, in fact, doing what I intended.
I have nginx serving a page on port 80.
server {
listen 80;
server_name .example.com;
root /var/www/docs;
index index.html;
}
I also have a service running a server on port 9000. How do I set up a virtual directory in nginx (such as /service) to serve whatever is on port 9000? I am unable to open other ports, so I would like to serve this through some kind of virtual directory on port 80.
Start with this (but you will definitely need more directives to make your server answer properly on this subdirectory):
location /something {
    proxy_pass http://localhost:9000/;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
}