My initial NGINX load balancer configuration was pretty simple:
upstream myapp {
    server 10.11.12.13:80; #server01
    server 10.11.12.14:80; #server02
}

server {
    listen 80;
    server_name localhost;

    location /myapp/ {
        proxy_pass http://myapp;
    }
}
Let's say the localhost has the IP 1.2.3.4.
Result:
The user calls 1.2.3.4/myapp and gets redirected to one of those two servers including the requested filepath.
For example: 1.2.3.4/myapp/results gets redirected to maybe 10.11.12.13/myapp/results.
Now I have ONE special case to include, and this is where I struggle. ALL requests should still be handled exactly the same, with this one exception:
If 1.2.3.4/specialFilePath is called, I want to redirect to a totally different, static URL, e.g. externalPage.com.
Can I add this case somehow to my Nginx configuration?
You could add a second location block in which you define what to do with the specialFilePath, like:
location /specialFilePath {
proxy_pass http://externalservice.com;
}
Then check the configuration with nginx -t or sudo nginx -t and reload the configuration.
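If you want the browser to actually be redirected to the external page (the URL in the address bar changes) rather than have nginx proxy the request, a return directive is an alternative to proxy_pass. A minimal sketch using the names from the question; the exact external URL and scheme are assumptions:
location /specialFilePath {
    # send the client a permanent redirect to the external site
    return 301 https://externalPage.com/;
}
All other requests (like /myapp/) keep working as before, since nginx picks the most specific matching location block.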
I don't understand what I'm doing wrong, so I hope somebody can help :)
When I access http://10.0.0.54/index.html I get the right page, but if I try to access http://10.0.0.54, instead of showing the index file it redirects me to https://10.0.0.54, showing error 502 Bad Gateway.
This is the configuration in /etc/nginx/sites-available/default:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/html/salvaderi;
    index index.html;
    server_name _;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html/salvaderi;
    }

    location / {
        root /var/www/html/salvaderi;
        index index.html;
    }
}
I am running nginx 1.18.0 on Ubuntu 22.04.
I tried changing parameters inside location / {} but I always get the same result. I also changed the root directory and made sure permissions were set right. Searching for a solution I saw other people having problems with PHP and FastCGI, but I am not using PHP.
Your configuration appears to be right.
Possibly there is some kind of proxy or load balancer placed between you and the nginx instance you are configuring: you get redirected to HTTPS although there are no redirection instructions in your config, and at the same time there is no listen 443 ssl in the config, yet you still get a response to an HTTPS request.
I'd check the following:
Is 10.0.0.54 in fact the IP of your server?
Are there any return 301, return 302 or rewrite instructions in your nginx config? (The better way is to dump the config with the nginx -T command and look it over.)
Did you previously configure some redirects that may have been cached by your web client? Try sending the GET request with curl instead of a web browser (if a browser was used for the tests).
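For example, a quick check from the command line (the IP is the one from the question):
curl -i http://10.0.0.54/
# A 301/302 status together with a "Location: https://..." header shows that something
# is issuing the redirect; if the nginx config has no such rule (verify with nginx -T),
# the redirect comes from a proxy in front of nginx or from a cached browser redirect.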
I'm moving some small production websites to DDEV, and some of them have multiple domains with a 301 redirect to the main HTTPS site.
This config was working well with the "natural" Nginx, when I was using a .conf file to manage the domains that should be redirected to the main site this way:
server {
    listen 80;
    server_name .domain1.com
                .domain2.com
                .domain3.com;
    return 301 https://www.maindomain.com;
}
I tried to create a new domains.conf file and add it inside the .ddev/nginx_full directory to be loaded on restart, but it seems Nginx didn't recognize the file.
In the main "natural" Nginx config file I had this server block to redirect all requests coming from HTTP to HTTPS:
server {
listen 80;
access_log off;
error_log off;
server_name maindomain.com www.maindomain.com;
return 301 https://www.$host$request_uri;
}
I tried to add these configs inside the .ddev/nginx_full/nginx-site.conf file, but the server went crazy: sometimes it did infinite redirections and sometimes it did not recognize the domains.
Inside the config.yaml file I have:
additional_fqdns:
- domain1.com
- domain2.com
- domain3.com
- maindomain.com
- www.maindomain.com
use_dns_when_possible: false
I'm sure there is a "right way" to handle this situation but, looking at the docs, I didn't find an answer. So I'm asking if someone here knows the trick.
Thanks a lot
I think this will work for you.
Add the file .ddev/nginx/redirect.conf with these contents:
if ($http_x_forwarded_proto = "http") {
return 301 https://$host$request_uri;
}
This uses a DDEV nginx snippet; it could also be done with a full nginx config.
The ddev-router acts as a reverse proxy that terminates SSL/443 and passes requests along to the web container on port 80.
You see the infinite redirects because nginx always sees the request on port 80.
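For the extra domains that should land on the main site, a similar snippet could match on the Host header. This is a sketch only, assuming the domain names from the question and that the snippet is included into the server block the way .ddev/nginx/*.conf files are; the file name is hypothetical:
# .ddev/nginx/domain-redirect.conf (hypothetical file name)
if ($host ~* ^(domain1\.com|domain2\.com|domain3\.com)$) {
    # send every request for the secondary domains to the main HTTPS site
    return 301 https://www.maindomain.com$request_uri;
}
Drop $request_uri if you really want every request to land on the main page, as in the original config.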
I'm having trouble configuring my nginx proxy despite reading a number of guides and trying for three consecutive evenings.
Here is my topology:
(From the internet) All traffic from port 80 is redirected to 192.168.1.4, an Ubuntu Server VM running nginx.
I have a NAS with a subdomain myName.surname.com which connects to its admin page. On that NAS, I have an Apache web server hosting a couple of sites on ports 81, 82, etc.
The NAS uses virtual hosts, so the domains redirect successfully (without using nginx).
I also have an ASP.NET website running on IIS on another machine at 192.168.1.3:9810.
Now here is my NGINX configuration. I tried configuring it a few times but broke it so I've put it back to its default state:
server {
    listen 80 default_server;
    root /usr/share/nginx/html;
    index index.html index.htm;
    server_name localhost;

    location / {
        proxy_pass http://192.168.1.1; #WORKS OK
    }
}
If I go to myName.surname.com, wordpressWebsite.co.uk or myIISSiteDomain.co.uk, with the config above I am greeted with the correct page at 192.168.1.1:8080 or 192.168.1.1:81.
It's a start.
The first problem is: when I navigate to any other page (not the home page), like wordpressWebsite.co.uk/blog, it breaks, giving a 404. So I have tried to differentiate between URLs. I read that the config should be something like:
server {
    listen 80;
    server_name wordpressWebsite.co.uk;

    location / {
        proxy_pass http://192.168.1.1:81;
    }
}

server {
    listen 80;
    server_name myName.surname.com;

    location / {
        proxy_pass http://192.168.1.1;
    }
}

server {
    listen 80;
    server_name myIISSiteDomain.co.uk;

    location / {
        proxy_pass http://192.168.1.3:9810;
    }
}
But this is not quite right.
1) wordpressWebsite.co.uk loads the page, but as soon as I go to any other link like wordpressWebsite.co.uk/blog it breaks, giving me my NAS error message as if it's trying to access 192.168.1.1/blog rather than the virtual host's ~/blog. It actually changes the URL in my navbar to 192.168.1.1, so why is it behaving like this?
2) If I'm using virtual hosts, I don't think I should need to pass in the port via nginx for 192.168.1.1:81 (wordpressWebsite.co.uk). Surely I just need to point it at 192.168.1.1, and the virtual host should detect that the URL maps to 81? I'm not sure how to do this, as I don't fully understand what actually gets passed from nginx to the server.
You can add try_files $uri $uri/ /index.php?$args;
See this https://www.geekytuts.net/linux/ultimate-nginx-configuration-for-wordpress/
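Note that try_files belongs in a config where nginx serves the WordPress files itself (as in the linked guide), not inside a plain proxy_pass block. A minimal sketch under that assumption; the document root and the PHP-FPM socket path are assumptions:
server {
    listen 80;
    server_name wordpressWebsite.co.uk;
    root /var/www/wordpress;   # assumed document root
    index index.php;

    location / {
        # route pretty permalinks through WordPress's front controller
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;           # Debian/Ubuntu nginx package snippet
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;  # assumed socket path
    }
}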
I have my staging config setup like so:
server {
listen 80;
server_name staging.domain.com;
root /var/www/staging/public;
and my production config setup like this:
server {
listen 80;
server_name www.domain.com;
root /var/www/production/public;
With no other redirects or anything.
The issue is that even if I disable the production config I can still access the staging server at www.domain.com.
Why is it not being restricted to its configured subdomain?
I've answered a similar question before.
Let me start with a small explanation of how nginx matches hosts, quoting from "How nginx processes a request":
In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port.
When you disable the main server you only have one left, so nginx passes the request to it. If you want to avoid that, you need to add a catch-all server to block all unconfigured domains:
server {
listen 80 default_server;
return 403;
}
Then run
sudo service nginx reload
Then you're set.
I got a new slice off Slicehost, for the purposes of playing around and learning nginx and more about deployment generally. I installed a Ruby app on there (which I'll call app1) which uses Passenger. I made it the default app for that server with the following server block in my nginx config:
server {
listen 80;
server_name <my server ip>;
root <path to app1 public folder>;
passenger_enabled on;
}
This works fine. However, I want to try a few different apps out on this slice, and so thought I would set it up like so:
http://<my server ip>/app1
http://<my server ip>/app2
etc. I thought I would be able to do that by adding a location block, and moving the app1-specific stuff into it like so:
server {
    listen 80;
    server_name <my server ip>;

    location ^~ /app1 {
        root <path to app1 public folder>;
        passenger_enabled on;
    }
}
However, on doing this (and restarting nginx of course), going to the plain IP address gives the 'Welcome to nginx' message (which I'd expect). But going to /app1 gives an error message:
404 Not Found
The requested URL /app1 was not found on this server.
This is distinct from the error message I get when I go to another path on that IP, e.g. /foo:
404 Not Found
nginx/0.8.53
So it's like nginx knows about that location but I've not set it up properly. Can anyone set me straight? Should I set up different server blocks instead of using locations? I'm sure this is simple but I can't work it out.
Cheers, max
What you're after is name virtual hosting. The idea is that each domain is hosted on the same IP, and nginx chooses the virtualhost to serve based on the Host: header in the HTTP request, which is sent by the browser.
To use name virtual hosting, use the domain you want to serve instead of your server's IP for the server_name directive.
server {
    listen 80;
    server_name app1.com;

    location / {
        root /srv/http/app1/public;
        passenger_enabled on;
    }
}
Then, to host more apps on the same box, just declare a separate server { } block for each one.
server {
    listen 80;
    server_name app2.com;

    location / {
        root /srv/http/app2/public;
        passenger_enabled on;
    }
}
I'm using unicorn instead of passenger, but the vhost part of the structure is the same for any backend.
The global nginx config (which on its own hosts nothing): https://github.com/benhoskings/babushka-deps/blob/master/nginx/nginx.conf.erb
The template wrapper for each virtualhost: https://github.com/benhoskings/babushka-deps/blob/master/nginx/vhost.conf.erb
The details of the unicorn virtualhost: https://github.com/benhoskings/babushka-deps/blob/master/nginx/unicorn_vhost.common.erb
I fail to see the real problem here, though. In order to figure it out, you need to view the nginx log files, which on most systems are at /var/log/nginx/. Open the relevant access file there (it might be error.log); in it you can see exactly what URL nginx tried to access and why it failed.
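For example (the paths are the usual Debian/Ubuntu defaults; adjust them if your distribution puts the logs elsewhere):
# watch the error log while reproducing the request to /app1
sudo tail -f /var/log/nginx/error.log
# the access log shows the status code nginx returned for each request
sudo tail -f /var/log/nginx/access.log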
What I really think is happening is that you got the root path wrong; maybe it should be alias instead, because if you are proxying the connection to another app, it might get the "app1" part in the URL instead of a direct path.
So please try:
server {
    listen 80;
    server_name <my server ip>;

    location /app1 {
        alias <path to app1 public folder>;
        passenger_enabled on;
    }
}
and see whether it works, and also try to view the logs first to really determine what the problem is.
I think it's just a slight syntax problem:
location ~ ^/app1 { ...
should work, or a little more efficient:
location = /app1 { ...
One problem is that your Rails app probably wasn't designed to run from a subdirectory. Passenger has a directive that will fix this:
passenger_base_uri /app1;
However, running Rails apps in subdirectories is somewhat non-standard. If you can, a better option may be to set up subdomains using nginx's virtual hosts.
It seems that you want to host more apps on the same server with base uri. Try this:
root /srv/http/;
passenger_base_uri /app_1;
passenger_base_uri /app_2;
Also under /srv/http, create 2 symlinks:
ln -s /srv/http/app1/public /srv/http/app_1
ln -s /srv/http/app2/public /srv/http/app_2
App1 can then be accessed at http://domain.com/app_1.
Here is more reading: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_rack_to_sub_uri
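Putting the fragments together, the server block could look roughly like this; a sketch only, reusing the placeholder server name from the question and the paths from this answer:
server {
    listen 80;
    server_name <my server ip>;

    # web root that contains the app_1 and app_2 symlinks created above
    root /srv/http/;
    passenger_enabled on;

    # each base URI tells Passenger that a separate app lives under that path
    passenger_base_uri /app_1;
    passenger_base_uri /app_2;
}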