Default nginx conf file

Suppose I have 3 nginx conf files and default_server is not defined in any of them. Now if a request comes to the server and its Host header value does not match any server name, or the request does not contain that header field at all, which nginx config will be used to serve the request?
I mean, how is it prioritized?

I guess the first vhost would be used. If you include virtual hosts (/etc/nginx/sites-enabled/*), the hosts are included in alphabetical order. So, if you have hosts "a", "b" and "c", the first of them will be "a".

Please take a look at the nginx documentation on Server names:
If no server_name is defined in a server block then nginx uses the empty name as the server name.
nginx versions up to 0.8.48 used the machine's hostname as the server name in this case.
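If you would rather not depend on include order, you can make the choice explicit with default_server. A minimal sketch (my addition, not from the quoted docs; returning 444 to drop unmatched requests is a common convention):
server {
    # Explicit catch-all: wins over "first server block" ordering.
    listen 80 default_server;
    server_name _;
    return 444;  # non-standard code: nginx just closes the connection
}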

Related

Deny access include file not working in nginx conf

The include is not taken into account to deny access to certain IPs.
Nginx 1.12 is configured as a proxy.
I have 2 config files:
nginx.conf
mydomain.conf
I followed a tutorial to get a list of denied IPs, all in a 3rd conf file called blockips.conf.
Every line in that conf file looks like:
deny xxx.xxx.xxx.xxx;
Now, I tried to include it like this in either the http or server section of both the nginx.conf and mydomain.conf files (not both at the same time, but trying the 1st and then the 2nd), but either it doesn't block, or nginx crashes.
include blockips.conf;
But when I put only the
deny xxx.xxx.xxx.xxx;
directly in mydomain.conf in the server section, then the IP is blocked. Of course I could put my whole list of IPs within the mydomain.conf file, but it makes more sense to have it external, right? But then it doesn't work. Of course, I've tried having my blockips.conf file with only one line (just to make sure it is not a missing ;).
I have also checked the file permissions and they are all identical: 644 under root.
Thank you
There were 2 nginx.conf files on my system and I was not modifying the correct one!
Just run nginx -t to verify which config file nginx actually loads.
Now it is working.
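For reference, a minimal sketch of the layout that should then work (paths and names are assumptions based on the question; adjust to your setup):
# /etc/nginx/blockips.conf -- one deny per line
deny 192.0.2.10;
deny 198.51.100.0/24;

# inside the server block of mydomain.conf
server {
    listen 80;
    server_name mydomain.com;
    # An absolute path avoids ambiguity about which directory
    # a relative include is resolved against.
    include /etc/nginx/blockips.conf;
    # ...
}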

Nginx will not start with host not found in upstream

I use nginx to proxy and hold persistent connections to far away servers for me.
I have configured about 15 blocks similar to this example:
upstream rinu-test {
    server test.rinu.test:443;
    keepalive 20;
}
server {
    listen 80;
    server_name test.rinu.test;
    location / {
        proxy_pass https://rinu-test;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
    }
}
The problem is that if the hostname cannot be resolved in one or more of the upstream blocks, nginx will not (re)start. I can't use static IPs either; some of these hosts explicitly said not to do that because their IPs will change. Every other solution I've seen for this error message says to get rid of upstream and do everything in the location block. That is not possible here because keepalive is only available under upstream.
I can temporarily afford to lose one server but not all 15.
Edit:
Turns out nginx is not suitable for this use case. An alternative backend (upstream) keepalive proxy should be used. A custom Node.js alternative is in my answer. So far I haven't found any other alternatives that actually work.
Earlier versions of nginx (before 1.1.4), which already powered a huge number of the most visited websites worldwide (and some still do even nowadays, if the server headers are to be believed), didn't even support keepalive on the upstream side, because there is very little benefit to doing so in the datacentre setting, unless you have non-trivial latency between your various hosts; see https://serverfault.com/a/883019/110020 for some explanation.
Basically, unless you know you specifically need keepalive between your upstream and front-end, chances are it's only making your architecture less resilient and worse off.
(Note that your current solution is also wrong because a change in the IP address will likewise go undetected, because you're doing hostname resolution at config reload only; so, even if nginx does start, it'll basically stop working once IP addresses of the upstream servers do change.)
Potential solutions, pick one:
The best solution would seem to be to just get rid of upstream keepalive as likely unnecessary in a datacentre environment, and to use variables with proxy_pass to get up-to-date DNS resolution for each request (nginx is still smart enough to cache such resolutions).
Another option would be to get a paid version of nginx through a commercial subscription, which has a resolve parameter for the server directive within the upstream context.
Finally, another thing to try might be to use a set variable and/or a map to specify the servers within upstream; this is neither confirmed nor denied to have been implemented, i.e., it may or may not work.
Your scenario is very similar to the one when using AWS ELBs as upstreams, where it is critical to resolve the proper IP of the defined domain.
The first thing you need to do is ensure that the DNS servers you are using can resolve your domains; then you could create your config like this:
resolver 10.0.0.2 valid=300s;
resolver_timeout 10s;

location /foo {
    set $foo_backend_servers foo_backends.example.com;
    proxy_pass http://$foo_backend_servers;
}

location /bar {
    set $bar_backend_servers bar_backends.example.com;
    proxy_pass http://$bar_backend_servers;
}
Notice the resolver 10.0.0.2; it should be the IP of a DNS server that works and answers your queries. Depending on your setup this could be a local caching service like unbound, in which case you would just use resolver 127.0.0.1.
Now, it is very important to use a variable to specify the domain name. From the docs:
When you use a variable to specify the domain name in the proxy_pass directive, NGINX re‑resolves the domain name when its TTL expires.
You could check your resolver by using a tool like dig, for example:
$ dig +short stackoverflow.com
In case keepalive in the upstreams is a must, and using Nginx Plus is not an option, then you could give the openresty balancer a try; you will need to use/implement lua-resty-dns.
One possible solution is to involve a local DNS cache. It can be a local DNS server like Bind or Dnsmasq (with some crafty configuration; note that nginx can also use a specified DNS server in place of the system default), or just maintaining the cache in the hosts file.
It seems that using the hosts file with some scripting is a quite straightforward way. The hosts file should be split into static and dynamic parts (i.e. cat hosts.static hosts.dynamic > hosts), and the dynamic part should be generated (and updated) automatically by a script.
Perhaps it makes sense to check the hostnames for changing IPs from time to time, and to update the hosts file and reload the nginx configuration on changes. In case some hostname cannot be resolved, the old IP or some default IP (like 127.0.1.9) should be used.
If you don't need the hostnames in the nginx config file (i.e., IPs are enough), the upstream section with IPs (resolved hostnames) can be generated by a script and included into the nginx config, as sketched below; there is no need to touch the hosts file in that case.
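A minimal sketch of that script-generated variant (the file name and address are hypothetical):
# /etc/nginx/conf.d/upstreams.generated.conf
# Rewritten periodically by a script that re-resolves the hostnames
# and runs "nginx -s reload" only when an address actually changed.
upstream rinu-test {
    server 203.0.113.7:443;  # current address of test.rinu.test
    keepalive 20;
}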
I put the resolve parameter on the server directive, and you need to set the nginx resolver in nginx.conf, as below:
/etc/nginx/nginx.conf:
http {
    resolver 192.168.0.2 ipv6=off valid=40s;  # the IP of the DNS server
}
Site.conf:
upstream rinu-test {
    server test.rinu.test:443 resolve;
    keepalive 20;
}
My problem was container related. I'm using docker compose to create the nginx container, plus the app container. When setting network_mode: host in the app container config in docker-compose.yml, nginx was unable to find the upstream app container. Removing this fixed the problem.
We can resolve it temporarily:
cd /etc
sudo vim resolv.conf
and add the line:
nameserver 8.8.8.8
Save and quit (:wq), then run sudo nginx -t and restart nginx; it will work for the moment.
An alternative is to write a new service that only does what I want. The following replaces nginx for proxying HTTPS connections, using Node.js:
const http = require('http');
const https = require('https');

// Reuse TLS connections to the upstream hosts.
const httpsKeepAliveAgent = new https.Agent({ keepAlive: true });

http.createServer(onRequest).listen(3000);

function onRequest(client_req, client_res) {
    // Stream the client request body into an HTTPS request to the host
    // named in the Host header, and stream the response back.
    client_req.pipe(
        https.request({
            host: client_req.headers.host,
            port: 443,
            path: client_req.url,
            method: client_req.method,
            headers: client_req.headers,
            agent: httpsKeepAliveAgent
        }, (res) => {
            res.pipe(client_res);
        }).on('error', (e) => {
            client_res.end();
        })
    );
}
Example usage:
curl http://localhost:3000/request_uri -H "Host: test.rinu.test"
which is equivalent to:
curl https://test.rinu.test/request_uri

Nginx reverse proxy: dynamic hostname with header key and value or URL path

I have an nginx problem; I hope you will help me to solve it.
There are several servers:
a user PC on the internet;
an nginx proxy, hostname "nginxproxy", located in the internal network; it is the only server with a public IP ("1.1.1.1") and acts as a jump host, listening on 8090;
server1, hostname "tomcat1", located in the internal network (only has the private IP "70.1.1.1");
server2, hostname "tomcat2", located in the internal network (only has the private IP "70.1.1.2");
and 5, 6, ...: there are more servers, hostnamed apache1, apache2, redis1, etc.
Now my client wants to send an HTTP request directly to a server located in the internal network, but that is not possible (they don't have public IPs), so the call has to be passed to the nginx proxy first.
I just wonder: when I send a request from the user PC, with the destination server's hostname put in the request's header or URL, can nginx parse it and route it to that destination in the internal network?
For example, if I call:
http://nginxproxy:1888/[destination hostname]/[path, files like index.html, some keys and values &k1=v1, etc.]
I hope nginx can convert it and call the destination host like this:
http://[destination hostname]:8888/[path, files like index.html, some keys and values &k1=v1, etc.]
I tried to do this, but there were some errors. The error log printed:
"localhost could not be resolved (10060: Operation timed out), client: 127.0.0.1, server: localhost, request: "GET /localhost/8080/index"
server {
    listen 1888;
    server_name localhost;
    location ~ ^\/([a-zA-Z0-9]+)\/([0-9]+)\/([a-zA-Z0-9]+) {
        proxy_pass http://$1:$2/$3;
    }
}
And one more thing: in the Java code, I set this:
import org.apache.http.HttpMessage;

HttpMessage request;
request.addHeader("destinationHost", "tomcat2");
request.addHeader("destinationPort", "8888");
and call this URL:
http://nginxproxy:1888/[path, files like index.html, some keys and values &k1=v1, etc.]
Can nginx convert the URL to
http://tomcat2:8888/[path, files like index.html, some keys and values &k1=v1, etc.]
and pass it there?
If so, how can I set up nginx.conf?
Thank you so much, and have a nice day.
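No accepted answer is recorded here, but a minimal sketch of the URL-path variant might look like this (my addition; 10.0.0.2 stands in for an internal DNS server that can resolve names like "tomcat2", and a resolver is required because proxy_pass contains variables, which is also why the error log above complained that "localhost could not be resolved"):
server {
    listen 1888;
    # Needed: with variables in proxy_pass, nginx resolves the
    # hostname at request time via this resolver.
    resolver 10.0.0.2 valid=30s;

    # http://nginxproxy:1888/tomcat2/index.html?k1=v1
    #   -> http://tomcat2:8888/index.html?k1=v1
    location ~ ^/(?<dest>[a-zA-Z0-9]+)/(?<rest>.*)$ {
        proxy_pass http://$dest:8888/$rest$is_args$args;
    }

    # Header variant: a "destinationHost: tomcat2" request header is
    # exposed to nginx as $http_destinationhost (lowercased), so a
    # location could instead use:
    #   proxy_pass http://$http_destinationhost:$http_destinationport;
}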

Nginx server names priority

I have two server sections for nginx in different files.
The first one:
server {
    server_name _;
    ...
}
The second one:
server {
    server_name ~someRegex;
    ...
}
I have some constraints: I can't change the first server section (i.e. I can't edit the first file).
Documentation says the following about server names priority:
exact name
longest wildcard name starting with an asterisk, e.g. “*.example.org”
longest wildcard name ending with an asterisk, e.g. “mail.*”
first matching regular expression (in order of appearance in a configuration file)
As I understand it, server_name _ is used as a catch-all server.
So when I have a request from a host that matches someRegex, the request is handled by the first server section. Is there a way to have these requests handled by the second server section?
Not quite.
_ simply renders the server_name invalid. See this document.
What makes a server block the default is either being defined first for a given port or being defined with the listen ... default_server modifier. See this document.
So your configuration will work as you expect, assuming that your regex is valid and that the second server block has indeed been installed by nginx. Check your error log after reloading nginx and/or test the configuration using
nginx -t
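To illustrate the matching order with a hand-made sketch (hostnames and return bodies are placeholders, not from the question):
# file 1 (unchangeable): first block for port 80, so it acts as the
# implicit default server
server {
    listen 80;
    server_name _;  # invalid name: never matches a real Host header
    return 200 "default\n";
}

# file 2: any Host matching the regex lands here, because a matching
# server_name (exact, wildcard, or regex) always beats the default
# server, which is consulted only when nothing matches
server {
    listen 80;
    server_name ~^app\d+\.example\.org$;
    return 200 "regex\n";
}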

nginx - duplicate default server error

In my error log I get:
[emerg] 10619#0: a duplicate default server for 0.0.0.0:80 in /etc/nginx/sites-enabled/mysite.com:4
On line 4 I have:
server_name mysite.com www.mysite.com;
Any suggestions?
You likely have other files (such as the default configuration) located in /etc/nginx/sites-enabled that need to be removed.
This issue is caused by a repeat of the default_server parameter supplied to one or more listen directives in your files. You'll likely find this conflicting directive reads something similar to:
listen 80 default_server;
As the nginx core module documentation for listen states:
The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair. If none of the directives have the default_server parameter then the first server with the address:port pair will be the default server for this pair.
This means that there must be another file or server block defined in your configuration with default_server set for port 80. nginx is encountering that first before your mysite.com file so try removing or adjusting that other configuration.
If you are struggling to find where these directives and parameters are set, try a search like so:
grep -R default_server /etc/nginx
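For example, the kind of conflict that search might reveal, and the fix (file names are examples):
# /etc/nginx/sites-enabled/default
server {
    listen 80 default_server;  # first default for 0.0.0.0:80
    ...
}

# /etc/nginx/sites-enabled/mysite.com
server {
    listen 80 default_server;  # duplicate -> [emerg]; remove the
                               # default_server parameter here
    server_name mysite.com www.mysite.com;
    ...
}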
OS: Debian 10 + nginx.
In my case, I unlinked the "default" page:
cd /etc/nginx/sites-enabled
unlink default
service nginx restart
Execute this at the terminal to see conflicting configurations listening to the same port:
grep -R default_server /etc/nginx
If you're on DigitalOcean, this means you need to go to /etc/nginx/sites-enabled/ and then remove the digitalocean and default configs using rm -R digitalocean default.
That fixed it for me!
In my case, commenting out the wildcard include directive in /etc/nginx/nginx.conf and including only one site worked:
#include /etc/nginx/sites-enabled/*;
include /etc/nginx/sites-enabled/abcdef.com;
PS: as per the comments above, this can only be a solution if there is just one configuration (either the default or your custom one).
In my case, junk files from my editor caused the problem.
I had a config as below:
#...
http {
    # ...
    include ../sites/*;
}
In the ../sites directory I initially had a default.config file.
However, by mistake I had saved duplicate files as default.config.save and default.config.save.1.
Removing them resolved the issue.
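One way to make the include immune to such stray files is to tighten the glob so only real config files match (a suggestion, not from the original answer):
http {
    # editor backups like default.config.save no longer match
    include ../sites/*.config;
}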
If davidjb's answer does not show multiple default_server lines, check for multiple include directives.
It is possible you accidentally included your default (or another site) twice.
