Nginx config invalid parameter even though it is in documentation

I am trying to run the latest version of nginx with the following configuration, but I get: nginx: [emerg] invalid parameter "route=bloomberg" in /etc/nginx/nginx.conf:13
docker run --rm -ti -v root_to_local_nginx_directory:/etc/nginx:ro -p 3080:80 --name=mynginx --entrypoint nginx nginx
# nginx.conf file inside root_to_local_nginx_directory
http {
    map $cookie_route $route_from_cookie {
        ~.(?P<route>\w+)$ $route;
    }
    split_clients "${remote_addr}" $random_route {
        50% server bloomberg.com route=bloomberg;
        *   server yahoo.com route=yahoo;
    }
    upstream backend {
        zone backend 64k;
        server bloomberg.com route=bloomberg;
        server yahoo.com route=yahoo;
        sticky route $route_from_cookie $random_route;
    }
    server {
        # ...
        listen 80;
        location / {
            proxy_set_header Host $host;
            proxy_pass http://backend;
        }
    }
}
Why is this? According to the documentation, this should be correct: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream.

The route=string parameter of the server directive within the upstream context is considered an enterprise-grade feature, and is thus only available through the commercial subscription, in NGINX Plus, not in OSS NGINX. (If you look closer at the documentation, you'll notice it's grouped with the other parameters under a separate "available as part of our commercial subscription" subsection.)
Additionally, you're also trying to use similar "server" parameters within the split_clients context as if they were actual directives interpreted by nginx, even though everything in that context is supposed to be a string literal; it's unclear whether that part is responsible for any of the errors, but even if not, it's bad style to introduce such confusion into your configuration.
References:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
http://nginx.org/en/docs/http/ngx_http_split_clients_module.html#split_clients
https://www.nginx.com/products/nginx/

The reason you are seeing the error is that the split_clients module does not support the route parameter. Alternatively, you can do something along these lines:
upstream bloomberg {
    server bloomberg.com route=bloomberg;
}
upstream yahoo {
    server yahoo.com route=yahoo;
}
split_clients "${remote_addr}" $random_route {
    50% bloomberg;
    *   yahoo;
}
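With that in place, $random_route holds one of the upstream names as a plain string, so you could wire it up yourself along these lines (a sketch building on the blocks above, not tested):
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        # $random_route evaluates to "bloomberg" or "yahoo"; with a variable
        # in proxy_pass, nginx looks the value up among the upstream groups
        proxy_pass http://$random_route;
    }
}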

Related

How to force NGINX to use backup upstream and vice versa?

Maybe it's uncommon, but I'd love to use an upstream definition in my nginx load balancer which looks like this:
upstream backend {
    server primary.local.net:80;
    server backup.local.net:80 backup;
}
to aid maintenance of those hosts. First I prepare backup.local.net with the newest software, then switch the service over to backup and do the same with primary.local.net. In the end, I switch back to primary.
Right now I'm doing this by loading a second configuration file:
upstream backend {
    server primary.local.net:80 backup;
    server backup.local.net:80;
}
using the command:
nginx -s reload
But this is laborious, and I hope there is a much smarter way to do this.
First of all, using upstream definitions in NGINX should NOT be uncommon! It's the preferred way of doing it.
Unfortunately, there is no really easy solution for NGINX Open Source. But why not build something that does not require any config reload?
So, given we have two upstream definitions like the ones mentioned above:
upstream blue {
    server primary.local.net:80;
    server backup.local.net:80 backup;
}
upstream green {
    server primary.local.net:80 backup;
    server backup.local.net:80;
}
Blue is primary and green is secondary. Since you say you prepare the hosts beforehand, would it be possible to have something on your backend telling NGINX which deployment is currently active, blue or green?
Another option could be a file on your NGINX instance keeping that information. njs is able to read that file and pick the upstream to use based on its contents.
https://nginx.org/en/docs/njs/reference.html#njs_api_fs
Quick POC:
upstream.conf
upstream blue {
    server 127.1:9000;
    server 127.1:9100 backup;
}
upstream green {
    server 127.1:9000 backup;
    server 127.1:9100;
}

js_import upstream from conf.d/upstream.js;
js_set $upstream upstream.set;

server {
    listen 80;
    location / {
        proxy_pass http://$upstream/;
    }
}
upstream.js
function set(r) {
    var fs = require('fs');
    var c;
    try {
        // the file holds the name of the active upstream: "blue" or "green"
        c = fs.readFileSync("/etc/nginx/conf.d/active.txt");
    } catch (e) {
        r.error("Error while reading upstream file.");
        c = "blue"; // fall back to a default deployment
    }
    return c;
}

export default { set };
active.txt
blue
Note: Make sure to create the file without a newline at the end, e.g. echo -n "blue" > active.txt.
You can now change the content of active.txt at runtime and the upstream will be chosen dynamically. With this solution you can even check request headers, and if you want to test an inactive upstream, that will work as well. Pretty flexible.
There's a pattern for /etc/nginx where you have a master nginx.conf file that loads all of the config files in another directory, such as active_services.
Your actual config files are stored in available_services and symlinked into the active_services directory.
To switch, either flip the link, or delete one link and create the other.
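A minimal sketch of that layout, assuming the directory names above and hypothetical per-variant file names:
# nginx.conf
http {
    include /etc/nginx/active_services/*.conf;
}
Flipping then amounts to repointing the symlink, e.g. ln -sfn ../available_services/backend-green.conf /etc/nginx/active_services/backend.conf, followed by nginx -s reload.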

Server-sent events not working through Kubernetes ingress controller and erroring out

We have an API which creates a cluster, waits for its status, and then executes a query in the database.
We tried this through Ingress and are getting timed out.
We have set the following in the ingress rule:
nginx.ingress.kubernetes.io/configuration-snippet: |
  location / {
    proxy_set_header Connection "";
    proxy_http_version 1.1;
  }
We have also set:
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
With the snippet above, the location is created under /data, which is our main API path, and it does not work as expected.
Is there any direct annotation to enable server-sent events?
When we use the above snippet in the ingress rule, we get this error in the controller logs:
exit status 1
2020/06/26 04:57:22 [emerg] 132#132: location "/" is outside location "/data/" in /tmp/nginx-cfg140739857:11409
nginx: [emerg] location "/" is outside location "/data/" in /tmp/nginx-cfg140739857:11409
nginx: configuration file /tmp/nginx-cfg140739857 test failed.
As the docs suggest, you should never serve your data from /; it should live somewhere under /data/*.
Some directories in any file system should never be used as a document root, including / and /root.
Doing this leaves you open to a request outside of your expected area returning private data.
NEVER DO THIS!!!
server {
    root /;
    location / {
    }
}
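As for the [emerg] itself: the configuration-snippet is injected inside the location block the controller generates for your path (/data/ here), so nesting another location / within it is exactly what nginx rejects. A sketch of the same settings without the nested block, using the dedicated annotations instead (untested against your controller version):
nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/configuration-snippet: |
  proxy_set_header Connection "";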

Dynamic proxy_pass for stream

I'm trying to have OpenResty reverse-proxy TCP dynamically using Lua.
For a start, I have:
stream {
    server {
        listen 9291;
        set_by_lua_block $proxy '
            ngx.var.proxy = "10.128.128.3:8291"
        ';
        proxy_pass $proxy;
    }
}
But openresty -t says:
nginx: [emerg] "set_by_lua_block" directive is not allowed here in /usr/local/openresty/nginx/conf/nginx.conf:129
I found many docs on dynamic proxy_pass, but all for 'http'.
Take a look at the balancer_by_lua_block directive.
You will need to use the ngx.balancer API within balancer_by_lua_block.
Read all the docs carefully; there are a lot of subtle details.
But everything you need is there, just RTFM.
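For a TCP proxy that could look something like the sketch below (untested; the 0.0.0.1:1234 placeholder follows the lua-resty-core examples, and the peer address is the one from your question):
stream {
    upstream dynamic_backend {
        server 0.0.0.1:1234;   # placeholder, never used once the balancer runs

        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            -- the peer could just as well come from redis, a shared dict, etc.
            local ok, err = balancer.set_current_peer("10.128.128.3", 8291)
            if not ok then
                ngx.log(ngx.ERR, "failed to set the current peer: ", err)
            end
        }
    }

    server {
        listen 9291;
        proxy_pass dynamic_backend;
    }
}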

Nginx will not start with host not found in upstream

I use nginx to proxy and hold persistent connections to far away servers for me.
I have configured about 15 blocks similar to this example:
upstream rinu-test {
    server test.rinu.test:443;
    keepalive 20;
}
server {
    listen 80;
    server_name test.rinu.test;
    location / {
        proxy_pass https://rinu-test;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
    }
}
The problem is that if the hostname cannot be resolved in one or more of the upstream blocks, nginx will not (re)start. I can't use static IPs either; some of these hosts explicitly said not to do that because the IPs will change. Every other solution I've seen for this error message says to get rid of upstream and do everything in the location block. That is not possible here because keepalive is only available under upstream.
I can temporarily afford to lose one server but not all 15.
Edit:
It turns out nginx is not suitable for this use case. An alternative backend (upstream) keepalive proxy should be used. A custom Node.js alternative is in my answer. So far I haven't found any other alternatives that actually work.
Earlier versions of nginx (before 1.1.4), which already powered a huge number of the most visited websites worldwide (and some still do even nowadays, if the server headers are to be believed), didn't even support keepalive on the upstream side, because there is very little benefit to doing so in a datacentre setting, unless you have non-trivial latency between your various hosts; see https://serverfault.com/a/883019/110020 for some explanation.
Basically, unless you know you specifically need keepalive between your upstream and front-end, chances are it's only making your architecture less resilient and worse off.
(Note that your current solution is also wrong, because a change in the IP address will likewise go undetected: you're doing hostname resolution at config reload only, so even if nginx does start, it'll basically stop working once the IP addresses of the upstream servers change.)
Potential solutions, pick one:
The best solution would seem to be to just get rid of upstream keepalive, as it is likely unnecessary in a datacentre environment, and to use variables with proxy_pass for up-to-date DNS resolution on each request (nginx is still smart enough to cache such resolutions).
Another option would be to get a paid version of nginx through a commercial subscription, which has a resolve parameter for the server directive within the upstream context.
Finally, another thing to try might be to use a set variable and/or a map to specify the servers within upstream; this is neither confirmed nor denied to have been implemented; it may or may not work.
Your scenario is very similar to using AWS ELBs as upstreams, where it is critical to resolve the proper IP of the defined domain.
The first thing you need to do is ensure that the DNS servers you are using can resolve your domains; then you could create your config like this:
resolver 10.0.0.2 valid=300s;
resolver_timeout 10s;

location /foo {
    set $foo_backend_servers foo_backends.example.com;
    proxy_pass http://$foo_backend_servers;
}

location /bar {
    set $bar_backend_servers bar_backends.example.com;
    proxy_pass http://$bar_backend_servers;
}
Notice the resolver 10.0.0.2: it should be the IP of a DNS server that works and answers your queries. Depending on your setup this could be a local caching service like Unbound, in which case you would just use resolver 127.0.0.1.
Now, it is very important to use a variable to specify the domain name; from the docs:
When you use a variable to specify the domain name in the proxy_pass directive, NGINX re‑resolves the domain name when its TTL expires.
You can check your resolver by using tools like dig, for example:
$ dig +short stackoverflow.com
If keepalive in the upstreams is a must, and NGINX Plus is not an option, then you could give the OpenResty balancer a try; you will need to use/implement lua-resty-dns.
One possible solution is to involve a local DNS cache. It can be a local DNS server like Bind or Dnsmasq (with some crafty configuration; note that nginx can also use a specified DNS server in place of the system default), or just maintaining the cache in the hosts file.
Using the hosts file with some scripting seems a quite straightforward way. The hosts file should be split into static and dynamic parts (i.e. cat hosts.static hosts.dynamic > hosts), and the dynamic part should be generated (and updated) automatically by a script.
Perhaps it makes sense to check the hostnames for changed IPs from time to time, update the hosts file, and reload the nginx configuration on changes. If some hostname cannot be resolved, the old IP or some default IP (like 127.0.1.9) should be used.
If you don't need the hostnames in the nginx config file (i.e., IPs are enough), the upstream section with IPs (resolved hostnames) can be generated by a script and included into the nginx config, and there is no need to touch the hosts file in that case; a sketch of this follows below.
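A sketch of that last variant, with hypothetical file names and a documentation IP standing in for the resolved address:
# upstreams.generated.conf, rewritten by the update script
upstream rinu-test {
    server 203.0.113.10:443;  # resolved IP of test.rinu.test
    keepalive 20;
}

# nginx.conf, inside the http block
include /etc/nginx/upstreams.generated.conf;
The script would re-resolve the hostnames, regenerate this file, and run nginx -s reload only when something actually changed.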
I put the resolve parameter on the server directive, and you need to set the nginx resolver in nginx.conf, as below:
/etc/nginx/nginx.conf:
http {
    resolver 192.168.0.2 ipv6=off valid=40s; # the DNS server IP
}
Site.conf:
upstream rinu-test {
    server test.rinu.test:443 resolve;
    keepalive 20;
}
My problem was container-related. I'm using Docker Compose to create the nginx container plus the app container. When setting network_mode: host in the app container config in docker-compose.yml, nginx was unable to find the upstream app container. Removing this fixed the problem.
We can resolve it temporarily by pointing the system at a public DNS server:
sudo vim /etc/resolv.conf
# add the line:
nameserver 8.8.8.8
Then run sudo nginx -t and restart nginx; it will work for the moment.
An alternative is to write a new service that does only what I want. The following replaces nginx for proxying HTTPS connections, using Node.js:
const http = require('http');
const https = require('https');

// reuse TLS connections to the upstream hosts across requests
const httpsKeepAliveAgent = new https.Agent({ keepAlive: true });

http.createServer(onRequest).listen(3000);

function onRequest(client_req, client_res) {
    // stream the client request body into an outgoing HTTPS request
    client_req.pipe(
        https.request({
            host: client_req.headers.host,
            port: 443,
            path: client_req.url,
            method: client_req.method,
            headers: client_req.headers,
            agent: httpsKeepAliveAgent
        }, (res) => {
            // forward the upstream status, headers and body back to the client
            client_res.writeHead(res.statusCode, res.headers);
            res.pipe(client_res);
        }).on('error', (e) => {
            client_res.end();
        })
    );
}
Example usage:
curl http://localhost:3000/request_uri -H "Host: test.rinu.test"
which is equivalent to:
curl https://test.rinu.test/request_uri

Nginx: Setting up SSL passthrough

I'm trying to configure SSL passthrough for multiple webapps using the same nginx server (nginx version: nginx/1.13.6), but when restarting the nginx server, I get an error complaining that:
nginx: [emerg] "stream" directive is duplicate
The configuration I have is the following:
Two files for the SSL passthrough that look like this:
server1.conf:
stream {
    upstream workers {
        server 192.168.1.10:443;
        server 192.168.1.11:443;
        server 192.168.1.12:443;
    }
    server {
        listen server1.com:8443;
        proxy_pass workers;
    }
}
and server2.conf:
stream {
    upstream workers {
        server 192.168.1.20:443;
        server 192.168.1.21:443;
        server 192.168.1.22:443;
    }
    server {
        listen server2.com:8443;
        proxy_pass workers;
    }
}
If I remove one of the two files, then nginx starts correctly.
How can this be achieved?
Thanks,
Cristi
Streams work at layer 5 and cannot read encrypted traffic (which is layer 6 on the OSI model), and thus cannot tell apart requests hitting server1.com and server2.com unless they point to different IPs.
This can be solved by one of the following:
Decrypt the traffic on nginx, then proxy-pass it to the backend processes/workers using HTTP.
Bind server1.com to a port different from server2.com's.
Get an additional IP address and bind server2.com to that.
Get an additional load balancer and move server2.com there.
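Whichever option you pick, note that the duplicate error itself comes from having two top-level stream blocks: nginx allows only one, so the two files must be merged (and the duplicate workers upstream renamed in the process). A sketch of the different-ports option, with 9443 as a hypothetical second port:
stream {
    upstream server1_workers {
        server 192.168.1.10:443;
        server 192.168.1.11:443;
        server 192.168.1.12:443;
    }
    upstream server2_workers {
        server 192.168.1.20:443;
        server 192.168.1.21:443;
        server 192.168.1.22:443;
    }
    server {
        listen 8443;
        proxy_pass server1_workers;
    }
    server {
        listen 9443;  # hypothetical second port for server2.com
        proxy_pass server2_workers;
    }
}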
