Varnish configuration backend host domain name or localhost? - varnish-vcl

I am new to Varnish, sorry for a noob question.
In the documentation, the backend host is given as:
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
Every example I find on Google uses localhost or 127.0.0.1 as the host.
I am confused: should it be localhost, my hostname, or my domain's IP address?
I am not running it on a local machine; I installed it on my hosting server (CentOS 7).
The strange thing is that it works fine with the backend host set to 127.0.0.1, but I don't understand why, because I thought it should be my domain name. Can anyone explain?

Proper examples out of production envs often help a lot. Here is one of ours:
backend lb_prod_1 {
    .host = "10.10.20.248";
    .port = "45021";
    .probe = {
        .request =
            "GET /health HTTP/1.0"
            "Host: www.whatevercorp.net";
        .interval = 5s;
        .timeout = 2s;
        .window = 5;
        .threshold = 3;
    }
}
So this backend uses a service on system 10.10.20.248, port 45021, and has some non-default health-check parameters configured. The .host is simply the address Varnish uses to reach your backend web server, not your public domain: since your web server listens on the same machine on port 8080, 127.0.0.1 works fine.
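If you want to check that such a probe is actually passing, the Varnish CLI can list each backend together with its current health state (exact output differs between Varnish versions):

    varnishadm backend.list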

Related

NGINX environment-based routing

I have a single application running in multiple K8s clusters; let's say there is a frontend service and two backend ones.
I use NGINX to proxy the requests from the frontend to the backend services. Regular open-source NGINX, not NGINX Plus.
Here is the nginx.conf:
server {
    ....
    set $back1 "<k8s hostname for the backend1 service>";
    set $back2 "<k8s hostname for the backend2 service>";

    location /back1 {
        rewrite ^/back1/(.*)$ /$1 break;
        proxy_pass http://$back1;
    }

    <and same for the backend 2 service>
}
So basically, in my frontend application I set the backend service addresses to localhost/back1 and localhost/back2; the requests hit NGINX, which strips off the back1 and back2 prefixes and calls whatever endpoint I specify in the actual backend services in K8s.
As I have multiple K8s clusters, the backend services' hostnames differ, and I need to account for that in my NGINX config.
The question is:
Is there a way for NGINX to differentiate between my K8s clusters?
Perhaps I can pass an environment variable to the container running my frontend service, and make an if statement in nginx.conf. Something like:
server {
    if (${env} = "cluster1") {
        set $back1 = "<cluster1 hostname>"
    }
    if (${env} = "cluster2") {
        set $back1 = "<cluster2 hostname>"
    }
}
Or maybe I can execute a shell command in the nginx config to get the hostname and write similar if blocks.
I would appreciate any help on this matter!
I went a different route: templates, environment variables, and the envsubst utility, which ships with the latest nginx Docker images.
In the template:

set $upstream_back1 "${BACK1}";
set $upstream_back2 "${BACK2}";

In the Dockerfile (restricting envsubst to the placeholders so nginx's own $variables are left untouched):

RUN envsubst '${BACK1} ${BACK2}' < yourtemplate > /etc/nginx/nginx.conf
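For context, a minimal sketch of what such a template could look like, reusing the server block from the question; the file name and the BACK1/BACK2 variable names are assumptions, and because proxy_pass uses a variable, nginx may also need a resolver directive pointing at the cluster DNS:

# default.conf.template (hypothetical name)
server {
    listen 80;

    # resolver kube-dns.kube-system.svc.cluster.local valid=10s;  # if needed for variable proxy_pass

    set $back1 "${BACK1}";
    set $back2 "${BACK2}";

    location /back1 {
        rewrite ^/back1/(.*)$ /$1 break;
        proxy_pass http://$back1;
    }

    location /back2 {
        rewrite ^/back2/(.*)$ /$1 break;
        proxy_pass http://$back2;
    }
}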

websockets in openresty proxy

I created a proxy with MFA using OpenResty, and it mostly works fine.
But I have a problem with websockets: Firefox says that it "cannot connect with server wss://...". Looking in the browser's network panel, I can see the switching-protocols request, which seems to be OK. My nginx.conf looks as below:
worker_processes auto;

env TARGET_APPLICATION_HOST;
env TARGET_APPLICATION_PORT;
env TARGET_USE_SSL;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;

        location / {
            resolver local=on ipv6=off valid=100s;

            content_by_lua_block {
                local http = require "resty.http"
                local httpc = http.new()
                httpc:set_timeout(500)
                local ok, err = httpc:connect(
                    os.getenv("TARGET_APPLICATION_HOST"),
                    os.getenv("TARGET_APPLICATION_PORT"))
                if not ok then
                    ngx.log(ngx.ERR, err)
                    return
                end
                if os.getenv("TARGET_USE_SSL") == "TRUE" then
                    -- Trigger the SSL handshake (Lua booleans are lowercase)
                    local session, serr = httpc:ssl_handshake(nil,
                        os.getenv("TARGET_APPLICATION_HOST"), false)
                end
                httpc:set_timeout(2000)
                httpc:proxy_response(httpc:proxy_request())
                httpc:set_keepalive()
            }
        }
    }
}
It is a simpler version of the production proxy, but it returns the same error with websockets. I tried a proxy with plain nginx and websockets work fine there, but I need the capabilities of OpenResty (proxying to different hosts based on a cookie value).
Is there some simple mistake in the file above, or does OpenResty not support websockets?
lua-resty-http is an HTTP(S) client library; it does not (and probably will not) support the WebSocket protocol.
There is another library for the WebSocket protocol: lua-resty-websocket. It implements both the client and the server side, so it should be possible to write the proxy using it.
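A rough, untested sketch of what such a relay could look like inside the content_by_lua_block, assuming the same TARGET_APPLICATION_HOST / TARGET_APPLICATION_PORT environment variables as in the question (for a wss upstream the scheme would be "wss://"; idle timeouts and continuation frames are not handled here):

-- connect to the upstream WebSocket endpoint first, so we can still answer 502 on failure
local server = require "resty.websocket.server"
local client = require "resty.websocket.client"

local uri = "ws://" .. os.getenv("TARGET_APPLICATION_HOST") .. ":"
            .. os.getenv("TARGET_APPLICATION_PORT") .. ngx.var.request_uri
local wc = client:new{ timeout = 5000, max_payload_len = 65535 }
local ok, cerr = wc:connect(uri)
if not ok then
    ngx.log(ngx.ERR, "failed to connect upstream websocket: ", cerr)
    return ngx.exit(502)
end

-- complete the WebSocket handshake with the browser
local wb, err = server:new{ timeout = 5000, max_payload_len = 65535 }
if not wb then
    ngx.log(ngx.ERR, "failed to complete downstream handshake: ", err)
    return ngx.exit(444)
end

-- relay one frame from one side to the other; returns nil when the relay should stop
local function relay(from, to)
    local data, typ = from:recv_frame()
    if not data then return nil end
    if typ == "close" then to:send_close() return nil end
    if typ == "ping" then to:send_ping(data)
    elseif typ == "pong" then to:send_pong(data)
    elseif typ == "binary" then to:send_binary(data)
    else to:send_text(data) end
    return true
end

-- upstream -> browser in a light thread, browser -> upstream in the current one
local co = ngx.thread.spawn(function()
    while relay(wc, wb) do end
end)
while relay(wb, wc) do end
ngx.thread.wait(co)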
I need the capabilities of OpenResty (proxying to different hosts based on a cookie value)
ngx.balancer does exactly what you need, check the example and this answer.

Gogs on Nginx in subdomain is not working

I have some problems with Gogs and nginx on my local network.
(Every time I write "domainname" I mean the hostname of the server.)
I have a little server running OpenMediaVault (as a NAS), and I also want to run some other things on it, like Gogs. Gogs is running, but only at the URL http://domainname:3000.
I want Gogs to be available at git.domainname or gogs.domainname.
I have tried so many things, but nothing is working.
I added a config in sites-available (with a symlink in sites-enabled):
server {
    listen 80;
    server_name git.domainname;

    location / {
        proxy_pass http://localhost:3000;
    }
}
In my Gogs configuration, I have the following server section:
[server]
PROTOCOL = http
DOMAIN = domainname
HTTP_PORT = 3000
ROOT_URL = http://domainname:%(HTTP_PORT)s/
DISABLE_SSH = false
SSH_PORT = 22
START_SSH_SERVER = false
OFFLINE_MODE = true
I don't have much experience with nginx, so I hope someone can help me get this working. I'd also like to learn a generic way to run other services on a subdomain.
If any information is missing, please let me know.

Nginx will not start with host not found in upstream

I use nginx to proxy and hold persistent connections to far away servers for me.
I have configured about 15 blocks similar to this example:
upstream rinu-test {
    server test.rinu.test:443;
    keepalive 20;
}

server {
    listen 80;
    server_name test.rinu.test;

    location / {
        proxy_pass https://rinu-test;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
    }
}
The problem is that if the hostname cannot be resolved in one or more of the upstream blocks, nginx will not (re)start. I can't use static IPs either; some of these hosts explicitly said not to do that because their IPs will change. Every other solution I've seen for this error message says to get rid of upstream and do everything in the location block. That is not possible here because keepalive is only available inside upstream.
I can temporarily afford to lose one server but not all 15.
Edit:
It turns out nginx is not suitable for this use case. An alternative backend (upstream) keepalive proxy should be used. A custom Node.js alternative is in my answer below. So far I haven't found any other alternatives that actually work.
Earlier versions of nginx (before 1.1.4), which already powered a huge number of the most visited websites worldwide (and some still do even nowadays, if the server headers are to be believed), didn't even support keepalive on the upstream side, because there is very little benefit in doing so in a datacentre setting, unless you have non-trivial latency between your various hosts; see https://serverfault.com/a/883019/110020 for some explanation.
Basically, unless you know you specifically need keepalive between your upstream and front-end, chances are it's only making your architecture less resilient and worse off.
(Note that your current solution is also wrong, because a change in the IP address will likewise go undetected: you're doing hostname resolution at config reload only, so even if nginx does start, it'll basically stop working once the IP addresses of the upstream servers change.)
Potential solutions, pick one:
The best solution would seem to be to just get rid of upstream keepalive, as it is likely unnecessary in a datacentre environment, and to use variables with proxy_pass for up-to-date DNS resolution on each request (nginx is still smart enough to cache such resolutions).
Another option would be to get a paid version of nginx through a commercial subscription, which has a resolve parameter for the server directive within the upstream context.
Finally, another thing to try might be to use a variable and/or a map to specify the servers within upstream; this is neither confirmed nor denied to have been implemented, so it may or may not work.
Your scenario is very similar to using AWS ELBs as upstreams, where it is critical to resolve the proper IP of the defined domain.
The first thing you need to do is ensure that the DNS servers you are using can resolve your domains; then you can create your config like this:
resolver 10.0.0.2 valid=300s;
resolver_timeout 10s;

location /foo {
    set $foo_backend_servers foo_backends.example.com;
    proxy_pass http://$foo_backend_servers;
}

location /bar {
    set $bar_backend_servers bar_backends.example.com;
    proxy_pass http://$bar_backend_servers;
}
Notice the resolver 10.0.0.2; it should be the IP of a DNS server that can answer your queries. Depending on your setup, this could be a local caching service like unbound, in which case you would just use resolver 127.0.0.1.
Now, it is very important to use a variable to specify the domain name. From the docs:
When you use a variable to specify the domain name in the proxy_pass directive, NGINX re‑resolves the domain name when its TTL expires.
You can check your resolver by using tools like dig, for example:
$ dig +short stackoverflow.com
If keepalive in the upstreams is a must, and Nginx Plus is not an option, then you could give the OpenResty balancer a try; you will need to use/implement lua-resty-dns.
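A rough sketch of that approach, assuming the hostname and resolver IP from this question; the DNS lookup is done in the access phase because cosockets (and therefore lua-resty-dns) are not available inside balancer_by_lua_block:

upstream rinu-test {
    server 0.0.0.1;                      # placeholder, never used directly

    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- pin the peer resolved earlier in the access phase
        local ok, err = balancer.set_current_peer(ngx.ctx.backend_ip, 443)
        if not ok then
            ngx.log(ngx.ERR, "failed to set current peer: ", err)
            return ngx.exit(500)
        end
    }

    keepalive 20;
}

server {
    listen 80;
    server_name test.rinu.test;

    location / {
        access_by_lua_block {
            local resolver = require "resty.dns.resolver"
            local r, err = resolver:new{ nameservers = { "10.0.0.2" }, timeout = 2000 }
            if not r then
                ngx.log(ngx.ERR, "failed to create resolver: ", err)
                return ngx.exit(502)
            end
            -- NB: a real setup should cache answers, honour TTLs and handle CNAME-only replies
            local answers, qerr = r:query("test.rinu.test", { qtype = r.TYPE_A })
            if not answers or answers.errcode or not answers[1] then
                ngx.log(ngx.ERR, "DNS query failed: ", qerr or (answers and answers.errstr))
                return ngx.exit(502)
            end
            ngx.ctx.backend_ip = answers[1].address
        }

        proxy_pass https://rinu-test;
        proxy_http_version 1.1;
        # proxy_ssl_server_name on;  # may be needed so the upstream sees the right SNI
        proxy_set_header Connection "";
        proxy_set_header Host $http_host;
    }
}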
One possible solution is to involve a local DNS cache. It can be a local DNS server like Bind or Dnsmasq (with some crafty configuration; note that nginx can also use a specified DNS server in place of the system default), or just a cache maintained in the hosts file.
Using the hosts file with some scripting seems quite straightforward. The hosts file should be split into a static and a dynamic part (i.e. cat hosts.static hosts.dynamic > hosts), and the dynamic part should be generated (and updated) automatically by a script.
It might make sense to check the hostnames for changed IPs from time to time, and to update the hosts file and reload the nginx configuration on changes. In case some hostname cannot be resolved, the old IP or some default IP (like 127.0.1.9) should be used.
If you don't need the hostnames in the nginx config file (i.e., IPs are enough), the upstream section with IPs (resolved hostnames) can be generated by a script and included into the nginx config, and there is no need to touch the hosts file in that case.
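A minimal sketch of the hosts-file variant described above, assuming the hosts.static/hosts.dynamic split; the hostnames and the 127.0.1.9 fallback are placeholders, and the script would be run from cron or a systemd timer:

#!/bin/sh
# Regenerate the dynamic part of the hosts file and reload nginx when an IP changes.
new=/etc/hosts.dynamic.new
: > "$new"

for host in test.rinu.test other.upstream.example; do
    ip=$(dig +short "$host" A | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
    # fall back to a default address if the name cannot be resolved
    [ -n "$ip" ] || ip=127.0.1.9
    printf '%s %s\n' "$ip" "$host" >> "$new"
done

if ! cmp -s "$new" /etc/hosts.dynamic; then
    mv "$new" /etc/hosts.dynamic
    cat /etc/hosts.static /etc/hosts.dynamic > /etc/hosts
    nginx -s reload
else
    rm -f "$new"
fi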
I put the resolve parameter on the server line, and you need to set the nginx resolver in nginx.conf, as below:
/etc/nginx/nginx.conf:
http {
    resolver 192.168.0.2 ipv6=off valid=40s; # the IP of the DNS server
}
Site.conf:

upstream rinu-test {
    server test.rinu.test:443 resolve;
    keepalive 20;
}
My problem was container related. I'm using Docker Compose to create the nginx container plus the app container. With network_mode: host set for the app container in docker-compose.yml, nginx was unable to find the upstream app container. Removing it fixed the problem.
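For illustration, a hypothetical docker-compose.yml along those lines (service and image names are made up); the commented-out line is the one that had to be removed:

services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  app:
    image: example/app      # placeholder image
    # network_mode: host    # with this set, nginx could not resolve the "app" hostname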
We can resolve it temporarily by adding a public nameserver to /etc/resolv.conf:

sudo vim /etc/resolv.conf

and adding the line:

nameserver 8.8.8.8

Then run sudo nginx -t and restart nginx; it will work for the moment.
An alternative is to write a new service that does only what I want. The following replaces nginx for proxying https connections, using Node.js:
const http = require('http');
const https = require('https');

// keep-alive agent holds persistent connections to the upstream servers
const httpsKeepAliveAgent = new https.Agent({ keepAlive: true });

http.createServer(onRequest).listen(3000);

function onRequest(client_req, client_res) {
    client_req.pipe(
        https.request({
            host: client_req.headers.host,
            port: 443,
            path: client_req.url,
            method: client_req.method,
            headers: client_req.headers,
            agent: httpsKeepAliveAgent
        }, (res) => {
            // forward status and headers, then stream the body back
            client_res.writeHead(res.statusCode, res.headers);
            res.pipe(client_res);
        }).on('error', (e) => {
            client_res.end();
        })
    );
}
Example usage:
curl http://localhost:3000/request_uri -H "Host: test.rinu.test"
which is equivalent to:
curl https://test.rinu.test/request_uri

Varnish + Nginx proxy configuration on plesk

I followed the official tutorial for configuring Varnish via Docker on Plesk: https://www.plesk.com/blog/product-t...cker-container
I have an Ubuntu VPS with Plesk and many domains.
I followed all the steps:
I created a domain test.monserveur.com
I use the Docker image million12/varnish
In the Docker container settings, the mapping redirects port 80 to port 32780
In the Plesk hosting parameters, the options "SSL/TLS support" and "Permanent SEO-safe 301 redirect from HTTP to HTTPS" are deactivated
I also deactivated the security mod for this domain
In the proxy rules of the Docker container (/etc/varnish/default.vcl), I set .host to test.monserveur.com and .port to 7080
In sub vcl_deliver, I put:
if (obj.hits > 0) {
    set resp.http.X-Cache = "HIT";
} else {
    set resp.http.X-Cache = "MISS";
}
I still get a 503 page, with a MISS in the header, for test.monserveur.com.
I can't understand where the problem is. I also tried putting the server IP in .host, and pointing it at another domain on the server. I think it's a settings problem, but I don't know where.
Thanks in advance
A 503 response from Varnish means that your Docker container is not configured properly. You should check whether the container, and Varnish within the container, are running properly. Additionally, the configuration file must have valid syntax, and the correct port and IP address of the server have to be set in it.
Without knowing what you've entered, I can't give you better advice. If you follow the tutorial completely, it will work; I created over 10 working instances while writing it!
PS: Please use the official Plesk forum, with more information (also add your configuration file), if you still cannot solve your problem: https://talk.plesk.com/
Have success!
