Nginx externalname Azure CDN times out - nginx

My devs want the following when going to domain.com:
- Route nearly all paths to the backend server (I set up an ingress for this)
- Route certain paths (e.g. domain.com/site.webmanifest) to the Azure CDN directly
Currently I have three files: the working ingress (1), a Service of type ExternalName that points to the CDN (2), and a second ingress for the specific paths (3).
Sadly I keep getting 504 errors, no matter what I do. I'm also afraid that the ingress rules will conflict, since two separate ingress files with the same host get merged (and I do not want the rewrite-target annotation applied to the second).
The nginx log shows the following:
[error] 9499#9499: *26224546 [lua] balancer.lua:332: balance(): error while setting current upstream peer IP-ADDRESS invalid port while connecting to upstream, client: K8S-IP-ADDRESS, server: domain.com, request: "GET /site.webmanifests HTTP/2.0", host: "domain.com"
2021/09/23 07:51:39 [error] 9499#9499: *26224546 upstream timed out (110: Operation timed out) while connecting to upstream, client: K8S-IP-ADDRESS, server: domain.com, request: "GET /site.webmanifests HTTP/2.0", upstream: "https://0.0.0.1:80/site.webmanifests", host: "domain.com"
It complains about an invalid port. I have already tried setting up the second ingress with the following annotations:
nginx.ingress.kubernetes.io/upstream-vhost: cdn.address.com
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
and specifying the port by name:
port:
  name: https
But to no avail. I have found this post:
Kubernetes Service Object ExternalName pointing to Azure CDN
which states that what I want is impossible, but I do not understand the proposed alternative/solution.
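For reference, this is roughly how the ExternalName Service (2) and the second ingress (3) were wired together (a reconstructed sketch; the resource names are placeholders, and cdn.address.com is taken from the annotations above):

apiVersion: v1
kind: Service
metadata:
  name: cdn-proxy
spec:
  type: ExternalName
  externalName: cdn.address.com
  ports:
    - name: https
      port: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cdn-ingress
  annotations:
    nginx.ingress.kubernetes.io/upstream-vhost: cdn.address.com
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: domain.com
      http:
        paths:
          - path: /site.webmanifest
            pathType: Exact
            backend:
              service:
                name: cdn-proxy
                port:
                  name: https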
Any advice would be appreciated.
Kind regards

Since I only had a limited number of URLs to redirect, I modified the ingress (1) YAML and deleted the Service and the other ingress.
I added the following annotation:
nginx.ingress.kubernetes.io/server-snippet: |
  location = /robots.txt {
    proxy_pass https://cdn.domain.com/robots.txt;
  }
This seems to work like a charm; I'm open to alternatives, though.
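If more paths have to go to the CDN, the same snippet can simply hold one location per path (a sketch building on the annotation above; the /site.webmanifest path is taken from the question, and proxy_ssl_server_name makes nginx send the upstream hostname as SNI, which some CDN endpoints require):

nginx.ingress.kubernetes.io/server-snippet: |
  location = /robots.txt {
    proxy_ssl_server_name on;
    proxy_pass https://cdn.domain.com/robots.txt;
  }
  location = /site.webmanifest {
    proxy_ssl_server_name on;
    proxy_pass https://cdn.domain.com/site.webmanifest;
  }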

Related

Nginx in Cloud Run with internal traffic works but gives connect errors

I'm running nginx in a Cloud Run instance as a reverse proxy to a backend Cloud Run app (it will do more in the future, but let's keep it simple for this example).
The nginx Cloud Run service requires authentication (via IAM), but the backend app doesn't. nginx is connected to the same VPC and has the setting vpc_access_egress = all-traffic, and the backend app is set to "Allow internal traffic only".
events {}
http {
    resolver 169.254.169.254;
    server {
        listen 8080;
        server_name mirror_proxy;
        location / {
            proxy_pass https://my-backend.a.run.app:443;
            proxy_cache off;
            proxy_redirect off;
        }
    }
}
The setup works: I send authenticated requests to nginx and get responses from the backend. However, I also get a lot of error messages from nginx for each request.
2022/12/22 13:57:51 POST 200 1.76 KiB 1.151s curl 7.68.0 https://nginx.a.run.app/main
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:34::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:34::35]:443/main", host: "nginx.a.run.app"
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:36::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:36::35]:443/main", host: "nginx.a.run.app"
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:32::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:32::35]:443/main", host: "nginx.a.run.app"
Why are there errors when the request succeeds?
Is it that the VPC router doesn't know the exact IP address of the Cloud Run service yet, so nginx has to try the addresses one by one? Any ideas?
GCP only uses IPv4 inside the VPC network.
Since I forced nginx to use the VPC network (vpc_access_egress = all-traffic), nginx fails whenever it tries an IPv6 address and only then falls back to IPv4.
With the following setting you can force nginx to resolve IPv4 addresses only:
http {
    resolver 169.254.169.254 ipv6=off;
    ...
}

SSL_do_handshake() failed with nginx-proxy behind cloudflare

I have been struggling with this problem for 2-3 days now. My problem is: I get "SSL_do_handshake() failed" when doing a proxy_pass from one reverse proxy to another.
My setup looks like this:
GCP VM 1 containers:
- nginx reverse proxy 1
- acme companion for ssl
- frontend website (local nginx)
GCP VM 2 containers:
- nginx reverse proxy 2
- acme companion for ssl
- backend nodejs
DNS is handled by Cloudflare:
- frontend.website.com : "gcp VM 1" IP address
- backend.nodejs.com : "gcp VM 2" IP address
To avoid CORS errors, "frontend.website.com" makes requests to "frontend.website.com/api".
"nginx reverse proxy 1" has this configuration to proxy /api to the backend:
location /api {
    proxy_pass https://backend.nodejs.com/api;
}
The errors I get in the "nginx reverse proxy 1" logs:
nginx.1 | 2021/10/22 11:10:53 [error] 283#283: *11287 SSL_do_handshake() failed (SSL: error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:SSL alert number 40) while SSL handshaking to upstream, client: 2a01:e0a:4d0:4960:dc2e:8d3a:ba04:10a2, server: frontend.website.com, request: "POST /api HTTP/2.0", upstream: "https://172.67.155.25:443/api", host: "frontend.website.com", referrer: "https://frontend.website.com/"
nginx.1 | 2021/10/22 11:10:53 [warn] 283#283: *11287 upstream server temporarily disabled while SSL handshaking to upstream, client: 2a01:e0a:4d0:4960:dc2e:8d3a:ba04:10a2, server: frontend.website.com, request: "POST /api HTTP/2.0", upstream: "https://172.67.155.25:443/api", host: "frontend.website.com", referrer: "https://frontend.website.com/"
nginx.1 | 2021/10/22 11:10:53 [error] 283#283: *11287 no live upstreams while connecting to upstream, client: 2a01:e0a:4d0:4960:dc2e:8d3a:ba04:10a2, server: frontend.website.com, request: "POST /api HTTP/2.0", upstream: "https://backend.nodejs.com/api", host: "frontend.website.com", referrer: "https://frontend.website.com/"
Note: the IP in the error log, 172.67.155.25:443, is not the GCP VM 1 or VM 2 IP; I assume it's a Cloudflare IP?
Things I have already tried:
- Checked the SSL certs; they are okay on both sides.
- proxy_pass to http instead of https; it raises other problems.
- proxy_ssl_server_name on; (taken from here). Cloudflare returned a 403 Forbidden with:
DNS points to prohibited IP
What happened?
You've requested a page on a website (frontend.website.com) that is on the Cloudflare network. Unfortunately, it is resolving to an IP address that is creating a conflict within Cloudflare's system.
What can I do?
If you are the owner of this website:
you should login to Cloudflare and change the DNS A records for frontend.website.com to resolve to a different IP address.
The thing is, the DNS A record for frontend.website.com is good (other apps are using it without problems).
I feel like the SSL handshake should be made to "backend.nodejs.com", but according to the error log it is attempted against the Cloudflare IP address instead (here 172.67.155.25:443, but different each time).
Am I missing something here? What could the problem be?
If you need any additional info, do not hesitate to ask.
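For reference, a minimal sketch of the SNI-related directives discussed above (this is the direction already attempted with proxy_ssl_server_name, with the upstream name pinned explicitly via proxy_ssl_name; it is not a confirmed fix for this Cloudflare setup, but an SSL alert 40 from the upstream often just means it did not receive the SNI it expects):

location /api {
    proxy_ssl_server_name on;           # send SNI during the upstream TLS handshake
    proxy_ssl_name backend.nodejs.com;  # hostname to present as SNI
    proxy_pass https://backend.nodejs.com/api;
}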

Request on k8s NGINX ingress controller fails with "upstream sent too big header while reading response header from upstream"

When I try to send a request through my NGINX ingress with headers larger than 4k, it returns a 502 error:
[error] 39#39: *356 upstream sent too big header while reading response header from upstream,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
subrequest: "/_external-auth-Lw",
upstream: "<uri>",
host: "<host>"
[error] 39#39: *356 auth request unexpected status: 502 while sending to client,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
host: "<host>"
I've followed instructions that supposedly resolve this issue by configuring the proxy-buffer-size on the ingress (nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"), but it doesn't seem to work. The only thing I can think of is that it has something to do with the proxy-buffer-size of the subrequest, which doesn't seem to get set.
The proxy_buffering directive wasn't being set on the /_external-auth-Lw location in the NGINX config. The issue has been resolved as of v0.14.0.
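For context, the ingress annotation above maps onto ordinary nginx proxy buffer directives; tuning the same thing in a plain nginx proxy would look roughly like this (a sketch using the 16k value from the question; the upstream name is a placeholder):

location / {
    proxy_buffer_size 16k;        # buffer used for the upstream response headers
    proxy_buffers 4 16k;          # buffers for the response body
    proxy_busy_buffers_size 32k;  # must fit within the buffers above
    proxy_pass http://backend-placeholder;
}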

nginx forward proxy config is causing "upstream server temporarily disabled while connecting to upstream" error

I want to set up nginx as a forward proxy - much like Squid might work.
This is my server block:
server {
    listen 3128;
    server_name localhost;
    location / {
        resolver 8.8.8.8;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
This is the curl command I use to test, and it works the first time, maybe even the second time.
curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg
The corresponding nginx access log is
172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-"
However, on repeated requests I start getting these errors in my nginx logs (after, say, the 2nd or 3rd attempt):
2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/omgimg.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
What might be causing these issues after just a handful of requests? (curl still fetches the URL fine)
The DNS resolver was returning both IPv4 and IPv6 addresses, and the IPv6 addresses were causing the issue with the upstream servers.
Switching IPv6 resolution off made those errors disappear:
resolver 8.8.8.8 ipv6=off;
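Applied to the server block from the question, the full forward-proxy config then looks like this (the same config as above, only with ipv6=off added):

server {
    listen 3128;
    server_name localhost;
    location / {
        resolver 8.8.8.8 ipv6=off;  # resolve upstream hosts to IPv4 only
        proxy_pass http://$http_host$uri$is_args$args;
    }
}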

unable to configure and start nginx

My nginx server seemed to run fine, but when I do netstat -tupln I can't see it bound to port 80.
When I fire an HTTP request, it gives me:
502 Bad Gateway
---
nginx/1.4.6 (Ubuntu)
The following is the nginx config I have written to both
/etc/nginx/sites-available/mysite.conf
and /etc/nginx/sites-enabled/mysite.conf:
server {
    listen 80;
    server_name _;
    location ~ / {
        proxy_pass http://127.0.0.1:8001;
    }
}
I am able to run the following commands without any error:
nginx start/stop/restart
But making an HTTP request to the machine gives me the following errors in /var/log/nginx/error.log:
08:39:26 [warn] 17294#0: conflicting server name "_" on 0.0.0.0:80, ignored
08:41:17 [error] 20186#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.123.123.123, server: _, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8001/", host: "123.123.123.123"
Even changing port 8001 to 8003 in the mysite.conf files in /etc/nginx/sites-* and restarting nginx doesn't make any difference in the above error message, which makes me believe it isn't picking up changes in the conf files.
Can anybody help me understand what it is that I am missing?
This is an old issue, but I would like to put my finding here in case someone encounters the same problem in the future. The way I resolved it was by changing the permissions on /etc/nginx/:
sudo chmod -R 777 /etc/nginx/
Probably more permissive than necessary, but that resolved my problem. Please let me know if anyone finds a more solid solution.
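A less drastic way to check what nginx is actually loading (a hedged suggestion, not part of the original answer):

# validate the configuration nginx will load
sudo nginx -t
# sites-enabled entries should normally be symlinks into sites-available;
# duplicate copies are one way to end up with "conflicting server name" warnings
ls -l /etc/nginx/sites-enabled/
grep -R "server_name\|listen\|proxy_pass" /etc/nginx/sites-enabled/
# confirm something is actually listening on the upstream port
sudo netstat -tupln | grep ':8001'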
