I have set up a load balancer with nginx, and I use a health check in my configuration file. When I add the 'match' parameter to the health check and restart the server, I get 'no live upstreams' in the error log, even though all of the backend servers are available to take requests at that time.
Why does this error occur?
My match block in nginx.conf:
match check_server {
    status 200;
    header Content-Type = text/html;
}
load_balancer.conf
health_check interval=2 passes=3 fails=2 match=check_server;
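For reference, this directive lives in the location that proxies to the upstream group (the upstream name comes from the error log below; the server addresses here are just placeholders, not my real backends):

upstream backends {
    zone backends 64k;    # shared memory zone, needed for active health checks
    server 10.0.0.1:443;  # placeholder
    server 10.0.0.2:443;  # placeholder
}

server {
    ...
    location / {
        proxy_pass https://backends;
        health_check interval=2 passes=3 fails=2 match=check_server;
    }
}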
The error in error.log:
2019/01/17 13:16:26 [error] 9853#9853: *13 no live upstreams while connecting to upstream, client: xxx.xxx.x.x, server: xxx.com, request: "GET / HTTP/1.1", upstream: "https://backends/", host: "xxx.com"
I'm running Nginx in a Cloud Run instance as a reverse proxy to a backend Cloud Run app (it will do more stuff in the future, but let's keep it simple for this example).
The Nginx Cloud Run service requires authentication (via IAM), but the backend app doesn't. Nginx is connected to the same VPC with vpc_access_egress = all-traffic, and the backend app is set to 'Allow internal traffic only'.
events {}

http {
    resolver 169.254.169.254;

    server {
        listen 8080;
        server_name mirror_proxy;

        location / {
            proxy_pass https://my-backend.a.run.app:443;
            proxy_cache off;
            proxy_redirect off;
        }
    }
}
The setup works: I send authenticated requests to Nginx and get responses from the backend. However, I also get several error messages from Nginx per request.
2022/12/22 13:57:51 POST 200 1.76 KiB 1.151s curl 7.68.0 https://nginx.a.run.app/main
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:34::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:34::35]:443/main", host: "nginx.a.run.app"
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:36::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:36::35]:443/main", host: "nginx.a.run.app"
2022/12/22 13:57:50 [error] 4#4: *21 connect() to [1234:5678:4802:32::35]:443 failed
(113: No route to host) while connecting to upstream, client: 169.254.1.1,
server: mirror_proxy, request: "POST /main HTTP/1.1",
upstream: "https://[1234:5678:4802:32::35]:443/main", host: "nginx.a.run.app"
Why are there errors when the request succeeds?
Does the VPC router not know the exact IP address of the Cloud Run service yet, so Nginx has to try them out? Any ideas?
GCP only uses IPv4 inside the VPC network.
Since I forced Nginx to use the VPC network (vpc_access_egress = all-traffic), Nginx fails when it tries to connect to an IPv6 address and then falls back to IPv4.
With the following setting you can force Nginx to resolve only IPv4 addresses:
http {
    resolver 169.254.169.254 ipv6=off;
    ...
}
I want to set a rate limit for a specific API request on a per-client-IP basis.
I have the following nginx configuration to rate limit a specific API, /testAPI, so that each client does not exceed 10r/m.
# these config lines are included in the http block
limit_req_zone $binary_remote_addr zone=rl:10m rate=10r/m;
limit_conn_dry_run off;
limit_req_dry_run off;
limit_conn_log_level error;
limit_req_log_level error;
limit_req_status 444;
limit_conn_status 503;
In the location block I added the rl zone with a burst allowance of 7 extra requests.
location ~ ^/testAPI {
    limit_req zone=rl burst=7 nodelay;
    ...
}
Error logs:
2022/11/29 16:04:52 [error] 267#267: *1455 limiting requests, excess: 7.069 by zone "rl", client: 10.78.95.189, server: hulk-reverseproxy.company.com, request: "GET /testAPI HTTP/1.1", host: "hulk-reverseproxy125.company.com:8445"
2022/11/29 16:04:53 [error] 267#267: *1455 limiting requests, excess: 7.048 by zone "rl", client: 10.78.95.189, server: hulk-reverseproxy.company.com, request: "GET /testAPI HTTP/1.1", host: "hulk-reverseproxy125.company.com:8445"
2022/11/29 16:04:53 [error] 267#267: *1455 limiting requests, excess: 7.028 by zone "rl", client: 10.78.95.189, server: hulk-reverseproxy.company.com, request: "GET /testAPI HTTP/1.1", host: "hulk-reverseproxy125.company.com:8445"
2022/11/29 16:04:56 [error] 268#268: *1457 limiting requests, excess: 7.400 by zone "rl", client: 10.65.78.46, server: hulk-reverseproxy.company.com, request: "GET /testAPI HTTP/1.1", host: "hulk-reverseproxy125.company.com:8445"
2022/11/29 16:04:58 [error] 268#268: *1457 limiting requests, excess: 7.220 by zone "rl", client: 10.65.78.46, server: hulk-reverseproxy.company.com, request: "GET /testAPI HTTP/1.1", host: "hulk-reverseproxy125.company.com:8445"
2022/11/29 16:04:59 [error] 268#268: *1457 limiting requests, excess: 7.053 by zone "rl", client: 10.65.78.46, server: hulk-reverseproxy.company.com, request: "GET /testAPI HTTP/1.1", host: "hulk-reverseproxy125.company:8445"
I pushed multiple requests from IP 10.78.95.189 and nginx started rejecting them.
As expected, the requests beyond the burst were rejected for 10.78.95.189.
But I also sent a single request from 10.65.78.46 at the same time the above limits were reached, and that request was rejected as well. My expectation was that requests from 10.65.78.46 should get their own first 10 requests in the first minute without any issue.
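For reference, this is how I understand those directives are supposed to behave (the comments are my reading of the docs, not observed behaviour):

# each distinct client IP ($binary_remote_addr) gets its own counter in the zone
limit_req_zone $binary_remote_addr zone=rl:10m rate=10r/m;

location ~ ^/testAPI {
    # rate=10r/m is enforced as one request every 6 seconds per IP;
    # burst=7 lets up to 7 requests above that rate through,
    # nodelay serves them immediately instead of queueing them,
    # and anything beyond rate+burst is rejected with limit_req_status.
    limit_req zone=rl burst=7 nodelay;
}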
Note: here I'm hitting nginx directly, without any load balancer or CDN in between.
Also, the rejected requests returned a 403 error code, even though I set the reject code to 444.
Did I miss something here?
I was making changes to the configs and only reloading the nginx config instead of restarting nginx.
These config changes require an nginx restart. That fixed my issue.
When I try to send a request through my NGINX ingress with headers larger than 4k, it returns a 502 error:
[error] 39#39: *356 upstream sent too big header while reading response header from upstream,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
subrequest: "/_external-auth-Lw",
upstream: "<uri>",
host: "<host>"
[error] 39#39: *356 auth request unexpected status: 502 while sending to client,
client: <ip>,
server: <server>,
request: "GET /path/ HTTP/1.1",
host: "<host>"
I've followed instructions that supposedly resolve this issue by configuring the proxy-buffer-size in the ingress controller (nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"), but it doesn't seem to work. The only thing I can think of is that it has something to do with the proxy-buffer-size of the subrequest, which doesn't seem to get set.
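For reference, my understanding is that the annotation should end up as something like the following in the generated location block (the sizes are just the value I set, so this is an assumption about the controller's output, not something I confirmed in the generated config):

# what I expect nginx.ingress.kubernetes.io/proxy-buffer-size: "16k" to produce
proxy_buffer_size 16k;
# the individual buffers generally need to be at least as large as proxy_buffer_size
proxy_buffers 4 16k;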
The proxy buffering settings weren't being applied to the /_external-auth-Lw location in the NGINX config. The issue has been resolved as of v0.14.0.
I want to set up nginx as a forward proxy - much like Squid might work.
This is my server block:
server {
    listen 3128;
    server_name localhost;

    location / {
        resolver 8.8.8.8;
        proxy_pass http://$http_host$uri$is_args$args;
    }
}
This is the curl command I use to test, and it works the first time, maybe even the second time.
curl -s -D - -o /dev/null -x "http://localhost:3128" http://storage.googleapis.com/my.appspot.com/test.jpeg
The corresponding nginx access log entry is:
172.23.0.1 - - [26/Feb/2021:12:38:59 +0000] "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1" 200 2296040 "-" "curl/7.64.1" "-"
However, on repeated requests (after the 2nd or 3rd attempt), I start getting these errors in my nginx logs:
2021/02/26 12:39:49 [crit] 31#31: *4 connect() to [2c0f:fb50:4002:804::2010]:80 failed (99: Address not available) while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/omgimg.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
2021/02/26 12:39:49 [warn] 31#31: *4 upstream server temporarily disabled while connecting to upstream, client: 172.23.0.1, server: localhost, request: "GET http://storage.googleapis.com/my.appspot.com/test.jpeg HTTP/1.1", upstream: "http://[2c0f:fb50:4002:804::2010]:80/my.appspot.com/test.jpeg", host: "storage.googleapis.com"
What might be causing these issues after just a handful of requests? (curl still fetches the URL fine)
The DNS resolver was returning both IPv4 and IPv6 addresses, and the IPv6 part was causing an issue with the upstream servers.
Switching IPv6 off made those errors disappear.
resolver 8.8.8.8 ipv6=off;
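Applied to the server block from the question, the location ends up looking like this:

location / {
    # return only IPv4 addresses so upstream connections never try IPv6
    resolver 8.8.8.8 ipv6=off;
    proxy_pass http://$http_host$uri$is_args$args;
}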
Hello, I have a PHP streaming dashboard that uses php-fpm + NGINX. However, when using the ondemand option, NGINX returns the error described below.
2019/02/19 14:51:19 [error] 1214#0: *438 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 186.208.107.243, server: , request: "GET /live/thythy20/60628227/166.ts HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "goldpremium.ddns.net:8080"
I'd be very grateful to anyone who can help!