nginx won't resolve hostname in K8S [duplicate]

So, I would like nginx to resolve hostnames for backends at request time. I expect an HTTP 502 Bad Gateway when the back-end service is down, and a normal service response when it's up.
I use the nginx:1.15-alpine image, and here is what I have in its config:
server {
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;
    server_name mysystem.com;
    listen 80;
    client_max_body_size 20M;

    location = /nginx_status {
        stub_status on;
        access_log off;
    }

    # Services configuration
    location ~ /my-service/ {
        set $service_endpoint http://my-service.namespace:8080;
        proxy_pass $service_endpoint$request_uri;
        include includes/defaults-inc.conf;
        include includes/proxy-inc.conf;
    }
}
So, when I make a request to nginx, I get a 502 Bad Gateway response, and nginx's log says the name is not found:
2018/06/28 19:49:18 [error] 7#7: *1 my-service.namespace could not be resolved (3: Host not found), client: 10.44.0.1, server: mysystem.com, request: "GET /my-service/version HTTP/1.1", host: "35.229.17.63:8080"
However, when I log into the container with a shell (kubectl exec ... -- sh) and test DNS resolution, it works perfectly.
# nslookup my-service.namespace kube-dns.kube-system.svc.cluster.local
Server: 10.47.240.10
Address 1: 10.47.240.10 kube-dns.kube-system.svc.cluster.local
Name: my-service.namespace
Address 1: 10.44.0.75 mysystem-namespace-mysystem-namespace-my-service-0.my-service.namespace.svc.cluster.local
Moreover, I can wget http://my-service.namespace:8080/ and get a response.
Why can't nginx resolve the hostname?
Update: how I managed to resolve it.
In nginx.conf, at the server level, I added a resolver setting:
resolver kube-dns.kube-system.svc.cluster.local valid=10s;
Then I used an FQDN in proxy_pass:
proxy_pass http://SERVICE-NAME.YOUR-NAMESPACE.svc.cluster.local:8080;

It fails because you need to use the FQDN for nginx to resolve the name.
Using just the hostname usually works because in Kubernetes resolv.conf is configured with search domains, so you don't normally need to provide a service's FQDN.
However, specifying the FQDN is necessary when you tell nginx to use a custom name server, because the custom resolver does not get the benefit of those search domains.
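For illustration, a pod's /etc/resolv.conf typically looks something like this (the exact search list depends on the cluster; <pod-namespace> is a placeholder, and the nameserver matches the kube-dns address from the nslookup output above):

nameserver 10.47.240.10
search <pod-namespace>.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

nslookup and wget walk this search list, which is why the short name my-service.namespace resolves in the shell but not through nginx's resolver directive.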
In nginx.conf, add this at the server level:
resolver kube-dns.kube-system.svc.cluster.local valid=10s;
Then use an FQDN in proxy_pass:
proxy_pass http://SERVICE-NAME.YOUR-NAMESPACE.svc.cluster.local:8080;
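Putting both pieces together, a minimal sketch of the resulting server block, assuming the names and port from the question (the variable is kept deliberately: a variable in proxy_pass is what makes nginx consult the resolver at request time instead of only at configuration load):

server {
    listen 80;
    server_name mysystem.com;

    # cluster DNS service; cache answers for 10 seconds
    resolver kube-dns.kube-system.svc.cluster.local valid=10s;

    location ~ /my-service/ {
        # FQDN is required: the custom resolver ignores resolv.conf search domains.
        # Using a variable forces per-request resolution through the resolver above.
        set $service_endpoint http://my-service.namespace.svc.cluster.local:8080;
        proxy_pass $service_endpoint$request_uri;
    }
}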

Related

403 response when trying to redirect using nginx to API Gateway

I have the following API: https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev/user/${user-id}, which proxies to a Lambda function.
When the user hits /user/1234, the function checks whether 1234 exists and returns the info for that user, or a redirection to /users.
What I want to create is a redirection with nginx. For SEO, I want a simple 302: return 302 the-url. If someone goes to mySite.com, it should redirect to https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev
No matter what I do, I always receive a 403 with the following:
x-amzn-errortype: MissingAuthenticationTokenException
x-amz-apigw-id: QrFd6GByoJHGf1g=
x-cache: Error from cloudfront
via: 1.1 dfg35721fhfsgdv36vs52fa785f5g.cloudfront.net (CloudFront)
I would appreciate any help.
If you are using a reverse proxy setup in nginx, add the line below to the config file, then restart or reload the nginx configuration.
proxy_set_header Host $proxy_host;
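As a sketch, assuming the reverse proxy from the question, the line would go inside the proxying location (the API Gateway hostname and stage are the questioner's):

location / {
    # API Gateway routes on the Host header, so send the upstream's
    # host rather than the client-facing one
    proxy_set_header Host $proxy_host;
    proxy_pass https://kdhdh64g.execute-api.us-east-1.amazonaws.com/dev/;
}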
I ran into the same issue trying to run an Nginx proxy towards an API Gateway which triggers a Lambda function on AWS. When I read the Nginx error logs, I noticed it had to do with the SSL version Nginx was using to connect to API Gateway; the error was the following:
*1 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:ssl3_get_record:wrong version number) while SSL handshaking to upstream
I managed to fix it by adding this line:
proxy_ssl_protocols TLSv1.3;
I attach the complete Nginx configuration in case anyone wants to build a static IP proxy on Nginx that redirects traffic towards a Lambda function:
server {
    listen 443 ssl;
    server_name $yourservername;

    location / {
        proxy_pass https://$lambdafunctionaddress;
        proxy_ssl_server_name on;
        proxy_redirect off;
        proxy_ssl_protocols TLSv1.3;
    }

    ssl_certificate /home/ubuntu/.ssl/ca-chain.crt;
    ssl_certificate_key /home/ubuntu/.ssl/server.key;
}
Also, it is important that all required information is included in the request:
curl -X POST https://$yourservername/env/functionname -H "Content-Type: application/json" -H "x-api-key: <yourapikey>" -d $payload

Nginx vs Apache proxy pass

I am trying to convert my Apache config to nginx. For Apache I have the following:
<VirtualHost *:443>
    ServerName loc.goout.net
    <Location />
        ProxyPass http://localhost:8080/ retry=0
        ProxyPreserveHost On
    </Location>
    <Location /i/>
        ProxyPass https://dev.goout.net/i/ retry=0
        ProxyPreserveHost Off
    </Location>
...
With this config, if I fetch:
https://loc.goout.net/i/user/606456_001_min.jpg
It correctly fetches content from:
https://dev.goout.net/i/user/606456_001_min.jpg
So for nginx I am trying this:
server {
    listen 443 ssl;
    server_name loc.goout.net;

    proxy_buffering off;
    proxy_ssl_session_reuse off;
    proxy_redirect off;
    proxy_set_header Host dev.goout.net;

    location /i/ {
        proxy_pass https://dev.goout.net:443;
    }
}
But when I fetch the content, I always get a 502.
In the nginx logs I see the following:
[error] 7#7: *5 no live upstreams while connecting to upstream, client: 127.0.0.1, server: loc.goout.net, request: "GET /i/user/606456_001_min.jpg HTTP/1.1", upstream: "https://dev.goout.net/i/user/606456_001_min.jpg", host: "loc.goout.net"
Note that the link https://dev.goout.net/i/user/606456_001_min.jpg works correctly when fetched directly. It seems to me nginx still doesn't connect with SSL. I also tried to define an upstream section:
upstream backend {
    server dev.goout.net:443;
}
But it had no effect.
Note the server is behind the CloudFlare gateway. I hope that is not preventing the correct connection, but I guess then it wouldn't work in Apache either.
tl;dr: SNI is off by default in nginx, as per http://nginx.org/r/proxy_ssl_server_name, but is required by Cloudflare.
It's generally not the best idea to have home-made proxies on top of Cloudflare — it's supposed to be the other way around.
However, what you're omitting is the actual error message that results from making the request like curl -v localhost:3227/i/user/606456_001_min.jpg — of course it is TLS-related:
2018/07/07 23:18:39 [error] 33345#33345: *3 SSL_do_handshake() failed (SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) while SSL handshaking to upstream, client: 127.0.0.1, server: loc.goout.net, request: "GET /i/user/606456_001_min.jpg HTTP/1.1", upstream: "https://[2400:cb00:2048:1::6818:7303]:443/i/user/606456_001_min.jpg", host: "localhost:3227"
This is because nginx is not really intended to be used to steal someone else's sites via proxy_pass, so some features that Cloudflare requires are turned off by default in nginx for general use; specifically, it is SNI, the Server Name Indication extension to TLS, that makes the difference here.
As per http://nginx.org/r/proxy_ssl_server_name, putting an extra proxy_ssl_server_name on; to your exact configuration does fix the issue (I've tested this myself — it works) — note that this requires nginx 1.7.0 or newer.
+ proxy_ssl_server_name on; # SNI is off by default
Additionally, note that you'll also have to ensure that the domain name gets re-resolved at run time, because by default it is only resolved when the configuration is loaded or reloaded. You can use the trick of using variables within your proxy_pass (http://nginx.org/r/proxy_pass) to make nginx update the resolution of the host as appropriate, but then you also need the http://nginx.org/r/resolver directive to specify the server to use for DNS resolution at run time. Your MVP would then be:
location /i/ {
    resolver 1dot1dot1dot1.cloudflare-dns.com.;
    proxy_ssl_server_name on; # SNI is off by default
    proxy_pass https://dev.goout.net:443$request_uri;
}
If you want to specify upstream servers by hostname instead of IP, then you must define a resolver directive for nginx to do DNS lookups.
Is dev.goout.net on a different, remote machine?
Where are your proxy_ssl_certificate and proxy_ssl_certificate_key directives? That connection won't just secure itself.

no resolver defined to resolve s3-eu-west-1.amazonaws.com

All my XML files are stored in AWS S3, and I want to serve them via my website:
Nginx conf:
location ~ \.xml$ {
    proxy_pass https://s3-eu-west-1.amazonaws.com/bucket-name/sitemap$request_uri;
}
But I get a 502, and the log says:
[error] 13295#13295: *26 no resolver defined to resolve s3-eu-west-1.amazonaws.com
If I define a resolver like
resolver s3-eu-west-1.amazonaws.com;
I get
[error] 14430#14430: *7 s3-eu-west-1.amazonaws.com could not be resolved (110: Operation timed out),
Thank you for helping
The AWS DNS server is likely at 10.0.0.2 in your VPC: it is the base of the VPC CIDR plus two (e.g., a 10.0.0.0/16 VPC gives 10.0.0.2).
Add resolver and resolver_timeout:
location ~ \.xml$ {
    resolver 10.0.0.2 valid=300s;
    resolver_timeout 10s;
    proxy_pass https://s3-eu-west-1.amazonaws.com/bucketname/sitemap$request_uri;
}
Note: In EC2 Classic the AWS DNS server is located at 172.16.0.23.
I found a solution:
location ~ \.xml$ {
    rewrite ^ /<bucket>/sitemap$request_uri break;
    proxy_pass https://s3-eu-west-1.amazonaws.com;
}
Your resolver in this instance would be the local or external DNS server you normally use to resolve DNS names, not the S3 URL. Note that the rewrite-based solution above gets away without a resolver because its proxy_pass contains no variables, so nginx resolves s3-eu-west-1.amazonaws.com once at configuration load using the system DNS.

uWSGI nginx error : connect() failed (111: Connection refused) while connecting to upstream

I'm experiencing 502 gateway errors when accessing my IP in nginx (http://52.xx.xx.xx/); the log simply says this:
2015/09/18 13:03:37 [error] 32636#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: xx.xx.xx.xx, request: "GET / HTTP/1.1", upstream: "uwsgi://127.0.0.1:8000", host: "xx.xx.xx.xx"
My nginx.conf file:
# the upstream component nginx needs to connect to
upstream django {
    # server unix:///path/to/your/mysite/mysite.sock; # for a file socket
    server 127.0.0.1:8000; # for a web port socket (we'll use this first)
}

# configuration of the server
server {
    # the port your site will be served on
    listen 80;
    # the domain name it will serve for
    server_name xx.xx.xx.xx; # substitute your machine's IP address or FQDN
    charset utf-8;

    access_log /home/ubuntu/test_django/nginx_access.log;
    error_log /home/ubuntu/test_django/nginx_error.log;

    # max upload size
    client_max_body_size 75M; # adjust to taste

    # Django media
    location /media {
        alias /home/ubuntu/test_django/static/media/; # your Django project's media files - amend as required
    }

    location /static {
        alias /home/ubuntu/test_django/static/; # your Django project's static files - amend as required
    }

    # Finally, send all non-media requests to the Django server.
    location / {
        uwsgi_pass django;
        include /home/ubuntu/test_django/uwsgi_params; # the uwsgi_params file you installed
    }
}
Is there anything wrong with the nginx.conf file? If I use the default conf, it works.
I resolved it by changing the socket configuration in uwsgi.ini from socket = 127.0.0.1:3031 to socket = :3031. I was facing this issue when running nginx in one Docker container and uWSGI in another. If you start uWSGI from the command line, use uwsgi --socket :3031.
Hope this helps someone stuck with the same issue, during deployment of a Django application using Docker.
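For reference, a minimal uwsgi.ini along those lines might look like this (the module name is a hypothetical placeholder):

[uwsgi]
# bind to all interfaces so nginx in another container can connect
socket = :3031
module = mysite.wsgi:application
master = true
processes = 2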
Change this include path:
include /home/ubuntu/test_django/uwsgi_params;
to
include /etc/nginx/uwsgi_params;
I ran into this issue when setting up an environment with nginx + gunicorn, and solved it by adding '*' (or your specific domain) to ALLOWED_HOSTS.
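In settings.py terms, that is something like the following (example.com is a placeholder; prefer the explicit domain over '*' outside of debugging):

# settings.py
ALLOWED_HOSTS = ['example.com']  # or ['*'] while debugging, as suggested above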
In my case, on a Debian server, it worked after moving include /etc/nginx/uwsgi_params; into the location block in my nginx server config file, like this:
location /sistema {
    include /etc/nginx/uwsgi_params;
    uwsgi_pass unix://path/sistema.sock;
}
Also, check that you have the following package installed:
uwsgi-plugin-python
pip3 install uWSGI did the trick for me :D

Error with IP and Nginx as reverse proxy

I configured my Nginx as a simple reverse proxy.
I'm just using a basic setting:
location / {
    proxy_pass http://foo.dnsalias.net;
    proxy_pass_header Set-Cookie;
    proxy_pass_header P3P;
}
The problem is that after some time (a few days) the site behind nginx becomes unreachable. Indeed, nginx tries to call a bad IP (the site behind nginx is at my home behind my box, and I'm using a dyn-dns name because my IP is not fixed). The dyn-dns name is always valid (I can call my site directly), but for some obscure reason nginx gets stuck with the old address.
So, as said, nginx just gives me a 504 Gateway Time-out after some time. It looks like the error comes when my IP changes at home.
Here is a sample of the error log:
[error] ... upstream timed out (110: Connection timed out) while connecting to upstream, client: my.current.ip, server: myreverse.server.com, request: "GET /favicon.ico HTTP/1.1", upstream: "http://my.old.home.ip", host: "myreverse.server.com"
So do you know why nginx is using the IP instead of the domain name?
If the proxy_pass value doesn't contain variables, nginx will resolve domain names to IPs while loading the configuration and cache them until you restart/reload it. This is quite understandable from a performance point of view.
But in the case of a dynamic DNS record change, this may not be desired. So two options are available, depending on whether you have a commercial license or not.
Commercial version (Nginx+)
In this case, use an upstream block and specify which domain name needs to be resolved periodically using a specific resolver. Record TTLs can be overridden using the valid=time parameter. The resolve parameter of the server directive will force the domain name to be resolved periodically.
http {
    resolver X.X.X.X valid=5s;

    upstream dynamic {
        server foo.dnsalias.net resolve;
    }

    server {
        server_name www.example.com;

        location / {
            proxy_pass http://dynamic;
            ...
        }
    }
}
This feature was added in Nginx+ 1.5.12.
Community version (Nginx)
In that case, you will also need a custom resolver, as in the previous solution. But to work around the unavailability of the upstream solution, you need to use a variable in your proxy_pass directive. That way nginx will use the resolver too, honoring the caching time specified with the valid parameter. For instance, you can use the domain name as a variable:
http {
    resolver X.X.X.X valid=5s;

    server {
        server_name www.example.com;
        set $dn "foo.dnsalias.net";

        location / {
            proxy_pass http://$dn;
            ...
        }
    }
}
Then, you will likely need to add a proxy_redirect directive to handle redirects.
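For instance, a one-line sketch that rewrites the upstream's Location headers back to the proxy's own name (hostnames taken from the examples above):

proxy_redirect http://foo.dnsalias.net/ http://www.example.com/;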
Maybe check this out: http://forum.nginx.org/read.php?2,215830,215832#msg-215832
resolver 127.0.0.1;
set $backend "foo.example.com";
proxy_pass http://$backend;
In such a setup, the IP address of "foo.example.com" will be looked up dynamically and the result will be cached for 5 minutes.
