nginx-ingress pods keep crashing when a request comes in - AKS - nginx

Our nginx-ingress-controller pods keep crashing when a request comes in. From the logs, it looks like they time out connecting to the API server. Any idea how to enable more detailed logs?
I1213 14:55:35.038444 7 round_trippers.go:438] GET https://11.2.9.1:443/version?timeout=32s in 46 milliseconds
I1213 14:55:35.038543 7 round_trippers.go:444] Response Headers:
I1213 14:55:35.038650 7 request.go:784] Got a Retry-After 1s response for attempt 9 to https://11.2.9.1:443/version?timeout=32s
I1213 14:55:36.038955 7 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: nginx-ingress-controller/v0.0.0 (linux/amd64) kubernetes/$Format" -H "Authorization: Bearer XXXXXXXXXXRiWDII8dG8v-KJ90Av6HgE" 'https://11.2.9.1:443/version?timeout=32s'
I1213 14:55:36.088346 7 round_trippers.go:438] GET https://11.2.9.1:443/version?timeout=32s in 49 milliseconds
I1213 14:55:36.088382 7 round_trippers.go:444] Response Headers:
I1213 14:55:36.088598 7 request.go:947] Response Body:
I1213 14:55:36.088730 7 main.go:212] Unexpected error discovering Kubernetes version (attempt 9): an error on the server ("") has prevented the request from succeeding
F1213 14:55:36.088826 7 main.go:235] Error while initiating a connection to the Kubernetes API server. This could mean the cluster is misconfigured (e.g. it has invalid API server certificates or Service Accounts configuration). Reason: an error on the server ("") has prevented the request from succeeding
Refer to the troubleshooting guide for more information: https://kubernetes.github.io/ingress-nginx/troubleshooting/
When I kubectl exec into the ingress pod, this is the output:
C:\Users\XXXXX>kubectl exec -it nginx-ingress-controller-85d79fd99d-tlzrz -- /bin/bash
www-data#nginx-ingress-controller-85d79fd99d-tlzrz:/etc/nginx$ curl -k -v -XGET https://11.2.9.1:443/version?timeout=32s
Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x56450f95cdd0)
* Trying 11.2.9.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x56450f95cdd0)
* Connected to 11.2.9.1 (11.2.9.1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 11.2.9.1:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 11.2.9.1:443
www-data#nginx-ingress-controller-85d79fd99d-tlzrz:/etc/nginx$

It is due to a network security policy that does not allow the ingress node to reach the API server by its internal IP. Adding the environment variable KUBERNETES_SERVICE_HOST=<FQDN of the API server> to the ingress controller deployment forces it to use the FQDN and solves the issue.
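A minimal sketch of the same change from the command line, assuming the deployment is named nginx-ingress-controller (inferred from the pod name above); resource group, cluster name, and namespace are placeholders:

# Look up the API server FQDN of the AKS cluster
az aks show -g <resource-group> -n <cluster-name> --query fqdn -o tsv
# Point the controller at the FQDN instead of the internal service IP
kubectl -n <namespace> set env deployment/nginx-ingress-controller KUBERNETES_SERVICE_HOST=<fqdn-from-above>
# kubectl rolls the deployment; the new pods come up with the variable set
kubectl -n <namespace> get pods -w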

Related

Strange problem while migrating to javalin 4.0.0

I have a strange problem after migrating to Javalin 4.0.0.
After starting, Javalin listens on the specified port but doesn't process any requests. This is the response from the curl command:
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8000 (#0)
> GET /manage/stores HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.55.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
In the application console, after the request there is only:
[qtp1276261147-20] DEBUG org.eclipse.jetty.io.IdleTimeout - SocketChannelEndPoint#400c4e3c{/0:0:0:0:0:0:0:1:59900<->/0:0:0:0:0:0:0:1:8000,OPEN,fill=-,flush=-,to=8/30000}{io=0/0,kio=0,kro=0}-><null> idle timeout check, elapsed: 8 ms, remaining: 29992 ms
I have no idea what to do.
Regards
Michal
It seems that something was wrong in my environment and dependencies. After upgrading the jetty-util package to version 9.4.43.v20210629 (the same version as the jetty-server used by Javalin 4.0.0), everything works fine.
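A quick way to spot such a mismatch, as a sketch assuming a Maven build (adjust for Gradle):

# List every Jetty artifact on the classpath and check that jetty-util
# resolves to the same version as jetty-server (9.4.43.v20210629 here)
mvn dependency:tree -Dincludes=org.eclipse.jetty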

proper config for reverse proxy for grafana simple json data source

I want to put a few services behind a reverse proxy; simple services work.
The 502 issue occurs when trying to reach the Grafana Simple JSON data source (POST + JSON payload).
Grafana itself is not behind the reverse proxy.
current haproxy config:
frontend FRONT.AWS.WEB.PROXY
mode http
bind *:8080
timeout client 1m
option httplog
acl IS_RELE path_beg /release
acl IS_GRAF path_beg /grafana
use_backend BACK.AWS.WEB.ARTIFACTS if IS_RELE
use_backend BACK.AWS.WEB.RTGRAF if IS_GRAF
backend BACK.AWS.WEB.ARTIFACTS
mode http
http-request set-path /
http-response replace-value X-Application-Context (.*)(\release).*$ \1
server AWS.WEB.ARTIFACTS *:5581/ maxconn 1000 check port 5581
backend BACK.AWS.WEB.RTGRAF
mode http
#option forwardfor
#balance source
#option httpclose
#option httpchk HEAD / HTTP/1.0
http-request set-path /
http-response replace-value X-Application-Context (.*)(\grafana).*$ \1
server AWS.WEB.RTGRAF *:5582/ maxconn 1000 check port 5582
data source config in grafana:
http://192.168.56.101:8080/grafana/
This is a working request without haproxy:
curl -d '{"requestId":"Q119","timezone":"utc".....lters":[]}' -H 'Content-Type: application/json' http://localhost:8080/query
good response:
[{"columns":[{"text":"sym","type":"string"}, {"text":"time","type":""}, .... .... {"text":"mode","type":"string"}, {"text":"proto","type":"string"}],"rows":[],"type":"table"}]
BUT with haproxy:
curl -d .... http://localhost:8080/grafana/query
502 Response:
<h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
But just to confirm, the service itself works:
curl http://localhost:8080/grafana/?2+1
Response:
<html><head><style>a{text-decoration:none}a:link{color:024C7E}a:visited{color:024C7E}a:active{color:958600}body{font:10pt verdana;text-align:justify}</style></head><body><pre>3
Haproxy log:
127.0.0.1:42362 [16/Sep/2020:21:57:14.430] FRONT.AWS.WEB.PROXY BACK.AWS.WEB.RTGRAF/AWS.WEB.RTGRAF 0/0/0/3/3 200 274 - - ---- 1/1/0/0/0 0/0 "GET /grafana/?2+1 HTTP/1.1"
127.0.0.1:42418 [16/Sep/2020:21:57:32.038] FRONT.AWS.WEB.PROXY BACK.AWS.WEB.RTGRAF/AWS.WEB.RTGRAF 0/0/0/-1/0 502 214 - - PH-- 1/1/0/0/0 0/0 "POST /grafana/query HTTP/1.1"
grafana log:
INFO[09-16|22:23:02] Request Completed logger=context userId=1 orgId=1 uname=admin method=POST path=/api/datasources/proxy/2/query status=502 remote_addr=192.168.56.1 time_ms=6 size=107 referer="http://192.168.56.101:3000/d/aQPWEJFmz/system-status?orgId=1&refresh=10s"
Found the problem. For anyone in the future, this is for you, from a caring ancient developer!
Debug the requests with a simple nc listener (see the sketch after the config below).
You'll find that the wrong path is requested from the first HTTP GET.
So the path needs rewriting:
backend BACK.AWS.WEB.ARTIFACTS
mode http
http-request set-uri %[url,regsub(^/release/,/,)]
http-response replace-value X-Application-Context (.*)(\release).*$ \1
server AWS.WEB.ARTIFACTS *:5581/ maxconn 1000 check port 5581
backend BACK.AWS.WEB.RTGRAF
mode http
http-response replace-value X-Application-Context (.*)(\grafana).*$ \1
server AWS.WEB.RTGRAF *:5582/ maxconn 1000 check port 5582
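A minimal sketch of that nc debugging step (port and payload are illustrative): temporarily point the backend's server line at a bare nc listener, replay the failing request, and read the request line HAProxy actually forwards.

# terminal 1: fake backend on the data-source port (some nc flavors need: nc -l -p 5582)
nc -l 5582
# terminal 2: replay the failing request through HAProxy
curl -d '{"requestId":"Q119"}' -H 'Content-Type: application/json' http://localhost:8080/grafana/query
# With the original "http-request set-path /" the listener shows "POST / HTTP/1.1",
# i.e. the /query path the data source expects is gone. The fixed config above
# either rewrites only the prefix (set-uri regsub) or drops the set-path line.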

Not able to curl an application running on AWS EC2 via https, but http works

I have an EC2 instance whose public DNS (IPv4) is ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com. I run a Java application on the EC2 instance, and I can fetch data via curl and http from my local laptop:
$ curl http://ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com:8080/users/1/items/2759 -verbose
* Trying xx.xxx.xx.xx:8080...
* TCP_NODELAY set
* Connected to ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com (xx.xxx.xx.xx) port 8080 (#0)
> GET /users/1/items/2759 HTTP/1.1
> Host: ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com:8080
> User-Agent: curl/7.65.3
> Accept: */*
> Referer: rbose
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< Content-Type: application/json
< Transfer-Encoding: chunked
< Date: Thu, 13 Feb 2020 06:34:50 GMT
<
< ...expected data...
However, https does not work:
curl https://ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com:8080/users/1/items/2759 -verbose
* Trying xx.xxx.xx.xx:8080...
* TCP_NODELAY set
* Connected to ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com (xx.xxx.xx.xx) port 8080 (#0)
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /usr/local/anaconda3/ssl/cacert.pem
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
My ec2 security group inbound rule:
Why can't I access my java application endpoint via https? And how can I do so?

How to make an HTTP GET request manually with netcat?

So, I have to retrieve the temperature of any one of the cities from http://www.rssweather.com/dir/Asia/India.
Let's assume I want to retrieve Kanpur's.
How do I make an HTTP GET request with netcat?
I'm doing something like this:
nc -v rssweather.com 80
GET http://www.rssweather.com/wx/in/kanpur/wx.php HTTP/1.1
I don't know if I'm even headed in the right direction. I was not able to find any good tutorials on how to make an HTTP GET request with netcat, so I'm posting it here.
Of course you could dig into the standards or search Google, but if you only want to fetch a single URL, it isn't worth the effort.
You could also start a netcat in listening mode on a port:
nc -l 64738
(Sometimes nc -l -p 64738 is the correct argument list)
...and then make a request to this port with a real browser: just type http://localhost:64738 into your browser and see what it sends.
In your case the problem is that HTTP/1.1 doesn't close the connection automatically; it waits for the next URL you want to retrieve. The solution is simple:
Use HTTP/1.0:
GET /this/url/you/want/to/get HTTP/1.0
Host: www.rssweather.com
<empty line>
or use a Connection: request header to tell the server you want to close the connection after this request:
GET /this/url/you/want/to/get HTTP/1.1
Host: www.rssweather.com
Connection: close
<empty line>
Extension: after GET, write only the path part of the URL. The hostname you want to get data from belongs in a Host: header, as you can see in my examples. This is because multiple websites can run on the same web server, so the browser needs to tell it which site to load the page from.
This works for me:
$ nc www.rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.0
Host: www.rssweather.com
And then hit <enter> twice: once for the remote HTTP server and once for the nc command.
source: pentesterlabs
You don't even need to use/install netcat:
Create a TCP socket via an unused file descriptor (I use 88 here),
write the request into it,
and read the response from the fd.
exec 88<>/dev/tcp/rssweather.com/80
echo -e "GET /dir/Asia/India HTTP/1.1\nhost: www.rssweather.com\nConnection: close\n\n" >&88
sed 's/<[^>]*>/ /g' <&88
On MacOS, you need the -c flag as follows:
Little-Net:~ minfrin$ nc -c rssweather.com 80
GET /wx/in/kanpur/wx.php HTTP/1.1
Host: rssweather.com
Connection: close
[empty line]
The response then appears as follows:
HTTP/1.1 200 OK
Date: Thu, 23 Aug 2018 13:20:49 GMT
Server: Apache
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
The -c flag is described as "Send CRLF as line-ending".
To be HTTP/1.1 compliant, you need the Host header, as well as the "Connection: close" if you want to disable keepalive.
Test it out locally with python3 http.server
This is also a fun way to test it out. On one shell, launch a local file server:
python3 -m http.server 8000
Then on the second shell, make a request:
printf 'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n' | nc localhost 8000
The Host: header is required in HTTP 1.1.
This shows an HTML listing of the directory, just as you would see from:
firefox http://localhost:8000
Next you can try to list files and directories and observe the response:
printf 'GET /my-subdir/ HTTP/1.1\n\n' | nc localhost 8000
printf 'GET /my-file HTTP/1.1\n\n' | nc localhost 8000
Every time you make a successful request, the server prints:
127.0.0.1 - - [05/Oct/2018 11:20:55] "GET / HTTP/1.1" 200 -
confirming that it was received.
example.com
This IANA maintained domain is another good test URL:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n' | nc example.com 80
and compare with: http://example.com/
https SSL
nc does not seem to be able to handle https URLs. Instead, you can use:
sudo apt-get install nmap
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | ncat --ssl github.com 443
See also: https://serverfault.com/questions/102032/connecting-to-https-with-netcat-nc/650189#650189
If you try nc, it just hangs:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 443
and trying port 80:
printf 'GET / HTTP/1.1\r\nHost: github.com\r\n\r\n' | nc github.com 80
just gives a redirect response to the https version:
HTTP/1.1 301 Moved Permanently
Content-Length: 0
Location: https://github.com/
Connection: keep-alive
Tested on Ubuntu 18.04.

Will I be able to use CURL to get HTTP/2 headers?

Right now I use curl -I to retrieve headers.
With HPACK and the upcoming adoption of HTTP/2 by browsers, will sites serve headers in a different way that renders my use of the curl command ineffective?
Yes, you can use curl to see and send HTTP headers with HTTP/2 just as you do with HTTP/1.
curl supports HTTP/2, and it is implemented as a sort of translation layer: it shows, and "pretends", that headers work 1.1-style. It shows headers as text and delivers them in callbacks as if they were HTTP/1.1 headers. We made it this way so that scripts and applications get a very smooth and basically invisible transition path to HTTP/2 with curl.
Internally that is of course done by decompressing received headers before showing them, and showing them before compressing them when sending them.
I believe it depends on the curl version. HTTP/2 support was added in curl 7.36.x, IIRC? Not all distros will have that version.
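For what it's worth, one way to check whether a given curl build supports HTTP/2 is to look for HTTP2 in the Features line of curl --version; the grep below is just an illustration.

curl --version | grep -E 'HTTP2|nghttp2'
# prints the Features line containing HTTP2 (and the nghttp2 version in the first line)
# only when the build supports HTTP/2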
This is with curl 7.41.0 over HTTP/2 against https://google.com
curl --http2 -I -v https://google.com
* Rebuilt URL to: https://google.com/
* Trying 173.194.123.1...
* Connected to google.com (173.194.123.1) port 443 (#0)
* ALPN, offering h2-14, http/1.1
* ALPN, server accepted to use h2-14
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.google.com
* start date: 2015-03-11 16:13:43 GMT
* expire date: 2015-06-09 00:00:00 GMT
* subjectAltName: google.com matched
* issuer: C=US; O=Google Inc; CN=Google Internet Authority G2
* SSL certificate verify ok.
* Using HTTP2
Edit: correction, curl --http2 needs to be built against nghttp2 for it to work: https://nghttp2.org/
curl --version
curl 7.41.0 (x86_64-unknown-linux-gnu) libcurl/7.41.0 OpenSSL/1.0.2b zlib/1.2.8 nghttp2/0.7.8-DEV
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets
