I see some strange logs in my Kong container, which internally uses nginx:
2019/08/07 15:54:18 [info] 32#0: *96775 client closed connection while SSL handshaking, client: 10.244.0.1, server: 0.0.0.0:8443
This happens every 5 seconds, as if some sort of diagnostic were running.
In my Kubernetes descriptor I set no readiness or liveness probe, so I can't understand where those calls come from and how I can prevent them from appearing, as they only dirty my logs...
Edit:
It seems it's the LoadBalancer service: I tried deleting it and I get no logs anymore... but how can I get rid of those logs without deleting the service?
This has already been discussed on the Kong forum, in the Stopping logs generated by the AWS ELB health check thread, which describes the same behaviour: an LB health check arriving every few seconds. Two workarounds are suggested there:
1. Make Kong listen on a plain HTTP port, open that port up only to the subnet in which the ELB is running (most probably the public one), and then don't open up port 80 on the ELB. The ELB will be able to talk on port 80 for the health check, but there won't be an HTTP port available to the external world.
2. Use L4 proxying (stream_listen) in Kong, open up the port, and then make the ELB health-check that port.
Both solutions are reasonable.
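For the second option, the relevant setting is stream_listen in kong.conf (or the KONG_STREAM_LISTEN environment variable when Kong runs in a container). A minimal sketch; the port number is a hypothetical example:
# kong.conf – expose a bare TCP listener used only by the LB health check
stream_listen = 0.0.0.0:5555
Then point the ELB health check at TCP port 5555 instead of the 8443 SSL proxy port, so the health check no longer touches the SSL listener at all.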
Simply check what is connecting to your nginx:
kubectl get po,svc --all-namespaces -o wide | grep 10.244.0.1
After that you should know what is happening inside your cluster: maybe a misconfigured pod, or some clients.
I also encountered this error while watching the nginx log.
I use the Azure cloud, and I found that the IP in the log was the address of my Azure server.
I resolved it by changing the Protocol option from [TCP] to [HTTPS] in the Health probes menu on the Azure portal.
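The same change can be scripted with the Azure CLI; a sketch with hypothetical resource names, port, and path:
# Switch an existing load-balancer health probe from TCP to HTTPS
# (resource group, LB, probe name, port, and path are placeholders)
az network lb probe update \
  --resource-group my-rg \
  --lb-name my-lb \
  --name my-probe \
  --protocol Https \
  --port 8443 \
  --path /status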
Related
I'm trying for the first time to connect my local server (Synology) through NGINX and Cloudflare so I can access it through my own domain name. I have the proxy host all set up, pointing to my local IP address with the port, and I have SSL encryption using Let's Encrypt. The site gives me either a timed-out error or unreachable; however, one time the site somehow took me to ASUS AiCloud, which runs through my ASUS AC68U router, even though I was not pointing NGINX to it.
Using Cloudflare's diagnostic center site, it says the request failed because the web server did not respond.
I'm not sure whether my router is blocking Cloudflare or whether there is some other issue going on; I would appreciate any help with the matter!
I have developed a website for a customer. It's currently sitting on a sub-domain on our server and is finished, but I now need to carry out testing for eCommerce payments, and that means the site needs to move from our sub-domain over to their live domain.
For this, I've created a cPanel account with that domain, but because it's live elsewhere, the best way for me to complete the migration before any DNS records are changed on the third-party hosting is to be able to access the site on my machine only, so I edited my local hosts file (Windows) for that reason.
Before NGINX was installed on the CentOS server, modifying the local hosts file worked perfectly and I could access the site only on my machine to finish up the migration; when finished, I would ask the third-party host to change the NS records to ours, meaning no downtime for their site and a clean migration.
At the moment, even though the hosts file is changed and a local cmd ping resolves to our server, I get a 502 gateway error from nginx in the browser. Checking the nginx error logs, I believe it's because the nginx server is trying to resolve the third party's real host IP address, while my machine is set to resolve our server's IP.
Does that make sense? All other sites on the server are working fine through Apache + NGINX, but I'm stuck with this problem.
I could simply ask the third-party hosting company to change the A record to point to our server, but it would mean the client would face some downtime while I finished up the migration.
Any help is appreciated.
I have tried purging the nginx cache and reloading.
Here is the error message regarding this specific domain in nginx:
2019/09/25 10:58:19 [error] 25641#25641: *47 connect() failed (111: Connection refused) while connecting to upstream, client: IP.ADDRESS, server: localhost, request: "GET / HTTP/1.1", upstream: "http://IP.ADDRESS:8080/", host: "www.xxx.co.uk"
The IP address in this error is the real one, where the site is currently live.
I found that all I needed to do was edit /etc/nginx/custom_rules and add the domain and IP address under "When to specify a domain name". I can now access it locally.
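For reference, the entry follows the commented examples already present in that file; a sketch assuming the Engintron-style conventions used there ($domain and $PROXY_DOMAIN_OR_IP), with the domain and IP as placeholders:
# /etc/nginx/custom_rules – send this domain to our own server instead of
# the IP its public DNS currently resolves to
if ($domain ~ "customer-domain.co.uk") {
    set $PROXY_DOMAIN_OR_IP "203.0.113.10";    # our server's IP (placeholder)
}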
I'm using WebSockets and have managed to successfully deploy my WAR file on AWS Elastic Beanstalk. I am using nginx as a proxy server and a Classic Load Balancer listening on port 80, with the protocol set to TCP instead of HTTP. Cross-zone load balancing is enabled, and connection draining is also enabled with a draining timeout of 60 seconds. The logs aren't showing any errors.
I have not changed the default nginx.conf file.
The connection upgrade is happening successfully, with the status code returning '101 Switching Protocols'. I just cannot figure out why the connection closes immediately afterwards.
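For reference, the nginx directives that normally make such an upgrade work look like the following; a minimal sketch, with the backend address and location path as hypothetical examples:
# Reverse-proxy block forwarding WebSocket upgrade requests to the app server
location /ws/ {
    proxy_pass http://127.0.0.1:8080;          # application server (placeholder)
    proxy_http_version 1.1;                    # the Upgrade mechanism needs HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                  # keep idle sockets open longer
}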
Any help is appreciated. Thanks.
This wonderful answer helped me solve this problem.
It turns out that the corporate WiFi connection I was on had a firewall which was immediately terminating my WebSocket connection. It worked perfectly fine when I tried it on a different WiFi network that did not have any firewall configured.
I'm trying to set up a private Docker registry to upload my stuff, but I'm stuck. The docker-registry instance is running on port 5000, and I've set up nginx in front of it with a proxy_pass directive to forward requests on port 80 back to localhost:5000.
When I try to push my image I get this error:
Failed to upload metadata: Put http://localhost:5000/v1/images/long_image_id/json: dial tcp localhost:5000: connection refused
If I change localhost to my server's IP address in the nginx configuration file, I can push all right. Why would my local docker push command complain about localhost when localhost is being passed from nginx?
The server is on EC2, if that helps.
I'm not sure of the specifics of your traffic, but I spent a lot of time using mitmproxy to inspect Docker's data flows. The Docker registry is actually split into two parts, the index and the registry. The client contacts the index to handle metadata, and is then forwarded on to a separate registry to get the actual binary data.
The Docker self-hosted registry comes with its own watered-down index server. As a consequence, you might want to figure out which registry server is being passed back as a response header to your index requests, and whether that works with your config. You may have to set the registry_endpoints config setting to get everything to play nicely together.
In order to solve this and other problems for everyone, we decided to build a hosted docker registry called Quay that supports private repositories. You can use our service to store your private images and deploy them to your hosts.
Hope this helps!
Override the X-Docker-Endpoints header set by the registry with:
proxy_hide_header X-Docker-Endpoints;
add_header X-Docker-Endpoints $http_host;
I think the problem you face is that the docker-registry advertises so-called endpoints through an X-Docker-Endpoints header early in the dialog between itself and the Docker client, and that the Docker client then uses those endpoints for subsequent requests.
You have a setup where your Docker client first communicates with nginx on the (public) port 80, then switches to the advertised endpoints, which are probably localhost:5000 (that is, your local machine).
You should see if an option exists in the Docker registry you run so that it advertises the endpoints as your remote host, even though it listens on localhost:5000.
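Put together, a minimal nginx front end that rewrites the advertised endpoints might look like this; a sketch, with the server name and upstream port as assumptions:
server {
    listen 80;
    server_name registry.example.com;          # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:5000;      # the docker-registry instance
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;

        # Rewrite the endpoints the registry advertises so the client
        # keeps talking to this proxy instead of localhost:5000.
        proxy_hide_header X-Docker-Endpoints;
        add_header X-Docker-Endpoints $http_host;
    }
}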
I'm running nginx on an EC2 instance.
When I try to connect to it using the public DNS name, I get a "connection timed out" error.
When I run curl 127.0.0.1, it prints the nginx home page, so the nginx configuration is fine.
I think there is something wrong with my EC2 instance's security policy, but I have already set it to allow all traffic in and out.
Thank you, guys.
It's because iptables blocks port 80.
Flush iptables, and it works.
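A sketch of the commands involved; note that flushing removes every filter rule, so inspect first, and simply allowing inbound HTTP is the narrower fix:
# Inspect the current rules before changing anything
sudo iptables -L -n --line-numbers
# Flush all filter rules (use with care on a host with other rules)
sudo iptables -F
# Alternatively, only open port 80 for inbound HTTP
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT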