I keep getting this error when I try to set up liveness & readiness probes for my awx_web container:
Liveness probe failed: Get http://POD_IP:8052/: dial tcp POD_IP:8052: connect: connection refused
Here is the liveness & readiness section of my deployment for the awx_web container:
ports:
  - name: http
    containerPort: 8052 # the port of the container awx_web
    protocol: TCP
livenessProbe:
  httpGet:
    path: /
    port: 8052
  initialDelaySeconds: 5
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /
    port: 8052
  initialDelaySeconds: 5
  periodSeconds: 5
If I test whether port 8052 is open from another pod in the same namespace as the pod containing the awx_web container, or from a container deployed in the same pod as awx_web, I see that the port is open:
/ # nc -vz POD_IP 8052
POD_IP (POD_IP :8052) open
I get the same result (port 8052 is open) if I use netcat (nc) from the worker node where the pod containing the awx_web container is deployed.
For reference, I use a NodePort service that redirects traffic to that container (awx_web):
type: NodePort
ports:
  - name: http
    port: 80
    targetPort: 8052
    nodePort: 30100
I recreated your issue, and it looks like the problem is caused by too small a value of initialDelaySeconds for the liveness probe.
It takes more than 5 seconds for the awx container to open port 8052, so you need to wait a bit longer for it to start. I found that 15 seconds was enough for me, but you may require some tweaking.
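For reference, this is roughly what the adjusted probes could look like; the 15-second value is just the starting point that worked in my test and may need further tuning for your cluster:

livenessProbe:
  httpGet:
    path: /
    port: 8052
  initialDelaySeconds: 15  # give awx_web time to bind port 8052 before the first check
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /
    port: 8052
  initialDelaySeconds: 15
  periodSeconds: 5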
In my case this issue occurred because I had configured the backend application host as localhost. The issue was resolved when I changed the host value to 0.0.0.0 in my app properties.
Use the latest built Docker image after making this change.
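As an illustration only (assuming a Spring Boot-style application.properties; the exact setting depends on your framework), the change is from binding to the loopback interface to binding to all interfaces:

# Before: the app only listens on the container's loopback interface,
# so the kubelet's probe against the pod IP is refused.
server.address=localhost
# After: listen on all interfaces so the probe can reach the app via the pod IP.
server.address=0.0.0.0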
Most likely your application could not start up, or it crashes shortly after starting. That may be due to insufficient memory and CPU resources, or to one of the AWX dependencies, such as PostgreSQL or RabbitMQ, not being set up correctly.
Did you check whether your application works correctly without the probes? I recommend doing that first. Examine the pod stats a little to make sure it is not restarting.
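A few commands that could help with that check (the pod name is a placeholder):

# Watch for restarts and CrashLoopBackOff
kubectl get pods -w
# Look at events, resource limits, and the last termination reason
kubectl describe pod <awx-pod-name>
# Check the container logs; --previous shows the prior instance if it restarted
kubectl logs <awx-pod-name> -c awx_web --previous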
I currently configure my Nginx pod's readinessProbe to monitor Redis port 6379, and my redis pod sits behind redis-service (ClusterIP).
So my idea is to monitor the Redis port through redis-service using DNS instead of an IP address.
When I use readinessProbe.host: redis-service.default.svc.cluster.local, the Nginx pod is not running. When I describe the pod with $ kubectl describe pods nginx, I find the error below in the Events section:
Readiness probe failed: dial tcp: lookup redis-service.default.svc.cluster.local: no such host
It only works if I use the ClusterIP instead of the DNS name.
Please help me figure out how to use DNS instead of the ClusterIP.
My Pod file:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
    readinessProbe:
      tcpSocket:
        host: redis-service.default.svc.cluster.local
        port: 6379
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Thanks.
I figured out how to do it.
Instead of using tcpSocket, just use exec.
Exec a tcpCheck script inside the container to check the service:port availability.
Use an init container to share the script with the main container.
Thanks.
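A minimal sketch of that idea, assuming the main image ships bash (the Debian-based nginx image does); the tcpCheck.sh name, the probe volume, and the init container are illustrative, not part of any standard setup:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  initContainers:
  - name: write-probe-script
    image: busybox
    # Write a tiny script that exits 0 only if a TCP connection to $1:$2
    # succeeds, then share it with the main container via an emptyDir volume.
    command:
    - sh
    - -c
    - |
      cat > /probe/tcpCheck.sh <<'EOF'
      #!/bin/bash
      timeout 2 bash -c "< /dev/tcp/$1/$2"
      EOF
      chmod +x /probe/tcpCheck.sh
    volumeMounts:
    - name: probe
      mountPath: /probe
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: probe
      mountPath: /probe
    readinessProbe:
      exec:
        command: ["/probe/tcpCheck.sh", "redis-service.default.svc.cluster.local", "6379"]
      initialDelaySeconds: 5
      periodSeconds: 10
  volumes:
  - name: probe
    emptyDir: {}

Because the check now runs inside the container, it uses the pod's DNS configuration, so the service name resolves the same way it would for the application itself.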
I installed Minikube v1.3.1 on my RedHat EC2 instance for some tests.
Since the ports that the nginx-ingress-controller uses by default are already in use, I am trying to change them in the deployment, but without result. Could somebody please advise how to do it?
How do I know that the ports are already in use?
When I list the system deployments using the command kubectl -n kube-system get deployment | grep nginx, I get:
nginx-ingress-controller 0/1 1 0 9d
meaning that my container is not up. When I describe it using the command kubectl -n kube-system describe pod nginx-ingress-controller-xxxxx I get:
Type     Reason                  Age                      From               Message
----     ------                  ----                     ----               -------
Warning  FailedCreatePodSandBox  42m (x163507 over 2d1h)  kubelet, minikube  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc =
failed to start sandbox container for pod "nginx-ingress-controller-xxxx": Error response from daemon: driver failed programming external connectivity on endpoint
k8s_POD_nginx-ingress-controller-xxxx_kube-system_...: Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
Then I check the processes using those ports and kill them. That frees them up, and the ingress-controller pod gets deployed correctly.
What did I try in order to change the nginx-ingress-controller ports?
kubectl -n kube-system get deployment | grep nginx
> NAME READY UP-TO-DATE AVAILABLE AGE
> nginx-ingress-controller 0/1 1 0 9d
kubectl -n kube-system edit deployment nginx-ingress-controller
The relevant part of my deployment looks like this:
name: nginx-ingress-controller
ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
- containerPort: 81
  hostPort: 81
  protocol: TCP
- containerPort: 444
  hostPort: 444
  protocol: TCP
- containerPort: 18080
  hostPort: 18080
  protocol: TCP
Then I remove the subsections for ports 443 and 80, but when I roll out the changes, they get added back again.
Now my services are no longer reachable through the ingress.
Please note that minikube ships with addon-manager, whose role is to keep an eye on specific addon template files (default location: /etc/kubernetes/addons/) and take one of two actions based on the value of the managed resource's addonmanager.kubernetes.io/mode label:
addonmanager.kubernetes.io/mode=Reconcile
Resources will be periodically reconciled. Direct manipulation of these addons through the apiserver is discouraged, because addon-manager will bring them back to their original state.
addonmanager.kubernetes.io/mode=EnsureExists
Resources will be checked for existence only. Users can edit these addons as they want.
So, to keep your customized version of the default Ingress listening ports, first change the Ingress deployment template configuration to EnsureExists on the minikube VM, as sketched below.
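As a sketch only, the label flip could be done from inside the minikube VM; the exact template file name is an assumption here, so check the ingress-*.yaml files mentioned below:

minikube ssh
# Switch the addon's mode label so addon-manager stops reverting manual edits
sudo sed -i 's|addonmanager.kubernetes.io/mode: Reconcile|addonmanager.kubernetes.io/mode: EnsureExists|' \
    /etc/kubernetes/addons/ingress-dp.yaml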
Basically, minikube bootstraps the Nginx Ingress Controller as a separate addon, so by design you may have to enable it in order to propagate the Ingress Controller's resources within the minikube cluster.
Once you enable a specific minikube addon, addon-manager creates template files for each component by placing them in the /etc/kubernetes/addons/ folder on the host machine, then spins up each manifest, creating the corresponding K8s resources; furthermore, addon-manager continuously inspects the actual state of all addon resources, synchronizing the target K8s resources (service, deployment, etc.) with the template data.
Therefore, you can consider modifying the Ingress addon template data through the ingress-*.yaml files under the /etc/kubernetes/addons/ directory, propagating the desired values into the target K8s objects; it may take some time until the K8s engine reflects the changes and re-spawns the related ReplicaSet-based resources.
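For example (again a sketch; verify the file name and fields against what your minikube version actually places in that directory), you could edit the host-side ports in the template and then watch the controller roll out:

# Still on the minikube VM: change hostPort: 80 -> 8080 and hostPort: 443 -> 8443
# (arbitrary example values) in the Ingress controller template
sudo vi /etc/kubernetes/addons/ingress-dp.yaml
# Back on the host: wait for the re-spawned controller to become ready
kubectl -n kube-system rollout status deployment nginx-ingress-controller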
Well, I think you have to modify the Ingress that refers to the service you're trying to expose on a custom port.
This can be done with a custom annotation. Here is an example for your port 444:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  namespace: mynamespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/listen-ports-ssl: "444"
spec:
  tls:
  - hosts:
    - host.org
    secretName: my-host-tls-cert
  rules:
  - host: host.org
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 444
I have an application that can receive commands on a specific port like so:
echo <command> | nc <hostname> <port>
In this case it is opening port 22082, I believe inside its Docker container.
When I place this application into a Kubernetes pod, I need to expose it by creating a Kubernetes service. Here is my service:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    targetPort: 22082
  #type: NodePort
  externalIPs:
  - 10.10.30.19
NOTE: I commented out NodePort because I haven't been able to expose the port using that method. Whenever I run sudo netstat -nlp | grep 22282, I get nothing.
Using an external IP, I'm able to find the port and connect to it using netcat, but whenever I issue a command over the port, it just hangs.
Normally I should be able to issue a 'help' command and get information about the app. With Kubernetes I can't get that same output.
Now, if I use hostNetwork: true in my app's YAML (not the service), I can connect to the port and get my 'help' info.
What could be keeping my command from reaching the app when not using the hostNetwork configuration?
Thanks
UPDATE: I noticed this message in the output of sudo iptables --list:
Chain KUBE-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere 172.21.155.23 /* default/commander: has no endpoints */ tcp dpt:22282 reject-with icmp-port-unreachable
UPDATE #2: I solved the above error by setting spec.template.metadata.labels.app to commander. However, I am still unable to send any command to the app.
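For anyone hitting the same "has no endpoints" message, the mismatch between the service selector and the pod labels can be confirmed with:

# An empty ENDPOINTS column means the selector (app=commander) matches no pods
kubectl get endpoints commander
kubectl get pods --selector=app=commander -o wide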
Thanks to #sfgroups I discovered that I needed to set an actual nodePort like so:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
  - protocol: TCP
    port: 22282
    nodePort: 32282
    targetPort: 22082
  type: NodePort
Pretty odd behavior; it makes me wonder what the point of the port field even is!
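(For what it's worth: port is what the Service listens on at its ClusterIP, targetPort is the container's port, and nodePort is what gets opened on every node.) With the NodePort in place, the earlier test should work against any node's address; <node-ip> below is a placeholder:

# Send a command to the app through the node port
echo help | nc <node-ip> 32282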
I have a problem with multi-port services. I am trying to expose two ports; the first one works, but the other does not. I am testing this with telnet (amongst others), and I always get "connection refused" for the second port.
This is the ports section of the service's YAML:
spec:
  clusterIP: 10.97.153.249
  externalTrafficPolicy: Cluster
  ports:
  - name: port-1
    nodePort: 32714
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: port-2
    nodePort: 32715
    port: 17176
    protocol: TCP
    targetPort: 17176
I would first confirm that kubectl get svc shows the two NodePorts. If that is the case, then it is highly likely that the destination port in the pods is not working. Could you check in the pods whether the ports are listening correctly? Then I would also advise checking access via the ClusterIP as well.
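Concretely, the checks could look like this; the service and pod names are placeholders:

# 1. Confirm both node ports are registered on the Service
kubectl get svc <service-name>
# 2. Check inside a pod that something is actually listening on 17176
#    (use ss -tln if netstat is not available in the image)
kubectl exec -it <pod-name> -- netstat -tlnp
# 3. Test the second port from inside the cluster via the ClusterIP
kubectl run tmp --rm -it --image=busybox --restart=Never -- nc -vz 10.97.153.249 17176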
I've got a pod with 2 containers, both running nginx. One is running on port 80, the other on port 88. I have no trouble accessing the one on port 80, but I can't seem to access the one on port 88. When I try, I get:
This site can’t be reached
The connection was reset.
ERR_CONNECTION_RESET
So here are the details.
1) The container is defined in the deployment YAML as:
- name: rss-reader
  image: nickchase/nginx-php-rss:v3
  ports:
  - containerPort: 88
2) I created the service with:
kubectl expose deployment rss-site --port=88 --target-port=88 --type=NodePort --name=backend
3) This created a service of:
root@kubeclient:/home/ubuntu# kubectl describe service backend
Name: backend
Namespace: default
Labels: app=web
Selector: app=web
Type: NodePort
IP: 11.1.250.209
Port: <unset> 88/TCP
NodePort: <unset> 31754/TCP
Endpoints: 10.200.41.2:88,10.200.9.2:88
Session Affinity: None
No events.
And when I tried to access it, I used the URL:
http://[nodeip]:31754/index.php
Now, when I instantiate the container manually with Docker, this works.
So does anybody have a clue what I'm missing here?
Thanks in advance...
My presumption is that you're using the wrong access IP. Are you trying to access the minion's IP address on port 31754?
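If it helps, a quick way to narrow it down (the IPs below come from the describe output above; run the second command from a cluster node or another pod):

# Through the NodePort, using a node's IP rather than the service's ClusterIP
curl http://<node-ip>:31754/index.php
# Directly against one of the endpoints, bypassing the service
curl http://10.200.41.2:88/index.php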