error: unable to upgrade connection: container not found ("wordpress")

My goal is to list the environment variables in my wordpress pod:
kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-77f45f895-lxh5t          1/1     Running   993        92d
wordpress-mysql-7d4fc77fdc-x4bfm   1/1     Running   87         92d
Although the pod is running, exec fails:
kubectl exec wordpress-77f45f895-lxh5t env
error: unable to upgrade connection: container not found ("wordpress")
If I try the other one
kubectl exec wordpress-mysql-7d4fc77fdc-x4bfm env
Unable to connect to the server: net/http: TLS handshake timeout
My services
wordpress NodePort 10.102.29.45 <none> 80:31262/TCP 94d
wordpress-mysql ClusterIP None <none> 3306/TCP 94d
Why is the container not found?

Looking at your output, I think your containers are crashing: the first pod has restarted 993 times and the second one 87 times. You can check the logs of the containers and the events of the pods:
kubectl logs {{podname}} for pod logs
kubectl describe pod {{podname}} for a detailed description.
As suggested by @mdaniel in the comments, check the ports as well.
Are you able to access the application on the NodePort?
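When a container is crash-looping, the current container is often already gone by the time you exec into it; the logs of the previous, crashed instance usually show why it died. A couple of commands worth trying (pod name taken from the question):
kubectl logs wordpress-77f45f895-lxh5t --previous   # logs of the last crashed container
kubectl describe pod wordpress-77f45f895-lxh5t      # check the Events section at the bottom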

The problem may be that the container hasn't been started yet. Use
kubectl describe pod <podname>
to check whether you see events such as:
Normal Created 19s kubelet Created container container
Normal Started 17s kubelet Started container container
Once the container has been created and started, you should no longer see that error message.
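Once the container is actually running, note that the bare kubectl exec POD COMMAND form is deprecated; the idiomatic form passes the command after a -- separator:
kubectl exec wordpress-77f45f895-lxh5t -- env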

Related

Kubernetes Ingress nginx on Minikube fails

minikube v1.13.0 on Ubuntu 18.04 with Kubernetes v1.19.0 on Docker 19.03.8, using helm/helmfile ("v3.3.4"). The Ubuntu VM runs on VMware Workstation on Windows 10 with networking set to NAT; everything is on my home Wi-Fi network.
I am trying to use the ingress backend stable/nginx-ingress 1.36.0. I do have nginx-ingress-1.36.0.tgz in the ingress/charts folder, and I have enabled the ingress addon with minikube addons enable ingress.
Before I enabled ingress on minikube, everything deployed successfully (no errors), but the LoadBalancer service stayed pending:
ClusterIP 10.101.41.156 <none> 8080/TCP
ingress-controller-nginx-ingress-controller LoadBalancer 10.98.157.222 <pending> 80:30050/TCP,443:32294/TCP
After I enabled ingress on minikube, I now get this connection refused error:
STDERR:
Error: UPGRADE FAILED: cannot patch "ingress-service" with kind Ingress:
Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.kube-system.svc:443/extensions/v1beta1/ingresses?timeout=30s":
dial tcp 10.105.131.220:443: connect: connection refused
I don't know what this IP 10.105.131.220 is; it looks like a private IP. It is not my minikube IP, my VM IP, or my laptop IP, and I can't ping it.
It all still deploys fine, but the LoadBalancer still shows pending.
Update
I had missed one of the steps in the documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
I stopped/deleted minikube and redid everything. Now the error is gone, but the LoadBalancer is still <pending>.
By default, local solutions like minikube do not provide a LoadBalancer. Cloud platforms like EKS, Google Cloud, and Azure do it for you automatically by spinning up a separate LB in the background. That's why you see the Pending status.
Solutions:
use MetalLB on minikube
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation. In short, it allows you to create Kubernetes services of type LoadBalancer in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers.
Installation:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml
namespace/metallb-system created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
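MetalLB also needs to be told which address range it may assign to LoadBalancer services. A minimal layer2 ConfigMap sketch for MetalLB v0.8.x; the address range below is an assumption, pick a free range from your minikube network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.230-192.168.99.250
Apply it with kubectl apply -f metallb-config.yaml (the filename is arbitrary); the ingress controller service should then get an EXTERNAL-IP from that pool.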
use minikube tunnel
Services of type LoadBalancer can be exposed via the minikube tunnel
command. It must be run in a separate terminal window to keep the
LoadBalancer running. Ctrl-C in the terminal can be used to terminate
the process at which time the network routes will be cleaned up.
minikube tunnel runs as a process, creating a network route on the host to the service CIDR of the cluster using the cluster’s IP
address as a gateway. The tunnel command exposes the external IP
directly to any program running on the host operating system.
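In practice, with the service name from the output above:
# terminal 1 - keep this running
minikube tunnel
# terminal 2 - the EXTERNAL-IP should change from <pending> to a real address
kubectl get svc ingress-controller-nginx-ingress-controller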

When minikube on Mac is asked for URL, why does it instead start a service in a tunnel?

I installed the latest Docker, Minikube, and kubectl into my Mac (Catalina). I also have a recent MySQL, with the command line properly installed in the PATH. I'm using the stock terminal (zsh).
Docker started just fine; it tells me about the pods it has installed.
Minikube starts fine, and kubectl get all reports on its artifacts just fine.
Jeromes-MacBook-Pro:cloudnative-statelessness jerome$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mysql-7dbfd4dbc4-sz8ps 1/1 Running 0 15m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20m
service/mysql-svc NodePort 10.111.176.15 <none> 3306:30022/TCP 15m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mysql 1/1 1 1 15m
NAME DESIRED CURRENT READY AGE
replicaset.apps/mysql-7dbfd4dbc4 1 1 1 15m
When I run minikube service mysql-svc --url, I expect to get a URL like this one from another machine: http://192.168.99.101:31067. Instead I see something about starting a tunnel for the service:
Jeromes-MacBook-Pro:cloudnative-statelessness jerome$ minikube service mysql-svc --url
🏃 Starting tunnel for service mysql-svc.
|-----------|-----------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|-----------|-------------|------------------------|
| default | mysql-svc | | http://127.0.0.1:64966 |
|-----------|-----------|-------------|------------------------|
http://127.0.0.1:64966
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
At this point the terminal is non-responsive.
I believe that minikube service SERVICENAME should start the service and print that block of text. I also believe that the --url flag should merely print what is in the URL column and skip starting a tunnel.
Any good explanations of how I can get the result I want on my Mac?
And BTW, how do I recover control of the terminal session once it states "Because..."?
Thanks,
Jerome.
UPDATE ON 8/14/2020:
I took Saravanan's advice. I uninstalled Docker from my Mac and used homebrew to install docker + docker-machine + virtualbox (see https://www.robinwieruch.de/docker-macos). When I run "minikube service mysql-svc --url" I no longer get the tunnel problem. Thank you, Saravanan.
My problems have morphed into getting the correct version of my containers (compiled apps, then run through docker build) from Docker Hub. The YAML file I have points at my account there, but I'm afraid I have an obsolete version. What do I do to overwrite the current version on my Mac, or to delete the Docker containers so that kubectl create can pull the updated version?
The reason for this is that your minikube cluster is running inside a Docker container (the Docker driver).
Try changing the configuration to run it in VirtualBox. Then you can reach your SQL pod without tunneling.
# first delete the existing minikube image
$ minikube delete
# change the minikube driver to virtualbox
$ minikube config set vm-driver virtualbox
# start minikube again
$ minikube start
Ensure you have VirtualBox installed before proceeding.
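If the switch worked, the same command from the question should now print a routable URL directly instead of opening a tunnel. The IP below is illustrative; the port is the NodePort from your service:
$ minikube service mysql-svc --url
http://192.168.99.100:30022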

run kubernetes containers without minikube or etc

I want to just run an nginx-server on kubernetes with the help of
kubectl run nginx-image --image nginx
but this error was thrown:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
I then ran
kubectl run nginx-image --kubeconfig ~/.kube/config --image nginx
and again it threw:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
minikube start solves the problem, but it takes up resources...
I just want to ask: how can I run kubectl without minikube (or another such solution) being started? Please tell me if it is not possible.
When I run kubectl get pods, I get two pods; I just want one, and I know it is possible since I have seen it in some video tutorials.
Please help...
kubectl is a command-line tool responsible for communicating with your cluster; with Minikube, that cluster is the one Minikube runs for you. kubectl allows you to deploy applications, inspect and manage resources, and view logs. When you execute this command
kubectl run nginx-image --image nginx
kubectl connects to the minikube cluster and sends your request (run nginx) to it. So if you stop minikube, kubectl can't communicate with it. Minikube is responsible for running nginx; kubectl is just responsible for telling Minikube to run nginx.
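Put differently, kubectl only talks to whatever cluster your kubeconfig points at, which you can inspect yourself:
kubectl config current-context   # e.g. "minikube"
kubectl cluster-info             # the API server address kubectl sends requests to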
I mean you need to install Kubernetes in order to use it; it's not magic. If minikube isn't to your liking, there are many installers; try Docker Desktop or k3d.
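For example, a minimal sketch with k3d (assuming k3d v3+ and Docker already installed); it starts a lightweight k3s-based cluster in Docker and updates your kubeconfig so kubectl works immediately:
k3d cluster create dev           # creates the cluster and sets the kubectl context
kubectl run nginx-image --image nginx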

kubernetes liveness probe restarts the pod which ends in CrashLoopBackOff

I have a deployment with 2 replicas of an nginx + openconnect VPN proxy container (each pod has only one container).
They start without any problems and everything works, but once the connection crashes and my liveness probe fails, the container is restarted, ending up in CrashLoopBackOff because the openconnect and nginx restarts fail with:
nginx:
host not found in upstream "example.server.org" in /etc/nginx/nginx.conf:11
openconnect:
getaddrinfo failed for host 'vpn.server.com': Temporary failure in name resolution
It seems like /etc/resolv.conf is edited by openconnect, and on pod restart it stays the same (although it is not part of a persistent volume). I believed the whole container would be run from a clean docker image, where /etc/resolv.conf is not modified - right?
The only way to fix the CrashLoopBackOff is to delete the pod; the deployment's ReplicaSet then runs a new pod that works.
How is creating a new pod different from having the container in the pod restarted by the liveness probe (restartPolicy: Always)? Is the container restarted from a clean image?
restartPolicy applies to all Containers in the Pod, not the pod itself. Pods usually only get re-created when someone explicitly deletes them.
I think this explains why the restarted container with the bad resolv.conf fails but a new pod works.
A "restarted container" is just that, it is not spawned new from the downloaded docker image. It is like killing a process and starting it - the file system for the new process is the same one the old process was updating. But a new pod will create a new container with a local file system view identical to the one packaged in the downloaded docker image - fresh start.

kubernetes: a service is not accessible outside host

I am following the guide at http://kubernetes.io/docs/getting-started-guides/ubuntu/ to create a kubernetes cluster. Once the cluster is up, I can create pods and services using kubectl. Basically, I do the following:
kubectl run nginx --image=nginx --port=80
kubectl expose deployment/nginx
I see a pod and a service running:
# kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 192.168.3.1 <none> 443/TCP 2d
nginx 192.168.3.208 <none> 80/TCP 2d
When I try to access the service from the machine where the pod is running, I get back the nginx hello-world page. But if I try it from another machine in the kubernetes cluster, I get a timeout.
I thought all services were accessible from anywhere in the cluster. Why is it not working that way?
Thanks
Yes, services should be accessible anywhere in the cluster. Is your "another machine" listed in the output of kubectl get nodes? Is the node Ready? Maybe the machine wasn't configured correctly.
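A quick check, using the service IP from the question:
kubectl get nodes             # are all nodes Ready?
kubectl get pods -o wide      # which node is the nginx pod on?
curl http://192.168.3.208     # run this from the machine where it times out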
If you want to reach the service from anywhere in the cluster, you must use a network plug-in such as Flannel or Open vSwitch.
http://kubernetes.io/docs/admin/networking/#flannel
https://github.com/coreos/flannel#flannel
I found my error by comparing the installation with another one where it worked. This installation was missing an iptables rule that forced everything going to the containers onto the flannel interface, so the traffic was reaching the target host on eth0, which made it discard the packet. I do not know why the proxy didn't add that rule. Once I manually added it, it worked.
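If you hit the same symptom, comparing a working node against a broken one can surface the missing rule; these are generic diagnostics, not the exact rule from this setup:
ip route                          # is the service/pod subnet routed via the flannel interface?
iptables-save | grep -i flannel   # diff this against a node where the service works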
