Kubernetes liveness probe restarts the pod, which ends in CrashLoopBackOff - nginx

I have a Deployment with 2 replicas of nginx with an openconnect VPN proxy container (each pod has only one container).
They start without any problems and everything works, but once the connection crashes and my liveness probe fails, the nginx container is restarted, ending up in CrashLoopBackOff because the openconnect and nginx restarts fail with
nginx:
host not found in upstream "example.server.org" in /etc/nginx/nginx.conf:11
openconnect:
getaddrinfo failed for host 'vpn.server.com': Temporary failure in name resolution
It seems like /etc/resolv.conf is edited by openconnect and stays the same on pod restart (although it is not part of a persistent volume). I believed the whole container would be run from a clean docker image, where /etc/resolv.conf is not modified, right?
The only way to fix the CrashLoopBackOff is to delete the pod, and the Deployment's replication controller then runs a new pod that works.
How is creating a new pod different from the container in the pod being restarted by the liveness probe (restartPolicy: Always)? Is the container restarted from a clean image?

restartPolicy applies to all containers in the pod, not to the pod itself. Pods usually only get re-created when someone explicitly deletes them.
I think this explains why the restarted container with the bad resolv.conf fails but a new pod works.
A "restarted container" is just that, it is not spawned new from the downloaded docker image. It is like killing a process and starting it - the file system for the new process is the same one the old process was updating. But a new pod will create a new container with a local file system view identical to the one packaged in the downloaded docker image - fresh start.

Related

Run Kubernetes containers without minikube or similar

I want to just run an nginx-server on kubernetes with the help of
kubectl run nginx-image --image nginx
but this error was thrown:
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
I then ran
kubectl run nginx-image --kubeconfig ~/.kube/config --image nginx
and the same error was thrown again.
minikube start solves the problem, but it takes up resources...
I just want to ask: how can I run kubectl without minikube (or another such solution) being started? Please tell me if it is not possible.
When I run kubectl get pods, I get two pods, but I just want one, and I know it is possible since I have seen it in some video tutorials.
Please help...
kubectl is a command-line tool, and here it is responsible for communicating with minikube. kubectl allows you to run commands against minikube: you can use it to deploy applications, inspect and manage resources, and view logs. When you execute this command
kubectl run nginx-image --image nginx
kubectl tries to connect to minikube and sends your request (run nginx) to it. So if you stop minikube, kubectl can't communicate. minikube is responsible for running nginx; kubectl is just responsible for telling minikube to run nginx.
I mean you need to install Kubernetes in order to use it. It's not magic. If minikube isn't to your liking, there are many alternatives; try Docker Desktop or k3d.
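To make the error message above concrete: kubectl only needs a kubeconfig that points at a running cluster's API server. A minimal sketch of what ~/.kube/config looks like (the server address, names, and credentials here are placeholders, not values from a real cluster):

apiVersion: v1
kind: Config
clusters:
- name: local-cluster                  # placeholder cluster name
  cluster:
    server: https://127.0.0.1:6443     # placeholder API server address
contexts:
- name: local-context
  context:
    cluster: local-cluster
    user: local-user
current-context: local-context
users:
- name: local-user
  user: {}                             # credentials omitted; normally a token or client certificate

Installers like minikube, Docker Desktop, or k3d write this file for you when they create a cluster; without some cluster behind it, kubectl has nothing to talk to.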

503 Service Temporarily Unavailable with gitlab docker and nginx-proxy docker

Description:
I've set up the nginx-proxy container, which works really great with one of my two docker containers - a mini Go web server on dev.MY_IP_ADDRESS.com.
I've set it up for my gitlab docker container as well, which runs on MY_IP_ADDRESS.com:10080, but it doesn't seem to work on gitlab.MY_IP_ADDRESS.com.
I've done the same configuration as with my web server, by adding an environment variable:
gitlab:
  # other configs here
  environment:
    - VIRTUAL_HOST=gitlab.MY_IP_ADDRESS.com
  # more configs here
The only difference is that I set up my go server and the nginx-proxy server in the same docker-compose.yml, while the gitlab one uses a different docker-compose.yml file. I'm unsure if this has anything to do with it.
I've attempted to docker-compose up each file in different orders to see if that was the issue.
Error:
This is what I get when I go on gitlab.MY_IP_ADDRESS.com:
503 Service Temporarily Unavailable
nginx/1.11.8
Question:
Why isn't the reverse proxy for gitlab.MY_IP_ADDRESS.com working for gitlab? Is there a conflict somewhere? It works fine on MY_IP_ADDRESS.com:10080.
If any logs are needed or any more information let me know. Thanks.
I completely forgot about this question; I actually found a solution which worked for me:
The problem is that your docker-gen is not able to find your GitLab container and therefore does not generate the nginx configuration for gitlab.MY_IP_ADDRESS.com.
To solve this you have three options:
1.) If you are using the solution with separate containers and launch the docker-gen container with the -only-exposed flag, this might prevent it from finding GitLab. This was the issue in my case, which is why I mention it.
2.) In your case it is probably because your GitLab container and your nginx container do not share a common Docker network. Create one with docker network create nginx-proxy and add all your containers to it (see the sketch after this list).
3.) Another solution proposed in this issue is to add the line network_mode: bridge to your GitLab container. I did not test this myself.
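A sketch of option 2 in docker-compose form; the image tag and service layout are assumptions, and only the external network and VIRTUAL_HOST lines are the point. Create the network once with docker network create nginx-proxy, make sure the nginx-proxy container joins it too, and then in the GitLab compose file:

version: "2"
services:
  gitlab:
    image: gitlab/gitlab-ce            # assumed image
    environment:
      - VIRTUAL_HOST=gitlab.MY_IP_ADDRESS.com
    networks:
      - nginx-proxy
networks:
  nginx-proxy:
    external: true                     # created beforehand with: docker network create nginx-proxy

With both stacks on the shared network, docker-gen can see the GitLab container and generate the nginx configuration for it.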

Cannot connect Docker container to Weave network

I am trying to connect two Docker containers on different hosts with a Weave overlay network. On my first host, I could connect to the Weave network without any problems. But on the other host my command line freezes whenever I try to run a container with that network, or when I try to attach an existing container to that network later on.
Those are the commands I am using:
docker run -id --name test_container --net=weave test_img
and:
docker run -id --name test_container test_img
weave attach test_container
In both cases the command line blocks, and Ctrl+C cannot stop the command. When I close the terminal and open a new one, I can see the container when I execute docker ps -a. But when I try to start it, the same thing happens again.
Any ideas?
It turned out I hadn't pointed the Weave router on the second host to the correct IP of the first host. Running weave status showed that the connection had failed. Running weave connect <IP> with the correct IP address solved the problem. Still, it's strange that running a Docker container blocks the command line instead of just returning an error message.

Restarting Containers When Using Docker and Nginx proxy_pass

I have an nginx docker container and a webapp container successfully running and talking to each other.
The nginx container listens on port 80, and uses proxy_pass to direct traffic to the IP of the webapp container.
upstream app_humansio {
    server humansio:8080 max_fails=3 fail_timeout=30s;
}
"humansio" is set in the /etc/hosts file by docker because I've started nginx with --link humansio:humansio. The webapp container (humansio) is always exposing 8080.
The problem is, when I reload the webapp container, the link to the nginx container breaks and I need to restart that as well. Is there any way I can do this differently so I don't need to restart the nginx container when the webapp container reloads?
--
I've tried to connect them manually by using a common port (8001 on both), but since the containers actually reserve that port, the second one cannot use it as well.
Thanks!
I prefer to run the proxy (nginx or haproxy) directly on the host for this reason.
But an option is to "Link via an Ambassador Container" https://docs.docker.com/articles/ambassador_pattern_linking/
https://www.digitalocean.com/community/tutorials/how-to-use-the-ambassador-pattern-to-dynamically-configure-services-on-coreos
If you don't want to restart your proxy container whenever you have to restart one of the proxied ones (e.g. with fig), you could take a look at the auto-updated proxy configuration approach: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
If you use a modern version of Docker, the links in the nginx container to your web service probably do get updated (you can check with docker exec -ti nginx bash, then cat /etc/hosts). The problem is that nginx doesn't read /etc/hosts on every request: it caches the IP, and when the IP changes, nginx gets lost. docker kill -s HUP nginx, which makes nginx reload its config without a restart, helps too.
I have the same problem. I used to start my services with systemd unit files, and when you make one service (nginx) dependent on another (the webapp) and then restart the webapp, systemd is smart enough to restart nginx as well. Now I'm trying my luck with docker-compose, and restarting the webapp container confuses nginx.
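For the docker-compose case, a minimal sketch (service names and images are assumptions): on a compose network, Docker's embedded DNS keeps the name humansio resolvable even after the webapp container is recreated, and the HUP signal mentioned above makes nginx re-read its config and re-resolve the upstream:

version: "2"
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - humansio
  humansio:
    image: example/humansio-webapp     # hypothetical webapp image
    expose:
      - "8080"

After restarting the webapp (docker-compose restart humansio), a docker kill -s HUP <nginx-container> reloads nginx without restarting its container.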

riak-admin reports node not running but REST API responds

I have a 5-node Riak cluster running. I ssh to node 1 and run riak-admin test, the output of which is "Node is not running!". However, the REST API responds (e.g. http://{localhost}:8098/stats returns JSON stats as expected) and I can run a client that hits the ProtoBuf endpoint OK too. I must be making a noob mistake, but what? (Yes, I have tried sudo riak-admin test.)
I'm running Riak in a docker container on a Debian Jessie host and have established the session via docker exec -i -t [container name] bash. I have hit the HTTP endpoint with curl from that session.
This, as you might expect, turns out to be environmental. I have my five nodes running in five docker containers as per http://basho.com/riak-quick-start-with-docker/
Each time a container is recycled during a session on the host, it is assigned the next available IP address. The Riak instance in the container has its address statically configured, hence if I recycle a container, the actual IP and the static IP configured for Riak no longer match.
I've also encountered this when the hostname doesn't contain a ".", which is the case with docker's default hostnames. I always have to start my riak containers with docker run --hostname riakN.docker.
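That workaround in docker-compose form, as a sketch (the image name is an assumption; the point is pinning a hostname that contains a "."):

version: "2"
services:
  riak1:
    image: basho/riak-kv               # assumed image
    hostname: riak1.docker             # hostname with a dot, as riak-admin expects
  riak2:
    image: basho/riak-kv
    hostname: riak2.docker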
