Podman ports: connection refused after stopping and restarting a pod (networking)
I'm new to using podman and am trying to follow along with Richard Walker's tutorial for containerizing a Django app (https://www.richardwalker.dev/django-podman.html).
It works fine until I try to stop and restart the pod. After that, my host machine can no longer connect to the ports (which were exposed when the images were built and mapped when the pod was created).
The docs & guides suggest that it is as simple as issuing
podman pod stop ...
podman pod start ...
but this does not seem to work.
I'd appreciate your help if you can see what I'm missing.
$ podman pod create -p 8000 -p 7000 -p 5432 -n cardpod
8553ad8fc0b14a849598a51c4ffcbffa9d6d094b96b542f0e432fc0d6dfd22ff
$ podman run --name deckofcards-prod-ctr --pod cardpod -d richardwalker.dev/deckofcards-prod-img
3dbf6f9ad043fe65492f0e15be642af92916ad9e09d941e1f96315343a8d2fae
$ curl http://127.0.0.1:7000/deck/
[{"suit":"clubs","face":"queen","value":10},{"suit":"spades","face":"four","value":4},{"suit":"hearts","face":"king","value":10},{"suit":"diamonds","face":"six","value":6},{"suit":"hearts","face":"two","value":2},{"suit":"diamonds","face":"ace","value":1},{"suit":"hearts","face":"eight","value":8},{"suit":"clubs","face":"three","value":3},{"suit":"spades","face":"five","value":5},{"suit":"clubs","face":"nine","value":9},{"suit":"spades","face":"nine","value":9},{"suit":"diamonds","face":"five","value":5},{"suit":"hearts","face":"nine","value":9},{"suit":"diamonds","face":"two","value":2},{"suit":"clubs","face":"king","value":10},{"suit":"diamonds","face":"eight","value":8},{"suit":"clubs","face":"ace","value":1},{"suit":"hearts","face":"three","value":3},{"suit":"spades","face":"jack","value":10},{"suit":"hearts","face":"ten","value":10},{"suit":"spades","face":"king","value":10},{"suit":"spades","face":"ace","value":1},{"suit":"spades","face":"ten","value":10},{"suit":"hearts","face":"five","value":5},{"suit":"hearts","face":"ace","value":1},{"suit":"clubs","face":"eight","value":8},{"suit":"hearts","face":"jack","value":10},{"suit":"diamonds","face":"queen","value":10},{"suit":"clubs","face":"ten","value":10},{"suit":"diamonds","face":"nine","value":9},{"suit":"clubs","face":"five","value":5},{"suit":"clubs","face":"jack","value":10},{"suit":"diamonds","face":"ten","value":10},{"suit":"hearts","face":"queen","value":10},{"suit":"diamonds","face":"seven","value":7},{"suit":"hearts","face":"seven","value":7},{"suit":"hearts","face":"six","value":6},{"suit":"spades","face":"two","value":2},{"suit":"clubs","face":"two","value":2},{"suit":"clubs","face":"seven","value":7},{"suit":"spades","face":"seven","value":7},{"suit":"clubs","face":"four","value":4},{"suit":"spades","face":"queen","value":10},{"suit":"diamonds","face":"king","value":10},{"suit":"spades","face":"six","value":6},{"suit":"diamonds","face":"jack","value":10},{"suit":"diamonds","face":"four","value":4}
,{"suit":"hearts","face":"four","value":4},{"suit":"clubs","face":"six","value":6},{"suit":"diamonds","face":"three","value":3},{"suit":"spades","face":"three","value":3},{"suit":"spades","face":"eight","value":8}]
$ podman pod stop cardpod
8553ad8fc0b14a849598a51c4ffcbffa9d6d094b96b542f0e432fc0d6dfd22ff
$ podman pod start cardpod
8553ad8fc0b14a849598a51c4ffcbffa9d6d094b96b542f0e432fc0d6dfd22ff
$ curl http://127.0.0.1:7000/deck/
curl: (7) Failed to connect to 127.0.0.1 port 7000: Connection refused
More Info:
I can see that the Django servers are running by inspecting "podman logs <container_id>", and the mapped ports still show up in "podman port <pod_id>".
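As a quick sanity check from the host, the "connection refused" result can be reproduced without curl. The sketch below is purely illustrative (not part of the tutorial); it distinguishes a refused connection from a reachable port:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, False if refused or timed out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and socket.timeout
        return False

# After `podman pod start`, this mirrors what curl reports:
# port_open("127.0.0.1", 7000)
```

If this returns False right after `podman pod start` while `podman port` still lists the mapping, the port forward itself was not re-established on restart.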
Dockerfile, as per the tutorial:
# FROM directive instructing base image to build upon
FROM python:3.7-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Create and change to working dir
RUN mkdir /code
WORKDIR /code
# Copy code
COPY /release/ /code/
# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# EXPOSE port 7000 to allow communication to/from server
EXPOSE 7000
# CMD specifies the command to execute to start the server running.
CMD python3 manage.py runserver 0.0.0.0:7000
podman: version 2.0.2
distro: ubuntu 18.04
Related
How to properly start nginx in Docker
I want nginx in a Docker container to host a simple static hello-world HTML website. I want to simply start it with "docker run imagename". In order to do that I added the run parameters to the Dockerfile. The reason I want to do that is that I would like to host the application on Cloud Foundry in a next step. Unfortunately I get the following error when doing it like this.
Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
EXPOSE 5000
CMD ["nginx -d -p 5000:5000"]
Error:
Error starting userland proxy: Bind for 0.0.0.0:5000: unexpected error Permission denied.
From https://docs.docker.com/engine/reference/builder/#expose :
"EXPOSE does not make the ports of the container accessible to the host. To do that, you must use either the -p flag to publish a range of ports or the -P flag to publish all of the exposed ports. You can expose one port number and publish it externally under another number."
Drop the CMD ["nginx -d -p 5000:5000"] line: your base image FROM nginx:alpine already starts nginx. After you build from your Dockerfile, run it with:
docker run -d -p 5000:5000 <your_image>
Edit: if you want container port 80 mapped to host port 5000, use:
docker run -d -p 5000:80 <your_image>
Keep Docker running when shell script exits
I have the following Dockerfile:
FROM ubuntu
RUN apt-get update
EXPOSE 9000:80
# Install nginx
RUN apt-get install -y nginx
# Install curl
RUN apt-get -qq update
RUN apt-get -qq -y install curl
ENTRYPOINT service nginx start
When I run the build and run commands from a shell script, the Docker image is created and the container is started; however, when the shell script exits, the container is stopped. How can I keep the container running after the shell script exits? The idea is to have a running container with nginx on port 80 that can be accessed from the host using port 9000.
Don't run nginx as a background service. Launch it in the foreground, as the nginx container on hub.docker.com does:
CMD ["nginx", "-g", "daemon off;"]
With containers, when pid 1 dies, your container dies. It's identical to killing pid 1 (init) on any Linux machine.
How to setup Nginx as a load balancer using the StrongLoop Nginx Controller
I'm attempting to set up Nginx as a load balancer using the StrongLoop Nginx Controller. Nginx will be acting as a load balancer for a StrongLoop LoopBack application hosted by the standalone StrongLoop Process Manager. However, I've been unsuccessful at making the Nginx deployment following the official directions from StrongLoop. Here are the steps I've taken:
Step #1 -- My first step was to install Nginx and the StrongLoop Nginx Controller on an AWS EC2 instance. I launched an EC2 server (Ubuntu 14.04) to host the load balancer, and attached an Elastic IP to the server. Then I executed the following commands:
$ ssh -i ~/mykey.pem ubuntu@[nginx-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-nginx-controller
$ sudo sl-nginx-ctl-install -c 444
Then I opened up port 444 in the security group of the EC2 instance using a Custom TCP Rule.
Step #2 -- My second step was to set up two LoopBack application servers. To accomplish this I launched two more EC2 servers (both Ubuntu 14.04) for the application servers, and attached an Elastic IP to each server. Then I ran the following series of commands, once on each application server:
$ ssh -i ~/mykey.pem ubuntu@[application-server-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-pm
$ sudo sl-pm-install
$ sudo /sbin/initctl start strong-pm
Step #3 -- My third step was to deploy the application to each of the application servers. For this I used StrongLoop Arc:
$ cd /path/to/loopback-getting-started-intermediate # my application
$ slc arc
Once in the StrongLoop Arc web console, I built a tar for the application and deployed it to both application servers.
Then, in the Arc Process Manager, I connected to both application servers. Once connected, I clicked "load balancer," entered the Nginx host and port into the form, and pressed save. This caused a message to pop up saying "load balancer config saved."
Something strange happened at this point: the fields in StrongLoop Arc where I had just typed the settings for the load balancer (host and port) reverted back to the original values the fields had before I started typing. (The original port value was 555, and the original value in the host field was the address of my second application server.)
This is where I really don't know what to do next. I tried opening my web browser and navigating to the IP address of the Nginx load balancer, using several different port values. I tried 80, 8080, 3001, and 80, having opened up each in the security group, in an attempt to find the place I need to navigate to in order to see "load balancing" in action. However, I saw nothing at any of them, with the exception of port 80, which served up the "welcome to Nginx" page, not what I'm looking for.
How do I set up Nginx as a load balancer using the StrongLoop Nginx Controller? What's the next step in the process, assuming all of my steps listed are correct?
What I usually do is this:
sudo sl-nginx-ctl-install -c http://0.0.0.0:444
Maybe this can solve your problem.
Unable to connect to Docker Nginx build
I am trying to host a simple static site using the Docker Nginx image from Docker Hub: https://registry.hub.docker.com/_/nginx/
A note on my setup: I am using boot2docker on OS X. I have followed the instructions, and even so I cannot connect to the running container:
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker build -t wargames-front-end .
Sending build context to Docker daemon 813.6 kB
Sending build context to Docker daemon
Step 0 : FROM nginx
 ---> 42a3cf88f3f0
Step 1 : COPY app /usr/share/nginx/html
 ---> Using cache
 ---> 61402e6eb300
Successfully built 61402e6eb300
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker run --name wargames-front-end -d -p 8080:8080 wargames-front-end
9f7daa48a25bdc09e4398fed5d846dd0eb4ee234bcfe89744268bee3e5706e54
MacBook-Pro:LifeIT-war-games-frontend ryan$ curl localhost:8080
curl: (52) Empty reply from server
MacBook-Pro:LifeIT-war-games-frontend ryan$ docker ps -a
CONTAINER ID   IMAGE                       COMMAND                CREATED         STATUS         PORTS                                     NAMES
9f7daa48a25b   wargames-front-end:latest   "nginx -g 'daemon of   3 minutes ago   Up 3 minutes   80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp   wargames-front-end
Instead of localhost, use the boot2docker IP. First run boot2docker ip and use that IP: <your-b2d-ip>:8080. You also need to make sure you forwarded port 8080 in VirtualBox for boot2docker.
Here is the way to connect to the nginx docker container service:
docker ps                        # confirm nginx is running, which you have done
docker port wargames-front-end   # get the ports, for example: 80/tcp, 0.0.0.0:8080->8080/tcp, 443/tcp
boot2docker ip                   # get the IP address, for example: 192.168.59.103
So now you should be able to connect to:
http://192.168.59.103:8080
https://192.168.59.103:8080
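The two lookups in this answer (docker port for the mapping, boot2docker ip for the address) combine mechanically. A small illustrative sketch using the example values above; the helper name is mine, not a docker API:

```python
def url_from_mapping(host_ip: str, mapping: str, scheme: str = "http") -> str:
    """Build a URL from a docker-port style mapping like '0.0.0.0:8080->8080/tcp'."""
    published = mapping.split("->")[0]       # '0.0.0.0:8080' (the published side)
    host_port = published.rsplit(":", 1)[1]  # '8080'
    return f"{scheme}://{host_ip}:{host_port}"

print(url_from_mapping("192.168.59.103", "0.0.0.0:8080->8080/tcp"))
# http://192.168.59.103:8080
```

Note that only the host-side port (before the arrow) matters for connecting from outside; the container-side port is irrelevant to the URL.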
Here's how I got it to work:
docker kill wargames-front-end
docker rm wargames-front-end
docker run --name wargames-front-end -d -p 8080:80 wargames-front-end
Then I went to my VirtualBox and set up these settings:
GitLab Docker SSH issue
I looked at the different posts concerning GitLab, Docker, and SSH issues without finding any help, so I ask my question here. I have the following setup:
- a Linux box with Ubuntu Server 14.04 and IP 192.168.1.104
- DNS: git.mydomain.com = 192.168.1.104
- a GitLab Docker container that I start, according to the official doc, this way:
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 --volumes-from gitlab_data gitlab_image
or
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 -e "GITLAB_SHELL_SSH_PORT=2222" --volumes-from gitlab_data gitlab_image
- the Linux box runs an nginx which redirects (proxy_pass) git.mydomain.com to 192.168.1.104:8080
I can access git.mydomain.com without any issue; everything works. I generated an SSH key that I added to my profile on GitLab, and added the following lines to my ~/.ssh/config:
Host git.mydomain.com
  User git
  Port 2222
  IdentityFile /home/user/.ssh/id_rsa
If I try ssh -p 2222 git@git.mydomain.com the connection is closed. I assume it is because only a git-shell is permitted. But if I try
mkdir test
cd test
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin git@git.domain.com:user/test.git
git push -u origin master
it gets stuck with
Connection closed by 192.168.1.104
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
I also tried with git remote add origin git@git.domain.com:2222/user/ and the result was the same. Note that the logs of the GitLab Docker container include:
[2015-03-06T11:04:43+00:00] INFO: group[git] created
[2015-03-06T11:04:43+00:00] INFO: user[git] created
[2015-03-06T11:04:44+00:00] INFO: group[gitlab-www] created
[2015-03-06T11:04:44+00:00] INFO: user[gitlab-www] created
Any idea how I can fix this issue? Thanks in advance for your help.
I would guess that you have an authentication problem. Here are a few things you can try:
- Make sure you added your public key in GitLab.
- Check the permissions of your id_rsa file.
- Try temporarily disabling host verification with:
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
I have the same setup as you (Docker container in a VM, DNS pointing to the VM). I also configured .ssh/config like you. But when I log in with ssh I get:
ssh -p 2222 git@gitlab
PTY allocation request failed on channel 0
Welcome to GitLab, tomzo!
Connection to gitlab closed.
Git remotes do not need port 2222 configured. This is OK (works for me):
$ git remote -v
origin  git@gitlab:lab/workflow.git (fetch)
origin  git@gitlab:lab/workflow.git (push)
And I can push and pull with git:
$ git push
Everything up-to-date
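When debugging a setup like this, it helps to confirm what the ~/.ssh/config block actually resolves to for the host in question. A minimal, illustrative parser sketch (a deliberate simplification: real ssh_config also supports Host patterns, "key=value" syntax, and first-match-wins semantics):

```python
def parse_ssh_config(text: str) -> dict:
    """Parse a minimal ~/.ssh/config into {host: {option: value}} (lowercased option names)."""
    hosts, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        if key.lower() == "host":
            current = value.strip()
            hosts[current] = {}
        elif current is not None:
            hosts[current][key.lower()] = value.strip()
    return hosts

config = """\
Host git.mydomain.com
  User git
  Port 2222
  IdentityFile /home/user/.ssh/id_rsa
"""
print(parse_ssh_config(config)["git.mydomain.com"]["port"])  # 2222
```

With the Port entry in place, plain git remotes (no port in the URL) should pick up 2222 automatically, which matches the advice above.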