Docker run: start services - nginx

I need nginx-openresty and redis in a single Docker container. I have written a Dockerfile and it works fine, but I currently have to start my redis service manually after logging into the container's bash. To automate this I wrote a .sh file with the instructions for starting and stopping the redis server and nginx, and set it as the entrypoint:
ENTRYPOINT ["./startup.sh"]
The .sh file is:
cd /etc/redis-installation/utils
echo -n | ./install_server.sh    # pipe empty input to accept the installer's defaults
service redis_6379 stop
cd /
cp ./dump.rdb /var/lib/redis/6379/    # seed the data set
service redis_6379 start
openresty
My problem is that the container starts and then exits as soon as the shell script completes. How can I keep the container running with nginx and redis both in a running state?

Try using docker-compose with a link between your app container and your redis container. I suggest using the official redis image.
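The container exits because Docker keeps a container running only as long as its entrypoint process runs, and your script ends once the daemonized services are launched. With compose, each service is one foreground process instead. A minimal sketch of such a docker-compose.yml (image tags and the volume path are assumptions, not from your setup):
web:
  image: openresty/openresty
  ports:
    - "80:80"
  links:
    - redis
redis:
  image: redis
  volumes:
    - ./data:/data
A seed file like your dump.rdb can be placed in ./data and mounted into the redis container's /data directory instead of being copied in at startup.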

Why does aspnet core start on port 80 from within Docker?

TL;DR: Why does an aspnet core app run on port 80 from within a Docker image, but on port 5000 outside of one?
Elaborate
I went through the aspnet core / docker tutorial found here:
https://learn.microsoft.com/en-us/dotnet/core/docker/building-net-docker-images
Halfway through the page, I start the application as prescribed:
dotnet run
Among other things, this prints this:
Now Listening on: http://localhost:5000
Great. That is what I expected. The next thing in the tutorial is to start the exact same application from within a Docker image.
docker build -t aspnetapp .
docker run -it --rm -p 5000:80 --name aspnetcore_sample aspnetapp
This results in
Now listening on: http://[::]:80
Wait. Wat? Why is the aspnet core app running on port 80? It was running on port 5000 when I ran it directly from the machine. There were no configuration file changes.
I suspect that it has something to do with the base docker images, but am not yet skilled enough in docker to track this down.
The microsoft/aspnetcore-build container builds on top of the microsoft/aspnetcore container. The Docker Hub page for the latter says:
A note on ports
This image sets the ASPNETCORE_URLS environment variable to http://+:80, which means that if you have not explicitly set a URL in your application, via app.UseUrls in your Program.cs for example, then your application will be listening on port 80 inside the container.
So it is the base image actively setting the port to 80. You can override it, if you want, by doing this in your Dockerfile:
ENV ASPNETCORE_URLS=http://+:5000
Also, it is worth noting that because of the docker command you are using (-p 5000:80), you will still be able to access the application at http://localhost:5000 whether you are running it directly or in a container.
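For completeness, a minimal sketch of a Dockerfile applying that override (base image from that era of the tutorial; the publish path and app name are assumptions):
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY ./publish .
# override the base image's default of http://+:80
ENV ASPNETCORE_URLS=http://+:5000
ENTRYPOINT ["dotnet", "aspnetapp.dll"]
Since the app now listens on 5000 inside the container, the port mapping changes accordingly:
docker run -it --rm -p 5000:5000 --name aspnetcore_sample aspnetapp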
Without changing the Dockerfile you can set the ports from outside the container (.NET Core 3.1, .NET 5, .NET 6, .NET 7+) with docker run arguments. The URLs must match the container-side ports of the -p mappings, and the https binding additionally needs a dev certificate mounted into the container (see the linked samples below):
docker run -it --rm -p 5000:80 -p 5001:443 -e ASPNETCORE_URLS="https://+:443;http://+:80" -e ASPNETCORE_HTTPS_PORT=5001 --name aspnetcore_sample aspnetapp
More details:
https://github.com/dotnet/dotnet-docker/blob/17c1eec582e84ba9cbea5641cd9cc13fe1a41c39/samples/run-aspnetcore-https-development.md?plain=1#L85
https://github.com/dotnet/dotnet-docker/blob/5926a01d44bd47b6202ba71e30f9faa08fad1aec/samples/run-in-sdk-container.md?plain=1#L109
If you are using .NET Core 2.2 or higher, then you should use another image: mcr.microsoft.com/dotnet/core/aspnet:2.2. In that case specifying ENV ASPNETCORE_URLS=http://+:5000 does not help. You can still force the app to listen on port 5000 by using UseUrls("http://*:5000") in the Program.cs file.
Some links in other answers are for older versions or no longer exist. The following applies to v6.
All the mcr.microsoft.com/dotnet/aspnet images are here. Suppose you are using the alpine version.
The aspnet image is based on the runtime image, as shown here.
The runtime image is based on the runtime-deps image, as shown here.
The runtime-deps image is based on the amd64/alpine image, as shown here (an older version, but with the same structure). And it sets ENV ASPNETCORE_URLS=http://+:80, as shown here, which means the container is listening on port 80.
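You can confirm this from the command line instead of chasing the Dockerfiles (the tag here is one example of the v6 images):
docker image inspect mcr.microsoft.com/dotnet/aspnet:6.0-alpine --format '{{.Config.Env}}'
The output lists ASPNETCORE_URLS=http://+:80 among the image's environment variables.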
A Windows networking stack limitation plays hard on Windows Docker containers (reference video linked at the end of this answer).
docker run -it --rm -p ${host_computer_port}:${container_port} --name ${container_name} ${image_name}
Example of the command:
docker run -it --rm -p 5000:8090 --name dockerwebapp9172020c dockerwebapp9172020
What does the above command mean?
Your machine's port (5000) is mapped to the container's port (8090). The mapping alone does not make the application inside the container listen on port 8090; see the Dockerfile below for how the container port is tied to the application port.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1903 AS base
WORKDIR /app
EXPOSE 8090
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1903 AS build
WORKDIR /src
COPY ["DockerWebApp/DockerWebApp.csproj", "DockerWebApp/"]
RUN dotnet restore "DockerWebApp/DockerWebApp.csproj"
COPY . .
WORKDIR "/src/DockerWebApp"
RUN dotnet build "DockerWebApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "DockerWebApp.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
# make the app listen on 8090 inside the container (matches the EXPOSE above)
ENV ASPNETCORE_URLS http://*:8090
ENTRYPOINT ["dotnet", "DockerWebApp.dll"]
Testing
Because of the Windows networking stack limitation, you cannot open the following URL directly from the host:
http://localhost:5000
Let's move on to the first workaround.
Workaround 1: hit the container directly (run the command below in PowerShell or Command Prompt)
PS C:\> docker inspect f31e8add55af
Find the container's IP address in the "Networks" node at the very end of the output, then open this in a browser:
http://{container IP}:8090
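If you only need the IP, docker inspect can extract it directly with a format template (same container ID as above):
docker inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" f31e8add55af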
Workaround 2: from the Windows host
First, find your machine's IP address:
C:\> ipconfig
Once you have it, open the following in a browser:
http://{Your Machine IP}:5000
This works because host port 5000 is mapped to container port 8090, and the asp.net core application inside the container listens on port 8090.
Reference: Windows Containers and Docker: 101

Send a file via SFTP to a Docker Container

I have a Docker container running with an app on Linux. The container is hosted on a Mac (development) or AWS (production). I want to be able to send a file to this container remotely. How can I achieve that?
Thank you.
You need to install an SSH server in the image you are running, or make sure one is already installed. Then you need to map the SSH port (default 22) of your container to a host port so you can reach the container from outside the host. For example:
docker run -p 10022:22 app_container
If running on AWS, check the security group of the EC2 instance that hosts the container and allow the host port (10022 in the example above) to be accessible from outside.
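A minimal sketch of what installing the server could look like in the Dockerfile (this assumes a Debian/Ubuntu base image; user accounts and hardening are omitted):
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
EXPOSE 22
# sshd becomes the main process here; in practice you would run it alongside your app, e.g. under a supervisor
CMD ["/usr/sbin/sshd", "-D"]
With the port mapping above, a file can then be sent with standard SFTP:
sftp -P 10022 user@docker-host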
You may also use docker cp to copy files between a container and the local drive.
Be aware of the syntax: wildcards like * are not supported, but docker cp is recursive and copies directories.
So e.g.
docker cp c867cee9451f:/var/www/html/themes/ .
copies the whole themes folder with subdirectories to your local drive while
docker cp c867cee9451f:/var/www/html/themes/* . #### does not work
won't work.
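The reverse direction works the same way, so you can also push a file into the container (same container ID; the destination path is just an example):
docker cp ./local-file.txt c867cee9451f:/var/www/html/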

Docker shows inconsistent behaviour when creating container from image

I am developing a web application which depends on a Moodle system, as it uses Moodle's web services. For my automated tests, I wanted to use Docker to provide a preconfigured Moodle application on all my machines. Therefore I created a Docker image, which I import from a .tar.gz file.
However, creating a new container instance from this image behaves inconsistently. Sometimes the container boots up correctly and everything works fine; sometimes the container starts but the Moodle website is not reachable. If I connect to the container using docker exec -it <container> bash, I see that apache is running. The error logs do not show any entries which might be related to this issue.
If I kill the container instance and boot it up again, everything works as expected (sometimes this step has to be repeated multiple times). Do you have any idea what could cause this strange behaviour? Has anyone experienced similar issues?
Docker is running on Ubuntu 14.04. The problem appears on several machines. The script which imports the image and starts the container looks like this:
#!/usr/bin/env bash
docker rm -f moodle                    # remove any previous container
docker load < my-moodle.tar.gz         # import the image
docker run -d -p 8080:80 -p 8443:443 -p 3306:3306 --name moodle moodle-image
Thanks in advance!
Successful container startup depends on your container entrypoint and external resources (if the entrypoint has external dependencies). What is the entrypoint? Does it depend on external resources?
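A common culprit in an all-in-one image like this is the web server coming up before the bundled database accepts connections. A hedged sketch of an entrypoint that waits (service names and commands are assumptions about the image's contents):
#!/usr/bin/env bash
service mysql start
# block until MySQL answers before starting the web server
until mysqladmin ping --silent; do
  sleep 1
done
exec apache2ctl -D FOREGROUND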

Why can't I see my files inside a docker container?

I'm a Docker newbie and I'm trying to setup my first project.
To test how to play with it, I just cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL I reach the homepage, to be more specific a Symfony start page.
Moreover, with this command
docker run -i -t testdocker_application /bin/bash
I'm able to log in to the container.
My problem is that if I go to the application folder through bash, the folder that I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I wrong?
Here some infos about my env:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
Looks like you have a docker-compose.yml file but are running the image with docker. You don't actually need docker-compose to start a single container. If you just want to start the container your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following to drop into a shell:
stdin_open: true
command: /bin/bash
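Merged into the config above, the service would look like this (only the last two lines are new):
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
  stdin_open: true
  command: /bin/bash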

Hostname resolution fails when running docker build from a docker container

We are running a Jenkins CI server in a docker container, started with docker-compose. The Jenkins server runs jobs which pull projects from git and build docker containers the standard way, by executing docker build . on them. To be able to use docker inside the Jenkins container, we mount the host's /var/run/docker.sock into it with docker-compose.
Some of the Dockerfile-s we are trying to build there are downloading files from our fileserver (3rd party installation images for example). Such a Dockerfile command looks like RUN curl -o xx.zip http://fileserver/xx-1.2.3.zip.
The fileserver hostname is resolved through the /etc/hosts file to the public IP of the host that runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes an extra_hosts parameter pointing fileserver to the host's public IP.
The problem is that building the docker container from the Jenkins container fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id>, I can execute the same curl command and it resolves the host, but if I run docker build . there, which tries to run the same curl command, it fails to resolve the host.
Our host is RHEL, and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it is some RedHat-specific issue (again).
Add --network=host so that the build environment uses the host machine's name resolution:
docker build --network=host foo/bar:latest .
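If you would rather not share the host's network stack for the whole build, docker build also accepts --add-host, mirroring the extra_hosts entry the Jenkins container already has (the IP here is a placeholder):
docker build --add-host fileserver:10.0.0.5 -t foo/bar:latest .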
Docker builds don't happen on the machine issuing the command (your jenkins container, in this case) - they happen on the machine with the Docker Engine. This means that your Jenkins machine tars up the source directory and ships it back to the parent machine for the build to happen. So, check if the curl command works from the parent machine, not the Jenkins container.
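For example, from the parent machine (the RHEL host in this case) run the same fetch the Dockerfile performs:
curl -o /tmp/xx.zip http://fileserver/xx-1.2.3.zip
If that also fails on the host, the problem is the host's name resolution rather than docker build.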
