Running Jenkins in a Docker Container - nginx

I'm trying to get some hands-on experience with Jenkins and wanted to run it in a Docker container. I was following the tutorial here. I have Docker installed on my machine, and using Kitematic I launched the official Jenkins Docker image (tag: latest) with:
docker run -p 8080:8080 jenkins
However, once the container is set up, when I go to 192.168.99.100:8080 (192.168.99.100 is my docker-machine IP) it shows the default nginx page. 192.168.99.100:8080/jenkins shows:
HTTP ERROR 404
Problem accessing /jenkins. Reason:
Not Found
The weird part is that Kitematic shows a web preview of the running container with Jenkins up and running fine, so how do I access it via the browser?
EDIT: Just tried docker run -p 8082:8080 jenkins and it works, i.e. I can see the Jenkins landing page. Why?

Check whether port 8080 is already taken by another application. Docker can't map Jenkins to that host port because something else owns it, which is why you can't reach Jenkins there. Try looking here: https://www.cyberciti.biz/tips/linux-display-open-ports-owner.html
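For example, a quick check on the Docker host (in this setup, the docker-machine VM; use whichever of ss or netstat is available):
# list listening TCP sockets and the owning process, then filter for 8080
sudo ss -ltnp | grep ':8080'
sudo netstat -tulpn | grep ':8080'
# if something else owns 8080, publish Jenkins on a free host port instead, e.g.:
docker run -p 8082:8080 jenkins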

HTTP 403 Forbidden error when accessing an IIS server inside a Docker container

(Image: successful Swagger call from inside the Docker container)
I managed to start the server in the Docker container.
Using a simple Dockerfile, I install the .NET SDK and hosting bundle and configure IIS from the command line.
I can start the API server with IIS and verify it with curl from inside the Docker container (image attached).
But when I try to call it from outside the Docker container, for example from my laptop, the only response I get is a 403 Forbidden error.
(Image: HTTP 403 Forbidden error)
I compared it with my local IIS settings, and every setting is exactly the same:
No Managed Code, and the Advanced Settings are the same.
What's the problem?
This is the Dockerfile I use:
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command"]
RUN Install-WindowsFeature Web-ASP
ADD https://download.microsoft.com/download/1/2/8/128E2E22-C1B9-44A4-BE2A-5859ED1D4592/rewrite_amd64_en-US.msi rewrite_amd64_en-US.msi
RUN Write-Host 'Installing URL Rewrite' ; Start-Process msiexec.exe -ArgumentList '/i', 'rewrite_amd64_en-US.msi', '/quiet', '/norestart' -NoNewWindow -Wait;
WORKDIR /app
COPY ./ /app
RUN mkdir C:/inetpub/wwwroot/api
COPY ./api C:/inetpub/wwwroot/api
EXPOSE 8080
and I install dotnet-sdk 3.1 and dotnet-hosting 6.0.4.
Please give me some advice.
Thanks in advance
I'm trying to containerize my server, built with Windows and .NET Core 3.1, but I ran into a problem when trying to access the exposed port.
By default, when you create or run a container using docker create or docker run, it does not publish any of its ports to the outside world. To make a port available to services outside of Docker, or to Docker containers which are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule which maps a container port to a port on the Docker host, exposing it to the outside world.
For more information about container networking, please refer to this manual: https://docs.docker.com/config/containers/container-networking/
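As a rough sketch applied to the Dockerfile above (the image tag your_api_image is hypothetical, and this assumes the IIS site is bound to port 8080 inside the container):
# build the image (name is hypothetical)
docker build -t your_api_image .
# publish container port 8080 (the EXPOSE'd port) on host port 8080
docker run -d -p 8080:8080 your_api_image
# then call the Docker host's address from the laptop, not the container's internal IP
curl http://<docker_host_ip>:8080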

Configure Docker to use a proxy server

I have installed Docker on Windows. When I try to run hello-world to test Docker, I get the following error:
Unable to find image
My computer uses a proxy server for communication, and I need to configure it in Docker. I know the proxy server address and port. Where do I update this setting? I tried https://docs.docker.com/network/proxy/#set-the-environment-variables-manually.
It is not working.
Try setting the proxy. Right-click the Docker icon in the system tray, go to Settings > Proxies, and add the following:
"HTTPS_PROXY=http://<username>:<password>@<host>:<port>"
If you are looking to set a proxy on Linux, see here
Alexandre Mélard's answer to the question Cannot download Docker images behind a proxy works; here is the simplified version:
Find out the systemd or init.d script path of the docker service by running service docker status or systemctl status docker; for example, on Ubuntu 16.04 it is at /lib/systemd/system/docker.service
Edit the script, for example with sudo vim /lib/systemd/system/docker.service, adding the following to the [Service] section:
Environment="HTTP_PROXY=http://<proxy_host>:<port>"
Environment="HTTPS_PROXY=http://<proxy_host>:<port>"
Environment="NO_PROXY=<no_proxy_host_or_ip>,<e.g.:172.10.10.10>"
Reload and restart the daemon: sudo systemctl daemon-reload && sudo systemctl restart docker or sudo service docker restart
Verify: docker info | grep -i proxy should show something like:
HTTP Proxy: http://10.10.10.10:3128
HTTPS Proxy: http://10.10.10.10:3128
This adds the proxy for docker pull, which is what the question is about. If a proxy is needed for running or building containers, either configure ~/.docker/config.json as the official docs explain, or set the proxy variables in the Dockerfile so the proxy is available inside the container.
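A minimal sketch of the ~/.docker/config.json route, reusing the proxy address from the example above (the noProxy list is just an illustration):
{
  "proxies": {
    "default": {
      "httpProxy": "http://10.10.10.10:3128",
      "httpsProxy": "http://10.10.10.10:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
With this in place, Docker injects the proxy variables into the containers it builds and runs for that client.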
I had the same problem on a Windows server and solved it by setting the HTTP_PROXY environment variable in PowerShell:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://username:password@proxy:port/", [EnvironmentVariableTarget]::Machine)
And then restarting docker:
Restart-Service docker
More information at Microsoft official proxy-configuration guide.
Note: with version 19.03.5, the error returned when pulling the image was connection refused.

How do I connect to a container hosted in Docker Toolbox?

I am attempting to run my ASP.NET Core 1.1 web API in a Docker container, but I cannot connect to the web API from a browser or curl. To troubleshoot, I have also brought up standard nginx and Apache httpd containers and cannot connect to these either, so I believe this is a Docker/Docker Toolbox/configuration issue rather than a problem with my application.
I'll focus on what I have done with nginx and Apache:
I am running Docker Toolbox on Windows 7 Professional, and everything seems to work as I would expect.
Docker commands all work as expected
I can access the underlying Windows filesystem
I can get the expected results from curl http://localhost (if I start the default IIS website on Windows 7)
So now I shut down IIS and run nginx in a container:
$ docker run -d -p 80:80 nginx
45bb1f373c11b820d8431de3eb3bf222d57d412de53e8625f461b62c4279e644
Docker now shows nginx running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45bb1f373c11 nginx "nginx -g 'daemon off" 47 seconds ago Up 48 seconds 0.0.0.0:80->80/tcp, 443/tcp admiring_pike
But I cannot connect with either curl (within the Docker Toolbox command prompt) or a web browser in Windows:
$ curl http://localhost
curl: (7) Failed to connect to localhost port 80: Connection refused
I get exactly the same results if I run an Apache 2.4 (httpd) container.
Any ideas? Thanks! Peter
I have found the answer in another question here.
Because Docker Toolbox runs containers in a lightweight Linux VM, the VM has its own IP address. One needs either to map localhost to the VM using DOCKER_HOST or to access the VM via its IP address, found using the command:
docker-machine ip default
As you are running in a VM, you need to follow the Docker documentation here.
After that, run the following command to check the IP address of your VM:
docker-machine ip default
Start nginx and hit [default machine IP]:port in the browser. It works!
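For example, a quick check along those lines (this assumes the default machine name and that nginx is published on host port 80 as in the question):
$ docker-machine ip default                  # prints the VM's address, e.g. 192.168.99.100
$ curl http://$(docker-machine ip default)   # reaches nginx published on port 80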

503 Service Temporarily Unavailable with gitlab docker and nginx-proxy docker

Description:
I've set up the nginx-proxy container, which works really well with one of my two Docker containers, a mini Go web server on dev.MY_IP_ADDRESS.com.
I've set it up for my GitLab Docker container as well, which runs on MY_IP_ADDRESS.com:10080, but it doesn't seem to work with gitlab.MY_IP_ADDRESS.com.
I've done the same configuration as with my web server, by adding an environment variable:
gitlab:
  # other configs here
  environment:
    - VIRTUAL_HOST=gitlab.MY_IP_ADDRESS.com
  # more configs here
The only difference is that my Go server and the nginx-proxy server are defined in the same docker-compose.yml, while GitLab uses a separate docker-compose.yml file. I'm unsure whether this has anything to do with it.
I've attempted to docker-compose up the files in different orders to see if that was the issue.
Error:
This is what I get when I go on gitlab.MY_IP_ADDRESS.com:
503 Service Temporarily Unavailable
nginx/1.11.8
Question:
Why isn't the reverse proxy for gitlab.MY_IP_ADDRESS.com working for GitLab? Is there a conflict somewhere? It works fine on MY_IP_ADDRESS.com:10080.
If any logs are needed or any more information let me know. Thanks.
I completely forgot about this question, I actually found a solution which worked for me:
The problem is that your docker-gen is not able to find your GitLab container and therefore does not generate the nginx configuration for gitlab.MY_IP_ADDRESS.com.
To solve this you have three options:
1.) If you are using the solution with separate containers and launch the docker-gen container with the -only-exposed flag, this might prevent it from finding GitLab. This was the issue in my case, which is why I am mentioning it.
2.) In your case it is probably because your GitLab container and your nginx container do not share a common Docker network. Create one with docker network create nginx-proxy and add all your containers to it (see the sketch after this list).
3.) Another solution proposed in this issue is to add a line network_mode: bridge to your GitLab container. I did not test this myself.
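A rough sketch of option 2 (the container names below are hypothetical; check docker ps for yours):
# create a shared network (one-time)
docker network create nginx-proxy
# attach the reverse-proxy container and the GitLab container to it
docker network connect nginx-proxy <nginx_proxy_container>
docker network connect nginx-proxy <gitlab_container>
Alternatively, declare the network as external in each docker-compose.yml so both stacks join the same network.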

ASP.NET on Docker Not Serving Web App to Browser

I can't get my ASP.NET web application to get served to my browser when the web app is containerized in Docker.
I'm running a Mac, and I've used Visual Studio Code to create an ASP.NET web application. It's a simple, out-of-the-box demo based on the yo aspnet "Empty Application." When run "natively" (outside of Docker), this application serves a "Hello World!" to http://localhost:5000 just fine. In other words, running dnx web starts the web server (Kestrel) and yields:
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
This is good. Now enter Docker. I seem to have successfully built a Docker image containing the web application, and when I run the container in Docker, I get the same output from Kestrel. Also good, but I can no longer load the "Hello World!" page in my browser at http://localhost:5000. Instead, I get ERR_CONNECTION_REFUSED. This is fairly obviously because, due to the Docker "indirection," there is nothing serving directly on port 5000 anymore. In other words, I think there's an incorrect forwarding configuration, or I'm misunderstanding the addressing.
I believe that port forwarding is involved in this process. In my Dockerfile, I am using an EXPOSE 5000 which I thought would allow me to map my local use of port 5000 to the Docker container's port 5000 using a run command like this:
docker run -i -t -p 5000:5000 container_name
But that's not the case with http://localhost:5000 (ERR_CONNECTION_REFUSED). So it occurred to me that Docker is almost certainly not at localhost. I had noticed that when Docker loads, it says:
docker is configured to use the default machine with IP 192.168.99.100
So, I thought I'd try http://192.168.99.100:5000, but again (confusingly?) ERR_CONNECTION_REFUSED. Next, I read an interesting article here and I was able to determine from the suggested command
docker inspect container_name | grep IPAddress
that the container is assigned "IPAddress": "172.17.0.2".
So, I thought I'd try http://172.17.0.2:5000. And now we might actually be getting somewhere, because instead of a ERR_CONNECTION_REFUSED, I instead get a spinning hourglass and a resulting timeout. But still no "Hello World!"
What might I be missing?
It turns out that the web application is available at the IP address of the virtual machine 192.168.99.100 as suspected. 172.17.0.2 was clearly some sort of red herring.
The real kicker seems to be that, inside the container, the application has to listen on 0.0.0.0 (all interfaces) rather than only on localhost.
Following the excellent advice of this posting, I edited the Dockerfile and specified the following:
ENTRYPOINT ["dnx", "web", "--server.urls", "http://0.0.0.0:5000"]
Because...
This will allow our web application to serve requests that come in from the port forwarding provided by Docker, which defaults to 0.0.0.0.
The port mapping is crucial to link the host's port to the container's, but the EXPOSE command is apparently redundant. Now, when I run
docker run -i -t -p 80:5000 container_name
I can simply browse to http://192.168.99.100 (port 80 is implicit)
And voilà! There's my "Hello World!"
Apart from using http://0.0.0.0:5000 you can use http://*:5000:
ENTRYPOINT ["dnx", "web", "--server.urls", "http://*:5000"]
or you can include this in the project's commands configuration:
"commands": {
  "kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004",
  "web": ......
},
and the entrypoint in the Dockerfile can be:
ENTRYPOINT ["dnx","-p","project.json","kestrel"]
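A quick usage sketch under those settings (the image tag my-aspnet-app is hypothetical; the kestrel command above listens on port 5004):
# build the image and publish the container's port 5004 on host port 80
docker build -t my-aspnet-app .
docker run -d -p 80:5004 my-aspnet-app
# then browse to the docker-machine IP, e.g. http://192.168.99.100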
