Linking the exposed Docker port with the default nginx port - nginx

I have this Dockerfile:
FROM alpine:3.4
RUN apk update
RUN apk add nginx
RUN apk update
RUN cp index.html /var/lib/nginx/html/
EXPOSE 8080
Now, how can I access the file index.html on, let's say, port 9000 on localhost? I'm puzzled. Please ask if I am not clear. Just an outline of a solution would be highly appreciated.

The main application is nginx, so start from an nginx base image and simply copy your index.html into it.
This assumes that you have index.html in your local directory (where the Dockerfile is located):
FROM nginx:1.10-alpine
COPY ./index.html /usr/share/nginx/html
Build using
docker build -t mywebserver:latest .
Then your docker-compose.yml file could look like:
version: "2"
services:
mywebserver:
image: mywebserver:latest
ports:
- "8080:80"
command: ["nginx", "-g", "daemon off;"]
And bring the container up using
docker-compose up -d
The command could also be omitted, since the base image already defines it, but it's good practice to include the actual command in the service definition.
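Note that the host-side number in the ports mapping is what you browse to, so reaching the page on localhost:9000, as asked, just means mapping host port 9000 to the container's port 80. A minimal check without compose, using the image built above, could be:
docker run -d -p 9000:80 mywebserver:latest   # publish container port 80 on host port 9000
curl http://localhost:9000/index.html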

Related

Error "Back-off restarting failed container" in nginx pod

Simple use case: deploy the nginx image using Kubernetes.
Below is the Dockerfile used to create the image. The ./build/ in the Dockerfile COPY statement is the output directory (npm run build) of the React.js code; it contains just static files.
FROM nginx
COPY ./build/ /usr/share/nginx/html/
RUN rm -f /etc/nginx/conf.d/default.conf
COPY ./default.conf /etc/nginx/conf.d/
CMD ["nginx"]
EXPOSE 80
Why does the nginx deployment give the error "Back-off restarting failed container"?
I ran the container manually, and nginx is not running, although I can start nginx manually inside it. Am I missing some configuration in the Dockerfile or in the /etc/nginx/conf.d/default.conf file? The files are copied and available in a GitHub path for reference.
root@desktop:~/github/nginx-app# docker run -it private.registry.corp/company-dev-poc/nginx-cms:001 bash
root@fc7ebd431ae2:/# service nginx status
[FAIL] nginx is not running ... failed!
Use CMD ["nginx", "-g", "daemon off;"]
Alternatively, you do not need to specify the command at all: CMD and EXPOSE are already defined in the base image (nginx in this case), so they need not be defined again.
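The underlying reason is that a container only stays up as long as its main process does; with the default configuration nginx daemonizes, the foreground process exits immediately, and Kubernetes keeps restarting the pod. A minimal corrected Dockerfile, assuming the same ./build/ output and default.conf as above, might look like:
FROM nginx
COPY ./build/ /usr/share/nginx/html/
RUN rm -f /etc/nginx/conf.d/default.conf
COPY ./default.conf /etc/nginx/conf.d/
# keep nginx in the foreground so the container's main process does not exit
CMD ["nginx", "-g", "daemon off;"]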

COPY doesn't work on Docker prod

My Dockerfile looks like this:
FROM nginx
COPY dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/
My docker-compose.yml file looks like this:
version: '2'
services:
portfolio:
build: .
ports:
- "80:80"
When I run docker-compose up -d on my Mac, I'm able to verify that the nginx.conf gets copied into the container.
However, when I run docker-compose up -d on my digital ocean prod machine, the nginx.conf file doesn't get copied over! Instead, I find the default nginx.conf file in /etc/nginx.
What am I missing here?
OK, the following worked:
Clearing all images and containers, running docker-compose down, and then running docker-compose up did the trick! Docker now copies over my nginx.conf file just fine!
To anyone else who is stuck with the same scenario, follow this guide.
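A likely explanation (an assumption here, since the build output isn't shown) is that the prod machine kept serving a previously built image, because docker-compose up on its own does not rebuild a service that already has an image. Forcing a rebuild avoids wiping everything:
# rebuild the service image, ignoring the build cache, then recreate the container
docker-compose build --no-cache portfolio
docker-compose up -d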

Why can't I see my files inside a Docker container?

I'm a Docker newbie and I'm trying to setup my first project.
To test how to play with it, I just cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL I reach the homepage, or to be more specific, a Symfony start page.
Moreover with this command
docker run -i -t testdocker_application /bin/bash
I'm able to login to the container.
My problem is that if I go to the application folder through bash, the folder that I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I going wrong?
Here is some info about my environment:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
Looks like you have a docker-compose.yml file but are running the image with docker. You don't actually need docker-compose to start a single container. If you just want to start the container your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following to drop into a shell:
stdin_open: true
command: /bin/bash
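Putting those pieces together, the service definition from the question with the two suggested lines added would look roughly like this sketch:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
  stdin_open: true
  command: /bin/bash
After docker-compose up -d, the host's ./symfony folder is bind-mounted at /var/www/symfony inside the container, which is exactly what the plain docker run command was missing unless the -v flags are passed explicitly.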

Docker: How to run two web apps (app1, app2) on the same port (say 80) in Docker

I have tried running app1. When visiting a web page like http://localhost:80, I am able to see the content.
In the Dockerfile I add an HTML file for app1 and expose port 80.
But I don't know how to approach app2. Do I need to add an HTML file the same way as for app1, or should I do something else?
Could anyone tell me how to approach app2?
Dockerfile:
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y nginx
COPY index.html /usr/share/nginx/html/index.html
ENTRYPOINT ["/usr/sbin/nginx","-g","daemon off;"]
EXPOSE 80
For this I need to create two images and check whether or not the two apps are running on the same port.
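In outline (a sketch, since the thread stops at the question): two containers cannot both bind host port 80, so each app's nginx keeps listening on port 80 inside its own container, and you either publish them on different host ports or put a single reverse proxy in front that routes to both. Assuming two images built from Dockerfiles like the one above, in hypothetical ./app1 and ./app2 directories:
docker build -t app1 ./app1                   # image with app1's index.html
docker build -t app2 ./app2                   # image with app2's index.html
docker run -d --name app1 -p 8081:80 app1     # reachable at http://localhost:8081
docker run -d --name app2 -p 8082:80 app2     # reachable at http://localhost:8082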

Docker nginx container exits instantly

I want to have some control over the official nginx image, so I wrote my own Dockerfile that adds some extra functionality to it.
The file has the following contents:
FROM nginx
RUN mkdir /var/www/html
COPY nginx/config/global.conf /etc/nginx/conf.d/
COPY nginx/config/nginx.conf /etc/nginx/nginx.conf
When I build this image and create a container of the image using this command:
docker run -it -d -v ~/Projects/test-website:/var/www/html --name test-nginx my-nginx
It exits instantly, and I can't access the log files either. What could be the issue? I've copied the Dockerfile of the official nginx image, and it does the same thing.
So I didn't know about the docker ps -a and docker logs <last container id> commands. I ran them and it turned out I had a duplicate daemon off; directive.
Thanks for the help, guys!
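For anyone hitting the same symptom, the debugging commands mentioned above are, roughly:
docker ps -a                       # list all containers, including the one that exited
docker logs <last container id>    # show the container's output, e.g. the nginx config error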
