"Back-off restarting failed container" error in nginx pod

This is a simple use case: deploying an nginx image using Kubernetes.
Here is the Dockerfile used to create the image. The "./build/" in the COPY statement is the output directory of the ReactJS build (npm run build); it contains just static files.
FROM nginx
COPY ./build/ /usr/share/nginx/html/
RUN rm -f /etc/nginx/conf.d/default.conf
COPY ./default.conf /etc/nginx/conf.d/
CMD ["nginx"]
EXPOSE 80
Why does the nginx deployment fail with the error "Back-off restarting failed container"?
I ran the container manually and nginx was not running inside it, although I could start nginx by hand. Am I missing any configuration in the Dockerfile or in the /etc/nginx/conf.d/default.conf file? The files are copied into the image and are available in the GitHub path for reference.
root@desktop:~/github/nginx-app# docker run -it private.registry.corp/company-dev-poc/nginx-cms:001 bash
root@fc7ebd431ae2:/# service nginx status
[FAIL] nginx is not running ... failed!

Use CMD ["nginx", "-g", "daemon off;"]. With plain CMD ["nginx"], nginx daemonizes, the container's main process exits, and Kubernetes restarts it in a back-off loop, which is exactly the error you see.
Also, you do not need to specify the command at all: CMD and EXPOSE are already defined in the base image (nginx in this case), so they need not be defined again.
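Putting it together, a corrected Dockerfile could be as small as this sketch (it assumes the same ./build/ output and custom default.conf as in the question):
FROM nginx
# Replace the stock site config with the custom one
RUN rm -f /etc/nginx/conf.d/default.conf
COPY ./default.conf /etc/nginx/conf.d/
# Ship the static ReactJS build output to the default web root
COPY ./build/ /usr/share/nginx/html/
# No CMD/EXPOSE needed: the nginx base image already defines
# EXPOSE 80 and CMD ["nginx", "-g", "daemon off;"]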

Related

css file integrity check fails after docker build

I have the following issue.
Failed to find a valid digest in the 'integrity' attribute for resource 'http://127.0.0.1:8080/uistatic/css/bootstrap4.0.0.min.css' with computed SHA-256 integrity 'xLbtJkVRnsLBKLrbKi53IAUvhEH/qUxPC87KAjEQBNo='. The resource has been blocked.
And this is happening when I put my site into Docker by building this Dockerfile:
FROM python:3.6
COPY skfront /app
WORKDIR /app
RUN mkdir -p /static/resources
RUN mkdir logs
RUN mkdir certs
RUN pip3 install -r requirements.txt
RUN python3 manage.py collectstatic --settings blog_site.settings
EXPOSE 8080
CMD python3 website.py -l 0.0.0.0 -p 8080
This CSS is a static file that doesn't change.
Any ideas why this is happening?
I found out that building the Dockerfile on Windows can mess up the final image, most likely because Git on Windows checks text files out with CRLF line endings, which changes the bytes of the file and therefore the SHA-256 hash that the integrity attribute is checked against.
When I built my site under Linux there were no integrity errors.
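If you want to verify this, you can recompute the SRI digest of the file that actually ended up in the image and compare it with the integrity attribute in your HTML. A minimal sketch, assuming the image is tagged my-site:latest and the file was collected under /app/static/resources/ (both names are placeholders):
docker run --rm my-site:latest sh -c \
  "openssl dgst -sha256 -binary /app/static/resources/css/bootstrap4.0.0.min.css | openssl base64 -A"
# If the output differs from the sha256-... value in your HTML, the file's bytes
# (typically its line endings) changed somewhere between checkout and build.
Forcing LF line endings for static assets, for example with a .gitattributes rule like *.css text eol=lf, is one way to make builds from a Windows checkout reproducible.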

Shiny server docker app runs locally but not when deployed to AWS Fargate

I have a containerized R Shiny app that, when run locally with docker run --rm -p 3838:3838 [image], works as expected. The "landing" page appears when I go to localhost:3838 and all is good. However, when this container is deployed to AWS Fargate, things break down. The container appears to start and run, but no webpage is being served on 3838, even though all ports are pointed to 3838 in Fargate.
I'm using this Dockerfile:
FROM rocker/verse:3.5.1
LABEL Steven "email"
## Add shiny capabilities to container
RUN export ADD=shiny && bash /etc/cont-init.d/add
## Update and install
RUN tlmgr update --self
RUN tlmgr install beamer translator
## Add R packages
RUN R -e "install.packages(c('shiny', 'googleAuthR', 'dplyr', 'googleAnalyticsR', 'knitr', 'rmarkdown', 'jsonlite', 'scales', 'ggplot2', 'reshape2', 'Cairo', 'tinytex'), repos = 'https://cran.rstudio.com/')"
# Copy the app dir and theme dirs to their respective locations
COPY app /srv/shiny-server/ga-reporter
COPY app/report/themes/SwCustom /opt/TinyTeX/texmf-dist/tex/latex/beamer/
# Force texlive to find my custom beamer themes
RUN texhash
EXPOSE 3838
## Add shiny-server information
COPY shiny-server.sh /usr/bin/shiny-server.sh
COPY shiny-customized.config /etc/shiny-server/shiny-server.conf
## Add dos2unix to eliminate Win-style line-endings and run
RUN apt-get update && apt-get install -y dos2unix
RUN dos2unix /usr/bin/shiny-server.sh && apt-get --purge remove -y dos2unix && rm -rf /var/lib/apt/lists/*
RUN ["chmod", "+x", "/usr/bin/shiny-server.sh"]
CMD ["/usr/bin/shiny-server.sh"]
and shiny-server.conf
# Instruct Shiny Server to run applications as the user "shiny"
run_as shiny;
# Define a server that listens on port 3838
server {
  listen 3838;
  # Define a location at the base URL
  location / {
    # Host the directory of Shiny Apps stored in this directory
    app_dir /srv/shiny-server/ga-reporter;
    # Log all Shiny output to files in this directory
    log_dir /var/log/shiny-server;
    # When a user visits the base URL rather than a particular application,
    # an index of the applications available in this directory will be shown.
    directory_index on;
  }
}
with shiny-server.sh
#!/bin/sh
# ShinyServer: Make sure the directory for individual app logs exists
mkdir -p /var/log/shiny-server
chown -R shiny.shiny /var/log/shiny-server
# RUN ShinyServer
exec shiny-server >> /var/log/shiny-server.log 2>&1
I have edited the .conf file to serve the app (i.e., location /) from /srv/shiny-server/ga-reporter, which is also where I've copied the app in the Dockerfile. Shiny is listening on port 3838 and should serve the page there. Again, this works locally but not when deployed to AWS Fargate. I've tried logging Shiny output to stdout using the first answer provided here, but have had no luck seeing any errors. Server "health checking" is only offered in the "pro" version, so I can't check whether the server is actually running.
On AWS, the container starts and appears to function normally (i.e., the usual startup logs appear), but there is simply no page served at the location where I expect it.
I found another Shiny app that is on dockerhub running under the same configuration as the Fargate cluster but have had no luck trying to implement anything in the shiny-server.conf or the shiny-server.sh files located there.
What am I missing? Everything on Fargate is pointed to listening on 3838; there must be something I'm missing in the .conf file for this to be failing when deployed.
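One thing that can help here is to keep Shiny Server's output on stdout/stderr so that Fargate's awslogs driver forwards it to CloudWatch; a minimal sketch of shiny-server.sh under that assumption (same image layout as above):
#!/bin/sh
# Make sure the directory for individual app logs exists
mkdir -p /var/log/shiny-server
chown -R shiny:shiny /var/log/shiny-server
# Run Shiny Server in the foreground and leave its output on stdout/stderr
# instead of redirecting it to a file, so it shows up in the task's log stream
exec shiny-server 2>&1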
EDIT
I can't bash in to the running container on Fargate because I don't have access to the server on which docker is running.
Fargate has a UI that accepts host and container ports, and both are set to 3838.
EDIT 2 (2018-08-27)
The engineer who was deploying this has been able to resolve the issue:
"It was the port change. I forgot to change the port on the ALB's security group and only updated the cluster's inbound rules,
so the cluster was allowing connections, but the ALB's security group wasn't letting the traffic through."
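So the container itself was fine; the relevant port simply wasn't allowed through the ALB's security group. A hedged sketch of adding such a rule with the AWS CLI (the group ID and CIDR are placeholders, and whether it should be an ingress or egress rule depends on which side was blocking):
# Allow TCP 3838 through the security group in front of the service
# (sg-0123456789abcdef0 and 0.0.0.0/0 are illustrative placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3838 \
  --cidr 0.0.0.0/0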

Linking the exposed Docker port with the default nginx port

I have this Dockerfile:
FROM alpine:3.4
RUN apk update
RUN apk add nginx
RUN apk update
RUN cp index.html /var/lib/nginx/html/
EXPOSE 8080
Now, how can I access the file index.html on, let's say, port 9000 on localhost? I'm puzzled. Please ask if I am not clear; even just an outline of the solution would be highly appreciated.
The main application is nginx, so start from an nginx base image and simply copy your index.html into it.
This assumes that you have index.html in your build context (the directory where the Dockerfile is located).
FROM nginx:1.10-alpine
# The official nginx image serves files from /usr/share/nginx/html by default
COPY ./index.html /usr/share/nginx/html/
Build using
docker build -t mywebserver:latest .
Then your docker-compose.yml file could look like:
version: "2"
services:
  mywebserver:
    image: mywebserver:latest
    ports:
      - "8080:80"
    command: ["nginx", "-g", "daemon off;"]
And start the containers using
docker-compose up -d
The command could also be skipped but it's a good practice to include the actual command in the service definition.
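Since the question asks about reaching the page on port 9000, only the host side of the port mapping needs to change; for example:
services:
  mywebserver:
    image: mywebserver:latest
    ports:
      - "9000:80"   # host port 9000 -> container port 80
After docker-compose up -d, curl http://localhost:9000 should return your index.html; the container still listens on its default port 80, only the host mapping differs.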

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without a problem.
The problem is that Docker somehow cannot resolve this "custom IP", even though the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up, I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
and then click Redeploy in Docker Cloud, so that the changes take effect.
Optimization tip
Ignore as many folders as possible to speed up sending the build context to the Docker daemon.
Add a .dockerignore file (see the sketch below).
Typically you want to add folders like node_modules, bower_modules and tmp.
In my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the process significantly.
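A minimal .dockerignore for a project like this might look as follows (the entries are illustrative; adjust them to your own layout):
# dependencies and scratch space the image build doesn't need
node_modules
bower_modules
tmp
# version control metadata
.git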

Docker nginx container exits instantly

I want to have some control over the official nginx image, so I wrote my own Dockerfile that adds some extra functionality to it.
The file has the following contents:
FROM nginx
RUN mkdir /var/www/html
COPY nginx/config/global.conf /etc/nginx/conf.d/
COPY nginx/config/nginx.conf /etc/nginx/nginx.conf
When I build this image and create a container from it using this command:
docker run -it -d -v ~/Projects/test-website:/var/www/html --name test-nginx my-nginx
It exits instantly and I can't access the log files either. What could be the issue? I've copied the Dockerfile of the official nginx image, and that does the same thing.
So I didn't know about the docker ps -a; docker logs <last container id> commands. I ran them and it turned out I had a duplicate daemon off; directive.
Thanks for the help guys ;)!
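For anyone hitting the same thing: the official nginx image already runs nginx in the foreground (its CMD passes -g "daemon off;"), so if the nginx.conf you COPY in also contains a daemon off; line, nginx refuses to start because of the duplicate directive and the container exits immediately. A quick way to confirm (the exact log line is approximate):
docker ps -a                      # find the exited container
docker logs <last container id>   # see why it stopped
# Expect something like:
#   nginx: [emerg] "daemon" directive is duplicate in /etc/nginx/nginx.conf:1
# Fix: remove the extra "daemon off;" line from the nginx.conf copied into the image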
