docker: Error response from daemon: Conflict. The container name "/grafana" is already in use by container "f6b2d471d737a" - graphite

Please, I really need your help. When I try to execute this command:
root@Graphite:~# docker run -d \
--name graphite \
--restart=always \
-v /path/to/graphite/configs:/opt/graphite/conf \
-v /path/to/graphite/data:/opt/graphite/storage \
-v /path/to/statsd_config:/opt/statsd/config \
graphiteapp/graphite-statsd
I receive this message:
docker: Error response from daemon: Conflict. The container name "/grafana" is already in use by container "f6b2d471d737a". You have to remove (or rename) that container to be able to reuse that name.
Please, I need your help. I've used Docker to install Graphite and it's working, but I want the files to be stored locally, and I don't have much knowledge about Linux, which is why I need your help.
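As the error message itself says, another container is already holding that name; a minimal sketch of the usual fix (the container name below is taken from the error message; pick whichever of the two options fits):

```shell
# Either remove the container that currently holds the name
# (data in mounted volumes is not deleted with the container):
docker rm -f grafana

# ...or, instead, keep it and rename it out of the way:
#   docker rename grafana grafana-old

# Check which container names are currently taken:
docker ps -a
```

Once the name is free, the original docker run --name ... command can be retried unchanged.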

Related

Can't find local volume on Docker rocker

I can't seem to solve this question, even though it seems that this has been answered here:
I am trying to run the rocker/tensorflow Docker container on Ubuntu 20.04, but I also need it to access the following folder: /home/au687614/Documents/LUCAS_ML
So I tried to follow this answer and run this:
docker run -d -p 8787:8787 -v $(pwd):/home/au687614/Documents/LUCAS_ML:/home/rstudio/LOOKATMEEE -e ROOT=TRUE rocker/tensorflow
This however gets me the following error:
docker: Error response from daemon: invalid mode: /home/rstudio/LOOKATMEEE.
See 'docker run --help'
What is my mistake?
A correct syntax would be:
docker run -d -p 8787:8787 -e PASSWORD=yourpassword -v /path/to/your/local/folder:/home/rstudio/LOOKATMEEE:rw rocker/tensorflow
This comes from this e-book, where you can find explanations of the syntax and other Docker essentials.

I keep getting this error "docker: invalid reference format: repository name must be lowercase."

I keep getting this error even though my repo name is lowercase. The code I'm running is: sudo docker container run --rm -p 3838:3838 -v /home/ubuntu/la-liga-2018-2019-stats/stats/:/srv/shiny-server/stats -v /home/ubuntu/log/shiny-server/:/var/log/shiny-server/ BorisRendon/shinyauth. I'm trying to deploy a Shiny app to AWS using Docker and I can't get past this step.
docker container run and docker run are aliases, so that is not the problem. The error points at the image name: BorisRendon/shinyauth contains uppercase letters, and Docker repository names must be all lowercase. Reference (or retag) the image as borisrendon/shinyauth.
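Since repository names must be all lowercase, a sketch of the command from the question with only the image name lowercased (paths exactly as in the question):

```shell
# Same command as in the question, but with the repository name
# lowercased so it passes Docker's image-reference validation.
sudo docker container run --rm -p 3838:3838 \
  -v /home/ubuntu/la-liga-2018-2019-stats/stats/:/srv/shiny-server/stats \
  -v /home/ubuntu/log/shiny-server/:/var/log/shiny-server/ \
  borisrendon/shinyauth
```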

how to share data between docker container and host

I'm working on a Read the Docs documentation where I use Docker. To customize it, I'd like to share the css folder between the container and the host, in order to avoid always building a new image to see the changes. The goal is that I can just refresh the browser and see the changes.
I tried something like this, but it doesn't work:
docker run -v ~/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
What is wrong in this command?
The path of the folder I'd like to share is:
Documents/my-documentation/docs/source/_static/css
Thanks for your help!
I'm guessing that the ~ does not resolve correctly. The tilde character ("~") refers to the home directory of your user; usually something like /home/your_username.
In your case, it sounds like your document isn't in this directory anyway.
Try:
docker run -v Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
I have no Mac to test with, but I suspect the command should be as below (Documents is a subfolder inside your home directory, denoted by ~):
docker run -v ~/Documents/my-documentation/docs/source/_static/css:/docs/source/_static/css -p 80:80 -it my-docu:latest
In your OP you mount the host folder ~/docs/source/_static/css, which does not make sense if your files are in Documents/my-documentation/docs/source/_static/css, as that would correspond to ~/Documents/my-documentation/docs/source/_static/css.
Keep in mind that Docker is still running inside a VM on Mac, so you will need to give a host path that is valid on that VM.
What you can do to get a better view of the situation is to start an interactive container where you mount the root file system of the host VM into /mnt/vm-root. That way you can see which paths are available to mount and how they should be formatted when you pass them using the -v flag to the docker run command:
docker run --rm -it -w /mnt/vm-root -v /:/mnt/vm-root ubuntu:latest bash
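To see what the ~ in these commands actually expands to before Docker is even involved, a quick check in the host shell (the path below is the one from the question; it is hypothetical on any other machine):

```shell
# Unquoted ~ is expanded by the shell to your home directory
# before the argument ever reaches the docker client.
echo ~

# So this is what -v really receives on the host side:
echo ~/Documents/my-documentation/docs/source/_static/css

# Inside double quotes ~ is NOT expanded; use $HOME instead:
echo "$HOME/Documents/my-documentation/docs/source/_static/css"
```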

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without a problem.
But the problem is that Docker somehow cannot resolve this "custom IP", even when the host is written in the container (image?) /etc/hosts file.
When the Docker container starts up I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
Docker cloud workflow
Add an extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
and then click Redeploy in Docker Cloud, so that the changes take effect.
Optimization tip
ignore as many folders as possible to speed up the process of sending data to the docker daemon
add a .dockerignore file
typically you want to ignore folders like node_modules, bower_components and tmp
in my case the tmp folder contained about 1.3GB of small files, so ignoring it sped up the process significantly
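A minimal .dockerignore along those lines (the entries are just the common offenders mentioned above; adjust to your project):

```
node_modules
bower_components
tmp
```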

Unable to run docker commands

I am running docker using the command
sudo docker -H 0.0.0.0:2375 -d &
I am then using the docker-java client to create images and run containers in the following way:
DockerClient dockerClient = DockerClientBuilder.getInstance("http://localhost:2375").build();
CreateContainerResponse container = dockerClient.createContainerCmd(image_name)
.exec();
dockerClient.startContainerCmd(container.getId()).exec();
This works fine and the docker logs look fine too. But when I try to use any of the docker commands, including docker ps, docker images, and docker info, they all fail with the following error:
FATA[0000] Get http:///var/run/docker.sock/v1.18/info: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Using sudo also does not solve the problem. I am running docker on unix. Any thoughts?
You have started up Docker listening on a TCP socket. This means that when the docker client attempts to connect to the default Unix-domain socket, there's nothing there. The error message is pretty clear about that:
dial unix /var/run/docker.sock: no such file or directory.
You need to tell the docker client where to connect, just like you have to provide that information to the DockerClientBuilder class in your code. You can do this (a) using the -H option to the client or (b) using the DOCKER_HOST environment variable.
For example:
$ docker -H http://localhost:2375 ps
$ docker -H http://localhost:2375 pull alpine
Or:
$ export DOCKER_HOST=http://localhost:2375
$ docker ps
$ docker pull alpine
