Unable to run docker commands - unix

I am starting the Docker daemon using the command:
sudo docker -H 0.0.0.0:2375 -d &
I am then using the docker-java client to create images and run containers in the following way:
DockerClient dockerClient = DockerClientBuilder.getInstance("http://localhost:2375").build();
CreateContainerResponse container = dockerClient.createContainerCmd(image_name)
.exec();
dockerClient.startContainerCmd(container.getId()).exec();
This works fine and the Docker logs look fine too. But when I try to use any of the docker commands, including docker ps, docker images and docker info, they all fail with the following error:
FATA[0000] Get http:///var/run/docker.sock/v1.18/info: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Using sudo also does not solve the problem. I am running Docker on Unix. Any thoughts?

You have started up Docker listening on a TCP socket. This means that when the docker client attempts to connect to the default Unix-domain socket, there's nothing there. The error message is pretty clear about that:
dial unix /var/run/docker.sock: no such file or directory.
You need to tell the docker client where to connect, just like you have to provide that information to the DockerClientBuilder class in your code. You can do this (a) using the -H option to the client or (b) using the DOCKER_HOST environment variable.
For example:
$ docker -H tcp://localhost:2375 ps
$ docker -H tcp://localhost:2375 pull alpine
Or:
$ export DOCKER_HOST=tcp://localhost:2375
$ docker ps
$ docker pull alpine
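Another option, if you control how the daemon is started, is to have it listen on both the TCP port and the default Unix socket, so that the local CLI keeps working without any extra configuration. A sketch, assuming the same legacy docker -d invocation used in the question:
$ sudo docker -d -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock &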

Related

Why can't I access a host port from my Docker container?

I've read this post which asks the same question, but the solutions there don't seem to work. Basically I'm trying to access a port on the host OS from inside the Docker container, and I'm using the --net="host" flag as suggested in the linked post. However, I'm still unable to access the port. The only thing that works for me is to run my host web service on 0.0.0.0 and then access it from 192.168.###.###, but that address changes based on which Wi-Fi network I'm on, so I don't want to do that. Here's what I've tried:
Set up a test webserver that I can try to access from inside the container:
bash-3.2$ echo hi > index.html
bash-3.2$ python -m SimpleHTTPServer 1234 >/dev/null 2>&1 &
[1] 57942
Curl it from the host to make sure it's running:
bash-3.2$ curl localhost:1234
hi
Start up a container that has curl installed (this is just ubuntu + curl):
bash-3.2$ docker run --rm -it --net="host" tutum/curl bash
Try curling from inside the container:
root@moby:/# curl localhost:1234
curl: (7) Failed to connect to localhost port 1234: Connection refused
I am on macOS, so I'm thinking it might have something to do with the container's host being the boot2docker VM rather than my Mac, but I still don't know how to get around that.
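If it helps, this is how I would check that the daemon really lives in a VM (assuming a docker-machine/boot2docker setup with the default machine name):
bash-3.2$ docker-machine ls
bash-3.2$ docker-machine ip default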
Any advice would be much appreciated! :)

My docker container isn't starting on localhost (0.0.0.0) on Docker for Windows (Native using Hyper-V)

I'm following Digital Ocean's tutorial on how to start an nginx Docker container (I'm currently on Step 4). This is their output:
$ docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b91f3ce26553 nginx "nginx -g 'daemon off" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, 443/tcp docker-nginx
But when I run it, this is my output (notice the different IP of the container):
C:\>docker run --name docker-nginx -p 80:80 -d nginx
d3ccb73a91985651ec61231bca9f9c716f0dec807e354a29eeef2144f883a01c
C:\>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3ccb73a9198 nginx "nginx -g 'daemon off" 14 hours ago Up 2 seconds 10.0.75.2:80->80/tcp, 443/tcp docker-nginx
Why does this happen? And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Edit: I'm using Docker for Windows (recently released), which apparently runs natively using Hyper-V. My output for docker-machine ls is this:
C:\>docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
C:\>
But when I run it, this is my output (notice the different IP of the container)
Since this is a Windows machine, I assume that you're using Docker for Windows. 10.0.75.2 is the IP of the Hyper-V virtual machine that actually runs the Docker daemon.
If you are using Windows or Mac OS, you will need some form of virtualization in order to run Docker. The IP you just saw is the IP of that lightweight virtual machine.
And how can I get the same results as Digital Ocean's? (Getting the server to start on localhost)
Use a Linux distribution! Alternatively, you can enable the "Expose container ports on localhost" option in the Docker for Windows settings.
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your Docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run a curl command (or open a browser) to view the default site served by the nginx web server inside the container:
curl http://192.168.99.100:80
If you are using a virtual machine on Windows, you can get its IP with:
docker-machine ip default
https://docs.docker.com/machine/concepts/
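If the machine really is named default and you are in a bash shell (for example the Docker Quickstart Terminal), the two steps can be combined; this is just a convenience sketch:
curl http://$(docker-machine ip default):80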
When I ran this command for the first time: docker run -d -p 80:80 --name docker-tutorial docker101tutorial
I got this error:
docker: Error response from daemon: Conflict. The container name "/docker-tutorial" is already in use by container "LONG_CONTAINER_ID". You have to remove (or rename) that container to be able to reuse that name.
So I removed that container using: docker rm -f LONG_CONTAINER_ID
Then I did: docker run -d -p 3080:80 --name docker-tutorial docker101tutorial
Note the 3080:80 instead of 80:80. Had I run this from Docker Desktop, I would have seen a different host port offered as the default option.
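The error message also mentions renaming as an option. If you would rather keep the old container around instead of deleting it, something like this should work in place of docker rm:
docker rename docker-tutorial docker-tutorial-old
docker run -d -p 3080:80 --name docker-tutorial docker101tutorial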

Docker "/bin/bash" could not be invoked when mounting an NFS file with -v on openstack

I'm running an Ubuntu 14.04 instance on OpenStack that has Docker installed. I'm trying to mount a volume into a Docker container. I'm doing this with:
docker run -t -i -v /mnt/data/dir:/mnt/test ubuntu
where /mnt/data/dir is an NFS-shared directory. Doing this gets me:
docker: Error response from daemon: Container command '/bin/bash' could not be invoked.
However, using a local directory instead of a mounted directory works exactly as expected.
I understand that Docker doesn't natively support an NFS-mounted file system; however, the errors I found when googling are usually not of the form I've mentioned above.
Any clue on how to proceed?
Edit: I forgot to mention that it's not just limited to '/bin/bash' could not be invoked. I tried running a Tomcat server and got the exact same error.
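Some checks I can still run on the host to narrow this down, in case they help (paths are from my command above; nobody is just a stand-in for whatever user the NFS export squashes root to):
mount | grep /mnt/data              # confirm the directory really is NFS-mounted
sudo -u nobody ls /mnt/data/dir     # can a squashed/non-root user read it?
docker run -t -i ubuntu /bin/bash   # same image without the volume, to isolate the mount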

Docker run results in "host not found in upstream" error

I have a frontend-only web application hosted in Docker. The backend already exists, but it has a "custom IP" address, so I had to update my local /etc/hosts file to access it. So, from my local machine I am able to access the backend API without problems.
But the problem is that Docker somehow cannot resolve this "custom IP", even though the host is written in the container's (image's?) /etc/hosts file.
When the Docker container starts up, I see this error:
$ docker run media-saturn:dev
2016/05/11 07:26:46 [emerg] 1#1: host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
nginx: [emerg] host not found in upstream "my-server-address.com" in /etc/nginx/sites/ms.dev.my-company.com:36
I update the /etc/hosts file via a command in the Dockerfile, like this:
# install wget
RUN apt-get update \
&& apt-get install -y wget \
&& rm -rf /var/lib/apt/lists/*
# The trick is to add the hostname on the same line as you use it, otherwise the hosts file will get reset, since every RUN command starts a new intermediate container
# it has to be https, otherwise authentication is required
RUN echo "123.45.123.45 my-server-address.com" >> /etc/hosts && wget https://my-server-address.com
When I ssh into the machine to check the current content of /etc/hosts, the line "123.45.123.45 my-server-address.com" is indeed there.
Can anyone help me out with this? I am a Docker newbie.
I have solved this. There are two things at play.
One is how it works locally and the other is how it works in Docker Cloud.
Local workflow
cd into root of project, where Dockerfile is located
build image: docker build -t media-saturn:dev .
run the built image: docker run -it --add-host="my-server-address.com:123.45.123.45" -p 80:80 media-saturn:dev
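A quick way to double-check that the flag really injects the entry at runtime (using a plain ubuntu image here only because it lets you override the command):
docker run --rm --add-host="my-server-address.com:123.45.123.45" ubuntu cat /etc/hosts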
Docker cloud workflow
Add the extra_hosts directive to your Stackfile, like this:
extra_hosts:
  - 'my-server-address.com:123.45.123.45'
Then click Redeploy in Docker Cloud, so that the changes take effect.
Optimization tip
Ignore as many folders as possible to speed up sending the build context to the Docker daemon.
Add a .dockerignore file.
Typically you want to add folders like node_modules, bower_components and tmp.
In my case the tmp folder contained about 1.3 GB of small files, so ignoring it sped up the process significantly.
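A minimal .dockerignore along those lines (just a sketch; adjust the entries to whatever your project actually contains):
node_modules
bower_components
tmp
.git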

Why is Play application not able to resolve dependencies from inside docker container?

I am trying to get a Play Framework application running inside a docker container on an Ubuntu Server 14.04 machine.
$ docker pull mzkrelx/playframework2-dev:2.2.3
$ docker run -i -t -v /path/to/play/app:/opt/workspace -p 9000:9000 mzkrelx/playframework2-dev:2.2.3
bash-4.1# play
[play-application] $ run
The last command results in attempts to resolve dependencies, but only produces errors, warnings and infos such as "You probably access the destination server through a proxy server that is not well configured."
What am I doing wrong?
It seems as if my problems were network-related and subject to caching behaviour. After a shutdown of the machine and a play clean, the same setup now works perfectly.
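For reference, roughly the sequence I mean, in the same container as in the question (clean can also be run from inside the Play console):
bash-4.1# play
[play-application] $ clean
[play-application] $ run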
Thanks for your help nevertheless!

Resources