Why is a Play application not able to resolve dependencies from inside a Docker container? - sbt

I am trying to get a Play Framework application running inside a docker container on an Ubuntu Server 14.04 machine.
$ docker pull mzkrelx/playframework2-dev:2.2.3
$ docker run -i -t -v /path/to/play/app:/opt/workspace -p 9000:9000 mzkrelx/playframework2-dev:2.2.3
bash-4.1# play
[play-application] $ run
The last command tries to resolve dependencies but only puts out errors, warnings, and info messages such as You probably access the destination server through a proxy server that is not well configured.
What am I doing wrong?

It seems my problems were network-related and subject to caching behaviour: the same setup now works perfectly after a shutdown of the machine and a play clean.
Thanks for your help nevertheless!
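For anyone who lands here with a container that really does sit behind a proxy: sbt/Ivy honour the standard JVM proxy properties. A minimal sketch, assuming a hypothetical proxy at proxy.example.com:3128 and that the play launcher passes JAVA_OPTS through to the JVM:
bash-4.1# export JAVA_OPTS="-Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=3128"
bash-4.1# play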

Related

Docker "/bin/bash" could not be invoked when mounting an NFS file with -v on openstack

I'm running an Ubuntu 14.04 instance on OpenStack that has Docker installed. I'm trying to mount a volume into a Docker container. I'm doing this with:
docker run -t -i -v /mnt/data/dir:/mnt/test ubuntu
Where /mnt/data/dir is an NFS shared directory. Doing this gets me:
docker:
Error response from daemon: Container command '/bin/bash' could not be invoked..
However, using a local directory instead of a mounted directory works exactly as expected.
I understand that Docker doesn't natively support an NFS-mounted file system; however, the errors I googled are usually not of the form I've mentioned above.
Any clue on how to proceed?
Edit: I forgot to mention that it's not just limited to /bin/bash could not be invoked. I tried running a Tomcat server and that gave me the exact same error.
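One thing worth ruling out: the Docker daemon accesses the directory as root, and NFS exports commonly squash root (root_squash), which can make a mount unusable for the daemon even though it works for your user. A quick check, using the path from the question:
$ sudo ls /mnt/data/dir    # may be denied if the export uses root_squash
$ mount | grep /mnt/data   # confirm the NFS share is actually mounted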

Docker shows inconsistent behaviour when creating container from image

I am developing a web application which depends on a Moodle system, as it uses Moodle's web services. For my automated tests, I wanted to use Docker to provide a preconfigured Moodle application on all my machines. Therefore I created a Docker image, which I import from a .tar.gz file.
However, creating a new container instance from this image behaves inconsistently. Sometimes the container boots up correctly and everything works fine. Sometimes, however, the container starts but the Moodle website is not reachable. If I connect my bash to the container using docker exec -it <container> bash, I see that Apache is running. The error logs do not show any entries which might be related to this issue.
If I kill the container instance and boot it up again, everything works as expected (sometimes this step has to be repeated multiple times). Do you have any idea what could be the reason for this strange behaviour? Anyone experiencing similar issues?
Docker is running on Ubuntu 14.04. The problem appears on several machines. The script which imports the image and starts the container looks like this:
#!/usr/bin/env bash
docker rm -f moodle                # remove any previous container with the same name
docker load < my-moodle.tar.gz     # import the image from the archive
docker run -d -p 8080:80 -p 8443:443 -p 3306:3306 --name moodle moodle-image
Thanks in advance!
Successful container startup depends on your container's entrypoint and on any external resources it uses. What is the entrypoint, and does it depend on external resources?
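For example, if the entrypoint starts Apache while the database inside the container is still initializing, the site will be intermittently unreachable even though Apache is up, which would match the symptoms. A minimal sketch of a more defensive entrypoint (the script and service names are assumptions, not the image's actual entrypoint):
#!/usr/bin/env bash
# Hypothetical entrypoint: block until MySQL answers before starting the web server.
service mysql start
until mysqladmin ping --silent; do
    sleep 1                       # keep polling until the database accepts connections
done
exec apache2ctl -D FOREGROUND     # run Apache in the foreground so the container stays alive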

Unable to run docker commands

I am starting the Docker daemon using the command
sudo docker -H 0.0.0.0:2375 -d &
I am then using the docker-java client to create images and run containers in the following way:
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.core.DockerClientBuilder;

// connect to the daemon over the TCP socket it was started on
DockerClient dockerClient = DockerClientBuilder.getInstance("http://localhost:2375").build();
// create a container from the image, then start it
CreateContainerResponse container = dockerClient.createContainerCmd(image_name)
        .exec();
dockerClient.startContainerCmd(container.getId()).exec();
This works fine and the Docker logs look fine too. But when I try to use any of the docker commands, including docker ps, docker images, and docker info, they all fail with the following error:
FATA[0000] Get http:///var/run/docker.sock/v1.18/info: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?
Using sudo also does not solve the problem. I am running Docker on Unix. Any thoughts?
You have started up Docker listening on a TCP socket. This means that when the docker client attempts to connect to the default Unix-domain socket, there's nothing there. The error message is pretty clear about that:
dial unix /var/run/docker.sock: no such file or directory.
You need to tell the docker client where to connect, just like you have to provide that information to the DockerClientBuilder class in your code. You can do this (a) using the -H option to the client or (b) using the DOCKER_HOST environment variable.
For example:
$ docker -H tcp://localhost:2375 ps
$ docker -H tcp://localhost:2375 pull alpine
Or:
$ export DOCKER_HOST=tcp://localhost:2375
$ docker ps
$ docker pull alpine
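Alternatively, start the daemon listening on both the TCP port and the default Unix socket so that plain docker commands keep working; the daemon accepts multiple -H flags:
$ sudo docker -d -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock &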

Hostname resolution fails when running docker build from a docker container

We are running a Jenkins CI server in a Docker container, started with docker-compose. The Jenkins server runs jobs which pull projects from git and build Docker images the standard way, executing docker build . on them. To be able to use Docker inside the Jenkins container, we bind-mount the host's /var/run/docker.sock into it with docker-compose.
Some of the Dockerfiles we build there download files from our fileserver (3rd-party installation images, for example). Such a Dockerfile command looks like RUN curl -o xx.zip http://fileserver/xx-1.2.3.zip.
The fileserver hostname gets resolved through the /etc/hosts file and resolves to the public IP of the host that runs the Jenkins CI server. The docker-compose config for the Jenkins container also includes the extra_hosts parameter pointing fileserver to the host's public IP.
The problem is that building the Docker image with Jenkins running in its own container fails with a plain Unknown host: fileserver message. If I enter the Jenkins container via docker exec -it <id>, I can execute the same curl command and it resolves the host; but if I run docker build . there, the same curl command fails to resolve the host.
Our host is RHEL, and I failed to reproduce the problem on my desktop Arch Linux, so I suspect it's a RedHat-specific issue (again).
Add --network=host so that the build environment uses the host machine's DNS resolution:
docker build --network=host -t foo/bar:latest .
Docker builds don't happen on the machine issuing the command (your Jenkins container, in this case); they happen on the machine running the Docker Engine. This means that your Jenkins container tars up the build context and ships it to the host machine, where the build happens. So check whether the curl command works from the host machine, not just from the Jenkins container.
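If host networking is broader than you want, the mapping from the compose file's extra_hosts can also be handed to the build directly via --add-host; a sketch, with the IP as a placeholder:
$ docker build --add-host fileserver:10.0.0.5 -t foo/bar:latest .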

Gitlab: Problems running Unicorn, Resque with Passenger/Nginx

I have installed GitLab on a brand new Ubuntu (10.04) and it is working almost correctly. GitLab is reachable over HTTP, and I can push/pull data to the server via git. One thing is missing, though: the activity feed is not updating. So I thought there was something wrong with the git hooks. I completely followed the GitLab installation process, except that I'd like to use Passenger with Nginx in order to deploy multiple apps.
I ran sudo -u gitlab -H bundle exec rake gitlab:env:info RAILS_ENV=production to see if everything was set up correctly, but it said Redis was not running. ps aux says redis-server is up, so it is not the git hooks. The GitLab docs say to restart the gitlab service to solve that problem. When I do, I get an error, which I think is the problem I need to solve:
$ sudo /etc/init.d/gitlab restart
Error, unicorn not running!
My question is: how can I get around this problem? How can I run unicorn? I thought the gitlab service would start it. Am I not using Nginx? Before I start reinstalling the whole thing, first without Passenger, I thought I'd ask the question here.
As mentioned by the OP pabera, nginx and mysql must be started for the other components of GitLab (redis, unicorn, and now sidekiq) to run properly.
The official /etc/init.d/gitlab is here.
I have my own version of gitlabd (here), because I manage sidekiq in my own script, and I don't need to run the script as root.
You can see the run order for all the services in this script:
ssh
Apache and/or NGiNX
mysql
redis
GitLab (which will start unicorn and sidekiq)
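Condensed into plain commands, that order looks roughly like this (the service names are assumptions based on stock Ubuntu init scripts):
$ sudo service ssh start
$ sudo service nginx start          # and/or apache2
$ sudo service mysql start
$ sudo service redis-server start
$ sudo /etc/init.d/gitlab start     # starts unicorn and sidekiq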
Kind of a poke in the dark...
In the GitLab installation.md README it states:
Start your GitLab instance:
sudo service gitlab start
# or
sudo /etc/init.d/gitlab restart
I did the first AND the second and got this exact error. However, I skipped the "or", continued on to the Nginx commands, and it seems to work.
Hope this helps!
