Linking 2 Docker containers in a Chef recipe - nginx

I am trying to achieve with a Chef recipe what this command does:
docker run -d --name=nginx --restart=unless-stopped -p 80:80 -p 443:443 -v /etc/test/test.cert:/etc/test/test.cert -v /etc/test/test.key:/etc/test/test.key -v /etc/nginx/conf.d/nginx_ssl_conf.conf:/etc/nginx/conf.d/default.conf --link=rancher-server nginx
This is what I have come up with so far, but I am still unable to link the two containers:
docker_image 'nginx' do
  tag 'latest'
  action :pull
end

docker_container 'my_nginx' do
  repo 'nginx'
  tag 'latest'
  port ['80:80', '443:443']
  volumes ['/etc/test/test.cert:/etc/test/test.cert', '/etc/test/test.key:/etc/test/test.key', '/etc/nginx/conf.d/nginx_ssl_conf.conf:/etc/nginx/conf.d/default.conf']
  links ['rancher-server:nginx']
  subscribes :run, 'docker_image[nginx]'
end
Any thoughts or suggestions?

There is a links property which takes an array of links. There is an example in the README if you search for "Manage container links".
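In case a concrete example helps, here is a minimal sketch of how the container resource might look with the link and restart policy from the original docker run command. It assumes the docker cookbook's docker_container properties; links takes 'container_name:alias' strings, and the rancher-server container must already be running under that name:
docker_container 'nginx' do
  repo 'nginx'
  tag 'latest'
  port ['80:80', '443:443']
  volumes ['/etc/test/test.cert:/etc/test/test.cert', '/etc/test/test.key:/etc/test/test.key', '/etc/nginx/conf.d/nginx_ssl_conf.conf:/etc/nginx/conf.d/default.conf']
  links ['rancher-server:rancher-server']   # container_name:alias, mirroring --link=rancher-server
  restart_policy 'unless-stopped'           # mirrors --restart=unless-stopped
  subscribes :run, 'docker_image[nginx]'
  action :run
end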

Related

Deploy docker image to k8s cluster issue

I am trying to deploy a Docker image to Kubernetes but am hitting a strange issue. Below is the stage I am using in my Jenkinsfile:
stage('Deploy to k8s') {
    steps {
        sshagent(['kops-machine']) {
            sh "scp -o StrictHostKeyChecking=no deployment.yml ubuntu@<ip>:/home/ubuntu/"
            sh "ssh ubuntu@<ip> kubectl apply -f ."
            sh 'kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}'
        }
    }
}
I am getting this error message:
kubectl set image deployment/nginx-deployment nginx=account/repo:69
error: the server doesn't have a resource type "deployment"
The strange thing is that if I copy and paste this command and execute it on the cluster, the image gets updated:
kubectl set image deployment/nginx-deployment nginx=account/repo:69
Can somebody please help? The image builds and pushes to Docker Hub successfully; I am just stuck with pulling it and deploying it to the Kubernetes cluster. If you have any other alternatives, please let me know. The deployment.yml file that gets copied to the server is as follows:
spec:
  containers:
  - name: nginx
    image: account/repo:3
    ports:
    - containerPort: 80
OK, so I found the workaround. If I change this line in my Jenkinsfile
sh "ssh ubuntu@<ip> kubectl apply -f ."
to
sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
it works. But if no deployment has been created at all, then I have to keep both lines to make it work:
sh "ssh ubuntu@<ip> kubectl apply -f ."
sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
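For completeness, the whole stage with that workaround might look roughly like this. It is only a sketch assembled from the snippets above; the kops-machine credential ID, the <ip> placeholder, and the deployment/image names all come from the question:
stage('Deploy to k8s') {
    steps {
        sshagent(['kops-machine']) {
            // Copy the manifest and create the deployment if it does not exist yet
            sh "scp -o StrictHostKeyChecking=no deployment.yml ubuntu@<ip>:/home/ubuntu/"
            sh "ssh ubuntu@<ip> kubectl apply -f ."
            // Run 'set image' on the cluster host as well, so it uses that host's kubeconfig
            // rather than the Jenkins agent, which has no cluster configured
            sh "ssh ubuntu@<ip> kubectl set image deployment/nginx-deployment nginx=account/repo:${BUILD_NUMBER}"
        }
    }
}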

Development environment for WordPress with Docker incl. FTP access

I would like to know if it is possible to provide a local development environment for WordPress where the data can be uploaded via FTP.
I already have the official Docker image of WordPress running:
docker run --name wordpress-db -e MYSQL_ROOT_PASSWORD=admin -p 3306:3306 -d mysql
docker run --name wordpress --link wordpress-db:mysql -p 8080:80 -d wordpress
Next, I'm trying to use an FTP Docker image. After a short search, I chose metabrainz/docker-anon-ftp, but I am not tied to this image if there are better ones. I have extended my commands as follows:
docker run --name wordpress --link wordpress-db:mysql -p 8080:80 -v /wordpress-volume:/var/www/html/wp-content -d wordpress
docker run --name wordpress-ftp -p 20-21:20-21 -p 65500-65515:65500-65515 -v /wordpress-volume:/var/www/html/wp-content -d metabrainz/docker-anon-ftp
Now I have the problem that the FTP server is reachable but does not serve any content.
What did I do wrong? Can someone give me the commands with which I can upload files (from my IDE) into WordPress via FTP?

Docker Container Networking with Docker-in-Docker

I would like to network with a child docker container from a parent docker container, with a docker-in-docker setup.
Let's say I'm trying to connect to a simple Apache httpd server. When I run the httpd container on my host machine, everything works fine:
asnyder:~$ docker run -d -p 8080:80 httpd:alpine
asnyder:~$ curl localhost:8080
<html><body><h1>It works!</h1></body></html>
But when I do the same from a docker-in-docker setup, I get a Connection refused error:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
I have tried a couple of alterations without luck. Specifying the 0.0.0.0 interface:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 0.0.0.0:8080:80 httpd:alpine
/ # curl 0.0.0.0:8080
curl: (7) Failed to connect to 0.0.0.0 port 8080: Connection refused
Using the host network:
asnyder:~$ docker run -d --name mydind --privileged docker:dind
asnyder:~$ docker run -it --link mydind:docker docker:latest sh
/ # docker run -d --network host httpd:alpine
/ # curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
Surprisingly, I was unable to find any existing articles on this. Does anyone here have some insight?
Thanks!
There are pros and cons for both DinD and bind-mounting the Docker socket, and there are certainly use cases for both. As an example, check out this set of blog posts, which does a good job of explaining one of the use cases.
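For contrast, the bind-mount alternative just mentioned would look roughly like this. Containers started this way are siblings on the host daemon, so a published port ends up on the host itself; this is only a sketch and not part of the DinD setup from the question:
asnyder:~$ docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker:latest sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # exit
asnyder:~$ curl localhost:8080
<html><body><h1>It works!</h1></body></html>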
Given your example docker-in-docker setup above, you can access Apache httpd server in one of two ways:
1) From inside the docker:dind container, it will be available on localhost:8080.
2) From inside the docker:latest container, where you were trying to access it originally, it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name mydind, therefore curl mydind:8080 would give you the standard Apache <html><body><h1>It works!</h1></body></html>.
Hope it makes sense!
Building upon Yuriy's answer:
2) From inside the docker:latest container, [...] it will be available on whatever hostname is set for the docker:dind container. In this case, you used --name mydind, therefore curl mydind:8080 [...]
In the Gitlab CI config, you can address the DinD container by the name of its image (in addition to the name of its container, which is auto-generated):
Accessing the services
Let’s say that you need a Wordpress instance to test some API integration with your application.
You can then use for example the tutum/wordpress image in your .gitlab-ci.yml:
services:
  - tutum/wordpress:latest
If you don’t specify a service alias, when the job is run, tutum/wordpress will be started and you will have access to it from your build container under two hostnames to choose from:
tutum-wordpress
tutum__wordpress
Using
services:
  - docker:dind
will allow you to access that container as docker:8080:
script:
  - docker run -d -p 8080:80 httpd:alpine
  - curl docker:8080
Edit: If you'd prefer a more explicit host name, you can, as the documentation states, use an alias:
services:
  - name: docker:dind
    alias: dind-service
and then
script:
  - docker run -d -p 8080:80 httpd:alpine
  - curl dind-service:8080
Hth,
dtk
I am convinced that @Yuriy Znatokov's answer is what I want, but it took me a long time to understand it. To make it easier for others, I have written out the complete steps.
1) From inside the docker:dind container
docker run -d --name mydind --privileged docker:dind
docker exec -it mydind sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl localhost:8080
<html><body><h1>It works!</h1></body></html>
2) From inside the docker:latest container
docker run -d --name mydind --privileged docker:dind
docker run -it --link mydind:docker docker:latest sh
/ # docker run -d -p 8080:80 httpd:alpine
/ # curl mydind:8080
<html><body><h1>It works!</h1></body></html>

Why can't I see my files inside a Docker container?

I'm a Docker newbie and I'm trying to set up my first project.
To test how to play with it, I just cloned a ready-to-go project and set it up (Project repo).
As the guide claims, if I access a specific URL, I reach the homepage; to be more specific, a Symfony start page.
Moreover, with this command
docker run -i -t testdocker_application /bin/bash
I'm able to login to the container.
My problem is that if I try to go to the application folder through bash, the folder that I shared with my host is empty.
I tried with another project, but the result is the same.
Where am I wrong?
Here is some info about my environment:
Ubuntu 12.04
Docker version 1.8.3, build f4bf5c7
Config:
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
Looks like you have a docker-compose.yml file but are running the image with docker. You don't actually need docker-compose to start a single container. If you just want to start the container, your command should look like this:
docker run -ti -v $(pwd)/symfony:/var/www/symfony -v $(pwd)/logs/symfony:/var/www/symfony/app/logs testdocker_application /bin/bash
To use your docker-compose.yml, start your container with docker-compose up. You would also need to add the following to drop into a shell:
stdin_open: true
command: /bin/bash
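Putting that together with the compose file from the question, the service definition might look like this (a sketch; the build path and volume paths are copied from the question):
application:
  build: code
  volumes:
    - ./symfony:/var/www/symfony
    - ./logs/symfony:/var/www/symfony/app/logs
  tty: true
  stdin_open: true
  command: /bin/bash
Then start it with docker-compose up as described above.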

Docker nginx container exits instantly

I want to have some control over the official nginx image, so I wrote my own Dockerfile that adds some extra functionality to it.
The file has the following contents:
FROM nginx
RUN mkdir /var/www/html
COPY nginx/config/global.conf /etc/nginx/conf.d/
COPY nginx/config/nginx.conf /etc/nginx/nginx.conf
When I build this image and create a container from it using this command:
docker run -it -d -v ~/Projects/test-website:/var/www/html --name test-nginx my-nginx
It exits instantly, and I can't access the log files either. What could be the issue? I've copied the Dockerfile of the official nginx image, and that does the same thing.
So I didn't know about docker ps -a; docker logs <last container id>. I ran it, and it turned out I had a duplicate daemon off; directive.
Thanks for the help guys ;)!
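For anyone hitting the same error: the official nginx image already runs nginx in the foreground through its CMD (nginx -g 'daemon off;'), so another daemon off; line in the copied nginx.conf duplicates the directive and the container stops right after starting. A quick way to confirm it, using the container name from the question:
docker ps -a            # list all containers, including the one that exited
docker logs test-nginx  # the log should point at the duplicated "daemon" directive
Removing daemon off; from nginx/config/nginx.conf fixes it.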
