I have an ASP.NET Core app that is packaged with Docker. Here is my docker-compose file; it includes the Kibana and Elasticsearch images:
version: "3.1"
services:
tooseeweb:
image: ${DOCKER_REGISTRY-}tooseewebcontainer
build:
context: .
dockerfile: Dockerfile
ports:
- 5000:80
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
container_name: elasticsearch
ports:
- "9200:9200"
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
networks:
- docker-network
kibana:
image: docker.elastic.co/kibana/kibana:6.2.4
container_name: kibana
ports:
- "5601:5601"
depends_on:
- elasticsearch
networks:
- docker-network
networks:
docker-network:
driver: bridge
volumes:
elasticsearch-data:
I tried to deploy this to Azure Container Registry following this article:
Article link
It's all okay and I can see my API on port 80, but I don't see Kibana or Elasticsearch.
On my local machine I run docker-compose up and can reach them on ports 5601 and 9200, but after deploying via Azure Container Registry these ports don't work. How can I deploy everything together? Or do I need to deploy the containers separately?
Firstly, Azure Container Registry stores Docker images for you, so you need to push the images to it, not the running containers. And you do not need to separate them, but you do need to create all the images with names of the form your_acr_name.azurecr.io/image_name:tag and then push them to the ACR.
As I see in your question, you only create the image tooseeweb with the name ${DOCKER_REGISTRY-}tooseewebcontainer. When you push this image to the ACR, it only stores this one for you; it does not contain the other two images.
If you want to store the other two images in ACR, you need to follow the two steps below.
Tag your image. For example:
docker tag docker.elastic.co/elasticsearch/elasticsearch:6.2.4 your_acr_name.azurecr.io/elasticsearch:6.2.4
Push the image to ACR:
docker push your_acr_name.azurecr.io/elasticsearch:6.2.4
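The same two steps apply to the Kibana image; a sketch, assuming the registry name your_acr_name:
docker tag docker.elastic.co/kibana/kibana:6.2.4 your_acr_name.azurecr.io/kibana:6.2.4
docker push your_acr_name.azurecr.io/kibana:6.2.4
After pushing, update the image: entries in your docker-compose file to the your_acr_name.azurecr.io/... names so the deployment pulls them from ACR.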
I have two separate folders, one for backend and one for frontend services:
backend/docker-compose.yml
frontend/docker-compose.yml
The backend is a headless WordPress installation on nginx, intended to serve the frontend as an API service. The frontend runs on Next.js. Here are the two docker-compose.yml files:
backend/docker-compose.yml
version: '3.9'
services:
  nginx:
    image: nginx:latest
    container_name: my-app-nginx
    ports:
      - '80:80'
      - '443:443'
      - '8080:8080'
    ...
    networks:
      - internal-network
  mysql:
    ...
    networks:
      - internal-network
  wordpress:
    ...
    networks:
      - internal-network
networks:
  internal-network:
    external: true
frontend/docker-compose.yml
version: '3.9'
services:
  nextjs:
    build:
      ...
    container_name: my-app-nextjs
    restart: always
    ports:
      - 3000:3000
    networks:
      - internal-network
networks:
  internal-network:
    driver: bridge
    name: internal-network
In the frontend I use the fetch API in Next.js as follows:
fetch('http://my-app-nginx/wp-json/v1/enpoint', ...)
I also tried ports 80 and 8080, without success.
The sequence of commands I run is:
docker network create internal-network
in the backend/ folder, docker-compose up -d (all the backend containers run fine, and I can fetch data from the WordPress API with Postman)
in the frontend/ folder, docker-compose up -d fails with the error Error: getaddrinfo EAI_AGAIN my-app-nginx
I am not an expert Docker user, so I might be missing something here, but I understand there may be a network issue between the containers. I have read many answers on this topic but couldn't figure it out.
Any recommendations?
Just to add a proper answer:
Generally you should not be executing multiple docker-compose up -d commands.
If you want to combine two separate docker-compose configs and run them as one (slightly preferable), you can use the extends keyword as described in the docs; see the sketch below.
However, I would suggest that you treat it as a single docker-compose project, which can itself have multiple nested git repositories:
Example SO answer - Git repository setup for a Docker application consisting of multiple repositories
You can keep your code in a mono-repo or in multiple repos, up to you.
A real working example that validates this approach:
headless-wordpress-nextjs-starter-kit and its docker-compose.yml
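A minimal sketch of the extends approach, assuming a top-level docker-compose.yml next to the backend/ and frontend/ folders that pulls in the frontend service:
services:
  nextjs:
    extends:
      file: frontend/docker-compose.yml
      service: nextjs
Repeating this for each backend service gives you a single docker-compose up -d that starts everything on one project network.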
I found this thread here:
Communication between multiple docker-compose projects
Looking at the most upvoted answers, I wonder if it is related to the network name prefix.
It seems like internal-network would be prefixed with frontend_. On the other hand, you can also reference the network by name in backend/docker-compose.yml:
networks:
  internal-network:
    external:
      name: internal-network
The issue is that external networks need the network name specified (because docker compose prefixes resource names by default). Your backend docker-compose networks section should look like this:
networks:
  internal-network:
    name: internal-network
    external: true
You are creating the network in your frontend docker-compose, so you should omit the docker network create ... command (you just need to bring up the frontend first). Or instead treat the network as external in both files and keep the command; in that case, use the named external network, as shown above, in your frontend docker-compose as well.
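A sketch of the second option, where both files treat the pre-created network as external and you keep the docker network create internal-network command:
networks:
  internal-network:
    name: internal-network
    external: true
With this block in both backend/docker-compose.yml and frontend/docker-compose.yml, the two projects can be brought up in any order.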
I have a question about how exactly docker-compose handles environment variables.
services:
  wp:
    image: wordpress:latest
    container_name: "wp"
    restart: unless-stopped
    links:
      - wpdb
    environment:
      - TZ=Europe/Berlin
      - WORDPRESS_DB_HOST=wpdb:3306
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=password
      - WORDPRESS_DB_NAME=wp
    volumes:
      - ./data:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.backend=wp"
      - "traefik.frontend.rule=Host:MASKED"
      - "traefik.port=80"
      - "traefik.docker.network=web"
    networks:
      - internal
      - web
  wpdb:
    image: mariadb:latest
    restart: unless-stopped
    container_name: "wpdb"
    environment:
      - MYSQL_ROOT_PASSWORD=1234
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=wp
    networks:
      - internal
    labels:
      - "traefik.enable=false"
    volumes:
      - ./sql:/var/lib/mysql
volumes:
  data:
  sql:
networks:
  web:
    external: true
  internal:
The compose file works great; the containers are created and work perfectly.
But when I change the defaults WORDPRESS_DB_PASSWORD=password and MYSQL_PASSWORD=password, the WordPress container throws an access denied for user error. I also tried killing the container and the volumes.
Hopefully someone has a hint for me.
You should do a docker-compose down -v, which deletes the named volumes declared in the volumes section. The only downside is that you lose all the data the service created the first time around.
Here is how I could reproduce it:
I used your compose file as a reference and, on the first run, used the default password you mentioned. The services come up fine; I install WordPress and press Ctrl+C to bring the services down, so all the MySQL data is written into the sql volume.
Pressing Ctrl+C or running docker-compose down only removes the containers and networks defined in the file, not the volumes. Read more about it here.
Now when you change the password and bring the service back up, it still uses the old volume, which was initialized with your old password (the MYSQL_* variables only take effect when the data directory is empty).
Here are the steps with which I reproduced it:
Ctrl+C to stop all the services, then update docker-compose.yml with the new password and run docker-compose up again to get the access denied error.
Run docker-compose down -v to clean up the volumes too, then run docker-compose up.
Note that docker-compose down -v loses all the data created by the prior service. Use it cautiously.
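In short, the reset sequence looks like this (a sketch; remember that down -v is destructive):
docker-compose down -v   # removes containers, networks, AND the named volumes
# edit the passwords in docker-compose.yml
docker-compose up -d     # MariaDB re-initializes with the new credentials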
I have two Docker containers: node-a and node-b. One of them (node-b) should send HTTP requests to the other (node-a). I'm starting them with Docker Compose. When I try to bring them up with Compose, I get this error:
Get http://node-a:9098: dial tcp 172.18.0.3:9098: getsockopt: connection refused
EXPOSE is declared in the Dockerfile of node-a:
EXPOSE 9098
docker-compose.yml:
version: '3'
services:
  node-a:
    image: a
    ports:
      - 9098:9098
    volumes:
      - ./:/a-src
    depends_on:
      - redis
  node-b:
    image: b
    volumes:
      - ./:/b-src
    depends_on:
      - node-a
Forwarding is enabled, and I believe the server starts, because it works well without Docker.
Where should I look? What could be causing the problem?
EDIT:
I've tried adding links, but it had no effect:
node-b:
  image: b
  volumes:
    - ./:/b-src
  links:
    - node-a
  depends_on:
    - node-a
Also, links seems to be deprecated and does the same thing as depends_on in version 2+ docker-compose.yml files:
When docker-compose executes V2 files, it will automatically build a network between all of the containers defined in the file, and every container will be immediately able to refer to the others just using the names defined in the docker-compose.yml file.
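In other words, with a V2+ file the service name is already a resolvable hostname on the default project network; a minimal sketch based on the compose file above:
version: '3'
services:
  node-a:
    image: a
    ports:
      - 9098:9098
  node-b:
    image: b
    depends_on:
      - node-a
    # inside node-b, http://node-a:9098 resolves via the built-in DNS;
    # no links section is needed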
Link a container to the service using links. (docker-compose documentation on links).
Example:
node-b:
  image: b
  volumes:
    - ./:/b-src
  depends_on:
    - node-a
  links:
    - node-a
I come here because I'm developing an app with Symfony3, and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local
networks:
  default:
nginx is clear, the engine is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before Docker, I used Magallanes or Deployer when I wanted to deploy the web app.
With Docker I can use the docker-compose file to recreate the images and containers on the server; I can also save my containers as images, export them to a tar archive, and load it on the server. That's fine for nginx and PHP-FPM, but what about Elasticsearch and the DB? I need to keep their data across future updates of the code. Then, when I deploy the code, I need to run a Doctrine migration and maybe some other commands, and Deployer does this perfectly, along with some other interesting things. So how do I deploy the code with Docker? Can we use both: Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features compared to legacy linking, such as embedded DNS, meaning your applications can reach other containers on the same network by name. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user-defined network:
docker network create --driver bridge <networkname>
Example compose service using the user-defined network:
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
Second: I noticed you didn't use data volumes for your DB and Elasticsearch. You need to mount volumes at certain points to keep your persistent data; see the sketch below.
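A minimal sketch of what that looks like for the db service, assuming a named volume called mariadb_data as in the question's compose file:
db:
  image: mariadb:latest
  volumes:
    - mariadb_data:/var/lib/mysql   # named volume survives container recreation
Remember to declare mariadb_data under the top-level volumes: key so Docker manages it for you.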
Third: when you export your containers, the export won't contain the mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a transient container from the ubuntu image, mounts the volumes from the db container, mounts the current directory into the container as /backup, and uses tar to archive /dbdata inside the container (change this to your DB's data directory) into the /backup directory mounted from the Docker host. After the operation completes, the container is removed thanks to the --rm switch.
To restore:
Copy the tar archive to the remote location and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
I'm trying to connect an ASP.NET Core container to the official PostgreSQL container using docker-compose. I've named my container aspnetcore-postgres, and its configuration contains a connection string pointing to the PostgreSQL container:
User ID=postgres;Password=5432;Host=localhost;Server=localhost;Port=5432;Database=ApplicationDbContext;Pooling=true;
And I use the following compose file:
version: '2'
services:
  web:
    image: aspnetcore-postgres
    ports:
      - "5000:5000"
    networks:
      - aspnetcoreapp-network
  postgres:
    image: postgres
    environment:
      - "POSTGRES_PASSWORD: 5432"
    networks:
      - aspnetcoreapp-network
    ports:
      - "5432:5432"
networks:
  aspnetcoreapp-network:
    driver: bridge
Whenever I run docker-compose up, the web application can't find the database container; eventually I only see the Postgres container on the network, not the web application container. I'm using Docker for Windows RC4.
Can anyone see where I'm making a wrong step?