I am trying to run an ASP.NET Core 2.0 application (Redis + RabbitMQ + NGINX) on Docker.
When I bring these containers up via docker-compose, the services work and are even reachable from Windows, since their ports are mapped as "HostPort:ContainerPort".
However, when testing the app itself, .NET reports in the console that it was not possible to connect to Redis, for example:
fail: Microsoft.AspNetCore.Server.Kestrel[13]
Connection id "0HLDGDJNAEB9E", Request id "0HLDGDJNAEB9E:00000001": An unhandled exception was thrown by the application.
StackExchange.Redis.RedisConnectionException: It was not possible to connect to the redis server(s); to create a disconnected multiplexer, disable AbortOnConnectFail. SocketFailure on PING.
My docker-compose.yml:
version: '3'
services:
  nginx:
    build:
      dockerfile: ./nginx/nginx.dockerfile
      context: .
    image: nginx
    container_name: nginx
    ports:
      - "80:80"
    networks:
      - production-network
    depends_on:
      - "wordSearcherApp"
  wordSearcherApp:
    image: wordsearcherapplication
    container_name: wordsearcherapp
    build:
      context: .
      dockerfile: WordSearcher/Dockerfile
    networks:
      - production-network
    ports:
      - "61370"
    volumes:
      - repository:/repository
    depends_on:
      - redis
      - rabbit
  redis:
    image: redis
    container_name: redis
    ports:
      - "6379:6379"
    networks:
      - production-network
  rabbit:
    image: rabbitmq
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      - production-network
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:15672"]
      interval: 30s
      timeout: 10s
      retries: 5
networks:
  production-network:
    driver: bridge
volumes:
  repository:
    driver: local
For the connection in C#, I use the connection string localhost:6379.
How can I make this work?
Thanks.
Use redis:6379 instead of localhost:6379.
Docker Compose uses the name you've defined for a service in the docker-compose.yml file as the hostname of its container, so other containers on the same network can reach Redis at redis. Inside the app container, localhost refers to the app container itself, not to your host machine.
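For the C# side, the only change needed is the host name in the connection string. As a minimal sketch (the ConnectionStrings__Redis variable name is my illustration, not something from the question; ASP.NET Core maps the double underscore to the ConnectionStrings:Redis configuration key), you could pass it in through the compose file, and the abortConnect=false option that the error message hints at can ride along in the same string:

# hypothetical excerpt of docker-compose.yml; only the environment entry is new
wordSearcherApp:
  environment:
    # "redis" resolves via Docker's DNS on the shared production-network;
    # abortConnect=false tells StackExchange.Redis not to fail hard at startup
    - ConnectionStrings__Redis=redis:6379,abortConnect=false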
Related
I have deployed several workload containers from Docker Hub to Rancher. Now I need them connected through a network. How do I go about this? I have a load balancer set up; I think a network can be set up through the load balancer in the Rancher UI?
Currently I have five workloads under one namespace (webapp-9):
webapp-9-apache
webapp-9-php
webapp-9-mysql
webapp-9-solr
webapp-9-phpmyadmin
The following error occurs when pulling up the webapp-9-apache workload in the browser:
Proxy Error
Reason: DNS lookup failure for: php
Here is my docker-compose.yml:
version: '3.1'
services:
  apache:
    build:
      context: .
      dockerfile: path/to/apache/Dockerfile
    image: user:webapp-9-apache
    ports:
      - 80:80
    depends_on:
      - mysql
      - php
    volumes:
      - ./http:/path/to/web/
  php:
    build:
      context: .
      dockerfile: path/to/php/Dockerfile
    image: user:webapp-9-php
    volumes:
      - ./http:/path/to/folder/
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_RANDOM_ROOT_PASSWORD=${MYSQL_RANDOM_ROOT_PASSWORD}
    depends_on:
      - mysql
  mysql:
    build:
      context: .
      dockerfile: path/to/mysql/Dockerfile
    image: user:webapp-9-mysql
    command: mysqld --sql-mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    ports:
      - 3306:3306
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_RANDOM_ROOT_PASSWORD=${MYSQL_RANDOM_ROOT_PASSWORD}
    volumes:
      - ./data:/path/to/mysql
      - .docker/mysql/config:/path/to/conf.d
  solr:
    build:
      context: .
      dockerfile: path/to/Dockerfile
    image: user:webapp-9-solr
    ports:
      - "8983:8983"
    volumes:
      - ./solr_data:/path/to/solr
    command:
      - solr-precreate
      - gettingstarted
  phpmyadmin:
    build:
      context: .
      dockerfile: path/to/phpmyadmin/Dockerfile
    image: user:webapp-9-phpmyadmin
    ports:
      - 8090:80
    environment:
      - PMA_HOST=mysql
      - PMA_PORT=3306
      - PMA_USER=${MYSQL_USER}
      - PMA_PASSWORD=${MYSQL_PASSWORD}
      - UPLOAD_LIMIT=200M
All workloads need to be under the same namespace (which they already were), and the workloads need to be named after the services in the docker-compose.yml file, so that the hostnames the containers look up (such as php) resolve.
e.g. drupal-9-spintx-php -> php
I have the following docker-compose.yml, but need to model a public/private network split where the Redis instance must only be accessible from localhost.
version: "2.2" # for compatibility, I can upgrade if needed
services:
nginx:
image: nginx
ports:
- 8080:80
redis:
image: redis
ports:
- 6379:6379
This would be straightforward if Redis only needed to be reachable from within the Docker network. Consider:
version: "2.2"
services:
nginx:
image: nginx
ports:
- 8080:80
networks:
- frontend
- backend
redis:
image: redis
ports:
- 6379:6379
networks:
- backend
networks:
frontend: {}
backend:
internal: true
However, our local web developers need to be able to access that Redis instance from their host machines (outside of the Docker network) when they build, run, and debug locally.
Just bind the Redis service port to localhost (127.0.0.1).
Try the following:
...
  redis:
    image: redis
    ports:
      - 127.0.0.1:6379:6379
...
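Putting the two pieces together, a minimal sketch of the whole file could look like the following. One caveat worth knowing: backend can no longer be marked internal: true, because an internal network has no gateway to the host, so a port published on it would not be reachable.

version: "2.2"
services:
  nginx:
    image: nginx
    ports:
      - 8080:80
    networks:
      - frontend
      - backend
  redis:
    image: redis
    ports:
      # bound to the loopback interface only: reachable from each developer's
      # own machine via localhost, but not from other hosts on the network
      - 127.0.0.1:6379:6379
    networks:
      - backend
networks:
  frontend: {}
  backend: {}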
Run a Redis web UI called redis-commander.
Use environment variables to point it at the running Redis.
Expose this new container and access it instead of exposing the Redis container.
services:
  redis:
    # Comment out ports! There is no need to expose Redis itself.
    # ports:
    #   - 6379:6379
    # ...
  redis-commander:
    image: rediscommander/redis-commander:latest
    restart: unless-stopped
    environment:
      REDIS_HOST: redis:6379 # <-- 🔴 here you point at your Redis
      # REDIS_PASSWORD # <- in case Redis is protected with a password
    ports:
      - "8081:8081"
Let your developers go to http://localhost:8081 and enjoy.
You can find more details on the redis-commander image's Docker Hub page.
I'm having an issue with my php container not connecting to my database container.
My docker-compose.yml:
version: "2"
volumes:
# this is the mysql data volume we are going to host the data inside
dev_mysql_data:
# This volume is used for elasticsearch
dev_elastic_search:
networks:
mp_pixel:
driver: bridge
ipam:
driver: default
config:
- subnet: 172.20.0.0/16
services:
# database container for local development purposes
dev_database:
image: mysql:5.6
networks:
mp_pixel:
aliases:
- database
ports:
# port 3304 (external) is for use on your desktop sql client
# port 3306 (internal) is for use inside your application code
- 3304:3306
volumes:
# mount the mysql_data docker volume to host the local development database
- dev_mysql_data:/var/lib/mysql
# the provision file helps when trying to use the provision script to clone databases
- ./provision.cnf:/provision.cnf
environment:
MYSQL_ROOT_PASSWORD: pixel
# This is the local development version of the nginx container
dev_nginx:
image: mp-pixel-nginx:latest
build: ./nginx
ports:
- '80:80'
- '443:443'
networks:
mp_pixel:
aliases:
- nginx
depends_on:
- dev_phpfpm
volumes_from:
- dev_phpfpm
environment:
- VIRTUAL_HOST=~^(mp-pixel|mp-location|mp-feedback|mp-user|mp-phone|mp-loancalculator|mp-seo|mp-media|mp-listing|mp-development|mp-kpi|mp-newsletter|mp-auth|mp-worker|mp-search)-ph-dev.pixel.local
# This is the local development version of the phpfpm container
dev_phpfpm:
image: mp-pixel-phpfpm:latest
build:
context: ./
args:
# this build might fail, if so, run in a terminal: export SSH_KEY=$(cat ~/.ssh/id_rsa)
- SSH_KEY=$SSH_KEY
networks:
mp_pixel:
aliases:
- phpfpm
depends_on:
- dev_database
volumes:
# we override the images /www directory with the code from the live machine
- ./:/www
env_file:
# inside this file, are the shared database secrets such as username/password
- ./env/common
- ./env/dev
dev_elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:5.3.3
networks:
mp_pixel:
aliases:
- elasticsearch
ulimits:
memlock:
soft: -1
hard: -1
nofile:
soft: 65536
hard: 65536
mem_limit: 1g
cap_add:
- IPC_LOCK
volumes:
- dev_elastic_search:/usr/share/elasticsearch/data
ports:
- 9200:9200
environment:
- cluster.name=dev-elasticsearch-pixel
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "xpack.security.enabled=false"
I run it with docker-compose up, and the PHP logs show:
An exception occurred in driver: SQLSTATE[HY000] [2002] Connection timed out
I try to access the database container with docker exec, and I can confirm that I have the right credentials.
What could be the problem?
When your containers are up, did you already try to connect to the database with a tool like Sequel Pro? Maybe the database is just not initialized yet, and that is why the connection from the PHP container can't be established; you tried to access the db container, but not the database itself.
Additionally, you could add some more environment variables to the database section of your docker-compose.yml:
environment:
  - MYSQL_ALLOW_EMPTY_PASSWORD=yes
  - MYSQL_DATABASE=databasename
  - MYSQL_USER=databaseuser
  - MYSQL_PASSWORD=databasepassword
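If the root cause really is timing (MySQL still initializing when the PHP container first connects), a healthcheck plus the condition form of depends_on can enforce the ordering. This is a minimal sketch, assuming compose file format 2.1 or later, with the mysqladmin ping test and the pixel root password taken from the question:

version: "2.1" # the condition form of depends_on needs file format 2.1+
services:
  dev_database:
    image: mysql:5.6
    healthcheck:
      # succeeds only once mysqld actually accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-ppixel"]
      interval: 10s
      timeout: 5s
      retries: 10
  dev_phpfpm:
    depends_on:
      dev_database:
        condition: service_healthy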
Hope that helps.
I'm starting with Docker. To begin, I want to set up a simple nginx server with a proxy and SSL (only locally, on my machine), so I did something like this:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs:ro
      - /etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true
  nginx-proxy-ssl:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-ssl
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      - nginx-proxy
  whoami2:
    image: jwilder/whoami
    container_name: whoami2
    environment:
      - VIRTUAL_HOST=vertex.local.com
      - LETSENCRYPT_HOST=vertex.local.com
      - LETSENCRYPT_EMAIL=contact@vertex.local.com
networks:
  default:
    external:
      name: developer
Over plain HTTP everything is fine and I get the site, but Let's Encrypt returns an error:
Unable to reach http://vertex.local.com/.well-known/acme-challenge/zr0QPZ53RHLRFKy76GX1NKx3lY4GPIaVorH4PT88_Ew: HTTPConnectionPool(host='vertex.local.com', port=80): Max retries exceeded with url: /.well-known/acme-challenge/zr0QPZ53RHLRFKy76GX1NKx3lY4GPIaVorH4PT88_Ew (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff41fd2e550>: Failed to establish a new connection: [Errno 111] Connection refused',))
So I have a question:
Do I need a real, existing domain for Docker (even locally)? If so, how should I set up this domain, both on the provider's side (DNS records, etc.) and on my local machine?
Or how else can I set up SSL in my containers?
This is my docker-compose.yml:
version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
      args:
        - DB_NAME=admin_db
        - DB_USER=admin
        - DB_PASSWORD=admin_pass
    network_mode: "default"
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
      args:
        - UID=$UID
        - GID=$GID
        - UNAME=$UNAME
    command: /bin/bash
    depends_on:
      - admin_db
    ports:
      - "8000:8000"
    links:
      - admin_db
    network_mode: "bridge"
With network_mode: "bridge" I should be able to access my app (admin) on http://127.0.0.1:8000/ from localhost, but currently I can only reach it on random-ip:8000.
I am able to access http://127.0.0.1:8000/ when network_mode is "host", but then I'm unable to link containers.
Is there any solution that gives me both things?
- linked containers
- app running on http://127.0.0.1:8000/ from localhost
If for some unknown reason normal linking doesn't work, you can always create another bridged network and connect directly to that container. By doing that, the IP address of the running container will always be the same.
I would edit it like this:
version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
      args:
        - DB_NAME=admin_db
        - DB_USER=admin
        - DB_PASSWORD=admin_pass
    networks:
      back_net:
        ipv4_address: 11.0.0.2
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
      args:
        - UID=$UID
        - GID=$GID
        - UNAME=$UNAME
    command: /bin/bash
    depends_on:
      - admin_db
    ports:
      - "8000:8000"
    extra_hosts:
      - "admin_db:11.0.0.2"
    networks:
      back_net:
        ipv4_address: 11.0.0.3
networks:
  back_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
      com.docker.network.bridge.name: "back"
    ipam:
      driver: default
      config:
        - subnet: 11.0.0.0/24
          gateway: 11.0.0.1
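As an aside (my assumption, not part of the answer above): if the static addressing turns out to be unnecessary, dropping network_mode and putting both services on a single user-defined network usually achieves both goals at once, because Compose's embedded DNS resolves the service name admin_db from the admin container, and ports: "8000:8000" publishes on all host interfaces, including 127.0.0.1:

version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
    networks:
      - back_net
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
    depends_on:
      - admin_db
    ports:
      # published on 0.0.0.0 by default, so http://127.0.0.1:8000/ works
      - "8000:8000"
    networks:
      - back_net
networks:
  back_net:
    driver: bridge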
Hope that helps.