I'm using Amazon ECS to launch Docker containers that I have. Everything works fine locally, but when I'm running the containers on ECS I'm getting the following error:
"NOTICE: PHP message: Unable to open PDO connection [wrapped: SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known]"
I'm linking the containers in the docker-compose file, and I'm able to ping the mysql container from the nginx container, so I know they're linked.
docker-compose.yml:
version: '2'
services:
  nginx:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/nginx:latest"
    ports:
      - "8086:80"
    links:
      - fpm
      - mysql
  fpm:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/php-fpm:latest"
    links:
      - redis
  mysql:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/mysql:latest"
    environment:
      MYSQL_DATABASE: strix
      MYSQL_USER: strix
      MYSQL_PASSWORD: rRCd29b3fG76ypM3
      MYSQL_ROOT_PASSWORD: root
  redis:
    image: redis:latest
My symfony database.yml has the following:
dev:
  propel:
    param:
      classname: DebugPDO
      debug: { realmemoryusage: true, details: { time: { enabled: true }, slow: { enabled: true, threshold: 0.1 }, mem: { enabled: true }, mempeak: { enabled: true }, memdelta: { enabled: true } } }

task:
  propel:
    param:
      profiler: false

test:
  propel:
    param:
      classname: DebugPDO

all:
  propel:
    class: sfPropelDatabase
    param:
      classname: PropelPDO
      dsn: 'mysql:host=mysql;dbname=strix'
      username: strix
      password: xxxx
      encoding: utf8
      persistent: true
      pooling: true
I'm not sure if there's some network config that I have wrong on ECS or if I'm pointing to the wrong hostname. Any help would be appreciated. I am not familiar with symfony and am trying to raise an old application from the dead.
It turns out that when I was running Docker on my local machine, all the containers could talk to each other even though I hadn't explicitly linked them (see the sketch after the fix below for why). In this case the fpm container needed to connect to the mysql container, and was doing so locally, but I didn't know this. On ECS, because the containers were not explicitly linked, the connection error was thrown.
I fixed it by simply adding mysql to the fpm links:
fpm:
  image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/php-fpm:latest"
  links:
    - redis
    - mysql
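For reference, the reason everything could talk locally is that Compose file format 2 attaches all services to a project-wide default network, on which each service is resolvable by its service name. A sketch of making that explicit (local behavior only; the network name app-net is made up, and whether compose networks are honored on ECS depends on the tooling, which is why the links fix above is what actually resolved this):
# local-only sketch: every service attached to a user-defined network can
# resolve every other attached service by its service name
version: '2'
services:
  fpm:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/php-fpm:latest"
    networks:
      - app-net
  mysql:
    image: "xxxx.dkr.ecr.us-east-2.amazonaws.com/mysql:latest"
    networks:
      - app-net
networks:
  app-net: ~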
I have been trying to connect my web API built in ASP.NET Core 7.1 to a PostgreSQL database. It is inside a Docker container. However, every time I run docker-compose -f docker-compose.yml up, I get the following error:
Unhandled exception. System.Net.Sockets.SocketException (00000001, 11): Resource temporarily unavailable
I assume this means that something has gone wrong with the database connection, but I don't know how to fix it. Here is my docker-compose.yml:
version: '3.8'
services:
  server:
    build: ./Test
    ports:
      - "8000:80"
    depends_on:
      - db
  db:
    container_name: db
    image: postgres:latest
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=data
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "1234:5432"
    networks:
      - db-network
networks:
  db-network:
    driver: bridge
volumes:
  pgdata:
And here is my appsettings.json from the backend:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "ConnectionStrings": {
    "Data": "Host=db;Port=5432;Database=data;User ID=user;Password=pass"
  },
  "AllowedHosts": "*"
}
I have tried changing the connection strings, password and user id but I keep getting the same error.
Either remove the networks section from docker-compose.yml (so the default network is used) or add db-network to the server service. Note that inside that network the API connects to db on the container port 5432, as your connection string already does, not on the published host port 1234:
services:
  server:
    build: ./Test
    ports:
      - "8000:80"
    depends_on:
      - db
    networks:
      - db-network
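If the error persists intermittently at startup, note that depends_on only orders container startup; it does not wait for Postgres to accept connections. A sketch using a healthcheck-gated dependency (supported by the Compose Spec as implemented by the docker compose v2 CLI; the interval values here are arbitrary):
services:
  server:
    build: ./Test
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:latest
    healthcheck:
      # pg_isready ships in the postgres image
      test: ["CMD-SHELL", "pg_isready -U user -d data"]
      interval: 5s
      timeout: 5s
      retries: 10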
I am running Airflow 2 with docker-compose (works great), but I cannot make it accessible behind an nginx proxy, using a combination of nginxproxy/nginx-proxy and nginxproxy/acme-companion.
Other projects work fine with that combo (meaning the combo itself works), but it seems I need to change some Airflow configs to make it work here.
The Airflow docker-compose file includes the following:
x-airflow-common:
  &airflow-common
  build: ./airflow-docker/
  environment:
    AIRFLOW__WEBSERVER__BASE_URL: 'http://abc.def.com'
    AIRFLOW__WEBSERVER__ENABLE_PROXY_FIX: 'true'
    [...]

services:
  [...]
  airflow-webserver:
    <<: *airflow-common
    command: webserver
    expose:
      - "8080"
    environment:
      - VIRTUAL_HOST=abc.def.com
      - LETSENCRYPT_HOST=abc.def.com
      - LETSENCRYPT_EMAIL=some.email@def.com
    networks:
      - proxy_default # proxy_default is the docker network the nginx-proxy container runs in
      - default
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      [...]
    [...]
  [...]

networks:
  proxy_default:
    external: true
Airflow can be reached under the (successfully encrypted) address, but opening that URL results in the "Ooops! Something bad has happened." Airflow error, more specifically "sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: session", even though everything works fine when not behind the proxy.
What am I missing?
Description
I am trying to reproduce our production system's configuration in my local Docker environment. After spending some time investigating and rebuilding the Docker container setup, I still can't get it to work: Graylog is not receiving any data.
Overview and interim results
web, php, and db containers are in use for the Symfony-based application
Symfony runs properly on localhost in the php container and generates logfiles
the Symfony logfiles are located here: /var/www/html/var/logs/*.log
the Symfony logfiles are in JSON/GELF format
all other containers are also up and running when starting the complete composition
filebeat configuration is based on first link below
filebeat.yml seems to retrieve any logfile found in any container
filebeat configured to transfer data directly to elasticsearch
elasticsearch persists data in mongodb
all Graylog-related data is persisted in named Docker volumes
additionally I am working with docker-sync on a Mac
The docker-compose.yml is based on the following resources:
https://github.com/jochenchrist/docker-logging-elasticsearch
http://docs.graylog.org/en/2.4/pages/installation/docker.html?highlight=docker
https://www.elastic.co/guide/en/beats/filebeat/6.3/running-on-docker.html
https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-reference-yml.html
config.yml
# Monolog Configuration
monolog:
    channels: [graylog]
    handlers:
        graylog:
            type: stream
            formatter: line_formatter
            path: "%kernel.logs_dir%/graylog.log"
            channels: [graylog]
docker-compose.yml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    links:
      - php
    volumes:
      - ./docker-config/nginx.conf:/etc/nginx/conf.d/default.conf
      - project-app-sync:/var/www/html
      - ./docker-config/localhost.crt:/etc/nginx/ssl/localhost.crt
      - ./docker-config/localhost.key:/etc/nginx/ssl/localhost.key
  php:
    build:
      context: .
      dockerfile: ./docker-config/Dockerfile-php
    links:
      - graylog
    volumes:
      - project-app-sync:/var/www/html
      - ./docker-config/php.ini:/usr/local/etc/php/php.ini
      - ./docker-config/www.conf:/usr/local/etc/php-fpm.d/www.conf
  db:
    image: mysql
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=project
      - MYSQL_USER=project
      - MYSQL_PASSWORD=password
    volumes:
      - ./docker-config/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
      - project-mysql-sync:/var/lib/mysql
  # Graylog / Filebeat
  filebeat:
    build: ./docker-config/filebeat
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - graylog-network
    depends_on:
      - graylog-elasticsearch
  graylog:
    image: graylog/graylog:2.4
    volumes:
      - graylog-journal:/usr/share/graylog/data/journal
    networks:
      - graylog-network
    environment:
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
    links:
      - graylog-mongo:mongo
      - graylog-elasticsearch:elasticsearch
    depends_on:
      - graylog-mongo
      - graylog-elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
  graylog-mongo:
    image: mongo:3
    volumes:
      - graylog-mongo-data:/data/db
    networks:
      - graylog-network
  graylog-elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.10
    ports:
      - "9200:9200"
    volumes:
      - graylog-elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - graylog-network
    environment:
      - cluster.name=graylog
      - "discovery.zen.minimum_master_nodes=1"
      - "discovery.type=single-node"
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      # Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
volumes:
  project-app-sync:
    external: true
  project-mysql-sync: ~
  graylog-mongo-data:
    driver: local
  graylog-elasticsearch-data:
    driver: local
  graylog-journal:
    driver: local
networks:
  graylog-network: ~
Dockerfile of filebeat container
FROM docker.elastic.co/beats/filebeat:6.3.1
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
# must run as root to access /var/lib/docker and /var/run/docker.sock
USER root
RUN chown root /usr/share/filebeat/filebeat.yml
# don't run with -e, to disable output to stderr
CMD [""]
filebeat.yml
filebeat.prospectors:
- type: docker
  paths:
    - '/var/lib/docker/containers/*/*.log'
    # path to symfony based logs
    - '/var/www/html/var/logs/*.log'
  containers.ids: '*'

processors:
- decode_json_fields:
    fields: ["host","application","short_message"]
    target: ""
    overwrite_keys: true
- add_docker_metadata: ~

output.elasticsearch:
  # transfer data to elasticsearch container?
  hosts: ["localhost:9200"]

logging.to_files: true
logging.to_syslog: false
Graylog backend
After setting up this docker composition I started the Graylog web-view and set up a collector and input as described here:
http://docs.graylog.org/en/2.4/pages/collector_sidecar.html#step-by-step-guide
Maybe I have totally misunderstood how this should work. I am not sure whether Beats from Elastic is the same thing as the filebeat container, and whether the sidecar collector is something extra I forgot to add. Maybe I misconfigured the collector and input in Graylog?
I would be thankful for any help or a working example for my problem ...
Graylog seems to be configured with http://127.0.0.1:9000/api, which points inside the Graylog container itself. You might want to use http://graylog:9000/api or http://0.0.0.0:9000/api instead.
Reaching another container from within a container is done with the service name, as defined in the docker-compose.yml file. The URL to graylog-elasticsearch would be something like http://graylog-elasticsearch:9200/...; if you post to localhost, the request stays inside the posting container.
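For instance, a minimal sketch of the most likely culprit in the setup above (the service name is taken from the posted docker-compose.yml):
# filebeat.yml — point the output at the elasticsearch service name, so the
# data leaves the filebeat container instead of hitting its own loopback
output.elasticsearch:
  hosts: ["graylog-elasticsearch:9200"]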
Hope this helps you along in finding the solution.
I want to pass configuration parameters from a docker-compose file to my Symfony application. I tried this:
app:
  image: <NGINX + PHP-FPM IMAGE>
  environment:
    DATABASE_HOST: db
My parameters.yml file:
database_host: "%env(DATABASE_HOST)%"
I get a 500 error: "Environment variable not found: DATABASE_HOST".
I also tried SYMFONY__DATABASE_HOST in docker-compose, but that did not work either.
How does it work?
The error you're receiving refers to Symfony runtime environment variables (see here). Docker Compose environment variables are retrieved from the .env file that resides within the build context (i.e., the directory you run docker-compose from) of the docker-compose.yml file. What you really want to do is make sure that the environment variables you set in your Symfony config/parameters and in your docker-compose.yml file match, especially the database host.
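To illustrate the two layers (a sketch; the service and image names are placeholders from the question): values in .env are substituted by docker-compose at parse time, while the environment: block is what the container, and therefore Symfony's %env()% resolver at runtime, actually sees.
# .env — read by docker-compose for ${...} substitution
DATABASE_HOST=db
# docker-compose.yml — forwards the value into the container's environment,
# where parameters.yml's "%env(DATABASE_HOST)%" can resolve it at runtime
app:
  image: <NGINX + PHP-FPM IMAGE>   # placeholder from the question
  environment:
    DATABASE_HOST: ${DATABASE_HOST}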
You should also consider splitting your image into separate PHP-FPM and Nginx images for better scalability and separation of concerns, following the best practices outlined by Docker here. All of these technologies have regularly maintained images available on Docker Hub.
Here's my docker-compose.yml, which creates containers for PHP-FPM, MySQL, and Nginx in a multi-container environment with the latest stable packages available as of this writing:
version: '3'
services:
  db:
    image: mysql
    volumes:
      - ./.data/db:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
  php:
    build: php7.1-fpm
    depends_on:
      - db
    ports:
      - "9000:9000"
    volumes:
      - symfony-bind-mount:/var/www/symfony
      - symfony-logs-bind-mount:/var/www/symfony/app/logs
  nginx:
    build: nginx
    depends_on:
      - php
    ports:
      - "8025:8025"
      - "80:80"
    volumes:
      - symfony-bind-mount:/var/www/symfony
      - nginx-logs-bind-mount:/var/log/nginx
volumes:
  symfony-bind-mount:
    driver: local
    driver_opts:
      o: bind,rw
      type: none
      device: ${SYMFONY_APP_PATH}
  nginx-logs-bind-mount:
    driver: local
    driver_opts:
      o: bind,rw
      type: none
      device: ${DOCKER_SYMFONY_PATH}/logs/nginx
  symfony-logs-bind-mount:
    driver: local
    driver_opts:
      o: bind,rw
      type: none
      device: ${DOCKER_SYMFONY_PATH}/logs/symfony
Here's my .env file:
# Symfony application's path
SYMFONY_APP_PATH=/absolute/path/to/symfony/project
# Path to local docker-symfony repository
DOCKER_SYMFONY_PATH=/absolute/path/to/sibling/docker/directory
# MySQL
MYSQL_ROOT_PASSWORD=root
MYSQL_DATABASE=mydb
MYSQL_USER=
MYSQL_PASSWORD=password
You can check out my fork of maxpou's docker-symfony repository here. I've updated the docker-compose.yml and the bind mounts to be compatible with version 3 of the Compose file format.
I am trying to install a Salt minion from the master using salt-ssh.
This is my SLS file:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1
  service:
    - running
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion

/etc/salt/minion:
  file.managed:
    - source: salt://minion/minion.conf
    - user: root
    - group: root
    - mode: 644
And this is my roster file:
minion3:
  host: 192.168.33.103
  user: vagrant
  passwd: vagrant
  sudo: True
My problem is that when I run
sudo salt-ssh -i '*' state.sls
I get this error:
ID: salt-minion
Function: service.running
Result: False
Comment: One or more requisite failed: install_minion./etc/salt/minion
Started:
Duration:
Changes:
Strangely, it works fine when I run it a second time.
Any pointers to what I am doing wrong would be very helpful.
When installing Salt on a machine via SSH, you might want to look at Salt's saltify module.
It connects to a machine using SSH, runs a bootstrap method, and registers the new minion with the master. By default it runs the standard Salt bootstrap script, but you can provide your own.
I have a similar setup running in my Salt/Consul example here. It was originally targeted at DigitalOcean, but it also works with Vagrant (see cheatsheet.adoc for more information). A vagrant up followed by a salt-cloud -m mapfile-vagrant.yml will provision all minions using SSH.
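A minimal sketch of what saltify provisioning looks like with salt-cloud (the provider and profile names here are made up; host and credentials are taken from the roster above):
# /etc/salt/cloud.providers.d/saltify.conf
my-saltify:
  driver: saltify
# /etc/salt/cloud.profiles.d/saltify.conf
vagrant-minion:
  provider: my-saltify
  ssh_host: 192.168.33.103
  ssh_username: vagrant
  password: vagrant
Provisioning the machine is then a single salt-cloud -p vagrant-minion minion3.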
Solved it.
The state file should be like this:
salt-minion:
  pkgrepo:
    - managed
    - ppa: saltstack/salt
    - require_in:
      - pkg: salt-minion
  pkg.installed:
    - version: 2015.5.3+ds-1trusty1

/etc/salt/minion:
  file.managed:
    - template: jinja
    - source: salt://minion/files/minion.conf.j2
    - user: root
    - group: root
    - mode: 644

salt-minion_watch:
  service:
    - name: salt-minion
    - running
    - enable: True
    - restart: True
    - watch:
      - file: /etc/salt/minion
      - pkg: salt-minion
This works for me, though I am not clear on the reason.