We have the following docker-compose file:
version: '3'
services:
  postgres:
    image: postgres:12.1
    environment:
      - POSTGRES_PASSWORD=xyz
    ports:
      - '5432:5432'
    volumes:
      - postgres:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
  redis:
    image: 'redis:5-alpine'
    command: redis-server
    ports:
      - '6379:6379'
    volumes:
      - 'redis:/data'
  specs:
    image: gcr.io/project_id/container_name:latest
    depends_on:
      - 'postgres'
      - 'redis'
    build: .
    entrypoint:
      - /bin/bash
      - -c
      - |
        ...stuff that require database processing...
    environment:
      - RAILS_ENV=test
      - REDIS_URL=redis://redis:6379/0
      - DATABASE_URL=postgres://postgres:#postgres:5432/test_db
    links:
      - postgres
      - redis
volumes:
  redis:
  postgres:
networks:
  default:
    external:
      name: cloudbuild
and the following cloudbuild step (we're using the community cloud builder):
...
- name: 'gcr.io/app-dmx-sh/docker-compose'
args:
- 'run'
- 'specs'
id: 'specs'
...
we keep hitting the following error:
could not translate host name "postgres" to address: Name or service not known
We've also tried adding network_mode: cloudbuild to each container in the compose file, without success.
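For reference, that attempt looked roughly like this (a sketch of the idea rather than our exact file):

services:
  postgres:
    image: postgres:12.1
    network_mode: cloudbuild
  redis:
    image: 'redis:5-alpine'
    network_mode: cloudbuild
  specs:
    build: .
    network_mode: cloudbuild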
What should be done so that the compose file's networking works as expected within this cloudbuild environment?
I'm using ASP.NET Core and Docker, and the goal is to use Elastic APM. Here is my configuration:
Program.cs:
app.UseAllElasticApm(builder.Configuration);
appsettings.json:
"ElasticApm": {
"ServiceName": "Appraisal360APMSerivce",
"LogLevel": "verbose",
"ServerUrl": "http://localhost:8200",
"apm-server-secret-token": "",
"TransactionSampleRate": 1.0
}
docker-compose file:
version: '3.4'
services:
  apm-server:
    image: docker.elastic.co/apm/apm-server:7.15.2
    ports:
      - 8200:8200
      - 6060:6060
    volumes:
      - ./apm-server.yml:/usr/share/kibana/config/apm-server.yml
    environment:
      - output.elasticsearch.hosts=["http://elasticsearch:9200"]
    networks:
      - elastic
    command: >
      apm-server -e
      -E apm-server.rum.enabled=true
      -E apm-server.host=0.0.0.0:8200
      -E setup.kibana.host=kibana:5601
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    logging:
      driver: 'json-file'
      options:
        max-size: '200m'
        max-file: '50'
  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:8.5.0
    ports:
      - 9200:9200
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
    networks:
      - elastic
  kibana:
    container_name: kibana
    image: kibana:8.5.0
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    environment:
      - XPACK_MONITORING_ENABLED=true
      - XPACK_MONITORING_COLLECTION_ENABLED=true
      - XPACK_SECURITY_ENABLED=true
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - 5601:5601
    networks:
      - elastic
  elastic-agent:
    image: docker.elastic.co/beats/elastic-agent:8.5.0
    container_name: elastic-agent
    restart: always
    user: root # note, synthetic browser monitors require this set to `elastic-agent`
    environment:
      - fleet-server-es=http://localhost:9200
      - fleet-server-service-token=*****
      - fleet-server-policy=fleet-server-policy
networks:
  elastic:
    driver: bridge
volumes:
  elasticsearch-data:
My containers are up and running without errors.
Now the problem is that the Fleet server does not find any connection, as shown in the screenshot.
I would be thankful for any help.
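Two details in the compose file above are worth double-checking, since both relate to how containers resolve each other rather than to Fleet itself. Inside the compose network, services are reached by service name, so URLs like fleet-server-es=http://localhost:9200 (and, if the ASP.NET Core app also runs as a container, the agent's ServerUrl of http://localhost:8200) point back at the calling container instead of at elasticsearch or apm-server. The elastic-agent image also documents upper-case FLEET_SERVER_* environment variables. A hedged sketch of what the elastic-agent service could look like under those assumptions (the token and policy values are placeholders, and the variable names should be verified against the current Elastic documentation):

  elastic-agent:
    image: docker.elastic.co/beats/elastic-agent:8.5.0
    container_name: elastic-agent
    restart: always
    user: root
    networks:
      - elastic
    environment:
      - FLEET_SERVER_ENABLE=true
      - FLEET_SERVER_ELASTICSEARCH_HOST=http://elasticsearch:9200  # service name instead of localhost
      - FLEET_SERVER_SERVICE_TOKEN=*****                           # placeholder, as in the original file
      - FLEET_SERVER_POLICY_ID=fleet-server-policy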
I have deployed several workload containers from Docker Hub to Rancher. Now I need them connected through a network. How do I go about this? I have a load balancer set up. I think a network can be set up through the load balancer in the Rancher UI?
Currently I have five workloads under one namespace (webapp-9):
webapp-9-apache
webapp-9-php
webapp-9-mysql
webapp-9-solr
webapp-9-phpmyadmin
The following error occurs when pulling up the webapp-9-apache workload in the browser:
Proxy Error
Reason: DNS lookup failure for: php
Here is my docker-compose.yml:
version: '3.1'
services:
  apache:
    build:
      context: .
      dockerfile: path/to/apache/Dockerfile
    image: user:webapp-9-apache
    ports:
      - 80:80
    depends_on:
      - mysql
      - php
    volumes:
      - ./http:/path/to/web/
  php:
    build:
      context: .
      dockerfile: path/to/php/Dockerfile
    image: user:webapp-9-php
    volumes:
      - ./http:/path/to/folder/
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_RANDOM_ROOT_PASSWORD=${MYSQL_RANDOM_ROOT_PASSWORD}
    depends_on:
      - mysql
  mysql:
    build:
      context: .
      dockerfile: path/to/mysql/Dockerfile
    image: user:webapp-9-mysql
    command: mysqld --sql-mode="STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    ports:
      - 3306:3306
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_RANDOM_ROOT_PASSWORD=${MYSQL_RANDOM_ROOT_PASSWORD}
    volumes:
      - ./data:/path/to/mysql
      - .docker/mysql/config:/path/to/conf.d
  solr:
    build:
      context: .
      dockerfile: path/to/Dockerfile
    image: user:webapp-9-solr
    ports:
      - "8983:8983"
    volumes:
      - ./solr_data:/path/to/solr
    command:
      - solr-precreate
      - gettingstarted
  phpmyadmin:
    build:
      context: .
      dockerfile: path/to/phpmyadmin/Dockerfile
    image: user:webapp-9-phpmyadmin
    ports:
      - 8090:80
    environment:
      - PMA_HOST=mysql
      - PMA_PORT=3306
      - PMA_USER=${MYSQL_USER}
      - PMA_PASSWORD=${MYSQL_PASSWORD}
      - UPLOAD_LIMIT=200M
All workloads need to be under the same namespace (which they already were), and the workloads need to be named according to the services in the docker-compose.yml file,
e.g. drupal-9-spintx-php -> php
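The underlying reason the names matter: the apache container resolves the hostname php through the cluster's DNS, and that DNS only knows the workloads/services by the names they have inside the webapp-9 namespace. If renaming the workloads is not an option, a hedged alternative is to create a Service whose name matches the hostname the compose file used; the label selector and port below are assumptions about how Rancher labelled the pods and how php-fpm is exposed:

apiVersion: v1
kind: Service
metadata:
  name: php            # must match the hostname apache proxies to
  namespace: webapp-9
spec:
  selector:
    workload: webapp-9-php   # hypothetical pod label; use whatever labels the workload actually carries
  ports:
    - port: 9000             # assumed php-fpm port
      targetPort: 9000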
I am trying to install JFrog Insight by following the official documentation on the website, using the docker-compose method.
ERROR: The Compose file './docker-compose.yaml' is invalid because:
services.router.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
services.router.ports is invalid: Invalid port ":", should be [[remote_ip:]remote_port[-remote_port]:]port[/protocol]
services.router.ports value [':', ':'] has non-unique elements
The docker-compose file looks like this:
version: '3'
services:
  router:
    image: releases-docker.jfrog.io/jfrog/router:${DOCKER_VERSION_ROUTER}
    container_name: insight_router
    restart: always
    environment:
      - JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES=jfisv,jfisc
      - JF_ROUTER_ENTRYPOINTS_EXTERNALPORT=${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}
    ports:
      - ${JF_ELASTICSEARCH_TRANSPORTPORT}:${JF_ELASTICSEARCH_TRANSPORTPORT}
      - ${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}:${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT} # for router communication
    user: "${INSIGHT_USER}:${INSIGHT_USER}"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - "${ROOT_DATA_DIR}/var:/var/opt/jfrog/router"
  scheduler:
    image: ${DOCKER_REGISTRY}/jfrog/insight-scheduler:${DOCKER_VERSION_JFSC}
    container_name: insight_scheduler
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - "${ROOT_DATA_DIR}/var:/var/opt/jfrog/insight"
    logging:
      driver: json-file
      options:
        max-size: 50m
        max-file: '10'
    network_mode: service:router
  insight_server:
    image: ${DOCKER_REGISTRY}/jfrog/insight-server:${DOCKER_VERSION_JFIS}
    container_name: insight_server
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - "${ROOT_DATA_DIR}/var:/var/opt/jfrog/insight"
    logging:
      driver: json-file
      options:
        max-size: 50m
        max-file: '10'
    network_mode: service:router
  elasticsearch:
    entrypoint: ""
    command: /bin/bash -c " (/usr/local/bin/initializeSearchGuard.sh &) && docker-entrypoint.sh 'elasticsearch'"
    image: releases-docker.jfrog.io/jfrog/elasticsearch-sg:7.16.3
    container_name: insight_elasticsearch
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - "${ROOT_DATA_DIR}/var/data/elasticsearch/data:/usr/share/elasticsearch/data"
      - "${ROOT_DATA_DIR}/var/log/elasticsearch:/usr/share/elasticsearch/logs"
      - "${ROOT_DATA_DIR}/var/data/elasticsearch/config/jvm.options.d:/usr/share/elasticsearch/config/jvm.options.d"
      - "${ROOT_DATA_DIR}/var/data/elasticsearch/sgconfig:/usr/share/elasticsearch/plugins/search-guard-7/sgconfig"
      - "${ROOT_DATA_DIR}/var/data/elasticsearch/config/unicast_hosts.txt:/usr/share/elasticsearch/config/unicast_hosts.txt"
    environment:
      - transport.host=0.0.0.0
      - transport.port=9300
      - transport.publish_host=${HOST_IP}
      - bootstrap.memory_lock=true
      - node.name=${HOST_IP}
      - discovery.seed_providers=file
      - $ES_MASTER_NODE_SETTINGS
      - ELASTICSEARCH_USERNAME=REPLACE_ELASTICSEARCH_USERNAME
      - ELASTICSEARCH_PASSWORD=REPLACE_ELASTICSEARCH_PASSWORD
      - ELASTICSEARCH_CLUSTERSETUP=${JF_ELASTICSEARCH_CLUSTERSETUP}
    restart: always
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    cap_add:
      - IPC_LOCK
    logging:
      driver: json-file
      options:
        max-size: 50m
        max-file: '10'
    network_mode: service:router
Hamza, can you confirm that you are using the latest documentation (https://www.jfrog.com/confluence/display/JFROG/Installing+Insight)?
It has a step (step #3) to run the installer script ./config.sh. Did you run it?
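For context, the ports error above is what docker-compose reports when the variables used in the router's ports section expand to empty strings, so both entries collapse to ':'. Those values normally come from the shell environment or from the .env file that the installer script generates next to docker-compose.yaml, which is why running ./config.sh first matters. Once they are set, the ports section would resolve to something like this sketch (9300 matches the transport.port used further down; the router port here is only a placeholder):

    ports:
      - 9300:9300   # ${JF_ELASTICSEARCH_TRANSPORTPORT}
      - 8082:8082   # ${JF_ROUTER_ENTRYPOINTS_EXTERNALPORT}, placeholder value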
Description
I am trying to build the same configuration in my local Docker environment as on our production system. After spending some time investigating and rebuilding the Docker container setup, I still can't get it to work: Graylog is not receiving any data.
Overview and interim results
The web, php and db containers are in use for the Symfony-based application
Symfony runs properly on localhost in the php container and generates logfiles
The Symfony logfiles are located here: /var/www/html/var/logs/*.log
The Symfony logfiles are in json / gelf format
All other containers are also up and running when starting the complete composition
The filebeat configuration is based on the first link below
filebeat.yml seems to retrieve any logfile found in any container
filebeat is configured to transfer data directly to elasticsearch
elasticsearch persists data in mongodb
All Graylog-related data is persisted in named volumes in Docker
Additionally, I am working with docker-sync on a Mac
The docker-compose.yml is based on the following resources:
https://github.com/jochenchrist/docker-logging-elasticsearch
http://docs.graylog.org/en/2.4/pages/installation/docker.html?highlight=docker
https://www.elastic.co/guide/en/beats/filebeat/6.3/running-on-docker.html
https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-reference-yml.html
config.yml
# Monolog Configuration
monolog:
  channels: [graylog]
  handlers:
    graylog:
      type: stream
      formatter: line_formatter
      path: "%kernel.logs_dir%/graylog.log"
      channels: [graylog]
docker-compose.yml
version: "3"
services:
web:
image: nginx
ports:
- "80:80"
- "443:443"
links:
- php
volumes:
- ./docker-config/nginx.conf:/etc/nginx/conf.d/default.conf
- project-app-sync:/var/www/html
- ./docker-config/localhost.crt:/etc/nginx/ssl/localhost.crt
- ./docker-config/localhost.key:/etc/nginx/ssl/localhost.key
php:
build:
context: .
dockerfile: ./docker-config/Dockerfile-php
links:
- graylog
volumes:
- project-app-sync:/var/www/html
- ./docker-config/php.ini:/usr/local/etc/php/php.ini
- ./docker-config/www.conf:/usr/local/etc/php-fpm.d/www.conf
db:
image: mysql
ports:
- "3306:3306"
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=project
- MYSQL_USER=project
- MYSQL_PASSWORD=password
volumes:
- ./docker-config/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
- project-mysql-sync:/var/lib/mysql
# Graylog / Filebeat
filebeat:
build: ./docker-config/filebeat
volumes:
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- /var/run/docker.sock:/var/run/docker.sock
networks:
- graylog-network
depends_on:
- graylog-elasticsearch
graylog:
image: graylog/graylog:2.4
volumes:
- graylog-journal:/usr/share/graylog/data/journal
networks:
- graylog-network
environment:
- GRAYLOG_PASSWORD_SECRET=somepasswordpepper
- GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
- GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
links:
- graylog-mongo:mongo
- graylog-elasticsearch:elasticsearch
depends_on:
- graylog-mongo
- graylog-elasticsearch
ports:
# Graylog web interface and REST API
- 9000:9000
graylog-mongo:
image: mongo:3
volumes:
- graylog-mongo-data:/data/db
networks:
- graylog-network
graylog-elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.10
ports:
- "9200:9200"
volumes:
- graylog-elasticsearch-data:/usr/share/elasticsearch/data
networks:
- graylog-network
environment:
- cluster.name=graylog
- "discovery.zen.minimum_master_nodes=1"
- "discovery.type=single-node"
- http.host=0.0.0.0
- transport.host=localhost
- network.host=0.0.0.0
# Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
project-app-sync:
external: true
project-mysql-sync: ~
graylog-mongo-data:
driver: local
graylog-elasticsearch-data:
driver: local
graylog-journal:
driver: local
networks:
graylog-network: ~
Dockerfile of filebeat container
FROM docker.elastic.co/beats/filebeat:6.3.1
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
# must run as root to access /var/lib/docker and /var/run/docker.sock
USER root
RUN chown root /usr/share/filebeat/filebeat.yml
# don't run with -e, to disable output to stderr
CMD [""]
filebeat.yml
filebeat.prospectors:
  - type: docker
    paths:
      - '/var/lib/docker/containers/*/*.log'
      # path to symfony based logs
      - '/var/www/html/var/logs/*.log'
    containers.ids: '*'
    processors:
      - decode_json_fields:
          fields: ["host","application","short_message"]
          target: ""
          overwrite_keys: true
      - add_docker_metadata: ~
output.elasticsearch:
  # transfer data to elasticsearch container?
  hosts: ["localhost:9200"]
logging.to_files: true
logging.to_syslog: false
Graylog backend
After setting up this Docker composition, I started the Graylog web view and set up a collector and input as described here:
http://docs.graylog.org/en/2.4/pages/collector_sidecar.html#step-by-step-guide
Maybe I have totally misunderstood how this should work. I am not sure whether Beats from Elastic is the same thing as the filebeat container, and whether the sidecar collector is something extra I forgot to add. Maybe I misconfigured the collector and input in Graylog?
I would be thankful for any help or a working example that addresses my problem.
Graylog seems to be running on http://127.0.0.1:9000/api, which is inside the container. You might want to run it as http://graylog:9000/api or as http://0.0.0.0:9000/api.
Accessing the other containers from within any of the other containers has to be done with the service name as defined in the docker-compose.yml file. The URL to graylog-elasticsearch would be something like http://graylog-elasticsearch/...; if you posted to localhost it would stay inside the posting container itself.
Hope this will help you along in finding the solution.
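Applied to the files in the question, that points at two concrete adjustments in the filebeat setup, sketched below under the assumption that filebeat really should ship straight to Elasticsearch as currently configured (rather than to a Graylog Beats input): the output host becomes the service name instead of localhost, and the project-app-sync volume also gets mounted into the filebeat service, since filebeat.yml lists /var/www/html/var/logs/*.log but only the web and php containers currently mount that path.

# filebeat.yml
output.elasticsearch:
  hosts: ["graylog-elasticsearch:9200"]

# docker-compose.yml, filebeat service
  filebeat:
    build: ./docker-config/filebeat
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - project-app-sync:/var/www/html:ro
    networks:
      - graylog-network
    depends_on:
      - graylog-elasticsearch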
This is my docker-compose.yml
version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
      args:
        - DB_NAME=admin_db
        - DB_USER=admin
        - DB_PASSWORD=admin_pass
    network_mode: "default"
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
      args:
        - UID=$UID
        - GID=$GID
        - UNAME=$UNAME
    command: /bin/bash
    depends_on:
      - admin_db
    ports:
      - "8000:8000"
    links:
      - admin_db
    network_mode: "bridge"
With network_mode: "bridge" I should be able to access my app (admin) on http://127.0.0.1:8000/ from localhost, but currently I can only access it on random-ip:8000 from localhost.
I am able to access http://127.0.0.1:8000/ when network_mode is "host", but then I'm unable to link containers.
Is there any solution that gives me both:
- linked containers
- the app running on http://127.0.0.1:8000/ from localhost
If for some unknown reason normal linking doesn't work, you can always create another bridged network and connect directly to that container. By doing that, the IP address of the running container will always be the same.
I would edit it like this:
version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
      args:
        - DB_NAME=admin_db
        - DB_USER=admin
        - DB_PASSWORD=admin_pass
    networks:
      back_net:
        ipv4_address: 11.0.0.2
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
      args:
        - UID=$UID
        - GID=$GID
        - UNAME=$UNAME
    command: /bin/bash
    depends_on:
      - admin_db
    ports:
      - "8000:8000"
    extra_hosts:
      - "admin_db:11.0.0.2"
    networks:
      back_net:
        ipv4_address: 11.0.0.3
networks:
  back_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
      com.docker.network.bridge.name: "back"
    ipam:
      driver: default
      config:
        - subnet: 11.0.0.0/24
          gateway: 11.0.0.1
Hope that helps.
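As a note on the "normal linking" mentioned above: on the network docker-compose creates for the project, service names such as admin_db already resolve between containers, and the published ports mapping is what exposes the app on the host, so a fixed-IP network is only needed if that default behaviour genuinely fails. A minimal sketch of the simpler variant for comparison (assuming a native Docker setup; with Docker Toolbox or docker-machine the VM's IP is used instead of 127.0.0.1, which could explain the random-ip:8000 behaviour in the question):

version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
    depends_on:
      - admin_db
    ports:
      - "8000:8000"   # published on the host, reachable at http://127.0.0.1:8000/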