I'm new to Traefik and can't understand why it doesn't redirect.
I've seen a lot of ways to set up a redirect, and this one suits me best because I want the redirect to work on all routers.
In particular, I don't want to add redirect labels to every router.
docker-compose.yml
services:
traefik:
image: traefik:v2.5
container_name: traefik
restart: unless-stopped
security_opt:
- no-new-privileges:true
ports:
- 80:80
- 443:443
- 8082:8082
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./data/traefik.yml:/traefik.yml:ro
- ./data/custom/:/custom/:ro
- ./data/acme.json:/acme.json
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`traefik.example.com`)"
- "traefik.http.routers.traefik.tls=true"
- "traefik.http.routers.traefik.tls.certresolver=letsEncrypt"
- "traefik.http.routers.traefik.service=api#internal"
- "traefik.http.services.traefik-traefik.loadbalancer.server.port=888"
- "traefik.http.middlewares.traefik-auth.basicauth.users=admin:$$apr1$$yTyey7a2$$CDmIjg/aratMfqENIHcQW1"
- "traefik.http.routers.traefik.middlewares=traefik-auth"
traefik.yml
api:
dashboard: true
entryPoints:
http:
address: ":80"
http:
redirections:
entryPoint:
to: https
scheme: https
permanent: true
https:
address: ":443"
metrics:
address: ":8082"
metrics:
prometheus:
entryPoint: metrics
providers:
docker:
endpoint: "unix:///var/run/docker.sock"
exposedByDefault: false
file:
directory: /custom
watch: true
certificatesResolvers:
letsEncrypt:
acme:
email: postmaster@example.com
storage: acme.json
#caServer: "https://acme-staging-v02.api.letsencrypt.org/directory"
httpChallenge:
entryPoint: http
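For reference, the documented shape of an entrypoint-level redirection in the static configuration is nested like this (the redirections block sits under a second http key inside the entrypoint); a sketch using the same entrypoint names as above:
entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
          permanent: true
  https:
    address: ":443"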
A few months ago I configured a reverse proxy with Traefik. Basically, I have an authentication server and an API: Traefik routes traffic to the authentication server when the request URL has the auth path prefix, and to the API when it has the api path prefix. Here is my whole configuration using docker-compose.yaml:
version: '3'
services:
reverse-proxy:
image: traefik:v2.5
container_name: selling-point-reverse-proxy
ports:
- 80:80
- 8080:8080
volumes:
# Traefik can listen to the Docker events
- /var/run/docker.sock:/var/run/docker.sock
command:
# Enables the web UI
- --api.insecure=true
# Tells Traefik to listen to docker
- --providers.docker
# Creates a new entrypoint called web
- --entrypoints.web.address=:80
# Disable container exposition
- --providers.docker.exposedByDefault=false
# Traefik matches against the container's labels to determine whether to create any route for that container
- --providers.docker.constraints=Label(`traefik.scope`,`selling-point`)
networks:
- selling-point
api:
image: selling-point-api
container_name: selling-point-api
build:
context: ./selling-point-api
labels:
# Tells Traefik where to redirect the request if the url has the specified prefix
- traefik.http.routers.api.rule=PathPrefix(`/api`)
# Attaches a middleware for forwarding the authentication
- traefik.http.routers.api.middlewares=forward-auth,latency-check
# Attaches entrypoints
- traefik.http.routers.api.entrypoints=web
# Exposes container
- traefik.enable=true
# Matcher for creating a route
- traefik.scope=selling-point
# Creates a service called selling-point-api
- traefik.http.services.selling-point-api.loadbalancer.server.port=3000
# Attach the container to a service
- traefik.http.routers.api.service=selling-point-api
# Creates circuit breaker middleware
- traefik.http.middlewares.latency-check.circuitbreaker.expression=LatencyAtQuantileMS(50.0) > 100
volumes:
- ./selling-point-api/src:/app/src
networks:
- selling-point
environment:
WAIT_HOSTS: mysql:3306
DATABASE_URL: mysql://root:huachinango@mysql:3306/selling_point
NODE_ENV: development
auth:
image: selling-point-auth
container_name: selling-point-auth
build:
context: ./selling-point-auth
labels:
# Tells Traefik where to redirect the request if the url has the specified prefix
- traefik.http.routers.auth.rule=PathPrefix(`/auth`)
# Creates a forward auth middleware
- traefik.http.middlewares.forward-auth.forwardauth.address=http://auth:3000/auth/authorize
# Attaches entrypoints
- traefik.http.routers.auth.entrypoints=web
# Exposes container
- traefik.enable=true
# Matcher for creating a route
- traefik.scope=selling-point
# Creates a service called selling-point-auth
- traefik.http.services.selling-point-auth.loadbalancer.server.port=3000
# Attach the container to a service
- traefik.http.routers.auth.service=selling-point-auth
# Attaches a circuit breaker middleware
- traefik.http.routers.auth.middlewares=latency-check
environment:
WAIT_HOSTS: mysql:3306
IGNORE_ENV_FILE: 'true'
DATABASE_URL: mysql://root:huachinango@mysql:3306/selling_point
PASSWORD_SALT: $$2b$$10$$g0OI8KtIE3j6OQqt1ZUDte
NODE_ENV: development
volumes:
- ./selling-point-auth/src:/app/src
networks:
- selling-point
mysql:
image: mysql:5
environment:
MYSQL_ROOT_PASSWORD: huachinango
MYSQL_DATABASE: selling_point
networks:
- selling-point
volumes:
- mysql-db:/var/lib/mysql
volumes:
mysql-db:
networks:
selling-point:
name: selling-point
driver: bridge
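If the global HTTP-to-HTTPS redirect from the question above is also wanted in this setup, the same entrypoint-level idea can be expressed with static CLI flags in the command section; a sketch, assuming a second entrypoint named websecure on port 443 with TLS configured separately:
command:
  - --entrypoints.web.address=:80
  - --entrypoints.websecure.address=:443
  # redirect everything arriving on web to websecure
  - --entrypoints.web.http.redirections.entrypoint.to=websecure
  - --entrypoints.web.http.redirections.entrypoint.scheme=https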
I made a simple Docker-based application with NGINX, PHP, PostgreSQL, Node, Mercure and Symfony just to test the capabilities of Mercure.
The problem is that I'm not getting any updates from the Symfony publisher service: there are no errors in the logs, no errors in the Symfony profiler, and CORS is working properly. Sending updates through the Mercure UI works just fine.
I'm using the latest dunglas/mercure image along with PHP 7.4 and Symfony 5.2.2.
My docker-compose file:
version: '3'
networks:
backend:
services:
# nginx
nginx-service:
image: nginx:stable-alpine
container_name: nginx-container
ports:
- "8080:80"
volumes:
- ./app:/var/www/project
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
depends_on:
- php74-service
- postgres11-service
networks:
- backend
# php
php74-service:
build:
context: .
dockerfile: ./php/Dockerfile
container_name: php74-container
ports:
- "9000:9000"
volumes:
- ./app:/var/www/project
networks:
- backend
#postgres
postgres11-service:
image: postgres:11-alpine
container_name: postgres11-container
ports:
- "5432:5432"
volumes:
- ./postgres:/var/libpostgresql/data
restart: always
environment:
POSTGRES_USER: main
POSTGRES_PASSWORD: secret
networks:
- backend
# node
node-service:
image: node:latest
container_name: node-container
volumes:
- ./app:/var/www/project
working_dir: /var/www/project
networks:
- backend
# mercure
mercure:
image: dunglas/mercure:latest
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
container_name: mercure-container
ports:
- 9090:80
networks:
- backend
My Caddyfile configuration
{
# Debug mode (disable it in production!)
debug
# HTTP/3 support
experimental_http3
}
# The address of your server
localhost:80
# enable logs
log
route {
# redirect to ui
redir / /.well-known/mercure/ui/
mercure {
demo
# Publisher JWT key
publisher_jwt !ChangeMe!
# Subscriber JWT key
subscriber_jwt !ChangeMe!
cors_origins http://localhost:8080
publish_origins http://localhost:8080
anonymous
}
respond "Not Found" 404
}
My .env configuration for Mercure (the default token with the !ChangeMe! key):
MERCURE_PUBLISH_URL=http://mercure/.well-known/mercure
MERCURE_JWT_TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJtZXJjdXJlIjp7InB1Ymxpc2giOltdfX0.Oo0yg7y4yMa1vr_bziltxuTCqb8JVHKxp-f_FwwOim0
Symfony function:
public function push(PublisherInterface $publisher): Response
{
$update = new Update(
'test',
json_encode(['status' => 'new update'])
);
// The Publisher service is an invokable object
$publisher($update);
return new Response('published!');
}
I've set up Traefik and Portainer on my server running Ubuntu 20.04 that is in my front room (I used this guide and this one, but didn't set up the default IP whitelist in the second tutorial as I want it to be a publicly accessible webserver). Both apps work and appear to be using HTTPS. I can manage and create containers in Portainer.
To test out my configuration, I added two containers - MySQL and Wordpress. I added in the Traefik labels from the above tutorials like when I set up Traefik, and I set the Wordpress container's domain name in Portainer, but whenever I try to access the Wordpress site at that domain, I get a Bad Gateway error (just the words 'Bad Gateway', not even a status code).
I'm not sure where I've gone wrong. Here are my configuration files:
traefik.yml:
api:
dashboard: true
entryPoints:
http:
address: ":80"
https:
address: ":443"
providers:
docker:
endpoint: "unix:///var/run/docker.sock"
exposedByDefault: false
file:
filename: /config.yml
certificatesResolvers:
http:
acme:
email: me@myemail.com
storage: acme.json
httpChallenge:
entryPoint: http
config.yml:
http:
middlewares:
https-redirect:
redirectScheme:
scheme: https
docker-compose.yml:
version: '3'
services:
traefik:
image: traefik:v2.0
container_name: traefik
restart: unless-stopped
security_opt:
- no-new-privileges:true
networks:
- proxy
ports:
- 80:80
- 443:443
volumes:
- /etc/localtime:/etc/localtime:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./data/traefik.yml:/traefik.yml:ro
- ./data/acme.json:/acme.json
- ./data/config.yml:/config.yml:ro
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.entrypoints=http"
- "traefik.http.routers.traefik.rule=Host(`traefik.mywebsite.com`)"
- "traefik.http.middlewares.traefik-auth.basicauth.users=michael:$$apr1$$.m1mfSB0$$6Ypx6rfih8y.vHkNQe9rJ0"
- "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
- "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
- "traefik.http.routers.traefik-secure.entrypoints=https"
- "traefik.http.routers.traefik-secure.rule=Host(`traefik.mywebsite.com`)"
- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
- "traefik.http.routers.traefik-secure.tls=true"
- "traefik.http.routers.traefik-secure.tls.certresolver=http"
- "traefik.http.routers.traefik-secure.service=api#internal"
networks:
proxy:
external: true
Wordpress/MySQL docker-compose.yml:
version: '3.1'
services:
wordpress:
image: wordpress
restart: always
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: admin
WORDPRESS_DB_PASSWORD: password
WORDPRESS_DB_NAME: wordpressdb
volumes:
- wordpress:/var/www/html
networks:
- proxy
labels:
- "traefik.enable=true"
- "traefik.http.routers.wordpress.entrypoints=http"
- "traefik.http.routers.wordpress.rule=Host(`myblog.com`)"
- "traefik.http.routers.wordpress.middlewares=https-redirect#file"
- "traefik.http.routers.wordpress-secure.entrypoints=https"
- "traefik.http.routers.wordpress-secure.rule=Host(`myblog.com`)"
- "traefik.http.routers.wordpress-secure.tls=true"
- "traefik.http.routers.wordpress-secure.tls.certresolver=http"
- "traefik.http.routers.wordpress-secure.service=wordpress"
- "traefik.http.services.wordpress.loadbalancer.server.port=9000"
- "traefik.docker.network=proxy"
db:
image: mysql:5.7
restart: always
environment:
MYSQL_DATABASE: exampledb
MYSQL_USER: username
MYSQL_PASSWORD: password
MYSQL_RANDOM_ROOT_PASSWORD: '1'
volumes:
- db:/var/lib/mysql
networks:
- proxy
labels:
- "traefik.enable=true"
- "traefik.http.routers.mysql.entrypoints=http"
- "traefik.http.routers.mysql.middlewares=https-redirect#file"
- "traefik.http.routers.mysql-secure.entrypoints=https"
- "traefik.http.routers.mysql-secure.tls=true"
- "traefik.http.routers.mysql-secure.tls.certresolver=http"
- "traefik.http.routers.mysql-secure.service=mysql"
- "traefik.http.services.mysql.loadbalancer.server.port=9000"
- "traefik.docker.network=proxy"
volumes:
wordpress:
db:
networks:
proxy:
external: true
I can provide the Portainer docker-compose.yml file too if needed, but I don't really think it's necessary. Any help here would be great!
For network connectivity between the different applications you must create the network in one of them. I would do that in your Traefik docker-compose.yml.
That means that in your Traefik compose file you must NOT declare the proxy network as external, because you create it inside that application, like this:
networks:
proxy:
In your Wordpress/MySQL docker-compose.yml you must specify a name for the external network like this:
networks:
proxy:
external:
name: "traefik_proxy"
When you create a new application using Compose, everything in the application gets a prefix, which is the name of the directory the compose file is placed in.
This means the above example only works if your Traefik compose file is placed in a directory named "traefik".
This should fix your issue with connectivity.
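If you prefer not to depend on the directory name, Compose file format 3.5 and later also lets you pin the network name explicitly; a sketch under that assumption, with the fixed name then reused by the consuming stack:
# in the traefik docker-compose.yml (creates the network under a fixed name)
networks:
  proxy:
    name: proxy
# in the Wordpress/MySQL docker-compose.yml (joins the existing network)
networks:
  proxy:
    external: true
    name: proxy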
Description
I am trying to reproduce our production configuration in my local Docker environment. After spending some time investigating and rebuilding the Docker container setup, I still can't get it to work and Graylog is not receiving any data.
Overview and interim results
web, php and db container are in use for the symfony based application
symfony runs properly on localhost in php-container and generates logfiles
symfony-logfiles are located here: /var/www/html/var/logs/*.log
symfony-logfiles format is json / gelf
all other containers are also up and running when starting the complete composition
filebeat configuration is based on first link below
filebeat.yml seems to retrieve any logfile found in any container
filebeat configured to transfer data directly to elasticsearch
elasticsearch persists data in mongodb
all graylog related data in persisted in named volumes in docker
additionally I am working with docker-sync on a Mac
The docker-compose.yml is based on the following resources:
https://github.com/jochenchrist/docker-logging-elasticsearch
http://docs.graylog.org/en/2.4/pages/installation/docker.html?highlight=docker
https://www.elastic.co/guide/en/beats/filebeat/6.3/running-on-docker.html
https://www.elastic.co/guide/en/beats/filebeat/6.3/filebeat-reference-yml.html
config.yml
# Monolog Configuration
monolog:
channels: [graylog]
handlers:
graylog:
type: stream
formatter: line_formatter
path: "%kernel.logs_dir%/graylog.log"
channels: [graylog]
docker-compose.yml
version: "3"
services:
web:
image: nginx
ports:
- "80:80"
- "443:443"
links:
- php
volumes:
- ./docker-config/nginx.conf:/etc/nginx/conf.d/default.conf
- project-app-sync:/var/www/html
- ./docker-config/localhost.crt:/etc/nginx/ssl/localhost.crt
- ./docker-config/localhost.key:/etc/nginx/ssl/localhost.key
php:
build:
context: .
dockerfile: ./docker-config/Dockerfile-php
links:
- graylog
volumes:
- project-app-sync:/var/www/html
- ./docker-config/php.ini:/usr/local/etc/php/php.ini
- ./docker-config/www.conf:/usr/local/etc/php-fpm.d/www.conf
db:
image: mysql
ports:
- "3306:3306"
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
- MYSQL_DATABASE=project
- MYSQL_USER=project
- MYSQL_PASSWORD=password
volumes:
- ./docker-config/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
- project-mysql-sync:/var/lib/mysql
# Graylog / Filebeat
filebeat:
build: ./docker-config/filebeat
volumes:
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- /var/run/docker.sock:/var/run/docker.sock
networks:
- graylog-network
depends_on:
- graylog-elasticsearch
graylog:
image: graylog/graylog:2.4
volumes:
- graylog-journal:/usr/share/graylog/data/journal
networks:
- graylog-network
environment:
- GRAYLOG_PASSWORD_SECRET=somepasswordpepper
- GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
- GRAYLOG_WEB_ENDPOINT_URI=http://127.0.0.1:9000/api
links:
- graylog-mongo:mongo
- graylog-elasticsearch:elasticsearch
depends_on:
- graylog-mongo
- graylog-elasticsearch
ports:
# Graylog web interface and REST API
- 9000:9000
graylog-mongo:
image: mongo:3
volumes:
- graylog-mongo-data:/data/db
networks:
- graylog-network
graylog-elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:5.6.10
ports:
- "9200:9200"
volumes:
- graylog-elasticsearch-data:/usr/share/elasticsearch/data
networks:
- graylog-network
environment:
- cluster.name=graylog
- "discovery.zen.minimum_master_nodes=1"
- "discovery.type=single-node"
- http.host=0.0.0.0
- transport.host=localhost
- network.host=0.0.0.0
# Disable X-Pack security: https://www.elastic.co/guide/en/elasticsearch/reference/5.6/security-settings.html#general-security-settings
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
project-app-sync:
external: true
project-mysql-sync: ~
graylog-mongo-data:
driver: local
graylog-elasticsearch-data:
driver: local
graylog-journal:
driver: local
networks:
graylog-network: ~
Dockerfile of filebeat container
FROM docker.elastic.co/beats/filebeat:6.3.1
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
# must run as root to access /var/lib/docker and /var/run/docker.sock
USER root
RUN chown root /usr/share/filebeat/filebeat.yml
# don't run with -e, to disable output to stderr
CMD [""]
filebeat.yml
filebeat.prospectors:
- type: docker
paths:
- '/var/lib/docker/containers/*/*.log'
# path to symfony based logs
- '/var/www/html/var/logs/*.log'
containers.ids: '*'
processors:
- decode_json_fields:
fields: ["host","application","short_message"]
target: ""
overwrite_keys: true
- add_docker_metadata: ~
output.elasticsearch:
# transfer data to elasticsearch container?
hosts: ["localhost:9200"]
logging.to_files: true
logging.to_syslog: false
Graylog backend
After setting up this docker composition I started the Graylog web-view and set up a collector and input as described here:
http://docs.graylog.org/en/2.4/pages/collector_sidecar.html#step-by-step-guide
Maybe I have totally misunderstood how this is supposed to work. I am not sure whether Beats from Elastic is the same thing as the filebeat container, or whether the sidecar collector is something extra I forgot to add. Maybe I misconfigured the collector and input in Graylog?!
I would be grateful for any help or a working example related to my problem ...
Graylog seems to be configured to run on http://127.0.0.1:9000/api, which points inside its own container. You might want to run it as http://graylog:9000/api or as http://0.0.0.0:9000/api instead.
Accessing the other containers from within any container has to be done using the service name as defined in the docker-compose.yml files. The URL for graylog-elasticsearch would be something like http://graylog-elasticsearch/...; if you post to localhost, the request stays inside the posting container.
Hope this helps you along in finding the solution.
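In practice that means pointing the Filebeat output at the Elasticsearch service name rather than localhost. A minimal sketch of the relevant filebeat.yml lines, assuming the service names from the docker-compose.yml above:
output.elasticsearch:
  # the compose service name resolves from inside the filebeat container; localhost would not
  hosts: ["graylog-elasticsearch:9200"]
Likewise, the suggestion above would land in the graylog service's environment, e.g. GRAYLOG_WEB_ENDPOINT_URI=http://graylog:9000/api.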
This is my docker-compose.yml
version: '2'
services:
admin_db:
build:
context: .
dockerfile: postgres.dockerfile
args:
- DB_NAME=admin_db
- DB_USER=admin
- DB_PASSWORD=admin_pass
network_mode: "default"
admin:
build:
context: .
dockerfile: admin.dockerfile
args:
- UID=$UID
- GID=$GID
- UNAME=$UNAME
command: /bin/bash
depends_on:
- admin_db
ports:
- "8000:8000"
links:
- admin_db
network_mode: "bridge"
With network_mode: "bridge" I should be able to access my app (admin) at http://127.0.0.1:8000/ from localhost, but currently I can only access it at random-ip:8000.
I can access http://127.0.0.1:8000/ when network_mode is "host", but then I'm unable to link the containers.
Is there a solution that gives me both:
- linked containers
- the app reachable at http://127.0.0.1:8000/ from localhost
If for some unknown reason normal linking doesn't work, you can always create another bridged network and connect the containers to it directly. That way the IP address of each running container will always be the same.
I would edit it like this:
version: '2'
services:
admin_db:
build:
context: .
dockerfile: postgres.dockerfile
args:
- DB_NAME=admin_db
- DB_USER=admin
- DB_PASSWORD=admin_pass
networks:
back_net:
ipv4_address: 11.0.0.2
admin:
build:
context: .
dockerfile: admin.dockerfile
args:
- UID=$UID
- GID=$GID
- UNAME=$UNAME
command: /bin/bash
depends_on:
- admin_db
ports:
- "8000:8000"
extra_hosts:
- "admin_db:11.0.0.2"
networks:
back_net:
ipv4_address: 11.0.0.3
networks:
back_net:
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
com.docker.network.bridge.name: "back"
ipam:
driver: default
config:
- subnet: 11.0.0.0/24
gateway: 11.0.0.1
Hope that helps.
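For completeness, a simpler variant (a sketch, assuming nothing else forces network_mode) is to drop network_mode entirely and put both services on one user-defined network: Compose then resolves the service name admin_db via DNS, and binding the published port to 127.0.0.1 keeps it on localhost:
version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
      # build args omitted for brevity
    networks:
      - back_net
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
    command: /bin/bash
    depends_on:
      - admin_db
    ports:
      - "127.0.0.1:8000:8000"
    networks:
      - back_net
networks:
  back_net:
    driver: bridge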
I'm using Docker Compose to create two containers. One runs an Nginx web server which serves the mydomain.com website, and the second needs to send HTTP requests to the first one (using the mydomain.com domain name).
I don't want to have to check the Nginx container's IP each time I run it and then use docker run --add-host on the second container. My goal is to run docker-compose up and have everything ready.
I know it's not possible, but what I'm looking for is something along the lines of:
# docker-compose.yml
nginx_container:
...
second_container:
extra_hosts:
# This is invalid. extra_hosts only accepts ips.
- "mydomain.com:nginx_container"
You can get a similar result using a configuration like this:
version: "3"
services:
api:
image: node:8.9.3
container_name: foo_api
domainname: api.foo.test
command: 'npm run dev'
links:
- "mongo:mongo.foo.test"
- "redis:redis.foo.test"
volumes:
- .:/app
- /app/node_modules
ports:
- "${PORT}:3000"
- "9229:9229"
depends_on:
- redis
- mongo
networks:
- backend
redis:
image: redis:3
container_name: foo_redis
domainname: redis.foo.test
ports:
- "6379:6379"
networks:
- backend
mongo:
image: mongo:3.6.2
container_name: foo_mongo
domainname: mongo.foo.test
ports:
- "${MONGO_PORT}:27017"
environment:
- MONGO_PORT=${MONGO_PORT}
networks:
- backend
networks:
backend:
driver: bridge
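Another option, for what it's worth, is a network alias on the Nginx service, so that other containers on the same network resolve the real domain name; a minimal sketch under that assumption (the backend network name and the test command are only illustrative):
version: "3"
services:
  nginx_container:
    image: nginx
    networks:
      backend:
        aliases:
          - mydomain.com   # other containers on this network resolve this name to the nginx container
  second_container:
    image: alpine
    command: wget -qO- http://mydomain.com   # hypothetical check that the alias resolves
    networks:
      - backend
networks:
  backend:
    driver: bridge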