I'm trying to get Kafka up and running on my Mac using docker compose.
This is my docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: ********
    network_mode: "host"
    hostname: "zookeeper"
    environment:
      - "MYID=1"
    ports:
      - "2181:2181"
      - "3888:3888"
  mysql:
    image: *******
    network_mode: "host"
    hostname: "mysql"
    environment:
      - "MYSQL_ROOT_PASSWORD=password"
    ports:
      - "3306:3306"
  schema-registry:
    image: ********
    network_mode: "host"
    hostname: "schema-registry"
    environment:
      - "ZOOKEEPER_URL=127.0.0.1:2181"
    ports:
      - "8081:8081"
  kafka:
    image: **********
    network_mode: "host"
    hostname: "kafka"
    environment:
      - "SERVICE_NAME=localhost"
      - "SERVICE_TAGS=syracuse-dev"
      - "KAFKA_ADVERTISED_HOST_NAME=localhost"
      - "KAFKA_ZOOKEEPER_CONNECT=localhost:2181"
      - "KAFKA_NUM_PARTITIONS=10"
      - "KAFKA_LISTENERS=PLAINTEXT://:9092"
      - "KAFKA_BROKER_ID=1"
      - "KAFKA_DEFAULT_REPLICATION_FACTOR=1"
    ports:
      - "9092:9092"
      - "7203:7203"
Everything gets up and running with the exception of Kafka. As Kafka loads it looks for Zookeeper; once it's found, I receive the following error:
Found zookeeper
Error: Exception thrown by the agent : java.net.MalformedURLException: Local host name unknown: java.net.UnknownHostException: moby: moby: unknown error
I was able to get Kafka up and running by removing the network_mode: "host" line from each container and setting the Zookeeper URL to "zookeeper:2181".
It's unclear to me what network_mode does and why it kept Kafka from running. I'm hoping someone can shed some light on this and educate me.
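For reference, the working variant looked roughly like this (image names omitted as above; I've trimmed it to the zookeeper and kafka services, and the right advertised host name may depend on where your clients connect from):
version: '2'
services:
  zookeeper:
    image: ********
    hostname: "zookeeper"
    environment:
      - "MYID=1"
    ports:
      - "2181:2181"
  kafka:
    image: **********
    hostname: "kafka"
    environment:
      # service names resolve on the default compose network, so no host networking is needed
      - "KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181"
      # clients on the Mac reach the broker through the published port
      - "KAFKA_ADVERTISED_HOST_NAME=localhost"
      - "KAFKA_LISTENERS=PLAINTEXT://:9092"
      - "KAFKA_BROKER_ID=1"
    ports:
      - "9092:9092"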
much appreciated
Instead of
- "KAFKA_ADVERTISED_HOST_NAME=localhost"
- "KAFKA_ZOOKEEPER_CONNECT=localhost:2181"
you may provide them with the name of the zookeeper container, which is zookeeper in your case:
- "KAFKA_ADVERTISED_HOST_NAME=zookeeper"
- "KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181"
Does that help?
I was using Keycloak 16. Now that I want to upgrade to Keycloak 20, I see that they have changed a lot.
This is my docker-compose.yml file from 16:
version: "3.9"
services:
accounts:
image: jboss/keycloak:latest
container_name: Accounts
ports:
- 8080:8080
environment:
- KEYCLOAK_FRONTEND_URL=https://accounts.example.local/auth
- PROXY_ADDRESS_FORWARDING=true
- KEYCLOAK_USER=user
- KEYCLOAK_PASSWORD=pass
- DB_VENDOR=mariadb
- DB_ADDR=database
- DB_DATABASE=accounts
- DB_USER=db_user
- DB_PASSWORD=db_pass
logging:
driver: none
restart: always
database:
image: mariadb
container_name: AccountsDatabase
ports:
- 3306:3306
environment:
- MARIADB_ROOT_PASSWORD=root_pass
- MYSQL_DATABASE=accounts
- MYSQL_USER=db_user
- MYSQL_PASSWORD=db_pass
volumes:
- /Temp/AccountsDatabases:/var/lib/mysql
logging:
driver: none
restart: always
admin:
image: adminer
container_name: AccountsAdminer
restart: always
logging:
driver: none
ports:
- 8080:8080
environment:
- ADMINER_DEFAULT_SERVER=database
Now it seems that Keycloak needs a database URL.
I can't figure out how to connect MariaDB to Keycloak. I can't work out the URL of my MariaDB instance, and the Keycloak blog says they won't provide examples for any database other than their first-class PostgreSQL.
I'm stuck at this point. Any help is appreciated.
Their documentation shows that KC_DB_URL is a JDBC URL.
The simple form jdbc:mariadb://host/database seems to be what their tests use, so for you:
environment:
  - KEYCLOAK_FRONTEND_URL=https://accounts.example.local/auth
  - PROXY_ADDRESS_FORWARDING=true
  - KEYCLOAK_USER=user
  - KEYCLOAK_PASSWORD=pass
  - KC_DB_URL=jdbc:mariadb://database/accounts
  - KC_DB_USERNAME=db_user
  - KC_DB_PASSWORD=db_pass
Note: I'm hoping/assuming the JDBC driver for MariaDB is in their container, which it may not be.
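A minimal sketch of what the accounts service might look like on the new Quarkus-based distribution, assuming the quay.io/keycloak/keycloak image, the start-dev command, and KC_DB to select the vendor (adjust names and tags for your setup):
accounts:
  image: quay.io/keycloak/keycloak:20.0   # assumed image and tag
  command: start-dev                      # dev mode; production would use "start" plus hostname/proxy options
  ports:
    - 8080:8080
  environment:
    - KEYCLOAK_ADMIN=user
    - KEYCLOAK_ADMIN_PASSWORD=pass
    - KC_DB=mariadb
    - KC_DB_URL=jdbc:mariadb://database/accounts
    - KC_DB_USERNAME=db_user
    - KC_DB_PASSWORD=db_pass
  depends_on:
    - database
  restart: always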
I'm new to Docker and I'm facing a problem when trying to connect to a container from within another. The weird thing is that some containers can indeed be contacted, while others, configured in the same way, cannot. I tried a zillion fixes found while crawling Google; nothing works. I guess this is a simple noob mistake, though.
Here's my docker-compose file:
version: '3.4'
services:
  mssql.db.posm:
    image: "microsoft/mssql-server-linux"
    environment:
      SA_PASSWORD: "mypassword"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Express"
    volumes:
      - mssql-data:/var/opt/mssql
    ports:
      - "1433:1433"
    networks:
      posm:
  api.posm:
    image: ${DOCKER_REGISTRY}posm.api
    build:
      context: .
      dockerfile: Posm.Api/Dockerfile
    expose:
      - "6869"
    ports:
      - "6869:80"
    networks:
      posm:
  cloud.subscription:
    image: ${DOCKER_REGISTRY}cloud.subscription
    build:
      context: ./Services
      dockerfile: Cloud.Subscription/Dockerfile
    ports:
      - "80"
    networks:
      posm:
  catalogmanager.services.posm:
    image: ${DOCKER_REGISTRY}posm.services.catalogmanager
    build:
      context: ./Services
      dockerfile: CatalogManager/Dockerfile
    ports:
      - "80"
    networks:
      posm:
        aliases:
          - services.posm
          - catalogmanager
  productmanager.services.posm:
    image: ${DOCKER_REGISTRY}posm.services.productmanager
    build:
      context: ./Services
      dockerfile: ProductManager/Dockerfile
    ports:
      - "80"
    networks:
      posm:
        aliases:
          - services.posm
          - productmanager
  localizer.services.posm:
    image: ${DOCKER_REGISTRY}posm.services.localizer
    build:
      context: ./Services
      dockerfile: Localizer/Dockerfile
    ports:
      - "80"
    networks:
      posm:
        aliases:
          - services.posm
          - localizer
networks:
  posm:
volumes:
  mssql-data:
So, when I connect to api.posm, the latter is able to successfully connect to mssql.db.posm and to cloud.subscription. However, it fails with a connection refused error when connecting to 'catalogmanager' and 'productmanager', even though I can resolve their hostnames from within api.posm... What is happening?
You are giving the alias services.posm to both 'catalogmanager' and 'productmanager':
posm:
  aliases:
    - services.posm
    - catalogmanager
As you can see in the documentation (https://docs.docker.com/compose/compose-file/#aliases), this gives the hostname "services.posm" to those containers on the posm network. This means you have the "services.posm" host twice in the network, both instances exposing port 80. That will not work. Remove the "services.posm" line from both containers' aliases, restart, and you will be able to access them using the catalogmanager and productmanager hostnames.
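For example, after removing the shared alias, only the service-specific one remains on each (showing just the networks section of those two services):
catalogmanager.services.posm:
  networks:
    posm:
      aliases:
        - catalogmanager
productmanager.services.posm:
  networks:
    posm:
      aliases:
        - productmanager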
So, the problem was a stack overflow exception in one of my images that was unhandled by my debugger, and that I therefore did not see.
Concerning your answer, Moreno...
You are giving the alias services.posm both to 'catalogmanager' and
'productmanager'
I had duplicated this post on the Docker forums to increase my chances of getting a relevant answer, and someone suggested the same. The reason I was using the same alias for three different services is that I read somewhere (cannot find it now, of course) that you could tag multiple services with the same alias for discovery purposes. I probably got that wrong, but Docker doesn't seem to mind at all anyway.
So, thanks for your time nonetheless; sorry I bothered you over such a silly mistake in my code.
Bottom line is: DOCKER ROCKS
I have two Docker containers: node-a and node-b. One of them (node-b) should send an HTTP request to the other (node-a). I'm starting them with Docker Compose. When I try to bring them up with Compose I face the following error:
Get http://node-a:9098: dial tcp 172.18.0.3:9098: getsockopt: connection refused
EXPOSE is declared in the Dockerfile of node-a:
EXPOSE 9098
docker-compose.yml:
version: '3'
services:
  node-a:
    image: a
    ports:
      - 9098:9098
    volumes:
      - ./:/a-src
    depends_on:
      - redis
  node-b:
    image: b
    volumes:
      - ./:/b-src
    depends_on:
      - node-a
Forwarding is enabled. I believe the server starts, because it works well without Docker.
Where should I pay attention? What could cause this problem?
EDIT:
I've tried to add links but it had no effect:
node-b:
  image: b
  volumes:
    - ./:/b-src
  links:
    - node-a
  depends_on:
    - node-a
Also, links seems to be deprecated and does the same thing as depends_on in version 2+ of docker-compose.yml:
When docker-compose executes V2 files, it will automatically build a network between all of the containers defined in the file, and every container will immediately be able to refer to the others just using the names defined in the docker-compose.yml file.
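So, as I understand it, with a V2/V3 file the default network alone should give name resolution between the services; a minimal sketch of that (images and ports as in my file above):
version: '3'
services:
  node-a:
    image: a
    ports:
      - 9098:9098
  node-b:
    image: b
    depends_on:
      - node-a
    # node-b should be able to reach node-a at http://node-a:9098,
    # provided the server inside node-a listens on 0.0.0.0 rather than only on 127.0.0.1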
Link a container to the service using links (see the docker-compose documentation on links).
Example:
node-b:
  image: b
  volumes:
    - ./:/b-src
  depends_on:
    - node-a
  links:
    - node-a
I'm starting with Docker. To start, I want to set up a simple server on nginx with a proxy and SSL (only locally on my machine), so I do something like this:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs:ro
      - /etc/nginx/vhost.d
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true
  nginx-proxy-ssl:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-proxy-ssl
    volumes:
      - ./certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:ro
    volumes_from:
      - nginx-proxy
  whoami2:
    image: jwilder/whoami
    container_name: whoami2
    environment:
      - VIRTUAL_HOST=vertex.local.com
      - LETSENCRYPT_HOST=vertex.local.com
      - LETSENCRYPT_EMAIL=contact@vertex.local.com
networks:
  default:
    external:
      name: developer
On standard HTTP everything is fine, I'm getting the site, but Let's Encrypt returns an error:
Unable to reach http://vertex.local.com/.well-known/acme-challenge/zr0QPZ53RHLRFKy76GX1NKx3lY4GPIaVorH4PT88_Ew: HTTPConnectionPool(host='vertex.local.com', port=80): Max retries exceeded with url: /.well-known/acme-challenge/zr0QPZ53RHLRFKy76GX1NKx3lY4GPIaVorH4PT88_Ew (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff41fd2e550>: Failed to establish a new connection: [Errno 111] Connection refused',))
So I have a question:
Do I need to set up a real, existing domain for Docker (even locally)? If so, how should I set up this domain (on the provider's side: records, etc.) and on my local machine?
Or how else can I set up SSL in my containers?
This is my docker-compose.yml
version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
      args:
        - DB_NAME=admin_db
        - DB_USER=admin
        - DB_PASSWORD=admin_pass
    network_mode: "default"
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
      args:
        - UID=$UID
        - GID=$GID
        - UNAME=$UNAME
    command: /bin/bash
    depends_on:
      - admin_db
    ports:
      - "8000:8000"
    links:
      - admin_db
    network_mode: "bridge"
With network_mode: "bridge" I should be able to access my app (admin) at http://127.0.0.1:8000/ from localhost, but currently I'm only able to access it at random-ip:8000.
I'm able to access http://127.0.0.1:8000/ when network_mode is "host", but then I'm unable to link containers.
Is there any solution to have both things?
- linked containers
- the app reachable at http://127.0.0.1:8000/ from localhost
If for some unknown reason normal linking doesn't work, you can always create another bridged network and connect your containers to it directly. By doing that, the IP address of each running container will always be the same.
I would edit it like this:
version: '2'
services:
  admin_db:
    build:
      context: .
      dockerfile: postgres.dockerfile
      args:
        - DB_NAME=admin_db
        - DB_USER=admin
        - DB_PASSWORD=admin_pass
    networks:
      back_net:
        ipv4_address: 11.0.0.2
  admin:
    build:
      context: .
      dockerfile: admin.dockerfile
      args:
        - UID=$UID
        - GID=$GID
        - UNAME=$UNAME
    command: /bin/bash
    depends_on:
      - admin_db
    ports:
      - "8000:8000"
    extra_hosts:
      - "admin_db:11.0.0.2"
    networks:
      back_net:
        ipv4_address: 11.0.0.3
networks:
  back_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
      com.docker.network.bridge.name: "back"
    ipam:
      driver: default
      config:
        - subnet: 11.0.0.0/24
          gateway: 11.0.0.1
Hope that helps.