Docker Compose with ASP.NET Core and PostgreSQL services

I'm trying to connect an ASP.NET Core container to the official PostgreSQL container using Docker Compose. I've named my image aspnetcore-postgres, and its configuration contains a connection string that points to the PostgreSQL container:
User ID=postgres;Password=5432;Host=localhost;Server=localhost;Port=5432;Database=ApplicationDbContext;Pooling=true;
And I use the following compose file for connection:
version: '2'
services:
  web:
    image: aspnetcore-postgres
    ports:
      - "5000:5000"
    networks:
      - aspnetcoreapp-network
  postgres:
    image: postgres
    environment:
      - "POSTGRES_PASSWORD: 5432"
    networks:
      - aspnetcoreapp-network
    ports:
      - "5432:5432"
networks:
  aspnetcoreapp-network:
    driver: bridge
Whenever I run docker-compose up, the web application can't reach the database container. Eventually, I only see the Postgres container in the network, not the web application container. I'm using Docker for Windows RC4.
Can anyone see where I'm taking a wrong step?
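For reference (a sketch, not an answer from the original thread): containers on the same Compose network reach each other by service name, so the connection string would be expected to point at the postgres service rather than localhost, along these lines:
User ID=postgres;Password=5432;Host=postgres;Port=5432;Database=ApplicationDbContext;Pooling=true;
Note also that the list form of environment uses an equals sign, i.e. - POSTGRES_PASSWORD=5432, rather than a colon.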

Related

Connect from Angular Docker container to Asp.Net container

I created a simple program with three containers: a database (MS SQL Server), a backend (ASP.NET Core), and a frontend (Angular 8).
To run it I use docker-compose:
services:
  sqlserver:
    image: mcr.microsoft.com/mssql/server:2019-latest
    #ports:
    #  - 1433:1433 - it's hidden
    volumes:
      - data-sql:/var/opt/mssql
    environment:
      SA_PASSWORD: "Pass"
      ACCEPT_EULA: "Y"
  web_api:
    build:
      dockerfile: WebApi/Dockerfile
    #ports:
    #  - 5000:80 - it's hidden
    depends_on:
      - sqlserver
    environment:
      "ASPNETCORE_URLS": "http://+:5000"
      "ConnectionStrings:SqlConnectionString": "Server=sqlserver,1433;Database=db;User Id=sa;Password=pass;"
  web_app:
    build: WebApp/
    ports:
      - 4200:80
    depends_on:
      - web_api
    environment:
      "ENV": "Production"
      "BASE_URL": "http://web_api:5000"
I want to hide the external ports for sqlserver and web_api, because they are only used within the docker-compose services.
I could hide the sqlserver port by adding the SqlConnectionString environment variable to web_api.
But this approach doesn't work for web_app. My idea was to add "BASE_URL": "http://web_api:5000" to web_app so it would be able to send requests to this URL, but it doesn't work.
Any ideas on how to do this?
This is because this kind of frontend (an SPA) communicates with your backend not internally (over the Docker network) but externally (over your host network): the Angular code runs in the user's browser, which can only reach the backend through a URL that is valid on the host.
Accordingly, two steps are needed:
🔴 The backend must be accessible through the host network
🟢 The frontend should point to the public URL of the backend
web_api:
  build:
    dockerfile: WebApi/Dockerfile
  ports: # 🔴 Added
    - 5000:80 # 🔴 Added
  depends_on:
    # ... etc
web_app:
  build: WebApp/
  # .... etc
  environment:
    "ENV": "Production"
    "BASE_URL": "http://localhost:5000" # 🟢 Changed

Upload multiple containers to Azure Container registry

I have an ASP.NET Core app that is packed into Docker.
Here is my docker-compose file; it has Kibana and Elasticsearch images in it.
version: "3.1"
services:
tooseeweb:
image: ${DOCKER_REGISTRY-}tooseewebcontainer
build:
context: .
dockerfile: Dockerfile
ports:
- 5000:80
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
container_name: elasticsearch
ports:
- "9200:9200"
volumes:
- elasticsearch-data:/usr/share/elasticsearch/data
networks:
- docker-network
kibana:
image: docker.elastic.co/kibana/kibana:6.2.4
container_name: kibana
ports:
- "5601:5601"
depends_on:
- elasticsearch
networks:
- docker-network
networks:
docker-network:
driver: bridge
volumes:
elasticsearch-data:
I'm trying to deploy this to Azure Container Registry via this article:
Article link
It's all okay and I can see my API; it's on port 80. But I don't see Kibana and Elasticsearch.
On my local machine I run docker-compose up and see them on ports 5601 and 9200, but on Azure these ports aren't working. How can I deploy everything together? Or do I need to deploy the containers separately?
Firstly, Azure Container Registry stores Docker images for you, so you need to push the images to it, not the running containers. You do not need to separate them, but you do need to name each image as your_acr_name.azurecr.io/image_name:tag and then push it to the ACR.
As I see in your question, you only create the image tooseeweb with the name ${DOCKER_REGISTRY-}tooseewebcontainer; when you push this image to the ACR, it only stores that one for you and does not include the other two images.
If you want to store the other two images in the ACR as well, you need to follow the two steps below.
Tag your image. For example:
docker tag docker.elastic.co/elasticsearch/elasticsearch:6.2.4 your_acr_name.azurecr.io/elasticsearch:6.2.4
Then push the image to the ACR:
docker push your_acr_name.azurecr.io/elasticsearch:6.2.4
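As an aside (a hedged sketch, not part of the original answer): if a service in the compose file names its built image under your registry, docker-compose build followed by docker-compose push will upload it without a manual docker tag step. The registry name below is a placeholder.
services:
  tooseeweb:
    # Naming the built image under the ACR means `docker-compose push`
    # sends it straight to the registry after `docker-compose build`.
    image: your_acr_name.azurecr.io/tooseewebcontainer:latest
    build:
      context: .
      dockerfile: Dockerfile
Pre-built images such as elasticsearch and kibana still need the docker tag / docker push steps shown above.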

Making requests to external host on Docker container

I am trying to connect to a database that has an IP of x.x.x.x from my Docker container.
I'm getting this error:
java.net.NoRouteToHostException: No route to host (Host unreachable)
I also tried running the container using --network=host, which takes a similar approach to the attempt above.
As I mentioned in the comments, here is the sample docker-compose file.
version: '3.7'
services:
  entitygraph:
    image: entitygraph-by-jar:latest
    container_name: entitygraph
    restart: always
    networks:
      - eg-net
    ports:
      - 9999:8080
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://eg-mysql/customers?useSSL=false
      SPRING_PROFILES_ACTIVE: mysql
  eg-mysql:
    image: mysql:5.7
    restart: always
    networks:
      - eg-net
    container_name: eg-mysql
    environment:
      MYSQL_DATABASE: customers
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
      MYSQL_ROOT_PASSWORD:
networks:
  eg-net:
    name: eg-net
In this file, the application entitygraph is trying to talk to MySQL. In my application, the connection string to MySQL is as below:
spring.datasource.url=jdbc:mysql://localhost:3306/customers?useSSL=false
So Spring will replace the spring.datasource.url property with the SPRING_DATASOURCE_URL value I specified in my docker-compose file. Note that host:port is eg-mysql, which Docker resolves to its internal IP and will use to communicate.
I don't know your application architecture. If I did, I could give you a more specific answer to your problem.
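On the original question of reaching a database at an external IP (a hedged sketch, not part of the answer above): a container on a bridge network can normally reach outside addresses directly, so No route to host often points at a firewall on the database machine rather than at Docker. If the container also needs a stable name for that machine, extra_hosts can map one; external-db below is a hypothetical alias, and x.x.x.x stands for the real address, as in the question.
services:
  entitygraph:
    image: entitygraph-by-jar:latest
    networks:
      - eg-net
    extra_hosts:
      # Hypothetical alias for the external database machine
      - "external-db:x.x.x.x"
    environment:
      SPRING_DATASOURCE_URL: jdbc:mysql://external-db:3306/customers?useSSL=false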

Docker: connection refused on exposed port

I have two Docker containers: node-a and node-b. One of them (node-b) should send an HTTP request to the other (node-a). I'm starting them with Docker Compose. When I try to bring them up with Compose I get an error:
Get http://node-a:9098: dial tcp 172.18.0.3:9098: getsockopt: connection refused
EXPOSE is declared in the Dockerfile of node-a:
EXPOSE 9098
docker-compose.yml:
version: '3'
services:
  node-a:
    image: a
    ports:
      - 9098:9098
    volumes:
      - ./:/a-src
    depends_on:
      - redis
  node-b:
    image: b
    volumes:
      - ./:/b-src
    depends_on:
      - node-a
Forwarding is enabled. I believe the server starts, because it works well without Docker.
Where should I pay attention? What could be causing the problem?
EDIT:
I've tried adding links, but it had no effect:
node-b:
  image: b
  volumes:
    - ./:/b-src
  links:
    - node-a
  depends_on:
    - node-a
Also, links seems to be deprecated and does the same thing as depends_on in version 2+ of the docker-compose.yml format:
When docker-compose executes V2 files, it will automatically build a network between all of the containers defined in the file, and every container will immediately be able to refer to the others using just the names defined in the docker-compose.yml file.
Link a container to the service using links (see the docker-compose documentation on links).
Example:
node-b:
  image: b
  volumes:
    - ./:/b-src
  depends_on:
    - node-a
  links:
    - node-a
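A further sketch (an assumption, not from the original thread): connection refused at startup is often a race, because depends_on only orders container startup and does not wait for the server inside node-a to actually listen on 9098. With a Compose version that supports healthchecks, node-b's start can be gated on node-a being ready; the wget probe below assumes the image ships wget and that node-a answers HTTP on 9098 once ready.
services:
  node-a:
    image: a
    ports:
      - 9098:9098
    healthcheck:
      # Probe node-a's own port until the server accepts connections
      test: ["CMD", "wget", "-qO-", "http://localhost:9098/"]
      interval: 5s
      retries: 10
  node-b:
    image: b
    depends_on:
      node-a:
        condition: service_healthy  # start node-b only once the probe passes
It is also worth checking that the server in node-a listens on 0.0.0.0 rather than 127.0.0.1, since a loopback-only listener refuses connections from other containers.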

Docker container can't connect to DB on external network

I have a Docker container (on Windows 10) running on a new Docker network I've defined. The container runs a Pentaho transformation that tries to connect to an OpenEdge database.
Within my transformation setup, I have the following DB connection parameters:
#Connection URL
jdbc:datadirect:openedge://<machine_name>:<machine_port>;databaseName=<db_name>;user=<user_name>;password=<pass_word>
#Driver
com.ddtek.jdbc.openedge.OpenEdgeDriver
#User
user_name
#Pass
password
I have the correct drivers in the Pentaho lib folder with the correct permissions.
I'm running the transformation from docker-compose, and it successfully connects to a MySQL DB in another container:
version: "2"
services:
db:
image: mysql:latest
container_name: my-pdi-mysql
networks:
- my-pdi-network
environment:
- MYSQL_ROOT_PASSWORD=tbitter
- MYSQL_DATABASE=mysql-db
ports:
- "3307:3306"
volumes:
- ./goldbi:/var/lib/mysql
pdi:
image: my-pdi-image-with-pan:latest
container_name: my-pdi-container
networks:
- my-pdi-network
volumes:
- C:\Docker-Pentaho\resource:/home/pentaho/data-integration/resources
#entrypoint:
# - C:\Docker-Pentaho\docker-entrypoint-2.sh
networks:
my-pdi-network:
How do I also connect from my container to a DB on an external machine that is on the same network as the host? I've done a lot of googling on this, but I'm a bit confused!
Any help would be greatly appreciated.
Thanks.
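For reference (a hedged sketch, since the thread ends without an answer): a container on a user-defined bridge network can usually reach machines on the host's LAN directly, so the JDBC URL can simply name the external machine. If name resolution is the obstacle, extra_hosts can pin the name to an address; <machine_ip> below is a placeholder mirroring the question's own placeholders.
pdi:
  image: my-pdi-image-with-pan:latest
  networks:
    - my-pdi-network
  extra_hosts:
    # Hypothetical entry mapping the OpenEdge machine's name to its IP
    - "<machine_name>:<machine_ip>"
The connection URL from the question then stays unchanged as jdbc:datadirect:openedge://<machine_name>:<machine_port>;... If only the Docker host itself must be reached, Docker for Windows also exposes it as host.docker.internal.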
