I have gone through the documentation, but I'm running into issues launching the containers through docker-compose.
I'm using docker-compose because I have more than one container, each corresponding directly to a csproj file in the solution; the docs also describe attributes using a Dockerfile, which itself adds another layer of complexity.
docker-compose.yml
version: '3'
services:
  app1:
    image: mcr.microsoft.com/dotnet/core/sdk:2.2
    container_name: app1
    restart: on-failure
    working_dir: /service
    command: bash -c "dotnet build && dotnet bin/Debug/netcoreapp2.2/App1.dll"
    ports:
      - 5001:5001
      - 5000:5000
    volumes:
      - "./App1:/service"
  app2:
    image: mcr.microsoft.com/dotnet/core/sdk:2.2
    container_name: app2
    restart: on-failure
    working_dir: /service
    command: bash -c "dotnet build && dotnet bin/Debug/netcoreapp2.2/App2.dll"
    ports:
      - 5001:5001
      - 5000:5000
Project Structure
Microservice.sln
App1/App1.csproj
App2/App2.csproj
Issue
My IDE starts complaining about all kinds of syntax errors as soon as I run the containers, and a local build simply fails.
Is there a way to compile the app both locally and in Docker?
Edit
Related
I have three web projects in one solution that communicate with each other over gRPC. I am trying to bring all three projects up with docker-compose using the command:
docker-compose up --build
Run this way, everything works. But when I try to start through the Visual Studio interface with the debugger attached, it does not work.
When I run the application via docker-compose from Visual Studio, the containers start and an error appears with the following content:
Can't find a program for debugging in the container
And then immediately after, this pops up:
The target process exited without raising a CoreCLR started event. Ensure that the target process is configured to use .NET Core. This may be expected if the target process did not run on .NET Core.
This message appears in the container logs:
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application '/app/bin/Debug/net5.0/Votinger.Gateway.Web.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
This is the Dockerfile auto-generated by Visual Studio; it is the same for every project except for the paths to the project:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 5000
EXPOSE 5001
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Votinger.Gateway/Votinger.Gateway.Web/Votinger.Gateway.Web.csproj", "Votinger.Gateway/Votinger.Gateway.Web/"]
RUN dotnet restore "Votinger.Gateway/Votinger.Gateway.Web/Votinger.Gateway.Web.csproj"
COPY . .
WORKDIR "/src/Votinger.Gateway/Votinger.Gateway.Web"
RUN dotnet build "Votinger.Gateway.Web.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Votinger.Gateway.Web.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
FROM mcr.microsoft.com/dotnet/sdk:5.0
ENTRYPOINT ["dotnet", "Votinger.Gateway.Web.dll"]
docker-compose.yml
version: '3.4'
services:
  votinger.authserver.db:
    image: mysql:8
    container_name: Votinger.AuthServer.Db
    restart: always
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  votinger.pollserver.db:
    image: mysql:8
    container_name: Votinger.PollServer.Db
    restart: always
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  votinger.authserver.web:
    image: ${DOCKER_REGISTRY-}votingerauthserverweb
    container_name: Votinger.AuthServer.Web
    build:
      context: .
      dockerfile: Votinger.AuthServer/Votinger.AuthServer.Web/Dockerfile
    links:
      - votinger.authserver.db:authdb
  votinger.gateway.web:
    image: ${DOCKER_REGISTRY-}votingergatewayweb
    container_name: Votinger.Gateway.Web
    build:
      context: .
      dockerfile: Votinger.Gateway/Votinger.Gateway.Web/Dockerfile
    ports:
      - 5000:5000
    links:
      - votinger.authserver.web:authserver
      - votinger.pollserver.web:pollserver
  votinger.pollserver.web:
    image: ${DOCKER_REGISTRY-}votingerpollserverweb
    container_name: Votinger.PollServer.Web
    build:
      context: .
      dockerfile: Votinger.PollServer/Votinger.PollServer.Web/Dockerfile
    links:
      - votinger.pollserver.db:polldb
docker-compose.override.yml
version: '3.4'
services:
  votinger.authserver.web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:5000;http://+:5001
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/https:ro
  votinger.gateway.web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:5000;http://+:5001
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/https:ro
  votinger.pollserver.web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:5000;http://+:5001
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/https:ro
I will also post a link to GitHub where this project is located (dev branch):
https://github.com/SeanWoo/Votinger
I would appreciate your help.
I found a solution. The problem was that the docker-compose.vs.debug.yml file generated by Visual Studio contains full paths to the project and the debugger, and my path went through a folder named C#. Visual Studio decided the # symbol was unnecessary and removed it, which produced a path through a folder named C that led to a completely different place.
I am trying to build a web stack with Docker using PHP, MariaDB, nginx, and Composer.
I am trying to use only containers from official repositories.
Following is my docker-compose.yml:
version: '2'
services:
  nginx:
    image: nginx
    container_name: nginx
    ports:
      - "8000:80"
  mariadb:
    image: mariadb
    container_name: mariadb
    ports:
      - "3306:3306"
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: root
  php:
    image: php:fpm
    container_name: php
    ports:
      - "80:80"
    volumes:
      - ./php/:/var/www/html/
  composer:
    image: composer
    container_name: composer
    volumes_from:
      - php
    working_dir: /var/www/
    volumes:
      - ./composer2:/app
This docker-compose setup works correctly, but I don't understand why the composer container goes down quickly after 'docker-compose up -d'.
PS: My first goal is to use this stack for Symfony2 or Silex.
The composer container exits immediately because it is not intended to run as a daemon. If you don't provide any command, composer simply has nothing to do; if it does have something to do, it executes it and exits.
You can use it through an interactive shell like this:
docker run --rm --interactive --tty --volume $PWD:/app composer install
More examples are in the "Using" section here: https://hub.docker.com/_/composer/
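If you do want to keep composer in your compose file, you can also run it as a one-off task instead of bringing it up with the rest of the stack; a minimal sketch, assuming the service names from the compose file above:

```shell
# Run composer as a one-off task against the php container's code volumes;
# --rm removes the transient container once composer exits.
docker-compose run --rm composer install
```

Because the container is expected to exit when the command finishes, this pattern avoids the "goes down quickly" behaviour entirely.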
I come here because I am developing an app with Symfony3, and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local
networks:
  default:
nginx is clear, the engine is PHP-FPM with some extensions plus Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before Docker, I used Magallanes or Deployer when I wanted to deploy the webapp.
With Docker I can use the docker-compose file and recreate images and containers on the server; I can also save my containers as images or as tar archives and load them on the server. That's fine for nginx and PHP-FPM, but what about Elasticsearch and the DB? I need to keep their data for future updates of the code. When I deploy the code I also need to execute a Doctrine migration and maybe some commands, and Deployer does that perfectly, along with some other interesting things. So how do I deploy the code with Docker? Can we use both: Deployer for code and Docker for services?
Thanks a lot for your help.
First of all, please try using user-defined networks. They have additional features compared to legacy linking, such as an embedded DNS server, meaning you can reach other containers on the same network by name from your applications. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user-defined network:
docker network create --driver bridge <networkname>
Compose service example using a user-defined network:
search:
  restart: unless-stopped
  build: ./docker/search/
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - <networkname>
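For completeness, a compose file that refers to a network created this way also needs a top-level networks entry marking it as external; a minimal sketch, reusing the same placeholder name:

```yaml
# Top-level declaration in docker-compose.yml: external tells compose the
# network already exists (created via `docker network create`) and should
# be attached to rather than created.
networks:
  <networkname>:
    external: true
```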
Second: I noticed you didn't use data volumes for your DB and Elasticsearch.
You need to mount volumes at certain points to keep your persistent data.
Third: when you export your containers, the export won't contain the mounted volumes. You need to back up the volume data and migrate it manually.
To backup volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a container from the ubuntu image, mounts the volumes from the db container, mounts the current directory into the container as /backup, and uses tar to create a backup of /dbdata in the container (change this to your actual db directory) inside /backup, which is mounted from your Docker host. After the operation completes, the transient ubuntu container is removed (the --rm switch).
To restore:
Copy the tar archive to the remote location and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
I am doing something very simple: putting Drupal in a Docker container.
volumes:
  - /c/Users/mark/drupalb/sites/all/modules:/var/www/html/sites/all/modules
This directive, which should mount the host directory containing my modules into the container's modules directory, doesn't work. This seems like something people do every day, so how do I resolve it?
It's not clear what your problem is. In the meantime, here's a working Drupal example.
Example
The following compose file
└── docker-compose.yml
runs two containers, one for Drupal, the other for the DB:
$ docker volume create --name drupal_sites
$ docker-compose up -d
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------
dockercompose_db_1 docker-entrypoint.sh mysqld Up 3306/tcp
dockercompose_web_1 apache2-foreground Up 0.0.0.0:8080->80/tcp
Note how the volume used to store the Drupal sites is created separately; it defaults to local storage, but it can be something more exotic to fit your needs.
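As an illustration of "more exotic", the same named volume could be backed by an NFS share instead of local disk; a sketch, where the server address and export path are hypothetical values:

```shell
# Create the drupal_sites volume on an NFS export instead of local storage;
# addr and device below are placeholders for your own NFS server and path.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.5,rw \
  --opt device=:/exports/drupal_sites \
  drupal_sites
```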
docker-compose.yml
version: '2'
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=letmein
      - MYSQL_DATABASE=drupal
      - MYSQL_USER=drupal
      - MYSQL_PASSWORD=drupal
    volumes:
      - /var/lib/mysql
  web:
    image: drupal
    depends_on:
      - db
    ports:
      - "8080:80"
    volumes:
      - drupal_sites:/var/www/html/sites
      - /var/www/private
volumes:
  drupal_sites:
    external: true
I need to run multiple WordPress containers, all linked to a single MySQL container, plus an Nginx reverse proxy to easily handle VIRTUAL_HOSTs.
Here is what I'm trying to do (with only one WP for now):
Wordpress (hub.docker.com/_/wordpress/)
Mysql (hub.docker.com/_/mysql/)
Nginx Reverse Proxy (github.com/jwilder/nginx-proxy)
I'm working on OSX and this is what I run on terminal:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
docker run --name some-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=root -d mysql:latest
docker run -e VIRTUAL_HOST=wordpress.mylocal.com --name wordpress --link some-mysql:mysql -p 8080:80 -d wordpress
My Docker host is at 192.168.99.100, and that address brings me to a 503 nginx/1.9.12 error, of course.
192.168.99.100:8080 brings me to WordPress as expected.
But http://wordpress.mylocal.com is not working; it is not redirecting to 192.168.99.100:8080, and I don't understand what I'm doing wrong.
Any suggestions? Thanks!
First of all, I recommend you start using docker-compose; running your containers and finding errors will become much easier.
As for your case, it seems that you should be setting VIRTUAL_PORT to direct the proxy to your container on 8080.
Secondly, you cannot have two containers (the nginx-proxy and wordpress) mapped to the same port on the host.
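A sketch of how that could look with the images from the question; VIRTUAL_PORT tells jwilder/nginx-proxy which container port to forward to, and only the proxy publishes a host port:

```shell
# The proxy owns host port 80 and watches the Docker socket for containers.
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# WordPress publishes no host port of its own; the proxy reaches it over the
# Docker network on the container port named by VIRTUAL_PORT.
docker run -e VIRTUAL_HOST=wordpress.mylocal.com -e VIRTUAL_PORT=80 \
  --name wordpress --link some-mysql:mysql -d wordpress
```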
Good luck!
One:
Use docker-compose.
vi docker-compose.yaml
Two:
Paste this into the file:
version: '3'
services:
  nginx-proxy:
    image: budry/jwilder-nginx-proxy-arm:0.6.0
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - certs:/etc/nginx/certs:ro
      - confd:/etc/nginx/conf.d
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
    environment:
      - DEFAULT_HOST=example2.com
    networks:
      - frontend
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion:stable
    restart: always
    volumes:
      - certs:/etc/nginx/certs:rw
      - confd:/etc/nginx/conf.d
      - vhostd:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      # - LETSENCRYPT_SINGLE_DOMAIN_CERTS=true
      # - LETSENCRYPT_RESTART_CONTAINER=true
      - DEFAULT_EMAIL=example@mail.com
    networks:
      - frontend
    depends_on:
      - nginx-proxy
  #########################################################
  # ..The rest of the containers go here..
  #########################################################
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
volumes:
  certs:
  html:
  vhostd:
  confd:
  dbdata:
  maildata:
  mailstate:
  maillogs:
Three:
Configure as many containers as you need and set them up to your liking. Here are some examples:
mysql (MariaDB):
mysql:
  image: jsurf/rpi-mariadb:latest # MARIADB -> 10 #82eec62cce90
  restart: always
  environment:
    MYSQL_DATABASE: nameExample
    MYSQL_USER: user
    MYSQL_PASSWORD: password
    MYSQL_RANDOM_ROOT_PASSWORD: passwordRoot
    MYSQL_ROOT_HOST: '%'
  ports:
    - "3306:3306"
  networks:
    - backend
  command: --init-file /data/application/init.sql
  volumes:
    - /path_where_it_will_be_saved_on_your_machine/init.sql:/data/application/init.sql
    - /physical_route/data:/var/lib/mysql
nginx + PHP 7.4:
nginx_php:
  image: tobi312/php:7.4-fpm-nginx-alpine-arm
  hostname: example1.com
  restart: always
  expose:
    - "80"
  volumes:
    - /physical_route:/var/www/html:rw
  environment:
    - VIRTUAL_HOST=example1.com
    - LETSENCRYPT_HOST=example1.com
    - LETSENCRYPT_EMAIL=example1@mail.com
    - ENABLE_NGINX_REMOTEIP=1
    - PHP_ERRORS=1
  depends_on:
    - nginx-proxy
    - letsencrypt
    - mysql
  networks:
    - frontend
    - backend
WordPress:
wordpress:
  image: wordpress
  restart: always
  ports:
    - 8080:80
  environment:
    - WORDPRESS_DB_HOST=db
    - WORDPRESS_DB_USER=exampleuser
    - WORDPRESS_DB_PASSWORD=examplepass
    - WORDPRESS_DB_NAME=exampledb
    - VIRTUAL_HOST=example2.com
    - LETSENCRYPT_HOST=example2.com
    - LETSENCRYPT_EMAIL=example2@mail.com
  volumes:
    - wordpress:/var/www/html # This must be added in the volumes label of step 2
You can find many examples and documentation here.
Be careful: in some of the examples I used images that are built for the Raspberry Pi, and they will very likely cause problems on amd64 and 32-bit Intel systems. You should find and select the images that suit your CPU and operating system.
Four:
Run this command to launch all the containers:
docker-compose up -d --remove-orphans
"--remove-orphans" serves to remove dockers that are no longer in your docker-compose file
Five:
Once you have the above steps done, you can come back and ask whatever you want, and we will be happy to read your compose file without dying while trying to read a long list of commands.
For your case, I think the best solution is to use an nginx reverse proxy that listens on the Docker socket and can pass requests to different virtual hosts.
For example, let's say you have 3 WPs:
WP1 -> port binding to 81:80
WP2 -> port binding to 82:80
WP3 -> port binding to 83:80
For each one of them, set a Docker environment variable with the virtual host name you want to use:
WP1-> foo.bar1
WP2-> foo.bar2
WP3-> foo.bar3
After doing so, you will have 3 different WP instances with ports exposed on 81, 82, and 83.
Now download and start the nginx reverse-proxy container from here.
It listens on the Docker socket and receives all traffic arriving at your machine on port 80, and from the environment variable you provided when starting each WP container it can detect which request should go to which WP instance.
This is an example of how you would run one of your WP docker images:
> docker run -e VIRTUAL_HOST=foo.bar1.com -p 81:80 -d wordpress:tag
In this case, the virtual host is matched against the host name coming from the HTTP request.
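Putting the scheme together, the three instances sketched above could be started like this (the hostnames are placeholders, and the plain wordpress image stands in for whatever tag you use):

```shell
# Three WordPress instances behind one reverse proxy on port 80; each gets
# its own host port and its own VIRTUAL_HOST for the proxy to match on.
docker run -e VIRTUAL_HOST=foo.bar1.com -p 81:80 -d wordpress
docker run -e VIRTUAL_HOST=foo.bar2.com -p 82:80 -d wordpress
docker run -e VIRTUAL_HOST=foo.bar3.com -p 83:80 -d wordpress
```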