I have three web projects in one solution; they communicate with each other over gRPC. I am trying to bring all three projects up with docker-compose. When I run the command
docker-compose up --build
everything works, but when I try to start them through the Visual Studio interface with the debugger attached, it does not work.
When I run the application via docker-compose from Visual Studio, the containers start and then an error appears with the following content:
Can't find a program for debugging in the container
And then this immediately pops up:
The target process exited without raising a CoreCLR started event. Ensure that the target process is configured to use .NET Core. This may be expected if the target process did not run on .NET Core.
This message appears in the container logs:
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application '/app/bin/Debug/net5.0/Votinger.Gateway.Web.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
This is the Dockerfile auto-generated by Visual Studio; it is the same for every project except for the project paths:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 5000
EXPOSE 5001
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Votinger.Gateway/Votinger.Gateway.Web/Votinger.Gateway.Web.csproj", "Votinger.Gateway/Votinger.Gateway.Web/"]
RUN dotnet restore "Votinger.Gateway/Votinger.Gateway.Web/Votinger.Gateway.Web.csproj"
COPY . .
WORKDIR "/src/Votinger.Gateway/Votinger.Gateway.Web"
RUN dotnet build "Votinger.Gateway.Web.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Votinger.Gateway.Web.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Votinger.Gateway.Web.dll"]
docker-compose.yml
version: '3.4'
services:
  votinger.authserver.db:
    image: mysql:8
    container_name: Votinger.AuthServer.Db
    restart: always
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  votinger.pollserver.db:
    image: mysql:8
    container_name: Votinger.PollServer.Db
    restart: always
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
  votinger.authserver.web:
    image: ${DOCKER_REGISTRY-}votingerauthserverweb
    container_name: Votinger.AuthServer.Web
    build:
      context: .
      dockerfile: Votinger.AuthServer/Votinger.AuthServer.Web/Dockerfile
    links:
      - votinger.authserver.db:authdb
  votinger.gateway.web:
    image: ${DOCKER_REGISTRY-}votingergatewayweb
    container_name: Votinger.Gateway.Web
    build:
      context: .
      dockerfile: Votinger.Gateway/Votinger.Gateway.Web/Dockerfile
    ports:
      - 5000:5000
    links:
      - votinger.authserver.web:authserver
      - votinger.pollserver.web:pollserver
  votinger.pollserver.web:
    image: ${DOCKER_REGISTRY-}votingerpollserverweb
    container_name: Votinger.PollServer.Web
    build:
      context: .
      dockerfile: Votinger.PollServer/Votinger.PollServer.Web/Dockerfile
    links:
      - votinger.pollserver.db:polldb
docker-compose.override.yml
version: '3.4'
services:
  votinger.authserver.web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:5000;http://+:5001
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/https:ro
  votinger.gateway.web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:5000;http://+:5001
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/https:ro
  votinger.pollserver.web:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:5000;http://+:5001
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/ASP.NET/Https:/https:ro
I will also post a link to GitHub where this project is located
(dev branch)
https://github.com/SeanWoo/Votinger
I would appreciate any help.
I found a solution. The problem was that the docker-compose.vs.debug.yml file generated by Visual Studio contains full paths to the project and to the debugger, and my path went through a folder named C#. Visual Studio decided the # symbol was unnecessary and removed it, which produced a path through a folder named just C and pointed to a completely different place.
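For illustration only, here is the kind of volume mapping Visual Studio typically writes into docker-compose.vs.debug.yml and how the dropped # changes it; the host paths below are made up, not taken from the real generated file:
services:
  votinger.gateway.web:
    volumes:
      # intended host path: D:/Projects/C#/Votinger/Votinger.Gateway/Votinger.Gateway.Web
      # what was actually generated after the '#' was stripped:
      - D:/Projects/C/Votinger/Votinger.Gateway/Votinger.Gateway.Web:/app
With the mangled path the mount points at a folder that does not contain the build output, which is consistent with the "/app/bin/Debug/net5.0/Votinger.Gateway.Web.dll does not exist" error above.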
Related
I am trying to run an ASP.NET Core application in a Docker container and I am having issues with the physical file provider.
In my application's Startup.cs I am using the following code to set up a physical file provider and map it to an alias:
app.UseFileServer(new FileServerOptions
{
    FileProvider = new PhysicalFileProvider("G:\\Work\\LMS\\lms-data"),
    RequestPath = new PathString("/lms-data"),
    EnableDirectoryBrowsing = false
});
My Dockerfile is:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
ENV ASPNETCORE_ENVIRONMENT=Development
ENV DOTNET_USE_POLLING_FILE_WATCHER=1
WORKDIR /app
EXPOSE 5000
EXPOSE 5001
COPY ["SharedKernal/SharedKernal.csproj", "SharedKernal/"]
COPY ["LMS.Entities/LMS.Entities.csproj", "LMS.Entities/"]
COPY ["LMS.Core/LMS.Core.csproj", "LMS.Core/"]
COPY ["LMS.Infrastructure/LMS.Infrastructure.csproj", "LMS.Infrastructure/"]
COPY ["LMS.Web/LMS.Web.csproj", "LMS.Web/"]
RUN dotnet restore "LMS.Web/LMS.Web.csproj"
RUN mkdir /lms-data
COPY . .
WORKDIR "/app/LMS.Web"
CMD [ "/bin/bash","-c","dotnet restore && dotnet watch run" ]
My docker compose file is:
version: "3.4"
services:
lmsapp:
image: lmsapp
container_name: lmsappv1
build:
context: .
dockerfile: Dockerfile
working_dir: "/app/LMS.Web"
volumes:
- ".:/app"
ports:
- "5000:5000"
- "5001:5001"
networks:
- mongo_network
mongodb:
image: mongo
container_name: mongo_db
networks:
- mongo_network
ports:
- "27017:27017"
networks:
mongo_network:
driver: bridge
Whenever I run docker-compose up after running docker-compose build, I receive the following error:
System.ArgumentException: The path must be absolute. (Parameter 'root')
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root, ExclusionFilters filters)
at Microsoft.Extensions.FileProviders.PhysicalFileProvider..ctor(String root)
at LMS.Web.Startup.Configure(IApplicationBuilder app, IWebHostEnvironment env) in /app/LMS.Web/Startup.cs:line 130
How do I solve this error?
The other issue I am facing is that whenever I run docker-compose up, it always restores the packages. How do I avoid that?
Instead of this:
FileProvider = new PhysicalFileProvider("G:\\Work\\LMS\\lms-data"),
Try this:
FileProvider = new PhysicalFileProvider(Path.Combine(Directory.GetCurrentDirectory(), "lms-data")),
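If you would rather keep an absolute path inside the container, another option (a sketch, not part of the answer above) is to bind-mount the host folder from the question into the /lms-data directory the Dockerfile already creates, and point the PhysicalFileProvider at /lms-data:
services:
  lmsapp:
    volumes:
      - ".:/app"
      # extra mount so new PhysicalFileProvider("/lms-data") resolves inside the container;
      # G:/Work/LMS/lms-data is the host folder from the question
      - "G:/Work/LMS/lms-data:/lms-data"
Note this mount only works on a Windows host with drive sharing enabled, so the relative-path approach above is the more portable one.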
I have gone through the documentation, but I'm running into issues because I'm launching the containers through docker-compose.
I'm using docker-compose because I have more than one container, each corresponding directly to a csproj file in the solution; the docs also describe the relevant attributes in terms of a Dockerfile, which itself adds another layer of complexity.
docker-compose.yml
version: '3'
services:
  app1:
    image: mcr.microsoft.com/dotnet/core/sdk:2.2
    container_name: app1
    restart: on-failure
    working_dir: /service
    command: bash -c "dotnet build && dotnet bin/Debug/netcoreapp2.2/App1.dll"
    ports:
      - 5001:5001
      - 5000:5000
    volumes:
      - "./App1:/service"
  app2:
    image: mcr.microsoft.com/dotnet/core/sdk:2.2
    container_name: app2
    restart: on-failure
    working_dir: /service
    command: bash -c "dotnet build && dotnet bin/Debug/netcoreapp2.2/App2.dll"
    ports:
      - 5001:5001
      - 5000:500
Project Structure
Microservice.sln
App1/App1.csproj
App2/App2.csproj
Issue
My IDE starts to complain about all kinds of syntax errors as soon as I run the containers, and a local build simply fails.
Is there a way to compile the app both locally and inside Docker?
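One possible mitigation (a sketch, not from the original post, assuming the clash comes from the container's Linux build output landing in the bind-mounted source folder): keep the source mount but shadow bin/ and obj/ with anonymous volumes, so the host working copy used by the IDE and local builds stays untouched:
services:
  app1:
    image: mcr.microsoft.com/dotnet/core/sdk:2.2
    working_dir: /service
    command: bash -c "dotnet build && dotnet bin/Debug/netcoreapp2.2/App1.dll"
    volumes:
      - "./App1:/service"
      # anonymous volumes hide the host bin/ and obj/ inside the container,
      # so artifacts built in the container never reach the host
      - "/service/bin"
      - "/service/obj"
The same two anonymous volume entries would be repeated for app2.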
Edit
I'm learning the microservice concept and using Docker for this purpose.
I have 3 containers:
mssqlserver - my database
asp-net-core:2.0 - for my microservice (only 1 at the moment)
asp-net-core:2.0 - MVC
Connections exist between these, so that isn't the cause of the problem.
The MVC project contains a wwwroot directory where the images (banners etc.), CSS and .js files are placed. I've checked that they are in my Docker container (I ran /bin/bash in the container and looked).
But somehow my .cshtml files can't see these files.
Dockerfile for MVC project:
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore
# copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# build runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "MVC.dll"]
docker-compose:
version: "3.2"
networks:
frontend:
backend:
services:
webmvc:
build:
context: .\src\Web\MVC
dockerfile: Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=Development
- CatalogUrl=http://catalog
container_name: webshop
ports:
- "5500:80"
networks:
- frontend
depends_on:
- catalog
catalog:
build:
context: .\src\Services\ProductCatalogApi
dockerfile: Dockerfile
image: microservices-v1.0.0
environment:
- DatabaseServer=mssqlserver
- DatabaseName=CatalogDb
- DatabaseUser=sa
- DatabaseUserPassword=ProductApi(!)
container_name: catalogapi
ports:
- "5000:80"
networks:
- backend
- frontend
depends_on:
- mssqlserver
mssqlserver:
image: "microsoft/mssql-server-linux:latest"
ports:
- "2200:1433"
container_name: mssqlcontainer
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=ProductApi(!)
networks:
- backend
Example use of image (in Index.cshtml):
<img src="~/images/banner.jpg" alt="ASP.NET" class="img-responsive" />
I've tried different combinations of the path to the image, such as:
<img src="wwwroot/images/banner.jpg" alt="ASP.NET" class="img-responsive" />
<img src="~/app/wwwroot/images/banner.jpg" alt="ASP.NET" class="img-responsive" />
<img src="app/wwwroot/images/banner.jpg" alt="ASP.NET" class="img-responsive" />
None of these worked.
Most likely you failed to enable the static files middleware in your ASP.NET Core project. In Startup.Configure, you need the line:
app.UseStaticFiles();
That will serve up wwwroot, by default, as the document root of your site, so you would then reference static files under it via:
<img src="~/images/banner.jpg" />
Which would correspond to the file at wwwroot/images/banner.jpg.
I'm working on a Docker image for a dev environment for a Symfony 4 application. I'm building it on Alpine, PHP-FPM and nginx.
I configured the application, but the performance was not great (~700 ms) even for a simple hello-world application, so I thought I could make it faster somehow.
First of all, I looked at the mount consistency settings and configured the volumes to use the cached option. Then I moved vendor to a separate volume, as it caused most of the performance issues.
Second, I wanted to use docker-sync, as the benchmarks looked amazing. I configured it and everything ran smoothly. But now I have realized that Docker is not reacting to changes in the code.
At first I thought it had something to do with the Symfony 4 cache, so I connected to the php container and ran php bin/console cache:clear. The cache was cleared, but Docker did not react to anything. I double-checked the files on both the web and php containers and the files are changed there. I'm wondering whether there is something more I need to configure, or why Symfony is not reacting to changes.
UPDATE
Symfony/the container does not react to changes even after a complete image rebuild and removal of the consistency configuration and docker-sync. So, basically, it's plain Docker with a hello-world Symfony 4 application and it does not react to changes. Changes are not even synced to the container.
Configuration:
# docker-compose-dev.yml
version: '3'
volumes:
  symfony-sync:
    external: true
services:
  php:
    build: build/php
    expose:
      - 9000
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
  web:
    build: build/nginx
    restart: always
    expose:
      - 80
      - 443
    ports:
      - 8080:80
      - 8081:443
    depends_on:
      - php
    volumes:
      - symfony-sync:/var/www/html/symfony
      - ./vendor:/var/www/html/vendor
networks:
  default:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.4.0.0/16
# docker-sync.yml
version: "2"
options:
  verbose: true
syncs:
  symfony-sync:
    src: './symfony'
    sync_excludes:
      - '.git'
      - 'composer.lock'
The Makefile I use for running the app:
start:
	docker-sync stop
	docker-sync clean
	cd symfony
	docker volume create --name=symfony-sync
	cd ..
	docker-compose -f docker-compose-dev.yml down
	docker-compose -f docker-compose-dev.yml up -d
	docker-sync start
stop:
	docker-compose stop
	docker-sync stop
I recommend using dinghy instead of Docker for Mac: https://github.com/codekitchen/dinghy
Have a try with this repo as an example too: https://github.com/jorge07/symfony-4-es-cqrs-boilerplate
If this doesn't work, the problem will be in your host or Dockerfile. Be sure you don't enable OPcache for development.
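If the goal is first to confirm that file changes reach the containers at all, a plain bind mount (optionally with the cached consistency flag the question mentions) is a simpler baseline than the external symfony-sync volume; a minimal sketch, assuming the Symfony code lives in ./symfony:
services:
  php:
    build: build/php
    volumes:
      # bind mount straight from the host; :cached relaxes consistency on Docker for Mac
      - ./symfony:/var/www/html/symfony:cached
  web:
    build: build/nginx
    volumes:
      - ./symfony:/var/www/html/symfony:cached
Once edits show up in the containers this way, docker-sync can be reintroduced on top of it.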
I come here because I am developing an app with Symfony 3, and I have some questions about deploying it.
Currently I use docker-compose:
version: '2'
services:
  nginx:
    build: ./docker/nginx/
    ports:
      - 8081:80
    volumes:
      - .:/home/docker:ro
      - ./docker/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
    networks:
      - default
  php:
    build: ./docker/php/
    volumes:
      - .:/home/docker:rw
      - ./docker/php/php.ini:/usr/local/etc/php/conf.d/custom.ini:ro
    working_dir: /home/docker
    networks:
      - default
    dns_search:
      - php
  db:
    image: mariadb:latest
    ports:
      - 3307:3306
    environment:
      - MYSQL_ROOT_PASSWORD=collectionManager
      - MYSQL_USER=collectionManager
      - MYSQL_PASSWORD=collectionManager
      - MYSQL_DATABASE=collectionManager
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - default
    dns_search:
      - db
  search:
    build: ./docker/search/
    ports:
      - 9200:9200
      - 9300:9300
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - default
    dns_search:
      - search
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local
networks:
  default:
nginx is clear, php is PHP-FPM with some extensions and Composer, db is MariaDB, and search is Elasticsearch with some plugins.
Before Docker, when I wanted to deploy the web app I used Magallanes or Deployer.
With Docker I can use the docker-compose file and recreate the images and containers on the server; I can also save my containers as images, export them to a tar archive, and load that on the server. That's okay for nginx and php-fpm, but what about elasticsearch and the db? I need to keep their data across future updates of the code. Then, when I deploy the code, I need to execute a Doctrine migration and maybe some other commands, and Deployer does that perfectly, along with some other interesting things. So how do I deploy the code with Docker? Can we use both: Deployer for the code and Docker for the services?
Thanks a lot for your help.
First of all, please try using user-defined networks; they have additional features compared to legacy linking, such as an embedded DNS server. That means your applications can reach other containers on the same network by their names. Containers on one user-defined network are isolated from containers on another user-defined network.
To create a user-defined network:
docker network create --driver bridge <networkname>
Example docker-compose snippet using the user-defined network (declare the pre-created network as external at the top level):
services:
  search:
    restart: unless-stopped
    build: ./docker/search/
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - <networkname>
networks:
  <networkname>:
    external: true
Second: I noticed you didn't use data volumes for your DB and Elasticsearch.
You need to mount volumes at certain points to keep your persistent data.
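For example, a minimal sketch of named volumes mounted at the standard data directories MariaDB and Elasticsearch use (the same container paths that appear in the compose file above):
services:
  db:
    image: mariadb:latest
    volumes:
      - mariadb_data:/var/lib/mysql
  search:
    build: ./docker/search/
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
volumes:
  mariadb_data:
    driver: local
  elasticsearch_data:
    driver: local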
Third: When you export your containers, the export won't contain the mounted volumes. You need to back up the volume data and migrate it manually.
To back up volume data:
docker run --rm --volumes-from db -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata
The above command creates a container from the ubuntu image, mounts the volumes from the db container, mounts the current host directory into the container as /backup, and uses tar to create a backup of /dbdata in the container (change this to your database's data directory) inside the mounted /backup directory. After the operation completes, the transient ubuntu container used for the backup is removed, thanks to the --rm switch.
To restore:
Copy the tar archive to the remote location and create your container with an empty mounted volume, then extract the tar archive into that volume using the following command:
docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"