Cannot push ASP.NET container to AWS ECR

I have an ASP.NET project hosted on GitLab, and I'm trying to build it and push it to AWS ECR.
The build completes successfully, but I get an error on the push.
Here is a screenshot of the permissions my IAM user has,
and the pipeline .yml file:
step-deploy-development:
  stage: development
  image: docker:stable
  services:
    - docker:18.09.7-dind
  before_script:
    # - export DOCKER_HOST="tcp://localhost:2375"
    # - docker info
    - export DYNAMIC_ENV_VAR=DEVELOPMENT
    - apk update
    - apk upgrade
    - apk add util-linux pciutils usbutils coreutils binutils findutils grep
    - apk add python3 python3-dev py3-pip
    - pip install awscli
  script:
    - echo setting up env $DYNAMIC_ENV_VAR
    - $(aws ecr get-login --no-include-email --region eu-west-2)
    - docker build --build-arg ASPNETCORE_ENVIRONMENT=${ASPNETCORE_ENVIRONMENT_DEV} --build-arg DB_CONNECTION=${DB_CONNECTION_DEV} --build-arg CORS_ORIGINS=${CORS_ORIGINS_DEV} --build-arg SERVER_ROOT_ADDRESS=${SERVER_ROOT_ADDRESS_DEV} -f src/COROI.Web.Host/Dockerfile -t $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA .
    - docker push $ECR_DEV_REPOSITORY_URL:$CI_COMMIT_SHA
    - cd deployment
    - sed -i -e "s/TAG/$CI_COMMIT_SHA/g" ecs_task_dev.json
    - aws ecs register-task-definition --region $ECS_REGION --cli-input-json file://ecs_task_dev.json >> temp.json
    - REV=`grep '"revision"' temp.json | awk '{print $2}'`
    - aws ecs update-service --cluster $ECS_DEV_CLUSTER --service $ECS_DEV_SERVICE --task-definition $ECS_DEV_TASK --region $ECS_REGION
  environment: development
  tags:
    # - CoroiAdmin
  only:
    - main
Where could the problem be?

You need to set the ECR repository policy to allow your IAM user to push and pull images, as explained in the following link: Setting a repository policy statement. For example, the following repository policy allows the IAM user to push and pull images to and from the repository:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPushPull",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::account-id:user/iam-user"
        ]
      },
      "Action": [
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetDownloadUrlForLayer",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
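You can attach this policy in the ECR console, or apply it with the AWS CLI. A minimal sketch, assuming the JSON above is saved as policy.json and the repository is called my-repo (both placeholder names):

aws ecr set-repository-policy \
  --repository-name my-repo \
  --region eu-west-2 \
  --policy-text file://policy.json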

Related

ASP.NET Core publish actions, how to access NPM build files from previous step in .NET Core build step

I am working on configuring a deployment pipeline for my team. We are using GitHub Actions, and I have most of it sorted, but I ran into an issue when trying to run certain steps in sequential order.
The app is ASP.NET Core MVC with some Blazor support. We are using webpack, webpack-cli, css-loader & style-loader to bundle and minify our CSS and JS files.
The issue: I need to run the npm build task before the .NET Core publish task in GitHub Actions. I am able to get the steps to run sequentially; however, the files generated in step 1 (the npm build) don't seem to be available to the .NET Core publish action, which happens after.
Question: what do I need to do so that the newly generated npm build files are used by the .NET Core publish action? I assume it's a matter of moving them to the correct location, but I am having trouble finding any good info on this. I have done much trial and error, and research has been unfruitful.
Below are the YAML file and webpack file being used; the command run is just npm run build.
# This workflow will build a .NET project
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-net
name: .net Publish
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
jobs:
  job1:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [15.x]
    steps:
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - name: build webpack
        uses: actions/checkout@v1
      - run: ls -a
      - run: npm i
        working-directory: ./ClientApp
      - run: npm run build
        working-directory: ./ClientApp
      - run: pwd
      - run: ls -a
      - name: Persist npm build output
        uses: actions/upload-artifact@v3
        with:
          name: npm-post-build
          path: |
            ./wwwroot/dist/**

  job2:
    needs: job1
    runs-on: ubuntu-latest
    steps:
      - name: Install Octopus Deploy CLI
        uses: OctopusDeploy/install-octopus-cli-action@v1
        with:
          version: '*'
      - uses: actions/checkout@v3
      - name: Setup .NET
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: 7.0.x
      - name: Restore dependencies
        run: dotnet restore
      - name: replace dist with npm build artifact
        uses: actions/download-artifact@v3
        with:
          name: npm-post-build
          path: npm-post-build
      - shell: bash
        run: |
          ls -a
          rm -r wwwroot/dist
          ls wwwroot
          mv npm-post-build wwwroot/dist
          ls wwwroot/dist
      - name: Build
        run: dotnet build --no-restore
      - name: Publish
        run: dotnet publish -p:PublishProfile=FolderProfile -o website
      - name: package
        run: |
          shopt -s globstar
          paths=()
          for i in website/**; do
            dir=${i%/*}
            echo ${dir}
            paths=(${paths[@]} ${dir})
          done
          eval uniquepaths=($(printf "%s\n" "${paths[@]}" | sort -u))
          for i in "${uniquepaths[@]}"; do
            echo $i
          done
          packages=()
          versions=()
          for path in "${uniquepaths[@]}"; do
            dir=${path}/../../../..
            parentdir=$(builtin cd $dir; pwd)
            projectname=${parentdir##*/}
            octo pack \
              --basePath ${path} \
              --id ${projectname} \
              --version 1.0.0 \
              --format zip \
              --overwrite
            packages=(${packages[@]} "${projectname}.1.0.0.zip")
            versions=(${versions[@]} "${projectname}:1.0.0")
          done
          printf -v joined "%s," "${packages[@]}"
          echo "::set-output name=artifacts::${joined%,}"
          printf -v versionsjoinednewline "%s\n" "${versions[@]}"
          versionsjoinednewline="${versionsjoinednewline//'%'/'%25'}"
          versionsjoinednewline="${versionsjoinednewline//$'\n'/'%0A'}"
          versionsjoinednewline="${versionsjoinednewline//$'\r'/'%0D'}"
          echo "::set-output name=versions_new_line::${versionsjoinednewline%\n}"
          echo "prepackage complete ${packages}"
      - name: Push a package to Octopus Deploy
        uses: OctopusDeploy/push-package-action@v3
        env:
          OCTOPUS_URL: https://****.octopus.app/
          OCTOPUS_API_KEY: *****
          OCTOPUS_SPACE: 'Default'
        with:
          overwrite_mode: OverwriteExisting
          packages: runner.1.0.0.zip
const path = require('path');

module.exports = {
  entry: {
    site: '../wwwroot/js/site.js'
  },
  output: {
    filename: '[name].entry.js',
    path: path.resolve(__dirname, '..', 'wwwroot', 'dist')
  },
  devtool: 'source-map',
  mode: 'development',
  module: {
    rules: [
      {
        test: /\.css$/,
        use: ['style-loader', 'css-loader'],
      },
      {
        test: /\.(eot|woff(2)?|ttf|otf|svg)$/i,
        type: 'asset'
      },
    ]
  }
};
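One thing worth checking in setups like this: actions/download-artifact@v3 can restore files straight into a target directory, which avoids the manual rm/mv shuffle in job2. A minimal sketch reusing the step and artifact names from the workflow above (whether it resolves the missing-files issue still depends on what job1's upload actually captured):

      - name: replace dist with npm build artifact
        uses: actions/download-artifact@v3
        with:
          name: npm-post-build
          path: wwwroot/dist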

How to set up a buildspec.yml file for two different repositories?

I have set up a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild. Right now my application's frontend and backend live in different repositories, and I have two source stages in my CodePipeline. Now I want to build both the frontend image and the backend image in CodeBuild using a buildspec.yml file.
Problem
I am very confused about how to set up buildspec.yml files for both the frontend and backend repos. Do I have to put a separate buildspec.yml file in each repo, or only in the PrimarySource repository?
This is my buildspec.yml file:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=${COMMIT_HASH:=latest}
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $ECR_REPOSITORY_URI:latest .
      - docker tag $ECR_REPOSITORY_URI:latest $ECR_REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $ECR_REPOSITORY_URI:latest
      - docker push $ECR_REPOSITORY_URI:$IMAGE_TAG
      - printf '{"ImageURI":"%s:%s"}' $ECR_REPOSITORY_URI $IMAGE_TAG > imageDetail.json
artifacts:
  files:
    - imageDetail.json
I want to know how I can set up something similar for the backend too, so that CodeBuild first builds the frontend and then the backend.
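For reference, one commonly used setup keeps a single buildspec.yml in the primary source only: CodePipeline passes the second repo as a secondary input artifact, and CodeBuild exposes its checkout directory through a CODEBUILD_SRC_DIR_<SourceIdentifier> environment variable. A minimal sketch of the build phase under that assumption, where BackendSource and $BACKEND_ECR_REPOSITORY_URI are placeholder names:

  build:
    commands:
      # Frontend: built from the primary source checkout (current directory)
      - docker build -t $ECR_REPOSITORY_URI:$IMAGE_TAG .
      # Backend: CodeBuild sets CODEBUILD_SRC_DIR_BackendSource to the secondary source's checkout path
      - docker build -t $BACKEND_ECR_REPOSITORY_URI:$IMAGE_TAG $CODEBUILD_SRC_DIR_BackendSource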

GitLab CI: how to load vars from JSON file

I'm creating ASP.NET Core applications using GitLab CI and docker-in-docker.
In the first stage I build the Dockerfiles and push the images to the repository; in the second stage I apply YAMLs to deploy from the repository to the cluster.
K8s doesn't allow containers to run on port 80 (in my configuration), so I need to expose other ports in the Dockerfiles.
What I want to do is load the correct port numbers from appsettings.json (the ASP.NET config file) in GitLab CI/CD; I don't want to hardcode those values in the GitLab job or in GitLab variables.
This is what I have in gitlab-ci.yml at the moment:
build:
  image: ${CI_REGISTRY}/dockerhub/library/docker:20.10.11
  stage: build
  services:
    - ${CI_REGISTRY}/dockerhub/library/docker:20.10.11-dind
  variables:
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker build
      -f ./MyApp.Api/Dockerfile
      --build-arg PORT=5076 .
    - docker push ${CI_REGISTRY_IMAGE}/myapp:${tag}
I hardcoded the --build-arg to expose the correct port. How can I read that value from the JSON file instead?
The JSON file looks like this:
{
  "MyApp": {
    "ConnectionString": "Host=host;Port=5432;"
  },
  "ClientBaseUrls": {
    "MyApp": "http://my-serv:5076/api"
  }
}
I could also add an extra variable to the JSON, for example PORT=5076, but I still can't figure out how to read that value in GitLab CI.
You could use jq in a previous job to read the value from appsettings.json, save it in a script that exports the variable, pass that script along as an artifact, and source it in your build job so that the env variable is available in your build stage.
You stated that you can add additional vars, so let's assume your .json looks like this:
{
  "port": "5678",
  ...
}
Then your gitlab-ci.yml could look like this (note that the first job can't itself be named variables, since that's a reserved top-level keyword in GitLab CI):

stages:
  - variables
  - build

extract-variables:
  stage: variables
  image: ubuntu
  before_script:
    - apt-get update
    - apt-get install -y jq
  script:
    # -r makes jq print the raw string without JSON quotes
    - export PORT=$(jq -r '.port' appsettings.json)
    - echo "export PORT=$PORT" >> vars.sh
  artifacts:
    paths:
      - vars.sh

build:
  image: ${CI_REGISTRY}/dockerhub/library/docker:20.10.11
  stage: build
  services:
    - ${CI_REGISTRY}/dockerhub/library/docker:20.10.11-dind
  variables:
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - . ./vars.sh
  script:
    - docker build
      -f ./MyApp.Api/Dockerfile
      --build-arg PORT=$PORT .
    - docker push ${CI_REGISTRY_IMAGE}/myapp:${tag}
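As a side note, GitLab can also pass variables between jobs without sourcing a script by hand, via a dotenv report artifact. A minimal sketch of the first job under the same assumptions; every KEY=value line in build.env automatically becomes an environment variable in later jobs:

extract-variables:
  stage: variables
  image: ubuntu
  before_script:
    - apt-get update && apt-get install -y jq
  script:
    - echo "PORT=$(jq -r '.port' appsettings.json)" >> build.env
  artifacts:
    reports:
      dotenv: build.env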

Symfony 4 app works with Docker Compose but breaks with Docker Swarm (no login, profiler broken)

I'm using Docker Compose locally with:
app container: Nginx & PHP-FPM with a Symfony 4 app
PostgreSQL container
Redis container
It works great locally, but when deployed to the development Docker Swarm cluster, I can't log in to the Symfony app.
The Swarm stack is the same as the local one, except for PostgreSQL, which is installed on its own server (not in a Docker container).
Using the profiler, I nearly always get the following error:
Token not found
Token "2df1bb" was not found in the database.
When I display the content of the var/log/dev.log file, I get these lines about my login attempts:
[2019-07-22 10:11:14] request.INFO: Matched route "app_login". {"route":"app_login","route_parameters":{"_route":"app_login","_controller":"App\\Controller\\SecurityController::login"},"request_uri":"http://dev.ip/public/login","method":"GET"} []
[2019-07-22 10:11:14] security.DEBUG: Checking for guard authentication credentials. {"firewall_key":"main","authenticators":1} []
[2019-07-22 10:11:14] security.DEBUG: Checking support on guard authenticator. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.DEBUG: Guard authenticator does not support the request. {"firewall_key":"main","authenticator":"App\\Security\\LoginFormAuthenticator"} []
[2019-07-22 10:11:14] security.INFO: Populated the TokenStorage with an anonymous Token. [] []
The only thing I can find potentially useful here is the Guard authenticator does not support the request. message, but I have no idea what to search for from there.
UPDATE:
Here is my docker-compose.dev.yml (I removed the redis container and changed the app environment variables):
version: "3.7"

networks:
  web:
    driver: overlay

services:
  # Symfony + Nginx
  app:
    image: "registry.gitlab.com/my-image"
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
    networks:
      - web
    ports:
      - 80:80
    environment:
      APP_ENV: dev
      DATABASE_URL: pgsql://user:pass@0.0.0.0/my-db
      MAILER_URL: gmail://user@gmail.com:pass@localhost
Here is the Dockerfile.dev used to build the app image on development servers:
# Base image
FROM php:7.3-fpm-alpine

# Source code into:
WORKDIR /var/www/html

# Import Symfony + Composer
COPY --chown=www-data:www-data ./symfony .
COPY --from=composer /usr/bin/composer /usr/bin/composer

# Alpine Linux packages + PHP extensions
RUN apk update && apk add \
        supervisor \
        nginx \
        bash \
        postgresql-dev \
        wget \
        libzip-dev zip \
        yarn \
        npm \
    && apk --no-cache add pcre-dev ${PHPIZE_DEPS} \
    && pecl install redis \
    && docker-php-ext-enable redis \
    && docker-php-ext-configure pgsql --with-pgsql=/usr/local/pgsql \
    && docker-php-ext-install pdo_pgsql \
    && docker-php-ext-configure zip --with-libzip \
    && docker-php-ext-install zip \
    && composer install \
        --prefer-dist \
        --no-interaction \
        --no-progress \
    && yarn install \
    && npm rebuild node-sass \
    && yarn encore dev \
    && mkdir -p /run/nginx

# Nginx conf + Supervisor entrypoint
COPY ./dev.conf /etc/nginx/conf.d/default.conf
COPY ./.htpasswd /etc/nginx/.htpasswd
COPY ./supervisord.conf /etc/supervisord.conf

EXPOSE 80 443

ENTRYPOINT /usr/bin/supervisord -c /etc/supervisord.conf
UPDATE 2:
I pulled my Docker images and ran the application using only docker-compose.dev.yml (without the docker-compose.local.yml that I'd also use locally). I was able to log in; everything is okay.
So... it works with Docker Compose locally, but not with Docker Swarm on a remote server.
UPDATE 3:
I made the dev server leave the Swarm cluster and started the services using Docker Compose. It works.
The issue is about going from Compose to Swarm. I created an issue: docker/swarm #2956
Maybe it's not your specific case, but it could help users who hit problems with Docker Swarm that are not present in Docker Compose.
I've been fighting this issue for over a week. I found that the default network for Docker Compose uses the bridge driver, while Docker Swarm uses overlay.
Later, I read in the Caveats section of the Postgres Docker image repo that there's a problem with IPVS connection timeouts in overlay networks, and it refers to this blog for solutions.
I tried the first option and changed the endpoint_mode setting to dnsrr in my docker-compose.yml file:
db:
  image: postgres:12
  # Other settings ...
  deploy:
    endpoint_mode: dnsrr
Keep in mind that there are some caveats (mentioned in the blog) to consider; however, you could also try the other options.
You may also find something useful in this issue, as the people there faced the same problem.
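Another workaround discussed around the same IPVS caveat is keeping idle TCP connections alive so they are not silently dropped by the overlay network's connection timeout. A rough, hedged sketch for the service definition (the values are illustrative, and sysctls on Swarm services require a reasonably recent Docker Engine):

db:
  image: postgres:12
  sysctls:
    # Send keepalive probes well before the IPVS idle timeout (about 900 s) drops the connection
    - net.ipv4.tcp_keepalive_time=600
    - net.ipv4.tcp_keepalive_intvl=30
    - net.ipv4.tcp_keepalive_probes=10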

How to build a Firebase project in CircleCI, including Firestore and Storage rules

I have a React project that is hosted in Firebase, and I am using CircleCI for builds. This has been working fine. However, I now want to include the Firebase Firestore rules and index config and the Firebase Storage rules in the build.
I have added them to my firebase.json file as follows:
"firestore": {
"rules": "firestore.rules",
"indexes": "firestore.indexes.json"
},
"storage": {
"rules": "storage.rules"
}
If I do a Firebase deployment from the command line, the rules and indexes I've configured work fine.
My problem comes when I try to do a build in CircleCI. I get to the deploying stage and then I get this error:
i deploying firestore, hosting
Error: Error reading rules file firestore.rules
Exited with code 1
This is the relevant part of the config.yml:
deploy_uat:
  docker:
    - image: google/cloud-sdk
  steps:
    - run: echo $(printenv)
    - type: shell
      name: "Provisioning infrastructure"
      command: |
        curl -sL https://deb.nodesource.com/setup_8.x | bash -
        apt-get -qq install -y build-essential nodejs
        echo "node version -> $(node --version)"
        echo "npm version -> $(npm --version)"
        # Firebase tools include native code and need npm 5.x to install into a special dir since it won't have permission to access '/usr/lib/node_modules/'
        mkdir ~/.npm-global
        npm config set prefix '~/.npm-global'
        export PATH=~/.npm-global/bin:$PATH
        npm install -g firebase-tools
    - type: shell
      name: "Downloading & configuring archive prior to deployment"
      command: |
        echo ${GCP_SERVICE_ACCOUNT_AMCE_API_ADMIN_CIRCLECI} | base64 --decode > key.json
        gcloud auth activate-service-account --key-file key.json
        gcloud config set compute/zone us-central1
        gcloud config set project AMCE-45
        mkdir tmp
        cd tmp
        gsutil cp gs://AMCE-45-AMCE-admin-archive-web/${CIRCLE_PROJECT_REPONAME}-${CIRCLE_SHA1}.tgz .
        tar xfz ${CIRCLE_PROJECT_REPONAME}-${CIRCLE_SHA1}.tgz
        ls -al
    - type: shell
      name: "Deploying"
      command: |
        export PATH=~/.npm-global/bin:$PATH
        ls -al build
        echo "Using env -> $(cat build/env.js)"
        firebase list --token "${FIREBASE_AUTH_TOKEN_AMCE_WEB_CUSTOMER_UAT}"
        firebase deploy -P uat --token "${FIREBASE_AUTH_TOKEN_AMCE_WEB_CUSTOMER_UAT}"
Is there some additional dependency that I need to add? I've played around trying to add various Firebase dependencies, but that just generated errors.
After a night's sleep, the solution was obvious...
I hadn't added firestore.rules, storage.rules and firestore.indexes.json to the archive created in my config.yml file. Once I added them, it built fine:
ls -al
tar -zcvf ${CIRCLE_PROJECT_REPONAME}-${CIRCLE_SHA1}.tgz .firebaserc firebase.json firestore.rules storage.rules firestore.indexes.json build
