'acr purge --untagged' is removing all tagged images from an ACR repository

If I have the following tags and manifest in an ACR repository, then running this command:
az acr repository show-manifests --name "[registry-name]" --repository "[repository-name]"
returns the following:
[
  {
    "digest": "sha256:30be2b07e723b0f36fed370c386b027e52dbcd0ad2ad2fcac1d3b7d1b361292f",
    "tags": [
      "982878",
      "master"
    ],
    "timestamp": "2022-09-07T15:49:04.4187041Z"
  }
]
When I run the following purge command:
az acr run --cmd "acr purge --filter '[repository-name]:.*' --untagged --ago 1m" --registry [registry-name] /dev/null
It deletes the tags and the manifest, and because that deletes everything, the repository is removed as well.
Why is it doing this when I'm using the --untagged flag and you can clearly see it's not untagged based on the starting state?

I have tried to reproduce the same in my environment.
I have two repositories; hello-world has one tag: latest.
I checked with the command you tried:
PURGE_CMD="acr purge --filter 'hello-world:.*' \
  --untagged --ago 1m"
az acr run \
  --cmd "$PURGE_CMD" \
  --registry myregistry807 \
  /dev/null
It deletes even the tagged images.
This command:
az acr run --cmd "acr purge --filter 'hello-world:.*' --untagged --ago 1d" --registry myregistry807 /dev/null
deletes the tags first, then the untagged manifests, and then the repository itself.
You can check "Purge tags and manifests - Run in an on-demand task" (Azure Container Registry, Microsoft Docs):
This purge command deletes all image tags and manifests in the repository (hello-world in my case) in myregistry that were modified more than 1 day ago, plus all untagged manifests.
To delete only untagged manifests, you can try the commands from "Delete all untagged manifests within a repository in one command" (GitHub), where the query [?tags[0]==null] matches only manifests with no tag.
In bash:
az acr repository show-manifests -n myregistry807 --repository targetrepository --query "[?tags[0]==null].digest" -o tsv | xargs -I% az acr repository delete -n myregistry807 -t targetrepository@% --yes
For the newer preview command:
az acr manifest list-metadata -r myregistry807 -n hello-world --query "[?tags[0]==null].digest" -o tsv | xargs -I% az acr repository delete -n myregistry807 -t hello-world@% --yes
The repository is not deleted, as it still has tags.
Then I checked with [?tags[0]!=null] to delete all tagged manifests, and it worked for me.
Result: it deleted the tagged manifest, which was the only one present.
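One safeguard worth mentioning: acr purge supports a --dry-run flag that only lists what would be deleted, without deleting anything. A minimal sketch, reusing the registry and repository names from above:
# Preview what the purge would delete, without actually deleting anything
az acr run --cmd "acr purge --filter 'hello-world:.*' --untagged --ago 1d --dry-run" \
  --registry myregistry807 /dev/null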

Related

Airflow 2.0.2 - No user yet created

We're moving from Airflow 1.x to 2.0.2, and I'm noticing the below error in my terminal after I run docker-compose run --rm webserver initdb:
{{manager.py:727}} WARNING - No user yet created, use flask fab
command to do it.
but in my entrypoint.sh I have the below to create users:
echo "Creating airflow user: ${AIRFLOW_CREATE_USER_USER_NAME}..."
su -c "airflow users create -r ${AIRFLOW_CREATE_USER_ROLE} -u ${AIRFLOW_CREATE_USER_USER_NAME} -e ${AIRFLOW_CREATE_USER_USER_NAME}#vice.com \
-p ${AIRFLOW_CREATE_USER_PASSWORD} -f ${AIRFLOW_CREATE_USER_FIRST_NAME} -l \
${AIRFLOW_CREATE_USER_LAST_NAME}" airflow
echo "Created airflow user: ${AIRFLOW_CREATE_USER_USER_NAME} done!"
;;
Because of this error, whenever I run Airflow locally I still have to create a user manually every time I start it up:
docker-compose run --rm webserver bash
airflow users create \
--username name \
--firstname fname \
--lastname lname \
--password pw \
--role Admin \
--email email@email.com
Looking at the Airflow Docker entrypoint script entrypoint_prod.sh, it looks like Airflow will create an admin user for you when the container boots.
By default the admin user is 'admin' without a password.
If you want something different, set these variables: _AIRFLOW_WWW_USER_PASSWORD and _AIRFLOW_WWW_USER_USERNAME
(I'm on airflow 2.2.2)
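For reference, a minimal sketch of wiring these variables up, assuming the official image's entrypoint is in use; _AIRFLOW_DB_UPGRADE and _AIRFLOW_WWW_USER_CREATE are the companion flags the official docker-compose file sets (they are not mentioned in the answer above), and the webserver service name comes from the question:
# Hedged sketch: let the image's entrypoint create the admin user at init time.
# Variable names per the official Airflow image; adjust the values as needed.
docker-compose run --rm \
  -e _AIRFLOW_DB_UPGRADE=true \
  -e _AIRFLOW_WWW_USER_CREATE=true \
  -e _AIRFLOW_WWW_USER_USERNAME=admin \
  -e _AIRFLOW_WWW_USER_PASSWORD=admin \
  webserver airflow version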
Looks like they changed the admin creation command password from -p test to -p $DEFAULT_PASSWORD. I had to pass in this DEFAULT_PASSWORD env var to the docker-compose environment for the admin user to be created. It also looks like they now suggest using the .env.localrunner file for configuration.
Here is the commit where that change was made.
(I think you asked this question prior to that change being made, but maybe this will help someone in the future who had my same issue).

How to delete all contents of local Artifactory repository via REST API?

I'm using a local repository as a staging repo and would like to be able to clear the whole staging repo via REST. How can I delete the contents of the repo without deleting the repo itself?
Since I have a similar requirement in one of my environments, I'd like to provide a possible solution approach.
It is assumed the JFrog Artifactory instance has a local repository called JFROG-ARTIFACTORY which holds the latest JFrog Artifactory Pro installation RPM(s). For listing and deleting I've created the following script:
#!/bin/bash
# The logged-in user will also be the admin account for the Artifactory REST API
A_ACCOUNT=$(who am i | cut -d " " -f 1)
LOCAL_REPO=$1
PASSWORD=$2
STAGE=$3
URL="example.com"

# Check whether a stage was provided; if not, default to PROD
if [ -z "$STAGE" ]; then
  STAGE="repository-prod"
fi

# List all files within the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-FileList
curl --silent \
  -u "${A_ACCOUNT}:${PASSWORD}" \
  -i \
  -X GET "https://${STAGE}.${URL}/artifactory/api/storage/${LOCAL_REPO}/?list&deep=1" \
  -w "\n\n%{http_code}\n"
echo

# Delete all files in the local repository
# Doc: https://www.jfrog.com/confluence/display/JFROG/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteItem
curl --silent \
  -u "${A_ACCOUNT}:${PASSWORD}" \
  -i \
  -X DELETE "https://${STAGE}.${URL}/artifactory/${LOCAL_REPO}/" \
  -w "\n\n%{http_code}\n"
echo
So after calling
./Scripts/deleteRepository.sh JFROG-ARTIFACTORY Pa\$\$w0rd! repository-dev
for the development instance, it listed all files in the local repository JFROG-ARTIFACTORY (the JFrog Artifactory Pro installation RPMs), deleted them, but left the local repository itself in place.
You may change and enhance the script for your needs, and also have a look at How can I completely remove artifacts from Artifactory?
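If you would rather delete items one by one instead of deleting the repository's root folder (for example, to filter what gets removed), here is a possible variant. It is not from the original answer; it assumes jq is installed, and the host, credentials, and repository name are placeholders:
#!/bin/bash
# Hedged sketch: list every file via the File List API, then DELETE each item.
BASE="https://repository-prod.example.com/artifactory"
REPO="JFROG-ARTIFACTORY"
AUTH="admin:password"

curl --silent -u "${AUTH}" \
  "${BASE}/api/storage/${REPO}/?list&deep=1" \
  | jq -r '.files[].uri' \
  | while read -r item; do
      # each uri starts with "/", so this resolves to .../REPO/path/to/file
      curl --silent -u "${AUTH}" -X DELETE "${BASE}/${REPO}${item}"
    done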

Mount EFS to wp-content on elastic beanstalk

So I'm having a problem setting up a WordPress site on EB. I got the EFS to mount correctly on wp-content/uploads/wpfiles (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-hawordpress-tutorial.html), however this only allows the pages to be stored, not the plugins. Is it possible to mount the entire wp-content folder onto EFS? I've tried and so far failed.
I'm not sure if this issue was resolved and it passed silently. I'm having the same issue as you, but with a different error. My knowledge is fairly limited, so take what I say with a grain of salt. According to what I saw in your log, the problem is that your instance can't see the server. It could be that your EB application is being deployed in a different Availability Zone than your EFS: maybe you have mount targets for AZs a, b, and d, and your EB is deployed in AZ c. I hope this helps.
I tried a different approach. It basically does the same thing, but I'm manually linking each of the subfolders instead of the wp-content folder. For it to work I deleted the original folders inside /var/app/ondeck (which eventually gets copied to /var/app/current, the folder that gets served).

Of course, once this is done your WordPress won't work, since it doesn't have any themes. The solution is to quickly log in to the EC2 instance on which your Elastic Beanstalk app is running and manually copy the contents to the mounted EFS (in my case the /wpfiles folder; see the copy sketch after the config below). To connect to the EC2 instance (you can find the instance ID under your EB health configuration) you can follow this link, and to mount your EFS you can follow this link. If the config works you won't have to mount it, since it will already be mounted, though empty.

Here is the content of my config file:
option_settings:
  aws:elasticbeanstalk:application:environment:
    EFS_NAME: '`{"Ref" : "FileSystem"}`'
    MOUNT_DIRECTORY: '/wpfiles'
    REGION: '`{"Ref": "AWS::Region"}`'

packages:
  yum:
    nfs-utils: []
    jq: []

files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      # Read the environment variables first, then create the mount point
      EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.REGION')
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      MOUNT_DIRECTORY=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.MOUNT_DIRECTORY')
      mkdir -p $MOUNT_DIRECTORY
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME.efs.${EFS_REGION}.amazonaws.com:/ $MOUNT_DIRECTORY || true
      mkdir -p $MOUNT_DIRECTORY/uploads
      mkdir -p $MOUNT_DIRECTORY/plugins
      mkdir -p $MOUNT_DIRECTORY/themes
      chown webapp:webapp -R $MOUNT_DIRECTORY/uploads
      chown webapp:webapp -R $MOUNT_DIRECTORY/plugins
      chown webapp:webapp -R $MOUNT_DIRECTORY/themes

commands:
  01_mount:
    command: "/tmp/mount-efs.sh"

container_commands:
  01-rm-wp-content-uploads:
    command: rm -rf /var/app/ondeck/wp-content/uploads && rm -rf /var/app/ondeck/wp-content/plugins && rm -rf /var/app/ondeck/wp-content/themes
  02-symlink-uploads:
    command: ln -snf $MOUNT_DIRECTORY/uploads /var/app/ondeck/wp-content/uploads && ln -snf $MOUNT_DIRECTORY/plugins /var/app/ondeck/wp-content/plugins && ln -snf $MOUNT_DIRECTORY/themes /var/app/ondeck/wp-content/themes
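As for the manual copy step mentioned above, here is a minimal sketch of seeding the freshly mounted (but empty) EFS from inside the EC2 instance. The /tmp/wp-content source path is an assumption (e.g. a backup or a fresh WordPress bundle you unpacked there); the /wpfiles target and the webapp user come from the setup above:
# Hedged sketch: seed the empty EFS folders with theme/plugin files, then fix
# ownership so the webapp user can write to them.
sudo cp -a /tmp/wp-content/themes/.  /wpfiles/themes/
sudo cp -a /tmp/wp-content/plugins/. /wpfiles/plugins/
sudo chown -R webapp:webapp /wpfiles/themes /wpfiles/plugins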
I'm using another config file to create my EFS, as described here. In case you have already created your EFS, you must change EFS_NAME: '`{"Ref" : "FileSystem"}`' to EFS_NAME: id_of_your_EFS.
I hope this helps user3738338.
You can follow this link: https://github.com/aws-samples/eb-php-wordpress/blob/master/.ebextensions/efs-mount.config
Just note that it uses uploads; you can change it to wp-content.

Cannot copy intermediate docker container files to host

I have a Dockerfile; it runs dotnet publish and the DLLs are copied to an intermediate Docker container. I would like to copy the DLLs that are generated in the container to my local system (host) as well.
I believe we can use the docker cp command to do that, but I am not able to find a way to get the intermediate container ID to use with it.
Syntax: docker cp CONTAINER:Container_Path Host_Path
Please suggest any other better solution for this scenario.
Dockerfile:
FROM microsoft/aspnetcore-build:1.1.4 as builder
COPY . /Code
RUN dotnet restore /Code/MyProj.csproj
RUN dotnet publish -c Release /Code/MyProj.csproj
RUN cp CONTAINER: /Code/bin/Release/netcoreapp1.1/publish /binaries
Thanks.
This answer works outside of the Dockerfile.
First, your Dockerfile would have to declare a volume:
VOLUME /my/path/in/container
To get files into and out of that volume, use tar -cvf and tar -xvf to put and get files between the container and the host.
To put files from the host's newfiles.tar (in the current directory) into a container volume mounted at /my/path/in/container:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -xf /newfiles/newfiles.tar"
To get files out of a container volume mounted at /my/path/in/container into origfiles.tar on the host:
docker run --rm \
-v my-volume-data:/my/path/in/container -v $(pwd):/newfiles ubuntu bash -c \
"cd /my/path/in/container && tar -cf /newfiles/origfiles.tar"
Adding --user 1000:1000 to these docker run commands is optional, if your container has a user with uid 1000.
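As a different approach from the volume-plus-tar answer above: because the question's Dockerfile tags its build stage as builder, you can build just that stage, create a stopped container from the resulting image, and docker cp the publish output to the host. A minimal sketch, with the image tag myapp-builder as an assumption and the paths taken from the question:
# Hedged sketch: build the builder stage, then copy the publish output out of a
# stopped container created from the resulting image.
docker build --target builder -t myapp-builder .
id=$(docker create myapp-builder)   # create (but don't start) a container
docker cp "$id":/Code/bin/Release/netcoreapp1.1/publish ./binaries
docker rm -v "$id"                  # clean up the temporary container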

Meteor Up Docker and Graphicsmagick

I'm looking for how to install GraphicsMagick in a Meteor Up Docker deployment.
I found this solution (Access binaries inside docker) but I couldn't make it work. Where do I put these lines in start.sh?
meteorDockerId=$(docker ps | grep meteorhacks/meteord:base | awk '{print $1}')
docker exec $meteorDockerId apt-get install graphicsmagick -y
That's my start.sh:
#!/bin/bash
APPNAME=instagatas
APP_PATH=/opt/$APPNAME
BUNDLE_PATH=$APP_PATH/current
ENV_FILE=$APP_PATH/config/env.list
PORT=80
USE_LOCAL_MONGO=0
# remove previous version of the app, if exists
docker rm -f $APPNAME
# remove frontend container if exists
docker rm -f $APPNAME-frontend
set -e
docker pull meteorhacks/meteord:base
if [ "$USE_LOCAL_MONGO" == "1" ]; then
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--env-file=$ENV_FILE \
--link=mongodb:mongodb \
--hostname="$HOSTNAME-$APPNAME" \
--env=MONGO_URL=mongodb://mongodb:27017/$APPNAME \
--name=$APPNAME \
meteorhacks/meteord:base
else
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--hostname="$HOSTNAME-$APPNAME" \
--env-file=$ENV_FILE \
--name=$APPNAME \
meteorhacks/meteord:base
fi
docker pull meteorhacks/mup-frontend-server:latest
docker run \
-d \
--restart=always \
--volume=/opt/$APPNAME/config/bundle.crt:/bundle.crt \
--volume=/opt/$APPNAME/config/private.key:/private.key \
--link=$APPNAME:backend \
--publish=443:443 \
--name=$APPNAME-frontend \
meteorhacks/mup-frontend-server /start.sh
Re-installing the graphicsmagick package every time you re-start the containers seems like a hack I wouldn't want to do.
If you're modifying the start script already, might as well use a Dockerfile:
FROM meteorhacks/meteord:base
RUN apt-get install graphicsmagick -y
Then modify the start.sh template to build a new Docker image with graphicsmagick, tag it, and use that image instead:
see: https://gist.github.com/so0k/7d4be21c5e2d9abd3743/revisions
EDIT: Where to put the Dockerfile?
The start.sh template is copied to /opt/<appName>/config/, so currently the Dockerfile would need to be in that same directory (/opt/<appName>/config/Dockerfile); see Linux init Task.
Alternatively, you can specify a specific Dockerfile with the -f flag of docker build.
Or, as a third option, pipe the Dockerfile to docker build using a here document.
I've updated the start.sh gist; we no longer pull the meteord:base image, we build one instead:
docker build -t meteorhacks/meteord:app - << EOF
FROM meteorhacks/meteord:base
RUN apt-get install graphicsmagick -y
EOF
The docker build will run every time, but as long as the requirements haven't changed, Docker will use its cached image layers.
The development version of Meteor Up at Kadirahq allows specifying a custom Docker image in the config file (mup.js).
MeteorD images with GraphicsMagick installed are available on Docker Hub.
This got me a working deployment (Meteor 1.3.2.4, Meteor Up 309cefb, Node v5.4.1):
mup.js:
module.exports = {
  …
  meteor: {
    dockerImage: 'ianmartorell/meteord-graphicsmagick',
    …
  },
};
I couldn't get the Docker image that @bskp mentioned to work, so I figured out how to write one that uses abernix/meteord:base and then has graphicsmagick installed. Very simple, but it seems to be working for me on Meteor 1.4.1.1.
I just did this in my mup.js file:
docker: {
  image: "joshjoe/meteor-graphicsmagick",
},
This was a huge pain to get working, so I'd be happy to help anyone who is struggling with this.
https://github.com/c316/meteor-graphicsmagick
If the if statement succeeds, you should be able to see a running container corresponding to the image you are grepping for. In my opinion you can add the two lines after the fi to set the meteorDockerId variable.
Building an image is the right way to get things done, but you can do it temporarily:
docker exec -it MeteorAppName apt-get install imagemagick -y
docker restart MeteorAppName
Check imagemagick: docker exec -it MeteorAppName convert -version
Why don't you add the following package: meteor add cfs:graphicsmagick
https://atmospherejs.com/cfs/graphicsmagick
It tries to make sure GraphicsMagick is available. It worked for my use case; I think it will work with Docker too.
