I build a Docker image from a Dockerfile and push that same image to Artifactory.
I noticed that when using apk --no-cache, the sha256 changes even though the Dockerfile did not.
I pushed 3 different images to Artifactory, and when checking I noticed 3 different layers - does that mean it occupies 3 layers instead of keeping a reference to the first layer pushed?
I built the same Dockerfile 3 times and pushed the image to Artifactory.
Checking the image layers, I noticed 3 images with different layers (different sha256).
FROM alpine:3.9
ADD resources/repositories /etc/apk/repositories
RUN apk --no-cache add curl && apk --no-cache add --repository http://myartifactory.com:8081/artifactory/alpine-nl-remote/alpine/edge/testing gosu
Running (where the 1 in the tag changes to 2 and 3 for each build):
docker build -t myartifactory.com/alpine:3.9-1 .
docker push myartifactory.com/alpine:3.9-1
Checking Artifactory, I now see 3 layers for each image.
One layer is different and two layers are the same across all 3 images.
The same image should be built with the same sha256, and Artifactory should keep one copy of the image with 2 more references pointing to that copy.
Running apk --no-cache to install the same package may produce a different sha256, depending on whether there is any local cache. The reason the image changed is that file metadata such as mtime or atime changed.
You should build a base image that installs all the dependencies, then build from that base.
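A minimal sketch of that split, assuming a hypothetical base image name myartifactory.com/alpine-base:3.9 that you would adapt to your setup:

# Dockerfile.base - built and pushed once; its layers are reused afterwards
FROM alpine:3.9
ADD resources/repositories /etc/apk/repositories
RUN apk --no-cache add curl && apk --no-cache add --repository http://myartifactory.com:8081/artifactory/alpine-nl-remote/alpine/edge/testing gosu

# Dockerfile - application builds start FROM the already-pushed base
FROM myartifactory.com/alpine-base:3.9
# only application-specific steps go here, so the base layers keep their sha256

Build and push the base once:
docker build -f Dockerfile.base -t myartifactory.com/alpine-base:3.9 .
docker push myartifactory.com/alpine-base:3.9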
Turbo crashes when using any command (e.g. turbo build), even when a valid project and turbo.json exist. This doesn't seem to be a problem on Ubuntu, only on Alpine (arm64).
I've tried all of the new versions but they have the same issue.
npm install --global turbo
npm install --global turbo@latest
npm install --global turbo@canary
error:
thread 'main' panicked at 'Failed to execute turbo.: Os { code: 2, kind: NotFound, message: "No such file or directory" }', crates/turborepo/src/main.rs:23:10
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Because I was stuck on this for a few hours, I'll share the solution here (which I also shared on GitHub):
If using a Dockerfile: add RUN apk add --no-cache libc6-compat to it
If using it on an Alpine machine, run apk add --no-cache libc6-compat
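For example, a minimal Dockerfile sketch with the fix applied (the node image tag here is only illustrative):

FROM node:18-alpine
# musl-based Alpine lacks the glibc shim the prebuilt turbo binary expects
RUN apk add --no-cache libc6-compat
RUN npm install --global turbo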
More explanation in:
Docker Node image docs
Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
The main caveat to note is that it does use musl libc instead of glibc and friends, so certain software might run into issues depending on the depth of their libc requirements.
One common issue that may arise is a missing shared library ... . To add the missing shared libraries to your image, adding the libc6-compat package in your Dockerfile is recommended: apk add --no-cache libc6-compat
https://github.com/vercel/turbo/issues/3373#issuecomment-1397080265
We have a JS-based stack in our application - React, with the vast majority being a React Admin frontend built on a Next.js server, with Postgres, Prisma and Nexus on the backend. I realize it's not a great use case for Next.js (React Admin basically puts the entire application in a single "component" (root), so I have one giant index.tsx page instead of lots of smaller pages), but we've had quite terrible build times in GitLab CI and I'd like to know if there's anything I can do about it.
We utilize custom gitlab-runners deployed on the company Kubernetes cluster. Our build job essentially looks like:
- docker login
- CACHE_IMAGE_NAME="$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:latest"
- SHA_IMAGE_NAME="$CI_REGISTRY_IMAGE/$CI_COMMIT_SHORT_SHA"
- docker pull $CACHE_IMAGE_NAME || true
- >
docker build
-t $CACHE_IMAGE_NAME
-t $SHA_IMAGE_NAME
--cache-from=$CACHE_IMAGE_NAME
--build-arg BUILDKIT_INLINE_CACHE=1
.
- docker push # both tags
And the Dockerfile for that is
FROM node:14-alpine
WORKDIR /app
RUN chown -R node:node /app
USER node
COPY package.json yarn.lock ./
ENV NODE_ENV=production
RUN yarn install --frozen-lockfile --production
COPY . .
# Prisma client generate
RUN npm run generate
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
ARG NODE_OPTIONS=--max-old-space-size=4096
ENV NODE_OPTIONS $NODE_OPTIONS
EXPOSE 3000
CMD ["yarn", "start"]
This built image is then deployed with Helm into our K8s cluster, with the premise that the initial build is slower but subsequent builds in the pipeline will be faster, as they can utilize the Docker cache. This works fine for npm install (the first run takes around 10 minutes to install, subsequent runs are cached), but next build is where all hell breaks loose. The build times are around 10-20 minutes. I recently updated to Next.js 12.0.2, which ships with the new Rust-based SWC compiler that is supposed to be up to 5 times faster, yet it's actually even slower (16 minutes).
I must be doing something wrong, but can anyone point me in some direction? Unfortunately, React Admin cannot be split across several Next.js pages AFAIK, and rewriting it to not use the framework is not an option either. I've tried running npm install and next build in the CI, copying the output into the image, and storing it in the GitLab cache, but that seems to just shift the time spent from installing/building into copying the massive directories into and out of the cache and into the image. I'd like to try caching the .next directory between builds, maybe there is some kind of incremental build possible, but I'm skeptical to say the least.
Well, there are several different things we can try to make it faster.
You're using Prisma, but you're generating the client every time there's a modification to any of the files, preventing the Docker cache from actually taking care of that layer. If we take a look at the Prisma documentation, we only need to regenerate the Prisma Client when there's a change to the Prisma schema, not to the TS/JS code.
I will assume you have your Prisma schema under the prisma directory, but feel free to adapt this to the reality of your project:
ENV NODE_ENV=production
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production
COPY prisma prisma
RUN npm run generate
You're using a huge image for your final container, which maybe doesn't have a significant impact on the build time, but it definitely does on the final size and on the time required to push and pull the image. I would recommend migrating to a multi-stage build like the following one:
ARG NODE_OPTIONS=--max-old-space-size=4096
FROM node:alpine AS dependencies
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production
COPY prisma prisma
RUN npm run generate
FROM node:alpine AS build
WORKDIR /app
COPY . .
COPY --from=dependencies /app/node_modules ./node_modules
ENV NEXT_TELEMETRY_DISABLED 1
RUN npm run build
FROM node:alpine
ARG NODE_OPTIONS
ENV NODE_ENV production
ENV NODE_OPTIONS $NODE_OPTIONS
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
WORKDIR /app
COPY --from=build /app/public ./public
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./package.json
COPY --chown=nextjs:nodejs --from=build /app/.next ./.next
USER 1001
EXPOSE 3000
CMD ["yarn", "start"]
From another point of view, you could probably also improve the Next.js build itself by changing some tools and modifying the Next.js configuration. You should run https://github.com/stephencookdev/speed-measure-webpack-plugin locally to analyze what the culprit of that humongous build time is (which is probably something related to sass), and also take a look at the TerserPlugin and the IgnorePlugin.
(I am a newbie with respect to Artifactory.) I use the JFrog CLI for promoting builds (Docker images) from a dev repo to prod. When I build, I create three tags: one VERSION, one latest and one with a build ID. When I promote using the "jfrog rt bpr" command, I don't want the build-ID tag to be promoted, only VERSION and latest.
I build Docker images using GitLab and use the JFrog CLI for pushing images, scanning with Xray and publishing build info to Artifactory.
My process is that in a GitLab pipeline I build the Docker image with three tags.
Then I deploy it to a dev repository in Artifactory with "jfrog rt docker-push..." for all three tags and then publish the build info to Artifactory.
Then I test the Docker image in a test stage in the GitLab pipeline, followed by an Xray scan of the build using the JFrog CLI.
When everything works, I want to promote the Docker image to my prod repository in Artifactory using the JFrog CLI. This however promotes all three tags, but I would like to promote only the VERSION and latest tags and not the third tag, which is only used as a "snapshot" tag.
Is this possible using the JFrog CLI promote command? Or is there a better way of thinking about the whole process of naming, tagging and promoting images from dev to prod using Artifactory?
This is the build stage:
# Build docker images
- >
docker build
-t $DOCKER_REGISTRY/$CI_PROJECT_NAME:latest
-t $DOCKER_REGISTRY/$CI_PROJECT_NAME:$VERSION
-t $DOCKER_REGISTRY/$CI_PROJECT_NAME:$CI_PIPELINE_ID
.
# Push to Artifactory's dev repo via the virtual repo
- jfrog rt docker-push $DOCKER_REGISTRY/$CI_PROJECT_NAME:latest docker-virtual --build-name=$CI_PROJECT_NAME --build-number=$CI_PIPELINE_ID
- jfrog rt docker-push $DOCKER_REGISTRY/$CI_PROJECT_NAME:$VERSION docker-virtual --build-name=$CI_PROJECT_NAME --build-number=$CI_PIPELINE_ID
# Collect environment variables
- jfrog rt build-collect-env $CI_PROJECT_NAME $CI_PIPELINE_ID
# Push build info to Artifactory, but exclude sensitive information such as passwords
- jfrog rt build-publish --build-url=$CI_PIPELINE_URL --env-exclude="*DOCKER_AUTH_CONFIG*;*PASSWORD*;*KEY*" $CI_PROJECT_NAME $CI_PIPELINE_ID
This is the promote stage:
- jfrog rt bpr --status=STABLE --copy=true $CI_PROJECT_NAME $CI_PIPELINE_ID docker-prod-local
I'm trying to dockerize Drupal 8, and I'm running into the issue that after running Drupal 8 in a container and installing it, if I then remove the container and start it again, it prompts me to install it again.
The thing is, when Drupal is installed, a settings.php file is created with the database details.
I wanted to create a systemd unit file for launching the Drupal 8 container in a smart way, so that even if the container is removed, it starts again next time with the same installation.
Someone recommended that I write a systemd unit file with ConditionPathExists= to mount settings.php based on whether it exists locally; however, I think this is not going to fully work, because on installation inside the container, the generated settings.php file wouldn't be persisted back to the host machine.
So how can I solve the issue of making a Docker container for Drupal that offers to install if it hasn't been installed yet, and from then on uses the installed instance even if the container is removed and rebuilt?
I would highly recommend using the official Docker image for Drupal:
https://hub.docker.com/_/drupal/
It saves a lot of time, and if you still need to customize your environment, at least you can look at its Dockerfile and see how it's been done by the community.
Container persistence
When a container is stopped, it can be restarted. All its files are preserved, including any settings.php files that may have been created.
A brand-new container, on the other hand, will always start from scratch; there is no simple way to avoid this. To persist data across container instances, you need to use volumes.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Here's how it's done:
#
# Create a data container
#
docker create \
-v /var/www/html/sites \
-v /var/www/private \
--name my-data \
drupal
#
# Run drupal without a db container (select sqlite on first install)
#
docker run --volumes-from my-data --name my-drupal -p 8080:80 -d drupal
Note:
You could use volume mappings to the host machine, but this data container pattern is more flexible, for example when upgrading Drupal.
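For completeness, a sketch of the host-mapping variant (the host paths here are just examples):

docker run -d --name my-drupal -p 8080:80 \
    -v /srv/drupal/sites:/var/www/html/sites \
    -v /srv/drupal/private:/var/www/private \
    drupal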
How it works
Drupal 8 is built on top of the official PHP language image.
Drupal 8.1 Dockerfile
Php 7 Apache Dockerfile
In the PHP build file, note how Apache is being run in the foreground:
CMD ["apache2-foreground"]
No need for systemd running inside the container.
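If you still want the host's systemd to manage the container lifecycle, as the question suggested, here is a rough sketch of a host-side unit (the unit name is a placeholder) that reuses the data container above:

# /etc/systemd/system/drupal-container.service
[Unit]
Description=Drupal 8 container
Requires=docker.service
After=docker.service

[Service]
Restart=always
# remove any stale container, then run attached so systemd can supervise it
ExecStartPre=-/usr/bin/docker rm -f my-drupal
ExecStart=/usr/bin/docker run --rm --volumes-from my-data --name my-drupal -p 8080:80 drupal
ExecStop=/usr/bin/docker stop my-drupal

[Install]
WantedBy=multi-user.target

Because settings.php lives in the my-data volumes, removing and recreating the my-drupal container keeps the existing installation.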
I am trying to deploy one EXE file and its zipped source file to Sonatype Nexus using the Maven command line. The files must be deployed as SNAPSHOTs.
So, I have 2 files:
- testXYZ.exe (the artifact)
- testXYZ.zip (the source file)
Using Maven 2.2.1 and the command described here:
mvn deploy:deploy-file -Durl=file:///home/me/m2-repo \
-DrepositoryId=some.repo.id \
-Dfile=./path/to/artifact-name-1.0.jar \
-DpomFile=./path/to/pom.xml \
-Dsources=./path/to/artifact-name-1.0-sources.jar \
-Djavadoc=./path/to/artifact-name-1.0-javadoc.jar
I can deploy the EXE, but cannot deploy the source, because Maven 2.2.1 uses maven-deploy-plugin v2.5 and this option is not supported until v2.7.
I am not allowed to use newer versions of Maven, so I tried a different approach.
Using the following two commands I can deploy both artifacts, but the source cannot be downloaded from Nexus.
call mvn deploy:deploy-file -DgroupId=com.xyz -DartifactId=testXYZ -Dversion=1.1.116-SNAPSHOT -Dpackaging=zip -Dfile=testXYZ.zip -Dclassifier=sources -Durl=http://build:8081/nexus/content/repositories/snapshots -DrepositoryId=nexus
call mvn deploy:deploy-file -DgroupId=com.xyz -DartifactId=testXYZ -Dversion=1.1.116-SNAPSHOT -Dpackaging=exe -Dfile=testXYZ.exe -Durl=http://build:8081/nexus/content/repositories/snapshots -DrepositoryId=nexus
After deploying, I search for testXYZ and click on the artifact's source download link.
Nexus says:
"Item not found on path
"com.xyz:testXYZ:1.1.116-SNAPSHOT:c=sources:e=jar"!"
The problem is the way Maven uploads these artifacts.
Line from the log file while the source is uploading:
Uploaded: http://build:8081/nexus/content/repositories/snapshots/com/xyz/testXYZ/1.1.116-SNAPSHOT/testXYZ-1.1.116-20120106.111705-1-sources.zip
Line from the log file while the main artifact is uploading:
Uploaded: http://build:8081/nexus/content/repositories/snapshots/com/xyz/testXYZ/1.1.116-SNAPSHOT/testXYZ-1.1.116-20120106.111709-2.exe
Notice 111705-1 versus 111709-2. The timestamp and build number must be the same if we want Nexus to generate correct links.
This approach is described here:
Deploying an artifact, its sources and javadoc using maven's deploy:deploy-file plugin
and here:
http://maven.apache.org/plugins/maven-install-plugin/examples/installing-secondary-artifacts.html
and it works for fixed versions (for example 1.1.116), but not for SNAPSHOTs.
EXE and ZIP files can be deployed to Nexus (like JAR files) if a fixed version is used.
So, the question is:
Is there a way to deploy artifact and source SNAPSHOTs from the command line to Sonatype Nexus and be sure that these files can be downloaded by clicking on the sources and artifact links?
Note:
If I disable the timestamp suffix, this can work, but I do not want to do this:
-DuniqueVersion=false
Thanks,
Marjan
I found a partial solution to this problem. I can call a specific version of the maven-deploy-plugin like this:
mvn org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file...
This way, artifact and source SNAPSHOTs can be deployed to Nexus without any download problems, but it behaves as if
-DuniqueVersion=false
is still there.
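For reference, a sketch of what the single-command form could look like with plugin version 2.7, reusing the coordinates from the commands above (untested here, so treat it as a starting point):

mvn org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file \
    -DgroupId=com.xyz -DartifactId=testXYZ -Dversion=1.1.116-SNAPSHOT \
    -Dpackaging=exe -Dfile=testXYZ.exe \
    -Dsources=testXYZ.zip \
    -Durl=http://build:8081/nexus/content/repositories/snapshots \
    -DrepositoryId=nexus

Uploading the main artifact and its sources in one invocation gives them the same snapshot timestamp and build number, which is what Nexus needs to generate matching download links.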