How to delete Dev tagged docker images in hosted group of Nexus Repository - nexus

I am trying to delete Dev-tagged Docker images in a hosted repository of Nexus Repository. Can anyone provide a clear procedure for doing this in one go, i.e. with a script or some alternative method?

There's information on removing Docker images in the cleanup guide. If you just want to remove a single image, you can use the UI or REST API to delete the manifest. Layers may be used by other images (especially in the case where you're deleting a tag), so it's advised to use the tasks described therein (any mention of cleanup is because it's the cleanup guide and isn't relevant to an individual removal).
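For a single tag, a rough sketch of deleting the manifest through the Docker registry API that the Nexus repository connector exposes (the host, port, repository connector, image name, and credentials below are placeholders, not from the question):

# Look up the tag's manifest digest (returned in the Docker-Content-Digest header)
DIGEST=$(curl -sI -u user:pass \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://nexus.example.com:8083/v2/myimage/manifests/dev \
  | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')

# Delete the manifest by digest (the account needs delete privileges on the repo)
curl -u user:pass -X DELETE \
  "https://nexus.example.com:8083/v2/myimage/manifests/$DIGEST"

After deleting manifests this way, the orphaned layers are what the tasks mentioned above are meant to clean up.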

Related

Practical Use of Artifactory Repositories

In the near future I will start using Artifactory in my project. I have been reading about local and remote repositories and I am a bit confused about their practical use. In general, as far as I understand:
Local repositories are for pushing and pulling artifacts. They have no connection to a remote repository (e.g. the npm registry at https://www.npmjs.com/).
Remote repositories are for pulling and caching artifacts on demand. They work only one way; it is not possible to push artifacts to them.
If I am right up to this point, then in practice you only need a remote repository for npm if you do not develop npm modules but only use them to build your application. In contrast, if you need to both pull and push Docker container images, you need one local repository for pushing and pulling custom images and one remote repository for pulling official images.
Question #1
I am confused because our Artifactory admin created a local npm repository for our project. When I discussed the topic with him, he told me that I first need to get packages from the internet to my PC and then push them to the Artifactory server. This does not make any sense to me, because I have seen some remote repositories on the same server and what we need is only to pull packages from npm. Is there a point that I am missing?
Question #2
Are artifacts in the remote repository cache kept until they are intentionally deleted? Is there a default retention policy (e.g. delete packages older than 6 months)? I ask this because it is important to keep packages until a meteor hits the servers (for the company's archiving policy).
Question #3
We will need to get official Docker images and customize them for CI. It would be a bit hard to maintain one local repo for pulling and pushing custom images and one remote repo for pulling official images. Let's say I need to pull the official ubuntu:latest image, modify it, push it, and finally pull the custom image back. In this case it would be pulled via the remote repository, pushed to the local repo, and pulled again from the local repo. Is it possible to use virtual repositories to do this seamlessly as one repo?
Question #1: This does not make any sense to me because I have seen some remote repositories on the same server and what we need is only to pull packages from npm. Is there a point that I am missing?
Generally, you would want to use a remote repository for this. You would then point your client to this remote repository and JFrog Artifactory would grab them from the remote site and cache them locally, as needed.
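As a minimal sketch of that setup, assuming a remote npm repository named npm-remote on your Artifactory server (the server URL and repository key are placeholders):

# Point the npm client at the Artifactory remote repository
npm config set registry https://myart.example.com/artifactory/api/npm/npm-remote/
# The first install is fetched from npmjs.org through Artifactory and cached;
# subsequent installs are served from the npm-remote cache
npm install lodash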
In some very secure environments, corporate policies do not even allow this (the servers may not even be connected to the internet); instead, third-party libraries are manually downloaded, vetted, and then uploaded to a local repository. I don't think that is your case; they may just not understand how these repository types are intended to be used.
Question #2: Are artifacts in the remote repository cache kept until intentionally deleted? Is there a default retention policy?
They will not be deleted unless you actively configure it to do so.
For some repo types there are built-in retention mechanisms, such as a maximum number of snapshots or tags, but not for all of them, and even where they exist they must be actively turned on. Different organizations have different policies for how long artifacts must be maintained. There are a lot of ways to clean up those old artifacts, but ultimately it will depend on your own requirements.
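As a hedged sketch of how such a policy could be checked or automated (the repository name and the 6-month cutoff are made up for illustration), Artifactory's AQL search API can list cached items older than a cutoff, which a cleanup script could then act on:

# List items in the remote repository's cache created more than 6 months ago
curl -u user:pass -X POST "https://myart.example.com/artifactory/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d 'items.find({"repo":"npm-remote-cache","created":{"$before":"6mo"}})'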
Question #3: Is it possible to use virtual repositories to do this seamlessly as one repo?
A virtual repository will let you aggregate your local and remote sites and appear as a single source. So you can do something like:
docker pull myarturl/docker/someimage:sometag
... docker build ...
docker push myarturl/docker/someimage:sometag-my-modified-version
docker pull myarturl/docker/someimage:sometag-my-modified-version
It is also security-aware so if the user only has access to the local stuff and not the remote stuff, they will only be able to access the local stuff even though they are using the virtual repository that contains both of them.
That said, I don't see why it would be any harder to explicitly use different repositories:
docker pull myarturl/docker-remote/someimage:sometag
... docker build ...
docker push myarturl/docker-local/someimage:sometag-my-modified-version
docker pull myarturl/docker-local/someimage:sometag-my-modified-version
This also has the added advantage that you know they can only pull your modified version of the image and not the remote one (though you can also accomplish that by setting up the correct permissions).

Synchronise back-office with Github in a stateful CMS

For the continuous integration and deployment of websites, I am using a pipeline that builds a Docker image from the files in GitHub and deploys it with Kubernetes.
But for many CMSs like WordPress, PrestaShop, Magento and others, the configuration of the website and the installation of plugins is done in the back-office of the deployed website.
For now, I am building the Docker image on top of the CMS base image and replacing the whole /var/html directory with the files from GitHub. Kubernetes then deploys the containers and attaches a database and persistent storage.
Hence, this breaks my pipeline: imagine that someone installs and configures a plugin in the back-office, and then someone else modifies a file and pushes it to GitHub. The GitHub repo doesn't know that a plugin was installed, so it will build and deploy a new image without it.
How can I integrate all the modifications done in the back-office into my GitHub repository?
The solution we use is an override of the DB class.
We monitor a number of tables (Configuration, module, hook, etc.) and store all queries against them in a SQL file.
So at commit time we also have a .sql file with actions to perform on the database side.
Once deployed, either you execute the SQL manually, or a script detects that new SQL files are present and executes them.
In this way we are always up to date.
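For the "script detects new SQL" step, a minimal deployment-side sketch (the paths, database name, and credentials are assumptions, not from the answer) could look like:

# Apply any .sql files that have not been executed yet, then record them
for f in deploy/sql/*.sql; do
  [ -e "$f" ] || continue                        # nothing pending
  if ! grep -qx "$f" deploy/applied.log 2>/dev/null; then
    mysql -u dbuser -p"$DB_PASS" prestashop_db < "$f" \
      && echo "$f" >> deploy/applied.log         # remember what was applied
  fi
done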
We developed this solution in the form of PrestaShop modules that track all actions.
Regards
My (by no means ideal) working solution:
Create a plugins folder outside Docker and symlink this folder into the container's /wp-content/plugins
Recreate the above in production
Then installing a new plugin doesn't break the CI flow, but it requires two independent installations and configurations if you (or the dev team) need to install something new.
So you basically treat plugin files the same way as you already do with the DB.
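In Docker terms that is essentially a bind mount; a sketch of the idea, using the official WordPress image's document root (the image, port, and host path are just examples):

# Keep plugins on the host and mount the folder into the container
docker run -d -p 8080:80 \
  -v "$PWD/plugins":/var/www/html/wp-content/plugins \
  wordpress:latest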

Is it possible to back up a Docker container with all the volumes / data / state?

I'm new to Docker and was wondering if it was possible to set the following up:
I have my personal computer on which I'm working on my WordPress site via a Dockerfile. All is well and the data is persistent.
What I'd like to do is be able to save that work on Docker Hub or possibly GitHub (I assume the updated images would be backed up on my Docker Hub) and then work on a totally different computer, picking up where I left off.
Is that possible?
Generally you should be able to set up your Docker containers such that there is no persistent state inside the container at all; you can freely delete and recreate the container without losing data. The best and easiest case of this is a container that just depends on some external database, in which case you don’t need to do anything.
If you have something like a WordPress installation with local customizations, or something that stores persistent data in the filesystem, you should use the docker run -v option or the Docker Compose volumes: option to inject parts of the host filesystem into the container. Those volumes then need to be backed up (and while the Docker documentation endorses named volumes, if you use host directories your normal backup solution will work fine).
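As a sketch of that (the volume name, container name, and port are illustrative, not from the question):

# A named volume keeps the WordPress document root outside the container's lifecycle
docker volume create wp_data
docker run -d --name wp -p 8080:80 -v wp_data:/var/www/html wordpress:latest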
In short, I’d recommend:
Build a custom image for your application, and check the Dockerfile and any supporting artifacts into source control. They don’t need to be separately backed up; even if you lose your image you can docker build again.
Inject customizations using bind mounts, and check those customizations into source control. They don’t need to be separately backed up.
Store mutable data using volumes or bind mounts, and back these up normally (see the sketch after this list).
Containers are disposable. You don’t need to back up a container per se, you should always be able to recreate it from the artifacts above.
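For the "back these up normally" step with a named volume, one common pattern is to tar the volume's contents from a throwaway container; a sketch, with the volume and archive names made up:

# Archive the wp_data volume into the current directory
docker run --rm -v wp_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/wp_data-backup.tar.gz -C /data .

# Restore it into a fresh volume on another machine
docker run --rm -v wp_data:/data -v "$PWD":/backup alpine \
  tar xzf /backup/wp_data-backup.tar.gz -C /data

Moving the archive between machines (or checking the Dockerfile into GitHub and pushing the image to Docker Hub) then lets you pick up where you left off.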

nexus-cli returns 404 with command nexus-cli image ls

I'm trying to remove unused Docker images from a Nexus repo using nexus-cli (with an automation script).
I have downloaded nexus-cli and configured Host, Repository, Username and Password. When I try to run the command
./nexus-cli image ls
it returns HTTP Code: 404.
Can anyone please help me with this, or suggest another way to remove unused Docker images?
Reference links: http://www.blog.labouardy.com/cleanup-old-docker-images-from-nexus-repository/
https://www.ivankrizsan.se/2016/06/09/create-a-private-docker-registry/#comment-739
NXRM3 has a "Purge unused docker manifests and images" task that may be helpful. It removes any items that are no longer marked as in use. Unfortunately, because Docker shares layers, this task often isn't much help if the layers are used elsewhere. If you manually remove the images, however, it should remove the layers that are no longer used.
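If nexus-cli keeps returning 404, a hedged alternative is NXRM3's own REST API (available in recent 3.x versions; the host, credentials, repository name, and tag below are placeholders): search for the components, then delete them by id.

# List docker components tagged "dev" in a given repository
curl -u admin:pass "https://nexus.example.com/service/rest/v1/search?repository=docker-hosted&format=docker&docker.imageTag=dev"

# Delete one of the returned components by its id
curl -u admin:pass -X DELETE "https://nexus.example.com/service/rest/v1/components/<component-id>"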
Ref: https://help.sonatype.com/display/NXRM3/System+Configuration#SystemConfiguration-TypesofTasksandWhentoUseThem
Also don't forget to run the "Compact blob store" task after you run any NXRM3 cleanup task. Otherwise, your files are just soft-deleted and no space is actually freed up.
You may also wish to watch https://issues.sonatype.org/browse/NEXUS-11435 and even provide feedback if you have specific usage scenarios.

How to sync 2 remotes using Phabricator Diffusion

I have to use 2 remotes for my repositories. For example:
One is my local git server (gitblit)
One is Github/bitbucket
Additionally, I have to use Phabricator to manage all this. So the workflow I am thinking of is:
I push changes to my local git server, and my friends push to GitHub. Phabricator observes the changes on the local git server and on GitHub and syncs each remote with the other's changes. I have tried the Mirror option, but it deleted the changes from one of the remotes, because that's what a mirror is supposed to do.
So I need to know a way to sync these 2 remotes using Phabricator.
Apart from creating a (read-only, as you discovered) mirror, Phabricator doesn't really have any ability to push to other servers. It assumes one of the following workflows:
Phabricator is the master copy of the repository - everyone pushes to Phabricator (Phabricator can push to mirrors in this scenario).
Some other server is the master copy of the repository - Phabricator will monitor the remote master and keep a read-only copy of the repository locally.
It might be possible to implement a repository merging task in Harbormaster, but you'll have to be prepared for frequent manual intervention in any workflow that has users pushing to different repositories and expecting automation to sync them together. This syncing task would probably be easier if you removed the Gitblit server from the equation and just used Phabricator locally.
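If you do end up scripting such a sync (in Harbormaster or just cron), the core of it is plain git rather than anything Phabricator-specific; a rough sketch, with the remote names and branch made up:

# Fetch from both remotes and push the merged result back to each of them
git fetch gitblit && git fetch github
git checkout master
git merge gitblit/master github/master   # octopus merge; stops on conflicts
git push gitblit master && git push github master

Any conflicting pushes will make the merge stop, which is exactly the manual intervention mentioned above.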
