Change docker repository class (local to virtual) - artifactory

We need more flexibility in our current local Docker repos (i.e. we want to be able to pull local and remote images from the same repo). Therefore, I'd like to rename our docker repo to docker-local whilst creating a new virtual docker repo called docker which includes docker-local. Is there a way to do this operation atomically?
I've read that renaming repositories is considered bad practice in Artifactory. Would renaming the repo in this case break anything? It's not really clear to me what the problems with renaming a repo are. Would the internal state become inconsistent?

It is indeed possible but, as you already mentioned, it is considered bad practice, as it could mess up the internal state of Artifactory if done incorrectly.
The better alternative would be to create a new repo called docker-local and move the artifacts from the current repo there (a move is much cheaper than a copy in terms of resources). After that you can delete the docker repo and recreate it as a virtual one.
Please be aware that while you're doing this, clients connecting to the repository won't be able to resolve their dependencies.
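A rough sketch of that sequence using Artifactory's REST API could look like the following. The base URL, credentials, and image path are placeholders, and the exact repository configuration JSON depends on your setup, so treat this as an outline rather than a ready-made script:
# 1. Create the new local Docker repository
curl -u admin:password -X PUT "https://myarturl/artifactory/api/repositories/docker-local" \
  -H "Content-Type: application/json" \
  -d '{"key":"docker-local","rclass":"local","packageType":"docker"}'
# 2. Move content from the old repo into docker-local (repeat per image path; append &dry=1 to preview first)
curl -u admin:password -X POST "https://myarturl/artifactory/api/move/docker/someimage?to=/docker-local/someimage"
# 3. Delete the old local repo and recreate "docker" as a virtual repo that includes docker-local
curl -u admin:password -X DELETE "https://myarturl/artifactory/api/repositories/docker"
curl -u admin:password -X PUT "https://myarturl/artifactory/api/repositories/docker" \
  -H "Content-Type: application/json" \
  -d '{"key":"docker","rclass":"virtual","packageType":"docker","repositories":["docker-local"]}'
The window between deleting the old repo and creating the virtual one is exactly the period mentioned above during which clients cannot resolve, so schedule it accordingly.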

Related

Practical Use of Artifactory Repositories

In the near future I will start using Artifactory in my project. I have been reading about local and remote repositories and I am a bit confused about their practical use. In general, as far as I understand:
Local repositories are for pushing and pulling artifacts. They have no connection to a remote repository (e.g. the npm registry at https://www.npmjs.com/).
Remote repositories are for pulling and caching artifacts on demand. They work only one way; it is not possible to push artifacts to them.
If I am right up to this point, then practically it means you only need a remote repository for npm if you do not develop npm modules but only use them to build your application. In contrast, if you need to both pull and push Docker container images, you need one local repository for pushing and pulling custom images and one remote repository for pulling official images.
Question #1
I am confused because our Artifactory admin created a local npm repository for our project. When I discussed the topic with him he told me that I need to first get packages from the internet to my PC and then push them to the Artifactory server. This does not make any sense to me because I have seen some remote repositories on the same server and what we need is only to pull packages from npm. Is there a point that I am missing?
Question #2
Are artifacts in the remote repository cache saved until intentionally deleted? Is there a default retention policy (e.g. delete packages older than 6 months)? I ask this because it is important to keep packages until a meteor hits the servers (per the company's archiving policy).
Question #3
We will need to get official Docker images and customize them for CI. It would be a bit hard to maintain one local repo for pulling and pushing custom images and one remote repo for pulling official images. Let's say I need to pull the official latest Ubuntu image, modify it, push it, and finally pull the custom image back. In this case it would be pulled through the remote repository, pushed to the local repo, and pulled again from the local repo. Is it possible to use virtual repositories to do this seamlessly as one repo?
Question #1 This does not make any sense to me because I have seen some remote repositories on the same server and what we need is only to pull packages from npm. Is there a point that I miss?
Generally, you would want to use a remote repository for this. You would then point your client to this remote repository and JFrog Artifactory would grab them from the remote site and cache them locally, as needed.
In some very secure environments, corporate policies do not even allow this (the servers may not even be connected to the internet); instead, teams manually download, vet, and then upload those third-party libraries to a local repository. I don't think that is your case, and your admin may just not understand the intended usage of these repository types.
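For illustration, once a remote (or virtual) npm repository exists in Artifactory, pointing a client at it is just a registry setting; the repository key npm-remote and the host below are placeholders:
# Point npm at the Artifactory repository instead of the public registry
npm config set registry https://myarturl/artifactory/api/npm/npm-remote/
# Installs are now resolved through Artifactory, which fetches and caches packages on demand
npm install express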
Question #2 Are artifacts at remote repository cache saved until intentionally deleted? Is there a default retention policy?
They will not be deleted unless you actively configure it to do so.
For some repo types there are built-in retention mechanisms, such as a maximum number of snapshots or tags, but not for all of them, and even where they exist they must be actively turned on. Different organizations have different policies for how long artifacts must be kept. There are many ways to clean up old artifacts, but ultimately it will depend on your own requirements.
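As one illustration of how such a cleanup could be driven (this is not a built-in policy), Artifactory's AQL search can list cached artifacts that have not been downloaded for a given period; what you do with that list is then up to your own retention rules. The repository key and time window below are made up:
# List items in the remote repo's cache that nobody has downloaded in the last 6 months
curl -u admin:password -X POST "https://myarturl/artifactory/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d 'items.find({"repo":"docker-remote-cache","stat.downloaded":{"$before":"6mo"}})'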
Question #3 Is it possible to use virtual repositories to do this seamlessly as one repo?
A virtual repository lets you aggregate your local and remote repositories so they appear as a single source. So you can do something like:
docker pull myarturl/docker/someimage:sometag
... docker build ...
docker push myarturl/docker/someimage:sometag-my-modified-version
docker pull myarturl/docker/someimage:sometag-my-modified-version
It is also security-aware so if the user only has access to the local stuff and not the remote stuff, they will only be able to access the local stuff even though they are using the virtual repository that contains both of them.
That said, I don't see why it would be any harder to explicitly use different repositories:
docker pull myarturl/docker-remote/someimage:sometag
... docker build ...
docker push myarturl/docker-local/someimage:sometag-my-modified-version
docker pull myarturl/docker-local/someimage:sometag-my-modified-version
This also has the added advantage that you know they can only pull your modified version of the image and not the remote (though you can also accomplish that by creating the correct permissions).

Connect one Artifactory to another Artifactory

Our setup includes a company-wide Artifactory that holds in-house-built artifacts as well as goes out and fetches publicly available artifacts. I'm trying to set up a local Artifactory at our location that would fetch publicly available artifacts through the regular internet, but would connect to the company-wide Artifactory for our in-house-built artifacts. Is this possible?
In my local Artifactory setup, I put the company-wide Artifactory URL as a Remote Repository. I can hit the Test button and it tells me that it successfully connected. However, when I go to download an artifact it does not work. I should mention that publicly available artifacts can be fetched through my local Artifactory, so at least I can get to jcenter.bintray.
Can one Artifactory be connected to another Artifactory? If yes, is there a way to test whether this connection works?
I don't think we would be using all the contents of the company-wide Artifactory, so I don't want to do an export and import to the local instance or set up replication. I would prefer to fetch on demand. Is this possible?
Edit: Thanks to @DarthFennec for pointing me to Smart Remote Repositories, I have solved my problem. For others who have the same problem:
Please follow the steps mentioned on the previously mentioned page to set up the Smart Remote Repository. In my case Artifactory did not detect that the remote was another instance of Artifactory and did not give me any options to set, but I was not interested in these anyway.
Note: you can always click the Test button to make sure that your connection to the Remote Repository works.
Next, go to Admin -> Virtual Repositories, select your Repository Key, and move your Smart Repository from the Available Repositories into the Selected Repositories. Click Save & Finish at the bottom and you should be good to go.
I'm not sure exactly what your problem ended up being, but if you want to remote one Artifactory repository from another, it should be a smart remote repository. This is when Artifactory detects that a remote is pointing at another Artifactory, and it enables a number of extra features, like download statistics, property replication, and remote browsing.
An important thing to keep in mind when configuring a smart remote repository is that depending on the package type, you might need to point the remote at <artifactory>/api/<type>/<repo>, rather than just <artifactory>/<repo>. This is the case for Bower, Chef, CocoaPods, Docker, Go, NuGet, Npm, Php Composer, Puppet, Pypi, RubyGems, and Vagrant repositories. Other repository types should use the standard <artifactory>/<repo> URL.
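For example, when pointing a local Artifactory at Docker, npm, or Maven repositories on the company-wide instance, the remote URLs would look something like the following (the hostname and repository keys are placeholders):
# Docker repository on the remote Artifactory (note the /api/docker/ segment)
https://company-artifactory.example.com/artifactory/api/docker/docker-local
# npm repository on the remote Artifactory
https://company-artifactory.example.com/artifactory/api/npm/npm-local
# Maven and most other repository types use the plain form
https://company-artifactory.example.com/artifactory/libs-release-local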

Using GIT to keep operational systems up to date

I'm not a Git novice, but not a guru either, and I have a question. We want to create remote repos that appear as folders within our network. The folders will actually contain a large legacy ASP app running in production.
Then we want to be able to make local changes and be able to push commits to these networked repos and thus dynamically update the production application.
We already have the repo on Github and our developers fork that and work locally (we use SmartGit for most day to day stuff).
However (because the app is huge and legacy) we have (prior to using Git) always had a process for copying changed files to the target systems (production, QA etc).
But it dawned on me that we may be able to treat the operational system "as" a repo that's checked out to master. Then (when suitably tested) we want to simply use SmartGit to do a "push to" the operational system and have the changes delivered that way.
I'm at the edge of my Git knowledge though and unsure if this is easy to do or risky.
We don't want to install Git on the operational machine (it's restricted to running Windows 2003 - yes, I know...), so we want to simply treat the remote system just like it was a local folder - with Git installed on our local machines.
Any tips or suggestions?
My tip: don't bother.
You can only push to bare repositories. These only contain the files normally residing in .git, with no working directory at all, so you cannot "run" them on the server. You would need to push to a bare repo on the server, and then clone/check out that bare repo into a non-bare repo on the server itself (which can be done in a post-receive hook inside Git). But as you said, you cannot even install Git on the server, so git push does nothing for you.
The second option would be to mount the server's filesystem on whatever staging/deployment machine you have, presumably one on which you can install Git. Then you can git push into a bare repo on that deployment machine, run Git hooks, and copy the newly pushed files onto your non-Git server filesystem.
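A minimal sketch of that second option, assuming the bare repository lives at /srv/git/app.git on the deployment machine and the server's share is mounted at /mnt/prod-share (both paths made up), would be a post-receive hook along these lines:
#!/bin/sh
# /srv/git/app.git/hooks/post-receive
# Check out the pushed master branch into a scratch work tree,
# then copy it onto the mounted server filesystem.
WORK_TREE=/tmp/app-deploy
mkdir -p "$WORK_TREE"
GIT_WORK_TREE="$WORK_TREE" git checkout -f master
rsync -a "$WORK_TREE"/ /mnt/prod-share/app/
Whether to also delete files that were removed from the repository depends on what else lives in the production folder, so tune the copy step to your situation.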
Third option would be to package everything up locally, make a tarball (or, I guess, zip-ball...) and just unpack that on the server.
So. Automated, continuous deployment => great idea. Using git => great idea. Directly using git push, not so much, mainly due to your constraints.

R Studio - Cloning local repository

I want to create a master repository on our server, from which I can clone a local version onto my computer.
I am using RStudio v0.98.994.
So far, this is what I have tried doing:
Create a folder for the master repository to live in. I do this using 'New Project' in RStudio, and tell it to make a Git repository.
I can then open up another new project, located on my C drive, and use RStudio to clone, by telling it to open an existing project and setting the URL to the location of the master project.
However, when I make changes and commit to my local repository (which works fine), I cannot push to the master repository; I get an error exactly as described in this question: git push fails: `refusing to update checked out branch: refs/heads/master`
So it appears that RStudio creates non-bare repositories?
Now I thought, well okay, I will use Git Bash to initialise the repository and then connect to that within RStudio.
I do so, but cannot then find a way to use that repository in RStudio.
I am very new to Git, so it is entirely probable that this is one of those 'read the instructions' questions, in which case I am very sorry - could someone possibly point me towards some guidance for this situation? I have spent the better part of a day googling around this error and haven't yet managed to pull together the pieces :( I also apologise; this doesn't feel like a very reproducible question.
It sounds like you are using Windows Git, with a setup on a local Windows machine (C: drive) and a server of some kind, mounted as the S: drive. There's a few things you should be aware of when doing this.
Shared Repositories
If you are intending for multiple people to share the same repository, you want to initialize a shared repository. See the --shared option in git-init for more details. Note that I'm not sure how having your repository on a Windows machine affects the sharing options. If you are just trying to keep your repository in two places, that makes things a lot easier.
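If sharing is the goal, the repository can be created with group-writable permissions from the start; a one-line sketch on a Unix-style filesystem (the path is hypothetical):
# Create a bare, group-shared repository that several users can push to
git init --bare --shared=group /srv/git/project.git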
Bare Repositories
Separate from the discussion of sharing is the discussion of bare repositories. If you don't intend to ever work with the files on the server (i.e. it's just going to be a place to push changes so they are safely stored), you could initialize a bare repository. A bare repository contains the database structure of Git, but does not have the actual files in the directory.
A standard Git repository is a directory with a hidden folder in it named .git. This .git folder contains all the various data structures that Git uses to track changes. A bare repository is essentially a folder containing only the contents of .git.
The good thing about a bare repository is that no one can work in the repository itself (since there is no working directory, just the database). This means that no one could log into S: and edit the repository themselves. Instead, they would have to clone the repository, then push their changes back to the origin. The GitGuys have a good article about why this is ideal.
Note that shared repos and bare repos are not dependent on each other or mutually exclusive. As a general practice, if you have a "server repo" from which you pull and to which you push, you should make it bare, regardless of whether the project is shared.
A Non-Shared Workflow
Since it's not clear whether you are sharing or not, and you're on a Windows environment, which I don't know much about from a sharing standpoint, I'm going to give you a simple example. Using Git Bash, you should be able to change directories to wherever on S: you keep your repositories. Then use git init with the --bare option, as described in the link above, to initialize a bare repository. Navigate to where you want your repository to live on C:, and then do git clone to get a working copy.
Add a README file or something else so you can do your initial commit, and then commit and do git push origin master to push your changes to the S: repository. Once all that is done, THEN initialize the RStudio Git project. RStudio should defer to your existing configuration, and things should hopefully work.
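Put together, the non-shared workflow above might look like this in Git Bash (the drive paths and repository name are examples):
# On the S: share: create the bare "master" repository
cd /s/repos
git init --bare myproject.git
# On the local C: drive: clone it, add a first file, commit, and push
cd /c/projects
git clone /s/repos/myproject.git
cd myproject
echo "My project" > README.md
git add README.md
git commit -m "Initial commit"
git push origin master
# Now open this folder as an existing project in RStudio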

Creating Git repositories from live [WordPress, Magento] sites, ignoring core PHP files, but being able to clone the repository on multiple local servers

So I have a lot of websites, 150+. Starting with the bigger sites, I am beginning to set up Git repositories to track the changes to these sites. I can create a local server version of a site and set up the repository, and everything runs fine.
I have set up a .gitignore file to ignore all the core files and plugin folders etc. Again this is fine, the files are still on my local machine and have been deleted from my repository.
What I want to do is set up this repository on multiple computers (for my colleagues, who do less development work but will still need access to the repository). I imagine cloning won't work, as all the core files are no longer in the repository. How do I get around this?
Thanks all!
EDIT:
I should have mentioned we're using BitBucket to act as a central repository if that makes any difference.
There are a few ways you can do that.
You can set up the local environment in one location and keep the Git repository in another location.
After cloning or pulling the repository, you can then run a script which copies the files from the repository to the local environment.
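A sketch of that copy step, assuming the clone lives in ~/repos/mysite and the live document root is /var/www/mysite (both paths made up), could be as simple as:
#!/bin/sh
# Pull the latest changes, then copy the tracked files over the live site.
# The ignored core/plugin files are not in the repository, so they are left untouched.
cd ~/repos/mysite
git pull origin master
rsync -a --exclude='.git' ~/repos/mysite/ /var/www/mysite/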
You can also add all files to the repository, ignoring only var/, .htaccess, app/etc/local.xml and .gitignore. Bear in mind that you can break a website by changing files which should not have been changed; debugging then becomes a nightmare. With everything in Git, you know instantly what went wrong.
We've managed to set up a great workflow using beanstalk.com. They've got an option to share repositories (like GitHub) and then deploy them to different servers through SSH. Works like a charm - highly recommended.

Resources