Unique link to an artifact deployed to JFrog Artifactory - artifactory

I'm a new Artifactory user. My company just set up Artifactory v6.5.2 and I'm looking to use it to manage software deployed for our production team. What I need is a download link, documented in our product management system, that points directly to the exact file that was deployed for Production to use. I was anticipating it would look like this:
https://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v1_b30b890becfb4a02510ed12a7283c676.tgz
I'm not seeing that Artifactory can do this for me. What I see is I can do this:
http://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v1.tgz
However, if another artifact is deployed with the same name, that is not reflected in the download link, which means the link could return different contents over time.
Am I missing something, or am I asking Artifactory to do something it's not intended to do?

Artifactory builds the URL from the filename and the path (as any web server would do). Here are two options to achieve what you need:
Name the artifacts uniquely (timestamps are the simplest). Instead of naming the artifact mypkgfile_v1.tgz, name it mypkgfile_v1-1553038888.tgz (I used the Unix Epoch time, but anything unique enough will do).
This one is more involved but doesn't require you to change the naming scheme.
First, configure a custom repository layout to match your versioning.
Once you've done that, every time you deploy an artifact, attach a unique identifier to it as a property during deployment (using matrix params, for example), deploying your artifact as mypkgfile_v1;timestamp=1553038888.
On retrieval, use the [RELEASE] token for the latest release together with the timestamp you need as a matrix param: mypkgfile_v[RELEASE];timestamp=1553038888
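As a rough sketch of option 2 with curl, reusing the host, repo and package names from the question (credentials and the exact layout are assumptions, and property-based resolution of the matrix param may require a Pro license), the deploy would attach the timestamp as a property:
curl -u myuser:mypassword -T mypkgfile_v1.tgz "https://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v1.tgz;timestamp=1553038888"
and, once a matching custom layout is configured, a retrieval along these lines should resolve the latest release carrying that property:
curl -u myuser:mypassword -O "https://artifactory.mycompany.com/artifactory/myrepo/mymodule/mypkgfile_v[RELEASE].tgz;timestamp=1553038888"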

Related

Link to latest master build

I have a product for which I store software builds in Artifactory.
I name the software artifacts like this, so it is possible to see what a downloaded file contains: system-pcm33-base-v0.0.0.0_65_ga03970a.raucb
This also makes it possible to download directly via a URL, without using the jfrog CLI:
https://artifactory.deif.com/ui/native/amc-sw/pcm33/master/system-pcm33-base-v0.0.0.0_65_ga03970a.raucb
Now I would like a quick way to download the latest master build. To do this, my build creates a symlink:
system-pcm33-base.raucb -> system-pcm33-base-v0.0.0.0_65_ga03970a.raucb
I can also push this symlink to Artifactory, but it only works from the GUI and via the jfrog CLI. I do not get this symlink URL as I had hoped:
https://artifactory.deif.com/ui/native/amc-sw/pcm33/master/system-pcm33-base.raucb
Is there a way to do this?
It is of course possible to upload the file twice under two different names, and thus update system-pcm33-base.raucb on every build, but that is a bit heavier.
Artifactory doesn't handle symbolic links the way the Linux file system does.
Based on the described use case, you can upload the file twice (as suggested) - first with the actual version, then again as the latest. The important part is that when you upload the second time, as the latest, you use Checksum Deploy.
Artifactory has checksum-based storage, which means that every file is actually stored only once, even if it is uploaded to different target paths. To tell Artifactory to create or update a path without actually sending the binary, you can send the checksum of the binary, and Artifactory will link the path to the binary with that checksum. This operation is quite cheap.
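As a rough sketch (the /artifactory API context, repository path and credentials are assumptions based on the URLs in the question), a Checksum Deploy of the "latest" path could look like this:
sha1=$(sha1sum system-pcm33-base-v0.0.0.0_65_ga03970a.raucb | cut -d' ' -f1)
curl -u myuser:mypassword -X PUT -H "X-Checksum-Deploy: true" -H "X-Checksum-Sha1: $sha1" "https://artifactory.deif.com/artifactory/amc-sw/pcm33/master/system-pcm33-base.raucb"
No binary is transferred; Artifactory simply links the new path to the existing binary with that SHA-1.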
Another possible approach is to define and use a custom Repository Layout. This way, in order to download the latest version of the file, you can use the [RELEASE] placeholder. The actual latest version will be automatically resolved from the version value extracted based on the layout.
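For example, with a layout whose version token matches this file naming, a download along these lines should resolve to the newest release (hedged: the exact URL depends on your layout, and latest-version resolution may require a Pro license):
curl -u myuser:mypassword -O "https://artifactory.deif.com/artifactory/amc-sw/pcm33/master/system-pcm33-base-v[RELEASE].raucb"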
See also:
How to create simple versioning custom layout in Artifactory
How to find the latest artifact version based on layout?
Thanks to yinon explaining that checksum-based storage is used, I found this simple solution:
jf rt copy --flat amc-sw/pcm33/master/system-pcm33-base-v0.0.0.0_65_ga03970a.raucb amc-sw/pcm33/master/system-pcm33-base.raucb
This copies ALL the properties, but then a download query would return two files, so a property has to be changed:
jf rt sp amc-sw/pcm33/master/system-pcm33-base.raucb artifact=last_bsp
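A consumer can then fetch only the aliased "latest" file by that property, for example with the JFrog CLI (the property value last_bsp is simply the marker set above):
jf rt download "amc-sw/pcm33/master/*.raucb" --props "artifact=last_bsp" --flat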

When migrating from an old Artifactory instance to a new one, what is the point of copying $ARTIFACTORY_HOME/data/filestore?

Artifactory recommends the steps outlined here when moving from an old Artifactory server to a new one: https://jfrog.com/knowledge-base/what-is-the-best-way-to-migrate-a-large-artifactory-instance-with-minimal-downtime/
Under both methods it says that you're supposed to copy over $ARTIFACTORY_HOME/data/filestore, but then you just go ahead and export the old data and import it into the new instance, and in the first method you also rsync the files. It seems like you're doing the exact same thing three times in a row. JFrog doesn't really explain why each of these steps is necessary, and I don't understand what each does differently that cannot be done by the others.
When migrating an Artifactory instance, we need to take two things into consideration:
Artifactory Database - Contains the information about the binaries, configurations, security information (users, groups, permission targets, etc)
Artifactory Filestore - Contains all the binaries
Regardless of your question, I would like to add that, from my experience, in the case of a big filestore (500GB+) it is recommended to use a skeleton export (export the database only, without the filestore; this can be done by checking "Exclude Content" in Export System) and to copy the filestore with the help of a 3rd-party tool such as rsync.
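As a rough example of that filestore copy (paths assume the $ARTIFACTORY_HOME layout referenced in the question; newhost is a placeholder for the new server):
rsync -avH --progress "$ARTIFACTORY_HOME/data/filestore/" newhost:"$ARTIFACTORY_HOME/data/filestore/"
Running the same rsync again just before the cutover only transfers the binaries added since the previous run, which keeps the final downtime short.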
I hope this clarifies further.
The main purpose of this article is to provide a somewhat faster migration compared to a simple full export & import.
The idea of both methods is to select "Exclude Content". The content we choose to exclude is exactly what is stored in $ARTIFACTORY_HOME/data/filestore/.
The difference between the methods is that Method #1 involves some downtime, as you will have to shut down Artifactory at a certain point, sync the diffs, and start the new instance.
Method #2, on the other hand, involves a somewhat more complex process that uses in-app replication to sync the diffs.
Hope that makes more sense.

Artifactory: symlink or aliasing an Artifact URL?

I realize that the Artifactory support team reads these SO posts, so my question is either a regular question or a feature request.
I want to programmatically update various artifacts and when my operations complete (copies, writes, moves, deletes), create an alias/symlink to the new/updated artifact URLs.
For example, I would like to have a "latest" link which always points to the latest build for a number of different artifact types: Java, yum, Python, and generic binaries, that is, native executables that do not use the pypi/Maven/yum toolsets.
I don't see a way to do this and if that is the case, I'd like to request this feature in a new version of Artifactory.

Using Jenkins to Deploy to Production Server

I have 3 stages (dev / staging / production). I've successfully set up publishing for each, so that the code will be deployed, using msbuild, to the correct location, with the correct web configs transformed - all within Jenkins.
The problem I'm having is that I don't know how to deploy the code to staging from what was built on dev (and to production from staging). I'm currently using SVN as the source control, so I think I would need to somehow save the latest revision number dev has built and somehow tell Jenkins to build/deploy staging based on that number?
Is there a way to do this, or a better alternative?
Any help would be appreciated.
Edit: I decided to use the save-the-revision-number method, which passes a file containing the revision number to the next job - to do this, I followed this answer:
How to promote a specific build number from another job in Jenkins?
It explains how to copy an artifact from one job to another using the promotion plugin. For the artifact itself, I added an "Execute Windows batch command" build step after the main build with:
echo DEV_ENVIRONMENT_CORE_REVISION:%SVN_REVISION%>env.properties
Then in the staging job, following the guide above, I copied that file and used the EnvInject plugin to read it and set an environment variable, which can then be used as a parameter in the SVN Repository URL.
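To make this concrete, here is a sketch of the two pieces involved (svn.mycompany.com and myproject are placeholders; the variable name comes from the batch step above, and the variable must be injected early enough, e.g. via EnvInject's "Prepare an environment for the run", for the SCM checkout to see it):
DEV_ENVIRONMENT_CORE_REVISION:12345
https://svn.mycompany.com/myproject/trunk@${DEV_ENVIRONMENT_CORE_REVISION}
The first line is what env.properties looks like after a dev build; the second is the staging job's SVN Repository URL, pinning the checkout to that exact revision.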
You should be able to identify the changeset number that was built in DEV and manually pass that changeset to the Jenkins build to pull the same changeset from SVN. Obviously that makes your deployment more manual. Maybe you can set up Jenkins to publish the changeset number to a file and then have the later environment's build read that file for the changeset number.
We used to use this model as well and it was always complex. Eventually we moved to a build-once, deploy-many-times model using WebDeploy. This has made the process much simpler. Check it out - http://www.dotnetcatch.com/2016/04/16/msbuild-once-msdeploy-many-times/

How do permissions on a PlasticSCM repository work in a DVCS scenario

So I've been working on a rather large project, using PlasticSCM as my VCS. I use it with a DVCS model, but so far it's pretty much just been me syncing between my office machine and home.
Now we're getting other people involved in the project, and what I would like to do is restrict the other developers to specific branches so that only I can merge branches into /main.
So I went to my local repository and made the permissions changes (that part's pretty straightforward). But how does this work with the other developers? When they sync up, are the permissions replicated in their local repositories? If they attempt to merge into /main on their local repository, is that allowed, and do they then get an error when they attempt to push the changes to my repository?
This is my first foray into DVCS so I'm not quite sure how this kind of thing works.
Classic DVCSs (Mercurial, Git) don't include ACLs, meaning a clone wouldn't keep any ACL restriction.
This is usually maintained through hooks on the original repo (meaning you might be able to modify the wrong branch on a cloned repo, but you wouldn't be able to push it back to the original repo).
As the security page mentions, this isn't the case for PlasticSCM: a clone should retain the ACLs (caveat below) set on an object, which inherits those ACLs through two realms: the file system hierarchy (directory, subdirectories, files) and the repository object hierarchy.
The caveat in a DVCS setting is that there must be a mechanism in place to translate users and groups from one site to another.
The Plastic replication system supports three different translation modes:
Copy mode: this is the default behaviour. The security IDs are simply copied between repositories on replication. It is only valid when the servers hosting the different repositories involved work in the same authentication mode.
Name mode: translation between security identifiers is done based on name. For example, suppose user daniel has to be translated by name from repA to repB. At repB the Plastic server will try to locate a user named daniel and will introduce its LDAP SID into the table if required.
Translation table: this also performs a translation based on name, but driven by a table. The table, specified by the user, tells the destination server how to match names: how a source user or group name has to be converted into a destination name.
Note: a translation table is just a plain text file with two names per line separated by a semi-colon ";". The name on the left indicates the user or group to be translated (source) and the one on the right is the destination.
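A minimal translation table might look like this (the names are hypothetical; the left side is the source user or group, the right side is the destination):
daniel;DOMAIN\daniel
developers;DOMAIN\DevTeam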
