Artifactory Pro - Legal way for testing Upgrade

I want to upgrade my Artifactory Pro setup. My plan is to clone the database, create a similar instance, set up Artifactory Pro at the current production version, upgrade that test instance to the latest version, and run some tests (uploading artifacts, SAML, etc.). If all tests pass, I will upgrade Artifactory Pro on the production instance.
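For concreteness, the flow I have in mind looks roughly like this (assuming a PostgreSQL backend; all hostnames, database names, and paths here are made up):

# 1. Clone the production database into the test instance.
pg_dump -h prod-db.example.com -U artifactory artifactory > artifactory.dump
psql -h test-db.example.com -U artifactory -d artifactory -f artifactory.dump

# 2. Install the current production version on the test host, point it at the
#    clone, upgrade it in place, and run the smoke tests (uploads, SAML, ...).
# 3. Only after those tests pass, repeat the same upgrade on production.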
How can I be sure that I am doing this legally? Do I need to request a trial license, or can I use my "production" key for a while?
Thanks in advance.

I recommend reaching out to JFrog, as having one license running on two Artifactory instances simultaneously is against the EULA (as it probably is to request a trial every time you need to do something outside of prod). If this is production, though, you should consider having some sort of DR/test/dev/staging/QA setup; artifact availability is important.

Related

Upgrading Artifactory setup with Remote Repositories

I have an Artifactory server with a bunch of remote repositories.
We are planning to upgrade from 5.11.0 to 5.11.6 to take advantage of a security patch in that version.
My questions are:
Do all repositories need to be on exactly the same version?
Is there anything else I need to think about when upgrading multiple connected repositories? (There is nothing specific about this in the manual.)
Do I need to do a system-level export just on the primary server, or should I be doing it on all of the remote repository servers?
Lastly, our repositories are huge, and a full system export as a backup will take too long:
Is it enough to just take the config files/dirs?
Do I get just the config files/dirs by checking "Exclude Content"?
If you have an Artifactory instance that points to other Artifactory instances via smart remote repositories, you will not have to upgrade all of the instances, as they will be able to communicate with each other even if they are not on the same version. That said, it is always recommended to run the latest version of Artifactory on all of your instances in order to enjoy the latest features and bug fixes and the best compatibility between instances. You can find further information about the upgrade process in this wiki page.
In addition, it is always recommended to keep backups of your Artifactory instance, especially when attempting an upgrade. You may use the built-in backup mechanism, or you may manually back up your filestore (by default located in $ARTIFACTORY_HOME/data/filestore) and take database snapshots.
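As a minimal sketch of that manual approach (assuming a default-layout installation and a PostgreSQL backend; the hostname and credentials are placeholders):

# Archive the filestore and snapshot the database before upgrading.
tar czf filestore-backup.tar.gz "$ARTIFACTORY_HOME/data/filestore"
pg_dump -h db.example.com -U artifactory artifactory > artifactory-db-snapshot.sql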
What do you mean by "do all repositories need to be on exactly the same version"? Are you asking about Artifactory instances? Artifactory HA nodes?
Regarding the full system export:
https://www.jfrog.com/confluence/display/RTF/Managing+Backups
https://jfrog.com/knowledge-base/how-should-we-backup-our-data-when-we-have-1tb-of-files/
For more info, you might want to contact JFrog's support.

Dependency resolution against local Artifactory takes very long

I have an Artifactory Pro server (without support) installed on my local network.
One major use case for this Artifactory was to use it as a local cache for remote artifacts from, e.g., the repo1 Maven repository or the Lightbend Ivy2 repository. The hope was that I could speed up resolution of dependencies hosted on repo1 by caching them on my local Artifactory.
I am pretty sure my development machine is configured correctly to resolve artifacts exclusively against my local Artifactory.
However, every once in a while (suspiciously close to the interval configured as Metadata Retrieval Cache Period (Sec) in the Advanced tab of the remote repository settings), the resolution of dependencies originally hosted on repo1 takes far longer than usual.
I suspect that at these times Artifactory refreshes the artifact metadata (pom, ivy.xml) of remote artifacts. But this takes far longer than I would expect: a simple pom or ivy.xml download should take a few milliseconds, not several seconds.
I am currently requesting root access to the server from ops to attempt a tcpdump, which may take time...
So my question is:
Does anyone have an idea what might actually be happening that takes several seconds per dependency to refresh the metadata files of a remote repository, or am I looking in the wrong direction?
Update
My Artifactory version is
Artifactory Professional 5.1.3 rev 50019
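In the meantime, one way to narrow this down is to time a metadata request directly through the remote repository cache and see whether the multi-second delay reproduces once the cache period has expired. The hostname, repository key, and artifact path below are made up:

curl -o /dev/null -s -w 'total: %{time_total}s\n' \
  https://artifactory.example.com/artifactory/repo1-cache/junit/junit/4.12/junit-4.12.pom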
We had a similar issue, but with npm repos, where the metadata recalculation was taking quite some time. Eventually we learned that it was a bug in Artifactory that was resolved in version 6.1.0. It's worth checking the Artifactory JIRAs for any such bugs. Hope this helps!
Artifactory Jira Link

Setting up a local npm cache using Artifactory

I am using Artifactory to set up a local npm registry cache.
I did
npm config set registry https://example.com/artifactory/api/npm/npm-virtual/
and have Jenkins run
npm install
Unfortunately, there doesn't seem to be any difference between using Artifactory and using the normal npm registry (npm install takes the same amount of time with both approaches).
Am I doing something wrong?
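Before comparing timings, it may be worth confirming that npm is actually resolving through Artifactory, for example:

# Should print the Artifactory URL configured above.
npm config get registry
# Verbose logs show which host each package is fetched from.
npm install --verbose 2>&1 | grep "http fetch"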
The difference, of course, is not in the install time. Most of the install time is consumed by the network, so even if one of the solutions (local registry or Artifactory) is faster than the other, the difference won't be noticeable.
Here's a short, but not complete list of benefits of Artifactory over the simple local registry:
Artifactory works for a very broad set of technologies, not only npm, allowing you to use a single tool for all your development and operational binaries (including Vagrant, Docker, and whatnot).
Artifactory supports multiple repositories, allowing you to control access, visibility and build promotion pipelines on top of them. That's the correct way to manage binaries.
Artifactory is priced by server, not by user, allowing more people in the organization to use it without additional cost.
I am with JFrog, the company behind Bintray and Artifactory; see my profile for details and links.

Trouble with WordPress local development and deployments

The Setup:
I'm setting up a WordPress-powered application using Elastic Beanstalk from Amazon Web Services. All development is being done locally under a MAMP apache2/php5 server environment, with a Git repository controlling the entire application root.
Deployment Workflow:
After committing any code changes (edits, new plugins, etc.) to the repo, the application is deployed using the AWS EB CLI's eb deploy command, which pushes the latest version out to any running EC2 instances managed by Elastic Beanstalk.
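For reference, that loop amounts to something like the following (the environment name is an assumption):

git add -A
git commit -m "Install and configure plugin"
eb deploy my-wordpress-env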
My Issue:
Sometimes the code changes aren't exactly syncing up between my development and production environments, and I'm not sure how to overcome this, especially when trying to install and set up plugins like W3 Total Cache or WP Super Cache.
Since my local environment doesn't have things like a memcached server installed, but my production environment does (ElastiCache), I'm unable to save the proper settings file and deploy it for use in my production environment. These plugins won't let me select the needed services because they see them as unavailable...
It seems I can only get W3 Total Cache to work if I install it directly onto a live production environment, which seems like a bad idea.
Given the above:
Am I going about deployments the wrong way?
Should plugins like W3 Total Cache be installed and configured on local development environments and pushed to production environments?
I cannot comment on the issues specific to Elastic Beanstalk, but based on experience I can make a suggestion about the second part of your issue statement:
You are better off running a development environment that mirrors your production environment as closely as possible. I suggest that you convert from MAMP to a VM environment like VirtualBox. You might want to check out puphpet.com for help in getting it set up. It requires some startup effort, but gives you an environment similar to or the same as your production servers. For example, you could run memcached yourself so you could actually test it with W3 Total Cache.
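As a rough sketch of that last point, inside a Debian/Ubuntu-based VM (package names are assumptions; adjust for your image):

sudo apt-get install -y memcached php-memcached
sudo service memcached start
# W3 Total Cache can then be pointed at 127.0.0.1:11211 in its settings.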
As for your second question, just installing a plugin in the production environment without testing it beforehand has obvious risks (but then again clients do that all the time). I would prefer to test first. To a certain extent it probably depends on how critical it is if the site experiences downtime or weirdness.
I would suggest you create another environment on Beanstalk.
It's easy, fast and more reliable than a VM in your case because it will allow you to test your deployment process as well.
I usually have three environments for every website. Each environment is on its own branch. If your configuration differs between environments (URL and database access, for example), just store your wp-config and other config files in S3 (you may not want production passwords in your Git repository); through ebextensions you can download them into your website automatically, as sketched below.
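A sketch of the download step such an ebextensions hook would effectively run at deploy time (the bucket name and paths are hypothetical):

aws s3 cp s3://my-config-bucket/production/wp-config.php /var/app/current/wp-config.php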
I use AWS Beanstalk that way for 16 websites, some of them WordPress ones, all with autoscaling and able to handle thousands of simultaneous users.
Don't hesitate to ask me for further details.

Guide to Repository Layout and Build Promotion with Artifactory

Usual story: new to Artifactory and looking for a jump start.
Can anyone point me at a decent how-to post for using Artifactory (free version) with Jenkins in a deployment pipeline?
I figure I'm likely to want to:
set up several repos for dev through production (any standards for this?)
have Jenkins publish artifacts to the first repo using the Artifactory plugin, limiting the number of builds kept in Artifactory
promote builds from one repo to the next as releases move to the next environment, again deleting older builds
I just need a good guide/example to get started...
Did you check the User Guide? It covers all your questions perfectly well. Here are the relevant sections:
Creating repositories (regarding standards: the preconfigured repos reflect them).
Working with Jenkins
Jenkins Release Management
Keep in mind that all the promotion functionality is available in the Pro version only. That includes both the simple move operation (not recommended for promotion pipelines) and the extremely powerful release management based on build integration with the Jenkins Artifactory plugin.
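For reference, the simple move can be scripted against Artifactory's REST API move endpoint; the instance URL, repository keys, artifact path, and credentials below are all placeholders:

curl -u admin:password -X POST \
  "https://artifactory.example.com/artifactory/api/move/dev-local/com/acme/app/1.0?to=/qa-local/com/acme/app/1.0"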
