Change storage directory in Artifactory - artifactory

I have just installed Artifactory and I need to get a company-wide Ivy repository up and running.
For disaster-recovery purposes, I need Artifactory to store its data on a RAID-1 file system mounted at /srv (where the MySQL data files are also stored). I would prefer not to use blob storage, so how can I tell Artifactory to store all of its data in a directory other than the default?
System info: I run SLES 11 and I have installed Artifactory from RPM.

If you have Artifactory 4.6 or greater, you can create a $ARTIFACTORY_HOME/etc/binarystore.xml config file, e.g. /var/opt/jfrog/artifactory/etc/binarystore.xml.
The following config would put the artifacts under /data/binaries:
<config version="v1">
<chain template="file-system"> <!-- Use the "file-system" template -->
</chain>
<provider id="file-system" type="file-system"> <!-- Modify the "file-system" binary provider -->
<fileStoreDir>/data/binaries</fileStoreDir> <!-- Override the <fileStoreDir> attribute -->
</provider>
</config>
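After creating the file, make sure the target directory exists and is writable by the user Artifactory runs as, then restart the service so the new chain is picked up. A minimal sketch, assuming the RPM install's artifactory service user and init-script name (adjust to your setup):

# assumption: the RPM install runs Artifactory as user "artifactory" with an init script of the same name
mkdir -p /data/binaries
chown -R artifactory:artifactory /data/binaries
service artifactory restart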

Checksum-based storage is one of the biggest advantages of Artifactory. It gives much better performance and deduplication, and it enables upload optimization, replication optimization, and free copy/move of artifacts. Checksum-based blob storage is by far the right way to store binaries.
The location of the artifact storage can be changed to suit your needs by mapping $ARTIFACTORY_HOME/data to the desired mount (see the sketch below).
For disaster recovery we recommend setting up active/passive synchronization or an active/active cluster. Also, the Artifactory backup dumps the files in the standard directory-structure format, and the location of the backup can be configured.
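If you want to keep the default checksum-based filestore but place it on the RAID-1 mount, one common approach is to move the data directory and symlink it back. A minimal sketch, assuming the RPM layout (/var/opt/jfrog/artifactory) and the artifactory service user; the paths are examples only:

# stop the service before touching the data directory
service artifactory stop
mv /var/opt/jfrog/artifactory/data /srv/artifactory-data
ln -s /srv/artifactory-data /var/opt/jfrog/artifactory/data
chown -h artifactory:artifactory /var/opt/jfrog/artifactory/data
service artifactory start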

Related

Artifactory migration from 4.0.2 to 7

I migrated from Artifactory 4.0.2 to the latest 7.x using method 1 suggested by https://jfrog.com/knowledge-base/what-is-the-best-way-to-migrate-a-large-artifactory-instance-with-minimal-downtime/.
Now, browsing the new repository, I see all of my repository definitions as well as all metadata; however, I don't see any artifact content (e.g. jar files).
I copied all filestore content from the old instance to the new one.
What did I miss?
Thanks
From the scenario you've described, it appears to be an issue with the storage (filestore) configuration/connection.
What do we need to check next?
If you have updated the filestore path as part of the migration, update that path in the storage configuration file (binarystore.xml) of the new Artifactory instance. For example, if you moved the data from one mount to another for the new Artifactory 7.x installation, this step tells Artifactory exactly where to look for the binaries (see the example below).
The file lives under the $JFROG_HOME/etc/artifactory/ directory in Artifactory 7.x.
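For instance, if the copied filestore now sits on a new mount, a binarystore.xml along these lines (reusing the file-system template shown earlier; /new-mount/filestore is a placeholder path) tells Artifactory 7.x where to look:

<config version="v1">
    <chain template="file-system"/>
    <provider id="file-system" type="file-system">
        <fileStoreDir>/new-mount/filestore</fileStoreDir>
    </provider>
</config>

Restart Artifactory after changing the file so the new storage configuration is loaded.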

How to migrate all my config and content from Enterprise Antmedia version to another Enterprise Antmedia?

I need a step-by-step guide for these questions:
(1) How to migrate my content from the Free Antmedia version to Enterprise Antmedia?
(2) How to migrate all my config and content from one Enterprise Antmedia installation to another? (for example, if I change my datacenter, I will need to install and configure another host)
(3) How to back up my config and content, and how to recover? Will it be the same as the steps for question (2)?
Please describe every step in detail (what to copy, in which order to install and replace files, which permissions to fix, etc.).
Live App DB file: antmedia/liveapp.db
WebRTCAppEE DB File: antmedia/webrtcappee.db (Enterprise Edition)
WebRTCApp DB File: antmedia/webrtcapp.db (Community Edition)
Server Management DB file: antmedia/server.db
Settings folder of applications: antmedia/webapps/{App_Name}/WEB-INF/
Streams Folder: antmedia/webapps/{App_Name}/streams/
1) Because the WebRTC apps differ between editions, copy server.db, liveapp.db and the LiveApp settings when migrating Community -> Enterprise.
2) In that case, copy server.db, liveapp.db and webrtcapp.db, as well as the settings and streams folders of each application, to the new instance (see the sketch below).
3) Just back up the files mentioned above; you can then migrate or recover using these files.
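A hedged sketch of the copy for case (2), assuming the default install directory /usr/local/antmedia on both hosts and a service named antmedia; stop the server on both sides before copying, and adjust paths and app names to your setup:

# stop the service on both hosts first (service name assumed)
sudo service antmedia stop
# database files
scp /usr/local/antmedia/server.db /usr/local/antmedia/liveapp.db /usr/local/antmedia/webrtcappee.db newhost:/usr/local/antmedia/
# settings and streams of each application (LiveApp shown as an example)
rsync -a /usr/local/antmedia/webapps/LiveApp/WEB-INF/ newhost:/usr/local/antmedia/webapps/LiveApp/WEB-INF/
rsync -a /usr/local/antmedia/webapps/LiveApp/streams/ newhost:/usr/local/antmedia/webapps/LiveApp/streams/
sudo service antmedia start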

WSO2 API Manager - API-specific configuration files

We use a configuration management tool (Chef) for WSO2 API Manager installation (v2.1.0). For each installation, the WSO2 directory is deleted and overwritten with the new changes/patches.
This process removes already-created APIs from the WSO2 API Publisher. (Since these are still present in the database, they cannot be re-created with the same name.) We had assumed that the entire API configuration was stored in the database, which is obviously not the case.
This API-specific file caught our attention:
<wso2am>/repository/deployment/server/synapse-configs/default/api/admin--my-api-definition_vv1.xml
Are there any other such files that must not be deleted during a new installation, or is there a way to recreate these files from the information stored in the database?
We have considered using the API import/export tool (https://docs.wso2.com/display/AM210/Migrating+the+APIs+to+a+Different+Environment). However, according to the documentation, this also creates the database entries for the API, which in our case already exist.
You have to keep the content of the server folder (/repository/deployment/server). For this, you can use SVN-based dep-sync. Once you enable dep-sync by pointing it at an SVN server location, all the server-specific data will be written to the SVN server (a hedged example of the configuration is below).
When you install the newer pack, all you need to do is point it at the same SVN location and the same database. (I hope you are using a production-ready database rather than the built-in H2.)
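For reference, dep-sync is configured in <wso2am>/repository/conf/carbon.xml. A hedged sketch of the block (element names as per the WSO2 Carbon documentation; the SVN URL and credentials are placeholders):

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/wso2-depsync/</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>svnpassword</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>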

Adding a virtual directory pointing to Azure Cloud Files for Cloud Service via configuration file

My company has recently launched a new, always-online cloud service with a number of shared resources (learning content) that need to be available to both the staging environment and production.
We have chosen to host these files on an Azure Cloud Files storage device and map the drive in Application_Start; however, currently we need to manually map the virtual directory to the appropriate location in staging before swapping with the production environment. This is not ideal, as should the virtual machine (VM) be restarted, the originally uploaded configuration is used, the virtual directories we added are removed, and our content becomes unusable.
Virtual directories appear to be pre-configurable via the ServiceDefinition.csdef file; however, I cannot set the physicalDirectory attribute to the Cloud Files storage device, as it is not available on my local machine, and this prevents publishing from proceeding. I fear using configuration transformations will suffer the same fate.
Previously, I have looked into using the LocalStorage feature of the Cloud Service, but this appears to be even more volatile and cannot be shared between the two set-ups.
Is there a more appropriate storage solution we can look into that will suit our set-up?
Many thanks.
EDIT:
Just a quick example of what I have for the current virtual directory set-up in ServiceDefinition.csdef:
<VirtualDirectory name="media" physicalDirectory="\\drive.file.core.windows.net\media\courseware" />
<VirtualDirectory name="app_data" physicalDirectory="\\drive.file.core.windows.net\app_name\app_data" />
I assume by 'Azure Cloud Files storage device' you mean the Azure File storage service.
Have you tried mounting the share when the web role starts? Alternatively, you can use net use (step 4) when your web role starts.
Jack, you need to persist the credentials (i.e. your storage account name and key) so that your connection to Azure File storage won't be lost during a reboot.
Are you using an Azure PaaS role? If that's the case, you need to ensure that your code can reconnect automatically, whether the system has started a fresh instance or your instance has just been restarted.
Please refer to the post below for details.
http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/27/persisting-connections-to-microsoft-azure-files.aspx
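In practice, the approach in that post boils down to persisting the storage credentials with cmdkey and then mapping the share again at role start, e.g. from a startup task or in OnStart. A minimal sketch (the storage account name, share name and key are placeholders):

rem persist the storage account credentials so they survive a reboot
cmdkey /add:mystorageaccount.file.core.windows.net /user:AZURE\mystorageaccount /pass:<storage-account-key>
rem map the share; re-run this at role start rather than relying on a persistent mapping
net use Z: \\mystorageaccount.file.core.windows.net\media /persistent:no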

Using Robocopy to deploy sites

I want to be able to quickly deploy updates to a site that is fairly busy. For smaller sites I would just FTP the new files over the old ones. This one, however, has a few large DLLs that regularly get updated, and while they are copying the site is effectively down (plus there is the hassle of making backups of them in case something goes wrong).
My plan is to use TortoiseHg to synchronise with a staging copy on the server over FTP (using NetDrive or something similar). I can then check that all is running smoothly, and once that is complete I would like to run a .bat file (or something else) that will create a backup of the live site (preferably only the files that are about to change, but that is not critical) and then copy the newly changed files over to the live site.
If possible, I also want the copy to ignore certain directories (like user uploads) so that it won't overwrite those files on the live site.
I've heard Robocopy is the way to go, but I'm not sure where to start. Would I need to call two commands (one for the initial backup and one for the copy)? Is there any way to restore the live site to its previous state should something go wrong?
The site is in ASP.NET and would be copied to Windows 2003 server.
EDIT: It gets a little tricky when web.config items have changed and need to be merged so that the staging server's settings (appSettings, connection strings, etc.) don't get deployed to the live site. How does that get handled?
What we use is the following:
1) Build the website with MSBuild in CruiseControl.NET to produce the binaries.
2) Archive the currently deployed files under a timestamped folder, to avoid losing data in case of a problem:
C:\DevTools\Robocopy\robocopy.exe /R:1 /W:10 /mir "D:\WebSite\Files" "D:\Webarchive\ArchivedFiles\Documents.%date:~0,-8%.%date:~3,-5%.%date:~6%.%time:~0,-9%.%time:~3,-6%.%time:~6,-3%" /XF *.scc
3) Stop the website.
4) Deploy the website by copying everything except the files we archived (/XD is eXclude Directory):
C:\DevTools\Robocopy\robocopy.exe /R:1 /W:10 /mir "c:\dev\site" "D:\WebSite" /XF *.scc /XD "D:\WebSite\Files"
5) Copy and rename (with xcopy, this time) a release.config with the correct information to D:\WebSite\web.config (in fact, that's what we used to do; now we have a homebrew transformation engine that changes parts of the dev web.config on the fly).
6) Restart the website.
7) (Optional) delete the archive you made at step 2.
In your case, you'll have to add /XD flags for any directories you want to ignore, such as the users' uploads. And unless the production web.config file is complicated, I'd really recommend simply copying a release.config that you maintain as part of the project, side by side with web.config.
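If it helps, here is a consolidated .bat sketch of the flow above; the paths and the uploads folder name are placeholders, and the timestamp handling is locale-dependent (adjust it the same way as the archive command above):

@echo off
rem placeholder paths - adjust to your layout
set SRC=c:\dev\site
set LIVE=D:\WebSite
set ARCHIVE=D:\Webarchive\%date:~-4%-%date:~3,2%-%date:~0,2%_%time:~0,2%%time:~3,2%

rem 1. archive the current live site before deploying
robocopy "%LIVE%" "%ARCHIVE%" /MIR /R:1 /W:10 /XF *.scc

rem 2. deploy, excluding source-control files, the live web.config and user uploads
rem    (excluded files/folders are also protected from /MIR's purge)
robocopy "%SRC%" "%LIVE%" /MIR /R:1 /W:10 /XF *.scc web.config /XD "%LIVE%\uploads"

rem 3. put the environment-specific config in place
copy /Y "%SRC%\release.config" "%LIVE%\web.config"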
Is Robocopy a hard requirement? Why not use MSBuild? Everything you have listed can painlessly be done in MSBuild.
<!-- Attempt to build new code -->
<MSBuild Projects="$(BuildRootPath)\ThePhotoProject.sln" Properties="Configuration=$(Environment);WebProjectOutputDir=$(OutputFolder);OutDir=$(WebProjectOutputDir)\" />
<!-- Get temp file references -->
<PropertyGroup>
<TempConfigFile>$([System.IO.Path]::GetTempFileName())</TempConfigFile>
<TempEnvironmentFile>$([System.IO.Path]::GetTempFileName())</TempEnvironmentFile>
</PropertyGroup>
<!-- Copy current web configs to temp files -->
<Copy SourceFiles="$(OutputFolder)\web.config" DestinationFiles="$(TempConfigFile)"></Copy>
<Copy SourceFiles="$(OutputFolder)\web.$(Environment).config" DestinationFiles="$(TempEnvironmentFile)"></Copy>
<ItemGroup>
<DeleteConfigs Include="$(OutputFolder)\*.config" />
</ItemGroup>
<Delete Files="#(DeleteConfigs)" />
...
<!-- Copy app_offline file -->
<Copy SourceFiles="$(CCNetWorkingDirectory)\Builder\app_offline.htm" DestinationFiles="$(DeployPath)\app_offline.htm" Condition="Exists('$(CCNetWorkingDirectory)\Builder\app_offline.htm')" />
<ItemGroup>
<DeleteExisting Include="$(DeployPath)\**\*.*" Exclude="$(DeployPath)\app_offline.htm" />
</ItemGroup>
<!-- Delete Existing files from site -->
<Delete Files="#(DeleteExisting)" />
<ItemGroup>
<DeployFiles Include="$(OutputFolder)\**\*.*" />
</ItemGroup>
<!-- Deploy new files to deployment folder. -->
<Copy SourceFiles="#(DeployFiles)" DestinationFiles="#(DeployFiles->'$(DeployPath)\%(RecursiveDir)%(Filename)%(Extension)')" />
<!-- Delete app_offline file -->
<Delete Files="$(DeployPath)\app_offline.htm" Condition="Exists('$(DeployPath)\app_offline.htm')" />
On *nix-based servers I would use rsync, and I understand that on Windows you can use DeltaCopy, which is a port of rsync and is open source (I've never used DeltaCopy, so please check it carefully). Assuming it works like rsync, it is fast and only updates files that have changed.
You can use various configuration options to delete files on the target that have been deleted on the source, and you can also add an exclude file listing the files or directories, e.g. the local config, that you do not want copied.
You should be able to fold it all into one script to run when required, which means you can test and time it so you know what is happening.
Check out these links to see if they help:
ASP.NET website Continuous Integration+Deployment using CruiseControl.NET, Subversion, MSBuild and Robocopy
Deployment to multiple folders with Robocopy
You'll find that robocopy.exe /? is extremely helpful. In particular you'll want the /XF switch for excluding files, and /XD for excluding folders.
You will need to write a script (e.g. bat, powershell, cscript) to take care of the web.config issues though.
Microsoft themselves use robocopy to deploy updates to some sites.
I don't know if you have multiple servers, but our deployment script went something like: 1) stop IIS (which takes the server out of load-balancer rotation), 2) Robocopy /MIR from \\STAGING\path\to\webroot to \\WEB##\path\to\webroot, where ## is the number of the server, 3) start IIS. This was done after the site was smoke-tested on the staging server.
That doesn't much help with your config problem, but our staging and production config files were the same.
What you need (and I need) is a synchronization program that can create a backup of the files on the server and make a quick copy of the files over FTP all at once, probably by copying them to a temporary directory first, or by doing partial updates.
This is one program that I found: http://www.superflexible.com/ftp.htm
WebDeploy is a much better way to handle deploys (see Scott Hanselman's post: http://www.hanselman.com/blog/WebDeploymentMadeAwesomeIfYoureUsingXCopyYoureDoingItWrong.aspx).
But Robocopy is a great low-cost deploy tool that I still use on some sites (I haven't found the time to switch them to WebDeploy). Robocopy is like xcopy but with a much richer set of options, so you would need two Robocopy commands (one for backup and one for deploy). I normally run the backup command when the files are staged.
Managing config files is always tricky (and a big reason to use WebDeploy). One approach is to keep a copy of the config file for each environment checked into your source control (e.g. web.dev.config, web.uat.config, web.prod.config). The staging (or deploy) script would then grab and rename the appropriate config file.
You would probably need to use a combination of tools.
I would have a look at DFSR (File Server role) with a read-only folder on your live site (so it's one-way replication).
It is very easy to configure, has a nice GUI, and can exclude files based on location and/or masks; with Volume Shadow Copy enabled it can update only the files that change, run on a schedule you set, or even be run manually. The beauty of this is that once it is configured, you don't have to touch it again.
Once you have the bulk of your files replicating, you could then get help automating the merge of web.config, assuming you want that automated.
MSBuild is great, except for one minor (or major, depending on your point of view) flaw: it rebuilds the binaries every time you run a build. This means that for deploying from TEST to PRODUCTION, or STAGE to PRODUCTION (or whatever your pre-production environment is called), if you use MSBuild you are not promoting existing binaries from one environment to the next, you are rebuilding them. It also means you are relying on the certainty that NOTHING has changed in the source-code repository since you built for your pre-production environment. Allowing even the slightest chance of a change to anything, major or minor, means you will not be promoting a fully tested product into your production environment. In the places I work, that is not an acceptable risk.
Enter Robocopy. With Robocopy, you are copying a (hopefully) fully tested product to your production environment. You would then either need to manually modify your web.config/app.config to reflect the production environment, or use a transformation tool to do that. I have been using the "Configuration Transformation Tool" available on SourceForge for that purpose; it works just like the MSBuild web/app.config transformations.
