In Artifactory repositories, what is the difference between a file integration revision and a folder integration revision?

I'm looking at different repository layouts, and I see a distinction between folder integration revisions and file integration revisions.
Are these the same revision number (just placed on a file and/or folder) or are they different things?
Here is a link where I see both mentioned: Repository Layouts

Folder integration revision refers to the integration revision as it appears in an artifact's folder structure, while file integration revision refers to the integration revision as it appears in an artifact's file name.
Consider a number of integration revision paths according to different build tool standards:
Standard Ivy:
org/module/1.0-20111214124053/jars/module-1.0-20111214124053.jar
Non-unique Maven:
groupId/artifactId/1.0-SNAPSHOT/artifactId-1.0-SNAPSHOT.jar
Unique Maven:
groupId/artifactId/1.0-SNAPSHOT/artifactId-1.0-20111214.124253-1.jar
So as you can see, standard Ivy and non-unique Maven have identical file and folder integration revisions (20111214124053 for Ivy, SNAPSHOT for Maven), while in unique Maven the two differ (folder: SNAPSHOT, file: 20111214.124253-1).
In the context of Artifactory's repository layouts, the value of each is a customizable regular expression which should match the expected value of the integration revision. Providing this information helps Artifactory distinguish between release and integration artifacts and extract the revision information from the path.
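For example, the stock maven-2-default layout in Artifactory expresses this with two tokens in the artifact path pattern plus one regex for each (quoted from memory of the default layout, so double-check the exact values against your instance):

[orgPath]/[module]/[baseRev](-[folderItegRev])/[module]-[baseRev](-[fileItegRev])(-[classifier]).[ext]
Folder integration revision regexp: SNAPSHOT
File integration revision regexp: SNAPSHOT|(?:(?:[0-9]{8}.[0-9]{6})-(?:[0-9]+))

Matched against the unique Maven path above, [folderItegRev] captures SNAPSHOT while [fileItegRev] captures 20111214.124253-1, which is exactly the file/folder distinction being asked about.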

Related

Packing and publishing NuGet packages with .NET CLI in TeamCity

I am trying to create a TeamCity build template which requires minimal customisation, and I want it to play nicely with legacy projects as well as projects developed with .NET Core/Standard and the .NET CLI.
I am stuck with NuGet as there were some considerable changes in how things work.
Earlier we had to create a nuspec file to pack a project as a NuGet package; at least in that file we could define various package-related properties.
The new csproj file format allows us to define all package properties in the project file itself (see the sketch below). That's fine, but how then do we know which projects should be packaged and which should not?
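For reference, the package metadata now lives in a PropertyGroup of the SDK-style csproj (these are standard NuGet/MSBuild property names; the values are made up):

<PropertyGroup>
  <IsPackable>true</IsPackable>
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
  <PackageId>MyCompany.MyLibrary</PackageId>
  <Version>1.2.3</Version>
  <Authors>MyCompany</Authors>
</PropertyGroup>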
So far, our TeamCity Pack NuGet build step just contained **.nuspec in the Specification files: field. The mere presence of a nuspec file served as a flag: pack & publish this project.
However, for dotnet pack we need to specify the project. There is no simple way to distinguish 'main' projects from 'auxiliary' ones on which the main ones depend. (Let us ignore the fact that project-to-project references are currently not supported.)
We could either pack all projects by specifying **.*proj (but then we still need to know which packages to publish), or we could list the projects explicitly in the build configuration; I don't like that approach, though, because the build configuration has to be edited every time a new project is added to the solution.
I also considered enabling the Generate package on build option and omitting the dotnet pack step altogether, since the package is then created on build. The only thing left would be publishing the packages with dotnet nuget push, specifying **/%BuildConfiguration%/*.nupkg.
Unfortunately, starting a build against a solution that has no projects with Generate package on build enabled makes TC fail, complaining that
Target files not found for pattern "**/Release/*.nupkg"
Hence, I either need another recipe for achieving the required result, or advice on how to make TC treat an empty result as a no-op and mark the build as successful.
Yet another option is to use a nuspec file even for the new csproj format...
Starting with TeamCity 2017.2, it will be possible to associate a build configuration with multiple templates, so you will be able to create different templates to package old-style projects and new .NET CLI projects.
To specify the paths of the target .NET projects which should be packaged, you could use build configuration parameters.
To set such a parameter during the build, you could send a service message from a preceding build step. The value of the parameter could be set to the list of target project files, which could be selected via a script like this one: https://stackoverflow.com/a/8153857/305875
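As a sketch of that idea (the parameter name and the IsPackable convention are assumptions on my part; the setParameter service message itself is standard TeamCity syntax), a preceding step could scan the checkout directory and publish the list of packable projects:

import glob

# Collect SDK-style projects that opted in to packaging.
packable = []
for proj in glob.glob('**/*.csproj', recursive=True):
    with open(proj, encoding='utf-8') as f:
        if '<IsPackable>true</IsPackable>' in f.read():
            packable.append(proj)

# TeamCity reads service messages from stdout; 'system.PackableProjects'
# is a hypothetical parameter name. Proper service-message escaping of
# |, ', [ and ] is omitted for brevity.
print("##teamcity[setParameter name='system.PackableProjects' value='%s']"
      % ';'.join(packable))

A later dotnet pack or dotnet nuget push step could then reference %system.PackableProjects%, and an empty list can be handled explicitly instead of failing the build.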

SOA MDS Target folder

I would like to understand what role the target folder plays in a SOA MDS project.
I am using JDeveloper and the target folder keeps getting populated with 2 .jar files. I am not sure where these jar files are coming from, but they contain old data which should be changed.
Can somebody please help me understand how these files are being generated?
The target folder is the default build output directory used by Maven.
If everything is working correctly, the builds are generated there by Maven, using the configuration specified in the pom.xml file. In your case, the Maven build might not have been run recently, which is why you see old content in the jars.
Have a look inside the pom.xml and see what build configuration has been specified there (it is likely no different from a SOA composite Maven build/pom file). If it all builds correctly, you should be able to deploy that jar directly to the MDS runtime (either manually or via Maven).
In the pom file you should be able to override most things, including the name, version, bundle type, target directory, etc.
You can also use Maven to keep track of your MDS changes, i.e. version them like any other build artifact/SOA composite. The versioned jars can also be uploaded to an artifact repository (such as Nexus), in addition to being deployed to the MDS runtime, so you get a good level of traceability of MDS changes.
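As a minimal illustration (the coordinates below are hypothetical, not taken from your project), the pom is where the jar's name, version and packaging are controlled, and running mvn clean package will wipe target and rebuild the jars from current sources, which should make the stale content disappear:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.soa</groupId>
  <artifactId>shared-mds</artifactId>
  <version>1.0.2</version>
  <packaging>jar</packaging>
  <build>
    <!-- "target" is the default output directory; finalName controls the jar name -->
    <finalName>${project.artifactId}-${project.version}</finalName>
  </build>
</project>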
PS -
This might help explain more: http://weblog.singhpora.com/2016/10/managing-shared-metadata-mds-in-ci.html

Composing custom builds - JSON payload examples

Are there more examples of custom build JSON payloads beyond those available at https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API? Or perhaps more in-depth documentation on the “application/vnd.org.jfrog.build.BuildsByName+json” payload?
We have a build that produces both JAR/IVY and RPM files (and some other file types that Artifactory doesn’t really know the content of). Today, we publish those into a generic repository to keep everything together.
What would be ideal is to be able to create my own custom build using the REST API, composed of the JAR files + RPM files, so I can do licensing searches across them.
In the given example, the artifacts composing the build are referenced by ID/name/hash.
The problem with the current Jenkins/Artifactory/Gradle plugin that we use is that our build is separated into many smaller builds which are ultimately released as one. This makes producing a full report somewhat difficult, and leaves us no easy way to do license checks that include RPM files. We want to be able to publish one build which contains everything we know about the build.
The current setup also has us uploading our JARs into a Maven repository, which adds time to the builds, given that we also publish the same content into the generic repository alongside the RPMs and other content.
Thanks!
The build info JSON is fully documented in the README of this repository: https://github.com/JFrogDev/build-info, which is also the repository that holds the code of the build-info engine used by the various JFrog CI/build plugins. You can definitely create your own BI JSON, and, if you're going to use Java to do that, you should check out this project that demonstrates the usage of the various build-info Java APIs: https://github.com/JFrogDev/project-examples/tree/master/build-info-java-example
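For orientation, a stripped-down build-info payload has roughly this shape (field names follow the build-info README; the module, checksums and values are invented for the example):

{
  "version": "1.0.1",
  "name": "my-aggregated-build",
  "number": "42",
  "started": "2017-06-01T12:00:00.000+0000",
  "modules": [
    {
      "id": "com.example:my-module:1.0.0",
      "artifacts": [
        { "type": "jar", "name": "my-module-1.0.0.jar",
          "sha1": "99129f16442844f6a4a11ae22fbbee40b14d774f",
          "md5": "4735de6f9ffb16b787e0ebd9e7b6b4e0" },
        { "type": "rpm", "name": "my-module-1.0.0.x86_64.rpm",
          "sha1": "...", "md5": "..." }
      ]
    }
  ]
}

Such a payload can be uploaded with the Build Upload REST endpoint (PUT /api/build), so JARs and RPMs can sit side by side in one build entity.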
Another option you may want to look into is the JFrog CLI, which recently added support for associating artifact deployment/resolution with a build object and deploying it to Artifactory. This method is completely agnostic to the file types your build produces or the build tool you are using. Have a look at the official documentation here: https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-BuildIntegration
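The flow there looks roughly like this (repository names, build name and number are placeholders):

jfrog rt upload "dist/*.jar" generic-local/myorg/ --build-name=my-build --build-number=17
jfrog rt upload "dist/*.rpm" generic-local/myorg/ --build-name=my-build --build-number=17
jfrog rt build-publish my-build 17

Each upload is recorded against the named build object, and build-publish then deploys the accumulated build info to Artifactory as a single build.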
Lastly, if you are using Jenkins, the Jenkins Artifactory Plugin now has Pipeline APIs that allow you to collect artifacts and build information programmatically, and even concatenate multiple build-info objects to deploy them as a single build entity to Artifactory, which is pretty wicked. Have a read about this here: https://wiki.jenkins-ci.org/display/JENKINS/Artifactory+-+Working+With+the+Pipeline+Jenkins+Plugin

SBT How to disable Ivy cache for a specific groupid

I have several Scala modules that I'm building with SBT. Some of them (I'll call them dependent modules) are published to Artifactory and then used by top-level modules.
All changes to the code are done in separate git branches. When the feature (or bugfix) is done, that branch is compiled in Jenkins and then deployed to the testing instance and handed off to the QA team.
So it's possible that there will be several git branches with different code in the dependent modules.
The problem is that Ivy caches these modules locally, so it's possible that the top-level module will be built with a dependent module from a different branch (taken from the local cache).
I've tried adding the changing() directive to the dependency specification in build.sbt.
In that case Ivy ignores the local cache and goes to Artifactory to download the POM file every time. It then parses the POM file but concludes that it already has the jar with that version in the local cache, so it fetches the jar from the local cache and not from Artifactory. This isn't what I want.
Since the code in these branches hasn't been integrated into the master branch at this point, it's perfectly valid for different feature branches to have the same version number but different code.
Is there a way to tell Ivy (via SBT) to ignore the local cache for a certain groupid? Or for a single dependency at least?
If you are versioning your dependent modules, then each codebase change must produce a different version. Ivy and Maven expect that once an artifact has been published with a specific version it will stay unchanged forever; that is why they use cached files. If you want to download a fresh version from the repository on every compile, you should add the -SNAPSHOT suffix to the dependent module's version number (e.g. dep-module-1.1.1-SNAPSHOT).
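A minimal sketch (organization and module names invented): the dependent module publishes feature-branch builds as SNAPSHOTs, and the consumer declares the SNAPSHOT dependency, which Ivy re-checks against Artifactory instead of trusting its cache:

// dependent module's build.sbt
version := "1.1.1-SNAPSHOT"

// top-level module's build.sbt
libraryDependencies += "com.example" %% "dep-module" % "1.1.1-SNAPSHOT"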

CCNET - build task required? Multiple repositories, one CCNET source section per project

CCNET questions - Here's the scenario:
I've got 10 developers doing local development on a Sitecore installation, with Git as version control. When done with a feature/fix, they push to an integration repository.
I've got CCNET set up for the Sitecore project, pointing to the remote integration repository and to the local live QA code base. CCNET finds the commits that my developers have made to the integration repository and then updates the QA code base repository.
I also have a couple of other .NET class library projects that are managed by CCNET and compiled with their output pointed to the Sitecore bin dir.
The Sitecore installation is merely the result of a build, with no compilable aspects. It's a web product with its own API, as well as the ability to integrate custom DLLs that we create to customize the product.
Questions:
Is the CCNET build task required as a condition for executing other activities such as NUnit or robocopy? (I ask because a "build" is natively used to compile an app and generate output, whereas the only reason we'd want to build is to make sure all dependencies are there so we can jump straight to unit testing.)
If my developers are NOT pointing to a centralized repository like integration, how would CCNET know where all of their remote Git repositories are, when the config doc only allows one Git source control section per project?
When I configure the Git VC specs for each project, it asks for the branch, which is statically saved to the doc. Does CCNET have the ability to accept different branches dynamically?
There's no need to have an "actual build" in your project - it can consist of any type of task inside the tasks element. I have a couple of projects which only copy files from the repository to an FTP server after deleting some files which shouldn't be published.
I have no experience with Git, but you do have the possibility of defining multiple source control blocks of any type if you use the multi source control block.
You could use dynamic parameters which allow the user to set their values when triggering the build.
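To sketch both ideas in ccnet.config terms (element names as I recall them from the CCNET docs, so verify against your version; repository URLs and paths are made up): a multi block aggregating several git blocks, a dynamic parameter used for the branch, and a tasks element with no compile step at all:

<project name="sitecore-qa">
  <parameters>
    <!-- dynamic parameter: the user picks the branch when triggering the build -->
    <textParameter name="branchName">
      <defaultValue>master</defaultValue>
    </textParameter>
  </parameters>
  <sourcecontrol type="multi">
    <sourceControls>
      <git>
        <repository>git://devbox1/sitecore.git</repository>
        <branch>$[branchName]</branch>
      </git>
      <git>
        <repository>git://devbox2/sitecore.git</repository>
        <branch>$[branchName]</branch>
      </git>
    </sourceControls>
  </sourcecontrol>
  <tasks>
    <!-- no compile step at all; robocopy exit code 1 means "files copied" -->
    <exec executable="robocopy.exe">
      <buildArgs>C:\integration C:\qa /MIR</buildArgs>
      <successExitCodes>0,1</successExitCodes>
    </exec>
  </tasks>
</project>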