I'm currently getting started with JFrog Artifactory. Up to now I have only worked with source code control systems, not with binary repositories.
Can someone please tell me how the versioning of files is done in Artifactory?
I have been trying to deploy a file, then change it and deploy it again.
The checksum has changed, so it's the new file. But it seems that the old version is gone.
So it looks like there is no versioning of files. If I want that, do I have to encode the version in the filename?
I found versioning related to packages, but I was thinking of using it for other files as well.
Thanks for your help
Christoph
Artifactory, unlike a VCS, does not manage a history of versions for a given path. When you deploy an artifact over an existing artifact, it will overwrite it (you can block this by configuring the right permissions).
If you wish to manage versions for generic artifacts (ones which are not managed by a known package manager like npm, Maven, etc.), there are a couple of options you can take:
Add the version as part of the artifact name, for example foo-1.0.0.zip
Add the version as part of the artifact path, for example /foo/1.0.0/foo.zip
Combine the 2 above approaches, for example /foo/1.0.0/foo-1.0.0.zip (see the deployment sketch after this list)
Use an existing package management tool which is flexible enough to handle generic packages. Many people are using Maven to manage all types of packages beyond Java ones (it comes with its pros and cons)
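If you go with the versioned path/name approach, deploying a new version is just an HTTP PUT to a versioned target path in a generic repository, so nothing ever overwrites an older version. A minimal sketch in Python; the base URL, repository name, credentials and file names are placeholders, not anything Artifactory prescribes:

# Sketch: deploy an artifact to a versioned path in a generic Artifactory
# repository using a plain HTTP PUT. URL, repo name and credentials are
# placeholders.
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"
REPO = "generic-local"

def deploy(local_file, name, version):
    """Upload local_file to <repo>/<name>/<version>/<name>-<version>.zip."""
    target = f"{ARTIFACTORY_URL}/{REPO}/{name}/{version}/{name}-{version}.zip"
    with open(local_file, "rb") as fh:
        resp = requests.put(target, data=fh, auth=("user", "password"))
    resp.raise_for_status()
    return target

deploy("build/output/foo.zip", "foo", "1.0.0")

Because each version lands at its own path, redeploying 1.0.1 later leaves 1.0.0 untouched.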
From the Artifactory point of view there are a couple of capabilities you can leverage:
Generic repositories - aimed at managing proprietary packages which are not managed by a known package manager
Custom repository layout - can be used to define a custom layout for your generic repository and assist with tasks like automatic snapshot version cleanup
Properties - can be used to add version (and other) metadata to your artifacts, which can then be used for searching, querying, resolution and more (a short example follows below)
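As a rough illustration of the properties option, the sketch below attaches a version property to an already-deployed artifact and then finds artifacts by that property, using the storage-properties and property-search REST endpoints. The URL, repository, path and credentials are placeholders:

# Sketch: tag an artifact with version metadata via properties, then
# search by that property. URL, repository and credentials are placeholders.
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"
AUTH = ("user", "password")

# Attach version metadata to an existing artifact.
props_url = (f"{ARTIFACTORY_URL}/api/storage/generic-local/foo/1.0.0/"
             "foo-1.0.0.zip?properties=version=1.0.0;project=foo")
requests.put(props_url, auth=AUTH).raise_for_status()

# Later: find every artifact carrying that version property.
resp = requests.get(
    f"{ARTIFACTORY_URL}/api/search/prop",
    params={"version": "1.0.0", "repos": "generic-local"},
    auth=AUTH,
)
resp.raise_for_status()
for hit in resp.json().get("results", []):
    print(hit["uri"])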
Lastly, Conan is another option you should consider. Conan is a package manager intended for C and C++ packages. It is natively supported in Artifactory and can give you a more complete solution for managing your C/C++ libraries.
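For completeness, a Conan recipe for a prebuilt library is itself just a small Python file. This is a sketch using Conan 1.x syntax; the package name, version and copy patterns are placeholders:

# conanfile.py sketch (Conan 1.x): package a prebuilt C/C++ library.
# Name, version and copy patterns are placeholders.
from conans import ConanFile

class FooConan(ConanFile):
    name = "foo"
    version = "1.0.0"
    settings = "os", "arch", "compiler", "build_type"
    exports_sources = "include/*", "lib/*"

    def package(self):
        # Copy headers and prebuilt binaries into the package folder.
        self.copy("*.h", dst="include", src="include")
        self.copy("*.a", dst="lib", src="lib")
        self.copy("*.lib", dst="lib", src="lib")

    def package_info(self):
        self.cpp_info.libs = ["foo"]

You would then create the package with the Conan client and upload it to a Conan repository in Artifactory configured as a remote.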
I'm using Unity 2019.3.4f1 and Firebase package 6.15.2, and when I import the Custom Package the Package Manager Resolver asks to change the "/manifest.json".
When I click "Add Selected Registries", Unity starts to uninstall the packages and stays like that for hours with this message: "Uninstalling the following packages: Firebase Authentication"
[Screenshot: Package Manager Resolver dialog]
I can't deploy or play the game without this message appearing.
What you're running into is that newer versions of Unity support a package manager that makes it easier to install and manage dependencies. Newer versions of the Firebase SDK can optionally take advantage of this.
I can't quite tell what your specific issue is, but there are a couple routes forward.
The simplest is to just click "Disable Registry Edition." If you don't feel like messing with this system at all, just click this button and work with Unity packages as you're probably expecting.
Otherwise, you can click "Add Selected Registries." This will kick off a somewhat complex process where:
The Firebase plugin adds a "Scoped Registry" to Unity (this is that code you see in the "After" pane). This tells Unity about Google's package repository.
The Plugin then looks for any package that is included in the registry and uninstalls it to avoid having it included twice.
The Plugin adds the package it uninstalled to your package manifest. This lets you maintain and update the Firebase plugin right in Unity with the "Package Manager" window.
There are plenty of reasons why you'd want to do this. First, the Firebase Unity SDK is HUGE on disk -- much larger than what actually ends up in your project. The reason is that it has redundant copies of every Firebase library for both legacy .NET3 projects and for modern .NET4 projects. Each unitypackage also has to include all of its dependencies -- that means that FirebaseCore is redundantly included in every unitypackage.
Second, what gets me the most, is that the Firebase plugin is too big to fit into a GitHub repo without Git LFS. This is because one or more of the libraries needed to support Linux is larger than GitHub's single-file size limit. When you use the package manager, this stuff is kept out of your source repository (assuming you don't commit the Library/ directory, which you shouldn't), keeping your repository size down and making this workaround unnecessary.
Third, as illustrated in the screenshot I included above, it's just easier to upgrade and downgrade the Firebase SDK as needed when you use the package manager. You no longer have to try to remember which Firebase libraries you've installed; you can see them in a neat list! You can also easily uninstall Firebase features that you don't need without worrying too much about large dependencies lying around (you still have to manually clean up some native plugins).
You can also perform all of this manually! Instead of downloading the Unity SDK, you can manually perform the steps as outlined here. Namely you can add:
"scopedRegistries": [
{
"name": "Game Package Registry by Google",
"url": "https://unityregistry-pa.googleapis.com",
"scopes": [
"com.google"
]
}
]
to the end of your Packages/manifest.json as indicated in that popup window. Then install and manage the Firebase plugin that way without worrying about migration at all.
EDIT:
I also should mention that if you do think that you're running into a bug, the system responsible for the dialog you're seeing is known as the "External Dependency Manager for Unity." You can file issues directly on its GitHub page.
Use case
I have a C++ build pipeline that creates my application. The output of this pipeline is a directory which contains my C++ application. I didn't know where to put these build outputs, so I installed Sonatype Nexus in order to categorize and manage them. (Just to clarify what my initial requirement is.)
After installation
After installing Sonatype Nexus I can now configure my build artifact repository. In the configuration I have the choice of a repository format, but none of the ones listed seems to fit my requirement. I can only find repository formats which seem to be used as an input for my pipeline, not as an output. Is there anything I'm missing, or which format would fit my need?
The raw format will allow you to store any arbitrary file type in any arbitrary folder structure.
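For example (a sketch with a placeholder host, repository name and credentials), publishing a pipeline output into a raw hosted repository is just an HTTP PUT to whatever folder structure you choose:

# Sketch: upload a C++ build output to a Nexus "raw" hosted repository.
# Host, repository name and credentials are placeholders.
import requests

NEXUS_REPO_URL = "https://nexus.example.com/repository/cpp-builds"

def publish(local_file, app, version, filename):
    """PUT the file to <repo>/<app>/<version>/<filename>."""
    target = f"{NEXUS_REPO_URL}/{app}/{version}/{filename}"
    with open(local_file, "rb") as fh:
        requests.put(target, data=fh, auth=("user", "password")).raise_for_status()
    return target

publish("out/myapp.tar.gz", "myapp", "2.3.0", "myapp-2.3.0.tar.gz")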
That being said, I'm not at all a C/C++ expert, but the Conan format seems to be dedicated to packaging for your language.
I see a little star next to the format name in your screenshot, indicating that this is still a community-supported plugin. Hence you are running a Nexus version older than 3.22.0.
You should consider upgrading, since the Conan format is now part of core Nexus and your version is subject to security vulnerabilities that were fixed in 3.22.0 and 3.22.1. See the release notes.
I prepared a symbol package successfully using dotnet pack's --include-symbols and --include-source switches. Now I wonder how to tell Visual Studio to use that package when trying to step into code of the corresponding non-symbol one.
I tried placing the symbol package in a local folder and configuring a solution-level nuget.config file to use this folder as a package source. The idea was that there is maybe some naming convention that looks for packages like {name}.symbols in all configured package sources... but that doesn't work.
Official docs (especially the older ones) talk a lot about configuring "Symbol Servers", but if I understand correctly, that's something different and older, right? If I wanted to set up an internal symbol server, I wouldn't do that through NuGet. (I really don't want to set up an internal symbol server.)
They also suggest pushing to smbsrc.net, but I can't do that with internal code, obviously. Also, I can't believe there are hard-coded URLs in the .NET toolbox.
I didn't find a way to meaningfully use the sources included by the --include-source switch.
There are alternatives though:
SourceLink offers a way to configure a mapping between source code build paths and HTTP locations. Unfortunately, that does not work for private repositories without specific support for the source control server's authentication method. Bitbucket Server, for example, is not yet supported.
You can embed the sources directly into the PDBs (via the EmbedAllSources MSBuild property). I will probably go that way.
Are there more examples of custom build JSON payloads beyond that available at https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API? Or perhaps more in-depth documentation on the “application/vnd.org.jfrog.build.BuildsByName+json” payload?
We have a build that produces both JAR/IVY and RPM files (and some other file types that Artifactory doesn’t really know the content of). Today, we publish those into a generic repository to keep everything together.
What would be ideal is to be able to create my own custom build using the REST API, composed of the JAR files + RPM files, so I can do licensing searches across them.
In the given example, the artifacts that make up the build are referenced by ID/name/hash.
The problem with the current Jenkins/Artifactory/Gradle plugin that we use is that our build is separated amongst many smaller builds, but ultimately they are released as one. This makes producing a full report somewhat difficult, and gives us no easy way to do license checks that include the RPM files. We want to be able to publish a single build that contains everything we know about the build.
The current setup also has us uploading our JARs into a Maven repository, which adds time to the builds, given we are also publishing the same content into the generic repository alongside the RPMs and other content.
Thanks!
The build info JSON is fully documented in the README of this repository: https://github.com/JFrogDev/build-info. This is also the repository that holds the code of the build info engine used by the various JFrog CI/build plugins. You can definitely create your own BI JSON, and if you're going to use Java to do that, you should check out this project, which demonstrates the usage of the various build info Java APIs: https://github.com/JFrogDev/project-examples/tree/master/build-info-java-example
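If Java is not convenient, you can also assemble the JSON by hand and publish it with the build upload REST endpoint. A rough Python sketch; the exact required fields come from the build-info schema in that repository, and the build name, module IDs, checksums, URL and credentials below are placeholders:

# Sketch: hand-assemble a minimal build-info object and publish it via
# Artifactory's build upload endpoint (PUT /api/build). All values are
# placeholders; consult the build-info schema for the full field list.
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"

build_info = {
    "name": "my-combined-build",
    "number": "42",
    "started": "2016-05-01T12:00:00.000+0000",
    "modules": [
        {
            "id": "com.example:my-app:1.0.0",
            "artifacts": [
                {"name": "my-app-1.0.0.jar", "type": "jar",
                 "sha1": "<sha1>", "md5": "<md5>"},
                {"name": "my-app-1.0.0.x86_64.rpm", "type": "rpm",
                 "sha1": "<sha1>", "md5": "<md5>"},
            ],
        }
    ],
}

resp = requests.put(f"{ARTIFACTORY_URL}/api/build",
                    json=build_info, auth=("user", "password"))
resp.raise_for_status()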
Another option you may want to look into is the JFrog CLI, which recently brought support for associating artifact deployment/resolution with a build object and deploying it to Artifactory. This method is completely agnostic to the file types your build produces or the build tool you are using. Have a look at the official documentation here: https://www.jfrog.com/confluence/display/CLI/CLI+for+JFrog+Artifactory#CLIforJFrogArtifactory-BuildIntegration
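As an illustration of that flow (a sketch; the repository, file patterns, build name/number and the older "jfrog rt" command syntax are assumptions to adapt to your setup), a build script could drive the CLI like this:

# Sketch: use the JFrog CLI to associate uploaded files with a build and
# then publish the collected build info as one build entity.
# Repository, patterns and build coordinates are placeholders.
import subprocess

BUILD_NAME = "my-combined-build"
BUILD_NUMBER = "42"

# Upload artifacts and record them against the build.
subprocess.run(
    ["jfrog", "rt", "upload", "dist/*.rpm", "generic-local/rpms/",
     f"--build-name={BUILD_NAME}", f"--build-number={BUILD_NUMBER}"],
    check=True,
)

# Publish the accumulated build info to Artifactory.
subprocess.run(
    ["jfrog", "rt", "build-publish", BUILD_NAME, BUILD_NUMBER],
    check=True,
)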
Lastly, if you are using Jenkins, the Jenkins Artifactory Plugin now has Pipeline APIs that will allow you to collect artifacts and build information programmatically, and even concatenate multiple build info objects to deploy them as a single build entity to Artifactory, which is pretty wicked. Have a read about this here: https://wiki.jenkins-ci.org/display/JENKINS/Artifactory+-+Working+With+the+Pipeline+Jenkins+Plugin
I've been reading about Drupal install profiles, and I'm wondering if there's much of a difference between using a packaged install profile vs. installing core and manually installing the modules listed in the install profile?
I'd like to do the latter (manually installing each) to control the versions of each module installed, which I can't control with a packaged install profile that may not have been maintained.
But should I, or will I be opening the door to something I'm not aware of? Shouldn't the two be identical, just one automated and the other manual?
What kiamlaluno said, plus the fact that installation profiles may perform custom configuration of settings on install, might construct custom views/content-types/etc. (especially by means of features.module, which you can see heavy use of in OpenAtrium), and might provide other custom code in a distro-specific module.
The short answer is: no, you can't just replicate an install profile by downloading a clean Drupal with all those modules -- your best bet is to use the install profile. If you're worried about module versions, just make sure you're using a profile that's actively maintained.
The difference is that an installation profile includes the right version of all the modules it needs.
This means that, unlike when you manually install each module, you don't need to verify which version of module X actually works together with module Y; there are a few cases where one module doesn't work well when version A of another module is installed, and you need to install version B of that module instead if you want to avoid problems.
An installation profile can have a custom installation page that allows you to change some parameters of your site; it also allows the installation profile author to define a patch that needs to be applied to a module, in order to fix a bug in the module or to make it work better with another module.
If you need to set up a site for a particular purpose, installation profiles are useful because they allow you to set the site up correctly without having to know all the details about how a Drupal site needs to be configured.
I believe you can specify the versions of the modules you want to install.