What happens when two different Julia Project.toml files have the same project name and the same depot path? Will instantiating one cause the other's cache to go stale?
I assume by cache you mean the set of packages stored in a depot.
Pkg.instantiate() will ensure that all package versions which exist in the active dependency graph (as specified by the manifest file) exist somewhere in the depot path. In general, Pkg decouples the set of dependencies required by any given project from the set of packages stored in depots. This is why Julia's projects are so light: different projects are free to share dependencies so that there is no unnecessary duplication.
The fact that two different projects have the same name really has no bearing on this process.
Note: although a given project can only have a single version of a dependency, a depot is free to store any number of versions of the same package.
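To make this concrete, here is a minimal sketch; the paths and the shared project name are hypothetical:

using Pkg

# Two unrelated projects that happen to both be named "MyProject":
#   /home/user/work/A/Project.toml
#   /home/user/work/B/Project.toml

Pkg.activate("/home/user/work/A")
Pkg.instantiate()   # installs A's manifest versions into the depot

Pkg.activate("/home/user/work/B")
Pkg.instantiate()   # reuses any versions already present; installs the rest.
                    # Nothing belonging to A is removed or invalidated.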
In case you are referring to the precompile cache: there was an issue with multiple versions of the same package clobbering each other. The fix should be in Julia 1.3.
I am trying to get a better understanding of how the renv package in R works and how it interacts with git. Here are my questions:
Assume I have master and a couple of git branches in my R project, and for each of them (master and the branches) I would like to use a different environment (different libraries, or different versions of the same libraries). Would renv be able to handle this, i.e. if I switch from one branch to another, will I need to call renv::restore()?
I have two separate projects with renv running in both of them; call them project A and project B. I would like to take the environment from project B and replace the environment in project A. How can I accomplish this? Do I just need to copy the renv folder from one project to the other?
Assume I have master and a couple of git branches in my R project, and for each of them (master and the branches) I would like to use a different environment (different libraries, or different versions of the same libraries). Would renv be able to handle this, i.e. if I switch from one branch to another, will I need to call renv::restore()?
renv excludes the project library from version control (to avoid bloating the repository size), so normally you would need to restore the library when switching branches.
This is a bit arduous, so another solution would be to configure renv to use a different library path for each branch of your git repository. You could accomplish this with something like the following in your project's .Rprofile:
# Determine the currently checked-out git branch
branch <- system("git rev-parse --abbrev-ref HEAD", intern = TRUE)

# Point renv at a branch-specific library path
Sys.setenv(RENV_PATHS_LIBRARY = file.path("renv/library/branches", branch))
This way, renv would be configured to use a separate library for each branch, and the switch would happen automatically as you change branches.
This seems like something that would be useful to have in general; you might consider filing a feature request at https://github.com/rstudio/renv/issues.
I have two separate projects with renv running in both of them; call them project A and project B. I would like to take the environment from project B and replace the environment in project A. How can I accomplish this? Do I just need to copy the renv folder from one project to the other?
That should suffice, although it depends on how much of the environment you want to copy. The renv folder contains a settings.dcf file defining project settings; you may or may not want to copy those settings over as well. (See ?renv::settings for documentation on renv's project-specific settings.)
Alternatively, you could copy renv.lock from project B to project A, and then call renv::restore(). This might be more appropriate if you were, for example, copying a project from one machine to another, especially if those machines run different operating systems.
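For example, a minimal sketch (the project paths here are hypothetical):

# Copy project B's lockfile over project A's ...
file.copy("~/projects/B/renv.lock", "~/projects/A/renv.lock", overwrite = TRUE)

# ... then rebuild project A's library from the copied lockfile
renv::restore(project = "~/projects/A")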
I'm currently starting out with JFrog Artifactory. Up to now I have only been working with source control systems, not with binary repositories.
Can someone please tell me how the versioning of files is done in Artifactory?
I have been trying to deploy a file, then change it and deploy it again.
The checksum has changed, so it's a new file, but it seems that the old version is gone. So it looks like there is no versioning of files. If I want that, do I have to encode the version in the filename?
I found versioning related to packages, but I was hoping to use it for other files as well.
Thanks for your help
Christoph
Artifactory, unlike a VCS, does not manage a history of versions for a given path. When you deploy an artifact over an existing artifact, it will overwrite it (you can block this by configuring the right permissions).
If you wish to manage versions for generic artifacts (ones which are not managed by a known package manager like npm, Maven, etc.), there are a couple of approaches you can take:
Add the version as part of the artifact name, for example foo-1.0.0.zip
Add the version as part of the artifact path, for example /foo/1.0.0/foo.zip
Combine the 2 above approaches, for example /foo/1.0.0/foo-1.0.0.zip
Use an existing package management tool which is flexible enough to handle generic packages. Many people are using Maven to manage all types of packages beyond Java ones (it comes with its pros and cons)
From the Artifactory point of view there are a couple of capabilities you can leverage:
Generic repositories - aimed at managing proprietary packages which are not managed by a known package manager
Custom repository layout - can be used to define a custom layout for your generic repository and assist with tasks like automatic snapshot version cleanup
Properties - can be used to add version (and other) metadata to your artifacts, which can then be used for searching, querying, resolution and more (see the sketch after this list)
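For example, properties can be attached as matrix parameters on the deploy URL; in this sketch the server, repository, path and property names are all illustrative:

PUT https://myserver/artifactory/generic-local/foo/1.0.0/foo.zip;version=1.0.0;qa.status=approved

This deploys foo.zip and tags it with the properties version=1.0.0 and qa.status=approved in a single request.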
Lastly, Conan is another option you should consider. Conan is a package manager intended for C and C++ packages. It is natively supported in Artifactory and can give you a more complete solution for managing your C libraries.
So I've been trying to do some AWS Lambda work with Scala and sbt, and one of the recommendations is:
Minimize your deployment package size to its runtime necessities. This will reduce the amount of time that it takes for your deployment package to be downloaded and unpacked ahead of invocation. For functions authored in Java or .NET Core, avoid uploading the entire AWS SDK library as part of your deployment package.
I've been using sbt-assembly to create fat jars for my code and dependencies, but it seems that sbt-assembly packages all library dependencies, even though I only use about 10% of the aws-core library, which adds a lot of content. I was wondering whether there is something I can do to cut the dependencies down to what is actually imported by my code (and their transitive dependencies).
As far as I know, there is no direct and safe way to selectively include only the needed classes in a fat jar generated by the sbt-assembly plugin.
First of all, you should understand that sbt plugins just provide settings and jar files required to invoke their tasks in your project. This means that a plugin is brought into your project as a pre-compiled jar file, determined by the version you specified in your build settings (e.g., plugins.sbt in your project directory).
For example, the jars of sbt-assembly are brought into your project from this link when you declare that you want to use sbt-assembly (though recent sbt versions may bring it in by default).
Therefore, you have at least two choices for shrinking your jar file.
Compile the jar file from scratch
For sbt-aws, the source code is provided at this link, so you may selectively compile the sources to get just the classes that your program is going to use.
Use a tool for shrinking the jar file
There are several tools that shrink a jar file based on its dependencies. The most popular is ProGuard; it seems that there is ProGuard support for sbt, sketched below.
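For instance, with the sbt-proguard plugin the build might look roughly like this. This is only a sketch: the plugin coordinates and setting keys vary between plugin versions, and the handler class is illustrative, so check the plugin's README before copying.

// project/plugins.sbt -- verify the current coordinates in the README
addSbtPlugin("com.github.sbt" % "sbt-proguard" % "0.5.0")

// build.sbt
enablePlugins(SbtProguard)

// Keep everything reachable from your entry point; ProGuard strips the rest
Proguard / proguardOptions ++= Seq(
  "-keep class com.example.Handler { *; }",  // illustrative entry point
  "-dontnote",
  "-dontwarn"
)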
Warning
As mentioned in another Stack Overflow answer, selectively choosing some classes from the jar may cause your program to crash, depending on the input values and several other conditions. You've said that only 10 percent of the jar file is used, but you cannot be sure that the other classes are never required by your code or by the libraries your project depends on. When you use a tool to help shrink the jar file, be especially careful if the program is security-critical.
I have an Artifactory with several projects, each having various versions of .jar files within.
How can I use Fortify to identify a vulnerability within a specific version of a .jar file, say v1.2.1?
On top of this, I may have several latest versions.
Since v1.2.1 has been identified as having a vulnerability with the help of Fortify, how do I identify the list of projects that internally use this v1.2.1 .jar file?
If you are using one of the build integrations (i.e. you generate a buildInfo), you can either:
1. Retrieve the BuildInfo json and extract the list of project dependencies from there (also, you can take a look here, here, and the best option, here). Or,
2. Use the Artifactory Query Language (AQL) to get a list of dependencies used in the project (build), as sketched below.
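As a sketch, an AQL query against the builds domain could look like the following; the jar name is illustrative, and you may prefer to match on the dependency's sha1 instead:

builds.find({
  "module.dependency.item.name": {"$eq": "mylib-1.2.1.jar"}
})

Run via the REST API, this returns the builds whose recorded dependencies include that jar, which maps back to the projects consuming it.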
P.S. I work at JFrog.
I have several Scala modules that I'm building with sbt. Some of them (I'll call them dependent modules) are being published to Artifactory and then used by top-level modules.
All changes to the code are done in separate git branches. When the feature (or bugfix) is done, that branch is compiled in Jenkins and then deployed to the testing instance and handed off to the QA team.
So it's possible that there will be several git branches with different code in the dependent modules.
The problem is that Ivy caches these modules locally, so it's possible that the top-level module will be built with a dependent module from a different branch (taken from the local cache).
I've tried adding the changing() directive to the dependency specification in build.sbt.
In that case Ivy ignores the local cache and goes to Artifactory to download the POM file every time. It then parses the POM file but concludes that it already has the jar file with that version in the local cache, so it fetches the jar from the local cache and not from Artifactory. Which isn't what I want.
Since the code in branches hasn't been integrated into the master branch at this point, it's perfectly valid that different feature branches have the same version number, but different code.
Is there a way to tell Ivy (via sbt) to ignore the local cache for a certain groupId? Or at least for a single dependency?
If you are using versioning for your dependent modules, then each code change must produce a different version. Ivy and Maven expect that once an artifact has been published with a specific version, it will stay unchanged forever; that is why they use cached files. If you want to download a fresh version from the repository on every compile, you should add the -SNAPSHOT suffix to the dependent module's version number (e.g. dep-module-1.1.1-SNAPSHOT).
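A minimal sketch of what that looks like in sbt; the module coordinates are illustrative:

// In the dependent module's build.sbt: mark in-progress branch builds as snapshots
version := "1.1.1-SNAPSHOT"

// In the top-level module's build.sbt: Ivy treats -SNAPSHOT versions as
// changing, so it re-checks the repository instead of trusting its cache
libraryDependencies += "com.example" %% "dep-module" % "1.1.1-SNAPSHOT"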