How to get julia package UUID - julia

How can I get the UUID of a third-party package without installing it on my computer?
At the moment I can only see it by opening the Project.toml file after installing the package locally.

import Pkg
Pkg.METADATA_compatible_uuid("JuliaAstro")  # derives the UUID deterministically from the package name
Credit goes to Stefan K.
https://discourse.julialang.org/t/a-quick-and-dirty-tool-for-generating-project-toml/11375/22

The first thing that comes to mind is to find the package on GitHub and look at its Project.toml.
However, if you have an up-to-date Pkg system set up for the default package repository, there should also be a folder like ~/.julia/registries/General. Its Registry.toml file lists all packages of that registry by UUID, and its subdirectories for each letter from A to Z contain a folder with the metadata of every package.
(This is how it looks in my 1.0 installation; it may have changed a bit since then, but probably not much. Have a look at Pkg's documentation for details.)

Related

Determining which R packages, and dependencies, use DLL files

I work in a corporate environment that uses Microsoft Windows Defender Application Control (WDAC) to provide security. This blocks unsigned EXE and DLL files from being installed on devices, so R packages that use DLLs fail to install. The workaround is to provide an R installation from an approved central source, which also copies a default set of packages, such as tidyverse and data.table, into the R library. Users can continue to install additional packages written in native R, but run into issues if they try to install, build from source, or update packages that contain DLL files.
Is there a way to check whether a package uses DLL files in advance of installation?
Output something like:
check_dll(foo)
result: "This package and its dependencies have no DLL files. You can install this package."
check_dll(bar)
result: "bar does not have any DLL files, but one dependency, OOF, uses DLL files.
You already have a version of OOF installed, so it should be safe to install bar."
check_dll(foobar)
result: "foobar has a DLL. Do not attempt to install foobar."
check_dll(RABOOF)
result: "RABOOF does not have any DLL files, but one of its dependencies,
foobar, does have a DLL file. Do not attempt to install RABOOF."
tools::package_dependencies() will list the package dependencies, but nothing else.
Downloading the zip file from CRAN and inspecting it for a libs/x64 folder with contents would work, but seems a heavyweight approach. If a package has lots of dependencies, this could mean downloading a lot of files unnecessarily.
Look for the NeedsCompilation field in the DESCRIPTION file. If it is "yes", there will be a DLL. If it is "no", there probably won't be. (If it is not there, the package wasn't built properly, so all bets are off.)
The test is not perfect, because packages can put DLLs into the inst folder to get them installed without compiling them, though CRAN isn't supposed to allow that: "Source packages may not contain any form of binary executable code." Packages like pak (mentioned in the comments) may be allowed to get around this rule, e.g. by downloading binaries. You will also need to put together a blacklist of packages that will fail your WDAC tests even though they claim not to need compilation, containing pak and others like it.
The NeedsCompilation field is included as a column of the result of available.packages(), so it is very easy to access without trying to install the package.
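A minimal sketch of that idea, reusing a single available.packages() call for both the dependency lookup and the NeedsCompilation check (check_dll() is the hypothetical helper named in the question, not an existing function):

## Hypothetical helper, modelled on the question's check_dll() example
check_dll <- function(pkg) {
  ap   <- available.packages()                      # one CRAN query, reused below
  deps <- unlist(tools::package_dependencies(pkg, db = ap, recursive = TRUE))
  todo <- intersect(c(pkg, deps), rownames(ap))     # base/recommended packages are not listed
  compiled <- todo[ap[todo, "NeedsCompilation"] %in% "yes"]
  if (length(compiled) == 0) {
    message(pkg, " and its dependencies appear to need no compiled code.")
  } else {
    message("Do not install ", pkg, "; compiled code found in: ",
            paste(compiled, collapse = ", "))
  }
  invisible(compiled)
}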
I have accepted the answer from user2554330 as the best solution. It makes use of the normal set of commands used for package management, and the matrix generated by available.packages() can be passed to tools::package_dependencies(), removing the need for multiple internet queries.
For completeness I am documenting another possible solution. A script could query the unofficial CRAN Github mirror https://docs.r-hub.io/#cranatgh and look for a /src directory in each package project.
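A rough sketch of that idea using GitHub's contents API (has_src_dir() is a hypothetical helper, jsonlite is assumed to be installed, and unauthenticated API calls are rate-limited):

## Hypothetical helper: check the unofficial cran GitHub mirror for a src/ directory
has_src_dir <- function(pkg) {
  url <- sprintf("https://api.github.com/repos/cran/%s/contents/", pkg)
  listing <- tryCatch(jsonlite::fromJSON(url), error = function(e) NULL)
  if (is.null(listing)) return(NA)                  # not mirrored, or API limit reached
  any(listing$type == "dir" & listing$name == "src")
}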

Using R with git and packrat

I have been using git for a while but only recently started using packrat. I would like my repository to be self-contained, but at the same time I do not want to include CRAN packages, since they are already available. It seems that once R is opened in a project with packrat, it will try to use packages from the project library; if they are not available there, it will try to install them from the sources in the project library; and if those are not available either, it will look at the libraries installed on that computer. If a package is not available on the computer, will it look at CRAN next?
What files should I include in my git repo as a minimum (e.g., packrat.lock)?
You can choose to set up an external CRAN-like repository with the source tarballs of the packages and versions that you'd like available for your project. The default behaviour, however, is to look to CRAN next, as you've identified in your question. Check the packrat.lock file: you will see that for each package you use with packrat there is an entry Source: CRAN (if you've downloaded the package from CRAN, that is).
When you have a locally stored package source file, the lock file entry for said package changes to the following:
Package: FooPackage
Source: source
Version: 0.4-4
Hash: 44foo4036fb68e9foo9027048d28
SourcePath: /Users/MyName/Documents/code/myrepo/RNetica
I'm a bit unclear on your final question ("What files should I include in my git repo as a minimum (e.g., packrat.lock)?"), but I'm going to take it as a combination of a) which files must be present for packrat to run, and b) which of those files should be committed to the git repo. To answer the first part, I'll illustrate with initialising packrat on an existing R project.
When you run packrat::init(), two important things happen (among others):
1. All the packrat scaffolding, including source tarballs etc., is created under PackageName/packrat/.
2. packrat/lib*/ is added to your .gitignore file.
So from this we can see that anything under packrat/lib*/ doesn't need to be committed to your git repo. This leaves the following three files to be committed:
packrat/init.R
packrat/packrat.lock
packrat/packrat.opts
packrat.lock is needed for collaborating with others through a version control system; it helps keep your private libraries in sync. packrat.opts allows you to specify project-specific options for packrat. The file is automatically generated using get_opts and set_opts; committing it to the git repo ensures that any options you specify are maintained for all collaborators. A final file to commit to the repo is .Rprofile. This file tells R to use the private package library when R is started from the project directory.
Depending on your needs, you can choose to commit the source tarballs to the repository, or not. If you don't want them in your git repo, simply add packrat/src/ to the .gitignore. But this means that anyone accessing the git repo will not have access to the package sources, and the files will be downloaded from CRAN, or from wherever the Source line in the packrat.lock file dictates.
From your question, it sounds like committing the packrat/src/ folder contents to your repo might be what you need.
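For reference, a minimal sketch of setting this up (vcs.ignore.src is assumed to be the packrat option controlling this behaviour; check your packrat version's documentation):

install.packages("packrat")
packrat::init(".")                           # creates the packrat/ scaffolding and updates .gitignore
# Assumed option: FALSE keeps the packrat/src/ tarballs in the repo, TRUE git-ignores them
packrat::set_opts(vcs.ignore.src = FALSE)
packrat::snapshot()                          # record package versions in packrat/packrat.lock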

How to install RBloomberg package

I'm at a loss as to how to install the RBloomberg package. The only source for the files seems to be GitHub. The offered zip file is called blpwrapper-master.zip which embeds an rbloomberg folder. When I try to install the zip file (in RStudio), I get an error message that it cannot open a compressed file. I rezipped just the rbloomberg folder, but that led nowhere either. How does one go about this?
In general, things can be installed from GitHub using the devtools package. For example:
library("devtools")
install_github("username/packagename")
I don't know who authors RBloomberg, but you can swap the appropriate GitHub username into the above.
Note: sometimes this won't work because a developer uses a non-standard repository structure, but it should work in most cases. Indeed, that is the case here (as #rawr points out): you need to use a slightly different path to point to the package, which lives in a subdirectory of the git repo:
install_github("johnlaing/blpwrapper/rbloomberg")

Install an R package temporarily, only for the current session

Sometimes on Stack Overflow there's a question about a package which is not installed on my system, and which I don't plan to reuse later.
If I install the package with install.packages(), it will be put in one of my R install libraries, and then will take some storage space and be updated each time I run update.packages().
Is there a way to install a package only for the current R session?
You can install a package temporarily with the following function:
tmp.install.packages <- function(pack, dependencies = TRUE, ...) {
  path <- tempdir()
  ## Add 'path' to .libPaths, making sure it is not in first position,
  ## otherwise any other package installed during this session would
  ## also end up in 'path'
  firstpath <- .libPaths()[1]
  .libPaths(c(firstpath, path))
  install.packages(pack, dependencies = dependencies, lib = path, ...)
}
You can then use it simply like this:
tmp.install.packages("pkgname")
The package is installed in a temporary directory, and its files should be deleted at the next system restart (at least on Linux systems).
Another solution for this problem is dev_mode() from devtools. Dev mode lets you install packages into a separate development library, so your regular packages are untouched when you install development versions. For example:
library(devtools)
dev_mode()                        # switch to the development library
install_github("hadley/ggplot2")
dev_mode()                        # switch back to the regular library
You'll notice that your installed version has not changed.
pacman deals with package management issues like this:
library(pacman)
Now you can use:
p_load("pkgname")    # installs the package, or just loads it if already installed
# at the end of the session:
p_delete("pkgname")  # deletes the package from the library
This is a quick way to install into your library and then delete the package at the end (not really a temporary install).
As an addition to Tyler's answer, a p_temp function was recently added to the pacman package which does exactly what the question asks for.
library(pacman)
p_temp(pkgname)  # or p_temp("pkgname"); either works
This will install the package and any dependencies temporarily.
Disclosure: Tyler and I are co-authors of the pacman package...
The following is something in between juba's and sebastian-c's answers, and is as simple as this:
.libPaths("/my/path")
From now until the end of the current session, you can install packages as you normally would, and they will end up in /my/path. Package dependencies will also go to /my/path.
If you want to have control over dependencies, you can specify them manually with:
install.packages(c("pack", "dep1", "dep2", ...), dependencies = FALSE)
This approach might be useful in two particular scenarios:
A so-to-say discovery session: you want to discover new packages and install them casually to see if something interesting pops up. You can then use an OS-provided tempdir in .libPaths() to avoid messing up your R setup, and the OS will take care of the cleanup (a sketch follows below).
Creating reproducible environments, which is common nowadays:
you install a base R, then add .libPaths("my/project/dir"). By looking at this directory, you have a clear picture of what your project's package requirements are. Furthermore, you can copy this folder to another PC to reproduce the same environment. Much like Python's pipenv, you can have multiple isolated environments: for each session, you call .libPaths() with the relevant project directory.
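For the discovery-session case, a minimal sketch combining this answer with the tempdir() idea from the tmp.install.packages() answer above ("pkgname" is a placeholder):

# Throwaway library for this session; the OS cleans tempdir() up eventually
tmplib <- file.path(tempdir(), "Rlib")
dir.create(tmplib, showWarnings = FALSE)
.libPaths(c(tmplib, .libPaths()))   # throwaway library first, so installs default to it
install.packages("pkgname")         # installed into tmplib for this session only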

Dependency management in R

Does R have a dependency management tool to facilitate project-specific dependencies? I'm looking for something akin to Java's maven, Ruby's bundler, Python's virtualenv, Node's npm, etc.
I'm aware of the "Depends" clause in the DESCRIPTION file, as well as the R_LIBS facility, but these don't seem to work in concert to provide a solution to some very common workflows.
I'd essentially like to be able to check out a project and run a single command to build and test the project. The command should install any required packages into a project-specific library without affecting the global R installation. E.g.:
my_project/.Rlibs/*
Unfortunately, Depends: within the DESCRIPTION file is all you get, for the following reasons:
R itself is reasonably cross-platform, which means we need this to work across platforms and OSs
Encoding Depends: beyond R packages requires encoding the dependencies in a portable manner across operating systems; good luck encoding even something as simple as 'a PNG graphics library' in a way that can be resolved unambiguously across systems
Windows does not have a package manager
AFAIK OS X does not have a package manager that mixes what Apple ships and what other Open Source projects provide
Even among Linux distributions you do not get consistency: just take RStudio as an example, which comes in two packages (each bundling its own dependencies!) for RedHat/Fedora and Debian/Ubuntu
This is a hard problem.
The packrat package is precisely meant to achieve the following:
install any required packages into a project-specific library without affecting the global R installation
It allows installing different versions of the same packages in different project-local package libraries.
I am adding this answer even though this question is 5 years old, because this solution apparently didn't exist yet at the time the question was asked (as far as I can tell, packrat first appeared on CRAN in 2014).
Update (November 2019)
The new R package renv replaced packrat.
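For reference, a minimal sketch of the typical renv workflow (function names as documented by renv; details vary between versions):

install.packages("renv")
renv::init()       # create a project-local library and renv.lock
# ...install or update packages as usual...
renv::snapshot()   # record the project's package versions in renv.lock
renv::restore()    # reinstall those exact versions on another machine or checkout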
As a stop-gap, I've written a new rbundler package. It installs project dependencies into a project-specific subdirectory (e.g. <PROJECT>/.Rbundle), allowing the user to avoid using global libraries.
rbundler on Github
rbundler on CRAN
We've been using rbundler at Opower for a few months now and have seen a huge improvement in developer workflow, testability, and maintainability of internal packages. Combined with our internal package repository, we have been able to stabilize development of a dozen or so packages for use in production applications.
A common workflow:
Check out a project from github
cd into the project directory
Fire up R
From the R console:
library(rbundler)
bundle('.')
All dependencies will be installed into ./.Rbundle, and an .Renviron file will be created with the following contents:
R_LIBS_USER='.Rbundle'
Any R operations run from within this project directory will adhere to the project-specific library and package dependencies. Note that, while this method uses the package DESCRIPTION file to define dependencies, it needn't have an actual package structure. Thus rbundler becomes a general tool for managing an R project, whether it be a simple script or a full-blown package.
You could use the following workflow:
1) Create a script file that contains everything you want to set up, and store it in your project directory as e.g. projectInit.R
2) Source this script from your .Rprofile (or any other file executed by R at startup), wrapped in a try statement:
try(source("./projectInit.R"), silent=TRUE)
This guarantees that R starts without an error message even when no projectInit.R is found.
3) If you start R in your project directory, projectInit.R will be sourced if present, and you are ready to go.
This is from a Linux perspective, but it should work the same way under Windows and macOS.
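A hypothetical projectInit.R along those lines might look like this (the library path and package names are placeholders):

## projectInit.R -- sourced from .Rprofile when R starts in the project directory
libdir <- file.path(getwd(), ".Rlibs")
dir.create(libdir, showWarnings = FALSE)
.libPaths(c(libdir, .libPaths()))       # project-specific library first
needed  <- c("data.table", "ggplot2")   # placeholder list of project dependencies
missing <- setdiff(needed, rownames(installed.packages(lib.loc = libdir)))
if (length(missing)) install.packages(missing, lib = libdir)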
