Atom package dependency on Unix utils - atom-editor

I am interested in writing an Atom package that may depend on and call a Unix util, specifically a2ps: https://www.gnu.org/software/a2ps/. Is this possible? Thanks.

Yes, this is possible: in Atom you have the Node.js platform at your disposal, so you can make system calls with child_process.spawn.
An example of how to do this can be found in the graphviz-preview-plus package, specifically in renderGraphVizWithCLI.js (which is called from renderGraphViz.js).

Related

R DESCRIPTION file: is it possible to "conditionally" import packages?

Is it possible to include a "conditional import" in a package's DESCRIPTION file?
For example, I am developing a package which schedules system tasks. On Windows this is achieved with Task Scheduler and the taskscheduleR package, on Unix with the cronR package. So intuitively it would be useful to do something like the following:
DESCRIPTION
Package: pkgname
Version: 0.0.1
[more fields]
Imports:
dplyr,
if (.Platform$OS.type == "windows") "taskscheduleR" else "cronR",
tidyr
I suppose it would be possible to write an .onAttach() or similar which checks the system type and installs the relevant package if not already present, but that doesn't seem like a particularly nice solution - firstly, it relies on the user attaching the package while connected to the web before they can use it, and secondly it breaks the formal dependency chain.
My current approach is to include both packages in Suggests, with the responsibility then on the user to install the correct package for their system.
I think this might be possible with a configure shell script as described in Writing R Extensions. But I don't have experience doing that. You can also do platform-dependent stuff in your NAMESPACE files (that won't help here, but see the source of base package parallel for an example).
You could help your users by using the fact that "The R and man subdirectories may contain OS-specific subdirectories named unix or windows." This allows you to have OS-specific code, which could then do the usual check for package availability.
E.g., in the windows subdirectory, you'd have something like:
# check at run time whether the OS-specific backend is available
if (requireNamespace("taskscheduleR", quietly = TRUE)) {
  taskscheduleR::taskscheduler_create(...)
} else {
  stop("Please install the taskscheduleR package to use this functionality")
}
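The resulting package layout might then look something like this (a sketch only; the file names are illustrative):
pkgname/
  DESCRIPTION      # lists both taskscheduleR and cronR under Suggests
  R/
    common.R       # platform-independent code
    unix/
      schedule.R   # cronR-based implementation
    windows/
      schedule.R   # taskscheduleR-based implementation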

R FAQ for package tcltk mentions "teacup". What is this and how can I use it?

In the R FAQ section 4.6 (Package TclTk does not work) I found the following sentence:
... although they [missing Tcl/tk packages] may be downloaded via the Teacup facility
What is "teacup"? How can I install and use it?
I am using RStudio running on Ubuntu Linux and Windows 7.
Teacup is a program that ships as part of ActiveTcl, a commercial zero-cost distribution of Tcl (and Tk and many other packages) for various platforms. It does package management: downloading, installing and upgrading packages from a remote repository. It is not open source, though Tcl itself is (as are the majority of packages that aren't single-company-specific).
If you've got it installed, you use these commands from a shell:
teacup update-self
teacup update
Depending on where your Tcl installation is, you might need to elevate privileges to make these commands work. How you do this is platform-dependent: on Unix it's usually simplest to use sudo for each of the commands, whereas on Windows it is probably easier to create an elevated command shell and run inside that.
Depending on your site, you might need to configure a web proxy with teacup proxy. Try without first.
If you're using a non-ActiveTcl installation but you have an ActiveTcl installation present, you can still use teacup. You just need to use teacup link to connect that Tcl installation to the teacup local repository. This is slightly more complex because you can have multiple repositories on the one system (though I've never needed that).
First, you find where the repository is:
teacup default
Then you need to link the shell to the repository:
teacup link make $PATH_FROM_TEACUP_DEFAULT $LOCATION_OF_TCLSH_TO_LINK
Making this work with RStudio will be a matter of determining which Tcl installation it is using. If it's already an ActiveTcl, you just need the first part of this answer; otherwise, you need the second part as well. Also note that this pretty much requires that you be using either Tcl 8.5 or 8.6; there are no guarantees for older, unsupported versions.

Virtual environment in R?

I've found several posts about best practice, reproducibility and workflow in R, for example:
How to increase longer term reproducibility of research (particularly using R and Sweave)
Complete substantive examples of reproducible research using R
One of the major preoccupations is ensuring portability of code, in the sense that moving it to a new machine (possibly running a different OS) is relatively straightforward and gives the same results.
Coming from a Python background, I'm used to the concept of a virtual environment. When coupled with a simple list of required packages, this goes some way to ensuring that the installed packages and libraries are available on any machine without too much fuss. Sure, it's no guarantee - different OSes have their own foibles and peculiarities - but it gets you 95% of the way there.
Does such a thing exist within R, even if it's not as sophisticated? For example, simply maintaining a plain-text list of required packages and a script that will install any that are missing?
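Something along these lines is roughly what I have in mind (requirements.txt is a hypothetical file with one package name per line):
# read a plain-text list of package names and install whichever are missing
wanted <- readLines("requirements.txt")
missing <- setdiff(wanted, rownames(installed.packages()))
if (length(missing) > 0) install.packages(missing)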
I'm about to start using R in earnest for the first time, probably in conjunction with Sweave, and would ideally like to start in the best way possible! Thanks for your thoughts.
I'm going to use the comment posted by @cboettig in order to resolve this question.
Packrat
Packrat is a dependency management system for R. It gives you three important advantages, all of them focused on your portability needs (a minimal workflow sketch follows the links below):
Isolated : Installing a new or updated package for one project won’t break your other projects, and vice versa. That’s because packrat gives each project its own private package library.
Portable: Easily transport your projects from one computer to another, even across different platforms. Packrat makes it easy to install the packages your project depends on.
Reproducible: Packrat records the exact package versions you depend on, and ensures those exact versions are the ones that get installed wherever you go.
What's next?
Walkthrough guide: http://rstudio.github.io/packrat/walkthrough.html
Most common commands: http://rstudio.github.io/packrat/commands.html
Using Packrat with RStudio: http://rstudio.github.io/packrat/rstudio.html
Limitations and caveats: http://rstudio.github.io/packrat/limitations.html
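As a quick orientation, the typical workflow boils down to a handful of commands; a minimal sketch based on the walkthrough linked above:
install.packages("packrat")  # once per machine
packrat::init()      # give this project its own private package library
# ...install or remove packages as usual with install.packages()...
packrat::snapshot()  # record the exact package versions in packrat.lock
packrat::restore()   # on another machine, reinstall those exact versions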
Update: Packrat has been soft-deprecated and is now superseded by renv, so you might want to check out that package instead.
The Anaconda package manager conda supports creating R environments.
conda create -n r-environment r-essentials r-base
conda activate r-environment
I have had a great experience using conda to maintain different Python installations, both user-specific and several versions for the same user. I have tested R with conda and the jupyter-notebook and it works great, at least for my needs, which include RNA-sequencing analyses using the DESeq2 and related packages, as well as data.table and dplyr. There are many Bioconductor packages available in conda via bioconda, and according to the comments on this SO question, it seems like install.packages() might work as well.
It looks like there is another option from RStudio devs, renv. It's available on CRAN and supersedes Packrat.
In short, you use renv::init() to initialize your project library, and use renv::snapshot() / renv::restore() to save and load the state of your library.
I prefer this option to conda r-environments because here everything is stored in the file renv.lock, which can be committed to a Git repo and distributed to the team.
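A minimal sketch of that workflow (assuming renv has already been installed from CRAN):
renv::init()      # create the project-local library and renv.lock
# ...install packages as usual with install.packages()...
renv::snapshot()  # record the current package versions in renv.lock
# after cloning the repo on another machine:
renv::restore()   # reinstall exactly the versions recorded in renv.lock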
To add to this:
Note:
1. This assumes Anaconda is already installed.
2. The working directory is assumed to be C:\.
To create the desired environment -> "r_environment_name"
C:\>conda create -n "r_environment_name" r-essentials r-base
To see the available environments
C:\>conda info --envs
...
To activate the environment
C:\>conda activate "r_environment_name"
(r_environment_name) C:\>
Launch Jupyter Notebook and let the party begin
(r_environment_name) C:\> jupyter notebook
For an equivalent of Python's requirements.txt, perhaps this link will help -> Is there something like requirements.txt for R?
Check out roveR, the R container management solution. For details, see https://www.slideshare.net/DavidKunFF/ownr-technical-introduction, in particular slide 12.
To install roveR, execute the following command in R:
install.packages("rover", repos = c("https://lair.functionalfinances.com/repos/shared", "https://lair.functionalfinances.com/repos/cran"))
To make full use of the power of roveR (including installing specific versions of packages for reproducibility), you will need access to a laiR. For CRAN, you can use our laiR instance at https://lair.ownr.io; for uploading your own packages and sharing them with your organization, you will need a laiR license. You can contact us at the email address in the presentation linked above.

Build R package with C Code for Linux and Windows

I have been doing research and I can't quite figure out how to build my R package, which calls C functions, so that it works in both Windows and Linux environments. I am building the package on a Linux machine.
I have two C files, one.c and two.c, which I place in the src directory after using package.skeleton(...). In the NAMESPACE file I use the directive useDynLib(one, two). Is this correct? Or do I need to put the actual function names instead of the file names? Do I need to export the function names?
Do I need to put the .so files in the src directory, or will these be created automatically? I am worried that it then won't work on a Windows machine, which needs a .dll file.
As you can see I'm a little confused, thanks for the help.
One of the standard R manuals is Writing R Extensions, and section 5 of this manual, System and foreign language interfaces, will probably answer the majority of your questions. In regard to the dynamically linked libraries (.dll or .so): they are built on the fly. You develop your package, including the C code; once you install the package from source (e.g. using R CMD INSTALL spam) or create a binary distribution, the C code is compiled into the appropriate library file.
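To give a flavour of what the manual describes: the first argument of useDynLib is the name of the shared object (for a package, usually the package name), and any further arguments are routine names, not source file names; you then call those routines from thin R wrappers. A minimal sketch, where mypkg, c_one and one are hypothetical names:
# In NAMESPACE -- refer to routine names, not source file names:
#   useDynLib(mypkg, c_one)
#   export(one)

# In R/one.R -- a thin R wrapper around the compiled routine:
one <- function(x) {
  .Call(c_one, as.numeric(x))
}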
Faced with similar headaches, I switched to C++ in combination with Rcpp, which takes care of all of this for you when compiling packages:
http://dirk.eddelbuettel.com/code/rcpp.html
There is also an entire vignette on how to build a package using Rcpp:
http://dirk.eddelbuettel.com/code/rcpp/Rcpp-package.pdf
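For a taste of how little boilerplate is involved, Rcpp can compile and expose a C++ function from the R prompt in a single call (a minimal sketch; the function addTwo is just an example):
library(Rcpp)

# compile the C++ snippet and make addTwo callable from R
cppFunction("
  int addTwo(int x, int y) {
    return x + y;
  }
")

addTwo(1, 2)  # returns 3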

Dependency management in R

Does R have a dependency management tool to facilitate project-specific dependencies? I'm looking for something akin to Java's maven, Ruby's bundler, Python's virtualenv, Node's npm, etc.
I'm aware of the "Depends" clause in the DESCRIPTION file, as well as the R_LIBS facility, but these don't seem to work in concert to provide a solution to some very common workflows.
I'd essentially like to be able to check out a project and run a single command to build and test the project. The command should install any required packages into a project-specific library without affecting the global R installation. E.g.:
my_project/.Rlibs/*
Unfortunately, Depends: within the DESCRIPTION file is all you get, for the following reasons:
R itself is reasonably cross-platform, but that means we need this to work across platforms and OSs
Encoding Depends: beyond R packages requires expressing the dependency in a portable manner across operating systems; good luck encoding even something simple such as 'a PNG graphics library' in a way that can be resolved unambiguously across systems
Windows does not have a package manager
AFAIK OS X does not have a package manager that mixes what Apple ships and what other Open Source projects provide
Even among Linux distributions, you do not get consistency: just take RStudio as an example, which ships as two different packages (each declaring its own dependencies!) for RedHat/Fedora and Debian/Ubuntu
This is a hard problem.
The packrat package is precisely meant to achieve the following:
install any required packages into a project-specific library without affecting the global R installation
It allows installing different versions of the same packages in different project-local package libraries.
I am adding this answer even though this question is 5 years old, because this solution apparently didn't exist yet at the time the question was asked (as far as I can tell, packrat first appeared on CRAN in 2014).
Update (November 2019)
The new R package renv replaced packrat.
As a stop-gap, I've written a new rbundler package. It installs project dependencies into a project-specific subdirectory (e.g. <PROJECT>/.Rbundle), allowing the user to avoid using global libraries.
rbundler on Github
rbundler on CRAN
We've been using rbundler at Opower for a few months now and have seen a huge improvement in developer workflow, testability, and maintainability of internal packages. Combined with our internal package repository, we have been able to stabilize development of a dozen or so packages for use in production applications.
A common workflow:
Check out a project from github
cd into the project directory
Fire up R
From the R console:
library(rbundler)
bundle('.')
All dependencies will be installed into ./.Rbundle, and an .Renviron file will be created with the following contents:
R_LIBS_USER='.Rbundle'
Any R operations run from within this project directory will adhere to the project-specific library and package dependencies. Note that, while this method uses the package DESCRIPTION file to define dependencies, your project needn't have an actual package structure. Thus, rbundler becomes a general tool for managing an R project, whether it be a simple script or a full-blown package.
You could use the following workflow:
1) Create a script file which contains everything you want to set up, and store it in your project directory as e.g. projectInit.R
2) Source this script from your .Rprofile (or any other file executed by R at startup) with a try statement:
try(source("./projectInit.R"), silent=TRUE)
This guarantees that even when no projectInit.R is found, R starts without an error message.
3) If you start R in your project directory, the projectInit.R file will be sourced if present, and you are ready to go.
This is from a Linux perspective, but it should work the same way under Windows and Mac as well.
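For illustration, a projectInit.R along these lines would also give the project its own library (the path .Rlibs and the package name are just examples):
# projectInit.R -- sourced via the try() call in .Rprofile above

# give the project its own library so installs don't touch the global one
lib <- file.path(getwd(), ".Rlibs")
dir.create(lib, showWarnings = FALSE)
.libPaths(c(lib, .libPaths()))

# any other per-project setup goes here, e.g. loading packages:
library(dplyr)  # dplyr is just an example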
