send2cy doesn't work in RStudio ~ CyREST, Cytoscape - r

When I run the "send2cy" function in RStudio, I get an error.
# Basic setup
library(igraph)
library(RJSONIO)
library(httr)
dir <- "/currentdir/"
setwd(dir)
port.number = 1234
base.url = paste("http://localhost:", toString(port.number), "/v1", sep="")
print(base.url)
# Load list of edges as Data Frame
network.df <- read.table("./data/eco_EM+TCA.txt")
# Convert it into igraph object
network <- graph.data.frame(network.df,directed=T)
# Remove duplicate edges & loops
g.tca <- simplify(network, remove.multiple=T, remove.loops=T)
# Name it
g.tca$name = "Ecoli TCA Cycle"
# This function will be published as a part of utility package, but not ready yet.
source('./utility/cytoscape_util.R')
# Convert it into Cytoscape.js JSON
cygraph <- toCytoscape(g.tca)
send2cy(cygraph, 'default%20black', 'circular')
Error in file(con, "r") : cannot open the connection
Called from: file(con, "r")
But I don't get this error when I use the "send2cy" function from terminal R (running R from the terminal by just calling "R").
Any advice is welcome.

I tested your script with local copies of the network data and utility script, and with updated file paths. The script ran fine for me in RStudio.
Given the error message you are seeing ("Error in file...") I suspect the issue is with your local files and file paths, somehow in an RStudio-specific way (for example, RStudio starting in a different working directory than your terminal session)?
FYI: an updated, consolidated set of R scripts for Cytoscape is available here: https://github.com/cytoscape/cytoscape-automation/tree/master/for-scripters/R. I don't think anything has significantly changed, but perhaps trying in a new context will resolve the issue you are facing.
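If it helps to narrow that down, a quick path check along these lines (just a sketch, nothing send2cy-specific) will show whether the RStudio session is starting in the directory you expect and can see the files the script reads:
# Compare what RStudio sees with what terminal R sees
print(getwd())                            # is this the project root you expect?
file.exists("./data/eco_EM+TCA.txt")      # edge list read by read.table()
file.exists("./utility/cytoscape_util.R") # script pulled in by source()
# If either is FALSE, set the working directory explicitly before running,
# e.g. setwd("/currentdir/") with the real project path substituted in.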

Related

How to export sf object to GDB using RPyGeo in R (Windows)?

I have a bunch of sf objects I'd like to export to GDB from R. I'm running R 4.0.2 on Windows 10. In this case the sf objects are all vector point data. The main reasons to export to GDB are to keep longer field names (the shapefile truncation is very annoying), and because GDBs are more desirable storage locations for our workflows.
Yes, I know about the ArcGisBinding package. I've got it to work in a test script, but it's pretty unstable, often crashing and requiring a restart of R. This is a problem because the sf objects I'd like to export come after an already long Rmd that reads in, formats and cleans the data. So it's not a simple matter of re-running the script until arc.write doesn't break. I could break up the script, but then I'd still have to read in a bunch of shapefiles. One option I haven't yet explored is using reticulate to call a python script instead of trying to do everything in R, but we're trying to do our analysis all in one place, if possible.
I'm pretty sure I've managed to set up RPyGeo appropriately, first setting my python path using the reticulate package. I'm doing it this way because IT restrictions mean I can't edit PATH variables on my machine.
#package calls
library(sf)
library(spData)
library(reticulate)
#set python version in reticulate
py_path <- "C:/Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/python.exe"
reticulate::use_python(python = py_path, required = TRUE)
#call RPyGeo
library(RPyGeo) # for potential point export
#output gdb
out.gdb <- "C:/LOCAL_PROJECTS/Output/Output.gdb"
#RPyGeo Parameters
# Note that, in order to use RPyGeo you need a working ArcMap or ArcGIS Pro installation on your computer.
# python path - note that this will change depending on which version of Arc one is using
# py_path <- "C:/Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/python.exe"
arcpy <- rpygeo_build_env(workspace = out.gdb,
                          overwrite = TRUE,
                          extensions = c("Spatial", "DataInteroperability"),
                          path = py_path)
I've tried a bunch of different tools to export an sf object, here using dummy data that is also used in the RPyGeo vignette:
data(nz, package = "spData")
arcpy$Copy_management(in_data = nz,out_data = "nz_test")
arcpy$Copy_management(in_data = nz,out_data = file.path(out.gdb,"nz"))
arcpy$FeatureClassToGeodatabase_conversion(Input_Features = nz,Output_Geodatabase = out.gdb)
arcpy$FeatureClassToFeatureClass_conversion(in_features = nz,out_path = out.gdb,out_name = "nz")
arcpy$QuickExport_interop(Input = nz,Output = file.path(out.gdb,"nz"))
arcpy$CopyFeatures_management(in_features = nz,out_feature_class = file.path(out.gdb,"nz"))
arcpy$CopyFeatures_management(in_features = nz,out_feature_class = "nz")
Each time I get an error, for example:
Error in py_call_impl(callable, dots$args, dots$keywords) :
  RuntimeError: Object: Error in executing tool
Detailed traceback:
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 3232, in CopyFeatures
    raise e
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\management.py", line 3229, in CopyFeatures
    retval = convertArcObjectToPythonObject(gp.CopyFeatures_management(*gp_fixargs((in_features, out_feature_class, config_keyword, spatial_grid_1, spatial_grid_2, spatial_grid_3), True)))
  File "C:\Program Files\ArcGIS\Pro\Resources\ArcPy\arcpy\geoprocessing\_base.py", line 511, in <lambda>
    return lambda *args: val(*gp_fixargs(args, True))
I'm not an expert in ArcPy by any means. Nor am I an expert in tracing errors inside packages. Am I making a simple syntax mistake? Is there something else that I'm missing? Any help would be much appreciated!
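For what it's worth, arcpy geoprocessing tools generally take a path to a dataset (or a feature class name in the workspace) rather than an in-memory R object, so passing the sf object itself may be what trips things up. A hedged sketch of a workaround under that assumption, writing the sf object to disk first and handing arcpy the path (the temporary shapefile name is just for illustration, and a shapefile intermediate does reintroduce field-name truncation, so a GeoPackage might be worth testing too):
library(sf)
data(nz, package = "spData")
# Write the sf object to an intermediate file that arcpy can read from disk
tmp_shp <- file.path(tempdir(), "nz.shp")
sf::st_write(nz, tmp_shp, delete_dsn = TRUE)
# Then point the geoprocessing tool at the path, not the R object
# (assumes arcpy and out.gdb were set up with rpygeo_build_env() as in the question)
arcpy$CopyFeatures_management(
  in_features = tmp_shp,
  out_feature_class = file.path(out.gdb, "nz")
)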

RStudio IDE on Databricks

I am trying to import a dataset from my Databricks File System (DBFS) into RStudio, which is running on a Databricks cluster, and I am facing the issue below.
> sparkDF <- read.df(source = "parquet", path = "/tmp/lrs.parquet", header="true", inferSchema = "true")
Error: Error in load : java.lang.SecurityException: No token to authorize principal
  at com.databricks.sql.acl.ReflectionBackedAclClient$$anonfun$com$databricks$sql$acl$ReflectionBackedAclClient$$token$2.apply(ReflectionBackedAclClient.scala:137)
  at com.databricks.sql.acl.ReflectionBackedAclClient$$anonfun$com$databricks$sql$acl$ReflectionBackedAclClient$$token$2.apply(ReflectionBackedAclClient.scala:137)
  at scala.Option.getOrElse(Option.scala:121)
  at com.databricks.sql.acl.ReflectionBackedAclClient.com$databricks$sql$acl$ReflectionBackedAclClient$$token(ReflectionBackedAclClient.scala:137)
  at com.databricks.sql.acl.ReflectionBackedAclClient$$anonfun$getValidPermissions$1.apply(ReflectionBackedAclClient.scala:86)
  at com.databricks.sql.acl.ReflectionBackedAclClient$$anonfun$getValidPermissions$1.apply(ReflectionBackedAclClient.scala:81)
  at com.databricks.sql.acl.ReflectionBackedAclClient.stripReflectionException(ReflectionBackedAclClient.scala:73)
  at com.databricks.sql.acl.Refle
The DBFS location is correct; any suggestions or blog posts are welcome!
The syntax for reading data with R on Databricks depends on whether you are reading into Spark or into R on the driver. See below:
# reading into Spark
sparkDF <- read.df(source = "parquet",
                   path = "dbfs:/tmp/lrs.parquet")
# reading into R
r_df <- read.csv("/dbfs/tmp/lrs.csv")
When reading into Spark, use the dbfs:/ prefix; when reading into R directly, use /dbfs/.
We should use /dbfs before the directory path.
For example: /dbfs/tmp/lrs.parquet
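And if the end goal is a plain R data.frame on the driver rather than a Spark DataFrame, a small follow-on sketch (assuming SparkR is attached, as in the read.df call above):
# Read the parquet file into a Spark DataFrame, then pull it to the driver
sparkDF <- read.df(source = "parquet", path = "dbfs:/tmp/lrs.parquet")
r_df <- SparkR::collect(sparkDF)  # local R data.frame; only sensible for data that fits in driver memory
head(r_df)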

Finding and setting the working directory within a supercomputer

Obviously one can find and set working directories with getwd() and setwd(). This question is a bit more complicated. I'm running two files on a supercomputer: a regular .R file which calls the other file (a .stan file). The way I submit work to the supercomputer is that I have to zip a folder containing the data, the .R file, and the .stan file. I upload this folder, and I pull it in by setting it as one of the parameters in the supercomputer. I call the data using the standard read.csv() command and everything is hunky dory.
However, when I call the .stan file from the .R file, it can't access it because it needs to know the working directory.
This is the error I get:
> fit <- stan(file="ace_thresholds.stan", data=stanData, cores = 4)
Error in file(fname, "rt") : cannot open the connection
In addition: Warning messages:
1: In normalizePath(file) :
path[1]="ace_thresholds.stan": No such file or directory
2: In file(fname, "rt") :
cannot open file 'ace_thresholds.stan': No such file or directory
Error in get_model_strcode(file, model_code) :
cannot open model file "ace_thresholds.stan"
Calls: stan -> stan_model -> stanc -> get_model_strcode
Execution halted
When I tried setting the working directory to the unzipped NSG_stan folder (which is what I assumed the wd to be), I received this error:
fit <- stan(file="NSG_stan/ace_thresholds.stan", data=stanData, cores = 4)
Error in file(fname, "rt") : cannot open the connection
In addition: Warning messages:
1: In normalizePath(file) :
path[1]="NSG_stan/ace_thresholds.stan": No such file or directory
2: In file(fname, "rt") :
cannot open file 'NSG_stan/ace_thresholds.stan': No such file or directory
Error in get_model_strcode(file, model_code) :
cannot open model file "NSG_stan/ace_thresholds.stan"
Calls: stan -> stan_model -> stanc -> get_model_strcode
Execution halted
So I tried running print(getwd()) within the script, and in the printout I see that the wd is
"/projects/ps-nsg/home/nsguser/ngbw/workspace/NGBW-JOB-RTOOL_TG-EBE9CDBF28BF42AF8CB6EC9355006B3E/NSG_stan"
which means that the working directory will shift with every job. So to accurately set the working directory, I'd need to set it to the current folder from within the script. I looked at various posts on how to do this, like the following:
# install.packages("rstudioapi") # run this if it's your first time using it to install
library(rstudioapi) # load it
# the following line is for getting the path of your current open file
current_path <- getActiveDocumentContext()$path
# The next line sets the working directory to the relevant one:
setwd(dirname(current_path))
# you can make sure you are in the right directory
print( getwd() )
The issue with this is that it's a supercomputer, so it's difficult to install packages; every time I want to install something, I have to email the folks associated with the supercomputer, and that all takes time.
I've looked over this thread as well: R command for setting working directory to source file location in Rstudio. There appears to be a lot of dissent over what works; I tried a couple of the suggestions, and they didn't work.
setwd(getSrcDirectory()[1])
this.dir <- dirname(parent.frame(2)$ofile)
setwd(this.dir)
I've included the .R file below, in the event that it helps, but I think this is probably a pretty easy answer for someone with a decent amount of coding experience.
.R file (it's the "ace_thresholds.stan" call in this file that I either need to point at the current wd, or I need to add code that sets the wd so that the "ace_thresholds.stan" reference works). Does that make sense?
Thanks much!
# Prepare the data
dat <- ace.threshold.t2.samp
dat <- subset(dat, !is.na(rw))
dat$condition <- factor(dat$condition)
dat$pid <- factor(dat$pid)
# Dimensions and vectors passed to Stan
nTotal <- dim(dat)[1]
nCond <- length(unique(dat$condition))
nSubj <- length(unique(dat$pid))
intensity <- dat$rw
condition <- as.numeric(dat$condition)
pid <- as.numeric(dat$pid)
correct <- dat$correct_button == "correct"
chancePerformance <- 1/2
# Bundle everything into the list handed to the .stan model
stanData <- list(nTotal=nTotal, nLevels=nCond, nSubj = nSubj, subject = pid, intensity=intensity, level=condition, correct=correct, chancePerformance=chancePerformance)
# This is the call that fails when ace_thresholds.stan cannot be found from the working directory
fit.rw <- stan(file="ace_thresholds.stan", data=stanData, cores = 4, control=list(max_treedepth=15, adapt_delta=0.90))
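One base-R option that avoids installing anything on the cluster: when the batch system launches the job with Rscript, the script's own path appears in commandArgs(), so the containing directory can be recovered from there. A sketch under that assumption (it will not work in an interactive RStudio session):
# Recover the directory of the currently running script (Rscript jobs only)
args <- commandArgs(trailingOnly = FALSE)
file_arg <- grep("^--file=", args, value = TRUE)
if (length(file_arg) == 1) {
  script_path <- sub("^--file=", "", file_arg)
  setwd(dirname(normalizePath(script_path)))
}
print(getwd())  # should now be the unzipped job folder containing ace_thresholds.stan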

Download a custom dataset in Azure ML Jupyter/iPython Notebook using R

I need to download a custom dataset in an Azure Jupyter/iPython Notebook.
My ultimate goal is to install an R package. To be able to do this, the package (uploaded as a dataset) needs to be downloaded in code. I followed the steps outlined by Andrie de Vries in the comments section of this post: Jupyter Notebooks with R in Azure ML Studio.
Uploading the package as a ZIP file was without problems, but when I run the code in my notebook I get an error:
Error in curl(x$DownloadLocation, handle = h, open = conn): Failure when receiving data from the peer
Traceback:
download.datasets(ws, "plotly_3.6.0.tar.gz.zip")
lapply(1:nrow(datasets), function(j) get_dataset(datasets[j, . ], ...))
FUN(1L[[1L]], ...)
get_dataset(datasets[j, ], ...)
curl(x$DownloadLocation, handle = h, open = conn)
So I simplified my code into:
library("AzureML")
ws <- workspace()
ds <- datasets(ws)
ds$Name
data <- download.datasets(ws, "plotly_3.6.0.tar.gz.zip")
head(data)
Where "plotly_3.6.0.tar.gz.zip" is the name of my dataset of data type "Zip".
Unfortunately this results in the same error.
To rule out data type issues I also tried to download another dataset of mine which is of data type "Dataset". Also the same error.
Next, I changed the dataset I want to download to one of the sample datasets of Azure ML Studio.
"text.preprocessing.zip" is of datatype Zip
data <- download.datasets(ws, "text.preprocessing.zip")
"Flight Delays Data" is of datatype GenericCSV
data <- download.datasets(ws, "Flight Delays Data")
Both of the sample datasets can be downloaded without problems.
So why can't I download my own saved dataset?
I could not find anything helpful in the documentation of the download.datasets function, neither on rdocumentation.org nor on cran.r-project.org (pages 17-18).
Try this:
library(AzureML)
ws <- workspace(
  id = "your AzureML ID",
  auth = "your AzureML Key"
)
name <- "Name of your saved data"
data <- download.datasets(ws, name)
It seems the error I got was due to a bug in the (then early) Azure ML Studio.
I tried again after the reply of Daniel Prager only to find out my code works as expected without any changes. Adding the id and auth parameters was not needed.

Using R package BerkeleyEarth

I'm working for the first time with the R package BerkeleyEarth and attempting to use its convenience functions to access the BEST data. I think maybe it's just a problem with their servers (a matter I've separately raised with the package's maintainer), but I wanted to know if it's instead something silly I'm doing.
To reproduce my problem:
library(BerkeleyEarth)
downloadBerkeley()
which provides the following error message
trying URL 'http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip'
Error in download.file(urls$Url[thisUrl], destfile = file.path(destDir, :
cannot open URL 'http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip'
In addition: Warning message:
In download.file(urls$Url[thisUrl], destfile = file.path(destDir, :
InternetOpenUrl failed: 'A connection with the server could not be established'
Has anyone had a better experience using this package?
The error message is pointing to a different URL than the ones currently listed at http://berkeleyearth.org/data/ that point to the zip-formatted files. There is another set of .nc files that appears to be more recent. I would replace the entries in the BerkeleyUrls dataframe with the ones that match your analysis strategy:
This is the current URL that should be in position 1,1:
http://berkeleyearth.lbl.gov/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip
And this is the one that is in the package dataframe:
> BerkeleyUrls[1,1]
[1] "http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip"
I suppose you could try:
BerkeleyUrls[, 1] <- sub( "download\\.berkeleyearth\\.org", "berkeleyearth.lbl.gov", BerkeleyUrls[, 1])
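As a quick sanity check before kicking off the full download, something like the following (a sketch using httr; it assumes the BerkeleyUrls copy in your workspace is the one the download ends up using) will tell you whether the patched URLs actually respond:
library(httr)
# HEAD-request each patched URL and record the HTTP status (NA = not reachable)
statuses <- vapply(as.character(BerkeleyUrls[, 1]), function(u) {
  tryCatch(status_code(HEAD(u, timeout(20))), error = function(e) NA_integer_)
}, integer(1))
print(statuses)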
