How do I use getNodeSet (XML package) within a function?

I am trying to develop a script to extract information from xml files. After parsing the XML file I use
idNodes <- getNodeSet(doc, "//compound[@identifier='101.37_1176.0998m/z']")
to subset a particular part of the document and then extract information I need using lines such as
subject <- sapply(idNodes, xpathSApply, path = './condition/sample', function(x) xmlAttrs(x)['name'])
My xml file has hundreds of identifiers of the type 101.37_1176.0998m/z
It is not possible to load all of the identifiers at once, so I need to iterate through the file using getNodeSet followed by data extraction.
My script works fine if I enter the identifier manually, i.e.
idNodes <- getNodeSet(doc, "//compound[@identifier='101.37_1176.0998m/z']")
but I would like to write a function so I can use do.call to pass the function a list of identifiers.
I have tried
xtract <- function(id){
  idNodes <- getNodeSet(doc, "//compound[@identifier='id']")
}
but when I use this function, i.e.
xtract('102.91_1180.5732m/z')
or
compounds <- c("101.37_1176.0998m/z", "102.91_1180.5732m/z")
do.call("xtract", list(compounds))
it is clear that getNodeSet has not worked, i.e. there is no data to be extracted.
If I use
xtract(102.91_1180.5732m/z)
I get: Error: unexpected input in "xtract(102.91_"
Can anyone help resolve this problem?

In the function it should be
idNodes <- getNodeSet(doc, paste0("//compound[@identifier='", id, "']"))
then the following call will work
xtract('102.91_1180.5732m/z')
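For completeness, here is a minimal sketch of how the corrected function could be applied to a whole vector of identifiers (this assumes doc is your parsed document and the ./condition/sample structure from the question; wrapping the extraction in a plain lapply over the identifiers is a suggestion, not part of the original answer):
library(XML)

xtract <- function(id) {
  getNodeSet(doc, paste0("//compound[@identifier='", id, "']"))
}

compounds <- c("101.37_1176.0998m/z", "102.91_1180.5732m/z")

# one node set per identifier, then the sample names for each, as in the question
allNodes <- lapply(compounds, xtract)
subjects <- lapply(allNodes, function(idNodes)
  sapply(idNodes, xpathSApply, path = './condition/sample',
         function(x) xmlAttrs(x)['name']))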

Related

Parameter not passed to the function when using walk function in purrr package

I am using purrr::walk to read multiple Excel files and it failed. I have 3 questions:
(1) I used the function list.files to read the Excel file list in one folder, but the returned values also included the subfolders. I tried setting the parameters recursive= and include.dirs=, but it didn't work.
setwd(file_path)
files <- as_tibble(list.files(file_path, recursive = F, include.dirs = F)) %>%
  filter(str_detect(value, ".xlsx"))
files
(2) When I used the following piece of code, it ran without any error or warning message, but no data was returned.
### read the excel data
file_read <- function(value1) {
  print(value1)
  file1 <- read_excel(value1, sheet = 1)
}
walk(files$value, file_read)
When I used the following, it worked. Not sure why.
test <- read_excel(files$value, sheet = 1)
(3) In Q2, I actually want to create file1 to file6, supposing there are 6 Excel files. How can I dynamically assign the dataset names?
list.files has a pattern argument where you can specify what kind of files you are looking for, which lets you skip the filter(str_detect(value, ".xlsx")) step. Also, list.files only returns the files in the main directory (file_path) and not its subdirectories unless you specify recursive = TRUE.
library(readxl)
setwd(file_path)
files <- list.files(pattern = '\\.xlsx')
In the function you need to return the object.
file_read <- function(value1) {
  data <- read_excel(value1, sheet = 1)
  return(data)
}
Now you can use map/lapply to read the files.
result <- purrr::map(files, file_read)
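As for question (3), rather than creating file1 to file6 as separate objects, a hedged suggestion (not part of the original answer) is to keep everything in a named list so each dataset can be pulled out by its file name:
result <- purrr::map(files, file_read)
names(result) <- tools::file_path_sans_ext(files)   # e.g. "file1", "file2", ...
result[["file1"]]   # hypothetical file name, shown only for illustration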

Why can I only read one .json file at a time?

I have 500+ .json files that I am trying to get a specific element out of. I cannot figure out why I cannot read more than one at a time.
This works:
library(jsonlite)
files <- list.files('~/JSON')
file1 <- fromJSON(readLines('~/JSON/file1.json'), flatten = TRUE)
result <- as.data.frame(source = file1$element$subdata$data)
However, regardless of using different json packages (eg RJSONIO), I cannot apply this to the entire contents of files. The error I continue to get is...
# attempt to run the same code as a function over all contents in the file list
for (i in files) {
  fromJSON(readLines(i), flatten = TRUE)
  as.data.frame(i)$element$subdata$data
}
My goal is to loop through all 500+ and extract the data and its contents. Specifically if the file has the element ‘subdata$data’, i want to extract the list and put them all in a dataframe.
Note: the files are being read as ASCII (Windows OS). This does not have a negative effect on single extractions, but for the loop I get 'invalid character bytes'.
Update 1/25/2019
Ran the following but returned errors...
files <- list.files('~/JSON')
out <- lapply(files, function (fn) {
  o <- fromJSON(file(i), flatten = TRUE)
  as.data.frame(i)$element$subdata$data
})
Error in file(i): object 'i' not found
Also updated the function, this time with UTF-8 errors...
files <- list.files('~/JSON')
out <- lapply(files, function (i, fn) {
  o <- fromJSON(file(i), flatten = TRUE)
  as.data.frame(i)$element$subdata$data
})
Error in parse_con(txt,bigint_as_char):
lexical error: invalid bytes in UTF8 string. (right here)------^
Latest Update
Think I found a solution to the crazy 'bytes' problem. When I run readLines on the .json file, I can then apply fromJSON, e.g.
json <- readLines('~/JSON')
jsonread <- fromJSON(json)
jsondf <- as.data.frame(jsonread$element$subdata$data)
# returns a dataframe with the correct information
Problem is, I cannot apply readLines to all the files within the JSON folder (PATH). If I can get help with that, I think I can run...
files <- list.files('~/JSON')
for (i in files) {
  a <- readLines(i)
  o <- fromJSON(file(a), flatten = TRUE)
  as.data.frame(i)$element$subdata
}
Needed Steps
Apply readLines to all 500 .json files in the JSON folder
Apply fromJSON to the files from step 1
Create a data.frame that returns entries if the list (from fromJSON) contains $element$subdata$data
Thoughts?
Solution (Workaround?)
Unfortunately, fromJSON still runs into trouble with the .json files. My guess is that my GET method (httr) is unable to wait/delay and load the 'pretty print' and thus is grabbing the raw .json, which in turn is giving odd characters and, as a result, the ubiquitous '------^' error. Nevertheless, I was able to put together a solution; please see below. I want to post it for future folks who may have the same problem with .json files not working nicely with any R json package.
# keeping the same 'files' variable as earlier
raw_data <- lapply(files, readLines)
dat <- do.call(rbind, raw_data)
dat2 <- as.data.frame(dat, stringsAsFactors = FALSE)
# check to see that the json contents were read in
dat2[1, 1]
library(tidyr)
dat3 <- separate_rows(dat2, sep = '')
x <- unlist(raw_data)
x <- gsub('[[:punct:]]', ' ', x)
# identify elements wanted in the original .json and apply regex
y <- regmatches(x, regexec('.*SubElement2 *(.*?) *Text.*', x))
for loops never return anything, so you must save all valuable data yourself.
You call as.data.frame(i), which creates a frame with exactly one element, the filename; that is probably not what you want to keep.
(Minor) Use fromJSON(file(i),...).
Since you want to capture these into one frame, I suggest something along the lines of:
out <- lapply(files, function(fn) {
  o <- fromJSON(file(fn), flatten = TRUE)
  as.data.frame(o)$element$subdata$data
})
allout <- do.call(rbind.data.frame, out)
### alternatives:
allout <- dplyr::bind_rows(out)
allout <- data.table::rbindlist(out)
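Since some of the 500+ files may not contain $element$subdata$data at all, or may fail to parse (the encoding errors above), a hedged variant of the same lapply pattern that simply skips problem files could look like this (the full.names = TRUE call and the tryCatch guard are additions, not part of the original answer):
library(jsonlite)

files <- list.files('~/JSON', pattern = '\\.json$', full.names = TRUE)

out <- lapply(files, function(fn) {
  o <- tryCatch(fromJSON(fn, flatten = TRUE), error = function(e) NULL)
  if (is.null(o)) return(NULL)            # file failed to parse: skip it
  d <- o$element$subdata$data             # NULL if the element is absent
  if (is.null(d)) NULL else as.data.frame(d)
})

allout <- do.call(rbind.data.frame, Filter(Negate(is.null), out))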

How to create an object by adding a variable to a fixed value?

I am trying to write a program to open a large amount of files and run them through a function I made called "sort". Every one of my file names starts with "sa1", however after that the characters vary based on the file. I was hoping to do something along the lines of this:
for (x in c("Put","Characters","which","Vary","by","File","here")) {
  sa1+x <- read.csv("filepath/sa1+x", header = FALSE)
  sa1+x = sort(sa1+x)
  return(sa1+x)
}
In this case, say that x was 88. It would open the file sa188, name that dataframe sa188, and then run it through the function sort. I don't think that writing sa1+x is the correct way to bind together two values, but I don't know another way.
You need to use a list to contain the data in each csv file, and loop over the filenames using paste0.
file_suffixes <- c("put","characters","which","vary","by","file","here")
numfiles <- length(file_suffixes)
list_data <- list()
sorted_data <- list()
filename <- "filepath/sa1"

for (x in 1:numfiles) {
  list_data[[x]] <- read.csv(paste0(filename, file_suffixes[x]), header = FALSE)
  sorted_data[[x]] <- sort(list_data[[x]])
}
I am not sure why you use return in that loop. If you're writing a function, you should be returning the sorted_data list which contains all your post-sorting data.
Note: you shouldn't call your function sort because there is already a base R function called sort.
Additional note: you can use dir() and regex parsing to find all the files which start with "sa1" and loop over all of them, thus freeing you from having to specify the file_suffixes.
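For example, a minimal sketch of that last suggestion (assuming the files sit in "filepath" and all share the "sa1" prefix; my_sort is a placeholder for your own renamed sorting function, not a real one):
# my_sort below stands in for your own (renamed) sorting function
sa1_files <- list.files("filepath", pattern = "^sa1", full.names = TRUE)

# read and sort each file, keeping the results in a list named by file
sorted_data <- lapply(sa1_files, function(f) my_sort(read.csv(f, header = FALSE)))
names(sorted_data) <- basename(sa1_files)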

R: Loading data from folder with multiple files

I have a folder with multiple files to load:
Every file contains a list, and I want to combine all the loaded lists into a single list. I am using the following code (the variable loaded each time from a file is called TotalData):
Filenames <- paste0('DATA3_', as.character(1:18))
Data <- list()
for (ii in Filenames) {
  load(ii)
  Data <- append(Data, TotalData)
}
Is there a more elegant way to write it? For example using apply functions?
You can use lapply. I assume that your files have been stored using save, because you use load to get them. I create two files to use in my example as follows:
TotalData <- list(1:10)
save(TotalData, file = "DATA3_1")
TotalData <- list(11:20)
save(TotalData, file = "DATA3_2")
And then I read them in by
Filenames <- paste0('DATA3_', as.character(1:2))
Data <- lapply(Filenames, function(fn) {
  load(fn)
  return(TotalData)
})
After this, Data will be a list that contains the lists from the files as its elements. Since you are using append in your example, I assume this is not what you want. I remove one level of nesting with
Data <- unlist(Data,recursive=FALSE)
For my two example files, this gave the same result as your code. Whether it is more elegant can be debated, but I would claim that it is more R-ish than the for-loop.
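If the object stored in each file is not always called TotalData, a hedged variation (an addition, not part of the original answer) is to load each file into a fresh environment and return whatever it contains:
Data <- lapply(Filenames, function(fn) {
  e <- new.env()
  load(fn, envir = e)          # load the file's objects into a private environment
  get(ls(e)[1], envir = e)     # assumes each file stores exactly one object
})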

Open multiple files and assign to individual variables using R

I have 100 text files with matrices which I want to open using R; the read.table() command can be used for that.
I can't figure out how to assign these files to separate variable names so that I can carry out operations on the matrices.
I am trying to use a for loop but keep getting error messages.
I hope somebody can help me out with this...
If you have 100 files, it may make more sense to simply keep them in one neat list.
# Get the list of files
#----------------------------#
folder <- "path/to/files"
fileList <- dir(folder, recursive=TRUE) # grep through these, if you are not loading them all
# use platform appropriate separator
files <- paste(folder, fileList, sep=.Platform$file.sep)
# Read them in
#----------------------------#
myMatrices <- lapply(files, read.table)
Then access them via, e.g., myMatrices[[37]], or operate on them with lapply.
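If referring to the matrices by file name is more convenient than by position, one small addition (not part of the original answer) is to name the list after the files:
# basename() drops the folder part of each path
names(myMatrices) <- basename(files)
# myMatrices[["some_matrix.txt"]]   # hypothetical file name, for illustration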
Would it be easier to just use list.files?
For example:
files <- list.files("directory/path", pattern = "regexp.if.needed")
And then you could access each element by calling files[1], files[2], etc. This would allow you to pull out either all the files in a directory, or just the ones that matched a regular expression.
