Vectorizing for-loop in R

Oh, man. I am so terrible at removing for-loops from my code because I find them so intuitive, and I first learned C++. Below, I am fetching the IDs for a search ("copd" in this case), using each ID to retrieve its full XML record, and from that saving the article's affiliation (the location variable) into a vector. I do not know how to speed this up; it took about 5 minutes to run on 700 IDs, whereas most searches have 70,000+ IDs. Thank you for any and all guidance.
library(rentrez)
library(XML)
# number of articles for term copd
count <- entrez_search(db = "pubmed", term = "copd")$count
# set max to count
id <- entrez_search(db = "pubmed", term = "copd", retmax = count)$ids
# empty vector that will soon contain locations
location <- character()
# get all location data
for (i in 1:count) {
  # fetch the full record for the i-th ID
  test <- entrez_fetch(db = "pubmed", id = id[i], rettype = "XML")
  # convert the XML to a list
  test_list <- XML::xmlToList(test)
  # retrieve the affiliation (location)
  location <- c(location, test_list$PubmedArticle$MedlineCitation$Article$AuthorList$Author$AffiliationInfo$Affiliation)
}

This may give you a start: it seems to be possible to pull down multiple records at once.
library(rentrez)
library(xml2)
# number of articles for term copd
count <- entrez_search(db = "pubmed", term = "copd")$count
# set max to count
id_search <- entrez_search(db = "pubmed", term = "copd", retmax = count, use_history = TRUE)
# get all
document <- entrez_fetch(db = "pubmed", rettype = "XML", web_history = id_search$web_history)
document_list <- as_list(read_xml(document))
The problem is that this is still time consuming because there is a large number of documents. It's also curious that it returns exactly 10,000 articles when I've tried this; there may be a limit to how many records you can return at once.
You can then use something like the purrr package to start extracting the information you want.
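Building on that, here is a hedged sketch that combines the two ideas: fetch in batches against the web history (to work around the apparent 10,000-record cap) and pull the affiliation out of each PubmedArticle node with xml2/purrr. The batch size and the exact XPath to the affiliation are assumptions taken from the path in the question, so check them against your own records.
library(rentrez)
library(xml2)
library(purrr)
# one search, keeping the result set on the NCBI history server
id_search <- entrez_search(db = "pubmed", term = "copd", use_history = TRUE)
batch_size <- 500  # assumption: adjust to taste and to NCBI rate limits
starts <- seq(0, id_search$count - 1, by = batch_size)
affiliations <- unlist(map(starts, function(start) {
  # fetch one batch of records from the stored history
  doc <- entrez_fetch(db = "pubmed", rettype = "XML",
                      web_history = id_search$web_history,
                      retstart = start, retmax = batch_size)
  articles <- xml_find_all(read_xml(doc), "//PubmedArticle")
  # first affiliation per article; NA if an article has none
  map_chr(articles, ~ xml_text(
    xml_find_first(.x, ".//AuthorList/Author/AffiliationInfo/Affiliation")))
}))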

Related

Rentrez is pulling the wrong data from NCBI in R?

I am trying to download sequence data from E. coli samples within the state of Washington - it's about 1283 sequences, which I know is a lot. The problem that I am running into is that entrez_search and/or entrez_fetch seem to be pulling the wrong data. For example, the following R code does pull 1283 IDs, but when I use entrez_fetch on those IDs, the sequence data I get is from chickens and corn and things that are not E. coli:
search <- entrez_search(db = "biosample",
term = "Escherichia coli[Organism] AND geo_loc_name=USA:WA[attr]",
retmax = 9999, use_history = T)
Similarly, I tried pulling the sequence from one sample manually as a test. When I search for the accession number SAMN30954130 on the NCBI website, I see metadata for an E. coli sample. When I use this code, I see metadata for a chicken:
search <- entrez_search(db = "biosample",
term = "SAMN30954130[ACCN]",
retmax = 9999, use_history = T)
fetch_test <- entrez_fetch(db = "nucleotide",
id = search$ids,
rettype = "xml")
fetch_list <- xmlToList(fetch_test)
The issue here is that you are using a Biosample UID to query the Nucleotide database. However, the UID is then interpreted as a Nucleotide UID, so you get a sequence record unrelated to your original Biosample query.
What you need to use in this situation is entrez_link, which uses a UID to link records between two databases.
For example, your Biosample accession SAMN30954130 has the Biosample UID 30954130. You link that to Nucleotide like this:
nuc_links <- entrez_link(dbfrom='biosample', id=30954130, db='nuccore')
And you can get the corresponding Nucleotide UID(s) like this:
nuc_links$links$biosample_nuccore
[1] "2307876014"
And then:
fetch_test <- entrez_fetch(db = "nucleotide",
                           id = 2307876014,
                           rettype = "xml")
This is covered in the section "Finding cross-references" of the rentrez tutorial.
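To scale this up to all 1283 BioSample hits from the original search, a rough sketch (my own, assuming each BioSample links to at least one Nucleotide record; the rettype and the single large requests are choices you may want to change):
library(rentrez)
# original BioSample search from the question
search <- entrez_search(db = "biosample",
                        term = "Escherichia coli[Organism] AND geo_loc_name=USA:WA[attr]",
                        retmax = 9999, use_history = TRUE)
# link each BioSample UID to its Nucleotide UID(s); by_id = TRUE keeps one
# result per sample so you can tell which sample a sequence came from
links <- entrez_link(dbfrom = "biosample", id = search$ids,
                     db = "nuccore", by_id = TRUE)
nuc_ids <- unlist(lapply(links, function(l) l$links$biosample_nuccore))
# fetch the linked records (split into smaller batches if this request is too large)
seqs <- entrez_fetch(db = "nucleotide", id = nuc_ids, rettype = "fasta")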

Problems extracting metadata from NCBI in R

I am trying to extract some information (metadata) from GenBank using the R package "rentrez" and the example I found here https://ajrominger.github.io/2018/05/21/gettingDNA.html. Specifically, for a particular group of organisms, I search for all records that have geographical coordinates and then want to extract the accession number, taxon, sequenced locus, country, lat_lon, and collection date for each record. As output, I want a csv file with the data for each record in a separate row. The code below seems to do the job, but at some point rows get muddled, with data from different records overlapping neighbouring rows. For example, of the 157 records that rentrez retrieves from NCBI, 109 rows in the file look like what I want to achieve, but the rest are a total mess. I would greatly appreciate any advice on how to fix the issue because I am a total newbie with R and figuring out each step takes a lot of time.
setwd("C:/R-Works")
library('XML')
library('rentrez')
argasid <- entrez_search(db = "nuccore", term = "Argasidae[Organism] AND [lat]", use_history = TRUE, retmax = 15000)
x <- entrez_fetch(db = "nuccore", id = argasid$ids, rettype = "native", retmode = "xml", parsed = TRUE)
x <- xmlToList(x)
cleanEntrez <- function(x) {
  basePath <- 'Seq-entry_seq.Bioseq'
  c(
    genbank = as.character(x[paste(basePath,
                                   'Bioseq_id', 'Seq-id', 'Seq-id_genbank',
                                   'Textseq-id', 'Textseq-id_accession',
                                   sep = '.')]),
    taxon = as.character(x[paste(basePath,
                                 'Bioseq_descr', 'Seq-descr', 'Seqdesc',
                                 'Seqdesc_source', 'BioSource', 'BioSource_org',
                                 'Org-ref', 'Org-ref_taxname',
                                 sep = '.')]),
    bseqdesc_title = as.character(x[paste(basePath,
                                          'Bioseq_descr', 'Seq-descr', 'Seqdesc',
                                          'Seqdesc_title',
                                          sep = '.')]),
    lat_lon = as.character(x[grep('lat-lon', x) + 1]),
    geo_description = as.character(x[grep('country', x) + 1]),
    coll_date = as.character(x[grep('collection-date', x) + 1])
  )
}
getGenbankMeta <- function(ids) {
  allRec <- entrez_fetch(db = 'nuccore', id = ids,
                         rettype = 'native', retmode = 'xml',
                         parsed = TRUE)
  allRec <- xmlToList(allRec)[[1]]
  o <- lapply(allRec, function(x) {
    cleanEntrez(unlist(x))
  })
  temp <- array(unlist(o), dim = c(length(o[[1]]), length(ids)))
  seqVec <- temp[nrow(temp), ]
  seqDF <- as.data.frame(t(temp[-nrow(temp), ]))
  names(seqDF) <- names(o[[1]])[-nrow(temp)]
  return(list(seq = seqVec, data = seqDF))
}
write.csv(getGenbankMeta(argasid$ids), 'argasid_georef.csv')
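For what it's worth, the misaligned rows most likely come from the grep('lat-lon', x) + 1 style lookups: they assume every record contains every qualifier exactly once, so a single missing lat-lon or country shifts all following values. One alternative (a sketch of my own, not the original approach; the GBSeq field and qualifier names are assumptions based on the fields the code above greps for) is to fetch GenBank-flavoured XML and build one row per record, so missing qualifiers simply become NA:
library(rentrez)
library(XML)
argasid <- entrez_search(db = "nuccore", term = "Argasidae[Organism] AND [lat]",
                         use_history = TRUE, retmax = 15000)
# helper: value of a named GenBank qualifier within one record, NA if absent
gb_qual <- function(rec, name) {
  val <- xpathSApply(rec,
                     sprintf(".//GBQualifier[GBQualifier_name='%s']/GBQualifier_value", name),
                     xmlValue)
  if (length(val) == 0) NA_character_ else val[1]
}
doc  <- entrez_fetch(db = "nuccore", id = argasid$ids,
                     rettype = "gb", retmode = "xml", parsed = TRUE)
recs <- getNodeSet(doc, "//GBSeq")
# one row per GBSeq record, so fields never bleed into neighbouring rows
meta <- do.call(rbind, lapply(recs, function(r) {
  data.frame(genbank   = xpathSApply(r, "./GBSeq_primary-accession", xmlValue),
             taxon     = xpathSApply(r, "./GBSeq_organism", xmlValue),
             title     = xpathSApply(r, "./GBSeq_definition", xmlValue),
             lat_lon   = gb_qual(r, "lat_lon"),
             country   = gb_qual(r, "country"),
             coll_date = gb_qual(r, "collection_date"),
             stringsAsFactors = FALSE)
}))
write.csv(meta, "argasid_georef.csv", row.names = FALSE)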

Fetching data from OECD into R via SDMX(XML)

I want to extract data from the OECD website, particularly the dataset "REGION_ECONOM" with the dimensions "GDP" (the GDP of the respective regions) and "POP_AVG" (the average population of the respective regions).
This is the first time I am doing this:
I picked all the required dimensions on the OECD website and copied the SDMX (XML) link.
I tried to load them into R and convert them to a data frame with the following code:
(in the link I replaced the list of all regions with "ALL" as otherwise the link would have been six pages long)
if (!require(rsdmx)) install.packages('rsdmx'); library(rsdmx)
url2 <- "https://stats.oecd.org/restsdmx/sdmx.ashx/GetData/REGION_ECONOM/1+2.ALL.SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
sdmx2 <- readSDMX(url2)
stats2 <- as.data.frame(sdmx2)
head(stats2)
Unfortunately, this returns a "400 Bad request" error.
When just selecting a couple of regions the error does not appear:
if (!require(rsdmx)) install.packages('rsdmx'); library(rsdmx)
url1 <- "https://stats.oecd.org/restsdmx/sdmx.ashx/GetData/REGION_ECONOM/1+2.AUS+AU1+AU101+AU103+AU104+AU105.SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
sdmx1 <- readSDMX(url1)
stats1 <- as.data.frame(sdmx1)
head(stats1)
I also tried to use the "OECD" package to get the data. There I had the same problem. ("400 Bad Request")
if (!require(OECD)) install.packages('OECD'); library(OECD)
df1 <- get_dataset("REGION_ECONOM", filter = "GDP+POP_AVG",
                   start_time = 2008, end_time = 2009, pre_formatted = TRUE)
However, when I use the package for other data sets it does work:
df <- get_dataset("FTPTC_D", filter = "FRA+USA", pre_formatted = TRUE)
Does anyone know where my mistake could lie?
The SDMX-ML API does not seem to work as explained (using the ALL parameter), whereas the JSON API works just fine. The following query returns the values for all countries as JSON; I simply replaced ALL with an empty field:
query <- "https://stats.oecd.org/sdmx-json/data/REGION_ECONOM/1+2..SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
Transforming it to a readable format is not so trivial. I played around a bit to find the following work-around:
# send a GET request using httr
library(httr)
library(jsonlite)  # for parse_json()
query <- "https://stats.oecd.org/sdmx-json/data/REGION_ECONOM/1+2..SNA_2008.GDP+POP_AVG.REAL_PPP.ALL.1990+1991+1992+1993+1994+1995+1996+1997+1998+1999+2000+2001+2002+2003+2004+2005+2006+2007+2008+2009+2010+2011+2012+2013+2014+2015+2016+2017+2018/all?"
dat_raw <- GET(query)
dat_parsed <- parse_json(content(dat_raw, "text")) # parse the content
Next, access the observations from the nested list and transform them to a matrix. Also extract the features from the keys:
dat_obs <- dat_parsed[["dataSets"]][[1]][["observations"]]
dat0 <- do.call(rbind, dat_obs) # get a matrix
new_features <- matrix(as.numeric(do.call(rbind, strsplit(rownames(dat0), ":"))), nrow = nrow(dat0))
dat1 <- cbind(new_features, dat0) # add feature columns
dat1_df <- as.data.frame(dat1) # optionally transform to data frame
Finally, you want to find out about the keys. Those are hidden in the "structure" element, which also needs to be parsed correctly, so I wrote a function to make it easier to extract the values and IDs:
## Get keys of features
keys <- dat_parsed[["structure"]][["dimensions"]][["observation"]]
for (i in 1:length(keys)) print(paste("id position:", i, "is feature", keys[[i]]$id))
# apply keys
get_features <- function(data_input, keys_input, feature_index, value = FALSE) {
  keys_temp <- keys_input[[feature_index]]$values
  keys_temp_matrix <- do.call(rbind, keys_temp)
  # column 1 is the id, column 2 the human-readable value; key codes are 0-based, hence + 1
  keys_temp_out <- keys_temp_matrix[, value + 1][unlist(data_input[, feature_index]) + 1]
  return(unlist(keys_temp_out))
}
head(get_features(dat1_df, keys, 7))
head(get_features(dat1_df, keys, 2, value = FALSE))
head(get_features(dat1_df, keys, 2, value = TRUE))
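As a small follow-up (my own sketch, assuming the dimension order printed by the loop over keys matches the columns of dat1_df), you can label every dimension column at once and attach the observation value:
n_dims <- length(keys)
# replace each coded dimension column with its readable label
labelled <- as.data.frame(lapply(seq_len(n_dims), function(i)
  get_features(dat1_df, keys, i, value = TRUE)))
names(labelled) <- sapply(keys, function(k) k$id)
# the observation value is the first element of each observation in the raw list
labelled$obs_value <- unlist(lapply(dat_obs, `[[`, 1))
head(labelled)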
I hope that helps you in your project.
Best, Tobias

Storing data from a for loop in a data frame

I am trying to create a function that interacts with the PubMed API to retrieve the XML files associated with 100 publications. I then want to parse each XML file individually to retrieve the title and the abstract of each publication. I am using the rentrez package to interact with the API and have successfully retrieved the necessary XML files. I am using the XML package to parse them and have verified that the XPath expressions retrieve the data that I want. In truth, I am looking to take data from other fields as well (journal title, MeSH terms, etc.), but I am stuck at this step.
However, I have not been able to create a proper for loop to move this data into a data frame. I receive the following error from running my code:
Error in `$<-.data.frame`(`*tmp*`, "Abstract", value = list("text of abstract")) :
  replacement has 1 row, data has 0
When I test the function to receive title information (by removing the expression to retrieve abstract information), I receive an empty data frame with no information about the titles that I want. But there is no error message then.
If I execute pubmed_parsed("Kandel+Eric", n=2), my goal is to receive a data frame with the character vectors from two titles in the column "ATitle" (titles: "Roles for small noncoding RNAs in silencing of retrotransposons in the mammalian brain" and "ApCPEB4, a non-prion domain containing homolog of ApCPEB, is involved in the initiation of long-term facilitation"). And the character vectors from the two abstracts to correspondingly appear in the column "Abstract" (portions of abstracts: "Piwi-interacting RNAs (piRNAs), long thought to be restricted to germline...", "Two pharmacologically distinct types of local protein synthesis are required for synapse-specific...").
library(XML)
library(rentrez)
pubmed_parsed <- function(term, n = 100){
  df <- data.frame(ATitle = character(), JTitle = character(), MeshTerms = character(),
                   Abstract = character(), FAuthor = character(), LAuthor = character(),
                   stringsAsFactors = FALSE)
  IdList <- entrez_search(db = "pubmed", term = term, retmode = "xml", retmax = n)
  for (i in 1:n){
    XmlFile <- entrez_fetch(db = "pubmed", id = IdList$ids[i], rettype = "xml",
                            retmode = "xml", parsed = TRUE)
    Parsed <- xmlRoot(XmlFile)
    df$ATitle[i] <- xpathSApply(Parsed, "/PubmedArticleSet/PubmedArticle/MedlineCitation/Article/Title", xmlValue, simplify = FALSE)
    df$Abstract[i] <- xpathSApply(Parsed, "/PubmedArticleSet/PubmedArticle/MedlineCitation/Article/Abstract/AbstractText", xmlValue, simplify = FALSE)
  }
  df
}
Here's one way to get a table and a few suggestions. First, I would use the Web history option and download all results together instead of looping through downloads.
ids <- entrez_search(db = "pubmed", term = "Kandel ER", use_history = TRUE)
ids
Entrez search result with 502 hits (object contains 20 IDs and a web_history object)
Search term (as translated): Kandel ER[Author]
doc <- entrez_fetch(db="pubmed", web_history=ids$web_history, rettype="xml", retmax = 3, parsed=TRUE)
Next, get the articles into a node set and query that to handle all your missing and multiple tags.
articles <- getNodeSet( doc, "//PubmedArticle")
length(articles)
[1] 3
articles[[1]]
<PubmedArticle>
<MedlineCitation Status="Publisher" Owner="NLM">
<PMID Version="1">27791114</PMID>
<DateCreated>
...
I usually create a function to add NAs if tags are missing and join multiple tags using a comma.
xpath2 <- function(x, path, fun = xmlValue, ...){
  y <- xpathSApply(x, path, fun, ...)
  ifelse(length(y) == 0, NA,
         ifelse(length(y) > 1, paste(unlist(y), collapse = ", "), y))
}
Then just apply that function to the nodes (with the leading dot in xpath so it's relative to that node). This will combine multiple keywords into a comma-separated list and include NA for article 3 with missing keywords.
sapply(articles, xpath2, ".//Keyword")
[1] "DNA methylation, behavior, endogenous siRNA, piwi-interacting RNA, transposon"
[2] "Aplysia, CPEB, CPEB4, Long-term facilitation"
[3] NA
Most XPath expressions should work:
sapply(articles, xpath2, ".//PubDate/Year")
[1] "2016" "2016" "2016"
sapply(articles, xpath2, ".//ArticleId[#IdType='pmc']")
[1] "PMC5111663" "PMC5075418" NA
You can also use xmlGetAttr if needed
sapply(articles, xpath2, ".//Article", xmlGetAttr, "PubModel")
[1] "Print-Electronic" "Electronic" "Electronic"
Finally, create a data.frame
data.frame(
  ATitle = sapply(articles, xpath2, ".//ArticleTitle"),
  JTitle = sapply(articles, xpath2, ".//Journal/Title"),
  Keywords = sapply(articles, xpath2, ".//Keyword"),
  Authors = sapply(articles, xpath2, ".//Author/LastName"),
  Abstract = sapply(articles, xpath2, ".//AbstractText"))
I'm not sure what happened to MeSH terms, but I only see Keywords in the few examples I downloaded. Also, there are probably a few ways to get first and last authors. You could get both last name and initials (assuming both are always present) and replace the comma before the initials to get an Author string. Then split that to get first and last author or even print the first three below.
au <- sapply(articles, xpath2, ".//Author/LastName|.//Author/Initials")
au <- gsub(",( [A-Z]+,?)", "\\1", au)
authors_etal <- function(x, authors = 3, split = ", *"){
  y <- strsplit(x, split)
  sapply(y, function(x){
    if(length(x) > (authors + 1)) x <- c(x[1:authors], "et al.")
    paste(x, collapse = ", ")
  })
}
authors_etal(au)
[1] "Nandi S, Chandramohan D, Fioriti L, et al."
[2] "Lee SH, Shim J, Cheong YH, et al."
[3] "Si K, Kandel ER"

Poor Performing Loop Function - Options?

New to R ... I am struggling to produce results on 10,000 lines; the data model actually has about 1M lines. Is there a better option than a loop? I read about vectorization and attempted tapply with no success.
The data set has a column of free-form text and a category associated with the text. I need to parse the text into distinct words so I can then do statistics on how well word frequency predicts the category. I read in the data via read.table and create a data.frame called data.
The function attempts to parse the text and count occurrences of each word:
library(plyr)  # for ddply()
data <- data.frame(category = c("cat1", "cat2", "cat3", "cat4"),
                   text = c("The quick brown fox",
                            "Jumps over the fence",
                            "The quick car hit a fence",
                            "Jumps brown"))
parsefunc <- function(data){
  finalframe <- data.frame()
  for (i in 1:nrow(data)){
    # split the free-form text of row i into words
    description <- strsplit(as.character(data[i, 2]), " ")[[1]]
    # repeat the row's category once per word
    category <- rep(data[i, 1], length(description))
    worddataframe <- data.frame(description, category)
    # growing a data frame inside a loop is what makes this slow
    finalframe <- rbind(finalframe, worddataframe)
  }
  m1 <- ddply(finalframe, c("description", "category"), nrow)
  m2 <- ddply(m1, 'description', transform, totalcount = sum(nrow), percenttotal = nrow/sum(nrow))
  m3 <- m2[(m2$totalcount > 10) & (m2$percenttotal > 0.8), ]
  m3
}
This will get your finalframe and do something close to your m1, m2, and m3 parts. You'll have to edit it to do exactly what you want. I used a longer data set of 40k rows to make sure it performs alright:
# long data set (40,000 rows) to check performance
data <- data.frame(Category = rep(paste0('cat', 1:4), 10000),
                   Text = rep(c('The quick brown fox', 'Jumps over the fence',
                                'The quick car hit a fence', 'Jumps brown cars'), 10000),
                   stringsAsFactors = FALSE)
# split into words
wordbag <- strsplit(data$Text, split = ' ')
# find appropriate category for each word
categoryvar <- rep(data$Category, lapply(wordbag, length))
# stick them in a data frame and aggregate
newdf <- data.frame(category = categoryvar, word = tolower(unlist(wordbag)))
agg <- aggregate(list(wordcount = rep(1, nrow(newdf))),
                 list(category = newdf$category, word = newdf$word), sum)
# find total count in entire data set and merge it in
wordagg <- aggregate(list(totalwordcount = rep(1, nrow(newdf))),
                     list(word = newdf$word), sum)
agg <- merge(x = agg, y = wordagg, by = 'word')
# find percentages and do whatever else you need
agg$percentageofword <- agg$wordcount / agg$totalwordcount
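To finish the equivalent of the m3 filter from the question (thresholds copied from there), you can subset the aggregated table directly:
# keep words that occur more than 10 times overall and are concentrated
# (> 80%) in a single category, as in the original m3 step
m3_equivalent <- agg[agg$totalwordcount > 10 & agg$percentageofword > 0.8, ]
head(m3_equivalent)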
