Extract multiple JSON documents from AWS S3 using R

I am currently experimenting with extracting documents from AWS S3 using R. I have successfully managed to extract one document and create a data frame from it. I would like to be able to extract multiple documents that sit within multiple subfolders of eventstore/footballStats/.
The code below demonstrates pulling one document, which works.
install.packages("aws.s3", repos = c("cloudyr" = "http://cloudyr.github.io/drat")) # runs an update for aws S3
library(aws.s3)
# Set credentials for S3 ####
Sys.setenv("AWS_ACCESS_KEY_ID" = "KEY","AWS_SECRET_ACCESS_KEY" = "AccessKey")
# Extracts 1 document raw vector representation of an S3 documents ####
DataVector <-get_object("s3://eventstore/footballStats/2017-04-22/13/01/doc1.json")
I then tried the code below to pull all documents from the folder and its subfolders, but I receive an error.
DataVector <- get_object("s3://eventstore/footballStats/2017-04-22/*")
ERROR :
chr "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error>
<Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><K"| __truncated__
Is there an alternative R package I should be using? Or does get_object() only work for a single document, meaning I should use another function from the aws.s3 library?

Based on the hints from Drj and Thomas I was able to solve this.
### Displays buckets in S3 ####
bucketlist()
### Builds a data frame of the files in a bucket ###
dfBucket <- get_bucket_df('eventstore', 'footballStats/2017-04-22/')
# Creates paths based on the keys in the bucket
path <- dfBucket$Key
### Extracts all data into a character vector ####
s3Data <- NULL
for (lineN in path) {
  url <- paste('s3://eventstore/', lineN, sep = "")
  s3Vector <- get_object(url)
  s3Value <- rawToChar(s3Vector)
  s3Data <- c(s3Data, s3Value)
}
To create a data frame from the data, use tidyjson and dplyr. See the link below for a well-explained introduction to this:
https://cran.r-project.org/web/packages/tidyjson/vignettes/introduction-to-tidyjson.html
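As a rough illustration (not from the original answer), and assuming each element of s3Data holds one JSON document whose top-level fields are scalars, spread_all() can turn those fields into columns; nested objects would need enter_object()/spread_values() instead:
library(dplyr)
library(tidyjson)
footballStats <- s3Data %>%
  as.tbl_json() %>%   # treat every character element as one JSON document
  spread_all() %>%    # spread scalar top-level fields into columns
  as_tibble()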

Related

How to combine Scopus and WoS data for biblioshiny in R

I am trying to conduct a basic bibliometric analysis using biblioshiny. However, since I have both Scopus and WoS databases, I am finding it difficult to combine them. So far, I have been able to import both data sets using code in R, and I have also already combined them. But I can't figure out how to use this combined data as input to the biblioshiny() app.
#Importing WoS and Scopus data individually
m1 = convert2df("WOS.txt", "wos", "plaintext")
m2 = convert2df("scopus.csv", "scopus", "csv")
#Merging them
M = mergeDbSources(m1, m2, remove.duplicated = TRUE)
#Creating the results
results = biblioAnalysis(M, sep = ";")
I just need to know how to export the results in a relevant format for data input in biblioshiny. Please help!
Put all of the WOS data files (in txt format) into a zip file and upload that zip file into biblioshiny. That's all you have to do.
Use this command:
library(openxlsx)
write.xlsx(results, file="mergedfile.xlsx")
It will save the results to a file named mergedfile.xlsx.
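Note that results is a list of summary objects rather than the bibliographic data frame, so an Excel export of it may not be what biblioshiny expects. If your version of biblioshiny offers an option to load a previously converted bibliometrix file, a hedged alternative is to export the merged data frame M itself (file names below are illustrative):
library(openxlsx)
write.xlsx(M, file = "merged_M.xlsx")   # load this file through biblioshiny's import dialog
# or keep it as an R data file
save(M, file = "merged_M.RData")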

Creating a loop for "load" and "save" processes

I have a data.frame (dim: 100 x 1) containing a list of URL links; each URL looks something like this: https:blah-blah-blah.com/item/123/index.do.
The list is a data.frame called my_list with 100 rows and a single column named col, stored as character ($ col: chr). Together it looks like this:
1 "https:blah-blah-blah.com/item/123/index.do"
2" https:blah-blah-blah.com/item/124/index.do"
3 "https:blah-blah-blah.com/item/125/index.do"
etc.
I am trying to import each of these URLs into R and collectively save them as an object that is compatible with text mining procedures.
I know how to successfully convert each of these URLs (from the list) manually:
library(pdftools)
library(tidytext)
library(textrank)
library(dplyr)
library(tm)
#1st document
url <- "https:blah-blah-blah.com/item/123/index.do"
article <- pdf_text(url)
Once this "article" file has been successfully created, I can inspect it:
str(article)
chr [1:13]
It looks like this:
[1] "abc ....."
[2] "def ..."
etc etc
[15] "ghi ...:
From here, I can successfully save this as an RDS file:
saveRDS(article, file = "article_1.rds")
Is there a way to do this for all 100 articles at the same time? Maybe with a loop?
Something like :
for (i in 1:100) {
  url_i <- my_list[i, 1]
  article_i <- pdf_text(url_i)
  saveRDS(article_i, file = "article_i.rds")
}
If this was written correctly, it would save each article as an RDS file (e.g. article_1.rds, article_2.rds, ... article_100.rds).
Would it then be possible to save all these articles into a single rds file?
Please note that list is not a good name for an object, as this will
temporarily overwrite the list() function. I think it is usually good
to name your variables according to their content. Maybe url_df would be
a good name.
library(pdftools)
#> Using poppler version 20.09.0
library(tidyverse)
url_df <-
  data.frame(
    url = c(
      "https://www.nimh.nih.gov/health/publications/autism-spectrum-disorder/19-mh-8084-autismspecdisordr_152236.pdf",
      "https://www.nimh.nih.gov/health/publications/my-mental-health-do-i-need-help/20-mh-8134-mymentalhealth-508_161032.pdf"
    )
  )
Since the URLs are already in a data.frame, we could store the text data in an additional column. That way the data will be easily available for later steps.
text_df <-
  url_df %>%
  mutate(text = map(url, pdf_text))
Instead of saving each text in a separate file we can now store all of the data
in a single file:
saveRDS(text_df, "text_df.rds")
For historical reasons, for loops are not very popular in the R community. Base R has the *apply() function family, which provides a functional approach to iteration. The tidyverse has the purrr package and the map*() functions, which improve upon the *apply() functions.
I recommend taking a look at
https://purrr.tidyverse.org/ to learn more.
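If you also want one .rds file per article, as in the original question, a short purrr-based sketch (assuming the text_df built above; file names are illustrative) would be:
library(purrr)
walk2(
  text_df$text,
  seq_along(text_df$text),
  ~ saveRDS(.x, file = paste0("article_", .y, ".rds"))  # article_1.rds, article_2.rds, ...
)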
It seems that there are certain URLs in your data which are not valid PDF files. You can wrap the call in tryCatch to handle the errors. If your data frame is called df with a url column in it, you can do:
library(pdftools)
lapply(seq_along(df$url), function(x) {
  tryCatch({
    saveRDS(pdf_text(df$url[x]), file = sprintf('article_%d.rds', x))
  }, error = function(e) {})
})
So say you have a data.frame called my_df with a column that contains your URLs of PDF locations. As per your comments, it seems that some URLs lead to broken PDFs. You can use tryCatch in these cases to report back which links were broken and check manually what's wrong with those links.
You can do this in a for loop like this:
my_df <- data.frame(url = c(
  "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf", # working pdf
  "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pfd" # broken pdf
))
# make some useful new columns
my_df$id <- seq_along(my_df$url)
my_df$status <- NA
for (i in my_df$id) {
  my_df$status[i] <- tryCatch({
    message("downloading ", i) # put a status message on screen
    article_i <- suppressMessages(pdftools::pdf_text(my_df$url[i]))
    saveRDS(article_i, file = paste0("article_", i, ".rds"))
    "OK"
  }, error = function(e) {return("FAILED")}) # return the string FAILED if something goes wrong
}
my_df$status
#> [1] "OK" "FAILED"
I included a broken link in the example data on purpose to showcase how this would look.
Alternatively, you can use a loop from the apply family. The difference is that instead of iterating through a vector and applying the same code until the end of the vector, *apply takes a function, applies it to each element of a list (or objects which can be transformed to lists) and returns the results from each iteration in one go. Many people find *apply functions confusing at first because usually people define and apply functions in one line. Let's make the function more explicit:
s_download_pdf <- function(link, id) {
  tryCatch({
    message("downloading ", id) # put a status message on screen
    article_i <- suppressMessages(pdftools::pdf_text(link))
    saveRDS(article_i, file = paste0("article_", id, ".rds"))
    "OK"
  }, error = function(e) {return("FAILED")})
}
Now that we have this function, let's use it to download all files. I'm using mapply which iterates through two vectors at once, in this case the id and url columns:
my_df$status <- mapply(s_download_pdf, link = my_df$url, id = my_df$id)
my_df$status
#> [1] "OK" "FAILED"
I don't think it makes much of a difference which approach you choose as the speed will be bottlenecked by your internet connection instead of R. Just thought you might appreciate the comparison.

Using unz() to read in SAS data set into R

I am trying to read in a data set from SAS using the unz() function in R. I do not want to unzip the file. I have successfully used the following to read one of them in:
dir <- "C:/Users/michael/data/"
setwd(dir)
dir_files <- as.character(unzip("example_data.zip", list = TRUE)$Name)
ds <- read_sas(unz("example_data.zip", dir_files))
That works great. I'm able to read the data set in and conduct the analysis. When I try to read in another data set, though, I encounter an error:
dir2_files <- as.character(unzip("data.zip", list = TRUE)$Name)
ds2 <- read_sas(unz("data.zip", dir2_files))
Error in read_connection_(con, tempfile()) :
Evaluation error: error reading from the connection.
I have read other questions on here saying that the file path may be incorrectly specified. Some answers mentioned submitting list.files() to the console to see what is listed.
list.files()
[1] "example_data.zip" "data.zip"
As you can see, both zip files are listed, and I was successfully able to read the data set in from "example_data.zip", but I cannot access data.zip.
What am I missing? Thanks in advance.
Your "dir2_files" is String vector of the names of different files in "data.zip". So for example if the files that you want to read have them names at the positions "k" in "dir_files" and "j" in "dir2_files" then let update your script like that:
dir <- "C:/Users/michael/data/"
setwd(dir)
dir_files <- as.character(unzip("example_data.zip", list = TRUE)$Name)
ds <- read_sas(unz("example_data.zip", dir_files[k]))
dir2_files <- as.character(unzip("data.zip", list = TRUE)$Name)
ds2 <- read_sas(unz("data.zip", dir2_files[j]))

How to use Rscript with readr to get data from aws s3

I have some R code using the readr package that works well on a local computer: I use list.files() to find files with a specific extension and then use readr to operate on those files.
My question: I want to do something similar with files in AWS S3 and I am looking for some pointers on how to use my current R code to do the same.
Thanks in advance.
What I want:
Given AWS folder/file structure like this
- /folder1/subfolder1/quant.sf
- /folder1/subfolder2/quant.sf
- /folder1/subfolder3/quant.sf
and so on, where every subfolder contains the same file quant.sf, I would like to get a data frame of the S3 paths and then use the R code shown below to operate on all the quant.sf files.
Below is the R code that currently works with data on a Linux machine.
get_quants <- function(path1, ...) {
  additionalPath = list(...)
  suppressMessages(library(tximport))
  suppressMessages(library(readr))
  salmon_filepaths = file.path(path = path1, list.files(path1, recursive = TRUE, pattern = "quant.sf"))
  samples = data.frame(samples = gsub(".*?quant/salmon_(.*?)/quant.sf", "\\1", salmon_filepaths))
  row.names(samples) = samples[, 1]
  names(salmon_filepaths) = samples$samples
  # If no tx2gene is available, we will only get isoform-level counts
  salmon_tx_data = tximport(salmon_filepaths, type = "salmon", txOut = TRUE)
  ## Get transcript count summarization
  write.csv(as.data.frame(salmon_tx_data$counts), file = "tx_NumReads.csv")
  ## Get TPM
  write.csv(as.data.frame(salmon_tx_data$abundance), file = "tx_TPM_Abundance.csv")
  if (length(additionalPath) > 0) {
    tx2geneFile = additionalPath[[1]]
    my_tx2gene = read.csv(tx2geneFile, sep = "\t", stringsAsFactors = F, header = F)
    salmon_tx2gene_data = tximport(salmon_filepaths, type = "salmon", txOut = FALSE, tx2gene = my_tx2gene)
    ## Get gene count summarization
    write.csv(as.data.frame(salmon_tx2gene_data$counts), file = "tx2gene_NumReads.csv")
    ## Get TPM
    write.csv(as.data.frame(salmon_tx2gene_data$abundance), file = "tx2gene_TPM_Abundance.csv")
  }
}
I find it easiest to use the aws.s3 R package for this. In this case, you would use the s3read_using() and s3write_using() functions to read from and write to S3. Like this:
library(aws.s3)
my_tx2gene = s3read_using(FUN = read.csv, object = "[path_in_s3_to_file]", sep = "\t", stringsAsFactors = F, header = F)
It basically is a wrapper around whatever function you want to use for file input/output. Works great with read_json, saveRDS, or anything else!
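For the quant.sf layout described above, one way to mirror the local list.files() step is to list the bucket contents and filter on the file name. This is only a sketch: the bucket name "my-bucket" and the folder1/ prefix are placeholders, not values from the question, and it assumes quant.sf is tab-separated (as salmon writes it):
library(aws.s3)
library(dplyr)
quant_keys <- get_bucket_df("my-bucket", prefix = "folder1/") %>%
  filter(grepl("quant\\.sf$", Key)) %>%
  pull(Key)
# read each quant.sf from S3 with readr via s3read_using()
quants <- lapply(quant_keys, function(k) {
  s3read_using(FUN = readr::read_tsv, object = k, bucket = "my-bucket")
})
names(quants) <- quant_keys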

Macro with Iteration through URL

Having a problem creating a macro variable within an API call in R. I am trying to loop through a vector of zip codes and make an API call for each element of that vector. I am pretty unfamiliar with iterating through an R list in the way a macro would be used elsewhere.
Here is my code:
# creating a dataframe of 10 sample California zip codes to iterate through from database
zip_iterations<-sqlQuery(ch,"Select distinct zip from zip_codes where state='CA' limit 10",believeNRows="F")
# Calling the api to retrieve the JSON
json_file <- "http://api.openweathermap.org/data/2.5/weather?zip=**'MACRO VECTOR TO ITERATE'**
My goal is to go through the list of 10 zip codes in the dataframe by using a macro.
R doesn't use macros per se, but there are lots of alternative ways of doing what it sounds like you want to do. This version will return a character vector json_file, with each entry containing the body of the HTTP response for that zip code:
library("httr")
json_file <- character(0)
urls <- paste0("http://api.openweathermap.org/data/2.5/weather?zip=", zip_iterations)
for (i in seq_along(urls)) {
json_file[i] <- content(GET(urls[i]), as = "text")
}
You can then parse the resulting vector into a set of R lists using, for example, fromJSON() from the jsonlite package:
library("jsonlite")
lapply(json_file, fromJSON)
The result of that will be a list of lists.
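If you then want a single data frame rather than a list of lists, you can pull out the fields you need. The sketch below assumes the standard OpenWeatherMap fields name and main$temp; note that the real API also requires an appid key appended to each URL:
library(jsonlite)
weather_list <- lapply(json_file, fromJSON)
# flatten the parsed responses into one data frame
weather_df <- do.call(rbind, lapply(weather_list, function(w) {
  data.frame(city = w$name, temp = w$main$temp, stringsAsFactors = FALSE)
}))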
