I am using the code below to pull a .csv file from Microsoft Outlook into R for routine data manipulation, and I keep getting the following error (specifically after running the results lines):
<checkErrorInfo> 80020009
No support for InterfaceSupportsErrorInfo
checkErrorInfo -2147352567
Error: Exception occurred.
Similar posts on Stack Overflow suggested adding Sys.sleep() to fix this issue, allowing the system adequate time to search through e-mail subjects. After adding Sys.sleep() with various durations (ranging from 5 to 50 seconds), I'm still getting this error. Any suggestions or advice?
# Load in dataset from e-mail
library(RDCOMClient)  # provides COMCreate()

outlook_app <- COMCreate("Outlook.Application")
search <- outlook_app$AdvancedSearch(
  "Inbox",
  "urn:schemas:httpmail:subject = 'Outcome Information to Date'"
)
Sys.sleep(5)
results <- search$Results()
Sys.sleep(5)

# Keep the message received today (Outlook stores dates as days since 1899-12-30)
for (i in 1:results$Count()) {
  if (as.Date("1899-12-30") + floor(results$Item(i)$ReceivedTime()) == Sys.Date()) {
    email <- results$Item(i)
  }
}

# Save the first attachment to a temporary file
attachment_file <- tempfile()
email$Attachments(1)$SaveAsFile(attachment_file)

# Save outcome data in a data frame
outcomedata <- read.csv(attachment_file)
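Since AdvancedSearch runs asynchronously, a fixed Sys.sleep() can still fire too early. A possible workaround (an untested sketch; the retry limit and polling interval below are arbitrary assumptions) is to poll the result count until something comes back instead of sleeping once:
wait_for_results <- function(search, max_tries = 30, interval = 2) {
  for (attempt in seq_len(max_tries)) {
    results <- search$Results()
    if (results$Count() > 0) return(results)  # the search has found something
    Sys.sleep(interval)                       # otherwise wait and re-poll
  }
  stop("Search returned no results within the allotted time")
}

results <- wait_for_results(search)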
I'm getting an error when running getOptionChain from the quantmod package.
The code should get option-chain data and subset it into a new list that contains only the puts.
The error I get is: Error in .Date(Exp/86400) : could not find function ".Date"
The same code sometimes runs without error. If I shorten the list of symbols, there's no error, but as far as I can tell the error is not tied to a specific symbol, because I have managed to run each of them successfully. It seems like a random but frequent error.
All symbols have weekly expirations and the desired output is the next weekly expiration, so from my understanding there's no need to specify an expiration date.
library(quantmod)
library(xts)
Symbols <- c("AA","AAL","AAOI","AAPL","ABBV","ABC","ABNB","ABT","ACAD","ACB","ACN","ADBE","ADI","ADM","ADP",
"ADSK","AEO","AFL","AFRM","AG","AGNC","AHT","AIG","AKAM","ALGN","AMAT","AMBA","AMC","AMD","AMGN",
"AMPX","AMRN","AMRS","AMZN","ANET","ANF","ANY","APA","APO","APPH","APPS","APRN","APT","AR","ARVL")
Options.20221118 <- lapply(Symbols, getOptionChain)
names(Options.20221118) <- Symbols
only_puts_list <- lapply(Options.20221118, function(x) x$puts)
After upgrading to R 4.2.2, the issue was fixed.
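If upgrading is not an option, a hedged workaround (my suggestion, not from the thread) is to skip symbols whose download fails rather than letting one error abort the whole lapply():
# Sketch: wrap each download in tryCatch() so a failing symbol yields NULL
safe_chain <- function(sym) {
  tryCatch(getOptionChain(sym),
           error = function(e) {
             message("Skipping ", sym, ": ", conditionMessage(e))
             NULL
           })
}

Options.20221118 <- lapply(Symbols, safe_chain)
names(Options.20221118) <- Symbols
Options.20221118 <- Filter(Negate(is.null), Options.20221118)  # drop failures
only_puts_list <- lapply(Options.20221118, function(x) x$puts)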
I am trying to extract the body of a few outlook emails with their email subject containing the keyword "Permission" using RDCOMClient in R.
This is the code that I have written.
library(RDCOMClient)  # provides COMCreate()

OutApp <- COMCreate("Outlook.Application")
OutlookNameSpace <- OutApp$GetNameSpace("MAPI")
folderName <- "Inbox"
search <- OutApp$AdvancedSearch(folderName, "urn:schemas:httpmail:subject like '%Permission%'")
results <- search$Results()
body <- c()
for (i in 1:results$Count()) {
  body <- c(body, results$Item(i)$Body())
}
When I run the code line by line, I am able to obtain the character vector body without error.
However, when I run the entire chunk at once, an error is encountered.
< CheckErrorInfo > 80020009
No support for InterfaceSupportsErrorInfo
CheckErrorInfo -2147352567
Error: Exception occurred.
I have tried adding Sys.sleep(1), as suggested in "Running Excel macros from R through RDCOMClient, error -2147418111", both outside and inside the for loop, but I still get the same error.
Ultimately, I would like to run this script automatically using source().
Could someone please help me understand why this error occurs, and how I can resolve it?
In addition, if I would like to access a shared inbox instead of my personal inbox, how should I change folderName so that the search is done in the correct mailbox?
Thank you!
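On the shared-inbox part: an untested sketch, based on the Scope argument of AdvancedSearch accepting a quoted folder path. "Shared Mailbox Name" below is a placeholder for the display name shown in your Outlook folder pane.
# Placeholder mailbox name -- replace with the display name from Outlook;
# the R string renders as '\\Shared Mailbox Name\Inbox'
folderName <- "'\\\\Shared Mailbox Name\\Inbox'"
search <- OutApp$AdvancedSearch(
  folderName,
  "urn:schemas:httpmail:subject like '%Permission%'"
)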
So I have the following code:
data_load <- function() {
  links_data <- readline(prompt = "Please specify the location of the data for links: ")
  links <- read.csv(links_data, header = TRUE, as.is = TRUE)
  nodes_data <- readline(prompt = "Please specify the location of the data for nodes: ")
  nodes <- read.csv(nodes_data, header = TRUE, as.is = TRUE)
  print("Data Loaded")
  return(list(links = links, nodes = nodes))
}

data <- data_load()
links <- data$links
nodes <- data$nodes
When I use Load All I don't encounter a problem, but when I try Install and Restart, Clean and Rebuild, or Test/Check Package, I get the following:
ERROR: lazy loading failed for package ‘<package name>’
Exited with status 1.
Since there is no data available just yet, how can I solve this problem?
I am using RStudio.
Sorry if it is a bad question, but I couldn't solve it.
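A likely explanation (inferred from the error message, not confirmed by the package source): the calls after the function definition sit at the top level of a file under R/, so they execute while the package is being built and lazy-loaded, and readline() cannot prompt during a non-interactive build (it returns an empty string), so read.csv() fails. A minimal sketch of a fix is to ship only the definition in the package and defer the call to runtime:
# R/data_load.R -- only the definition lives in the package; nothing runs at build time
data_load <- function() {
  links_data <- readline(prompt = "Please specify the location of the data for links: ")
  links <- read.csv(links_data, header = TRUE, as.is = TRUE)
  nodes_data <- readline(prompt = "Please specify the location of the data for nodes: ")
  nodes <- read.csv(nodes_data, header = TRUE, as.is = TRUE)
  message("Data Loaded")
  list(links = links, nodes = nodes)
}

# In the user's session, after loading the package -- not in a package source file:
# data <- data_load()
# links <- data$links
# nodes <- data$nodes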
I have built the following function, which I run in a loop over a list of ~1 million URLs in order to do some web scraping, but after a while my memory is full and R shuts down.
library(tm.plugin.webmining)

getContents <- function(url) {
  out <- tryCatch(
    {
      extractContentDOM(url, asText = FALSE, threshold = 0.5)
    },
    error = function(cond) {
      message(paste("URL does not seem to exist:", url))
      message("Here's the original error message:")
      message(cond)
      return(NA)
    },
    warning = function(cond) {
      message(paste("URL caused a warning:", url))
      message("Here's the original warning message:")
      message(cond)
      return(NA)
    },
    finally = {
      message(paste("Processed URL:", url))
    }
  )
  return(out)
}

# Save text
a <- getContents("http://www.nytimes.com/")
If I do so, I always run into a memory-management problem. Basically, I loop through the list of URLs, extract the text, and analyze it.
Every time I run the function, it increases the used memory by a few MB. When I then try to release the memory to the system with
rm(list = ls())
gc()
the task manager does not show that any memory has been given back to the system; after a while the system shuts down because there is no available memory left. I also tried restarting R, but there seems to be no way to restart R from within a loop so that the loop continues afterwards.
I've already read a lot about this topic, but I haven't found a proper answer to the problem yet.
Thanks in advance!
If you have such a large web-scraping job, I would encourage you to store the results on the hard drive as you work through each site, instead of in memory. At least, that is what I've done in the past, and it has worked really well.
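A minimal sketch of that disk-based approach, assuming a character vector urls and the getContents() function from the question (the output directory name is an arbitrary choice):
dir.create("scraped", showWarnings = FALSE)
for (i in seq_along(urls)) {
  txt <- getContents(urls[i])
  if (!identical(txt, NA)) {
    writeLines(as.character(txt), file.path("scraped", paste0(i, ".txt")))
  }
  rm(txt)  # drop the reference before the next iteration
}
Alternatively, running each batch of URLs in a fresh R subprocess (for example with the callr package) guarantees that memory is returned to the operating system when the subprocess exits.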
I am using R and the GEOquery package to download a set of GEO profiles. To do this, I use the following instructions:
library(Biobase)
library(GEOquery)
gdsAcc <- getGEO('GDS1245', destdir = ".")
which downloads GDS1245.soft.gz into the specified directory.
The problem is that some GEO profiles have been removed, so when I use the above instructions in a loop, I sometimes end up with something like:
gdsAcc <- getGEO('GDS450', destdir = ".")
In this last case, the profile GDS450 does not exist, so it throws an error and the program stops. I would like to know how I can catch that error so that, if a profile does not exist, the program continues looking for the other profiles.
My algorithm is something like:
for (i in 1:length_GEO_profiles) {   # length_GEO_profiles: number of profiles to fetch
  disease <- GEOname                 # GEOname: the i-th profile identifier
  gdsName <- paste("GDS", disease, sep = "")
  gdsAcc <- getGEO(gdsName, destdir = ".")
}
Any help?
Thanks
You should look at try and tryCatch. Here's an example to get you started:
for (i in 1:3) {
  if (i == 1)
    gdsAcc <- try(getGEO('GDS450', destdir = "."))
  cat(i, "\n")
}
If you want to do something with the error, then use an if statement:
if (inherits(gdsAcc, "try-error")) cat("HELP")  # inherits() is safer than comparing class() directly
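A tryCatch() version of the same idea (a sketch, using the GDS450 example from the question): return NA for a missing profile so the surrounding loop keeps going.
gdsAcc <- tryCatch(
  getGEO('GDS450', destdir = "."),
  error = function(e) {
    message("Download failed: ", conditionMessage(e))
    NA  # sentinel value so the loop can continue
  }
)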
Related questions
Exception handling in R
Equivalent of "throw" in R
catching an error and then branching logic