I have built this function, which I run in a loop over a list of ~1 million URLs to do some web scraping, but after a while my memory is full and R shuts down.
library(tm.plugin.webmining)

getContents <- function(url) {
  out <- tryCatch(
    {
      extractContentDOM(url, asText = FALSE, threshold = 0.5)
    },
    error = function(cond) {
      message(paste("URL does not seem to exist:", url))
      message("Here's the original error message:")
      message(cond)
      return(NA)
    },
    warning = function(cond) {
      message(paste("URL caused a warning:", url))
      message("Here's the original warning message:")
      message(cond)
      return(NA)
    },
    finally = {
      message(paste("Processed URL:", url))
    }
  )
  return(out)
}

# save text
a <- getContents("http://www.nytimes.com/")
If I do so, I always run into memory-management problems. Basically I loop through the list of URLs, extract the text, and analyze it.
Every time I run the function, it increases the memory in use by a few MB. When I then try to release the memory to the system with
rm(list = ls())
gc()
the task manager does not show that any memory has been given back to the system; after a while there is a system shutdown because there is no available memory left. I also tried restarting R, but there seems to be no way to restart R from within a loop so that the loop continues afterwards.
I've already read a lot about this topic, but I haven't found a proper answer to the problem yet.
Thanks in advance!
For a web-scraping job this large, I would encourage you to write the results to the hard drive as you go through each site, instead of keeping them in memory. At least that is what I've done in the past, and it has worked really well.
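For instance, a minimal sketch of that approach, reusing the getContents() function from the question (the url_list.txt file, the scraped/ output directory, and the gc() interval are assumptions for illustration):

urls <- readLines("url_list.txt")           # hypothetical file, one URL per line
dir.create("scraped", showWarnings = FALSE)

for (i in seq_along(urls)) {
  txt <- getContents(urls[i])
  # write each result straight to its own file instead of growing a list in RAM
  writeLines(as.character(txt), file.path("scraped", paste0(i, ".txt")))
  rm(txt)                                   # drop the reference so it can be collected
  if (i %% 1000 == 0) gc()                  # collect garbage every 1000 URLs
}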
I'm writing a function for writing some information to a file. To have finer control I first open a connection before calling writeLines().
As has been stated in many places (e.g. How and when should I use on.exit?), the function on.exit() can be very helpful for resetting settings in case an unforeseen event makes a function stop.
I hope you agree that writing files can certainly be considered a somewhat risky activity (e.g. running out of disk space, …).
So I was thinking of using on.exit() to make sure the connection gets closed if anything during writeLines() causes my function to stop with an error. However, when I do so I get an error right after writeLines() has run!
Does anybody have an explanation?
The workarounds I found are a) don't use on.exit() or b) wrap the close in try(). But I wonder whether the second option would really be helpful if an unforeseen event made writeLines() fail. Even checking whether the connection is still open, to avoid closing it twice, didn't help.
Surprisingly, so far I couldn't find other examples of people using on.exit() to make sure the current connection gets closed when/after writing text.
Here is an example:
txt1 <- c("ABCDEF","abcdefgh")
fxWrite <- function(txt, fileName, opt=0, eol="\n") {
con <- file(fileName)
open(con, open="wb")
if(identical(opt,1)) on.exit(try(close(con), silent=TRUE), add=TRUE)
if(identical(opt,2)) on.exit(close(con), add=TRUE) # regular on.exit() call
writeLines(as.character(txt), con=con, sep=eol)
if(isOpen(con=con)) close(con)
}
fxWrite(txt1, "test1.txt", 0) # no on.exit()
fxWrite(txt1, "test2.txt", 1)
fxWrite(txt1, "test3.txt", 2) # regular on.exit(),
# with opt=2 the file gets written, BUT : Error in close.connection(con) : invalid connection
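For comparison, a sketch of the common idiom (not taken from the post): register the close with on.exit() immediately after opening and never close the connection manually, so close() only ever runs once.

fxWrite2 <- function(txt, fileName, eol = "\n") {
  con <- file(fileName)
  open(con, open = "wb")
  on.exit(close(con), add = TRUE)   # the only close(); runs on error or on normal exit
  writeLines(as.character(txt), con = con, sep = eol)
  # no explicit close(con) here, so on.exit() never hits an already-closed connection
}

fxWrite2(c("ABCDEF", "abcdefgh"), "test4.txt")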
We have a number of MS Access databases on a server; these are copies from remote locations and are updated overnight. We collate some of the data from these machines on a daily basis for reporting purposes. Sometimes the overnight update fails, meaning we don't have access to all of the databases, so I am attempting to write an R script which will test whether we can connect (using a list of the database paths), and output an updated version of the list including only those which we can connect to. This will then be used to run a further script which only updates the data related to the available databases.
This is what I have so far (I am new to R but reasonably proficient in SAS and SQL; I am attempting to use R both as a learning exercise and for potential cost savings):
library(RODBC)   # odbcDriverConnect, sqlQuery

{
  # Create Store data locations listing
  A = matrix(c(1000, 1, "One",   "//Server/Comms1/Access.mdb",
               2000, 2, "Two",   "//Server/Comms2/Access.mdb",
               3000, 3, "Three", "//Server/Comms3/Access.mdb"),
             nrow = 3, ncol = 4, byrow = TRUE)
  # Add column names
  colnames(A) <- c("Ref1", "Ref2", "Ref3", "Location")

  # Create summary for testing connections (Ref1 and Location)
  B <- A[, c(1, 4)]

  ConnectionTest <- function(Ref1, Location) {
    out <- tryCatch({
      ch <- odbcDriverConnect(paste("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=", Location))
      sqlQuery(ch, paste("select ", Ref1, " as Ref1, COUNT(variable) as Count from table"))
    },
    error = matrix(c(Ref1, 0), nrow = 1, ncol = 2, byrow = TRUE)
    )
    return(out)
  }

  # Run function, using 'B' to provide arguments
  C <- apply(B, 1, function(x) do.call(ConnectionTest, as.list(x)))

  # Convert to matrix and add column names
  D <- matrix(unlist(C), ncol = 2, byrow = TRUE)
  colnames(D) <- c("Ref1", "Count")
}
When I run the script I get the following error message:
Error in value[3L] : attempt to apply non-function
I am guessing this is because I am using tryCatch incorrectly inside the UDF?
Does anyone have any advice on what I am doing incorrectly, or even if this is the best way to do what I am attempting?
Thanks
(apologies if this is formatted incorrectly, having to post on my phone due to Stackoverflow posting being blocked)
Edit - I think I fixed the 'Error in value[3L]' issue by adding function(e) {} around the matrix function in the error part of the tryCatch.
The issue now is that the script just fails if it can't reach one of the databases, rather than running the matrix function. Do I need to add something else to make it ignore the error?
Edit 2 - it seems tryCatch does now work: it runs the alternate function upon error, but also shows warnings about the error, which makes sense.
As mentioned in the edits above, using 'function(e) {}' to wrap the matrix function in the error section of the tryCatch fixed the 'Error in value[3L]' issue, so the script now works, but it displays error messages if it can't access a particular channel. I am guessing the 'warning' section of the tryCatch can be used to adjust these as necessary.
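For reference, a sketch of the corrected handler as described in the edits (the message() call and exact query are illustrative; this is not necessarily the poster's final code):

library(RODBC)

ConnectionTest <- function(Ref1, Location) {
  tryCatch({
    ch <- odbcDriverConnect(paste0(
      "Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=", Location))
    sqlQuery(ch, paste("select", Ref1, "as Ref1, COUNT(variable) as Count from table"))
  },
  error = function(e) {
    message("Could not query ", Location, ": ", conditionMessage(e))
    # fall back to a placeholder row instead of stopping the whole script
    matrix(c(Ref1, 0), nrow = 1, ncol = 2, byrow = TRUE)
  })
}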
I am quite new to R and am trying to access some information on the internet, but am having problems with connections that don't seem to be closing. I would really appreciate it if someone here could give me some advice...
Originally I wanted to use the WebChem package, which in theory delivers everything I want, but when some of the output data is missing from the webpage, WebChem doesn't return any data from that page. To get around this, I have taken most of the code from the package but altered it slightly to fit my needs. This worked fine for about the first 150 usages, but now, although I have changed nothing, when I use the command read_html I get the warning message "closing unused connection 4 (http:.....". Although this is only a warning message, read_html doesn't return anything after this warning is generated.
I have written a simplified code, given below. This has the same problem
Closing R completely (or even rebooting my PC) doesn't seem to make a difference - the warning message now appears the second time I use the code. I can run the queries one at a time outside of the loop with no problems, but as soon as I try to use the loop, the error occurs again on the 2nd iteration.
I have tried to vectorise the code, and again it returned the same error message.
I tried showConnections(all=TRUE), but only got connections 0-2 for stdin, stdout, stderr.
I have tried searching for ways to close the html connection, but I can't define the url as a con, and close(qurl) and close(ttt) also don't work. (They return the errors no applicable method for 'close' applied to an object of class "character" and no applicable method for 'close' applied to an object of class "c('xml_document', 'xml_node')", respectively.)
Does anybody know a way to close these connections so that they don't break my routine? Any suggestions would be very welcome. Thanks!
PS: I am using R version 3.3.0 with RStudio Version 0.99.902.
CasNrs <- c("630-08-0","463-49-0","194-59-2","86-74-8","148-79-8")
tit = character()
for (i in 1:length(CasNrs)){
CurrCasNr <- as.character(CasNrs[i])
baseurl <- 'http://chem.sis.nlm.nih.gov/chemidplus/rn/'
qurl <- paste0(baseurl, CurrCasNr, '?DT_START_ROW=0&DT_ROWS_PER_PAGE=50')
ttt <- try(read_html(qurl), silent = TRUE)
tit[i] <- xml_text(xml_find_all(ttt, "//head/title"))
}
After researching the topic I came up with the following solution:
url <- "https://website_example.com"
url = url(url, "rb")
html <- read_html(url)
close(url)
# + Whatever you wanna do with the html since it's already saved!
I haven't found a good answer for this problem. The best work-around that I came up with is to include the function below, with Secs = 3 or 4. I still don't know why the problem occurs or how to stop it without building in a large delay.
CatchupPause <- function(Secs){
Sys.sleep(Secs) #pause to let connection work
closeAllConnections()
gc()
}
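For example, a hypothetical placement of CatchupPause() inside the loop from the question above (the guard against a failed read_html is also added for illustration):

library(xml2)   # read_html, xml_text, xml_find_all

CasNrs <- c("630-08-0", "463-49-0", "194-59-2", "86-74-8", "148-79-8")
baseurl <- 'http://chem.sis.nlm.nih.gov/chemidplus/rn/'
tit <- character()

for (i in seq_along(CasNrs)) {
  qurl <- paste0(baseurl, CasNrs[i], '?DT_START_ROW=0&DT_ROWS_PER_PAGE=50')
  ttt <- try(read_html(qurl), silent = TRUE)
  if (!inherits(ttt, "try-error")) {
    tit[i] <- xml_text(xml_find_all(ttt, "//head/title"))
  }
  CatchupPause(3)   # pause and flush connections between requests
}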
I found this post as I was running into the same problems when I tried to scrape multiple datasets in the same script. The script would get progressively slower, and I felt it was due to the connections. Here is a simple loop that closes out all of the connections.
for (i in seq_along(df$URLs)) {
  # ... scrape df$URLs[i] here ...
  closeAllConnections()   # takes no arguments; closes whatever is still open
}
In a nutshell, I am trying to parallelise my whole script over dates using snow and adply but continually get the error below.
Error in unserialize(socklist[[n]]) : error reading from connection
In addition: Warning messages:
1: <anonymous>: ... may be used in an incorrect context: ‘.fun(piece, ...)’
2: <anonymous>: ... may be used in an incorrect context: ‘.fun(piece, ...)’
I have set up the parallelisation process in the following way:
library(parallel)  # detectCores
library(doSNOW)    # registerDoSNOW; also attaches snow for makeCluster/clusterExport
library(plyr)      # adply

Cores = detectCores(all.tests = FALSE, logical = TRUE)
cl = makeCluster(Cores, type="SOCK")
registerDoSNOW(cl)
clusterExport(cl, c("Var1","Var2","Var3","Var4"), envir = environment())
exposureDaily <- adply(.data = dateSeries, .margins = 1, .fun = MainCalcFunction,
                       .expand = TRUE, Var1, Var2, Var3,
                       Var4, .parallel = TRUE)
stopCluster(cl)
Where dateSeries might look something like
> dateSeries
marketDate
1 2016-04-22
2 2016-04-26
MainCalcFunction is a very long script with many of my own functions contained within it. As the script is so long, reproducing it wouldn't be practical, and a hypothetical small function would defeat the purpose, as I have already got this methodology to work with other, smaller functions. I can say that within MainCalcFunction I call all my libraries, necessary functions, and a file containing all other variables (aside from those exported above), so that I don't have to export a long list of libraries and other objects.
MainCalcFunction can run successfully in its entirety over 2 dates using adply without parallelisation, which tells me that it is not a bug in the code itself that is causing the parallelisation to fail.
Initially I thought (from experience) that the parallelisation over dates was failing because there was another function within the code that utilised parallelisation; however, I have subsequently rebuilt the whole code to make sure that there was no such function.
I have pored over the script with a fine-tooth comb to see if there was any place where I accidentally didn't export something that I needed, and I can't find anything.
Some ideas as to what could be causing the code to fail are:
The use of various option valuation functions in fOptions and rquantlib
The use of type sock
I am aware of this question already asked and also this question, and while the first question has helped me, it hasn't yet helped solve the problem. (Note: that may be because I haven't used it correctly, having mainly used loginfo("text") to track where the code is. Potentially, there is a way to change that so that I log warning and/or error messages instead?)
Please let me know if there is any other information I can provide to help in solving this. I would be so appreciative if someone could provide some guidance, as the code takes close to 40 minutes to run for a single day and I need to run it for close to a year, so parallelisation is essential!
EDIT
I have tried to implement the suggestion in the first question linked above by utilising the outfile option. Given I am using Windows, I have done this by including the following lines before exporting the key objects and running MainCalcFunction:
reportLogName <- paste("logout_parallel.txt", sep="")
addHandler(writeToFile,
file = paste(Save_directory,reportLogName, sep="" ),
level='DEBUG')
with(getLogger(), names(handlers))
loginfo(paste("Starting log file", getwd()))
mc<-detectCores()
cl<-makeCluster(mc, outfile="")
registerDoParallel(cl)
Similarly, at the beginning of MainCalcFunction, after having sourced my libraries and functions I have included the following to print to file:
reportLogName <- paste(testDate,"_logout.txt", sep="")
addHandler(writeToFile,
file = paste(Save_directory,reportLogName, sep="" ),
level='DEBUG')
with(getLogger(), names(handlers))
loginfo(paste("Starting test function ",getwd(), sep = ""))
In the MainCalcFunction function I have then put loginfo("text") statements at key junctures to tell me where the code has got to.
This has resulted in some text files being available after the code fails with the aforementioned error. However, these text files provide no more information on the cause of the error, only on where the failure occurred. This is despite having a tryCatch statement embedded in MainCalcFunction where, on any instance of an error, I have added the line logerror(e) at the end.
I am posting this answer in case it helps anyone else with a similar problem in the future.
Essentially, the error unserialize(socklist[[n]]) doesn't tell you a lot, so solving it is a matter of narrowing down the issue.
Firstly, be absolutely sure the code runs over several dates in non-parallel mode with no errors.
Ensure the parallelisation is set up correctly. There are some obvious initial errors that many other questions address, e.g. hidden parallelisation inside the code, which means parallelisation is occurring twice.
Once you are sure that there is no problem with the code and the parallelisation is set up correctly, start narrowing down. The issue is likely (unless something has been missed above) something in the code which isn't a problem when it is run in serial, but becomes a problem when run in parallel. The easiest way to narrow down is by setting outfile = "Log.txt" in whichever makeCluster call you use, e.g. cl <- makeCluster(cores - 1, outfile = "Log.txt"). Then add as many print("Point in code") statements to your function as needed to narrow down where the issue is occurring, as in the sketch below.
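A minimal sketch of that debugging setup (the cluster size, log file name, and the toy foreach loop are placeholders, not the original code):

library(doSNOW)    # registerDoSNOW; also attaches foreach and snow
library(parallel)  # detectCores

# outfile = "Log.txt" makes the workers' print() output visible after a crash
cl <- makeCluster(max(detectCores() - 1, 1), type = "SOCK", outfile = "Log.txt")
registerDoSNOW(cl)

res <- foreach(d = c("2016-04-22", "2016-04-26")) %dopar% {
  print(paste("Point in code: starting", d))   # captured in Log.txt
  out <- nchar(d)                              # stand-in for the real calculation
  print(paste("Point in code: finished", d))
  out
}

stopCluster(cl)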
In my case, the problem was the line jj = closeAllConnections(). This line works fine in non-parallel but breaks the code when run in parallel. I suspect it has something to do with the function closing all connections, including the socket connections that are required for the parallelisation.
Try running using plain R instead of running in RStudio.
I am running an optimization program I wrote in a multi-language framework. Because I rely on different languages to accomplish the task, everything must be standalone so it can be launched through a batch file. Everything has been going fine for 2-3 months, but I finally ran out of luck when one of the crucial parts of this process, executed through a standalone R script, encountered something new and gave me an error message. This error message makes everything screech to a halt despite my best efforts:
selMEM<-forward.sel(muskfreq, musk.MEM, adjR2thresh=adjR2)
Procedure stopped (adjR2thresh criteria) adjR2cum = 0.000000 with 0 variables (superior to -0.005810)
Error in forward.sel(muskfreq, musk.MEM, adjR2thresh = adjR2) :
No variables selected. Please change your parameters.
I know why I am getting this message: it is warning me that no variables are above the threshold I have programmed it to retain during a forward selection. Although this didn't happen in hundreds of runs, it's not that big a deal; I just need to tell R what to do next. This is where I am lost. After an exhaustive search through several posts (such as here), it seems that try() and tryCatch() are the way to go. So I have tried the following:
selMEM<-try(forward.sel(muskfreq, musk.MEM, adjR2thresh=adjR2))
if(inherits(selMEM, "try-error")) {
max<-0
cumR2<-0
adjR2<-0
pvalue<-NA
} else {
max<-dim(selMEM)[1]
cumR2<-selMEM$R2Cum[max]
adjR2<-selMEM$AdjR2Cum[max]
pvalue<-selMEM$pval[max]
}
The code after the problematic line works perfectly if I execute it line by line in R, but when I execute it as a standalone script from the command prompt, I still get the same error message and my whole process screeches to a halt before it executes what follows.
Any suggestions on how to make this work?
Note this in the try help:
try is implemented using tryCatch; for programming, instead of try(expr, silent = TRUE), something like tryCatch(expr, error = function(e) e) (or other simple error handler functions) may be more efficient and flexible.
Look to tryCatch, possibly:
selMEM <- tryCatch({
forward.sel(muskfreq, musk.MEM, adjR2thresh=adjR2)
}, error = function(e) {
message(e)
return(NULL)
})
if(is.null(selMEM)) {
max<-0
cumR2<-0
adjR2<-0
pvalue<-NA
} else {
max<-dim(selMEM)[1]
cumR2<-selMEM$R2Cum[max]
adjR2<-selMEM$AdjR2Cum[max]
pvalue<-selMEM$pval[max]
}
Have you tried setting the silent parameter to TRUE in the try() function?
max<-0
cumR2<-0
adjR2<-0
pvalue<-NA
try({
selMEM <- forward.sel(muskfreq, musk.MEM, adjR2thresh=adjR2)
max<-dim(selMEM)[1]
cumR2<-selMEM$R2Cum[max]
adjR2<-selMEM$AdjR2Cum[max]
pvalue<-selMEM$pval[max]
}, silent=T)