Slow or hanging file.choose() in R

When I have a lot of data loaded in R, I run into difficulties opening and choosing a new file via file.choose() and then loading it via read.csv(). Often I don't even get that far, because file.choose() hangs and R "crashes", reporting something like "an unidentified error occurred and R must restart".
I'm using RStudio and running this on Windows 7. The hardware is up to date.
Could someone point out why this is happening and what a remedy would be? Are there other options for selecting a file? I know I can insert the path directly into the read.csv() call, but the file is different every time.
EDIT:
The error just happened again. I cannot reproduce it on demand; it only occurs, with high likelihood, when the conditions for it are met.
The error reads: R Session Aborted. R encountered a fatal error. The session was terminated. And in a window: "Start New Session".
EDIT 2:
Let me rephrase my question: is there another option, such as a command or a package, that handles choosing a file the way file.choose() does?
Since the error cannot be reproduced, I cannot expect anyone to give a definitive diagnosis. But if this has happened to someone in the past and they solved it, I would like to hear about it. Thanks.
EDIT 3: Further to the error: I have just spotted a sentence in red in the Console: Error: Unable to provide connection with R
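For the rephrased question in EDIT 2: yes, there are a few alternatives to file.choose(), sketched below. The captions are illustrative; which option works depends on your platform and on whether you are inside RStudio.
# Windows only: the native file dialog from utils, with multi-selection off.
path <- utils::choose.files(caption = "Select a CSV file", multi = FALSE)
# Cross-platform, via Tcl/Tk (ships with most R installations):
path <- tcltk::tk_choose.files(caption = "Select a CSV file")
# RStudio only (needs the rstudioapi package):
path <- rstudioapi::selectFile(caption = "Select a CSV file")
dat <- read.csv(path)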

Related

BiocParallel error: cannot open the connection, how do I fix it?

I'm trying to use the package bambu to quantify gene counts from bam files. I am using my university's HPC, so I have written an R script and a batch submission file to launch it.
When the script gets to the point of running the bambu function, it gives the following error:
Start generating read class files
|                    | 0%
[W::hts_idx_load2] The index file is older than the data file: ./results/minimap2/KD_R1.sorted.bam.bai
[W::hts_idx_load2] The index file is older than the data file: ./results/minimap2/KD_R3.sorted.bam.bai
[W::hts_idx_load2] The index file is older than the data file: ./results/minimap2/WT_R1.sorted.bam.bai
[W::hts_idx_load2] The index file is older than the data file: ./results/minimap2/WT_R2.sorted.bam.bai
|================== | 25%
Error: BiocParallel errors
element index: 1, 2, 3
first error: cannot open the connection
In addition: Warning message:
stop worker failed:
attempt to select less than one element in OneIndex
Execution halted
So it looks like BiocParallel isn't happy and cannot open a certain connection, but I'm not sure how to fix this?
This is my R script:
#Bambu R script
#load libraries
library(Rsamtools)
library(bambu)
#Creating files
bamFiles<- Rsamtools::BamFileList(c("./results/minimap2/KD_R1.sorted.bam",
                                    "./results/minimap2/KD_R2.sorted.bam",
                                    "./results/minimap2/KD_R3.sorted.bam",
                                    "./results/minimap2/WT_R1.sorted.bam",
                                    "./results/minimap2/WT_R2.sorted.bam",
                                    "./results/minimap2/WT_R3.sorted.bam"))
annotation<-prepareAnnotations("./ref_data/Homo_sapiens.GRCh38.104.chr.gtf")
fa.file<-"./ref_data/Homo_sapiens.GRCh38.dna.primary_assembly.fa"
#Running bambu
se<- bambu(reads=bamFiles, annotations=annotation, genome=fa.file,ncore=4)
se
seGene<- transcriptToGeneExpression(se)
#Saving files
save.file<-tempfile(fileext=".gtf")
writeToGTF(rowRanges(se),file=save.file)
save.dir <- tempdir()
writeBambuOutput(se,path=save.dir,prefix="Nanopore_")
writeBambuOutput(seGene,path=save.dir,prefix="Nanopore_")
If you have any ideas on why this happens, it would be so helpful! Thank you
I think that @Chris has a good point. Under the hood it seems likely that bambu is using htslib, based on those warnings. While they may indeed only be warnings, I would like to know what the results look like if you run this interactively.
This question is hard to answer right now as it's missing some information (what do the files look like, a minimal reproducible example, etc.). But in the meantime here are some possibly useful questions for figuring it out:
what does bamFiles look like? Does it have the right number of read records? Do all of those files have nonzero read records? Are any suspiciously small?
What are the timestamps on the .bai vs. .bam files (e.g. ls -lh ./results/minimap2/)? Are they about what you'd expect, or are they wonky? Are any of them (say, ./results/minimap2/WT_R2.sorted.bam.bai) weirdly small? (The sketch after these questions covers this and the previous check.)
What happens when you run it interactively? Where does it fail? You say it's at the bambu() call, but how do you know that?
What happens when you run bambu() with ncore = 1?
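Untested, but something along these lines would cover the first two checks (the paths are copied from your script):
library(Rsamtools)
bams <- c("./results/minimap2/KD_R1.sorted.bam",
          "./results/minimap2/KD_R2.sorted.bam",
          "./results/minimap2/KD_R3.sorted.bam",
          "./results/minimap2/WT_R1.sorted.bam",
          "./results/minimap2/WT_R2.sorted.bam",
          "./results/minimap2/WT_R3.sorted.bam")
# Read counts per file: zero or suspiciously small counts are a red flag.
lapply(bams, countBam)
# Sizes and modification times of the BAMs and their indexes:
file.info(c(bams, paste0(bams, ".bai")))[, c("size", "mtime")]
# If the only problem turns out to be stale .bai files, re-indexing
# should silence the htslib warnings:
# indexBam(bams)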
It seems very likely that this is due to a problem with the files, and that the error is only bubbling up to the top at the BiocParallel step. Many utilities have an annoying habit of happily accepting an empty file, only to fail confusingly, without informative error messages, when asked to do something with it.
You might also consider raising an issue with the developers.
(Why the warning is only possibly a problem: the index file sometimes has a timestamp like that for very small alignment files that are generated and indexed programmatically, where the indexing step is near-instantaneous.)

Text mining with tm in R antiword error

So I'm rather new to R, and I'm learning how to mine text from this handy website: https://eight2late.wordpress.com/2015/05/27/a-gentle-introduction-to-text-mining-using-r/
I have my own set of .doc, .docx, and .xlsx files that I'm trying to mine. They're located in a folder in my working directory called 'files', but I have already encountered an error after writing just a few lines of code.
The code I have so far is:
library(tm)
library(readtext)
data = readtext('files')
At this point, after waiting for 25 seconds or so, I get the error:
Error: System call to 'antiword' failed (1): The Big Block Depot is damaged
and the code stops running there.
I have tried searching online for solutions, but it seems to be a fairly rare error, and I only found one possible solution at https://github.com/ropensci/antiword/issues/1, which did not work for me.
This solution suggested that one of my files was corrupt, and proposed using the code
fixInNamespace(antiword, pos="package:antiword")
to change the error to a warning so it would not interrupt the reading of the files. I tried that, and at first it raised the error
Error in as.environment(pos):
no item called "package:antiword" on the search list
After that, I loaded the antiword library with library(antiword) and changed the stop( to a warning(. However, when I ran the data = readtext('files') line again, it immediately raised the error
Error in is_windows() : could not find function "is_windows"
I'm at a loss here! Any help would be appreciated. Should I be using another package in this case?
I had the same problem when I tried to read a .doc file into R. I also used the readtext library. What helped me was converting the Word documents I was trying to read from .doc to .docx. When I ran the same code afterwards, it worked.
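To locate the offending file rather than converting everything blindly, you could also read the documents one at a time and record the failures; a sketch (assuming the 'files' folder from the question):
library(readtext)
paths <- list.files("files", full.names = TRUE)
docs <- lapply(paths, function(p) {
  tryCatch(readtext(p),
           error = function(e) {
             message("Failed on ", p, ": ", conditionMessage(e))
             NULL
           })
})
# Keep only the files that could be read:
ok <- !vapply(docs, is.null, logical(1))
docs <- do.call(rbind, docs[ok])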

How do I get pretty error messages in RStudio?

When working in RStudio (versions 0.99.482 and 1.0.136), if I source a file and an error occurs, I just get the error message without any context. If the file is longer than a few lines of code, that information is largely useless. What I want to know when an error occurs is the following:
What function threw the error? Preferably the function I called from my script, not something buried deep inside a package.
On what line of my file did the error occur?
This seems like a very basic thing for a scripting language to do, and yet I am unable to find an easy solution. Yes, I am aware of the traceback() function, but (a) it is a hassle to call it every time there is an error, and (b) the output is massive and not that helpful.
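Not a full solution, but two base-R settings get part of the way there. A sketch (my_script.R is a hypothetical file name):
options(keep.source = TRUE)           # keep source references
options(show.error.locations = TRUE)  # append file#line to error messages
# Print a traceback automatically instead of calling traceback() by hand;
# traceback(2) skips the frames belonging to the handler itself.
options(error = function() traceback(2))
source("my_script.R")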

R - "Browser()" on Error?

I have been using some R libraries to analyze large data sets recently, and I find myself frustrated when I wait several hours for an analysis, only to have it end with some trivial error, such as not having installed a prerequisite library, or one of my parameters being wrong. Then I have to start all over, rerun the exact same analysis, regenerate the same variables it had when it died, and wait a long time. Please note that these are not handled exceptions; they are fatal errors from R.
This is just a thought, and perhaps it is too good to be true, so please at least explain why it wouldn't work: is there any way to make R execute browser() in the current environment whenever it hits a fatal error? For example, say it is executing a script and encounters require(notInstalledYet). Instead of just dying and losing all the variables in memory, it would be great if it dropped into browser() at the place it died, so that I could at least save the variables and, at best, fix the problem (e.g. install the library) and try again.
You can change the error option to open a browser on error:
options(error=browser)
the default is
options(error=NULL)
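Two related settings from base R are worth knowing. recover() gives you a menu of call frames to browse into, and dump.frames() saves the frames to disk, which suits long batch runs:
# Interactive sessions: choose which call frame to browse on error.
options(error = recover)
# Non-interactive (batch) runs: write the frames to last.dump.rda;
# inspect later with load("last.dump.rda"); debugger(last.dump).
options(error = quote(dump.frames("last.dump", to.file = TRUE)))
# Restore the default behaviour:
options(error = NULL)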

Error in doWithOneRestart

I have a long, complex program (>7000 lines of code) with many nested functions, each of them enclosed in a separate tryCatch. The code works perfectly except for a "pseudo-error":
Error in doWithOneRestart(return(expr), restart): no function to return from, jumping to top level
doWithOneRestart() is internal to R, part of the tryCatch machinery. I call it a "pseudo-error" because the tryCatch should lead to stop() if an error occurs and write the error message to a log file. Instead, this "error" does not stop the program (in fact it does not influence it at all); it shows up only on the console and is never written to the log file. The usual debugging procedures did not help, because the error is not reproducible (!): it may occur at different processing stages of the program. Setting the warn option to 0 or -1 does not help.
Since the program does the job, this error is not critical. But I would like to understand what is happening. Maybe someone has already experienced the same problem, or could come up with an original debugging strategy ...
Update (28.10.2013):
I found out where the problem came from. It is linked to Java heap overflow (I was using the xlsx package to read Excel files). Among many other problems: although the connection to the Excel file was closed (definitely!), the system still considered it an unused connection (shown in traceback()), tried to close it, and found it was already closed: you get the "pseudo-error" described above, and never at exactly the same moment (hence not reproducible). Calling the garbage collector gc() at the right place solved the problem. The script has now been running stably for several days.
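For reference, the heap-related part looks roughly like this (a sketch, not the original code; the file names are made up). The java.parameters option must be set before rJava, and hence before xlsx, is loaded:
options(java.parameters = "-Xmx4g")  # enlarge the JVM heap first
library(xlsx)
for (f in c("one.xlsx", "two.xlsx")) {  # hypothetical file names
  dat <- read.xlsx(f, sheetIndex = 1)
  # ... process dat ...
  gc()  # collect garbage so stale connections are cleaned up promptly
}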
Advice from Peter Dalgaard on R-help.
The easiest way to get that message is to execute return() from the
top level:
return(1)
You might be trying to return() from a source()d file. Or maybe you are source()ing something that was intended to be inside a function body (extraneous '}' characters can do that).
The usual debugging strategies should work: calling traceback() after the error, or setting options(error = recover).
