I am downloading a file from Azure into a connection. The download appears to go through, but unserializing it gives an error. Any ideas what might be going wrong, or how to go about debugging it?
con <- rawConnection(raw(0), "r+")
storage_download(AzureContainer, src="sourcefile.fst", dest=con)
unserialize(con)
Error in unserialize(con) : unknown input format
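A minimal sketch of one likely culprit, assuming the bytes arrived intact: a rawConnection that has just been written to must be rewound before reading, and a .fst file is not serialize() output, so unserialize() cannot parse it regardless:

```r
# Sketch: writing to a rawConnection leaves the position at the end,
# so unserialize() must be preceded by seek(con, 0) to rewind.
con <- rawConnection(raw(0), "r+")
serialize(list(rows = 1:3), con)   # stand-in for the downloaded bytes
seek(con, 0)                       # without this, "unknown input format"
obj <- unserialize(con)
close(con)
# Separately: a .fst file is a columnar on-disk format, readable with
# fst::read_fst() from a file path - unserialize() can never parse it.
```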
I am suddenly getting an error message when using tigris::counties(), even though the same code worked perfectly fine a few weeks ago. Can anyone replicate this error, or does anyone know of a workaround?
require(tigris)
counties_VT <- counties(state="VT",cb=T,class="sf")
Error Message:
Retrieving data for the year 2020
Error: Cannot open "C:\Users\QSMITH\AppData\Local\Temp\RtmpYZZH4z"; The source could be corrupt or not supported. See `st_drivers()` for a list of supported formats.
In addition: Warning message:
In unzip(file_loc, exdir = tmp) : error 1 in extracting from zip file
Thank you!
I am trying to import the data in my Excel file into R using the openxlsx library:
library(openxlsx)
data <- read.xlsx("datafile.xlsx", sheet = "Sheet1")
However, I get the following error:
Error in file(con, "r") : invalid 'description' argument
In addition: Warning message:
In unzip(xlsxFile, exdir = xmlDir) : error 1 in extracting from zip file
This error is thrown because your Excel file is open. Save and close the Excel file, then try again; it will work.
There is also another possibility: the XLSX file could be password protected. Removing the password can fix the error.
I think the best way to solve this problem is to reset the path to your data source. Do not include any non-English characters in the path.
setwd("C:\\Users\\your pathway (where you store datafile.xlsx)")
P.S.
RStudio 2021 does not seem friendly to non-English users ☺☺☺
I'm trying to read a SBML file (Test.xml) using the R package SBMLR. Below is the code I executed.
library(SBMLR)
file <- system.file("models","Test.xml",package = "SBMLR")
doc <- readSBML(file)
When I execute the 3rd line I get an error message saying:
Error in xmlEventParse(filename, handlers = sbmlHandler(),
ignoreBlanks = TRUE) : File does not exist
I have tried to read the file using the rsbml library as well, but I still get an error saying:
Error: File unreadable.
I'm following this guide at the moment. Any help regarding the issue is highly appreciated!
I'm learning R programming, using the book, "The Art of R Programming".
In chapter 3.2.3, Extended Example: Image Manipulation, the author Matloff uses a Mount Rushmore gray-scale image to illustrate that an image is stored as a matrix. He uses a library called pixmap, and I downloaded and installed the package.
> library(pixmap)
> mtrush1 <- read.pnm("mtrush1.pgm")
> mtrush1
Pixmap image
Type : pixmapGrey
Size : 194x259
Resolution : 1x1
Bounding box : 0 0 259 194
> plot(mtrush1)
This is what the book has written, and I tried to run this, but got the error message,
> library(pixmap)
> mtrush1 <- read.pnm("mtrush1.pgm")
Error in file(file, open = "rb") : cannot open the connection
In addition: Warning message:
In file(file, open = "rb") :
cannot open file 'mtrush1.pgm': No such file or directory
starting httpd help server ... done
What does "cannot open the connection" mean? And why does mtrush1.pgm not exist? How should I fix this? Any help is much appreciated.
Summary:
Add the argument cellres=1 to your function call and you should be fine.
Answer:
The second error you saw (Warning message: In rep(cellres, length = 2) : 'x' is NULL so the result will be NULL) appears because you haven't set the cellres argument, so cellres assumes its default value of NULL, hence the warning. For what you're working on, setting cellres to 1 will do the trick (though you can also pass in a two-element vector with unequal values and plot the resulting object to see how it affects your figure).
Note: Though it's a little late to be answering, I figure that since I had the same problem earlier today (and since Google was no help) a response was probably warranted.
This means that the file mtrush1.pgm is not in the current directory. You should either setwd() to the directory that contains the file, or specify the complete path in read.pnm.
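A self-contained sketch of the diagnosis (the PGM here is a hypothetical one-pixel file written to a temp folder):

```r
# Hypothetical recreation: put a tiny PGM in a temp folder, then show how
# the path determines whether R can open the file at all
dir <- tempdir()
writeLines(c("P2", "1 1", "255", "0"), file.path(dir, "mtrush1.pgm"))

file.exists("mtrush1.pgm")                  # FALSE unless you're already there
file.exists(file.path(dir, "mtrush1.pgm"))  # TRUE: the complete path works
setwd(dir)                                  # ...or move R to the file's folder
file.exists("mtrush1.pgm")                  # TRUE: the bare name now resolves
```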
For the file mtrush1.pgm, you can download it from http://heather.cs.ucdavis.edu/~matloff/
The file mtrush1.pgm and the R scripts from the book "The Art Of R Programming" can be found at this GitHub site.
I currently have a script in R that loops around 2000 times (for loop), and on each loop it queries data from a database using a url link and the read.csv function to put the data into a variable.
My problem is: when I query low amounts of data (around 10,000 rows) it takes around 12 seconds per loop, and that's fine. But now I need to query around 50,000 rows per loop, and the query time increases quite a lot, to 50 seconds or so per loop. That is still fine for me, but sometimes the server takes longer to send the data (≈75-90 seconds), and apparently the connection times out and I get these errors:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") : cannot open: HTTP status was '0 (nil)'
or this one:
Error in file(file, "rt") : cannot open the connection
In addition: Warning message:
In file(file, "rt") : InternetOpenUrl failed: 'The operation timed
out'
I don't get the same warning every time, it changes between those two.
Now, what I want is to avoid my program to stop when this happens, or to simply prevent this timeout error and tell R to wait more time for the data. I have tried these settings at the start of my script as a possible solution but it keeps happening.
options(timeout=190)
setInternet2(use=NA)
setInternet2(use=FALSE)
setInternet2(use=NA)
Any other suggestions or workarounds? Maybe skip to the next iteration when this happens, and store the loop numbers where the error occurred in a variable, so that those i's can be queried again at the end? The ideal solution would, of course, be to avoid the error altogether.
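That skip-and-record idea can be sketched like this (the query is stubbed out here; in the real script it would be the read.csv(url) call):

```r
# Stub standing in for read.csv(url_vect[i]); fails on even i to simulate timeouts
fetch <- function(i) if (i %% 2 == 0) stop("connection timed out") else data.frame(x = i)

failed <- integer(0)            # loop numbers that errored, to retry later
for (i in 1:6) {
  res <- tryCatch(fetch(i), error = function(e) NULL)
  if (is.null(res)) {           # query failed: remember i and move on
    failed <- c(failed, i)
    next
  }
  # ... process res as usual ...
}
failed                          # the iterations to re-query at the end
```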
A solution using the RCurl package:
You can change the timeout option using
curlSetOpt(timeout = 200)
or by passing it into the call to getURL
getURL(url_vect[i], timeout = 200)
A solution using base R:
Simply download each file using download.file, and then worry about manipulating those files later.
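A minimal offline sketch of that approach, using a file:// URL to a temporary CSV as a stand-in for the real query URL:

```r
# Stand-in for the real query URL: a local CSV exposed via file://
src <- tempfile(fileext = ".csv")
write.csv(data.frame(x = 1:3), src, row.names = FALSE)
url <- paste0("file://", src)

# Step 1: get the bytes onto disk first...
dest <- tempfile(fileext = ".csv")
download.file(url, destfile = dest, quiet = TRUE)

# Step 2: ...and parse them later, so a failed download never loses parsed work
dat <- read.csv(dest)
```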
I see this is an older post, but it still comes up early in the list of Google results, so...
If you are downloading via WinInet (rather than curl, internal, wget, etc.), options, including the timeout, are inherited from the system. Thus, you cannot set the timeout in R; you must change the Internet Explorer settings. See these Microsoft references for details:
https://support.microsoft.com/en-us/kb/181050
https://support.microsoft.com/en-us/kb/193625
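If your R build has libcurl support, one way to sidestep the WinInet limitation entirely (an assumption worth testing on your setup) is to switch the download method, under which options(timeout=) is honored:

```r
# Prefer libcurl for downloads when it's compiled in; unlike WinInet,
# the libcurl method respects R's own timeout option.
if (capabilities("libcurl")) {
  options(download.file.method = "libcurl")
  options(timeout = 190)
}
```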
This is only partial code, but you can modify it to your needs:
# connect to the website; on error, log it and fall through with NA
webpage <- tryCatch(
  getURL(url_vect[i]),
  error = function(e) {
    message("Request ", i, " failed: ", conditionMessage(e))
    NA   # sentinel value; test with is.na(webpage) and skip this iteration
  }
)
In my case, url_vect[i] was one of the URLs I was copying information from. Sadly, this will increase the time you have to wait for the program to finish.
UPDATED: tryCatch how-to example