I am trying to use the easyPubMed package in R to download content from PubMed in txt format. I am running the following example:
library(easyPubMed)
dami_on_pubmed <- get_pubmed_ids("Damiano Fantini[AU]")
However, at this point I get the following error:
Error in url(myPubmedURL, open = "rb") : https:// URLs are not supported
How can I solve this problem?
Thanks
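For what it's worth, this error usually means the R build predates native https support in url() connections (added around R 3.2.0), so upgrading R is the first thing to try. As a workaround sketch, the same search can be sent to the NCBI E-utilities endpoint with httr, which handles https itself; the endpoint and parameters below follow the public E-utilities documentation, not easyPubMed internals:

library(httr)

# esearch.fcgi is the E-utilities search endpoint; db/term mirror the query above
resp <- GET("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
            query = list(db = "pubmed", term = "Damiano Fantini[AU]"))
cat(content(resp, as = "text"))  # XML listing the matching PubMed IDs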
I am having serious problems related to setwd. Just after starting an R session, the first message I get (without doing anything) is:
Error in setwd(dir) : cannot change working directory
Then I had problems using specific libraries. For example, with plotly I can download the library, but when I try to apply the function plot_ly I get the same error.
Now I am trying to use R Markdown, so I need to install TinyTeX; when I execute tinytex::install_tinytex() I get this error:
Error in setwd(tempdir()) : cannot change working directory
I have been looking for information about this issue but without success. How can I solve this?
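The usual cause of this error is that the folder R wants to use as its working or temp directory no longer exists or is not writable. A quick diagnostic sketch (the C:/Rtemp path below is hypothetical):

# Does the session's temp directory actually exist?
tempdir()
dir.exists(tempdir())

# Which locations has R been told to use?
Sys.getenv(c("TMP", "TEMP", "R_USER", "HOME"))

# If TMP/TEMP point at a missing folder, create a writable one
# (e.g. C:/Rtemp), set the environment variables to it in Windows,
# and restart R so tempdir() picks up the new location.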
I have a problem (I am using Windows 10) running library(tesseract), which shows a warning message:
Unable to find English training data.
I have downloaded "eng.traineddata" from https://github.com/tesseract-ocr/tessdata
When I try to run
eng <- tesseract("eng")
It displays an error:
Error in tesseract_engine_internal(datapath, language, configs, opt_names, :
Unable to find training data for: eng. Please consult manual for: ?tesseract_download
You've probably used a legacy, incompatible traineddata file. You need the data from either tessdata_fast or tessdata_best.
https://github.com/tesseract-ocr
With R 4.1 I had to create the folder "C:\Program Files (x86)\Tesseract-OCR" and add to it the eng.traineddata file downloaded from https://github.com/tesseract-ocr/tessdata_best/blob/main/eng.traineddata.
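The error message also points at a simpler route: the tesseract package ships a download helper that fetches a compatible traineddata file into the path the engine searches. A minimal sketch, assuming the default datapath is writable ("example.png" is a hypothetical test image):

library(tesseract)

# fetch a compatible eng.traineddata into the package's default tessdata path
tesseract_download("eng")
eng <- tesseract("eng")
text <- ocr("example.png", engine = eng)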
I'm trying to use read.xls from gdata to import an Excel file directly into R. I'm on a Windows machine running 64 bit R.
I have checked my PATH variable for Perl and it appears to be set correctly, so that doesn't seem to be the problem. Here's my code, with the error attached below. Does anyone have any pointers on how I can get this done?
require(RCurl)
require(gdata)
url <- "https://dl.dropboxusercontent.com/u/27644144/NADAC%2020140101.xls"
test <- read.xls(url)
The error I'm getting is:
Error in xls2sep(xls, sheet, verbose = verbose, ..., method = method, :
Intermediate file 'C:\Users\Me\AppData\Local\Temp\RtmpeoJNxP\file338c26156d7.csv' missing!
In addition: Warning message:
running command '"C:\STRAWB~1\perl\bin\perl.exe" "C:/Users/Me/Documents/R/win-library/3.0/gdata/perl/xls2csv.pl" "https://dl.dropboxusercontent.com/u/27644144/NADAC%2020140101.xls" "C:\Users\Me\AppData\Local\Temp\RtmpeoJNxP\file338c26156d7.csv" "1"' had status 22
Error in file.exists(tfn) : invalid 'file' argument
@G.G is correct that read.xls does not support https. However, if you simply replace the https with http in the URL you should be able to download the file.
Give this a try:
require(RCurl)
require(gdata)
url <- "http://dl.dropboxusercontent.com/u/27644144/NADAC%2020140101.xls"
test <- read.xls(url)
read.xls supports http and ftp but does not support https. Download the file first and then use read.xls on the local copy.
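A minimal sketch of that download-first approach, reusing the URL from the question (mode = "wb" matters on Windows so the binary .xls is not corrupted; recent R versions can fetch https with download.file):

require(gdata)

url <- "https://dl.dropboxusercontent.com/u/27644144/NADAC%2020140101.xls"
tf <- tempfile(fileext = ".xls")
download.file(url, tf, mode = "wb")  # write in binary mode so the .xls survives intact
test <- read.xls(tf)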
I am trying to read an HTML table using the XML function readHTMLTable. The table holds information about options on the Eurex page. When I run the code:
library(XML)
info <- readHTMLTable("http://www.eurexchange.com/action/exchange-en/155392-31606/31608/quotesSingleViewOption.do?callPut=Call&maturityDate=201312", which = 1)
I get this error message:
Error: failed to load external entity "http://www.eurexchange.com/action/exchange-en/155392-31606/31608/quotesSingleViewOption.do?callPut=Call&maturityDate=201312"
I have the correct packages installed and the latest R version.
Does anybody know what could be the problem?
Thank you
Use the function GET of package httr to retrieve the HTML content yourself and pass the raw text to readHTMLTable, which works around the "failed to load external entity" error:

library(httr)
library(XML)
info <- readHTMLTable(rawToChar(GET("http://www.eurexchange.com/action/exchange-en/155392-31606/31608/quotesSingleViewOption.do?callPut=Call&maturityDate=201312")$content), which = 1)
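The same idea reads more clearly split into steps; httr's content() helper does the rawToChar conversion for you (a sketch assuming the page is still reachable):

library(httr)
library(XML)

url <- "http://www.eurexchange.com/action/exchange-en/155392-31606/31608/quotesSingleViewOption.do?callPut=Call&maturityDate=201312"
page <- GET(url)                        # fetch the page
html <- content(page, as = "text")      # decode the raw bytes to a character string
info <- readHTMLTable(html, which = 1)  # parse the first table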
I'm running the following code in R:
library(GEOquery)
mypath <- "C:/Users/Farzin/Desktop/BIOC"
GDS1 <- getGEO('GDS1',destdir=mypath)
But I'm getting the following error:
Using locally cached version of GDS1 found here:
C:/Users/Farzin/Desktop/BIOC/GDS1.soft.gz
Error in read.table(con, sep = "\t", header = FALSE, nrows = nseries) :
invalid 'nlines' argument
Could anyone please tell me how I could get rid of this error?
I have had the same error using GEOquery (version 2.23.5) with R and Bioconductor on Ubuntu (12.04), whatever GDS file I queried. Could it be that the GEOquery package is faulty?
In my experience, getGEO is extremely finicky; I commonly run into issues connecting to the GEO server. If the connection drops during download, getGEO leaves a partial file behind. And since that partial file is there, the next time you try to download, getGEO uses the cached, partially downloaded file and fails with the error you see, because it is not the full file.
To solve this, delete the cached SOFT file and retry the download.
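A minimal sketch using the destdir from the question: delete the cached file, then let getGEO fetch a fresh copy.

library(GEOquery)

mypath <- "C:/Users/Farzin/Desktop/BIOC"
# remove the partially downloaded cache so getGEO cannot reuse it
file.remove(file.path(mypath, "GDS1.soft.gz"))
GDS1 <- getGEO("GDS1", destdir = mypath)  # retry the download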