Problem with saving kable table (install_phantomjs) - r

Let's consider a very simple table created by kable:
library(knitr)
library(kableExtra)
x <- data.frame(1:3, 2:4, 3:5)
x <- kable(x, format = "pipe", col.names = c("X_1", "X_2", "X_3"), caption = "My_table")
I want to save this table in .pdf format:
x %>% save_kable("My_table.pdf")
But I get an error:
PhantomJS not found. You can install it with webshot::install_phantomjs(). If it is installed, please make sure the phantomjs executable can be found via the PATH variable.
However, when trying to install it with the proposed command:
webshot::install_phantomjs()
I get an error:
Error in utils::download.file(url, method = method, ...) :
cannot open URL 'https://github.com/wch/webshot/releases/download/v0.3.1/phantomjs-2.1.1-windows.zip'
So my question is: is there any possibility to save a kable table without using phantomjs?

The command works for me and the URL is also available.
I suspect that the file (it's a .zip file) is being blocked by your firewall or anti-virus software.
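If the download stays blocked, one way to avoid phantomjs altogether is to build the table in LaTeX format instead of "pipe"/HTML: for a LaTeX kable, save_kable() compiles the PDF with a LaTeX toolchain rather than taking a browser screenshot. A minimal sketch, assuming a working LaTeX installation (e.g. tinytex):
library(knitr)
library(kableExtra)
x <- data.frame(1:3, 2:4, 3:5)
# A LaTeX-format table is compiled straight to PDF, so phantomjs is not needed.
kable(x, format = "latex", col.names = c("X_1", "X_2", "X_3"),
      caption = "My_table") %>%
  save_kable("My_table.pdf")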

Related

R markdown cannot open URL when using download.file

*Note: this problem only occurs on Windows.
I have the following code that runs properly out of a normal script or the console:
tdir <- tempdir()
stateurl <- "https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_state_500k.zip"
if (!file.exists(file.path(tdir, "cb_2018_us_state_500k.shp"))) {
  download.file(stateurl, destfile = file.path(tdir, "States.zip"))
  unzip(file.path(tdir, "States.zip"), exdir = tdir)
}
But when placing that same script in a chunk and trying to knit to HTML in R Markdown, I am left with the warning "could not open URL connection."
I am at a loss as to why something as simple as downloading a file would run in the console but not in R Markdown.
I could reproduce the error about 50% of the time with the provided code, without any obvious pattern (i.e. repeatedly running "Knit to HTML" from the same session would randomly fail or work).
For me, the problem goes away if I explicitly specify method = "libcurl" as an argument to download.file (instead of the default method = "auto", which uses "wininet" on Windows):
tdir <- tempdir()
stateurl <- "https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_state_500k.zip"
if (!file.exists(file.path(tdir, "cb_2018_us_state_500k.shp"))) {
  download.file(stateurl, destfile = file.path(tdir, "States.zip"), method = "libcurl")
  unzip(file.path(tdir, "States.zip"), exdir = tdir)
}
With this "Knit to HTML" is working consistently (at least for my 10+ tests).

Errors when using kable. Issues with phantomjs or ghostscript?

I am trying to create a table in R, using mtcars as an example dataset. I keep getting the error below. I am truly at a loss as to how to fix this and where to put certain files or programs (ghostscript, magick, phantomjs..). Is there an easy fix? I am not using markdown.
kable(mtcars, "latex", booktabs = TRUE) %>%
  kable_styling(latex_options = c("solid", "scale_down")) %>%
  as_image()
save_kable(file = "table.pdf")
Error in save_kable_latex(x, file, latex_header_includes, keep_tex, density) :
We hit an error when trying to use magick to read the generated PDF file. You may check your magick installation and try to use magick::image_read to read the PDF file manually. It's also possible that you didn't have ghostscript installed.
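As the error message itself suggests, a first diagnostic step is to test the magick + Ghostscript setup directly; on Windows, magick needs Ghostscript installed and on the PATH before it can read PDF files. A minimal check, assuming table.pdf was actually generated:
library(magick)
# Reading a PDF through magick requires Ghostscript; if this call fails,
# install Ghostscript (https://www.ghostscript.com) and make sure it is
# on the PATH, then retry.
img <- image_read("table.pdf", density = 300)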

Can't open .biom file for Phyloseq tree plotting

After trying to read a biom file:
rich_dense_biom <- system.file("extdata", "D:\sample_otutable.biom", package = "phyloseq")
myData <- import_biom(rich_dense_biom, treefilename, refseqfilename,
                      parseFunction = parse_taxonomy_greengenes)
the following errors appear:
Error in read_biom(biom_file = BIOMfilename) :
Both attempts to read input file:
either as JSON (BIOM-v1) or HDF5 (BIOM-v2).
Check file path, file name, file itself, then try again.
Are you sure D:\sample_otutable.biom really exists? And is a system file?
In R for Windows, it is at least safer (if not required) to separate file-path components with \\.
This works for me
library("devtools")
install_github("biom", "joey711")
library(biom)
biom.file <-
"C:\\Users\\Mark Miller\\Documents\\R\\win-library\\3.3\\biom\\extdata\\min_dense_otu_table.biom"
my.data <- import_biom(BIOMfilename = biom.file)
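Rather than hard-coding the library path, the same bundled file can be located with system.file(), which is what that function is for. A small sketch using the example file shipped with the biom package:
biom.file <- system.file("extdata", "min_dense_otu_table.biom", package = "biom")
my.data <- import_biom(BIOMfilename = biom.file)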

trying to use fread() on .csv file but getting internal error "ch>eof"

I am getting an error from fread:
Internal error: ch>eof when detecting eol
when trying to read a csv file downloaded from an https server, using R 3.2.0. I found something related on Github, https://github.com/Rdatatable/data.table/blob/master/src/fread.c, but don't know how I could use this, if at all. Thanks for any help.
Added info: the data was downloaded from here:
fileURL <- "https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06pid.csv"
then I used
download.file(fileURL, "Idaho2006.csv", method = "internal")
The problem is that download.file doesn't work over https with method = "internal" unless you're on Windows and have set an option first. Since fread uses download.file when you pass it a URL rather than a local file, it fails too. You have to download the file manually, then read it from a local file.
If you're on Linux, or already have wget or curl installed, use method = "wget" or method = "curl" instead.
If you're on Windows and don't have either, and don't want to download them, call setInternet2(use = TRUE) before your download.file call.
http://www.inside-r.org/r-doc/utils/setInternet2
For example:
fileURL <- "https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06pid.csv"
tempf <- tempfile()
download.file(fileURL, tempf, method = "curl")
DT <- fread(tempf)
unlink(tempf)
Or
fileURL <- "https://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06pid.csv"
tempf <- tempfile()
setInternet2(use = TRUE)
download.file(fileURL, tempf)
DT <- fread(tempf)
unlink(tempf)
fread() now uses the curl package for downloading files, and this seems to work just fine at the moment:
require(data.table) # v1.9.6+
fread(fileURL, showProgress = FALSE)
The easiest way to fix this problem, in my experience, is to just remove the s from https. Also remove the method argument; you don't need it. My OS is Windows, and the following code works for me:
fileURL <- "http://d396qusza40orc.cloudfront.net/getdata%2Fdata%2Fss06pid.csv"
download.file(fileURL, "Idaho2006.csv")

Is it possible to load knitr chunks from a URL

I am attempting to pull R code chunks over HTTPS from an R script into a LaTeX document.
The .R file is in rstudio server and shared via webdav.
The LaTeX document resides on a server that cannot store files locally (ShareLaTeX).
Therefore, to get around the problem, I thought I'd use URL calls. The following works for pulling in data:
<<load_data, echo=FALSE, cache=FALSE>>=
library(RCurl)
x <- getURL("https://user:pass@my.webdav.server.net/webdav/data/data.csv")
y <- read.csv(text = x, stringsAsFactors = FALSE, na.strings = "NA")
y
@
However, I would also like to pull in code chunks.
I have tried the following:
<<external-code, cache=FALSE>>=
z <- getURL("https://user:pass@my.webdav.server.net/webdav/model.R")
read_chunk(z, lines = code, labels = "foo")
@
However, this returns the error:
Error in read_chunk(z, lines = code, labels = "foo"): object 'code' not found
Is there some way to make knitr parse this variable as a file, or read the external URL?
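One approach worth trying, sketched here from read_chunk()'s documented arguments: it accepts the code as a character vector via lines= instead of a file path, so the downloaded text can be split on newlines and no separate code object is needed.
<<external-code, cache=FALSE>>=
library(RCurl)
library(knitr)
z <- getURL("https://user:pass@my.webdav.server.net/webdav/model.R")
# Pass the downloaded source as lines; `labels` names the resulting chunk.
read_chunk(lines = strsplit(z, "\n")[[1]], labels = "foo")
@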
