Get blogdown on r-bloggers for hugo-goa - r

I am trying to get a blog created with blogdown::new_site(theme = "shenoybr/hugo-goa") onto R-Bloggers.
The approach in "How do I get my blogdown blog on R-Bloggers?" seems to be outdated.
A newer solution appeared at https://stackoverflow.com/a/63499033/8538074, which I attempted to follow.
dir.create("blog18")
setwd("blog18")
blogdown::new_site(theme = "shenoybr/hugo-goa")
Then follow the answer linked above: copy an RSS layout template into layouts/tags/ so Hugo generates a feed:
dir.create(path = "layouts/tags")
# fetch a known-good RSS layout template and drop it into the new directory
xx <- readLines("https://raw.githubusercontent.com/liao961120/Hugo-RSS/master/layouts/categories/rss.xml")
writeLines(text = xx, con = "layouts/tags/rss.xml")
blogdown::build_site()
Push to GitHub:
Go to GitHub and create the repo
{GITHUBUSERNAME}.github.io
(for me it is TyGu1.github.io)
Drag and drop all files via the GitHub upload page.
Check the website:
Go to https://tygu1.github.io/
--> The website is up.
As suggested in https://www.r-bloggers.com/add-your-blog/
go to
paste0("https://simplepie.org/demo/?feed=", URLencode("https://tygu1.github.io/", reserved = TRUE))
"https://simplepie.org/demo/?feed=https%3A%2F%2Ftygu1.github.io%2F"
The RSS feed does not work.
Question:
Which steps do I have to change for the RSS feed to be valid (using the shenoybr/hugo-goa theme)?
Edit:
As asked/suggested in the question: GitHub Pages seems to be activated already.
(In other repos I see "GitHub Pages is currently disabled.", therefore I assume it is activated in the current repo.)

The address of your website is correct, but the address of your blog is https://tygu1.github.io/blog.
The XML feed is located at https://tygu1.github.io/blog/index.xml. If you paste that into simplepie.org, it works: https://simplepie.org/demo/?feed=https%3A%2F%2Ftygu1.github.io%2Fblog%2Findex.xml.
So the problem was the missing /blog.
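With the /blog prefix included, the simplepie check URL can be rebuilt the same way as in the question, using only base R:

```r
feed <- "https://tygu1.github.io/blog/index.xml"
# URLencode(reserved = TRUE) percent-encodes ":" and "/" so the feed URL
# survives as a query-string parameter
check_url <- paste0("https://simplepie.org/demo/?feed=", URLencode(feed, reserved = TRUE))
check_url
# → "https://simplepie.org/demo/?feed=https%3A%2F%2Ftygu1.github.io%2Fblog%2Findex.xml"
```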

Related

box_dir_create() function from boxr package is not creating a folder in Box

I've been having difficulty getting boxr to successfully create a file within my box directory. My code reads:
library(boxr)
box_auth()
my_file_dir <- box_setwd("76009318507")
box_dir_create(dir_name="TEST", parent_dir_id = my_file_dir)
After running this, I get the following output:
box.com remote folder reference
name :
dir id :
size :
modified at :
created at :
uploaded by :
owned by :
shared link : None
parent folder name :
parent folder id :
Checking my box directory, I find no folders have been created.
I've tried using additional arguments within box_dir_create(), but according to the documentation only dir_name and parent_dir_id are accepted.
Any help is much appreciated. I understand this is a somewhat obscure R package, so I've included links to the documentation below:
https://cran.r-project.org/web/packages/boxr/boxr.pdf
https://github.com/r-box/boxr
I got an answer from the package's developer, and I figured I'd pay it forward for any fellow travelers in the future.
It turns out that box_setwd() sets a default directory but returns nothing, so the my_file_dir captured above held nothing useful. Using
box_dir_create(dir_name = "TEST", parent_dir_id = "76009318507")
creates the folder successfully. It will not do so if a folder of the same name already exists.
After more digging, I was also told that box_dir_create() is quietly passing back a lot of useful information, including the newly created directory's ID. To access it you can save the function results as a variable, like so:
b <- box_dir_create("test_dir")
names(b) # lots of info
b$id # what you want
box_ul(b$id, "image_file.jpg") # is this file by file?
box_push(b$id, "image_directory/") # or a directory wide operation?
Thanks for your help, and I hope this helps someone else down the road. Cheers!

R webshot package not capturing everything

The page at this link in the webshot command includes ovals to indicate air quality in a particular location
webshot("https://www.purpleair.com/map?&zoom=12&lat=39.09864026298141&lng=-108.56749455168722&clustersize=27&orderby=L&latr=0.22700642752714373&lngr=0.4785919189453125", "paMap.png")
The png that webshot produces doesn't include these ovals. I suspect these are created with javascript and webshot is not picking them up. But I don't know how to tell it to do so, or even if it is possible.
Although this issue is not directly related to webshot versions, you should consider trying webshot2 (https://github.com/rstudio/webshot2) instead of webshot. I have prepared a blog post with various details about webshot2; see also my detailed answer about the issues with webshot compared to webshot2.
I have replicated your scenario with webshot2 and its delay parameter, and the issue is resolved, as the screenshot below shows. The main issue is the delay: the URL needs a longer delay for all of its assets to render.
The code
library(webshot2)
temp_url = "https://www.purpleair.com/map?&zoom=12&lat=39.09864026298141&lng=-108.56749455168722&clustersize=27&orderby=L&latr=0.22700642752714373&lngr=0.4785919189453125"
webshot(url = temp_url, file = "paMap.png", delay = 4)
The output file
An alternative is to drive a remote browser with RSelenium and take the screenshot there:
library(RSelenium)
remDr <- remoteDriver(remoteServerAddr = "192.168.99.100", port = 4445L)
remDr$open()
remDr$navigate("https://www.purpleair.com/map?&zoom=12&lat=39.09864026298141&lng=-108.56749455168722&clustersize=27&orderby=L&latr=0.22700642752714373&lngr=0.4785919189453125")
remDr$screenshot(file = "paMag.png")
The result:

Documenter.jl: @autodocs for specific source files

From Documenter.jl's documentation of @autodocs:
[...], a Pages vector may be included in @autodocs to filter
docstrings based on the source file in which they are defined:
```@autodocs
Modules = [Foo]
Pages = ["a.jl", "b.jl"]
```
However, it also says
Note that page matching is done using the end of the provided strings
and so a.jl will be matched by any source file that ends in a.jl, i.e.
src/a.jl or src/foo/a.jl.
How can I restrict the @autodocs block to specific source files?
My package's source code is organized as
src/
foo/a.jl
foo/b.jl
ignore/a.jl
ignore/b.jl
other.jl
How can I make the @autodocs block only consider the files src/foo/a.jl and src/foo/b.jl but not src/ignore/a.jl and src/ignore/b.jl?
Unfortunately, Pages = ["foo/a.jl", "foo/b.jl"] didn't do it for me.
Thanks in advance.
x-ref: https://discourse.julialang.org/t/documenter-jl-autodocs-for-specific-source-files/8784
x-ref: https://github.com/JuliaDocs/Documenter.jl/issues/630
Turns out that this is a Windows issue caused by a missing normalization of path separators (see the linked GitHub issue).
On Linux Pages = ["foo/a.jl", "foo/b.jl"] should work.
On Windows Pages = ["foo\\a.jl", "foo\\b.jl"] should work.
EDIT: joinpath.("foo", ["a.jl", "b.jl"]) should work on any OS.

Downloading a complete html

I'm trying to scrape a website using R. However, I cannot get all the information from it, for an unknown reason. I found a workaround by first downloading the complete webpage ("Save as…" from the browser). I was wondering whether it is possible to download a complete website with some function.
I tried download.file() and htmlParse(), but they seem to only download the source code.
library(XML)
url <- "http://www.tripadvisor.com/Hotel_Review-g2216639-d2215212-Reviews-Ayurveda_Kuren_Maho-Yapahuwa_North_Western_Province.html"
download.file(url, "webpage")
doc <- htmlParse(url)
ratings <- as.data.frame(xpathSApply(doc, '//div[@class="rating reviewItemInline"]/span//@alt'))
This worked with rvest on the first go (read_html() is the current replacement for the old html()):
library(rvest)
library(plyr)     # llply()
library(stringi)  # stri_replace_all_regex()
llply(read_html(url) %>% html_nodes('div.rating.reviewItemInline'), function(i)
  data.frame(nth_stars = html_nodes(i, 'img') %>% html_attr('alt'),
             date_var = html_text(i) %>% stri_replace_all_regex('(\n|Reviewed)', '')))

Downloading pics via R Programming

I am trying to download the image from the website listed below via R programming:
http://www.ebay.com/itm/2pk-HP-60XL-Ink-Cartridge-Combo-Pack-CC641WN-CC644WN-/271805060791?ssPageName=STRK:MESE:IT
Which package should I use and how should I process it?
Objective: To download the image in this page to a folder
AND / OR
to find the image URL.
I used
library(rvest)
url2 <- "http://www.ebay.co.uk/itm/381164104651"
url_content <- read_html(url2)
node1 <- html_node(url_content, "#icImg")
node1 contains the image URL, but when I try to edit this content I get an error saying it is a non-character element.
This solved my problem
html_node(url_content, "#icImg") %>% xml_attr("src")
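Once the src attribute has been extracted, the image itself can be saved with base R's download.file(). This is a sketch: save_image() is a hypothetical helper, and the URL passed to it would be whatever xml_attr("src") returned:

```r
# Hypothetical helper: save an image URL to disk, keeping its original file name.
save_image <- function(img_url, dir = tempdir()) {
  dest <- file.path(dir, basename(img_url))
  # mode = "wb" matters on Windows: images are binary files
  download.file(img_url, destfile = dest, mode = "wb")
  dest
}

# Usage (requires network access):
# save_image(html_node(url_content, "#icImg") %>% xml_attr("src"))
```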