Running this line of code:
library(ggmap)
geocode(location="Somewhere in Nigeria Winning",source="dsk", output="more")
throws this error:
Error in vapply(gc$results[[1]]$address_components, function(x) x[[nameToGrab]], : values must be length 1, but FUN(X[[1]]) result is length 0
Can someone shed some light on why this may be happening and what I can do to fix it? I know this seems like an odd question, but this string showed up in a set of data I'm pruning and it seems to be breaking geocode.
I just want to post an answer since I have located the problem, in case anyone would like the workaround or wants to know what is going on. The issue is not with DSK but with the ggmap code itself. If you run
geocode(location="not a place in the slightest",source="dsk", output="more")
then the library fails gracefully and gives you back the expected message:
geocode failed with status ZERO_RESULTS, location = "not a place in the slightest"
However, with certain in-between strings like the one above, it fails less gracefully: DSK returns data, but it isn't good data. The problem is caused by what sits here in the resulting list (note that, for obvious reasons, you can't see this using the "more" flag; you need to use "all"):
$results[[1]]$address_components[[1]]$long_name
NULL
$results[[1]]$address_components[[1]]$short_name
NULL
The code in the library that parses this into the data frame for the "more" flag chokes here:
nameToGrab <- `if`(nameType == "long", "long_name", "short_name")
outputVals <- vapply(gc$results[[1]]$address_components, function(x)
  x[[nameToGrab]], character(1))
outputNames <- vapply(gc$results[[1]]$address_components, function(x) {
    if (length(x$types) == 0) return("query")
    x$types[1]
  },
  character(1)
)
Because both long_name and short_name are NULL, the vapply is going to fail spectacularly: FUN(X[[1]]) produces a zero-length result where character(1) demands exactly one string.
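Until ggmap is patched, a workaround is to request the raw list with output = "all" and flatten it yourself with a NULL guard. This is only a sketch along the lines of the library code above; the NA substitution is my own choice, not ggmap's behavior:

library(ggmap)

gc <- geocode(location = "Somewhere in Nigeria Winning",
              source = "dsk", output = "all")

# Guard against NULL name fields before flattening
long_names <- vapply(gc$results[[1]]$address_components, function(x) {
  val <- x[["long_name"]]
  if (is.null(val)) NA_character_ else val  # NA instead of choking on NULL
}, character(1))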
Related
I'm trying to access an API from iNaturalist to download some citizen science data. I'm using the package rinat to get this done (see vignette). The loop below is, essentially, pulling all observations for one species, in one state, in one year iteratively on a per-month basis, then summing the number of observations for that year (input parameters subset from my actual script for convenience).
require(rinat)

state_ids <- c(18, 14)
bird_ids <- c(14886, 1409)
months <- c(1:12)
final_nums <- vector()

for (i in 1:length(state_ids)) {
  total_count <- vector()
  for (j in 1:length(months)) {
    monthly <- get_inat_obs(place_id = state_ids[i],
                            taxon_id = bird_ids[i],
                            year = 2019,
                            month = months[j])
    total_count <- append(total_count, length(monthly$scientific_name))
    print(paste("done with month", months[j], "in state", state_ids[i]))
  }
  final_nums <- append(final_nums, sum(total_count))
  print(paste("done with state", state_ids[i]))
}
Occasionally, and seemingly randomly, I get the following error:
No encoding supplied: defaulting to UTF-8.
Error in if (!x$headers$`content-type` == "text/csv; charset=utf-8") { :
argument is of length zero
This ends up either breaking the loop or making it run without actually pulling any real data. Is this an issue with my script, with the API, or with something else? I've tried manually supplying encoding information to the get_inat_obs() function, but it doesn't accept that as an argument. Thank you in advance!
I don't believe this is an error in your script; the issue is most likely with the API.
The error "argument is of length zero" is a common one: it occurs when you try to use a zero-length comparison as a condition. For example:
if(logical(0) == "TEST") print("WORKED!!")
#Error in if (logical(0) == "TEST") print("WORKED!!") :
# argument is of length zero
I did a few greps on their source code to see where this if statement is, and it seems to be within inat_handle, around line 211 of get_inat_obs.R.
This would suggest that the authors did not expect for
!x$headers$`content-type` == 'text/csv; charset=utf-8'
to evaluate to logical(0), but more specifically
x$headers$`content-type`
to be NULL.
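More concretely, a missing header reproduces the same failure, because comparing NULL with a string yields logical(0):

x <- list(headers = list())  # response with no content-type header at all
x$headers$`content-type`
# NULL
!x$headers$`content-type` == "text/csv; charset=utf-8"
# logical(0)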
I would suggest making a bug report on their GitHub and recommend they change the specified line to:
if(is.null(x$headers$`content-type`) || !x$headers$`content-type` == 'text/csv; charset=utf-8'){
A bug report is usually better received if you include a reproducible example.
Also, you could make this change yourself locally by cloning their git repository, editing the file, rebuilding the package, and then confirming that you no longer get the error in your code.
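In the meantime, a defensive wrapper around the call can keep your loop alive. This is only a sketch: the attempt count, the delay, and the empty-data-frame fallback are my own choices, not anything rinat documents.

safe_get_inat <- function(..., attempts = 3) {
  for (k in seq_len(attempts)) {
    result <- tryCatch(get_inat_obs(...), error = function(e) NULL)
    if (!is.null(result)) return(result)
    Sys.sleep(2)  # brief pause before retrying the API
  }
  data.frame()  # give up: an empty frame, so length(monthly$scientific_name) is 0
}

Calling safe_get_inat() in place of get_inat_obs() inside the loop turns the intermittent header error into, at worst, a month with zero observations.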
When I run this code in a for loop (or with lapply), I get the following error:
"Error in get_entrypoint (debug_port):
Cannot connect R to Chrome. Please retry. "
library(rvest)
library(xml2) #pull html data
library(selectr) #for xpath element
url_stackoverflow_rmarkdown <-
'https://stackoverflow.com/questions/tagged/r-markdown?tab=votes&pagesize=50'
web_page <- read_html(url_stackoverflow_rmarkdown)
questions_per_page <- html_text(html_nodes(web_page, ".page-numbers.current"))[1]
link_questions <- html_attr(html_nodes(web_page, ".question-hyperlink")[1:questions_per_page],
"href")
setwd("~/WebScraping_chrome_print_to_pdf")
for (i in 1:length(link_questions)) {
  question_to_pdf <- paste0("https://stackoverflow.com",
                            link_questions[i])
  pagedown::chrome_print(question_to_pdf)
}
Is it possible to build a for loop or use lapply so that the code resumes from where it breaks, that is, from the last i value, without stopping the whole run?
Many thanks
I adapted Rui Barradas's idea of using tryCatch().
You can try to do something like the below.
IsValues will hold either the returned value for the link or the bad i (the index that failed).
IsValues <- list()

for (i in 1:length(link_questions)) {
  question_to_pdf <- paste0("https://stackoverflow.com",
                            link_questions[i])
  IsValues[[i]] <- tryCatch(
    {
      message(paste("Converting", i))
      pagedown::chrome_print(question_to_pdf)
    },
    error = function(cond) {
      message(paste("Cannot convert", i))
      # Return the failing index in case of error
      return(i)
    })
}
Then you can rbind your values and extract the bad i's:
do.call(rbind, IsValues)[!grepl("\\.pdf$", do.call(rbind, IsValues))]
[1] "3" "5" "19" "31"
You can read more about tryCatch() in this answer.
Based on your example, it looks like you have two errors to contend with. The first error is the one you mention in your question. It is also the most frequent error:
Error in get_entrypoint (debug_port): Cannot connect R to Chrome. Please retry.
The second error arises when there are links in the HTML that return 404:
Failed to generate output. Reason: Failed to open https://lh3.googleusercontent.com/-bwcos_zylKg/AAAAAAAAAAI/AAAAAAAAAAA/AAnnY7o18NuEdWnDEck_qPpn-lu21VTdfw/mo/photo.jpg?sz=32 (HTTP status code: 404)
The key phrase in the first error is "Please retry". As far as I can tell, chrome_print sometimes has issues connecting to Chrome. It seems to be fairly random, i.e. failed connections in one run will be fine in the next, and vice versa. The easiest way to get around this issue is to just keep trying until it connects.
I can't come up with any fix for the second error. However, it doesn't seem to come up very often, so it might make sense to just record it and skip to the next URL.
Using the following code I'm able to print 48 of 50 pages. The only two I can't get to work have the 404 issue I describe above. Note that I use purrr::safely to catch errors. Base R's tryCatch will also work fine, but I find safely to be a little more convenient. That said, in the end it's really just a matter of preference.
Also note that I've dealt with the connection error by utilizing repeat within the for loop. R will keep trying to connect to Chrome and print until it is either successful, or some other error pops up. I didn't need it, but you might want to include a counter to set an upper threshold for the number of connection attempts:
quest_urls <- paste0("https://stackoverflow.com", link_questions)
errors <- NULL
safe_print <- purrr::safely(pagedown::chrome_print)

for (qurl in quest_urls) {
  repeat {
    output <- safe_print(qurl)
    if (is.null(output$error)) break
    else if (grepl("retry", output$error$message)) next
    else {
      # Record any non-retry error and move on to the next URL
      errors <- c(errors, `names<-`(output$error$message, qurl))
      break
    }
  }
}
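For completeness, here is a sketch of the counter idea mentioned above. The cap of five attempts is an arbitrary choice; it turns an endless retry into a recorded error:

max_attempts <- 5
for (qurl in quest_urls) {
  attempts <- 0
  repeat {
    attempts <- attempts + 1
    output <- safe_print(qurl)
    if (is.null(output$error)) break
    if (attempts >= max_attempts || !grepl("retry", output$error$message)) {
      # Too many connection failures, or a non-retry error: record and move on
      errors <- c(errors, `names<-`(output$error$message, qurl))
      break
    }
  }
}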
I am using R 3.4.4 (in RStudio) on a Windows 10 machine.
I have a vector of artist names and I am trying to get genre information for all of them on Spotify. I have successfully set up the API, and the RSpotify package is working as expected.
I am trying to build up to a function, but I am failing pretty early on.
So far I have the following, but it is returning unexpected results:
len <- nrow(Artist_Nam)
artist_info <- character(len)

for (i in 1:len) {
  ifelse(nrow(searchArtist(Artist_Nam$ArtistName[i], token = keys)) >= 1,
         artist_info[i] <- searchArtist(Artist_Nam$ArtistName[i], token = keys)$genres[1],
         artist_info[i] <- "")
}
artist_info
I was expecting this to return a vector of genres, with an empty entry "" for artists where there is no match on Spotify.
Instead, what is returned starts out as expected: entries are populated with genres (on inspection these genres are correct), with "" where there is no match. However, something odd happens from [73] onwards (I have over 3,000 artists): the vector only contains "" from that point, even though when I look those artists up manually with searchArtist() there are matches.
I wonder if anyone has any suggestions or has experienced anything like this before?
There may be a rate limit on the number of requests you can make per minute, and you may simply be hitting that limit. Add a small delay with Sys.sleep() inside your loop so you don't hit their API hard enough to be throttled.
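A sketch built on the asker's loop (the 0.5-second delay is a guess; tune it to whatever the API tolerates). Caching the result in res also avoids calling searchArtist() twice per artist:

for (i in 1:len) {
  res <- searchArtist(Artist_Nam$ArtistName[i], token = keys)
  artist_info[i] <- if (nrow(res) >= 1) res$genres[1] else ""
  Sys.sleep(0.5)  # throttle the request rate
}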
[Win 10; R 3.4.3; RStudio 1.1.383; Rfacebook 0.6.15]
Hi!
I would like to ask two questions concerning the Rfacebook's getPost function:
Even though I have tried all possible combinations of the logical values for the arguments "comments", "reactions", and "likes", the best result I could get so far is a list of three components for each post ("post", "comments", and "likes"), that is, without the "reactions" component. Nevertheless, according to the documentation, "getPost returns a list with up to four components: post, likes, comments, and reactions".
Besides the (somewhat strange) fact that, according to the same documentation, the argument "reactions" should be FALSE (the default) in order to retrieve info on the total reactions to the post(s), I noticed a seemingly odd result: if I simultaneously set "reactions" and "likes" to either TRUE or FALSE, R returns neither an error nor a warning message. The reason I find it a bit odd is that likes = !reactions by its own definition.
Here is the code:
#packageVersion("Rfacebook")
#[1] '0.6.15'
## temporary access token
fb_oauth <- "user access token"
qtd <- 5000
#pag_loop$id[1]
#[1] "242862559586_10156144461009587"
# arguments with default value (reactions = F, likes = T, comments = T)
x <- getPost(pag_loop$id[1], token = fb_oauth, n = qtd)
str(x)
# retrieves a list of 3: posts, likes, comments
Can someone please explain to me why I don't get the reactions component?
Best,
Luana
This is caused by the new version of the Facebook API. It worked fine up to v2.10 of the API; from v2.11 onward, it no longer works well.
I also cannot capture the reactions, and the user's name comes back null. I have Windows 10 and R 3.4.2. Could it be the R version? Please, if you manage to resolve this issue, send the answer to my email.
Fine people of Stack Overflow, I am stuck on a rather simple part of my program and was wondering if you could help me.
library(nonlinearTseries)
tt <- c(0, 500, 1000)
mm <- rep(0, 2)
for (j in 1:2) {
  mm[j] <- estimateEmbeddingDim(window(rnorm(1000), start = tt[j], end = tt[j + 1]),
                                number.points = (tt[j + 1] - tt[j]),
                                do.plot = FALSE)
}
Warning message:
In window.default(rnorm(1000), start = tt[j], end = tt[j + 1]) :
'start' value not changed
If I plug in the values directly (tt[1], tt[2], tt[3]), it works, but I still get a warning:
estimateEmbeddingDim(window(rnorm(1000), start=tt[1],end=tt[2]), number.points=(tt[2]-tt[1]),do.plot=FALSE)
[1] 9
Warning message:
In window.default(rnorm(1000), start = tt[1], end = tt[2]) :
'start' value not changed
Thanks, Matt.
The problem seems to be with the
window(rnorm(1000), start=tt[j],end=tt[j+1])
lines. First of all, window() is only meant to be used with a time-series object (class "ts"). In this case, rnorm(1000) simply returns a numeric vector; there are no time indices associated with this object, which is why window() warns that 'start' was not changed and hands the whole vector through. Did you only want to extract the values that fall between 0-500 and 500-1000? If so, that seems a bit odd, because with a standard normal variable the max of 1000 samples is unlikely to be much over 4, let alone 500.
So be sure to use a proper "ts" object, with time indices and everything, to get this to work.
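A minimal sketch of that fix: wrap the draws in a ts object so window() subsets by time index instead of warning and returning everything. The start value and endpoints here are my own example choices:

library(nonlinearTseries)

x <- ts(rnorm(1000), start = 1)  # time index runs 1..1000
tt <- c(1, 500, 1000)
mm <- rep(0, 2)
for (j in 1:2) {
  seg <- window(x, start = tt[j], end = tt[j + 1])
  mm[j] <- estimateEmbeddingDim(seg,
                                number.points = length(seg),
                                do.plot = FALSE)
}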