Function decode_short_URL from twitteR package not working - r

I am using decode_short_url() from the twitteR package to decode shortened URLs from Twitter posts, but I am not getting the desired results; it always gives back the original URL, for example:
decode_short_url(decode_short_url("http://bit.ly/23226se656"))
## http://bit.ly/23226se656
## [1] "http://bit.ly/23226se656

UPDATE: I wrapped this functionality in a package and managed to get it on CRAN the same day. Now you can just do:
library(longurl)
expand_urls("http://bit.ly/23226se656", check=TRUE, warn=TRUE)
|++++++++++++++++++++++++++++++++++++++++++++++++++| 100%
## Source: local data frame [1 x 2]
##
## orig_url expanded_url
## 1 http://bit.ly/23226se656 NA
##
## Warning message:
## In FUN(X[[i]], ...) : client error: (404) Not Found
You can pass in a vector of URLs and get a data_frame/data.frame back in that form.
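For example, you can feed a whole vector in at once and get one row per input URL (the second shortened link below is made up purely for illustration):

library(longurl)

# Expand several shortened URLs in one call; the second link is a hypothetical example
short_links <- c("http://bit.ly/23226se656", "http://bit.ly/1sNZMwL")
expand_urls(short_links, check = TRUE, warn = TRUE)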
That particular bit.ly URL gives a 404 error. Here's a version of decode_short_url that has an optional check parameter that will attempt a HEAD request and throw a warning message for any HTTP status other than 200.
You can further modify it to return NA in the event the "expanded" link 404s (I have no idea what you really need it to do when the link is bad); a sketch of that variant follows the example below.
NOTE that the added HEAD request will significantly slow the process down, so you may want to do a first pass with check=FALSE into a separate column, then see which URLs weren't "expanded", and re-check only those with check=TRUE.
You might also want to rename this to avoid namespace conflicts with the one from twitteR.
decode_short_url <- function(url, check=FALSE, ...) {

  require(httr)

  # Ask the longurl.org API to expand the shortened URL
  request_url <- paste("http://api.longurl.org/v2/expand?url=",
                       url, "&format=json", sep="")
  response <- GET(request_url, query=list(useragent="twitteR"), ...)
  parsed <- content(response, as="parsed")

  # Fall back to the original URL if the API did not return an expansion
  ret <- NULL
  if (!("long-url" %in% names(parsed))) {
    ret <- url
  } else {
    ret <- parsed[["long-url"]]
  }

  # Optionally verify the link with a HEAD request; warns on any non-200 status
  if (check) warn_for_status(HEAD(url))

  return(ret)

}
decode_short_url("http://bit.ly/23226se656", check=TRUE)
## [1] "http://bit.ly/23226se656"
## Warning message:
## In decode_short_url("http://bit.ly/23226se656", check = TRUE) :
## client error: (404) Not Found
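For completeness, here is a minimal sketch of the NA-on-404 variant mentioned above. It uses the same longurl.org endpoint as the function above; the name decode_short_url_na is just mine for illustration.

library(httr)

# Like decode_short_url(), but returns NA when the expanded link doesn't answer
# with a 2xx/3xx status (e.g. it 404s) instead of returning the URL itself
decode_short_url_na <- function(url, ...) {
  request_url <- paste0("http://api.longurl.org/v2/expand?url=", url, "&format=json")
  parsed <- content(GET(request_url, query = list(useragent = "twitteR"), ...), as = "parsed")

  ret <- if ("long-url" %in% names(parsed)) parsed[["long-url"]] else url

  # HEAD the result; treat connection failures the same as a bad status
  status <- tryCatch(status_code(HEAD(ret)), error = function(e) NA_integer_)
  if (is.na(status) || status >= 400) return(NA_character_)

  ret
}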

Related

How to use tryCatch to skip errors while downloading data in R

I am trying to download data from the USGS website using the dataRetrieval package in R.
For that purpose, I have written a function called getstreamflow that works fine when I run, for example:
siteNumber <- c("094985005","09498501","09489500","09489499","09498502")
Streamflow = getstreamflow(siteNumber)
The output of the function is a list of data frames.
The function runs fine when there is no issue downloading the data, but for some stations I got the following error:
Request failed [404]. Retrying in 1.1 seconds...
Request failed [404]. Retrying in 3.3 seconds...
For: https://waterservices.usgs.gov/nwis/site/?siteOutput=Expanded&format=rdb&site=0946666666
To keep the function from stopping when it encounters an error, I am trying to use tryCatch as in the following code:
Streamflow = tryCatch(
  expr = {
    getstreamflow(siteNumber)
  },
  error = function(e) {
    message(paste(siteNumber, " there was an error"))
  })
I want the function to skip a station and go on to the next one when it encounters an error. Currently, the output I get is the one below, which is obviously wrong because it says there was an error for every station:
094985005 there was an error09498501 there was an error09489500 there was an error09489499 there was an error09498502 there was an error09511300 there was an error09498400 there was an error09498500 there was an error09489700 there was an error09500500 there was an error09489082 there was an error09510200 there was an error09489100 there was an error09490500 there was an error09510180 there was an error09494000 there was an error09490000 there was an error09489086 there was an error09489089 there was an error09489200 there was an error09489078 there was an error09510170 there was an error09493500 there was an error09493000 there was an error09498503 there was an error09497500 there was an error09510000 there was an error09509502 there was an error09509500 there was an error09492400 there was an error09492500 there was an error09497980 there was an error09497850 there was an error09492000 there was an error09497800 there was an error09510150 there was an error09499500 there was an error... <truncated>
What am I doing wrong with tryCatch?
Answer
You wrote the tryCatch outside of getstreamflow, so if one site fails, the entire getstreamflow call returns an error and nothing else. You should either supply one site at a time, or put the tryCatch inside getstreamflow.
Example
x <- 1:5
fun <- function(x) {
  for (i in x) if (i == 5) stop("ERROR")
  return(x^2)
}
tryCatch(fun(x), error = function(e) paste0("wrong", x))
This returns:
[1] "wrong1" "wrong2" "wrong3" "wrong4" "wrong5"
Multiple arguments
You indicated that you have both siteNumber and datatype to iterate over.
Using Map, we can define a function that takes two inputs:
Map(function(x, y) tryCatch(fun(x, y),
                            error = function(e) message(paste(x, " there was an error"))),
    x = siteNumber,
    y = datatype)
Using a for-loop, we can just iterate over them:
Streamflow <- vector(mode = "list", length = length(siteNumber))
for (i in seq_along(siteNumber)) {
  Streamflow[[i]] <- tryCatch(getstreamflow(siteNumber[i], datatype),
                              error = function(e) {
                                message(paste(siteNumber[i], " there was an error"))
                                NA  # keep the slot (assigning NULL would drop it from the list)
                              })
}
Or, as suggested, just modify getstreamflow.
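Since the body of getstreamflow isn't shown in the question, the following is only a hypothetical sketch of what "put the tryCatch inside getstreamflow" could look like, assuming the per-site download uses something like dataRetrieval::readNWISdv():

library(dataRetrieval)

# Hypothetical skeleton of getstreamflow with the error handling moved inside:
# a failing site yields NULL instead of aborting the whole call
getstreamflow <- function(siteNumber, datatype) {
  lapply(siteNumber, function(site) {
    tryCatch(
      readNWISdv(site, parameterCd = "00060"),  # assumed download step; adapt to your datatype
      error = function(e) {
        message(paste(site, "there was an error"))
        NULL
      }
    )
  })
}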

R loop completes only 3 iterations out of 2504

I've written a function to download multiple files from NOAA's database. First, I've got sites, a list of site IDs that I want to download from the website. It looks like this:
> head(sites)
[[1]]
[1] "9212"
[[2]]
[1] "10158"
[[3]]
[1] "11098"
> length(sites)
[1] 2504
My function is shown below.
tested <- lapply(seq_along(sites), function(x) {
  no <- sites[[x]]
  data <- GET(paste0('https://www.ncdc.noaa.gov/paleo-search/data/search.json?xmlId=', no))
  v <- content(data)
  check <- GET(v$statusUrl)
  j <- content(check)
  URL <- j$archive
  download.file(URL, destfile = paste0('./tree_ring/', no, '.zip'))
})
The weird issue is that it works for the first three sites (they download properly), but then it stops and throws the following error:
Error in charToRaw(URL) : argument must be a character vector of length 1
I've tried manually downloading the 4th and 5th site (using the same code as above, but not within function) and it works fine. What could be going on here?
EDIT 1: Showing more site ID's as requested
> dput(sites[1:6])
list("9212", "10158", "11098", "15757", "15777", "15781")
I converted your code to a for loop so I could see the most recent values of all your variables when things fail.
The failures aren't consistently on the 4th site. Running your code a few times, it sometimes fails on site 2, 3, or 4. When it fails, if I look at j, I see this:
$message
[1] "finalizing archive"

$status
[1] "working"
If I re-run check=GET(v$statusUrl); j<-content(check) a few seconds later, then I see
$archive
[1] "https://www.ncdc.noaa.gov/web-content/paleo/bundle/1986420067_2020-04-23.zip"

$status
[1] "complete"
So, I think it takes the server a little bit of time to prepare the file for download, and sometimes R asks for the file before it's ready, which causes an error. A simple fix might look like this:
check_status <- function(v) {
  check <- GET(v$statusUrl)
  content(check)
}

for (x in seq_along(sites)) {
  no <- sites[[x]]
  data <- GET(paste0('https://www.ncdc.noaa.gov/paleo-search/data/search.json?xmlId=', no))
  v <- content(data)

  # Poll the status URL until the archive is ready (or we give up after 100 tries)
  try_counter <- 0
  j <- check_status(v)
  while (j$status != "complete" & try_counter < 100) {
    Sys.sleep(0.1)
    j <- check_status(v)
    try_counter <- try_counter + 1
  }

  URL <- j$archive
  download.file(URL, destfile = paste0(no, '.zip'))
}
If the status isn't ready, this version will wait 0.1 seconds before checking again, up to 10 seconds.
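If you also want a single bad archive to be skipped rather than stop the remaining ~2500 sites, the download step at the end of the loop can be guarded the same way. This is an optional tweak, replacing the download.file() line above:

# Optional: skip sites whose archive URL never materialized or whose download fails
if (!is.null(URL)) {
  try(download.file(URL, destfile = paste0(no, '.zip')), silent = TRUE)
} else {
  message(paste("No archive URL for site", no, "- skipping"))
}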

Ignore error when importing JSON files in R

I have this for loop that downloads a JSON file from a Solr search server.
It loops over a vector that contains keywords (100, in this case):
library(jsonlite)
for (i in 1:100) {
  docs <- fromJSON(paste("http://myurl.com/solr/select?df=topic&fq=", keywords[i],
                         "&indent=on&q=*:*&rows=1&wt=json", sep = ""))
  numFound <- docs$response$numFound
  print(numFound)
}
It works fine until it reaches a keyword that is not found on the Solr server, and returns this error:
Error in open.connection(con, "rb") : HTTP error 400.
And then the loop stops.
Is there a way to ignore the error and let the loop proceed?
I've read about tryCatch but still couldn't figure it out.
Simpler than tryCatch, you can use the function try inside your keyword loop. This will attempt to load the URL; if an error is encountered, it will print the error but continue to the next keyword.
library(jsonlite)
for (i in 1:100) {
  try({
    docs <- fromJSON(paste("http://myurl.com/solr/select?df=topic&fq=", keywords[i],
                           "&indent=on&q=*:*&rows=1&wt=json", sep = ""))
    numFound <- docs$response$numFound
    print(numFound)
  })
}
If you also don't want to have the errors printed, specify silent = TRUE:
library(jsonlite)
for (i in 1:100) {
  try({
    docs <- fromJSON(paste("http://myurl.com/solr/select?df=topic&fq=", keywords[i],
                           "&indent=on&q=*:*&rows=1&wt=json", sep = ""))
    numFound <- docs$response$numFound
    print(numFound)
  }, silent = TRUE)
}
I'm partial to purrr's safely for this kind of task, which works well in purrr's map functions. You can test it by getting JSONs from GitHub's API:
keywords <- c("hadley", "gershomtripp", "lsjdflkaj")
url <- "https://api.github.com/users/{.}/repos"
Now we can get the JSONs and extract the repo IDs:
library(jsonlite)
library(purrr)
library(glue)
json_list <- map(keywords, safely(~ fromJSON(glue(url)) %>% .$id))
This will return a list of elements, each containing result and error. If there was an error, it is saved in error; otherwise the results are saved in result.
[[1]]
[[1]]$result
[1] 40423928 40544418 14984909 12241750 5154874 9324319 20228011 82348 888200 3116998
[11] 8296284 137344416 133734429 2788278 28724058 9470424 116708612 34325557 41144 41157
[21] 78543290 66588778 35225488 14507273 15718805 18562209 12522 115742443 119107571 201908
[[1]]$error
NULL
[[2]]
[[2]]$result
[1] 150995700 141743224 127107806 130802586 185857872 131488780 148619375 165221804 135417803 127116088
[11] 181662388 173351888 127131146 136896011
[[2]]$error
NULL
[[3]]
[[3]]$result
NULL
[[3]]$error
<simpleError in open.connection(con, "rb"): HTTP error 404.>
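To pull the pieces back apart, purrr::transpose() turns that list inside out, so you can keep only the keywords that succeeded:

library(purrr)

# Split the safely() output into parallel result/error lists
results <- transpose(json_list)

ok <- map_lgl(results$error, is.null)            # TRUE where no error occurred
repo_ids <- set_names(results$result[ok], keywords[ok])
failed_keywords <- keywords[!ok]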

Check if URL exists in R

I want to loop over a list of URLs and I want to find out if these URLs exist or not.
RCurl provides the url.exists() function. However, the output doesn't seem to be right; for example, it says that amazon.com is not registered (it does so because url.exists() doesn't get back a status code in the 200 range; for amazon.com it's 405, "method not allowed").
I also tried HEAD() and GET() provided by the httr package. But sometimes I get error messages here, for example for timeouts or because the URL is not registered.
Error messages look like this:
Error in curl::curl_fetch_memory(url, handle = handle) :
Timeout was reached: Connection timed out after 10000 milliseconds
Error in curl::curl_fetch_memory(url, handle = handle) :
Could not resolve host: afsadadssadasf.com
When I get such an error, the whole for loop stops. Is it possible to continue the for loop? I tried tryCatch(), but to my knowledge this can only help when the problem is in the dataframe itself.
pingr::ping() only uses ICMP, which is blocked on sane organizational networks since attackers have used ICMP as a way to exfiltrate data and communicate with command-and-control servers.
pingr::ping_port() doesn't send the HTTP Host: header, so the IP address may be responding while the target virtual web host is not running on it, and it definitely doesn't validate that the path exists at the target URL.
You should clarify what you want to happen when there are only non-200:299 range HTTP status codes. The following makes an assumption.
NOTE: You used Amazon as an example and I'm hoping that's the first site that just "came to mind" since it's unethical and a crime to scrape Amazon and I would appreciate my code not being brought into your universe if you are in fact just a brazen content thief. If you are stealing content, it's unlikely you'd be up front here about that, but on the outside chance you are both stealing and have a conscience, please let me know so I can delete this answer so at least other content thieves can't use it.
Here's a self-contained function for checking URLs:
#' @param x a single URL
#' @param non_2xx_return_value what to do if the site exists but the
#'   HTTP status code is not in the `2xx` range. Default is to return `FALSE`.
#' @param quiet if not `FALSE`, then every time the `non_2xx_return_value` condition
#'   arises a warning message will be displayed. Default is `FALSE`.
#' @param ... other params (`timeout()` would be a good one) passed directly
#'   to `httr::HEAD()` and/or `httr::GET()`
url_exists <- function(x, non_2xx_return_value = FALSE, quiet = FALSE, ...) {

  suppressPackageStartupMessages({
    require("httr", quietly = FALSE, warn.conflicts = FALSE)
  })

  # you don't need these two functions if you're already using `purrr`
  # but `purrr` is a heavyweight compiled package that introduces
  # many other "tidyverse" dependencies and this doesn't.

  capture_error <- function(code, otherwise = NULL, quiet = TRUE) {
    tryCatch(
      list(result = code, error = NULL),
      error = function(e) {
        if (!quiet)
          message("Error: ", e$message)

        list(result = otherwise, error = e)
      },
      interrupt = function(e) {
        stop("Terminated by user", call. = FALSE)
      }
    )
  }

  safely <- function(.f, otherwise = NULL, quiet = TRUE) {
    function(...) capture_error(.f(...), otherwise, quiet)
  }

  sHEAD <- safely(httr::HEAD)
  sGET <- safely(httr::GET)

  # Try HEAD first since it's lightweight
  res <- sHEAD(x, ...)

  if (is.null(res$result) ||
      ((httr::status_code(res$result) %/% 200) != 1)) {

    res <- sGET(x, ...)

    if (is.null(res$result)) return(NA) # or whatever you want to return on "hard" errors

    if (((httr::status_code(res$result) %/% 200) != 1)) {
      if (!quiet) warning(sprintf("Requests for [%s] responded but without an HTTP status code in the 200-299 range", x))
      return(non_2xx_return_value)
    }

    return(TRUE)

  } else {
    return(TRUE)
  }

}
Give it a go:
library(magrittr)  # for %>%

c(
  "http://content.thief/",
  "http://rud.is/this/path/does/not_exist",
  "https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=content+theft",
  "https://www.google.com/search?num=100&source=hp&ei=xGzMW5TZK6G8ggegv5_QAw&q=don%27t+be+a+content+thief&btnK=Google+Search&oq=don%27t+be+a+content+thief&gs_l=psy-ab.3...934.6243..7114...2.0..0.134.2747.26j6....2..0....1..gws-wiz.....0..0j35i39j0i131j0i20i264j0i131i20i264j0i22i30j0i22i10i30j33i22i29i30j33i160.mY7wCTYy-v0",
  "https://rud.is/b/2018/10/10/geojson-version-of-cbc-quebec-ridings-hex-cartograms-with-example-usage-in-r/"
) -> some_urls

data.frame(
  exists = sapply(some_urls, url_exists, USE.NAMES = FALSE),
  some_urls,
  stringsAsFactors = FALSE
) %>% dplyr::tbl_df() %>% print()
## A tibble: 5 x 2
## exists some_urls
## <lgl> <chr>
## 1 NA http://content.thief/
## 2 FALSE http://rud.is/this/path/does/not_exist
## 3 TRUE https://www.amazon.com/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=con…
## 4 TRUE https://www.google.com/search?num=100&source=hp&ei=xGzMW5TZK6G8ggegv5_QAw&q=don%27t…
## 5 TRUE https://rud.is/b/2018/10/10/geojson-version-of-cbc-quebec-ridings-hex-cartograms-wi…
## Warning message:
## In FUN(X[[i]], ...) :
## Requests for [http://rud.is/this/path/does/not_exist] responded but without an HTTP status code in the 200-299 range
Here is a simple solution to the problem.
urls <- c("http://www.amazon.com",
          "http://this.isafakelink.biz",
          "https://stackoverflow.com")

valid_url <- function(url_in, t = 2) {
  con <- url(url_in)
  check <- suppressWarnings(try(open.connection(con, open = "rt", timeout = t), silent = TRUE)[1])
  suppressWarnings(try(close.connection(con), silent = TRUE))
  ifelse(is.null(check), TRUE, FALSE)
}

sapply(urls, valid_url)
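The logical vector that sapply() returns can then be used to keep only the URLs that actually opened, for example:

# Keep only the reachable URLs
reachable <- urls[sapply(urls, valid_url)]
reachable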
Try the ping function in the pingr package. It gives the timings of pings.
library(pingr)
ping("amazon.com") # good site
## [1] 45 46 45
ping("xxxyyyzzz.com") # bad site
## [1] NA NA NA
Here's a function that evaluates an expression and returns TRUE if it works and FALSE if it doesn't. You can also assign variables inside the expression.
try_catch <- function(exprs) {!inherits(try(eval(exprs)), "try-error")}
try_catch(out <- log("a")) # returns FALSE
out # Error: object 'out' not found
try_catch(out <- log(1)) # returns TRUE
out # out = 0
You can use the expression to check for success.
done <- try_catch({
  # try something
})

if (!done) {
  done <- try_catch({
    # try something else
  })
}

if (!done) {
  # default expression
}

Scripting userTimeline from twitteR package to data mine from a list of users in Twitter, accounting for missing users in a list

I am attempting to compile a corpus of the user timelines of a specific subset of Twitter users. My problem is that with the existing code (given below), when a user's account has been suspended or deleted, the code breaks, giving the output and error shown below.
## ORIGINAL ##
for (user in users) {
  # Download user's timeline from Twitter
  tweets <- userTimeline(user)
  # Extract tweets
  tweets <- unlist(lapply(tweets, function(t) t$getText()))
  # Save tweets to file
  write.csv(tweets, file = paste(user, ".csv", sep = ""), row.names = F)
  # Sys.sleep(sleepTime)
}
[1] "Not Found"
Error in twInterfaceObj$doAPICall(cmd, params, method,
...) : Error: Not Found
My question is, how can I keep the script running, saving some sort of null result for the 'missing' (deleted/inactive) accounts?
I am using the twitteR package in R: ftp://cran.r-project.org/pub/R/web/packages/twitteR/twitteR.pdf
#EDIT#
# Pause for 60 sec
sleepTime = 60

for (user in users) {
  # Tell the loop to skip a user if their account is protected
  # or some other error occurs
  result <- try(userTimeline(user), silent = TRUE)
  if (class(result) == "try-error") next

  # Download user's timeline from Twitter
  tweets <- userTimeline(user)

  # Extract tweets
  tweets <- unlist(lapply(tweets, function(t) t$getText()))

  # Save tweets to file
  write.csv(tweets, file = paste(user, ".csv", sep = ""), row.names = F)

  # Pause for 60s between iterations to avoid exceeding the Twitter API request limit
  print('Sleeping for 60 seconds...')
  Sys.sleep(sleepTime)
}

# Now inspect tweets to see the user's timeline data
You can catch the exception. See ?try or ?tryCatch. For example:
tweets <- try(userTimeline(user), silent = TRUE)
if (inherits(tweets, 'try-error')) {
  return(NULL)
} else {
  ## process normally here
}
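Dropped into the original loop, a sketch of that idea might look like the following. Writing an empty CSV as the "null result" is just one option; you could also simply next over missing accounts.

library(twitteR)

for (user in users) {
  # A suspended/deleted account makes userTimeline() fail, so wrap it in try()
  timeline <- try(userTimeline(user), silent = TRUE)

  if (inherits(timeline, "try-error")) {
    # Record a null result for the missing account and move on
    write.csv(data.frame(text = character(0)), file = paste0(user, ".csv"), row.names = FALSE)
    next
  }

  # Extract and save the tweets as before
  tweets <- unlist(lapply(timeline, function(t) t$getText()))
  write.csv(tweets, file = paste0(user, ".csv"), row.names = FALSE)

  Sys.sleep(60)  # stay under the Twitter API rate limit
}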
