Is there another method to extract tweets for a specific time span, rather than by number of tweets, using searchTwitter? I am able to fetch a certain number of tweets for Nordstrom but have not been successful in doing so for specific dates.
library('twitteR')
nord1 <- searchTwitteR("#nordstrom", n = 1000)  # works fine
nord2 <- searchTwitteR('nordstrom', since = '2012-01-01', until = '2015-11-13')
Warning message:
In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
25 tweets were requested but the API can only return 0
The Twitter Search API only allows access to the most recent tweets (roughly the last 6-9 days); I was searching for much earlier dates, which is why I ran into this issue.
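If recent tweets are enough, keeping the requested window inside that range avoids the empty result; a minimal sketch (the seven-day offset is an assumption about the index depth):
library(twitteR)
# Keep the window within roughly the last week, which is all the Search API indexes
since_date <- as.character(Sys.Date() - 7)
until_date <- as.character(Sys.Date())
nord3 <- searchTwitteR('nordstrom', n = 1000, since = since_date, until = until_date)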
I am using the googleanalyticsR package to download all the data I can from Google Analytics. My objective is to build a small data frame to analyze.
To download all the data I created a loop:
for (i in 1:length(metricsarray)) {
  print(paste(i))
  tryCatch(google_analytics_4(my_id,
                              date_range = c(start_date, end_date),
                              metrics = metricsarray[i],
                              dimensions = c('transactionId'),
                              max = -1)) %>%
    assign(gsub(" ", "", paste("metricsarray", i, sep = "")), ., inherits = TRUE)
}
The loop runs from 1 to 11 with no problems, i.e. it prints the value of i and gives me the message:
Downloaded [3537] rows from a total of [3537]
But I get this error when it reaches i = 12 in metricsarray[i]:
2017-10-04 10:37:56> Downloaded [0] rows from a total of [].
Error in if (nrow(out) < all_rows) { : argument is of length zero
I used tryCatch, but it has no effect. My objective was for the loop to keep testing each metricsarray[i] until the end,
and to continue even when it hits this error:
JSON fetch error: Selected dimensions and metrics cannot be queried
together.
I am new to using the Google Analytics API in R, so feel free to suggest solutions, articles, or anything else you think will help me learn more about this.
Thank you,
JSON fetch error: Selected dimensions and metrics cannot be queried
together.
Not all Google Analytics dimensions and metrics can be queried together, mainly because the combined data either doesn't exist or would make no sense.
The best way to test which dimensions and metrics can be queried together is to check the dimensions and metrics reference; invalid combinations are grayed out.
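To make the loop actually skip the failing combinations, tryCatch needs an explicit error handler; here is a minimal sketch of the loop above with one added (the log message is illustrative):
for (i in 1:length(metricsarray)) {
  result <- tryCatch(
    google_analytics_4(my_id,
                       date_range = c(start_date, end_date),
                       metrics = metricsarray[i],
                       dimensions = c('transactionId'),
                       max = -1),
    error = function(e) {
      # Log the failure and return NULL so the loop moves on to the next metric
      message("Skipping metricsarray[", i, "]: ", conditionMessage(e))
      NULL
    }
  )
  if (!is.null(result)) {
    assign(paste0("metricsarray", i), result, inherits = TRUE)
  }
}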
I started learning R less than a month ago, and I'm trying to use it to get rid of the tedious Facebook work (extracting comments) that we do for our reports.
Using the Rfacebook package, I made this script that extracts (1) the posts of a page for a given period and (2) the comments on those posts. It worked well for the page I'm reporting on, but when I tried it on other pages whose posts had zero comments, it threw an error.
Here's the script:
# Load libraries
library(Rfacebook)
library(lubridate)
library(tibble)

# Set the time period; change as you please
current_date <- Sys.Date()
past30days <- current_date - 30

# Assign a page; edit this to the page you are monitoring
brand <- 'bpi'

# Authenticate with Facebook; use your own credentials
app_id <- "xxxxxxxxxxxxxxxx"
app_secret <- "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
token <- fbOAuth(app_id, app_secret, extended_permissions = FALSE, legacy_permissions = FALSE)

# Extract all posts from the page
listofposts <- getPage(brand, token, n = 5000, since = past30days, until = current_date,
                       feed = FALSE, reactions = FALSE, verbose = TRUE)
write.csv(listofposts, file = paste0('AsOf', current_date, brand, 'Posts', '.csv'))

# Convert to a data frame
df <- as_tibble(listofposts)

# Convert the post IDs to a vector
postidvector <- df[["id"]]

# Get the number of posts in the period
n <- length(postidvector)

# Produce all comments via a loop
reactions <- vector("list", n)
for (i in 1:n) {
  reactions[[i]] <- assign(paste(brand, 'Comments', i, sep = ""),
                           getPost(postidvector[i], token, comments = TRUE, likes = FALSE,
                                   n.likes = 5000, n.comments = 10000))
}

# Extract all comments per post to CSV
for (j in 1:n) {
  write.csv(reactions[[j]], file = paste0('AsOf', current_date, brand, 'Comments', j, '.csv'))
}
Here's the error when exporting the comments to CSV when I tried it on the
pages with posts that had ZERO comments:
Error in (function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE, :
arguments imply differing number of rows: 1, 0
I tried it on a heavy traffic page, and it worked fine too. One post had 10,000 comments and it extracted just fine. :(
Thanks in advance! :D
Pages can be restricted by age or location. You can't use an App Access Token for those, because it does not include a user session, so Facebook does not know whether you are allowed to see the Page content. You will have to use a User Token or Page Token instead.
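As for the original differing-number-of-rows error on posts with zero comments, one workaround is to guard the CSV export step; a rough sketch, assuming getPost returns a list whose comments element is a data frame:
for (j in 1:n) {
  comments_df <- reactions[[j]]$comments
  # Only write a CSV when the post actually has comments
  if (!is.null(comments_df) && nrow(comments_df) > 0) {
    write.csv(comments_df, file = paste0('AsOf', current_date, brand, 'Comments', j, '.csv'))
  }
}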
I am planning to extract tweets for a particular span using the R package twitteR.
data <- searchTwitter("Kohli", n = 100, lang = "en", since = "2014-12-01", until = "2014-12-30", cainfo = "cacert.pem")
But I am getting this message:
In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
100 tweets were requested but the API can only return 0
I can search for tweets without the since and until parameters.
Any kind of help is appreciated.
The API recently changed so that you can only extract about a week's worth of tweets; see the package documentation:
https://cran.r-project.org/web/packages/twitteR/twitteR.pdf
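Given that limit, the since and until values have to fall within roughly the last week; a small sketch that computes them relative to today (the six-day offset is an assumption):
# Dates computed relative to today so they stay inside the searchable window
since_date <- as.character(Sys.Date() - 6)
until_date <- as.character(Sys.Date())
data <- searchTwitter("Kohli", n = 100, lang = "en",
                      since = since_date, until = until_date, cainfo = "cacert.pem")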
I am using the twitteR package in R to extract tweets based on their IDs,
but I am unable to do this for multiple tweet IDs without hitting either a rate limit or a 404 error.
This is because I am using showStatus(), which handles one tweet ID per request.
I am looking for a function similar to getStatuses() that handles multiple tweet IDs per request.
Is there an efficient way to perform this action?
I suppose only 60 requests can be made in a 15-minute window using OAuth.
So, how do I ensure that:
1. Multiple tweet IDs are retrieved per request, and these requests are then repeated;
2. The rate limit is kept in check;
3. Errors are handled for tweets that are not found?
P.S.: This activity is not user-based.
Thanks
I have come across the same issue recently. For retrieving tweets in bulk, Twitter recommends using the lookup method provided by its API; that way you can get up to 100 tweets per request.
Unfortunately, this has not been implemented in the twitteR package yet, so I've tried to hack together a quick function (by reusing lots of code from the twitteR package) to use that API method:
lookupStatus <- function(ids, ...){
  # Validate the IDs, then split them into batches of at most 100
  # (the maximum the statuses/lookup endpoint accepts per request)
  lapply(ids, twitteR:::check_id)
  batches <- split(ids, ceiling(seq_along(ids) / 100))
  results <- lapply(batches, function(batch) {
    params <- parseIDs(batch)
    statuses <- twitteR:::twInterfaceObj$doAPICall(paste("statuses", "lookup", sep = "/"),
                                                   params = params, ...)
    twitteR:::import_statuses(statuses)
  })
  return(unlist(results))
}

# Collapse a batch of IDs into the comma-separated form the API expects
parseIDs <- function(ids){
  id_list <- list()
  if (length(ids) > 0) {
    id_list$id <- paste(ids, collapse = ",")
  }
  return(id_list)
}
Make sure that your vector of IDs is of class character (otherwise there can be some problems with very large IDs).
Use the function like this:
ids <- c("432656548536401920", "332526548546401821")
tweets <- lookupStatus(ids, retryOnRateLimit=100)
Setting a high retryOnRateLimit ensures you get all your tweets, even if your vector of IDs has more than 18,000 entries (100 IDs per request, 180 requests per 15-minute window).
As usual, you can turn the tweets into a data frame with twListToDF(tweets).
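For example:
tweets_df <- twListToDF(tweets)  # one row per tweet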
I am trying to retrieve ~3000 tweets with the keyword "nba" or hashtag "#nba" using the twitteR function searchTwitter, but it only returns 299 tweets for "nba" and 398 tweets for "#nba" between 2013-01-01 and 2014-02-25. I am really confused; is this normal? Has anyone else experienced a similar problem using twitteR? Please help. Much appreciated!
library(twitteR)
library(plyr)
library(stringr)
load("~/twitter_authentication.Rdata")
registerTwitterOAuth(cred)
nbahash_tweets <- searchTwitter("#nba", since = '2013-01-01', until = '2014-02-25', n = 3000)
nba_tweets <- searchTwitter("nba", since = '2013-01-01', until = '2014-02-25', n = 3000)
Warning message:
In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
3000 tweets were requested but the API can only return 398
and then
Warning message:
In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
3000 tweets were requested but the API can only return 299
This is due to the limitations of Twitter's Search API. In Twitter's FAQ I found the following question:
Why are the Tweets I'm looking for not in Twitter Search, the Search API, or Search widgets?
There it says:
Due to capacity constraints, the index currently only covers about a week's worth of tweets.
Hence, even if you ask for tweets starting from 1 January 2013, you will only get tweets from at most about one week ago, so the number of tweets won't be that big.
Furthermore, they say:
Our search service is not meant to be an exhaustive archive of public tweets and not all tweets are indexed or returned.
Therefore, even fewer tweets will be available.
So, as far as I can see, your code is fine; the small number of downloaded tweets is due to the Search API.