twitteR package date range issue in R - r

So I am trying to find tweets based on date range with this code:
tweets <- searchTwitter(c("Alzheimer"), n=500, lang="en",
since="2011-03-01", until="2011-03-02")
and I get the warning message
In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit
= retryOnRateLimit, :
500 tweets were requested but the API can only return 0
BUT I don't get this warning message with the code
tweets <- searchTwitter(c("Alzheimer"), n=500, lang="en", since="2011-08-01")
I've read many previous posts about Twitter not allowing a date range more than a few days in the past. Is this still the case? I'm new to coding, so any help is greatly appreciated.

MrFlick is right in his comment: the Twitter API only returns tweets from the past few days. As for why you are not getting the warning on the second command, my guess is that it's working. I tried that command from my terminal and got back 500 tweets.
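To make that limit explicit, here is a small sketch that checks a requested `since` date against the roughly 7-day window of the standard search API before calling `searchTwitter`. The helper name `within_search_window` and the 7-day cutoff are assumptions for illustration, not part of the twitteR package:

```r
# Hypothetical helper: returns TRUE only if 'since' falls inside the
# assumed ~7-day window that the standard Twitter search API serves.
within_search_window <- function(since, today = Sys.Date(), window_days = 7) {
  as.Date(since) >= today - window_days
}

# A date from 2011 is far outside the window, so the search returns 0 tweets:
within_search_window("2011-03-01", today = as.Date("2015-06-01"))  # FALSE
# A date a couple of days back is fine:
within_search_window("2015-05-30", today = as.Date("2015-06-01"))  # TRUE
```

Checking this up front lets you fail with a clear message instead of puzzling over the "requested but the API can only return 0" warning.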

Related

academictwitteR - Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, : something went wrong. Status code: 403

I have just received Academic Twitter developer privileges and am attempting to scrape some tweets. I updated RStudio, regenerated a new bearer token once I got Academic Twitter access, and get_bearer() returns my new bearer token. However, I continue to get the following error:
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
something went wrong. Status code: 403
In addition: Warning messages:
1: Recommended to specify a data path in order to mitigate data loss when ingesting large amounts of data.
2: Tweets will not be stored as JSONs or as a .rds file and will only be available in local memory if assigned to an object.
Additionally, I have tried specifying a data path, but I am confused about what this means. I think that's where my issue lies: does the data path mean a specific file path on my computer?
Below is the code I was attempting to use. This code worked previously with my professor's bearer token, which they used just to show the output:
tweets <- get_all_tweets(
  query = "#BlackLivesMatter",
  start_tweets = "2020-01-01T00:00:00Z",
  end_tweets = "2020-01-05T00:00:00Z",
  n = 100
)
Thanks in advance!
Status code 403 means Forbidden. You may want to check the error codes reference page of the Twitter API here.
Perhaps your bearer token is misspelled?
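As a hedged sketch of how to rule out the token and the data-path warnings in one go: verify that `get_bearer()` actually returns a non-empty token, then pass an explicit `data_path` directory (the `"tweet_data/"` name below is just a placeholder) so pages are written to disk as they arrive:

```r
library(academictwitteR)

# A 403 usually means the token is wrong, expired, or lacks Academic Access.
# get_bearer() reads the token stored via set_bearer(); check it is set:
token <- get_bearer()
if (!nzchar(token)) stop("Bearer token not set; run set_bearer() and restart R.")

# "tweet_data/" is a placeholder: any writable directory on your computer.
# With data_path set, each page of results is stored on disk, which is what
# the two warnings about data loss are recommending.
tweets <- get_all_tweets(
  query        = "#BlackLivesMatter",
  start_tweets = "2020-01-01T00:00:00Z",
  end_tweets   = "2020-01-05T00:00:00Z",
  n            = 100,
  data_path    = "tweet_data/",
  bind_tweets  = TRUE   # also return the bound result as a data frame
)
```

If the token checks out and the 403 persists, the request may be hitting an endpoint your access level does not cover.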

twitteR packaging throwing error when trying to search for tweets by geocode

I am trying to collect tweets based on a geolocation using the searchTwitter function.
geocode <- "40.6687,74.1143,1mi"
text <- ""
count <- 200
bList <- searchTwitter(text, n=count, geocode=geocode)
When I run this code, I get an error stating "200 tweets were requested but the API can only return 0". I am not looking for tweets more than 7 days old and have no whitespace in my geocode, which were solutions to previously asked questions about this function.
Why am I getting this error and how can I fix it?
I have tried looking at similar problems online but I cannot figure out why there is an error.

Error in Sys.sleep(reset + 2) : invalid 'time' value

I am currently trying to download data using the search_tweets command from the rtweet package. I have a list of over 400 requests that I want to loop over. The code seems to run without problems, yet after a while this happens:
retry on rate limit...
waiting about 16 minutes... #which it then counts down#
Error in Sys.sleep(reset + 2) : invalid 'time' value
Searching for related questions, I only found this: https://github.com/ropensci/rtweet/issues/277. There they say that in the latest rtweet version this issue has been solved. But I am using the latest R and rtweet versions.
Has someone experienced a similar issue? And how have you been able to solve it?
This is the code I am using. Please don't hesitate to tell me if a mistake in the code is causing the problem. I was wondering, for example, whether it is possible to include a condition that only runs the next request once the previous one has fully downloaded.
for (x in 1:length(mdbs)) {
  mdbName <- mdbs[x]
  print(mdbName)
  # note the leading space before lang:, so the name and the filter
  # don't run together into one token
  myMDBSearch <- paste0(mdbName, " lang:de -filter:retweets")
  print(myMDBSearch)
  req <- search_tweets(
    q = myMDBSearch,
    n = 100000,
    retryonratelimit = TRUE,
    token = bearer_token(),
    parse = TRUE
  )
  data_total <- rbind(data_total, req)
  print("sleeping 5 seconds")
  Sys.sleep(5)
}
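One way to keep the loop alive when a single request dies (as with the Sys.sleep 'invalid time' error) is to wrap each call in tryCatch() and collect the names that failed for a later retry. This is a sketch of that pattern, not a fix for the underlying rtweet bug; it assumes data_total already exists as in the original loop:

```r
failed <- character(0)
for (mdbName in mdbs) {
  req <- tryCatch(
    search_tweets(
      q = paste0(mdbName, " lang:de -filter:retweets"),
      n = 100000,
      retryonratelimit = TRUE,
      token = bearer_token(),
      parse = TRUE
    ),
    error = function(e) {
      # log the failure and keep going instead of aborting the whole run
      message("failed for ", mdbName, ": ", conditionMessage(e))
      NULL
    }
  )
  if (!is.null(req)) {
    data_total <- rbind(data_total, req)  # data_total: pre-existing data frame
  } else {
    failed <- c(failed, mdbName)          # remember this name for a retry pass
  }
  Sys.sleep(5)
}
```

After the loop, `failed` holds the requests you can rerun once the rate-limit window has reset.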

quantmod - getQuote() - '403 Forbidden'

I found an answer to my own question (see below). Still need help.
In the same package, quantmod, there is a function called getSymbols.google.
Nevertheless, if I use it to get the Microsoft quote, for example, it works all right:
getSymbols.google('MSFT', environment(), src="google", from = (Sys.Date() - 1))
[1] "MSFT"
But I can't make it work on a currency pair:
getSymbols.google("GBPUSD", environment(), src="google", from = (Sys.Date() - 1))
Error in download.file(paste(google.URL, "q=", Symbols.name, "&startdate=", :
cannot open URL 'http://finance.google.com/finance/historical?q=GBPUSD&startdate=Nov+02,+2017&enddate=Nov+03,+2017&output=csv'
In addition: Warning message:
In download.file(paste(google.URL, "q=", Symbols.name, "&startdate=", :
cannot open URL 'http://finance.google.com/finance/historical?q=GBPUSD&startdate=Nov+02,+2017&enddate=Nov+03,+2017&output=csv': HTTP status was '400 Bad Request'
Any ideas?
Good morning,
Since the 1st of November I'm having trouble with the getQuote function. It is a function inside the quantmod package, which uses the Yahoo API to request the information.
The description of the function is as follows: Fetch current stock quote(s) from specified source. At present this only handles sourcing quotes from Yahoo Finance, but it will be extended to additional sources over time.
In R, I'm getting the following error: "HTTP status was '403 Forbidden'".
I've looked in my browser and the error is returned by the Yahoo web page itself.
Does anybody know how to solve it, or any alternative to the function getQuote()?
Here is an example from RStudio
getQuote("AAPL")
Error in download.file(paste("https://finance.yahoo.com/d/quotes.csv?s=", :
cannot open URL 'https://finance.yahoo.com/d/quotes.csv?s=AAPL&f=d1t1l1c1p2ohgv'
In addition: Warning message:
In download.file(paste("https://finance.yahoo.com/d/quotes.csv?s=", :
cannot open URL 'https://finance.yahoo.com/d/quotes.csv?s=AAPL&f=d1t1l1c1p2ohgv': HTTP status was '403 Forbidden'
Thanks
It seems that Yahoo has discontinued this service. Is anyone aware of an alternative to Yahoo? (I'd rather not have to web-scrape Yahoo for this.)
rob
I ran into the same problem... it's kludgey but as a workaround to get the end-of-day value, I have found this to work for now:
Instead of getQuote() to get the Last price (which doesn't seem to work from Yahoo anymore):
underlying<-"AAPL"
quote.last <-getQuote(underlying)$Last
Instead, I use getSymbols, which still works: it puts the history into a new object, and I pull out the value I want from that:
Hx<-getSymbols(underlying,from=Sys.Date()-1) # allows me to not have to retain the ticker name if I do this across many tickers
quote.last<-as.double(tail(Cl(get(Hx)),1)) # Closing price value from last row of data
rm(list=Hx) # throw away the temporary data frame with quote history
I'm sure there's a more elegant way to do it, but this is what fell out of my brain as a quick workaround that got it done. Sadly, it doesn't get things like the Bid and Ask that getQuote does.
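The workaround above can be tidied into a small helper that avoids the temporary object entirely by using `auto.assign = FALSE`, so nothing needs to be `rm()`'d afterwards. The function name `last_close` and the 5-day lookback (to skip weekends and holidays) are illustrative choices, not part of quantmod:

```r
library(quantmod)

# Hypothetical helper: fetch recent history for one ticker and return
# the most recent closing price. auto.assign = FALSE makes getSymbols()
# return the xts object directly instead of assigning it by ticker name.
last_close <- function(ticker, days_back = 5) {
  hx <- getSymbols(ticker, from = Sys.Date() - days_back, auto.assign = FALSE)
  as.double(tail(Cl(hx), 1))  # Cl() extracts the Close column
}

# last_close("AAPL")  # most recent closing price, no leftover objects
```

Like the original workaround, this only recovers the last close, not the live Bid/Ask that getQuote used to provide.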

rtweet giving error in rbind when collecting large numbers of tweets

I'm using the rtweet package in R to pull tweets for data analysis.
When I run the following line of code requesting 18,000 tweets, everything works fine:
t <- search_tweets("at", n=18000, lang='en', geocode='-25.609139,134.361949,3500km', since='2017-08-01', type='recent', retryonratelimit=FALSE)
But when I try to extend this to 100,000 tweets I get an error message
t <- search_tweets("at", n=100000, lang='en', geocode='-25.609139,134.361949,3500km', since='2017-08-01', type='recent', retryonratelimit=TRUE)
Finished collecting tweets!
Error in rbind(deparse.level, ...) :
invalid list argument: all variables should have the same length
Why is this occurring and how do I solve this? Thanks
I suggest updating to the dev version of rtweet. It fixed this issue for me.
devtools::install_github("mkearney/rtweet")
