I am trying to get the full timeline of a Twitter user through the academictwitteR package and the Twitter Academic Research track of the API (v2). However, I get an error 400. Unfortunately, the explanation at https://developer.twitter.com/en/support/twitter-api/error-troubleshooting
says "The request was invalid or cannot be otherwise served. An accompanying error message will explain further. Requests without authentication or with invalid query parameters are considered invalid and will yield this response.", but the error I get does not contain any further explanation, so I do not know what I am doing wrong. I tried other R packages as well and got the same error. I have access to all other functions of the API, but I cannot get the timeline request to return more than 3,200 tweets at a time (which my access level should allow). I want the full timeline of this user, not just the newest 3,200 tweets.
The error looks like this:
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
something went wrong. Status code: 400
This is my request (I tried using the user ID as well as the screen name; it does not change anything):
tmlne <- get_user_timeline("25390350",
                           start_tweets = "2009-03-19T00.00.00Z",
                           end_tweets = "2021-11-27T00.00.00Z",
                           bearer_token = get_bearer(),
                           data_path = "twitter/data/timelines/",
                           n = 100000,
                           bind_tweets = FALSE,
                           file = "tmlnw")
I tried revoking and renewing the bearer token, and I tested the token by downloading all tweets from a user (not what I am after, but a way to check that my access works). That works, but the timeline request does not. I can get 3,200 tweets with the get_timelines_v2 function of the TwitterV2 package, but I cannot get around the 3,200-tweet limit there: I do not know how to change that request to fetch tweets older than the most recent 3,200. With academictwitteR I know how to make multiple requests, but I cannot get its timeline function to work at all.
What would help me is either information about the error 400 or a way to adjust the get_timelines_v2 request to include older tweets.
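For reference, here is a minimal sketch of the same request with the timestamps written in standard ISO 8601 form (colons rather than dots between hours, minutes and seconds), which is the format the v2 endpoints expect; if the 400 comes from the non-standard time strings, this variant should avoid it (the user ID and paths are taken from the question above):

library(academictwitteR)

# Same request as above, but with ISO 8601 timestamps (HH:MM:SS, not HH.MM.SS).
tmlne <- get_user_timeline("25390350",
                           start_tweets = "2009-03-19T00:00:00Z",
                           end_tweets   = "2021-11-27T00:00:00Z",
                           bearer_token = get_bearer(),
                           data_path    = "twitter/data/timelines/",
                           n            = 100000,
                           bind_tweets  = FALSE,
                           file         = "tmlnw")

If the timeline request still stops at 3,200 tweets, that is a limit of the underlying v2 user-timeline endpoint rather than of the package; the usual way to reach older tweets with academic access is the full-archive search (get_all_tweets with a from: query).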
I am trying to retrieve tweets from Twitter on a full-archive basis. I finally managed to set things up on my developer page, but the code stumbles on an error that I cannot find anywhere on the internet. This is my code, with my tokens removed:
install.packages("RCurl")
library("RCurl")
install.packages("rtweet")
library("rtweet")
consumer_key <- ".."
consumer_secret <- ".."
access_token <- ".."
access_secret <- ".."
app <- "..."
token <- rtweet::create_token(app, consumer_key, consumer_secret, access_token, access_secret)
dataBTC1 <- search_fullarchive("Bitcoin", n = 1000, env_name = "Tweets", fromDate = "201501010000")
And this is the error I get:
Error in tweet(x$quoted_status) :
Unidentified value: edit_history, edit_controls, editable.
Please open an issue and notify the maintainer. Thanks!
I have literally no idea what it means or how to solve it, if that is possible. Can anyone help me?
Thanks!
These errors occur because the R package you are using (rtweet) does not "know" about the three new fields (edit_history, edit_controls, editable) that were added to the Tweet object when the editable-Tweets feature was released. You will need to ask the rtweet maintainers (as the error message suggests) to add support for these fields in their library, or find an alternative way to call the API.
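As a rough illustration of the second option (not rtweet code, and assuming you have an app-only bearer token and a premium full-archive environment labelled "Tweets", as in the question), you could hit the premium endpoint directly with httr and parse the JSON yourself, which sidesteps rtweet's tweet parser entirely:

library(httr)
library(jsonlite)

# Assumption: an app-only bearer token for the same developer app.
bearer <- "YOUR-BEARER-TOKEN"

resp <- GET(
  "https://api.twitter.com/1.1/tweets/search/fullarchive/Tweets.json",
  add_headers(Authorization = paste("Bearer", bearer)),
  query = list(query = "Bitcoin", fromDate = "201501010000", maxResults = 100)
)
stop_for_status(resp)

# The premium search response returns the tweets in a "results" array.
tweets <- fromJSON(content(resp, "text", encoding = "UTF-8"))$results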
I have paid premium access to Twitter, so I can access historical data. I am trying to retrieve tweets with a combined search for the terms 'delivery' and 'accident', i.e. as if I went to a Twitter search and typed 'delivery accident' or 'delivery+accident' in the 'Latest' tab. My code for doing so is the following:
deliv_list <- searchTwitter("delivery+accident", n=500)
I am getting results that correspond to my search, but far fewer than I had expected. R returns the following warning:
Warning message:
In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
500 tweets were requested but the API can only return 248
I subsequently checked that I am able to return 500 records for a different single-term search, so it is not that I have reached a download limit. There are multiple tweets every day on this theme, so I'm not sure why so few results (n = 248) are returned. I'd appreciate any help.
It seems twitteR cannot handle full archive premium accounts. search-tweets-python seems to be the way to go. Tantalizingly, this may be able to work inside R via R's reticulate package. See: https://dev.to/twitterdev/running-search-tweets-python-in-r-45eo
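For what it's worth, here is a minimal sketch of that reticulate route, assuming search-tweets-python (v1, the premium/enterprise version) is installed in the Python that reticulate finds, and that your premium credentials sit in a YAML file under a key such as "search_tweets_api" (file name and key are placeholders):

library(reticulate)

# Import the Python package.
st <- import("searchtweets")

# Load credentials from the YAML file that search-tweets-python expects.
premium_args <- st$load_credentials("twitter_keys.yaml",
                                    yaml_key = "search_tweets_api",
                                    env_overwrite = FALSE)

# Build the rule and collect results; 500L matches the n = 500 in the question.
rule <- st$gen_rule_payload("delivery accident", results_per_call = 500L)

tweets <- st$collect_results(rule,
                             max_results = 500L,
                             result_stream_args = premium_args)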
I've tried two cryptocurrency API packages for R, namely:
coinmarketcapr package
riingo, included in tidyquant
I really want to get up-to-date historical cryptocurrency data; my goal is to do some time-series forecasting. With both packages I kept getting error messages. For coinmarketcapr that's understandable, because apparently my API subscription plan doesn't cover historical data, but for riingo the message looks like this...
> riingo_crypto_latest("btcusd", resample_frequency = "10min", base_currency = NULL)
Request failed [401]. Retrying in 1.6 seconds...
Request failed [401]. Retrying in 1.5 seconds...
Error: There was an error, but riingo isn't sure why. See Tiingo msg for details.
Tiingo msg) Invalid token.
Can somebody help me? Or maybe suggest another source for historical cryptocurrency data? Thank you in advance for any answer!
P.S. I've already inserted the API key, so it's not an authentication problem.
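One thing worth double-checking, since the Tiingo message is "Invalid token": riingo only picks up a key that has been registered through its own riingo_set_token() helper, so setting it anywhere else will not be seen by the package. A minimal sketch, with the ticker and frequency taken from the question (the historical call and its dates are illustrative):

library(riingo)

# Register the Tiingo API key where riingo looks for it.
riingo_set_token("YOUR-TIINGO-API-KEY")

# Latest prices, as in the question.
riingo_crypto_latest("btcusd", resample_frequency = "10min")

# Historical prices for time-series work (start/end dates are examples).
riingo_crypto_prices("btcusd",
                     start_date = "2018-01-01",
                     end_date   = Sys.Date(),
                     resample_frequency = "1day")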
If you are trying to get information about new listings on crypto exchanges, plus some other useful data, you may be interested in this API:
https://rapidapi.com/Diver44/api/new-cryptocurrencies-listings/
Example request in R:
library(httr)
url <- "https://new-cryptocurrencies-listings.p.rapidapi.com/new_listings"
response <- VERB(
  "GET", url,
  add_headers(
    "x-rapidapi-host" = "new-cryptocurrencies-listings.p.rapidapi.com",
    "x-rapidapi-key"  = "your-api-key"
  ),
  content_type("application/octet-stream")
)
content(response, "text")
It includes an endpoint with new listings from the biggest exchanges and a very useful endpoint with information about the exchanges where you can buy a specific coin and its prices on those exchanges. You can use this information for trading: when a currency starts to be listed on popular exchanges like Binance, KuCoin, Huobi, etc., the price increases by about 20-30%.
I am working in R with the twitteR package, which is meant to get information from Twitter through their API. After getting authenticated, I can download information about any user with the getUser function. However, I am not able to do so with usernames that consist only of numbers (for example, 1234). With the line getUser("1234") I get the following error message:
Error in twInterfaceObj$doAPICall(paste("users", "show", sep = "/"),
params = params, : Not Found (HTTP 404).
Is there any way to get user information when the username is made up entirely of numbers? The function seems to search by ID instead of screen name when it finds only digits.
Thanks in advance!
First of all, twitteR is deprecated in favour of rtweet, so you might want to look into that.
The specific user ID you've provided is a protected account, so unless your account follows it / has access to it, you will not be able to query it anyway.
Using rtweet and some random attempts to find a valid numerical user ID, I succeeded with this:
library(rtweet)
users <- c("989", "andypiper")
usr_df <- lookup_users(users)
usr_df
rtweet also has some useful coercion functions to force the use of screen name or ID (as_screenname and as_userid respectively).
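For example, a quick sketch of those coercion helpers applied to an all-digit name (whether "1234" actually resolves depends on whether such an account exists and is public):

library(rtweet)

# Force "1234" to be treated as a user ID vs. as a screen name.
lookup_users(as_userid("1234"))
lookup_users(as_screenname("1234"))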
I am trying to extract all the posts from this year from a Facebook page using the Rfacebook package.
However, for the page that I need I get this error:
"Error in callAPI(url = url, token = token) :
Unsupported get request. Object with ID 'XXXXX' does not exist,
cannot be loaded due to missing permissions, or does not support this operation.
Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api"
This is the command that I used:
datafb <- getPage('XXXXX', token, n = 1000, since = '2017/01/01', until = '2017/04/01',
feed = TRUE)
I am sure the page exists, because I can access it from my Facebook account.
Also, the token is valid because it works when I try for other pages.
I really can't see what's wrong. Does anyone have any idea?
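In case it helps to narrow things down, here is a small diagnostic sketch (not an Rfacebook call; 'XXXXX' stays the placeholder from the question, and the API version is a guess to adjust to yours) that queries the same object directly from the Graph API with httr, so you can see the raw error Facebook returns for your token:

library(httr)

# Assumption: `token` is the same token passed to getPage(); if it comes from
# fbOAuth(), the raw access token is under token$credentials$access_token.
access_token <- if (is.character(token)) token else token$credentials$access_token

resp <- GET(
  "https://graph.facebook.com/v2.8/XXXXX",
  query = list(access_token = access_token)
)
content(resp)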