Rfacebook package: "Unsupported get request" for the getPage method

I am trying to extract all the posts from this year from a Facebook page using the Rfacebook package.
However, for the page that I need, I get this error:
"Error in callAPI(url = url, token = token) :
Unsupported get request. Object with ID 'XXXXX' does not exist,
cannot be loaded due to missing permissions, or does not support this operation.
Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api"
This is the command that I used:
datafb <- getPage('XXXXX', token, n = 1000, since = '2017/01/01', until = '2017/04/01',
                  feed = TRUE)
I am sure the page exists, because I can access it from my Facebook account.
Also, the token is valid because it works when I try for other pages.
I really can't see what's wrong. Does anyone have any idea?
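One thing that sometimes helps (a sketch under the assumption that the page can be found via search; the search string below is a placeholder) is to look up the page's numeric ID with Rfacebook's searchPages() and pass that ID to getPage() instead of the page name:

library(Rfacebook)

# Look up the page's numeric ID via search (hypothetical search string),
# then query the page by that ID instead of its name.
found <- searchPages("page name here", token, n = 10)
datafb <- getPage(found$id[1], token, n = 1000,
                  since = '2017/01/01', until = '2017/04/01', feed = TRUE)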

Related

Trying to retrieve tweets from Twitter, but an unidentified error occurs

I am trying to retrieve tweets from Twitter on a full-archive basis. I finally managed to set things up on my developer page, but the code stumbles on an error that I cannot find anywhere on the internet. This is my code, without my tokens:
install.packages("RCurl")
library("RCurl")
install.packages("rtweet")
library("rtweet")
consumer_key <- ".."
consumer_secret <- ".."
access_token <- ".."
access_secret <- ".."
app <- "..."
token <- rtweet::create_token(app, consumer_key, consumer_secret, access_token, access_secret)
dataBTC1 <- search_fullarchive("Bitcoin", n = 1000, env_name = "Tweets", fromDate = "201501010000")
And this is the error I get:
Error in tweet(x$quoted_status) :
Unidentified value: edit_history, edit_controls, editable.
Please open an issue and notify the maintainer. Thanks!
I have literally no idea what it means or how to solve it, if that is even possible. Can anyone help me?
Thanks!
These errors occur because the R package you are using (rtweet) does not "know" about the three new fields that were added to the Tweet object when the editable Tweets feature was released. You will need to ask the rtweet maintainers (as the error message suggests) to add support for these fields in their library, or find an alternative way to call the API.
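If waiting on a CRAN update is not an option, one possible workaround (a suggestion under the assumption that support for the new fields has already landed in the development version, not something I have verified) is to install rtweet from its GitHub repository:

install.packages("remotes")
# Install the development version of rtweet from the rOpenSci repository
remotes::install_github("ropensci/rtweet")
packageVersion("rtweet")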

academictwitteR get_user_timeline Error 400 API v2

I am trying to get the full timeline of a Twitter user through the academictwitteR package and the Twitter academic track API (v2). However, I get an error 400. Unfortunately, the explanation at https://developer.twitter.com/en/support/twitter-api/error-troubleshooting
says "The request was invalid or cannot be otherwise served. An accompanying error message will explain further. Requests without authentication or with invalid query parameters are considered invalid and will yield this response.", but the error code does not contain any further explanation. I do not know what I am doing wrong. I tried other R packages as well, with the same error. I have access to all other functions of the API, but I cannot get the timeline to download more than 3200 tweets at a time (which my access should allow me to do). I want the full timeline of this user, not just the newest 3200 tweets.
The error looks like this:
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
something went wrong. Status code: 400
This is my request (I tried using the user ID; it does not change anything):
tmlne <- get_user_timeline("25390350",
                           start_tweets = "2009-03-19T00.00.00Z",
                           end_tweets = "2021-11-27T00.00.00Z",
                           bearer_token = get_bearer(),
                           data_path = "twitter/data/timelines/",
                           n = 100000,
                           bind_tweets = F,
                           file = "tmlnw")
I tried revoking and renewing the bearer token. I also tested the token by downloading all tweets from a user (which is not what I am after, but just to see if my access works); that works, but the timelines do not. I can get 3200 tweets with the get_timelines_v2 function of the TwitterV2 package, but I cannot circumvent the 3200-tweet limit there, and I do not know how to change the request to get tweets older than the most recent 3200. I also cannot get the academictwitteR package to run the timeline function (here I know how to make multiple requests).
What would help me is either information about the error 400 or a way to adjust the get_timelines_v2 function to include older tweets.
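One detail that might be worth checking (an assumption on my part, not a confirmed diagnosis): the v2 API expects ISO 8601 timestamps with colons in the time part (YYYY-MM-DDTHH:MM:SSZ), whereas the request above uses periods, which could trigger the "invalid query parameters" case of a 400. A version of the same call with standard timestamps would look like this:

library(academictwitteR)

# Same request as above, but with colons in the time part of the timestamps
tmlne <- get_user_timeline("25390350",
                           start_tweets = "2009-03-19T00:00:00Z",
                           end_tweets = "2021-11-27T00:00:00Z",
                           bearer_token = get_bearer(),
                           data_path = "twitter/data/timelines/",
                           n = 100000,
                           bind_tweets = FALSE,
                           file = "tmlnw")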

twitteR R API getUser with a username of only numbers

I am working in R with the twitteR package, which is meant to get information from Twitter through their API. After authenticating, I can download information about any user with the getUser function. However, I am not able to do so with usernames that consist only of numbers (for example, 1234). With the line getUser("1234") I get the following error message:
Error in twInterfaceObj$doAPICall(paste("users", "show", sep = "/"),
params = params, : Not Found (HTTP 404).
Is there any way to get user information when the username is made up entirely of numbers? The function tries to search by ID instead of screen name when it finds only numbers.
Thanks in advance!
First of all, twitteR is deprecated in favour of rtweet, so you might want to look into that.
The specific user ID you've provided is a protected account, so unless your account follows it / has access to it, you will not be able to query it anyway.
Using rtweet and some random attempts to find a valid numerical user ID, I succeeded with this:
library(rtweet)
users <- c("989", "andypiper")
usr_df <- lookup_users(users)
usr_df
rtweet also has some useful coercion functions to force the use of a screen name or an ID (as_screenname and as_userid respectively).
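For example, a small sketch of the same lookup with those coercion helpers (assuming the same authenticated rtweet session as above), which makes the intent explicit when the input is all digits:

library(rtweet)

# Treat "989" explicitly as a numeric user ID...
by_id <- lookup_users(as_userid("989"))
# ...or explicitly as a screen name
by_name <- lookup_users(as_screenname("989"))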

Scraping a login-protected website with a challenge form?

I'm trying to do some web scraping from steamspy.com, specifically the total playtime hours for a certain game. That info is behind the login wall for the site, so I've been trying to figure out how to get R past it for html mining.
I tried this method for passing login credentials via POST(), but it doesn't seem to work. I noticed that the login handler in that example used POST, whereas the source code for steamspy seems to use a challenge form, and I'm not sure how to proceed with R.
My attempt thus far looks like this:
library(httr)

handle <- handle("http://steamspy.com")
path <- "/login/"
login <- list(
  jschl_vc = "bc4e...",
  pass = "148..."
)
response <- POST(handle = handle, path = path, body = login)
I found the values for jschl_vc and pass by inspecting the source code after I logged in. The code above doesn't work and gives me:
Error in curl::curl_fetch_memory(url, handle = handle) : Failure
when receiving data from the peer
probably since I'm trying to use POST against a challenge form. Is there a way to proceed that I'm missing?
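Not a fix for the challenge form itself, but a sketch of how the request sequence could look with httr (the field names and values are the placeholders from the question, and whether the site accepts a plain form POST at all is an open assumption):

library(httr)

# Fetch the login page first so the handle picks up any session cookies,
# then submit the form fields as a URL-encoded body.
h <- handle("http://steamspy.com")
login_page <- GET(handle = h, path = "/login/")
response <- POST(handle = h, path = "/login/",
                 body = list(jschl_vc = "bc4e...", pass = "148..."),
                 encode = "form")
status_code(response)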

How to Mine Data from a Facebook Group using R?

I am using the Rfacebook package to do data mining.
For example, to get the Facebook page, we do:
fb_page <- getPage(page="facebook", token=fb_oauth)
In my case it is a private group, and the URL is something like this. So how do I get the page info for a group? I tried the following:
my_page <- getPage(page="group/222568978569", token=my_oauth)
but got this error:
Error in callAPI(url = url, token = token) : Unknown path
components: /posts
This isn't a reproducible example because the page you mention doesn't exist. But I suggest that you check that this group has authorized your app. Note that since the introduction of version 2.0 of the Graph API, only friends/groups who are using the application that you used to generate the token will be returned when you query the API.
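If the group has authorized the app, Rfacebook also provides a dedicated getGroup() function that takes the numeric group ID rather than a "group/..." path. A minimal sketch under that assumption, using the ID from the question:

library(Rfacebook)

# Query the group directly by its numeric ID (requires the group to have
# authorized the app behind my_oauth)
my_group <- getGroup(group_id = "222568978569", token = my_oauth, n = 100)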
