Extracting replies to a tweet using academictwitteR - r

I'm having trouble grabbing tweets using conversation_id. I have an academic license and get_bearer() works fine.
I'm running:
devtools::install_github("cjbarrie/academictwitteR", build_vignettes = TRUE, force = TRUE)
get_bearer()
myquery <- build_query(conversation_id = "1392547623073030144")
myquery
get_all_tweets(query = myquery,
               start_tweets = "2020-03-13",
               end_tweets = "2021-03-13",
               n = Inf,
               data_path = "tweet_data",
               bind_tweets = FALSE)
and I am getting the error
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
something went wrong. Status code: 400
Any help would be greatly appreciated.
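One thing worth checking (a guess, since the post doesn't confirm the cause): Twitter snowflake IDs encode their creation time, and decoding 1392547623073030144 places the conversation in May 2021, after the end_tweets date of 2021-03-13, so the date window never overlaps the conversation. A minimal sketch of decoding the timestamp in base R (the ID exceeds double precision, so the result is approximate to within a fraction of a second, which is fine for a date check):

```r
# Decode the creation time embedded in a Twitter snowflake ID.
# The high bits are milliseconds since the Twitter epoch (1288834974657 ms).
snowflake_time <- function(id) {
  ms <- floor(id / 2^22) + 1288834974657  # shift right 22 bits, add Twitter epoch
  as.POSIXct(ms / 1000, origin = "1970-01-01", tz = "UTC")
}

created <- snowflake_time(1392547623073030144)
created                                          # roughly 2021-05-12
created > as.POSIXct("2021-03-13", tz = "UTC")   # TRUE: after end_tweets
```

If that is the problem, shifting start_tweets/end_tweets to cover May 2021 would be the first thing to try.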

Related

I have 'Error in normalizePath' in visNetwork R

I'm trying to generate a graph with the visNetwork package in R, but I get the error
Error in normalizePath(target_dir, "/", TRUE) :
path[1]="query/htmlwidgets-1.2": No such file or directory
The code I've written is:
visNetwork(nodes = nodes, edges = edges, width = "100%", height = 900,
           main = list(style = 'font:caption;font-weight:bold;font-size:15px;text-align:center;')) %>%
  visPhysics(solver = "forceAtlas2Based",
             stabilization = TRUE,
             timestep = 0.25,
             forceAtlas2Based = list(gravitationalConstant = -50, avoidOverlap = 0.95)) %>%
  visOptions(highlightNearest = TRUE) %>%
  visEdges(smooth = TRUE) %>%
  visNodes(scaling = list(min = 10, max = 40)) %>%
  visSave(file = save_name, background = "white", selfcontained = TRUE)
The error appears whenever I set save_name to any directory other than './'.
What is more surprising is that I can't find the generated file anywhere.
Can anyone help me solve this so that I can set the path I want?
Thanks
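A common workaround for path errors in saveWidget-based saving (offered as a sketch, not verified against this exact setup) is to save the file in the working directory first, then move it to the target folder. The `save_name` target below is hypothetical, and the writeLines call stands in for the actual visSave step:

```r
# Hypothetical target path; widget saving can choke on non-'./' directories,
# so save in the working directory first and move the file afterwards.
save_name <- file.path("output", "network.html")

tmp <- file.path(getwd(), basename(save_name))
# network %>% visSave(file = tmp, background = "white")  # save locally first
writeLines("<html></html>", tmp)  # stand-in for the saved widget in this sketch

dir.create(dirname(save_name), showWarnings = FALSE, recursive = TRUE)
file.rename(tmp, save_name)  # move the file to the intended location
file.exists(save_name)
```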

Connecting to AdMob with R

Is there a smooth solution for querying data from the Google AdMob API into an R environment (e.g. something similar to the googleAnalyticsR package)?
In case anyone is ever looking for an answer: the solution I worked out uses library(httr) and library(jsonlite), which are general-purpose packages for managing APIs.
First set up your Google app for AdMob, generate OAuth credentials, store them as described below, and authorize the token.
Obtain the token:
options(admob.client_id = client.id)
options(admob.client_secret = key.secret)
# Get the OAuth token
token = oauth2.0_token(endpoint = oauth_endpoints("google"), # 'google' is standard
                       app = oauth_app(appname = "google",
                                       key = getOption("admob.client_id"),
                                       secret = getOption("admob.client_secret")),
                       scope = "https://www.googleapis.com/auth/admob.report",
                       use_oob = TRUE,
                       cache = TRUE)
You can generate your request body with the help of the Google documentation here:
https://developers.google.com/admob/api/v1/reference/rest/v1/accounts.networkReport/generate
I'm passing mine as a list:
`json body` = toJSON(list(reportSpec = list(
  dateRange = list(
    startDate = list(year = 2021, month = 10, day = 20),
    endDate   = list(year = 2021, month = 10, day = 22)),
  dimensions = list("DATE"),
  metrics = list("IMPRESSIONS")
)), auto_unbox = TRUE)
Query your request:
test = POST(url = 'https://admob.googleapis.com/v1/accounts/YOURPUBIDGOESINHERE/networkReport:generate',
            add_headers(authorization = paste("Bearer", token$credentials$access_token)),
            add_headers(`content-type` = "application/json"),
            body = `json body`)
Finalize with some data cleansing and you are done.
jsonText = content(test, as = 'text')
df = fromJSON(jsonText, flatten = TRUE)
df = na.omit(df[c('row.metricValues.IMPRESSIONS.integerValue',
                  'row.dimensionValues.DATE.value')]) # keep only the needed columns
row.names(df) = NULL # reindex rows
df

Pagination loop is stuck at x > 99

I am scraping some data from an API, and my code works just fine as long as I extract pages 0 to 98. Whenever my loop reaches page 99, I get the error Error: Internal Server Error (HTTP 500).
I tried to find an answer, but I am only proficient in R and C# and cannot follow the Python solutions I found.
keywords = c('ABC OR DEF')
parameters <- list(
  'q' = keywords,
  num_days = 1,
  language = 'en',
  num_results = 100,
  page = 0,
  'api_key' = '123456'
)
response <- httr::GET(get_url, query = parameters)
# latest_page_number <- get_last_page(parsed)
httr::stop_for_status(response)
content <- httr::content(response, type = 'text', encoding = 'utf-8')
parsed <- jsonlite::fromJSON(content, simplifyVector = FALSE, simplifyDataFrame = TRUE)
num_pages = round(parsed[["total_results"]]/100)
print(num_pages)
result = parsed$results
for (x in 1:num_pages) {
  print(x)
  parameters <- list(
    'q' = keywords,
    page = x,
    num_days = 7,
    language = 'en',
    num_results = 100,
    'api_key' = '123456'
  )
  response <- httr::GET(get_url, query = parameters)
  httr::stop_for_status(response)
  content <- httr::content(response, type = 'text', encoding = 'utf-8')
  parsed <- jsonlite::fromJSON(content, simplifyVector = FALSE, simplifyDataFrame = TRUE)
  Sys.sleep(0.2)
  result = rbind(result, parsed$results[, colnames(result)])
}
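Not an answer from the post, but a hedged guess: many search APIs cap accessible results at 100 pages (pages 0-99 here, i.e. 10,000 results), and requesting beyond the cap returns HTTP 500 regardless of total_results. Wrapping the request in tryCatch lets the loop stop cleanly at the cap instead of aborting; fetch_page below is a stand-in for the httr::GET call:

```r
# Simulated API that fails past page 98, mimicking the HTTP 500 at page 99.
fetch_page <- function(x) {
  if (x >= 99) stop("Internal Server Error (HTTP 500)")
  data.frame(page = x)  # stand-in for parsed$results
}

result <- list()
for (x in 1:120) {
  page <- tryCatch(fetch_page(x), error = function(e) NULL)
  if (is.null(page)) break  # stop at the API's page cap instead of erroring out
  result[[length(result) + 1]] <- page
}
result <- do.call(rbind, result)
nrow(result)  # 98 pages retrieved
```

The same pattern drops straight into the original loop: replace fetch_page with the GET/fromJSON pair and break when the request errors.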

Unable to start Selenium session in R

I've been trying to set up a Selenium session with Rwebdriver, but so far with no success. I believe I have initiated a Selenium server, but whenever I run
`> Rwebdriver::start_session(root = "http://localhost:4444/wd/hub/")`
I get the following error:
Error in serverDetails$value[[1]] : subscript out of bounds
After I set options(error = recover) to run a debug, I get the following result:
function(root = NULL, browser = "firefox", javascriptEnabled = TRUE,
         takesScreenshot = TRUE, handlesAlerts = TRUE, databaseEnabled = TRUE,
         cssSelectorsEnabled = TRUE)
{
  server <- list(desiredCapabilities = list(browserName = browser,
      javascriptEnabled = javascriptEnabled, takesScreenshot = takesScreenshot,
      handlesAlerts = handlesAlerts, databaseEnabled = databaseEnabled,
      cssSelectorsEnabled = cssSelectorsEnabled))
  newSession <- getURL(paste0(root, "session"), customrequest = "POST",
      httpheader = c(`Content-Type` = "application/json;charset=UTF-8"),
      postfields = toJSON(server))
  serverDetails <- fromJSON(rawToChar(getURLContent(paste0(root, "sessions"),
      binary = TRUE)))
  sessionList <- list(time = Sys.time(),
      sessionURL = paste0(root, "session/", serverDetails$value[[1]]$id))
  class(sessionList) <- "RSelenium"
  print("Started new session. sessionList created.")
  seleniumSession <<- sessionList
}
The issue is at paste0(root, "session/", serverDetails$value[[1]]$id), and sure enough, whenever I print serverDetails$value, all I get is list(). Does anyone know how to fix this? Thank you in advance for your attention and patience.
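The subscript-out-of-bounds error follows directly from that empty value: the server reported no active sessions, so serverDetails$value[[1]] indexes past the end of an empty list. A small sketch of the failure and a guard (serverDetails here is a hand-built stand-in for the server's reply):

```r
# Stand-in for what the Selenium server returned: no active sessions.
serverDetails <- list(value = list())

# This reproduces the error: indexing element 1 of an empty list fails.
ok <- tryCatch({ serverDetails$value[[1]]$id; TRUE },
               error = function(e) FALSE)
ok  # FALSE: "subscript out of bounds"

# Guard before indexing; an empty list means session creation failed upstream,
# so the reply to the POST to /session (newSession) is what to inspect next.
if (length(serverDetails$value) == 0) {
  message("No active sessions: check the response to the POST /session call")
}
```

In other words, the crash in start_session is a symptom; the session was most likely never created, and the server's response to the POST request should say why.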

Twitter GET not working with since_id

I'm working in R, but that shouldn't really matter.
I want to gather all tweets after https://twitter.com/ChrisChristie/status/663046613779156996,
i.e. tweet ID 663046613779156996.
base = "https://api.twitter.com/1.1/statuses/user_timeline.json?"
## count
count = "count=200"
## contributor_details
contributor_details = "contributor_details=true"
## include_rts
include_rts = "include_rts=true"
## exclude_replies
exclude_replies = "exclude_replies=false"
queryName = "chrischristie"
query = paste("q=", queryName, sep = "")
secondary_url = paste(query, count, contributor_details, include_rts, exclude_replies, sep = "&")
final_url = paste(base, secondary_url, sep = "")
timeline = GET(final_url, sig)
This (the above) works. There is no since_id. The URL comes out to be
"https://api.twitter.com/1.1/statuses/user_timeline.json?q=chrischristie&count=200&contributor_details=true&include_rts=true&exclude_replies=false"
The version below does not work; the only change is adding the following:
cur_since_id_url = "since_id=663046613779156996"
secondary_url = paste(query, count, contributor_details, include_rts,
                      exclude_replies, cur_since_id_url, sep = "&")
final_url = paste(base, secondary_url, sep = "")
timeline = GET(final_url, sig)
The URL for the above is
"https://api.twitter.com/1.1/statuses/user_timeline.json?q=chrischristie&count=200&contributor_details=true&include_rts=true&exclude_replies=false&since_id=663046613779156992"
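Note the trailing digits: the URL ends in ...992 although ...996 was requested. This is very likely R's double precision at work: tweet IDs exceed 2^53, so a numeric since_id silently rounds to the nearest representable double before it ever reaches the URL. Passing the ID as a character string avoids the rounding:

```r
# Tweet IDs are larger than 2^53, so R doubles cannot represent them exactly:
sprintf("%.0f", 663046613779156996)        # "663046613779156992" -- rounded
663046613779156996 == 663046613779156992   # TRUE: the same double

# Keep the ID as a string so the full value reaches the query:
since_id <- "663046613779156996"
paste0("since_id=", since_id)
```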
This seems to work:
require(httr)
myapp <- oauth_app("twitter",
                   key = "......",
                   secret = ".......")
twitter_token <- oauth1.0_token(oauth_endpoints("twitter"), myapp)
req <- GET("https://api.twitter.com/1.1/statuses/user_timeline.json",
           query = list(screen_name = "chrischristie",
                        count = 10,
                        contributor_details = TRUE,
                        include_rts = TRUE,
                        exclude_replies = FALSE,
                        since_id = 663046613779156992),
           config(token = twitter_token))
content(req)
Have a look at GET statuses/user_timeline
