I am trying to call the stats of a list of players from the Call of Duty API. The API first requires logging in on the website https://profile.callofduty.com/cod/login. Once logged in, the user can see a player's stats through the API. For example, the Warzone stats of the streamer savyultras90 can be seen through the following link: https://my.callofduty.com/api/papi-client/stats/cod/v1/title/mw/platform/psn/gamer/savyultras90/profile/type/wz.
If I log in on the website and then open a player's stats, I can see the related JSON in the browser. However, this doesn't seem straightforward in R.
I tried to log in using the GET function from the httr package as follows:
respo <- GET('https://profile.callofduty.com/cod/login', authenticate('USER', 'PWD'))
But when I try to access the API and download the JSON using the fromJSON function from the jsonlite package as follows
data <- fromJSON('https://my.callofduty.com/api/papi-client/stats/cod/v1/title/mw/platform/psn/gamer/savyultras90/profile/type/wz')
I get the error message "Not permitted: not authenticated".
How can I authenticate on one website and stay logged in to call the API that relies on that authentication?
Since I recently had to develop a PHP API wrapper for Warzone, I might be able to point you in the right direction on how to handle this. But first, a few remarks:
You need to authenticate each user individually with the appropriate platform if you want to request that player's data
There is a throttle limit on the number of API requests
The Call of Duty API is under strict usage guidelines and should only be used by registered partners. Using the API could result in claims and eventually lawsuits: link
There is no public documentation of the API and the API has changed in the past, breaking several 3rd party tools.
Nevertheless, the process involves several steps as described below:
Register the device making the call
https://profile.callofduty.com/cod/mapp/registerDevice
with a json body in the form of {"deviceId":"INSERT_ID_HERE"}
This will return a response containing the authHeader, which we will use as the token in the next calls
Login with Activision credentials
https://profile.callofduty.com/cod/mapp/login
Set the following headers:
Authorization: "INSERT_AUTHHEADER_HERE"
x_cod_device_id: "INSERT_PREVIOUSLY_USED_DEVICEID_HERE"
This in turn will return a response from which we will save the following data:
rtkn, ACT_SSO_COOKIE and atkn.
Make the wanted API call for data
We have all the data now required to make the API call.
For each request we will submit 3 headers:
Authorization: "INSERT_AUTHHEADER_HERE"
x_cod_device_id: "INSERT_PREVIOUSLY_USED_DEVICEID_HERE"
Cookie: ACT_SSO_LOCALE=en_GB;country=GB;API_CSRF_TOKEN=**GENERATE_CSRF_TOKEN**;rtkn=**RTKN_HERE**;ACT_SSO_COOKIE=**ACT_SSO_COOKIE_HERE**;atkn=**ATKN_HERE**
For more reference, you can always look through a Python library or NodeJS library which successfully implemented the API; see also the R sketch below.
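To tie those steps together, here is a rough sketch of the same flow in R with httr. Treat it as a starting point rather than working code: the response field names (authHeader, rtkn, atkn, ACT_SSO_COOKIE), the login body fields and the omission of the CSRF token are assumptions based on the steps above, and the API changes without notice.
library(httr)
library(jsonlite)
# Any random string can serve as the device id
device_id <- paste(sample(c(letters, 0:9), 20, replace = TRUE), collapse = "")
# 1. Register the device
reg <- POST("https://profile.callofduty.com/cod/mapp/registerDevice",
            body = list(deviceId = device_id), encode = "json")
auth_header <- content(reg)$data$authHeader   # exact nesting in the response is an assumption
# 2. Log in with Activision credentials (body field names are assumptions)
login <- POST("https://profile.callofduty.com/cod/mapp/login",
              add_headers(Authorization = auth_header,
                          x_cod_device_id = device_id),
              body = list(email = "USER", password = "PWD"), encode = "json")
tokens <- content(login)   # should contain rtkn, atkn and ACT_SSO_COOKIE
# 3. Call the stats endpoint, passing the saved tokens in a Cookie header
cookie <- paste0("ACT_SSO_LOCALE=en_GB;country=GB;",
                 "rtkn=", tokens$rtkn, ";",
                 "ACT_SSO_COOKIE=", tokens$ACT_SSO_COOKIE, ";",
                 "atkn=", tokens$atkn)
stats <- GET("https://my.callofduty.com/api/papi-client/stats/cod/v1/title/mw/platform/psn/gamer/savyultras90/profile/type/wz",
             add_headers(Authorization = auth_header,
                         x_cod_device_id = device_id,
                         Cookie = cookie))
fromJSON(content(stats, as = "text"))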
I struggled with this yesterday but finally made some progress. The issue is that you have to obtain an authentication token. The steps can be followed here: https://documenter.getpostman.com/view/7896975/SW7aXSo5#a37a2e5b-84bb-441d-b978-0fd8d42ffd29, although they are not shown in R.
My code works at first, but stops working if you authenticate again (I'm still trying to figure out why). Basically, I translated the steps from that link and extracted the cookies from the content of the GET response:
# Get token ---------------------------------------------------------------
library(httr)
resp <- GET('https://profile.callofduty.com/cod/login')
# Cookies are picked out by position; check resp$cookies$name to confirm the order
cookies = c(
'XSRF-TOKEN' = resp$cookies$value[1]
,'new_SiteId' = resp$cookies$value[2]
,'comid' = resp$cookies$value[3]
,'bm_sz' = resp$cookies$value[4]
,'_abck' = resp$cookies$value[5]
# ,'ACT_SSO_COOKIE' = resp$cookies$value[6]
# ,'ACT_SSO_COOKIE_EXPIRY' = resp$cookies$value[7]
# ,'atkn' = resp$cookies$value[8]
# ,'ACT_SSO_REMEMBER_ME' = resp$cookies$value[9]
# ,'ACT_SSO_EVENT' = resp$cookies$value[10]
# ,'pgacct' = resp$cookies$value[11]
# ,'CRM_BLOB' = resp$cookies$value[12]
# ,'tfa_enrollment_seen' = resp$cookies$value[13]
)
headers = c(
)
params = list(
`new_SiteId` = 'cod',
`username` = 'USER',
`password` = 'PWD',
`remember_me` = 'true',
`_csrf` = resp$cookies$value[1]
)
# Authenticate ------------------------------------------------------------
resp_post <- POST('https://profile.callofduty.com/do_login?new_SiteId=cod',
httr::add_headers(.headers=headers),
query = params,
httr::set_cookies(.cookies = cookies))
cookies = c(
'XSRF-TOKEN' = resp_post$cookies$value[1]
,'new_SiteId' = resp_post$cookies$value[2]
,'comid' = resp_post$cookies$value[3]
,'bm_sz' = resp_post$cookies$value[4]
,'_abck' = resp_post$cookies$value[5]
,'ACT_SSO_COOKIE' = resp_post$cookies$value[6]
,'ACT_SSO_COOKIE_EXPIRY' = resp_post$cookies$value[7]
,'atkn' = resp_post$cookies$value[8]
,'ACT_SSO_REMEMBER_ME' = resp_post$cookies$value[9]
,'ACT_SSO_EVENT' = resp_post$cookies$value[10]
,'pgacct' = resp_post$cookies$value[11]
,'CRM_BLOB' = resp_post$cookies$value[12]
,'tfa_enrollment_seen' = resp_post$cookies$value[13]
)
headers = c(
)
params = list(
`new_SiteId` = 'cod',
`username` = 'USER',
`password` = 'PWD',
`remember_me` = 'true',
`_csrf` = resp_post$cookies$value[1]
)
# Get data:
resp_psn <- httr::GET(url = 'https://my.callofduty.com/api/papi-client/stats/cod/v1/title/mw/platform/psn/gamer/savyultras90/profile/type/wz',
httr::add_headers(.headers=headers),
query = params,
httr::set_cookies(.cookies = cookies))
resp_psn_json <- content(resp_psn)
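One tweak that might make this less fragile, since the order of the cookies returned by the server isn't guaranteed: build the cookie vector by name instead of by position. A minimal sketch using the same resp_post object:
# Use the cookie names the server actually returned instead of fixed positions
cookies <- setNames(resp_post$cookies$value, resp_post$cookies$name)
resp_psn <- httr::GET('https://my.callofduty.com/api/papi-client/stats/cod/v1/title/mw/platform/psn/gamer/savyultras90/profile/type/wz',
                      httr::set_cookies(.cookies = cookies))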
Let me know if you've already managed to resolve this!
Related
I'm currently making a Roblox whitelist system and it's almost finished, but I need one more thing. I scripted it but it doesn't work (code below), and I couldn't find anything to fix what I have (script and screenshot of the error below). Thanks.
local key = 1
local HttpService = game:GetService("HttpService")
local r = HttpService:RequestAsync({
    Url = "https://MyWebsiteUrl.com/check.php?key="..key,
    Method = "GET"
})
local i = HttpService:JSONDecode(r.Body)
for n, v in pairs(i) do
    print(tostring(n)..", "..tostring(v))
end
I assume the website you are using to validate the key returns the response as raw text. If so, then:
local key = 1
local HttpService = game:GetService("HttpService")
local r = HttpService:GetAsync("https://MyWebsiteUrl.com/check.php?key="..key)
local response = HttpService:JSONDecode(r)
print(response)
I think this is because you tried to concatenate a string (the URL) with a number (the key variable); try making the key a string.
I am performing a bibliometric analysis, and have chosen to use rscopus to automate my document searches. I performed a test search, and it worked; the documents returned by scopus_search() exactly matched a manual check that I performed. Here's my issue: rscopus returned only information on the first author (and their affiliation) of each article, but I need information on all authors/affiliations for each article pulled for my particular research questions. I've scoured the rscopus documentation, as well as Elsevier's Developer notes for API use, but can't figure this out. Any ideas on what I'm missing?
query1 <- 'TITLE-ABS-KEY ( ( recreation ) AND ( management ) AND ( challenge ) )'
run1 <- scopus_search(query = query1, api_key = apikey, count = 20,
view = c('STANDARD', 'COMPLETE'), start = 0, verbose = TRUE,
max_count = 20000, http = 'https://api.elsevier.com/content/search/scopus',
headers = NULL, wait_time = 0)
I wanted to post an update since I figured out what was going wrong. I was using the university VPN to access the Scopus API, but the IP address associated with that VPN was not within the range of addresses included in my institution's Scopus license. So, I did not have permission to get "COMPLETE" results. I reached out to Elsevier and very quickly got an institution key that I could add to the search. My working search looks as follows...
query1 <- 'TITLE-ABS-KEY ( ( recreation ) AND ( management ) AND ( challenge ) )'
run1 <- scopus_search(query = query1, api_key = apikey, count = 20,
view = c('COMPLETE'), start = 0, verbose = TRUE,
max_count = 20000, http='https://api.elsevier.com/content/search/scopus',
headers = inst_token_header(insttoken), wait_time = 0)
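If it helps with getting the full author lists out of the COMPLETE results: I flattened the returned entries with rscopus's gen_entries_to_df() helper. A minimal sketch, assuming run1 from the search above; the exact names of the tables it returns may differ between rscopus versions, so inspect the result first.
library(rscopus)
entries <- gen_entries_to_df(run1$entries)
names(entries)               # should include the article table plus author/affiliation tables
str(entries, max.level = 1)  # inspect the structure before joining authors back onto articles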
Just wanted to reiterate Brenna's comment - I had the same issue using the VPN to access the API (which can be resolved by being on campus). Elsevier were very helpful and provided an institutional token very quickly - problem solved.
Otherwise, another workaround I found was to use CrossRef data via library(rcrossref).
I used the doi column from scopusdata, the results of my original Scopus search:
library(purrr)
library(dplyr)
library(rcrossref)
crossrefdata <- scopusdata$doi %>%
  map(function(doi){
    cr_works(dois = doi)      # returns CrossRef metadata for each DOI
  }) %>%
  map(pluck, "data") %>%      # cr_works returns a list; keep only the 'data' element
  bind_rows()
You can then manipulate the crossref metadata however you need with full author list.
Let's say there is R code for a REST API based on the "plumber" package.
Here is a function from it.
#' Date of sale
#' @get /date_of_sale
#' @param auth_key Auth key
#' @param user_id User ID
#' @example https://company.com/date_of_sale?auth_key=12345&user_id=6789
function(auth_key, user_id) {
# ...
}
Let's say there is another R script that makes an API request to this server, like:
api_string <- "https://company.com/date_of_sale?auth_key=12345&user_id=6789"
date_of_sale <- jsonlite::fromJSON(api_string)
Is it possible to get a description of the parameters "auth_key" and "user_id" in the second script, i.e. a full explanation of what each parameter means? For example, getting the string "Auth key" for "auth_key"? Or, more generally, how can the metadata of the "date_of_sale" function be accessed at all?
Thanks for any ideas!
Assuming the file plumber.R has the content you provided and is in the working directory.
In R
pr_read <- plumber::pr("plumber.R")
spec <- pr_read$getApiSpec()
spec$paths$`/date_of_sale`$get$parameters
spec is an R list with the same structure as an OpenAPI document.
If you do not have access to the plumber file, but the API is running somewhere you can reach:
spec <- jsonlite::fromJSON("{api_server}/openapi.json", simplifyDataFrame = FALSE)
spec$paths$`/date_of_sale`$get$parameters
Again this follows the OpenAPI documentation standards.
Since this is an API GET request, you cannot access the description of the parameters unless you explicitly include it in the API response.
I've learned a bit of R purely for this question; my guess is that this is roughly how you've prepared your JSON API response.
You can do something like this when building the JSON:
library(rjson)
auth_key <- "some_key"
user_id <- "340"
x <- list(
  auth_key = list(
    type = typeof(auth_key),
    length = length(auth_key),
    attributes = attributes(auth_key),
    string = auth_key
  ),
  user_id = list(
    type = typeof(user_id),
    length = length(user_id),
    attributes = attributes(user_id),
    string = user_id
  ),
  data = "your_data"
)
# x
json <- toJSON(x, indent = 0, method = "C")
fromJSON(json)
You might want to look at these:
https://stat.ethz.ch/R-manual/R-devel/library/base/html/typeof.html
https://ramnathv.github.io/pycon2014-r/learn/structures.html
https://rdrr.io/cran/rjson/man/toJSON.html
https://www.rdocumentation.org/packages/base/versions/3.6.2/topics/attributes
I am trying to pull a list of Coinbase accounts into R using their API. I receive an authentication error saying "invalid signature". Something is clearly wrong when I create my sha256 signature but I can't figure out what the issue is. I have not had this same issue when accessing the GDAX API using a sha256 signature.
API Key Documentation
An API key is recommended if you only need to access your own account. All API key requests must be signed and contain the following headers:
CB-ACCESS-KEY The api key as a string
CB-ACCESS-SIGN The user generated message signature (see below)
CB-ACCESS-TIMESTAMP A timestamp for your request
All request bodies should have content type application/json and be valid JSON.
The CB-ACCESS-SIGN header is generated by creating a sha256 HMAC using the secret key on the prehash string timestamp + method + requestPath + body (where + represents string concatenation). The timestamp value is the same as the CB-ACCESS-TIMESTAMP header.
My Code
library(httr)
library(RCurl)
library(digest)
coinbase_url <- "https://api.coinbase.com"
coinbase_reqPath <- "/v2/accounts/"
coinbase_fullPath <- paste(coinbase_url, coinbase_reqPath,sep = "")
coinbase_key <- "XXXXMYKEYXXX"
coinbase_secret <- "XXXXXMYSECRETKEYXXXX"
cb_timestamp <- format(as.numeric(Sys.time()), digits=10)
coinbase_message <- paste0(cb_timestamp,"GET", coinbase_reqPath)
coinbase_sig <- hmac(key = coinbase_secret, object = coinbase_message, algo = "sha256", raw = F)
coinbase_acct <- content(GET(coinbase_fullPath,
add_headers(
"CB-ACCESS-KEY" = coinbase_key,
"CB-ACCESS-SIGN" = coinbase_sig,
"CB-ACCESS-TIMESTAMP" = cb_timestamp,
"Content-Type"="application/json")))
Sorry for not updating this earlier. The answer is simple: I just needed to remove the final forward slash in "/v2/accounts/" when specifying my request path.
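In other words, drop the trailing slash everywhere the request path is used, so the path in the prehash string matches what the API expects. Following the code above, the changed lines would be:
coinbase_reqPath <- "/v2/accounts"   # no trailing slash
coinbase_fullPath <- paste0(coinbase_url, coinbase_reqPath)
coinbase_message <- paste0(cb_timestamp, "GET", coinbase_reqPath)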
I am presently using the streamR package in R to stream tweets from Twitter's filter stream. I have a handshaken ROAuth object that I use for this. My code looks like:
# load the Twitter auth object
load("twitter_oAuth3.RData")
load("keywords3.RData")
streamTweet = function(){
require(streamR)
require(ROAuth)
stack = filterStream(file.name="",track=keywords,timeout=500,oauth=twitter_oAuth)
return(stack)
}
I wanted to create a real time application, which involves dumping these tweets into an activeMQ topic. My code for that is:
require(Rjms)
# Set logger properties
url = "tcp://localhost:61616"
type = "T"
name = "TwitterStream"
# initialize logger
topicWriter = initialize.logger(url,type,name)
topicWrite = function(input){
  # print("writing to topic")
  to.logger(topicWriter, input, asString=TRUE, propertyName='StreamerID', propertyValue='1')
  return()
}
logToTopic = function(streamedStack){
  # print("inside stack-writer")
  stacklength = length(streamedStack)
  print(c("Length: ", stacklength))
  for(i in 1:stacklength){
    print(c("calling for: ", i))
    topicWrite(streamedStack[i])
  }
  return()
}
Now my problem is that of the timeout that filterStream() needs. I looked under the hood, and found this call that the function makes:
url <- "https://stream.twitter.com/1.1/statuses/filter.json"
output <- tryCatch(oauth$OAuthRequest(URL = url, params = params,
method = "POST", customHeader = NULL,
writefunction = topicWrite, cainfo = system.file("CurlSSL",
"cacert.pem", package = "RCurl")), error = function(e) e)
I tried removing the timeout component but it doesn't seem to work. Is there a way I can maintain a stream forever (until I kill it) which dumps each tweet as it comes into a topic?
P.S. I know of a Java implementation that makes a call to the twitter4j API. However, I have no idea how to do it in R.
The documentation for the streamR package mentions that the default for the timeout option in filterStream() is 0, which keeps the connection open permanently.
I quote:
"numeric, maximum length of time (in seconds) of connection to stream. The
connection will be automatically closed after this period. For example, setting
timeout to 10800 will keep the connection open for 3 hours. The default is 0,
which will keep the connection open permanently."
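So, using the objects you already have (twitter_oAuth and keywords), something like this should keep the stream open until you interrupt it:
library(streamR)
library(ROAuth)
# timeout = 0 (the default) keeps the connection open permanently
filterStream(file.name = "", track = keywords, timeout = 0, oauth = twitter_oAuth)
Note that with file.name = "" the tweets are only returned once the connection closes, so for an open-ended stream it may be more practical to write to a file (e.g. file.name = "tweets.json") and have your topic writer read from it, or to keep your lower-level OAuthRequest approach with writefunction = topicWrite.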
Hope this helps.