I need to pull data from this site (image below) into R, but I don't know how to pass the Key and Client parameters via code. Can anyone help me?
Here is my code:
dados_get <- GET('http://api.climatempo.com.br/api/v1/forecast/72hours/temperature?idlocale=6873')
Try the following code:
require(httr)
require(jsonlite)
# list of parameters expected by the API
a <- list("key" = "01234", "type" = "json", "client" = "abc", "localeid" = "6873")
# send the parameters as a JSON-encoded body
dados_post <- POST(url = 'http://api.climatempo.com.br/api/v1/monitoring/weather',
                   body = toJSON(a))
Let me know if it works.
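If the endpoint also accepts the key and client as query-string parameters (as in your original GET call), httr can build the query for you from a named list. A minimal sketch, using the parameter names from the list above and placeholder values where your real credentials go:
library(httr)
library(jsonlite)
# placeholder credentials -- replace with the values issued to you
params <- list(key = "01234", client = "abc", idlocale = "6873")
# httr URL-encodes and appends the query string built from the named list
dados_get <- GET("http://api.climatempo.com.br/api/v1/forecast/72hours/temperature",
                 query = params)
# parse the JSON response into R objects
dados <- fromJSON(content(dados_get, as = "text", encoding = "UTF-8"))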
I am really struggling to understand how this newly released API works. Can someone please help me turn the response into a useful data frame in R? My res looks like the below (edited):
library(httr)
library(jsonlite)
library(dplyr)
#GET Function
res = GET("https://comtradeapi.un.org/data/v1/get/C/A/HS?reporterCode=826&period=2020&partnerCode=000&partner2Code=000&cmdCode=TOTAL&flowCode=M HTTP/1.1&subscription-key=6509aa2a08d54ca7b47a2fece2ab5bee")
df= fromJSON(rawToChar(res$content)) #this doesn't work
By pasting your URL into a browser we get:
{"elapsedTime":"0.02 secs","count":0,"data":[],"error":""}
So the result itself is empty: count is 0 and data is an empty array, even though no error message is returned. Also, I'd strongly advise against publishing your secret API key, as it allows others to access the data you're subscribing to!
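As a starting point, here is a hedged sketch that builds the same request with httr's query argument (so every parameter, including the subscription key, is passed separately) and then pulls the data element into a data frame. The parameter names are taken from your URL; note the stray " HTTP/1.1" fragment after flowCode=M has been dropped, since it most likely invalidates that parameter:
library(httr)
library(jsonlite)
res <- GET("https://comtradeapi.un.org/data/v1/get/C/A/HS",
           query = list(reporterCode = "826",
                        period = "2020",
                        partnerCode = "000",
                        partner2Code = "000",
                        cmdCode = "TOTAL",
                        flowCode = "M",
                        `subscription-key` = "YOUR_KEY"))  # keep your real key private
parsed <- fromJSON(rawToChar(res$content))
df <- parsed$data  # the records, if any, live in the "data" element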
I'm trying to access the sheet using OAuth 2.0 in R.
I have generated the access token.
I am not clear on how to retrieve the data from the above-mentioned sheet.
I have tried to access the data using this piece of code:
sampleData <- GET("https://sheets.googleapis.com/v4/spreadsheets/1qS66SgO6Uc958WTY1_2I7zN58MxtleLN5PbadhjIBqg",
add_headers("Authorization" = new_access_token))
data_list <- content(sampleData, "parsed")
Can anyone suggest a way to proceed further?
Thanks in advance!
Add the sheet name to the end of the URL, e.g.
/values/Sheet1
This will get you the entire sheet. To get a specific range, include A1 notation:
GET https://sheets.googleapis.com/v4/spreadsheets/spreadsheetId/values/Sheet1!A1:D5
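In R that could look like the sketch below; the spreadsheet ID is the one from your question, Sheet1 is an assumed sheet name, and new_access_token is assumed to hold a valid access token (sent with the usual Bearer prefix):
library(httr)
sheet_url <- paste0("https://sheets.googleapis.com/v4/spreadsheets/",
                    "1qS66SgO6Uc958WTY1_2I7zN58MxtleLN5PbadhjIBqg",
                    "/values/Sheet1")   # or a range such as /values/Sheet1!A1:D5
sampleData <- GET(sheet_url,
                  add_headers(Authorization = paste("Bearer", new_access_token)))
data_list <- content(sampleData, "parsed")  # for a values request, the cell rows are under $values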
There are 2 parts to my question, as I explored 2 methods in this exercise but succeeded in neither. Greatly appreciated if someone can help me out.
[PART 1:]
I am attempting to scrape data from a webpage on the Singapore Stock Exchange, https://www2.sgx.com/derivatives/negotiated-large-trade, containing data stored in a table. I have some basic knowledge of scraping data using rvest. However, using the Inspector in Chrome, the HTML hierarchy is much more complex than I expected. I can see that the data I want is hidden under < div class= "table-container" >, and here's what I've tried:
library(rvest)
library(httr)
library(XML)
SGXurl <- "https://www2.sgx.com/derivatives/negotiated-large-trade"
SGXdata <- read_html(SGXurl, stringsASfactors = FALSE)
html_nodes(SGXdata,".table-container")
However, nothing is picked up by the code, and I doubt I'm using it correctly.
[PART 2:]
As I realized that there's a small "download" button on the page which downloads exactly the data file I want in .csv format, I was thinking of writing some code to mimic the download button, and I found this question: Using R to "click" a download file button on a webpage. But I'm unable to get it to work even with some modifications to that code.
There are a few filters on the webpage; mostly I'm interested in downloading data for a particular business day while leaving the other filters blank, so I've tried writing the following function:
library(httr)
library(rvest)
library(purrr)
library(dplyr)
crawlSGXdata = function(date){
  POST("https://www2.sgx.com/derivatives/negotiated-large-trade",
       body = NULL,
       encode = "form",
       write_disk("SGXdata.csv")) -> resfile
  res = read.csv(resfile)
  return(res)
}
I intended to put the function input "date" into the "body" argument, but I was unable to figure out how to do that, so I started with "body = NULL" on the assumption that it applies no filtering. However, the result is still unsatisfactory: the downloaded file is basically empty apart from the following error:
Request Rejected
The requested URL was rejected. Please consult with your administrator.
Your support ID is: 16783946804070790400
The content is loaded dynamically from an API call that returns JSON. You can find this call in the network tab of your browser's dev tools.
The following returns that content. I read the total number of pages of results and loop, combining the data frame returned from each call into one final data frame containing all results.
library(jsonlite)
url <- 'https://api.sgx.com/negotiatedlargetrades/v1.0?order=asc&orderby=contractcode&category=futures&businessdatestart=20190708&businessdateend=20190708&pagestart=0&pageSize=250'
r <- jsonlite::fromJSON(url)
num_pages <- r$meta$totalPages
df <- r$data
url2 <- 'https://api.sgx.com/negotiatedlargetrades/v1.0?order=asc&orderby=contractcode&category=futures&businessdatestart=20190708&businessdateend=20190708&pagestart=placeholder&pageSize=250'
if(num_pages > 1){
  # page 0 is already in df, so loop over the remaining pages (page numbering starts at 0)
  for(i in seq(1, num_pages - 1)){
    newUrl <- gsub("placeholder", i, url2)
    newdf <- jsonlite::fromJSON(newUrl)$data
    df <- rbind(df, newdf)
  }
}
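To get closer to the function you originally had in mind, the same approach can be parameterised on the business date. A minimal sketch, assuming businessdatestart and businessdateend take a yyyymmdd string exactly as in the URLs above:
library(jsonlite)
crawlSGXdata <- function(date){  # date as a "yyyymmdd" string, e.g. "20190708"
  base <- paste0("https://api.sgx.com/negotiatedlargetrades/v1.0?order=asc&orderby=contractcode",
                 "&category=futures&businessdatestart=", date,
                 "&businessdateend=", date, "&pageSize=250&pagestart=")
  r <- jsonlite::fromJSON(paste0(base, 0))
  df <- r$data
  num_pages <- r$meta$totalPages
  if(num_pages > 1){
    for(i in seq(1, num_pages - 1)){
      df <- rbind(df, jsonlite::fromJSON(paste0(base, i))$data)
    }
  }
  df
}
sgx_20190708 <- crawlSGXdata("20190708")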
I've successfully received an access token from an OAuth 2.0 request so that I can start obtaining some data from the server. However, I keep getting error 403 on each attempt. APIs are very new to me and I am only entry-level in R, so I can't figure out what's wrong with my request. I'm currently using the crul package, but I've also tried making the request with the httr package, and I can't get anything through without encountering the 403 error. I have a Shiny app which, in the end, I'd like to refresh with data imported from this other application which actually stores the data, but first I want to pull data to my console locally so I can understand the basic process. I will post some of my current attempts.
(x <- HttpClient$new(
url = 'https://us.castoredc.com',
opts = list( exceptions = FALSE),
headers = list())
)
res.token <- x$post('oauth/token',
body = list(client_id = "{id}",
client_secret = "{secret}",
grant_type = 'client_credentials'))
importantStuff <- jsonlite::fromJSON(res.token$parse("UTF-8"))
token <- paste("Bearer", importantStuff$access_token)
I obtain my token, but the following doesn't seem to work. I'm attempting to get the list of study codes so that I can use them in further requests to actually get data from a study.
res.studies <- x$get('/api/study',
                     headers = list(Authorization = token,
                                    client_id = "{id}",
                                    client_secret = "{secret}",
                                    grant_type = 'client_credentials'),
                     body = list(content_type = 'application/json'))
Their support team gave me the above endpoint to access the content, but I get 403, so I think I'm not using my token correctly?
status: 403
access-control-allow-headers: Authorization
access-control-allow-methods: Get,Post,Options,Patch
I'm the CEO at Castor EDC, and although it's pretty cool to see a Castor EDC question on here, I apologize for the time you lost trying to figure this out. Was our support team not able to provide more assistance?
Regardless, I have actually used our API quite a bit in R, and we also have an amazing R engineer in house if you need more help.
Reflecting on your answer: yes, you always need a study ID to be able to do anything interesting with the API. One thing that could make your life A LOT easier is our R API wrapper; you can find it here: https://github.com/castoredc/castoRedc
With that you would:
remotes::install_github("castoredc/castoRedc")
library(castoRedc)
castor_api <- CastorData$new(key = Sys.getenv("CASTOR_KEY"),
                             secret = Sys.getenv("CASTOR_SECRET"),
                             base_url = "https://data.castoredc.com")
studies <- castor_api$getStudies()  # list the studies visible to your account
example_study_id <- studies[["study_id"]][1]
fields <- castor_api$getFields(example_study_id)
etc.
Hope that makes your life a lot easier in the future.
So, after some investigation, it turns out that you first have to make a request to obtain another ID for each Castor study under your username. I will post some example code that finally worked.
req.studyinfo <- httr::GET(url = "https://us.castoredc.com/api/study",
                           httr::add_headers(Authorization = token))
json <- httr::content(req.studyinfo, as = "text")
studies <- jsonlite::fromJSON(json)
Then, this will give you a list of your studies in Castor, from which you can obtain the ID you care about for your endpoints. It will be a list that contains a data frame with this information.
You use the same format with whatever endpoint from their documentation you like in order to retrieve data. Thank you for your observations! I will leave this here in case anyone is employed to develop anything from data stored in Castor EDC. Their documentation was vague to me, so maybe it will help someone in the future.
Example for next step:
req.studydata <- httr::GET("https://us.castoredc.com/api/study/{study id obtained from previous step}/data-point-collection/study",
                           httr::add_headers(Authorization = token))
json.data <- httr::content(req.studydata, as = "text")
data <- jsonlite::fromJSON(json.data)
This worked for me; I removed the Sys.getenv() part:
library(castoRedc)
castor_api <- CastorData$new(key = "CASTOR_KEY",        # replace with your actual API key
                             secret = "CASTOR_SECRET",  # replace with your actual secret
                             base_url = "https://data.castoredc.com")
studies <- castor_api$getStudies()
example_study_id <- studies[["study_id"]][1]
fields <- castor_api$getFields(example_study_id)
I'm trying to fetch data from the Google Plus API, but I only know how to search if I know the user_id.
Here's how I get the JSON using the RCurl library:
data <- getURL(paste0("https://www.googleapis.com/plus/v1/people/",
user_id,"/activities/public?maxResults=100&key=", api_key),
ssl.verifypeer = FALSE)
I have tried formatting the URL as in Google's documentation, like so:
data <- getURL(paste0("https://www.googleapis.com/plus/v1/activities/",
keyword,"?key=",api_key),ssl.verifypeer = FALSE)
but it doesn't work.
Is it even possible to search using a keyword from R? R isn't among the supported programming languages for the API according to this link.
I figured out how to make it work.
The GET request should be formatted as:
data <- getURL(paste0("https://www.googleapis.com/plus/v1/activities?key=",api_key,"&query=",search_string),ssl.verifypeer = FALSE)
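For reference, the same request can also be built with httr, whose query argument URL-encodes the search string for you (useful when the keyword contains spaces); only the key and query parameters from the working URL above are used:
library(httr)
library(jsonlite)
res <- GET("https://www.googleapis.com/plus/v1/activities",
           query = list(key = api_key, query = search_string))
activities <- fromJSON(content(res, as = "text", encoding = "UTF-8"))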