I've tried two cryptocurrency API packages for R, namely:
coinmarketcapr
riingo (included in tidyquant)
I really want up-to-date historical cryptocurrency data; my goal is to predict prices with some time-series analysis, but with these packages I kept getting error messages. For coinmarketcapr it's understandable, because apparently my API subscription plan doesn't cover historical data, but for riingo the message looks like this:
> riingo_crypto_latest("btcusd", resample_frequency = "10min", base_currency = NULL)
Request failed [401]. Retrying in 1.6 seconds...
Request failed [401]. Retrying in 1.5 seconds...
Error: There was an error, but riingo isn't sure why. See Tiingo msg for details.
Tiingo msg) Invalid token.
Can somebody help me? Or maybe suggest another source for cryptocurrency historical data? Thank you in advance for any answer!
P.S. I've already set the API key, so it's not an authentication problem.
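For completeness, this is how I set the key, following the riingo documentation (riingo_set_token() stores it in the RIINGO_TOKEN environment variable):
library(riingo)
# Register the Tiingo API key for this session
riingo_set_token("my-tiingo-api-key")
# Verify which key riingo will actually send with requests
riingo_get_token()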
If you're trying to scrape information about new listings on crypto exchanges (and some other useful info), you might be interested in this API:
https://rapidapi.com/Diver44/api/new-cryptocurrencies-listings/
Example request in R:
library(httr)
url <- "https://new-cryptocurrencies-listings.p.rapidapi.com/new_listings"
# Header names contain hyphens, so they must be quoted
response <- VERB("GET", url,
                 add_headers("x-rapidapi-host" = "new-cryptocurrencies-listings.p.rapidapi.com",
                             "x-rapidapi-key" = "your-api-key"),
                 content_type("application/octet-stream"))
content(response, "text")
It includes an endpoint with new listings from the biggest exchanges, and a very useful endpoint with information about the exchanges where you can buy a specific coin, along with its price on each of those exchanges. You can use this information for trading: when a currency gets listed on popular exchanges like Binance, KuCoin, Huobi, etc., the price typically increases by about 20-30%.
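If the endpoint returns JSON, as RapidAPI endpoints typically do, you can parse the body into an R object with jsonlite; a small sketch, continuing from the response above (the structure of listings depends on what the API actually returns):
library(jsonlite)
listings <- fromJSON(content(response, "text", encoding = "UTF-8"))
str(listings)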
I am trying to retrieve tweets from Twitter on a full-archive basis. I finally managed to work things out on my developer page, but the code stumbles on an error that I cannot find anywhere on the internet. This is my code, without my tokens:
install.packages("RCurl")
library("RCurl")
install.packages("rtweet")
library("rtweet")
consumer_key <- ".."
consumer_secret <- ".."
access_token <- ".."
access_secret <- ".."
app <- "..."
token = rtweet::create_token(app,consumer_key,consumer_secret,access_token,access_secret)
dataBTC1 <- search_fullarchive("Bitcoin", n = 1000, env_name = "Tweets", fromDate = "201501010000")
And this is the error I get:
Error in tweet(x$quoted_status) :
Unidentified value: edit_history, edit_controls, editable.
Please open an issue and notify the maintainer. Thanks!
I have literally no idea what it means or how to solve it, if that's even possible. Can anyone help me? Thanks!
These errors occur because the R package you are using (rtweet) does not "know" about the three new fields (edit_history, edit_controls, editable) that were added to the Tweet object when the editable-Tweets feature was released. You will need to ask the rtweet maintainers (as the error message suggests) to add support for these fields in their library, or find an alternative way to call the API.
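As one such alternative, you could hit the v2 full-archive search endpoint directly with httr, so no client-side tweet parser gets in the way. A minimal sketch, assuming an academic bearer token is stored in the TWITTER_BEARER_TOKEN environment variable (endpoint and parameter names per Twitter's v2 API docs):
library(httr)
resp <- GET(
  "https://api.twitter.com/2/tweets/search/all",
  add_headers(Authorization = paste("Bearer", Sys.getenv("TWITTER_BEARER_TOKEN"))),
  query = list(query = "Bitcoin",
               start_time = "2015-01-01T00:00:00Z",
               max_results = 100)
)
tweets <- content(resp, "parsed")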
I have paid premium access to Twitter, so I can access historical data. I am trying to retrieve tweets with a combined search for the terms 'delivery' and 'accident', i.e. as if I went to a Twitter search and typed 'delivery accident' or 'delivery+accident' in the 'Latest' tab. My code for doing so is the following:
deliv_list <- searchTwitter("delivery+accident", n=500)
I am getting results which correspond to my search, but far fewer than I had expected. R returns this warning:
Warning message:
In doRppAPICall("search/tweets", n, params = params, retryOnRateLimit = retryOnRateLimit, :
500 tweets were requested but the API can only return 248
I subsequently checked, and I am able to return 500 records for a different single-term search, so it is not that I have reached a download limit. There are multiple tweets every day on this theme, so I'm not sure why so few results (n = 248) are returned. I'd appreciate any help.
It seems twitteR cannot handle full-archive premium accounts. search-tweets-python seems to be the way to go. Tantalizingly, it may even work inside R via R's reticulate package. See: https://dev.to/twitterdev/running-search-tweets-python-in-r-45eo
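A rough, untested sketch of that reticulate route, with function names taken from the searchtweets Python package (see its docs and the linked article for the credential-file format):
library(reticulate)
st <- import("searchtweets")
# Credentials file layout follows the searchtweets documentation
creds <- st$load_credentials("~/.twitter_keys.yaml",
                             yaml_key = "search_tweets_premium")
rule <- st$gen_rule_payload("delivery accident", results_per_call = 100L)
tweets <- st$collect_results(rule, max_results = 500L,
                             result_stream_args = creds)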
I am trying to get the full timeline of a Twitter user through the academictwitteR package and the Twitter academic-track API (v2). However, I get an error 400. Unfortunately, the explanation at https://developer.twitter.com/en/support/twitter-api/error-troubleshooting says "The request was invalid or cannot be otherwise served. An accompanying error message will explain further. Requests without authentication or with invalid query parameters are considered invalid and will yield this response.", but the error code does not come with any explanation. I do not know what I am doing wrong. I tried other R packages as well, with the same error. I have access to all the other functions of the API, but I cannot adjust the timeline download to more than 3200 tweets at a time (which my access should allow me to do). I want the full timeline of this user, not just the newest 3200 tweets.
The error looks like this:
Error in make_query(url = endpoint_url, params = params, bearer_token = bearer_token, :
something went wrong. Status code: 400
This is my request (I tried using the user ID; it does not change anything).
tmlne <- get_user_timeline("25390350",
                           start_tweets = "2009-03-19T00.00.00Z",
                           end_tweets = "2021-11-27T00.00.00Z",
                           bearer_token = get_bearer(),
                           data_path = "twitter/data/timelines/",
                           n = 100000,
                           bind_tweets = FALSE,
                           file = "tmlnw")
I tried revoking and renewing the bearer token, and I tested the token by downloading all tweets from a user (which is not what I am after, just a check that my access works); that works, but the timelines do not. I can get 3200 tweets with the get_timelines_v2 function of the TwitterV2 package, but I cannot circumvent the 3200-tweet limit: I do not know how to change that request to get tweets older than the most recent 3200, and I cannot get the timeline function of the academictwitteR package to work (with that package I do know how to make multiple requests).
What would help me is either information regarding the error 400, or a way to adjust the get_timelines_v2 function to include older tweets.
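One detail worth checking in the request above, purely as a guess: the timestamps use dots in the time part, while the v2 API expects RFC 3339 timestamps with colons ("2009-03-19T00:00:00Z"), and a malformed start_tweets/end_tweets is exactly the kind of "invalid query parameter" that the 400 page describes. The corrected call would look like this:
tmlne <- get_user_timeline("25390350",
                           start_tweets = "2009-03-19T00:00:00Z",
                           end_tweets = "2021-11-27T00:00:00Z",
                           bearer_token = get_bearer(),
                           data_path = "twitter/data/timelines/",
                           n = 100000,
                           bind_tweets = FALSE,
                           file = "tmlnw")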
I am trying to download, in R, the reports available in Salesforce via their URL, e.g.
http://YOURInstance.my.salesforce.com/012389u13541?export=1&enc=UTF-8&xf=csv
I have already done some investigating into accessing the report via an httr GET request, but up until today without any meaningful outcome. Unfortunately, R downloads HTML code instead of the desired csv file. I also tried to implement the approach suggested here:
https://salesforce.stackexchange.com/questions/47414/download-a-report-using-python
The package RForcecom allows interaction via an API, but I was not able to figure out how to implement the above solution in R.
General GET request:
GET("http://YOUR_Instance.my.salesforce.com/012389u13541?export=1&enc=UTF-8&xf=csv")
I expect the output to be in csv format, but I receive the report data as HTML source code:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3...
<html>
<head>
<meta HTTP-EQUIV="PRAGMA" CONTENT="NO-CACHE">
...
Did any of you encounter the same issue and can provide guidance? Any kind of help is much appreciated. Thanks in advance!
UPDATED (and still not working) R snippet:
library(RForcecom)
library(httr)
username <- "username"
password <- "password"
instanceURL <- "https://login.salesforce.com/"
session <- rforcecom.login(username, password, instanceURL)
sid <- as.character(session["sessionID"])
url <- "http://YOURInstance.my.salesforce.com/012389u13541?export=1&enc=UTF-8&xf=csv"
getData <- GET(url,
               add_headers("Content-Type" = "application/json",
                           "Authorization" = paste0("Bearer ", sid),
                           "X-PrettyPrint" = "1"),
               set_cookies("sid" = sid))
Are you sure you have a valid report id? It doesn't look right (or did you just obfuscate it for the purposes of this post?). What is in the HTML you're getting: an error message? The SF login screen?
What you're doing is effectively "screen scraping". This is not a real API; it can break at any time, so you should find or build something that properly uses the Salesforce Analytics API. You've been warned.
But if you're after a quick and dirty solution...
You need to pretend you're an authenticated user with a valid session id: add a cookie to your GET request.
How to get a valid session id?
You'd have to log in to SF first (for example, use the SOAP API's login call; I also listed some REST API ideas here: https://stackoverflow.com/a/56034159/313628),
or display some user's session id in an SF formula or Visualforce page and have the user copy-paste it into your app.
Once you have it, add a Cookie header to your GET with the value sid=<session id goes here>.
Here's a raw request & response in SoapUI.
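In R terms, a minimal sketch of that cookie approach with httr (the session id is a placeholder you'd obtain from one of the login calls above, and the report URL is the one from the question):
library(httr)
sid <- "<session id goes here>"
url <- "https://YOURInstance.my.salesforce.com/012389u13541?export=1&enc=UTF-8&xf=csv"
resp <- GET(url, set_cookies(sid = sid))
report <- read.csv(text = content(resp, "text", encoding = "UTF-8"))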
I recently struggled with the same issue; there's a magic parameter you need to add to the query: isdtp=p1
so if you try:
http://YOURInstance.my.salesforce.com/012389u13541?export=1&enc=UTF-8&xf=csv&isdtp=p1
it should return the file directly.
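For example, combined with the session cookie from the previous answer (the session id is again a placeholder):
library(httr)
url <- "https://YOURInstance.my.salesforce.com/012389u13541?export=1&enc=UTF-8&xf=csv&isdtp=p1"
resp <- GET(url, set_cookies(sid = "<session id goes here>"))
csv_text <- content(resp, "text", encoding = "UTF-8")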
In your example, I don't think you can use the rforcecom session with httr functions the way you are trying to.
Here is a slightly different way to solve the problem.
Rather than trying to retrieve a report that you have already created in Salesforce, why not specify the report in SOQL and use the rforcecom.query function to execute the SOQL from R? That returns the data in a data frame and requires no further data wrangling in R to make it usable.
I use this technique often, and once you get used to the Salesforce API I think it is probably faster and more powerful for most use cases.
Here is a simple function that I use to return select opportunity data for all opportunities in Salesforce.
getSFOpps <- function(session) {
#Construct SOQL Query
soql <- "SELECT Id,
Name,
AccountId,
Amount,
CurrencyIsoCode,
convertCurrency(Amount) usd_amount,
CloseDate,
CreatedDate,
Region__c,
IsClosed,
IsWon,
LastActivityDate,
LeadSource,
OwnerId,
Probability,
StageName,
Type,
IsDeleted
FROM Opportunity"
#Retrieve Opp information (as_tibble comes from the tibble package)
tibble::as_tibble(RForcecom::rforcecom.query(session, soql))
}
It requires that you pass in a valid session from rforcecom.login, but you seem to have that part working from your code above.
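For example, with hypothetical credentials:
session <- RForcecom::rforcecom.login("username", "password",
                                      "https://login.salesforce.com/")
opps <- getSFOpps(session)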
I hope this helps ...
As of v0.2.0, the {salesforcer} R package implements the Salesforce Reports and Dashboards REST API. You can execute and manage reports without needing to write functions from scratch to pull down report data. Below is an example of how to find a report in your Org and then retrieve its data. You can also just use the report Id that appears in the URL bar when viewing the report in Salesforce.
# install.packages("salesforcer")
library(dplyr, warn.conflicts = FALSE)
library(salesforcer)

# Authenticate using username, password, and security token ...
sf_auth(username = "test@gmail.com",
        password = "{PASSWORD_HERE}",
        security_token = "{SECURITY_TOKEN_HERE}")

# ... or using OAuth 2.0 authentication
sf_auth()

# Find a report in your org and run it
all_reports <- sf_query("SELECT Id, Name FROM Report")
this_report_id <- all_reports$Id[1]
results <- sf_run_report(this_report_id)
results
This question has been asked many times for PHP, C#, and some other programming languages. I can't find any information for R, though, and I can't derive a solution from the other topics.
I am trying to do Facebook data mining in R with the Rfacebook package:
library(devtools)
install_github("pablobarbera/Rfacebook/Rfacebook")
require("Rfacebook")
fb_oauth <- fbOAuth(app_id = "ID", app_secret = "SECRET",
                    extended_permissions = TRUE)
After authenticating, Facebook already shows me an error (though I'm not sure whether it has anything to do with the problem).
For starters, I'd like to access my own profile now:
myProfile <- getUsers("me", token = fb_oauth)
and that's when the console displays the following error:
An active access token must be used to query information about the current user.
How can I fix this?
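One workaround sometimes suggested for Rfacebook, sketched here with a hypothetical token string (the Graph API has changed a lot since, so treat this as a guess): skip fbOAuth and pass a temporary token from https://developers.facebook.com/tools/explorer directly, since getUsers() also accepts a plain token string.
library(Rfacebook)
# Temporary token copied from the Graph API Explorer (expires quickly)
temp_token <- "PASTE_TOKEN_FROM_GRAPH_API_EXPLORER"
myProfile <- getUsers("me", token = temp_token)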