IBrokers - Download all position information - r

I am using the IBrokers API in R to try to download my current positions in my portfolio on Interactive Brokers. However, I'm having trouble downloading the information by following the API documentation.
I can get this far with the following. This downloads my account information, but it's not in a desirable format.
tws <- twsConnect()
reqAccountUpdates(tws)
I tried using the following, but it doesn't work.
twsPortfolioValue(tws)
Ideally, I want a data frame that has the following fields: ticker, shares, execution price.
Is anyone familiar with this API?
Thank you!

You're passing a twsconn object to twsPortfolioValue, but the function needs the output of reqAccountUpdates as its input, as explained in the Details section of ?twsPortfolioValue.
Try this:
ac <- reqAccountUpdates(tws)
twsPortfolioValue(ac)
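If it helps, here is a minimal sketch of pulling the three fields you mention out of that result. The column names used below (local, position, averageCost) are an assumption on my part and may differ between IBrokers versions, so check names(pv) first.
library(IBrokers)

tws <- twsConnect()
ac <- reqAccountUpdates(tws)
pv <- twsPortfolioValue(ac)

# Column names below are assumed -- inspect names(pv) on your version.
# "local" is the local symbol, "position" the share count, and
# "averageCost" the average execution price.
positions <- data.frame(ticker = pv$local,
                        shares = pv$position,
                        exec_price = pv$averageCost)

twsDisconnect(tws)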

Related

Convert my API result into a dataframe in R

I am really struggling to understand how this newly released API works. Can someone please help me turn the result into a useful data frame in R? My res looks like the below (edited):
library(httr)
library(jsonlite)
library(dplyr)
#GET Function
res = GET("https://comtradeapi.un.org/data/v1/get/C/A/HS?reporterCode=826&period=2020&partnerCode=000&partner2Code=000&cmdCode=TOTAL&flowCode=M HTTP/1.1&subscription-key=6509aa2a08d54ca7b47a2fece2ab5bee")
df= fromJSON(rawToChar(res$content)) #this doesn't work
By pasting your URL into a browser we get:
{"elapsedTime":"0.02 secs","count":0,"data":[],"error":""}
So the problem is with the result itself: the request succeeds but returns no data (count is 0), so there is nothing for fromJSON to turn into a data frame. Also, I'd strongly advise against publishing your secret API key, as it allows others to access the data you're subscribing to!
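Once a query does return rows, the conversion is straightforward, because the payload sits in the data element of the parsed JSON. A minimal sketch, assuming you pass the parameters via query = (which also drops the stray HTTP/1.1 fragment from the URL) and keep your subscription key in an environment variable such as COMTRADE_KEY (a placeholder name):
library(httr)
library(jsonlite)
library(dplyr)

res <- GET("https://comtradeapi.un.org/data/v1/get/C/A/HS",
           query = list(reporterCode = "826", period = "2020",
                        partnerCode = "000", partner2Code = "000",
                        cmdCode = "TOTAL", flowCode = "M",
                        `subscription-key` = Sys.getenv("COMTRADE_KEY")))

parsed <- fromJSON(rawToChar(res$content))
parsed$count                  # how many rows came back
df <- as_tibble(parsed$data)  # the rows themselves, as a data frame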

API Webscrape OpenFDA with R

I am scraping OpenFDA (https://open.fda.gov/apis). I know my particular inquiry has 6974 hits, which are organized into 100 hits per page (the API's maximum download). I am trying to use R (rvest, jsonlite, purrr, tidyverse, httr) to download all of this data.
I checked the website information with curl in the terminal and downloaded a couple of pages to see a pattern.
I've tried a few lines of code, and I can only get 100 entries to download. The code below seems to work decently, but it will only pull 100 entries, i.e. one page. To skip the first 100 (which I can pull down and merge later), here is the code that I have used:
library(httr)
library(jsonlite)
library(dplyr)

# Pull one page of (up to) 100 results and flatten into a data frame
url_json <- "https://api.fda.gov/drug/label.json?api_key=YOULLHAVETOGETAKEY&search=grapefruit&limit=100&skip=6973"
raw_json <- httr::GET(url_json, accept_json())
data <- httr::content(raw_json, "text")
my_content_from_json <- jsonlite::fromJSON(data)
dplyr::glimpse(my_content_from_json)
dataframe1 <- my_content_from_json$results
View(dataframe1)
SOLUTION below in the responses. Thanks!
From the comments:
It looks like the API parameters skip and limit work better than the search_after parameter. They allow pulling down 1,000 entries at a time according to the documentation (open.fda.gov/apis/query-parameters). To provide these parameters in the query string, an example URL would be
https://api.fda.gov/drug/label.json?api_key=YOULLHAVETOGETAKEY&search=grapefruit&limit=1000&skip=0
after which you can loop to get the remaining entries with skip=1000, skip=2000, etc. as you've done above.
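A rough sketch of that loop, assuming 6,974 total hits and an API key stored in an environment variable named FDA_KEY (both placeholders); if the pages differ slightly in schema you may need to reconcile columns before binding:
library(httr)
library(jsonlite)
library(purrr)
library(dplyr)

base_url <- "https://api.fda.gov/drug/label.json"

# One request per block of 1,000 records: skip = 0, 1000, ..., 6000
pages <- map(seq(0, 6000, by = 1000), function(skip) {
  res <- GET(base_url,
             query = list(api_key = Sys.getenv("FDA_KEY"),
                          search  = "grapefruit",
                          limit   = 1000,
                          skip    = skip))
  fromJSON(content(res, "text"), flatten = TRUE)$results
})

all_results <- bind_rows(pages)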

How to get from Excel STOCKHISTORY to a list in R?

I need to find some historical time series for stocks and have the result in R.
I have already tried the quantmod package, but unfortunately most of the stocks are not covered.
I found Excel's STOCKHISTORY function to yield good results.
Hence, to allow more solutions, I phrase my question more openly:
How do I get from a table that contains stocks (ticker), start date and end date to a list in R which contains each respective stock and its stock-price time series?
My starting point looks like this:
My aim at the very end is to have something like this:
(It's also OK if I have every single stock-price time series as a csv.)
My ideas so far:
Excel VBA Solution - 1
Write a macro that executes Excel's STOCKHISTORY function on each of these stocks and writes the results as csv files (or similar)? Then, after that, read them in and create a list in R.
Excel VBA Solution - 2
Write a macro that executes Excel's STOCKHISTORY function on each of these stocks,
each one in a new worksheet? Bad idea, since there are more than 4,000 stocks.
R Solution
(If possible) call the STOCKHISTORY function from R directly (?)
Any suggestions on how to tackle this?
Kind regards
I would recommend using an API, especially over trying to connect to Excel via VBA. There are many that are free but require an API key from their website. For example, alphavantage.
library(tidyverse)
library(jsonlite)
library(httr)

symbol = "IBM"
av_key = "demo"   # replace with your own Alpha Vantage API key

# Build the query URL, requesting csv output so read_csv can parse it directly
url <- str_c("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=", symbol ,"&apikey=", av_key, "&datatype=csv")
d <- read_csv(url)
d %>% head
Credit and other options: https://quantnomad.com/2020/07/06/best-free-api-for-historical-stock-data-examples-in-r/
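To get from the question's table of tickers and dates to a list of time series, you can wrap the call above in a function and map it over the rows. A rough sketch, where stock_table with columns Ticker, Start and End is a hypothetical stand-in for the Excel table, the key lives in an environment variable, and the free tier's rate limits mean 4,000+ symbols will need throttling or a paid plan:
library(tidyverse)

av_key <- Sys.getenv("ALPHAVANTAGE_KEY")

# Download the full daily history for one symbol and trim it to the window.
# The date column is assumed to be called "timestamp" in the csv output.
get_history <- function(symbol, start, end) {
  url <- str_c("https://www.alphavantage.co/query?function=TIME_SERIES_DAILY",
               "&symbol=", symbol, "&outputsize=full",
               "&apikey=", av_key, "&datatype=csv")
  read_csv(url) %>%
    filter(timestamp >= start, timestamp <= end)
}

# One list element per ticker, named by ticker (Start/End should be Dates)
price_list <- stock_table %>%
  mutate(prices = pmap(list(Ticker, Start, End), get_history)) %>%
  pull(prices) %>%
  set_names(stock_table$Ticker)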

Cannot access EIA API in R

I'm having trouble accessing the Energy Information Administration's API through R (https://www.eia.gov/opendata/).
On my office computer, if I try the link in a browser it works, and the data shows up (the full url: https://api.eia.gov/series/?series_id=PET.MCREXUS1.M&api_key=e122a1411ca0ac941eb192ede51feebe&out=json).
I am also successfully connected to Bloomberg's API through R, so R is able to access the network.
Since the API is working and not blocked by my company's firewall, and R is in fact able to connect to the Internet, I have no clue what's going wrong.
The script works fine on my home computer, but at my office computer it is unsuccessful. So I gather it is a network issue, but if somebody could point me in any direction as to what the problem might be I would be grateful (my IT department couldn't help).
library(XML)
api.key = "e122a1411ca0ac941eb192ede51feebe"
series.id = "PET.MCREXUS1.M"
my.url = paste("http://api.eia.gov/series?series_id=", series.id,"&api_key=", api.key, "&out=json", sep="")
doc = xmlParse(file=my.url, isURL=TRUE) # yields error
Error msg:
No such file or directoryfailed to load external entity "http://api.eia.gov/series?series_id=PET.MCREXUS1.M&api_key=e122a1411ca0ac941eb192ede51feebe&out=json"
Error: 1: No such file or directory2: failed to load external entity "http://api.eia.gov/series?series_id=PET.MCREXUS1.M&api_key=e122a1411ca0ac941eb192ede51feebe&out=json"
I tried some other methods like read_xml() from the xml2 package, but this gives a "could not resolve host" error.
To get XML, you need to change your URL to request XML:
my.url = paste("http://api.eia.gov/series?series_id=", series.id,"&api_key=",
api.key, "&out=xml", sep="")
res <- httr::GET(my.url)
xml2::read_xml(res)
Or :
res <- httr::GET(my.url)
XML::xmlParse(res)
Otherwise, with the URL as in the post (i.e. &out=json):
res <- httr::GET(my.url)
jsonlite::fromJSON(httr::content(res,"text"))
or this:
xml2::read_xml(httr::content(res,"text"))
Please note that this answer simply provides a way to get the data, whether it is in the desired form is opinion based and up to whoever is processing the data.
If it does not have to be XML output, you can also use the new eia package. (Disclaimer: I'm the author.)
Using your example:
remotes::install_github("leonawicz/eia")
library(eia)
x <- eia_series("PET.MCREXUS1.M")
This assumes your key is set globally (e.g., in .Renviron or previously in your R session with eia_set_key). But you can also pass it directly to the function call above by adding key = "yourkeyhere".
The result returned is a tidyverse-style data frame, one row per series ID and including a data list column that contains the data frame for each time series (can be unnested with tidyr::unnest if desired).
Alternatively, if you set the argument tidy = FALSE, it will return the list result of jsonlite::fromJSON without the "tidy" processing.
Finally, if you set tidy = NA, no processing is done at all and you get the original JSON string output for those who intend to pass the raw output to other canned code or software. The package does not provide XML output, however.
There are more comprehensive examples and vignettes at the eia package website I created.
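For example, a minimal sketch of the workflow just described (the name of the data list column follows the package documentation and could differ between versions):
library(eia)
library(tidyr)

# eia_set_key stores the key for the session; alternatively set EIA_KEY in
# .Renviron or pass key = "yourkeyhere" directly to eia_series().
eia_set_key("yourkeyhere")

x <- eia_series("PET.MCREXUS1.M")
x                  # one row for the series, with a "data" list column
unnest(x, data)    # one row per observation in the time series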

rvest package to harvest instagram number of followers?

I'm trying to adapt the example code in the rvest package to get the number of followers for an account on Instagram (e.g., https://www.instagram.com/bradyellison/). I tried using selectorgadget to isolate the code for the number of followers, which gave me this: ._218yx:nth-child(2) ._s53mj. But I don't get the expected followers back, and I'm not sure how to debug this. Here's my code.
# example
require(rvest)
html <- read_html("https://www.instagram.com/bradyellison/")
athlete_followers <- html_nodes(html, "._218yx:nth-child(2) ._s53mj")
length(athlete_followers)
Output is:
[1] 0
Expected followers are 12.1K. Would really appreciate help. (I've tried using the Instagram API for this first, but couldn't get it to work, perhaps because I'm in sandbox mode or something.)
You can't scrape this page with rvest because it's not a static site; rather, it's generated dynamically via code (e.g. try xml_text(html)). To access Instagram data you should use their API. See a full example here: https://www.r-bloggers.com/analyze-instagram-with-r/ .
