I would like to retrieve power hedging data using the Rbbg Bloomberg package in R, and I know this formula works in Excel:
=BDH("VATT SS Equity","BI_%_ELECTRIC_POWER_HEDGED","01/01/2000","","GEOGRAPHIC_LOCATION_OVERRIDE=EUCN","BI_CONTRACT_MATURITY_OVERRIDE=CY12","FUND_PER=Q")
But when I try this in R:
conn<-blpConnect(log.level="off")
data<-bdh(conn,"VATT SS Equity","BI_PER_ELECTRIC_POWER_HEDGED","20000101","","GEOGRAPHIC_LOCATION_OVERRIDE=EUCN","BI_CONTRACT_MATURITY_OVERRIDE=CY12","FUND_PER=Q")
I get the following error message :
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "invokeMethod", cl, :
org.findata.blpwrapper.WrapperException: response error: Invalid override field id specified [nid:217]
What should I change in the formula to make it work?
Thanks
Edit: Indeed, the field is BI_PCT_ELECTRIC_POWER_HEDGED; however, the problem does not come from there but from the overrides.
This returns an empty variable for me, but it doesn't throw an error, so it might get you on the right track.
It seems the way you specify options is different in the current version.
data<-bdh(conn,"VATT SS Equity", "BI_PER_ELECTRIC_POWER_HEDGED","20000101","",
override_fields=c("GEOGRAPHIC_LOCATION_OVERRIDE",
"BI_CONTRACT_MATURITY_OVERRIDE",
override_values=c("EUCN","CY12"),
option_names="periodicitySelection",
option_values="QUARTERLY")
The doc where I found the correct syntax is here: RBloomberg. It was written in 2010 for the predecessor package (before Bloomberg complained about the use of their name), but I guess it still works. The convention of listing the option names and then the option values separately is odd compared to your assumption that OPTION=VALUE pairs would work, but there you go.
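Putting that together with the corrected field mnemonic from your edit, the full call would look roughly like the sketch below. The field name and override values are taken from the question, so adjust them if your terminal reports different mnemonics:

library(Rbbg)

conn <- blpConnect(log.level = "off")

# Sketch: corrected field (BI_PCT_..., per the question's edit) plus the
# override_fields/override_values and option_names/option_values syntax
data <- bdh(conn, "VATT SS Equity", "BI_PCT_ELECTRIC_POWER_HEDGED",
            "20000101", "",
            override_fields = c("GEOGRAPHIC_LOCATION_OVERRIDE",
                                "BI_CONTRACT_MATURITY_OVERRIDE"),
            override_values = c("EUCN", "CY12"),
            option_names    = "periodicitySelection",
            option_values   = "QUARTERLY")

blpDisconnect(conn)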
I just tried to use:
scRNA <- FindNeighbors(scRNA, dims = pc.num)
and
scRNA.counts <- Read10X(data.dir = "filtered_feature_bc_matrix")
and both of them give an error like:
Error in validityMethod(as(object, superClass)) : object 'Matrix_validate' not found
I guess this code runs fine on other people's computers, so I wonder what's wrong with my setup and how to fix it.
Indeed, to solve the problem for you, it should be sufficient to do what #Mikael Jagan says:
update.packages("Matrix")
Second thought: the above may not solve the problem entirely.
As there are other packages involved, some of these may have to be re-installed (after updating Matrix), as sketched below.
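A minimal sketch of that re-installation step, assuming the error comes from packages whose compiled code was built against an older Matrix (SeuratObject and irlba are common culprits with Seurat, but your traceback() may point at others):

# Update Matrix itself, then re-install from source the packages that
# compile against it, so their binaries match the new Matrix version.
install.packages("Matrix")
install.packages(c("SeuratObject", "irlba"), type = "source")
# Restart R afterwards so the freshly built packages are loaded.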
Can you post the output (or a good summary of it, if it's too long) of
traceback()
immediately after producing the error you are seeing?
I need to switch to an sf package command, because readShapePoly is going to be removed in the near future. So I want to change my code from readShapePoly to sf::st_read, but I cannot write the correct code. I would be very happy if someone could show me the correct sf command. My current command is below. (Sorry for my poor English.)
In R, I have tried the sf::st_read command again and again, but it shows an error every time; my code and the error message are below.
usa_state <- readShapePoly("usa_state.shp", IDvar = "STATE_CODE")
That works, but I know I will have to change this code soon, because the command is being removed. So please show me the sf package equivalent. I tried the code below, but R does not accept it.
usa_state = sf::st_read("usa_state.shp", layer = "STATE_CODE")
That code is wrong; please show me the correct code. The error I get is:
Error in CPL_read_ogr(dsn, layer, query, as.character(options), quiet, :
SQL execution failed, cannot open layer.
In addition: Warning message:
In CPL_read_ogr(dsn, layer, query, as.character(options), quiet, :
GDAL Error 1: SQL Expression Parsing Error: syntax error, unexpected
identifier, expecting SELECT or '('. Occurred around : "STATE_CODE"
You're almost there with usa_state = sf::st_read("usa_state.shp", layer = "STATE_CODE").
I'm guessing that STATE_CODE is a field in the usa_state.shp shapefile. You don't need to supply any field names to the st_read() function. Just use:
library(sf)
usa_state = st_read("usa_state.shp")
You'll need to make sure that the usa_state.shp file (and its associated files) are in your current working directory, or you'll need to use the full path:
usa_state = st_read("/path/to/usa_state.shp")
The sf package is well worth getting to know. It's made all of my spatial work in R much easier.
I am new to SparkR, so please forgive if my question is very basic.
I work on Databricks and am trying to get all unique dates of a column of a SparkDataFrame.
When I run:
uniquedays <- SparkR::distinct(df$datadate)
I get the error message:
unable to find an inherited method for function ‘distinct’ for signature ‘"Column"’
On Stack Overflow, I found out that this usually means the following (and indeed, isS4(df) returns TRUE for me):
That is the type of message you will get when attempting to apply an S4 generic function to an object of a class for which no defined S4 method exists
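To illustrate where the mismatch is, listing the signatures the generic is defined for (a sketch, assuming SparkR is attached) shows a method for SparkDataFrame but none for Column:

library(SparkR)
showMethods("distinct")     # method defined for SparkDataFrame, not for Column
isS4(df$datadate)           # TRUE: a Column is S4, just not a class distinct() accepts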
I also tried to run
uniquedays <- SparkR::unique(df$datadate)
where I get the error message:
unique() applies only to vectors
It feels like I am missing something basic here.
Thank you for your help!
Try this:
library(magrittr)
uniquedays <- SparkR::select(df, df$datadate) %>% SparkR::distinct()
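If you then need the distinct dates back in your local R session rather than as a SparkDataFrame, you can collect them afterwards (a small sketch):

library(magrittr)
uniquedays_local <- SparkR::select(df, df$datadate) %>%
  SparkR::distinct() %>%
  SparkR::collect()                  # returns an ordinary local data.frame
head(uniquedays_local$datadate)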
I found an answer to my own question (see below). Still need help.
In the same package, quantmod, there is a function called getSymbols.google.
If I use it to get Microsoft's quote, for example, it works all right:
getSymbols.google('MSFT', environment() , src="google", from = (Sys.Date() - 1))
[1] "MSFT"
But I can't make it work on a currency pair:
getSymbols.google("GBPUSD", environment() , src="google", from = (Sys.Date() - 1))
Error in download.file(paste(google.URL, "q=", Symbols.name, "&startdate=", :
cannot open URL 'http://finance.google.com/finance/historical?q=GBPUSD&startdate=Nov+02,+2017&enddate=Nov+03,+2017&output=csv'
In addition: Warning message:
In download.file(paste(google.URL, "q=", Symbols.name, "&startdate=", :
cannot open URL 'http://finance.google.com/finance/historical?q=GBPUSD&startdate=Nov+02,+2017&enddate=Nov+03,+2017&output=csv': HTTP status was '400 Bad Request'
Any ideas?
Good morning,
Since the 1st of November I'm having trouble with the function getQuote from Yahoo. It is a function inside the package "quantmod", which uses the Yahoo API to request the information.
The description of the function is as follows; Fetch current stock quote(s) from specified source. At present this only handles sourcing quotes from Yahoo Finance, but it will be extended to additional sources over time.
In R, I'm getting the following error: "HTTP status was '403 Forbidden'".
I've looked in my browser, and the error comes from the Yahoo web page itself.
Does anybody know how to solve it, or any alternative to the function getQuote()?
Here is an example from RStudio
getQuote("AAPL")
Error in download.file(paste("https://finance.yahoo.com/d/quotes.csv?s=", :
cannot open URL 'https://finance.yahoo.com/d/quotes.csv?s=AAPL&f=d1t1l1c1p2ohgv'
In addition: Warning message:
In download.file(paste("https://finance.yahoo.com/d/quotes.csv?s=", :
cannot open URL 'https://finance.yahoo.com/d/quotes.csv?s=AAPL&f=d1t1l1c1p2ohgv': HTTP status was '403 Forbidden'
Thanks
Seems that Yahoo has discontinued this service. Is anyone aware of an alternative to Yahoo? (I'd rather not have to web-scrape Yahoo for this.)
rob
I ran into the same problem... it's kludgey but as a workaround to get the end-of-day value, I have found this to work for now:
Instead of getQuote() to get the Last price (which doesn't seem to work from Yahoo anymore):
underlying<-"AAPL"
quote.last <-getQuote(underlying)$Last
I use "getSymbols" which still works-- throws it into a new data frame, and I pull out the value I want from that:
Hx <- getSymbols(underlying, from = Sys.Date() - 1)  # lets me avoid retaining the ticker name if I do this across many tickers
quote.last <- as.double(tail(Cl(get(Hx)), 1))        # closing price from the last row of data
rm(list = Hx)                                        # throw away the temporary object holding the quote history
I'm sure there's a more elegant way to do it, but this is what fell out of my brain as a quick workaround that got it done... sadly it doesn't get things like the Bid and Ask that getQuote does.
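If you do this for many tickers, the same workaround can be wrapped in a small helper (a sketch; last_close() is a hypothetical name, and it still only returns the Close, not Bid/Ask):

library(quantmod)

# Hypothetical helper: most recent closing price for a single ticker
last_close <- function(ticker) {
  hx <- getSymbols(ticker, from = Sys.Date() - 7, auto.assign = FALSE)  # go a week back to cover weekends/holidays
  as.double(tail(Cl(hx), 1))                                            # last available Close
}

last_close("AAPL")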
I'm working for the first time with the R package BerkeleyEarth, and attempting to use its convenience functions to access the BEST data. I think maybe it's just a problem with their servers (a matter I've separately addressed to the package's maintainer) but I wanted to know if it's instead something silly I'm doing.
To reproduce my problem:
library(BerkeleyEarth)
downloadBerkeley()
which produces the following error message:
trying URL 'http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip'
Error in download.file(urls$Url[thisUrl], destfile = file.path(destDir, :
cannot open URL 'http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip'
In addition: Warning message:
In download.file(urls$Url[thisUrl], destfile = file.path(destDir, :
InternetOpenUrl failed: 'A connection with the server could not be established'
Has anyone had a better experience using this package?
The error message points to a different URL than the ones listed at http://berkeleyearth.org/data/ for the zip-formatted files. There is also another set of .nc files that appears to be more recent. I would replace the entries in the BerkeleyUrls data frame with the ones that match your analysis strategy:
This is the current URL that should be in position 1,1:
http://berkeleyearth.lbl.gov/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip
And this is the one that is in the package dataframe:
> BerkeleyUrls[1,1]
[1] "http://download.berkeleyearth.org/downloads/TAVG/LATEST%20-%20Non-seasonal%20_%20Quality%20Controlled.zip"
I suppose you could try:
BerkeleyUrls[, 1] <- sub( "download\\.berkeleyearth\\.org", "berkeleyearth.lbl.gov", BerkeleyUrls[, 1])
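After patching the data frame you can sanity-check the result, or fall back to downloading a file manually with download.file() (a sketch; the destination path is just an example):

BerkeleyUrls[1, 1]    # should now point at berkeleyearth.lbl.gov

# Manual fallback: fetch the first corrected file directly
dest <- file.path(tempdir(), "TAVG_non-seasonal_QC.zip")
download.file(BerkeleyUrls[1, 1], destfile = dest, mode = "wb")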