Is it possible to draw a log price chart in the chart.Posn() or chart.Reconcile() functions of blotter? I tried adding log.scale = TRUE to the function call without success. Is the underlying chart_Series function still too "experimental" to support this functionality or is the function call not correct?
chart.Posn(Portfolio = portfolio.st, Symbol = "GSPC", log.scale = TRUE)
Update: I have been trying to use the chart_Series() function directly, setting the ylog graphical parameter:
par(ylog=TRUE)
chart_Series(Cl(GSPC))
But I receive an error "log scale needs positive bounds" despite the data being all positive.
Btw, GSPC is an OHLCV xts time series of the S&P 500 that plots fine in chartSeries() and chart_Series(), just not with a log scale for either charting function.
I found this old post not as a solution but as an alternative:
Does chart_Series() work with logarithmic axis?
I don't think there is any parameter like log.scale that chart_Series recognises. You could simply do chart_Series(log(Cl(GSPC))). You could also make some basic modifications to chart.Posn to put things on the log scale. Use the source code for chart.Posn as a starting point.
Here is an example of a modified function you could make. You can obviously modify it further in any way you please.
# We need an example. So,
# Source this code from the directory containing quantstrat, or at least source the macd.R demo in quantstrat.
source("demo/macd.R")
log.chart.Posn <- function(Portfolio, Symbol, Dates = NULL, env = .GlobalEnv) {
    pname <- Portfolio
    Portfolio <- getPortfolio(pname)
    x <- get(Symbol, env)
    # plot the log-transformed price series
    Prices <- log(x)
    chart_Series(Prices)
    # browser()
    if (is.null(Dates)) Dates <- paste(first(index(Prices)), last(index(Prices)), sep = '::')
    # scope the data by Dates
    Portfolio$symbols[[Symbol]]$txn <- Portfolio$symbols[[Symbol]]$txn[Dates]
    Portfolio$symbols[[Symbol]]$posPL <- Portfolio$symbols[[Symbol]]$posPL[Dates]
    Trades <- Portfolio$symbols[[Symbol]]$txn$Txn.Qty
    # log the transaction prices too, so the buy/sell markers line up with the log prices
    Buys <- log(Portfolio$symbols[[Symbol]]$txn$Txn.Price[which(Trades > 0)])
    Sells <- log(Portfolio$symbols[[Symbol]]$txn$Txn.Price[which(Trades < 0)])
    Position <- Portfolio$symbols[[Symbol]]$txn$Pos.Qty
    if (nrow(Position) < 1) stop('no transactions/positions to chart')
    if (as.POSIXct(first(index(Prices))) < as.POSIXct(first(index(Position))))
        Position <- rbind(xts(0, order.by = first(index(Prices) - 1)), Position)
    Positionfill <- na.locf(merge(Position, index(Prices)))
    CumPL <- cumsum(Portfolio$symbols[[Symbol]]$posPL$Net.Trading.PL)
    if (length(CumPL) > 1)
        CumPL <- na.omit(na.locf(merge(CumPL, index(Prices))))
    else
        CumPL <- NULL
    if (!is.null(CumPL)) {
        CumMax <- cummax(CumPL)
        Drawdown <- -(CumMax - CumPL)
        Drawdown <- rbind(xts(-max(CumPL), order.by = first(index(Drawdown) - 1)), Drawdown)
    } else {
        Drawdown <- NULL
    }
    if (!is.null(nrow(Buys)) && nrow(Buys) >= 1) (add_TA(Buys, pch = 2, type = 'p', col = 'green', on = 1))
    if (!is.null(nrow(Sells)) && nrow(Sells) >= 1) (add_TA(Sells, pch = 6, type = 'p', col = 'red', on = 1))
    if (nrow(Position) >= 1) {
        (add_TA(Positionfill, type = 'h', col = 'blue', lwd = 2))
        (add_TA(Position, type = 'p', col = 'orange', lwd = 2, on = 2))
    }
    if (!is.null(CumPL)) (add_TA(CumPL, col = 'darkgreen', lwd = 2))
    if (!is.null(Drawdown)) (add_TA(Drawdown, col = 'darkred', lwd = 2, yaxis = c(0, -max(CumMax))))
    plot(current.chob())
}
log.chart.Posn(Portfolio = portfolio.st, Symbol = "AAPL", Dates = NULL, env = .GlobalEnv)
add_MACD() # Simply added to make the plot almost identical to what is in demo/macd.R
This is what the original chart looks like:
New plot, with log scales:
I am attempting to create a raster of pixel metrics for a large collection of lidar data using the lidR package. I want to first remove any outlier points in the point cloud, normalize the point cloud to a digital terrain model, and finally calculate the standard z pixel metrics on a 20 m x 20 m grid.
I followed the guidance in the lidR package's book and vignettes on using the catalog_apply() engine. I have created a "low level API" function with a conditional cascade: it first checks whether the input is a LAScatalog and, if so, runs the function through catalog_apply(); it then checks whether the input is a LAScluster and, if so, runs the function directly and clips the chunk buffers from the output; finally, it checks whether the input is a LAS and, if so, runs the function explicitly.
I am struggling to get the function to run properly on a LAScatalog. When I run the function on a LAS file, it works without error; however, when I run it on a LAScatalog, all chunks show an error on the plot, and when the routine finishes it throws this error:
Error in any_list[[1]] : subscript out of bounds
In addition: There were 15 warnings (use warnings() to see them)
This error makes me think that I am missing some catalog_apply engine or SpatRaster driver option that tells the function how to merge the output chunks back together to form the final output, but I am not sure which option that would be. I haven't been able to find any answers on the lidR wiki page, in the vignettes, or in the book, nor can I find a similar issue here on Stack Overflow. Any advice would be much appreciated. Below is my reproducible example:
##Loading Necessary Packages##
library(lidR)
library(future)
##Reading in the data##
LASfile <- system.file("extdata", "MixedConifer.laz", package="lidR")
ctg <- readLAScatalog(LASfile) # As LAScatalog
las<-readLAS(LASfile) # As LAS
####Custom function####
raster_metrics <- function(las, dtm_ras, grid_size, sensitivity){ #start function
  if (is(las, "LAScatalog")){ #start first conditional for LAScatalog
    options <- list(automerge = TRUE, need_buffer = TRUE)
    output <- catalog_apply(las, raster_metrics, grid_size = grid_size, sensitivity = sensitivity, .options = options)
    return(output)
  } else { #end first conditional, start first else
    if (is(las, "LAScluster")){ #start second conditional for LAScluster
      las <- readLAS(las)
      if (is.empty(las)){ return(NULL) } #conditional for empty chunk (self contained)
      output_tmp <- raster_metrics(las, dtm_ras, grid_size, sensitivity)
      bbox <- sf::st_bbox(las)
      output <- st_crop(output_tmp, bbox)
      return(output)
    } else { #end second conditional, begin second else
      if (is(las, "LAS")){
        p95 <- pixel_metrics(las, ~quantile(Z, probs = 0.95), grid_size)
        las <- merge_spatial(las, p95, "p95")
        las <- filter_poi(las, Z < p95 * sensitivity)
        las$p95 <- NULL
        norm <- las - dtm_ras
        output <- pixel_metrics(norm, .stdmetrics_z, grid_size)
        return(output)
      } else { #end final conditional, begin final else
        stop("This type is not supported.")
      } #end final else
    } #end second else
  } #end first else
} #end function
##Creating a rasterized dtm to feed to the function##
dtm_ras<-rasterize_terrain(ctg, algorithm = knnidw())
##Defining some function and engine option settings##
grid_size<-20.0
sensitivity<-1.2
chunk_size<-grid_size*50
chunk_buffer<-grid_size*2
##Setting driver and engine option parameters##
opt_output_files(ctg)<-paste0(tempdir(), "/{XCENTER}_{YCENTER}_{ID}_Norm_Height")
opt_chunk_size(ctg)<-chunk_size
opt_chunk_buffer(ctg)<-chunk_buffer
opt_wall_to_wall(ctg)<-TRUE
opt_stop_early(ctg)<-FALSE
opt_filter(ctg)<-"-drop_withheld"
opt_select(ctg)<-"xyz"
ctg@output_options$drivers$SpatRaster$param$overwrite <- TRUE
##Setting up parallel processing##
plan(multisession, workers = nbrOfWorkers()-1)
set_lidr_threads(nbrOfWorkers()-1)
##Running the function##
example1<-raster_metrics(las=ctg, dtm_ras = dtm_ras, grid_size = grid_size, sensitivity = sensitivity)#Throws error
example2<-raster_metrics(las=las, dtm_ras = dtm_ras, grid_size = grid_size, sensitivity = sensitivity)#Works without error
UPDATE 2/3/2023
After doing a little digging on my own, it appears that this error is thrown by the internal function lidR:::engine_merge(), which has an any_list= argument. This makes me think that my function somehow violates one of the template rules of catalog_apply(), but I copied the template verbatim from the vignette. Hopefully this helps elucidate the source of my error.
You forgot to propagate dtm_ras:
output<-catalog_apply(las, raster_metrics, dtm_ras = dtm_ras, grid_size=grid_size, sensitivity=sensitivity, .options=options)
You used the incorrect package to crop:
bbox <-terra::ext(las)
output<-terra::crop(output_tmp, bbox)
With the following function, it works in sequential mode:
raster_metrics <- function(las, dtm_ras, grid_size, sensitivity)
{
  if (is(las, "LAScatalog"))
  {
    options <- list(automerge = FALSE, need_buffer = TRUE)
    output <- catalog_apply(las, raster_metrics, dtm_ras = dtm_ras, grid_size = grid_size, sensitivity = sensitivity, .options = options)
    return(output)
  }
  else if (is(las, "LAScluster"))
  {
    las <- readLAS(las)
    if (is.empty(las)){ return(NULL) }
    output_tmp <- raster_metrics(las, dtm_ras, grid_size, sensitivity)
    bbox <- terra::ext(las)
    output <- terra::crop(output_tmp, bbox)
    return(output)
  }
  else if (is(las, "LAS"))
  {
    p95 <- pixel_metrics(las, ~quantile(Z, probs = 0.95), grid_size)
    las <- merge_spatial(las, p95, "p95")
    las <- filter_poi(las, Z < p95 * sensitivity)
    las$p95 <- NULL
    norm <- las - dtm_ras
    output <- pixel_metrics(norm, .stdmetrics_z, grid_size)
    return(output)
  }
  else
  {
    stop("This type is not supported.")
  }
}
However, it does not work in parallel, because terra's SpatRaster objects are not serializable. To put it simply: when dtm_ras is sent to each worker, it no longer exists. This is not an issue with lidR; it is an issue with terra. In lidR functions, I use internal workarounds to deal with SpatRaster objects by converting them to raster objects.
On your side, the simplest option is to use a RasterLayer from the raster package.
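For example, here is a minimal sketch of that workaround (assuming a version of the raster package recent enough to coerce a SpatRaster directly):
library(raster)
# Convert the terra SpatRaster to a raster RasterLayer, which serializes
# cleanly when future ships it to the parallel workers
dtm_ras <- raster::raster(dtm_ras)
# The rest of the workflow is unchanged
example1 <- raster_metrics(las = ctg, dtm_ras = dtm_ras, grid_size = grid_size, sensitivity = sensitivity)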
I'm working with a choroplethr map like the one below. How do I simply remove the state abbreviations?
Here is the replication code:
library(choroplethr)
library(choroplethrMaps)
data(df_pop_state)
df_pop_state$value <- as.numeric(df_pop_state$value)
state_choropleth(df_pop_state, num_colors = 1,
title = "2012 State Population Estimates",
legend = "Population")
Thank you for using choroplethr. Note that choroplethr uses R6 objects. In fact, the state_choropleth function is just a convenience wrapper for the StateChoropleth R6 object:
> state_choropleth
function (df, title = "", legend = "", num_colors = 7, zoom = NULL,
reference_map = FALSE)
{
c = StateChoropleth$new(df)
c$title = title
c$legend = legend
c$set_num_colors(num_colors)
c$set_zoom(zoom)
if (reference_map) {
if (is.null(zoom)) {
stop("Reference maps do not currently work with maps that have insets, such as maps of the 50 US States.")
}
c$render_with_reference_map()
}
else {
c$render()
}
}
<bytecode: 0x7fdda6aa3a10>
<environment: namespace:choroplethr>
If you look at the source code you will see that there is a field on the object that does what you want: show_labels. It defaults to TRUE.
We can get the result you want by simply creating your map using the StateChoropleth object (not the function) and setting show_labels to FALSE.
c = StateChoropleth$new(df_pop_state)
c$title = "2012 State Population Estimates"
c$legend = "Population"
c$set_num_colors(1)
c$show_labels = FALSE
c$render()
I chose this approach because, in general, I have found that many functions in R take a large number of parameters, which can be confusing. The downside is that functions are easier to document than objects (especially in R), so questions like this come up frequently.
This function returns a ggplot object, so you can manually inspect the layers and remove the one you don't want (here, the GeomText layer that draws the state abbreviations):
states <- state_choropleth(
df_pop_state,
num_colors = 1,
title = "2012 State Population Estimates",
legend = "Population"
)
layer_no <- grep("GeomText", sapply(states$layers, function(x) class(x$geom)[1]))
states$layers[[layer_no]] <- NULL
states
I have several xyplot objects that I have saved as .RDATA files. I am now interested in being able to look at their differences. I have tried things like
plot1-plot2
but this does not work (I get the "non-numeric argument to binary operator" error).
I would also be able to do this if I knew how to extract the timeseries data stored within the lattice xyplot object, but I have looked everywhere and can't figure out how to do this either.
Any suggestions?
EDIT:
Just to make it perfectly clear what I mean for MrFlick: by "taking the difference of two plots" I mean plotting the elementwise difference of the time series from each plot, assuming it exists (i.e. assuming that the plots have the same domain). Graphically, I might want to take the following two plots, stored as xyplot objects:
and end up with something that looks like this:
Here is a little function I wrote to plot the difference of two xyplots:
library(lattice)     # xyplot
library(data.table)  # data.table, setnames

getDifferencePlot = function(plot1, plot2){
  data1 = plot1$panel.args
  data2 = plot2$panel.args
  len1 = length(data1)
  len2 = length(data2)
  if (len1 != len2)
    stop("plots do not have the same number of panels -- cannot take difference")
  if (len1 > 1){
    plotData = data.table(x = numeric(), y1 = numeric(), y2 = numeric(), segment = integer())
    for (i in 1:len1){
      thing1 = data.table(cbind(data1[[i]]$x, data1[[i]]$y))
      thing2 = data.table(cbind(data2[[i]]$x, data2[[i]]$y))
      # merge on the shared x values (column V1)
      finalThing = merge(thing1, thing2, by = "V1")
      finalThing[, segment := i]
      # setnames() renames in place; setNames() would return a copy that gets discarded
      setnames(finalThing, c("x", "y1", "y2", "segment"))
      plotData = rbind(plotData, finalThing)
    }
  }
  if (len1 == 1){
    thing1 = data.table(cbind(data1[[1]]$x, data1[[1]]$y))
    thing2 = data.table(cbind(data2[[1]]$x, data2[[1]]$y))
    plotData = merge(thing1, thing2, by = "V1")
    setnames(plotData, c("x", "y1", "y2"))
  }
  plotData$difference = plotData$y1 - plotData$y2
  if (len1 == 1)
    diffPlot = xyplot(difference ~ x, plotData, type = "l", auto.key = TRUE)
  if (len1 > 1)
    diffPlot = xyplot(difference ~ x | segment, plotData, type = "l", auto.key = TRUE)
  return(diffPlot)
}
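For a quick sanity check, here is a toy single-panel example of calling it (made-up data, just to show the usage):
library(lattice)
library(data.table)
x <- 1:100
plot1 <- xyplot(sin(x / 10) ~ x, type = "l")
plot2 <- xyplot(cos(x / 10) ~ x, type = "l")
# plots the elementwise difference sin(x/10) - cos(x/10) against x
getDifferencePlot(plot1, plot2)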
I'm trying to apply the combinef and aggts functions from the R hts package to a time series matrix in order to obtain an optimized set of forecasts across a hierarchy. I've run the same code every month without issue, and am now seeing errors after upgrading to hts package v4.5.
Reproducible example (I can share the data file offline if needed):
#Read in forecast data for all levels of hierarchy#
fcast<-read.csv("SampleHierarchyForecast.csv", header = TRUE, check.names = FALSE)
#Convert to time series#
fcast<-ts(fcast, start = as.numeric(2010.25) + (64)/12, end = as.numeric(2010.25) + (75)/12, f= 12)
#Create time series of only the bottom level of the hierarchy#
index <- c()
fcastBottom <- fcast
for (i in 1:length(fcastBottom[1, ]))
{
  if (nchar(colnames(fcastBottom)[i]) != 28)
    index[i] <- i
  else
    index[i] <- 0
}
fcastBottom <- fcastBottom[, -index]
#Create grouped time series from the bottom level forecast #
GtsForecast <- gts(fcastBottom, characters = list(c(12,12), c(4)), gnames = c("Category", "Item", "Customer", "Category-Customer"))
#Use combinef function to optimally combine the full hierarchy forecast using the groups from the full hierarchy gts#
combo <- combinef(fcast, groups = GtsForecast$groups)
Warning message:
In mapply(rep, as.list(gnames), times, SIMPLIFY = FALSE) :
  longer argument not a multiple of length of shorter
traceback()
2: stop("Argument fcasts requires all the forecasts.")
1: combinef(fcast, groups = GtsForecast$groups)
There's a little bug when the combinef() function calls gts(). I've now fixed it on GitHub, so you can run your code above without any trouble after updating to the development version.
Alternatively, if you don't want to install the newest version, you can tweak your code a bit:
combo <- combinef(fcast, groups = GtsForecast$groups, keep = "bottom")
combo <- ts(combo, start = as.numeric(2010.25) + (64)/12,
end = as.numeric(2010.25) + (75)/12, f = 12)
colnames(combo) <- colnames(fcastBottom)
newGtsForecast <- gts(combo, characters = list(c(12,12), c(4)),
gnames = c("Category", "Item", "Customer",
"Category-Customer"))
Aggregate <- aggts(newGtsForecast)
Hope it helps.
Can one use t statistics from a Student's t-test directly in the fdrtool() function of the fdrtool package (ver. 1.2.12)? The paper (Strimmer K., BMC Bioinformatics 2008, 9:303) mentions this, but as far as I can see the parameters only recognize "normal", "correlation" and "pvalue". Is there a workaround for a non-statistician?
I think it's a typo.
I took a look at the source for the fdrtool function and found that the statistic argument first gets passed through match.arg and then to fdrtool:::get.nullmodel.
Then, lo and behold:
args(fdrtool:::get.nullmodel)
# function (statistic = c("normal", "correlation", "pvalue", "studentt"))
# NULL
and indeed there is a fully-implemented case in that function for the student t:
if (statistic == "studentt") {
f0 = function(x, param, log = FALSE) {
return(dt(x, df = param, log = log))
}
F0 = function(x, param) {
return(pt(x, df = param))
}
iqr = function(param) {
return(qt(0.75, df = param) - qt(0.25, df = param))
}
get.support = function() return(c(1, 1000))
}
Now, before I tell you how to access this option, I want to warn you that it's very possible it was disabled on purpose. I can't imagine why, because at first glance it seems like it should work fine. But if you're planning to use this in a research result you ought to document the fact that this was essentially a "hidden" option and that you had to do some hacking to access it. Moreover, I haven't actually tested this on my computer, so beware of typos.
Now, as for that hacking, the easiest way to get this to work would be to first simply type fdrtool into the R console. Then, copy and paste the output to a new R script (or use sink if you're fancy like that). The first few lines should look like:
function (x, statistic = c("normal", "correlation", "pvalue"),
plot = TRUE, color.figure = TRUE, verbose = TRUE, cutoff.method = c("fndr",
"pct0", "locfdr"), pct0 = 0.75)
{
statistic = match.arg(statistic)
...
Then all you have to do is change c("normal", "correlation", "pvalue") to c("normal", "correlation", "pvalue", "studentt"). That is, the first few lines should now look like
function (x, statistic = c("normal", "correlation", "pvalue", "studentt"),
plot = TRUE, color.figure = TRUE, verbose = TRUE, cutoff.method = c("fndr",
"pct0", "locfdr"), pct0 = 0.75)
{
statistic = match.arg(statistic)
...
Finally, reassign this function to fdrtool (don't worry, this won't break the underlying package, it will just act like a "mask" until you remove it with rm):
fdrtool <- function (x, statistic = c("normal", "correlation", "pvalue", "studentt"),
plot = TRUE, color.figure = TRUE, verbose = TRUE, cutoff.method = c("fndr",
"pct0", "locfdr"), pct0 = 0.75)
{
statistic = match.arg(statistic)
...
And run the whole thing or source the script. Then you should be good to go.
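Incidentally, a more compact way to build the same "mask" (just a sketch of the same idea, equally unofficial) is to copy the function and widen its allowed choices via formals(); match.arg() looks up its choices in the formals of the function it was called from, so no other change is needed:
# Copy the package function, then widen the allowed 'statistic' choices.
# match.arg(statistic) inside the body reads the choices from the formals
# of this modified copy, so "studentt" becomes an accepted value.
fdrtool <- fdrtool::fdrtool
formals(fdrtool)$statistic <- c("normal", "correlation", "pvalue", "studentt")
# then, e.g.: fdrtool(my.tstats, statistic = "studentt")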
Turns out that the maintainer of the package, Korbinian Strimmer, disabled the t-score-based statistic on purpose, because it has been used incorrectly too often.
Prof. Strimmer is a nice guy and responded to my help request quickly and very comprehensively. This is what he suggests: in practice, t-scores often do not follow a t-distribution but rather show over- or underdispersion, which is why you are better off using the "normal" option.
Before that, however, you will have to center your data:
z.centered = z - median(z)
fdrtool(z.centered, statistic = "normal")