I am trying to find a global reduction (global_reduct) for a time series of monthly loads.
The goal is to make sure that only 50% (or any other target) of all loads exceed a specific reference load.
global_reduct <- c(50) ## initial value
load_ref <- 450.5 ## reference load
tobject <- 50 ## 50% above reference
Here is a data example, which is a subset of 20+ years of data.
df <- data.frame(
Date=seq(as.Date('2010-01-01'), as.Date('2012-04-01'), by='months'),
load= c(1.496169954, 1.29147009, 1.964195241, 1.14352707, 1.319144304,
0.773288093, 0.65175612, 0.685340958, 0.416934849,
0.769853258, 1.104639594, 0.92213209, 1.685588986,
1.972510175, 2.6882446, 2.153314503, 1.324735759,
1.027755411, 0.610207197, 0.674642831, 0.721971375,
1.13233884, 0.739325423, 0.90031817, 1.366597449,
1.928098735, 1.216538229, 1.514353244)
)
In this case the reduction would be around 62% at a target of 50% of the reference load.
I tried to set up a function that can be called by optim to estimate the new reduct value.
optfuc <- function(reduct, ttarget=50){
  reduct_eq <- df$load * (1 - (reduct/100))
  tt_exceed <- ifelse((reduct_eq * 1000) > load_ref, 1, 0)
  ave_ref <- sum(tt_exceed)/length(tt_exceed)*100 - ttarget
  # ave_ref in this case should be = ttarget
  # ave_ref
  reduct
}
optim(c(30), optfuc, method ="L-BFGS-B", lower=0, upper=100)
How can I get the correct new reduct value?
Is there a different package that I can use?
It might be better to work with proportions, i.e. values within [0, 1], instead of percentages.
Then minimizing the absolute difference between the target proportion tobject and the share of reduced loads load - load*reduct that exceed the reference, over the interval [0, 1], gives the desired minimum, i.e. the reduction factor.
I use optimize directly here.
load_ref <- mean(df$load) ## for example
tobject <- .25 ## 25%
optfuc <- \(reduct, ref=load_ref, tol=tobject, data=df) {
  load1 <- with(data, load - load*reduct)
  abs(tol - mean(load1 > ref))
}
(o <- optimize(optfuc, c(0, 1)))
# $minimum
# [1] 0.1935267
#
# $objective
# [1] 0
reduct <- o$minimum
cat(sprintf('reduction: %s%% at target of %s%%',
            formatC(reduct*100, digits=2),
            formatC(tobject*100, digits=2)))
# reduction: 19% at target of 25%
Check:
(with(df, load - load*reduct) > load_ref) |> table() |> proportions()
# FALSE TRUE
# 0.75 0.25
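As a cross-check, an optimizer is not strictly needed here: asking that a proportion tobject of the reduced loads load*(1 - reduct) exceed load_ref is the same as asking that load_ref/(1 - reduct) sit at the (1 - tobject) empirical quantile of the loads, so reduct = 1 - load_ref/quantile(load, 1 - tobject). This is only a back-of-the-envelope check and assumes no ties, strict exceedance, and a quantile type that returns an observed value; the result can differ slightly from the optimize() minimum while giving the same exceedance share.
# closed-form cross-check; type = 1 returns an observed load value
reduct_cf <- 1 - load_ref / quantile(df$load, 1 - tobject, type = 1)
unname(reduct_cf)
mean(df$load * (1 - reduct_cf) > load_ref)  # should be (about) tobject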
Related
The code below estimates pi in R. Now I am trying to find the minimum number of terms N_Min
you would have to include in your estimate of pi to make it accurate to three decimal places.
pi_Est <- function(NTerms){
  NTerms = 5   # start with an estimate of just five terms
  pi_Est = 0   # initialise the value of pi to zero
  Sum_i = NA   # initialise the summation variable to null
  for(ii in 1:NTerms)
  {
    Sum_i[ii] = (-1)^(ii+1)/(2*ii - 1) # this is the series equation for calculating pi
  }
  Sum_i = 4*Sum_i # multiply by four as required in the formula (see lecture notes)
  pi_Est = sum(Sum_i)
  cat('\nThe estimate of pi with terms = ', NTerms, ' is ', pi_Est)
}
First of all, I would change some things about your function. Instead of getting it to print out a message, get it to return a value. Otherwise it becomes very difficult to do anything with its output, including testing it for convergence to pi.
Also, no matter what value of NTerms you feed this function, you immediately overwrite NTerms inside the function.
You could rewrite the function like this:
pi_Est <- function(NTerms) {
  pi_Est <- 0
  Sum_i <- numeric()
  for(ii in seq(NTerms))
  {
    Sum_i[ii] <- (-1)^(ii+1)/(2*ii - 1)
  }
  return(sum(4 * Sum_i))
}
And to show it converges to pi, let's test it with 50,000 terms:
pi_Est(50000)
#> [1] 3.141573
Now, if we want to find the first value of NTerms that is correct to 3 decimal places, we are going to need to be able to call this function on a vector of NTerms - at the moment it is only working on a single number. So let's define the function f that vectorizes pi_Est:
f <- Vectorize(pi_Est)
Now, let's create the estimate for all values of NTerms between 1 and 2,000 and store them in a vector:
estimates <- f(1:2000)
We can see that the values of estimates seem to oscillate round and converge to pi if we plot the first 100 values:
plot(estimates[1:100], type = 'l')
abline(h = pi)
Our answer is just the first value which, when rounded to three decimal places, is the same as pi rounded to three decimal places:
result <- which(round(estimates, 3) == round(pi, 3))[1]
result
#> [1] 1103
And we can check this is correct by feeding 1103 into our original function:
pi_Est(result)
#> [1] 3.142499
You will see that this gives us 3.142, which is the same as pi rounded to 3 decimal places.
Created on 2022-01-31 by the reprex package (v2.0.1)
1000 terms are required to make the estimate accurate to within 0.001:
pi_Est1 <- function(n) {
  # Leibniz series with n terms: 4*(1/1 - 1/3 + 1/5 - 1/7 + ...)
  if (n == 0) return(0)
  neg <- 1/seq(3, 2*n + 1, 4)       # denominators of the negative terms
  if (n%%2) neg[length(neg)] <- 0   # odd n: one more positive term than negative
  4*sum(1/seq(1, 2*n, 4) - neg)
}
pi_Est2 <- function(tol) {
  # count down from a safe upper bound until the error first exceeds tol;
  # the previous value of i (i.e. i + 1) is then the required number of terms
  for (i in ceiling(1/tol + 0.5):0) {
    est <- pi_Est1(i)
    if (abs(est - pi) > tol) break
    est1 <- est
  }
  list(NTerms = i + 1, Estimate = est1)
}
tol <- 1e-3
pi_Est2(tol)
#> $NTerms
#> [1] 1000
#>
#> $Estimate
#> [1] 3.140593
tol - abs(pi - pi_Est2(tol)$Estimate)
#> [1] 2.500001e-10
tol - abs(pi - pi_Est1(pi_Est2(tol)$NTerms - 1))
#> [1] -1.00075e-06
Created on 2022-01-31 by the reprex package (v2.0.1)
Perhaps we can try the code below
pi_Est <- function(digits = 3) {
  s <- 0
  ii <- 1
  repeat {
    s <- s + 4 * (-1)^(ii + 1) / (2 * ii - 1)
    if (round(s, digits) == round(pi, digits)) break
    ii <- ii + 1
  }
  list(est = s, iter = ii)
}
and you will see
> pi_Est()
$est
[1] 3.142499
$iter
[1] 1103
> pi_Est(5)
$est
[1] 3.141585
$iter
[1] 130658
Why not use a single line of code for the calculation?
Pi <- tail(cumsum(4*(1/seq(1,4*50000000,2))*rep(c(1,-1), 50000000)),1)
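If the running cumulative sum is not needed, summing the signed terms directly gives the same value without materialising the 100-million-element cumulative vector (same series, just a lighter variant):
# same Leibniz series, summed directly instead of via cumsum()
Pi <- sum(4/seq(1, 4*50000000, 2) * rep(c(1, -1), 50000000))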
I am trying to use the optim() function in R to minimize a value with matrix operations. In this case, I am trying to minimize the volatility of a group of stocks whose individual returns covary with each other. The objective function being minimized is calculate_portfolio_variance.
library(quantmod)
filter_and_sort_symbols <- function(symbols)
{
  # Name: filter_and_sort_symbols
  # Purpose: Convert to uppercase if not
  #          and remove any non-valid symbols
  # Input: symbols = vector of stock tickers
  # Output: filtered_symbols = filtered symbols

  # Convert symbols to uppercase
  symbols <- toupper(symbols)
  # Validate the symbol names
  valid <- regexpr("^[A-Z]{2,4}$", symbols)
  # Return only the valid ones
  return(sort(symbols[valid == 1]))
}
# Create the list of stock tickers and check that they are valid symbols
tickers <- filter_and_sort_symbols(c("AAPL", "NVDA", "MLM", "AA"))
benchmark <- "SPY"
# Set the start and end dates
start_date <- "2007-01-01"
end_date <- "2019-01-01"
# Gather the stock data using quantmod library
getSymbols(Symbols=tickers, from=start_date, to=end_date, auto.assign = TRUE)
getSymbols(benchmark, from=start_date, to=end_date, auto.assign = TRUE)
# Create a matrix of only the adj. prices
price_matrix <- NULL
for(ticker in tickers){price_matrix <- cbind(price_matrix, get(ticker)[,6])}
# Set the column names for the price matrix
colnames(price_matrix) <- tickers
benchmark_price_matrix <- NULL
benchmark_price_matrix <- cbind(benchmark_price_matrix, get(benchmark)[,6])
# Compute log returns
returns_matrix <- NULL
for(ticker in tickers){returns_matrix <- cbind(returns_matrix, annualReturn(get(ticker)))}
returns_covar <- cov(returns_matrix)
colnames(returns_covar) <- tickers
rownames(returns_covar) <- tickers
# get average returns for tickers and benchmark
ticker_avg <- NULL
for(ticker in tickers){ticker_avg <- cbind(ticker_avg, colMeans(annualReturn(get(ticker))))}
colnames(ticker_avg) <- tickers
benchmark_avg <- colMeans(annualReturn(get(benchmark)))
# create the objective function
calculate_portfolio_variance <- function(allocations, returns_covar, ticker_avg, benchmark_avg)
{
  # Name: calculate_portfolio_variance
  # Purpose: Computes expected portfolio variance, to be used as the minimization objective function
  # Input: allocations = vector of allocations to be adjusted for optimality; returns_covar = covariance matrix of stock returns
  #        ticker_avg = vector of average returns for all tickers; benchmark_avg = benchmark avg. return
  # Output: Expected portfolio variance

  # get benchmark volatility
  benchmark_variance <- (sd(annualReturn(get(benchmark))))^2
  # scale allocations for 100% investment
  allocations <- as.matrix(allocations/sum(allocations))
  # get the naive allocations
  naive_allocations <- rep(c(1/ncol(ticker_avg)), times=ncol(ticker_avg))
  portfolio_return <- sum(t(allocations)*ticker_avg)
  portfolio_variance <- t(allocations)%*%returns_covar%*%allocations
  # constraints = portfolio expected return must be greater than benchmark avg. return and
  #   portfolio variance must be less than benchmark variance (i.e. a better reward at less risk)
  if(portfolio_return < benchmark_avg | portfolio_variance > benchmark_variance)
  {
    allocations <- naive_allocations
  }
  portfolio_variance <- t(allocations)%*%returns_covar%*%allocations
  return(portfolio_variance)
}
# Specify lower and upper bounds for the allocation percentages
lower <- rep(0, ncol(returns_matrix))
upper <- rep(1, ncol(returns_matrix))
# Initialize the allocations by evenly distributing among all tickers
set.seed(1234)
allocations <- rep(1/length(tickers), times=length(tickers))
When I call the objective function manually, it returns a value as expected:
> calculate_portfolio_variance(allocations, returns_covar, ticker_avg, benchmark_avg)
[,1]
[1,] 0.1713439
However, when I use the optim() function it returns the error:
> optim_result <- optim(par=allocations, fn=calculate_portfolio_variance(allocations, ticker_avg, benchmark_avg), lower=lower, upper=upper, method="L-BFGS-B")
Error in t(allocations) %*% returns_covar : non-conformable arguments
I'm not sure of the reason, but it may be related to the way optim() repeatedly passes the allocations variable to the objective function. What can I do to fix this?
Edit: FWIW, other optimization strategies work (differential evolution, simulated annealing), but I would prefer a gradient-based method because it is considerably faster.
No error occurs if you pass the objective function itself to optim (fn=cpv rather than the value of a call to it), supply the remaining arguments by name so that optim forwards them via its ... argument, and rename the function's first argument to par:
cpv <- function(par, returns_covar, ticker_avg, benchmark_avg)
{
  # Name: cpv (calculate_portfolio_variance, with the first argument renamed to par)
  # Purpose: Computes expected portfolio variance, to be used as the minimization objective function
  # Input: par = vector of allocations to be adjusted for optimality; returns_covar = covariance matrix of stock returns
  #        ticker_avg = vector of average returns for all tickers; benchmark_avg = benchmark avg. return
  # Output: Expected portfolio variance

  # get benchmark volatility
  benchmark_variance <- (sd(annualReturn(get(benchmark))))^2
  # scale allocations for 100% investment
  par <- as.matrix(par/sum(par))
  # get the naive allocations
  naive_allocations <- rep(c(1/ncol(ticker_avg)), times=ncol(ticker_avg))
  portfolio_return <- sum(t(par)*ticker_avg); print(par)
  portfolio_variance <- t(par)%*%returns_covar%*%par
  # constraints = portfolio expected return must be greater than benchmark avg. return and
  #   portfolio variance must be less than benchmark variance (i.e. a better reward at less risk)
  if(portfolio_return < benchmark_avg | portfolio_variance > benchmark_variance)
  {
    par <- naive_allocations
  }
  portfolio_variance <- t(par)%*%returns_covar%*%par
  return(portfolio_variance)
}
I left the debugging print of par in the code; here is the top of the output from running it:
optim_result <- optim(par=allocations, fn=cpv, lower=lower, upper=upper, returns_covar=returns_covar, ticker_avg=ticker_avg, benchmark_avg=benchmark_avg, method="L-BFGS-B")
[,1]
[1,] 0.25
[2,] 0.25
[3,] 0.25
[4,] 0.25
[,1]
[1,] 0.2507493
[2,] 0.2497502
[3,] 0.2497502
[4,] 0.2497502
[,1]
[1,] 0.2492492
[2,] 0.2502503
[3,] 0.2502503
[4,] 0.2502503
#--- snipped output of six more iterations.
... and the result:
> optim_result
$par
[1] 0.25 0.25 0.25 0.25
$value
[1] 0.1713439
$counts
function gradient
1 1
$convergence
[1] 0
$message
[1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"
As I said in the comment to an unrelated question, the optim function first tries to raise and then lower the first element of par, then does the same for the second, third, and fourth elements. Finding no improvement at that point, it "decides" it has converged at a local minimum and declares convergence.
I should point out that the code for optim is rather old and the author of the original algorithm, Dr Nash, has placed an updated version on CRAN in the form of the optimx package. He says optim was good in its day, but that he thinks other procedures should be tried if it's not successful.
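For example, a minimal sketch (assuming optimx is installed; the call mirrors the optim call above, and drop() just turns the 1 x 1 matrix returned by cpv into a plain scalar):
library(optimx)
optimx_result <- optimx(par = allocations,
                        fn = function(p, ...) drop(cpv(p, ...)),
                        lower = lower, upper = upper,
                        method = c("L-BFGS-B", "nlminb"),
                        returns_covar = returns_covar,
                        ticker_avg = ticker_avg,
                        benchmark_avg = benchmark_avg)
optimx_result
Running more than one method in a single call makes it easy to compare solvers on the same objective.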
I can do it for the two-sample t test, but not for the median test, the Wilcoxon test, or the Hodges-Lehmann test.
data_2000 <- c(500,450,600,700,550,551,552)
data_2019 <- c(560,460,620,720,540,600,750)
mean(data_2000)
mean(data_2019)
mean(data_2019) - mean(data_2000)
combined_data <- c(data_2000, data_2019)
set.seed(123)
null_dist <- c()
for (i in 1:100000) {
  shuffled_data <- sample(combined_data)
  shuffled_2000 <- shuffled_data[1:7]
  shuffled_2019 <- shuffled_data[8:14]
  null_dist[i] <- mean(shuffled_2019) - mean(shuffled_2000)
}
(p_value <- (sum(null_dist >= 49.57143) + sum(null_dist <= -49.57143)) /
   length(null_dist))
I think this is what you're trying to do; I altered your code as little as possible. Packages like infer will do this for you, and the for loop is not the most efficient approach, but it is plenty good enough and may help you learn. Since we're looping anyway, I computed the mean and median differences at the same time, because all other parts of the code are identical. ifelse is an easy way to make 1s and 0s to sum.
data_2000 <- c(500,450,600,700,550,551,552)
data_2019 <- c(560,460,620,720,540,600,750)
delta_mean <- mean(data_2019) - mean(data_2000)
delta_median <- median(data_2019) - median(data_2000)
combined_data <- c(data_2000, data_2019)
trials <- 100000
set.seed(123)
mean_diff <- c()
median_diff <- c()
for (i in 1:trials) {
  shuffled_data <- sample(combined_data)
  shuffled_2000 <- shuffled_data[1:7]
  shuffled_2019 <- shuffled_data[8:14]
  mean_diff[i] <- mean(shuffled_2019) - mean(shuffled_2000)
  median_diff[i] <- median(shuffled_2019) - median(shuffled_2000)
}
p_mean <- sum(ifelse(mean_diff > delta_mean | mean_diff < -1 * delta_mean, 1, 0)) / trials
p_median <- sum(ifelse(median_diff > delta_median | median_diff < -1 * delta_median, 1, 0)) / trials
p_mean
#> [1] 0.31888
p_median
#> [1] 0.24446
Following up on your question about the Hodges-Lehmann (HL) test, quoting Wikipedia:
The Hodges–Lehmann statistic also estimates the difference between two populations. For two sets of data with m and n observations, the set of two-element sets made of them is their Cartesian product, which contains m × n pairs of points (one from each set); each such pair defines one difference of values. The Hodges–Lehmann statistic is the median of the m × n differences.
You could run it on your data with the following code.
Do NOT run it 100,000 times; the answer is the same every time because you are already making all 49 possible pairings.
hl_df <- expand.grid(data_2019, data_2000)
hl_df$pair_diffs <- hl_df$Var1 - hl_df$Var2
median(hl_df$pair_diffs)
[1] 49
You can do the Wilcoxon test with wilcox.test in the stats package (loaded by default as part of R core). You need to set exact = FALSE because an exact p-value is not possible if there are ties.
wilcox.test(data_2019, data_2000, exact = FALSE)
Wilcoxon rank sum test with continuity correction
data: data_2019 and data_2000
W = 33.5, p-value = 0.2769
alternative hypothesis: true location shift is not equal to 0
I'll update this when I figure out how to do the other tests.
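In the meantime, one way to permute the rank-sum statistic itself, following the same shuffle-and-recompute pattern as the mean/median loop above (a rough sketch only; it ignores the continuity correction that wilcox.test applies):
set.seed(123)
obs_W <- sum(rank(combined_data)[8:14])   # rank sum of the 2019 group
W_null <- replicate(trials, {
  shuffled <- sample(combined_data)
  sum(rank(shuffled)[8:14])
})
# two-sided p-value: how often is the permuted rank sum at least as far
# from its permutation mean as the observed one?
mean(abs(W_null - mean(W_null)) >= abs(obs_W - mean(W_null)))
Incidentally, wilcox.test(data_2019, data_2000, conf.int = TRUE, exact = FALSE) also reports a "difference in location" estimate, which corresponds to the Hodges-Lehmann estimator discussed above.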
I'm trying to implement a check for decreasing values of average temperatures to see when the temperature starts falling (a chart of the temperature series was attached to the original post).
Here is the formula I'm trying to implement, a CUSUM for detecting a decrease: S_t = max(0, S_{t-1} + (mu - x_t - C)), with a change flagged once S_t >= threshold.
Here is my code to implement that formula:
temps <- read.delim("temps.txt")
date_avgs <- rowMeans(temps[2:length(temps)], dims=1, na.rm=T)
mu <- 87
threshold <- 86
constant <- 3
date_avgs
S <- 0 * date_avgs
for (i in 2:length(date_avgs)) {
  value <- S[i-1] + (mu - date_avgs[i] - constant)
  cat("\nvalue", value, "si", date_avgs[i], i)
  S[i] <- max(0, value)
  if(S[i] >= threshold){
    # Once I hit this for the first time, that indicates at this index the temp is decreasing
    cat("\nDecreased past my threshold!!!", S[i], i)
  }
}
But I'm not able to detect the change as I expect: my formula doesn't get over the threshold until index 108, when it should get there around index 60 (see the plot of my S, i.e. CUSUM, values in the original post).
Any ideas what I'm doing wrong in my formula?
I think the problem is that mu <- mean(date_avgs) is basically the mean of all the observations. But mu should be the "mean of X if no change has occurred". Thus mu should be about 87, but according to your code and plotted data it seems to be 80 or less.
# simulated data
set.seed(4422)
date_avgs <- c(runif(60, 84, 92), 88-(1:50)-rnorm(50,0,4))
plot(date_avgs)
# setting constants
mu <- 87
threshold <- 86
constant <- 3
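# reproduce the core of the CUSUM loop from the question (printing omitted),
# so that S is defined for the simulated series
S <- 0 * date_avgs
for (i in 2:length(date_avgs)) {
  S[i] <- max(0, S[i-1] + (mu - date_avgs[i] - constant))
}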
# after running the CUSUM loop, find the first index at or above the threshold
Index <- match(S[S >= threshold][1], S)
Index
[1] 75
# the simulated data just before the detection index
> date_avgs[74]
[1] 73.41981
# Considering a lower threshold
# (e.g. 2 * C, as the maximum allowable difference for detecting the trend)
mu <- 87
threshold <- 6 # arbitrary
constant <- 3
# S is unchanged (the loop depends only on mu and constant), so just re-check the threshold
Index <- match(S[S >= threshold][1], S)
Index
[1] 66
So I think the code is fine; maybe the interpretation is not.
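A quick plot of the CUSUM path makes this easier to judge (a sketch, using the S computed above and the two thresholds tried here):
plot(S, type = "l", ylab = "S (CUSUM)")
abline(h = c(86, 6), lty = 2)   # the original and the lower threshold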
I want to find the mode (x-value) of a univariate density function using R's optimize function.
E.g., for a normal density f(x) ~ N(3, 1), the mode should be the mean, i.e. x = 3.
I tried the following:
# Define the function
g <- function(x) dnorm(x = x, mean = 3, sd = 1)
Dvec <- c(-1000, 1000)
# First get the gradient of the function (grad() is from the numDeriv package; gradfun is not actually used below)
gradfun <- function(x){grad(g, x)}
# Find the maximum value
x_mode <- optimize(f=g,interval = Dvec, maximum=TRUE)
x_mode
This gives the incorrect value of the mode as:
$maximum
[1] 999.9999
$objective
[1] 0
This is incorrect, i.e. it returns the upper end of the (-1000, 1000) interval rather than x = 3.
Could anyone please help edit the optimisation code?
It will be used with more generic functions of x if this simple test case works.
I would use optim for this, avoiding the interval argument altogether. You can tailor the starting value by taking the maximum of the function over the originally guessed interval:
guessedInterval = min(Dvec):max(Dvec)
superStarSeed = guessedInterval[which.max(g(guessedInterval))]
optim(par=superStarSeed, fn=function(y) -g(y))
#$par
#[1] 3
#$value
#[1] -0.3989423
#$counts
#function gradient
# 24 NA
#$convergence
#[1] 0
#$message
#NULL
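If you do want to stay with optimize, the problem is the very wide (-1000, 1000) interval: the density is numerically zero at almost every point the golden-section search probes, so it drifts to a boundary. One option (a sketch, assuming the density is unimodal and a coarse grid is fine enough to land near the peak) is to bracket the maximum on a grid first and then let optimize refine it on a narrow interval:
grid <- seq(min(Dvec), max(Dvec), length.out = 10000)
peak <- grid[which.max(g(grid))]   # coarse location of the maximum
optimize(f = g, interval = peak + c(-2, 2), maximum = TRUE)
# $maximum should now be very close to 3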