Non-Linear Optimization in R with nloptr vs Excel

I can't seem to find an answer for my particular problem. I'm trying to minimize the mean absolute error between a real value vector and a linear combination of models as follows:
library(nloptr)
df <- data.frame(
real = c(24.2418, 21.7374, 7.203, 115.0233, 16.632, 5.4644, 27.8917, 0.5904, 0.633, 105.3656, 110.0564, 122.9399, 23.0418, 4.2186, 109.5453),
m_1 = c(17.7790979854662, 32.93639375, 6.94294375000066, 98.909065625, 80.1068562499999, 11.656556250002, 39.868921875, 0.859157480988586, 0.612625482376216, 112.580383416359, 151.896765176957, 155.81521460987, 7.3257, 4.1268, 41.5711703205879),
m_2 = c(25.5062900474607, 32.7709877317137, 6.94317649489279, 98.898593448344, 80.1114267521597, 11.6563450007055, 39.8691722409057, 0.907260912997819, 0.602795676399197, 114.183526809465, 139.51052151724, 149.993624420536, 6.85002142907728, 4.66668862149305, 70.7527906311631),
m_3 = c(27.1495912199999, 40.2339353399999, 7.10339542, 87.1444967133334, 58.4563384933332, 11.1378198366666, 37.6030141333333, 0.852288459999999, 0.681724560000002, 100.101136186666, 118.536525109999, 136.923264319999, 5.64763034333333, 4.8659515, 70.12675835),
m_4 = c(25.511590625, 32.9363937499999, 7.00050769504031, 98.3660974929738, 80.10685625, 11.65655625, 39.868921875, 0.665580984791989, 0.498756215272925, 85.791042265746, 135.619508469251, 140.946144588455, 5.05824305930683, 3.25333636845094, 22.0908752674237),
m_5 = c(25.6118152340655, 34.5646719234769, 6.82416840383483, 91.5582383465651, 84.4603847826215, 11.3405620277701, 40.7906062254877, 0.908706704665592, 0.602817399156822, 114.326905157898, 139.595783699511, 150.046375909198, 6.8511793011574, 4.6622942290559, 56.2753212961812),
m_6 = c(21.9868574376585, 44.3147731773466, 6.38315486686481, 100.303757097094, 9.13921010739697, 7.83817900918309, 31.5458855316741, 1.09960505333834, 0.817751834425678, 101.110814846224, 145.55847538105, 142.82362305075, 7.61732986965459, 4.6774198307473, 67.5821464371521)
)
best_dist <- function(x) {
  output <- df$m_1 * x[1] + df$m_2 * x[2] + df$m_3 * x[3] +
    df$m_4 * x[4] + df$m_5 * x[5] + df$m_6 * x[6]
  mean(abs(output - df$real))
}
restriction <- function(x) sum(x) - 1
nloptr(
  x0 = rep(1 / 6, 6),
  eval_f = best_dist,
  lb = rep(0, 6),
  ub = rep(1, 6),
  eval_g_eq = restriction,
  opts = list(algorithm = "NLOPT_GN_ISRES", xtol_rel = 1e-16, maxeval = 1e4)
)
As you can see, I'm using the nloptr package. The above code yields a suboptimal value of 14.85 for the objective function, and the returned parameters are just the initial parameters. You may change the initial parameters to some other vector and still won't get the optimal solution.
However, using the Excel solver one can easily get a value of 10.77 for the objective function with parameters (0, 0, .15, 0, 0, .85).
I've tried using an algorithm with a gradient, but I can't seem to get the syntax right. Here's my other attempt:
gradient <- function(x) {
  output <- df$m_1 * x[1] + df$m_2 * x[2] + df$m_3 * x[3] +
    df$m_4 * x[4] + df$m_5 * x[5] + df$m_6 * x[6]
  err <- output - df$real
  c(
    -sum(sign(err) * df$m_1),
    -sum(sign(err) * df$m_2),
    -sum(sign(err) * df$m_3),
    -sum(sign(err) * df$m_4),
    -sum(sign(err) * df$m_5),
    -sum(sign(err) * df$m_6)
  )
}
nloptr(
  x0 = runif(6),
  eval_f = best_dist,
  eval_grad_f = gradient,
  lb = rep(0, 6),
  ub = rep(1, 6),
  eval_g_eq = restriction,
  opts = list(algorithm = "NLOPT_GN_ISRES", xtol_rel = 1e-16, maxeval = 1e4)
)

This is like LAD (least absolute deviations) regression with a side constraint. LP (linear programming) formulations for LAD regression are shown here. Using your naming of the data (real[i], m[i,j]), we have:
min sum(i, | real[i] - sum(j,m[i,j]*x[j]) | )
subject to
sum(j, x[j]) = 1
0 <= x[j] <= 1
This can be linearized (using variable splitting) as follows:
min sum(i, r1[i]+r2[i])
subject to
r1[i]-r2[i] = real[i] - sum(j,m[i,j]*x[j])
sum(j, x[j]) = 1
bounds:
r1[i], r2[i] >= 0 (positive variables)
0 <= x[j] <= 1
This is a pure LP and can be solved with any LP solver.
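Below is a sketch of this LP in R using the lpSolve package (the solver choice is mine; any LP solver will do). The variables are ordered (x[1..6], r1[1..15], r2[1..15]); note that x[j] <= 1 is already implied by sum(x) = 1 with x >= 0, and lp() makes all variables nonnegative by default.
library(lpSolve)
n <- nrow(df)                            # 15 observations
M <- as.matrix(df[, paste0("m_", 1:6)])  # model matrix
p <- ncol(M)                             # 6 weights
obj <- c(rep(0, p), rep(1, 2 * n))       # minimize sum(r1) + sum(r2)
con_resid <- cbind(M, diag(n), -diag(n)) # M x + r1 - r2 = real
con_sum <- c(rep(1, p), rep(0, 2 * n))   # sum(x) = 1
sol <- lp(direction = "min",
          objective.in = obj,
          const.mat = rbind(con_resid, con_sum),
          const.dir = rep("=", n + 1),
          const.rhs = c(df$real, 1))
sol$solution[1:p]  # the weights x
sol$objval / n     # the mean absolute error
This should reproduce the Excel result of about 10.77 with weights (0, 0, .15, 0, 0, .85).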

Related

MathOptInterface.OTHER_ERROR when trying to use ISRES of NLopt through JuMP

I am trying to minimize a nonlinear function with nonlinear inequality constraints using NLopt and JuMP.
In my test code below, I am minimizing a function with a known global minimum.
Local optimizers such as LD_MMA fail to find this global minimum, so I am trying to use global optimizers of NLopt that allow nonlinear inequality constraints.
However, when I check my termination status, it says termination_status(model) = MathOptInterface.OTHER_ERROR. I am not sure which part of my code to check for this error.
What could be the cause?
I am using JuMP since in the future I plan to use other solvers such as KNITRO as well, but should I rather use the NLopt syntax?
Below is my code:
# THIS IS A CODE TO SOLVE FOR THE TOYMODEL
# THE EQUILIBRIUM IS CHARACTERIZED BY A NONLINEAR SYSTEM OF ODEs OF INCREASING FUNCTIONS B(x) and S(y)
# THE GOAL IS TO APPROXIMATE B(x) and S(y) WITH POLYNOMIALS
# FIND THE POLYNOMIAL COEFFICIENTS THAT MINIMIZE THE LEAST SQUARES OF THE EQUILIBRIUM EQUATIONS
# load packages
using Roots, NLopt, JuMP
# model primitives and other parameters
k = .5 # equal split
d = 1 # degree of polynomial
nparam = 2*d+2 # number of parameters to estimate
m = 10 # number of grids
m -= 1
vGrid = range(0,1,m) # discretize values
c1 = 0 # lower bound for B'() and S'()
c2 = 2 # lower and upper bounds for offers
c3 = 1 # lower and upper bounds for the parameters to be estimated
# objective function to be minimized
function obj(α::T...) where {T<:Real}
    # split parameters
    αb = α[1:d+1]   # coefficients for B(x)
    αs = α[d+2:end] # coefficients for S(y)
    # define B(x), B'(x), S(y), and S'(y)
    B(v) = sum([αb[i] * v .^ (i-1) for i in 1:d+1])
    B1(v) = sum([αb[i] * (i-1) * v ^ (i-2) for i in 2:d+1])
    S(v) = sum([αs[i] * v .^ (i-1) for i in 1:d+1])
    S1(v) = sum([αs[i] * (i-1) * v ^ (i-2) for i in 2:d+1])
    # the equilibrium is characterized by the following first order conditions
    # FOCb(y) = B(k * y * S1(y) + S(y)) - S(y)
    # FOCs(x) = S(- (1-k) * (1-x) * B1(x) + B(x)) - B(x)
    function FOCb(y)
        sy = S(y)
        binv = find_zero(q -> B(q) - sy, (-c2, c2))
        return k * y * S1(y) + sy - binv
    end
    function FOCs(x)
        bx = B(x)
        sinv = find_zero(q -> S(q) - bx, (-c2, c2))
        return (1-k) * (1-x) * B1(x) - B(x) + sinv
    end
    # evaluate the FOCs at each grid point and return the sum of squares
    Eb = [FOCb(y) for y in vGrid]
    Es = [FOCs(x) for x in vGrid]
    E = [Eb; Es]
    return E' * E
end
# this is the actual global minimum
αa = [1/12, 2/3, 1/4, 2/3]
obj(αa...)
# do optimization
model = Model(NLopt.Optimizer)
set_optimizer_attribute(model, "algorithm", :GN_ISRES)
@variable(model, -c3 <= α[1:nparam] <= c3)
@NLconstraint(model, [j = 1:m], sum(α[i] * (i-1) * vGrid[j] ^ (i-2) for i in 2:d+1) >= c1) # B should be increasing
@NLconstraint(model, [j = 1:m], sum(α[d+1+i] * (i-1) * vGrid[j] ^ (i-2) for i in 2:d+1) >= c1) # S should be increasing
register(model, :obj, nparam, obj, autodiff=true)
@NLobjective(model, Min, obj(α...))
println("")
println("Initial values:")
for i in 1:nparam
    set_start_value(α[i], αa[i]+rand()*.1)
    println(start_value(α[i]))
end
JuMP.optimize!(model)
println("")
@show termination_status(model)
@show objective_value(model)
println("")
println("Solution:")
sol = [value(α[i]) for i in 1:nparam]
My output:
Initial values:
0.11233072522513032
0.7631843020124309
0.3331559403539963
0.7161240026812674
termination_status(model) = MathOptInterface.OTHER_ERROR
objective_value(model) = 0.19116585196576466
Solution:
4-element Vector{Float64}:
0.11233072522513032
0.7631843020124309
0.3331559403539963
0.7161240026812674
I answered on the Julia forum: https://discourse.julialang.org/t/mathoptinterface-other-error-when-trying-to-use-isres-of-nlopt-through-jump/87420/2.
Posting my answer for posterity:
You have multiple issues:
range(0,1,m) should be range(0,1; length = m) (how did this work otherwise?). This is true for Julia 1.6; the range(start, stop, length) method was only added in Julia v1.8.
Sometimes your objective function errors because the root doesn't exist. If I run with Ipopt, I get
ERROR: ArgumentError: The interval [a,b] is not a bracketing interval.
You need f(a) and f(b) to have different signs (f(a) * f(b) < 0).
Consider a different bracket or try fzero(f, c) with an initial guess c.
Here's what I would do:
using JuMP
import Ipopt
import Roots
function main()
    k, d, c1, c2, c3, m = 0.5, 1, 0, 2, 1, 10
    nparam = 2 * d + 2
    m -= 1
    vGrid = range(0, 1; length = m)
    function obj(α::T...) where {T<:Real}
        αb, αs = α[1:d+1], α[d+2:end]
        B(v) = sum(αb[i] * v^(i-1) for i in 1:d+1)
        B1(v) = sum(αb[i] * (i-1) * v^(i-2) for i in 2:d+1)
        S(v) = sum(αs[i] * v^(i-1) for i in 1:d+1)
        S1(v) = sum(αs[i] * (i-1) * v^(i-2) for i in 2:d+1)
        function FOCb(y)
            sy = S(y)
            binv = Roots.fzero(q -> B(q) - sy, zero(T))
            return k * y * S1(y) + sy - binv
        end
        function FOCs(x)
            bx = B(x)
            sinv = Roots.fzero(q -> S(q) - bx, zero(T))
            return (1-k) * (1-x) * B1(x) - B(x) + sinv
        end
        return sum(FOCb(x)^2 + FOCs(x)^2 for x in vGrid)
    end
    αa = [1/12, 2/3, 1/4, 2/3]
    model = Model(Ipopt.Optimizer)
    @variable(model, -c3 <= α[i=1:nparam] <= c3, start = αa[i] + 0.1 * rand())
    @constraints(model, begin
        [j = 1:m], sum(α[i] * (i-1) * vGrid[j]^(i-2) for i in 2:d+1) >= c1
        [j = 1:m], sum(α[d+1+i] * (i-1) * vGrid[j]^(i-2) for i in 2:d+1) >= c1
    end)
    register(model, :obj, nparam, obj; autodiff = true)
    @NLobjective(model, Min, obj(α...))
    optimize!(model)
    print(solution_summary(model))
    return value.(α)
end
main()

deSolve ODE Not Working with Differential Equations (Calculates NA)

I have been using the deSolve::ode45 method, which was working until I made a few necessary changes to my equations. Does anyone know why the ODE solver is not working? I have tried running with ode45 as well as the default ode method and neither works. Please let me know if any further explanation would be helpful.
I have checked over the differential equations and I am confident they are correct.
The equations used are as follows:
CCHFModel = function(t, x, params)
{
  # get SIR values
  SH <- x[1]
  EH <- x[2]
  IA <- x[3]
  IS <- x[4]
  RH <- x[5]
  ST <- x[6]
  IT <- x[7]
  SC <- x[9]
  IC <- x[10]
  RC <- x[11]
  # Load values ----
  # Beta values
  betaHHA = params["betaHHA"]
  betaHHS = params["betaHHS"]
  betaTH = params["betaTH"]
  betaCH = params["betaCH"]
  betaTC = params["betaTC"]
  betaCT = params["betaCT"]
  betaTT = params["betaTT"]
  # Gamma value
  gamma = params["gamma"]
  # death rates
  muH = params["muH"]
  muT = params["muT"]
  muC = params["muC"]
  # birth rates
  piH = params["piH"]
  piT = params["piT"]
  piC = params["piC"]
  # incubation
  deltaHS = params["deltaHS"]
  deltaHA = params["deltaHA"]
  # recovery rate
  alphaA = params["alphaA"]
  alphaS = params["alphaS"]
  alphaC = params["alphaC"]
  # total population
  NH = (SH + IA + IS + EH + RH) + (piH * SH) - (muH * SH)
  NT = (ST + IT) + (piT * ST) - (muT * ST)
  NC = (SC + IC + RC) + (piC * SC) - (muH * SC)
  # tick carrying capacity
  # KT = NC * 130 # 130 ticks per carrier max
  # computations ----
  dSHdt <- (piH * NH) - (betaHHA * IA + betaHHS * IS + betaCH * IC + betaTH * IT)*(SH/NH) - (muH * SH)
  dEHdt <- (betaHHA * IA + betaHHS * IS + betaCH * IC + betaTH * IT)*(SH/NH) - ((deltaHA + muH)*EH)
  dIAdt <- (deltaHA * EH) - ((alphaA + muH + deltaHS) * IA)
  dISdt <- (deltaHS * IA) - ((alphaS + muH + gamma) * IS)
  dRHdt <- alphaA * IA + alphaS * IS - muH*RH
  dSTdt <- (piT * NT) - (betaTT * IT + betaCT * IC)*(ST/NT) - (muT * ST)
  dITdt <- (betaTT * IT + betaCT * IC)*(ST/NT) - (muT * IT)
  dSCdt <- (piC * NC) - (betaTC * IT)*(SC/NC) - (muC * SC)
  dICdt <- (betaTC * IT)*(SC/NC) - ((alphaC + muC) * IC)
  dRCdt <- (alphaC * IC) - (muC * RC)
  # return results
  list(c(dSHdt, dEHdt, dIAdt, dISdt, dRHdt, dSTdt, dITdt, dSCdt, dICdt, dRCdt))
}
I run the ODE solver using:
defaultParms = c(betaHHA = .0413,
                 betaHHS = .0413,
                 betaTH = .2891,
                 betaCH = .0826,
                 betaTC = (1/365),
                 betaCT = 59/365,
                 betaTT = ((1/(365 * 2)) * .04) * 280,
                 gamma = 1/10,
                 muH = (1/(365 * 73)),
                 muT = (1/(365 * 2)),
                 muC = (1/(11 * 365)),
                 piH = 1.25/(73 * 365),
                 piT = 4.5/730,
                 piC = 1/(11 * 365),
                 deltaHS = 1/3,
                 deltaHA = 1/2,
                 alphaA = 1/17,
                 alphaS = 1/17,
                 alphaC = 1/7)
# time to start solution
t = seq(from = 0, to = 365, by = 0.1)
#initialize initial conditions
initialConditions = c(SH = 10000, EH = 5, IA = 5, IS = 10, RH = 2, ST = 80000, IT = 50, SC = 30000, IC = 5, RC = 1)
dataSet = ode(y = initialConditions, times = t, func = CCHFModel, parms = defaultParms) %>%
  as.data.frame()
After running this, all the output following the initial conditions is NA.
This is due to a typo: you misnumbered the translation of input values in the first section of your code (i.e., you skipped x[8]). I will go through two (hopefully) useful exercises, first explaining how I debugged this and then showing how to rewrite your function to make it less error-prone ...
debugging
Try running the gradient function for t=0, x=<initial conditions>:
CCHFModel(0,initialConditions, defaultParms)
## piH betaHHA deltaHA deltaHS alphaA piT
## -15.02882327 12.62349834 0.53902803 0.07805607 0.88227788 385.31052332
## betaTT piC betaTC alphaC
## 0.85526763 NA NA NA
Hmm, we already see we have a problem. Why are the last three elements of the computed gradients NA?
Add browser() near the end of the function (before the dSCdt <- ... line) so we can take a closer look. Redefine the function, and try computing the gradient again.
When we get there and print out some of the quantities involved in the computation we see that both NC and RC are NA ... we can also see that an NA value of RC will cause NC to be NA, so let's check the definition of RC ...
Aha! RC is defined as x[11], but length(initialConditions) is only 10 ... and a closer look shows that we missed x[8]. Redefining properly gives non-NA values throughout (I don't know if they're correct, but at least they're not NA).
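For concreteness, the corrected unpacking of the last three states inside CCHFModel would be:
SC <- x[8]
IC <- x[9]
RC <- x[10]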
error-proofing (1)
Although using [] or [[]] to extract elements of a vector usually gives equivalent answers, you should always use [[]] when you want to extract a single element (a scalar) from a vector. Here's why:
initialConditions[11] ## NA
initialConditions[[11]] ## Error in x[[11]] : subscript out of bounds
If you use [], the NA propagates through your code and you have to hunt down the original source. If you use [[]], R fails right away and tells you where the problem is. An additional benefit is that [] propagates the names of the vector elements in a way that doesn't usually make sense (take a look at the names of the output in "debugging/1" above ...)
error-proofing (2)
You can avoid all of the tedious and error-prone unpacking of the parameter and state vectors by replacing the unpacking code (everything before the computation of total populations) with
comb <- c(as.list(x), as.list(params))
attach(comb)
on.exit(detach(comb))
Provided that your parameter and state vectors are properly named (and there are no names that overlap between them), this will create a named list and allow looking up of the elements by name within your function; on.exit(detach(comb)) makes sure that everything gets cleaned up properly at the end. (You will see recommendations to use with() to do this; I prefer the strategy here because it makes debugging within the function [if necessary] easier. But as @tpetzoldt notes in comments, you should always pair attach(...) with on.exit(detach(...)); otherwise things get very confusing and messy ...)
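For reference, here is a toy sketch of the with() alternative just mentioned (the state and parameter names are made up for illustration):
state <- c(S = 10, I = 1)
params <- c(beta = 0.5)
with(c(as.list(state), as.list(params)), {
  dS <- -beta * S * I
  dI <- beta * S * I
  c(dS, dI)
})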
At the end of the function I would use
g <- c(dSHdt, dEHdt, dIAdt, dISdt, dRHdt, dSTdt, dITdt, dSCdt, dICdt, dRCdt)
names(g) <- names(x)
list(g)
to make sure the gradient vector is properly labeled, which makes troubleshooting easier.

Linear optimization with R

I'm trying to maximize the following function over x[1], x[2]:
(a - 1) * x[1] * c + (a - 1) * x[2] * b * d
Where a, b, c, d are known positive constants and
0 < x[1] < 1, 0 < x[2] < 1,
x[1] + x[2] = 1
Using NlcOptim, I did:
solver_1 <- function(a, b, c, d) {
  obj = function(x) {
    return((a - 1) * x[1] * c + (a - 1) * x[2] * b * d)
  }
  con = function(x) {
    f = NULL
    f = rbind(f, x[1] + x[2] - 1)
    return(list(ceq = f, c = NULL))
  }
  x0 = c(1, 0)
  solnl(x0, objfun = obj, confun = con)
}
solver_1(1.2, 5.2, 0.8, 0.1)
solver_1(1.2, 5.2, 0.8, 0.1)
But it gives me:
Error in if (norm(H, "I") == 0) { : missing value where TRUE/FALSE needed
Does anyone know what I'm doing wrong?
Using an optimization package is massive overkill.
Either by following up on the comment of @Roland and doing some elementary calculus, or by using the corner-point theorem of linear programming, your objective function is optimized when x[1] = 0 and x[2] = 1 or vice versa. Thus you only have to evaluate two constant expressions in a, b, c, d and pick the larger of the two:
max(c*(a-1), b*d*(a-1))
For any a other than 1, which one is larger is determined by whether or not c > b*d (for a > 1; the comparison flips when a < 1, since both terms are then negative).
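In code, a minimal sketch of that closed-form comparison (the function name solver_2 is mine, and it treats the open bounds as closed, just as the answer does):
solver_2 <- function(a, b, c, d) {
  # value of the objective at the two corner points (1, 0) and (0, 1)
  v1 <- c * (a - 1)
  v2 <- b * d * (a - 1)
  if (v1 >= v2) c(1, 0) else c(0, 1)
}
solver_2(1.2, 5.2, 0.8, 0.1)  # returns c(1, 0) here, since 0.16 > 0.104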

Calculating double integral with gamma incomplete function in R

I need to calculate a double integral over two variables (B0 and B1) in R.
So far, nothing complicated with the int2 function.
But my function to integrate includes the incomplete gamma function (gammainc in R)!
The following error message appears:
Error in gammainc(1/eta, lambda * exp(B0 + B1 * z_arm) * tmax^eta) :
Arguments must be of length 1; function is not vectorized.
Any advice to help me?
tmax = 5
Sig = matrix(c(0.2, 0, 0, 0.4), ncol = 2)
Mu = matrix(c(1, 0), ncol = 1)
eta = 0.5
lambda = 0.8
z_arm = c(rep(0.5, 10), rep(1, 15))
to.integrate = function(B0, B1)
{
  first.int = 1/eta * (lambda * exp(B0 + B1 * z_arm))^(-1/eta) * gammainc(1/eta, lambda * exp(B0 + B1 * z_arm) * tmax^eta)['lowinc']
  B = matrix(c(B0, B1), ncol = 1)
  multi.norm = 1 / (2 * pi * det(Sig)^(1/2)) * exp(-0.5 * t(B - Mu) * solve(Sig) * (B - Mu))
  return(first.int * multi.norm)
}
int2(to.integrate, a = c(-Inf, -Inf), b = c(Inf, Inf), eps = 1.0e-6, max = 16, d = 5)
Thanks for any help!
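The error message itself points at the fix: gammainc only accepts scalar arguments. A minimal sketch of a workaround, assuming gammainc comes from the pracma package and keeping the question's argument order:
library(pracma)
# vectorized wrapper around the scalar-only gammainc
gammainc_low <- Vectorize(function(arg1, arg2) gammainc(arg1, arg2)["lowinc"])
# inside to.integrate() the call would then read:
# gammainc_low(1/eta, lambda * exp(B0 + B1 * z_arm) * tmax^eta)
(Separately, the quadratic form in multi.norm probably wants %*% rather than elementwise *.)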

Using lanczos low pass filter in R program

I am wondering if there is any package which allows us to use the Lanczos filter. I found other filters such as Butterworth, but I am looking for a Lanczos low-pass filter.
How different is the Lanczos filter from the Butterworth filter? Any suggestions or hints are appreciated.
Thanks.
Searching the web, I found this MATLAB implementation.
If you skip the first part (the argument checks), it looks simple to write its R equivalent.
# Cf - Cut-off frequency (default: half Nyquist)
# M - Number of coefficients (default: 100)
lanczos_filter_coef <- function(Cf, M = 100) {
  lowpass_cosine_filter_coef <- function(Cf, M) {
    Cf * c(1, sin(pi * seq(M) * Cf) / (pi * seq(M) * Cf))
  }
  hkcs <- lowpass_cosine_filter_coef(Cf, M)
  sigma <- c(1, sin(pi * seq(M) / M) / (pi * seq(M) / M))
  hkB <- hkcs * sigma
  hkA <- -hkB
  hkA[1] <- hkA[1] + 1
  coef <- cbind(hkB, hkA)
  coef
}
To test it, for example:
dT <- 1
Nf <- 1/(2*dT)
Cf <- Nf/2
Cf <- Cf/Nf
lanczos_filter_coef(Cf,5)
hkB hkA
[1,] 5.000000e-01 5.000000e-01
[2,] 2.977755e-01 -2.977755e-01
[3,] 1.475072e-17 -1.475072e-17
[4,] -5.353454e-02 5.353454e-02
[5,] -4.558222e-18 4.558222e-18
[6,] 2.481571e-18 -2.481571e-18
PS: I don't know MATLAB very well (I used it many years ago), so I used this link for the R/MATLAB analogy. I hope that someone with more R/MATLAB/Scilab knowledge can test my code.
I used the method provided in this link https://www.atmos.umd.edu/~ekalnay/syllabi/AOSC630/METO630ClassNotes13.pdf and wrote this function:
lanczos_weights <- function(window = 101, sampl_rate = 1, type = "lowpass",
                            low_freq = 1/100, high_freq = 1/10) {
  low_freq <- sampl_rate * low_freq
  high_freq <- sampl_rate * high_freq
  order <- ((window - 1) %/% 2) + 1
  # lowpass Lanczos weights for a given cut-off frequency fc
  core <- function(fc) {
    nwts <- 2 * order + 1
    w <- rep(0, nwts)
    n <- nwts %/% 2
    w[n + 1] <- 2 * fc
    k <- seq(1, n - 1)
    sigma <- sin(pi * k / n) * n / (pi * k)
    firstfactor <- sin(2 * pi * fc * k) / (pi * k)
    w[n:2] <- firstfactor * sigma
    w[(n + 2):(length(w) - 1)] <- firstfactor * sigma
    w[-c(1, length(w))]  # drop the two end points
  }
  if (type == "lowpass") {
    w <- core(low_freq)
  } else if (type == "highpass") {
    w <- -core(high_freq)
    w[order] <- 1 - 2 * high_freq
  } else if (type == "bandpass") {
    w <- core(high_freq) - core(low_freq)
  } else {
    stop("Please specify a valid filter type: either 'lowpass', 'highpass' or 'bandpass'")
  }
  w
}
#### The inputs are:
#### window: filter length = number of weights; corresponds to the total number of points to be lost. Should be odd: window = 2N-1. The formula for N is taken from Poan et al. (2013).
#### sampl_rate: sampling rate = number of observations per time unit (e.g., if the time unit is one day, hourly data have sampl_rate = 1/24)
#### type: one of "lowpass", "highpass" and "bandpass"
#### low_freq: the lowest frequency
#### high_freq: the highest frequency
I have compared my weights to those obtained using NCL filwgts_lanczos and they are exactly the same.
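A hypothetical usage sketch (the series and settings are made up): apply the weights with a centred convolution via stats::filter, which pads the (window - 1)/2 lost points at each end with NA:
set.seed(1)
x <- cumsum(rnorm(500))  # toy series
w <- lanczos_weights(window = 101, sampl_rate = 1, type = "lowpass", low_freq = 1/30)
x_low <- stats::filter(x, w, method = "convolution", sides = 2)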
