Julia ldltfact and sparse conversion

I'm trying to solve a linear system in Julia with ldltfact. Why am I getting different results in the following cases?
Setup:
srand(10)
n = 7
pre = sprand(n,n,0.5)
H = pre + pre' + speye(n,n)
p_true = rand(n)
g = H*-p_true
fac = ldltfact(H; shift=0.0)
perm = fac[:p]
p1 = zeros(n)
p2 = zeros(n)
Case 1:
LDs = sparse(fac[:LD])
q1 = LDs\-g[perm]
p1[perm] = fac[:U]\q1
H*p1 - H*p_true
Case 2:
q2 = fac[:LD]\-g[perm]
p2[perm] = fac[:U]\q2
H*p2 - H*p_true
The solution p1 is wrong in the first case.

Couldn't post this nicely as a comment, so wanted to add for posterity. Solving Case 1 in the following way worked for this example (thanks to @DanGetz's post):
L = copy(LDs)
for i = 1:size(L, 1)
    L[i, i] = 1.0        # LD stores D on its diagonal; restore L's unit diagonal
end
D = sparse(1:size(L, 1), 1:size(L, 1), diag(LDs))   # pull D back out of LD
q1 = (L*D)\-g[perm]
p1[perm] = L'\q1
H*p1 - H*p_true
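For the record, the reason Case 1 fails: sparse(fac[:LD]) returns a single matrix that stores both factors at once, with D on the diagonal and the strict lower triangle of L below it (L's own diagonal is implicitly all ones). Backslash on that combined matrix is therefore not the same as solving with L and then D. A quick sanity check, using the same Julia 0.6-era API as the question:
LDs = sparse(fac[:LD])
L = tril(LDs, -1) + speye(n)            # restore L's implicit unit diagonal
D = spdiagm(diag(LDs))                  # D lives on the diagonal of LD
maximum(abs.(L*D*L' - H[perm, perm]))   # ~ 0, since H[perm,perm] == L*D*L'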

Related

How to fix "dt <= dtmin. Aborting" error in solveODE

I'm trying to simulate a system of ODEs (Fig. 3B in Tilman, 1994, Ecology, Vol. 75, No. 1, pp. 2-16), but Julia's integration method fails to give a solution.
The error is dt <= dtmin. Aborting.
using DifferentialEquations
TFour = @ode_def TilmanFour begin
dp1 = c1*p1*(1-p1) - m*p1
dp2 = c2*p2*(1-p1-p2) -m*p2 -c1*p1*p2
dp3 = c3*p3*(1-p1-p2-p3) -m*p3 -c1*p1*p2 -c2*p2*p3
dp4 = c4*p4*(1-p1-p2-p3-p4) -m*p4 -c1*p1*p2 -c2*p2*p3 -c3*p3*p4
end c1 c2 c3 c4 m
u0 = [0.05,0.05,0.05,0.05]
p = (0.333,3.700,41.150,457.200,0.100)
tspan = (0.0,300.0)
prob = ODEProblem(TFour,u0,tspan,p)
sol = solve(prob,alg_hints=[:stiff])
I think that you read the equations wrong. The last term in the paper is
sum(c[j]*p[j]*p[i] for j<i)
Note that every term in the equation for dp[i] has a factor p[i].
Thus your equations should read
dp1 = p1 * (c1*(1-p1) - m)
dp2 = p2 * (c2*(1-p1-p2) - m - c1*p1)
dp3 = p3 * (c3*(1-p1-p2-p3) - m - c1*p1 -c2*p2)
dp4 = p4 * (c4*(1-p1-p2-p3-p4) - m - c1*p1 - c2*p2 - c3*p3)
where I also made explicit that dpk is a multiple of pk. This is necessary, as it ensures that the dynamics stay in the orthant of positive variables.
Using Python, the plot looks like the one in the paper:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def p_ode(p, c, m):
    return [p[i]*(c[i]*(1 - sum(p[j] for j in range(i+1))) - m[i]
                  - sum(c[j]*p[j] for j in range(i))) for i in range(len(p))]

c = [0.333, 3.700, 41.150, 457.200]; m = 4*[0.100]
u0 = [0.05, 0.05, 0.05, 0.05]
t = np.linspace(0, 60, 601)
p = odeint(lambda u, t: p_ode(u, c, m), u0, t)
for k in range(4):
    plt.plot(t, p[:, k], label='$p_%d$' % (k+1))
plt.grid(); plt.legend(); plt.show()
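For completeness, the same corrected system written back in Julia. This is a sketch only, assuming DifferentialEquations.jl and using a plain in-place function instead of the @ode_def macro:
using DifferentialEquations

function tilman!(du, u, pars, t)
    c1, c2, c3, c4, m = pars
    p1, p2, p3, p4 = u
    du[1] = p1 * (c1*(1 - p1) - m)
    du[2] = p2 * (c2*(1 - p1 - p2) - m - c1*p1)
    du[3] = p3 * (c3*(1 - p1 - p2 - p3) - m - c1*p1 - c2*p2)
    du[4] = p4 * (c4*(1 - p1 - p2 - p3 - p4) - m - c1*p1 - c2*p2 - c3*p3)
end

u0 = [0.05, 0.05, 0.05, 0.05]
p = (0.333, 3.700, 41.150, 457.200, 0.100)
tspan = (0.0, 300.0)
prob = ODEProblem(tilman!, u0, tspan, p)
sol = solve(prob, alg_hints=[:stiff])
With the p[i] factored out of each equation, the stiff solver should no longer hit dt <= dtmin.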

Implementing the Izhikevich neuron model

I'm trying to implement the spiking neuron of the Izhikevich model. The formula for this type of neuron is really simple:
v[n+1] = 0.04*v[n]^2 + 5*v[n] + 140 - u[n] + I
u[n+1] = a*(b*v[n] - u[n])
where v is the membrane potential and u is a recovery variable.
If v gets above 30, it is reset to c and u is reset to u + d.
Given such a simple equation I wouldn't expect any problems. But while the graph should look like the spiking trace from the paper, what I'm actually getting looks nothing like it (both plots omitted here).
I'm completely at a loss as to what I'm doing wrong, because there's so little to get wrong. I've looked for other implementations, but the code I'm after is always hidden in a DLL somewhere. However, I'm pretty sure I'm doing exactly what the author's Matlab code (2) does. Here is my full R code:
v = -70
u = 0
a = 0.02
b = 0.2
c = -65
d = 6
history <- c()
for (i in 1:100) {
if (v >= 30) {
v = c
u = u + d
}
v = 0.04*v^2 + 5*v + 140 - u + 0
u=a*(b*v-u);
history <- c(history, v)
}
plot(history, type = "l")
To anyone who's ever implemented an Izhikevich model, what am I missing?
Useful links:
(1) http://www.opensourcebrain.org/projects/izhikevichmodel/wiki
(2) http://www.izhikevich.org/publications/spikes.pdf
Answer
So it turns out I read the formula wrong. Apparently v' means new_v = v + 0.04*v^2 + 5*v + 140 - u + I. My teachers would have written this as v[n+1] = 0.04*v[n]^2 + 6*v[n] + 140 - u[n] + I. I'm very grateful for your help in pointing this out to me.
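In other words, a minimal fix of the loop from the question only needs the explicit Euler step. A sketch (the input current I = 10 and the step dt = 0.5 ms are assumed values for illustration, not from the question):
# Minimal Euler-step fix of the original loop (I and dt are assumed values)
a = 0.02; b = 0.2; c = -65; d = 6
v = -70; u = 0
I = 10      # assumed constant input current
dt = 0.5    # assumed integration step (ms)
history <- c()
for (i in 1:2000) {
  v = v + dt*(0.04*v^2 + 5*v + 140 - u + I)   # integrate v', don't assign it
  u = u + dt*(a*(b*v - u))
  if (v >= 30) {   # spike: reset after the update
    v = c
    u = u + d
  }
  history <- c(history, v)
}
plot(history, type = "l")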
Take a look at the code that implements the Izhikevich model in R below. It produces plots for a regular spiking (RS) cell and, with the alternative parameters, a chattering (CH) cell (plots omitted here).
And the R code:
# Simulation parameters
dt = 0.01 # ms
simtime = 500 # ms
t = 0
# Injection current
I = 15
delay = 100 # ms
# Model parameters (RS)
a = 0.02
b = 0.2
c = -65
d = 8
# Params for chattering cell (CH)
# c = -50
# d = 2
# Initial conditions
v = -80 # mv
u = 0
# Input current equation
current = function()
{
  if (t >= delay)
  {
    return(I)
  }
  return(0)
}
# Model state equations
deltaV = function()
{
  return(0.04*v*v + 5*v + 140 - u + current())
}
deltaU = function()
{
  return(a*(b*v - u))
}
updateState = function()
{
  v <<- v + deltaV()*dt
  u <<- u + deltaU()*dt
  if (v >= 30)
  {
    v <<- c
    u <<- u + d
  }
}
# Simulation code
runsim = function()
{
  steps = simtime / dt
  resultT = rep(NA, steps)
  resultV = rep(NA, steps)
  for (i in seq(steps))
  {
    updateState()
    t <<- dt*(i-1)
    resultT[i] = t
    resultV[i] = v
  }
  plot(resultT, resultV,
       type = "l", xlab = "Time (ms)", ylab = "Membrane Potential (mV)")
}
runsim()
Some notes:
I've picked the parameters for the "Regular Spiking (RS)" cell from Izhikevich's site. You can pick other parameters from the two upper-right plots on that page. Uncomment the CH parameters to get a plot for the "Chattering" type cell (see the snippet after these notes).
As commenters have suggested, the first two equations in the question are incorrectly implemented differential equations. The correct way to implement the first one is something like v[n+1] = v[n] + (0.04*v[n]^2 + 5*v[n] + 140 - u[n] + I) * dt; see the code above for an example. dt is the user-specified integration time step, usually dt << 1 ms.
In the for loop in the question, the state variables u and v should be updated first, then the condition checked after.
As noted by others, a current source is needed for both of these cell types. I've used 15 (I believe these are picoamps) from this page on the author's site (bottom value for I in the screenshot). I've also implemented a delay for the current onset (the 100 ms delay parameter).
The simulation code should implement some kind of time tracking so it's easier to know when the spikes are occurring in resulting plot. The above code implements this, and runs the simulation for 500 ms.
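For example, to get the chattering plot after running the RS simulation, switch the reset parameters, reset the state, and rerun (a small usage sketch against the code above):
# Chattering (CH) cell: switch reset parameters, reset state, rerun
c = -50; d = 2
v = -80; u = 0; t = 0
runsim()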

simulate data from a linear fractional stable motion

I have to simulate some data from a linear fractional stable motion (LFSM). I have found an article where they simulate such data using Matlab. The code is from the article "Simulation methods for linear fractional stable motion and FARIMA using the Fast Fourier Transform" by Stilian Stoev and Murad S. Taqqu. The following is the Matlab code:
% Written by Stilian Stoev 05.06.2002, sstoev@math.bu.edu
%
% Usage:
% y = fftlfsn(H,alpha,m,M,C,N,n)
%
mh = 1/m;
d = H-1/alpha;
t0 = [mh:mh:1];
t1 = [1+mh:mh:M];
A = mh^(1/alpha)*[t0.^d, t1.^d-(t1-1).^d];
C = C*(sum(abs(A).^alpha)^(-1/alpha));
A = C*A;
Na = m*(M+N);
A = fft(A,Na);
y = [];
for i=1:n,
if alpha<2,
Z = rstab(alpha,0,Na)';
elseif alpha==2,
Z = randn(1,Na);
end;
Z = fft(Z,Na);
w = real(ifft(Z.*A,Na));
y = [y; w(1:m:N*m)];
end;
Example:
The commands
H = 0.2; alpha =1.5; m = 256; M = 6000; N = 2^14 - M;
y = fftlfsn(H,alpha,m,M,1,N,1);
x = cumsum(y);
generate a simulated path y of length N of linear fractional stable noise and a path x of LFSM.
In the following I have tried to translate it, but I have some questions, which I have added as comments in the code.
fftlfsn <- function(H,alpha,m,M,C,N,n){
mh = 1/m;
d = H-1/alpha;
t0 = seq(mh,mh, by =1);
t1 = seq(1+mh,mh, by=M);
# Is the following the right way to translate the matlab code into R?
A = mh^(1/alpha)*matrix(c(t0^d, t1^d-(t1-1)^d), ncol = length(t0), nrow = length(t1));
C = C*(sum(abs(A)^alpha)^(-1/alpha));
A = C*A;
Na = m*(M+N);
# I don't know if it is right to use the function "fft" here.
# Does this correspond directly to the function "fft" in Matlab?
A = fft(A,Na);
# How can I do something similar in R?
# I think they create an empty matrix? Could I just write y = 0?
y = [];
for (i in 1:n)
{
if(alpha<2){
# The function "rstab" generates symmetric alpha-stable variables. Is there a similar function in R, or do you know how to write one?
Z = t(rstab(alpha,0,Na))
}
else if(alpha==2){
Z = matrix (rnorm(Na, mean = 0, sd = 1), nrow = 1, ncol = Na)
}
# Again, can I just use the R-function "fft" directly?
Z = fft(Z,Na);
w = Re(fft(Z*A,Na, inverse= TRUE));
#I have trouble understanding the following and therefore I can't translate it.
y = [y; w(1:m:N*m)];
}
}
Any help appreciated!
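To address the questions in the comments, a fairly direct R translation of the Matlab code might look like the sketch below. Assumptions to flag: stabledist::rstable is used as a stand-in for rstab (for beta = 0 the parametrizations coincide), and since R's fft neither zero-pads nor normalizes the inverse transform, both are done explicitly:
library(stabledist)  # assumed replacement for rstab

fftlfsn <- function(H, alpha, m, M, C, N, n) {
  mh <- 1/m
  d  <- H - 1/alpha
  t0 <- seq(mh, 1, by = mh)        # Matlab mh:mh:1
  t1 <- seq(1 + mh, M, by = mh)    # Matlab (1+mh):mh:M
  # [a, b] in Matlab concatenates row vectors, so c() rather than matrix()
  A  <- mh^(1/alpha) * c(t0^d, t1^d - (t1 - 1)^d)
  C  <- C * (sum(abs(A)^alpha)^(-1/alpha))
  A  <- C * A
  Na <- m * (M + N)
  A  <- fft(c(A, rep(0, Na - length(A))))    # Matlab fft(A,Na) zero-pads to length Na
  y  <- matrix(0, n, N)                      # y = [] grows row by row; preallocate instead
  for (i in 1:n) {
    if (alpha < 2) {
      Z <- rstable(Na, alpha = alpha, beta = 0)  # symmetric alpha-stable draws
    } else {
      Z <- rnorm(Na)
    }
    Z <- fft(Z)
    w <- Re(fft(Z * A, inverse = TRUE)) / Na   # Matlab's ifft includes the 1/Na factor
    y[i, ] <- w[seq(1, N * m, by = m)]         # w(1:m:N*m)
  }
  y
}
The example from the paper then reads y <- fftlfsn(0.2, 1.5, 256, 6000, 1, 2^14 - 6000, 1) followed by x <- cumsum(y[1, ]).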

Calculate more time points from ode() of the "deSolve" R package without increasing runtime

I wrote a function, ODEsystem = function(t, states, parameters), which contains an ODE system, and solved it with the well-documented R package "deSolve" written by Karline Soetaert, Thomas Petzoldt and R. Woodrow Setzer. The documentation of the package is comprehensive, with many examples. It gives me confidence in their programming and memory-optimization skills.
However, when solving the ODE system with daily intervals instead of monthly intervals, the time it takes to calculate the state values for the specified moments increases tenfold. There may be some additional calculations needed to hit the exact requested moments in time, but in both cases roughly the same internal dynamic time steps should be made. I did not expect such a large increase in runtime.
The call to ode() in “desolve” looks like this:
out <- as.data.frame(ode(states, t=times, func=ODEsystem, parms=parameters, method="ode45"))
I used two variants for times
times = seq(0, 100*365, by=365/12) # 100 years, one time point per month
times = seq(0, 100*365, by=1) # 100 years, one time point per day
Calling with data points per month
user system elapsed
4.59 0.00 4.58
Calling with data points per month and cmpfun() on the function containing the ODEsystem
user system elapsed
4.39 0.00 4.38
Calling with data points per day
user system elapsed
44.41 0.00 44.46
Calling with data points per day and cmpfun() on the function containing the ODEsystem
user system elapsed
43.01 0.00 43.17
The runtime measured with system.time() increases by a factor of ten when switching from monthly to daily intervals. Matters do not improve much by using cmpfun() on the function containing the ODE system.
(The output "out" is only assigned when the function call to ode() is done. Thus pre-assigning "out" yields no performance gain.)
Question 1: I am looking for the reason why there is this decrease in runtime/performance.
(I expect it to be in the internals of the deSolve package.)
Question 2: Given the answer to Question 1, how can I improve the runtime without resorting to dynamic link libraries?
Pre-assigning some memory for what will become "out" might help (using knowledge of the time steps in "times"), but I do not know which internal variable in ode() to affect.
#### Clear current lists from memory
rm(list=ls())
### Load libraries
# library(rootSolve);library(ggplot2);
library(base);library(deSolve);library(stringr);library(compiler);library(data.table);
#### constants
dpy = 365
durX1 = 40*dpy;    rH = 1/durX1
durX4 = 365/12;    rX4 = 1/durX4
durX6 = 365/12;    rX6 = 1/durX6
durX2 = 80;        rX2 = 1/durX2
durX3 = 31;        rX3 = 1/durX3
durX7 = 20*365/12; rX7 = 1/durX7
durX5 = 29;        rX5 = 1/durX5
durX8 = 200;       rX8 = 1/durX8
fS = 0.013; fR = 8/100; fL = .03; fP = .03; fF = .05
X1zero = 1000; UDdur = 365/12*5; rK = rX3*(1/UDdur)
fD1 = .05; fD2 = .05; durbt = 4; bt = 1/durbt
LX11 = 14; rF = 1/LX11; durX11 = 5; rX11 = 1/durX11
iniX12 = 0; pH = 1; frac_Im = 0; durX9 = dpy*5; ini_X2 = 1
sp = .90; fpX1 = 5; NF = fpX1*X1zero
rT1 = fD1*rX4; rT2 = fD2*rX6
pX1 = 0*sp; pX2 = 1/80*sp; pX3 = .50*sp; pX4 = .5*sp; pX6 = .5*sp
pX7 = 1/100*sp; pX5 = pX3; pX9 = 0*sp; pX8 = 1*sp; rX9 = 1/durX9
#### vector with parameters
parameters = c(rH, rX3, rX4, rX6, rX2, rX8, rX7, rX5, rK, rT1, rT2, bt, rF, NF, rX11, pX1, pX2, pX3, pX4, pX6, pX7, pX5, pX9, pX8, rX9, X1zero)
### States contains initial conditions
states = c( X1 =X1zero-1,X2=1,X3=0, X4=0, X5=0,X6=0, X7=0, X8=0, X9=0, X10=NF,X11=0,X12=0, X13 = 0)
### function with ODE system
ODEsystem = function(t,states,parameters){
with(as.list(c(states,parameters)),{
### functions
X1part = pX2*X2 + pX3*X3 + pX4*X4 + pX6*X6 + pX7*X7 + pX5*X5 + pX9*X9 + pX8*X8
prob1 = bt * X12 / X1zero
lF = bt * X1part / X1zero
AD = rK*(X3+X5+X4+X6) + rT1*X4 + rT2*X6
### fluxes
J1 = prob1*X1; J2 = fS*rX2*X2; J3 = (1-fS)*rX2*X2
J4 = (1-fP)*rX3*X3; J5 = fP*rX3*X3
J6 = (1-fF)*rX4*X4; J7 = fF*rX4*X4; J8 = rX6*X6
J9 = fR*rX7*X7; J10 = rX5*X5
J11 = (1-fR)*(1-fL)*rX7*X7; J12 = (1-fR)*fL*rX7*X7
J13 = rX8*X8; J14 = rH*X3; J15 = rH*X1; J16 = rH*X2
J17 = rH*X4; J18 = rH*X6; J19 = rH*X5; J20 = rH*X8
J21 = rH*X7; J22 = rH*X9; J23 = rK*X3; J24 = rK*X4
J25 = rK*X6; J26 = rT2*X6; J27 = rH*X1zero; J28 = rT1*X4
J29 = AD; J30 = rK*X5; J31 = rF*X12; J32 = rF*X11
J33 = rF*X10; J34 = lF*X10; J35 = rX11*X11; J36 = rF*NF
J37 = rX9*X9; J38 = 0; J39 = 0; J40 = 0; J41 = 0; J42 = 0; J43 = 0
flux1 = J4/X1zero*1e4*dpy
flux2 = J12/X1zero*1e4*dpy
# rate of change
dX1 = - J1 - J15 + J27 + J29 + J37
dX2 = + J1 - J2 - J3 - J16 - J40
dX3 = + J2 - J4 - J5 - J14 - J23 - J41
dX4 = + J4 - J6 - J7 - J17 - J24 - J28
dX5 = + J9 - J10 - J19 - J30 - J43
dX6 = + J7 - J8 + J10 - J18 - J25 - J26
dX7 = + J5 + J6 + J8 - J9 - J11 - J12 - J21
dX8 = + J12 - J13 - J20 - J42
dX9 = + J3 + J11 + J13 - J22 - J37 + J40 + J41 + J42 + J43
dX10 = - J33 - J34 + J36
dX11 = - J32 + J34 - J35
dX12 = - J31 + J35
dX13 = + J38 - J39
# return the rate of change
list(c(dX1,dX2,dX3,dX4,dX5,dX6,dX7,dX8,dX9,dX10,dX11,dX12,dX13),flux1,flux2,prob1)
})
}
## compiled version of ODE system function
cfODEsystem=cmpfun(ODEsystem)
#### time points to be calculated
times = seq(0, 100*365, by=365/12) # 100 years, one time point per month
#times = seq(0, 100*365, by=1) # 100 years, one time point per day
### calculations
system.time(out <- as.data.frame(ode(states, t=times, func=ODEsystem, parms=parameters, method="ode45")))
#system.time(out <- as.data.frame(ode(states, t=times, func=cfODEsystem, parms=parameters, method="ode45")))
### longitudinal plots of each variable, flux1 and 2 and prob1
for (i in seq(from=2, to=dim(out)[2], by=1) ) {
tempdata <- out[c("time",names(out)[i])]
tempdata$time= tempdata$time/365
templabel <-names(out)[i]
plot(tempdata, col = "black", type = "l", xlab = "time (years)", ylab = templabel,
xlim = c(0, max(tempdata$time)), ylim = c(0, signif(max(tempdata[2]), 2)))
}
So thanks for writing this question; it prompted me to look into the deSolve internals and learn a bit (and maybe also speed up my own code).
Question 1
The ODE function is called a number of times by the solver itself (possibly fewer than the number of time points), but it is also called once per time point to evaluate the additional algebraic outputs (here flux1, flux2 and prob1). So if you request 30x as many time points, the runtime will always grow, but by less than a factor of 30, due to fixed costs like setup and teardown.
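You can check the call counts yourself: deSolve ships a diagnostics() helper that reports, among other things, the number of function evaluations. A small sketch (run on the raw ode object, before as.data.frame):
sol <- ode(states, times = times, func = ODEsystem, parms = parameters, method = "ode45")
diagnostics(sol)  # prints integrator statistics, including function evaluations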
Question 2
There are a few things you can do to speed things up without resorting to C code (though that is an excellent option); a short sketch follows the list.
Use a different solver (e.g. lsoda, which on my system is around 5x faster than ode45)
When using lsoda, increase the hmax to allow the adaptive timestepping more freedom to integrate ahead (another speedup)
Rewrite the code to avoid the use of with(), instead using array accesses (possibly into temporary named variables).
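A minimal sketch of the first two suggestions, applied to the call from the question (hmax = 365 is an assumed value; tune it to your dynamics):
# lsoda instead of ode45, with a generous hmax so the adaptive stepper
# can take large steps once the dynamics are slow
system.time(out <- as.data.frame(
  ode(states, times = times, func = ODEsystem, parms = parameters,
      method = "lsoda", hmax = 365)
))
# For the third suggestion, replace with(as.list(...)) inside ODEsystem by
# direct indexing into the state vector, e.g. X1 <- states[[1]], and use a
# named parameter vector.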

Not able to understand a logarithm conversion

I was going through slides of an algorithm class and came across following.
T(n) = 2T(n^(1/2)) + lg n
Rename: m = lg n => n = 2^m
T (2^m) = 2T(2^(m/2)) + m
Rename: S(m) = T(2^m). Then:
S(m) = 2S(m/2) + m
Can anyone explain how the last equation came about? I don't understand where S(m/2) comes from. Thank you.
It is just an argument substitution.
You have S(m) = T(f(m)), where f(m) = 2^m. Substitute m with m/2 and you get
S(m/2) = T(f(m/2)), where f(m/2) = 2^(m/2).
Now you can rewrite the term on the right-hand side: T(2^(m/2)) = T(f(m/2)) = S(m/2).
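Written out as one chain, the substitution looks like this:
S(m) = T(2^m) = 2T((2^m)^(1/2)) + lg(2^m) = 2T(2^(m/2)) + m = 2S(m/2) + m
As a side note (a standard consequence, not from the slides): S(m) = 2S(m/2) + m solves to S(m) = Θ(m lg m), so T(n) = S(lg n) = Θ(lg n · lg lg n).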
