This is a really basic question but this is the first time I've used MATLAB and I'm stuck.
I need to simulate a simple series RC network using 3 different numerical integration techniques. I think I understand how to use the ode solvers, but I have no idea how to enter the differential equation of the system. Do I need to do it via an m-file?
It's just a simple RC circuit in the form:
RC dy(t)/dt + y(t) = u(t)
with zero initial conditions. I have the values for R and C, the step length, and the simulation time, but I don't know how to use MATLAB particularly well.
Any help is much appreciated!
You are going to need a function file that takes t and y as input and gives dy as output. It would be its own file with the following header.
function dy = rigid(t,y)
Save it as rigid.m on the MATLAB path.
From there you would put in your differential equation. You now have a function. Here is a simple one:
function dy = rigid(t,y)
dy = sin(t);
From the command line or a script, you drive this function through ode45:
[T,Y] = ode45(@rigid,[0 2*pi],[0]);
This will give you your function (rigid.m) running from time 0 through time 2*pi with an initial y of zero.
Plot this:
plot(T,Y)
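If you want to wire in the RC equation itself, here is a minimal sketch (the values of R and C and the unit-step input below are placeholder assumptions, not taken from the question):
% RC*dy/dt + y = u(t)  =>  dy/dt = (u(t) - y)/(R*C)
R = 1e3;                           % resistance (placeholder value)
C = 1e-6;                          % capacitance (placeholder value)
u = @(t) 1;                        % unit step input (assumed)
rc = @(t,y) (u(t) - y)/(R*C);
[T,Y] = ode45(rc, [0 5*R*C], 0);   % zero initial condition, run for 5 time constants
plot(T,Y)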
More of the MATLAB documentation is here:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/ode23tb.html
The Official Matlab Crash Course (PDF warning) has a section on solving ODEs, as well as a lot of other resources I found useful when starting Matlab.
How to integrate an interval-algorithm-based nonlinear function (for example, d(F(X))/dX= a/(1+cX), where a=[1, 2], c=[2, 3] are interval constants) using the "IntervalArithmetic" package in Julia? Could you please give an example? Or could you please provide a relevant document? F(X) will come out as an interval with bounds, F(X)=[p, q].
Just numerical integration?
As long as the integrating code is written in Julia (otherwise I suspect it will struggle to understand IntervalArithmetic) and there isn't some snag about how it should interpret tolerances, then it should just work, much as you might expect it to handle e.g. complex numbers.
using IntervalArithmetic
f(x) = interval(1,2)/(1+interval(2,3)*x)
and combined with e.g.
using QuadGK
quadgk(f, 0, 1)
gives ([0.462098, 1.09862], [0, 0.636515]) (so I guess the interpretation here is that the error estimate lies somewhere in the interval 0 to 0.636515).
Just as a sanity check, let's go with the good old trapezoidal rule.
using Trapz
xs = range(0, 1,length=100)
trapz(xs, f.(xs))
again gives us the expected interval [0.462098, 1.09862].
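For another cross-check (my own reasoning, not from the answers above): the antiderivative gives the integral from 0 to 1 of a/(1+c*x) dx = (a/c)*log(1+c), and evaluating that at the corners of the parameter box reproduces the reported bounds:
# exact integral is (a/c)*log(1+c); extremes over a in [1,2], c in [2,3]
(1/3)*log(1 + 3)   # ≈ 0.462098 (a = 1, c = 3 gives the lower bound)
(2/2)*log(1 + 2)   # ≈ 1.098612 (a = 2, c = 2 gives the upper bound)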
I am solving a system of ODEs (an SIRD infection model) with Xcos in Scilab 6.1.1 on Windows 10, but the user-defined function gives me errors.
Initial conditions: S(0)=10^7-1000; I(0)=1000; R(0)=0; D(0)=0.
I tried to use the 1/S block, but it does not accept vector initial conditions, so I used an integrator block and I am not sure whether it is correct or not. Please, I need your help to figure out this error. I am attaching a screenshot of the Xcos file of the SIRD model simulation.
You don't need Xcos for such a simulation. Use the ode() solver directly, as in the following example (replace the parameters with your values):
function dxdt=f(t,x)
    // state vector x = [S; I; R; D]
    S=x(1);
    I=x(2);
    R=x(3);
    D=x(4);
    // rows of dxdt: dS/dt, dI/dt, dR/dt, dD/dt
    dxdt=[-β*S*I/N
          β*S*I/N-γ*I-μ*I
          γ*I
          μ*I]
end
N = 1000;
β = 0.4;
γ = 0.035;
μ = 0.0035;
t=linspace(0,100,1000);
x0=[997; 3; 0; 0]
x=ode(x0,0,t,f);
clf
plot(t,x)
legend S I R D
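If you want the initial conditions from the question (and assuming the total population is N = 10^7; the rate constants above are only example values), just replace the last block with:
N = 1e7;
x0 = [1e7-1000; 1000; 0; 0];   // S(0), I(0), R(0), D(0)
t = linspace(0,100,1000);
x = ode(x0,0,t,f);
plot(t,x)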
I'm modelling an overhead crane and obtained the following equations:
I'm a noob when it comes to Scilab; so far I have only simulated (using ode) linear systems with no more than two degrees of freedom, which are simple systems that I can easily put in matrix form and integrate.
But I have no clue how to simulate this particular system, not because of the sin and cos functions, but because I don't know how to put it in state-space form.
I've looked at a few tutorials (listed below) but I didn't understand any of them. Can somebody tell me how to do it, or at least point me to where I could learn it?
http://www.openeering.com/sites/default/files/Nonlinear_Systems_Scilab.pdf
http://www.math.univ-metz.fr/~sallet/ODE_Scilab.pdf
Thank you, and sorry about my English.
The usual form means writing in terms of first order derivatives. So you'll have relations where the 2nd derivative terms will be written as:
x'' = d(x')/dt
Substitute these into the equations you have. You'll end up with eight simultaneous ODEs to solve instead of four, with appropriate initial conditions.
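For instance, a generic sketch in Scilab (not your actual crane equations; g() is a hypothetical stand-in): a single second-order equation x'' = g(x, x') becomes the pair x' = v, v' = g(x, v):
function ds = rhs1(t, s)
    x = s(1); v = s(2);    // s stacks the coordinate and its derivative
    ds = [v; g(x, v)];     // g() stands in for whatever your equation gives for x''
endfunction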
Although this ODE system is implicit, you can solve it with a classical (explicit) ODE solver by reformulating it: if you define X = (x, L, theta, q)^T, then your system can be written in matrix form as A(X,X') * X'' = B(X,X'). Please note that the first order form of this system is
d/dt(X,X') = ( X', A(X,X')^(-1)*B(X,X') )
Suppose now that you have defined two Scilab functions A and B which compute their values from the values of X and X':
function out = A(X,Xprime)
    x = X(1)
    L = X(2)
    theta = X(3)
    qa = X(4)
    xd = Xprime(1)
    Ld = Xprime(2)
    thetad = Xprime(3)
    qad = Xprime(4);
    ...
end
function out = B(X,Xprime)
...
end
then the right-hand side of the system of 8 ODEs, as it can be given to Scilab's ode function, can be coded as follows:
function dstate_dt = rhs(t,state)
    X = state(1:4);
    Xprime = state(5:8);
    dstate_dt = [ Xprime
                  A(X,Xprime) \ B(X,Xprime)]
end
Writing the code of A() and B() according to the given equations is the only remaining (but quite easy) task.
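Once A() and B() are filled in, the simulation itself would look something like this (the initial positions and velocities X0 and Xprime0, and the time span, are placeholders you must supply):
state0 = [X0; Xprime0];          // 4 initial positions stacked on 4 initial velocities
t = linspace(0, 10, 1000);       // time grid (placeholder span)
state = ode(state0, 0, t, rhs);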
Having some trouble on my first night with Matlab. I was resting on my laurels in R, I guess...
I am trying to solve for an infinite-horizon equilibrium from an Euler equation.
Matlab seems much more apt for doing the optimization in this problem, but I am having some issues just getting started defining functions. Here is what I have:
%% ECON 20210 Honors Problem Set #2 Code
%% Written By Jacob Miller
%% Date 4/17/2014
%% In this problem, you have to write
%% a program to solve the following social planner problem:
%% Maximize, over Ct, Nt, Kt+1, the objective
%% sum from t=1 to T of B^(t-1)*(Ct^(1-y)/(1-y)) - psi*log(Nt)
%% subject to the constraint Ct = A*(Kt^a)*(Nt^(1-a)) + (1-d)*Kt - Kt+1
function [eul] = funkone(c,y,p,n)
eul = (power(c,minus(1,y))/minus(1,y)-p*log(n));
Which right now is returning the error:
EDU>> MillerEconPset2
Error using MillerEconPset2 (line 12)
Not enough input arguments.
It seems to think I'm trying to run the function, but I just want to define it. I have a similar script working in R:
myfunc<-function(c,y,n,p){
(c^(1-y))/(1-y)-n*log(p)
}
But unfortunately R is not nearly as good for solving for global equilibria. Any idea how to fix the Matlab code?
Thanks a billion.
Best,
Jake
A function is defined by creating a file containing the function and saving it with a file name matching the function name. Stand-alone functions should be defined in separate files from your main script. You will then be able to access any functions you define in this way as long as they are on your MATLAB search path or in the current working directory.
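For example (a minimal sketch using your function exactly as written; the numeric arguments in the call are placeholders), put this in a file named funkone.m:
function [eul] = funkone(c,y,p,n)
eul = (power(c,minus(1,y))/minus(1,y)-p*log(n));
end
and then call it from your script or the command line:
eul = funkone(2, 0.5, 1, 3)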
An alternative that would work in this case would be to use an anonymous function. That is:
funkone = @(c,y,p,n) (power(c,minus(1,y))/minus(1,y)-p*log(n));
which could be called as
output_val = funkone(c,y,p,n)
I asked this question a while ago. I am not sure whether I should post this as an answer or a new question. I do not have an answer but I "solved" the problem by applying the Levenberg-Marquardt algorithm using nls.lm in R and when the solution is at the boundary, I run the trust-region-reflective algorithm (TRR, implemented in R) to step away from it. Now I have new questions.
From my experience, done this way the program reaches the optimum and is not so sensitive to the starting values. But this is only a practical workaround for the issues I encountered using nls.lm and other optimization functions in R. I would like to know why nls.lm behaves this way for optimization problems with boundary constraints, and how to handle boundary constraints when using nls.lm in practice.
Below I give an example illustrating two issues with nls.lm:
It is sensitive to starting values.
It stops when some parameter reaches the boundary.
A Reproducible Example: FOCUS Dataset D
library(devtools)
install_github("KineticEval","zhenglei-gao")
library(KineticEval)
data(FOCUS2006D)
km <- mkinmod.full(parent = list(type = "SFO",
                                 M0 = list(ini = 0.1, fixed = 0, lower = 0.0, upper = Inf),
                                 to = "m1"),
                   m1 = list(type = "SFO"),
                   data = FOCUS2006D)
system.time(Fit.TRR <- KinEval(km,evalMethod = 'NLLS',optimMethod = 'TRR'))
system.time(Fit.LM <- KinEval(km,evalMethod = 'NLLS',optimMethod = 'LM',ctr=kingui.control(runTRR=FALSE)))
compare_multi_kinmod(km,rbind(Fit.TRR$par,Fit.LM$par))
dev.print(jpeg,"LMvsTRR.jpeg",width=480)
The differential equations that describes the model/system is:
"d_parent = - k_parent * parent"
"d_m1 = - k_m1 * m1 + k_parent * f_parent_to_m1 * parent"
In the graph, on the left is the model with initial values, in the middle is the fitted model using "TRR" (similar to the algorithm in Matlab's lsqnonlin function), and on the right is the fitted model using "LM" with nls.lm. Looking at the fitted parameters (Fit.LM$par) you will find that one fitted parameter (f_parent_to_m1) is at the boundary 1. If I change the starting value for one parameter, M0_parent, from 0.1 to 100, then I get the same results from nls.lm and lsqnonlin. I have many cases like this one.
newpars <- rbind(Fit.TRR$par,Fit.LM$par)
rownames(newpars)<- c("TRR(lsqnonlin)","LM(nls.lm)")
newpars
M0_parent k_parent k_m1 f_parent_to_m1
TRR(lsqnonlin) 99.59848 0.09869773 0.005260654 0.514476
LM(nls.lm) 84.79150 0.06352110 0.014783294 1.000000
Apart from the above problems, it often happens that the Hessian returned by nls.lm is not invertible (especially when some parameters are on the boundary), so I cannot get an estimate of the covariance matrix. On the other hand, the "TRR" algorithm (in Matlab) almost always gives an estimate by calculating the Jacobian at the solution point. I think this is useful, but I am also sure that the R optimization algorithms (the ones I have tried) do not do this for a reason. I would like to know whether I am wrong to use the Matlab way of calculating the covariance matrix to get standard errors for the parameter estimates.
One last note: I claimed in my previous post that Matlab's lsqnonlin outperforms R's optimization functions in almost all cases. I was wrong. The "Trust-Region-Reflective" algorithm used in Matlab is in fact slower (sometimes much slower) when also implemented in R, as you can see from the above example. However, it is still more stable and reaches a better solution than R's basic optimization algorithms.
First off, I am not an expert on Matlab and Optimisation and have never used R.
I am not sure I see what your actual question is, but maybe I can shed some light on your puzzlement:
LM is a slightly enhanced Gauß-Newton approach; for problems with several local minima it is very sensitive to initial states. Including boundaries typically generates more of those minima.
TRR is akin to LM, but more robust. It has better capabilities for "jumping out of" bad local minima. It is quite feasible that it will behave better but perform worse than LM. Actually explaining why is very hard; you would need to study the algorithms in detail and look at how they behave in this situation.
I cannot explain the difference between Matlab's and R's implementation, but there are several extensions to TRR that maybe Matlab uses and R does not.
Does your approach of using LM and TRR alternatingly converge better than TRR alone?
Using the mkin package, you can find the parameters using the "Port" algorithm (which is also a kind of TRR algorithm, as far as I can tell from its documentation), or the "Marq" algorithm, which uses nls.lm in the background. Then you can use "normal" starting values or "bad" starting values.
library(mkin)
packageVersion("mkin")
Recent mkin versions can speed up the process considerably, as they compile the models from automatically generated C code if a compiler is available on your system (e.g. you have r-base-dev installed on Debian/Ubuntu, or Rtools on Windows).
This defines the model:
m <- mkinmod(parent = mkinsub("SFO", "m1"),
m1 = mkinsub("SFO"),
use_of_ff = "max")
You can check that the differential equations are correct:
cat(m$diffs, sep = "\n")
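which, for this model, should print something equivalent (up to term ordering) to the equations quoted in the question:
d_parent = - k_parent * parent
d_m1 = - k_m1 * m1 + k_parent * f_parent_to_m1 * parent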
Then we fit in four variants, Port and LM, with or without M0 fixed to 0.1:
f.Port = mkinfit(m, FOCUS_2006_D)
f.Port.M0 = mkinfit(m, FOCUS_2006_D, state.ini = c(parent = 0.1, m1 = 0))
f.LM = mkinfit(m, FOCUS_2006_D, method.modFit = "Marq")
f.LM.M0 = mkinfit(m, FOCUS_2006_D, state.ini = c(parent = 0.1, m1 = 0),
method.modFit = "Marq")
Then we look at the results:
results <- sapply(list(Port = f.Port, Port.M0 = f.Port.M0, LM = f.LM, LM.M0 = f.LM.M0),
function(x) round(summary(x)$bpar[, "Estimate"], 5))
which are
Port Port.M0 LM LM.M0
parent_0 99.59848 99.59848 99.59848 39.52278
k_parent 0.09870 0.09870 0.09870 0.00000
k_m1 0.00526 0.00526 0.00526 0.00000
f_parent_to_m1 0.51448 0.51448 0.51448 1.00000
So we can see that the Port algorithm finds the best solution (to the best of my knowledge) even with bad starting values. The speed issue one may have with more complicated models is alleviated by the automatic generation of C code.