Having some trouble with my first night in Matlab. I was resting on my laurels in R, I guess...
I am trying to solve for an infinite-horizon equilibrium of an Euler equation.
Matlab seems much more apt for doing the optimization in this problem, but I am having some issues just getting started defining functions. Here is what I have:
%% ECON 20210 Honors Problem Set #2 Code
%% Written By Jacob Miller
%% Date 4/17/2014
%% In this problem, you have to write
%% a program to solve the following social planner problem:
%% Maximize, over Ct, Nt, Kt+1, the sum from t=1 to T of
%% B^(t-1) * ( Ct^(1-y)/(1-y) - psi*log(Nt) )
%% subject to the constraint Ct = A*(Kt^a)*(Nt^(1-a)) + (1-d)*Kt - Kt+1
function [eul] = funkone(c,y,p,n)
eul = (c.^(1-y))./(1-y) - p.*log(n);
Right now this returns the error:
EDU>> MillerEconPset2
Error using MillerEconPset2 (line 12)
Not enough input arguments.
It seems to think I'm trying to run the function, but I just want to define it. I have a similar script working in R:
myfunc<-function(c,y,n,p){
(c^(1-y))/(1-y)-n*log(p)
}
But unfortunately R is not nearly as good for solving for global equilibria. Any idea how to fix the Matlab code?
Thanks a billion.
Best,
Jake
A function is defined by creating a file containing the function and saving it under a file name that matches the function name. Standalone functions should be defined in files separate from your main script. You will then be able to access any functions you define this way, as long as they are on your MATLAB search path or in your current working directory.
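For example (a minimal sketch reusing the question's own function), save the following as funkone.m:
function [eul] = funkone(c,y,p,n)
% per-period objective: c^(1-y)/(1-y) - p*log(n)
eul = (c.^(1-y))./(1-y) - p.*log(n);
end
Then call it with actual arguments from MillerEconPset2.m or the command line, e.g. funkone(1.5, 2, 0.9, 0.3) (arbitrary example values). Pressing Run on the function file itself calls it with no arguments, which is exactly what produces "Not enough input arguments".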
An alternative that would work in this case would be to use an anonymous function. That is:
funkone = @(c,y,p,n) (c.^(1-y))./(1-y) - p.*log(n);
which could be called as
output_val = funkone(c,y,p,n)
This is a continuation of my questions:
Declaring a functional recursive sequence in Matlab
Is there a more efficient way of nesting logarithms?
Nesting a specific recursion in Pari-GP
But I'll keep this question self-contained. I have set myself a coding project, which is to program a working simple calculator for a tetration function I've constructed. This tetration function is holomorphic, and stated not to be Kneser's solution (ignore the jargon if it's unfamiliar); long story short, I need to run the numbers to win over the nay-sayers.
For this I have to use Pari-GP, as it is a fantastic language for handling large numbers and algebraic expressions. Since we are dealing with tetration (think numbers of the order e^e^e^e^e^e), this language is, of the few that exist, the best for such affairs; it is the favourite for iterated exponential computations.
Now, the trouble I am facing is odd. It is not so much that my code doesn't work; it's that it overflows because it should overflow (we're getting inputs like e^e^e^e^e^e, and no computer can handle that properly). I'll post the first batch of code before I dive deeper.
The following code works perfectly and produces all the numbers I want; the trouble is with the next batch of code.
\\This is the asymptotic solution to tetration. z is the variable, l is the multiplier, and n is the depth of recursion
\\Warning: z with large real part looks like tetration; and therefore overflows very fast. Additionally there are singularities which occur where l*(z-j) = (2k+1)*Pi*I.
\\j,k are integers
beta_function(z,l,n) =
{
my(out = 0);
for(i=0,n-1,
out = exp(out)/(exp(l*(n-i-z)) +1));
out;
}
\\This is the error between the asymptotic tetration and the tetration. This is pretty much good for 200 digit accuracy if you need.
\\modify the 0.000000001 to a bigger number to make this go faster and receive less precision. When graphing 0.0001 is enough
\\Warning: This will blow up at some points. This is part of the math; these functions have singularities/branch cuts.
tau(z,l,n)={
if(1/real(beta_function(z,l,n)) <= 0.000000001, \\ this is where we'll have problems; if I try to grab a Taylor series with this condition we error out
-log(1+exp(-l*z)),
log(1 + tau(z+1,l,n)/beta_function(z+1,l,n)) - log(1+exp(-l*z))
)
}
\\This is the sum function. I occasionally modify it; to make better graphs, but the basis is this.
Abl(z,l,n) = {
beta_function(z,l,n) + tau(z,l,n)
}
Plugging this in, you get the following expressions:
Abl(1,log(2),100)
realprecision = 28 significant digits (20 digits displayed)
%109 = 0.15201551563214167060
exp(Abl(0,log(2),100))
%110 = 0.15201551563214167060
Abl(1+I,2+0.5*I,100)
%111 = 0.28416643148885326261 + 0.80115283113944703984*I
exp(Abl(0+I,2+0.5*I,100))
%112 = 0.28416643148885326261 + 0.80115283113944703984*I
And so on and so forth; where Abl(z,l,n) = exp(Abl(z-1,l,n)). There's no problem with this code. Absolutely none at all; we can set this to 200 precision and it'll still produce correct results. The graphs behave exactly as the math says they should behave. The problem is, in my construction of tetration (the one we actually want); we have to sort of paste together the solutions of Abl(z,l,n) across the value l. Now, you don't have to worry about any of that at all; but, mathematically, this is what we're doing.
This is the second batch of code; which is designed to "paste together" all these Abl(z,l,n) into one function.
\\This is the modified asymptotic solution to the Tetration equation.
beta(z,n) = {
beta_function(z,1/sqrt(1+z),n);
}
\\This is the Tetration function.
Tet(z,n) ={
if(1/abs(beta_function(z,1/sqrt(1+z),n)) <= 0.00000001, \\ again, we see here this if statement; and we can't have this
beta_function(z,1/sqrt(1+z),n),
log(Tet(z+1,n))
)
}
This code works perfectly for real values and for complex values. Some sample values:
Tet(1+I,100)
%113 = 0.12572857262453957030 - 0.96147559586703141524*I
exp(Tet(0+I,100))
%114 = 0.12572857262453957030 - 0.96147559586703141524*I
Tet(0.5,100)
%115 = -0.64593666417664607364
exp(Tet(0.5,100))
%116 = 0.52417133958039107545
Tet(1.5,100)
%117 = 0.52417133958039107545
We can also effectively graph this object on the real line, which just looks like the following:
ploth(X=0,4,Tet(X,100))
Now, you may be asking: what's the problem, then?
If you try and plot this function in the complex plane, it's doomed to fail. The nested logarithms produce too many singularities near the real line. For imaginary arguments away from the real-line, there's no problem. And I've produced some nice graphs; but the closer you get to the real line; the more it misbehaves and just short circuits. You may be thinking; well then, the math is wrong! But, no, the reason this is happening is because Kneser's tetration is the only tetration that is stable about the principal branch of the logarithm. Since this tetration IS NOT Kneser's tetration, it's inherently unstable about the principal branch of the logarithm. Of course, Pari just chooses the principal branch. So when I do log(log(log(log(log(beta(z+5,100)))))); the math already says this will diverge. But on the real line; it's perfectly adequate. And for values of z with an imaginary argument away from zero, we're fine too.
So how I want to solve this is to grab the Taylor series at Tet(1+z,100), which Pari-GP is perfect for. The trouble?
Tet(1+z,100)
*** at top-level: Tet(1+z,100)
*** ^------------
*** in function Tet: ...unction(z,1/sqrt(1+z),n))<=0.00000001,beta_fun
*** ^---------------------
*** _<=_: forbidden comparison t_SER , t_REAL.
The numerical comparison I've done doesn't translate to a comparison between t_SER and t_REAL.
So, my question, at long last: what is an effective strategy for getting the Taylor series of Tet(1+z,100) using only real inputs? The complex inputs near z=0 are erroneous; the real values are not. And if my math is right, we can take the derivatives along the real line and get the right result. Then we can construct a Tet_taylor(z,n), which is just the Taylor series expansion, and which will most definitely raise no errors when graphing.
Any help, questions, comments, suggestions--anything, is greatly appreciated! I really need some outside eyes on this.
Thanks so much if you got to the bottom of this post. This one is bugging me.
Regards, James
EDIT:
I should add that a Tet(z+c,100) for some number c is the actual tetration function we want. There is a shifting constant I haven't talked about yet. Nonetheless, this is tangential to the question and is more a mathematical point.
This is definitely not an answer - I have absolutely no clue what you are trying to do. However, I see no harm in offering suggestions. PARI has a built-in type for power series (essentially Taylor series) - and is very good at working with them (many operations are supported). I was originally going to offer some suggestions on how to get a Taylor series out of a recursive definition using your functions as an example - but in this case, I'm thinking that you are trying to expand around a singularity, which might be doomed to failure. (On your plot it seems that as x->0, the result goes to -infinity???)
In particular if I compute:
log(beta(z+1, 100))
log(log(beta(z+2, 100)))
log(log(log(beta(z+3, 100))))
log(log(log(log(beta(z+4, 100)))))
...
The different series are not converging to anything. Even the constant term of the series is getting smaller with each iteration, so I am not entirely sure there is even a Taylor series expansion about x = 0.
Questions/suggestions:
Should you be expanding about a different point? (say, where the curve crosses the x-axis)
Does the Taylor series satisfy some recursive relation? For example: A(z) = log(A(z+1)). [This doesn't work, but perhaps there is another way to write it].
I suspect my answer is unlikely to be satisfactory - but then again your question is more mathematical than a practical programming problem.
So I've successfully answered my question. I haven't programmed in so long; I'm kind of shoddy. But I figured it out after enough coffee. I created 3 new functions, which allow me to grab the Taylor series.
\\This function attempts to find the number of iterations we need.
Tet_GRAB_k(A,n) ={
my(k=0);
while( 1/real(beta(A+k,n)) >= 0.0001, k++);
return(k);
}
\\This function runs and produces the same results as Tet, but slower; in exchange it lets us estimate Taylor coefficients.
\\You have to guess which k to use for a given accuracy before overflowing, which is what Tet_GRAB_k above is good for.
Tet_taylor(z,n,k) = {
my(val = beta(z+k,n));
for(i=1,k,val = log(val));
return(val);
}
\\This function produces an array of all the coefficients about a value A.
TAYLOR_SERIES(A,n) = {
my(ser = vector(40,i,0));
for(i=1,40, ser[i] = polcoeff(Tet_taylor(A+z,n,Tet_GRAB_k(A,n)),i-1,z));
return(ser);
}
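For reference, a hypothetical usage sketch (TAYLOR_SERIES is defined above; Polrev and subst are built-in GP functions): rebuild the truncated Taylor polynomial from the coefficient vector and evaluate it near the expansion point.
coeffs = TAYLOR_SERIES(1, 100);  \\ coefficients about A = 1
P = Polrev(coeffs, 'z);          \\ polynomial with coeffs[1] as the constant term
subst(P, 'z, 0.25)               \\ should approximate Tet(1.25, 100)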
After running the numbers, I'm confident this works. The Taylor series is converging; albeit rather slowly and slightly less accurately than desired; but this will have to do.
Thanks to anyone who read this. I'm just answering this question for completeness.
I am a very new Julia user (coming from Matlab), so forgive me if I ask a very dumb question.
I currently have a Julia code which runs fine, but it gives different results when I execute it as a function than when I run each of the function's lines interactively.
My script is mostly about linear algebra and uses Arrays and Dicts.
As I have some trouble making use of the Juno debugger, I did not find another way to debug my code, which is quite a shame.
I spent the last three hours on this and I still have no clue why these results differ.
I suspect I don't understand some very basic part of how Julia handles variable scope and allocation, but I'm flying blind here.
Does anyone have an explanation for this behavior?
I can't provide the code here, but here is its basic structure. The M matrix returned by childfunction is wrong; a is a scalar and dict is a dictionary.
calling function
function motherfunction(...)
M = childfunction(a,dict)
end
child function
function childfunction(...)
...
M = *some linear algebra*
return M
end
I like solving my math problems (high school) using R, as it is faster than writing on a piece of paper. One problem I'm having is that I have to keep writing the multiplication sign. For example:
9x^2 + 24x + 16 yields: Error: unexpected symbol in "9x"
Is there any way in R to write 4x for multiplication, without having to spell out 4*x?
Would save me some time in having to write one extra character the whole time! Thanks
No. Having a number in front of a character without any space simply isn't valid syntax in R.
Take a step back and look at the syntax rules for, say, Excel, Matlab, Python, Mathematica. Every language has its rules, generally (:-) ) with good reason. For example, in R, the following are legal object names:
foo
foo.bar
foo1
foo39
But 39foo is not legal. So if you wanted any sequence [0-9][Letters] or the reverse to indicate multiplication, you'd have a conflict with naming rules.
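A quick check at the console:
x <- 2
9*x^2 + 24*x + 16
[1] 100
while 9x^2 + 24x + 16 still throws Error: unexpected symbol in "9x".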
In recent conversations with fellow students, I have been advocating for avoiding globals except to store constants. This is a sort of typical applied statistics-type program where everyone writes their own code and project sizes are on the small side, so it can be hard for people to see the trouble caused by sloppy habits.
In talking about avoidance of globals, I'm focusing on the following reasons why globals might cause trouble, but I'd like to have some examples in R and/or Stata to go with the principles (and any other principles you might find important), and I'm having a hard time coming up with believable ones.
Non-locality: Globals make debugging harder because they make understanding the flow of code harder
Implicit coupling: Globals break the simplicity of functional programming by allowing complex interactions between distant segments of code
Namespace collisions: Common names (x, i, and so forth) get re-used, causing namespace collisions
A useful answer to this question would be a reproducible and self-contained code snippet in which globals cause a specific type of trouble, ideally with another code snippet in which the problem is corrected. I can generate the corrected solutions if necessary, so the example of the problem is more important.
Relevant links:
Global Variables are Bad
Are global variables bad?
I also have the pleasure of teaching R to undergraduate students who have no experience with programming. The problem I found was that most examples of when globals are bad are rather simplistic and don't really get the point across.
Instead, I try to illustrate the principle of least astonishment. I use examples where it is tricky to figure out what was going on. Here are some examples:
I ask the class to write down what they think the final value of i will be:
i = 10
for(i in 1:5)
i = i + 1
i
Some of the class guess correctly. Then I ask: should you ever write code like this?
In some sense i is a global variable that is being changed.
What does the following piece of code return:
x = 5:10
x[x=1]
The problem is: what exactly do we mean by x?
Does the following function return a global or local variable:
z = 0
f = function() {
if(runif(1) < 0.5)
z = 1
return(z)
}
Answer: both. Again discuss why this is bad.
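One minimal fix, for contrast, is to make z an argument with a default, so the function never reaches outside its own scope:
f <- function(z = 0) {
if(runif(1) < 0.5)
z <- 1
return(z)
}
Now f() returns a local z on both branches, whatever the global z holds.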
Oh, the wonderful smell of globals...
All of the answers in this post gave R examples, and the OP wanted some Stata examples, as well. So let me chime in with these.
Unlike R, Stata does take care of locality of its local macros (the ones that you create with the local command), so the issue of "Is this a global z or a local z that is being returned?" never comes up. (Gosh... how can you R guys write any code at all if locality is not enforced???) Stata has a different quirk, though, namely that a non-existent local or global macro is evaluated as an empty string, which may or may not be desirable.
I have seen globals used for several main reasons:
Globals are often used as shortcuts for variable lists, as in
sysuse auto, clear
regress price $myvars
I suspect that the main usage of such construct is for someone who switches between interactive typing and storing the code in a do-file as they try multiple specifications. Say they try regression with homoskedastic standard errors, heteroskedastic standard errors, and median regression:
regress price mpg foreign
regress price mpg foreign, robust
qreg price mpg foreign
And then they run these regressions with another set of variables, then with yet another one, and finally they give up and set this up as a do-file myreg.do with
regress price $myvars
regress price $myvars, robust
qreg price $myvars
exit
to be accompanied by an appropriate setting of the global macro. So far so good; the snippet
global myvars mpg foreign
do myreg
produces the desirable results. Now let's say they email their famous do-file that claims to produce very good regression results to collaborators, and instruct them to type
do myreg
What will their collaborators see? In the best case, the mean and the median of price, if they started a new instance of Stata (failed coupling: myreg.do did not really know you meant to run it with a non-empty variable list). But if the collaborators had something in the works, and they too had a global myvars defined (name collision)... man, would that be a disaster.
Globals are used for directory or file names, as in:
use $mydir\data1, clear
God only knows what will be loaded. In large projects, though, it does come in handy. You would want to define global mydir somewhere in your master do-file, maybe even as
global mydir `c(pwd)'
Globals can be used to store something unpredictable, like a whole command:
capture $RunThis
God only knows what will be executed; let's just hope it is not ! format c:\. This is the worst case of implicit strong coupling, but since I am not even sure that RunThis will contain anything meaningful, I put a capture in front of it, and will be prepared to treat the non-zero return code _rc. (See, however, my example below.)
Stata's own use of globals is for God settings, like the type I error probability/confidence level: the global $S_level is always defined (and you must be a total idiot to redefine this global, although of course it is technically doable). This is, however, mostly a legacy issue with code of version 5 and below (roughly), as the same information can be obtained from a less fragile system constant:
set level 90
display $S_level
display c(level)
Thankfully, globals are quite explicit in Stata, and hence are easy to debug and remove. In some of the above situations, and certainly in the first one, you'd want to pass parameters to do-files which are seen as the local `0' inside the do-file. Instead of using globals in the myreg.do file, I would probably code it as
unab varlist : `0'
regress price `varlist'
regress price `varlist', robust
qreg price `varlist'
exit
The unab thing will serve as an element of protection: if the input is not a legal varlist, the program will stop with an error message.
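The caller then passes the variable list explicitly on the command line:
do myreg mpg foreign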
In the worst cases I've seen, the global was used only once after having been defined.
There are occasions when you do want to use globals, because otherwise you'd have to pass the bloody thing to every other do-file or a program. One example where I found the globals pretty much unavoidable was coding a maximum likelihood estimator where I did not know in advance how many equations and parameters I would have. Stata insists that the (user-supplied) likelihood evaluator will have specific equations. So I had to accumulate my equations in the globals, and then call my evaluator with the globals in the descriptions of the syntax that Stata would need to parse:
args lf $parameters
where lf was the objective function (the log-likelihood). I encountered this at least twice, in the normal mixture package (denormix) and confirmatory factor analysis package (confa); you can findit both of them, of course.
One R example of a global variable that divides opinion is the stringsAsFactors issue on reading data into R or creating a data frame.
set.seed(1)
str(data.frame(A = sample(LETTERS, 100, replace = TRUE),
DATES = as.character(seq(Sys.Date(), length = 100, by = "days"))))
options("stringsAsFactors" = FALSE)
set.seed(1)
str(data.frame(A = sample(LETTERS, 100, replace = TRUE),
DATES = as.character(seq(Sys.Date(), length = 100, by = "days"))))
options("stringsAsFactors" = TRUE) ## reset
This can't really be corrected because of the way options are implemented in R - anything could change them without you knowing it and thus the same chunk of code is not guaranteed to return exactly the same object. John Chambers bemoans this feature in his recent book.
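What you can do is limit your own exposure. A sketch (the wrapper name df_strings is mine; this protects one function's body, in versions of R where the option still applies, but cannot stop other code from flipping the option):
df_strings <- function(...) {
op <- options(stringsAsFactors = FALSE) # set, saving the old value
on.exit(options(op))                    # restore on exit, even on error
data.frame(...)
}
The point is the save-and-restore idiom with options() and on.exit().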
A pathological example in R is the use of one of the globals available in R, pi, to compute the area of a circle.
> r <- 3
> pi * r^2
[1] 28.27433
>
> pi <- 2
> pi * r^2
[1] 18
>
> foo <- function(r) {
+ pi * r^2
+ }
> foo(r)
[1] 18
>
> rm(pi)
> foo(r)
[1] 28.27433
> pi * r^2
[1] 28.27433
Of course, one can write the function foo() defensively by forcing the use of base::pi but such recourse may not be available in normal user code unless packaged up and using a NAMESPACE:
> foo <- function(r) {
+ base::pi * r^2
+ }
> foo(r = 3)
[1] 28.27433
> pi <- 2
> foo(r = 3)
[1] 28.27433
> rm(pi)
This highlights the mess you can get into by relying on anything that is not solely in the scope of your function or passed in explicitly as an argument.
Here's an interesting pathological example involving replacement functions, the global assign, and x defined both globally and locally...
x <- c(1,NA,NA,NA,1,NA,1,NA)
local({
#some other code involving some other x begin
x <- c(NA,2,3,4)
#some other code involving some other x end
#now you want to replace NAs in the global/parent frame x with 0s
x[is.na(x)] <<- 0
})
x
[1] 0 NA NA NA 0 NA 1 NA
Instead of returning [1] 1 0 0 0 1 0 1 0, the replacement function uses the index returned by the local value of is.na(x), even though you're assigning to the global value of x. This behavior is documented in the R Language Definition.
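A corrected version (one possibility) fetches and indexes the global x explicitly, rather than letting the replacement form pick up the local one:
x <- c(1,NA,NA,NA,1,NA,1,NA)
local({
x <- c(NA,2,3,4)
g <- get("x", envir = globalenv()) # the global x, explicitly
g[is.na(g)] <- 0                   # index computed from g itself
assign("x", g, envir = globalenv())
})
x
[1] 1 0 0 0 1 0 1 0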
One quick but convincing example in R is to run the line like:
.Random.seed <- 'normal'
I chose 'normal' as something someone might choose, but you could use anything there.
Now run any code that uses generated random numbers, for example:
rnorm(10)
Then you can point out that the same thing could happen for any global variable.
I also use the example of:
x <- 27
z <- somefunctionthatusesglobals(5)
Then ask the students what the value of x is; the answer is that we don't know.
Through trial and error I've learned that I need to be very explicit in naming my function arguments (and to add enough checks at the start of, and throughout, the function) to make everything as robust as possible. This is especially true if you have variables stored in the global environment but then try to debug a function with custom values - and something doesn't add up! This is a simple example that combines bad checks with calling a global variable.
glob.arg <- "snake"
customFunction <- function(arg1) {
if (is.numeric(arg1)) {
glob.arg <- "elephant"
}
return(strsplit(glob.arg, "n"))
}
customFunction(arg1 = 1) #argument correct, expected results
customFunction(arg1 = "rubble") #works, but may have unexpected results
An example sketch that came up while trying to teach this today. Specifically, this focuses on trying to give intuition as to why globals can cause problems, so it abstracts away as much as possible in an attempt to state what can and cannot be concluded just from the code (leaving the function as a black box).
The set up
Here is some code. Decide whether it will return an error or not based on only the criteria given.
The code
stopifnot( all( x!=0 ) )
y <- f(x)
5/x
The criteria
Case 1: f() is a properly-behaved function, which uses only local variables.
Case 2: f() is not necessarily a properly-behaved function, which could potentially use global assignment.
The answer
Case 1: The code will not return an error, since line one checks that there are no x's equal to zero and line three divides by x.
Case 2: The code could potentially fail, since f() could e.g. subtract 1 from x and assign it back to the x in the parent environment; any x element equal to 1 would then have been set to zero, and the check on line one no longer protects the division on line three.
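For concreteness, here is one badly-behaved f() of the kind Case 2 contemplates (an illustration only, not the only possibility):
f <- function(x) {
x <<- x - 1 # side effect: rewrites the global x
x           # returns the untouched local copy
}
x <- c(1, 2, 3)
stopifnot( all( x!=0 ) )
y <- f(x)
5/x # global x is now c(0, 1, 2); the first element gives Inf
(In R the division by zero yields Inf rather than an error, but the guarantee from line one is gone either way.)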
Here's one attempt at an answer that would make sense to statisticsy types.
Namespace collisions: Common names (x, i, and so forth) get re-used, causing namespace collisions
First we define a log likelihood function,
logLik <- function(x) {
y <<- x^2+2
return(sum(sqrt(y+7)))
}
Now we write an unrelated function to return the sum of squares of an input. Because we're lazy, we'll do this by passing it y as a global variable:
sumSq <- function() {
return(sum(y^2))
}
y <<- seq(5)
sumSq()
[1] 55
Our log likelihood function seems to behave exactly as we'd expect, taking an argument and returning a value,
> logLik(seq(12))
[1] 88.40761
But what's up with our other function?
> sumSq()
[1] 633538
Of course, this is a trivial example, as will be any example that doesn't exist in a complex program. But hopefully it'll spark a discussion about how much harder it is to keep track of globals than locals.
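The fix is mechanical: keep y local inside logLik() and make sumSq() take its input as an argument:
logLik <- function(x) {
y <- x^2 + 2 # local now
return(sum(sqrt(y + 7)))
}
sumSq <- function(y) {
return(sum(y^2))
}
sumSq(seq(5))
[1] 55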
In R you may also want to show them that there is often no need to use globals, since you can access variables from a function's enclosing scope from within the function itself simply by specifying the environment. For example, in the code below:
zz="aaa"
x = function(y) {
zz="bbb"
cat("value of zz from within the function: \n")
cat(zz , "\n")
cat("value of zz from the function scope: \n")
with(environment(x),cat(zz,"\n"))
}
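Calling it shows the two lookups resolving differently:
x(1)
value of zz from within the function:
bbb
value of zz from the function scope:
aaa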
This is a really basic question but this is the first time I've used MATLAB and I'm stuck.
I need to simulate a simple series RC network using 3 different numerical integration techniques. I think I understand how to use the ode solvers, but I have no idea how to enter the differential equation of the system. Do I need to do it via an m-file?
It's just a simple RC circuit in the form:
RC dy(t)/dt + y(t) = u(t)
with zero initial conditions. I have the values for R, C the step length and the simulation time but I don't know how to use MATLAB particularly well.
Any help is much appreciated!
You are going to need a function file that takes t and y as input and gives dy as output. It would be its own file with the following header.
function dy = rigid(t,y)
Save it as rigid.m on the MATLAB path.
From there you would put in your differential equation. You now have a function. Here is a simple one:
function dy = rigid(t,y)
dy = sin(t);
From the command line or a script, you need to drive this function through ode45:
[T,Y] = ode45(@rigid,[0 2*pi],0);
This will give you your function (rigid.m) running from time 0 through time 2*pi with an initial y of zero.
Plot this:
plot(T,Y)
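To adapt this to the RC network in the question: rearranging RC*dy/dt + y = u(t) gives dy/dt = (u(t) - y)/(R*C). A minimal sketch, assuming a unit-step input u(t) = 1 and placeholder values for R and C (substitute your own); save it as rc_network.m:
function dy = rc_network(t,y)
% RC*dy/dt + y = u(t)  =>  dy/dt = (u(t) - y)/(R*C)
R = 1e3;  % placeholder resistance, ohms
C = 1e-6; % placeholder capacitance, farads
u = 1;    % assumed unit-step input
dy = (u - y)/(R*C);
Then drive it the same way:
[T,Y] = ode45(@rc_network,[0 0.01],0);
plot(T,Y)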
More of the MATLAB documentation is here:
http://www.mathworks.com/access/helpdesk/help/techdoc/ref/ode23tb.html
The Official Matlab Crash Course (PDF warning) has a section on solving ODEs, as well as a lot of other resources I found useful when starting Matlab.