Labeling matlab plots with input parameters - global-variables

I am integrating a system of ODEs using the MATLAB utility routine ode45. I do not have a reliable way to label plots with the parameters used to produce the plotted results. It would be easy if there were an approved substitute for global variables. It would be possible to write a script that automatically edits the derivative function for each case in order to hard-wire the constants, but there must be a better way.

To specify constants, simply add an equation for each constant and give 0 as its derivative. This adds a column to the result matrix but the constant value is available for use in calculating the other derivatives.
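The same augmented-state idea, sketched in R with deSolve purely for illustration (not the asker's MATLAB code, and the decay equation is made up): the constant is carried as an extra state whose derivative is 0, so it shows up as a column of the solver output and can be read back when labeling a plot.

library(deSolve)

deriv <- function(t, state, parms) {
  x <- state["x"]
  k <- state["k"]          # the "constant", carried along as a state
  list(c(-k * x, 0))       # dx/dt = -k*x, dk/dt = 0
}

out <- ode(y = c(x = 1, k = 0.3), times = seq(0, 10, by = 0.1), func = deriv, parms = NULL)
head(out)                  # columns: time, x, k
plot(out[, "time"], out[, "x"], type = "l",
     main = sprintf("decay with k = %.2f", out[1, "k"]))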

Related

Julia Differential Algebraic Equation as a Boundary Value Problem

Does Julia support boundary value differential algebraic equations? I have an implicit ODE with a variable mass matrix that is sometimes singular, so I have to use the DAEProblem. My problem is two coupled second order ODEs for x1(t) and x2(t) that I have transformed into four first order equations by setting x1'(t) = y1(t) and x2'(t)=y2(t). I have values for x1 and x2 at the start and end of my domain, but don't have values for y1 or y2 anywhere, so I have a need for both a DAE and a BVP.
This GitHub post suggests that this is possible, but I'm afraid I don't understand the machinery well enough to see how to couple DAEProblem with BVProblem.
I've had success writing multiple shooting code following numerical recipes to solve the problem, but it's fairly clunky. Ultimately, I would like to pair this with DiffEqFlux (I have quite a few measurements of x1 and x2 along the domain and don't know the exact form of the differential equation), but I suspect that would be much simpler if there was a more direct approach to linking BVProblem with DAEProblem.
Just go directly to DiffEqFlux, since parameter estimation encompasses BVPs. Write the boundary conditions as part of the loss function on a DAEProblem (i.e., that the starting values should equal x and the final values should equal y), and optimize the initial conditions at the same time as any parameters. Optimizing only the initial conditions and not any parameters is equivalent to a single shooting BVP solver in this form, and this allows simultaneous parameter estimation. Or use the multiple shooting layer functions to do multiple shooting. Or use a BVProblem with a mass matrix.
For any more help, you'll need to share code for what you tried and what didn't work; it isn't anything more difficult than that, so it's hard to give more generic help than just "use constructor x".
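As a rough sketch of the single-shooting idea described above, written in R with deSolve and optimize rather than DiffEqFlux, and using a made-up ODE x'' = -x instead of a DAE, purely to show the shape of the approach: the unknown initial slope is treated as a free parameter and tuned so the trajectory hits the right-hand boundary value.

library(deSolve)

rhs <- function(t, state, parms) {
  with(as.list(state), list(c(v, -x)))   # x'' = -x rewritten as x' = v, v' = -x
}

x0 <- 1; xT <- 0.2; Tend <- 2            # known boundary values x(0) and x(Tend)

loss <- function(v0) {
  sol <- ode(y = c(x = x0, v = v0), times = c(0, Tend), func = rhs, parms = NULL)
  (sol[nrow(sol), "x"] - xT)^2           # squared miss at the right boundary
}

fit <- optimize(loss, interval = c(-10, 10))
fit$minimum                              # the fitted initial slope v(0)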

Define groups of functions

I'm working on building an R package, and I've encountered a structural problem that I'm not sure how to solve. I have several different distributions that I'd like to implement in my package (normal, Student's t, etc.) and for each distribution I'll have several functions related to it. I will then have an additional function that uses these functions to execute some process, so I'm trying to avoid having to define all of these functions with different names.
To be more clear, let me give a simple example. Let's say I want to write a simple package to do maximum likelihood estimation for several distributions. Ideally, I'd like to call an MLE function like:
MLE(data, distribution = "normal")
and then have the MLE function load all the related normal distribution functions that it needs. So, it may load density and gradDensity specific to the normal distribution and operate with these functions. However, if I call
MLE(data, distribution = "studentT")
then density and gradDensity are defined as different functions, now specific to the Student's t distribution.
My question is this: how can I appropriately define the density and gradDensity functions for each different distribution I'm interested in and load them when I need them? I've considered defining a new class for this package and having this object contain all the distribution functions I'd need, but this seems problematic because I want one of the functions in this object to be able to call another one of the functions in the object (for example, gradDensity may call density). I also considered defining separate environments for each distribution, but I wasn't sure if that was good practice. Ideally, I'd also like users to be able to define their own distribution and then use this package as well, but I'm having a hard time understanding how to appropriately construct this structure in R.
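One possible sketch of that structure in R (all names here are made up and this is only one design, not an endorsed one): build each distribution's function set inside a constructor so that gradDensity can reach its sibling density through the enclosing environment, and let MLE() pick a set by name.

make_normal <- function() {
  density <- function(x, pars) dnorm(x, mean = pars[1], sd = exp(pars[2]))
  gradDensity <- function(x, pars, eps = 1e-6) {
    # crude finite-difference gradient; note that it calls its sibling density()
    sapply(seq_along(pars), function(i) {
      bumped <- replace(pars, i, pars[i] + eps)
      (density(x, bumped) - density(x, pars)) / eps
    })
  }
  list(density = density, gradDensity = gradDensity)
}

distributions <- list(normal = make_normal())   # add studentT etc. the same way

MLE <- function(data, distribution = "normal") {
  d <- distributions[[distribution]]
  negLogLik <- function(pars) -sum(log(d$density(data, pars)))
  optim(c(mean(data), log(sd(data))), negLogLik)   # starting values kept simple for the sketch
}

MLE(rnorm(100))$par   # roughly 0 and 0 (mean and log-sd)

Letting users register their own constructor into the distributions list (or into an environment the package exports) would extend this to user-defined distributions.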

How can optimization be used as a solver?

In a question on Cross Validated (How to simulate censored data), I saw that the optim function was used as a kind of solver instead of as an optimizer. Here is an example:
optim(1, fn=function(scl){(pweibull(.88, shape=.5, scale=scl, lower.tail=F)-.15)^2})
# $par
# [1] 0.2445312
# ...
pweibull(.88, shape=.5, scale=0.2445312, lower.tail=F)
# [1] 0.1500135
I have found a tutorial on optim here, but I am still not able to figure out how to use optim as a solver. I have several questions:
What is the first parameter (i.e., the value 1 being passed in)?
What is the function that is passed in?
I can understand that it is taking the Weibull probability distribution and subtracting 0.15, but why are we squaring the result?
I believe you are referring to my answer. Let's walk through a few points:
The OP (of that question) wanted to generate (pseudo-)random data from a Weibull distribution with specified shape and scale parameters, censor all data past a certain censoring time, and end up with a prespecified censoring rate. The problem is that once you have specified any three of those, the fourth is necessarily fixed. You cannot specify all four simultaneously unless you are very lucky and the values you specify happen to fit together perfectly. As it happened, the OP was not so lucky with the four preferred values: it was impossible to have all four because they were inconsistent. At that point, you can decide to specify any three and solve for the last. The code I presented gave examples of how to do that.
As noted in the documentation for ?optim, the first argument is par "[i]nitial values for the parameters to be optimized over".
Very loosely, the way the optimization routine works is that it calculates an output value given a function and an input value. Then it 'looks around' to see if moving to a different input value would lead to a better output value. If that appears to be the case, it moves in that direction and starts the process again. (It stops when it does not appear that moving in either direction will yield a better output value.)
The point is that it has to start somewhere, and the user is obliged to specify that value. In each case, I started with the OP's preferred value (although really I could have started almost anywhere).
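A toy version of that "look around and move" loop, applied to the same objective (this is only an illustration of the description above, not what optim actually does internally; its default Nelder-Mead method is more sophisticated):

f <- function(scl) (pweibull(.88, shape = .5, scale = scl, lower.tail = FALSE) - .15)^2

guess <- 1                 # the user-supplied starting value
step  <- 0.5
while (step > 1e-8) {
  lower <- max(guess - step, 1e-6)                 # keep the scale positive
  if (f(lower) < f(guess)) {
    guess <- lower                                 # left neighbour is better: move there
  } else if (f(guess + step) < f(guess)) {
    guess <- guess + step                          # right neighbour is better: move there
  } else {
    step <- step / 2                               # neither helps: look more closely
  }
}
round(guess, 4)            # about 0.2445, essentially optim's answer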
The function that I passed in is ?pweibull. It is the cumulative distribution function (CDF) of the Weibull distribution. It takes a quantile (X value) as its input and returns the proportion of the distribution that has been passed through up to that point. Because the OP wanted to censor the most extreme 15% of that distribution, I specified that pweibull return the proportion that had not yet been passed through instead (that is the lower.tail=F part). I then subtracted .15 from the result.
Thus, the ideal output (from my point of view) would be 0. However, it is possible to get values below zero by finding a scale parameter that makes the output of pweibull < .15. Since optim (or really most any optimizer) finds the input value that minimizes the output value, that is what it would have done. To keep that from happening, I squared the difference. That means that when the optimizer went 'too far' and found a scale parameter that yielded an output of .05 from pweibull, making the difference -.10 (i.e., < 0), the squaring turned the ultimate output into +.01 (i.e., > 0, which is worse). This pushes the optimizer back towards the scale parameter that makes pweibull return .15, where the squared difference is (.15-.15)^2 = 0.
In general, the distinction you are making between an "optimizer" and a "solver" is opaque to me. They seem like two different views of the same elephant.
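For instance (this is not part of the original answer), a root-finder such as uniroot can be pointed directly at the unsquared difference and lands on the same scale parameter:

uniroot(function(scl) pweibull(.88, shape = .5, scale = scl, lower.tail = FALSE) - .15,
        interval = c(0.01, 10))$root
# approximately 0.2445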
Another possible confusion here involves optimization vs. regression. Optimization is simply about finding an input value (or values) that minimizes (or maximizes) the output of a function. In regression, we conceptualize data as draws from a data generating process that is a stochastic function. Given a set of realized values and a functional form, we use optimization techniques to estimate the parameters of the function, thus extracting the data generating process from noisy instances. Part of regression analysis therefore involves optimization, but other aspects of regression are less concerned with optimization, and optimization itself is much larger than regression. For example, the functions optimized in my answer to the other question are deterministic, and there were no "data" being analyzed.

Difference between solnp and gosolnp functions in R for Non-Linear optimization problems

What is the main difference between the two functions? The R help manual says that gosolnp helps to set the initial parameters correctly. Is there any difference otherwise? And if that is the case, how do we determine the correct distribution type for the parameter space?
In my problem, the initial set of parameters is difficult to determine, which is why the optimization problem is used. However, I do have an idea of the parameters' upper and lower bounds.
gosolnp is an extension of solnp, a wrapper, allowing for multiple restarts. Simply put, it runs solnp several times (controllable by n.restarts) to avoid getting stuck in local minima. If your function is known to have no local minima (e.g., it is convex, which can be derived analytically), use solnp to save time. Otherwise, use gosolnp. If you know any additional information (for instance, an area where a global minimum is supposed to be), you may use it for finer control of the starting parameter distribution: see the parameters distr and distr.opt.
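A hedged sketch of the difference (this assumes the Rsolnp package; the test function and bounds are invented): a bumpy objective where a single solnp run may stall in a local minimum, while gosolnp restarts from several random starting points drawn between the bounds.

library(Rsolnp)

fn <- function(x) sum(x^2 - 10 * cos(2 * pi * x)) + 20   # many local minima, global minimum at (0, 0)

one  <- solnp(pars = c(4.5, -4.5), fun = fn, LB = c(-5, -5), UB = c(5, 5))
many <- gosolnp(fun = fn, LB = c(-5, -5), UB = c(5, 5),
                n.restarts = 10, n.sim = 2000,
                distr = c(1, 1))      # 1 = uniform sampling between LB and UB

one$pars    # often a nearby local minimum
many$pars   # much more likely to be close to (0, 0)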

Looking for interesting formula

I'm creating a game where players can make an alloy. To make it less predictable and more interesting, I thought that the durability and hardness of an alloy should not be calculated by a simple formula, because it would be extremely easy to find the extrema where the alloy has the best statistics.
So the question is: is there any formula for a function whose extrema can be found only by investigating all points? Input values will be percentages, 0.0%-100.0%. I think it should look something like half of a sound wave.
A very simple way would be a sum of a couple of sine functions; just vary the constants and the sign for each new player. Here is one example: (sin(1.1*x) + sin(x) + sin(0.9*x))^2
If you use this between 10*pi and 20*pi, you get a function that increases on average but has plenty of local minima.
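A quick way to look at that example in R:

x <- seq(10 * pi, 20 * pi, length.out = 2000)
y <- (sin(1.1 * x) + sin(x) + sin(0.9 * x))^2
plot(x, y, type = "l")   # rises on average but is full of local dips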
Modulating a simple linear or exponential function with trigonometric functions whose frequency and amplitude are dependent on the input should get you what you want.
You don't need a formula, I think — throw a bunch of random values around your domain, and then interpolate (linear interpolation will do) between them. Then you can even change the "formula" completely each time the game is run, or once in a while, or change it slowly with time, etc, etc.
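A small sketch of that idea in R (the knot spacing and seed are arbitrary); fixing the seed makes the curve reproducible between runs:

set.seed(42)                              # same seed, same "formula", every run
knots_x <- seq(0, 100, by = 5)            # input range in percent
knots_y <- runif(length(knots_x))         # random values at the knots
hardness <- approxfun(knots_x, knots_y)   # piecewise-linear interpolation between them

hardness(37.3)                            # hardness for a 37.3% mix
curve(hardness(x), from = 0, to = 100)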
If you want something that is very hard to predict then I would suggest involving a random number generator with the same seed every time. You can use it as an envelope for whatever function you come up with (trig functions or what not) to make it more jagged.
An interesting formula to use would be the gamma of the Black-Scholes options pricing model. It goes as follows:
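Gamma = N'(d1) / (S * sigma * sqrt(T)), with d1 = (ln(S/K) + (r + sigma^2/2) * T) / (sigma * sqrt(T)), where S is the spot price, K the strike, r the risk-free rate, sigma the volatility, T the time to expiry, and N' the standard normal density. In R, with made-up parameter values just to show the shape:

bs_gamma <- function(S, K, r, sigma, Tm) {
  d1 <- (log(S / K) + (r + sigma^2 / 2) * Tm) / (sigma * sqrt(Tm))
  dnorm(d1) / (S * sigma * sqrt(Tm))      # standard Black-Scholes gamma
}
curve(bs_gamma(S = x, K = 50, r = 0.02, sigma = 0.3, Tm = 1), from = 1, to = 100)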
You can easily replace the variables; here's a graph of how the function looks:
[Graph of the gamma function: http://www.sqbimmer.com/aalex/gamma.png]
