Julia and a system of ordinary differential equations

I want to try to solve a system of ordinary differential equations, perhaps in parallel, and came across Julia and DifferentialEquations.jl. The system looks like
x'(t) = f(t)*z(t)
y'(t) = g(t)*z(t)
z'(t) = f(t)*(1-2*x(t))/(2) -g(t)*y(t)
over 10^2 < t < 10^14, but my boundary conditions are all given at the final time:
x(10^14) == 0
y(10^14) == 0
z(10^14) == 0
Could someone please explain to me how to set up this problem in Julia? I checked the documentation and could only find u0 as a parameter, but it doesn't explain how to impose boundary conditions at the right endpoint. Many thanks!

You're looking to solve a boundary value problem (BVP). While this area is currently less developed than other areas of DifferentialEquations.jl, methods for it exist and are shown in the tutorial on solving BVPs. The MIRK4 method may be the one to try.
I will note, however, that your timescale is quite large and may lead to numerical error. You may need higher-precision numbers (BigFloat, ArbFloat, DoubleFloat) to handle that range, or you may want to rescale time in your equations so that it fits better within standard double-precision floating-point numbers (Float64).
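For example (a sketch of the rescaling idea, with tau = ln(t) so that dt = t*dtau): the interval 10^2 < t < 10^14 compresses to roughly 4.6 < tau < 32.2, and the system becomes
x'(tau) = exp(tau)*f(exp(tau))*z(tau)
y'(tau) = exp(tau)*g(exp(tau))*z(tau)
z'(tau) = exp(tau)*(f(exp(tau))*(1-2*x(tau))/2 - g(exp(tau))*y(tau))
with the boundary conditions now imposed at tau = ln(10^14).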

Related

Recommendations (functions/solutions) to apply in OpenMDAO instead of boolean conditions (if/else)

I have been working for a couple of months with OpenMDAO and I find myself struggling with my code when I want to impose conditions for trying to replicate a physical/engineering behaviour.
I have tried using sigmoid functions, but I am still not convinced by them, due to the difficulty of trading off sensitivity against numerical stability. Most of the time I get overflows in exp, so I end up including other conditionals (like np.where) and thereby losing linearity.
outputs['sigmoid'] = 1 / (1 + np.exp(-x))
I was looking for another kind of step function, or something like that, able to preserve linearity and differentiability for the sake of the optimization. I don't know if something like that exists or if there is any strategy that can help me. If it helps, I am working with an OpenConcept benchmark, which uses vectorized computations and Simpson's-rule numerical integration.
Thank you very much.
PS: This is my first ever question on Stack Overflow, so I would like to apologize in advance for any error or bad practice committed. I hope to eventually collaborate and become active in the community.
Update after Justin's answer:
I will take the opportunity to define my problem a little more, along with the strategy I tried. I am trying to monitor and control thermodynamic conditions inside a tank. One of the tasks is to take action when pressure P1 reaches a certain threshold P2. To define this:
eval = (inputs['P1'] - inputs['P2']) / (inputs['P1'] + inputs['P2'])
# P2 = threshold [Pa]
# P1 = calculated pressure [Pa]
k = 100  # steepness control
outputs['sigmoid'] = 1 / (1 + np.exp(-eval * k))
eval was defined to avoid overflows by normalizing the values, so that corrections are taken when the threshold is reached. In a very similar way, I defined a function to check whether there is still mass (so flow can continue between systems):
eval = inputs['mass'] / inputs['max']
k = 50
outputs['sigmoid'] = (1 / (1 + np.exp(-eval * k)))**3
max is also used to normalize the value, and the exponent is added so the function reaches zero before entering the negative domain.
Plot (sorry, it seems I cannot post images yet because of my reputation)
It may be important to highlight that both mass and pressure are calculated from coupled ODE integration, in which these activation functions take part. I guess that OpenConcept by its nature 'explores' a lot of possible values before arriving at the solution, most of the time giving negative, infeasible values for mass and pressure and creating overflows. For that reason I sometimes try to include:
eval[np.where(eval > 1.5)] = 1.5
eval[np.where(eval < -1.5)] = -1.5
That is not a beautiful solution, but it is sometimes effective. I try to avoid using it, since I sense that these bounds make the solver's and optimizer's work more difficult.
I could give you a more complete answer if you distilled your question down to a specific code example of the function you're wrestling with and its expected input range. If you provide that code-sample, I'll update my answer.
Broadly, this is a common challenge when using gradient-based optimization. You want some kind of behavior like an if-condition to turn something on/off, and in many cases that's a fundamentally discontinuous function.
To work around that we often use sigmoid functions, but these do have some of the numerical challenges you pointed out. You could try a hyperbolic tangent as an alternative, though it may suffer from the same kinds of problems.
I will give you two broad options:
Option 1
Sometimes it's OK (even if not ideal) to leave the purely discrete conditional in the code. Let's say you wanted to represent a kind of simple piecewise function:
y = 2x; x >= 0
y = 0; x < 0
There is a sharp corner in that function right at 0. That corner is not differentiable, but the function is fine everywhere else. This is very much like the absolute value function in practice, though you might not draw the analogy looking at the piecewise definition of the function because the piecewise nature of abs is often hidden from you.
If you know (or at least can check after the fact) that your final answer will not lie right on or very near that C1 discontinuity, then it's probably fine to leave the code the way it is. Your derivatives will be well defined everywhere except right at 0, and you can simply pick the left or the right answer at 0.
It's not strictly mathematically correct, but it works fine as long as you're not ending up stuck right there.
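In numpy-flavored code, Option 1 for the piecewise function above might look like this (a sketch; the function name is made up):

import numpy as np

def relu2x(x):
    # The raw discrete conditional: y = 2x for x >= 0, else 0.
    # Differentiable everywhere except x == 0, where we simply
    # pick the x >= 0 branch, as discussed above.
    return np.where(x >= 0.0, 2.0 * x, 0.0)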
Option 2
Apply a smoothing function. This can be a sigmoid, or a simple polynomial. The exact nature of the smoothing function is highly specific to the kind of discontinuity you are trying to approximate.
In the case of the piecewise function above, you might be tempted to define that function as:
2x*sig(x)
That would give you roughly the correct behavior, and it would be differentiable everywhere. But Wolfram Alpha shows that it actually undershoots a little. That's probably undesirable, so you can make the sigmoid steeper to mitigate it. This, however, is where you start to get underflow and overflow problems.
So to work around that, and to make a better-behaved function all around, you could instead define a three-part piecewise polynomial:
y = 2x; x >= a
y = c0 + c1*x + c2*x**2; -a <= x < a
y = 0; x < -a
You can solve for the coefficients as a function of a by matching the value and the slope of both outer branches at x = a and x = -a (please double check my algebra before using this!):
c0 = a/2
c1 = 1
c2 = 1/(2*a)
The nice thing about this approach is that it will never overshoot and go negative. You can also make the half-width a reasonably small and still get decent numerics. But if you try to make it too small, c2 will obviously blow up.
In general, I consider the sigmoid function to be a bit of a blunt instrument. It works fine in many cases, but if you try to make it approximate a step function too closely, it's a nightmare. If you want to represent physical processes, I find polynomial fillet functions work more nicely.
It takes a little effort to derive such a polynomial, because you want it to be C1 continuous on both sides of the curve. So you have to construct the system of equations to solve for it as a function of the polynomial order and the specific relaxation you want (0.1 here).
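A minimal numpy sketch of that three-part function (the function name and the default a = 0.1 are just for illustration):

import numpy as np

def smooth_relu2x(x, a=0.1):
    # Quadratic blend on [-a, a], chosen so that value and slope match
    # y = 2x at x = a and y = 0 at x = -a (C1 continuous).
    c0, c1, c2 = a / 2.0, 1.0, 1.0 / (2.0 * a)
    blend = c0 + c1 * x + c2 * x**2
    return np.where(x >= a, 2.0 * x,
                    np.where(x < -a, 0.0, blend))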
My goto has generally been to consult the table of activation functions on wikipedia: https://en.wikipedia.org/wiki/Activation_function
I've had good luck with sigmoid and the hyperbolic tangent, scaling them such that we can choose the lower and upper values as well as choosing the location of the activation on the x-axis and the steepness.
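For instance, a scaled hyperbolic tangent with all four of those knobs might look like this (a sketch with made-up names, not an OpenMDAO-specific API):

import numpy as np

def smooth_step(x, lower, upper, x0, steepness):
    # Smooth transition from `lower` to `upper`, centered at x0;
    # `steepness` controls how sharp the activation is.
    return lower + 0.5 * (upper - lower) * (1.0 + np.tanh(steepness * (x - x0)))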
Dymos uses a vectorization that I think is similar to OpenConcept's, and I've had success with numpy.where there as well, providing derivatives for each possible "branch" taken. It is true that you may have issues with derivative mismatches if you have an analysis point right on the transition, but I've often had success despite that. If the derivative at the transition becomes a hindrance, then implementing a sigmoid or ReLU is more appropriate.
If x is of a magnitude such that it can cause overflows, consider applying units or using scaling to put it within reasonable limits if you cannot bound it directly.
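On the overflow point specifically: the logistic function can be evaluated so that exp never sees a large positive argument, which avoids the overflow entirely (a sketch; scipy.special.expit implements the same idea):

import numpy as np

def stable_sigmoid(x):
    # 1/(1 + exp(-x)), evaluated so the argument of exp is never positive:
    # for x >= 0 use the textbook form; for x < 0 use exp(x)/(1 + exp(x)).
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1.0 + ex)
    return out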

Julia not solving a sparse linear system

Introduction
I'm doing research in computational contact mechanics, in which I try to solve a PDE using a finite difference method. Long story short, I need to solve a linear system like Ax = b.
The suspects
In the problem, the matrix A is sparse, and so I defined it accordingly. On the other hand, both x and b are dense arrays.
In fact, x is defined as x = A\b, the potential solution of the problem.
So the least one might expect from this solution is that A*x is close to b in some sense. Great is my surprise when I find that
julia> norm(A*x-b) # Frobenius or 2-norm
5018.901093242197
The vector x does not solve the system! I've tried a lot of tricks to discover what is going on, but no clues as of now. My first candidate is that I've found a bug; however, I need more evidence to make this assertion.
The hints
Here are some tests that I've done to try to pinpoint the error
If you convert A to dense, the solution changes completely, and in fact it returns the correct solution.
I have repeated the process above in MATLAB, and it seems to work well with both sparse and dense matrices (that is, the sparse version does not agree with Julia's).
Not all sparse matrices cause a problem. I have tried other initial conditions and the solver seems to work quite well. I am not able to predict what property of the matrix could be causing this discrepancy. However:
A has a condition number of 120848.06, which is quite high, although MATLAB doesn't seem to complain. Also, the absolute error of the computed solution with respect to the real solution is huge.
How to reproduce this "bug"
Download the .csv files in the following link
Run the following code in the folder of the files (install the packages if necessary):
using DelimitedFiles, LinearAlgebra, SparseArrays;
A = readdlm("A.csv", ',');
b = readdlm("b.csv", ',');
x = readdlm("x.csv", ',');
A_sparse = sparse(A);
println(norm(A_sparse\b - x)); # You should get something close to zero, x is the solution of the sparse implementation
println(norm(A_sparse*x - b)); # You should get something not close to zero, something is not working!
Final words
It might easily be the case that I'm missing something. Are there any other implementations apart from the usual A\b to test against?
To solve a sparse square system, Julia chooses to do a sparse LU decomposition. For the specific matrix A in the question, this decomposition is numerically ill-conditioned, as evidenced by cond(lu(A_sparse).U) == 2.879548971708896e64. This causes the solve routine to make numerical errors in turn.
A quick solution is to use a QR decomposition instead, by running x = qr(A_sparse)\b.
The solve or LU routines might need to be fixed to handle this case, or at least the maintainers need to know of this issue, so opening an issue on the GitHub repo might be good.
(This is a rewrite of my comment on the question.)

arithmetic library for tracking worst case error

Is there any library or tool that allows for knowing the maximum accumulated error in arithmetic operations?
For example if I make some iterative calculation ...
myVars = initialValues;
while (notEnded) {
myVars = updateMyVars(myVars)
}
... I want to know at the end not only the calculated values, but also the potential error (the range of possible values if the result of each individual operation took the range limits of each operand).
I have already written a Java class called EADouble.java (EA for Error Accounting) which holds and updates the maximum positive and negative errors along with the calculated value, for some basic operations, but I'm afraid I might be reinventing a square wheel.
Any libraries/whatever in Java/whatever? Any suggestions?
Updated on July 11th: Examined existing libraries and added link to sample code.
As commented by fellows, there is the concept of interval arithmetic, and there was a previous question (A good uncertainty (interval) arithmetic library?) on the topic. There are just a couple of small issues regarding my intent:
I care more about the "main" value than about the upper and lower bounds. However, adding that extra value to an open library should be straightforward.
Accounting for the error as an independent floating-point value might allow for finer accuracy (e.g. for addition, the upper bound would be incremented by just half a ULP instead of a whole ULP).
Libraries I had a look at:
ia_math (Java. Just would have to add the main value. My favourite so far)
Boost/numeric/Interval (C++, Very complex/complete)
ErrorProp (Java, accounts value, and error as standard deviation)
The sample code (TestEADouble.java) runs a ballistic simulation and a calculation of the number e without problems. However, those are not very demanding scenarios.
Probably way too late, but look at BIAS/Profil: http://www.ti3.tuhh.de/keil/profil/index_e.html
It is pretty complete and simple, accounts for computer error, and if your errors are centered it gives easy access to your nominal value (through Mid(...)).
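For illustration, the value-plus-worst-case-bounds idea described in the question can be sketched in a few lines of Python (a hypothetical class in the spirit of EADouble, not one of the libraries above; widening the bounds by a full ULP per operation is deliberately conservative):

from dataclasses import dataclass
import math

@dataclass
class EVal:
    # A main computed value plus guaranteed lower/upper bounds.
    val: float
    lo: float
    hi: float

    def __add__(self, other):
        val = self.val + other.val
        ulp = math.ulp(val)  # widen bounds to cover rounding of the result
        return EVal(val, self.lo + other.lo - ulp, self.hi + other.hi + ulp)

    def __mul__(self, other):
        val = self.val * other.val
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        ulp = math.ulp(val)
        return EVal(val, min(products) - ulp, max(products) + ulp)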

Function for returning a list of points on a Bezier curve at equal arclength

Someone somewhere has had to solve this problem. I can find many a great website explaining this problem and how to solve it. While I'm sure they are well written and make sense to math whizzes, that isn't me. And while I might understand in a vague sort of way, I do not understand how to turn that math into a function that I can use.
So I beg of you: if you have a function that can do this, in any language (sure, even Fortran or, heck, 6502 assembler), please help me out.
I'd prefer an analytical solution to an iterative one.
EDIT: Meant to specify that it's a cubic Bezier I'm trying to work with.
What you're asking for is the inverse of the arc length function. So, given a curve B, you want a function Linv(len) that returns a t between 0 and 1 such that the arc length of the curve between 0 and t is len.
If you had this function, your problem is really easy to solve. Let B(0) be the first point. To find the next point, you'd simply compute B(Linv(w)), where w is the "equal arclength" that you refer to. To get the point after that, just evaluate B(Linv(2*w)), and so on until n*w exceeds the total arc length of the curve.
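In Python-flavored pseudocode, assuming you somehow had B, Linv and the curve's total arc length in hand (all names hypothetical):

def equal_arclength_points(B, Linv, total_len, w):
    # Walk the curve in arc-length steps of w, mapping each target
    # length back to a parameter t through the inverse function Linv.
    points = []
    s = 0.0
    while s <= total_len:
        points.append(B(Linv(s)))
        s += w
    return points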
I've had to deal with this problem recently. I've come up with, or come across a few solutions, none of which are satisfactory to me (but maybe they will be for you).
Now, this is a bit complicated, so let me just give you the link to the source code first:
http://icedtea.classpath.org/~dlila/webrevs/perfWebrev/webrev/raw_files/new/src/share/classes/sun/java2d/pisces/Dasher.java. What you want is in the LengthIterator class. You shouldn't have to look at any other parts of the file. There are a bunch of methods that are defined in another file; to get to them, just cut out everything from /raw_files/ to the end of the URL. This is how you use it: initialize the object on a curve. Then, to get the parameter of a point with arc length L from the beginning of the curve, just call next(L) (to get the actual point, evaluate your curve at this parameter using de Casteljau's algorithm, or zneak's suggestion). Every subsequent call of next(x) moves you a distance of x along the curve compared to your last position. next returns a negative number when you run out of curve.
Explanation of the code: I needed a t value such that B(0) to B(t) would have length LEN (where LEN is known). I simply flattened the curve: subdivide the curve recursively until each piece is close enough to a line (you can test for this by comparing the length of the control polygon to the length of the segment joining the end points). You can compute the length of such a sub-curve as (controlPolyLength + endPointsSegmentLen)/2. Add all these lengths to an accumulator, and stop the recursion when the accumulator value is >= LEN. Now, call the last sub-curve C and let [t0, t1] be its domain. You know that the t you want satisfies t0 <= t < t1, and you know the length from B(0) to B(t0); call this value L0t0. So now you need to find a t such that C(0) to C(t) has length LEN-L0t0. This is exactly the problem we started with, but on a smaller scale. We could use recursion, but that would be horribly slow, so instead we just use the fact that C is a very flat curve. We pretend C is a line and compute the point at t using P = C(0) + ((LEN-L0t0)/length(C))*(C(1)-C(0)). This point doesn't actually lie on the curve, because it is on the line C(0)->C(1), but it's very close to the point we want. So we just solve Bx(t)=Px and By(t)=Py. This is just finding cubic roots, which has a closed-form solution, but I just used Newton's method. Now we have the t we want, and we can just compute C(t), which is the actual point.
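Here is a compact Python sketch of that flattening approach: subdivide until each piece is nearly straight, accumulate the length estimates, then interpolate linearly within the final piece. It omits the Newton polishing step described above, and all names are illustrative:

import numpy as np

def _split(p):
    # de Casteljau split of a cubic Bezier (4x2 array of control points) at t = 0.5.
    a = 0.5 * (p[:-1] + p[1:])
    b = 0.5 * (a[:-1] + a[1:])
    c = 0.5 * (b[:-1] + b[1:])
    return (np.array([p[0], a[0], b[0], c[0]]),
            np.array([c[0], b[1], a[2], p[3]]))

def _chord(p):
    return np.linalg.norm(p[3] - p[0])

def _poly(p):
    # Length of the control polygon.
    return sum(np.linalg.norm(p[i + 1] - p[i]) for i in range(3))

def point_at_length(p, target, tol=1e-9):
    # Approximate the point at arc length `target` along cubic Bezier `p`.
    p = np.asarray(p, dtype=float)
    stack = [p]   # sub-curves still to walk, in curve order
    acc = 0.0     # arc length consumed so far
    while stack:
        seg = stack.pop(0)
        chord, poly = _chord(seg), _poly(seg)
        if poly - chord > tol:
            stack[:0] = list(_split(seg))   # not flat enough: subdivide
            continue
        seg_len = 0.5 * (chord + poly)      # length estimate for a flat piece
        if acc + seg_len < target:
            acc += seg_len
            continue
        # `target` falls inside this nearly straight piece: lerp along it.
        s = (target - acc) / seg_len if seg_len > 0 else 0.0
        return seg[0] + s * (seg[3] - seg[0])
    return p[3]   # target exceeds total length: return the endpoint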
I should mention that a few months ago I skimmed through a paper that had another solution to this that found an approximation to the natural parameterization of the curve. The author has posted a link to it here: Equidistant points across Bezier curves

As a programmer how would you explain imaginary numbers?

As a programmer I think it is my job to be good at math, but I am having trouble getting my head around imaginary numbers. I have tried Google and Wikipedia with no luck, so I am hoping a programmer can explain it to me: give me an example of a number squared that is <= 0, some example usage, etc...
I guess this blog entry is one good explanation:
The key word is rotation (as opposed to direction for negative numbers, which are just as strange as imaginary numbers when you think about them: less than nothing?).
Like negative numbers modeling flipping, imaginary numbers can model anything that rotates between two dimensions "X" and "Y", or anything with a cyclic, circular relationship.
Problem: not only am I a programmer, I am a mathematician.
Solution: plow ahead anyway.
There's nothing really magical about complex numbers. The idea behind their inception is that there's something wrong with the real numbers. If you've got the polynomial x^2 + 4, it is never zero, whereas x^2 - 2 is zero twice. So mathematicians got really angry and wanted there always to be zeroes for polynomials of degree at least one (they wanted an "algebraically closed" field), and created an arbitrary number j such that j = sqrt(-1). All the rules sort of fall into place from there (though they are more accurately organized differently; specifically, you formally can't actually say "hey, this number is the square root of negative one"). If there's that number j, you can get multiples of j. And you can add real numbers to j, so then you've got complex numbers. The operations with complex numbers are similar to operations with binomials (deliberately so).
The real problem with complexes isn't in any of this, but in the fact that you can't define a system whereby you keep the ordinary rules for less-than and greater-than. So really, you get to the point where you don't define an ordering at all; it doesn't make sense in a two-dimensional space. So in all honesty, I can't actually answer "give me an example of a number squared that is <= 0", though "j" makes sense if you treat its square as a real number instead of a complex one.
As for uses, well, I personally used them most when working with fractals. The idea behind the Mandelbrot fractal is that it's a way of graphing the iteration z = z^2 + c and its divergence along the real-imaginary axes.
You might also ask why do negative numbers exist? They exist because you want to represent solutions to certain equations like: x + 5 = 0. The same thing applies for imaginary numbers, you want to compactly represent solutions to equations of the form: x^2 + 1 = 0.
Here's one way I've seen them being used in practice. In EE you are often dealing with functions that are sine waves, or that can be decomposed into sine waves. (See for example Fourier Series).
Therefore, you will often see solutions to equations of the form:
f(t) = A*cos(w*t)
Furthermore, you often want to represent functions that are shifted by some phase relative to this function. A 90 degree phase shift gives you a sin function.
g(t) = B*sin(w*t)
You can get any arbitrary phase shift by combining these two functions (called inphase and quadrature components).
h(t) = A*cos(w*t) + i*B*sin(w*t)
The key here is that in a linear system: if f(t) and g(t) solve an equation, h(t) will also solve the same equation. So, now we have a generic solution to the equation h(t).
The nice thing about h(t) is that it can be written compactly as
h(t) = C*exp(i*(w*t + theta))
using the fact that exp(i*w) = cos(w) + i*sin(w).
There is really nothing extraordinarily deep about any of this. It is merely exploiting a mathematical identity to compactly represent a common solution to a wide variety of equations.
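A quick numerical check of that identity in Python (the values are arbitrary):

import cmath
import math

C, w, theta = 1.5, 2 * math.pi * 50, 0.3   # amplitude, angular frequency, phase
t = 0.01

h = C * cmath.exp(1j * (w * t + theta))
# Real part: in-phase (cosine) component; imaginary part: quadrature (sine) component.
assert abs(h.real - C * math.cos(w * t + theta)) < 1e-12
assert abs(h.imag - C * math.sin(w * t + theta)) < 1e-12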
Well, for the programmer:
class complex {
public:
    double real;
    double imaginary;

    complex(double a_real) : real(a_real), imaginary(0.0) { }
    complex(double a_real, double a_imaginary)
        : real(a_real), imaginary(a_imaginary) { }

    complex operator+(const complex &other) const {
        return complex(
            real + other.real,
            imaginary + other.imaginary);
    }

    complex operator*(const complex &other) const {
        return complex(
            real*other.real - imaginary*other.imaginary,
            real*other.imaginary + imaginary*other.real);
    }

    bool operator==(const complex &other) const {
        return (real == other.real) && (imaginary == other.imaginary);
    }
};
That's basically all there is. Complex numbers are just pairs of real numbers for which special overloads of +, * and == get defined. And these operations really are just defined like this. Then it turns out that these pairs of numbers with these operations fit in nicely with the rest of mathematics, so they get a special name.
They are not so much numbers as in "counting", but more as in "can be manipulated with +, -, *, ... and don't cause problems when mixed with 'conventional' numbers". They are important because they fill the holes left by the real numbers, like the fact that there's no real number whose square is -1. Now you have complex(0, 1) * complex(0, 1) == -1.0, which is a helpful notation, since you don't have to treat negative numbers specially anymore in these cases. (And, as it turns out, basically all other special cases are no longer needed when you use complex numbers.)
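For comparison, Python ships this out of the box as its complex type, with the same pair-of-reals behavior:

>>> 1j * 1j
(-1+0j)
>>> 1j * 1j == -1.0
True
>>> complex(3, 5) + complex(2, -1)
(5+4j)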
If the question is "Do imaginary numbers exist?" or "How do imaginary numbers exist?", then it is not a question for a programmer. It might not even be a question for a mathematician, but rather for a metaphysician or philosopher of mathematics, although a mathematician may feel the need to justify their existence in the field. It's useful to begin with a discussion of how numbers exist at all (quite a few mathematicians who have approached this question are Platonists, FYI). Some (as the early Whitehead did) insist that imaginary numbers are merely a practical convenience. But then, if imaginary numbers are merely a practical convenience, what does that say about mathematics? You can't just explain away imaginary numbers as a mere practical tool or a pair of real numbers without having to account for both pairs and the general consequences of their being "practical". Others insist on the existence of imaginary numbers, arguing that their non-existence would undermine physical theories that make heavy use of them (QM is knee-deep in complex Hilbert spaces). The problem is beyond the scope of this website, I believe.
If your question is much more down to earth e.g. how does one express imaginary numbers in software, then the answer above (a pair of reals, along with defined operations of them) is it.
I don't want to turn this site into math overflow, but for those who are interested: Check out "An Imaginary Tale: The Story of sqrt(-1)" by Paul J. Nahin. It talks about all the history and various applications of imaginary numbers in a fun and exciting way. That book is what made me decide to pursue a degree in mathematics when I read it 7 years ago (and I was thinking art). Great read!!
The main point is that you add numbers which you define to be solutions of quadratic equations like x^2 = -1. Name one solution to that equation i; the computation rules for i then follow from that equation.
This is similar to defining negative numbers as the solution of equations like 2 + x = 1 when you only knew positive numbers, or fractions as solutions to equations like 2x = 1 when you only knew integers.
It might be easiest to stop trying to understand how a number can be a square root of a negative number, and just carry on with the assumption that it is.
So (using the i as the square root of -1):
(3+5i)*(2-i)
= (3+5i)*2 + (3+5i)*(-i)
= 6 + 10i - 3i - 5i*i
= 6 + (10-3)*i - 5*(-1)
= 6 + 7i + 5
= 11 + 7i
works according to the standard rules of maths (remembering that i squared equals -1 on line four).
An imaginary number is a real number multiplied by the imaginary unit i. i is defined as:
i == sqrt(-1)
So:
i * i == -1
Using this definition you can obtain the square root of a negative number like this:
sqrt(-3)
== sqrt(3 * -1)
== sqrt(3 * i * i) // Replace '-1' with 'i squared'
== sqrt(3) * i // Square root of 'i squared' is 'i' so move it out of sqrt()
And your final answer is the real number sqrt(3) multiplied by the imaginary unit i.
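You can check this with Python's cmath module, which returns exactly sqrt(3) times the imaginary unit:

>>> import cmath, math
>>> cmath.sqrt(-3)
1.7320508075688772j
>>> math.sqrt(3)
1.7320508075688772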
A short answer: Real numbers are one-dimensional, imaginary numbers add a second dimension to the equation and some weird stuff happens if you multiply...
If you're interested in finding a simple application and you're familiar with matrices, it's sometimes useful to use complex numbers to transform a perfectly real matrix into a triangular one in the complex space, which makes computation on it a bit easier. The result is, of course, perfectly real.
Great answers so far (really like Devin's!)
One more point:
One of the first uses of complex numbers (although they were not called that at the time) was as an intermediate step in solving cubic equations (equations of the third degree).
link
Again, this is purely an instrument that is used to answer real problems with real numbers having physical meaning.
In electrical engineering, the impedance Z of an inductor is j*w*L, where w = 2*pi*f is the angular frequency and j (= sqrt(-1)) means the voltage leads the current by 90 degrees, while for a capacitor Z = 1/(j*w*C) = -j/(w*C), so that it lags a simple resistor by 90 degrees.
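A quick numeric illustration in Python (the component values are arbitrary):

import cmath
import math

f = 50.0              # frequency, Hz
w = 2 * math.pi * f   # angular frequency
L, C = 0.1, 1e-6      # inductance (H), capacitance (F)

Z_L = 1j * w * L        # inductor impedance, phase +90 degrees (leads)
Z_C = 1 / (1j * w * C)  # capacitor impedance, equal to -1j/(w*C), phase -90 degrees (lags)

print(cmath.phase(Z_L))   #  1.5707... (+pi/2)
print(cmath.phase(Z_C))   # -1.5707... (-pi/2)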
