I'm trying to solve a 6th-order nonlinear PDE in 1D with fixed boundary values (extended Fisher-Kolmogorov, EFK). After failing with FTCS, my next attempt is the method of lines (MoL), with either central differences or FEM in space, using e.g. LSODES. How can this be implemented? I've been using Python/C + OpenMP so far, but I need some pointers to do this efficiently.
EFK with additional 6th order term:
u_t = d u_6x - g u_4x + u_xx + u - u^3
where d, g are real coefficients.
u(x,0) = exp(-x^2/16),
u_x = 0 on the boundary.
The domain is [0,300] and dx << 1, since I'm looking for pattern formation (depending on the values of d and g).
I hope this is sufficient information.
All PDE solutions like this will ultimately end up being expressed using linear algebra in your program, so the trick is to figure out how to get the PDE into that form before you start coding.
Finite element methods usually begin with a weighted residual method. Non-linear equations will require a linear approximation and iterative methods like Newton-Raphson. I would recommend that you start there.
Yours is a transient solution, so you'll have to do time stepping. You can either use an explicit method and live with the small time steps that stability limits will demand or an implicit method, which will force you to do a matrix inversion at each step.
I'd do a Fourier analysis of the linear piece first to get an idea of the stability requirements. For a single mode exp(ikx) the linear part of your equation gives a growth rate sigma(k) = 1 - k^2 - g*k^4 - d*k^6, so the stiffest resolvable mode (k ~ pi/dx) forces an explicit scheme's stable time step to shrink roughly like dx^6, which is exactly why a stiff implicit integrator such as LSODES is attractive here.
The only term in that equation that makes it non-linear is the last one: -u^3. Have you tried starting by leaving that term off and solving the linear equation that remains?
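If it helps, here is a minimal method-of-lines sketch in Python/SciPy: central differences in space, with a stiff BDF integrator playing the role of LSODES. The grid size, the illustrative values of d and g, and the even-reflection (ghost point) treatment of the u_x = 0 boundaries are my assumptions, not something fixed by the question; drop the -u**3 term to start with the linear problem, as suggested above.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.sparse import diags

    d, g = 1.0, 1.0                  # illustrative coefficients
    L, N = 300.0, 3000               # domain [0, 300], dx = 0.1; refine as needed
    x = np.linspace(0.0, L, N + 1)
    dx = x[1] - x[0]

    def rhs(t, u):
        # u_x = 0 at both ends via even reflection (3 ghost points per side),
        # then repeated central second differences give u_xx, u_4x, u_6x.
        up = np.concatenate((u[3:0:-1], u, u[-2:-5:-1]))
        d2 = (up[2:] - 2.0 * up[1:-1] + up[:-2]) / dx**2
        d4 = (d2[2:] - 2.0 * d2[1:-1] + d2[:-2]) / dx**2
        d6 = (d4[2:] - 2.0 * d4[1:-1] + d4[:-2]) / dx**2
        return d * d6 - g * d4[1:-1] + d2[2:-2] + u - u**3

    # Banded Jacobian sparsity (7-point stencil) keeps the implicit steps cheap,
    # which is the same structure LSODES exploits.
    sparsity = diags([1] * 7, offsets=list(range(-3, 4)), shape=(N + 1, N + 1))

    u0 = np.exp(-x**2 / 16.0)        # initial condition from the question
    sol = solve_ivp(rhs, (0.0, 50.0), u0, method="BDF",
                    jac_sparsity=sparsity, rtol=1e-6, atol=1e-9)

A FEM variant would only change how rhs is assembled; the time integration call stays the same.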
UPDATE: Some additional thoughts prompted by comments:
I understand how important the u^3 term is. Diffusion is a 2nd order derivative w.r.t. space, so I wouldn't be so certain that a 6th order equation will follow suit. My experience with PDEs comes from branches of physics that don't have 6th order equations, so I honestly don't know what the solution might look like. I'd solve the linear problem first to get a feel for it.
As for stability and explicit methods, it's dogma that the stability limits placed on time step size make them likely to fail, but the probability isn't 1.0. I think map-reduce and cloud computing might make an explicit solution more viable than it was even 10-20 years ago. Explicit dynamics have become a mainstream way to solve difficult statics problems, because they don't require a matrix inversion.
Does Julia support boundary value differential algebraic equations? I have an implicit ODE with a variable mass matrix that is sometimes singular, so I have to use the DAEProblem. My problem is two coupled second order ODEs for x1(t) and x2(t) that I have transformed into four first order equations by setting x1'(t) = y1(t) and x2'(t)=y2(t). I have values for x1 and x2 at the start and end of my domain, but don't have values for y1 or y2 anywhere, so I have a need for both a DAE and a BVP.
This GitHub post suggests that this is possible, but I'm afraid I don't understand the machinery well enough to see how to couple DAEProblem with BVProblem.
I've had success writing multiple-shooting code following Numerical Recipes to solve the problem, but it's fairly clunky. Ultimately, I would like to pair this with DiffEqFlux (I have quite a few measurements of x1 and x2 along the domain and don't know the exact form of the differential equation), but I suspect that would be much simpler if there were a more direct way to link BVProblem with DAEProblem.
Just go directly to DiffEqFlux, since parameter estimation encompasses BVPs. Write the boundary conditions as part of the loss function on a DAEProblem (i.e. that the starting values should equal x and the final values should equal y), and optimize the initial conditions at the same time as any parameters. Optimizing only the initial conditions and not any parameters is equivalent to a single shooting BVP solver in this form, and this allows simultaneous parameter estimation. Or use the multiple shooting layer functions to do multiple shooting. Or use a BVProblem with a mass matrix.
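If it helps to see the shape of that idea outside Julia, here is a rough single-shooting sketch in Python/SciPy with the boundary conditions written as a residual to be driven to zero; the toy dynamics and boundary values are placeholders, not the poster's system, and the DiffEqFlux version would put the same structure inside a loss over a DAEProblem.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    T = 1.0
    x1_0, x2_0 = 1.0, 0.0       # known values at t = 0
    x1_T, x2_T = 0.3, -0.2      # known values at t = T

    def rhs(t, u):
        x1, x2, y1, y2 = u      # y1 = x1', y2 = x2'
        return [y1, y2, -2.0 * x1 + x2, x1 - 2.0 * x2]   # placeholder dynamics

    def bc_residual(z):
        y1_0, y2_0 = z          # unknown initial slopes: the shooting variables
        sol = solve_ivp(rhs, (0.0, T), [x1_0, x2_0, y1_0, y2_0], rtol=1e-8)
        return [sol.y[0, -1] - x1_T, sol.y[1, -1] - x2_T]

    fit = least_squares(bc_residual, x0=[0.0, 0.0])
    # To estimate model parameters at the same time, append them to z and add
    # data-mismatch terms (the measured x1, x2 along the domain) to the residual.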
For any more help, you'll need to share the code you tried and what didn't work; it's not anything more difficult than that, so it's hard to give more specific advice than just "use constructor x".
Nearly every game uses some form of a game loop. Gafferongames has a great article on how to make a well-designed game loop: http://gafferongames.com/game-physics/fix-your-timestep/
In his code, he uses integrate( state, t, deltaTime );, where I believe state contains position, velocity, and acceleration of the object. He uses RK4 to integrate it from t to t+deltaTime.
My question is, why use a numerical integration technique like RK4, when you can use kinematics equations (here) to find the exact value?
These equations work when acceleration is constant. It seems rare that you would have a changing acceleration within a timestep. It seems like RK4 is a lower performance, lower accuracy, more complex solution.
Edit: I think you could add a "jerk" value to objects and still find exact expressions for acceleration, velocity, and displacement, if you really wanted to.
Edit 2: Well, I did not read his "Integration Basics" article too carefully. I think he's modelling a damper and spring, which do cause non-constant acceleration within a timestep.
As soon as you add things that many game designers want, like (velocity dependent) drag, position dependent forces, etc. the equations are no longer solvable exactly.
So, if you're happy to limit your forces to those the kinematic equation can handle, then go with it. If you want something flexible, then numerical integration is the only way to go.
Note: If you treat the forces as constant over a time interval when they are not really constant - then you are actually using a form of numerical integration. And it is an inaccurate form of integration too. So why not use a tried and proven numerical method instead? RK4 is one of many such methods.
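For concreteness, here is a minimal RK4 step in Python (the gafferongames code is C++) applied to a spring-damper, i.e. a force that depends on velocity as well as position, which the constant-acceleration formulas cannot handle exactly; the constants are illustrative.

    import numpy as np

    k, c, m = 10.0, 0.5, 1.0           # spring stiffness, damping, mass

    def deriv(state):
        x, v = state
        a = (-k * x - c * v) / m       # acceleration depends on x and v
        return np.array([v, a])

    def rk4_step(state, dt):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * dt * k1)
        k3 = deriv(state + 0.5 * dt * k2)
        k4 = deriv(state + dt * k3)
        return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    state = np.array([1.0, 0.0])       # start at x = 1, at rest
    for _ in range(100):
        state = rk4_step(state, 0.01)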
Approximating acceleration (derivatives, really) as constant within a time step is how numerical integration methods work. When the derivatives are not constant, you need to consider what sort of error you introduce by treating them as constant.
Imagine breaking a time range T up into N equal steps of width h=T/N. Now integrate the dynamical equations stepwise. With RK4, the local error per-step is O(h^5) giving a global error of O(h^4).
Using the kinematical equations as you propose, we can assess the error by considering the Taylor expansion of the position, keeping terms to second order. The position will have error of O(h^3) introduced at each step, corresponding to where you truncate the expansion. This gives local error O(h^3) and global error O(h^2).
Based on the asymptotic error, the error from RK4 goes to zero much more rapidly than that of the kinematical equations. It's simply more accurate. RK4 also achieves very good accuracy for the number of function evaluations that need to be done.
I'm trying to design a nonlinear fitness function where I maximize variable A and minimize the variable B. The issue is that maximizing A is much more important at single digit values, almost logarithmic. B needs to be minimized and in contrast to A, it becomes less important when small (less than one) and more important when it's larger (>1), so exponential decay.
The main goal is to optimize A, so I guess an analog is A=profits, B=costs
Should I aim to keep everything positive so that I can use roulette wheel selection, or would it be better to use a rank/tournament kind of system? The purpose of my algorithm is shape optimization.
Thanks
When considering a multi-objective problem the goal is usually to identify all solutions that lie on the Pareto curve - the Pareto optimal set. Have a look here for a 2-dimensional visual example. When the algorithm completes, you want a set of solutions that are not dominated by any other solution. You therefore need to define a Pareto ranking mechanism to take both objectives into account - for a more in-depth explanation, as well as links to even more reading, go here.
With this in mind, in order to effectively explore all solutions along the Pareto front you do not want an implementation that encourages premature convergence, otherwise your algorithm will only explore the search space in one specific area of the Pareto curve. I would implement a selection operator that keeps all members of each iteration's optimal set of solutions, that is, all solutions which are not dominated by another, plus a parameter-controlled percentage of other solutions. This way you encourage exploration all along the Pareto curve.
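As a sketch (in Python) of the non-dominated filter described above, with individuals reduced to their (A, B) objective values, assuming A is maximized and B is minimized; the representation and the top-up percentage are left out:

    def dominates(p, q):
        # p dominates q if it is no worse in both objectives and strictly better in one
        no_worse = p[0] >= q[0] and p[1] <= q[1]
        strictly_better = p[0] > q[0] or p[1] < q[1]
        return no_worse and strictly_better

    def non_dominated(population):
        return [p for p in population
                if not any(dominates(q, p) for q in population if q is not p)]

The selection operator would keep non_dominated(population) each generation and fill the rest of the next population with a controlled fraction of the remaining individuals.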
You also need to ensure your mutation and crossover operators are tuned correctly too. With any novel application of Evolutionary Algorithms, part of the problem is trying to identify an optimal parameter set for the problem domain... this is where it gets really interesting!!
The description is very vague, but assuming that you actually have an idea of what the function should look like and you're just wondering whether you need to modify it so that proportional selection can be used easily, then no. Regardless of fitness function, you should probably default to using something like tournament selection. Controlling selection pressure is one of the most important things you have to do in order to get consistently good results, and roulette wheel selection doesn't allow you that control. You typically get enormous pressure very early, which drives premature convergence. That might be preferable in a few cases, but it's not where I'd start my investigations.
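A minimal tournament selection sketch in Python; the tournament size k is the knob that controls selection pressure:

    import random

    def tournament_select(population, fitness, k=3):
        contenders = random.sample(population, k)
        return max(contenders, key=fitness)   # maximizing fitness

Larger k means stronger pressure toward the current best; k = 2 is a gentle default.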
Suppose I have a system of springs, not just one, but for example a 3-degree-of-freedom system of springs connected in some way with each other. I can write a system of differential equations for it, but it is impossible to solve it in a general way. The question is: are there any papers or methods for filtering such complex oscillations, in order to get rid of the oscillations and recover the real signal as much as possible? For example, if I connect 3 springs in some way and push them to start the vibrations, or put some weight on them, and then record the vibrations from each spring, are there any filtering methods that make it easy to determine the weight of each mass (in the case that some mass is put on top)? I am interested in filtering complex spring-like systems.
Three springs, six degrees of freedom? This is a trivial problem to solve using finite element methods and numerical integration. It's a system of six coupled ODEs. You can apply any form of numerical integration, such as 5th order Runge-Kutta.
I'd recommend doing an eigenvalue analysis of the system first to find out something about its frequency characteristics and normal modes. I'd also do an FFT of the dynamic forces you apply to the system. You don't mention any damping, so if you happen to excite your system at or near one of its natural frequencies (resonance) you might see some interesting behavior.
The dynamic equation has this general form (sorry, I don't have LaTeX here to make it look nice):
Ma + Kx = F
where M is the mass matrix (diagonal), a is the acceleration (2nd derivative of displacements w.r.t. time), K is the stiffness matrix, and F is the forcing function.
If you're saying you know the response, you'll have to pre-multiply by the transpose of the response function and try to solve for M. It's diagonal, so you have a shot at it.
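As a sketch of the eigenvalue analysis suggested above, in Python/SciPy; the 3-mass chain values for M and K are made up for illustration, not the poster's system. It solves the generalized problem K*phi = omega^2 * M*phi.

    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([1.0, 2.0, 1.0])                 # diagonal mass matrix
    K = np.array([[ 20.0, -10.0,   0.0],
                  [-10.0,  25.0, -15.0],
                  [  0.0, -15.0,  15.0]])        # stiffness matrix

    w2, modes = eigh(K, M)          # eigenvalues are omega^2, columns are mode shapes
    freqs_hz = np.sqrt(w2) / (2.0 * np.pi)
    print(freqs_hz)                 # natural frequencies, to compare against the forcing FFT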
Are you connecting the springs in such a way that the behavior of the system is approximately linear? (e.g. at least as close to linear as musical instrument springs/strings?) Is this behavior consistent over time? (e.g. the springs don't melt or break.) If so, LTI (linear time invariant) systems theory might be applicable. Given enough measurements versus the number of degrees of freedom in the LTI system, one might be able to estimate a pole-zero plot of the system response and go from there. Or something like a linear predictor might be useful.
Actually it is possible to solve the resulting system of differential equations as long as you know the masses, etc.
The standard approach is to use a Laplace Transform. In particular you start with a set of linear differential equations. Add variables until you have a set of first order linear differential equations. (So if you have y'' in your equation, you'd add the equation z = y' and replace y'' with z'.) Rewrite this in the form:
v' = Av + w
where v is a vector of variables, A is a matrix, and w is a constant vector. (An example of something that winds up in w is gravity.)
Now apply a Laplace transform to get
s L(v) - v(0) = A L(v) + w/s
Solve it to get
L(v) = inv(s I - A)(v(0) + w/s)
where inv inverts a matrix and I is the identity matrix. Apply the inverse Laplace transform (if you read up on Laplace transforms you can find tables of inverses of common types of functions - getting a complete list of the functions you actually encounter shouldn't be that hard), and you have your solution. (Be warned, these computations quickly get very complex.)
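If you end up doing this numerically rather than symbolically, the same closed-form solution can be evaluated with a matrix exponential instead of an inverse Laplace transform: for constant w and invertible A, v(t) = expm(A t) v(0) + inv(A) (expm(A t) - I) w. A quick Python sketch with placeholder values for A, w and v(0):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[ 0.0,  1.0],
                  [-4.0, -0.5]])       # example: one damped oscillator in first-order form
    w = np.array([0.0, -9.81])         # e.g. gravity entering the velocity equation
    v0 = np.array([1.0, 0.0])

    def v(t):
        eAt = expm(A * t)
        return eAt @ v0 + np.linalg.solve(A, (eAt - np.eye(len(v0))) @ w)

    print(v(2.0))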
Now you have the ability to take a particular setup and solve for the future behavior. You also have the ability to (if you do things really carefully) figure out how the model responds to a small perturbation in parameters. But your problem is that you don't know the parameters to use. However you do have the ability to measure the positions in the system at repeated times.
If you put this together, what you can do is this. Measure your position at a number of points. First estimate all of the initial values of the parameters, and then all of the values a second later. You can adjust your parameters (using Newton's method) to come close enough to the values a second later. Take the measurements from 5 seconds later and use that initial estimate as your starting point to refine your calculations for what is happening 5 seconds later. Repeat with longer intervals to get all of your answers.
Writing and debugging this should take you some time. :-) I would strongly recommend investigating how much of this Mathematica knows how to do for you already...
There is a good compilation of trajectory math on Wikipedia.
But I need to calculate a trajectory under non-uniform conditions, e.g. the wind speed changes above a certain altitude. (This cannot be modeled easily.)
1. Should I calculate the projectile's velocity vector e.g. every second, and then compute the next second based on that (with a small enough time delta)?
2. Or should I try to split the trajectory into pieces based on the parameters (e.g. the wind is vwind1 between y1 and y2, so I calculate for y<y1, y1≤y<y2 and y2≤y separately)?
3. Or try to build and solve a symbolic equation at run time, with all the parameters modeled? (Is this completely utopian? Traditional programming languages aren't too good at manipulating symbols.)
4. Or something completely different...?
Are there good languages / frameworks for handling symbolic math?
I'd suggest an "improved" first approach: solve the differential equations of motion numerically with e.g. the classic Runge-Kutta method.
The nice part is that with these algorithms, once you correctly set up your framework, you just have to write an "evaluate" function for the motion law (which can be almost anything - you don't need to restrict yourself to particular forces), and everything should work fine (as long as the integration step is adequate).
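As a sketch of that approach in Python/SciPy (solve_ivp's default integrator is a Runge-Kutta method): the deriv function below is the "evaluate" function, and the altitude-dependent wind is just an ordinary function inside it. The mass, drag coefficient and wind profile are made up for illustration.

    import numpy as np
    from scipy.integrate import solve_ivp

    m, cd, g = 1.0, 0.05, 9.81

    def wind(y):
        return 5.0 if y > 100.0 else 2.0     # wind speed changes above some altitude

    def deriv(t, s):
        x, y, vx, vy = s
        vrel_x = vx - wind(y)                # drag acts on velocity relative to the air
        v = np.hypot(vrel_x, vy)
        ax = -cd * v * vrel_x / m
        ay = -g - cd * v * vy / m
        return [vx, vy, ax, ay]

    def hit_ground(t, s):
        return s[1]
    hit_ground.terminal = True
    hit_ground.direction = -1

    sol = solve_ivp(deriv, (0.0, 120.0), [0.0, 0.0, 50.0, 50.0],
                    events=hit_ground, max_step=0.05)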
If the conditions really are cleanly divided into two domains like that, then the second approach is probably best. The first approach is both imprecise and overkill, and the third, if done right, will wind up being equivalent to the second.