Which method does OpenFOAM use to calculate the pressure and correct the velocities?

I'm trying to run the "cavity" case, but I've assigned a Dirichlet condition (p = 0) on the moving wall, and I've noticed that the pressure values seem to change depending on the number of nodes (though I'm not quite sure yet). In any case, I would like to see the algorithm OpenFOAM uses to perform the velocity correction that enforces the incompressibility condition.
Does anyone have documentation about this algorithm? Or do you know where in the OpenFOAM libraries I could get a better idea of how it's done internally?
I know it's the PISO method but I'd like more details.
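For orientation, the core of any PISO/SIMPLE-type algorithm is a pressure-correction (projection) step: solve a pressure Poisson equation driven by the divergence of an intermediate velocity field, then correct the velocity with the pressure gradient so that the result is divergence-free. The sketch below demonstrates only that idea, on a periodic domain with spectral derivatives; it is not OpenFOAM's implementation (the actual PISO loop for the cavity tutorial lives in the C++ icoFoam solver source, applications/solvers/incompressible/icoFoam/icoFoam.C).

import numpy as np

# Minimal sketch of the pressure-correction step behind PISO-type algorithms.
n = 64
k = 2 * np.pi * np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing='ij')
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                 # avoid dividing by zero on the mean mode

rng = np.random.default_rng(1)
u_star = rng.standard_normal((n, n))           # intermediate (non-solenoidal) velocity
v_star = rng.standard_normal((n, n))

U, V = np.fft.fft2(u_star), np.fft.fft2(v_star)
div_hat = 1j * kx * U + 1j * ky * V            # divergence of u* in Fourier space
p_hat = -div_hat / k2                          # solve  lap(p) = div(u*)
u = np.fft.ifft2(U - 1j * kx * p_hat).real     # velocity correction  u = u* - grad(p)
v = np.fft.ifft2(V - 1j * ky * p_hat).real

div = np.fft.ifft2(1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)).real
print(abs(div).max())                          # ~1e-13: corrected field is divergence-free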

Min Max component, objective function

I would like to perform some optimizations by minimizing the maximum of a specific path variable within Dymos, or the maximum of the absolute value of such a variable.
In linear programming methods, this can be done by introducing slack variables.
Do you know if this has been attempted before with Dymos, or if there was a reason not to include it?
I understand gradient-based methods are not entirely suitable for these problems, though I think some "functions" can be introduced to mitigate this.
For example, in the space shuttle reentry problem from [Betts][1], used as a [test example][2] in Dymos, the original source contains a variant in which the maximum heat flux is minimized. Such functionality could be implemented with a "loc" argument as:
phase.add_objective('q_c', loc='max')
[1]: J. T. Betts. Practical Methods for Optimal Control and Estimation Using Nonlinear Programming. Society for Industrial and Applied Mathematics, second edition, 2010. doi:10.1137/1.9780898718577. URL: https://epubs.siam.org/doi/abs/10.1137/1.9780898718577
[2]: https://openmdao.github.io/dymos/examples/reentry/reentry.html
This has been done with pseudospectral methods before. Dymos currently doesn't have any direct way of implementing this, for a few reasons:
As you said, doing this naively can introduce discontinuous gradients that confuse the optimizer. When the node at which the maximum occurs switches, this tends to cause a sharp edge discontinuity in the gradient.
Since the pseudospectral methods are discrete, you cannot guarantee that the maximum will occur at a node. It's often fine to assume it does, but sometimes your requirements might demand more precision.
There are two possible ways to get around this.
The KSComp in OpenMDAO can be used as a "differentiable maximum". Add one after the trajectory, feed it the timeseries data for the output of interest, and set it up such that it returns a smooth approximation to the maximum. The KS function is a bit conservative, so it won't pick out the precise maximum, but depending on the value of the rho option it can be tuned to get pretty close.
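A minimal, self-contained sketch of that usage (standalone here; in practice the 'g' input would be connected to the phase's timeseries output, and the subsystem name 'ks' is arbitrary):

import numpy as np
import openmdao.api as om

n = 20  # number of timeseries nodes (assumed)
p = om.Problem()
# KSComp aggregates a vector into a smooth, conservative approximation of its
# maximum; a larger rho tracks the true maximum more closely.
p.model.add_subsystem('ks', om.KSComp(width=n, rho=100.0))
p.setup()
p.set_val('ks.g', np.sin(np.linspace(0.0, np.pi, n)).reshape(1, n))  # stand-in data
p.run_model()
print(p.get_val('ks.KS'))  # slightly above the true maximum of 1.0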
When a more precise value of a maximum is needed, it's pretty common to set up a trajectory such that a phase ends when the maximum or minimum is reached.
If the variable whose maximum is being sought is a state, this can be done by adding a boundary constraint on the rate source for that state.
This ensures that the maximum occurs at the first or last node in the phase (depending on whether it is an initial or final boundary constraint), which lets you capture its value more accurately.
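For instance, with assumed names (a state 'q' whose rate source is the ODE output 'q_dot', and 'phase' an existing Dymos phase), forcing the rate to zero at the end of the phase pins the extremum to the final node:

# hypothetical names; forces d(q)/dt = 0 at the final node
phase.add_boundary_constraint('q_dot', loc='final', equals=0.0)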
If the variable being sought is not a state, it's possible to use the polynomials that fit the states and controls in a phase to interpolate the variable of interest, and then take the time derivative of that polynomial to get a reasonably good approximation of its rate. The master branch of dymos has a method add_timeseries_rate_output that does this. And soon, within a few weeks hopefully, we'll add add_boundary_rate_constraint so that these interpolated rates can easily be used as boundary constraints.
In the meantime, you should be able to achieve this by adding the timeseries rate output and then manually applying the OpenMDAO method 'add_constraint' to the resulting timeseries output, using either indices=[0] or indices=[-1] to treat it as an initial or final constraint.
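Something along these lines (the output 'q_c', the phase path 'traj.phase0', and the 'q_c_rate' naming of the resulting timeseries output are assumptions for illustration):

phase.add_timeseries_rate_output('q_c')  # interpolated d(q_c)/dt in the timeseries
# plain OpenMDAO constraint on the last node only (use indices=[0] for the first node)
p.model.add_constraint('traj.phase0.timeseries.q_c_rate', indices=[-1], equals=0.0)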
This is a common enough request that we'll add some documentation on how to achieve this behavior using both the KSComp approach and the boundary constraint approach.
Personally I'm not as much of a fan of KSComp, because I've had trouble getting those types of objectives to converge in the past. I've used the slack-variable approach and that has worked well. In the following example, we take a guess at the rotor power in the static analysis, and then we run a trajectory and get the actual rotor power during the mission. The objective was to minimize aircraft weight, so a large static power allocation costs more weight. The constraint shown below prevents us from decreasing our updated guess of static rotor power below the maximum power required during the trajectory.
p.model.add_subsystem(
    'static_power_check',
    om.ExecComp('Power_check = Power_ODE - Power_statics',
                Power_check={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_ODE={'value': np.ones(nn_timeseries_main_tx), 'units': 'kW'},
                Power_statics={'value': 0.0, 'units': 'kW'}),
    promotes_inputs=[('Power_ODE', 'hop0.main_phase.timeseries.Power_R'),
                     ('Power_statics', 'Power_{rotor,slack}')],
    promotes_outputs=['Power_check'])
p.model.add_constraint('Power_check', upper=0, ref=1)
The constraint on the slack variable effectively helped us ensure that our slack rotor power matched the maximum rotor power during the mission. This allowed us to get the right sizes for the rotor parts (i.e. motors).

How do I warm-start a Dymos optimization problem?

My problem:
I have a system with 4 states and 4 (static) parameters that I would like to optimize. The parameters are initialized to known values that result in trajectories that respect the constraints. The states are initialized to a constant value. To verify the model, I first run the problem with the parameters set to opt=False. Once verified, I rebuild the OpenMDAO problem with opt=True and run the optimizer.
I'm running a study to evaluate how each parameter affects the system, the cost function, etc., and how the initial guess impacts the optimization (ideally, it doesn't). The problem I encounter is that some initial guesses for a parameter result in a failed optimization (iteration limit, or a positive directional derivative in the line search) while others don't, and it's not immediately clear why. Note: I always provide an initial guess for the problem that results in feasible trajectories; I check this by setting opt=False for the parameters when I build the problem.
My assumption is that although my initial guess for the parameters are okay, my initial guess for the states is not and the problem gets stuck trying to get feasible trajectories.
My solution/idea:
Is it possible to warm-start an optimization problem in Dymos? To warm-start, I would like to provide a feasible solution for the states and state rates to the optimizer. As a general flow, I would like to (1) run the optimization with the opt setting of the controls and parameters set to False to get a state trajectory, then (2) set the opt setting for the controls and parameters to True, and finally (3) re-run the optimization. It seems like there should be an easy way to do this, but I can't see how without creating two problems (with different opt settings) and copying all the initial state guesses into the opt=True problem.
Note: I did read the post "Dymos how to use previous trajectory solution as initial guess?" and I can re-run a problem; I just don't know how to change the opt setting between runs.
If there is an alternate or better solution to my problem, I'd be interested in that as well.
If you are using IPOPT, using a previous solution as your initial guess doesn't really help. This is due to the nature of interior-point optimizers. On start, the barrier parameter mu is large, and Newton's method pushes the iterates AWAY from the initial guess, toward the "optimum" for that large value of mu. Then mu is decreased and Newton's method brings you closer to the true optimum. This process is repeated as mu is decreased, until finally mu is small and you arrive back at the point you guessed initially (assuming it was the optimum).
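In symbols, for an objective f(x) with equality constraints c(x) = 0 and lower bounds x >= x^L, the barrier subproblem solved at each fixed value of mu is roughly:

$$\min_{x}\; f(x) \;-\; \mu \sum_{i} \ln\!\left(x_i - x_i^{L}\right) \quad \text{subject to} \quad c(x) = 0$$

As mu decreases toward zero, the minimizers of these subproblems trace the central path back toward the true constrained optimum, which is why the early iterates move away from even a perfect initial guess.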
Also, because Dymos/pyoptsparse runs the optimizer with a quasi-Newton, limited-memory Hessian approximation (L-BFGS), none of the Hessian information is available when you start again, even if your initial guess is the optimum. That information has to be built up again as the algorithm iterates.
I am not an IPOPT expert but this seems to explain why I had no luck trying to use an "improved" initial guess. One thing that did help a lot with convergence was increasing the "limited_memory_max_history" parameter to 100 or so.
IPOPT does have a warm-start option, but getting it the initial information it needs regarding the Hessian and the initial multipliers may require digging into pyoptsparse to figure out how to pass it through.
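A sketch of what that might look like through OpenMDAO's pyOptSparseDriver (the option names are IPOPT's; whether the multiplier information actually reaches IPOPT depends on what pyoptsparse forwards):

import openmdao.api as om

p = om.Problem()  # stand-in; in practice this is your existing Dymos problem
p.driver = om.pyOptSparseDriver(optimizer='IPOPT')
p.driver.opt_settings['warm_start_init_point'] = 'yes'     # accept a supplied starting point
p.driver.opt_settings['mu_init'] = 1e-5                    # start with a small barrier parameter
p.driver.opt_settings['limited_memory_max_history'] = 100  # deeper L-BFGS memory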

Filtering a signal in R

I need to filter a signal without losing its properties, so that the signal can later be fed into an artificial neural network. I'm using R and the signal library, and I thought about using a low-pass filter or an FFT.
The signal to be filtered describes pixel shifts in a video; I calculated the resultant of the X and Y vectors to obtain a single value per frame and thus generate the graph/signal.
Using the signal library and the fftfilt function, I obtained a signal that looks easier for a neural network to be trained on, but I don't understand what the function is doing or whether the signal's properties are preserved:
# 50-tap moving-average FIR filter, applied via FFT convolution
resulting <- fftfilt(rep(1, 50) / 50, resulting)
Could someone explain how this function works, or suggest a better method to filter this signal?
As for the fftfilt(...) function, I can tell you roughly what it does: it is an approximate FFT-based implementation of a finite impulse response (FIR) filter. It takes the FFT of the filter's impulse response (with some padding) as a window, takes the signal's FFT within that window, multiplies the two to produce the filtered signal in the frequency domain, and then applies the inverse FFT to get the actual result back in the time domain. If your FIR filter has a huge number of coefficients (which in many cases is just a sign of a bad system design and should not be needed), the function is much faster than filter(Ma(...), ...). With a more reasonable number of coefficients (definitely below 100), the direct, exact approach is actually faster.
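The mechanism is easy to reproduce; here is a small sketch (in Python/numpy for brevity, mirroring the R call fftfilt(rep(1, 50) / 50, x)) showing that multiplying padded spectra is equivalent to direct FIR filtering:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)            # stand-in for the pixel-shift signal
b = np.ones(50) / 50                     # 50-tap moving-average impulse response

n = len(x) + len(b) - 1                  # pad to avoid circular-convolution wrap-around
X = np.fft.rfft(x, n)
B = np.fft.rfft(b, n)
y_fft = np.fft.irfft(X * B, n)[:len(x)]  # multiply spectra, invert, trim to input length

y_direct = np.convolve(x, b)[:len(x)]    # direct FIR filtering for comparison
assert np.allclose(y_fft, y_direct)      # same result, different computational cost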
As for proper filtering methods, there are so many that entire thick books cover just this topic. My personal experience in the field is skewed towards rather specific, very low-power, microcontroller-based sensor DSP tricks with fixed-point arithmetic, exact unity gain at a selected pass-band point, power-of-two scaling coefficients, and staged implementations, so I doubt it would help you here. Basically, you first need to know what result you want from your filter (and whether you need to filter at all, or whether something like peak detection and decimation is enough), and then know what you are doing. From your message it is hard to guess what your neural network's requirements are, what you think you need for it, and what you actually need.

Graph partitioning optimization

The problem
I have a set of locations on a plane (actually they are pins in a KML file) and I want to partition this graph into subgraphs. Connectivity is pretty good - as with all real world road networks - so I assume that if two locations are close they have some kind of connection. The resulting set of subgraphs should adhere to these constraints:
Every node has to be covered by a subgraph
Every node should be in exactly 1 subgraph
Nodes within a subgraph should be close to each other (in L2-norm distance)
Every subgraph should contain at least 5 locations
The number of subgraphs should be minimal
Right now the number of locations is no more than 100, so I thought about brute-forcing through every possibility, but this obviously won't scale well.
I thought about using some k-nearest-neighbors algorithm (e.g. using QuickGraph), but I can't get my head around where to start and how to extend/shrink the subgraphs along the way. Maybe it's possible to map this problem to another one that can easily be solved with some numerical procedure (e.g. the simplex method)...
Maybe someone has experience with this kind of optimization problem and is willing to help me find a solution? I don't have access to Mathematica/Matlab or the like... but I have sufficient .NET programming skills and, hmm, Excel :-)
Thanks a lot!
As soon as there are multiple criteria that need to be satisfied in the best possible way simultaneously, things usually start to get difficult.
A numerical solution could work as follows: define a utility function that maps partitionings of your locations to positive real values, describing how "good" a partitioning is by assigning it a rating (good could be high, bad could be near zero).
Once you have such a function assigning partitionings their "values", you simply need to optimize it, and you will hopefully obtain a good solution if you defined your utility function reasonably. Evolutionary algorithms are good at this task, since your utility function is probably too complex to optimize analytically due to its discrete nature.
The remaining problem is how you assign "values" to partitionings via this utility function; that is your task. It can be done, for example, by weighting each criterion with a factor and summing the results, or with more complex combinations (least squares, etc.). The factors in the definition of the utility function are tuning parameters and can be varied until the result looks good.
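As an illustration (in Python; the weights and the inverse mapping to a positive score are arbitrary assumptions to be tuned), such a utility function for this particular problem could look like:

import numpy as np

def utility(points, partition, w_count=1.0, w_spread=1.0, w_small=10.0):
    # points: array of (x, y) locations; partition: group id per location
    groups = {}
    for idx, g in enumerate(partition):
        groups.setdefault(g, []).append(points[idx])
    cost = 0.0
    for members in groups.values():
        pts = np.asarray(members)
        centroid = pts.mean(axis=0)
        cost += w_spread * np.linalg.norm(pts - centroid, axis=1).mean()  # compactness
        if len(members) < 5:
            cost += w_small * (5 - len(members))  # penalize undersized subgraphs
    cost += w_count * len(groups)                 # prefer fewer subgraphs
    return 1.0 / (1.0 + cost)                     # positive score; higher is better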
Some CAS software would help a lot for testing if you can get your hands on one, but I guess to obtain a black-box solver for your partitioning problem, you will need to implement the complete procedure yourself in a language of your choice.

Fitness function and selection for a genetic algorithm

I'm trying to design a nonlinear fitness function in which I maximize variable A and minimize variable B. The issue is that maximizing A is much more important at single-digit values, almost logarithmic. B needs to be minimized and, in contrast to A, becomes less important when small (less than one) and more important when larger (>1), so something like an exponential shape.
The main goal is to optimize A, so I guess an analog is A=profits, B=costs
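A sketch of one fitness shape matching that description (the scale factors k_a and k_b are assumptions to be tuned):

import math

def fitness(a, b, k_a=1.0, k_b=1.0):
    reward = k_a * math.log1p(a)    # steep for single-digit A, diminishing afterwards
    penalty = k_b * math.expm1(b)   # nearly linear for B < 1, grows sharply for B > 1
    return reward - penalty

Note that such a fitness can go negative, which is exactly why the roulette-wheel question below matters.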
Should I aim to keep everything positive so that I can use roulette-wheel selection, or would it be better to use a rank/tournament kind of system? The purpose of my algorithm is shape optimization.
Thanks
When considering a multi-objective problem, the goal is usually to identify all solutions that lie on the Pareto front - the Pareto-optimal set. Have a look here for a 2-dimensional visual example. When the algorithm completes, you want a set of solutions that are not dominated by any other solution. You therefore need to define a Pareto ranking mechanism that takes both objectives into account - for a more in-depth explanation, as well as links to further reading, go here.
With this in mind, in order to effectively explore all solutions along the Pareto front, you do not want an implementation that encourages premature convergence; otherwise your algorithm will only explore the search space around one specific area of the Pareto front. I would implement a selection operator that keeps all members of each iteration's optimal set of solutions - that is, all solutions which are not dominated by any other - plus a parameter-controlled percentage of other solutions. This way you encourage exploration all along the Pareto front.
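A minimal sketch of that per-iteration non-dominated filter, assuming individuals are scored as (A, B) pairs with A maximized and B minimized:

def dominates(p, q):
    # p dominates q if it is at least as good in both objectives and better in one
    return p[0] >= q[0] and p[1] <= q[1] and p != q

def pareto_set(population):
    # keep every solution that no other solution dominates
    return [s for s in population
            if not any(dominates(t, s) for t in population)]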
You also need to ensure your mutation and crossover operators are tuned correctly too. With any novel application of Evolutionary Algorithms, part of the problem is trying to identify an optimal parameter set for the problem domain... this is where it gets really interesting!!
The description is very vague, but assuming that you actually have an idea of what the function should look like and you're just wondering whether you need to modify it so that proportional selection can be used easily: no. Regardless of the fitness function, you should probably default to something like tournament selection. Controlling selection pressure is one of the most important things you have to do to get consistently good results, and roulette-wheel selection doesn't give you that control. It typically produces enormous pressure very early, which drives premature convergence. That might be preferable in a few cases, but it's not where I'd start my investigations.
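For reference, tournament selection is only a few lines, and the tournament size k gives you the direct control over selection pressure mentioned above (larger k means stronger pressure):

import random

def tournament_select(population, fitness, k=3):
    # population: list of individuals; fitness: callable mapping individual -> score
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)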
