I had a similar issue as expressed in this question. I followed Rob Falck's answer but had issues. If anyone could help me out, I would appreciate it.
I used the code suggested in the answer but had an issue: It changed the simulation results. I added a line in the script for the min_time_climb example that goes like this:
phase.add_timeseries_output('aero.mach', units=None, shape=(1,), output_name="recorded_mach")
I used the name "recorded_mach" so as not to override anything else Dymos may or may not have been recording. The issue is that the default Altitude (h) vs. time graph actually changed, both the discrete points and the simulation curve. I ended up recording 4 variables with commands similar to the one shown above, and that somehow made the simulation track better with the discrete optimisation points on the graph. When I recorded another 4 variables on top of that, it made it track worse. I find this very strange, because I don't see why recording the simulation should change its output.
Have you ever come across this? Any insight you could provide into the issue would be greatly appreciated.
Notes:
I have somewhat modified the example to fit a different situation (different thrust and fuel burn data, different lift and drag polars, different height and speed goals) before implementing the code described above. However, it was still working fine.
Without some kind of example to look at, I can only make an educated guess. So please take my answer with a grain of salt.
Some optimization problems have very ill-conditioned Jacobians and/or KKT matrices (which you as a user would not normally see, but which can be problematic nonetheless). There are many potential causes for this ill conditioning, but some common ones are very large derivatives (i.e. approaching infinity) or very large ranges in magnitude between different derivatives. Another common cause is the introduction of a saddle point, where you have an infinite number of answers that are all equally good. Sometimes you can fix the problem with scaling; other times you need to re-work the problem formulation.
Ill conditioning has two bad effects on the optimizer. First, it makes it very hard for the numerics inside to compute the inverses needed to compute step sizes. It will get an answer, but it may be highly subject to numerical noise. Second, it may prevent certain approximations (like BFGS) from performing well in the first place.
In these cases, small changes in execution order or extra steps (e.g. case recording) can cause the optimizer to take a different path. If you're finding that the path ultimately leads one case to work and another to fail, then you might have a marginally stable problem where you got lucky one time and not the other.
Look carefully for anything singular-like in your Jacobian. Zero rows/columns? A constraint that happens to be satisfied but still has a zero row is a problem that comes up in Dymos cases if you forget to add additional degrees of freedom when you add constraints. Saddle points also arise if you're not careful with your objective.
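If it helps, here is a rough way to eyeball the total-derivative Jacobian in OpenMDAO. This is a sketch, not an official utility; it assumes your Problem is named prob, has design variables and responses declared, and has already been run:

    import numpy as np

    # compute_totals() with no arguments uses the driver's declared
    # design variables and responses; it returns {(of, wrt): ndarray}.
    totals = prob.compute_totals()

    for (of, wrt), J in totals.items():
        J = np.atleast_2d(J)
        nonzero = np.abs(J[J != 0])
        if nonzero.size == 0:
            print(f"{of} wrt {wrt}: ALL-ZERO block -- possible missing DOF")
            continue
        zero_rows = np.where(~J.any(axis=1))[0]
        print(f"{of} wrt {wrt}: |J| spans [{nonzero.min():.1e}, "
              f"{nonzero.max():.1e}], zero rows: {zero_rows}")

Blocks that are entirely zero, or that span many orders of magnitude, are the kind of singular-like structure described above.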
Related
Pretty nice feature. I am trying to figure out how to interpret the report to see if there are any glaring problems in my model, which includes a trajectory with Dymos. My model usually converges fine, although sometimes I have to change the NLP scaling option to gradient-based. I have no idea what that really does, but usually it will make IPOPT converge if the default setting doesn't work, and vice versa.
This is what the Jacobian looks like according to the tool:
I am guessing that what is desired is for the partials in the Jacobian to span as few orders of magnitude as possible. The lower two diagonal bands have magnitudes from 0.1 to 10E5 and seem to be related to phase linkages. For example, we have 'traj.linkages.phase_1:h_final|phase_2:h_initial wrt traj.phases.phase_1.indep_states.states:h' with a magnitude of 10E5. Should I be doing something about this?
In the design variables report everything seems to be scaled OK, with driver values on the order of 1.
In the constraints report the order-of-magnitude span is wider, from 10E-5 to 10E2. I am not setting defect_refs. Maybe I need to do something here?
I tend to use the table that shows the norms from the driver and model perspectives.
In this case the driver values are all scaled on the order of 1 or so. If they're huge, then the scalers/refs may need to be adjusted. (Scaling to the order of around 1 isn't guaranteed to be a good strategy, but it's usually a pretty good going-in position.)
You may want to do this after limiting the driver iterations. For a converged case in Dymos, the values of the defect constraints are going to be close to zero, and that may not give you a good idea of their initial values.
For defect_refs, for instance, if I note that the value of the constraint is something like 5.5E4, then I might set that defect_ref to 1.0E4 so its scaled value lands on the order of 1. Again, unit scaling isn't always correct, but it does frequently work.
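As a concrete sketch of that workflow in Dymos (the state name and magnitudes here are made up for illustration):

    # If a short, iteration-limited run shows the 'h' defects sitting
    # around 5.5E4, pick a defect_ref that brings their scaled values
    # near 1. 'h' and the values are assumptions for illustration only.
    phase.set_state_options('h', ref=1.0E4, defect_ref=1.0E4)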
I was hoping to get some information on how to set my defect refs in Dymos in a smart way. I found the following notes on scaling here: https://github.com/hweyandtnasa/scaling-tutorial but it still lists defect scaling in Dymos as a TODO. Should I just set them equal to the ref value for the state they pertain to?
Scaling pseudospectral optimal control problems is tricky. If you can get a copy of John Betts' Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, I highly recommend it. Betts suggests using the same scaling for both the state design variable values and the defects. This is often a good rule of thumb, but as with most approaches to scaling, it isn't universal. The collocation "defects," which dictate whether the dynamics are physically correct, are just the difference between the slope of the approximating polynomial and the computed equations of motion.
In situations where state values are large but tiny rates of change are significant, different scaling is warranted in my experience. Examples of states where this can be true are aircraft range or spacecraft orbital elements. Just recently we had a situation where a low-thrust orbit transfer of a spacecraft wasn't matching physics. The semi-latus rectum, for instance, is typically measured in km, so it is on the scale of thousands when in Earth orbit. In the units being used, a "significant" difference in the defect was less than 1E-6 (the threshold for feasibility being used). In this case, the problem was solved by bumping the defect_scaler up a few orders of magnitude (equivalent to bumping the defect_ref down a few orders of magnitude).
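In Dymos terms, that fix might look something like the following; the state name, units, and value are illustrative stand-ins, and defect_scaler is simply the reciprocal of defect_ref:

    # Tighten the defect scaling on the semi-latus rectum state so that
    # defects below the feasibility threshold still register as significant.
    # 1.0E3 is a stand-in for "a few orders of magnitude".
    phase.set_state_options('p', units='km', defect_scaler=1.0E3)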
I'd also recommend this paper from Ross, Gong, Karpenko, and Proulx. It lays out some good rules of thumb and has an approachable example in the brachistochrone. It references costates a lot. Dymos doesn't provide automatic costate estimation yet, but costates are closely related to the Lagrange multipliers of the problem, which are printed in the pyoptsparse output if you use SNOPT.
The GitHub repo you pointed out was the work of an intern and was based around this scaling method developed by Sagliano. We found it to work well in many situations, but it's also not a panacea.
Ultimately we want some automatic scaling options in Dymos and/or OpenMDAO, but we're not sure when they might find their way into the framework. Our past work has typically tied scaling approaches more tightly to the equations of motion, and Dymos is designed to be more general in that the user can supply whatever EOM they choose.
In Dymos, if you leave the defect_ref value unset when you call set_state_options, then the default behavior is to make the defect_ref equal to the ref value. Here is why that is done:
Defects are the differences between the computed state rate from the polynomial interpolation function and the actual state rate computed by the ODE.
As you can see here:
defect = (f_approx - f_computed) * dt_dstau
The dt_dstau term just adjusts things into a normalized time space called tau, but it also multiplies by the time unit (tau itself is dimensionless). That means the defects are computed in the same units as the states themselves. Thus a reasonable guess for scaling is to match the scaling between the states and the defects. As Rob Falck's answer points out, that is not always the right solution, but it's a good starting point.
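A tiny numeric illustration of that line, with made-up numbers, just to show the units bookkeeping:

    import numpy as np

    # Slopes of the interpolating polynomial vs. the ODE-computed rates,
    # both in state-units per second (toy values).
    f_approx   = np.array([1.02, 0.98, 1.01])
    f_computed = np.array([1.00, 1.00, 1.00])
    dt_dstau   = 50.0  # carries the time unit; tau is dimensionless

    defect = (f_approx - f_computed) * dt_dstau  # back in state units
    print(defect)  # [ 1.  -1.   0.5]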
I have two OpenMDAO groups with a cyclic dependency between them. I calculate the derivatives using complex step. I have a nonlinear solver for the dependency and use SLSQP to optimize my objective function. The issue is with the choice of the nonlinear solver. When I use NonlinearBlockGS the optimization is successful in 12 iterations. But when I use NewtonSolver with DirectSolver or ScipyKrylov the optimization fails (iteration limit exceeded), even with maxiter=2000. The cyclic connections converge, but the design variables do not reach the optimal values. The difference between the design variables in consecutive iterations is on the order of 1e-5, and this increases the number of iterations needed. Also, when I change the initial guess to a value closer to the optimal value, it works.
To check further, I converted the model into IDF (by creating copies of the coupling variables and adding consistency constraints), thereby removing the need for a solver. Now the optimization is successful in 5 iterations, and the results are similar to those obtained when NonlinearBlockGS is used.
Why does this happen? Am I missing something? When should I use NewtonSolver over the others? I know that it is difficult to answer without seeing the code, but my code is long, with multiple components, and I couldn't recreate the issue with a toy model. So any general insight is much appreciated.
Without seeing the code, you're right that it's hard to give specifics.
Very broadly speaking, Newton can sometimes have a lot more trouble converging than NLBGS (note: this is not absolutely true, but it is a good rule of thumb). So what I would guess is happening is that on your first or second iteration, the Newton solver isn't actually converging. You can check this by setting newton.options['iprint'] = 2 and looking at the iteration history as the optimizer iterates.
When you have a solver in your optimization, it's critical that you also set it to throw an error on non-convergence. Some optimizers can handle this error and will backtrack on the line search. Others will just die. Either way, it's important. Otherwise, you end up giving the optimizer an unconverged case that it doesn't know is unconverged.
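For reference, the relevant solver options look something like this; option names are per recent OpenMDAO versions, so check the docs for yours:

    import openmdao.api as om

    # `model` is your coupled group. A sketch, not a drop-in fix.
    newton = model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
    newton.linear_solver = om.DirectSolver()

    newton.options['iprint'] = 2                  # show the iteration history
    newton.options['maxiter'] = 50
    newton.options['err_on_non_converge'] = True  # raise instead of returning silently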
This is bad for two reasons. First, the objective and constraint values it gets are going to be wrong! Second, and perhaps more importantly, the derivatives it computes are going to be wrong! You can read the details in the theory manual, but in summary the analytic derivative methods that OpenMDAO uses assume that the residuals have gone to 0. If that's not the case, the math breaks down. Even if you were doing full-model finite difference, unconverged models are a problem; you'll just get noisy garbage when you try to FD them.
So, assuming you have set up your model correctly, and that you have the linear solvers set up properly (it sounds like you do, since it works with NLBGS), then it's most likely that the Newton solver isn't converging. Use iprint, possibly combined with driver debug printing, to check this for yourself. If that's the case, you need to figure out how to get Newton to behave better.
There are some tips here that are pretty general. You could also try using the Armijo line search, which can often stabilize a Newton solve at the cost of some speed.
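Something like this, continuing the sketch above (`newton` is the solver instance from the earlier snippet):

    # An Armijo-Goldstein line search damps steps that would otherwise
    # overshoot, trading some speed for stability.
    newton.linesearch = om.ArmijoGoldsteinLS(bound_enforcement='scalar')
    newton.linesearch.options['maxiter'] = 5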
Finally... Newton isn't the best answer in all situations. If NLBGS is more stable and computationally cheaper, you should use it. I applaud your desire to get it to work with Newton, and you should definitely track down why it's not, but if it turns out that Newton just can't solve your coupled problem reliably, that's OK too!
The "set it to throw an error on non-convergence" link is broken in your answer. I have added the link that I think is the right one; please correct it if that's not the one you meant.
I'm trying to determine the best DSP method for what I'm trying to accomplish, which is the following:
In real time, detect the presence of a frequency from a set of predefined frequencies (no more than 40 different frequencies, all within a 1000 Hz range). I need to be able to do this even when there are other frequencies (outside this set or range) that are more dominant.
It is my understanding that the FFT might not be the best method for this, because it tells you the most dominant frequency (magnitude) at any given time. This seems like it wouldn't work, because if I'm trying to detect, say, a frequency at 1650 Hz (which is present), but there's also a frequency at 500 Hz which is stronger, then it's not going to tell me the current frequency is 1650 Hz.
I've heard that the Goertzel algorithm might be better for what I'm trying to do, which is to detect single frequencies or a set of frequencies in real time, even within sounds that have more dominant frequencies than the ones being detected.
Any guidance is greatly appreciated and please correct me if I'm wrong on these assumptions. Thanks!
In vague and somewhat inaccurate terms, the output of the FFT is the magnitude and phase of all[1] frequencies. That is, your statement, "[The FFT] tells you the most dominant frequency (magnitude) at any given time" is incorrect. The FFT is often used as a first step to determine the most dominant frequency, but that's not what it does. In fact, if you are interested in the most dominant frequency, you need to take extra steps over and beyond the FFT: you take the magnitude of all frequencies output by the FFT, and then find the maximum. The corresponding frequency is the dominant frequency.
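To make those extra steps concrete, here is a minimal numpy sketch; the sample rate, frame length, and tone amplitudes are made-up assumptions, and a tone that doesn't land exactly on a bin will leak into its neighbors (a window function mitigates that):

    import numpy as np

    fs = 8000                       # sample rate in Hz (assumed)
    N = 1024                        # frame length (assumed)
    t = np.arange(N) / fs
    # Test signal: a strong 500 Hz tone plus a weaker 1650 Hz tone.
    x = 1.0 * np.sin(2 * np.pi * 500 * t) + 0.2 * np.sin(2 * np.pi * 1650 * t)

    spectrum = np.fft.rfft(x)
    mags = np.abs(spectrum)         # magnitude of every bin, not just the peak
    freqs = np.fft.rfftfreq(N, d=1 / fs)

    # "Dominant frequency" is an extra step on top of the FFT:
    print("dominant:", freqs[np.argmax(mags)])   # ~500 Hz

    # ...but the bin nearest 1650 Hz is still right there to inspect:
    k = int(round(1650 * N / fs))
    print("magnitude near 1650 Hz:", mags[k])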
For your application as I understand it, the FFT is the correct algorithm.
The Goertzel algorithm is closely related to the FFT. It allows for some optimization over the FFT if you are only interested in the magnitude and/or phase of a small subset of frequencies. It might be the right choice for your application depending on the number of frequencies in question, but only as an optimization: other than performance, it won't solve any problems the FFT won't solve. Because there is more written about the FFT, I suggest you start there, and use the Goertzel algorithm only if the FFT proves not to be fast enough and you can establish that Goertzel will be faster in your case.
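For completeness, the Goertzel recurrence for a single bin is only a few lines. This is a sketch of the standard textbook form, not production code:

    import numpy as np

    def goertzel_power(x, fs, f_target):
        """Squared magnitude of the DFT bin nearest f_target."""
        n = len(x)
        k = int(round(n * f_target / fs))          # nearest DFT bin index
        coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
        s_prev = s_prev2 = 0.0
        for sample in x:
            s = sample + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

    # With ~40 target tones, you'd run one pass per tone over each frame
    # and threshold the powers (frame and target_freqs are hypothetical):
    # powers = {f: goertzel_power(frame, fs, f) for f in target_freqs}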
[1] For practical purposes, what's most inaccurate about this statement is that the frequencies are grouped together in "bins". There's a limited resolution to the analysis which depends on a variety of factors.
I am leaving my other answer as-is because I think it stands on its own.
Based on your comments and private email, the problem you are facing is most likely this: sounds, like speech, that are principally in one frequency range have harmonics that stretch into higher frequency ranges. This problem is exacerbated by low-quality microphones and electronics, but it is not caused by them and wouldn't go away even with perfect equipment. Once your signal is cluttered with noise in the same band, you can't really distinguish "on" from "off" in a simple and reliable way, because an "on" could be caused by the noise. You could try to do some adaptive thresholding based on noise in other bands, and you'll probably get somewhere, but that's no way to build a robust system.
There are a number of ways to solve this problem, but they all involve modulating your signal and using error detection and correction. Basically, you are building a modem and/or radio. Ultimately, what I'm saying is this: you can't solve your problem on the detector alone. You need to build some redundancy into your signal, and you may need to think about other methods of detection. I know of three methods of sending complex signals:
Amplitude modulation, which is what it sounds like you are doing now.
Frequency modulation, which tends to be more robust in the face of ambient noise. (compare FM and AM radio)
Phase modulation, which is more subtle and tricky.
These methods can be combined and multiplexed in various ways. Read about them on wikipedia. Moreover, once your base signal is transmitted, you can add error correction and detection on top.
I am not an expert in this area, but off the top of my head, I am not sure you'll be able to use PM silently, and AM is simply too sensitive to noise, as you've discovered, although it might work with the right kind of redundancy. FM is probably your best bet.
I'm trying to design a nonlinear fitness function where I maximize variable A and minimize variable B. The issue is that maximizing A is much more important at single-digit values, almost logarithmic. B needs to be minimized, and in contrast to A, it becomes less important when small (less than one) and more important when it's larger (>1), so exponential decay.
The main goal is to optimize A, so I guess an analog is A=profits, B=costs
Should I aim to keep everything positive so that I can use roulette wheel selection, or would it be better to use a rank/tournament kind of system? The purpose of my algorithm is shape optimization.
Thanks
When considering a multi-objective problem, the goal is usually to identify all solutions that lie on the Pareto curve - the Pareto optimal set. Have a look here for a 2-dimensional visual example. When the algorithm completes, you want a set of solutions that are not dominated by any other solution. You therefore need to define a Pareto ranking mechanism to take into account both objectives - for a more in-depth explanation, as well as links to even more reading, go here.
With this in mind, in order to effectively explore all solutions along the Pareto front, you do not want an implementation that encourages premature convergence; otherwise your algorithm will only explore the search space in one specific area of the Pareto curve. I would implement a selection operator that keeps all members of each iteration's optimal set of solutions, that is, all solutions which are not dominated by another, plus a parameter-controlled percentage of other solutions. This way you encourage exploration all along the Pareto curve.
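A minimal sketch of such a non-dominated filter, assuming the two objectives from the question (maximize A, minimize B); the population values are made up:

    def non_dominated(population):
        """Return members not dominated by any other member.
        Each member is an (A, B) pair: A higher is better, B lower is better."""
        def dominates(p, q):
            # p dominates q: at least as good in both objectives, better in one
            return (p[0] >= q[0] and p[1] <= q[1]) and (p[0] > q[0] or p[1] < q[1])
        return [p for p in population if not any(dominates(q, p) for q in population)]

    pop = [(5.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 1.5)]
    print(non_dominated(pop))   # [(4.0, 1.0), (5.0, 1.5)]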
You also need to ensure your mutation and crossover operators are tuned correctly too. With any novel application of Evolutionary Algorithms, part of the problem is trying to identify an optimal parameter set for the problem domain... this is where it gets really interesting!!
The description is very vague, but assuming that you actually have an idea of what the function should look like and you're just wondering whether you need to modify it so that proportional selection can be used easily, then no. Regardless of fitness function, you should probably default to using something like tournament selection. Controlling selection pressure is one of the most important things you have to do in order to get consistently good results, and roulette wheel selection doesn't allow you that control. You typically get enormous pressure very early, which drives premature convergence. That might be preferable in a few cases, but it's not where I'd start my investigations.
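For reference, tournament selection is only a few lines, and the selection pressure is controlled directly by the tournament size k rather than by the scale of the fitness values. A sketch, where fitness stands in for whatever your evaluation function is:

    import random

    def tournament_select(population, fitness, k=3):
        """Pick the fittest of k randomly drawn individuals.
        Larger k means higher selection pressure."""
        contenders = random.sample(population, k)
        return max(contenders, key=fitness)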