I have 3 phases in my Dymos trajectory. In the first and last phases I am using angle of attack as a control. In the second phase, I do not want to control it; I just want the angle of attack to stay at its last value from the first phase. I turned off optimization of angle of attack in the middle phase with opt=False. I used add_link_constraint() to connect alpha between the first and second phases, but that didn't work the way I thought it would: in the second phase alpha defaults to zero, and the constraint forces alpha at the end of the first phase to zero as well. I'm not sure how to make the linked value override that default.
In this case, since alpha is not dynamic in the second phase and you just want it to maintain its value, you should make it a parameter instead of a control.
Let its value be determined by the optimizer (by setting opt=True), and use a phase linkage constraint to force Dymos to drive the difference between the value of alpha at the end of the first phase and at the beginning of the second phase to zero.
There's a test case that demonstrates this here. In the test case, the control angle u1 is a dynamic control in the first and third phases (when under propulsion) and maintains a constant value during the second phase (a coast).
This test case uses the link_phases method on Trajectory to impose the continuity (linkage) constraints. Alternatively, the method add_linkage_constraint has more generality, but is more complicated to use.
Note this capability is new to Dymos so I'd recommend using the latest release.
Leaving alpha as an optimized control in the second phase doesn't work in this case because the linkage constraints would only tie down its values at the endpoints of that phase. The interior values would be free and would have no bearing on the optimization.
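For concreteness, here is a rough sketch of the parameter-based setup described above. The phase names, ODE class (MyODE), and transcription (tx) are placeholders for whatever your model uses, and the exact signatures may vary a bit between Dymos versions:

    import dymos as dm

    # Sketch only: MyODE and tx stand in for your ODE class and transcription.
    traj = dm.Trajectory()
    burn1 = traj.add_phase('burn1', dm.Phase(ode_class=MyODE, transcription=tx))
    coast = traj.add_phase('coast', dm.Phase(ode_class=MyODE, transcription=tx))
    burn2 = traj.add_phase('burn2', dm.Phase(ode_class=MyODE, transcription=tx))

    burn1.add_control('alpha', units='deg', opt=True)    # dynamic control
    coast.add_parameter('alpha', units='deg', opt=True)  # one static value, set by the optimizer
    burn2.add_control('alpha', units='deg', opt=True)    # dynamic control again

    traj.link_phases(phases=['burn1', 'coast', 'burn2'], vars=['*'])  # time and states
    traj.link_phases(phases=['burn1', 'coast'], vars=['alpha'])       # control -> parameter
    traj.link_phases(phases=['coast', 'burn2'], vars=['alpha'])       # parameter -> control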
Related
In the 'racecar' example, for instance, could I set lower and upper limits for the 'mass' design_parameter and then optimise the vehicle mass while solving the optimal control problem?
I see that there is an "opt" argument for phase.add_design_parameter() but when I run the problem with opt=True the value stays static. Do I need another layer to the solver that optimises this value?
This feature would be useful for allocating budgets to design decisions (e.g. purchasing a lighter chassis), and tuning parameters such as gear ratio.
It's absolutely possible, and in fact that is the intent of the opt flag on design parameters.
Just to make sure things are working as expected, when you have a design parameter with opt=True, make sure it shows up as one of the optimizer's design variables by invoking list_problem_vars on the problem instance after run_model. The documentation for list_problem_vars is here.
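For instance, a minimal, self-contained sketch of the check itself (deliberately not Dymos-specific):

    import openmdao.api as om

    prob = om.Problem()
    prob.model.add_subsystem('comp', om.ExecComp('y = (x - 3.0)**2'), promotes=['*'])
    prob.model.add_design_var('x', lower=-10.0, upper=10.0)
    prob.model.add_objective('y')
    prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')

    prob.setup()
    prob.run_model()
    prob.list_problem_vars()   # 'x' should appear under the design variables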
If it shows up as a design variable but the optimizer is refusing to change it, it could be that it sees no sensitivity wrt that variable. This could be due to
incorrectly defined derivatives in the model (wrong partials)
poor scaling (the sensitivity of the objective/constraints with respect to the design parameter may be minuscule in the optimizer's units)
sometimes by nature of the problem, a certain input has little to no impact on the result (this is probably the least likely here).
Things you can try:
run problem.check_totals (make sure to call problem.run_model first) and see if any of the total derivatives appear to be incorrect.
run problem.driver.scaling_report and verify that the values are not negligible in the units in which the optimizer sees them. If they're really small at the starting point, then it may be appropriate to scale the design parameter smaller (set ref to a smaller number like 0.01) so that a small change from the optimizer's perspective results in a larger change within the model.
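Continuing the sketch above (assuming prob is your configured Problem instance):

    prob.run_model()
    prob.check_totals(compact_print=True)  # compare analytic totals against finite differences
    prob.driver.scaling_report()           # report of the values in the optimizer's (scaled) units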
If things don't appear to be working after trying this, let me know and I'll work with you to figure it out.
I have been facing a problem with how to generate event signals based on the value at the integrator block in Scilab Xcos. For example, I need to create an event signal when the value at the output of the integrator block equals zero. I had the idea of using the RELATIONALOP block to compare the integrator output with zero, but I don't know how to convert the result of this comparison into an event. Can anybody help?
The ZCROSS_f, NEGTOPOS_f, POSTONEG_f, and GENERAL_f blocks are designed exactly for this purpose.
These rely on the zero-crossing ability of the ODE/DAE solver: the continuous-time integration is performed until a given expression of the states exactly crosses zero. At that time the discrete simulation handles the immediate consequences of the event before the continuous-state integration restarts.
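Outside of Xcos, the same mechanism looks like the following SciPy sketch (purely to illustrate the idea, not the Xcos API): the solver integrates until the event expression hits zero, stops exactly there, and the discrete consequences can be handled before integration resumes.

    from scipy.integrate import solve_ivp

    def rhs(t, y):
        return [y[1], -y[0]]        # simple oscillator: y[0] = cos(t)

    def crosses_zero(t, y):
        return y[0]                 # event expression: the integrator output
    crosses_zero.terminal = True    # stop integration at the crossing

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], events=crosses_zero)
    print('output crossed zero at t =', sol.t_events[0][0])   # ~ pi/2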
When applying the Arc-Consistency (AC-3) algorithm to a Constraint Satisfaction Problem, if the domain of one variable becomes empty, what is the next step?
1) halt.
2) do backtrack.
3) start from another initial state.
4) it depends on which step we are in.
The given solution is (4). I think (1) is correct because an empty domain means we cannot find any consistent assignment, so we should halt. Can anyone explain why (4) is true?
With the particular algorithm you're using, if the domain of a variable shrinks until it is empty, then it means that the constraint problem has no solutions. Therefore the algorithm should halt in the failure state.
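For concreteness, here is a sketch of AC-3 (standard formulation; the names are mine) showing where the empty-domain check causes the halt:

    from collections import deque

    def revise(domains, xi, xj, constraint):
        # Remove values of xi that have no supporting value in xj's domain.
        removed = False
        for x in list(domains[xi]):
            if not any(constraint(xi, x, xj, y) for y in domains[xj]):
                domains[xi].remove(x)
                removed = True
        return removed

    def ac3(domains, neighbors, constraint):
        queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
        while queue:
            xi, xj = queue.popleft()
            if revise(domains, xi, xj, constraint):
                if not domains[xi]:          # empty domain: the CSP has no solution
                    return False             # halt in the failure state
                for xk in neighbors[xi]:
                    if xk != xj:
                        queue.append((xk, xi))
        return True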
I have only spent a few hours on pipeline theory, so perhaps this is an easy question, but I really need your help.
I know that we should store mem[PC] into the IF/ID pipeline register in the fetch stage because we will decode it in the next stage, and that we should update the PC in the fetch stage because we will fetch the next instruction via that updated PC in the next cycle. But I really don't understand why we should also store NPC into the pipeline register.
Below is an explanation from Computer Organization and Design, but I don't get it.
This incremented address is also saved in the IF/ID pipeline register in case it is needed later for an instruction, such as beq.
The reason for saving NPC in the pipeline is because sometimes the next instruction in the pipeline will want to use it.
Look at the definition of beq. It has to compute the target address of the branch. Some branches use a fixed location for the target address, like "branch to address A." This is called "branching to an absolute address."
Another kind of branch is a "relative" branch, where the branch target is not an absolute address but an offset, that is, "branch forward X instructions." (If X is negative, this ends up being a backwards branch.) Now consider this: forwards/backwards from where? From NPC. That is, for a relative branch instruction, the computation for the new PC value is:
NewPC = NPC + X
Why do architectures include the ability to perform relative branches? Because it takes less space. Let's say that X has a small value, like 16. The storage required for an absolute branch to a target address is:
sizeof(branch opcode) + sizeof(address)
But the storage for a relative branch of offset 16 is only:
sizeof(branch opcode) + 1 ## number of bytes needed to hold the value 16!
Of course, larger offsets can be accommodated by increasing the number of bytes used to hold the offset value. Other kinds of space-saving, range-increasing representations are possible too.
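To make the data flow concrete, here is a toy sketch (hypothetical, MIPS-flavored, and heavily simplified): by the time beq resolves, the front end has already moved past it, so the target must be computed from the NPC that was saved in IF/ID, not from the current PC.

    def fetch(memory, pc):
        # IF stage: read the instruction and save it together with NPC in IF/ID.
        instr = memory[pc]
        npc = pc + 4                          # the incremented address
        if_id = {'instr': instr, 'npc': npc}  # both go into the IF/ID register
        return if_id, npc                     # npc also becomes the next PC

    def branch_target(if_id, offset):
        # For beq the target is relative to the *saved* NPC.
        return if_id['npc'] + (offset << 2)   # word offset -> byte address

    memory = {0: 'beq r1, r2, +16'}           # toy instruction memory
    if_id, pc = fetch(memory, 0)
    print(branch_target(if_id, 16))           # 4 + (16 << 2) = 68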
If the exception point is in a branch-delay slot, then one needs two PCs to restart execution: one that
points at the exceptional instruction (delay slot) and another that points at the next instruction. The
second PC is needed because the instruction following the delay slot could be either the next
sequential instruction (if the branch was not taken) or the branch target (if the branch was taken).
Although MIPS has the same issue, it relies on software to back up the exception point to the previous
instruction (when it is a branch) before restarting execution; this works because branches are
idempotent.
Credits: http://www.cs.berkeley.edu/~kubitron/courses/cs252-S09/handouts/oldquiz/sp09-quiz1_soln.pdf
I am working on a structure from motion application and I am tracking a number of markers placed on the object to determine the rigid structure of the object.
The app is essentially using standard Levenberg-Marquardt optimization over multiple camera views and minimizing the differences between expected marker points and the marker points obtained in 2D from each view.
For each marker point and each view the following function is minimised:
double diff = calculatedXY[index] - observedXY[index];
where the calculatedXY value depends on a number of unknown parameters that need to be found via the optimization, and observedXY is the marker point's position in 2D. In total I have (marker points × views) functions like the one above that I am aiming to minimise.
I have coded up a simulation of the camera seeing all the marker points, but I was wondering how to handle cases where, during a run, points are not visible due to lighting, occlusion, or simply being outside the camera view. When actually running the app I will be using a webcam to view the object, so it is likely that not all markers will be visible at once, and depending on how robust my computer vision algorithm is, I might not be able to detect a marker all the time.
I thought of setting the diff value to 0 (sigma squared difference = 0) in the case where the marker point could not be observed. Could this skew the results, though?
Another thing I noticed is that the algorithm is not as good when presented with too many views; it is more likely to estimate a bad solution. Is this a common problem with bundle adjustment, due to the increased likelihood of hitting a local minimum when presented with too many views?
It is common practice to just leave out the terms corresponding to missing markers. I.e., don't try to minimise calculatedXY - observedXY if there is no observedXY term. There's no need to set anything to zero; you shouldn't even be considering this term in the first place, so just skip it (though in your code, setting the error to zero is effectively equivalent).
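For example, with a hypothetical data layout in which each view stores only the markers it actually detected (keyed by marker index), the residual vector is assembled from whatever is present and nothing else:

    import numpy as np

    def residuals(calculated_xy, observations):
        # calculated_xy: {view: {marker: predicted 2D point}}
        # observations:  {view: {marker: observed 2D point}} -- visible markers only
        terms = []
        for view, obs in observations.items():
            for marker, observed in obs.items():   # undetected markers have no entry
                terms.append(np.asarray(calculated_xy[view][marker]) - np.asarray(observed))
        return np.concatenate(terms) if terms else np.empty(0)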
Bundle adjustment can fail terribly if you simply throw a large number of observations at it. Build your solution up incrementally by solving with a few views first and then keep on adding.
You might want to try some kind of 'robust' approach. Instead of using plain least squares, use a robust "loss function". These allow your optimisation to survive even if a handful of observations are incorrect. You can still do this in a Levenberg-Marquardt framework; you just need to incorporate the derivative of your loss function into the Jacobian.
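One way to fold that in is IRLS-style reweighting (a sketch, with the Huber loss picked as an example; r and J are the stacked residuals and Jacobian over the visible observations): scale each residual and its Jacobian row by the square root of the loss weight, then take an ordinary damped step.

    import numpy as np

    def huber_weights(r, delta=1.0):
        # Weight is 1 inside the quadratic region, delta/|r| outside it.
        a = np.abs(r)
        return np.where(a <= delta, 1.0, delta / np.maximum(a, delta))

    def robust_lm_step(J, r, lam=1e-3, delta=1.0):
        sw = np.sqrt(huber_weights(r, delta))
        Jw = J * sw[:, None]                       # reweighted Jacobian rows
        rw = r * sw                                # reweighted residuals
        A = Jw.T @ Jw + lam * np.eye(J.shape[1])   # damped normal equations
        return np.linalg.solve(A, -(Jw.T @ rw))    # parameter update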