Failsafe simulations - ArduPilot

I need to simulate failsafes or mechanical failures as described in
Diagnosing problems.
Is this possible using SITL/DroneKit?

On the ArduPilot pages for SITL, you will find various parameters that can be set to control the virtual environment for a simulated drone. Examples are given here: http://ardupilot.org/dev/docs/using-sitl-for-ardupilot-testing.html
The command param show sim* will list all parameters whose names begin with "SIM"; these are the parameters SITL uses to simulate various conditions. Apart from these, it may also be possible to modify vehicle parameters directly to simulate a deviation of the vehicle state from the commanded input and desired outcome (e.g. pitch/roll suddenly changing, to simulate a rotor failure).
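As a concrete sketch, from the MAVProxy console attached to a SITL instance you can list and set these parameters. SIM_ENGINE_FAIL below is one example of a failure-injection parameter; verify the exact names against your firmware's param show sim* output:

```
param show sim*
param set SIM_ENGINE_FAIL 1
```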

Related

How to generate a live plot of the residuals vs. number of iterations while the driver is still running?

My objective is, while the driver/non-linear solver is still running, not only to print the values of the inputs/outputs/gradients but also to generate a live plot of the residual of the objective function and of other design variables/inputs/gradients of interest. The residual of the objective function (or of any other quantity) can be defined as the log of the difference between two consecutive values.
I came across two particular features in OpenMDAO, the driver/solver debug_print option and the driver/solver recorder feature: debug_print prints the input/output values live, while the information stored by a recorder is accessed afterwards to assess the convergence of the model. I have two specific questions:
How can I save the values that debug_print writes to the screen in a text file (or another format) while the driver is still running, with the file updated dynamically as the driver runs?
Can I use a recorder to generate live plots of the residuals for the quantities of my interest? I have seen the Advanced Recording tutorial, where a recorder is used to plot the design variables and objective function, but can the recorder information only be retrieved once the driver/solver completes, or can it be read while the driver/solver is still running? If it can be read while the driver/solver is still running, I believe I can achieve my goal.
Thanks!
You can write a separate script that reads the recorder values while the main script is running and writing them. There is nothing stopping the reading script from being a different file/process from the main execution.
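As a minimal sketch of that reader/writer split (OpenMDAO's CaseReader can reopen the recorder's SQLite file in the same spirit; the CSV layout and file name below are assumptions purely for illustration), a polling reader can consume only the complete records appended since its last visit:

```python
import csv
import io

def poll_new_rows(path, offset=0):
    """Return (rows, new_offset): rows are the complete CSV lines
    appended since `offset`; a partially written last line is left
    for the next poll."""
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read()
    last_nl = data.rfind(b"\n")
    if last_nl == -1:           # no complete new line yet
        return [], offset
    complete = data[: last_nl + 1]
    rows = list(csv.reader(io.StringIO(complete.decode())))
    return rows, offset + len(complete)
```

A live-plotting loop would call poll_new_rows every second or so, append the new points to the figure, and carry new_offset forward between polls, while the writer process keeps appending.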

Dymos: Is it possible to optimise a design_parameter?

Such as in the 'racecar' example, could I set a lower and upper limit for the 'mass' design_parameter and then optimise the vehicle mass while solving the optimal control problem?
I see that there is an "opt" argument for phase.add_design_parameter() but when I run the problem with opt=True the value stays static. Do I need another layer to the solver that optimises this value?
This feature would be useful for allocating budgets to design decisions (e.g. purchasing a lighter chassis), and tuning parameters such as gear ratio.
It's absolutely possible, and in fact that is the intent of the opt flag on design parameters.
Just to make sure things are working as expected, when you have a design parameter with opt=True, make sure it shows up as one of the optimizer's design variables by invoking list_problem_vars on the problem instance after run_model. The documentation for list_problem_vars is here.
If it shows up as a design variable but the optimizer is refusing to change it, it could be that it sees no sensitivity wrt that variable. This could be due to
incorrectly defined derivatives in the model (wrong partials)
poor scaling (the sensitivity of the objective/constraints wrt the design parameter may be minuscule in the units the optimizer sees)
sometimes by nature of the problem, a certain input has little to no impact on the result (this is probably the least likely here).
Things you can try:
run problem.check_totals (make sure to call problem.run_model first) and see if any of the total derivatives appear to be incorrect.
run problem.driver.scaling_report and verify that the values are not negligible in the units in which the optimizer sees them. If they're really small at the starting point, then it may be appropriate to scale the design parameter smaller (set ref to a smaller number like 0.01) so that a small change from the optimizer's perspective results in a larger change within the model.
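Taken together, the checks above form a short diagnostic sequence. This outline assumes an already-configured problem instance p and is not runnable on its own:

```
p.run_model()
p.list_problem_vars()               # does the design parameter appear as a design variable?
p.check_totals(compact_print=True)  # do the total derivatives look correct?
p.driver.scaling_report()           # are the sensitivities non-negligible in scaled units?
```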
If things still don't appear to be working after trying this, let me know and I'll work with you to figure it out.

How is the seed chosen if not set by the user?

For the purpose of reproducibility, one has to choose a seed. In R, we can use set.seed().
My question is, when the seed is not set explicitly, how does the computer choose the seed?
Why is there no default seed?
A pseudo-random number generator (PRNG) needs a start value, which you can set with set.seed(). If none is given, the generator typically derives one from machine state such as the current time, CPU temperature, or something similar. If you want a more random start value, it is possible to use physical sources such as white noise or nuclear decay, but you generally need an external information source for that kind of randomness.
The documentation mentions R uses current time and the process ID:
Initially, there is no seed; a new one is created from the current time and the process ID when one is required. Hence different sessions will give different simulation results, by default. However, the seed might be restored from a previous session if a previously saved workspace is restored.
A default seed would be a bad idea, since the generator would then always produce the same sequence of numbers by default. With a fixed seed the output is no longer effectively random: you always get the same numbers, so you are just providing a fixed data sample, which is not the intended output of a PRNG. You could of course override a default seed (if there were one), but the primary intended behaviour is to generate a different stream on each run, not a fixed one.
For statistical work seeding matters for validation and reproducibility; it becomes even more important in cryptography, where a good (and unpredictably seeded) PRNG is mandatory.
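The same behaviour is easy to demonstrate in Python's random module (used here only as an illustration of the R behaviour described above):

```python
import random

random.seed(42)  # fixed seed: the stream is reproducible
a = [random.random() for _ in range(3)]

random.seed(42)  # reseeding with the same value restarts the stream
b = [random.random() for _ in range(3)]

random.seed()    # no argument: seeded from OS entropy (or time)
c = [random.random() for _ in range(3)]  # almost surely differs from a

print(a == b)  # True
```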

How to read "Graph Result" in JMeter

I am trying to understand the "Graph Result" in JMeter I got when I followed the following scenario:
I am hitting www.google.com with:
No. of users: 10,
Ramp up Period is 5 Seconds,
Loop count is 10
I am finding it difficult to read "Graph Result". I have used other listeners too (View Results in Tree, View Results in Table, Summary Report), which are easy to understand, but I would like to learn this one as well.
Please refer to the result image link:
https://www.cubbyusercontent.com/pli/Image.png/_2855385a0bbb40a0b7cd2d31224b521c
Help appreciated.
According to the JMeter documentation:
The Graph Results listener generates a simple graph that plots all
sample times. Along the bottom of the graph, the current sample
(black), the current average of all samples(blue), the current
standard deviation (red), and the current throughput rate (green) are
displayed in milliseconds.
The throughput number represents the actual number of requests/minute
the server handled. This calculation includes any delays you added to
your test and JMeter's own internal processing time.
Basically it shows data, average, median, deviation, and throughput, i.e. the system statistics during the test, in graphical format.
These values are plotted at runtime, so the counters at the bottom are updated live: the total number of samples is the number of samples that have occurred up to that point in time, the deviation is the deviation at that point in time, and the other counters likewise represent their current values.
Due to this runtime behaviour, the listener consumes a lot of memory and CPU, and it is advised not to use it during an actual load test (I think you have used it just to learn how it works).
While running an actual load test you can obtain these statistics from the Aggregate Report and other reports, which can also be generated in non-GUI mode.
I hope this has clarified what Graph Results shows, how to read it, and when to use it.
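For reference, a non-GUI run that records results for later analysis with such reports looks like this (the .jmx and .jtl file names are placeholders):

```
jmeter -n -t test_plan.jmx -l results.jtl
```

Here -n selects non-GUI mode, -t names the test plan, and -l names the results log file.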

How to handle missing data in structure from motion optimization/bundle adjustment

I am working on a structure from motion application and I am tracking a number of markers placed on the object to determine the rigid structure of the object.
The app is essentially using standard Levenberg-Marquardt optimization over multiple camera views and minimizing the differences between expected marker points and the marker points obtained in 2D from each view.
For each marker point and each view the following function is minimised:
double diff = calculatedXY[index] - observedXY[index]
Where calculatedXY value depends on a number of unknown parameters that need to be found via the optimization and observedXY is the marker point position in 2D. In total I have (marker points * views) number of functions like the one above that I am aiming to minimise.
I have coded up a simulation of the camera seeing all the marker points, but I was wondering how to handle the cases where, at runtime, points are not visible due to lighting, occlusion, or simply not being in the camera's view. In the real app I will be using a webcam to view the object, so it is likely that not all markers will be visible at once, and depending on how robust my computer vision algorithm is, I might not be able to detect a marker all the time.
I thought of setting the diff value to 0 (so the squared difference contributes 0) in the case where the marker point could not be observed; could this skew the results, however?
Another thing I noticed is that the algorithm is not as good when presented with too many views. It is more likely to estimate a bad solution when presented with too many views. Is this a common problem with bundle adjustment due to the increased likeliness of hitting a local minimum when presented with too many views?
It is common practice to just leave out the terms corresponding to missing markers, i.e. don't try to minimise calculatedXY - observedXY if there is no observedXY term. There's no need to set anything to zero; you shouldn't even be considering that term in the first place, so just skip it (although in your code, setting the error to zero is equivalent).
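A minimal sketch of that residual construction (the dict-of-observations layout is an assumption for illustration; in a Levenberg-Marquardt solver these residuals would feed the Jacobian rows for the observed terms only):

```python
def residuals(calculated, observed):
    """Build the residual vector for bundle adjustment, skipping
    markers that were not detected.

    calculated: dict mapping (view, marker) -> (x, y) predicted position
    observed:   same layout, but a missing key means "not detected"
    """
    res = []
    for key, (cx, cy) in sorted(calculated.items()):
        if key not in observed:
            continue  # missing observation: contribute no term at all
        ox, oy = observed[key]
        res.append(cx - ox)
        res.append(cy - oy)
    return res
```

The residual vector simply shrinks when markers drop out, which is exactly what a least-squares solver expects; no placeholder zeros are needed.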
Bundle adjustment can fail terribly if you simply throw a large number of observations at it. Build your solution up incrementally by solving with a few views first and then keep on adding.
You might want to try some kind of 'robust' approach. Instead of using plain least squares, use a robust loss function. These allow your optimisation to survive even if a handful of observations are incorrect. You can still do this in a Levenberg-Marquardt framework; you just need to incorporate the derivative of your loss function into the Jacobian.
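As one concrete option (Huber is just one common robust loss, chosen here as an assumed example), the loss can be applied in iteratively-reweighted least-squares form, where a residual's weight shrinks once it exceeds a threshold delta:

```python
def huber_weight(r, delta=1.0):
    """IRLS weight for the Huber loss: quadratic (weight 1) for small
    residuals, linearly down-weighted beyond +/- delta."""
    a = abs(r)
    return 1.0 if a <= delta else delta / a
```

Each squared residual term is multiplied by its weight before the normal equations are formed, so an outlier with |r| = 10 and delta = 1 gets weight 0.1 rather than dominating the solution.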
