How to estimate a QR code redundancy level?

I'm not a specialist, but as far as I know, a bit of information in a QR code is encoded more than once, and that is what defines the redundancy level.
How can I estimate a QR code's redundancy level? Is there a mobile app or a website where I can test my QR code's redundancy level easily? If not, is there an easy algorithm that I can implement?
Redundancy is sorted into different categories according to this website, but I'd like to have the direct percentage value if possible.

There are some pixels next to the lower left positioning block which indicate the redundancy level. Quote from https://blog.qrstuff.com/2011/12/14/qr-code-error-correction
Quite conveniently, there’s also 2 modules down in the bottom left-hand corner of every QR code that display what the error correction level used in that QR code is.
There is a very nice graphic on that page which visualizes this, which I won't include here as I assume that I'm not licensed to do so.
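If you just want the raw percentage: the four error correction levels correspond to well-known approximate recovery capacities (L ≈ 7%, M ≈ 15%, Q ≈ 25%, H ≈ 30% of codewords). Below is a small Python sketch; the mapping itself comes from the QR specification, the generation part assumes the third-party qrcode package (installed with Pillow support), and the recovery_percent helper is purely illustrative.

```python
# Approximate recovery capacity of the four QR error correction levels
# (standard values from the QR specification).
RECOVERY_PERCENT = {"L": 7, "M": 15, "Q": 25, "H": 30}

def recovery_percent(level: str) -> int:
    """Illustrative helper: map a level (read from the format modules next to
    the lower-left finder pattern, or reported by a decoder) to a percentage."""
    return RECOVERY_PERCENT[level.upper()]

# Generating codes at a chosen level with the third-party "qrcode" package
# (assumed installed via `pip install qrcode[pil]`).
import qrcode

qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_Q)
qr.add_data("https://example.com")
qr.make(fit=True)
qr.make_image().save("example_q.png")  # a code that survives roughly 25% damage

print(recovery_percent("Q"))  # -> 25
```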

Related

Dymos: Is it possible to optimise a design_parameter?

For example, in the 'racecar' example, could I set a lower and upper limit for the 'mass' design_parameter and then optimise the vehicle mass while solving the optimal control problem?
I see that there is an "opt" argument for phase.add_design_parameter() but when I run the problem with opt=True the value stays static. Do I need another layer to the solver that optimises this value?
This feature would be useful for allocating budgets to design decisions (e.g. purchasing a lighter chassis), and tuning parameters such as gear ratio.
It's absolutely possible, and in fact that is the intent of the opt flag on design parameters.
Just to make sure things are working as expected, when you have a design parameter with opt=True, make sure it shows up as one of the optimizer's design variables by invoking list_problem_vars on the problem instance after run_model; see the OpenMDAO documentation for list_problem_vars for details.
If it shows up as a design variable but the optimizer is refusing to change it, it could be that it sees no sensitivity wrt that variable. This could be due to
incorrectly defined derivatives in the model (wrong partials)
poor scaling (the sensitivity of the objective/constraints wrt the design parameter may be minuscule in the optimizer's units)
sometimes by nature of the problem, a certain input has little to no impact on the result (this is probably the least likely here).
Things you can try:
run problem.check_totals (make sure to call problem.run_model first) and see if any of the total derivatives appear to be incorrect.
run problem.driver.scaling_report and verify that the values are not negligible in the units in which the optimizer sees them. If they're really small at the starting point, then it may be appropriate to scale the design parameter smaller (set ref to a smaller number like 0.01) so that a small change from the optimizer's perspective results in a larger change within the model.
If things don't appear to be working after trying this, let me know and I'll work with you to figure this out.
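For reference, here is a minimal plain-OpenMDAO sketch of that checklist. It does not use the racecar model (the lap-time component and all numbers are made up), but since a dymos design parameter with opt=True ends up as an ordinary OpenMDAO design variable (the dymos call would be phase.add_design_parameter('mass', opt=True, lower=..., upper=...)), the same Problem-level calls apply to your dymos problem instance.

```python
import openmdao.api as om

# Toy stand-in for the racecar problem: 'mass' drives a lap-time-like objective.
prob = om.Problem()
prob.model.add_subsystem('lap',
                         om.ExecComp('time = 100.0 + 0.05 * mass',
                                     mass={'units': 'kg'},
                                     time={'units': 's'}),
                         promotes=['*'])

# Analogue of a dymos design parameter with opt=True and bounds:
# the parameter becomes an optimizer design variable.
prob.model.add_design_var('mass', lower=500.0, upper=800.0, ref=600.0)
prob.model.add_objective('time')

prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')

prob.setup()
prob.set_val('mass', 700.0)

prob.run_model()
prob.list_problem_vars()                         # 'mass' should appear as a design variable
prob.check_totals(compact_print=True)            # verify d(time)/d(mass) looks correct
prob.driver.scaling_report(show_browser=False)   # inspect magnitudes in the optimizer's units

prob.run_driver()                                # with good totals and scaling, mass moves
print(prob.get_val('mass'))
```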

Proximity Distortion in Depth Image

Description:
The goal of my current project is to determine the location of an "object" with just its 3D-coordinates.
To achieve that I figured it'd be best to turn off the "Fill"-Mode of my Camera (ZED 2 from Stereolabs), because I want some hard edges in my depth-image.
The Problem:
The depth image is significantly distorted in the proximity of other "objects".
The following image shows the depth image from the side; it is viewing some bars in front of a smooth wooden wall. The wall is mostly flat, so everything is fine here.
I blacked out the color image and myself; do not worry about those parts.
When I put my hand or another object in front of the wooden wall, parts that are bigger than my actual hand get "pulled" towards the camera around the location of the hand or other object. These parts seem to "stick" to other elevated parts in the proximity, as the area between the bars and my arm gets pulled entirely.
Question(s):
Is this normal?
Is there an easy way to get rid of it?
What is the reason behind it?
My own assumption(s):
I feel like this is some sort of approximation of unknown parts.
Hopefully it is not a calibration issue; I'm glad the camera was calibrated by default, as that is usually a pain to do right.
Because the new object placed in front of the wall hides more of the scene, there are more areas that the camera cannot see with both lenses; maybe it just "guesses" that the area in between is not so far off, due to some underlying algorithm that makes the image smoother.
First of all, I would advise you to change the depth mode while keeping the sensing mode in STANDARD:
ULTRA: offers the highest depth range and better preserves Z-accuracy along the sensing range.
QUALITY: has a strong filtering stage giving smooth surfaces.
PERFORMANCE: designed to be smooth, can miss some details.
From your description, it seems like you are using the PERFORMANCE mode.
The ZED camera uses a matching algorithm to generate the disparity/depth map, which is closed source. I recently contacted Stereolabs about it, and they said: "We cannot disclose this information to you because it's internal information and proprietary to Stereolabs."
Other works on the ZED camera have shown some limitations in depth sensing, especially when there is variation in lighting and shadows; see "Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs".
In addition to this, the depth error grows with the distance of the object from the camera (roughly quadratically for a stereo setup), so make sure to set your depth range properly.
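For completeness, here is a minimal sketch of those settings, assuming the ZED SDK 3.x Python bindings (pyzed); enum and attribute names follow that SDK version and may differ in other releases.

```python
import pyzed.sl as sl

zed = sl.Camera()

init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.ULTRA        # instead of PERFORMANCE
init.coordinate_units = sl.UNIT.METER
init.depth_maximum_distance = 5.0            # keep the depth range tight for your scene

if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open ZED camera")

runtime = sl.RuntimeParameters()
runtime.sensing_mode = sl.SENSING_MODE.STANDARD   # keep hard edges, no fill

depth = sl.Mat()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # float32 depth map in meters

zed.close()
```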

ITK-SNAP segmentation displays same intensity value even after registration

I'm using ITK-SNAP to compare the intensities of several Regions of Interest between several conditions.
For some subjects, I need to realign one image to another by using the Registration tool.
However, I noticed that the intensity values of a specific segmentation that I drew on the reference image don't change no matter how I register.
The value will be different between the two images, but even if I manually register the second image to something completely off, it will stay the same.
Is it possible to get the actual mean intensity of my segmentation depending on where it is on the registered image?
The Segmentation menu's "Volumes and Statistics..." option should show you what you are looking for.
Registration does not change intensities. Depending on how you transform your image, it affects the location (the coordinates) of your voxels, not their values: it may reshape, rotate, or translate the image, but the transformation matrix is applied only to coordinates and locations. If you expect different intensities after registration, you need to apply some technique other than registration; you should work with other features of your data.
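If you want the mean intensity to reflect the registration, one option is to resample the moving image into the reference space yourself and then measure under the label. Here is a minimal sketch with SimpleITK; this is an assumption on my part, not an ITK-SNAP feature, and the file names and exported transform format are placeholders.

```python
import SimpleITK as sitk

reference = sitk.ReadImage("reference.nii.gz")
moving = sitk.ReadImage("moving.nii.gz")
label = sitk.ReadImage("segmentation.nii.gz")        # drawn on the reference image
transform = sitk.ReadTransform("registration.tfm")   # the exported registration

# The intensities only "follow" the registration once the moving image is
# resampled onto the reference grid with that transform.
resampled = sitk.Resample(moving, reference, transform,
                          sitk.sitkLinear, 0.0, moving.GetPixelID())

stats = sitk.LabelStatisticsImageFilter()
stats.Execute(resampled, label)
print("mean intensity in label 1:", stats.GetMean(1))
```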
There are registration methods that do influence the intensities, but they are not used in ITK-SNAP; you would need a dedicated package for that.
For example, the paper "Intensity based image registration by minimizing the complexity of weighted subtraction under illumination changes" specifically manipulates intensities for fusion:
https://www.sciencedirect.com/science/article/abs/pii/S1746809415001755
Another example is MATLAB's intensity-based automatic registration: the process begins with the transform type you specify and an internally determined transformation matrix, and together they determine the specific image transformation that is applied to the moving image with bilinear interpolation.

How do I generate a waypoint map in a 2D platformer without expensive jump simulations?

I'm working on a game (using Game Maker: Studio Professional v1.99.355) that needs to have both user-modifiable level geometry and AI pathfinding based on platformer physics. Because of this, I need a way to dynamically figure out which platforms can be reached from which other platforms in order to build a node graph I can feed to A*.
My current approach is, more or less, this:
1. For each platform, consider each other platform in the level.
2. For each of those platforms, if it is obviously unreachable (due to being higher than the maximum jump height, for example), do not form a link and move on to the next platform.
3. If a link seems possible, place an ai_character instance on the starting platform and (within the current step event) simulate a jump attempt.
   a. Repeat this jump attempt for each possible starting position on the starting platform.
4. If this attempt is successful, record the data necessary to replicate it in real time and move on to the next platform.
5. If not, do not form a link.
6. Repeat for all platforms.
This approach works, more or less, and produces a link structure that when visualised looks like this:
linked platforms (Hyperlink because no rep.)
In this example the mostly-concealed pink ghost in the lower right corner is trying to reach the black and white box. The light blue rectangles are just there to highlight where recognised platforms are, the actual platforms are the rows of grey boxes. Link lines are green at the origin and red at the destination.
The huge, glaring problem with this approach is that for a level of only 17 platforms (as shown above) it takes over a second to generate the node graph. The reason for this is obvious: the yellow text in the screen centre shows us how long it took to build the graph: over 24,000(!) simulated frames, each with attendant collision checks against every block. I literally just run the character's step event in a while loop, so everything it would normally do to handle platformer movement in one frame, it now does 24,000 times.
This is, clearly, unacceptable. If it scales this badly at a mere 17 platforms then it'll be a joke at the hundreds I need to support. Heck, at this geometric time cost it might take years.
In an effort to speed things up, I've focused on the other important debugging number, the tests counter: 239. If I simply tried every possible combination of starting and destination platforms, I would need to run 17 * 16 = 272 tests. By figuring out various ways to predict whether a jump is impossible I have managed to lower the number of expensive tests run by a whopping 33 (12%!). However the more exceptions and special cases I add to the code the more convinced I am that the actual problem is in the jump simulation code, which brings me at long last to my question:
How would you determine, with complete reliability, whether it is possible for a character to jump from one platform to another, preferably without needing to simulate the whole jump?
My specific platform physics:
Jumps are fixed height, unless you hit a ceiling.
Horizontal movement has no acceleration or inertia.
Horizontal air control is allowed.
Further info:
I found this video, which describes a similar problem but which doesn't provide a good solution. This is literally the only resource I've found.
You could limit the number of comparisons by only comparing nearby platforms. I would probably only check the horizontal distance between platforms, and if it is wider than the longest possible jump, then don't bother checking for a link between those two. But you might have done this already, since you check for the maximum height of a jump.
I glanced at the video and it gave me an idea. Instead of looking at all platforms to find which jumps are impossible, what if you did the opposite? Try placing an AI character on all platforms and see which other platforms they can reach. That's certainly easier to implement if your enemies can't change direction in midair though. Oh well, brainstorming is the key to finding something.
Several ideas you could try out:
Limit the number of comparisons you need to make by using a spatial data structure, like a quadtree. This would allow you to severely limit how many platforms you're even trying to check. This is mostly the same as what you're currently doing, but a bit more generic.
Try to pre-compute some jump trajectories ahead of time. This will not catch all of your use cases - as you allow for full horizontal control - but might allow you to catch some common cases more quickly.
Consider some kind of walkability grid instead of a link generation scheme. When geometry is modified, compute which parts of the level are walkable and which are not, at some resolution (something close to the dimensions of your agent is a good starting point). You could also filter tiles by height, so that grid tiles that are higher than your jump height, and that you can't drop onto from a higher place, are marked as unwalkable. Then, as part of your pathfinding step, you can check whether a jump is actually executable ("start a jump, I can go vertically no more than 5 tiles, and after the peak of the jump I always fall down vertically with some speed").
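To illustrate the "don't simulate the whole jump" idea under your stated physics (fixed jump height, no horizontal inertia, full air control), here is a rough closed-form reachability filter in Python. The constants are illustrative, and it ignores intermediate obstacles and ceilings, so treat it as a pruning step before whatever confirmation check you still want to run.

```python
import math

# Example physics constants (illustrative values, in pixels and frames).
GRAVITY = 0.5          # px per frame^2
JUMP_HEIGHT = 96.0     # px, fixed apex height of a jump
MAX_RUN_SPEED = 4.0    # px per frame of horizontal movement

def can_reach(dx, dy):
    """Return True if a landing point offset by (dx, dy) from the takeoff point
    is reachable. dy > 0 means the target is higher than the takeoff point."""
    if dy > JUMP_HEIGHT:
        return False                                        # can't rise above the fixed apex
    t_up = math.sqrt(2.0 * JUMP_HEIGHT / GRAVITY)           # frames to reach the apex
    t_down = math.sqrt(2.0 * (JUMP_HEIGHT - dy) / GRAVITY)  # frames from apex down to dy
    t_total = t_up + t_down
    # Horizontal and vertical motion are independent here: with full air control
    # the reachable horizontal span is simply speed * airborne time.
    return abs(dx) <= MAX_RUN_SPEED * t_total

# Prune platform pairs: only pairs passing this cheap test need a real check.
print(can_reach(100.0, 50.0))   # True  (within the reachable envelope)
print(can_reach(100.0, 120.0))  # False (target higher than the jump apex)
```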

How to handle missing data in structure from motion optimization/bundle adjustment

I am working on a structure from motion application and I am tracking a number of markers placed on the object to determine the rigid structure of the object.
The app is essentially using standard Levenberg-Marquardt optimization over multiple camera views and minimizing the differences between expected marker points and the marker points obtained in 2D from each view.
For each marker point and each view the following function is minimised:
double diff = calculatedXY[index] - observedXY[index]
where the calculatedXY value depends on a number of unknown parameters that need to be found via the optimization, and observedXY is the marker point position in 2D. In total I have (marker points × views) functions like the one above that I am aiming to minimise.
I have coded up a simulation of the camera seeing all the marker points, but I was wondering how to handle the cases where, at runtime, points are not visible due to lighting, occlusion, or simply not being in the camera's view. In the real running of the app I will be using a webcam to view the object, so it is likely that not all markers will be visible at once, and depending on how robust my computer vision algorithm is, I might not be able to detect a marker all the time.
I thought of setting the diff value to 0 (so its squared difference term is 0) in the case where the marker point could not be observed. Could this skew the results, however?
Another thing I noticed is that the algorithm is not as good when presented with too many views; it is more likely to estimate a bad solution. Is this a common problem with bundle adjustment, due to the increased likelihood of hitting a local minimum when presented with too many views?
It is common practice to just leave out the terms corresponding to missing markers, i.e. don't try to minimise calculatedXY - observedXY if there is no observedXY term. There's no need to set anything to zero; you shouldn't even be considering this term in the first place - just skip it (or, I guess, in your code it's equivalent to setting the error to zero).
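As a toy illustration of "just skip it", here is a sketch using scipy.optimize.least_squares where only the 3D marker positions are unknown and the cameras are known projection matrices (in a real bundle adjustment the camera parameters would be unknowns too). Everything here (cameras, noise levels, the visibility pattern) is made up for the example; the point is that missing observations simply contribute no residual terms.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n_points, n_views = 6, 4

# Ground-truth marker positions and simple known 3x4 cameras with a sideways baseline.
true_points = rng.uniform(-1.0, 1.0, size=(n_points, 3))
cameras = [np.hstack([np.eye(3), np.array([[0.5 * v], [0.0], [5.0]])])
           for v in range(n_views)]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Simulated observations; a few markers are missing in some views.
visible = np.ones((n_views, n_points), dtype=bool)
visible[0, 1] = visible[2, 4] = visible[3, 0] = False
observed = np.zeros((n_views, n_points, 2))
for v in range(n_views):
    for i in range(n_points):
        if visible[v, i]:
            observed[v, i] = project(cameras[v], true_points[i]) + rng.normal(0.0, 1e-3, 2)

def residuals(params):
    pts = params.reshape(n_points, 3)
    res = []
    for v in range(n_views):
        for i in range(n_points):
            if not visible[v, i]:
                continue          # missing observation: no residual term at all
            res.extend(project(cameras[v], pts[i]) - observed[v, i])
    return np.asarray(res)

x0 = true_points.ravel() + rng.normal(0.0, 0.05, n_points * 3)
sol = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt, as in the post
print("final cost:", sol.cost)
```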
Bundle adjustment can fail terribly if you simply throw a large number of observations at it. Build your solution up incrementally by solving with a few views first and then keep on adding.
You might want to try some kind of "robust" approach. Instead of using plain least squares, use a loss function. These allow your optimisation to survive even if there are a handful of observations that are incorrect. You can still do this in a Levenberg-Marquardt framework; you just need to incorporate the derivative of your loss function into the Jacobian.
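Reusing the residuals function and x0 from the sketch above, switching to a robust loss in SciPy looks like this; note that SciPy only supports robust losses with the 'trf'/'dogbox' methods, not with 'lm', and the f_scale value is illustrative.

```python
# Huber-style loss so a few bad marker detections don't dominate the fit.
sol_robust = least_squares(residuals, x0, method="trf",
                           loss="huber", f_scale=0.01)
print("robust cost:", sol_robust.cost)
```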

Resources