What known methods are there for automatically taring a scale?

I'm programming an Ignition project for work. Part of its job is to detect when a tote has been placed on the scale, then tell the scale to tare.
The values from the scale head (gross weight, tare weight, etc.) are readable from my project and I will raise a flag telling the scale to tare when it is the right time.
So far, the best approach I can think of is to take the average weight of the totes (which are usually the same size and shape), allow a small tolerance, and tell the scale to tare whenever the gross weight falls within that tolerance. For outliers there will probably have to be a manual tare button as a failsafe.
Ignition uses Jython.
I was wondering whether there are better methods for automatically taring a scale, or if someone can endorse or critique my current idea.
I have not tried my intended method yet; I'm looking for feedback before I head in the wrong direction.
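For what it's worth, here is a minimal sketch of that tolerance check in Python (Jython-compatible). The average weight and tolerance values are assumptions to replace with measurements of your totes, and the actual Ignition tag reads/writes are left as comments rather than invented calls:

    # Sketch of the proposed tolerance-based auto-tare check.
    # AVERAGE_TOTE_WEIGHT and TOLERANCE are assumed values, not measured ones.
    AVERAGE_TOTE_WEIGHT = 12.5  # average empty-tote weight, in the scale's units
    TOLERANCE = 0.5             # allowed deviation from the average

    def should_auto_tare(gross_weight, tare_weight):
        # Skip if a tare is already applied.
        if tare_weight != 0:
            return False
        # Tare only when the gross weight looks like an empty tote.
        return abs(gross_weight - AVERAGE_TOTE_WEIGHT) <= TOLERANCE

    # In a tag-change script you would read the gross and tare weights from
    # the scale head's tags, and if should_auto_tare(...) returns True (and
    # the reading is stable), raise the tare-request flag.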

Related

Computer vision: segmentation setup and graph cut potentials

I have been trying to teach myself some simple computer vision algorithms, and am trying to solve a problem where I have a noise-corrupted image and all I am trying to do is separate the black background from the foreground, which contains some signal. Now, the background RGB channels are not all completely zero, as they can contain some noise. However, the human eye can easily discern the foreground from the background.
So, what I did was use the SLIC algorithm to break the image down into superpixels. The idea is that, since the image is noise-corrupted, computing statistics over the patches might classify background and foreground better because of the higher SNR.
After this, I get around 100 patches, each of which should have a similar profile, and the result of SLIC seems reasonable. I have been reading about graph cuts (the Kolmogorov paper), and it seemed like something nice to try for the binary problem I have. So I constructed a graph that is a first-order MRF, with edges between immediate neighbours (a 4-connected graph).
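For concreteness, a minimal sketch of that superpixel step using scikit-image's SLIC; the file name and parameter values are placeholders, not tuned choices:

    # Sketch of the SLIC superpixel step with scikit-image.
    # "noisy_image.png", n_segments, and compactness are placeholder choices.
    import numpy as np
    from skimage import io
    from skimage.segmentation import slic

    image = io.imread("noisy_image.png")
    labels = slic(image, n_segments=100, compactness=10.0)

    # Per-superpixel mean intensity: statistics over a patch have higher
    # SNR than individual noisy pixels.
    means = np.array([image[labels == k].mean() for k in np.unique(labels)])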
Now, I was wondering what possible unary and binary terms I can use here to do my segmentation. For the unary term, I was thinking I could model it as a simple Gaussian, where the background has zero mean intensity and the foreground some non-zero mean. However, I am struggling to figure out how to encode this. Should I just assume some noise variance and compute probabilities directly from the patch statistics?
Similarly, for neighbouring patches I do want to encourage them to take the same label, but I am not sure what binary term reflects that. Just taking the difference between the labels (1 or 0) seems weird...
Sorry for the long-winded question. Hoping someone can give some helpful hint on how to start.
You could build your CRF model over superpixels, such that two superpixels are connected if they are neighbours.
For your statistical model, pixel-wise posteriors are simple and cheap to compute.
So, I suggest the following for the unary terms of the CRF:
Build foreground and background histograms over texture per pixel (assuming you have a mask, or a reasonable number of marked foreground pixels (note: pixels, not superpixels)).
For each superpixel, make an independence assumption over the pixels within it, such that a superpixel's likelihood of being either foreground or background is the product of the likelihoods of each observation in the superpixel (in practice, we sum logs). The individual likelihood terms come from the histograms you generated.
Compute the posterior for foreground as the cumulative foreground likelihood described above divided by the sum of the cumulative likelihoods of both; similarly for background.
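A rough sketch of that unary computation, assuming you already have normalized per-pixel histograms (all names here are illustrative):

    import numpy as np

    def unary_posteriors(pixel_bins, fg_hist, bg_hist, eps=1e-12):
        # pixel_bins: quantized texture/intensity bin index for each pixel
        #             in one superpixel
        # fg_hist, bg_hist: normalized foreground/background histograms
        # Independence assumption: sum log-likelihoods over the pixels.
        log_fg = np.sum(np.log(fg_hist[pixel_bins] + eps))
        log_bg = np.sum(np.log(bg_hist[pixel_bins] + eps))
        # Normalize to a posterior (shift by the max for numerical stability).
        m = max(log_fg, log_bg)
        p_fg = np.exp(log_fg - m) / (np.exp(log_fg - m) + np.exp(log_bg - m))
        return p_fg, 1.0 - p_fg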
The pairwise terms between superpixels can be as simple as the difference between the mean observed textures (pixel-wise) of each, passed through a kernel such as a radial basis function.
Alternatively, you could compute a histogram over each superpixel's observed texture (again, pixel-wise) and compute the Bhattacharyya distance between each neighbouring pair of superpixels.
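A sketch of that second option, assuming normalized per-superpixel histograms (the beta scale is an assumed tuning knob):

    import numpy as np

    def bhattacharyya_distance(h1, h2, eps=1e-12):
        # Bhattacharyya coefficient of two normalized histograms,
        # turned into a distance.
        bc = np.sum(np.sqrt(h1 * h2))
        return -np.log(bc + eps)

    def pairwise_weight(h1, h2, beta=1.0):
        # Similar superpixels get a high penalty for taking different
        # labels; dissimilar ones a low penalty.
        return np.exp(-beta * bhattacharyya_distance(h1, h2))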

Exclude graph values above a certain point

I would like to ensure that, when looking at my web-server response-time graphs, I can see a good level of detail in the 0-5k range. However, occasionally there are metrics above the 5k mark (file downloads) which increase the scale of the graph, making it difficult to see what is going on in the regular range of values.
How do I exclude metric values above 5k from being plotted? Bear in mind that I do not want the metrics themselves to be excluded, only their plotting.
Or perhaps the best thing to do would be to scale down the high points with a log scale, but then I lose the actual scale information, which is quite useful at a glance.
Any help appreciated.
From the Graphite Documentation:
http://graphite.readthedocs.org/en/latest/render_api.html#ymax
Default: The highest value of any of the series displayed.
Manually sets the upper bound of the graph. Can be passed any integer or floating point number.
Example: &yMax=0.2345
It looks like the yMax parameter was only a suggestion at one point; it is reported to be strictly enforced as of 0.9.5. For more: https://bugs.launchpad.net/graphite/+bug/412663
Also, from: http://graphite.wikidot.com/url-api-reference
yMin and yMax set the minimum and maximum y-values for the generated image. A good use of these parameters would be min=0&max=100 when the value you are graphing is a percentage.
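Putting that together for your case, a hypothetical render URL (the host and target names are placeholders, not your actual metrics):

    http://graphite.example.com/render?target=stats.webserver.response_time&yMin=0&yMax=5000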
Some other finds; I'm not sure if they're entirely relevant, but they might be helpful.
graphite-graph-dsl: A small DSL to describe graphite graphs
https://github.com/behrendsj/graphite-graph-dsl
Added ability to define the right y-axis min and max values: https://github.com/behrendsj/graphite-graph-dsl/commit/11e146b0b3eb82faa7c1f5db5af324c81db66144
graphene: Graphene is a realtime dashboard & graphing toolkit based on D3 and Backbone.
https://github.com/jondot/graphene
Define yMax support: https://github.com/jondot/graphene/pull/33

Implement prismatic joint physics

I'm trying to implement my own physics for an app I'm making in C++ with openFrameworks. I'm currently using Box2D, but I don't need collision detection, so I want something much lighter.
I have a world with gravity and a dynamic object whose movement is constrained by a prismatic joint of arbitrary length at an arbitrary angle, attached to a static object. Friction is simulated using the joint motor.
I've looked at
Resources for 2d game physics
But everything there seems to focus on building complete physics engines, which I don't need to do. Could anyone point me in the right direction for the maths on this?
You just need to separate the force of gravity into two components: one along the prismatic joint axis, and everything else (see free-body diagrams).
This is easily achieved with the dot product between the gravity vector and the axis vector. If you first scale the axis vector to length 1, the result of the dot product is the force along the axis.
To translate the force into acceleration, you just need to divide by the mass of the moving object.
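A minimal sketch of that projection (in Python for brevity; translating it to C++ is direct, and all names are illustrative):

    import math

    def axis_acceleration(gravity, axis, mass):
        # gravity: (gx, gy) force vector, e.g. (0.0, -9.81 * mass)
        # axis:    (ax, ay) direction of the prismatic joint
        # Normalize the axis so the dot product yields the force along it.
        length = math.hypot(axis[0], axis[1])
        ux, uy = axis[0] / length, axis[1] / length
        force_along_axis = gravity[0] * ux + gravity[1] * uy
        # Divide by mass to get the scalar acceleration along the axis.
        return force_along_axis / mass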
If Box2D has what you want, I'd recommend reconsidering your "lighter" requirement. Unless you can quantify the harm caused by using a library with a few more bytes, I'd say the benefit will outweigh the cost of writing it yourself.
If you have a good understanding of the physics and want to learn how to do it, by all means go ahead. If not, use what someone more knowledgeable has provided and forget about the size of the library.

How to detect a trend inside unsteady data (e.g. Trendly)?

I was wondering what kind of model / method / technique Trendly might use to achieve this:
[It tries to find the moments where significant changes set in and ignores random movements]
Any pointers very welcome! :)
I've never seen 'Trendly', and don't know anything about it, but if I wanted to produce that red line from that blue line, in an algorithmic fashion, I would try:
1. Fourier transform the whole data set.
2. Choose a block size longer than the period of the dominant frequency.
3. Divide the data up into blocks of the chosen size.
4. Compare adjacent blocks with a statistical test of some sort.
5. Where the test says two blocks belong to the same underlying distribution, merge them.
6. If any were merged, go back to step 4.
7. The red trend line is then the mean of each block (see the sketch below).
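Here is a rough sketch of steps 3-7, using a two-sample t-test as the "statistical test of some sort" (that choice, and the 0.05 significance level, are assumptions):

    import numpy as np
    from scipy.stats import ttest_ind

    def merge_blocks(data, block_size, alpha=0.05):
        # Step 3: divide the data into fixed-size blocks.
        blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
        merged = True
        while merged and len(blocks) > 1:       # step 6: repeat until stable
            merged = False
            out = [blocks[0]]
            for b in blocks[1:]:
                _, p = ttest_ind(out[-1], b)    # step 4: compare neighbours
                if p > alpha:                   # step 5: same distribution?
                    out[-1] = np.concatenate([out[-1], b])
                    merged = True
                else:
                    out.append(b)
            blocks = out
        # Step 7: the trend is the mean of each surviving block.
        return [(len(b), float(np.mean(b))) for b in blocks]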
A simple median filter could produce a smoother curve over mostly un-smooth data.
Otherwise, a brute-force or genetic algorithm could be used to find the best way to split the data into sections, penalizing solutions both for using more sections and for fitting the lines less accurately.
Another way would be like this: start at the beginning, and as soon as the line moves outside some radius (3 above or 3 below the first value, for instance), set the new height to the average of the current line's height and the previous marker.
If you keep doing that, it will ignore small fluctuations; however, a large enough fluctuation will still affect it.
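A small sketch of that banded approach (the radius of 3 is just the arbitrary example value from above):

    def radius_trend(data, radius=3.0):
        # Hold the trend level until the signal leaves the +/- radius band,
        # then re-anchor at the average of the new value and the old level.
        level = data[0]
        trend = [level]
        for x in data[1:]:
            if abs(x - level) > radius:
                level = (x + level) / 2.0
            trend.append(level)
        return trend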

Similarity Between Colors

I'm writing a program that works with images and at some point I need to posterize the image. This means I need to bin the colors, but I'm having trouble deciding how to tell how close one color is to another.
Given a color in RGB, I can think of at least 2 ways to see how different they are:
|r1 - r2| + |g1 - g2| + |b1 - b2|
sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2)
And if I move into HSV, I can think of other ways of doing it.
So I ask, ignoring speed, what is the best way to tell how similar two colors are? Best meaning most accurate to the human eye.
Well, if speed is not an issue, the most accurate way would be to take some sample images and apply the filter to them using various cutoff values for the distance. The distance would be determined by one of the equations on the Color_difference page that astander linked to, which means doing the calculations in one of the color spaces listed there and then converting back to sRGB or the like (and converting the image into that color space first if it isn't in it to begin with). Then have a large number of people examine the images to see what looks best to them, and go with the cutoff value for the images that the majority agrees look best.
Basically, it's largely a matter of subjectivity; in fact, it also depends on how stylized you want the images, and you might even want to add some sort of control so that you can alter the cutoff distance on the fly.
If speed does become a bit of an issue and/or you want more simplicity, then just use your second choice of distance calculation (which is simply the CIE76 equation; just make sure to use the L*a*b* color space), with the cutoff being around 2 or 2.3.
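A minimal sketch of that CIE76 check, assuming scikit-image for the sRGB-to-L*a*b* conversion (the sample colors and the 2.3 cutoff are illustrative):

    import numpy as np
    from skimage.color import rgb2lab

    def cie76_distance(rgb1, rgb2):
        # CIE76 delta-E: Euclidean distance in L*a*b* space.
        # rgb1, rgb2: (r, g, b) tuples with components in 0-255.
        lab1 = rgb2lab(np.array([[rgb1]], dtype=float) / 255.0)[0, 0]
        lab2 = rgb2lab(np.array([[rgb2]], dtype=float) / 255.0)[0, 0]
        return float(np.linalg.norm(lab1 - lab2))

    # Bin two colors together when their delta-E is below the cutoff.
    same_bin = cie76_distance((200, 30, 30), (202, 31, 29)) < 2.3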
What do you mean by "posterize the image"?
If you're trying to cluster the colors into bins, you should look at cluster analysis.
Just a comment if you are going to move to HSV (or similar spaces):
Diffing on H: the difference between 0° and 359° is numerically big, but perceptually it is negligible.
When V or S are small, the H difference should be treated as small (hue is unreliable at low saturation or value).
For computer vision apps, what matters is not the perceptual difference (used mostly by paint manufacturers) but whether the colors belong to the same object/segment. That means we might partially ignore V, which can change with lighting conditions.
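For the circular-hue point above, a small sketch (the 360° wrap-around is the whole trick):

    def hue_distance(h1, h2):
        # Circular difference between two hues in degrees:
        # 0 and 359 are 1 degree apart, not 359.
        d = abs(h1 - h2) % 360.0
        return min(d, 360.0 - d)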
