Halcon - detect bright spots compared to local surroundings - brightness

I need to detect bright regions in an image. This would be quite easy with a plain threshold, but I need to find spots that are bright compared to their surroundings, not bright relative to an absolute value.
What would be a good way to do this?
Unfortunately I do not have sample images at the moment.

If your background does not have too much texture, you can try the 'local_threshold' operator.
There is a nice example of it included with HDevelop demonstrating the operator for OCR purposes.
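HALCON code would be the canonical answer here, but for illustration, here is the same idea sketched in Python with OpenCV (an assumption on my part, since the question is about HALCON): compare each pixel against a local mean, which is essentially what HALCON's mean_image + dyn_threshold combination does. The 31x31 window and the offset of 15 gray levels are placeholders to tune:

    import cv2

    # Estimate the local surroundings of every pixel with a box filter.
    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
    local_mean = cv2.blur(img, (31, 31))

    # Keep pixels that are noticeably brighter than their neighborhood;
    # cv2.subtract saturates at 0, so darker pixels simply drop out.
    diff = cv2.subtract(img, local_mean)
    _, spots = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
    cv2.imwrite("bright_spots.png", spots)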

Related

Generating eroded mountain terrain in a local / 'bottom-up' way

I'm picturing a typical random polygon hillside with ridges that come together into bigger ridges as you ascend and canyons that come together into bigger canyons as you descend.
The way you normally make something like this is to start with the top of the whole mountain and iterate until you have enough detail in the area you're interested in and then stop.
OK, but suppose there is no absolute mountain top: the terrain just keeps going, and I want to generate a neighboring chunk before I reach it, so that it matches up with what is already there.
After thinking about it for a while, I suspect this is either impossible or involves a kind of math I haven't even heard of. On the other hand, it seems like it should be possible, perhaps with extra information stored per vertex?
Maybe try doing it in 2D first and see what you can come up with.
It looks possible in 2D. In 3D it should work too, though not with a real mountain that has a single peak; more with an endless slope that never converges to a point.
What you want to do is, in some sense, run the gravity rules in reverse.
Not sure if this is really an answer, but the question is quite vague too :)
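One concrete way to make bottom-up generation order-independent, in the spirit of the "try 2D first" suggestion: derive every lattice height from a hash of its world coordinate, so any chunk can be generated in isolation and shared borders agree by construction. Note this is plain value noise, not erosion (carving the ridge/canyon structure would need a similar coordinate-keyed construction), and all the names here are made up for the sketch:

    import math, hashlib

    def height(x, y, seed=0):
        # Deterministic pseudo-random value in [0, 1) for a lattice point,
        # keyed only on the world coordinate and a global seed.
        def h(ix, iy):
            digest = hashlib.md5(f"{seed}:{ix}:{iy}".encode()).digest()
            return int.from_bytes(digest[:8], "big") / 2**64
        ix, iy = math.floor(x), math.floor(y)
        fx, fy = x - ix, y - iy
        # Bilinear blend of the four surrounding lattice values.
        top = h(ix, iy) * (1 - fx) + h(ix + 1, iy) * fx
        bot = h(ix, iy + 1) * (1 - fx) + h(ix + 1, iy + 1) * fx
        return top * (1 - fy) + bot * fy

    # Any chunk can be filled without its neighbors existing yet;
    # a later, adjacent chunk reproduces the same border heights.
    chunk = [[height(x / 8, y / 8) for x in range(8, 17)] for y in range(9)]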

Is it possible to create an SVG that is precise to 1,000,000,000% zoom?

Split off from: https://stackoverflow.com/questions/31076846/is-it-possible-to-use-javascript-to-draw-an-svg-that-is-precise-to-1-000-000-000
The SVG spec states that SVGs use double-precision floats for all values.
Through testing, it's easy to verify this.
Affinity Designer is a vector graphics program that allows zooms up to 1,000,000,000%, and it too uses double-precision floats for all calculations.
I would like to know from someone who deeply understands double-precision floats: is it possible to create an SVG that is visually correct at 1,000,000,000% zoom?
Honestly, I'm struggling with getting a grasp on the math of this:
9007199254740992 (2^53, the largest integer a double can represent exactly, per https://stackoverflow.com/a/1848953/2328064) is larger than 1,000,000,000, so it seems reasonable that something 2 or even 2000 units wide would still be small when starting at 9007199254740992 and zooming to 1,000,000,000%.
Hypothetical examples as ways to approach the question:
If we created an SVG of a 2D slice of the entire visible universe, how far could we zoom in before floating point rounding started shifting things by 1 pixel?
If we start with an SVG that is 1024x1024, can we create a 'microscopic' grid that is both visible and visually correct at 1,000,000,000% zoom? (Like, say, we can see 20+ equidistant squares)
Edit:
Based on everything so far, the definitive answer is yes (with some important and interesting caveats for actually viewing this SVG).
In order to get the most precision at high zoom, start at the centre.
The SVG spec is not designed for this level of precision. This is especially true of the spec for SVG viewers.
(Not mentioned below) Typically curves are represented in software as Bézier curves, and standard Bézier curve implementations do not draw mathematically perfect circles.
Of course it is. Floating point math deals with relative, not absolute, precision. If you created a regular polygon at the origin with radius 1e-7, then zoomed it to 1e7X size, you would expect to see a regular polygon with the same size and precision as an unzoomed polygon of radius 1.
If you were to create the same regular polygon with vertices centered at (0, 1e9) or so, you'd expect to see some serious error. Doubles that large do not have enough absolute precision to accurately represent a shape that small.
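A quick way to see this relative-precision effect, assuming Python 3.9+ for math.ulp (which returns the gap between a float and the next representable one):

    import math

    # Near 1e-7 (the origin-centered polygon), adjacent doubles are
    # ~1.3e-23 apart: vastly finer than the shape itself.
    print(math.ulp(1e-7))  # ~1.32e-23

    # Near 1e9 (the far-away polygon), adjacent doubles are ~1.2e-7
    # apart: about the size of the whole radius-1e-7 shape, so its
    # vertices collapse onto a handful of representable points.
    print(math.ulp(1e9))   # ~1.19e-07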
However, there's another way to express "shapes far from the origin" in SVG, using a node transformation. If you were to specify the polygon relative to the origin, but give it a translation of (0,1e9), and zoomed to that point, you'd expect to see the same precision as the origin-centered polygon.
HOWEVER however, all this assumes that the SVG renderer in question is designed to do such things in the most precise possible manner (such as composing the shape and view transformations before applying them to the vertices, rather than applying one at a time). I'm not sure if any of the SVG renderers out there go to such lengths, given the unusualness (some might say, the wrong-headedness) of such a use case.
TL;DR: It is possible to create such an SVG file, but it's impossible to know if a renderer or other tools that merely follow the spec will render/process it correctly.
This is a case of the SVG standard being too vague. Since the renderers, canvases, etc. only have to follow the spec, the realistic answer is: you can create it, but it won't be usable for what you intend to use it for.
Most likely no.
A double has around 53 bits of precision, so when applying a zoom of 1e9 percent you can pick up a small amount of error, and there are no guarantees. It may not be enough to push things out of the correct pixel, but I would suggest building your own solution and reading up on rasterisation, because that seems to be what you need to know more about.

Programming math-based images for use in high-resolution artwork [closed]

I'm interested in creating poster-sized images that contain repeating patterns, similar to the two (public domain) images below, the Flower of Life and a Penrose tiling:
My questions:
How do people usually create images like these on a computer? I'm hoping the answer isn't, "Open Adobe Illustrator and guess at intersection points," since such points can be defined mathematically. But I also imagine that not everyone with an interest in geometric patterns is also familiar with programming.
What is the best environment for creating such images? In particular, what's the best way to get high-resolution images out of Java, Python, Processing, etc? Or, is Mathematica the best tool?
Actually calculating the points and doing the math isn't the hard part, in my mind (at least, it's not the focus of this question). I'm interested in the best way to get a high-quality visual product out of a program.
The best way to create images like these is to learn to write PostScript. It's a clean language, easy to learn, and quite powerful once you know it well.
Bill Casselman's manuals are by far the best reference for high quality mathematical illustration.
Use a vector image format like SVG. This will scale perfectly to any resolution.
Inkscape is a great tool for creating these.
Once you have a vector image format, there are many options for using it in programming languages, depending on your language of choice.
For example -
.NET - SvgNet
ActionScript - svgweb
C++ - LeadTools
XAML for WPF/Silverlight - ViewerSVG (Converts SVG to XAML)
I don't know how those images were created (I would guess they were scanned from a book), but in my work with fractals I tend to start with the <canvas> tag, mainly so that I can change the size of the element and see it drawn with more iterations, to get the highest resolution.
That is the problem with something like SVG: you need to pick a resolution to develop at, and while the result scales well up and down, if you develop at one resolution and then demo at a higher one, you may see more gaps than you would like.
If you just want to save a static image, any GUI toolkit will work, since you are effectively saving a GIF at that point; but if you want it on a web page, looking as good as it can in the browser, you may want to look at using JavaScript.
The math part isn't hard, so drawing the image is fairly easy once you derive the recursive algorithm that is needed. I tend to recurse to the next iteration until the size drops below a threshold, for example a radius < 3, and then exit.
Well, #2 is going to be kind of a holy war so I'll address #1. :)
The key to images of this nature is recursion. Basically they are the same image repeated over and over in a controlled way to get an interesting result. Take the Flower of Life, for example. You repeat the center petal six times (the method for drawing the petals is up to you). Then you create six more flowers using the petal tips as centers, each overlapping one of the petals. You then recursively move outward. After a few "rounds" you stop and draw the containing circle. Basically the recursion simulates the stamp, move, and rotate that would be required if you were doing it by hand.
When I have played around with these kinds of things I have always found that experimentation is the best way to get cool new things. Of course that could be just my lack of imagination. :)
I know I am not very math-heavy in this answer, but that is up to you and experimentation. Just remember that COS and SIN are your friends and there are 360 degrees in a circle (or 2pi radians, depending on your math package).
EDIT: Adding some math for the "Flower"
Starting with a center of (Xo, Yo) and a flower radius of r...
The tips of the petals (P0, P1, etc) are determined by...
X = Xo + (sin((n * pi)/3 + (pi / 6)) * r)
Y = Yo - (cos((n * pi)/3 + (pi / 6)) * r)
where n is the petal number (0..5)
Once you compute a petal tip, just draw the petal and then start a new flower at the tip. You would also set a bounding circle so that any point outside that circle would not be drawn.
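For what it's worth, here is a sketch that turns the formulas above into a program, as Python emitting SVG (the radius, bounding radius, and viewBox are arbitrary choices, not part of the original answer):

    import math

    def flower_of_life(cx, cy, r, bound, out="flower.svg"):
        seen, circles = set(), []
        frontier = [(cx, cy)]
        while frontier:
            x, y = frontier.pop()
            key = (round(x, 6), round(y, 6))
            # Skip circles already stamped or outside the bounding circle.
            if key in seen or math.hypot(x - cx, y - cy) > bound:
                continue
            seen.add(key)
            circles.append((x, y))
            # Each circle spawns six neighbors at its petal tips, using
            # the angle (n * pi)/3 + pi/6 from the formulas above.
            for n in range(6):
                a = n * math.pi / 3 + math.pi / 6
                frontier.append((x + math.sin(a) * r, y - math.cos(a) * r))
        parts = ['<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 400">']
        parts.append(f'<circle cx="{cx}" cy="{cy}" r="{bound + r}" fill="none" stroke="black"/>')
        parts += [f'<circle cx="{x:.3f}" cy="{y:.3f}" r="{r}" fill="none" stroke="black"/>'
                  for x, y in circles]
        parts.append('</svg>')
        with open(out, "w") as f:
            f.write("\n".join(parts))

    flower_of_life(200, 200, r=50, bound=100)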
I would try to create a PDF with iText in Java. PDF supports vector graphics, so it should scale without problems. I don't know how well iText scales w.r.t. performance when you have a really big number of graphic elements.
A1. You might want to look at turtle-graphics, l-systems, iterated function systems, space filling curves, and probably a lot of other approaches I'm not familiar with or haven't thought of yet.
A2. You can program any of these with any of the languages you suggest. I like Mathematica, but I know that not everyone has a copy of it and I have a copy 'cos I work in number-crunching and get to play with it for making pretty pictures. But Processing, which is free, was designed to be artist-friendly and might be a better starting point for you. Both Mathematica and Processing do the graphics right there and then, no calls to external libraries (or worrying which ones to use).
And, while I agree with everyone who says that vectors are the way to go, don't forget that the final production step, onto paper or screen, is rendering, so give some thought to how that will be done. This might, for example, lead you to PostScript or PDF as an output format.
Have fun
Mark
Well, I used to draw Flowers of Life with a compass back in junior school ... very simple actually ... but I don't think that's the answer you're looking for.
Basically it consists of drawing a circle of the same radius, from every point, until you encounter the big circle (limit).

Similarity Between Colors

I'm writing a program that works with images and at some point I need to posterize the image. This means I need to bin the colors, but I'm having trouble deciding how to tell how close one color is to another.
Given a color in RGB, I can think of at least 2 ways to see how different they are:
|r1 - r2| + |g1 - g2| + |b1 - b2|
sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2)
And if I move into HSV, I can think of other ways of doing it.
So I ask, ignoring speed, what is the best way to tell how similar two colors are? Best meaning most accurate to the human eye.
Well, if speed is not an issue, the most accurate way would be to take some sample images and apply the filter to them using various cutoff values for the distance. Distance here would be determined by one of the equations on the Color_difference page that astander linked to, which means doing the calculations in one of the color spaces listed there and converting back to sRGB or similar (and converting the image into that color space first, if it isn't already in it). Then have a large number of people examine the images to see what looks best to them, and go with the cutoff value for the images the majority agrees look best.
Basically, it's largely a matter of subjectivity; in fact, it also depends on how stylized you want the images, and you might even want to add some sort of control so that you can alter the cutoff distance on the fly.
If speed does become an issue and/or you want more simplicity, then just use your second choice for the distance calculation (which is simply the CIE76 equation; just make sure to use the L*a*b* color space) with the cutoff being around 2 or 2.3.
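Since that recommendation is concrete, here is a self-contained sketch of CIE76, converting sRGB to L*a*b* by hand (these are the standard sRGB/D65 constants; a library such as colormath would do the same job):

    import math

    def srgb_to_lab(r, g, b):
        # Undo the sRGB gamma curve to get linear light in 0..1.
        def lin(c):
            c /= 255.0
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        rl, gl, bl = lin(r), lin(g), lin(b)
        # Linear RGB -> XYZ (sRGB primaries, D65 white point).
        x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
        y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
        z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
        # XYZ -> L*a*b*, normalized by the reference white.
        def f(t):
            return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
        fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
        return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

    def delta_e_cie76(c1, c2):
        # CIE76 is just Euclidean distance in L*a*b* space.
        return math.dist(srgb_to_lab(*c1), srgb_to_lab(*c2))

    # Distances under ~2.3 are roughly "just noticeable", so these two
    # reds would land in the same posterization bin.
    print(delta_e_cie76((255, 0, 0), (250, 10, 5)))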
What do you mean by "posterize the image"?
If you're trying to cluster the colors into bins, you should look at cluster analysis.
Just a comment if you are going to move to HSV (or similar spaces):
Diffing on H: the difference between 0° and 359° is numerically large but perceptually negligible (see the sketch below).
Differences in H matter little when V or S is small.
For computer vision applications, what usually matters is not perceptual difference (that is mostly the concern of paint manufacturers) but whether two colors belong to the same object or segment. That means V can be partially ignored, since it changes with lighting conditions.
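The wrap-around point above is worth a line of code, since it is an easy bug to write; hue is an angle, so compare it circularly (a trivial sketch):

    def hue_distance(h1, h2):
        # Hue lives on a circle: 0 and 359 degrees are 1 degree apart.
        d = abs(h1 - h2) % 360
        return min(d, 360 - d)

    print(hue_distance(0, 359))  # 1, not 359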

Lens correction projection

What is the simplest way to un-warp a photo taken with a fisheye or wide-angle lens? I'm looking for a pixel projection formula with few parameters. The camera and lens parameters will not be known, so the user has to adjust them visually. Thanks
There is a good paper here that provides some decent-looking mathematical models for lens distortion; it's a starting point, at least. SDX2000 was kind of on the right track with the grid, I think. The most common way to approach the problem is to map the image to a grid and then apply warping parameters to produce pincushion and barrel distortion. See the lens distortion filters in Lightroom or Photoshop as an example.
There is an excellent discussion from ImageMagick. They give the equation that they use.
Note that this does not correct distortion in the same way as Photoshop CS6 (i.e. you cannot take coefficients from the Adobe lens profiles and simply chuck them in).
The paper that Kamil points to seems like an excellent in-depth look.
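To make the ImageMagick-style model concrete: it corrects radial distortion with Rsrc = r * (A*r^3 + B*r^2 + C*r + D), where the user tunes A, B, C, D visually. Here is a sketch of that mapping with NumPy and Pillow (the normalization by half the smaller dimension and the example coefficients are my assumptions, not ImageMagick's code):

    import numpy as np
    from PIL import Image

    def unwarp(img, A=0.0, B=0.0, C=0.0, D=1.0):
        src = np.asarray(img)
        h, w = src.shape[:2]
        cx, cy, norm = w / 2.0, h / 2.0, min(w, h) / 2.0
        # For every destination pixel, compute the source pixel it came
        # from (inverse mapping avoids holes in the output).
        ys, xs = np.mgrid[0:h, 0:w]
        dx, dy = (xs - cx) / norm, (ys - cy) / norm
        r = np.hypot(dx, dy)
        scale = A * r**3 + B * r**2 + C * r + D
        sx = np.clip(cx + dx * scale * norm, 0, w - 1).astype(int)
        sy = np.clip(cy + dy * scale * norm, 0, h - 1).astype(int)
        return Image.fromarray(src[sy, sx])

    # A=B=C=0, D=1 is the identity; nudge the values while watching the
    # result, e.g. a mild barrel correction:
    # unwarp(Image.open("photo.jpg"), A=0.01, B=-0.05, D=1.04).save("out.jpg")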
I would assume you could use the lens equation to do it.
1/f = 1/object_distance + 1/image_distance
Where f is the focal length (the user input). The ratio of image distance and object distance could be used to resize the image appropriately, using the magnification equation. To get what you really want, then, you need to restructure the equation:
1/object_distance = 1/f - 1/image_distance
And then use the magnification equation to use the object height to resize:
-image_distance/object_distance = image_height/object_height
The catch, as you may have noticed, is that you need to know the distance each pixel is away from the camera. Otherwise, it simply doesn't work. You could ask the user for that information, but that seems unlikely, and painful. I don't know of any other way to do it-- lens distortion is a 3D effect, and you're given 2D information. At best you can attempt to correct it two-dimensionally, but this will be difficult, and won't work properly.
If it's possible, you should ask the user to take a photograph of a reference image (a chessboard, for example) using the same camera, and then use this information to analyze the lens characteristics. This information can then be used to un-warp other photographs taken by the same camera.
For implementation you could use neural networks/genetic algorithms.
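The standard realization of the chessboard idea (rather than neural networks or genetic algorithms) is camera calibration as implemented in OpenCV; a hedged sketch, with the file names and the 9x6 inner-corner board size as assumptions:

    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)  # inner corners of the printed chessboard
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts = [], []
    for path in glob.glob("calib_*.jpg"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # Recover the camera matrix and lens distortion coefficients...
    _, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)

    # ...then un-warp any other photo taken with the same camera.
    img = cv2.imread("photo.jpg")
    cv2.imwrite("undistorted.jpg", cv2.undistort(img, K, dist))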
