Halcon - how to handle different colored objects - brightness

I have objects with 3 different colors:
black, dark green, and bright blue.
When I take pictures of them, I do not yet know the object's real color, so the exposure can only go as high as the bright blue object allows before it blows out to white. But if I expose for the bright blue, the dark green and the black objects are still very dark. Almost too dark.
Is there anything that can be done besides brightening up the images afterward?
The camera is a Genie Nano.

If there is enough time, you could take multiple images with different exposure times. Halcon even ships a ready-made example for merging light and dark images, "create_high_dynamic_range_image.hdev".
If your application is time-critical and the camera manufacturer provides events for the language you are working in, you can do the following:
Put the camera into asynchronous acquisition (this reduces acquisition time)
Start acquisition with the lowest desired exposure time
On each image event, increase the exposure time by some step, N times
Since the camera is in asynchronous acquisition, each new exposure time takes effect after M frames (usually 5)
The first M images should be ignored; after that you will have N - M images with gradually increasing exposure time
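The bracketing steps above can be sketched outside Halcon as well; the following Python sketch merges the bracketed frames into one radiance map. The function name and the hat-shaped weighting are my own illustration, not Halcon's implementation:

```python
import numpy as np

def merge_exposures(frames, exposures):
    """Merge differently exposed frames into one radiance map.

    frames    -- list of float arrays in [0, 1], all the same shape
    exposures -- matching exposure times in seconds
    Pixels near 0 or 1 are weighted down (hat weight), so blown-out
    highlights come from the short exposure and shadows from the long one.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, 0 at the extremes
        num += w * (img / t)               # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-9)
```

With two well-behaved exposures of the same scene, the merged result recovers the same radiance from both, which is the point of the bracketing loop.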


Light sensors and Philips Hue

I am designing a system where a light sensor detects the light in a room and, based on that, adjusts the output of a few light bulbs. The goal is to maintain a standard level of light in the room under changing environmental conditions such as external sunlight. The code is written in Python and uses Philips Hue bulbs. The pseudocode is as follows:
if (read light is between 10 and 50 lumens) {
set bulb to 100 lumens
}
if (read light is between 51 and 100 lumens) {
set bulb to 50 lumens
}
etc.
However, what ends up happening is that after each iteration, once the light is set to the chosen value, the sensor detects the bulbs' own light and on the next iteration turns the light back down. The lights end up flickering from high to low every second. Does anyone have experience with this sort of thing, or an algorithm to deal with it? Essentially, the problem is that the light sensor detects the bulbs' own light and then undoes its previous decision. I am using a standard TSL2561 sensor to detect the light, and the bulbs are Philips Hue.
The placement of the sensor is key in these situations. You can also try an optical filter, but that is not the full solution.
Your algorithm is too crude to compensate for a dynamic environment. The real solution is to use a PID algorithm that makes small adjustments over time to the light output, staying close to an ideal total (ambient + LED) light level.
See this example; there are many similar ones out there if you search for "PID controller light sensor".
A simplified pseudo-code representation of a PID-like control system would be:
read in_lumens
if (in_lumens is between 10 and 50 lumens) {
increment out_lumens
}
if (in_lumens is between 51 and 100 lumens) {
decrement out_lumens
}
set bulb to out_lumens
Loop and repeat. The loop interval and/or the increment size should vary with the distance from the ideal level.
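A minimal Python sketch of the proportional part of such a controller; the target level, gain and output limits here are illustrative assumptions, not tuned values:

```python
def adjust_light(sensed_lumens, out_lumens, target=75.0, kp=0.1,
                 out_min=0.0, out_max=100.0):
    """One proportional-control step (the P of PID).

    sensed_lumens -- current sensor reading (ambient + the bulbs' own light)
    out_lumens    -- current bulb output
    Moves the output a small step toward the target total light level
    instead of jumping between fixed setpoints, which is what causes the
    flickering described in the question.
    """
    error = target - sensed_lumens        # positive: room is too dark
    out_lumens += kp * error              # small correction per loop
    return min(out_max, max(out_min, out_lumens))
```

Because each step is small and proportional to the error, the loop settles at an output where ambient plus bulb light sits at the target, rather than oscillating between two setpoints.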

Point cloud color blending after registration

I have successfully registered two point clouds of the same scene obtained from different camera positions. The color values differ due to the change in lighting conditions between the two positions. I would like to know how to perform a smart color blending between the two aligned point clouds in order to obtain a uniform color across the global model. Any ideas?
I enclose a capture where you can see how the color is darker in the cloud on the right.
I was trying to adapt image-blending approaches to 3D point clouds, but it is not straightforward at all, so I applied an easier solution that solves my problem for the moment.
Since the texture changes are mainly caused by changes in scene lighting due to the different camera positions, in theory just an exposure compensation between the two clouds should give good results. I fixed my problem by extending a standard 2D exposure-compensation approach to the 3D scenario. Concretely, a gain compensation alone (point 6 of the paper) is enough if the lighting difference is small enough.
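Assuming correspondences between the overlapping regions are available after registration, the scalar gain can be estimated in closed form. A minimal numpy sketch (function and parameter names are mine, and this is the single-gain special case, not the full panorama formulation):

```python
import numpy as np

def gain_compensate(colors_a, colors_b, idx_a, idx_b):
    """Scalar gain compensation between two registered point clouds.

    colors_a, colors_b -- (N, 3) RGB arrays in [0, 1]
    idx_a, idx_b       -- indices of corresponding points in the overlap
                          (e.g. nearest neighbours after registration)
    Returns colors_b scaled by the least-squares gain g minimizing
    ||colors_a[idx_a] - g * colors_b[idx_b]||^2, so the overlap matches
    cloud A on average.
    """
    a = colors_a[idx_a]
    b = colors_b[idx_b]
    gain = (a * b).sum() / max((b * b).sum(), 1e-12)
    return np.clip(colors_b * gain, 0.0, 1.0)
```

If the darker cloud is uniformly about half as bright in the overlap, the estimated gain comes out near 2 and the compensated colors line up with the reference cloud.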

Is there a way to make a flashlight that cannot go through solid objects in GameMaker

I am making a 2D (top-down) horror game in GameMaker. Each player has a flashlight which drains over time. The flashlight uses surfaces to draw the light, and the cone gets smaller over time. I would like the flashlight to act like a real flashlight instead of shining through walls. Is there any way to do this? Picture of what I want it to look like
How are you currently drawing your flashlight?
I would recommend not drawing a flashlight sprite and instead filling a surface with black (to act as darkness) and cutting your lights out of that.
Then you can use the collision_line function to sweep in an arc from your player and get either the point where the line hits an object or whether it extends past your flashlight's range. Then store all those vertices and draw a primitive with blending to act as the flashlight.
Hope that makes sense. Otherwise, I swear I've seen some posts on the GameMaker forums about this. Good luck!
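The sweep described above can be prototyped in plain Python; this sketch ray-marches instead of calling collision_line, so the wall test and the step size are stand-ins for GameMaker's exact collision check:

```python
import math

def flashlight_polygon(px, py, facing, fov, max_range, is_wall, step=1.0):
    """Vertices of a wall-clipped light cone.

    px, py  -- player position
    facing  -- aim angle in radians; fov -- cone width in radians
    is_wall -- callable (x, y) -> True if that point is inside a wall
               (stand-in for GameMaker's collision check)
    Returns the player position plus one point per ray: either the first
    wall hit or the cone's edge. Drawn as a triangle fan on the darkness
    surface with subtractive blending, this gives a cone that stops at walls.
    """
    points = [(px, py)]
    rays = 32
    for i in range(rays + 1):
        ang = facing - fov / 2 + fov * i / rays
        dx, dy = math.cos(ang), math.sin(ang)
        d = 0.0
        # crude ray march; collision_line would give the exact hit point
        while d < max_range and not is_wall(px + dx * d, py + dy * d):
            d += step
        points.append((px + dx * d, py + dy * d))
    return points
```

With a wall at x >= 10 straight ahead, the centre ray stops at roughly x = 10 instead of continuing to the cone's full range, which is exactly the "light does not pass through walls" effect.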

Image capturing continuously

I am a final-year student working on a project in which I take images from a camera mounted on a car and process them in Matlab. I have to keep taking images of differently colored balls until my desired image (a red ball) appears, at which point the car is stopped through a microcontroller. How can I continuously capture images of the ball with millisecond-level delay?
The For-A VFC-1000SB High Speed -- only about $12k :-)
However, on the "cheap" end there is the Exilim line (e.g. the EX-F1). One of its features is a movie mode of up to 1200 fps. Note that as the fps goes up, the resolution goes down. I know nothing more about this beyond the advertising. YMMV.
Now, even if the camera can capture frames at this speed, getting them to the host and processed "somewhat timely" is unlikely without additional specialty hardware (and I doubt it is doable at all with the Exilim).
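Whatever the camera, the per-frame check itself is cheap. A sketch of a naive red-ball test on one frame (shown in Python rather than Matlab; the channel thresholds and the 2% area fraction are untuned guesses):

```python
import numpy as np

def is_red_ball(frame_rgb, frac=0.02):
    """Rough check for a red ball in one frame.

    frame_rgb -- (H, W, 3) uint8 image. Counts pixels whose red channel
    clearly dominates green and blue; if enough of the frame is red,
    signal the microcontroller to stop the car.
    """
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    red = (r > 120) & (r > g + 50) & (r > b + 50)
    return red.mean() > frac
```

Running this in the acquisition loop keeps the per-frame cost to a few vectorized operations, so the frame rate, not the processing, is the bottleneck.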

How to fade out volume naturally?

I have experimented with sigmoid and logarithmic fade-outs for volume over a period of about half a second, to cushion pause and stop and to prevent popping noises in my music applications.
However, neither of these sounds "natural". And by that I mean they sound botched, as if an amateur engineer were in charge of the sound desk.
I know the ear is logarithmic when it comes to volume, or at least that twice as much power does not mean twice as loud. Is there a magic formula for volume fading? Thanks.
I spent many of my younger years mixing music recordings, live concerts and being a DJ for my school's radio station and the one thing I can tell you is that where you fade is also important.
Fading in on an intro or out during the end of a song sounds pretty natural as long as there are no vocals, but some of these computerized radio stations will fade ANYWHERE in a song to make the next commercial break ... I don't think there's a way to make that sound good.
In any case, I'll also answer the question you asked ... the logarithmic attenuation used for adjusting audio levels is generally referred to as "audio taper". Here's an excellent article that describes the physiology of human hearing in relation to the electronics we now use for our entertainment. See: http://tangentsoft.net/audio/atten.html.
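An "audio taper" fade is simply a gain ramp that is linear in decibels rather than in amplitude; a small numpy sketch (the -60 dB floor is a common but arbitrary choice):

```python
import numpy as np

def db_linear_fade(n, floor_db=-60.0):
    """Fade-out gain curve that is linear in decibels ("audio taper").

    Equal dB steps sound like roughly equal loudness steps, so this
    tends to sound smoother than a linear amplitude ramp. floor_db is
    where the curve is cut to silence.
    """
    db = np.linspace(0.0, floor_db, n)  # equal steps in dB
    gain = 10.0 ** (db / 20.0)          # convert dB to amplitude
    gain[-1] = 0.0                      # land exactly on silence
    return gain
```

Multiplying the tail of the signal by this curve gives the perceptually even fade; a linear amplitude ramp, by contrast, spends most of its audible time near full volume and then seems to drop away abruptly.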
You'll want to make sure that the end of the fade out is at a "zero crossing" in the waveform.
Half a second is pretty fast. You might just want to extend the amount of time, unless it must be that fast. Generally 2 or 3 seconds is more natural.
More on timing, it should really be with the beat rate of the music, and end at a natural point in the rhythm. Try getting the BPM of the song (this can be calculated roughly), and fading out over an interval equal to a whole or half note in that timing.
You might also try slowing down the playback speed while you're fading out. This will give a more natural vinyl record or magnetic tape sounding stop/pause. Linearly reduce playback speed while logarithmically reducing volume over the period of 1 second.
If you're just looking to get a clean sound when pausing or stopping playback then there's no need to fade at all - just find a zero-crossing point and stop there (or more realistically just fill the rest of that final buffer with silence). Fading out when the user expects the sound to stop immediately will sound unnatural, as you've noticed, because the result is decoupled from the action.
The reason for stopping at a zero-crossing point is that zero is the steady state value while the audio is stopped, so the transition between the two states is seamless. If you stop playback when the last sample's amplitude is large then you are effectively introducing transients into the audio from the point of view of the audio hardware when it reconstructs the analogue signal, which will be audible as pops and/or clicks.
Another approach is to fade to zero very quickly (in roughly 10 ms or less), which effectively achieves the same thing as the zero-crossing technique.
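Both ideas, stopping at a zero crossing and applying a very short fade, can be combined. A numpy sketch under the assumption of a mono float signal (function and parameter names are mine):

```python
import numpy as np

def stop_cleanly(samples, pos, fade_ms=10, rate=44100):
    """Cut playback at `pos` without a pop.

    Finds the next zero crossing at or after `pos`, applies a short
    linear fade ending there, and silences the rest. Zero is the steady
    state while stopped, so ending there avoids the transient that
    causes audible clicks.
    """
    out = samples.astype(np.float64).copy()
    # next zero crossing after pos: a sign change between neighbours
    sign = np.signbit(out[pos:])
    crossings = np.nonzero(sign[1:] != sign[:-1])[0]
    end = pos + (int(crossings[0]) + 1 if crossings.size else 0)
    n = min(int(rate * fade_ms / 1000), end)
    out[end - n:end] *= np.linspace(1.0, 0.0, n)  # short linear fade
    out[end:] = 0.0                               # steady state: silence
    return out
```

Everything before the short fade window is untouched, and everything after the chosen zero crossing is exactly zero, so the reconstructed analogue signal has no step discontinuity.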
