Image capturing continuously - initialization

I am a final-year student working on a project in which I take images from a camera mounted on a car and process them in MATLAB. I have to keep capturing images of differently coloured balls until my target colour (red) appears, at which point a microcontroller stops the car. How can I continuously capture images of the ball with only a millisecond delay between frames?

The For-A VFC-1000SB high-speed camera -- only about $12k :-)
However, on the "cheap" end there is the Exilim line (e.g. the EX-F1). One of its features is a movie mode at up to 1200 fps. Note that as the fps goes up, the resolution goes down. I know nothing more about this than the advertising. YMMV.
Now, even if the camera can capture frames at this speed, getting them to the host and processed "somewhat timely" is unlikely without additional specialty hardware (and I doubt it is doable at all with the Exilim).
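That said, the software loop for "keep grabbing frames until red appears, then signal the microcontroller" is straightforward at ordinary webcam rates. A minimal sketch (Python/OpenCV purely for illustration; the same loop structure applies in MATLAB with the Image Acquisition Toolbox, and the HSV thresholds and pixel-count trigger below are made-up values to tune):

import cv2

cap = cv2.VideoCapture(0)  # first attached camera
while True:
    ok, frame = cap.read()  # blocks until the next frame is ready
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0 in HSV, so combine two ranges
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    if cv2.countNonZero(mask) > 500:  # illustrative threshold
        # red ball detected: tell the microcontroller (e.g. over serial) to stop
        break
cap.release()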


Proximity Distortion in Depth Image

Description:
The goal of my current project is to determine the location of an "object" as just its 3D coordinates.
To achieve that I figured it'd be best to turn off the "Fill"-Mode of my Camera (ZED 2 from Stereolabs), because I want some hard edges in my depth-image.
The Problem:
The depth image is being distorted to a major degree due to proximity of other "objects".
The following image shows the depth image from the side; it is viewing some bars in front of a smooth wooden wall. The wall is mostly flat, so everything is fine there.
I blacked out the colour image and myself; do not worry about those parts.
When I put my hand or another object in front of the wooden wall, areas that are bigger than my actual hand get "pulled" towards the camera around the location of the hand or object. These areas seem to "stick" to other elevated parts in the proximity, as the whole area between the bars and my arm gets pulled.
Question(s):
Is this normal?
Is there an easy way to get rid of it?
What is the reason behind it?
My own assumption(s):
I feel like this is some sort of approximation of unknown areas.
Hopefully. I'm glad the camera comes calibrated by default, as that is usually a pain to do right.
Because the new object in front of the wall hides more of the scene, there are more areas that the camera cannot see with both lenses. Maybe it just "guesses" that the area in between is not so far off, due to some underlying algorithm that smooths the image.
First of all, I would advise you to change the depth mode while keeping the sensing mode in STANDARD:
ULTRA: offers the highest depth range and better preserves Z-accuracy along the sensing range.
QUALITY: has a strong filtering stage giving smooth surfaces.
PERFORMANCE: designed to be smooth, can miss some details.
From your description, it seems like you are using the PERFORMANCE mode.
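For reference, selecting these modes looks roughly like this in the ZED Python API (a sketch based on the SDK 3.x names; the enums have moved between SDK releases, so check your version):

import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.ULTRA  # instead of PERFORMANCE

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open the camera")

runtime_params = sl.RuntimeParameters()
runtime_params.sensing_mode = sl.SENSING_MODE.STANDARD  # no fill, hard edges

depth = sl.Mat()
if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)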
The ZED camera uses a matching algorithm to generate the disparity/depth map, which is closed source. I recently contacted Stereolabs about it and they said: "We cannot disclose this information to you because it's internal information and proprietary to Stereolabs."
Other works on the ZED camera have shown some limitations in depth sensing, especially when there is variation in lighting and shadows; see "Depth Data Error Modeling of the ZED 3D Vision Sensor from Stereolabs".
In addition to this, the depth error grows with the distance of the object from the camera (roughly quadratically for a stereo camera), so make sure to set your depth range appropriately.
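For intuition (this is standard two-view stereo geometry, not anything specific to Stereolabs' closed pipeline): with focal length $f$ in pixels, baseline $B$, and a disparity error of $\varepsilon_d$ pixels, the depth error at depth $Z$ is approximately

$$\varepsilon_Z \approx \frac{Z^2}{f\,B}\,\varepsilon_d,$$

so doubling the distance roughly quadruples the error.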

Inform7: Change a room's description depending on where the player came from

I'm pretty new to Inform and it seems like this shouldn't be too hard to do but I haven't yet found a way. I want to change a room's description based on where the player came from. Something along the lines of:
The Town Square is a room. "As you enter the small town square, [if yourself came from West]
the rising sun makes silhouettes of the roofs and spires to the East.[otherwise]your long
shadow strides before you as the Sun rises behind.[end if]"
What's the best way to go about this?
I found a reasonable solution:
The last location is a room that varies.
Orientation is a direction that varies.
Before going to anywhere, now the last location is the location of the player.
After going to anywhere:
now orientation is the best route from the last location to the location, using even locked doors;
continue the action.
The Town Square is a room. "As you enter the small town square, [if orientation is east]
the rising Sun makes silhouettes of the roofs and spires to the East.[otherwise if orientation is west]
your long shadow strides before you as the Sun rises behind.[end if]"
The "using even locked doors" modifier ensures this will work even if a door closes and locks behind the player. The solution does assume that the player has come via a reversible route, which may not always be the case e.g. if the player has teleported.

How do I generate a waypoint map in a 2D platformer without expensive jump simulations?

I'm working on a game (using Game Maker: Studio Professional v1.99.355) that needs to have both user-modifiable level geometry and AI pathfinding based on platformer physics. Because of this, I need a way to dynamically figure out which platforms can be reached from which other platforms in order to build a node graph I can feed to A*.
My current approach is, more or less, this:
1. For each platform, consider each other platform in the level.
2. If the other platform is obviously unreachable (because it is higher than the maximum jump height, for example), do not form a link and move on to the next platform.
3. If a link seems possible, place an ai_character instance on the starting platform and (within the current step event) simulate a jump attempt.
3.a Repeat this jump attempt for each possible starting position on the starting platform.
4. If the attempt is successful, record the data necessary to replicate it in real time and move on to the next platform.
5. If not, do not form a link.
6. Repeat for all platforms.
This approach works, more or less, and produces a link structure that when visualised looks like this:
[Image: linked platforms (a hyperlink in the original because of the rep requirement)]
In this example the mostly-concealed pink ghost in the lower right corner is trying to reach the black and white box. The light blue rectangles are just there to highlight where recognised platforms are, the actual platforms are the rows of grey boxes. Link lines are green at the origin and red at the destination.
The huge, glaring problem with this approach is that for a level of only 17 platforms (as shown above) it takes over a second to generate the node graph. The reason is obvious from the yellow text in the centre of the screen: building the graph took over 24,000(!) simulated frames, each with attendant collision checks against every block. I literally just run the character's step event in a while loop, so everything it would normally do to handle platformer movement in one frame it now does 24,000 times.
This is, clearly, unacceptable. If it scales this badly at a mere 17 platforms then it'll be a joke at the hundreds I need to support. Heck, at this geometric time cost it might take years.
In an effort to speed things up, I've focused on the other important debugging number, the tests counter: 239. If I simply tried every possible combination of starting and destination platforms, I would need to run 17 * 16 = 272 tests. By figuring out various ways to predict whether a jump is impossible I have managed to lower the number of expensive tests run by a whopping 33 (12%!). However the more exceptions and special cases I add to the code the more convinced I am that the actual problem is in the jump simulation code, which brings me at long last to my question:
How would you determine, with complete reliability, whether it is possible for a character to jump from one platform to another, preferably without needing to simulate the whole jump?
My specific platform physics:
Jumps are fixed height, unless you hit a ceiling.
Horizontal movement has no acceleration or inertia.
Horizontal air control is allowed.
Further info:
I found this video, which describes a similar problem but which doesn't provide a good solution. This is literally the only resource I've found.
You could limit the amount of comparisons by only comparing nearby platforms. I would probably only check the horizontal distance between platforms, and if it is wider than the longest jump possible, then don't bother checking for a link between those two. But you might have done this since you checked for the max height of a jump.
I glanced at the video and it gave me an idea. Instead of looking at all platforms to find which jumps are impossible, what if you did the opposite? Try placing an AI character on all platforms and see which other platforms they can reach. That's certainly easier to implement if your enemies can't change direction in midair though. Oh well, brainstorming is the key to finding something.
Several ideas you could try out:
Limit the amount of comparisons you need to make by using a spatial data structure, like a quad tree. This would allow you to severely limit how many platforms you're even trying to check. This is mostly the same as what you're currently doing, but a bit more generic.
Try to pre-compute some jump trajectories ahead of time. This will not catch every case you have - you allow full horizontal control - but it might let you handle some common cases more quickly.
Consider some kind of walkability grid instead of a link-generation scheme. When geometry is modified, compute which parts of the level are walkable and which are not, at some resolution (something close to the dimensions of your agent might be a good starting point). You could also filter by height, so that grid tiles that are higher than your jump height, and that you cannot drop onto from a higher place, are marked as unwalkable. Then, as part of your pathfinding step, you can check whether a jump is actually executable when you start one ("starting a jump, I can rise no more than 5 tiles, and after the peak of the jump I always fall vertically at some speed").
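Given the physics in the question (fixed jump height, no horizontal inertia or acceleration, full air control), pair reachability actually has a closed form, so no per-frame simulation is needed at all. A minimal sketch, assuming gravity and speeds in consistent units and ignoring obstacles in between (those still need a separate, cheap sweep test against level geometry):

import math

def jump_reachable(dx, dy, jump_height, gravity, max_hspeed):
    # dx: horizontal distance to the landing point
    # dy: landing height relative to take-off (positive = landing higher)
    if dy > jump_height:
        return False  # the jump cannot rise that high
    # Time to rise to the apex, then fall down to the landing height.
    t_up = math.sqrt(2.0 * jump_height / gravity)
    t_down = math.sqrt(2.0 * (jump_height - dy) / gravity)
    # With full air control and no inertia, horizontal range is just
    # maximum horizontal speed times total airtime.
    return abs(dx) <= max_hspeed * (t_up + t_down)

Checking this once per pair of platform edges replaces thousands of simulated frames with a couple of square roots; the expensive simulation is then only needed for pairs that pass this filter and have geometry in between.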

Time synchronization of views generated by different instances of the game engine

I'm using the open-source Torque 3D game engine for an aviation simulator project.
I need to generate a single image from several IG (image generator) PCs. Each IG display has its own view camera with a certain angle offset and gets the current position from the server via LAN.
I've already setup multi IG system.
The network connection is robust (latency under 1 ms).
Frame rate is good as well - about 70 FPS on each IG.
However, while moving, the whole picture looks broken because some IGs update their views faster than others.
I'm looking for a solution that will make the IGs update simultaneously - maybe some kind of precise time synchronization algorithm that makes different PCs connected via LAN act as one.
I had a much simpler problem, but my approach might help you.
You've got to run clocks on all your machines with, say, a 15 millisecond tick. Each image needs to be generated correctly for a specific tick and marked with its tick ID time. The display machine can check its own clock, determine the specific tick number (time) for which it should display, grab the images for that specific time, and display them.
(To have the right mindset to think about this, imagine your network is really bad and think about one IG delivering 1000 images ahead of the current display tick while another is 5 ticks behind. Write for this sort of system and the results will look really good on the one you have.)
Ideally you want your display running a bit behind the IGs so you always have a full set of images for the current tick. I had a client-server setup and slowed the display (client) timer down if it came close to missing updates and sped it up if it was getting too far behind. You have to synchronize all your IG machines, so it might be better to have the master clock on the display and have it send messages to speed up any IG machine that's getting behind. (You may not have the variable network delays I had, but it's best to plan for them.)
The key is that each image must be made for a particular time, that the display includes only images for the time being displayed, and that the composite images appear right when they should (every 15 milliseconds, on the millisecond). Also, do not depend on your network or even your machines to do anything in a timely manner. Use feedback to keep everything synched.
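To make that concrete, here is a minimal sketch of the display-side bookkeeping (Python; the 15 ms tick and the assumption that the machines' clocks are already synchronized, e.g. via NTP, carry over from above):

from collections import defaultdict

TICK_MS = 15  # tick length from above

def tick_id(now_ms, epoch_ms=0):
    # Map a wall-clock time (ms) to a tick number shared by all machines.
    return (now_ms - epoch_ms) // TICK_MS

# Bucket incoming images by the tick ID stamped on them; only composite
# a tick once every IG has delivered its image for that tick.
frames_by_tick = defaultdict(dict)

def on_image_received(tick, ig_id, image):
    frames_by_tick[tick][ig_id] = image

def frames_ready(display_tick, num_igs):
    return len(frames_by_tick.get(display_tick, {})) == num_igs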
Addition On Feedback:
Say the last image for the frame at time T arrives 5ms after time T by the display computer's time (real time). If you display the frame for time T at T plus 10 ms, no one will notice the lag and you'll have plenty of time to assemble the images. Using a constant (10 ms) delay might work for you, especially if you make it big enough. It may be the way to go if you always run with the exact same network.
But you are depending on all your IG machines being precisely synchronized for real time, taking no more than a certain amount of time to produce their image, and delivering their image to the display machine all in predictable lengths of time.
What I'd suggest is to have your display machine determine the delay based on the time stamps on the images it receives. It would want to increase the delay if it isn't getting the images it needs in time, and decrease it if all the IGs are running several images ahead of what the display needs. (You might want to ignore the occasional really late image. You have to decide which is more annoying: images that are out of date, a display that runs noticeably behind time, or a display that noticeably speeds up and slows down.)
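The delay adjustment itself can be a very small feedback loop; a sketch (all constants are placeholders to tune):

class DisplayDelay:
    def __init__(self, initial_ms=10.0):
        self.delay_ms = initial_ms

    def update(self, lateness_ms):
        # lateness_ms: how late (positive) or early (negative) the last
        # complete set of images arrived relative to its display deadline.
        if lateness_ms > 0:
            # Nearly (or actually) missed a tick: back off quickly.
            self.delay_ms += lateness_ms + 2.0
        else:
            # Comfortably early: creep the delay back down slowly.
            self.delay_ms = max(0.0, self.delay_ms - 0.1)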
In my original answer I was suggesting some kind of feedback from the display to keep the IG machines running on time, but that may be overkill: your computer's clocks are probably good enough for that.
Very generally, when any two processes have to coordinate over time, it's best if they talk to each other to stay in step (feedback) rather than each stick to a carefully timed schedule.

How to fade out volume naturally?

I have experimented with a sigmoid and logarithmic fade out for volume over a period of about half a second to cushion pause and stop and prevent popping noises in my music applications.
However neither of these sound "natural". And by this I mean, they sound botched. Like an amateur engineer was in charge of the sound decks.
I know the ear is logarithmic when it comes to volumes, or at least, twice as much power does not mean twice as loud. Is there a magic formula for volume fading? Thanks.
I spent many of my younger years mixing music recordings, live concerts and being a DJ for my school's radio station and the one thing I can tell you is that where you fade is also important.
Fading in on an intro or out during the end of a song sounds pretty natural as long as there are no vocals, but some of these computerized radio stations will fade ANYWHERE in a song to make the next commercial break ... I don't think there's a way to make that sound good.
In any case, I'll also answer the question you asked ... the logarithmic attenuation used for adjusting audio levels is generally referred to as "audio taper". Here's an excellent article that describes the physiology of human hearing in relation to the electronics we now use for our entertainment. See: http://tangentsoft.net/audio/atten.html.
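In code, an audio-taper fade just means ramping the gain linearly in decibels (perceptually even steps) rather than linearly in amplitude. A minimal sketch (Python/NumPy; the -60 dB floor is an illustrative choice):

import numpy as np

def db_taper_fade(num_samples, floor_db=-60.0):
    db = np.linspace(0.0, floor_db, num_samples)  # 0 dB down to the floor
    gain = 10.0 ** (db / 20.0)                    # dB to linear amplitude
    gain[-1] = 0.0                                # end exactly at silence
    return gain

# usage: samples[-n:] *= db_taper_fade(n)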
You'll want to make sure that the end of the fade out is at a "zero crossing" in the waveform.
Half a second is pretty fast. You might just want to extend the amount of time, unless it must be that fast. Generally 2 or 3 seconds is more natural.
More on timing: the fade should really follow the beat of the music and end at a natural point in the rhythm. Try getting the BPM of the song (this can be estimated roughly) and fading out over an interval equal to a whole or half note at that tempo.
You might also try slowing down the playback speed while you're fading out. This will give a more natural vinyl record or magnetic tape sounding stop/pause. Linearly reduce playback speed while logarithmically reducing volume over the period of 1 second.
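That tape/vinyl stop can be approximated by resampling at a linearly falling speed while the gain drops linearly in dB. A rough sketch (nearest-neighbour resampling for brevity; mono NumPy buffer, and all numbers are illustrative):

import numpy as np

def tape_stop(samples, sample_rate=44100, duration_s=1.0):
    n = int(sample_rate * duration_s)
    speed = np.linspace(1.0, 0.0, n)       # playback speed ramps 1 -> 0
    pos = np.clip(np.cumsum(speed), 0, len(samples) - 1)
    slowed = samples[pos.astype(int)].astype(np.float64)
    gain = 10.0 ** (np.linspace(0.0, -60.0, n) / 20.0)  # log volume fade
    return slowed * gain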
If you're just looking to get a clean sound when pausing or stopping playback then there's no need to fade at all - just find a zero-crossing point and stop there (or, more realistically, just fill the rest of that final buffer with silence). Fading out when the user expects the sound to stop immediately will sound unnatural, as you've noticed, because the result is decoupled from the action.
The reason for stopping at a zero-crossing point is that zero is the steady state value while the audio is stopped, so the transition between the two states is seamless. If you stop playback when the last sample's amplitude is large then you are effectively introducing transients into the audio from the point of view of the audio hardware when it reconstructs the analogue signal, which will be audible as pops and/or clicks.
Another approach is to fade to zero very fast (under ~10 ms), which effectively achieves the same thing as the zero-crossing technique.
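Both techniques are only a few lines; a sketch for a mono NumPy buffer (the 5 ms fade default is an illustrative value):

import numpy as np

def next_zero_crossing(samples, start):
    # Index of the first sign change at or after `start`.
    signs = np.sign(samples[start:])
    changes = np.nonzero(np.diff(signs) != 0)[0]
    return start + int(changes[0]) + 1 if changes.size else len(samples)

def stop_cleanly(samples, stop_index, fade_ms=5.0, sample_rate=44100):
    # The fast-fade alternative: a short linear ramp to zero, then silence.
    n = int(sample_rate * fade_ms / 1000.0)
    end = min(stop_index + n, len(samples))
    out = samples.astype(np.float64).copy()
    out[stop_index:end] *= np.linspace(1.0, 0.0, end - stop_index)
    out[end:] = 0.0
    return out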
