Which technology is behind Intel's RealSense depth sensor?
Is it a structured light or ToF approach?
Where can I find specs?
You might be able to find some more information on the RealSense website.
As far as specs go, this is all I could find:
Full VGA depth resolution
1080p RGB camera
0.2 – 1.2 meter range (Specific algorithms may have different range and accuracy)
USB 3.0 interface
The IR laser projector on the Intel RealSense F200 camera sends non-visible patterns (coded light) onto the object. The reflected patterns are captured by the IR camera and processed by the ASIC, which assigns a depth value to each pixel to create a depth video frame.
Applications see 2 (depth and color) video streams.
The ASIC syncs the depth stream with the color stream (texture mapping) using the UVC time stamp and generates data flags for each depth value (valid, invalid, or motion detected).
At least for the front-facing camera, it seems to be structured light.
My goal is to write a VST that generates 4 bytes of data based on 3 parameters. I've made an algorithm that basically turns music into color using the ZGameEditorVisualizer VST and 3 parameters (low, mid, high control the hue, lightness and saturation). My problem is that this can only produce video (that's what the VST is intended for). However, I want to use it live: the colors should be projected onto my RGB LED strip, which will be connected to the Arduino.

For that I will need to write a VST that can generate RGB data from hue, saturation and lightness. It generates 4 bytes (red, green, blue, alpha) and needs to send them over to the Arduino. The Arduino then makes them shine beautifully.

What I need help with is where to get started. Is it possible to connect a VST to serial communication? How can I use these 3 parameters to make RGB? Is there a library for that in C++? Any help will be greatly appreciated.
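For the colour-conversion part of the question, here is a minimal HSL-to-RGB sketch in plain C++ (standalone, not tied to any particular VST SDK; the function name and the assumption that each parameter arrives in the 0.0–1.0 range are mine):

```cpp
#include <cstdint>
#include <cmath>

// Hypothetical helper: converts hue/saturation/lightness (each 0.0-1.0,
// as a VST parameter would typically provide) to 8-bit R, G, B values.
void hslToRgb(float h, float s, float l,
              uint8_t &r, uint8_t &g, uint8_t &b)
{
    float hue = h * 360.0f;                                   // map 0-1 to degrees
    float c   = (1.0f - std::fabs(2.0f * l - 1.0f)) * s;      // chroma
    float x   = c * (1.0f - std::fabs(std::fmod(hue / 60.0f, 2.0f) - 1.0f));
    float m   = l - c / 2.0f;

    float rf = 0, gf = 0, bf = 0;
    if      (hue <  60) { rf = c; gf = x; }
    else if (hue < 120) { rf = x; gf = c; }
    else if (hue < 180) { gf = c; bf = x; }
    else if (hue < 240) { gf = x; bf = c; }
    else if (hue < 300) { rf = x; bf = c; }
    else                { rf = c; bf = x; }

    r = static_cast<uint8_t>(std::round((rf + m) * 255.0f));
    g = static_cast<uint8_t>(std::round((gf + m) * 255.0f));
    b = static_cast<uint8_t>(std::round((bf + m) * 255.0f));
}
```

The resulting three bytes (plus whatever you choose for the fourth byte) can then be written to a serial port; on the plugin side that usually means opening the port through the OS serial API or a cross-platform serial library, since serial I/O is not part of the VST specification itself.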
First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle") or what color it has; only the shape and the placement of the object are of interest (e.g. "circle").
So far I'm using a webcam connected to my computer and Processing's blob functionality to detect the objects on the surface of the interface (see picture 1). This approach has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any other device, because it distracts the user's attention. In fact, the surface should be completely dark.
Whenever I reach in with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) that are not touching the canvas directly. This problem can hardly be tackled with a Kinect, because its depth sensing does not work through glass/acrylic glass – correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera’s focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface looks dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV and YCrCb colour spaces. These are much better for segmenting the required area when doing colour-based detection.
I recommend checking out https://github.com/atduskgreg/opencv-processing. It interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality inside Processing.
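For illustration, here is a rough sketch of that colour-based segmentation in plain OpenCV C++ (not the opencv-processing wrapper; the HSV threshold and area limits are placeholder values you would tune for your own lighting and object sizes):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);                 // webcam looking at the surface
    cv::Mat frame, hsv, mask;

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

        // Placeholder threshold: keep only pixels in a chosen hue range.
        cv::inRange(hsv, cv::Scalar(100, 80, 80), cv::Scalar(130, 255, 255), mask);

        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto &c : contours) {
            double area = cv::contourArea(c);
            if (area < 200 || area > 5000)   // reject blobs that are too small or too large
                continue;
            cv::Rect box = cv::boundingRect(c);
            cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        }

        cv::imshow("objects", frame);
        if (cv::waitKey(1) == 27) break;     // ESC to quit
    }
    return 0;
}
```

The area filter in the contour loop also helps with the hand problem from the question: blobs much larger (or smaller) than the expected objects can simply be ignored.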
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera set up (with the right field of view/lens/etc. based on your physical constraints), you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers, and you'll not only differentiate the objects by ID but also be able to get position/rotation/etc., and your hands will not be part of that.
I am working on a drone project and am currently choosing a board to use. Is it possible to use an Arduino Nano for all of the following needs:
Gyroscope and Accelerometer
Barometer (as an altimeter)
Digital magnetometer
WiFi (to send telemetry for processing)
GPS module
4 motors (of course)
P.S:
I know nothing about Arduino. However I have a good ASM, C/C++, programming background and I used to design analog circuits.
I would like to avoid using ready-made flight controllers.
Pin count should not be too much of an issue if using I²C sensors; they would simply all share the same two pins (SCL, SDA).
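To illustrate (this is a generic sketch, not tied to any specific sensor; the addresses and register numbers are placeholders), several devices on the shared bus are simply addressed in turn with the Wire library:

```cpp
#include <Wire.h>

// Placeholder 7-bit addresses; a real IMU/barometer/magnetometer each has its own.
const uint8_t IMU_ADDR  = 0x68;
const uint8_t BARO_ADDR = 0x77;

// Generic helper: read one register from a device on the shared SDA/SCL pair.
uint8_t readRegister(uint8_t deviceAddr, uint8_t reg)
{
    Wire.beginTransmission(deviceAddr);
    Wire.write(reg);
    Wire.endTransmission(false);              // repeated start, keep the bus
    Wire.requestFrom(deviceAddr, (uint8_t)1);
    return Wire.available() ? Wire.read() : 0;
}

void setup()
{
    Serial.begin(115200);
    Wire.begin();                             // A4 = SDA, A5 = SCL on a Nano
}

void loop()
{
    uint8_t imuId      = readRegister(IMU_ADDR, 0x75);   // placeholder register
    uint8_t baroStatus = readRegister(BARO_ADDR, 0xF3);  // placeholder register
    Serial.print(imuId); Serial.print(' '); Serial.println(baroStatus);
    delay(100);
}
```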
I agree that the RAM could be a limitation; the processing power (roughly 16 MIPS for an Arduino Uno) should be sufficient.
On an Arduino Mega, the APM project ran for years with great success.
I believe it's possible to do a very simplified drone flight controller with an Arduino Nano and several I²C sensors + GPS.
But even with a more advanced microcontroller it's not a trivial task.
If you still want to try the experiment, have a look at the OpenLRS project: https://code.google.com/p/openlrs/. It's quite old (there are several derived projects too), but it runs on hardware similar to an Arduino Uno (ATmega328). It provides RC control and a quad flight controller with I²C gyroscopes, accelerometers (based on the Wii remote) and a barometer.
It also parses data from the GPS; AFAIK it doesn't provide autonomous navigation, but it should be possible to add that without too much additional work.
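To give a feel for what parsing the GPS "on the fly" means on such a small chip, here is a heavily simplified sketch (my own illustration, not OpenLRS code) that pulls the altitude field out of a $GPGGA sentence without buffering the whole message. It assumes the GPS is configured to emit only GGA sentences and it skips checksum verification:

```cpp
#include <stdlib.h>

char    field[16];        // current comma-separated field being assembled
uint8_t fieldLen   = 0;
uint8_t fieldIndex = 0;   // which field of the sentence we are in
float   lastAltitude = 0; // field 9 of $GPGGA is altitude in metres

void feedGpsChar(char c)
{
    if (c == '$') {                          // start of a new sentence
        fieldIndex = 0;
        fieldLen = 0;
    } else if (c == ',' || c == '*' || c == '\n') {
        field[fieldLen] = '\0';
        if (fieldIndex == 9)                 // altitude field (GGA assumed)
            lastAltitude = atof(field);
        fieldIndex++;
        fieldLen = 0;
    } else if (fieldLen < sizeof(field) - 1) {
        field[fieldLen++] = c;
    }
}

void setup()
{
    Serial.begin(9600);                      // GPS module on the hardware UART
}

void loop()
{
    while (Serial.available())
        feedGpsChar(Serial.read());
    // ... flight control code would use lastAltitude here ...
}
```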
Edit: about the available RAM.
I understand that at first sight 2 KB of RAM seems a very small amount. And part of it is already used by Arduino; for example the serial library provides two 64-byte FIFOs, using some RAM. The same goes for the Wire (I²C) library, although a smaller amount. It also uses some RAM for the stack and temporary variables, even for simple tasks such as float operations. Let's say in total it will use 500 bytes.
But then, what amount of RAM is really required?
- It will have a few PID regulators; let's say each one uses 10 float parameters to store the PID parameters, current value, etc. That gives 40 bytes per regulator, and let's say we need 10 regulators. We should need fewer, but let's take that example. So that's 400 bytes (see the sketch after this answer).
- Then it will need to parse GPS messages. A GPS message is at most 80 bytes. Let's allow a buffer of 80 bytes for GPS parsing, even though it would be possible to do most of the parsing "on the fly" without storing it in a buffer.
- Let's keep some room for the GPS and sensor data: 300 bytes, which seems generous, as we don't need to store them all as floats. That covers the current GPS coordinates, altitude, number of satellites, pitch, roll, etc.
- Then some space for application data, such as home GPS coordinates, current mode, stick positions, servo values, etc.
The rest is mostly calculations: going from the current GPS coordinates and the target coordinates to a target altitude, heading, etc., and then feeding the calculated pitch and roll into the PIDs. But this doesn't require additional RAM.
So I would say it's possible to do a very simple flight controller using 1280 bytes. And if I was too low or forgot some aspects, there's still more than 700 bytes available.
I'm certainly not saying it's easy to do; every aspect will have to be optimized, but it doesn't look impossible.
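To make that budget concrete, here is a rough sketch of how those allocations could be declared statically on an ATmega328 (the struct layout and field names are my own illustration, not taken from any existing flight controller):

```cpp
#include <stdint.h>

// One PID regulator: 10 floats = 40 bytes, matching the estimate above.
struct Pid {
    float kp, ki, kd;
    float setpoint, input, output;
    float integral, lastError;
    float outMin, outMax;
};

Pid  regulators[10];     // 10 x 40 = 400 bytes
char gpsBuffer[80];      // room for one full NMEA sentence

struct {                 // sensor/navigation state, well within the 300-byte allowance
    float latitude, longitude, altitude;
    float pitch, roll, yaw;
    uint8_t satellites;
} state;

void setup()
{
    Serial.begin(115200);
    // Static data used by these buffers; the Arduino core's own buffers
    // (serial FIFOs, Wire, stack) come on top of this, per the 500-byte estimate.
    Serial.println(sizeof(regulators) + sizeof(gpsBuffer) + sizeof(state));
}

void loop() {}
```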
It would be a trick to make all of that work on a Nano. I would suggest you look at http://ardupilot.com/; they have built a lot of cool things around the ARM chip (same as an Arduino), and there are some pretty active communities on there as well.
Even if you didn't run out of pins (and you probably would), by the time you wrote the code for the motors and the GPS, you would run out of RAM.
And that's not even getting into the CPU speed, which is nowhere near enough. As mentioned in the other answer, you'll be better off with a Cortex M-x CPU.
Arguably, you could use a few Nanos, one per task, but chaining them together would be a nice mess...
I would like to know if there is a datasheet for the Pixy camera module; the official wiki pages are not worth much. For starters, I am interested in getting an image from this camera module; on the wiki pages there is only a hello-world program that detects objects. How do I get image data? (Arduino) (I would like to transfer this image data via UART to a computer; I know about PixyMon.)
I would also like to know whether there is a port for the STM32F4 Discovery.
I think it's not possible. I saw this in the Pixy forums; hope it's enlightening:
"is it possible to use an Arduino or any other device in order to decode/transform USB video into analog? What is the video format that is fed into USB? Is it possible to cut into Pixy hardware before the signal is encoded into USB and extract the signal? Thank you.
Hello ####,
I think analog video is a pretty high bandwidth signal with strict timing requirements. I doubt this is possible without adding significant complexity.
Edward (developer)"
I want to use a camera installed in my computer in a Flex AIR application I'm writing, and I have a few questions regarding the quality options:
Do I have any limitation on the video image quality? If my camera supports HD recording, will I be able to record in HD format via the AIR application?
How can I export the recorded video in any format I want?
If I want to use the same camera for shooting stills, how can I ensure (within the code) the best quality for the resulting pictures?
Thanks for your answers.
1) AIR can play 1080p no problem; however, it cannot 'record' it from within the application. There are some workarounds, but you won't have sound. This is something that the OS should handle.
2) You can't, see above.
3) Shooting stills with a camera for picture quality is done on the hardware side, not in software. In software, it would essentially be equivalent to taking one frame out of the video.