How many gestures can be recognized?
What is the specific range of gesture recognition?
What are the advantages compared to the SR300?
The D400 series works only with librealsense 2.0 (LibRS 2.0), and LibRS 2.0 does not include gesture-recognition middleware.
Is there an off-the-shelf beacon I can just buy that also has support for triggering Vibration / Haptic Motor over BLE? Amazon / whatever links are welcome.
Context: I'd like to trigger vibration from iOS in a small independent device. BLE seems ideal as I need to support more than one and all devices should be within range. I'm trying to rapidly prototype something instead of cranking out my own HW.
For prototyping you may be able to use a few sub-$20 Xiaomi Mi Band devices (or the Band 2, which should be under $30 and has a screen), and then use an API like this to trigger vibration: https://github.com/betomaluje/Mi-Band
Both trackers can be removed from the wristband and look like a 2 cm long pill. They last weeks on a charge with normal use, maybe months when not counting steps.
I have no hands-on experience with BLE and beacons at this point, and am having a hard time figuring out whether they are viable for a particular use case. Wondering if anyone can provide some high-level feedback on its viability:
The goal is to use beacons to track a running race. Runners with their smartphones would be able to log times when they hit various beacons spread throughout an indoor course. Pretty simple scenario.
The problems that I foresee are 1) the ability to continuously scan for beacons at sub second intervals, and 2) the ability to then determine closest range to the beacon at sub second intervals.
I've tried parsing through the estimote and kontakt.io SDKs and am not certain that what I want to do is entirely possible or feasible with these particular beacons (or any for that matter). Further, would there be any device (the smartphones) specific limitations that would apply?
Thanks!
If you are using the Estimote SDK, you can adjust the scan period on BeaconManager.
See BeaconManager#setForegroundScanPeriod in the SDK docs.
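Whether sub-second ranging is workable depends on the phone's BLE stack, but the distance estimate itself is simple. Here is a minimal, platform-agnostic sketch of the usual log-distance path-loss model; the tx_power (expected RSSI at 1 m) and path-loss exponent n are assumed values you would calibrate for your own beacons and course, not anything from the Estimote SDK:

```python
def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Rough distance in meters from a single RSSI reading.

    tx_power: expected RSSI at 1 m (beacon-specific; -59 dBm is an assumption)
    n: path-loss exponent (~2 in free space, higher indoors)
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

def closest_beacon(readings):
    """readings: dict of beacon_id -> latest RSSI; returns the nearest one."""
    return min(readings, key=lambda b: estimate_distance(readings[b]))

# A reading equal to tx_power means roughly 1 m away.
print(estimate_distance(-59))                                      # 1.0
print(closest_beacon({"start": -80, "mid": -59, "finish": -95}))   # mid
```

Note that RSSI is very noisy; for sub-second decisions you would want to average or filter several readings per beacon rather than trust a single sample.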
Which technology is behind Intel's RealSense depth sensor?
Is it a structured-light or ToF (time-of-flight) approach?
Where can I find specs?
You might be able to find some more information on the RealSense Website
As far as specs go, this is all I could find:
Full VGA depth resolution
1080p RGB camera
0.2 – 1.2 meter range (Specific algorithms may have different range and accuracy)
USB 3.0 interface
The IR laser projector on the Intel RealSense F200 camera sends non-visible patterns (coded light) onto the object. The reflected patterns are captured by the IR camera and processed by the ASIC, which assigns a depth value to each pixel to create a depth video frame.
Applications see 2 (depth and color) video streams.
The ASIC syncs the depth stream with the color stream (texture mapping) using the UVC time stamp and generates data flags for each depth value (valid, invalid, or motion detected).
At least for the front-facing camera, it seems to be structured light.
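Coded/structured light recovers depth by triangulation, much like a stereo pair: the projector and IR camera sit a known baseline apart, and the shift (disparity) of the projected pattern on the sensor encodes distance. A toy sketch of the pinhole triangulation relation — the focal length and baseline below are made-up illustration values, not F200 specs:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole triangulation: Z = f * b / d.

    disparity_px: pattern shift in pixels between projected and observed view
    focal_px:     focal length expressed in pixels
    baseline_m:   projector-to-camera distance in meters
    """
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> at infinity (or invalid)
    return focal_px * baseline_m / disparity_px

# Illustration only: f = 600 px, baseline = 5 cm, 60 px of shift
print(depth_from_disparity(60, 600, 0.05))  # 0.5 -> 50 cm away
```

This also explains the limited 0.2–1.2 m range quoted above: disparity shrinks with distance, so depth precision falls off quickly for far objects.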
I'm new to iBeacon, and would like to simulate entering and exiting an iBeacon region, to see how notifications work on entering/exiting a region when an app monitoring for iBeacons is in the background.
The iBeacon I'd like to try this with would be a virtual iBeacon, running on a Mac or an iOS device.
Can this be done by fluctuating the power or is there a better way to do it? And are there any good examples of doing this anywhere?
The easiest way to do this is by simply turning the iBeacon on and off. I do this every day using our MacBeacon and Locate for iBeacon test tools which have on-screen on/off switches.
In theory, you could do what you suggest by turning the radio power way down. But iOS, OS X, and Linux do not let you adjust the transmit power, so turning the transmission off completely is the easier and simpler alternative.
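If you do script a virtual beacon yourself (e.g. with a BLE library and a spare adapter), it helps to know exactly what an iBeacon advertises, since that is the block you would start and stop broadcasting. A sketch of the 25-byte manufacturer-specific data layout — the UUID, major, minor, and measured-power values below are placeholders:

```python
import struct
import uuid

def ibeacon_payload(proximity_uuid, major, minor, measured_power):
    """Build the iBeacon manufacturer-specific data block:
    Apple company ID (0x004C, little-endian) + iBeacon type 0x02 +
    payload length 0x15 (21) + 16-byte proximity UUID +
    big-endian major and minor + signed measured power at 1 m."""
    return (
        b"\x4c\x00"                        # Apple company identifier
        + b"\x02\x15"                      # iBeacon type, payload length (21)
        + uuid.UUID(proximity_uuid).bytes  # proximity UUID
        + struct.pack(">HHb", major, minor, measured_power)
    )

payload = ibeacon_payload("e2c56db5-dffb-48d2-b060-d0f5a71096e0", 1, 2, -59)
print(len(payload))  # 25
```

Toggling this advertisement on and off is exactly what the on-screen switches in the test tools above do for you.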
I want to use a camera that is installed in my computer in a Flex AIR application I'm writing, and I have a few questions regarding the quality options:
Do I have any limitation on the video image quality? If my camera supports HD recording, will I be able to record in HD format via the AIR application?
How can I export the recorded video in any format I want?
If I want to use the same camera for shooting stills, how can I ensure (within the code) the best quality for the resulting pictures?
Thanks for your answers.
1) AIR can play 1080p no problem; however, it cannot 'record' it from within the application. There are some workarounds, but you won't have sound. This is something the OS should handle.
2) You can't, see above.
3) Still-image quality is determined on the hardware side, not in software. In software, a still would essentially be just one frame taken out of the video stream.