I'm using multiple Azure Kinect devices to create a merged point cloud with the PCL and Open3D libraries, because the Azure Kinect SDK doesn't support multi-device body-tracking fusion. I've read about people computing the joints (position and orientation) from each Kinect separately and then fusing them in various ways, such as with a Kalman filter, but the most accurate approach is to merge the clouds first and then track the detected bodies in the merged cloud. However, I can't find any project or SDK for this, just scientific research papers.
Can anyone help me? Thank you very much.
I think the reason you're unable to find any sort of library to use is that none exist! If you're able to fuse the point clouds successfully, you could try running the body tracking on the merged cloud to see if it improves results, or turn it into a mesh and use some sort of mesh-based pose estimation.
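If it helps, here is a minimal sketch of the fusion step with PCL, assuming you have already calibrated the extrinsics between the two devices; the 4x4 transform T_master_sub and the helper name are placeholders of mine, not part of any Kinect SDK:

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/common/transforms.h>
    #include <Eigen/Dense>

    // Bring the subordinate device's cloud into the master's frame, then
    // concatenate. T_master_sub comes from your own extrinsic calibration
    // (e.g. a checkerboard pose or an ICP refinement) -- assumed here.
    pcl::PointCloud<pcl::PointXYZ>::Ptr
    mergeClouds(const pcl::PointCloud<pcl::PointXYZ>::Ptr& masterCloud,
                const pcl::PointCloud<pcl::PointXYZ>::Ptr& subCloud,
                const Eigen::Matrix4f& T_master_sub)
    {
        pcl::PointCloud<pcl::PointXYZ>::Ptr aligned(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::transformPointCloud(*subCloud, *aligned, T_master_sub);

        pcl::PointCloud<pcl::PointXYZ>::Ptr merged(
            new pcl::PointCloud<pcl::PointXYZ>(*masterCloud));
        *merged += *aligned;  // PCL concatenates points with operator+=
        return merged;
    }

Note that the Azure Kinect body tracker only accepts captures from a single device, so this covers the merging only; the tracking on the merged cloud would still need a cloud- or mesh-based estimator.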
I'm in the design stages of writing a Xamarin Forms app that will process camera pictures of legal documents. Is it possible to use the Computer Vision API to detect QR codes and parse them?
I was looking for the same thing and reviewed the documentation again, but didn't find any suitable operation within Computer Vision up to version 2.1. In the end I settled on this service: http://goqr.me/api/, which does exactly that and seems to work quite well. I tried ZXing, too, but with unsatisfactory results.
Hope that helps.
Joerg.
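In case it helps, a minimal sketch of calling that service's read endpoint (shown with libcurl in C++ for brevity; in a Xamarin Forms app the same GET would go through HttpClient). The endpoint path and the fileurl parameter are taken from the goqr.me API page linked above, so verify them there; the image URL is a placeholder:

    #include <curl/curl.h>
    #include <cstdio>
    #include <string>

    // libcurl write callback: append the HTTP response body to a std::string.
    static size_t appendBody(char* data, size_t size, size_t nmemb, void* out)
    {
        static_cast<std::string*>(out)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main()
    {
        // Read endpoint as documented at http://goqr.me/api/ (verify there);
        // the fileurl value must be URL-encoded.
        std::string url = "https://api.qrserver.com/v1/read-qr-code/"
                          "?fileurl=https%3A%2F%2Fexample.com%2Fpage.jpg";
        std::string body;

        CURL* curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, appendBody);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
        CURLcode rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);

        if (rc == CURLE_OK)
            printf("%s\n", body.c_str());  // JSON with the decoded QR content
        return rc == CURLE_OK ? 0 : 1;
    }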
I am trying to write a Gatan DigitalMicrograph script to control the tilting of the incident electron beam before and after a specimen. I think that the values of the pre-specimen lens system can be read and changed using commands such as EMGetBeamTilt, EMSetBeamTilt and EMChangeBeamTilt. However, I don't know how to read or control the status of the post-specimen lens system, such as a projector lens. What command or code should I write in order to control the projector lens system?
Any wisdom you can share would be appreciated. Thank you very much in advance.
Unfortunately, only a limited number of microscope hardware components can be accessed by DM-script via a generalized interface. The generalized commands communicate to the microscope via a software interface which is implemented by the microscope vendor, so that the exact behaviour of each command (i.e. which lenses are driven when a value is changed) lies completely within the control of the microscope software and not DM. Commands to access specific lenses or microscope-specific controls are most often not available.
The available commands, while often present in earlier versions as well, have been officially supported and documented since GMS 2.3. You will find the complete list of commands in the F1 help documentation (on online systems).
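For comparison, this is what the generalized pre-specimen access from your question looks like in DM-script (a minimal sketch; the units of the tilt values and which physical deflectors actually respond are entirely up to the vendor implementation, as described above, and no equivalent commands exist for the projector system):

    // Read the current beam tilt through the generalized interface
    number tiltX, tiltY
    EMGetBeamTilt(tiltX, tiltY)
    Result("Beam tilt: " + tiltX + ", " + tiltY + "\n")

    // Apply a small relative change, then restore the original values
    EMChangeBeamTilt(0.01, 0)
    EMSetBeamTilt(tiltX, tiltY)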
D3D11 in Metro doesn't support D3DReflect.
Why not?
My API uses this to dynamically get the constant buffer sizes of shaders.
Is there any other way to get a constant buffer size dynamically in D3D11 without a ID3D11ShaderReflection object? Or get the constant variables by name?
What if I wanted to make a shader compiler tool for Metro?
What if I wanted to make an art application that allowed you to dynamically generate complex brushes that require shader generation? That doesn't work either.
Do Windows (desktop), OS X, Linux, iOS or Android have these shader limitations?
No, so why on earth does Metro?
See http://social.msdn.microsoft.com/Forums/en-US/wingameswithdirectx/thread/9ae33f2c-791a-4a5f-b562-8700a4ab1926 for some discussion about it.
There is no official statement explaining why they made this restriction, but it is very similar to the general restriction on dynamic code execution in WinRT. So your application scenario is unfortunately not possible.
It would be feasible to hack/patch d3dcompiler_xx.dll and redirect all of its DLL imports to another DLL that uses only authorized APIs, but that's quite some work, and it is not even certain that it would be legal (even if you extracted the code from the original d3dcompiler and rebuilt a new DLL).
Another option for your scenario is to send the shader over the internet to a server that compiles it and returns the bytecode and reflection information... far from perfect.
Among the platforms you mention, iOS is probably the one that could have the same restriction (I don't develop on that platform, so I can't confirm it).
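If your shaders are known at build time, the usual workaround is to run the reflection step on the desktop, in your content pipeline, and ship the constant buffer metadata next to the bytecode. A minimal sketch of that desktop-side query (the function name and the printf output are mine; the reflection API itself is the documented D3DReflect/ID3D11ShaderReflection interface):

    #include <d3dcompiler.h>
    #include <cstdio>
    #pragma comment(lib, "d3dcompiler.lib")

    // Desktop-only: pull constant buffer names and byte sizes out of
    // compiled shader bytecode, e.g. to serialize them for a Metro app.
    HRESULT DumpConstantBuffers(const void* bytecode, SIZE_T byteLen)
    {
        ID3D11ShaderReflection* reflect = nullptr;
        HRESULT hr = D3DReflect(bytecode, byteLen, IID_ID3D11ShaderReflection,
                                reinterpret_cast<void**>(&reflect));
        if (FAILED(hr))
            return hr;

        D3D11_SHADER_DESC shaderDesc = {};
        reflect->GetDesc(&shaderDesc);
        for (UINT i = 0; i < shaderDesc.ConstantBuffers; ++i)
        {
            // Buffer reflection objects are owned by the reflector; they
            // are not ref-counted and must not be Released individually.
            ID3D11ShaderReflectionConstantBuffer* cb =
                reflect->GetConstantBufferByIndex(i);
            D3D11_SHADER_BUFFER_DESC cbDesc = {};
            cb->GetDesc(&cbDesc);
            printf("cbuffer %s: %u bytes\n", cbDesc.Name, cbDesc.Size);
        }
        reflect->Release();
        return S_OK;
    }

The same interface also offers GetVariableByName for per-variable lookups, but again only where D3DReflect itself is allowed, i.e. not inside a packaged Metro app.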
I have to develop an SDK for biometric applications, but I don't know how to start development. Either I need to write my own algorithm or use a free one written by others; if I use someone else's algorithm, it is difficult to judge its quality and results.
Is there any standard source available that can help me in terms of quality?
I don't want to use any available SDK.
Any help in this regard would be much appreciated.
Thanks
Khizar
Programming for biometric devices depends largely on the device you use. Chances are you've received software with your device that has its own evaluation algorithm, which then outputs a value, usually a hash, that your program can handle.
If you're looking for a generic option, a Google search turns up several; one option is M2SYS.
I'm making a distributed sensor network. The basic architecture of my network is to have several slave nodes (up to about 10) reporting back to a master node on a regular basis.
I'm looking for a software framework that I can use for this; so far I have thought of:
CORBA
PubSubHubbub
XMPP
making my own
I have some basic requirements (such as basic security and fault awareness).
Anyone have any suggestions?
To answer your question specifically, TinyOS provides a lot of what you'll need.
There's quite a large body of academic work on getting these up and running, especially combining agent-based infrastructures with sensor networks -- take a look on Google Scholar for example.
There are also some very good links on Wikipedia.
Are you specifically interested in an OS to run on your sensors, or something at higher level that plugs into some sensor infra you already have? Are you intending to build your own kit, or work on something that already exists (e.g. BTNode)?
You can also use RL-ARM or FreeRTOS if you want to use microcontrollers for your project, and at the network layer you can use lwIP.
There are many other libraries, both free and open source, if you want to use ARM-based microcontrollers.
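To make that concrete, here is a minimal sketch of a slave node's reporting task on FreeRTOS with lwIP's BSD-style socket API; the master address, port, payload format and read_sensor() hook are all placeholders you would replace with your own:

    extern "C" {
    #include "FreeRTOS.h"
    #include "task.h"
    #include "lwip/sockets.h"
    }

    // Hypothetical hook into your sensor driver; returns a dummy value here.
    static float read_sensor() { return 0.0f; }

    // Slave-node task: push one UDP reading per second to the master node.
    static void reportTask(void*)
    {
        int sock = lwip_socket(AF_INET, SOCK_DGRAM, 0);

        sockaddr_in master = {};
        master.sin_family = AF_INET;
        master.sin_port = htons(5000);                      // assumed port
        master.sin_addr.s_addr = inet_addr("192.168.0.1");  // assumed master IP

        for (;;)
        {
            float reading = read_sensor();
            lwip_sendto(sock, &reading, sizeof reading, 0,
                        reinterpret_cast<sockaddr*>(&master), sizeof master);
            vTaskDelay(pdMS_TO_TICKS(1000));
        }
    }

The task would be registered with xTaskCreate at boot. UDP keeps the slaves simple, but the security and fault-awareness requirements stay on you: for example, sequence numbers in the payload plus a per-slave timeout on the master to flag dead nodes.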