Recognizing QR codes with the Computer Vision API - Xamarin.Forms

I'm in the design stages of writing a Xamarin.Forms app that will process camera pictures of legal documents. Is it possible to use the Computer Vision API to detect and parse QR codes?

I was looking for the same thing and reviewed the documentation again, but didn't find any suitable operation in Computer Vision up to version 2.1. In the end I settled on this service: http://goqr.me/api/, which does exactly that and seems to work quite well. I tried ZXing, too, but with unsatisfactory results.
Hope that helps.
Joerg.
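For reference, the goqr.me service exposes a plain HTTP read endpoint. The sketch below (Python for brevity, even though the app itself would be C#) shows how a request could be built and the JSON response parsed. The response shape used here is an assumption based on the service's documentation, so verify it against the live API before relying on it.

```python
import json
import urllib.parse

# goqr.me's read endpoint lives on api.qrserver.com; treat the exact
# response shape as an assumption and check the API docs before use.
READ_ENDPOINT = "https://api.qrserver.com/v1/read-qr-code/"

def build_read_url(image_url: str) -> str:
    """Build a GET URL asking the service to decode the QR code in the
    image at `image_url`."""
    return READ_ENDPOINT + "?" + urllib.parse.urlencode({"fileurl": image_url})

def extract_qr_data(response_body: str):
    """Pull the decoded strings out of the service's JSON response.
    Assumed shape: [{"type": "qrcode", "symbol": [{"data": ..., "error": ...}]}]."""
    results = []
    for item in json.loads(response_body):
        for symbol in item.get("symbol", []):
            if symbol.get("error") is None and symbol.get("data") is not None:
                results.append(symbol["data"])
    return results

# A sample response in the assumed shape (no network call made here):
sample = '[{"type":"qrcode","symbol":[{"seq":0,"data":"http://example.com","error":null}]}]'
print(extract_qr_data(sample))  # -> ['http://example.com']
print(build_read_url("https://example.com/doc.png"))
```

The same two steps (build the request URL, parse the JSON array of symbols) translate directly to HttpClient and a JSON library in a Xamarin.Forms app.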

Related

Body Tracking using Point Cloud from Azure Kinect

I'm using multiple Azure Kinect devices to create a merged point cloud with the PCL and Open3D libraries, because Azure Kinect doesn't support multi-device body-tracking fusion. I've read about people computing joints (position and orientation) from each individual Kinect and then fusing them in different ways, such as with a Kalman filter, but the more correct way to obtain good tracking is to use a merged cloud and then track the detected bodies. However, I can't find any project or SDK for this, just scientific research.
Can anyone help me? Thank you very much.
I think the reason you're unable to find any sort of library to use is that none exist! If you're able to fuse the point clouds successfully, you could try running the body tracking on the result to see if it improves things, or turn it into a mesh and use some sort of mesh-based pose estimation.
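As a rough illustration of the fusion step itself, here is a minimal NumPy sketch of merging per-device clouds into one world frame. It assumes you already have an extrinsic calibration (a 4x4 device-to-world transform) for each Kinect; PCL and Open3D provide equivalent transform-and-concatenate operations on their own cloud types.

```python
import numpy as np

def transform_points(points: np.ndarray, extrinsic: np.ndarray) -> np.ndarray:
    """Apply a 4x4 rigid transform (device -> world) to an (N, 3) point array."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (homogeneous @ extrinsic.T)[:, :3]

def merge_clouds(clouds, extrinsics):
    """Bring each device's cloud into the shared world frame and stack them."""
    return np.vstack([transform_points(c, e) for c, e in zip(clouds, extrinsics)])

# Two toy "devices": one at the world origin, one translated 1 m along x.
cloud_a = np.array([[0.0, 0.0, 1.0]])
cloud_b = np.array([[0.0, 0.0, 1.0]])
shift_x = np.eye(4)
shift_x[0, 3] = 1.0
merged = merge_clouds([cloud_a, cloud_b], [np.eye(4), shift_x])
print(merged)  # the two points expressed in the shared frame
```

In practice you would feed the merged cloud (after registration refinement such as ICP) into whatever tracking step you run next; the calibration quality dominates the result.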

What are difference between Computer Vision API v1.0 and v2.0?

Both have their own documentation, and I see only small wording differences between them. Is there a list of things that have actually changed? Has OCR, for example, improved in version 2.0, or is it the same except, I guess, for the handwriting recognition? Some kind of changelog would really make a difference.
The only difference between v1.0 and v2.0 is the revised /recognizeText endpoint, which has a breaking change in its input/output. All other endpoints are exactly the same. Also, if you have a key in an up-to-date pricing tier (including free), your API key will work in both versions.
As you may know, the Computer Vision API has two different OCR endpoints. The /ocr endpoint runs the older recognition engine with broader language coverage. The newer /recognizeText endpoint, which in v1.0 handled only handwritten text, in v2.0 covers both handwritten and printed text using a newer engine. The /recognizeText endpoint remains English-only for now. You select between the handwritten and printed modalities using the mode query parameter; see the documentation.
As for documenting changes, there isn't one obvious place for this, unfortunately. One option is to check the Swagger repo.
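To make the mode parameter concrete, here is a small Python sketch of how a v2.0 recognizeText request URL could be assembled. The host pattern and the asynchronous flow described in the comment reflect the v2.0 docs as I recall them, so double-check them against the current reference before use.

```python
import urllib.parse

def recognize_text_url(region: str, mode: str) -> str:
    """Build the v2.0 recognizeText request URL. `mode` selects the
    modality: 'Printed' or 'Handwritten'."""
    if mode not in ("Printed", "Handwritten"):
        raise ValueError("mode must be 'Printed' or 'Handwritten'")
    base = f"https://{region}.api.cognitive.microsoft.com/vision/v2.0/recognizeText"
    return base + "?" + urllib.parse.urlencode({"mode": mode})

print(recognize_text_url("westus", "Printed"))
# The call itself is asynchronous: POST the image (with your
# Ocp-Apim-Subscription-Key header) to this URL, then poll the
# Operation-Location URL returned in the response until the result is ready.
```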

Point Cloud data - learning sources

What are the best learning sources and study materials for point cloud data and the Point Cloud Library (PCL)?
So far I have come across the PCL documentation.
There is a GitHub repository (which was last edited in 2013 :-( ). Maybe it helps you a little bit.
There is also a user forum where you can ask questions.
In addition, there are a few resources on the web:
PDF explaining the PCL (in German, though)
Another PDF explaining PCL
Tutorial for PCL
Another tutorial for PCL
This PDF is best for understanding point clouds.
The documentation you mention is good once you know the basic flow, so my opinion is to start with the given PDF and then continue with the official documentation.
That's exactly what I am doing :)

Android client for SignalR

I need help with the SignalR client for Android. I am using the SignalR/Java-Client library but don't know where to start :) We have completed the .NET self-host and it's working fine; the only problem is with Android and iPhone. Can anyone please guide me on the next steps for Android and iPhone?
You don't give a lot of details, so it's hard to give a concrete answer, especially since your question is very broad to begin with. Nonetheless, you should have a look at the official samples for the Java client to get you started. If you have implemented the server side yourself and know your Java, it should be pretty easy to figure out from the code provided in those samples. The Java client is, in my experience, very easy to use.
As for an iOS client, a Google search came up with this library. I have never used it, and it looks like it's not getting a whole lot of support, but you could always give it a shot.

D3D11 in Metro doesn't support D3DReflect? (Why Not?)

D3D11 in Metro doesn't support D3DReflect.
Why not?
My API uses this to dynamically get the constant buffer sizes of shaders.
Is there any other way to get a constant buffer size dynamically in D3D11 without a ID3D11ShaderReflection object? Or get the constant variables by name?
What if I wanted to make a shader compiler tool for Metro?
What if I wanted to make an art application that allowed you to dynamically generate complex brushes, which requires shader generation? But this doesn't work.
Do Windows (desktop), OS X, Linux, iOS or Android have these shader limitations?
No, so why on earth does Metro?
See http://social.msdn.microsoft.com/Forums/en-US/wingameswithdirectx/thread/9ae33f2c-791a-4a5f-b562-8700a4ab1926 for some discussion about it.
There is no official position explaining why they made this restriction, but it is very similar to the general restriction on dynamic code execution on WinRT. So your application scenario is unfortunately not possible.
It would, though, be feasible to hack/patch d3dcompiler_xx.dll and redirect all DLL imports to another DLL that uses only authorized APIs, but that's quite some work, and it is not even certain that this is legal (even if you extract the DLL code from the original d3dcompiler and rebuild a new DLL).
Another option for your scenario is to send the shader over the internet to a server that compiles it and returns the bytecode and reflection info... far from perfect.
Among the platforms you mention, iOS is probably the one that could have the same restriction (I don't develop on that platform, so I can't confirm it).
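To make the remote-compilation idea concrete, here is a hedged Python sketch of one possible wire format for such a service. The field names and shapes are entirely hypothetical; the actual D3DCompile/D3DReflect work would run on the desktop server, and the Metro client would only serialize requests and decode responses like this.

```python
import base64
import json

def make_compile_request(hlsl_source: str, entry: str, target: str) -> str:
    """Serialize a compile request for the hypothetical compile service."""
    return json.dumps({"source": hlsl_source, "entry": entry, "target": target})

def parse_compile_response(body: str):
    """Decode the service's assumed response: shader bytecode plus the
    constant-buffer sizes that D3DReflect produced on the server side."""
    msg = json.loads(body)
    bytecode = base64.b64decode(msg["bytecode"])
    return bytecode, msg["constant_buffers"]  # e.g. {"PerFrame": 64}

# Round-trip with a fake server response (no real compiler involved):
fake = json.dumps({"bytecode": base64.b64encode(b"\x44\x58\x42\x43").decode(),
                   "constant_buffers": {"PerFrame": 64}})
code, cbuffers = parse_compile_response(fake)
print(len(code), cbuffers)
```

The client can then create its ID3D11Buffer objects directly from the returned sizes, sidestepping the missing D3DReflect entirely, at the cost of requiring connectivity.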
