Microsoft Face API - Find Similar Faces from Real-Time Video - azure-cognitive-services

I am working on an application that detects user faces and finds similar faces in real-time live streaming video using the Microsoft Cognitive Services Face API. I am using the following example for reference:
https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/tree/master/Windows/LiveCameraSample
I am able to detect faces in real-time video, but I have no idea how to store the face images or how to find similar faces among them.
Can anyone please point me in the right direction?

You are looking for another existing feature of the Face API. It is a combination of several operations:
Initiate your project by creating a Person Group.
Use Detect to find faces in images.
Use Find Similar to check whether a detected face already exists among the faces you have stored (Find Similar matches against a face list; to match against the persons in a person group, the Identify operation is the usual tool).
For new faces, create a person and store the newly created persons and their associated faces in the person group.
Face API documentation: here
But in fact, there is an open-source demo project from Microsoft called Intelligent Kiosk that has a demo called Realtime Crowd Insights doing exactly these things. The source code is on GitHub, here.
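For illustration (this is not the Kiosk code), here is a minimal TypeScript sketch of that detect / find-similar / store loop against the Face API v1.0 REST endpoints. The endpoint region, the known-faces face list ID, and the helper functions are assumptions for this example; the face list itself would be created once beforehand with a PUT to /facelists/known-faces:

```typescript
// Minimal sketch of the detect -> find similar -> store loop.
// The endpoint region, key, and face list ID are placeholders.
const ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0";
const HEADERS = {
  "Ocp-Apim-Subscription-Key": "<your-face-api-key>",
  "Content-Type": "application/json",
};

// Detect faces in a frame; returns transient faceIds.
async function detectFaces(imageUrl: string): Promise<string[]> {
  const res = await fetch(`${ENDPOINT}/detect`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ url: imageUrl }),
  });
  const faces: { faceId: string }[] = await res.json();
  return faces.map((f) => f.faceId);
}

// Check whether a detected face resembles one already in the face list.
async function findSimilar(
  faceId: string,
): Promise<{ persistedFaceId: string; confidence: number }[]> {
  const res = await fetch(`${ENDPOINT}/findsimilars`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({
      faceId,
      faceListId: "known-faces", // created beforehand with PUT /facelists/known-faces
      maxNumOfCandidatesReturned: 1,
      mode: "matchPerson",
    }),
  });
  return res.json(); // empty array when no similar face is stored yet
}

// For an unknown face, persist it into the face list for future matches.
async function storeFace(imageUrl: string): Promise<void> {
  await fetch(`${ENDPOINT}/facelists/known-faces/persistedFaces`, {
    method: "POST",
    headers: HEADERS,
    body: JSON.stringify({ url: imageUrl }),
  });
}
```

Note that the faceId values returned by Detect are transient (they expire after about 24 hours), which is why unknown faces have to be persisted into the face list before they can be matched in later frames.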

Guidance needed for Corda application design

I have a web background, mostly with JavaScript. I recently started learning Corda for a project implementation and need guidance.
Our application is web-based. Users sign up under different school names and create question papers, then share either part of a paper or the whole thing with teachers from other schools on our platform. Those teachers can make changes and assign the paper back to the creator, and the process goes back and forth until the paper is signed off as final; once finalized, it cannot be changed by anyone. I need to store these transactions in a Corda application, but I am not sure how to go about it. I did try replicating the flow using the negotiation application in corda/kotlin/samples, but got stuck on a bug while trying to send a list of objects.
I have the following questions in mind:
Should I use the Enterprise edition or go with open source? I think I need a schema design for this; the web database is in PostgreSQL.
As far as I have seen, each node is predefined in the config with a username and password. Is there a way to create a node when a user signs up?
I have schools, and teachers inside each school. Do I need a separate node for each school and then create states in each node (not sure if a node can be set up at run time)? Or do I use the provided accounts library to create an account for each teacher? If so, is there a way to use passwords with it? I am unable to find a password field in it.
How do I send an array of objects to the state? Or should I create a separate state for each question, since different questions can be assigned to different teachers, while multiple questions can also be assigned to the same teacher?
These are a few of the questions on my mind; any help is much appreciated, as most of the examples are IOU samples or states with ints and strings. Please point me in the right direction.
Alessandro has good advice here; definitely look at the samples repos for inspiration on how to build what you're looking for.
Start with open source; it's easier to prototype with, and you can switch to Enterprise later without it being an issue for you.
This depends on your design. You wouldn't really want to create a new Corda node per person; you might want to have Corda accounts that run on a single node instead. See the accounts SDK here: https://github.com/corda/accounts
What you might do is run a Corda node for each school and then an account per teacher, like you were already thinking. That would mean only a handful of nodes, based on the number of schools you have.
As long as your state is marked with @CordaSerializable, you won't have problems sending arrays of data. I send an array in a state in this sample: https://github.com/corda/samples-java/blob/master/Advanced/secretsanta-cordapp/contracts/src/main/java/net/corda/samples/secretsanta/states/SantaSessionState.java#L24
https://github.com/corda/samples-java
https://github.com/corda/samples-kotlin

How to use Google Cloud Search and Vision API together

Hello everyone,
I have a rather atypical topic here, and I hope I am addressing the right people.
I'm working on a personal project. I recently became a G Suite customer and would like to handle my document and media management via Google Drive. The document management works well so far, and with the help of Google Cloud Search I can easily find my documents across platforms.
Since I personally take a lot of pictures, I was wondering if I could use Google products to classify my pictures automatically. My approach would be to use the label detection of the Vision API to store the five most likely labels as metadata. Using that metadata, when I search for, say, architecture or animal, I can then find all images tagged with one of those terms in a single search. The concept should, of course, be extendable to landmark and text detection.
I have already tried to build an automation via services like integromat.com that labels the photos, but unfortunately without success.
And now we come to the current situation: since I realized that actively interacting with Google Cloud is essential, I am looking for help from an experienced community. I hope that someone here has a good or inspiring idea.
One more hint before proposals are made: Google Photos is great and can do something like this, but it doesn't integrate with Google Cloud Search, and managing RAW files in it would be terrible.
You can achieve what you want using the following approach:
Build a web/mobile app to upload photos to Google Drive or Cloud Storage.
Use the Google Vision API to fetch metadata from each image before uploading it to Drive/Cloud Storage.
Use the Google Cloud Search REST API to index the extracted metadata, along with the image URL, into Cloud Search.
Create a custom search interface to search and display your indexed images.
The above steps should point you in the right direction for implementing the solution. Let me know if you need further help with it.
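As a rough illustration of the Vision and Cloud Search steps above, here is a minimal TypeScript sketch that labels an image and then indexes the labels into a Cloud Search data source. The API key, token, data source ID, item fields, and ACL are placeholder assumptions; a data source has to be registered with Cloud Search beforehand, and the OAuth token needs the Cloud Search indexing scope:

```typescript
// Sketch: label an image with the Vision API, then index the labels into a
// Cloud Search data source. All IDs and credentials below are placeholders;
// check the Cloud Search indexing docs for the exact item schema.
const VISION_KEY = "<your-vision-api-key>";
const ACCESS_TOKEN = "<oauth2-token-with-cloud-search-indexing-scope>";
const DATA_SOURCE = "datasources/<your-datasource-id>";

// Fetch the five most likely labels for an image via LABEL_DETECTION.
async function labelImage(imageUri: string): Promise<string[]> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${VISION_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [{
          image: { source: { imageUri } },
          features: [{ type: "LABEL_DETECTION", maxResults: 5 }],
        }],
      }),
    },
  );
  const json = await res.json();
  const annotations: { description: string }[] =
    json.responses?.[0]?.labelAnnotations ?? [];
  return annotations.map((a) => a.description);
}

// Index one image item; the labels become its searchable text content.
async function indexImage(
  itemId: string,
  imageUrl: string,
  labels: string[],
): Promise<void> {
  await fetch(
    `https://cloudsearch.googleapis.com/v1/indexing/${DATA_SOURCE}/items/${itemId}:index`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        mode: "SYNCHRONOUS",
        item: {
          name: `${DATA_SOURCE}/items/${itemId}`,
          itemType: "CONTENT_ITEM",
          acl: { readers: [{ gsuitePrincipal: { gsuiteDomain: true } }] },
          metadata: { title: itemId, sourceRepositoryUrl: imageUrl },
          content: {
            inlineContent: btoa(labels.join(" ")), // base64, as the API expects
            contentFormat: "TEXT",
          },
          version: btoa("1"), // any comparable version token works here
        },
      }),
    },
  );
}
```

Once items are indexed this way, a query such as architecture through the Cloud Search query API (or a custom search widget) should return every image whose stored labels include that term.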

How do I use texStorage2D?

Does anyone have a working, end-to-end example of this API in WebGL 2, or could you point me to one? I have already searched and couldn't find anything but the API documentation.
I'd like to know how the calls would differ from what I currently do. Do I still need the following calls? Are there texStorage2D equivalents for them?
createTexture
bindTexture
texSubStorage2D? (how do I upload data?)
activeTexture
framebufferTexture2D
readPixels
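To sketch an answer under the usual WebGL 2 usage: createTexture, bindTexture, activeTexture, framebufferTexture2D, and readPixels are all still needed and unchanged. What changes is the allocation: texStorage2D replaces the texImage2D call, and data is uploaded afterwards with texSubImage2D (there is no texSubStorage2D). A minimal self-contained TypeScript example:

```typescript
// Minimal end-to-end sketch: allocate with texStorage2D, upload with
// texSubImage2D, attach to a framebuffer, and read the pixels back.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl2")!;

const width = 4;
const height = 4;

const tex = gl.createTexture();     // still needed
gl.activeTexture(gl.TEXTURE0);      // still needed
gl.bindTexture(gl.TEXTURE_2D, tex); // still needed

// texStorage2D replaces the texImage2D allocation: one call sizes every
// mip level up front, and the texture becomes immutable in size/format.
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, width, height);

// Data is uploaded with texSubImage2D; there is no texSubStorage2D.
const pixels = new Uint8Array(width * height * 4).fill(255);
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, width, height,
                 gl.RGBA, gl.UNSIGNED_BYTE, pixels);

// framebufferTexture2D and readPixels are unchanged from WebGL 1.
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex, 0);

const readback = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, readback);
```

The trade-off is that texStorage2D creates an immutable-format texture: every level is allocated up front, so the texture is always complete, but it can no longer be resized or reallocated with a different format.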

Qt 3D and the Oculus SDK

Given Qt 3D's structure, is it possible to integrate the Oculus SDK with a Qt 3D application?
I have tried, but my two main obstacles are:
I can't use the textures from the texture swap chain created by the Oculus SDK as a render target attachment.
I am not able to call ovr_SubmitFrame at the end of each frame, since Qt 3D doesn't have a signal that would allow me to do so.
Has anyone successfully gotten the Oculus SDK to work with Qt 3D? If so, how did you overcome these issues?
Are there any plans to allow the integration of VR SDKs (not just Oculus's) in Qt 3D in future releases?
You could probably do it with some sort of custom framegraph that encapsulates the stereo rendering functionality and includes a custom component that takes the currently rendered content and submits it to the SDK prior to the swap-buffer call.
Alternatively, you could dive into the code that processes the framegraph itself and see how hard it would be to customize it to work against a VR API. I've done significant work on integrating Qt apps with VR, but not specifically with Qt 3D.
The frame graph will indeed provide one part of the solution for the stereoscopic rendering setup. There is already an anaglyphic stereo example that ships with Qt 3D showing most of what you need.
Integrating the swap chain of the Oculus SDK will require deeper work. I do not know the details of the Oculus SDK as yet, but we can take a look.
From what I can see, you should be able to do something analogous to the Scene3D custom Qt Quick 2 item to render to the textures provided by the Oculus SDK and to tell Qt 3D which OpenGL context to use. See
http://code.qt.io/cgit/qt/qt3d.git/tree/src/quick3d/imports/scene3d?h=5.7
Nicolas, I also do not appreciate you publicly saying that KDAB are not much help. I only received an email from Karsten on Friday, to which I responded, despite being on vacation, saying that we can help, but that it will be on a best-efforts basis since you are not paying, and I have a very full workload preparing Qt 3D for release at the end of the month along with Qt 5.7. Today is a public holiday in the UK, as you are aware, yet you are already saying detrimental things about us.
You were also directed to post to the interest@qt-project.org mailing list or the qt-forums, as I do not tend to monitor SO or the qt-forums on a regular basis. You could have also emailed us directly or via the development@qt-interest mailing list.
We would be more than happy to set up a support agreement with you.

Creating an indoor map

I wonder if someone here can help. In my web application I'm trying to create a map section.
In my map section the objective is to show an indoor view like in the attached picture from Yahoo Maps. Does someone know how they created the tenant names and floor levels on the map itself?
http://s30.postimg.org/4dh7mlpfl/Yahoo_Maps.png
I think the best answer for this one is going to depend on which mapping framework you were interested in using.
If you're using Yahoo Maps: Yahoo got that indoor map data from Nokia's HERE platform. As far as I know, they don't offer an editor for the indoor mapping data. The major mapping platforms often have some self-service mechanism to add or correct mapping data, so if you were set on having this available and you were using Yahoo's maps, you might want to try contacting someone at Nokia's HERE to see how you might be able to get that to happen.
With that being said, you can do something like that with Google Maps as well. They have information and a way to add the interior layout of a building here. I haven't used it; I just know that it exists, so I can't speak to it in much detail.
There is also some support for this kind of thing in OpenStreetMap. I would post a link to an example of it, but Stack Overflow says I can't post more than 2 links unless I have more than 10 reputation. (Sorry, I'm still relatively new to posting here.)
