What is the rough concept behind the AOA object tracking? - azure-object-anchors

I am currently working on my Master's thesis, trying out different object detection/tracking SDKs for HoloLens 2. Are you using some kind of ICP algorithm for detecting and tracking the objects? I assume that you are not using any geometry-based model tracking algorithm?

Currently, AOA detects and tracks objects by leveraging different kinds of information about both the object and the surrounding environment. This includes geometry data and some ICP algorithms.
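The exact AOA pipeline is not documented in detail, but to give a rough intuition of the ICP part, here is a minimal alignment sketch using PCL's pcl::IterativeClosestPoint. PCL is just a convenient stand-in, not what AOA uses internally, and the file names and point type are made up:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/io/pcd_io.h>
#include <pcl/registration/icp.h>
#include <iostream>

int main() {
  using Cloud = pcl::PointCloud<pcl::PointXYZ>;
  Cloud::Ptr model(new Cloud), scan(new Cloud);

  // Hypothetical inputs: a point cloud sampled from the object's CAD model,
  // and a point cloud reconstructed from the device's depth sensor.
  pcl::io::loadPCDFile("model.pcd", *model);
  pcl::io::loadPCDFile("scan.pcd", *scan);

  // ICP iteratively pairs closest points and solves for the rigid transform
  // that minimizes the distances between those pairs.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(model);
  icp.setInputTarget(scan);
  icp.setMaximumIterations(50);

  Cloud aligned;
  icp.align(aligned);

  if (icp.hasConverged()) {
    // 4x4 rigid transform: the estimated object pose in the scan's frame.
    std::cout << icp.getFinalTransformation() << std::endl;
  }
  return 0;
}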

Related

Point Cloud library pose estimation given a pre-existing model as truth

PCL's GitHub directs these questions here, so I don't really know where else to ask this.
I'm trying to implement pose estimation given a mesh and a generated point cloud. Using PCL, I know you can do pose estimation with two point clouds from the tutorial. In my case I have an accurate faceted model of my target object. Does there exist a PCL pose estimator that can consume faceted truth models? I would like to avoid using mesh_sampling or mesh2pcd as a workaround.
Googling does not bring any results relevant to my search with the following 54 terms:
point cloud library pose estimation with
mesh
triangles
facets
truth data
model truth data
model
mesh truth data
vertexes
vertices
point cloud library point set registration with
(above)
point cloud library registration with
(above)
point cloud library 6DOF with
(above)
point cloud library pose with
(above)
point cloud library orientation with
(above)
Maybe I don't know the right words to search?
But it appears that it might be possible, because functors like this
pcl::SampleConsensusPrerejective<PointNT,PointNT,FeatureT>
and this
pcl::Registration< PointSource, PointTarget, Scalar >
take what seem to be pretty generic template arguments, only requiring PCL base functionality. But passing pcl::mesh did not compile (though it doesn't appear to be the only "mesh" type in PCL), since mesh doesn't seem to inherit from the base class. The documentation does not discuss what is or is not possible with the template types. Additionally, I have found zero documentation that states this is impossible or indicates that only point clouds are allowed.
Can I use the model directly without point cloud conversion, and if not, why?
PCL is a library for point cloud processing. While some mesh support is available (pcl::PolygonMesh), practically all the implemented algorithms are based on point cloud data.
However, keep in mind that a mesh is just a point cloud plus additional triangulation information (faces), so any point cloud algorithm can be applied to a mesh. You just need to generate a point cloud from your mesh's vertices and ignore the faces; there is no need for mesh sampling (see the sketch below).
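A minimal sketch of that vertices-only conversion, assuming a PCL build with VTK I/O support and a hypothetical model.stl:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/PolygonMesh.h>
#include <pcl/conversions.h>
#include <pcl/io/vtk_lib_io.h>

int main() {
  // Load the faceted truth model as a PolygonMesh (STL assumed here).
  pcl::PolygonMesh mesh;
  pcl::io::loadPolygonFileSTL("model.stl", mesh);

  // Keep only the vertices; the face (triangulation) data is simply ignored.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::fromPCLPointCloud2(mesh.cloud, *cloud);

  // 'cloud' can now be fed to pcl::Registration-based estimators such as
  // pcl::SampleConsensusPrerejective or pcl::IterativeClosestPoint.
  return 0;
}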

Can you resolve identity queries whilst Face API is undergoing training?

I am investigating solutions for identifying people utilising facial recognition and I am interested in using Microsoft's Face API.
I have noted that when adding new people the model needs to be trained again before those people will be recognised.
For our application it is crucial that, whilst training is happening, the model continues to resolve identify requests so that the service runs uninterrupted.
It seems to make sense that the old model would continue to respond to identify requests whilst the new model is being trained up but I am not sure if this assumption is correct.
I would be grateful if someone with knowledge of the API could advise whether this is the case and, if not, whether there is another way to ensure continuous resolution of identify requests. I have thought about creating a whole new person group with all the new images, but this involves copying a lot of data and seems an inefficient way to go.
From the same documentation link in the previous answer:
During the training period, it is still possible to perform Identification and FindSimilar if a successful training is done before. However, the drawback is that the new added persons/faces will not appear in the result until a new post migration to large-scale training is completed.
My understanding is that this applies to LargePersonGroups (hence "post migration to large-scale training"), but it is unclear whether it would also work for the legacy PersonGroups.
I tried a few manipulations on my projects using the Face API, but my collection of faces is too small and the training completes too quickly to check. I think training does not block the previous version, but I cannot guarantee it.
Anyway, you will be interested in the following part of the documentation addressing the problems of training latency: https://learn.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/how-to-use-large-scale#buffer
It shows how you can avoid the problem you describe by using a "buffer" group.
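For illustration, here is a minimal libcurl sketch that polls the documented LargePersonGroup training-status endpoint; the region, group id, and subscription key are placeholders. While training reports "running", Identify requests should keep being answered by the last successfully trained model:

#include <curl/curl.h>
#include <iostream>
#include <string>

// Collect the HTTP response body into a std::string.
static size_t collect(char* data, size_t size, size_t nmemb, void* out) {
  static_cast<std::string*>(out)->append(data, size * nmemb);
  return size * nmemb;
}

int main() {
  // Placeholders: substitute your own resource region, group id, and key.
  const std::string url =
      "https://westus.api.cognitive.microsoft.com/face/v1.0/"
      "largepersongroups/my-group/training";

  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  std::string body;
  struct curl_slist* headers = curl_slist_append(
      nullptr, "Ocp-Apim-Subscription-Key: <your-key>");

  curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
  curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

  // GET is libcurl's default method. Newly added persons only appear in
  // Identify results once the returned status becomes "succeeded".
  if (curl_easy_perform(curl) == CURLE_OK)
    std::cout << body << std::endl;  // JSON: {"status": "...", ...}

  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  return 0;
}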

Finding similar items using Microsoft Cognitive Services

Which Microsoft Cognitive Service (or Azure Machine Learning service?) is the best and least-effort option for solving the problem of finding similar articles, given an article? An article is a string of text, and assume I do not have user interaction data about the articles.
Is there anything in Microsoft Cognitive Services that can solve this problem out of the box? It seems I cannot use the Recommendations API since I don't have interaction/user data.
Anthony
I am not sure the Text Analytics API is a good fit for this scenario, at least not yet.
There are really two types of similarities:
1. Surface similarity (lexical) – Similarity by presence of the same words/characters
If we are looking for surface similarity, try fuzzy matching/lookup (SQL Server Integration Services provides a component for this) or approximate string-similarity functions (Jaro-Winkler distance, Levenshtein distance), etc. This would be easier, as it would not require you to create a custom machine learning model (a minimal Levenshtein sketch follows after this answer).
2. Semantic similarity – Similarity by meaning of words
If we are looking for semantic similarity, then you need to go for semantic clustering, word embeddings, DSSM (Deep Semantic Similarity Model), etc. This is harder to do, as it would require you to train your own machine learning model on an annotated corpus.
Luis Cabrera | Text Analytics Program Manager | Cloud AI Platform, Microsoft
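As a concrete illustration of the surface-similarity option above, here is a minimal sketch of the classic dynamic-programming Levenshtein distance:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Classic dynamic-programming edit distance: the minimum number of
// single-character insertions, deletions, and substitutions needed to
// turn one string into the other.
int levenshtein(const std::string& a, const std::string& b) {
  std::vector<int> prev(b.size() + 1), curr(b.size() + 1);
  for (size_t j = 0; j <= b.size(); ++j) prev[j] = static_cast<int>(j);
  for (size_t i = 1; i <= a.size(); ++i) {
    curr[0] = static_cast<int>(i);
    for (size_t j = 1; j <= b.size(); ++j) {
      int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
      curr[j] = std::min({prev[j] + 1,         // deletion
                          curr[j - 1] + 1,     // insertion
                          prev[j - 1] + cost});  // substitution
    }
    std::swap(prev, curr);
  }
  return prev[b.size()];
}

int main() {
  std::cout << levenshtein("article", "articles") << std::endl;  // prints 1
  return 0;
}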
Yes, you can use the Text Analytics API. Examples are available here: https://www.microsoft.com/cognitive-services/en-us/text-analytics-api
I would suggest you use the Text Analytics API [1] as @Narasimha suggested. You would put your strings through the Topic Detection API and then come up with a metric (say, Similarity = count(matching topics) - count(non-matching topics)) that could order each string against the others for similarity. This would require just one API call and a little JSON parsing (see the sketch after the link below).
[1] https://www.microsoft.com/cognitive-services/en-us/text-analytics-api
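To make the suggested metric concrete, here is a minimal sketch; the topic sets are made up, and in practice they would come from parsing the API's JSON response:

#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <string>

// Similarity = count(matching topics) - count(non-matching topics),
// computed over the topic sets detected for two articles.
int similarity(const std::set<std::string>& a, const std::set<std::string>& b) {
  std::set<std::string> common;
  std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                        std::inserter(common, common.begin()));
  int matching = static_cast<int>(common.size());
  int nonMatching = static_cast<int>(a.size() + b.size()) - 2 * matching;
  return matching - nonMatching;
}

int main() {
  // Hypothetical topics detected for two articles.
  std::set<std::string> doc1 = {"cloud", "machine learning", "azure"};
  std::set<std::string> doc2 = {"cloud", "azure", "storage"};
  std::cout << similarity(doc1, doc2) << std::endl;  // 2 matching - 2 non-matching = 0
  return 0;
}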
Sentence similarity or semantic textual similarity is a measure of how similar two pieces of text are, or to what degree they express the same meaning.
This Microsoft GitHub repo for NLP provides some samples which can be used from an Azure VM and Azure ML: https://github.com/microsoft/nlp/tree/master/examples/sentence_similarity
This folder contains examples and best practices, written in Jupyter notebooks, for building sentence similarity models. The gensen and pretrained embeddings utility scripts are used to speed up the model building process in the notebooks.
The sentence similarity scores can be used in a wide variety of applications, such as search/retrieval, nearest-neighbor or kernel-based classification methods, recommendations, and ranking tasks.
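Whatever model produces the sentence embeddings, the similarity score itself typically boils down to a cosine similarity between the two vectors. A minimal sketch with made-up embeddings:

#include <cmath>
#include <iostream>
#include <vector>

// Cosine similarity between two sentence-embedding vectors: values near 1.0
// mean the vectors point the same way (similar meaning, under the embedding).
double cosineSimilarity(const std::vector<double>& a,
                        const std::vector<double>& b) {
  double dot = 0.0, normA = 0.0, normB = 0.0;
  for (size_t i = 0; i < a.size(); ++i) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (std::sqrt(normA) * std::sqrt(normB));
}

int main() {
  // Made-up 4-dimensional embeddings; real models emit hundreds of dimensions.
  std::vector<double> s1 = {0.2, 0.8, 0.1, 0.4};
  std::vector<double> s2 = {0.3, 0.7, 0.0, 0.5};
  std::cout << cosineSimilarity(s1, s2) << std::endl;
  return 0;
}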

Create instance specification code in Rhapsody

I am working on a Rhapsody SysML project for work and we need to be able to model different configurations of our system. To give a concrete example, if our system is a vehicle, we want to be able to simulate that vehicle with different configurations of engines, wheels, etc.
This is my first time using SysML but in the book A Practical Guide to SysML it discusses, in chapter 7, the concept of Instance Specifications. These sound like exactly what we need, and Rhapsody appears to have support for them. So we created an Instance Specification in Rhapsody, giving it specific values for the engine and wheels. But once we create the instance specification we cannot find any way to actually create an instance from that specification. We noticed that Rhapsody doesn't even generate any code for the instance specification.
So my questions are the following: can Instance Specifications be used to create different configurations of a system, and if so, how? If not, what is the best method for modeling different configurations of a system?
Thanks for any help you can provide.

How to check the efficiency of UIMA Annotators?

I have made a few annotators in UIMA and now I want to check their efficiency. Is there a standardized way to gauge the performance of the annotators?
UIMA itself does not provide immediate support for comparing annotators and evaluating them against a gold standard.
However, there are various tools/implementations out there that provide such functionality on top of UIMA but typically within the confines of the particular tool, e.g.:
U-Compare supports running multiple annotators doing the same thing and comparing their results
WebAnno is an interactive annotation tool that uses UIMA as its backend and that supports comparing annotations from multiple users to each other. There is a class called "CasDiff2" in the code that generates differences and feeds them into DKPro Statistics in the background for the actual agreement calculation. Unfortunately, CasDiff2 cannot really be used separately from WebAnno (yet).
Disclosure: I'm on the WebAnno team and have implemented CasDiff2 in there.
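For intuition about what such an agreement calculation involves, here is a minimal sketch of Cohen's kappa for two annotators labeling the same items; the annotations are made up, and DKPro Statistics provides this and many related measures out of the box:

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Cohen's kappa: chance-corrected agreement between two annotators who
// labeled the same items. kappa = (p_o - p_e) / (1 - p_e), where p_o is
// the observed agreement and p_e the agreement expected by chance.
double cohensKappa(const std::vector<std::string>& a,
                   const std::vector<std::string>& b) {
  const double n = static_cast<double>(a.size());
  std::map<std::string, double> marginA, marginB;
  double agree = 0.0;
  for (size_t i = 0; i < a.size(); ++i) {
    if (a[i] == b[i]) agree += 1.0;
    marginA[a[i]] += 1.0;
    marginB[b[i]] += 1.0;
  }
  const double po = agree / n;
  double pe = 0.0;
  for (const auto& [label, countA] : marginA) {
    auto it = marginB.find(label);
    if (it != marginB.end()) pe += (countA / n) * (it->second / n);
  }
  return (po - pe) / (1.0 - pe);
}

int main() {
  // Made-up labels for five tokens from two annotators.
  std::vector<std::string> ann1 = {"PER", "O", "LOC", "O", "PER"};
  std::vector<std::string> ann2 = {"PER", "O", "LOC", "PER", "PER"};
  std::cout << cohensKappa(ann1, ann2) << std::endl;  // 0.6875
  return 0;
}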
