LUIS not able to classify multiple person names in a query - microsoft-cognitive

I am having an issue wherein LUIS is unable to identify multiple person names in an utterance when they are separated by "and" or "," in a phrase.
For example:
When the user types "Schedule a meeting with Bob and Lisa", LUIS recognizes only Bob as builtin.personName, while Lisa is not recognized at all. Separating the names with a comma didn't work either. If I change the order of the names to "Lisa and Bob", only Lisa gets listed and poor Bob gets ignored this time.
It also failed to identify the name when typing "Book meeting and Bob".
Another attempt was successful after changing the utterance to "Book meeting with Bob as well as Lisa", but that is not how users would generally phrase a query.
Phrase lists didn't help either; I added the two samples below to a phrase list, but the results were the same as stated above.
"Schedule a meeting with {personName}, {personName}"
"can i have {personName} and {personName} for a quick meeting"
I don't see a similar issue with emails separated by "and" or a comma.
Note: I also tried the built-in domain entity Entertainment.Person but got similar results.
Appreciate your help.

You need to make your app more intelligent by adding patterns to it. I tried to replicate your issue by creating my own LUIS app, and it successfully detects the entities. Refer to the app stored as a gist here: https://gist.github.com/mandardhikari/f0edd9406aeeb6d7b9fd0f68371ff4eb
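If you want to check the predictions outside the portal, here is a minimal sketch that queries the LUIS v2 prediction endpoint and prints every entity it returns; the region, app ID, and subscription key below are placeholders you would substitute with your own:

```python
import requests

# Placeholders - substitute your own region, app ID, and subscription key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"
APP_ID = "<your-app-id>"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def predict(query: str) -> dict:
    """Send an utterance to the LUIS prediction endpoint and return the JSON result."""
    response = requests.get(
        ENDPOINT.format(app_id=APP_ID),
        params={"subscription-key": SUBSCRIPTION_KEY, "q": query, "verbose": "true"},
    )
    response.raise_for_status()
    return response.json()

result = predict("Schedule a meeting with Bob and Lisa")
print(result["topScoringIntent"])
for entity in result["entities"]:
    # Each entity carries 'entity', 'type', 'startIndex', and 'endIndex';
    # once the patterns are in place you should see one personName per name.
    print(entity["type"], "->", entity["entity"])
```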

Related

use case for LUIS (Microsoft Cognitive Service)

We want to use LUIS to get the entities and intent from a user question and identify the entities that belong to our domain, so what we're doing is training LUIS with a lot of entities that come from our context domain. Is this a valid and "correct" use of LUIS?
Thanks
Yes, you can get the intents and entities from the user question with LUIS. You have to provide training examples accordingly. LUIS has many features for this: you can label entities that follow a specific pattern using the Patterns feature (pattern.any), and you can provide phrase lists for synonyms. You have to use them based on the scenario. Hope that helps!
I'm creating a search engine to search medical documents containing very specific terms. For this I'm training LUIS with these kinds of words or tags as "entities".
Yes, you are right. The medical terms you are referring to are supposed to be entities.
But this approach implies a big bulk of terms in LUIS.
If the only difference is the term, i.e. if your utterances are like
search for a
search for b
then you can add a and b as a phrase list in LUIS; this way you don't have to keep repeating the utterance for each term. You can check out how to add a phrase list. If you look at the 3rd point there, you can see that for the name City many city values are entered. You can do the same with the medical terms you need to search, as in the sketch below.
This way you can get the medical terms on your server side by inspecting the entity value.
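A phrase list can also be created programmatically; here is a rough sketch against the LUIS v2.0 authoring API, with placeholder credentials and made-up medical terms (verify the endpoint details against the current docs):

```python
import requests

# Placeholders - substitute your authoring region, app ID, version, and authoring key.
AUTHORING = "https://westus.api.cognitive.microsoft.com/luis/api/v2.0"
APP_ID = "<your-app-id>"
VERSION = "0.1"
AUTHORING_KEY = "<your-authoring-key>"

def create_phrase_list(name: str, phrases: list[str]) -> int:
    """Create an exchangeable phrase list and return its ID."""
    response = requests.post(
        f"{AUTHORING}/apps/{APP_ID}/versions/{VERSION}/phraselists",
        headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY},
        json={
            "name": name,
            # Phrases are submitted as a single comma-separated string.
            "phrases": ",".join(phrases),
            "isExchangeable": True,
        },
    )
    response.raise_for_status()
    return response.json()

# All interchangeable medical terms go into one phrase list, so a single
# labeled utterance such as "search for a" generalizes across all of them.
create_phrase_list("MedicalTerms", ["angiogram", "biopsy", "catheter"])
```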

LUIS Intent identification conflict

I am trying to implement a hierarchical chat bot that uses LUIS to identify primary and secondary intents.
As part of this I created and trained numerous LUIS models.
However, the behavior of LUIS is weird and unpredictable in various instances.
For instance, I have a LUIS model named Leave trained with the following utterances.
Utterance | Intent
Am I eligible for leave of adoption? | Leave Query
What is my leave balance? | Leave Query
What is sick leave? | Leave Query
Who approves my sick leave? | Leave Approval
After training on these utterances, queries on the leave context work as expected.
However, when the following messages are validated against the Leave model with the expectation of receiving the "None" intent, LUIS returns intents other than "None", which does not make any sense.
Query | Expected Intent | Actual Intent
Am I eligible for loan? | None | Leave Query
What is my loan balance | None | Leave Query
Who approves my loan | None | Leave Query
The issue here is that "Am I eligible for loan" doesn't belong to this LUIS model at all, and I am expecting a "None" intent.
The idea is to receive a None intent when the utterance doesn't belong to the queried LUIS model, so that I can check the other models for a valid intent.
However, I always get some intent instead of "None".
Not sure if I am doing something wrong here.
Any help/guidance on this would be much appreciated.
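For reference, the cascading check I have in mind looks roughly like this; a minimal sketch against the v2 prediction endpoint, with hypothetical app IDs and a threshold I picked arbitrarily:

```python
import requests

SUBSCRIPTION_KEY = "<your-subscription-key>"
# Hypothetical app IDs for the domain-specific models, tried in order.
MODEL_APP_IDS = {"Leave": "<leave-app-id>", "Loan": "<loan-app-id>"}

def top_intent(app_id: str, query: str) -> tuple[str, float]:
    """Return (intent, score) from one LUIS model's prediction endpoint."""
    url = f"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"
    result = requests.get(
        url, params={"subscription-key": SUBSCRIPTION_KEY, "q": query}
    ).json()
    top = result["topScoringIntent"]
    return top["intent"], top["score"]

def dispatch(query: str, threshold: float = 0.5):
    """Check each model until one claims the utterance confidently."""
    for name, app_id in MODEL_APP_IDS.items():
        intent, score = top_intent(app_id, query)
        # Treat "None" or a low score as "this model doesn't own the query".
        if intent != "None" and score >= threshold:
            return name, intent, score
    return None  # no model claimed the utterance

print(dispatch("Am I eligible for loan?"))
```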
I agree with what Steven has suggested above:
Training the None intent is good practice.
Defining entities will help.
If you want to categorize your intents based on some domain (e.g., Leave in the present case), I would suggest creating a List entity with the value "leave".
If you want anything containing the word "leave" to go to the Leave Query intent:
anything about [leave]
Current version results:
Top scoring intent: Leave Query (1)
Other intents: None (0.28)
For the rest of the sentences, without "leave", e.g. "anything about loan":
Current version results:
Top scoring intent: None (0.89)
Other intents: Leave Query (0)
The constraint here is that this makes the model more definitive: the score for Leave Query will be either 1 or 0.
It depends on your use case whether you want to take a definitive approach or a predictive approach. For machine-to-machine communication you might take a definitive approach, but for things like chatbots you might prefer a predictive approach.
Nonetheless, this is a nice little trick that might help you.
Hope this helps
How trained is your model, and how many utterances are registered? Just to check: after you received the utterances "Am I eligible for loan?" and "Who approves my loan", did you go into the LUIS portal and train the bot that they are not to match the Leave intents?
Please note that until any language understanding model is thoroughly trained, it is going to be prone to errors.
When looking at your utterances I noticed that they're all very similar:
"Am I eligible for leave of adoption?" vs "Am I eligible for loan?"
"What is my leave balance?" vs "What is my loan balance?"
"Who approves my sick leave?" vs "Who approves my loan"
These utterances have minimal differences. They're very general questions and you haven't indicated that any entities are currently being used. While the lack of entities for these questions is understandable with your simple examples, entities definitely help LUIS in understanding which intent to match against.
To resolve this problem you'll need to train your model more and should add entities. Some additional utterances you might use are "What's my leave balance?", "Check my leave balance", "Tell me my leave balance.", "Check leave balances", et cetera.
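If you end up adding such utterances in bulk, the authoring API can label them programmatically; a rough sketch, assuming the v2.0 authoring endpoint and placeholder credentials:

```python
import requests

# Placeholders - substitute your authoring region, app ID, version, and authoring key.
AUTHORING = "https://westus.api.cognitive.microsoft.com/luis/api/v2.0"
APP_ID = "<your-app-id>"
VERSION = "0.1"
AUTHORING_KEY = "<your-authoring-key>"

def add_labeled_utterance(text: str, intent: str) -> dict:
    """Add a single labeled example utterance to the model version."""
    response = requests.post(
        f"{AUTHORING}/apps/{APP_ID}/versions/{VERSION}/example",
        headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY},
        json={"text": text, "intentName": intent, "entityLabels": []},
    )
    response.raise_for_status()
    return response.json()

# Extra phrasings for the Leave Query intent, plus the loan utterances
# explicitly labeled as None so the model learns the boundary.
for utterance in ["What's my leave balance?", "Check my leave balance"]:
    add_labeled_utterance(utterance, "Leave Query")
for utterance in ["Am I eligible for loan?", "Who approves my loan"]:
    add_labeled_utterance(utterance, "None")
```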

CSV list of all universities - google maps

I have a CSV list of university names from around the world - about 13,000 names. I'm looking for a way to pull the addresses of these universities. The Google Maps API / Google Places API looks promising, but requires lat/long to map the locations.
The end game is to mark each school as 1 if the school is in the US, and 0 if the school is outside of the US.
Any thoughts on how to search for these colleges in Maps and pull out the addresses - or at least the country?
Is there nothing else in the CSV, only the names? That's going to make it hard; I'd bet the names aren't always unique in the world.
You could write something that takes different passes at biting the apple - for instance, if the university has a state name in it, check those off as 1s - then find another piece of logic to take "another bite", until the apple is gone.
On top of #WEBjuju's answer, since you only want to mark whether the school is in the US or outside of it, you can use the "country" type in Place Types in the Google Places API, setting the option country='us'.
https://developers.google.com/places/supported_types?csw=1#table2
You may also want to cross-check with these lists of schools.
https://www.4icu.org/reviews/index2.htm
https://en.wikipedia.org/wiki/Lists_of_universities_and_colleges_by_country
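A minimal sketch of the country check, using the Google Geocoding API (which accepts a free-text address, so no lat/long is needed up front); the API key is a placeholder and the input is assumed to be a one-column CSV of names:

```python
import csv
import requests

API_KEY = "<your-google-api-key>"
GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

def is_in_us(name: str) -> int:
    """Geocode a university name and return 1 if its country is the US, else 0."""
    data = requests.get(GEOCODE_URL, params={"address": name, "key": API_KEY}).json()
    if not data.get("results"):
        return 0  # ambiguous or unknown names will need manual review
    # The first result's address components include one typed "country".
    for component in data["results"][0]["address_components"]:
        if "country" in component["types"]:
            return 1 if component["short_name"] == "US" else 0
    return 0

with open("universities.csv", newline="") as src, \
        open("flagged.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for (name,) in csv.reader(src):
        writer.writerow([name, is_in_us(name)])
```

Note that 13,000 lookups will run into the API's rate and quota limits, so you may want to batch the requests and cache results.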

How can I get Amazon Echo to respond to "preheat the car" or "what's the car battery status"? (they get hijacked)

I'm trying to create some skills for my Echo (for my own use, I'm not concerned about the invocation names not getting through review). I've set my invocation name as "the car" (I also tried "car"). I wanted to be able to ask what my battery status is and order Alexa to pre-heat the car (a Renault ZOE).
It seems no matter what I put in for my utterances, I always get the same responses:
Anything with "battery" in gets "I don't have a battery"
Anything with "heat" in gets "You have no smarthome devices, blah blah"
It seems like the words "battery" and "heat" result in things never matching my skill (even when I said the invocation name).
Is there anything I can do so that it will route actions along the lines of the above to my skill?
Edit: Today I get different results when trying "preheat the car": I just get a weird tone. It never calls my skill, nor shows anything in the Home section of the app. What does this tone mean?
Video here: https://twitter.com/DanTup/status/804615557605654528
With help from Reddit I managed to get this working reasonably well. "the car" was a bad invocation name, and I wasn't following the documented way of invoking skills (the joining words etc. are fairly restrictive).
I'm now using "my car" as the invocation name and can do the following:
Alexa tell my car to preheat
Alexa ask my car for battery
As of Dec 2017, it's still not possible to have completely custom phrases with Alexa. Google Assistant/Home does support this however, via shortcuts.
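For completeness, here is a minimal sketch of what the skill's backend handler could look like, using the ASK SDK for Python (released after this question was asked); the PreheatIntent name and the car API call are hypothetical:

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class PreheatIntentHandler(AbstractRequestHandler):
    """Handles 'Alexa, tell my car to preheat' once routing works."""

    def can_handle(self, handler_input):
        return is_intent_name("PreheatIntent")(handler_input)

    def handle(self, handler_input):
        # A real skill would call the car vendor's API here (hypothetical).
        return handler_input.response_builder.speak("Preheating the car.").response


sb = SkillBuilder()
sb.add_request_handler(PreheatIntentHandler())

# Entry point when the skill is hosted on AWS Lambda.
handler = sb.lambda_handler()
```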

Why are cookies called "cookies"?

In Servlets we have something called "Cookies". I know why Java got the name "Java" and why the Apple company got the name "Apple" and so on.
I would like to know why the name "Cookies" was chosen.
In the early 1970s a group of programmers working at Xerox came up with an idea for storing a bit of information on another computer. They appear to have called this little chunk of information a cookie after a character from the popular (at that time) Andy Williams Show. This "Cookie Bear" character would follow Andy around asking for a cookie. The action of tracing these little files back to their original source is also referred to as following a trail of cookie crumbs.
Source
