I am trying to implement a hierarchical chatbot that uses LUIS to identify primary and secondary intents.
As part of this, I created several LUIS models and trained them.
However, LUIS behaves in odd and unpredictable ways in various cases.
For instance, I have a LUIS model named Leave trained with the following utterances:
Utterance | Intent
Am I eligible for leave of adoption? | Leave Query
What is my leave balance? | Leave Query
What is sick leave? | Leave Query
Who approves my sick leave? | Leave Approval
After training on these utterances, queries in the leave context work as expected.
However, when the following messages are validated against the Leave model with the expectation of receiving the "None" intent, LUIS returns other intents, which makes no sense to me:
Query | Expected Intent | Actual Intent
Am I eligible for loan? | None | Leave Query
What is my loan balance | None | Leave Query
Who approves my loan | None | Leave Query
The issue here is that "Am I eligible for loan" does not belong to this LUIS model at all, so I expect the "None" intent.
The idea is to receive the "None" intent whenever an utterance does not belong to the queried LUIS model, so that I can check the other models for a valid intent.
However, I always get some intent other than "None".
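To illustrate, here is a minimal sketch of the dispatch flow I am aiming for, assuming the LUIS v2 prediction endpoint and Node 18+ (the app IDs, region, and subscription key below are placeholders):

    // Query each child LUIS model in priority order; fall through when a
    // model answers "None", i.e. the utterance does not belong to it.
    const MODELS = ['LEAVE_APP_ID', 'LOAN_APP_ID']; // placeholder app IDs
    const KEY = 'SUBSCRIPTION_KEY';                 // placeholder key

    async function query(appId, utterance) {
      const url = `https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/${appId}` +
                  `?subscription-key=${KEY}&q=${encodeURIComponent(utterance)}`;
      const res = await fetch(url);   // global fetch, Node 18+
      return res.json();              // { topScoringIntent: { intent, score }, ... }
    }

    async function dispatch(utterance) {
      for (const appId of MODELS) {
        const result = await query(appId, utterance);
        if (result.topScoringIntent.intent !== 'None') {
          return result;              // this model claims the utterance
        }
      }
      return null;                    // no model recognized it
    }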
I am not sure if I am doing something wrong here.
Any help or guidance on this would be much appreciated.
I agree with what Steven has suggested above:
Training the None intent is good practice.
Defining entities will help.
If you want to categorize your intents based on some domain, e.g. Leave in the present case, I would suggest creating a List entity with the value "leave".
That way, anything containing the word "leave" goes to the Leave Query intent.
For the utterance "anything about [leave]" (brackets mark the labeled entity), the current version results are:
Top scoring intent: Leave Query (1)
Other intents: None (0.28)
And for sentences without "leave", such as "anything about loan":
Top scoring intent: None (0.89)
Other intents: Leave Query (0)
The constraint here is that this makes the model more definitive: the score for Leave Query will be either 1 or 0.
It depends on your use case whether you want a definitive approach or a predictive one. For machine-to-machine communication you might take the definitive approach, but for something like a chatbot you might prefer the predictive one.
Nonetheless, this is a nice little trick that might help you.
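As a rough sketch of the definitive check, assuming your client receives the standard v2 prediction JSON (the entity and intent names below mirror this example; adjust them to your app):

    // Trust Leave Query only when the "leave" List entity was detected.
    function routeLeave(luisResult) {
      const hasLeaveEntity = (luisResult.entities || [])
        .some((e) => e.type === 'leave');   // the List entity suggested above
      if (hasLeaveEntity && luisResult.topScoringIntent.intent === 'Leave Query') {
        return 'Leave Query';               // definitive: score is ~1
      }
      return 'None';                        // fall through to other models
    }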
Hope this helps
How well trained is your model, and how many utterances are registered? Just to check: after you received the utterances "Am I eligible for loan?" and "Who approves my loan", did you go into the LUIS portal and train the model that they should not match the Leave intents?
Please note that until a language understanding model is thoroughly trained, it will be prone to errors.
When looking at your utterances I noticed that they're all very similar:
"Am I eligible for leave of adoption?" vs "Am I eligible for loan?"
"What is my leave balance?" vs "What is my loan balance?"
"Who approves my sick leave?" vs "Who approves my loan"
These utterances have minimal differences. They are very general questions, and you haven't indicated that any entities are currently being used. While the lack of entities is understandable with your simple examples, entities definitely help LUIS decide which intent to match.
To resolve this problem you will need to train your model more, and you should add entities. Some additional utterances you might use are "What's my leave balance?", "Check my leave balance", "Tell me my leave balance.", "Check leave balances", et cetera.
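If it helps, here is a rough sketch of labeling such utterances against the "None" intent programmatically via the LUIS v2 authoring API rather than through the portal (the region, app ID, version, and authoring key are placeholders; verify the endpoint for your region):

    const AUTHORING = 'https://westus.api.cognitive.microsoft.com/luis/api/v2.0';

    // Label one utterance as "None" in the given app version.
    async function labelAsNone(appId, versionId, text) {
      await fetch(`${AUTHORING}/apps/${appId}/versions/${versionId}/example`, {
        method: 'POST',
        headers: {
          'Ocp-Apim-Subscription-Key': 'AUTHORING_KEY',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({ text, intentName: 'None' })
      });
    }

    // Label the misfiring queries, then retrain and republish the app.
    ['Am I eligible for loan?', 'Who approves my loan'].forEach((t) =>
      labelAsNone('APP_ID', '0.1', t));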
Related
If a story is in progress and the swim lanes are Code Review and QA-Ready, how should the assignment of stories work? Should a story remain assigned to the developer, with the code review and QA tasks created as sub-tasks inside it? Or should the story be re-assigned: moved to Code Review by the developer, then, when code review is done, moved to the QA lane by the reviewer and re-assigned to QA? It seems like an anti-pattern to re-assign tickets once they move from In Progress to later states. It looks okay to re-assign tickets before they were brought into the sprint, but not after.
Scrum does not say anything about how the work is done or how a board is managed. However, many teams look to Kanban's "pull" approach to answer this. In that case, work is never assigned or given; it is only claimed or taken on. Therefore, work would be moved to Code Review by the reviewer when they begin the work. Similarly, work would be moved to QA by the tester when they start. "Ready" columns are a bit of a misnomer, as they are not states; rather, they are statuses of the previous state. If your order is Code Review - QA Ready - QA, then QA Ready is in fact a possible designation for work in Code Review. This may seem minor, but it is very important for preventing pile-ups in your process where work stalls without owners.
There is no single answer, but one way of doing it is to think of a user story as a container of tasks, where each task is a small technical deliverable of any kind. With this mindset you can effectively stop thinking about who the assignee is, as each developer makes their own small contribution towards the goal.
One of the problems with task re-assignment is that at some point you can lose traceability of who has done what, and of productivity on a per-developer basis. In this sense, having each team member do their own tasks and deliver towards the completion of a user story solves that.
You can then assign the user story to the product owner, or assign it to a developer who holds ownership of its delivery to test, at which point the tester takes over. A user story assigned to a developer does not mean that they own the story; it just means it is their responsibility to ensure the hand-over to test, nothing more, nothing less.
When a tester encounters a bug, they create a bug linked to the user story.
Not recommended, though it is feasible. You have to assess your current work situation. If the user story is something that can make a real difference, it would be better to stop the sprint, reassess your situation, make the necessary changes, and then continue. Either way, when you are adding a new user story to the backlog mid-sprint, deadlines can hardly be met.
We use a slightly different approach, with the following columns on our Jira board:
To Do
In Progress
Ready for Review
Ready for QA
In Testing
Rework/Rejected
Done
A developer picks a task from To Do, assigns it to himself, and moves it to In Progress. Once he is done, he moves it to Ready for Review and leaves it unassigned. Someone picks it up, assigns it to himself, and reviews it. After reviewing, that person moves the case to Ready for QA without assigning it to anyone. Whoever is free or planning to work on the case assigns it to himself and, when he starts working on it, moves it to In Testing. Depending on the test results, the case goes to Rework/Rejected or Done. If it moves to Rework/Rejected, the tester assigns it to the person who originally worked on it, and that person, when reworking it, moves the case to In Progress again.
I have defined an intent in Watson Assistant using the following training examples:
adieu
au revoir
bye
bye now
ciao
cu
cya
exit
farewell
good bye
have a nice day
I'm leaving
later
quit
see you
so long
stop
we are done
A user inputs the word "again". Watson returns a match to this intent with a confidence level of about 0.9.
The word "again" does appear in a training example for a completely different intent, namely "I'm looking forward to working with you again! :)". It does not appear in any other training example.
What is the reasoning used by Watson Assistant to arrive at this match and with such a high level of confidence?
There is a whole load of factors that determine why one intent is picked over another:
Intents do not work properly if you have two or fewer intents.
Any entities you have created that are referenced in the example questions can also impact what is picked.
Contextual entities also add weight to the part of speech (POS) of those entities.
The number of intents, and how frequently a word is used across those intents, can also impact the scoring.
Watson Assistant always tries to get meaning from the term where it can.
When trying to determine why it picked one intent over another, you need to look at both. The intent you mention may not even be the second one picked.
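For example, here is a sketch of pulling back all candidate intents with the ibm-watson Node SDK so you can compare the top confidences side by side (the workspace ID, API key, and service URL are placeholders):

    const AssistantV1 = require('ibm-watson/assistant/v1');
    const { IamAuthenticator } = require('ibm-watson/auth');

    const assistant = new AssistantV1({
      version: '2020-04-01',
      authenticator: new IamAuthenticator({ apikey: 'API_KEY' }),
      serviceUrl: 'https://api.us-south.assistant.watson.cloud.ibm.com'
    });

    assistant
      .message({
        workspaceId: 'WORKSPACE_ID',
        input: { text: 'again' },
        alternateIntents: true   // return all candidate intents, not just the winner
      })
      .then(({ result }) => {
        // Inspect the ranked candidates and their confidences.
        result.intents.forEach((i) => console.log(i.intent, i.confidence));
      });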
With just the one intent shown above it's hard to say the "why", so this is just an educated guess as to what may be happening.
"again" is a single word and by itself has no context from which to determine an intent. The closest in the list would be "later".
Watson couldn't find any meaning whatsoever in the single word, so it may have looked at the intent with the most single-word examples as a reason to pick it.
That aside, you should try not to answer genuine one-to-two-keyword questions. There is almost never enough context for a person to answer them, so it's unlikely WA will be able to either.
We want to use LUIS to get the entities and intent from a user question and identify the entities that belong to our domain, so what we're doing is training LUIS with a lot of entities that come from our domain context. Is this a valid and "correct" use of LUIS?
Thanks
Yes, you can get the intents and entities from the user question with LUIS; you just have to provide training examples accordingly. LUIS has many features for this: you can label entities that follow a specific pattern using the Patterns feature (pattern.any), and you can provide phrase lists for synonyms. Use them based on your scenario. Hope that helps!
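For instance, a pattern for a search scenario might look like the line below, where SearchTerm is a hypothetical pattern.any entity; whatever the user says in that position is captured as the entity value:

    search for {SearchTerm}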
I'm creating a search engine to search medical documents for very specific terms. For this I'm training LUIS with these kinds of words or tags as "entities".
Yes, you are right: the medical terms you are referring to are supposed to be entities.
But this approach implies loading a big bulk of terms into LUIS.
If the only difference is the term, i.e. if your utterances are like:
search for a
search for b
Then you can add a and b to a phrase list in LUIS; this way you don't have to keep repeating the utterance for each term. You can check out how to add a phrase list. If you look at the third point there, you can see that for the name City, many city values are entered. You can do the same with the medical terms you need to search.
This way you can get the medical terms on your server side by inspecting the entity value.
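As a sketch, the phrase list could also be created programmatically through the LUIS v2 authoring API, which is handy for a large bulk of medical terms (the endpoint region, app ID, version, and authoring key are placeholders; verify them against your setup):

    // Create a phrase list of interchangeable medical terms.
    async function createPhraseList(terms) {
      await fetch(
        'https://westus.api.cognitive.microsoft.com/luis/api/v2.0' +
          '/apps/APP_ID/versions/0.1/phraselists',
        {
          method: 'POST',
          headers: {
            'Ocp-Apim-Subscription-Key': 'AUTHORING_KEY',
            'Content-Type': 'application/json'
          },
          body: JSON.stringify({
            name: 'MedicalTerms',
            phrases: terms.join(','), // comma-separated values, like the City example
            isExchangeable: true      // the terms are interchangeable values
          })
        }
      );
    }

    createPhraseList(['angioplasty', 'stent', 'catheter']);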
Background
I'm writing an Alexa Skill and looking to get pieces of information from the user.
Take the following conversation, for example:
Alexa: What month were you born in?
User: April
Alexa: Good. And what was your favorite movie?
User: April
The problem
Given the following utterances:
GetMonthIntent {month}
GetMovieIntent {movie}
Once a user answers April for the second time, the GetMonthIntent might be triggered.
What I have tried
Asking the user to specify which piece of information they are giving, by using the following utterances:
GetMonthIntent Month {month}
GetMovieIntent Movie {movie}
The question
What is the right way to make Alexa wait for a single term answer based on the current context?
In the same vein as the other answers here, you should take a look at the newest Node.JS library here, which handles state out of the box:
https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs#making-skill-state-management-simpler
You could define:
State_Launch
State_Month
State_Movie
And then return a proper error response if anything other than the expected intent (GetMonthIntent, GetMovieIntent, etc.) is invoked in the current state.
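A rough sketch with the alexa-sdk package from that repository (the state names, slot names, and prompts are placeholders for your interaction model):

    const Alexa = require('alexa-sdk');

    const states = { MONTH: '_MONTH', MOVIE: '_MOVIE' };

    // While in the MONTH state, only GetMonthIntent is treated as an answer.
    const monthHandlers = Alexa.CreateStateHandler(states.MONTH, {
      GetMonthIntent: function () {
        this.attributes.month = this.event.request.intent.slots.month.value;
        this.handler.state = states.MOVIE; // advance the conversation
        this.emit(':ask', 'Good. And what was your favorite movie?');
      },
      Unhandled: function () {
        this.emit(':ask', 'Sorry, which month were you born in?');
      }
    });

    // While in the MOVIE state, the same word ("April") maps to the movie slot.
    const movieHandlers = Alexa.CreateStateHandler(states.MOVIE, {
      GetMovieIntent: function () {
        this.attributes.movie = this.event.request.intent.slots.movie.value;
        this.emit(':tell', 'Thanks, all done!');
      },
      Unhandled: function () {
        this.emit(':ask', 'Sorry, what was your favorite movie?');
      }
    });

    exports.handler = function (event, context) {
      const alexa = Alexa.handler(event, context);
      alexa.registerHandlers(
        {
          LaunchRequest: function () {
            this.handler.state = states.MONTH;
            this.emit(':ask', 'What month were you born in?');
          }
        },
        monthHandlers,
        movieHandlers
      );
      alexa.execute();
    };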
You would have to do data validation on the server side to make sure the "month" is a valid one, and movies are even harder to validate unless you have a list of expected values. That is, if you want to parse them for use beyond repeating them back.
Unfortunately, there is no solution. There is no way to specify the "context" in which a user reply should be interpreted, so you have to tell the user: "What was your favorite movie? Please say 'my favorite movie is' and then the name of the movie."
Here are two ASK feature requests that I think would address your issue:
https://forums.developer.amazon.com/content/idea/41062/creating-something-to-help-with-more-structured-qu.html
https://forums.developer.amazon.com/content/idea/55525/allow-a-response-to-specify-a-set-of-expected-inte.html
Personally I think this is fairly important so I voted for those, but they are not near the top.
I ran into this same problem when I created the "Who's on First? Baseball Skit" skill. I handled this by:
Create a sequence number for each response given by Alexa.
Write this number to the "session" in the response.
The session is then passed back to your skill by Alexa in the next request.
Read the sequence number from the request to know what the previous question was.
If a given intent could be the answer to multiple questions (e.g. month and movie in your case), use the sequence number to determine which it is.
This should give you ideas on how to deal with repeated answers. The session is quite easy to use. Other options include writing the userId and status to a database like DynamoDB, but I find that the session works in most cases.
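A minimal sketch of that sequence-number idea using the raw Alexa request/response JSON (property names like lastQuestion are just illustrative):

    // Build a response that remembers which question was just asked.
    function buildResponse(speechText, sequenceNumber) {
      return {
        version: '1.0',
        sessionAttributes: { lastQuestion: sequenceNumber }, // echoed back next turn
        response: {
          outputSpeech: { type: 'PlainText', text: speechText },
          shouldEndSession: false
        }
      };
    }

    // On the next request, disambiguate an answer like "April" by sequence number.
    function handleAnswer(event, slotValue) {
      const last = (event.session.attributes || {}).lastQuestion;
      if (last === 1) {
        return { month: slotValue };  // it answered the month question
      }
      if (last === 2) {
        return { movie: slotValue };  // it answered the movie question
      }
      return null;                    // unexpected; re-prompt the user
    }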
I want to develop a bot using Watson Conversation that can answer some frequently asked questions about my application. My thinking is that every question has its own intent, and the answer is returned in the response. But I have over 50 questions, which means I would need to define over 50 intents, while Watson Conversation limits a workspace to 25 intents. Does anyone have an idea of how to resolve this? Thanks.
There are two options for you:
Purchase the Standard plan ($0.0025 USD per API call), which includes up to 2,000 intents. Check out more on pricing here.
Link similar questions together and try to reduce the number of intents. For example, two questions regarding bank withdrawals and bank deposits could be handled by a single Bank Transaction intent, with the type of transaction captured as entities (and used in dialog node conditions).
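For example (the intent and entity names here are hypothetical), a dialog node condition for withdrawals could look like:

    #bank_transaction && @transaction_type:withdrawal

and a sibling node could use @transaction_type:deposit, so one intent covers both questions.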
And besides that limit, there is another one: a maximum of 1,000 API queries per month. That is fine for a proof of concept or the development phase, but if you intend to go to production, you should purchase a plan. https://www.ibm.com/watson/developercloud/conversation.html