I have defined an intent in Watson Assistant using the following training examples:
adieu
au revoir
bye
bye now
ciao
cu
cya
exit
farewell
good bye
have a nice day
I'm leaving
later
quit
see you
so long
stop
we are done
A user inputs the word "again". Watson returns a match to this intent with a confidence level of about 0.9.
The word "again" does appear in a training example for a completely different intent, namely "I'm looking forward to working with you again! :)". It does not appear in any other training example.
What is the reasoning used by Watson Assistant to arrive at this match and with such a high level of confidence?
There are a number of factors that determine why one intent is picked over another.
Intent detection does not work properly if you have two or fewer intents.
Any entities you have created that are referenced in the example questions can also impact what is picked.
Contextual entities will also add weight to the part of speech (POS) of those entities.
The number of intents, and how frequently a word is used across those intents, can also impact the scoring.
Watson Assistant always tries to get meaning from the term where it can.
When trying to determine why one intent was picked over another, you need to look at both. The intent you mention may not even be the second one picked.
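To see this yourself, it helps to retrieve the full ranked list rather than just the top intent. A minimal sketch, assuming the Watson Python SDK and the v1 message API; the API key, service URL, and workspace ID are placeholders:

    # Inspect the full ranked intent list for a test utterance by setting
    # alternate_intents=True (returns up to 10 intents instead of just one).
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    from ibm_watson import AssistantV1

    assistant = AssistantV1(
        version='2021-06-14',
        authenticator=IAMAuthenticator('YOUR_APIKEY'),
    )
    assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

    response = assistant.message(
        workspace_id='YOUR_WORKSPACE_ID',
        input={'text': 'again'},
        alternate_intents=True,
    ).get_result()

    for intent in response['intents']:
        print(intent['intent'], intent['confidence'])

Comparing the top two or three confidences tells you whether "again" was a strong match or merely the least-bad option.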
With just the one intent shown above it's hard to say why, so this is just an educated guess as to what may be happening.
"again" is a single word and by itself has no context from which to determine an intent. The closest example in the list would be "later".
Watson likely couldn't find any meaning in the single word, so it may have fallen back to the intent with the most single-word examples as a reason to pick it.
That aside, you should try not to answer one- or two-keyword questions. There is almost never enough context for a person to answer them, so it's unlikely Watson Assistant will be able to either.
I have two queries related to SCRUM. They are as follows:
I have read that the format of a SCRUM story is "As a < type of user >, I want < some goal > so that < some reason >". I have to write a story for an API. This API will send an email with a link to validate the email address of the user. What will be the type of user here? Will it be the logged-in user?
Do subtasks follow a story format similar to a story, or can they be a normal description?
The trouble you are encountering is likely that you are starting from a predetermined implementation and then trying to work backwards to the need (unless your product is an API that your users leverage, in which case I think that answers your questions).
When we approach it from a user need, we'll usually end up with more of a problem statement, like
"As a vacationer, I'd like the site to calculate the best route across
all types of transportation for me so that I don't have to run many
searches to figure it out myself."
One of the pieces of delivering on this need will be creating the API calls if your application architecture calls for that. Then "add API method for aggregated call" may be a task under that user story.
You will have cases where all a particular story needs is API work, and that's fine, but it won't come out in the user story. For example, let's say we did the above user story but limited it to planes and trains for the first release, and then we created another story that reads:
"As a vacationer in the US, I want my trip planner to factor in buses
so that I can make use of bus tours in my vacation."
Now, maybe the only task in there is to make some API changes to include the bus routes in the search, but that doesn't cause a problem with your user stories, because we started with the user's problem statement rather than starting at the desired implementation and working backward.
Let's start clarifying some concepts first.
Scrum is not an acronym, so it is written as Scrum (a proper name). Also, there is nothing called a "Scrum story"; what you are referring to is called a user story. User stories were widely used in the Chrysler C3 project, where eXtreme Programming was developed. Furthermore, you are referring to a particular template, popularized by Mike Cohn, known as the canonical form. So it's OK to express your Product Backlog Items as user stories for an API. But take into account that you can use this template, you can use user stories, or you can write the Product Backlog Item in whatever way makes the most sense and adds the most value for you. In your case, which persona, machine, or service will be using the API?
About your second question: the Scrum Guide just says you should decompose the work in Sprint Planning into units of one day or less. Normally, the implementation is to create these units of work and call them tasks, which are the work necessary to carry out the user story. The way they are written is open as well, but it is not common to write them in the canonical form. So you can write one as an ID, a title, and a description.
I am trying to implement a hierarchical chatbot that uses LUIS to identify primary and secondary intents.
As part of this, I created and trained numerous LUIS models.
However, the behavior of LUIS is weird and unpredictable in various instances.
For instance, I have a LUIS model named Leave trained with the following utterances.
Utterance                               Intent
Am I eligible for leave of adoption?    Leave Query
What is my leave balance?               Leave Query
What is sick leave?                     Leave Query
Who approves my sick leave?             Leave Approval
After training on these utterances, queries in the leave context work as expected.
However, when the following messages are validated against the Leave model with the expectation of receiving the “None” intent, LUIS returns intents other than “None”, which does not make any sense.
Query                      Expected Intent    Actual Intent
Am I eligible for loan?    None               Leave Query
What is my loan balance    None               Leave Query
Who approves my loan       None               Leave Query
The issue here is that “Am I eligible for loan” doesn’t belong to this LUIS model at all, and I am expecting a “None” intent.
The idea is to receive a None intent when the utterance doesn’t belong to the queried LUIS model, so that I can check other models for a valid intent.
However, I always get some intent instead of “None”.
Not sure if I am doing something wrong here.
Any help/guidance on this would be much appreciated.
I agree with what Steven has suggested above:
Training a None intent is good practice.
Defining entities will help.
If you want to categorize your intents based on some domain (e.g., Leave in the present case), I would suggest creating a list entity with the value "leave".
That way, anything containing the word "leave" goes to the Leave Query intent. For example, with the entity labeled in the utterance:
anything about [leave]
Current version results:
Top scoring intent: Leave Query (1)
Other intents: None (0.28)
And for sentences without "leave":
anything about loan
Current version results:
Top scoring intent: None (0.89)
Other intents: Leave Query (0)
The constraint here is that this makes matching more definitive: the score for Leave Query will be either 1 or 0.
It depends on your use case whether you want to take a definitive approach or a predictive one. For machine-to-machine communication you might take the definitive approach, but for things like chatbots you might prefer the predictive approach.
Nonetheless, this is a nice little trick that might help you.
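Separately, the fallback flow the question describes (move on to the next model when the current one doesn't claim the utterance) boils down to a score check per model. A minimal sketch against the v2 prediction endpoint; the region, app IDs, subscription key, and the 0.5 threshold are placeholder assumptions:

    # Query each LUIS app in priority order and accept the first top intent
    # whose score clears the threshold; otherwise treat it as unhandled.
    import requests

    LUIS_KEY = 'YOUR_SUBSCRIPTION_KEY'
    APP_IDS = ['LEAVE_APP_ID', 'LOAN_APP_ID']  # hierarchical models, in order
    THRESHOLD = 0.5

    def top_intent(app_id, utterance):
        url = f'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}'
        resp = requests.get(url, params={'subscription-key': LUIS_KEY, 'q': utterance})
        resp.raise_for_status()
        return resp.json()['topScoringIntent']  # {'intent': ..., 'score': ...}

    def route(utterance):
        for app_id in APP_IDS:
            result = top_intent(app_id, utterance)
            if result['intent'] != 'None' and result['score'] >= THRESHOLD:
                return app_id, result
        return None, None  # no model claimed the utterance

    print(route('Am I eligible for loan?'))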
Hope this helps
How well trained is your model, and how many utterances are registered? Just to check: after you received the utterances "Am I eligible for loan?" and "Who approves my loan", did you go into the LUIS portal and train the bot that they should not match the Leave intents?
Please note that until any language understanding model is thoroughly trained, it is going to be prone to errors.
When looking at your utterances I noticed that they're all very similar:
"Am I eligible for leave of adoption?" vs "Am I eligible for loan?"
"What is my leave balance?" vs "What is my loan balance?"
"Who approves my sick leave?" vs "Who approves my loan"
These utterances have minimal differences. They're very general questions, and you haven't indicated that any entities are currently being used. While the lack of entities is understandable with your simple examples, entities definitely help LUIS understand which intent to match against.
To resolve this problem you'll need to train your model more, and you should add entities. Some additional utterances you might use are "What's my leave balance?", "Check my leave balance", "Tell me my leave balance.", "Check leave balances", et cetera.
Background
I'm writing an Alexa Skill and looking to get pieces of information from the user.
For example, the following conversation:
Alexa: What month were you born in?
User: April
Alexa: Good. And what was your favorite movie?
User: April
The problem
Given the following utterances:
GetMonthIntent {month}
GetMovieIntent {movie}
When the user answers "April" the second time, the GetMonthIntent might be triggered.
What I have tried
Asking the user to specify which piece of information is being given, by using the following utterances:
GetMonthIntent Month {month}
GetMovieIntent Movie {movie}
The question
What is the right way to make Alexa wait for a single-term answer based on the current context?
In the same vein as the other answers here, you should take a look at the newest Node.js library, which handles state out of the box:
https://github.com/alexa/alexa-skills-kit-sdk-for-nodejs#making-skill-state-management-simpler
You could define:
State_Launch
State_Month
State_Movie
Then return the proper error response if anything other than the GetMovieIntent or GetMonthIntent (etc.) intent is called in the wrong state.
You would still have to do data validation on the server side to make sure the "month" is a valid one, and movies are even harder to validate unless you have a list of expected values. That is, if you care to parse them for use beyond repeating back.
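The shape of that state handling, sketched in Python rather than the Node.js SDK (the state names, intents, and prompts are the illustrative ones from this thread):

    # Accept only the intent expected in the current state; re-prompt on
    # anything else, and validate slot values server-side where possible.
    VALID_MONTHS = {'january', 'february', 'march', 'april', 'may', 'june', 'july',
                    'august', 'september', 'october', 'november', 'december'}

    EXPECTED_INTENT = {
        'State_Month': 'GetMonthIntent',
        'State_Movie': 'GetMovieIntent',
    }

    def dispatch(state, intent_name, slot_value):
        if intent_name != EXPECTED_INTENT.get(state):
            # Wrong intent for this point in the conversation: re-prompt.
            return state, "Sorry, that wasn't what I asked about."
        if state == 'State_Month':
            if slot_value.lower() not in VALID_MONTHS:
                return state, 'That does not sound like a month. What month?'
            return 'State_Movie', 'Good. And what was your favorite movie?'
        # State_Movie: hard to validate without a list of expected titles,
        # so just accept the value.
        return 'State_Launch', f'You said {slot_value}. Thanks!'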
Unfortunately, there is no built-in solution. There is no way to specify the 'context' in which a user reply should be interpreted, so you have to tell the user: "What was your favorite movie? Please say 'my favorite movie is' and then the name of the movie."
Here are two ASK feature requests that I think would address your issue:
https://forums.developer.amazon.com/content/idea/41062/creating-something-to-help-with-more-structured-qu.html
https://forums.developer.amazon.com/content/idea/55525/allow-a-response-to-specify-a-set-of-expected-inte.html
Personally I think this is fairly important so I voted for those, but they are not near the top.
I ran into this same problem when I created the "Who's on First? Baseball Skit" skill. I handled this by:
Create a sequence number for each response given by Alexa
Write this number to the "session" in the response.
The session is then passed back to your skill by Alexa in the next request.
Read the sequence number from the request to know what the previous question was.
If a given intent could be the answer to multiple questions (e.g., month and movie in your case), then use the sequence number to determine which it is.
This should give you ideas on how to deal with repeated answers. The session is quite easy to use. Other options include writing the userId and status to a database like DynamoDB, but I find that the session works in most cases.
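A minimal sketch of that sequence-number plumbing using the raw Alexa request/response JSON, no SDK required (the slot handling and prompts are simplified placeholders):

    def handle_intent(event):
        # 1. Read the sequence number the previous response stored in the session.
        attrs = (event.get('session') or {}).get('attributes') or {}
        seq = attrs.get('seq', 1)

        # 2. Pull whatever single-term slot value arrived, regardless of which
        #    intent Alexa matched ("April" may trigger GetMonthIntent both times).
        slots = event['request']['intent'].get('slots', {})
        value = next((s.get('value') for s in slots.values() if s.get('value')), None)

        # 3. Interpret the value based on which question was actually asked.
        if seq == 1:
            speech = 'Good. And what was your favorite movie?'
        else:
            speech = f'You said your favorite movie is {value}. Thanks!'

        # 4. Write the incremented sequence number back so the next request
        #    tells us where we are in the conversation.
        return {
            'version': '1.0',
            'sessionAttributes': {'seq': seq + 1},
            'response': {
                'outputSpeech': {'type': 'PlainText', 'text': speech},
                'shouldEndSession': seq >= 2,
            },
        }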
I want to develop a bot using Watson Conversation that can answer some frequently asked questions about my application. My thinking is that every question has its own intent, and the answer is returned by the response. But I have over 50 questions, which means I need to define over 50 intents, and Watson Conversation limits a workspace to 25 intents. Does anyone have any idea how to resolve this? Thanks.
There are two options for you:
Purchase the Standard plan ($0.0025 USD per API call), which includes up to 2,000 intents. Check out more on pricing here.
Link similar questions together to reduce the number of intents. For example, two questions regarding bank withdrawals and bank deposits could be handled by one intent, Bank Transaction, with the type of transaction captured as entities (and the dialog node conditions based on those entities).
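A sketch of that second option using the Watson Python SDK (the Bank_Transaction intent, the transaction_type entity, and the answer texts are invented for illustration; credentials are placeholders):

    # One intent covers every transaction question; the entity value selects
    # the specific answer, keeping the workspace under the intent limit.
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
    from ibm_watson import AssistantV1

    ANSWERS = {
        'deposit': 'To make a deposit, ...',
        'withdrawal': 'To make a withdrawal, ...',
    }

    assistant = AssistantV1(version='2021-06-14',
                            authenticator=IAMAuthenticator('YOUR_APIKEY'))
    assistant.set_service_url('https://api.us-south.assistant.watson.cloud.ibm.com')

    resp = assistant.message(workspace_id='YOUR_WORKSPACE_ID',
                             input={'text': 'How do I make a deposit?'}).get_result()

    intent = resp['intents'][0]['intent'] if resp['intents'] else None
    entities = {e['entity']: e['value'] for e in resp['entities']}

    if intent == 'Bank_Transaction':
        print(ANSWERS.get(entities.get('transaction_type'),
                          'Which type of transaction do you mean?'))

Inside the dialog editor, the equivalent node conditions would look something like #Bank_Transaction && @transaction_type:deposit.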
And besides that limit, there is another one: a maximum of 1,000 API queries per month. That is fine for a proof of concept or the development phase. If you intend to go to production, you should purchase a plan. https://www.ibm.com/watson/developercloud/conversation.html
I can't find anything on the web about how to sample Adobe Analytics data. I need to integrate Adobe Analytics into a new website with a ton of traffic, and the stakeholders want to sample the data to avoid exorbitant server calls. I'm using DTM, but I'm not sure if that will help or be a non-factor. Can anyone point me to some documentation or give me some direction on how to do this?
Adobe Analytics does not have any built-in method for sampling data, neither on their end nor in the js code.
DTM doesn't offer anything like this either. It doesn't have any (exposed) mechanisms in place to evaluate all requests made to a given property (container); any rules that extend state beyond "hit" scope are cookie based.
Adobe Target does offer the ability to output code based on a percentage of traffic, so you can achieve sampling this way, but really, you're just trading one server call cost for another.
Basically, your only solution would be to create your own server-side framework for conditionally outputting the Adobe Analytics (or DTM) tag, to achieve sampling with Adobe Analytics.
Update:
@MichaelJohns's comment below:
"We have a file that we use as a bootstrap file to serve the DTM file. What I think we are going to do is use some JS logic and cookies around that to determine if a visitor should be served the DTM code."
Okay, well, maybe I'm misunderstanding what your goal is here (but I don't think I am), but that's not going to work.
For example, if you only want to output tracking for 50% of visitors, how would you use JavaScript and cookies alone to achieve this? In order to know that you are only tracking 50%, you need to know the total number of people in play. By themselves, JavaScript and cookies only know about ONE browser, ONE person. They have no way of knowing anything about the other visitors unless you have some sort of shared state between all of them, like keeping track of a count in a database server-side.
The best you can do solely with JavaScript and cookies is basically flip a coin. In this example of 50%, you'd pick a random number between 1 and 100; the lower half gets no tracking and the upper half gets tracking.
The problem with this is that it is possible for the pendulum to swing 100% one way or the other. It is the same principle as flipping a coin 100 times in a row: it is entirely possible that it can land on tails all 100 times.
In theory, the trend over time should average out to 50/50, but this has a major flaw: you may go one month with a ton of traffic and another month with very little. Or you could have a week with very little traffic followed by one day with a lot. And you really have no idea how that will manifest over time; you can't know which way your pendulum is swinging unless you ARE actually recording 100% of the traffic to begin with. The effect of all this is that it will absolutely destroy your trended data, which is the core principle of making any kind of meaningful analysis.
So basically, if you really want to reliably output tracking to a % of traffic, you will need a mechanism in place that does in fact record 100% of traffic. If I were going to roll my own homebrewed "sampler", I would do this:
In either a flatfile or a database table I would have two columns, one representing "yes", one representing "no". And each time a request is made, I look for the cookie. If the cookie does NOT exist, I count this as a new visitor. Since it is a new visitor, I will increment one of those columns by 1.
Which one? It depends on what percent of traffic I want to (not) track. In this example, we're doing a very simple 50/50 split, so all I need to do is increment whichever one is lower; in the case that they are currently equal, I can pick one at random. If you want a more uneven split, e.g. 30% tracked, 70% not tracked, the formula becomes a bit more complex. But that's a different topic for discussion (also, there are a lot of papers and wikis out there, published by people a lot smarter than me, that can explain it far better than I can!).
Then, if it turns out that I incremented the "yes" column, I set the "track" cookie to "yes". Otherwise I set the "track" cookie to "no".
Then, in my controller (or bootstrap, router, or whatever all requests go through), I look for the cookie called "track" and see if it has a value of "yes" or "no". If "yes", I output the tracking script. If "no", I do not.
So in summary, process would be:
Request is made
Look for cookie.
If cookie is not set, update database/flatfile incrementing either yes or no.
Set cookie with yes or no.
If cookie is set to yes, output tracking
If cookie is set to no, don't output tracking
Note: Depending on the language/technology of your server, the cookie won't actually be set until the next request, so you may need logic to look for a returned value from the db/flatfile update and fall back to looking for the cookie value in the last two steps.
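Putting the whole process together, here is a sketch of the homebrewed sampler as a single Flask route (the flatfile counter, cookie name, and page template are illustrative; a real deployment would need locking or a database for concurrent requests):

    import json, os
    from flask import Flask, make_response, render_template_string, request

    app = Flask(__name__)
    COUNTS_FILE = 'sample_counts.json'

    def assign_bucket():
        # New visitor: increment whichever bucket is lower so the split stays
        # a true 50/50 (ties go to "yes" here; above, ties are picked at random).
        counts = {'yes': 0, 'no': 0}
        if os.path.exists(COUNTS_FILE):
            with open(COUNTS_FILE) as f:
                counts = json.load(f)
        bucket = 'yes' if counts['yes'] <= counts['no'] else 'no'
        counts[bucket] += 1
        with open(COUNTS_FILE, 'w') as f:
            json.dump(counts, f)
        return bucket

    @app.route('/')
    def index():
        track = request.cookies.get('track') or assign_bucket()
        page = '<html>{% if track == "yes" %}<script src="/dtm.js"></script>{% endif %}</html>'
        resp = make_response(render_template_string(page, track=track))
        resp.set_cookie('track', track)  # sticky decision for return visits
        return resp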
Another (more general) note: in general, you should beware of sampling. It is true that some tracking tools (most notably Google Analytics) sample data. But the thing is, they initially record all of the data and then use complex algorithms to sample from there, including excluding/exempting certain key metrics (like purchases, goals, etc.) from being sampled.
Just think about that for a minute. Even if you take the time to set up a proper "sampler" as described above, you are basically throwing out the window data proving that people are doing key things on your site - the important things that help you decide how to give visitors a better experience, etc. So now the only way around it is to start recording everything internally and factoring those things into whether or not to send the data to AA.
But all that aside: look, I will agree that hits are something to be concerned about on some level. I've worked with very, very large clients with effectively unlimited budgets, and even they worry about hit costs racking up.
But the bottom line is you are paying for an enterprise-level tool. If you are concerned about the cost from Adobe Analytics given your site traffic, maybe you should consider moving away from Adobe Analytics towards a different tool like GA, or some other tool that doesn't charge by the hit. Adobe Analytics is an enterprise-level tool that offers a lot more than most other tools, and it is priced accordingly. No offense, but IMO that's like leasing a Mercedes and then cheaping out on the quality of gasoline you use.