Can we retrieve only the intent using Watson Conversation? - watson-conversation

Currently we are using the Watson Natural Language Classifier service (NLC) to get the intent of a user's question. But configuring and maintaining NLC is becoming difficult, so we were wondering whether it is possible to get only the intent of a user's question from the Watson Conversation service: just the intent, not the dialog response.

The intent comes back as part of the response from Conversation. If you set the parameter alternate_intents=true, the top 10 intents are returned.
You will still get the rest of the payload, but you can ignore it. I would recommend creating one dialog node with a condition of true and nothing else. This prevents SpEL errors when no node matches.
Your response will look something like this:
{
"alternate_intents": true,
"context": {
"conversation_id": "6c256e10-ba3b-4d2b-84fc-740853879d4f",
"system": {
"_node_output_map": { "True": [0] },
"branch_exited": true,
"branch_exited_reason": "completed",
"dialog_request_counter": 1,
"dialog_stack": [ { "dialog_node": "root" } ],
"dialog_turn_counter": 1
}
},
"entities": [],
"input": { "text": "test" },
"intents": [
{ "intent": "intent1", "confidence": 1.0 },
{ "intent": "intent2", "confidence": 0.9 },
{ "intent": "intent3", "confidence": 0.8 },
{ "intent": "intent4", "confidence": 0.7 },
{ "intent": "intent5", "confidence": 0.6 },
{ "intent": "intent6", "confidence": 0.5 },
{ "intent": "intent7", "confidence": 0.4 },
{ "intent": "intent8", "confidence": 0.3 },
{ "intent": "intent9", "confidence": 0.2 },
{ "intent": "intent10", "confidence": 0.1 }
],
"output": {
"log_messages": [],
"nodes_visited": [ "True" ],
"text": [ "" ]
}
}
All you need to reference is json_response['intents']. Also, if you only care about the intent, you do not need to keep sending back the context.
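If it helps, here is a minimal sketch of such a call using Node.js and axios (not the official Watson SDK); the endpoint URL, API version date and credentials are placeholders for your own instance:
// Minimal sketch: POST to the Conversation message endpoint with
// alternate_intents=true and read only the intents array.
// The URL, version date and credentials below are placeholders.
const axios = require('axios');

function getIntents(workspaceId, text) {
  const url = 'https://gateway.watsonplatform.net/conversation/api/v1/workspaces/'
    + workspaceId + '/message';
  return axios.post(
    url,
    { input: { text: text }, alternate_intents: true },
    {
      params: { version: '2017-05-26' },  // assumed API version date
      auth: { username: 'SERVICE_USERNAME', password: 'SERVICE_PASSWORD' }
    }
  ).then(response => response.data.intents);  // ignore the rest of the payload
}

getIntents('YOUR_WORKSPACE_ID', 'test')
  .then(intents => console.log(intents))
  .catch(err => console.error(err.message));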
Just to add to this. NLC and Conversation use two very different learning models.
NLC uses "Relative Confidence"
Conversation uses "Absolute Confidence"
In the case of Relative, all confidences of the items found add up to 1. In layman's terms, NLC assumes that the answer can only be one of the intents it has been given.
For Absolute, each confidence relates only to that intent. This means that Conversation can recognise that what you are saying may not be in the training it has been given. It also means that your intent list can come back empty.
So don't panic if something that was giving you 90% before is now giving you 60%. They are just scoring differently.
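As a rough, purely numerical illustration of the difference (made-up scores, not how either service computes them internally):
// Hypothetical scores, just to illustrate the two styles of confidence.
const absolute = { turn_up: 0.45, turn_down: 0.15, turn_off: 0.05 }; // each intent scored on its own

// A relative scorer normalises so the confidences always sum to 1,
// i.e. it assumes the answer must be one of the known intents.
const total = Object.values(absolute).reduce((sum, c) => sum + c, 0);
const relative = {};
for (const [intent, c] of Object.entries(absolute)) {
  relative[intent] = c / total;
}
console.log(relative); // { turn_up: ~0.69, turn_down: ~0.23, turn_off: ~0.08 }
So the same top intent can score 0.45 absolutely and roughly 0.69 relatively; neither number is "wrong", they are just on different scales.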

Related

Discrepancies between the routes calculated by the Tour Planning API and the Route Planning API [HERE API]

I am currently trying to use HERE's API to calculate some tours for trucks.
First, I'm using the Tour Planning API to calculate the optimal tours.
In the second step, I'm using the Route Planning API to visualize these tours and get the exact routes that should be driven.
I'm having some problems using the avoid features to keep routes free of U-turns and difficult turns. I've come up with a small example that shows the problem. This is my example tour planning request:
{
"fleet": {
"types": [
{
"id": "1561fwef8w1",
"profile": "truck1",
"costs": {
"fixed": 1.0,
"distance": 5.0,
"time": 0.000001
},
"capacity": [20],
"amount": 1,
"shifts" : [
{
"start" : {
"time" : "2022-12-12T06:00:00Z",
"location" : {"lat": 51.4719851907272,"lng": 7.31300969864971}
},
"end" : {
"time" : "2022-12-12T16:00:00Z",
"location" : {"lat": 51.4807604,"lng": 7.3152156}
}
}
]
}
],
"profiles": [
{
"type": "truck",
"name": "truck1",
"avoid" : {
"features" : ["difficultTurns", "uTurns"]
}
}
]
},
"plan": {
"jobs": [
{
"id": "job_0",
"tasks": {
"deliveries": [
{
"demand": [1],
"places": [
{
"location": {"lat": 51.4736547333341,"lng": 7.29935641079885},
"duration": 300
}
]
}
]
}
},
{
"id": "job_1",
"tasks": {
"deliveries": [
{
"demand": [1],
"places": [
{
"location": {"lat": 51.473125253443,"lng": 7.28609119643401},
"duration": 300
}
]
}
]
}
},
{
"id": "job_2",
"tasks": {
"deliveries": [
{
"demand": [1],
"places": [
{
"location": {"lat": 51.4871939377375,"lng": 7.30587404313616},
"duration": 300
}
]
}
]
}
}
]
}
}
The result is a tour that is 7.1 km long and takes 43 minutes. I'm now asking the Route Planning API for the exact route with the following request:
https://router.hereapi.com/v8/routes?via=51.4736547333341,7.29935641079885!stopDuration=300&via=51.473125253443,7.28609119643401!stopDuration=300&via=51.4871939377375,7.30587404313616!stopDuration=300&transportMode=truck&origin=51.4719851907272%2C7.31300969864971&destination=51.4807604%2C7.3152156&return=summary&apikey={API_KEY}&departureTime=2022-12-12T06%3A00%3A00&routingMode=short&avoid%5Bfeatures%5D=difficultTurns%2CuTurns
The result now is a route which is 10.8 km long and takes 72 minutes, so the exact route is more than 3 km longer for this short tour. For larger routes I've already seen differences of 15 km and more.
When the avoid features for U-turns and difficult turns are left out of both requests, the routes have roughly similar lengths: in this small example the Tour Planning API route is 6.4 km and the Route Planning API route 6.9 km, which is an acceptable difference.
I'm not sure whether the Route Planning API and the Tour Planning API handle U-turns and difficult turns differently, or whether there is a way to get the exact routes directly from the Tour Planning API. Does anybody know how I can get properly planned tours from the Tour Planning API while avoiding difficult turns?
In the problem from the description, the difficult turn/U-turn happens at the driver stop itself. Tour Planning does not consider difficult turns at the stop; however, they are considered if they are on the way. That leads to the discrepancy between the results from Tour Planning and Routing.
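For anyone who wants to reproduce the comparison, here is a small Node.js sketch that issues the routing request from the question and totals the section summaries. It assumes Node 18+ (global fetch) and that the summary response exposes routes[0].sections[*].summary with length in metres and duration in seconds; adjust if your response shape differs.
// Issue the Routing v8 request from the question and sum up the per-section summaries.
const url =
  'https://router.hereapi.com/v8/routes' +
  '?via=51.4736547333341,7.29935641079885!stopDuration=300' +
  '&via=51.473125253443,7.28609119643401!stopDuration=300' +
  '&via=51.4871939377375,7.30587404313616!stopDuration=300' +
  '&transportMode=truck' +
  '&origin=51.4719851907272%2C7.31300969864971' +
  '&destination=51.4807604%2C7.3152156' +
  '&return=summary' +
  '&routingMode=short' +
  '&departureTime=2022-12-12T06%3A00%3A00' +
  '&avoid%5Bfeatures%5D=difficultTurns%2CuTurns' +
  `&apikey=${process.env.HERE_API_KEY}`;  // placeholder: supply your own key

fetch(url)
  .then(res => res.json())
  .then(data => {
    const sections = data.routes[0].sections;
    const metres = sections.reduce((sum, s) => sum + s.summary.length, 0);
    const seconds = sections.reduce((sum, s) => sum + s.summary.duration, 0);
    console.log(`${(metres / 1000).toFixed(1)} km, ${Math.round(seconds / 60)} min`);
  })
  .catch(err => console.error(err));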

Storing optional attributes in DynamoDB's putItem via step functions

I have defined a state machine in AWS Step Functions, and one of my states stores an item in DynamoDB:
...
"Store item": {
"End": true,
"Type": "Task",
"Resource": "arn:aws:states:::dynamodb:putItem",
"Parameters": {
"Item": {
"foo": {
"S.$": "$.data.foo"
},
"bar": {
"S.$": "$.data.bar"
},
"baz": {
"S.$": "$.data.baz"
}
},
"TableName": "nrp_items"
}
},
...
The problem is that the baz property is optional, i.e. it does not exist in some cases.
In those cases, the putItem task fails:
An error occurred while executing the state 'Store item' (entered at the event id #71). > The JSONPath '$.data.baz' specified for the field 'S.$' could not be found in the input
My backup plan is to use a Lambda to perform that type of operation, but can I do it directly with the putItem task in Step Functions?
I was wondering whether:
1) It is possible to somehow inject my whole $.data object into the "Item" property via JSONPath, something like:
...
"Store item": {
"End": true,
"Type": "Task",
"Resource": "arn:aws:states:::dynamodb:putItem",
"Parameters": {
"Item": "$.data",
"TableName": "nrp_items"
}
},
...
OR
2) Define that the baz property is optional
TL;DR We can deal with optional variables by using a "Variable": "$.baz", "IsPresent": true Choice condition to handle the no-baz cases.
The Amazon States Language spec does not have optional properties: Step Functions will throw an error if $.baz does not exist in the input. We can avoid undefined paths by inserting a two-branch Choice State, one branch of which handles the baz-exists cases, the other the no-baz cases. Each branch continues with a Pass State that reworks the input into DynamoDB-format Item syntax using Parameters. The put-item task's "Item.$": "$.data" (as in your #1) then contains only foo and bar when baz is not defined, and all three otherwise.
{
"StartAt": "HasBazChoice",
"States": {
"HasBazChoice": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.baz",
"IsPresent": true,
"Next": "MakeHasBazItem"
}
],
"Default": "MakeNoBazItem"
},
"MakeHasBazItem": {
"Type": "Pass",
"Parameters": {
"data": {
"foo": { "S.$": "$.foo"},
"bar": { "S.$": "$.bar"},
"baz": { "S.$": "$.baz"}
}
},
"Next": "PutItemTask"
},
"MakeNoBazItem": {
"Type": "Pass",
"Parameters": {
"data": {
"foo": {"S.$": "$.foo"},
"bar": {"S.$": "$.bar"}
}
},
"Next": "PutItemTask"
},
"PutItemTask": {
...
"Parameters": {
"TableName": "my-table",
"Item.$": "$.data"
}
}
}
}
If you have more than one optional field, your lambda backup plan is the better option - the above workaround would become unwieldy.
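If you do go down the Lambda route, the handler can simply omit attributes that are not present before calling PutItem. A minimal sketch (Node.js, AWS SDK v2 DocumentClient; the table name and field names are taken from the question, and it assumes the Lambda receives the same $.data object as its input):
// Lambda backup plan: build the item from whatever fields are present and let
// the DocumentClient marshal the types, so optional attributes are simply omitted.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
  const { foo, bar, baz } = event.data || {};
  const item = { foo, bar };
  if (baz !== undefined) {
    item.baz = baz;  // only include the optional attribute when it exists
  }
  await docClient.put({ TableName: 'nrp_items', Item: item }).promise();
  return item;
};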

How to write this fulfillment code (including output)?

First of all, after developing the simple code, I managed to save user input data to a Google Sheet. It works. Thanks to Mr.Master for providing this tutorial (link below).
Reference Mr.Master: https://www.youtube.com/watch?v=huwUpJZsTok
Next, I bumped into the problem shown in the code below. I don't know how to write it in the fulfillment. Could someone teach me how?
Tools: Dialogflow, Google Sheets, Firebase.
Theme: Order process
I tried writing Forhere() as shown below, but it didn't work. (First code)
function Forhere(agent){
const{
forhere, howmanypeople, whattime, namelist
} = agent.parameters;
const data1 = [{
Forhere: forhere,
HowManyPeople: howmanypeople,
Time: whattime,
Name: namelist
}];
axios.post('......', data1);
}
This code is the result of a test (second one):
{
"responseId": "d0f44937-e58a-4b71-b6dc-ec2d6c39337b-f308a5c4",
"queryResult": {
"queryText": "黃大哥",
"parameters": {
"forhere": [
"內用"
],
"howmanypeople": [
2
],
"whattime": [
{
"date_time": "2019-09-19T14:00:00+08:00"
}
],
"namelist": [
"黃大哥"
]
},
"allRequiredParamsPresent": true,
"outputContexts": [
{
"name": "projects/test-tyrpxs/agent/sessions/5dd26d5c-bd99-072c-3693-41f95a3a348d/contexts/forhere",
"lifespanCount": 4,
"parameters": {
"howmanypeople": [
2
],
"namelist.original": [
"黃大哥"
],
"howmanypeople.original": [
"2"
],
"forhere": [
"內用"
],
"whattime.original": [
"明天下午2點"
],
"welcome": "嗨",
"whattime": [
{
"date_time": "2019-09-19T14:00:00+08:00"
}
],
"namelist": [
"黃大哥"
],
"welcome.original": "hi",
"forhere.original": [
"內用"
]
}
}
],
"intent": {
"name": "projects/test-tyrpxs/agent/intents/ec0f55c4-e9c9-401f-bce7-d2478c40fb85",
"displayName": "ForHere"
},
"intentDetectionConfidence": 1,
"diagnosticInfo": {
"webhook_latency_ms": 4992
},
"languageCode": "zh-tw"
},
"webhookStatus": {
"code": 4,
"message": "Webhook call failed. Error: Request timeout."
}
}
You can use the code below:
let forhere = agent.parameters.forhere;
let howmanypeople = agent.parameters.howmanypeople;
let whattime = agent.parameters.whattime;
let namelist = agent.parameters.namelist;
then use these variables in your API call.
To T.Ali:
Here is the dialogflowFirebaseFulfillment error message (although I don't think this error shows where the mistake is):
Dialogflow Request body: {"responseId":"ab277bc6-3bcc-4c4b-9a94-192b9ecfb8af-f308a5c4","queryResult":{"queryText":"黃大哥","parameters":{"forhere":"內用","whattime":{"date_time":"2019-09-20T12:00:00+08:00"},"howmanypeople":3,"namelist":"黃大哥"},"allRequiredParamsPresent":true,"outputContexts":[{"name":"projects/test-tyrpxs/agent/sessions/5dd26d5c-bd99-072c-3693-41f95a3a348d/contexts/forhere","lifespanCount":4,"parameters":{"welcome":"嗨","welcome.original":"hi","forhere":"內用","forhere.original":"內用","whattime":{"date_time":"2019-09-20T12:00:00+08:00"},"whattime.original":"明天中午","howmanypeople":3,"howmanypeople.original":"3","namelist":"黃大哥","namelist.original":"黃大哥"}}],"intent":{"name":"projects/test-tyrpxs/agent/intents/ec0f55c4-e9c9-401f-bce7-d2478c40fb85","displayName":"ForHere"},"intentDetectionConfidence":1,"languageCode":"zh-tw"},"originalDetectIntentRequest":{"payload":{}},"session":"projects/test-tyrpxs/agent/sessions/5dd26d5c-bd99-072c-3693-41f95a3a348d"}
Error: No handler for requested intent
at WebhookClient.handleRequest (/srv/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:317:29)
at exports.dialogflowFirebaseFulfillment.functions.https.onRequest (/srv/index.js:105:9)
at cloudFunction (/srv/node_modules/firebase-functions/lib/providers/https.js:57:9)
at /worker/worker.js:783:7
at /worker/worker.js:766:11
at _combinedTickCallback (internal/process/next_tick.js:132:7)
at process._tickDomainCallback (internal/process/next_tick.js:219:9)
Furthermore, I've written the code below, which previously worked correctly (it writes user data to the Google Sheet).
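The "No handler for requested intent" error above usually means the function was never registered in the intent map under the intent's display name. A minimal sketch of how the handler and its registration might look with the dialogflow-fulfillment library; the webhook URL and the reply text are placeholders, and the axios promise is returned so the fulfillment waits for the POST instead of timing out:
// Sketch of a fulfillment handler and its registration (dialogflow-fulfillment + axios).
const functions = require('firebase-functions');
const axios = require('axios');
const { WebhookClient } = require('dialogflow-fulfillment');

function forHere(agent) {
  const { forhere, howmanypeople, whattime, namelist } = agent.parameters;
  const data1 = [{
    Forhere: forhere,
    HowManyPeople: howmanypeople,
    Time: whattime,
    Name: namelist
  }];
  // Return the promise so Dialogflow waits for the POST before responding.
  return axios.post('https://YOUR-SHEET-WEBHOOK-URL', data1)
    .then(() => agent.add('已收到您的訂位資訊。'));  // reply shown to the user (placeholder text)
}

exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
  const agent = new WebhookClient({ request, response });
  const intentMap = new Map();
  // The key must match the intent's display name exactly ("ForHere"),
  // otherwise you get "Error: No handler for requested intent".
  intentMap.set('ForHere', forHere);
  agent.handleRequest(intentMap);
});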

Alexa intent slot AMAZON.LITERAL causes failed build

I'm trying to use the AMAZON.LITERAL slot type in my Alexa skill, but when I try building, I see this error:
Build Failed
Slot name "{What}" is used in a sample utterance but not defined in the intent schema. Error code: UndefinedSlotName - Thursday, Apr 12, 2018, 2:08 PM
The slot is named What, and I'm 100% sure it is defined. It builds successfully if I change the slot type to anything except AMAZON.LITERAL.
Here is my entire model:
{
"interactionModel": {
"languageModel": {
"invocationName": "chores",
"intents": [
{
"name": "AMAZON.CancelIntent",
"samples": []
},
{
"name": "AMAZON.HelpIntent",
"samples": []
},
{
"name": "AMAZON.StopIntent",
"samples": []
},
{
"name": "Remember",
"slots": [
{
"name": "Who",
"type": "AMAZON.Person"
},
{
"name": "When",
"type": "AMAZON.DATE"
},
{
"name": "What",
"type": "AMAZON.LITERAL"
}
],
"samples": [
"remember {Who} {What} {When}"
]
}
],
"types": []
}
}
}
EDIT:
This is the response I got from Amazon when I submitted the bug:
We are not supporting the AMAZON.LITERAL slot type anymore, and we ask developers to use a custom slot type if they have some set of values; if not, you can use AMAZON.SearchQuery, where you will get the whole query the customer is looking for, and you can use that in your Lambda function.
I faced the same issue. Here's the solution.
You need to define your Sample Utterances as
Remember {Neil | Who} {died | What} {yesterday | When}
Amazon made it mandatory to provide example values along with your slot names, as AMAZON.LITERAL can take in a wide variety of values.
Add some sample utterances in the format below and it should work:
remember {Jack|Who} {bring fruits|What} {tomorrow|When}
remember {Mark|Who} {pay bills|What} {today|When}
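If you are editing the JSON interaction model directly, as in the question, those literal-style utterances would go into the intent's samples array, roughly like the sketch below; keep in mind that, per Amazon's reply above, AMAZON.LITERAL is deprecated and a custom slot type or AMAZON.SearchQuery is the recommended long-term option.
{
"name": "Remember",
"slots": [
{ "name": "Who", "type": "AMAZON.Person" },
{ "name": "When", "type": "AMAZON.DATE" },
{ "name": "What", "type": "AMAZON.LITERAL" }
],
"samples": [
"remember {Jack|Who} {bring fruits|What} {tomorrow|When}",
"remember {Mark|Who} {pay bills|What} {today|When}"
]
}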

How to get output text when using the Watson Conversation API

I have created a workspace and defined the intents, entities and dialogs for a Conversation service.
When I use the launch tool and "Try it out", I can see Watson's text response to the question I asked. However, when I call the API via a REST client, it does not return the text output.
The input I used for the API was:
{
"input": {
"text": "increase the temperature of ac"
}
}
and as a response I got the following:
{
"input": {
"text": "increase the temperature of ac"
},
"context": {
"conversation_id": "5a7ce4c2-c6be-4cb8-b728-19136457bf28",
"system": {
"dialog_stack": [ "root" ],
"dialog_turn_counter": 1,
"dialog_request_counter": 1
}
},
"entities": [
{
"entity": "appliance",
"location": [ 28, 30 ],
"value": "ac"
}
],
"intents": [
{
"intent": "turn_up",
"confidence": 0.9854193755106732
}
],
"output": {
"log_messages": [],
"text": [],
"nodes_visited": [ "node_1_1469526692057" ]
}
}
It does not have any text message in the JSON output.
This is working as intended.
Using the GitHub Conversation demo, you can find the related node in the workspace JSON by searching for "conditions": "#turn_up". Here is the related block:
{
"go_to": {
"return": null,
"selector": "condition",
"dialog_node": "node_11_1467233013716"
},
"output": {},
"parent": null,
"context": null,
"created": "2016-07-22T04:55:54.661Z",
"metadata": null,
"conditions": "#turn_up",
"description": null,
"dialog_node": "node_10_1467233003109",
"previous_sibling": "node_7_1467232789215"
},
Alternatively, you can look up the node in the Conversation UI by searching for #turn_up.
The output field is empty, so the output text is not being handled by Conversation.
It has to be handled at the application layer. There are valid reasons for doing this: for example, keeping an independent answer store makes it easier for a non-technical user to update, or you may want to hand off to something like Retrieve and Rank to find the answer.
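As a rough illustration of what "handling it at the application layer" can look like (a minimal sketch, not how the Car demo actually does it): keep an answer store keyed by intent name and look up the reply from the returned intents.
// Minimal application-layer answer store keyed by intent name.
const answers = {
  turn_up: 'Okay, increasing the temperature of the AC.',
  turn_down: 'Okay, decreasing the temperature of the AC.'
};

function textFor(conversationResponse) {
  const intents = conversationResponse.intents || [];
  if (intents.length === 0) {
    return "Sorry, I didn't understand that.";
  }
  // Pick the top intent and map it to the stored answer text.
  return answers[intents[0].intent] || "Sorry, I don't have an answer for that yet.";
}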
In this case, how the Car demo handles this is detailed in the tutorial video, which you can see here:
https://youtu.be/wG2fuliRVNk
