When creating a Google Action, the name is not allowed and blocks me from saving - voice

I am trying to create a Google Action and I am getting this error:
Your sample invocations are structured incorrectly. Make sure they all
include either your app name or pronunciation, and trigger your app.
Even if I set the name to "Dr. Detroit" and the pronunciation to "doctor detroit", I still get the error.
I'm totally confused by this; any help is appreciated.

I also got the same error. It happens because your sample invocations do not include the full name of your app/action. Try making the name exactly the same and it should work; that's how I got mine working. 😁

It's because they all have to be of the form "Talk to <APP NAME>" or "Ask <APP NAME> about X", etc.
I agree that this is not made very clear, but this is what they mean by "… and trigger your app".
There's a list of allowed phrases here: https://developers.google.com/actions/localization/languages-locales

I also got the same error:
Your sample invocations are structured incorrectly. Make sure they all include either your app name or pronunciation and trigger your app.
What actually helped me:
For the app Dr. Detroit, you can use the invocation "Talk to Dr. Detroit".
This fixes the problem, because "Talk to" is what triggers the app.
Other Trigger Phrases are:
"let me talk to $name"
"I want to talk to $name"
"can I talk to $name"
"talk to $name"
"let me speak to $name"
"I want to speak to $name"
"can I speak to $name"
"speak to $name"
"ask $name"
"ask $name to ..."
From the reference doc:
https://developers.google.com/actions/localization/languages-locales
Here are more reference links:
Build Actions for the Google Assistant tutorial by codelabs
https://codelabs.developers.google.com/codelabs/actions-1/#0
Google documentation on Actions on Google
https://developers.google.com/actions/console/publishing#linking_to_your_actions
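For example, with the app name above, all of the sample invocations in the console could read (the topic in the second phrase is only an illustration, not from the original question):
"Talk to Dr. Detroit"
"Ask Dr. Detroit about my appointment"
"I want to speak to Dr. Detroit"
Each one starts with one of the trigger phrases listed above and contains the full display name, which is what the validation check looks for.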

This solved the same issue for me.
View the logs in the simulator window. Expand the log entry for "Sending request with post data" and make sure that the query attribute inside the rawInput element matches your sample invocation (Overview -> App information -> Details -> Sample invocation).
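For reference, the post data in that log entry is the JSON request the simulator sends to your action. The exact envelope depends on the API version, but the relevant fragment looks roughly like this (the query value is only an illustration):
"rawInputs": [
  {
    "inputType": "KEYBOARD",
    "query": "Talk to Dr. Detroit"
  }
]
That query value is what has to line up with one of your sample invocations.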
Hope this solves your issue.

Put the same name in the invocations as the full name of your application.

In this case, make sure that the sample invocation name and the display name are the same.
Then no error will be encountered at the release stage.

Related

GameLift Matchmaking times out after match found

Hoping to get some insight into the behavior I am seeing while trying to use GameLift matchmaking.
I have my configuration set up so that it does not require player acceptance:
GameLiftMatchmakingConfiguration:
  Type: AWS::GameLift::MatchmakingConfiguration
  Properties:
    AcceptanceRequired: false
    ...
When I go to the GameLift console and look at the configuration, I can see that it is indeed set so that it does not require acceptance.
This is where I am confused: matchmaking now places 2 users into a match and I get the PotentialMatchCreated event from GameLift, but 30 seconds later I get more events stating that these placements timed out and the tickets are searching again.
The configuration documentation states that AcceptanceTimeoutSeconds is only required if AcceptanceRequired is true, which it is not for me.
The acceptance documentation states that you only call AcceptMatch when the tickets require acceptance: "When FlexMatch builds a match, all the matchmaking tickets involved in the proposed match are placed into status REQUIRES_ACCEPTANCE."
Which mine is not; it is in PotentialMatchCreated.
So my question is: what do I have to do to confirm a placement once GameLift places 2 users into a match? I am a bit surprised, because I thought the fact that it doesn't have to be accepted would mean the match is automatically accepted.
Also, I have found very little documentation about what to do in this situation. Given that this service is not as well known as others, I expected that, but I'm really hoping someone can help me with what to do next.
Any insight or help is greatly appreciated.
UPDATE1:
Additional information: I do not need to utilize GameLift fleets or builds at all. We have a browser game we are building and just want to use the matchmaking feature. We don't have any game servers or anything like that; the game runs on our website, and our APIs/WebSockets handle the matchmaking on the server and notify the client when a match has been found, along with all the subsequent details.
UPDATE2:
To confirm my suspicions, I decided to actually try the AcceptMatch endpoint and see what happens. Just as the documentation states, you can only accept a match if it requires acceptance: I get an error stating that I cannot accept a match that is not in the REQUIRES_ACCEPTANCE state. I guessed this was a bug on AWS's side, since I don't see any other endpoint I can hit for a ticket in the PotentialMatchCreated state.
Figured out the issue. It has to do with the FlexMatchMode property on the MatchmakingConfiguration. For my use case, which only needs matchmaking, STANDALONE is the correct setting because we aren't having GameLift actually create game servers/sessions for us. Mine was set to WITH_QUEUE, which I believe is why I was having issues. It seems to be working correctly now.
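For anyone else hitting this, here is a minimal sketch of creating a standalone configuration with boto3; the configuration name, rule set name, and SNS topic ARN are placeholders, not values from the question:
import boto3

gamelift = boto3.client("gamelift")

# STANDALONE FlexMatch: GameLift only forms the match and publishes events;
# it never tries to place a game session on a fleet or queue.
gamelift.create_matchmaking_configuration(
    Name="BrowserGameMatchmaking",        # placeholder
    RuleSetName="BrowserGameRuleSet",     # placeholder
    AcceptanceRequired=False,             # no accept/timeout cycle
    RequestTimeoutSeconds=60,
    FlexMatchMode="STANDALONE",           # the setting that resolved the issue above
    NotificationTarget="arn:aws:sns:us-east-1:123456789012:match-events",  # placeholder SNS topic
)
With STANDALONE there is no GameSessionQueueArns list to supply, and the MatchmakingSucceeded event on the notification topic carries the matched players for your own backend to act on.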

Nest Device Access not publishing to pub/sub

All:
This topic has appeared before with no answers and for some reason has been de-listed (not a valid question). The Nest Device Access Console states to ask questions on this forum, so I will try again:
I have followed all the steps to get a Nest Doorbell to publish events/messages to Google Pub/Sub (i.e. https://developers.google.com/nest/device-access/get-started), and yet nothing is being pushed to the topic (i.e. a flat line in the Pub/Sub subscription window). Is there something obvious I'm missing, or haven't read, regarding support for the Nest Doorbell and Pub/Sub?
In short, it would be nice if a Google representative/expert could weigh in on this topic and provide some insight.
Thanks,
Rob.
Probably something on Google's end. At my home the doorbell is working again as well :-) So you probably didn't misread anything and were just hit by some sort of glitch.
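For anyone debugging this later: a quick way to rule out the viewing side is to pull directly from the subscription with the google-cloud-pubsub client (a sketch; the project and subscription IDs are placeholders):
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-gcp-project", "nest-events-sub")  # placeholders

def callback(message):
    # Each device event arrives as a JSON payload in message.data.
    print(message.data.decode("utf-8"))
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
with subscriber:
    try:
        streaming_pull.result(timeout=60)  # listen for a minute
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()
If nothing prints even while the doorbell is generating events, the problem is upstream of your own code, which matches the glitch described above.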

Why is the Watson "Try it out" chat mapping the answer to Irrelevant?

In my dialog flow I have a node triggered by the #app_question intent that answers with a question about the person's location - "Where's your location?". Then I've created a child node that is triggered by the entity @app_location, inside of which I check for @app_location:americas and @app_location:other.
Watson is answering correctly in the "Try it out" chat: when I say I'm in America, it outputs the correct answer, but "Irrelevant" appears in the node box. Should I worry about that? Is that standard, or am I doing something wrong?
ps: I was using Jump to, but received an error. If I'm supposed to use Jump To, I can copy and paste the error here.
If anyone can help, thanks a lot!
Conversation will always try to determine what the intent is, even if you are not interested in it at that time. The "Try it out" panel is just showing you what it decided the intent was, not that it would do anything with it.
As you are looking for an entity and not an intent, it should be fine.
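For illustration only (node names and responses are invented; # is Watson's intent prefix and @ its entity prefix), the tree described above amounts to:
#app_question -> "Where's your location?"
    @app_location:americas -> answer for the Americas
    @app_location:other -> answer for everywhere else
The intent label shown in the child node's box is just the classification for that turn; the node itself fires on the @app_location entity condition.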

Alexa - build custom slot for addresses

I am creating a skill in which a user can say an address (for further processing). An address can be anything from "New York" to "123 First Avenue Washington" to "Seattle Harbor". Basically like something you can enter at Google Maps - it will recognize more or less everything :)
So now, of course, comes the problem of how to create a custom slot for this. LITERAL is deprecated, plus I am working on a German-language skill.
Should I actually try to fill the 50,000 lines I have available for a custom skill with as many enumerations of addresses as I can come up with? I'm afraid that even if I go down that road, Alexa will still try to map any input that's not in the list to one that is - thereby rendering my skill a bit moot :(
Thanks for any advice!
As you suggest, using a custom slot with 50K sample addresses wouldn't really work. Something as complicated as an address really needs a built-in slot type, and there is one for US skills:
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/built-in-intent-ref/slot-type-reference#postaladdress
But you noted that you are targeting a German-language skill, and as far as I know there isn't a German-language or German-address version of the above built-in slot yet.
The fact that they have done it for the US suggests that they will add it for Germany at some point, but counting on that is risky, of course, so you are in a difficult position. In the meantime, I would suggest you go to the feature request space and add a request for a German version of the above.
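For a US skill, using that built-in type is just a slot declaration in the intent schema; the intent name and utterance below are made up for illustration:
{
  "intent": "SetDestinationIntent",
  "slots": [
    { "name": "Address", "type": "AMAZON.PostalAddress" }
  ]
}
with a sample utterance such as "SetDestinationIntent bring me to {Address}". For a German-language skill this only helps once Amazon ships an equivalent built-in type for de-DE.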

Gherkin: How many preconditions should a scenario have?

I am new to Gherkin and BDD.
We are doing BDD regression tests with Squish.
Our application is very complex - similar to a flight simulator.
Now I ask myself how to write the tests in Gherkin.
As we have a large number of variables as preconditions for a certain situation, I would normally put a lot of
Given some precondition
And some other precondition
into my tests.
My natural feeling is that I should avoid this because it would make things unnecessarily complex.
Is there a rule of thumb for how many preconditions there should be?
Should I try to reduce it to only one?
The general rule is to have as few as possible while still making the tests useful. How many that is will depend on the audience for your scenarios.
If you are using gherkin scenarios for actual BDD (Behaviour Driven Development) then you will have to write the scenarios in such a way that your stakeholders can make sense of them. If this means that you have to write many Given, And, And steps then that is the way it has to be. If it means that you can put several of these steps into one more general set up step then that is better.
If you are using the Gherkin scenarios only as a way of automating tests, then do whatever is good for your dev team. The rule about having as few steps as possible comes from trying to make sure everyone, technical and non-technical, understands what is meant by the scenario (i.e. that they are as clear AND concise as possible). If it is only the technical team that needs to understand it then, as above, you can use many steps if that is what is necessary for your team to understand it, or you can use fewer steps that contain more code.
For your case of having a very complex system to get into the correct state for testing, I wouldn't be worried about having many set-up steps, so long as the scenarios you end up with are clear and as short as possible. There wouldn't be any point in having a smaller, neater scenario that no one really understood without looking at the code!
When you are writing scenarios, you can make the code behind the steps do whatever you need to get the application into the state needed for testing, as in the sketch below. There is a pretty good article explaining the point here.
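As a sketch of that idea in Python with the behave library (Squish's Python-based BDD support follows the same pattern; the helper calls on context.sim are invented for illustration), several technical preconditions can hide behind one business-readable Given:
from behave import given

@given("the simulator is in stable cruise flight")
def step_stable_cruise(context):
    # One readable precondition in the feature file, many technical ones behind it.
    # context.sim is assumed to be created in a before_scenario hook.
    context.sim.load_aircraft("A320")      # hypothetical helper
    context.sim.set_altitude_ft(35000)     # hypothetical helper
    context.sim.set_speed_kts(450)         # hypothetical helper
    context.sim.engage_autopilot()         # hypothetical helper
The scenario then needs only "Given the simulator is in stable cruise flight", which keeps the feature file readable while the complexity lives in code.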
I find that if you can be more descriptive in your steps, then your tests will make more sense when you need to go back to reference them. If your steps are just Given, click, click, click, Then .. you can easily lose track of the point of the test. Your tests should be about system behaviors not step by step instructions for using the system.
So as far as preconditions are concerned: you need to do whatever it takes to get the application into the state that you wish to test.
Gherkin is a Business Readable, Domain Specific Language created especially for behavior descriptions.
Gherkin serves two purposes: serving as your project’s documentation and automated tests. Gherkin’s grammar is defined in the Treetop grammar that is part of the Cucumber codebase.
To understand Gherkin in a better way, please take a look at the simple scenario mentioned below:
Feature: As an existing facebook user, I am able to post a birthday greeting on any other existing user's facebook page
Scenario: Verify that Joe Joseph can post a birthday greeting on Sam Joseph's facebook page
Given Joe Joseph is an existing facebook user
Given Sam Joseph is an existing facebook user
Given Joe Joseph is on login page of facebook
Given Joe Joseph logs into his facebook account
When Joe Joseph opens Sam Joseph's facebook page
And Joe Joseph writes "Happy Birthday Sam" on Sam Joseph's facebook page
And Joe Joseph clicks on Post button
Then Joe Joseph verifies that "Happy Birthday Sam" is successfully posted on Sam Joseph's facebook page
In the above scenario, all the statements that start with "Given" are my preconditions.
So, as far as preconditions are concerned, you can use as many preconditions as are required for the test.
Gherkin is the language that Cucumber understands. It is a Business Readable, Domain Specific Language that lets you describe software's behaviour without detailing how that behaviour is implemented. More here.
From the above statement, my understanding is: you don't need to spell out every precondition every time, as whoever is using the scenario knows how to write and understand the Gherkin language.
Here is an example:
When I log in as system admin
And I click on new user
And I enter new user details
Then new user is created successfully
Here I don't need to mention where I am, or other preconditions such as whether the URL is opened or not.
The first precondition is whether the URL is opened or not.
Given I opened URL "http://www.stackoverflow.com"
And I enter system admin details
And I click on new user
When I enter new user details
Then new user is created successfully
The second condition is that the New User Details form has all the required fields. This can be a separate scenario.
The goal is to tell the business and the team that we are testing this scenario. It is not a test case; you don't need to create tons of documents for testing purposes.
Like YAML or Python, Gherkin is a line-oriented language that uses indentation to define structure. Line endings terminate statements (called steps) and either spaces or tabs may be used for indentation. Finally, most lines in Gherkin start with a special keyword:
Sample
Feature: Some terse yet descriptive text of what is desired
In order to realize a named business value
As an explicit system actor
I want to gain some beneficial outcome which furthers the goal
Scenario: Some determinable business situation
Given some precondition
And some other precondition
When some action by the actor
And some other action
And yet another action
Then some testable outcome is achieved
And something else we can check happens too
Scenario: A different situation
...
The parser divides the input into features, scenarios and steps. Let’s walk through the above example:
Feature: Some terse yet descriptive text of what is desired starts the feature and gives it a title.
Scenario: Some determinable business situation starts the scenario, and contains a description of the scenario.
The next 7 lines are the scenario steps, each of which is matched to a regular expression defined elsewhere.
Scenario: A different situation starts the next scenario, and so on.
When you're executing the feature, the trailing portion of each step (after keywords like Given, And, When, etc.) is matched to a regular expression.
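As a sketch of that matching, here is a step definition in Python with the behave library using a regular-expression matcher (the step text reuses the URL example from earlier in this thread; the context attribute is invented):
from behave import given, use_step_matcher

use_step_matcher("re")  # match step text with regular expressions

@given(r'I opened URL "(?P<url>[^"]+)"')
def step_open_url(context, url):
    # The captured group from the step text is passed in as the url argument.
    context.base_url = url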
Also note: if you have a precondition like Given I opened URL "http://www.stackoverflow.com", as mentioned by #Boston, it is recommended to use it as a Background.
Example
Feature: This is a test feature

  Background: background scenario
    Given I opened URL "http://www.stackoverflow.com"

  Scenario: access first scenario
    Given I should see this
    ...

  Scenario: access second scenario
    Given I should see that
    ...
The background will execute before every scenario.
Reference:
https://github.com/cucumber/cucumber/wiki/Gherkin
