PactDslJsonArray root level arrays that match all items - pact

I can successfully use PactDslJsonArray.arrayMaxLike(3,3) to create a pact that validates that a maximum of 3 items is returned.
"body": [
{
"firstName": "first",
"lastName": "last",
"city": "test",
},
{
"firstName": "first",
"lastName": "last",
"city": "test",
},
{
"firstName": "first",
"lastName": "last",
"city": "test",
}
]
"body": {
"$": {
"matchers": [
{
"match": "type",
"max": 3
}
]
...
However, I would like to reuse the body from another request without the need to specify the attributes again.
DslPart body = new PactDslJsonBody()
    .stringType("firstName", "first")
    .stringType("lastName", "last")
    .stringType("city", "test");
What I'm looking for is something like:
PactDslJsonArray.arrayMaxLike(3,3).template(body)
instead of
PactDslJsonArray.arrayMaxLike(3,3)
    .stringType("firstName", "first")
    .stringType("lastName", "last")
    .stringType("city", "test")
Thanks
Dan

The point of the DSL is to define and validate the Pact interactions in code, and using a template somewhat goes against that concept. If you have the same interaction in multiple places, I would recommend adding a shared function that builds that interaction. For example:
private DslPart personalDetailInteraction(DslPart part) {
    return part.stringType("firstName", "first")
        .stringType("lastName", "last")
        .stringType("city", "test");
}

private void yourTest() {
    personalDetailInteraction(
        PactDslJsonArray.arrayMaxLike(3, 3)
    )
    .stringType("blarg", "weee")
    ...
}
If it needs to be shared across different classes, create an InteractionUtils class that can be shared between them; a minimal sketch of such a class is below. In my opinion this is the best approach because the compiler makes sure no mistakes are made while creating the interactions, which is the point of the whole framework: to reduce human error.
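For instance, a minimal sketch of such a shared utility class (the class name and static-helper shape are my own, and depending on your pact-jvm version the parameter may need to be PactDslJsonBody rather than DslPart):

// Hypothetical shared utility for Pact interaction fragments.
public final class InteractionUtils {

    private InteractionUtils() {
        // static helpers only
    }

    // Adds the shared personal-detail attributes to the given part and returns it for chaining.
    public static DslPart personalDetails(DslPart part) {
        return part.stringType("firstName", "first")
                .stringType("lastName", "last")
                .stringType("city", "test");
    }
}

Usage would then be: DslPart body = InteractionUtils.personalDetails(PactDslJsonArray.arrayMaxLike(3, 3));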

Related

Postman Schema Validation using TV4

I'm having trouble validating a schema in Postman using tv4 inside the Tests tab. It always returns a passing test, no matter what I feed it. I am at a complete loss and could really use a hand. Here are my example JSON response and my tests:
I've tried a ton of variations from every Stack Overflow answer and tutorial I could find and nothing works; it always returns true.
//Test Example
var jsonData = JSON.parse(responseBody);
const schema = {
  "required": ["categories"],
  "properties": {
    "categories": {
      "required": ["aStringOne", "aStringTwo", "aStringThree"],
      "type": "array",
      "properties": {
        "aStringOne": { "type": "string" },
        "aStringTwo": { "type": "null" },
        "aStringThree": { "type": "boolean" }
      }
    }
  }
};

pm.test('Schema is present and accurate', () => {
  var result = tv4.validateMultiple(jsonData, schema);
  console.log(result);
  pm.expect(result.valid).to.be.true;
});
//Response Example
{
  "categories": [
    {
      "aStringOne": "31000",
      "aStringTwo": "Yarp",
      "aStringThree": "More Yarp Indeed"
    }
  ]
}
This should fail, as all three properties are strings, but it's passing. I'm willing to use a different validator or another technique as long as I can export it as a Postman collection to use with Newman in my CI/CD process. I look forward to any help you can give.
I would suggest moving away from tv4 in Postman; the project isn't actively supported, and Postman now includes a better (in my opinion), more actively maintained option called Ajv. (Incidentally, the reason your tv4 test always passes is that your schema puts properties directly under the categories array; for arrays those keywords must go inside items, so your property checks were never actually applied.)
The syntax is slightly different, but hopefully this gives you an idea of how it could work for you.
I've mocked out your data and just added everything into the Tests tab. If you change the jsonData variable to pm.response.json(), it will run against the actual response body.
var jsonData = {
  "categories": [
    {
      "aStringOne": "31000",
      "aStringTwo": "Yarp",
      "aStringThree": "More Yarp Indeed"
    }
  ]
}

var Ajv = require('ajv'),
  ajv = new Ajv({ logger: console, allErrors: true }),
  schema = {
    "type": "object",
    "required": ["categories"],
    "properties": {
      "categories": {
        "type": "array",
        "items": {
          "type": "object",
          "required": ["aStringOne", "aStringTwo", "aStringThree"],
          "properties": {
            "aStringOne": { "type": "string" },
            "aStringTwo": { "type": "integer" },
            "aStringThree": { "type": "boolean" }
          }
        }
      }
    }
  }

pm.test('Schema is valid', function() {
  pm.expect(ajv.validate(schema, jsonData), JSON.stringify(ajv.errors)).to.be.true
});
This is an example of it failing. I've included the allErrors flag so that it returns all the errors rather than just the first one it sees. In the pm.expect() method, I've added JSON.stringify(ajv.errors) so you can see the errors in the Test Result tab. It's a little messy and could be tidied up, but all the error information is there.
Setting the properties to string shows the validation passing:
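For example, with the mocked data above, a properties block where all three are typed as string makes the test pass:

"properties": {
  "aStringOne": { "type": "string" },
  "aStringTwo": { "type": "string" },
  "aStringThree": { "type": "string" }
}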
If one of the required keys is not there, it will error for that too.
Working with schemas is quite difficult; it's not easy to both create them (nested arrays and objects are tricky) and ensure they are doing what you want them to do.
There were occasions where I thought something should fail and it passed the validation test. It just takes a bit of learning and practising, and once you understand the schema structures, they can become extremely useful.

alexa - audioPlayer.Play issue displaying content on Echo Show Now Playing screen

I am having issues understanding how to display images on the Echo Show inside the audioPlayer 'Now Playing' screen.
I am currently playing an audio file and want to display an image on the 'Now Playing' screen. The closest I have been able to get is the following code, which displays the image and title just before the audio starts; they then disappear immediately, and the Echo Show goes to the 'Now Playing' screen with no background image and no metadata. I feel I'm close but just cannot understand how to update the 'Now Playing' screen itself, rather than the screen that comes immediately before it.
This is part of the code (which works as per above):
var handlers = {
  'LaunchRequest': function() {
    this.emit('PlayStream');
  },
  'PlayStream': function() {
    // makeImage and makePlainText are the helpers from Alexa.utils (ImageUtils / TextUtils)
    let builder = new Alexa.templateBuilders.BodyTemplate1Builder();
    let template = builder.setTitle('Test Title')
      .setBackgroundImage(makeImage('https://link_to_my_image.png'))
      .setTextContent(makePlainText('Test Text'))
      .build();
    this.response.speak('OK.')
      .audioPlayerPlay(
        'REPLACE_ALL',
        stream.url,
        stream.url,
        null,
        0)
      .renderTemplate(template);
    this.emit(':responseReady');
  }
};
I have been looking at this page https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html but cannot understand how to convert the structure on that page into my code. The example from that page is:
{
  "type": "AudioPlayer.Play",
  "playBehavior": "valid playBehavior value such as ENQUEUE",
  "audioItem": {
    "stream": {
      "url": "https://url-of-the-stream-to-play",
      "token": "opaque token representing this stream",
      "expectedPreviousToken": "opaque token representing the previous stream",
      "offsetInMilliseconds": 0
    },
    "metadata": {
      "title": "title of the track to display",
      "subtitle": "subtitle of the track to display",
      "art": {
        "sources": [
          {
            "url": "https://url-of-the-album-art-image.png"
          }
        ]
      },
      "backgroundImage": {
        "sources": [
          {
            "url": "https://url-of-the-background-image.png"
          }
        ]
      }
    }
  }
}
I somehow need to get this part:

"metadata": {
  "title": "title of the track to display",
  "subtitle": "subtitle of the track to display",
  "art": {
    "sources": [
      {
        "url": "https://url-of-the-album-art-image.png"
      }
    ]
  },
Into this block of my code:

audioPlayerPlay(
  'REPLACE_ALL',
  streamInfo.url,
  streamInfo.url,
  null,
  0)
  .renderTemplate(template);
(And I could probably lose the .renderTemplate(template); part, as it only flashes up briefly before the 'Now Playing' screen loads anyway.)
Any ideas on how to achieve this?
Thanks!
Update:
I have added the following to index.js:
var metadata = {
  title: "title of the track to display",
  subtitle: "subtitle of the track to display",
  art: {
    sources: {
      url: "https://url-of-the-album-art-image.png"
    }
  }
};
And modified the audioPlayerPlay call as follows:

audioPlayerPlay(
  'REPLACE_ALL',
  stream.url,
  stream.url,
  null,
  0,
  metadata)
  .renderTemplate(template);
And modified the responseBuilder.js as indicated:
audioPlayerPlay(behavior, url, token, expectedPreviousToken, offsetInMilliseconds, metadata) {
  const audioPlayerDirective = {
    type: DIRECTIVE_TYPES.AUDIOPLAYER.PLAY,
    playBehavior: behavior,
    audioItem: {
      stream: {
        url: url,
        token: token,
        expectedPreviousToken: expectedPreviousToken,
        offsetInMilliseconds: offsetInMilliseconds,
        metadata: metadata
      }
    }
  };
  this._addDirective(audioPlayerDirective);
  return this;
}
But I'm still not getting anything displayed on the 'Now Playing' screen.
It turns out the Echo Show was not updating in real time and needed to be rebooted before it would show whatever was passed in the metadata variable, which is why I wasn't seeing any results.
Simply passing a variable like the following works fine. I just need to find out why the content gets stuck on the 'Now Playing' screen and requires a reboot to update.
var "metadata": {
"title": "title of the track to display",
"subtitle": "subtitle of the track to display",
"art": {
"sources": [
{
"url": "https://url-of-the-album-art-image.png"
}
]
},
Just define your metadata as below, and pass it as a 6th argument to audioPlayerPlay:
"metadata": {
"title": "title of the track to display",
"subtitle": "subtitle of the track to display",
"art": {
"sources": [
{
"url": "https://url-of-the-album-art-image.png"
}
]
},
audioPlayerPlay(
  'REPLACE_ALL',
  streamInfo.url,
  streamInfo.url,
  null,
  0,
  metadata)
P.S. For this to work properly, you have to modify the node modules which you'll be zipping and uploading to Lambda.
Steps:
Go to your node_modules\alexa-sdk\lib folder, open the responseBuilder file in it, and modify the code as follows (the metadata parameter and property are the additions):
audioPlayerPlay(behavior, url, token, expectedPreviousToken, offsetInMilliseconds, metadata) { // added: metadata parameter
  const audioPlayerDirective = {
    type: DIRECTIVE_TYPES.AUDIOPLAYER.PLAY,
    playBehavior: behavior,
    audioItem: {
      stream: {
        url: url,
        token: token,
        expectedPreviousToken: expectedPreviousToken,
        offsetInMilliseconds: offsetInMilliseconds
      },
      metadata: metadata // added: metadata sits beside stream, inside audioItem
    }
  };
  this._addDirective(audioPlayerDirective);
  return this;
}
P.S. The node module modifications are required only if you are using alexa-sdk version 1.
I know it's been years since this question was originally posted, but for those like me who stumble upon this now, make sure you use a unique token in the play directive because metadata is cached using that token.
See the yellow Important note in the following section https://developer.amazon.com/en-US/docs/alexa/custom-skills/audioplayer-interface-reference.html#images
Important: The metadata for a given audio stream is identified by the audioItem.stream.token included in the Play directive. Note that the metadata associated with a particular audioItem.stream.token may be cached in the Alexa service for up to five days, so changes to the metadata (such as a different image, or a change to the title text) may not be reflected immediately on the device. For instance, you may notice this when testing if you experiment with different images or title text for the same audio stream. You can send a new Play directive with a different audioItem.stream.token to clear the cache.
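In code, one way to apply that advice is to version the token whenever the metadata changes, so the cache is bypassed. A minimal sketch (the version constant and token format here are my own, not from the docs):

// Hypothetical: bump METADATA_VERSION whenever the title or art changes so Alexa re-reads the metadata.
const METADATA_VERSION = 2;
const token = stream.url + '#v' + METADATA_VERSION;
// ...then pass `token` as the token argument of audioPlayerPlay / the Play directive.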
And an example payload with a token:
{
  "type": "AudioPlayer.Play",
  "playBehavior": "valid playBehavior value such as ENQUEUE",
  "audioItem": {
    "stream": {
      "url": "https://cdn.example.com/url-of-the-stream-to-play",
      "token": "opaque token representing this stream",
      "expectedPreviousToken": "opaque token representing the previous stream",
      "offsetInMilliseconds": 0,
      "captionData": {
        "content": "WEBVTT\n\n00:00.000 --> 00:02.107\n<00:00.006>My <00:00.0192>Audio <00:01.232>Captions.\n",
        "type": "WEBVTT"
      }
    },
    "metadata": {
      "title": "title of the track to display",
      "subtitle": "subtitle of the track to display",
      "art": {
        "sources": [
          {
            "url": "https://cdn.example.com/url-of-the-album-art-image.png"
          }
        ]
      },
      "backgroundImage": {
        "sources": [
          {
            "url": "https://cdn.example.com/url-of-the-background-image.png"
          }
        ]
      }
    }
  }
}

Redux state shape without data duplication

I'm new to Redux and finding it hard to grasp how to implement a good state shape without duplicating data. If I need to update duplicated data, the naive way would be to update it in a few places, but that would negate the single source of truth.
We fetch user profile and posts from API server:
www.api.com/users/placeholder
{
  "user": {
    "username": "placeholder",
    "bio": "It's my bio",
    "profileImage": "http://via.placeholder.com/350x150",
    "isViewerFollowing": false
  }
}
www.api.com/posts?author=placeholder
{
  "posts": [
    {
      "id": "1",
      "caption": "caption placeholder",
      "image": "http://via.placeholder.com/1000x1000",
      "createdAt": "2017-08-18T03:22:56.637Z",
      "updatedAt": "2016-08-18T03:48:35.824Z",
      "isLikedbyViewer": false,
      "likesCount": 0,
      "author": {
        "username": "placeholder",
        "bio": "It's my bio",
        "profileImage": "http://via.placeholder.com/350x150",
        "isViewerFollowing": false
      }
    },
    {
      "id": "2",
      "caption": "caption placeholder",
      "image": "http://via.placeholder.com/1000x1000",
      "createdAt": "2017-08-18T03:22:56.637Z",
      "updatedAt": "2016-08-18T03:48:35.824Z",
      "isViewerLiked": false,
      "likesCount": 0,
      "author": {
        "username": "placeholder",
        "bio": "It's my bio",
        "profileImage": "http://via.placeholder.com/350x150",
        "isViewerFollowing": false
      }
    }
  ],
  "postsCount": 2
}
For example, say we have separate reducers for users and posts, and the user wants to follow an author; then we would need to update information in two reducers. So my final question is: could someone hint at what a good state shape would look like in this particular example?
Thanks!
You should normalize your Redux state: instead of saving the entire author object for every post, you should just save the authorId.
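A sketch of what the normalized state could look like, based on the payloads above (the id fields and the authors/posts branch names are assumptions, chosen to match the selector below):

// Hypothetical normalized shape: each author is stored once; posts reference authors by id.
{
  authors: [
    { id: "placeholder", username: "placeholder", bio: "It's my bio", isViewerFollowing: false }
  ],
  posts: [
    { id: "1", caption: "caption placeholder", likesCount: 0, authorId: "placeholder" },
    { id: "2", caption: "caption placeholder", likesCount: 0, authorId: "placeholder" }
  ]
}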
As long as you ensure that whenever a post object is in the posts branch of your Redux state, the related author is in the authors branch, you can create a selector to retrieve all the posts together with their author's data:
export function getPosts(reduxState) {
  return reduxState.posts.map(post => {
    const author = reduxState.authors.find(a => a.id === post.authorId);
    return {
      ...post,
      author
    };
  });
}
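With this shape, following an author is a single update in the authors branch, and every post picks up the change through the selector. A minimal reducer sketch, assuming a hypothetical FOLLOW_AUTHOR action carrying an authorId:

// Hypothetical reducer for the authors branch; FOLLOW_AUTHOR is an assumed action type.
function authorsReducer(state = [], action) {
  switch (action.type) {
    case 'FOLLOW_AUTHOR':
      return state.map(author =>
        author.id === action.authorId
          ? { ...author, isViewerFollowing: true }
          : author
      );
    default:
      return state;
  }
}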

Newtonsoft.Json.Schema.Generation: Human readable 'definitions' section

I want the definitions section to be better generated and organized. For me this means not allowing depth to build up in definitions: each class involved in the structure tree should have its own entry in the definitions section and be referenced via $ref. For each definition I would then only have a list of properties that are either of primitive types (string, boolean, etc.) or a $ref to another definition entry for another custom class. You can also see this as a depth-1 definition, close to how classes are originally defined in C#.
To illustrate this via a trivial example:
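(The classes behind this example aren't shown in the original, so here is an assumed reconstruction inferred from the generated schema below:)

public class Setting
{
    public SubSetting SubSetting { get; set; }
    public SubSubSetting SubSubSetting { get; set; }
}

public class SubSetting
{
    public SubSubSetting SubSubSetting { get; set; }
}

public class SubSubSetting
{
    public string String { get; set; }
}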
JSchemaGenerator schemaGenerator = new JSchemaGenerator()
{
    DefaultRequired = Newtonsoft.Json.Required.DisallowNull,
    SchemaIdGenerationHandling = SchemaIdGenerationHandling.TypeName,
    SchemaLocationHandling = SchemaLocationHandling.Definitions,
    SchemaReferenceHandling = SchemaReferenceHandling.Objects,
};
JSchema schema = schemaGenerator.Generate(typeof(Setting));
Renders:
{
  "id": "Setting",
  "definitions": {
    "SubSetting": {
      "id": "SubSetting",
      "type": "object",
      "properties": {
        "SubSubSetting": {
          "id": "SubSubSetting",
          "type": "object",
          "properties": {
            "String": {
              "type": "string"
            }
          }
        }
      }
    },
    "SubSubSetting": {
      "$ref": "SubSubSetting"
    }
  },
  "type": "object",
  "properties": {
    "SubSetting": {
      "$ref": "SubSetting"
    },
    "SubSubSetting": {
      "$ref": "SubSubSetting"
    }
  }
}
Thus, the SubSubSetting definition is placed inline within the SubSetting definition, and SubSubSetting is then defined as a reference to that inline definition. That's what I want to avoid: for complex data structures it becomes really obscure, and I want to provide the schema as part of living, auto-generated documentation based on data annotations and JsonProperty.
How can I accomplish this using JSchemaGenerator?
Maybe I shouldn't bundle this in, but as a second very short question: are those $ref values syntactically correct? Shouldn't they look like "#/definitions/SubSetting"?
The latest version of Json.NET Schema (3.0.3) has been updated to fix this issue. SubSubSetting will contain the full definition and not just a $ref.
https://github.com/JamesNK/Newtonsoft.Json.Schema/releases/tag/3.0.3
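With the fix, the definitions section should come out flat, roughly along these lines (a sketch of the intended depth-1 shape based on the example above, not verbatim generator output):

{
  "id": "Setting",
  "definitions": {
    "SubSetting": {
      "id": "SubSetting",
      "type": "object",
      "properties": {
        "SubSubSetting": {
          "$ref": "SubSubSetting"
        }
      }
    },
    "SubSubSetting": {
      "id": "SubSubSetting",
      "type": "object",
      "properties": {
        "String": {
          "type": "string"
        }
      }
    }
  },
  "type": "object",
  "properties": {
    "SubSetting": {
      "$ref": "SubSetting"
    },
    "SubSubSetting": {
      "$ref": "SubSubSetting"
    }
  }
}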

How can I store a user's words using Amazon Alexa?

I'm writing Alexa skills and want to write a skill to store the speaker's words.
For example, if I say, 'Alexa, save {whatever I say}', it should save the words in some string.
Now, from what I understand, the intent schema should be something like
{
  "intents": [
    { "intent": "SaveIntent" }
  ]
}
and utterances like
SaveIntent save
SaveIntent store
In this case, how do I store '{whatever I say}'?
To capture free-form speech input (rather than a defined list of possible values), you'll need to use the AMAZON.LITERAL slot type. The Amazon documentation for the Literal slot type describes a use case similar to yours, where a skill is created to take any phrase and post it to a Social Media site. This is done by creating a StatusUpdate intent:
{
  "intents": [
    {
      "intent": "StatusUpdate",
      "slots": [
        {
          "name": "UpdateText",
          "type": "AMAZON.LITERAL"
        }
      ]
    }
  ]
}
Since it uses the AMAZON.LITERAL slot type, this intent will be able to capture any arbitrary phrase. However, to ensure that the speech engine does a decent job of capturing real-world phrases, you need to provide a variety of example utterances that resemble the sorts of things you expect the user to say.
Given that in your described scenario you're trying to capture very dynamic phrases, there are a couple of things in the documentation you'll want to give extra consideration to:
If you are using the AMAZON.LITERAL type to collect free-form text with wide variations in the number of words that might be in the slot, note the following:
Covering this full range (minimum, maximum, and all in between) will require a very large set of samples. Try to provide several hundred samples or more to address all the variations in slot value words as noted above.
Keep the phrases within slots short enough that users can say the entire phrase without needing to pause.
Lengthy spoken input can lead to lower accuracy experiences, so avoid designing a spoken language interface that requires more than a few words for a slot value. A phrase that a user cannot speak without pausing is too long for a slot value.
That said, here are the example sample utterances from the documentation again:
StatusUpdate post the update {arrived|UpdateText}
StatusUpdate post the update {dinner time|UpdateText}
StatusUpdate post the update {out at lunch|UpdateText}
...(more samples showing phrases with 4-10 words)
StatusUpdate post the update {going to stop by the grocery store this evening|UpdateText}
If you provide enough examples of different lengths to give an accurate picture of the range of expected user utterances, then your intent will be able to accurately capture dynamic phrases in real use cases, which you can access in the UpdateText slot. Based on this, you should be able to implement an intent specific to your needs.
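In the skill's handler, the captured phrase then arrives as the slot's value; with the Node.js alexa-sdk of that era, for instance (handler shape assumed, slot name taken from the intent above):

'StatusUpdate': function() {
  // The free-form text captured by the AMAZON.LITERAL slot:
  const updateText = this.event.request.intent.slots.UpdateText.value;
  this.emit(':tell', 'Saving ' + updateText);
}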
Important: AMAZON.LITERAL is deprecated as of October 22, 2018. Older skills built with AMAZON.LITERAL do continue to work, but you must migrate away from AMAZON.LITERAL when you update those older skills, and for all new skills.
Instead of using AMAZON.LITERAL, you can use a custom slot type to trick Alexa into passing free-form text to the backend.
You can use this configuration to do it:
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "siri",
      "intents": [
        {
          "name": "SaveIntent",
          "slots": [
            {
              "name": "text",
              "type": "catchAll"
            }
          ],
          "samples": [
            "{text}"
          ]
        }
      ],
      "types": [
        {
          "name": "catchAll",
          "values": [
            { "name": { "value": "allonymous isoelectrically salubrity apositia phantomize Sangraal externomedian phylloidal" } },
            { "name": { "value": "imbreviate Bertie arithmetical undramatically braccianite eightling imagerially leadoff" } },
            { "name": { "value": "mistakenness preinspire tourbillion caraguata chloremia unsupportedness squatarole licitation" } },
            { "name": { "value": "Cimbric sigillarid deconsecrate acceptableness balsamine anostosis disjunctively chafflike" } },
            { "name": { "value": "earsplitting mesoblastema outglow predeclare theriomorphism prereligious unarousing" } },
            { "name": { "value": "ravinement pentameter proboscidate unexigent ringbone unnormal Entomophila perfectibilism" } },
            { "name": { "value": "defyingly amoralist toadship psoatic boyology unpartizan merlin nonskid" } },
            { "name": { "value": "broadax lifeboat progenitive betel ashkoko cleronomy unpresaging pneumonectomy" } },
            { "name": { "value": "overharshness filtrability visual predonate colisepsis unoccurring turbanlike flyboy" } },
            { "name": { "value": "kilp Callicarpa unforsaken undergarment maxim cosenator archmugwump fitted" } },
            { "name": { "value": "ungutted pontificially Oudenodon fossiled chess Unitarian bicone justice" } },
            { "name": { "value": "compartmentalize prenotice achromat suitability molt stethograph Ricciaceae ultrafidianism" } },
            { "name": { "value": "slotter archae contrastimulant sopper Serranus remarry pterygial atactic" } },
            { "name": { "value": "superstrata shucking Umbrian hepatophlebotomy undreaded introspect doxographer tractility" } },
            { "name": { "value": "obstructionist undethroned unlockable Lincolniana haggaday vindicatively tithebook" } },
            { "name": { "value": "unsole relatively Atrebates Paramecium vestryish stockfish subpreceptor" } },
            { "name": { "value": "babied vagueness elabrate graphophonic kalidium oligocholia floccus strang" } },
            { "name": { "value": "undersight monotriglyphic uneffete trachycarpous albeit pardonableness Wade" } },
            { "name": { "value": "minacious peroratory filibeg Kabirpanthi cyphella cattalo chaffy savanilla" } },
            { "name": { "value": "Polyborinae Shakerlike checkerwork pentadecylic shopgirl herbary disanagrammatize shoad" } }
          ]
        }
      ]
    }
  }
}
You can try using the slot type AMAZON.SearchQuery, so your intent would be something like this:
{
  "intents": [
    {
      "intent": "SaveIntent",
      "slots": [
        {
          "name": "UpdateText",
          "type": "AMAZON.SearchQuery"
        }
      ]
    }
  ]
}
As of the end of 2018, I am using SearchQuery to get whatever the user says.
It does work, and I have it in production systems.
But you have to ask the user something and fill the slot.
For example:
Define a slot of type AMAZON.SearchQuery named query (choose whatever name you want)
Add sample utterances in the slot prompts, like I want to watch {query}, or {query}, or I want {query}
Ask the user a question for slot filling:
const message = 'What movie do you want to watch?';
// Inside the intent handler: elicit the slot so the user's answer lands in `query`.
return handlerInput
  .responseBuilder
  .speak(message)
  .reprompt(message)
  .addElicitSlotDirective('query')
  .getResponse();
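Once the slot is filled, the raw text is available on the request envelope, e.g.:

// Read the captured text after slot filling (ASK SDK v2 request shape).
const query = handlerInput.requestEnvelope.request.intent.slots.query.value;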
Update: this answer is no longer accurate. As mentioned in the comments, there is the AMAZON.LITERAL slot type, which should allow this.
Alexa doesn't currently support access to the user's raw speech input. It may be possible in the future, or you can look at other voice-to-text APIs, such as Google's.
The only way to do this currently with Alexa would be to have a set list of words that the user could say for it to save.
To do that, you can follow one of Amazon's examples of using a custom slot type, then put all of the possible words that the user might say into that category.
(8/5/17) Unfortunately this feature was removed by Amazon with the elimination of AMAZON.LITERAL.
However, depending on how interested you are in capturing free-form input, you may be satisfied with an input MODE that captures one word, name, city, number, letter, or symbol at a time and strings them together into a single variable with no message in between.
I've worked on a password input mode that can be modified to collect and concatenate user inputs. While input would be slower, if you optimize your Lambda function you may be able to achieve a fast user experience for entering a few sentences. The structure is what's important; the code could easily be adapted.
How to give input to Amazon Alexa Skills Kit (ASK) mixed string with numbers?
https://stackoverflow.com/a/45515598/8408056
Here is a better possible way to achieve what you were looking for. After trying several methods, this got me the complete words of the sentence spoken to Alexa.
You need to make the following setup in your Alexa skill (you can choose the intent name, slot name, and slot type as per your need):
Setting up Intent
Setting up custom slot type
After setting up your Alexa skill, you can invoke it, keep some response for the launch request, then say anything you want, and you can catch the entire text as shown here:
"intent": {
"name": "sample",
"confirmationStatus": "NONE",
"slots": {
"sentence": {
"name": "sentence",
"value": "hello, how are you?",
"resolutions": {
"resolutionsPerAuthority": [
{
"authority": "xxxxxxx",
"status": {
"code": "xxxxxxx"
}
}
]
},
"confirmationStatus": "NONE",
"source": "USER"
}
}
}
Note: with this method, you will need to handle utterances properly if there is more than one intent.
