I'm having trouble validating a schema in Postman using tv4 inside the Tests tab - the test always passes, no matter what I feed it. I've tried a ton of variations from every Stack Overflow post and tutorial I could find and nothing works - it always returns true. I am at a complete loss and could really use a hand. Here is my example JSON response and my tests:
//Test Example
var jsonData = JSON.parse(responseBody);

const schema = {
    "required": ["categories"],
    "properties": {
        "categories": {
            "required": ["aStringOne", "aStringTwo", "aStringThree"],
            "type": "array",
            "properties": {
                "aStringOne": { "type": "string" },
                "aStringTwo": { "type": "null" },
                "aStringThree": { "type": "boolean" }
            }
        }
    }
};

pm.test('Schema is present and accurate', () => {
    var result = tv4.validateMultiple(jsonData, schema);
    console.log(result);
    pm.expect(result.valid).to.be.true;
});
//Response Example
{
    "categories": [
        {
            "aStringOne": "31000",
            "aStringTwo": "Yarp",
            "aStringThree": "More Yarp Indeed"
        }
    ]
}
This should return false, as all three properties are strings, but it's passing. I'm willing to use a different validator or another technique as long as I can export it as a Postman collection to use with Newman in my CI/CD process. I look forward to any help you can give.
I would suggest moving away from tv4 in Postman; the project isn't actively supported, and Postman now includes a better (in my opinion), more actively maintained option called Ajv.
The syntax is slightly different, but hopefully this gives you an idea of how it could work for you.
I've mocked out your data and just added everything into the Tests tab. If you change the jsonData variable to pm.response.json(), it will run against the actual response body.
var jsonData = {
    "categories": [
        {
            "aStringOne": "31000",
            "aStringTwo": "Yarp",
            "aStringThree": "More Yarp Indeed"
        }
    ]
}

var Ajv = require('ajv'),
    ajv = new Ajv({ logger: console, allErrors: true }),
    schema = {
        "type": "object",
        "required": ["categories"],
        "properties": {
            "categories": {
                "type": "array",
                "items": {
                    "type": "object",
                    "required": ["aStringOne", "aStringTwo", "aStringThree"],
                    "properties": {
                        "aStringOne": { "type": "string" },
                        "aStringTwo": { "type": "integer" },
                        "aStringThree": { "type": "boolean" }
                    }
                }
            }
        }
    }

pm.test('Schema is valid', function() {
    pm.expect(ajv.validate(schema, jsonData), JSON.stringify(ajv.errors)).to.be.true
});
This is an example of it failing. I've included the allErrors flag so that it will return all the errors rather than just the first one it sees. In the pm.expect() method, I've added JSON.stringify(ajv.errors) so you can see the errors in the Test Result tab. It's a little bit messy and could be tidied up, but all the error information is there.
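If you do want to tidy that output, here is a minimal sketch of one way to do it, assuming the Ajv build bundled with Postman reports dataPath and message on each error object (newer Ajv releases use instancePath instead of dataPath):

pm.test('Schema is valid (readable errors)', function () {
    var valid = ajv.validate(schema, jsonData);
    // Turn the raw error objects into one short, readable string.
    var readableErrors = (ajv.errors || []).map(function (e) {
        return (e.dataPath || 'response') + ' ' + e.message;
    }).join('; ');
    pm.expect(valid, readableErrors).to.be.true;
});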
Setting the properties to "string" in the schema shows the validation passing, and if one of the required keys is not there, it will error for that too.
Working with schemas is quite difficult; it's not easy to create them (nested arrays and objects are tricky) and ensure they are doing what you want them to do. There have been occasions where I thought something should fail and it passed the validation test. It just takes a bit of learning and practising, and once you understand the schema structures, they can become extremely useful.
I am trying to implement a DevExtreme DataGrid with remote grouping and a CustomStore, using .NET MVC and Angular. The configuration of my custom store looks like this:
this.dataSource = new CustomStore({
    key: "id",
    load: (loadOptions: any) => {
        const gridHeaderModel: overviewGridModel = {
            skip: loadOptions.skip || 0,
            take: loadOptions.take || 20,
            sortDescending: loadOptions?.sort?.[0]?.desc ?? true,
            sortBy: loadOptions?.sort?.[0]?.selector ?? null,
            filters: new OverviewFilterGridModel()
        };
        return this.service.getData(gridHeaderModel);
    }
});
The data that is returned is in the following format:
"data": [
{
"id": 1,
"employeeId": 11
},
{
"id": 2,
"employeeId": 22
}
],
"totalCount": 2
Here is the implementation of the grid:
<dx-data-grid
#exampleGrid
[dataSource]="dataSource"
[allowColumnResizing]="true"
[columns]="columns"
[showRowLines]="true"
[showColumnLines]="true"
[showBorders]="true"
[remoteOperations]="{ groupPaging: true }"
>
<dxo-scrolling mode="virtual"></dxo-scrolling>
<dxo-group-panel [visible]="false"></dxo-group-panel>
<dxo-grouping [autoExpandAll]="true"></dxo-grouping>
<dxo-filter-row [visible]="true" [showOperationChooser]="false"></dxo-filter-row>
</dx-data-grid>
I am getting this error after the grid loads:
E1037 - Invalid structure of grouped data. See: http://js.devexpress.com/error/21_1/E1037
Every example that I found in the documentation and the Support Center Q&A section uses a Web API service, which is not suitable for my problem. Also, when I was analyzing the example here https://js.devexpress.com/Demos/WidgetsGallery/Demo/DataGrid/RemoteGrouping/Angular/Light/ I saw that the front end fires 3 different calls when I scroll the grid. Why? I also searched the whole Support Center but wasn't able to find answers to my problem.
Can you help me with my problem? Can you share an example of implementing a data grid with grouping using the technologies above?
Thank you!
The problem with the example above was the model that was returned from the server.
The solution is to change the model like this:
{
    "data": [
        {
            "key": "Blagojche",
            "items": [
                {
                    "id": 1038,
                    "employeeId": 52,
                    "employeName": "Blagojche"
                }
            ]
        },
        {
            "key": "Peter",
            "items": [
                {
                    "id": 1025,
                    "employeeId": 53,
                    "employeName": "Peter"
                }
            ]
        }
    ],
    "totalCount": 38
}
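On the client side, the only other thing to watch is that the CustomStore actually forwards the grouping information and resolves that grouped shape. Below is a minimal sketch of how the load function could look; this.service.getGroupedData is a hypothetical method used only for illustration, standing in for whatever endpoint returns the { data: [{ key, items }], totalCount } shape shown above:

this.dataSource = new CustomStore({
    key: 'id',
    load: (loadOptions) => {
        // When remote grouping is enabled the grid sends loadOptions.group;
        // the promise must then resolve to { data: [{ key, items }], totalCount }.
        if (loadOptions.group && loadOptions.group.length) {
            return this.service.getGroupedData({   // hypothetical grouped endpoint
                group: loadOptions.group,
                skip: loadOptions.skip || 0,
                take: loadOptions.take || 20
            });
        }
        // Ungrouped requests keep the flat { data: [...], totalCount } shape.
        return this.service.getData({
            skip: loadOptions.skip || 0,
            take: loadOptions.take || 20
        });
    }
});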
I have an index in ElasticSearch with two fields of date type (metricsTime & arrivalTime). A sample document is quoted below. In Kibana, I created a scripted field delay for the difference between those two fields. My painless script is:
doc['arrivalTime'].value - doc['metricsTime'].value
However, I got the following error message when navigating to Kibana's Discover tab: class_cast_exception: Cannot apply [-] operation to types [org.joda.time.MutableDateTime] and [org.joda.time.MutableDateTime].
This looks the same as the error mentioned in https://discuss.elastic.co/t/problem-in-difference-between-two-dates/121655, but the answer on that page suggests that my script is correct. Could you please help?
Thanks!
{
    "_index": "events",
    "_type": "_doc",
    "_id": "HLV274_1537682400000",
    "_version": 1,
    "_score": null,
    "_source": {
        "metricsTime": 1537682400000,
        "box": "HLV274",
        "arrivalTime": 1539930920347
    },
    "fields": {
        "metricsTime": [
            "2018-09-23T06:00:00.000Z"
        ],
        "arrivalTime": [
            "2018-10-19T06:35:20.347Z"
        ]
    },
    "sort": [
        1539930920347
    ]
}
Check the list of Lucene expressions to see which expressions are available for date fields and how you could use them.
For the sake of simplicity, look at the query below. I have created two fields, metricsTime and arrivalTime, in a sample index.
Sample Document
POST mydateindex/mydocs/1
{
    "metricsTime": "2018-09-23T06:00:00.000Z",
    "arrivalTime": "2018-10-19T06:35:20.347Z"
}
Query using painless script
POST mydateindex/_search
{ "query": {
"bool": {
"must": {
"match_all": {
}
},
"filter": {
"bool" : {
"must" : {
"script" : {
"script" : {
"inline" : "doc['arrivalTime'].date.dayOfYear - doc['metricsTime'].date.dayOfYear > params.difference",
"lang" : "painless",
"params": {
"difference": 2
}
}
}
}
}
}
}
}
}
Note the below line in the query
"inline" : "doc['arrivalTime'].date.dayOfYear - doc['metricsTime'].date.dayOfYear > params.difference"
Now if you change the value of difference from 2 to 26 (the day-of-year difference between the two dates, so the strict greater-than comparison no longer holds), you will see that the above query does not return the document.
Nevertheless, I have included the query as an example of how you can compare two different date fields using scripting; please do refer to the link I've shared.
I am having issues understanding how to display images on the Echo Show inside the audioPlayer 'Now Playing' screen.
I am currently playing an audio file and want to display an image on the 'Now Playing' screen. The closest I have been able to get is the following code, which displays the image and title just before the audio starts; they then disappear immediately and the Echo Show goes to the 'Now Playing' screen with no background image and no metadata. I feel I'm close, but I just cannot understand how to update the 'Now Playing' screen, rather than the screen that comes immediately before it.
This is part of the code (which works as per above):
var handlers = {
    'LaunchRequest': function() {
        this.emit('PlayStream');
    },
    'PlayStream': function() {
        let builder = new Alexa.templateBuilders.BodyTemplate1Builder();
        let template = builder.setTitle('Test Title')
            .setBackgroundImage(makeImage('https://link_to_my_image.png'))
            .setTextContent(makePlainText('Test Text'))
            .build();
        this.response.speak('OK.')
            .audioPlayerPlay(
                'REPLACE_ALL',
                stream.url,
                stream.url,
                null,
                0)
            .renderTemplate(template);
        this.emit(':responseReady');
    }
I have been looking at this page https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html but cannot understand how to convert the structure shown there into my code. From the code on that page:
{
    "type": "AudioPlayer.Play",
    "playBehavior": "valid playBehavior value such as ENQUEUE",
    "audioItem": {
        "stream": {
            "url": "https://url-of-the-stream-to-play",
            "token": "opaque token representing this stream",
            "expectedPreviousToken": "opaque token representing the previous stream",
            "offsetInMilliseconds": 0
        },
        "metadata": {
            "title": "title of the track to display",
            "subtitle": "subtitle of the track to display",
            "art": {
                "sources": [
                    {
                        "url": "https://url-of-the-album-art-image.png"
                    }
                ]
            },
            "backgroundImage": {
                "sources": [
                    {
                        "url": "https://url-of-the-background-image.png"
                    }
                ]
            }
        }
    }
}
I assume I somehow need to get this part:
"metadata": {
"title": "title of the track to display",
"subtitle": "subtitle of the track to display",
"art": {
"sources": [
{
"url": "https://url-of-the-album-art-image.png"
}
]
},
into this block of my code:
audioPlayerPlay(
'REPLACE_ALL',
streamInfo.url,
streamInfo.url,
null,
0)
.renderTemplate(template);
(I could probably lose the .renderTemplate(template); part, as it only flashes up briefly before the 'Now Playing' screen loads anyway.)
Any ideas on how to achieve this?
Thanks!
Update:
I have added the following to index.js:
var metadata = {
    title: "title of the track to display",
    subtitle: "subtitle of the track to display",
    art: {
        sources: {
            url: "https://url-of-the-album-art-image.png"
        }
    }
};
And modified the audioPlayerPlay call as follows:
audioPlayerPlay(
'REPLACE_ALL',
stream.url,
stream.url,
null,
0,
metadata)
.renderTemplate(template);
And modified the responseBuilder.js as indicated:
audioPlayerPlay(behavior, url, token, expectedPreviousToken, offsetInMilliseconds, metadata) {
    const audioPlayerDirective = {
        type: DIRECTIVE_TYPES.AUDIOPLAYER.PLAY,
        playBehavior: behavior,
        audioItem: {
            stream: {
                url: url,
                token: token,
                expectedPreviousToken: expectedPreviousToken,
                offsetInMilliseconds: offsetInMilliseconds,
                metadata: metadata
            }
        }
    };
    this._addDirective(audioPlayerDirective);
    return this;
}
But I'm still not getting anything displayed on the 'Now Playing' screen.
For some reason the Echo Show is not updating in real time and needs to be rebooted before it will show whatever is passed in the metadata variable, which is why I wasn't seeing any results.
Simply passing a variable like this works fine. I just need to find out why the content gets stuck on the 'Now Playing' screen and requires a reboot before it updates.
var "metadata": {
"title": "title of the track to display",
"subtitle": "subtitle of the track to display",
"art": {
"sources": [
{
"url": "https://url-of-the-album-art-image.png"
}
]
},
Just define your metadata as below and pass it as the sixth argument to audioPlayerPlay:
var metadata = {
    "title": "title of the track to display",
    "subtitle": "subtitle of the track to display",
    "art": {
        "sources": [
            {
                "url": "https://url-of-the-album-art-image.png"
            }
        ]
    }
};
audioPlayerPlay(
    'REPLACE_ALL',
    streamInfo.url,
    streamInfo.url,
    null,
    0,
    metadata)
P.S. For this to work properly, you have to modify the node modules which you'll be zipping and uploading to Lambda.
Steps:
Go to node_modules\alexa-sdk\lib, open the responseBuilder file, and modify the code as follows:
audioPlayerPlay(behavior, url, token, expectedPreviousToken, offsetInMilliseconds, metadata) { // metadata parameter added
    const audioPlayerDirective = {
        type: DIRECTIVE_TYPES.AUDIOPLAYER.PLAY,
        playBehavior: behavior,
        audioItem: {
            stream: {
                url: url,
                token: token,
                expectedPreviousToken: expectedPreviousToken,
                offsetInMilliseconds: offsetInMilliseconds
            },
            metadata: metadata // metadata added under audioItem (not inside stream)
        }
    };
    this._addDirective(audioPlayerDirective);
    return this;
}
P.S. The node module modification is required only if you are using alexa-sdk version 1.
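If you are on the newer ask-sdk (v2) instead, no patching should be needed; the response builder takes the metadata as an extra argument. A rough sketch, assuming the v2 addAudioPlayerPlayDirective argument order (playBehavior, url, token, offsetInMilliseconds, expectedPreviousToken, metadata) - double-check it against the SDK version you actually use:

const metadata = {
    title: 'title of the track to display',
    subtitle: 'subtitle of the track to display',
    art: { sources: [{ url: 'https://url-of-the-album-art-image.png' }] }
};

// Note the different argument order compared to the v1 helper above.
return handlerInput.responseBuilder
    .speak('OK.')
    .addAudioPlayerPlayDirective('REPLACE_ALL', stream.url, stream.url, 0, null, metadata)
    .getResponse();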
I know it's been years since this question was originally posted, but for those like me who stumble upon this now, make sure you use a unique token in the play directive because metadata is cached using that token.
See the yellow Important note in the following section https://developer.amazon.com/en-US/docs/alexa/custom-skills/audioplayer-interface-reference.html#images
Important: The metadata for a given audio stream is identified by the audioItem.stream.token included in the Play directive. Note that the metadata associated with a particular audioItem.stream.token may be cached in the Alexa service for up to five days, so changes to the metadata (such as a different image, or a change to the title text) may not be reflected immediately on the device. For instance, you may notice this when testing if you experiment with different images or title text for the same audio stream. You can send a new Play directive with a different audioItem.stream.token to clear the cache.
And an example payload with a token:
{
    "type": "AudioPlayer.Play",
    "playBehavior": "valid playBehavior value such as ENQUEUE",
    "audioItem": {
        "stream": {
            "url": "https://cdn.example.com/url-of-the-stream-to-play",
            "token": "opaque token representing this stream",
            "expectedPreviousToken": "opaque token representing the previous stream",
            "offsetInMilliseconds": 0,
            "captionData": {
                "content": "WEBVTT\n\n00:00.000 --> 00:02.107\n<00:00.006>My <00:00.0192>Audio <00:01.232>Captions.\n",
                "type": "WEBVTT"
            }
        },
        "metadata": {
            "title": "title of the track to display",
            "subtitle": "subtitle of the track to display",
            "art": {
                "sources": [
                    {
                        "url": "https://cdn.example.com/url-of-the-album-art-image.png"
                    }
                ]
            },
            "backgroundImage": {
                "sources": [
                    {
                        "url": "https://cdn.example.com/url-of-the-background-image.png"
                    }
                ]
            }
        }
    }
}
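In practice that means generating a fresh token while you are iterating on images or titles. A small sketch in the v1 handler style used earlier in this thread (it assumes the patched audioPlayerPlay that accepts metadata as the sixth argument); how you build the token is up to you, it just has to be unique per stream:

// Appending a timestamp is one simple way to get a token Alexa has not cached yet.
var token = stream.url + '#' + Date.now();

this.response.speak('OK.')
    .audioPlayerPlay('REPLACE_ALL', stream.url, token, null, 0, metadata);
this.emit(':responseReady');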
I want the definitions section to be better generated and organized. For me this means not allowing depth to build up in definitions: each class involved in the structure tree should have its own entry in the definitions section and be referenced via $ref. Each definition would then only contain a list of properties that are either of primitive types (string, boolean, etc.) or a $ref to another definition entry for another custom class. You can also see this as a depth-1 definition, close to how the classes are originally defined in C#.
To illustrate this via a trivial example:
JSchemaGenerator schemaGenerator = new JSchemaGenerator()
{
    DefaultRequired = Newtonsoft.Json.Required.DisallowNull,
    SchemaIdGenerationHandling = SchemaIdGenerationHandling.TypeName,
    SchemaLocationHandling = SchemaLocationHandling.Definitions,
    SchemaReferenceHandling = SchemaReferenceHandling.Objects,
};
JSchema schema = schemaGenerator.Generate(typeof(Setting));
Renders:
{
    "id": "Setting",
    "definitions": {
        "SubSetting": {
            "id": "SubSetting",
            "type": "object",
            "properties": {
                "SubSubSetting": {
                    "id": "SubSubSetting",
                    "type": "object",
                    "properties": {
                        "String": {
                            "type": "string"
                        }
                    }
                }
            }
        },
        "SubSubSetting": {
            "$ref": "SubSubSetting"
        }
    },
    "type": "object",
    "properties": {
        "SubSetting": {
            "$ref": "SubSetting"
        },
        "SubSubSetting": {
            "$ref": "SubSubSetting"
        }
    }
}
Thus, the SubSubSetting definition is placed inline within the SubSetting definition, and later SubSubSetting is defined as a reference to that inline definition. That's what I want to avoid: for complex data structures it becomes really obscure, and I want to provide the schema as part of living, auto-generated documentation based on data annotations and JsonProperty.
How can I accomplish this using JSchemaGenerator?
Maybe I shouldn't do this, but as a second very short question: Are those $ref syntactically correct? Shouldn't they look like "#/definitions/SubSetting"?
The latest version of Json.NET Schema (3.0.3) has been updated to fix this issue. SubSubSetting will contain the full definition and not just a $ref.
https://github.com/JamesNK/Newtonsoft.Json.Schema/releases/tag/3.0.3
I'm writing Alexa skills and want to write a skill to store the speaker's words.
For example, if I say, 'Alexa, save {whatever I say}', it should save the words in some string.
Now, from what I understand, the intent schema should be something like
{
    "intents": [
        { "intent": "SaveIntent" }
    ]
}
and utterances like
SaveIntent save
SaveIntent store
In this case, how do I store '{whatever I say}'?
To capture free-form speech input (rather than a defined list of possible values), you'll need to use the AMAZON.LITERAL slot type. The Amazon documentation for the Literal slot type describes a use case similar to yours, where a skill is created to take any phrase and post it to a Social Media site. This is done by creating a StatusUpdate intent:
{
    "intents": [
        {
            "intent": "StatusUpdate",
            "slots": [
                {
                    "name": "UpdateText",
                    "type": "AMAZON.LITERAL"
                }
            ]
        }
    ]
}
Since it uses the AMAZON.LITERAL slot type, this intent will be able to capture any arbitrary phrase. However, to ensure that the speech engine will do a decent job of capturing real-world phrases, you need to provide a variety of example utterances that resemble the sorts of things you expect the user to say.
Given that in your described scenario, you're trying to capture very dynamic phrases, there's a couple things in the documentation you'll want to give extra consideration to:
If you are using the AMAZON.LITERAL type to collect free-form text with wide variations in the number of words that might be in the slot, note the following:
Covering this full range (minimum, maximum, and all in between) will require a very large set of samples. Try to provide several hundred samples or more to address all the variations in slot value words as noted above.
Keep the phrases within slots short enough that users can say the entire phrase without needing to pause.
Lengthy spoken input can lead to lower accuracy experiences, so avoid designing a spoken language interface that requires more than a few words for a slot value. A phrase that a user cannot speak without pausing is too long for a slot value.
That said, here's the example Sample Utterances from the documentation, again:
StatusUpdate post the update {arrived|UpdateText}
StatusUpdate post the update {dinner time|UpdateText}
StatusUpdate post the update {out at lunch|UpdateText}
...(more samples showing phrases with 4-10 words)
StatusUpdate post the update {going to stop by the grocery store this evening|UpdateText}
If you provide enough examples of different lengths to give an accurate picture of the range of expected user utterances, then your intent will be able to accurately capture dynamic phrases in real uses cases, which you can access in the UpdateText slot. Based on this, you should be able to implement an intent specific to your needs.
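Once that intent fires, the captured phrase simply arrives as the slot value. As a minimal sketch in the older alexa-sdk v1 handler style (the same style used elsewhere in this thread), reading and saving it might look like this:

var handlers = {
    'StatusUpdate': function () {
        // The free-form phrase captured by the AMAZON.LITERAL slot.
        var spokenText = this.event.request.intent.slots.UpdateText.value;
        // Keep it in session attributes (or persist it wherever you need).
        this.attributes.savedText = spokenText;
        this.emit(':tell', 'Saved: ' + spokenText);
    }
};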
Important: AMAZON.LITERAL is deprecated as of October 22, 2018. Older skills built with AMAZON.LITERAL do continue to work, but you must migrate away from AMAZON.LITERAL when you update those older skills, and for all new skills.
Instead of using AMAZON.LITERAL, you can use a custom slot to trick Alexa into passing free-flow text to the backend.
You can use this configuration to do it:
{
"interactionModel": {
"languageModel": {
"invocationName": "siri",
"intents": [
{
"name": "SaveIntent",
"slots": [
{
"name": "text",
"type": "catchAll"
}
],
"samples": [
"{text}"
]
}
],
"types": [
{
"name": "catchAll",
"values": [
{
"name": {
"value": "allonymous isoelectrically salubrity apositia phantomize Sangraal externomedian phylloidal"
}
},
{
"name": {
"value": "imbreviate Bertie arithmetical undramatically braccianite eightling imagerially leadoff"
}
},
{
"name": {
"value": "mistakenness preinspire tourbillion caraguata chloremia unsupportedness squatarole licitation"
}
},
{
"name": {
"value": "Cimbric sigillarid deconsecrate acceptableness balsamine anostosis disjunctively chafflike"
}
},
{
"name": {
"value": "earsplitting mesoblastema outglow predeclare theriomorphism prereligious unarousing"
}
},
{
"name": {
"value": "ravinement pentameter proboscidate unexigent ringbone unnormal Entomophila perfectibilism"
}
},
{
"name": {
"value": "defyingly amoralist toadship psoatic boyology unpartizan merlin nonskid"
}
},
{
"name": {
"value": "broadax lifeboat progenitive betel ashkoko cleronomy unpresaging pneumonectomy"
}
},
{
"name": {
"value": "overharshness filtrability visual predonate colisepsis unoccurring turbanlike flyboy"
}
},
{
"name": {
"value": "kilp Callicarpa unforsaken undergarment maxim cosenator archmugwump fitted"
}
},
{
"name": {
"value": "ungutted pontificially Oudenodon fossiled chess Unitarian bicone justice"
}
},
{
"name": {
"value": "compartmentalize prenotice achromat suitability molt stethograph Ricciaceae ultrafidianism"
}
},
{
"name": {
"value": "slotter archae contrastimulant sopper Serranus remarry pterygial atactic"
}
},
{
"name": {
"value": "superstrata shucking Umbrian hepatophlebotomy undreaded introspect doxographer tractility"
}
},
{
"name": {
"value": "obstructionist undethroned unlockable Lincolniana haggaday vindicatively tithebook"
}
},
{
"name": {
"value": "unsole relatively Atrebates Paramecium vestryish stockfish subpreceptor"
}
},
{
"name": {
"value": "babied vagueness elabrate graphophonic kalidium oligocholia floccus strang"
}
},
{
"name": {
"value": "undersight monotriglyphic uneffete trachycarpous albeit pardonableness Wade"
}
},
{
"name": {
"value": "minacious peroratory filibeg Kabirpanthi cyphella cattalo chaffy savanilla"
}
},
{
"name": {
"value": "Polyborinae Shakerlike checkerwork pentadecylic shopgirl herbary disanagrammatize shoad"
}
}
]
}
]
}
}
}
You can try using the slot type AMAZON.SearchQuery. So your intent would be something like this:
{
    "intents": [
        {
            "intent": "SaveIntent",
            "slots": [
                {
                    "name": "UpdateText",
                    "type": "AMAZON.SearchQuery"
                }
            ]
        }
    ]
}
As of the end of 2018, I am using AMAZON.SearchQuery to get whatever the user says.
It does work, and I have it on production systems.
But you have to ask the user something and fill the slot.
For example:
Define a slot of type AMAZON.SearchQuery named query (choose whatever name you want).
Add sample utterances that use the slot, like I want to watch {query}, or {query}, or I want {query}.
Ask the user a question to fill the slot:
const message = 'What movie do you want to watch?';

return handlerInput
    .responseBuilder
    .speak(message)
    .reprompt(message)
    .addElicitSlotDirective('query')
    .getResponse();
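When the user answers, the next IntentRequest arrives with the slot filled, and you can read the captured phrase straight off the request. A sketch, assuming the slot name query from the example above:

// The free-form phrase the user spoke after the elicitation question.
const query = handlerInput.requestEnvelope.request.intent.slots.query.value;

return handlerInput.responseBuilder
    .speak('You asked for ' + query)
    .getResponse();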
Update: this answer is no longer accurate. As mentioned in the comments, there is the AMAZON.LITERAL slot type that should allow this.
Alexa doesn't currently support access to the user's raw speech input. It may be possible in the future, or you can look at other voice-to-text APIs such as Google's.
The only way to do this currently with Alexa would be to have a set list of words that the user could say for it to save.
To do that, you can follow one of Amazon's examples of using a custom slot type, then put all of the possible words the user might say into that slot type.
(8/5/17) Unfortunately this feature was removed by Amazon with the elimination of AMAZON.LITERAL.
However, depending on how interested you are in capturing free-form input, you may be satisfied with an input mode that captures one word, name, city, number, letter, symbol, etc. at a time and strings them together into a single variable with no message in between.
I've worked on a password input mode that can be modified to collect and concatenate user inputs. While input would be slower, if you optimize your Lambda function you may be able to achieve a fast user experience for entering a few sentences. The structure is what's important; the code could easily be adapted.
How to give input to Amazon Alexa Skills Kit (ASK) mixed string with numbers?
https://stackoverflow.com/a/45515598/8408056
Here is another possible way to achieve what you were looking for. After trying several methods, I was able to get the complete wording of the statement spoken to Alexa.
You need to make the following setup in your Alexa skill (you can choose the intent name, slot name, and slot type as you need):
Setting up the intent
Setting up the custom slot type
After setting up your Alexa skill, you can invoke it, keep some response for the launch request, and say anything you want; you can then catch the entire text, as shown here:
"intent": {
"name": "sample",
"confirmationStatus": "NONE",
"slots": {
"sentence": {
"name": "sentence",
"value": "hello, how are you?",
"resolutions": {
"resolutionsPerAuthority": [
{
"authority": "xxxxxxx",
"status": {
"code": "xxxxxxx"
}
}
]
},
"confirmationStatus": "NONE",
"source": "USER"
}
}
}
Note: with this method, you will need to handle utterances carefully if there is more than one intent.