JSON.Net Schema multiple custom validations

I am using Newtonsoft.Json.Schema library and making use of custom validations to produce custom error messages, which are themselves defined in the schema.
"sku" : {
  "type" : "string",
  "message" : {
    "required" : "The SKU is a required field"
  },
  "format" : "Required"
}
This works fine and I can pull the message from the given schema when the custom validation executes.
However I would like to add more custom validation like:
"sku" : {
  "type" : "string",
  "pattern" : "\\d",
  "message" : {
    "required" : "The SKU is a required field",
    "pattern" : "The SKU must be a number"
  },
  "format" : "Pattern" <--- can only specify a single custom validation
}
Is there any way to get a second custom validator to run in order to get a custom validation for both required and pattern?
(Just to provide some background here... we have incoming data that must be validated, but the errors need to be meaningful within the business context the data originated from. That business context is unknown to us, so context-sensitive messages, such as error text, must be provided to the validation in some way.)

So in the end I made a single custom validation which creates a new schema from the custom validation context like this:
JSchema valueSchema = JSchema.Parse(context.Schema.ToString());
This creates a new schema just for the value being processed by the custom validation, without any custom validation calls. That step is necessary because the existing schema in context.Schema contains the call to the custom validation, and without it we would be stuck in an infinite loop.
Now I can run the value through this local schema and get a full report of all the errors that occur and deliver the relevant custom error message.
The custom error messages have property names that match the ErrorType given in the validation errors, which makes the lookup easy.
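That lookup is language-agnostic, so here is a minimal sketch of it in JavaScript (the real implementation lives in the C# validation callback; `customMessage` is a hypothetical helper name, not part of the library):

```javascript
// The "message" object as it appears in the schema, keyed by error type.
const messages = {
  required: "The SKU is a required field",
  pattern: "The SKU must be a number"
};

// Hypothetical helper: pick the custom message for a validation error,
// falling back to the validator's default text when no override exists.
function customMessage(errorType, defaultText) {
  return messages[errorType] || defaultText;
}
```

With all errors collected from the locally parsed schema, each error's type indexes directly into the message object.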

Related

Extend Hydra search context to generate filter fields client side?

I'm using API Platform 3.1. When you've got an entity with an ApiFilter, Hydra will give some information about all fields that have a filter. In this simple example it will give the name of the field (createdAt):
class Entity
{
    #[ApiFilter(DateFilter::class, properties: ['createdAt'])]
    protected ?DateTimeInterface $createdAt = null;
}
"hydra:search": {
  "@type": "hydra:IriTemplate",
  "hydra:mapping": {
    "@type": "IriTemplateMapping",
    "variable": "createdAt[after]",
    "property": "createdAt",
    "required": false
  }
}
This information can be used to populate search fields client side, much like Swagger does.
Now I want to use this information to generate a form with fields that can be used to sort or filter. But I want not only the field names, but also some extra information about how each filter can be used. For example, a date field createdAt should render a slightly different datepicker than a birthdate field.
I've already tried extending the Hydra context as described in API Platform's documentation:
#[ApiResource(operations: [
    new Get(hydraContext: ['foo' => 'bar'])
])]
But I couldn't find foo/bar in my /api/items response.
Is there a way to add some information to the Hydra model which I can use in my application?

Create or Read item in DynamoDb

I'm trying to read an item with ID of X from DynamoDB (Using Appsync graphql) and I want it to create a default item if there is none.
This seems like it should be a normal use case. But the solutions I've tried have all been pretty bad:
I tried to create a Pipeline resolver that would first get the item, then in a second function create an item if there was no item in the result from the previous function. This had problems with returning the read item.
I tried making a PutItem action with the condition that an item with this ID doesn't exist. This does what I need it to, but I can't change the response from being an error, no matter what I do to the response mapping template.
So how does one efficiently create a "read - or create if it does not exist" resolver for DynamoDb?
It turns out that I was close to the solution.
According to this documentation: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html#aws-appsync-resolver-mapping-template-reference-dynamodb-condition-handling
Create a PutItem resolver that conditionally checks whether an item with the same unique identifier already exists (in DynamoDB that's usually a partition key, or a partition key and sort key combination).
If the resolver determines that the existing item does not differ from the intended new item, no error is returned. So we can simply remove ALL fields from the comparison by listing them in equalsIgnore.
Example:
{
  "version" : "2017-02-28",
  "operation" : "PutItem",
  "key" : {
    "id" : { "S" : "${ctx.args.id}" }
  },
  "condition" : {
    "expression" : "attribute_not_exists(id)",
    "equalsIgnore": [ "__typename", "_version", "_lastChangedAt", "_createdAt", "name", "owner" ]
  },
  "attributeValues": {
    "name": { "S" : "User Username" }
  }
}
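The behaviour this template produces can be illustrated with a tiny in-memory sketch (plain JavaScript for illustration only, not AppSync code; `getOrCreate` and `table` are hypothetical names):

```javascript
// In-memory illustration of "create if it does not exist":
// the write only happens when the key is absent, and the caller
// always gets the existing or newly created item back, never an error.
const table = new Map();

function getOrCreate(id, defaults) {
  if (!table.has(id)) {               // plays the role of attribute_not_exists(id)
    table.set(id, { id, ...defaults });
  }
  return table.get(id);               // the condition "failure" is swallowed,
}                                     // which is what equalsIgnore achieves above
```

Calling it twice with the same id returns the originally stored item both times, which is exactly the "read, or create if missing" semantics the resolver needs.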

Watson Conversation Dialogue, how to save user input using slot

In my Watson Conversation dialog I am trying to read user input using a slot.
My requirement is to prompt the user to enter an issue description and save it in a variable named issue_description.
But in a slot, Watson checks for an intent or entity before saving the input into a variable. In my case I put in an intent to check against, but the input is not saved into the variable after the check; I always get true as issue_description.
How can I save the issue description into a variable?
What should the intent and entity be for this?
If you want to save the user input, then you can use <?input.text?> to save the input in any variable.
"context": {
  "issue_description": "<?input.text?>"
}
To capture something like a description in a slot, my recommendation is to:
define an entity based on a pattern that describes what the description should look like;
in the pattern, you could use quotes as delimiters of the string to capture;
in the slot definition, have Watson look for that entity, and provide the name of a context variable the entity value is saved to;
access the context variable to process the captured value.
There is a sample workspace I wrote that captures an event description using a pattern. In the dialog I cut the quotes off the string and then send it to a function for postprocessing. The eventName entity is defined as follows; the pattern in patterns is the interesting part:
{
  "entity": "eventName",
  "values": [
    {
      "type": "patterns",
      "value": "shortname",
      "created": "2018-01-31T13:28:56.245Z",
      "updated": "2018-02-07T09:08:31.651Z",
      "metadata": null,
      "patterns": [
        "[\"„“][A-Za-z0-9.:| #\\']+[\"”“]"
      ]
    }
  ]
}
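The effect of that pattern can be sketched in plain JavaScript (illustration only; `extractDescription` is a hypothetical name, and the regex below mirrors the pattern above with a capture group added so the quotes are cut off in one step):

```javascript
// Simplified version of the eventName pattern: a quoted run of letters,
// digits and a few punctuation characters. The capture group returns
// the text without the surrounding quotes.
const pattern = /["„“]([A-Za-z0-9.:| #']+)["”“]/;

function extractDescription(text) {
  const match = text.match(pattern);
  return match ? match[1] : null;   // null when no quoted description found
}
```

For example, `extractDescription('Schedule "Team Meeting" now')` yields the bare description, while input without quotes yields null.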
To store the user input in the context variable issue_description, you can either use an intent, if you are not validating the input (description), or you can use an entity whose value is based on a pattern. By doing this, you can configure the bot to recognize the condition and save the value to the context variable.

Filter by type in Google AutocompleteService

I want to filter away establishments in my autocomplete service.
I have tried
var service = new google.maps.places.AutocompleteService();
var request = {
  "input": "Nørregade",
  "componentRestrictions": { "country": "dk" },
  "types": ["(cities)", "(regions)", "geocode"]
};
service.getPlacePredictions(request, function (predictions, status) {
  console.log(status); // => INVALID_REQUEST
});
(http://jsfiddle.net/gdk0j9eg/1/)
Following this link (https://developers.google.com/maps/documentation/javascript/reference#AutocompletionRequest) it seems that the autocomplete service does indeed take these types.
What am I doing wrong here?
See the documentation; you are only allowed to use a single type or type collection:
You may restrict results from a Place Autocomplete request to be of a certain type by passing a types parameter. The parameter specifies a type or a type collection, as listed in the supported types below. If nothing is specified, all types are returned. In general only a single type is allowed. The exception is that you can safely mix the geocode and establishment types, but note that this will have the same effect as specifying no types.
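So, assuming you only want address-level predictions, a corrected request keeps a single type collection (a sketch; the callback is unchanged from the question's code):

```javascript
// Only one type (or type collection) per request is allowed.
var request = {
  "input": "Nørregade",
  "componentRestrictions": { "country": "dk" },
  "types": ["geocode"]   // a single type collection instead of three
};
// service.getPlacePredictions(request, function (predictions, status) { ... });
// (left commented out here because it requires the Maps JavaScript API to be loaded)
```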

Exception in defer callback: Error: When the modifier option is true, validation object must have at least one operator

I'm trying to add the roles package and then set a custom user role like guest or member so I can use it with paid plans. I'm getting the following error:
Exception in defer callback: Error: When the modifier option is true, validation object must have at least one operator
at checkModifier (packages/aldeed:simple-schema/simple-schema-validation.js:271:1)
at doValidation1 (packages/aldeed:simple-schema/simple-schema-validation.js:321:1)
When I run the following function
Meteor.methods({
  setUserRole: function (userId, roleToSet) {
    // check(Meteor.userId(), String);
    check(userId, String);
    check(roleToSet, String);
    var user = Meteor.users.findOne(userId);
    if (_.isEmpty(user.roles)) {
      Roles.addUsersToRoles(userId, roleToSet);
    }
  }
});
This often means you're trying to $set a field that hasn't been added to the schema.
If you're using Telescope, make sure you call Users.addField() for whatever fields are needed by the Roles package.
This happens when you apply a schema to the Users collection.
There are two types of roles definitions you can apply:
roles: {
  type: Object,
  optional: true,
  blackbox: true
},
or
roles: {
  type: [String],
  optional: true
}
You can't use both at the same time. In your case, since you do not use groups in Roles.addUsersToRoles(userId, roleToSet); you need the second example of the roles schema definition.
Just be aware that you will not be able to use groups without changing the schema.
This error is thrown by simple-schema. It means an update was run with a modifier that doesn't have an operator ($set, $unset, etc.). The latest version of the roles package seems to avoid this in the code related to Roles.addUsersToRoles, but if the error goes away when you comment out the line where you use the addUsersToRoles method, then maybe you need to:
Make sure you are using the latest version of the roles package, or update it with:
meteor update alanning:roles
Check the code that calls this method and make sure the arguments are correct and in the correct order.
Make sure you are not mixing the grouped with the non-grouped model (when using the roles package you should choose whether to always use groups or never use them). For example:
Roles.addUsersToRoles(userId, roles, Roles.GLOBAL_GROUP)
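One way to keep group usage consistent is to funnel every role assignment through a small wrapper that always passes Roles.GLOBAL_GROUP (a sketch; `addUserToRolesGlobally` is a hypothetical helper, demonstrated against a mock Roles object since the real package needs a Meteor runtime):

```javascript
// Hypothetical wrapper: every role assignment goes through one place,
// so grouped and non-grouped calls can never be mixed by accident.
function addUserToRolesGlobally(Roles, userId, roles) {
  return Roles.addUsersToRoles(userId, roles, Roles.GLOBAL_GROUP);
}

// Mock standing in for the alanning:roles API, for illustration only.
const calls = [];
const mockRoles = {
  GLOBAL_GROUP: "__global_roles__",
  addUsersToRoles: (userId, roles, group) => calls.push({ userId, roles, group })
};

addUserToRolesGlobally(mockRoles, "user1", "member");
```

The wrapper makes the "always use groups" choice explicit in one spot instead of relying on every call site remembering it.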
