Pact: How do I match an object whose keys match a regular expression?

I am trying to write a pact consumer test to match the following response.
[
  {
    "accountId" : 1,
    "permissions" : [
      {
        "schedule" : {
          "01/01/2018" : false,
          "01/01/1900" : true
        },
        "permissionId" : 3
      }
    ]
  }
]
Each schedule object is composed of an unknown number of keys which match a simple regular expression. But I don't see a way to match a key using a regular expression while having the value map to a simple boolean.
For instance, I see the following method in the API.
public LambdaDslObject eachKeyLike(
    String exampleKey,
    Consumer<LambdaDslObject> nestedObject)
But that is going to expect a new object as the value, instead of a primitive type.
"schedule" : {
"01/01/2018" : { ... }, // not what I want to match
"01/01/1900" : false // what I want to match
}
Is there a way to specify an imprecise key mapped to a primitive value in pact-jvm?

Sorry, this feature doesn't exist yet, but it's been discussed for the next version of the pact specification. You can add your thoughts on this issue: https://github.com/pact-foundation/pact-specification/issues/47
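In the meantime, the closest you can get with the current DSL is the eachKeyLike method quoted in the question: it accepts arbitrary keys under schedule, but forces every value to be a nested object rather than a plain boolean. A minimal sketch, assuming the pact-jvm LambdaDsl builder (the wrapper field name "enabled" is made up for illustration):

import au.com.dius.pact.consumer.dsl.DslPart;
import au.com.dius.pact.consumer.dsl.LambdaDsl;

DslPart body = LambdaDsl.newJsonArray(accounts ->
        accounts.object(account -> {
            account.numberType("accountId", 1);
            account.array("permissions", permissions ->
                    permissions.object(permission -> {
                        permission.numberType("permissionId", 3);
                        permission.object("schedule", schedule ->
                                // any key is accepted here, but the value must be a nested
                                // object such as { "enabled": true }, which is exactly the
                                // limitation described in the question
                                schedule.eachKeyLike("01/01/2018", entry ->
                                        entry.booleanType("enabled", true)));
                    }));
        })
).build();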

Related

FieldValue.arrayRemove() to remove an object from array of objects based on property value

I have a document with the following structure:
{
  "email" : "a#gmail.com",
  "value" : 100,
  "children" : [
    {
      "email" : "b#gmail.com",
      "value" : 100
    },
    {
      "email" : "b#gmail.com",
      "value" : 200
    }
  ]
}
I want to remove all elements with the email b#gmail.com from the children array. I am able to remove one item if I pass the whole object to be removed like this:
FieldValue.arrayRemove(childObject)
But I want both the objects with the email b#gmail.com to be removed. Is there any way to achieve this using FieldValue.arrayRemove()?
The arrayRemove operation removes the exact item that you specify from the array. There is no way to pass a partial object and remove all array items that match the partial information. You will have to pass in each complete item that you want to remove.
If you don't know what those items are yet, you will typically have to first read the document, loop over the items in the array to filter out the ones you want removed, and write the modified array back to the document.
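A minimal sketch of that read-modify-write pattern with the Android/Java SDK (the collection name, document id, and helper method are placeholders, not part of the question):

import com.google.firebase.firestore.DocumentReference;
import com.google.firebase.firestore.FirebaseFirestore;
import java.util.List;
import java.util.Map;

// Read the document, drop every child whose email matches, write the array back.
void removeChildrenByEmail(String docId, String email) {
    DocumentReference docRef = FirebaseFirestore.getInstance()
            .collection("parents").document(docId);   // placeholder names
    docRef.get().addOnSuccessListener(snapshot -> {
        @SuppressWarnings("unchecked")
        List<Map<String, Object>> children =
                (List<Map<String, Object>>) snapshot.get("children");
        if (children != null) {
            children.removeIf(child -> email.equals(child.get("email")));
            docRef.update("children", children);
        }
    });
}

If concurrent writers are a concern, the same read-filter-write sequence can be wrapped in a transaction so the array isn't overwritten between the read and the update.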
As an update, it is still the case that you must match the object exactly to remove it from an array. Additionally, since the example above selects elements by a property value, you still need to read the document first to see what matches.
However, depending on your logic, you can use a Map instead. For instance, the case above, adjusted:
"children" :
"b#gmail.com_100":
{
"email" : "b#gmail.com",
"value" : 100
},
"b#gmail.com_200":
{
"email" : "b#gmail.com",
"value" : 200
}
You can simply use:
'children.b#gmail.com_200': FieldValue.delete(),
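In the Java/Android SDK the same deletion goes through update() with FieldValue.delete(). Note that the generated key itself contains dots, which a dotted string path would normally treat as separators, so it is safer to build the path from explicit segments. A sketch (collection and document names are placeholders):

import com.google.firebase.firestore.DocumentReference;
import com.google.firebase.firestore.FieldPath;
import com.google.firebase.firestore.FieldValue;
import com.google.firebase.firestore.FirebaseFirestore;

DocumentReference docRef = FirebaseFirestore.getInstance()
        .collection("parents")   // placeholder collection name
        .document("someDocId");  // placeholder document id

// FieldPath.of() treats each argument as one path segment, so the dots
// inside "b#gmail.com_200" are not interpreted as nested fields.
docRef.update(FieldPath.of("children", "b#gmail.com_200"), FieldValue.delete());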
As of late, I've gravitated away from Lists to Maps for this reason.

Create or Read item in DynamoDb

I'm trying to read an item with an ID of X from DynamoDB (using AppSync GraphQL), and I want it to create a default item if there is none.
This seems like it should be a normal use case. But the solutions I've tried have all been pretty bad:
I tried to create a pipeline resolver that would first get the item, then in a second function create an item if there was no item in the result from the previous function. This had problems with returning the read item.
I tried making a PutItem action with the condition that an item with this ID doesn't already exist. This does what I need it to, but I can't change the response from an error warning, no matter what I do to the response mapping template.
So how does one efficiently create a "read - or create if it does not exist" resolver for DynamoDb?
It turns out that I was close to the solution.
According to this documentation: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-reference-dynamodb.html#aws-appsync-resolver-mapping-template-reference-dynamodb-condition-handling
Create a PutItem resolver that conditionally checks whether an item with the same unique identifier already exists (in DynamoDB that's usually a partition key and sort key combination).
If the resolver determines that the existing item is not different from the intended new item, no error is returned. So we can simply exclude ALL fields from the comparison via equalsIgnore.
Example:
{
  "version" : "2017-02-28",
  "operation" : "PutItem",
  "key" : {
    "id" : { "S" : "${ctx.args.id}" }
  },
  "condition" : {
    "expression" : "attribute_not_exists(id)",
    "equalsIgnore" : [ "__typename", "_version", "_lastChangedAt", "_createdAt", "name", "owner" ]
  },
  "attributeValues" : {
    "name" : { "S" : "User Username" }
  }
}

Nested arrays are not supported

The new Firebase database Firestore says
Function DocumentReference.set() called with invalid data. Nested arrays are not supported.
When trying to save the following object:
{
  "desc" : "Blala",
  "geojson" : {
    "features" : [ {
      "geometry" : {
        "coordinates" : [ 8.177433013916017, 48.27753810094064 ],
        "type" : "Point"
      },
      "type" : "Feature"
    } ],
    "type" : "FeatureCollection"
  },
  "location" : {
    "lat" : 48.27753810094064,
    "lng" : 8.177433013916017
  },
  "name" : "Wald und Wiesen",
  "owner" : "8a2QQeTG2zRawZJA3tr1oyOAOSF3",
  "prices" : {
    "game" : {
      "Damwild" : 10,
      "Raubwild" : 300,
      "Rehwild" : 250,
      "Schwarzwild" : 40
    },
    "perDay" : 35
  },
  "rules" : "Keine Regeln!",
  "wild" : {
    "desc" : "kein Wild",
    "tags" : [ "Damwild", "Rehwild", "Schwarzwild", "Raubwild" ]
  }
}
What exactly is the nested array that Firestore is complaining about? I can't find it in the documentation.
If it's the GeoJSON object - how would I save it instead?
UPDATE: This was fixed in Firebase JS SDK 4.6.0. Directly nested arrays are still unsupported, but you can now have an array that contains an object that contains an array, etc.
This is a bug in the currently released SDKs.
The backend has the restriction that only directly nested Arrays are unsupported.
In your case you have arrays containing objects containing arrays and the validation logic in the clients is disallowing it when it shouldn't.
There's no public bug tracking this but I'll post back when we have a fix.
You could adapt a serialization function that converts arrays with object types into a map. The keys can be numeric to maintain order.
i.e.
{ 1: Object, 2: Object2 ... }
On deserialization you can use Object.values(data) to put it back into an array to be used client-side.
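The same round trip sketched in Java (the helper names are made up for illustration and are not part of any Firestore API; the structure mirrors the JavaScript approach above):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Store a list as a map keyed by the element index, and rebuild it on read.
static Map<String, Object> toIndexedMap(List<?> items) {
    Map<String, Object> map = new LinkedHashMap<>();
    for (int i = 0; i < items.size(); i++) {
        map.put(String.valueOf(i), items.get(i)); // numeric keys preserve the order
    }
    return map;
}

static List<Object> fromIndexedMap(Map<String, Object> map) {
    List<String> keys = new ArrayList<>(map.keySet());
    keys.sort(Comparator.comparingInt(Integer::parseInt)); // restore the original order
    List<Object> result = new ArrayList<>();
    for (String key : keys) {
        result.add(map.get(key));
    }
    return result;
}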
Can't comment so here it goes: this is fixed in 4.6.0, see release notes: https://firebase.google.com/support/release-notes/js#4.6.0
Cloud Firestore
FIXED Fixed the validation of nested arrays to allow indirect nesting.
Python:
# Matrix storage in firestore
def matrix_to_fb_data(matrix):
    return [{'0': row} for row in matrix]

def fb_data_to_matrix(fb_data):
    return [row['0'] for row in fb_data]
Firestore doesn't allow 2d arrays, like previous answers have noted, but they allow arrays of maps... of arrays :)

Cannot append values to an entity

My question is : how to append a value given by a user to an entity. The user provided value is dynamic.
The Watson response overwrites the toppings variable with the value given by the user, as you can see in the attached image.
{
  "output": {
    "text": "I got an order to add one or more toppings. Adding <?context.toppings.append('toppings')?>. Toppings to provide: <?entities['toppings']?.toString()?>"
  },
  "context": {
    "toppings": "<? entities['toppings']?.toString()?>"
  }
}
You can append to an array with the .append() function.
In your example, the expression "toppings": "<? entities['toppings']?.toString()?>" will overwrite the toppings variable with the actually recognized @toppings entities every time this node is processed. First, the $toppings variable needs to be defined as an array, e.g.:
"context" : {
"toppings" : []
}
Then in context part of a dialog node you can write:
"context" : {
"toppings" : "<?$toppings.append(entities['toppings'].toJsonArray())?>"
}
More info in our doc: Watson Conversation Doc
EDIT: Thinking about this, it is probably not a good idea to have the same name for the entity and for the variable you store it in. :-)

Single integer as key in firebase (Firebase array behavior)

If I insert data into node/a/0 in Firebase,
the result treats a as an array: a[0].
In the same way, if I set my data at node/a/1, the first array element becomes null:
"a" : [ null, {
"-J-22j_Mb59l0Ws0H9xc" : {
"name" : "chok wee ching"
}
} ]
But it is fine if I use node/a/2:
"a" : {
"2" : {
"-J-25xjEXUqcpsC-5LOE" : {
"name" : "chok wee ching"
}
}
}
My guess is that Firebase automatically treats the keys 0 and 1 as an array.
How can I prevent this?
Firebase has no native support for arrays. If you store an array, it really gets stored as an "object" with integers as the key names.
However, to help people that are storing arrays in Firebase, when you call .val() or use the REST api to read data from Firebase, if the data looks like an array, Firebase will render it as an array.
In particular, if all of the keys are integers, and more than half of the keys between 0 and the maximum key in the object have non-empty values, then Firebase will render it as an array. It's this latter part of the heuristic that you are running into. Your second example only sets a value for 2, not 0 and 1, so less than half of the keys have values, and therefore Firebase renders it as an object.
You can't currently change or prevent this behavior (though it's a common source of confusion so we'll likely make some tweaks here in the future). However it's usually very easy to deal with the behavior once you understand it. If you need further help, perhaps you can expand your question to explain what you need to accomplish and how this behavior is preventing it.
I've encountered the same problem, but actually wanted to have a numeric key array in Swift ([Int:AnyObject]). I've written this function to make sure to always have an array (without null values):
func forceArray(from: Any?) -> [Int:AnyObject] {
    var returnArray = [Int:AnyObject]()
    if let array = from as? [String:AnyObject] {
        for (key, value) in array {
            if let key = Int(key) {
                returnArray[key] = value
            }
        }
        return returnArray
    }
    if let array = from as? [AnyObject] {
        for (key, value) in array.enumerated() {
            if !(value is NSNull) {
                returnArray[key] = value
            }
        }
        return returnArray
    }
    return returnArray
}
Result:
["0":1, "1":2] becomes: [0:1, 1:2]
{"0":1, "6":2} becomes: [0:1, 6:2]
["0":1, "1": null, "2":2] becomes: [0:1, 2:2]
Hope this is helpful for someone!
Yes, storing 0 and 1 as the key will have an issue as Firebase will think it is an array.
My simple workaround is using:
String(format:"%03d", intValue)
So that the resulting key will be "000" and "001", and they can be converted back to Int with ease.
TL;DR
Workaround for the REST API: add a meaningless filter such as orderBy="$key"&startAt="0" (which actually filters out all items with a negative key): https://workaround-arrays-bagohack.firebaseio.com/matchesHeuristic.json?orderBy=%22$key%22&startAt=%220%22&print=pretty
Explanation
This is a known and unfortunate (IMHO) behaviour of Firebase, documented deep down in their support knowledge base. Quote from Best Practices: Arrays in Firebase:
Firebase has no native support for arrays. If you store an array, it really gets stored as an "object" with integers as the key names.
// we send this ['hello', 'world']
// Firebase stores this {0:'hello', 1: 'world'}
However, to help people that are storing arrays in Firebase, when you call .val() or use the REST api to read data, if the data looks like an array, Firebase will render it as an array.
In particular, if all of the keys are integers, and more than half of the keys between 0 and the maximum key in the object have non-empty values, then Firebase will render it as an array. This latter part is important to keep in mind.
You can't currently change or prevent this behavior. Hopefully understanding it will make it easier to see what one can and can't do when storing array-like data.
So I've set up a small repro for you. Original data:
{
  "matchesHeuristic": {
    "1": {
      "id": "foo",
      "value": "bar"
    },
    "2": {
      "id": "w",
      "value": "tf"
    }
  },
  "notMatchesHeuristic": {
    "1": {
      "id": "foo",
      "value": "bar"
    },
    "365": {
      "id": "w",
      "value": "tf"
    }
  }
}
As returned by Firebase REST API: https://workaround-arrays-bagohack.firebaseio.com/.json?print=pretty
{
  "matchesHeuristic" : [ null, {
    "id" : "foo",
    "value" : "bar"
  }, {
    "id" : "w",
    "value" : "tf"
  } ],
  "notMatchesHeuristic" : {
    "1" : {
      "id" : "foo",
      "value" : "bar"
    },
    "365" : {
      "id" : "w",
      "value" : "tf"
    }
  }
}
As you can see, the matchesHeuristic object is transformed into an array with a null value at index 0 (because it matches the heuristic defined in the Firebase docs), whereas notMatchesHeuristic is left intact. This is especially "nice" if you have dynamic data like we do, since we don't know until runtime whether it will match the heuristic or not.
Workaround (REST API)
However this portion of the docs doesn't seem to hold:
You can't currently change or prevent this behavior. Hopefully understanding it will make it easier to see what one can and can't do when storing array-like data.
You can actually work around this by requesting items ordered by key, so:
this https://workaround-arrays-bagohack.firebaseio.com/matchesHeuristic.json?print=pretty is broken
this is intact https://workaround-arrays-bagohack.firebaseio.com/matchesHeuristic.json?orderBy=%22$key%22&startAt=%220%22&print=pretty (add a meaningless filter such as orderBy="$key"&startAt="0", which actually filters out all items with a negative key):
{"1":{"id":"foo","value":"bar"},"2":{"id":"w","value":"tf"}}
NB: interestingly, it seems the Firebase support folks don't know about this workaround (at least they didn't suggest it when we asked them about this behavior).
