Provider to ignore some date and identifier fields - Pact

New to Pact, but excited to implement it at my current microservices-based company.
I generated some reports that compare JSON responses, all done in Ruby.
But I am getting stumped by a few keys that are date-based and identifier-based.
Key:
- is expected
+ is actual
Matching keys and values are not shown
{
  "metadata": {
-   "received_at": "2017-10-23T11:50:12Z"
+   "received_at": "2017-10-25T01:26:00Z"
  },
  "response": {
    "cascading_avm_results": {
      "automated_valuation": {
-       "run_date": "2017-10-23",
-       "internal_run_identifier": "1508784611820479",
-       "valuation_date": "2017-10-23"
+       "run_date": "2017-10-25",
+       "internal_run_identifier": "1508963160085440",
+       "valuation_date": "2017-10-25"
      }
    }
  }
}
Is there a way to make the provider verification ignore those keys, or to modify the output so that they match?
My understanding, and please correct me if I am wrong, is that we shouldn't stub the provider response in contract testing, right?

What you want to do is match the shape of the response, not specific values. For that you'll need to use flexible matchers: https://github.com/realestate-com-au/pact/wiki/Regular-expressions-and-type-matching-with-Pact.
You can match based on regular expressions, value types, etc., including within arrays.
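For example, with the Ruby Pact DSL you can declare the expected body using Pact.term (regex match) and Pact.like (type match) instead of literal values. A minimal sketch, using the field names from the diff above (the surrounding consumer test setup is omitted):

expected_body = {
  metadata: {
    # Accept any ISO 8601 timestamp; the "generate" value is what the mock service returns.
    received_at: Pact.term(
      generate: "2017-10-23T11:50:12Z",
      matcher: /\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z/
    )
  },
  response: {
    cascading_avm_results: {
      automated_valuation: {
        run_date: Pact.term(generate: "2017-10-23", matcher: /\d{4}-\d{2}-\d{2}/),
        # Only assert the type here, since the identifier changes on every run.
        internal_run_identifier: Pact.like("1508784611820479"),
        valuation_date: Pact.term(generate: "2017-10-23", matcher: /\d{4}-\d{2}-\d{2}/)
      }
    }
  }
}

Pass expected_body as the body of will_respond_with in the consumer test; provider verification will then only check the format/type of those fields rather than their exact values.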


DynamoDB/Amplify non-negative field and field validation on mutations

I am new to AWS in general. I am building a relatively simple application with Amplify, but I've used Google Firebase before. My question is: is there a way to set a constraint for a field to be non-negative? I have an application that does transactions, and I don't want my balance to go negative; I just need a simple error/exception. Is it possible to set a field constraint in DynamoDB that says "this field should be >= 0"?
I also checked whether it was possible to do it in the Amplify-generated VTL resolver of my GraphQL mutation, and indeed it is possible to set some constraints. But somehow it allows the operation and fails on the next one (when the balance in the DB is already < 0), as if it checks the constraint before the update. I tried expressing something like "current_balance - transaction >= 0" but I couldn't get it to work.
So it seems that the only way is to create a custom Lambda resolver that does the various checks before submitting the mutation to DynamoDB. I haven't tried it yet, but I don't understand how I can check the current balance (stored in the DB) without doing a query.
More generally, is it even possible to validate fields (even with simple assertions like non-negative) on Amplify/DynamoDB? Would moving to another DB like Aurora help?
Thanks for your help.
DynamoDB supports conditional updates, which allow an update to be applied only when the given condition is met. You can set the condition current_balance >= cost for your update.
However, the negative balance is not the main problem. What you should address is how to prevent other requests from updating the same current_balance at the same time; in short, race conditions on current_balance. To deal with that, you also need a conditional update whose condition is "current_balance = initial_balance". The initial_balance is, I guess, what you get from DynamoDB at the very beginning of the purchase process.
Sample VTL code
#set( $remaining_balance = $initial_balance - $transaction_cost )
#if( $remaining_balance < 0 )
  $util.error("Insufficient balance")
#end
{
  "version" : "2018-05-29",
  "operation" : "UpdateItem",
  "key" : { <your-dynamodb-key> },
  "update" : {
    "expression" : "SET current_balance = :remaining_balance",
    "expressionValues" : {
      ":remaining_balance" : $util.dynamodb.toNumberJson($remaining_balance)
    }
  },
  "condition" : {
    "expression" : "current_balance = :initial_balance",
    "expressionValues" : {
      ":initial_balance" : $util.dynamodb.toNumberJson($initial_balance)
    }
  }
}
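If another request changes current_balance between your initial read and this update, the condition check fails and DynamoDB rejects the write (a ConditionalCheckFailedException), so the mutation errors out instead of silently overdrawing the balance; the client can then re-read the balance and retry.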

Google Cloud Vision Raw JSON Response

When trying out Google Cloud Vision with the drag-and-drop demo (Try Drag and Drop), the last tab shows the raw JSON. What parameter do we need to pass to get that data?
I'm currently doing DOCUMENT_TEXT_DETECTION but it only gives data at the level of words and not of individual characters.
Edit: I modified the code from this vision test example and changed the feature to ...
feature := &vision.Feature{
    Type: "DOCUMENT_TEXT_DETECTION",
}
and the printing to ...
body, err := json.Marshal(res)
fmt.Println(string(body))
I'm only seeing textAnnotations in the output.
The JSON file contains different things, like text, locations, etc. Your concern is about getting the full text.
Here I am adding some Python code: it reads the JSON response file, and you will find your required result in data['fullTextAnnotation']['text']. You can get individual characters by breaking the file down into smaller pieces; I guess the JSON has individual characters in it, but I have never worked with them.
import json
from pprint import pprint
data = json.load(open('File Path'))
pprint(data['fullTextAnnotation']['text'])
Well, if you check properly, there are various things available in that last tab containing the raw JSON.
Based on your requirements you can fetch any of them.
From the response that you get from DOCUMENT_TEXT_DETECTION, you can fetch text_annotations, full_text_annotations, etc.
From text_annotations, you can fetch the description, the language of the entire text, each word of the text, numeric digits, special characters, and their respective coordinates.
From full_text_annotations, you can fetch pages, blocks of data, paragraphs, and individual characters, with their respective coordinates and confidence scores.
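For example, building on the Python snippet above, the individual characters live under fullTextAnnotation in the hierarchy pages -> blocks -> paragraphs -> words -> symbols. A rough sketch (the file path is a placeholder, as before):

import json

data = json.load(open('File Path'))

# fullTextAnnotation is nested as pages -> blocks -> paragraphs -> words -> symbols,
# so each symbol is an individual character with its own bounding box.
for page in data['fullTextAnnotation']['pages']:
    for block in page['blocks']:
        for paragraph in block['paragraphs']:
            for word in paragraph['words']:
                for symbol in word['symbols']:
                    print(symbol['text'], symbol.get('boundingBox'))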
Using the same code template you are using, in Go: search for "type Feature struct" in this page. You can see the following feature types and descriptions:
// Type: The feature type.
//
// Possible values:
//   "TYPE_UNSPECIFIED" - Unspecified feature type.
//   "FACE_DETECTION" - Run face detection.
//   "LANDMARK_DETECTION" - Run landmark detection.
//   "LOGO_DETECTION" - Run logo detection.
//   "LABEL_DETECTION" - Run label detection.
//   "TEXT_DETECTION" - Run text detection / optical character recognition
//     (OCR). Text detection is optimized for areas of text within a larger
//     image; if the image is a document, use `DOCUMENT_TEXT_DETECTION` instead.
//   "DOCUMENT_TEXT_DETECTION" - Run dense text document OCR. Takes precedence
//     when both `DOCUMENT_TEXT_DETECTION` and `TEXT_DETECTION` are present.
//   "SAFE_SEARCH_DETECTION" - Run Safe Search to detect potentially unsafe
//     or undesirable content.
//   "IMAGE_PROPERTIES" - Compute a set of image properties, such as the
//     image's dominant colors.
//   "CROP_HINTS" - Run crop hints.
//   "WEB_DETECTION" - Run web detection.
There is no parameter that directly returns the JSON tab contents. The JSON tab is the combination of the output of all the other tabs; users tend to ask for just one of them. For example, someone analyzing faces is usually not interested in text detection.
If you need more than one, you can obtain multiple feature outputs by combining the results of all the feature types you want. Based on that, I have added the following lines to your code:
feature2 := &vision.Feature{
    Type:       "LABEL_DETECTION",
    MaxResults: 10,
}
req2 := &vision.AnnotateImageRequest{
    Image:    img,
    Features: []*vision.Feature{feature2},
}
batch2 := &vision.BatchAnnotateImagesRequest{
    Requests: []*vision.AnnotateImageRequest{req2},
}
res2, err := svc.Images.Annotate(batch2).Do()
if err != nil {
    log.Fatal(err)
}
body2, err := json.Marshal(res2)
fmt.Println(string(body2))
I have tested it and it works. You should add a block like this for every feature you are interested in. If you intend to add many of them, I would suggest creating a function or loop to avoid repeating code.
Anyway, I suggest you fill in the request here, so that you obtain exactly the JSON output (which gives data at the level of words and letters) by calling the API directly instead of using a client library. I used the following request body to obtain the bounding boxes for the numbers I was interested in:
{
  "requests": [
    {
      "features": [
        {
          "type": "",
          "maxResults": 10
        },
        {
          "type": ""
        }
      ],
      "image": {
        "source": {
          "gcsImageUri": ""
        }
      }
    }
  ]
}

Firebase indexes on dynamically created keys

I am trying to use a Firebase index on the Realtime Database. The problem I have is that the data key is created programmatically and has an incremental value at the end of the key.
For example:
match_01, match_02, match_03, etc.
The structure is:
- Matches
  - Match_01
    - played
    - hometeam
    - awayteam
    - etc.
I've looked at how to set up Firebase rules that define the index, but this seems to only be applicable to known data, not data that has a dynamically generated key. The code below, for example, won't work, as the parent of played is Match_01, Match_02, etc.
{
"rules": {
"Matches": {
".indexOn": ".played"
}
}
}
Does anyone know how to set an index against this type of data structure?

AddToSet operation requires a target array field

Trying to make use of Azure DocumentDB/CosmosDB using the MongoDB driver. I have learned that there are many limitations, as the full set of features is not currently implemented. I want to use aggregate functions, specifically $group, and .distinct, but I don't think those are available yet. As a workaround, I am trying to maintain a separate "tracking" document to enable "distinct". I am trying to update a document using $addToSet, but I'm getting the following:
MongoError: Message: {"Errors":["Encountered exception while executing function. Exception = Error: AddToSet operation requires a target array field.\r\nStack trace: Error: AddToSet operation requires a target array field.\n at arrayAddToSet (__.sys.commonUpdate.js:2907:25)\n at handleUpdate (__.sys.commonUpdate.js:2649:29)\n at processOneResult (__.sys.commonUpdate.js:2484:25)\n at queryCallback (__.sys.commonUpdate.js:2461:21)\n at Anonymous function (__.sys.commonUpdate.js:619:29)"]}
The update command I am using:
var usersDocument = collection.updateOne(
{ "type": "users" },
{ $addToSet: {users: "someone@gmail.com"} },
function(err, count, status) {
console.log("updateOne err: " + err)
console.log("updateOne count: " + count)
console.log("updateOne status: " + status)
}
)
This seems to me to be a pretty straightforward command, pulled from the Mongo documentation with the fields adjusted as needed. Maybe I am missing something really basic?
My ultimate goal was to keep my code portable so that I could move it into a Mongo cluster if I so desired (and not be locked into anything Azure-specific). To get started without having to manage a multi-server cluster, Azure CosmosDB looked like a great jumpstart, but the limitations are maddening.
UPDATE:
Now that I have fixed my document and I actually have a field with an array, $addToSet is just replacing the value, rather than adding to the array. I'll create a new question for that.
Yup, something basic. The error message was actually correct. After inspecting the existing document, I found:
{ "users": "[]" }
And changed it to:
{ "users": [] }
Now it is working.

Referring to a user by several possible identities

My application keeps multiple profile attributes for its users, such as:
An internal userId
Their phone number
Their email
etc. Each attribute is unique to a user; they can all be used as identity information.
I am designing an API with operations that refer to a specific user, e.g. charge.
I want to allow clients to identify users by any of the available profile attributes. In my specific domain, it is not possible to simply require clients to use the internal userId, even though they could retrieve it with a separate call (e.g. getUserIdFromProfileAttribute).
Take the charge operation: it is a POST request with a JSON document in the body. What would be the best way to identify the user? I am thinking of one of the following:
Top-level key/value pairs for both the id and the id type:
{
"userId": <id>,
"userIdType": <idType>
}
Nested key/value pairs inside a user key:
{
"user": {
"id": <id>,
"type": <idType>
}
}
Single key/value pair, using a URI format with (possibly) custom protocols:
{
"user": <uri> # eg id:1234, tel:+19283912000, email:user#mail.com
}
Single key/value pair, using different keys for each id (one key per call):
{
"userId": <id> *OR*
"userMsisdn": <msisdn> *OR*
"userEmail": <email>
}
Same as above, but nested inside a user key:
{
"user": {
"id": <id> *OR*
"msisdn": <msisdn>
}
}
Any suggestions about best practices? Can anyone point me to some standard / widely used APIs with a similar need?
I should repeat that using just the internal userId in all calls is not possible, and using a separate call for each id type (e.g. chargeById, chargeByEmail) is not practical, as there are many such calls.
