Karate: Using data-driven embedded template approach for API testing - automated-tests

I want to write data-driven tests that pass dynamic values read from an external CSV file.
I am able to pass dynamic values from the CSV for simple strings (account number and affiliate id below). But using embedded expressions, how can I pass dynamic values from the CSV file for the "DealerReportFormats" JSON array below?
Any help is highly appreciated!
Scenario Outline: Dealer dynamic requests
Given path '/dealer-reports/retrieval'
And request read('../DealerTemplate.json')
When method POST
Then status 200
Examples:
| read('../DealerData.csv') |
DealerTemplate.json is below:
{
"DealerId": "FIXED",
"DealerName": "FIXED",
"DealerType": "FIXED",
"DealerCredentials": {
"accountNumber": "#(DealerCredentials_AccountNumber)",
"affiliateId": "#(DealerCredentials_AffiliateId)"
},
"DealerReportFormats": [
{
"name": "SalesReport",
"format": "xml"
},
{
"name": "CustomerReport",
"format": "txt"
}
]
}
DealerData.csv:
DealerCredentials_AccountNumber,DealerCredentials_AffiliateId
testaccount1,123
testaccount2,12345
testaccount3,123456

CSV is only for "flat" structures, so trying to mix that with JSON is too ambitious in my honest opinion. Please look for another framework if needed :)
That said I see 2 options:
a) use proper quoting and escaping in the CSV
b) refer to JSON files
Here is an example:
Scenario Outline:
* json foo = foo
* print foo
Examples:
| read('test.csv') |
And test.csv is:
foo,bar
"{ a: 'a1', b: 'b1' }",test1
"{ a: 'a2', b: 'b2' }",test2
I leave it as an exercise for you to escape double-quotes if you want. It is possible.
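For example, in standard CSV a double-quote inside a quoted field is escaped by doubling it, so (assuming Karate's CSV reader follows the usual convention) the file above could carry real JSON quotes like this:
foo,bar
"{ ""a"": ""a1"", ""b"": ""b1"" }",test1
"{ ""a"": ""a2"", ""b"": ""b2"" }",test2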
Option (b) is to refer to stand-alone JSON files and read them:
foo,bar
j1.json,test1
j2.json,test2
And you can do * def foo = read(foo) in your feature.
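Applied to the original question, option (b) could look something like the sketch below (untested; the DealerReportFormats_File column name and the file names are assumptions). The idea is to add a column to DealerData.csv that points at a JSON file per row, read it into a variable, and embed that variable in the template with "DealerReportFormats": "#(formats)":
Scenario Outline: Dealer dynamic requests
* def formats = read(DealerReportFormats_File)
Given path '/dealer-reports/retrieval'
And request read('../DealerTemplate.json')
When method POST
Then status 200

Examples:
| read('../DealerData.csv') |
DealerData.csv would then gain a DealerReportFormats_File column with values such as formats1.json.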

Related

Multi line Json read using USQL Custom extractor

I could not find a proper solution to make use of the Newtonsoft JsonExtractor to parse an input file with newline-delimited JSON documents.
With the Newtonsoft JsonExtractor I can read the first line successfully when exploded with "$.d.results[*]", but it does not move on to the next line.
As the data is >4 MB for each row, I can't extract it as text, so parsing needs to be performed using a custom extractor.
Sample Input:
{"d":{"results":[{"data":{"Field_1":"1","Field_2":"2"},"Field_3":"3","Field_4":"4"}]}}
{"d":{"results":[{"data":{"Field_1":"11","Field_2":"21"},"Field_3":"31","Field_4":"41"}]}}
{"d":{"results":[{"data":{"Field_1":"12","Field_2":"22"},"Field_3":"32","Field_4":"42"}]}}
Expected Output:
Field_1|Field_2|Field_3|Field_4
1 |2 |3 |4
11 |21 |31 |41
12 |22 |32 |42
U-SQL code:
CREATE ASSEMBLY IF NOT EXISTS [Microsoft.Analytics.Samples.Formats] FROM @"Microsoft.Analytics.Samples.Formats.dll";
CREATE ASSEMBLY IF NOT EXISTS [Newtonsoft.Json] FROM @"Newtonsoft.Json.dll";
REFERENCE ASSEMBLY [Microsoft.Analytics.Samples.Formats];
REFERENCE ASSEMBLY [Newtonsoft.Json];
USING Microsoft.Analytics.Samples.Formats.Json;
DECLARE @DATA_SOURCE string = "Input.data";
@SOURCE =
    EXTRACT Field_1 string,
            Field_2 string,
            Field_3 string,
            Field_4 string
    FROM @DATA_SOURCE
    USING new JsonExtractor("$.d.results[*]");
Considering your source input as below, i.e. more than one JSON document in a file.
{
"d": {
"results": [
{
"data": {
"Field_1": "1",
"Field_2": "2"
},
"Field_3": "3",
"Field_4": "4"
}
]
}
}
{
"d": {
"results": [
{
"data": {
"Field_1": "11",
"Field_2": "21"
},
"Field_3": "31",
"Field_4": "41"
}
]
}
}
{
"d": {
"results": [
{
"data": {
"Field_1": "12",
"Field_2": "22"
},
"Field_3": "32",
"Field_4": "42"
}
]
}
}
The JSON assembly includes a MultiLevelJsonExtractor, which allows us to extract data from multiple JSON paths at differing levels within a single pass. See the underlying code and inline documentation over at GitHub.
Use it by supplying multiple levels of JSON paths. They will be assigned to the schema by index.
The code snippet below shows the MultiLevelJsonExtractor in action.
The first parameter (rowpath) specifies the base path to start from.
The second parameter (bypassWarning) expects a boolean value: true = if a path isn't found, don't error and return null; false = if a path isn't found, error.
The third parameter (jsonPaths) is a list of JSON paths starting at the base path; otherwise the extractor will recurse to the top of the tree to locate it.
In your case:
@json =
    EXTRACT Field_1 string,
            Field_2 string,
            Field_3 string,
            Field_4 string
    FROM @DATA_SOURCE
    USING new MultiLevelJsonExtractor("d.results[*]",
        true,
        "data.Field_1",
        "data.Field_2",
        "Field_3",
        "Field_4");
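To get the expected pipe-delimited output, you could then write the rowset out with the built-in text outputter, along these lines (a sketch; the output path is an assumption):
OUTPUT @json
TO "/output/Result.txt"
USING Outputters.Text(delimiter : '|', outputHeader : true);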

JSON Path not working properly with athena

I have a lambda function that converts my logs to this format:
{
"events": [
{
"field1": "value",
"field2": "value",
"field3": "value"
}, (...)
]
}
When I query it on S3, I get it in this format:
[
{
"events": [
{ (...) }
]
}
]
And I'm trying to run a custom classifier for it, because the data I want is inside the objects held by 'events' and not 'events' itself.
So I started with the simplest path I could think of, which worked in my tests (https://jsonpath.curiousconcept.com/):
$.events[*]
And, sure, it worked in the tests, but when I ran a crawler against the file, the table created includes only an events field with a struct inside it.
So I tried a bunch of other paths:
$[*].events
$[*].['events']
$[*].['events'].[*]
$.[*].events[*]
$.events[*].[*]
Some of these do not even make sense, and absolutely every one of them got me a schema with an events field marked as an array.
Can anyone point me in a better direction to handle this issue?

Writing specs for grammar in atom editor

I have written my own grammar in Atom. I would like to write some specs for it, but I am unable to understand how exactly to write one. I read the Jasmine documentation, but it is still not very clear. Can someone please explain how to write specs for testing a grammar in Atom? Thanks.
Grammars are available under atom.grammars.grammarForScopeName("source.yourlanguage").
The grammar object it returns has methods you can feed code snippets to (e.g. tokenizeLine, tokenizeLines).
These methods return arrays of tokens.
Testing is just verifying that these methods return what you expect.
E.g. (CoffeeScript alert):
grammar = atom.grammars.grammarForScopeName("source.yourlanguage")
{tokens} = grammar.tokenizeLine("# this is a comment line of some sort")
expect(tokens[0].value).toEqual "#"
expect(tokens[0].scopes).toEqual [
"source.yourlanguage",
"comment.line.number-sign.yourlanguage",
"punctuation.definition.comment.yourlanguage"
]
Happy testing!
Example specs
spec for MscGen (a simple language)
spec for Haskell (more complex)
The array returned by the grammar.tokenizeLine call above looks like this:
[
{
"value": "#",
"scopes": [
"source.yourlanguage",
"comment.line.number-sign.yourlanguage",
"punctuation.definition.comment.yourlanguage"
]
},
{
"value": " this is a comment line of some sort",
"scopes": [
"source.yourlanguage",
"comment.line.number-sign.yourlanguage"
]
},
{
"value": "",
"scopes": [
"source.yourlanguage",
"comment.line.number-sign.yourlanguage"
]
}
]
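For reference, a complete Jasmine spec file might look roughly like the sketch below (it assumes your package is named language-yourlanguage; adjust the names to your grammar):
describe "YourLanguage grammar", ->
  grammar = null

  beforeEach ->
    # activate the package so the grammar gets registered
    waitsForPromise ->
      atom.packages.activatePackage("language-yourlanguage")

    runs ->
      grammar = atom.grammars.grammarForScopeName("source.yourlanguage")

  it "tokenizes comment lines", ->
    {tokens} = grammar.tokenizeLine("# this is a comment line of some sort")
    expect(tokens[0].value).toEqual "#"
    expect(tokens[0].scopes).toEqual [
      "source.yourlanguage"
      "comment.line.number-sign.yourlanguage"
      "punctuation.definition.comment.yourlanguage"
    ]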
(I kept seeing this question pop up in the search results when I was looking for an answer to the same question, so I might as well document it here.)

"Reverse formatting" Riak search results

Let's say I have an object in the test bucket in my Riak installation with the following structure:
{
"animals": {
"dog": "woof",
"cat": "miaow",
"cow": "moo"
}
}
When performing a search request for this object, the structure of the search results is as follows:
{
"responseHeader": {
"status": 0,
"QTime": 3,
"params": {
"q": "animals_cow:moo",
"q.op": "or",
"filter":"",
"wt": "json"
}
},
"response": {
"numFound": 1,
"start": 0,
"maxScore": "0.353553",
"docs": [
{
"id": "test",
"index": "test",
"fields": {
"animals_cat": "miaow",
"animals_cow": "moo",
"animals_dog": "woof"
},
"props": {}
}
]
}
}
As you can see, the way the object is stored, the cat, cow and dog keys are nested within animals. However, when the search results come back, none of the keys are nested; they are simply joined with _.
My question is this: Is there any way provided by Riak to "reverse format" the search, and return the fields of the object in the correct (nested) format? This becomes a problem when storing and returning user data that might contain _.
I do see that the latest version of Riak (beta release) provides a search schema, but I can't tell whether it answers my question.
What you receive back in the search result is what the object looked like after passing through the JSON analyzer. If you need the data formatted differently, you can use a custom analyzer. However, this will only affect newly put data.
For existing data, you can use the id field and issue a GET request for the original object, or use the Solr query as input to a MapReduce job.
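For the first approach, the original object can be fetched by key over Riak's HTTP API, e.g. (assuming a default local install and the test bucket and key from the question):
GET http://localhost:8098/buckets/test/keys/test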

How should a Rebol-structured data file (which contains no code) be written and read?

If you build up a block structure, convert it to a string with MOLD, and write it to a file like this:
>> write %datafile.dat mold [
[{Release} 12-Dec-2012]
[{Conference} [12-Jul-2013 .. 14-Jul-2013]]
]
You can LOAD it back in later. But what about headers? If a file contains code, it is supposed to start with a header like:
rebol [
title: "Local Area Defringer"
date: 1-Jun-1957
file: %defringe.r
purpose: {
Stabilize the wide area ignition transcriber
using a double ganged defringing algorithm.
}
]
If you are just writing out data and reading it back in, are you expected to have a rebol [] header, and extend it with any properties you want to add? Should you come up with your own myformat [] header concept with your own properties?
Also, given that LOAD does binding, does it make sense to use it for data or is there a different operation?
Rebol data doesn't have to have a header, but it is best practice to include one (even if it's just data).
Some notes:
SAVE is your best bet for serializing to file! or port! and has a mechanism for including a header.
MOLD and SAVE both have an /ALL refinement that corresponds to LOAD (without /ALL, some data from MOLD and SAVE cannot be reliably recovered, including Object, Logic and None values).
LOAD discards the header, though you can load it using the /HEADER refinement.
Putting this together:
save/all/header %datafile.dat reduce [next "some" 'data][
title: "Some Data"
]
header: take data: load/header %datafile.dat
To use a header other than Rebol [], you'd need to devise a separate loader/saver.
For the case of reading, construct works very well alongside load to prevent evaluation (of code as opposed to data):
prefs: construct/with load %options.reb default-prefs
It is similar to context:
obj: [
name: "Fred"
age: 27
city: "Ukiah"
]
obj-context: context obj
obj-construct: construct obj
In this case, the same:
>> obj-context = obj-construct
== true
But it is different when it comes to evaluating code:
obj-eval: [
name: uppercase "Fred"
age: 20 + 7
time: now/time
]
obj-eval-context: context obj-eval
obj-eval-construct: construct obj-eval
This time they are not the same:
>> obj-eval-context = obj-eval-construct
== false
>> ?? obj-eval-construct
obj-eval-construct: make object! [
name: 'uppercase
age: 20
time: now/time
]
Aside: this is the point at which I realized the following code wasn't behaving as I expected:
obj-eval: [
title: uppercase "Fred"
age: 20 + 7
city: "Ukiah"
time: now/time
]
gives, in Red (and by extension, Rebol2):
>> obj-eval-construct: construct obj-eval
== make object! [
title: 'uppercase
age: 20
city: "Ukiah"
time: now/time
]
lit-word! and lit-path! values are treated differently.
construct also has a useful refinement, /with, which can be used for defaults, similar to make (see the sketch below).
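A small illustration of /with used for defaults (a sketch; the field names are made up):
default-prefs: make object! [
    font-size: 12
    theme: "dark"
]
prefs: construct/with [font-size: 14] default-prefs
; prefs/font-size is 14 (taken from the spec block)
; prefs/theme remains "dark" (inherited from the prototype)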

Resources