Indexing unnamed QJsonDocument in Qt < 5.10

Given unnamed JSON document:
[
    {
    },
    {
    }
]
Qt 5.10+ has operator[] for QJsonDocument, so we can address any of them by index:
json_doc[1];
How does one do the same in older versions of Qt?

In your particular example the JSON document is represented by a JSON array. You can get at it like this:
if (document.isArray()) {
    auto a = document.array();
    // TODO: check the array size before indexing
    auto v = a[0];
}
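Putting it together, a minimal, self-contained sketch for Qt 5.9 and earlier, using the document from the question (the bounds check and the qDebug output are just for illustration):
#include <QJsonDocument>
#include <QJsonArray>
#include <QJsonValue>
#include <QByteArray>
#include <QDebug>

int main() {
    // Parse the unnamed JSON array from the question and index into it on Qt < 5.10.
    const QByteArray raw = "[{}, {}]";
    const QJsonDocument doc = QJsonDocument::fromJson(raw);
    if (doc.isArray()) {
        const QJsonArray arr = doc.array();
        if (arr.size() > 1) {
            const QJsonValue v = arr.at(1); // equivalent of json_doc[1] on Qt 5.10+
            qDebug() << v;
        }
    }
    return 0;
}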

Related

Array of dictionaries to list view SwiftUI

I have an array of dictionaries I would like to populate in a list view with SwiftUI.
I used for loops in the past but since that's not possible within a View I'm stuck as to what to do. I'm able to achieve partial results with the code below.
struct Test: View {
    let dict = csvArray[0]

    var body: some View {
        let keys = dict.map { $0.key }
        let values = dict.map { $0.value }
        return List {
            ForEach(keys.indices) { index in
                HStack {
                    Text(keys[index])
                    Text("\(values[index])")
                }
            }
        }
    }
}
I'm looking to index through the entire Array of dictionaries and append them to the list, not just csvArray[0].
This is like sections of key-value pairs, right?
So do it like this:
This is an example of csvArray. It could be anything, but you should update the rest of the code according to the original data type:
let csvArray = [
    ["section0-key0": "section0-value0",
     "section0-key1": "section0-value1"],
    ["section1-key0": "section1-value0",
     "section1-key1": "section1-value1"],
    ["section2-key0": "section2-value0",
     "section2-key1": "section2-value1"]
]
This is your code for a single dictionary, but it takes that dictionary as input instead of a hardcoded one:
struct SectionView: View {
    @State var dict = [String: String]()

    var body: some View {
        let keys = dict.map { $0.key }
        let values = dict.map { $0.value }
        return ForEach(keys.indices) { index in
            HStack {
                Text(keys[index])
                Text("\(values[index])")
            }
        }
    }
}
And this is the list builder connected to the original array. I used sections to reflect the structure of the data.
struct ContentView: View {
    var body: some View {
        List {
            ForEach(csvArray, id: \.self) { dict in
                Section {
                    SectionView(dict: dict)
                }
            }
        }
    }
}
Note that you can't rely on the order of the key-value pairs in a dictionary, so I suggest you do some sorting before populating the list, or use another data structure like a class or struct instead of a plain dictionary.
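For example, a minimal sketch of SectionView with the keys sorted so the rows have a stable order (here the dictionary is passed as a plain let rather than @State, since the view only reads it; adapt the sort to your real data):
import SwiftUI

struct SectionView: View {
    // The dictionary for one section, passed in by ContentView.
    let dict: [String: String]

    var body: some View {
        // Sorting the keys gives the rows a stable, predictable order.
        let sortedKeys = dict.keys.sorted()
        return ForEach(sortedKeys, id: \.self) { key in
            HStack {
                Text(key)
                Text(dict[key] ?? "")
            }
        }
    }
}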

Pacts: Matching rule for non-empty map (or a field which is not null) needed

I need help with writing my consumer Pacts using pact-jvm (https://github.com/DiUS/pact-jvm).
My problem is I have a field which is a list (an array) of maps. Each map can have elements of different types (strings or sub-maps), e.g.:
"validatedAnswers": [
{
"type": "typeA",
"answers": {
"favourite_colour": "Blue",
"correspondence_address": {
"line_1": "Main St",
"postcode": "1A 2BC",
"town": "London"
}
}
},
{
"type": "typeB",
"answers": {
"first_name": "Firstname",
"last_name": "Lastname",
}
}
]
but we're only interested in some of those answers.
NOTE: The above is only an example showing the structure of validatedAnswers. Each answers map has dozens of elements.
What we really need is this: https://github.com/pact-foundation/pact-specification/issues/38, but it's planned for v.4. In the meantime we're trying a different approach. What I'm attempting to do now is to specify that each element of the list is a non-empty map. Another approach is to specify that each element of the list is not null. Can any of this be done using Groovy DSL?
This:
new PactBuilder().serviceConsumer('A').hasPactWith('B')
    .port(findAvailablePort()).uponReceiving(...)
    .willRespondWith(status: 200, headers: ['Content-Type': 'application/json'])
    .withBody {
        validatedAnswers minLike(1) {
            type string()
            answers {
            }
        }
    }
doesn't work because it means answers is expected to be empty ("Expected an empty Map but received Map( [...] )"; see also https://github.com/DiUS/pact-jvm/issues/298).
So what I would like to do is something like this:
.withBody {
    validatedAnswers minLike(1) {
        type string()
        answers Matchers.map()
    }
}
or:
validatedAnswers minLike(1) {
    type string()
    answers {
        keyLike 'title', notNull()
    }
}
or:
validatedAnswers minLike(1) {
    type string()
    answers notNull()
}
Can it be done?
I would create two separate tests for this, one test for each of the different response shapes, and have a provider state for each, e.g. "given there are type B answers".
This way, when you verify on the provider side, it will only send those two field types.
The union of the two examples gives a contract that allows both.
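A rough sketch of what those two interactions could look like with the Groovy PactBuilder, using the closure style from the pact-jvm docs (the provider-state names, request attributes and field choices are assumptions for illustration, not part of the original contract):
// Sketch only: two interactions, one per answer shape, each behind its own provider state.
def service = new PactBuilder()
service {
    serviceConsumer 'A'
    hasPactWith 'B'
    port findAvailablePort()

    given('there are type A answers')
    uponReceiving('a request that returns type A answers')
    withAttributes(method: 'get', path: '/validated-answers')
    willRespondWith(status: 200, headers: ['Content-Type': 'application/json'])
    withBody {
        validatedAnswers minLike(1) {
            type 'typeA'
            answers {
                favourite_colour string()
            }
        }
    }

    given('there are type B answers')
    uponReceiving('a request that returns type B answers')
    withAttributes(method: 'get', path: '/validated-answers')
    willRespondWith(status: 200, headers: ['Content-Type': 'application/json'])
    withBody {
        validatedAnswers minLike(1) {
            type 'typeB'
            answers {
                first_name string()
                last_name string()
            }
        }
    }
}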
You can do it without the DSL; here is a sample Groovy script:
class ValidateAnswers {
    static main(args) {
        /* Array with some samples */
        List<Map> answersList = [
            [
                type: 'typeA',
                answers: [
                    favourite_colour: 'Blue',
                    correspondence_address: [
                        line_1: 'Main St',
                        postcode: '1A 2BC',
                        town: 'London'
                    ]
                ]
            ],
            [
                type: 'typeB',
                answers: [
                    first_name: 'Firstname',
                    last_name: 'Lastname'
                ]
            ],
            [
                type: 'typeC',
                answers: null
            ],
            [
                type: 'typeD'
            ],
            [
                type: 'typeE',
                answers: [:]
            ]
        ]
        /* Iterate through all elements in the list above and print the result of the check */
        for (answer in answersList) {
            println "$answer.type is ${validAnswer(answer) ? 'valid' : 'not valid'}"
        }
    }

    /**
     * Recursively iterates through maps.
     * Returns true only if the value under the 'answers' key is a non-empty Map.
     */
    static Boolean validAnswer(Map map, Boolean result = false) {
        map.each { key, value ->
            if (key == 'answers') {
                result = value instanceof Map && value.size() > 0
            } else if (value instanceof Map) {
                // Recurse into nested maps and keep any positive result.
                result = result || validAnswer(value as Map)
            }
        }
        return result
    }
}
Output is:
typeA is valid
typeB is valid
typeC is not valid
typeD is not valid
typeE is not valid

Arangodb AQL recursive graph traversal

I have a graph with three collections whose items can be connected by edges.
ItemA is a parent of itemB, which in turn is a parent of itemC.
Elements can only be connected by edges in the direction
"_from: child, _to: parent"
Currently I can only get a "linear" result with this AQL query:
LET contains = (FOR v IN 1..? INBOUND 'collectionA/itemA' GRAPH 'myGraph' RETURN v)
RETURN {
    "root": {
        "id": "ItemA",
        "contains": contains
    }
}
And result looks like this:
"root": {
"id": "itemA",
"contains": [
{
"id": "itemB"
},
{
"id": "itemC"
}
]
}
But I need to get a "hierarchical" result of graph traversal like that:
"root": {
"id": "itemA",
"contains": [
{
"id": "itemB",
"contains": [
{
"id": "itemC"
}
}
]
}
So, can I get this "hierarchical" result by running an AQL query?
One more thing: the traversal should run until leaf nodes are encountered, so the depth of the traversal is unknown in advance.
I have found a solution. We decided to use a UDF (user-defined function).
Here are the steps to construct the proper hierarchical structure:
Register the function in ArangoDB.
Run your AQL query, which constructs a flat structure (each vertex plus the corresponding path to it), and pass the result as the input parameter of your UDF.
The function below just appends each element to its parent.
In my case:
1) Register the function in arango db.
db.createFunction(
    'GO::LOCATED_IN::APPENT_CHILD_STRUCTURE',
    String(function (root, flatStructure) {
        if (root && root.id) {
            var elsById = {};
            elsById[root.id] = root;
            flatStructure.forEach(function (element) {
                elsById[element.id] = element;
                var parentElId = element.path[element.path.length - 2];
                var parentEl = elsById[parentElId];
                if (!parentEl.contains)
                    parentEl.contains = new Array();
                parentEl.contains.push(element);
                delete element.path;
            });
        }
        return root;
    })
);
2) Run AQL with udf:
LET flatStructure = (
    FOR v, e, p IN 1..? INBOUND 'collectionA/itemA' GRAPH 'myGraph'
        LET childPath = (FOR pv IN p.vertices RETURN pv.id_source)
        RETURN MERGE(v, { path: childPath })
)
LET root = {"id": "ItemA"}
RETURN GO::LOCATED_IN::APPENT_CHILD_STRUCTURE(root, flatStructure)
Note: Please don't forget the naming conventions when implementing your own functions.
I also needed the answer to this question, so here is a solution that works.
I'm sure the code will need to be customised for your case and could do with some improvements; please comment accordingly if appropriate for this sample answer.
The solution is to use a Foxx microservice that uses recursion to build the tree. The issue I have is with looping paths, but I implemented a maximum depth limit that stops this, hard-coded to 10 in the example below.
To create a Foxx Microservice:
Create a new folder (e.g. recursive-tree)
Create the directory scripts
Place the files manifest.json and index.js into the root directory
Place the file setup.js in the scripts directory
Then create a new zip file with these three files in it (e.g. Foxx.zip)
Navigate to the ArangoDB Admin console
Click on Services | Add Service
Enter an appropriate Mount Point, e.g. /my/tree
Click on Zip tab
Drag in the Foxx.zip file you created, it should create without issues
If you get an error, ensure the collections myItems and myConnections don't exist, and the graph called myGraph does not exist, as it will try to create them with sample data.
Then navigate to the ArangoDB admin console, Services | /my/tree
Click on API
Expand /tree/{rootId}
Provide the rootId parameter of ItemA and click 'Try It Out'
You should see the result, from the provided root id.
If the rootId doesn't exist, it returns nothing
If the rootId has no children, it returns an empty array for 'contains'
If the rootId has looping 'contains' values, it returns nesting up to the depth limit; I wish there was a cleaner way to stop this.
Here are the three files:
setup.js (to be located in the scripts folder):
'use strict';
const db = require('@arangodb').db;
const graph_module = require("org/arangodb/general-graph");
const itemCollectionName = 'myItems';
const edgeCollectionName = 'myConnections';
const graphName = 'myGraph';

if (!db._collection(itemCollectionName)) {
    const itemCollection = db._createDocumentCollection(itemCollectionName);
    itemCollection.save({_key: "ItemA"});
    itemCollection.save({_key: "ItemB"});
    itemCollection.save({_key: "ItemC"});
    itemCollection.save({_key: "ItemD"});
    itemCollection.save({_key: "ItemE"});

    if (!db._collection(edgeCollectionName)) {
        const edgeCollection = db._createEdgeCollection(edgeCollectionName);
        edgeCollection.save({_from: itemCollectionName + '/ItemA', _to: itemCollectionName + '/ItemB'});
        edgeCollection.save({_from: itemCollectionName + '/ItemB', _to: itemCollectionName + '/ItemC'});
        edgeCollection.save({_from: itemCollectionName + '/ItemB', _to: itemCollectionName + '/ItemD'});
        edgeCollection.save({_from: itemCollectionName + '/ItemD', _to: itemCollectionName + '/ItemE'});
    }

    const graphDefinition = [
        {
            "collection": edgeCollectionName,
            "from": [itemCollectionName],
            "to": [itemCollectionName]
        }
    ];

    const graph = graph_module._create(graphName, graphDefinition);
}
manifest.json (to be located in the root folder):
{
    "engines": {
        "arangodb": "^3.0.0"
    },
    "main": "index.js",
    "scripts": {
        "setup": "scripts/setup.js"
    }
}
index.js (to be located in the root folder):
'use strict';
const createRouter = require('@arangodb/foxx/router');
const router = createRouter();
const joi = require('joi');
const db = require('@arangodb').db;
const aql = require('@arangodb').aql;

const recursionQuery = function(itemId, tree, depth) {
    const result = db._query(aql`
        FOR d IN myItems
            FILTER d._id == ${itemId}
            LET contains = (
                FOR c IN 1..1 OUTBOUND ${itemId} GRAPH 'myGraph' RETURN { "_id": c._id }
            )
            RETURN MERGE({"_id": d._id}, {"contains": contains})
    `);
    tree = result._documents[0];
    if (depth < 10) {
        if ((result._documents[0]) && (result._documents[0].contains) && (result._documents[0].contains.length > 0)) {
            for (var i = 0; i < result._documents[0].contains.length; i++) {
                tree.contains[i] = recursionQuery(result._documents[0].contains[i]._id, tree.contains[i], depth + 1);
            }
        }
    }
    return tree;
}

router.get('/tree/:rootId', function(req, res) {
    let myResult = recursionQuery('myItems/' + req.pathParams.rootId, {}, 0);
    res.send(myResult);
})
.response(joi.object().required(), 'Tree of child nodes.')
.summary('Tree of child nodes')
.description('Tree of child nodes underneath the provided node.');

module.context.use(router);
Now you can invoke the Foxx microservice API endpoint; given the rootId, it will return the full tree. It's very quick.
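For example, assuming a default local install (the host, port, _system database and the /my/tree mount point are assumptions about your setup), the endpoint can also be called from plain Node.js:
// Hypothetical call to the mounted service; adjust host, database and mount point to your setup.
const http = require('http');

http.get('http://localhost:8529/_db/_system/my/tree/tree/ItemA', (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => console.log(JSON.parse(body)));
});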
The example output of this for ItemA is:
{
    "_id": "myItems/ItemA",
    "contains": [
        {
            "_id": "myItems/ItemB",
            "contains": [
                {
                    "_id": "myItems/ItemC",
                    "contains": []
                },
                {
                    "_id": "myItems/ItemD",
                    "contains": [
                        {
                            "_id": "myItems/ItemE",
                            "contains": []
                        }
                    ]
                }
            ]
        }
    ]
}
You can see that ItemB contains two children, ItemC and ItemD, and ItemD in turn contains ItemE.
I can't wait until ArangoDB AQL improves the handling of variable-depth paths in the FOR v, e, p IN 1..100 OUTBOUND 'abc/def' GRAPH 'someGraph' style queries. Custom visitors were not recommended for use in 3.x, but they haven't really been replaced with something as powerful for handling wildcard queries on the depth of a vertex in a path, or for handling prune or exclude style commands during path traversal.
Would love to have comments/feedback if this can be simplified.

Grunt uglify: Weird behavior

I cannot figure out why uglify does not accept a concatenated string as input or output...
This works:
uglify: {
    dev_uglify_js: {
        files: {
            'my_file.min.js': ['my_file.js']
        }
    }
}
For example, this does not work:
uglify: {
    dev_uglify_js: {
        files: {
            'my'+'_file.min.js': ['my_file.js']
        }
    }
}
Do you have any idea why?
The output error is "SyntaxError: Unexpected token".
The real interest here is to concatenate a timestamp to the file name, but it does not even work with just two strings...
Thanks for your help!
In a plain (pre-ES2015) JavaScript object literal, a key cannot be built from an expression. This is not a problem with grunt or uglify; it's a language constraint. (ES2015 later added computed property names for exactly this; a small example is shown below.)
myObject = { 'a' + 'b' : 'b' } // NOPE!
However, any object property can be accessed via square brackets. For example:
myObject = { 'banana': 'boat' }
myObject.banana // boat
myObject['banana'] // boat!
Therefore, you can add properties after the object is already created, using the square brackets syntax.
myObject = {}
myObject[ 'a' + 'b' ] = 'b' // Yes
myObject.ab // b
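On runtimes with ES2015 support, a computed property name inside the literal is shorthand for the same bracket assignment (shown here with a hypothetical timestamped key):
// ES2015 computed property name: the key expression is evaluated when the literal is built.
var timestamp = Date.now();
var files = { ['my_file.' + timestamp + '.min.js']: ['my_file.js'] };
console.log(Object.keys(files)); // e.g. [ 'my_file.1700000000000.min.js' ]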
The Gruntfile example
In your Gruntfile, you're bound to, at some point, call something like grunt.config.init or grunt.initConfig. This is usually done inline:
grunt.initConfig({
uglify: {} // properties ...
});
However, initConfig simply receives an object. You can define it and manipulate it as much as you need before calling this function. So, for example:
var config = { uglify: {} };
config.uglify['such'+'dynamic'+'very'+'smarts'] = {};
grunt.initConfig(config);
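Applied to the timestamp use case from the question, a minimal Gruntfile sketch might look like this (the target and file names are taken from the question; the timestamp format and the grunt-contrib-uglify task registration are assumptions about your setup):
module.exports = function (grunt) {
    // Build the uglify target with a dynamic destination key before handing it to initConfig.
    var timestamp = grunt.template.today('yyyymmdd-HHMMss');
    var config = { uglify: { dev_uglify_js: { files: {} } } };
    config.uglify.dev_uglify_js.files['my_file.' + timestamp + '.min.js'] = ['my_file.js'];

    grunt.initConfig(config);
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.registerTask('default', ['uglify']);
};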
Similar questions:
How do I create a dynamic key to be added to a JavaScript object variable
How do I add a property to a JavaScript object using a variable as the name?

How to retrieve a subset of array within an object in a meteor collection?

Hope someone can help! I have a collection in Meteor whose documents contain arrays of temperature readings in the following format:
"temp_readings": [
{
"reading_time": {
"$date": "2015-01-18T11:54:00.700Z"
},
"temp_F": 181.76
},
{
"reading_time": {
"$date": "2015-01-18T11:55:00.700Z"
},
"temp_F": 187.16
},
{
"reading_time": {
"$date": "2015-01-18T11:56:00.700Z"
},
"temp_F": 190.76
},
{
"reading_time": {
"$date": "2015-01-18T11:57:00.700Z"
},
"temp_F": 196.16
}
]
I can retrieve this complete array in my client-side Meteor code, but I now want to read just a subset of this array based on a date/time set by the user. So, for example, retrieve the subset of the array that has only entries equal to or later than "2015-01-18T11:56:00.700Z"... I know I could probably do something with selective publish/subscribe methods, but for now, is there a simple way on the client side to retrieve this subset of data? Maybe some JavaScript methods can help?
Thanks in advance,
Rick
// or however you fetch your document
var doc = MyCollection.findOne();
if (!doc)
    return [];

// ensure temps is an array
var temps = doc.temp_readings || [];

// return an array of temp readings where the date is >= someOtherDate
return _.filter(temps, function(temp) {
    return temp.reading_time.$date >= someOtherDate;
});
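If reading_time.$date is stored as an ISO string, comparing real Date objects is slightly safer than comparing the raw values, and the native Array.prototype.filter works just as well as underscore. A small sketch (the cutoff value is the example date from the question):
// Keep only the readings taken at or after the user's chosen cutoff time.
var cutoff = new Date('2015-01-18T11:56:00.700Z');

var doc = MyCollection.findOne();
var temps = (doc && doc.temp_readings) || [];

var recentTemps = temps.filter(function (temp) {
    return new Date(temp.reading_time.$date) >= cutoff;
});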
