QTreeView extended selection and removing/moving multiple items - Qt

Is it valid to implement the removal algorithm below for a QTreeView with QTreeView::setSelectionMode(QAbstractItemView::ExtendedSelection);, i.e. where multiple items are selectable?
QModelIndexList indexList = treeView->selectionModel()->selectedRows();
QList<QPersistentModelIndex> persistentIndexList;
for (QModelIndex const & index : indexList) {
    persistentIndexList.append(index);
}
for (QPersistentModelIndex const & persistentIndex : persistentIndexList) {
    if (!treeModel->removeRow(persistentIndex.row(), persistentIndex.parent())) {
        qWarning() << "Can't remove row" << persistentIndex;
    }
}
I think a situation is possible where a parent is removed before its child, and at that moment even the persistent indexes are no longer valid. Am I wrong?
Must the model check hasIndex() in removeRows()?

Every implementation of QAbstractItemModel does its best (at least it has to) to keep QPersistentModelIndexes valid when the model changes. If the model can't calculate the new location of an index, it invalidates the QPersistentModelIndex. This can happen when the index is removed, or when the model's layout or its entire data changes.
That is why it is always necessary to check whether a QPersistentModelIndex is valid before using it.
But QAbstractItemModel::removeRows() returns a bool, which means that wrong arguments can be passed to this method. If the model can't remove rows due to wrong arguments, it returns false.
So the answer to your question is yes: you should check the index in removeRows() and return the correct result.
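For illustration, here is a minimal sketch of both checks, assuming a custom model class (called TreeModel here purely as a placeholder, not code from the question):
// In the removal loop: skip indexes that an earlier removal has already
// invalidated (e.g. a child whose parent was just removed).
for (QPersistentModelIndex const & persistentIndex : persistentIndexList) {
    if (!persistentIndex.isValid()) {
        continue; // already removed together with its parent
    }
    if (!treeModel->removeRow(persistentIndex.row(), persistentIndex.parent())) {
        qWarning() << "Can't remove row" << persistentIndex;
    }
}

// In the model: a defensive removeRows() that rejects bad arguments.
bool TreeModel::removeRows(int row, int count, QModelIndex const & parent)
{
    if (row < 0 || count <= 0 || row + count > rowCount(parent)) {
        return false; // invalid range, nothing removed
    }
    beginRemoveRows(parent, row, row + count - 1);
    // ... remove the underlying items here ...
    endRemoveRows();
    return true;
}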

Related

Complex reduce sample: unclear how the reduce works

Starting with the complex reduce sample, I have trimmed it down to a single chart and I am trying to understand how the reduce works.
I have made comments in the code that were not in the example, denoting what I think is happening based on how I read the docs.
function groupArrayAdd(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for add below
    return function(elements, item) {
        // get the position of the key value for this element in the sorted array and put it there
        var pos = bisect.right(elements, keyfn(item));
        elements.splice(pos, 0, item);
        return elements;
    };
}
function groupArrayRemove(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for remove below
    return function(elements, item) {
        // get the position of the key value for this element in the sorted array and splice it out
        var pos = bisect.left(elements, keyfn(item));
        if (keyfn(elements[pos]) === keyfn(item))
            elements.splice(pos, 1);
        return elements;
    };
}
function groupArrayInit() {
    // for each key found by the key function return this array?
    return []; // the result array where the data is being inserted in sorted order?
}
I am not quite sure my perception of how this works is right; some of the magic isn't showing itself. Am I correct that elements is the group the reduce function is being called on? Also, how is the array in groupArrayInit() being indirectly populated?
Part of me feels that the functions supplied to the reduce call are really array.map functions, not array.reduce functions, but I just can't quite put my finger on why. Having read the docs, I am just not making a connection here.
Any help would be appreciated.
Also, have I missed Pens/Fiddles created for all these examples, like this one:
http://dc-js.github.io/dc.js/examples/complex-reduce.html which is where I started, but I had to download the CSV and manually convert it to JSON.
Update:
I added some print statements to try to clarify how the add function is working:
function groupArrayAdd(keyfn) {
    var bisect = d3.bisector(keyfn); // set the bisector value function
    // elements is the group that we are reducing, item is the current item
    // this is the reduce function being supplied to the reduce call on the group runAvgGroup for add below
    return function(elements, item) {
        console.log("---Start Elements and Item and keyfn(item)----");
        console.log(elements); // elements grouped by run?
        console.log(item); // not seeing the pattern on what this is on each run
        console.log(keyfn(item));
        console.log("---End----");
        // get the position of the key value for this element in the sorted array and put it there
        var pos = bisect.right(elements, keyfn(item));
        elements.splice(pos, 0, item);
        return elements;
    };
}
and to print out the group's contents
console.log("RunAvgGroup")
console.log(runAvgGroup.top(Infinity))
which results in output that appears to be incorrect, because the values are not sorted by key (the run number).
And looking at the results of the print statements doesn't seem to help either.
This looks basically right to me. The issues are just conceptual.
Crossfilter's group.reduce is not exactly like either Array.reduce or Array.map. Group.reduce defines methods for adding new records to a group and removing records from a group. So it is conceptually similar to an incremental Array.reduce that supports a reversal operation. This allows filters to be applied and removed.
Group.top returns your list of groups. The value property of these groups should be the elements array that your reduce functions return. The key of each group is the value returned by your group accessor (defined in the dimension.group call that creates your group), or by your dimension accessor if you didn't define a group accessor. Reduce functions work only on the group values and do not have direct access to the group key.
So check those values in the group.top output and hopefully you’ll see the lists of elements you expect.

How to get a random item from an ES6 Map or Set

I have a project that uses arrays of objects that I'm thinking of moving to ES6 Sets or Maps.
I need to quickly get a random item from them (obviously trivial for my current arrays). How would I do this?
Maps and Sets are not well suited for random access. They are ordered and their size is known, but they are not indexed for access by position. As such, to get the Nth item in a Map or Set, you have to iterate through it to find that item.
The simple way to get a random item from a Set or Map would be to get the entire list of keys/items and then select a random one.
// get random item from a Set
function getRandomItem(set) {
    let items = Array.from(set);
    return items[Math.floor(Math.random() * items.length)];
}
You could make a version that would work with both a Set and a Map like this:
// returns random key from Set or Map
function getRandomKey(collection) {
    let keys = Array.from(collection.keys());
    return keys[Math.floor(Math.random() * keys.length)];
}
This is obviously not something that would perform well with a large Set or Map since it has to iterate all the keys and build a temporary array in order to select a random one.
Since both a Map and a Set have a known size, you could also select a random index based purely on the .size property and then iterate through the Map or Set until you reach the desired Nth item. For large collections, that might be a bit faster and avoids creating the temporary array of keys at the expense of a little more code, though on average it is still proportional to half the size of the collection.
// returns random key from Set or Map
function getRandomKey(collection) {
    let index = Math.floor(Math.random() * collection.size);
    let cntr = 0;
    for (let key of collection.keys()) {
        if (cntr++ === index) {
            return key;
        }
    }
}
There's a short neat ES6+ version of the answer above:
const getRandomItem = iterable => iterable.get([...iterable.keys()][Math.floor(Math.random() * iterable.size)])
This works for Maps; note that for Sets, keys() is an alias for the values() method, but a Set has no get().
This is the short answer for Sets:
const getRandomItem = set => [...set][Math.floor(Math.random()*set.size)]

Partial key matching QHash

I have a QHash defined as follows
QHash<QString, QString> hashLookup;
I have inserted a few values to this hash as follows:
hashLookup.insert("OMG", "Oh my God!");
hashLookup.insert("LOL", "Laugh out loud");
hashLookup.insert("RIP", "Rest in peace");
// and so on
I have a few QStrings as follows:
QString a = "OMG_1";
QString b = "LOL_A";
QString c = "OMG_YOU";
QString d = "RIP_two";
I am supposed to find whether these values exist in hashLookup; i.e., since OMG_1 contains OMG, I should be able to retrieve Oh my God!.
I have tried to do this using
if (hashLookup.contains(a))
    //do something
which of course looks for the key OMG_1, which is not present in the lookup table, and so does not return anything. Is partial matching of key values possible in Qt? If yes, how should I go about implementing it?
There is no way in the QHash class to extract values by partial key matching, because QHash uses a hash function (Qt documentation: qHash):
The qHash() function computes a numeric value based on a key. It can use any algorithm imaginable, as long as it always returns the same value if given the same argument. In other words, if e1 == e2, then qHash(e1) == qHash(e2) must hold as well. However, to obtain good performance, the qHash() function should attempt to return different hash values for different keys to the largest extent possible.
Different keys almost always give different hashes.
For your task, you can iterate over the QHash keys and compare them using QString functionality. Something like this:
QString getHashValue(const QString& strKey, const QHash<QString, QString>& hashLookup)
{
    QList<QString> uniqueKeys = hashLookup.uniqueKeys();
    foreach (const QString& key, uniqueKeys)
    {
        if (strKey.contains(key))
            return hashLookup.value(key);
    }
    return QString(); // no stored key is contained in strKey
}
...
getHashValue("OMG_1", hashLookup);
First, in your example the QHash::contains(key) call tries to find OMG_1, which in fact it will not find.
You may implement a method which takes an expanded key and tries to locate any subkey of the given value in the hash. Here you have to define some rules, I think, or it may not return the intended value.
Think of the following example: the hash contains the keys OMG and OM. To match the provided expanded key, you could implement something like this:
bool hashContainsExpanded(const QString &key) const {
    if (!hash.contains(key) && key.length() > 1)
        return hashContainsExpanded(key.left(key.length() - 1));
    return hash.contains(key);
}
This method will let you find the key OMG but not OM, which is contained in that key. You may also implement a method which first tests the first character of the provided expanded key for containment; if it is not found, it tests the first two characters, and so on. This will match OM in favour of OMG (see the sketch below).
Also keep in mind that you may want to work with the matched key later, so you should return it instead of only returning true.
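A minimal sketch of that growing-prefix variant, returning the matched key instead of a bool (the function name and the free-standing signature are only illustrative):
// Returns the shortest stored key that is a prefix of the expanded key,
// or an empty QString if no stored key matches.
QString findShortestPrefixKey(const QString &expandedKey,
                              const QHash<QString, QString> &hash)
{
    for (int len = 1; len <= expandedKey.length(); ++len) {
        const QString prefix = expandedKey.left(len);
        if (hash.contains(prefix))
            return prefix; // e.g. finds "OM" before "OMG"
    }
    return QString();
}
The value can then be retrieved with, for example, hashLookup.value(findShortestPrefixKey(a, hashLookup)).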

Specman e: Is there a way to constrain the amount of set bits in a number?

I have a unit field events:
events:uint;
The value of events is not as interesting as the number of set bits in it. I would like to constrain the range of the number of set bits in events.
Is there a way to do it?
Thank you for your help.
The operators [..] and %{} aren't generative; therefore they are considered as inputs in the constraints.
The constraint:
keep events_bits == events[..];
is equivalent to:
keep events_bits == read_only(events[..]);
The generator will generate events and only then enforce the constraints on events_bits.
You can do the following:
extend sys {
    events : uint;
    events_bits_on[32] : list of bool;
    keep for each in events_bits_on {
        it == (events[index:index] == 1);
    };
    keep events_bits_on.count(it) == 2;
};
There might be an easier way to do it, but you can use bit slicing to constrain a list of uints to equal the bits of your variable, and use sum to constrain their amount.
The following example does what you need (limited to a 4 bit variable for brevity):
<'
extend sys {
    events : uint(bits:4);
    b : list of uint(bits:1);
    keep b.size() == 4;
    keep b[0] == events[0:0];
    keep b[1] == events[1:1];
    keep b[2] == events[2:2];
    keep b[3] == events[3:3];
    keep (b.sum(it) == 2);
};
'>
Writing all the constraints is probably a little ugly, but it can easily be done using a define as computed macro.
This is only a partial answer.
You can use events[..] or %{events} to convert from a vector to a list containing the bits of that vector. Using it directly in a constraint doesn't work, because the compiler complains there's no generative element:
extend sys {
    events : uint(bits:4);
    // neither of these compile
    keep events[..].sum(it) == value(2);
    keep %{events}.sum(it) == value(2);
};
This is probably a case for Cadence.
What is allowed, however, is to create an intermediate list and constrain it to equal the output of either of these operators:
extend sys {
    events_bits : list of bit;
    keep events_bits == events[..];
};
You would think that you could then constrain this list to have a certain number of 1s:
extend sys {
    // these constraints apply, but aren't enforced
    keep events_bits.sum(it) == value(2);
    keep events_bits.count(it == 1) == value(2);
};
This doesn't work however. Even though the constraints are there, they don't get enforced for some reason. This is another issue for Cadence to look at.
Summarizing, if these issues weren't there, you could probably count the number of ones easily. Maybe in a future version of Specman. I still hope that at least seeing that the [..] and the %{} operators exist will help you for other things.

How do I check if System::Collections:ArrayList is empty / nullptr / null?

I'd like to know how in C++/CLI it is possible to check whether an ArrayList is existent.
System::Collections::ArrayList %queue_tx
I tried if ( nullptr != queue_tx ) { queue_tx.Add(msg); } but that didn't work. I'm passing queue_tx as a parameter to a function and there's supposed to be the possibility of this parameter not being set (or being set to nullptr).
The compiler throws '!=' : no conversion from 'System::Collections::ArrayList' to 'nullptr'.
How do I do this?
% declares a tracking reference variable; this is why it cannot be null.
If you had declared the ArrayList like this:
System::Collections::ArrayList^ queue_tx
then your nullptr check would be possible and meaningful.
Otherwise, just use queue_tx.Count to check whether the collection is empty.
I would recommend going over:
the difference between reference and pointer variables
When to use a Reference VS Pointers
It is quite impossible for a T% to be null.
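Putting both answers together, a minimal C++/CLI sketch (the function names here are only illustrative) might look like this:
// Pass the collection by handle (^) so the caller may pass nullptr.
void Enqueue(System::Collections::ArrayList^ queue_tx, System::Object^ msg)
{
    if (queue_tx != nullptr)   // the handle itself may be unset
    {
        queue_tx->Add(msg);    // handles are dereferenced with ->
    }
}

// With a tracking reference (%), the object always exists,
// so only emptiness can be checked.
void Process(System::Collections::ArrayList %queue_tx)
{
    if (queue_tx.Count > 0)    // Count is a property, not a method
    {
        // ... work with the items ...
    }
}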
