Order of QObject children (strategy question) - qt

For one of my projects I have a tree of QObject derived objects, which utilize QObject's parent/child functionality to build the tree.
This is very useful, since I make use of signals and slots, use Qt's guarded pointers and expect parent objects to delete children when they are deleted.
So far so good. Unfortunately now my project requires me to manage/change the order of children. QObject does not provide any means of changing the order of its children (exception: QWidget's raise() function - but that's useless in this case). So now I'm looking for a strategy of controlling the order of children. I had a few ideas, but I'm not sure about their pros & cons:
Option A: Custom sort index member variable
Use an int m_orderIndex member variable as a sort key and provide a sortedChildren() method that returns a list of QObjects sorted by this key.
Easy to implement into existing object structure.
Problematic if the QObject::children() method is hidden by an own implementation (it is not virtual) - changing the items' order during loops will cause problems, and the sorted lookup is more expensive than the default implementation.
Should fall back to QObject object order if all sort keys are equal or 0/default.
Option B: Redundant list of children
Maintain a redundant list of children in a QList: add children to it when they are created and remove them when they are destroyed.
Requires expensive tracking of added/deleted objects. This basically leads to a second child/parent tracking and a lot of signals/slots. QObject does all of this internally already, so it might not be a good idea to do it again. Also feels like a lot of bloat is added for a simple thing like changing the order of children.
Good flexibility, since a QList of children can be modified as needed.
Allows a child to appear in the QList more than once, or not at all (even though it is still a child of the QObject).
Option C: ...?
Any ideas or feedback, especially from people who already solved this in their own projects, is highly appreciated. Happy new year!

I spent a lot of time going through all these options in the past days and discussed them carefully with some other programmers. We decided to go for Option A.
Each of the objects we are managing is a child of a parent object. Since Qt does not provide any means of re-ordering these objects, we decided to add an int m_orderIndex property to each object, which defaults to 0.
Each object has an accessor function sortedChildren() which returns a QObjectList of the children. What we do in that function is:
Use the normal QObject::children() function to get a list of all available child objects.
dynamic_cast all objects to our "base class", which provides the m_orderIndex property.
If the object is castable, add it to a temporary object list.
Use qSort() with a custom lessThan() function that compares the objects' m_orderIndex values to sort the temporary list.
Return the temporary object list.
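A Qt-free sketch of the steps above (type and member names are illustrative; the real implementation works with QObject::children(), dynamic_cast and qSort()):

```cpp
#include <algorithm>
#include <vector>

// "Node" stands in for the QObject-derived base class that carries
// the m_orderIndex sort key described in the answer.
struct Node {
    explicit Node(int orderIndex = 0) : m_orderIndex(orderIndex) {}

    void addChild(Node *child) { m_children.push_back(child); }

    // Returns the children sorted by m_orderIndex. std::stable_sort keeps
    // the original (insertion) order whenever two keys compare equal,
    // which gives the "order unchanged if all indices are zero" behaviour.
    std::vector<Node *> sortedChildren() const {
        std::vector<Node *> result = m_children;
        std::stable_sort(result.begin(), result.end(),
                         [](const Node *a, const Node *b) {
                             return a->m_orderIndex < b->m_orderIndex;
                         });
        return result;
    }

    int m_orderIndex;
    std::vector<Node *> m_children; // QObject::children() in the real code
};
```

Note that a stable sort is what makes the zero-default well-behaved; a plain quicksort may reorder equal keys.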
We did this for the following reasons:
Existing code (especially Qt's own code) can continue using children() without having to worry about side effects.
We can use the normal children() function in places where the order does not matter, without having any performance loss.
In the places where we need the ordered list of children, we simply replace children() by sortedChildren() and get the desired effect.
One of the good things about this approach is that the order of children does not change if all sort indices are set to zero.
Sorry for answering my own question, hope that enlightens people with the same problem. ;)

What about something like...
QObjectList listChildren = children();
sort listChildren as desired
foreach child in listChildren: child->setParent( tempParent )
foreach child in listChildren: child->setParent( originalParent )
Re-parenting the children back in sorted order makes QObject rebuild its internal child list in exactly that order, since each setParent() appends to the new parent's list.

A nasty hack: QObject::children() returns a reference-to-const. You could cast away the const-ness and thus manipulate the internal list directly.
This is pretty evil though, and has the risk of invalidating iterators which QObject keeps internally.

I do not have a separate option C, but comparing options A and B, you are talking about ~4 bytes per child (a 32-bit pointer vs. a 32-bit integer) in either case, so I'd go with option B, as you can keep that list sorted.
To avoid the additional complexity of tracking children, you could cheat and keep your list sorted and tidy, but combine it with a sortedChildren() method that filters out all the non-children. Complexity-wise, this ought to end up around O(n log m) (n = children, m = list entries, assuming that m >= n, i.e. children are always added) unless you have a large turnover of children. Let's call this option C.
Quicksort, in your suggested option A, gives you O(n^2) in the worst case, but also requires you to retrieve pointers, follow them, retrieve an integer, etc. The combined method only needs a list of the pointers (and ought to be O(n)).
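A minimal sketch of this option C in plain C++ (illustrative names, no Qt): the parent keeps its own ordered list of child pointers, and sortedChildren() filters that list against the actual child set, dropping stale entries.

```cpp
#include <unordered_set>
#include <vector>

struct Child {};

struct Parent {
    // User-maintained, already-sorted list; may contain stale entries
    // for objects that are no longer (or never were) children.
    std::vector<Child *> order;
    // The "real" child set (QObject::children() in a Qt version).
    std::unordered_set<Child *> children;

    // Walk the ordered list and keep only entries that are still children.
    std::vector<Child *> sortedChildren() const {
        std::vector<Child *> result;
        for (Child *c : order)
            if (children.count(c))
                result.push_back(c);
        return result;
    }
};
```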

I had the same problem and I solved it with Option B. Tracking isn't that difficult: just create a method void addChild(Type *ptr); and another one to remove a child item.
You don't suffer from evil redundancy if you exclusively store the children within the private/public child list (QList) of each item and drop the QObject base. It's actually quite easy to implement auto-child-free on free (though that requires an extra parent pointer).
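A Qt-free sketch of such an explicit child list with auto-child-free, assuming C++17 (the Item type and its members are illustrative, not from the answer's codebase):

```cpp
#include <memory>
#include <vector>

// Each item owns its children directly; destroying an item frees the
// whole subtree because the unique_ptrs in `children` are destroyed too.
struct Item {
    static inline int live = 0;   // instance counter, only for demonstration
    Item() { ++live; }
    ~Item() { --live; }

    Item *parent = nullptr;       // the "extra parent pointer" the answer mentions
    std::vector<std::unique_ptr<Item>> children;

    Item *addChild() {
        children.push_back(std::make_unique<Item>());
        children.back()->parent = this;
        return children.back().get();
    }

    void removeChild(Item *child) {
        for (auto it = children.begin(); it != children.end(); ++it)
            if (it->get() == child) {
                children.erase(it); // frees child and, recursively, its subtree
                return;
            }
    }
};
```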

Related

How to have valid references to objects owned by containers that dynamically move their objects?

If you have a pointer or reference to an object contained in a container, say a dynamic array or a hash table/map, there's the problem that the objects don't permanently stay there, and so it seems any references to these objects become invalid before too long. For example a dynamic array might need to reallocate, and a hash table might have to rehash, changing the positions of the buckets in the array.
In languages like Java (and I think C#), and probably most languages, this may not be a problem. In these languages many things are references instead of the object itself. When you create a reference to the 3rd element of a dynamic array, you basically create a new reference by copying the reference to the object, which lives somewhere else.
But in say C++ where a dynamic array or hash table will actually store the objects directly in its memory owned by the container what are you supposed to do? There's only one place where the object I create can live. I can create the object by allocating it somewhere, and then store pointers to that object in a dynamic array or a hash table, or any other container. However if I decide to have the container be the owner of those objects I run into problems with having a pointer or reference to those objects.
In the case of a dynamic array like std::vector you can reference an object in the array with an index instead of a memory address. If the array is reallocated, the index is still valid. However, I run into the same problem if I remove an element from the array: the index is potentially no longer valid.
In the case of something like a hash table, the table might dynamically rehash, changing the position of all the values in the buckets. Is the only way of having references to hash table values to just search for or hash the key every time you want to access it?
What are the ways of having references to objects that live in containers like these, or any others?
There aren't any magic or generally used solutions to this; you have to make trade-offs. If you are optimizing things at this low level, one good approach might be to use a container class that informs you when it does a reallocation. It'd be interesting to find out if there is any container library with this property.
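One common C++ trade-off worth spelling out: add a level of indirection. If the container stores owning pointers, a reallocation moves the pointers rather than the objects, so addresses handed out stay valid until the element itself is erased. A small sketch (Widget and addWidget are hypothetical names for illustration):

```cpp
#include <memory>
#include <string>
#include <vector>

struct Widget {
    std::string name;
};

// The vector owns the Widgets via unique_ptr. Growing the vector
// relocates the unique_ptrs, not the Widgets they point to.
std::vector<std::unique_ptr<Widget>> widgets;

Widget *addWidget(std::string name) {
    widgets.push_back(std::make_unique<Widget>(Widget{std::move(name)}));
    return widgets.back().get(); // stays valid across reallocations
}
```

The cost is an extra allocation per element and worse cache locality, which is exactly the kind of trade-off the answer above is describing.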

Redux: is state normalization necessary for composition relationship?

As we know, when saving data in a redux store, it's supposed to be transformed into a normalized state. So embedded objects should be replaced by their ids and saved within a dedicated collection in the store.
I am wondering, if that also should be done if the relationship is a composition? That means, the embedded data isn't of any use outside of the parent object.
In my case the embedded objects are registrations, and the parent object is a (real life) event. Normalizing this data structure to me feels like a lot of boilerplate without any benefit.
State normalization is more than just how you access the data by traversing the object tree. It also has to do with how you observe the data.
Part of the reason for normalization is to avoid unnecessary change notifications. Objects are treated as immutable so when they change a new object is created so that a quick reference check can indicate if something in the object changed. If you nest objects and a child object changes then you should change the parent. If some code is observing the parent then it will get change notifications every time a child changes even though it might not care. So depending on your scenario you may end up with a bunch of unnecessary change notifications.
This is also partly why you see lists of entities broken out into an array of identifiers and a map of objects. In relation to change detection, this allows you to observe the list (whether items have been added or removed) without caring about changes to the entities themselves.
So it depends on your usage. Just be aware of the cost of observing and the impact your state shape has on that.
I don't agree that data is "supposed to be [normalized]". Normalizing is a useful structure for accessing the data, but you're the architect to make that decision.
In many cases, the data stored will be an application singleton and a descriptive key is more useful than forcing some kind of id.
In your case I wouldn't bother unless there is excessive data duplication, especially because you would then have to denormalize for the object to function properly.

Unity 3d how heavy is a dictionary?

Everyone was telling me that a List is heavy on performance, so I was wondering is it the same with a dictionary? Because a dictionary doesn't have a fixed size. Is there also a dictionary with a fixed size, just like a normal array?
Thanks in advance!
A list can be heavy on performance, but it depends on your use case.
If your use case is the indexing of a very large data set, in which you plan to search for elements during runtime, then a Dictionary will behave with O(1) Time Complexity for retrievals (which is great!).
If you plan to insert/remove a little bit of data here and there at runtime then that's okay. But, if you plan to do constant insertions at runtime then you will be taking a hit on performance due to the hashing and collision handling functions.
If your use case requires a lot of insertions, removals, and iteration through the consecutive data, then a list would be a good fit and fast. But if you plan to search constantly at runtime, then a list could take a hit performance-wise.
Regarding the Dictionary and size:
If you know the size/general range of your data set then you could technically account for that and initialize accordingly. Or you could write your own Dictionary and Hash Table implementation.
In all:
Each data structure has its advantages and disadvantages. So think about what you plan to do with the data at runtime, then pick accordingly.
Also, keeping a data structure time and space complexity table is always handy :P
This depends on your needs.
If you just add items and then iterate through a List sequentially, it is a good choice.
If you have a key for every item and need fast random access by key - use Dictionary.
In both cases you can specify the initial size of the collection to reduce memory allocation.
If you have a varying number of items in the collection, you'll want to use a list vs recreating an array with the new number of items in the collection.
With a dictionary, it's a little easier to get to specific items in the collection, given you have a key and just need to look it up, so performance is a little better when getting an item from the collection.
List<T> and Dictionary<TKey, TValue> are part of the System.Collections.Generic namespace and are mutable types. There is a System.Collections.Immutable namespace, but it's not yet supported in Unity.

Storing doubly linked lists in Riak without a race condition?

We want to use Riak's Links to create a doubly linked list.
The algorithm for it is quite simple, I believe:
Let 'N0' be the new element to insert
Get the head of the list, including its 'next' link (N1)
Set the 'previous' of N1 to be the N0.
Set the 'next' of N0 to be N1
Set the 'next' of the head of the list to be N0.
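Locally, the four steps above are an ordinary doubly linked list insertion after the head node. A standalone C++ sketch (Riak aside; node and function names are illustrative) makes it visible that three separate objects are written, which is exactly where the race comes from:

```cpp
struct Node {
    Node *prev = nullptr;
    Node *next = nullptr;
};

// Insert n0 directly after head, so: head -> n0 -> (old head->next).
// In Riak each of these pointer updates is a separate write to a
// separate object, not one atomic operation.
void insertAfterHead(Node *head, Node *n0) {
    Node *n1 = head->next;   // step: read head's 'next' link (N1)
    if (n1) n1->prev = n0;   // step: previous of N1 becomes N0
    n0->next = n1;           // step: next of N0 becomes N1
    n0->prev = head;
    head->next = n0;         // step: next of head becomes N0
}
```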
The problem that we have is that there is an obvious race condition here, because if 2 concurrent clients get the head of the list, one of the items will likely be 'lost'. Any way to avoid that?
Riak is an eventually consistent system when talking about CAP theorem.
Provided you set the bucket property allow_multi=true, if two concurrent clients get the head of the list then write, you will have sibling records. On your next read you'll receive multiple values (siblings) and will then have to resolve the conflict and write the result. Given that we don't have any sort of atomicity this will possibly lead to additional conflicts under heavy write concurrency as you attempt to update the linked objects. Not impossible to resolve, but definitely tricky.
You're probably better off simply serializing the entire list into a single object. This makes your conflict resolution much simpler.

LinkedHashMap's impl - Uses Double Linked List, not a Single Linked List; Why

As I referred to the documentation of LinkedHashMap, it says a doubly linked list (DLL) is maintained internally.
I was trying to understand why a DLL was chosen over a singly linked list (SLL).
The biggest advantage I get with a DLL would be traversing backwards, but I don't see any use case where LinkedHashMap exploits this advantage, since there is no previous() sort of operation like next() in the Iterator interface.
Can anyone explain why a DLL was used, and not an SLL?
It's because with an additional hash map, you can implement deletes in O(1). If you are using a singly linked list, delete would take O(n).
Consider a hash map for storing a key value pair, and another internal hash map with keys pointing to nodes in the linked list. When deleting, if it's a doubly linked list, I can easily get to the previous element and make it point to the following element. This is not possible with a singly linked list.
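A minimal C++ sketch of that combination (a hash map pointing directly at nodes of a doubly linked list; this illustrates the technique, it is not the actual LinkedHashMap implementation): removal never traverses the list, because prev/next let us unlink the node the map hands us.

```cpp
#include <string>
#include <unordered_map>

struct Node {
    std::string key;
    int value;
    Node *prev = nullptr;
    Node *next = nullptr;
};

struct LinkedMap {
    std::unordered_map<std::string, Node *> index; // key -> list node
    Node *head = nullptr;
    Node *tail = nullptr;

    ~LinkedMap() {
        for (Node *n = head; n;) { Node *next = n->next; delete n; n = next; }
    }

    // Append a new node in insertion order and register it in the map.
    void put(const std::string &key, int value) {
        Node *n = new Node{key, value};
        if (!head) { head = tail = n; }
        else { tail->next = n; n->prev = tail; tail = n; }
        index[key] = n;
    }

    // O(1): the map locates the node, prev/next unlink it without a walk.
    // With a singly linked list we would have to scan for the predecessor.
    void erase(const std::string &key) {
        Node *n = index.at(key);
        if (n->prev) n->prev->next = n->next; else head = n->next;
        if (n->next) n->next->prev = n->prev; else tail = n->prev;
        index.erase(key);
        delete n;
    }
};
```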
http://www.quora.com/Java-programming-language/Why-is-a-Java-LinkedHashMap-or-LinkedHashSet-backed-by-a-doubly-linked-list
