Deallocating OpenGL context created by QGLWidget - qt

If I create a QGLWidget, and then allocate my own textures using something like glGenTextures, glTexImage2D, etc., will all that texture data get cleaned up when I delete the widget? (I will also have shared widgets, which will get deleted too.)
I looked at the source for the destructor, and it looks like it deletes the context, which I assume will also clean up any textures I generated with that context:
https://qt.gitorious.org/qt/qt/source/ca5b49a2ec0ee9d7030b8d03b561717addd3441f:src/opengl/qgl.cpp#L3409
Just want to make sure in case I am missing something.

No, the texture storage will only be released once no object that uses it is bound in any of the contexts that share it. Moreover, it is not implicitly released just because one context is destroyed. All of your shared contexts share the same object namespace, so there is no way that could be allowed to happen short of destroying every context in the share group.
Each context maintains its own set of bound textures, so if you bind texture 1 in contexts A and B and then delete context A, the texture cannot be freed until you also delete context B (or unbind the texture there). This behavior applies to calling glDeleteTextures (...) as well.
That function implicitly unbinds the texture(s) you pass it from the current (calling) context, but the memory may not be freed while a texture remains bound in any other context. The only thing that happens immediately is that the texture name becomes reusable and may be returned by a subsequent call to glGenTextures (...).
Long story short, in your case the memory will eventually be freed, since you say you are going to destroy all of the contexts. It just will not necessarily be freed the moment you destroy your first context; the other conditions described above have to be met first.
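To make those lifetime rules concrete, here is a hedged C++ sketch (a fragment, not a complete program; parent and the widget names are hypothetical) of two QGLWidgets sharing one context group:

// Sketch only: two QGLWidgets in one share group.
QGLWidget *first  = new QGLWidget(parent);
QGLWidget *second = new QGLWidget(parent, first); // shares first's context

GLuint tex;
first->makeCurrent();
glGenTextures(1, &tex);            // the name is visible in both contexts
glBindTexture(GL_TEXTURE_2D, tex); // bound in first's context

second->makeCurrent();
glBindTexture(GL_TEXTURE_2D, tex); // also bound in second's context

first->makeCurrent();
glDeleteTextures(1, &tex);         // the name becomes reusable immediately,
                                   // but the storage lives on while the
                                   // texture is still bound in second
delete first;                      // storage still not freed; second remains
delete second;                     // share group gone: the driver may now
                                   // release the storage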

Related

Rust Global.dealloc vs ptr::drop_in_place vs ManuallyDrop

I'm relatively new to Rust. I was working on some lock-free algorithms and started playing around with manually managing memory, similar to C++ new/delete. I noticed a few different ways of doing this throughout the standard library, but I want to really understand the differences and use cases of each. Here's what it seems like to me:
ManuallyDrop<Box<T>> will prevent Box's destructor from running. I can save a raw pointer to the ManuallyDrop element, and have the actual element go out of scope (what would normally be dropped in Rust) without being dropped. I can later call ManuallyDrop::drop(&mut *ptr) to drop this value manually.
I can also dereference the ManuallyDrop<Box<T>> element, save a raw pointer to just the Box<T>, and later call std::ptr::drop_in_place(box_ptr). This is supposed to destroy the Box itself and drop the heap-allocated T.
Looking at the ManuallyDrop::drop implementation, it looks like those are literally doing the exact same thing. Since ManuallyDrop is zero-cost and just stores a value in its struct, is there any difference between the above two approaches?
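For concreteness, here is a minimal sketch of those two approaches (a toy example; the values are arbitrary):

use std::mem::ManuallyDrop;
use std::ptr;

fn main() {
    // Approach 1: wrap the Box, then drop it explicitly later.
    let mut a = ManuallyDrop::new(Box::new(5));
    unsafe { ManuallyDrop::drop(&mut a) }; // runs Box's destructor

    // Approach 2: take a raw pointer to the inner Box and drop it in place.
    let mut b = ManuallyDrop::new(Box::new(6));
    let box_ptr: *mut Box<i32> = &mut *b;
    unsafe { ptr::drop_in_place(box_ptr) }; // same effect as approach 1
}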
I can also call std::alloc::Global.dealloc(...), which looks like it will deallocate the memory block without calling drop. So if I call this on a pointer to a Box<T>, it'll deallocate the heap pointer but won't call drop, so the T will still be lying around on the heap. I could instead call it on a pointer to the T itself, which would remove the T.
From exploring the standard library, it looks like Global.dealloc gets called in the raw_vec implementation to actually remove the heap-allocated array that Vec points to. This makes sense, since it's literally trying to remove a block of memory.
Rc has a drop implementation that looks roughly like this:
// destroy the contained object
ptr::drop_in_place(self.ptr.as_mut());
// remove the implicit "strong weak" pointer now that we've
// destroyed the contents.
self.dec_weak();
if self.weak() == 0 {
    Global.dealloc(self.ptr.cast(), Layout::for_value(self.ptr.as_ref()));
}
I don't really understand why it needs both the dealloc and the drop_in_place. What does the dealloc add that the drop_in_place doesn't do?
Also, if I just save a raw pointer to a heap-allocated value by doing something like Box::into_raw(Box::new(5)), does my pointer now control that memory allocation? That is, will the value remain alive until I explicitly call ptr::drop_in_place()?
Finally, when I was playing with all this, I ran into a strange issue. After running ManuallyDrop::drop or ptr::drop_in_place on my raw pointer, I then tried running println! on the pointer's dereferenced value. Sometimes I get a scary heap error and my test fails, which is what I would expect. Other times, it just prints the same value, as if no drop had happened. I also tried running ManuallyDrop::drop multiple times on the exact same value, with the same result: sometimes a heap error, sometimes totally fine, and the same value prints out.
What is happening here?
If you come from C++, you can think of drop_in_place as calling the destructor manually, and dealloc as calling plain old C free.
They serve different purposes:
drop_in_place just calls Drop::drop, which releases the resources held by your type.
dealloc frees the memory pointed to by a pointer, previously allocated with alloc.
You seem to think that drop_in_place also frees the memory, but that is not the case. I think your confusion arises because Box<T> contains a dynamically allocated object, so Box's drop implementation does release the memory used by that object, after calling its drop_in_place, of course.
That is what you see in the Rc implementation, first it calls the drop_in_place (destructor) of the inner object, then it releases the memory.
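To make the division of labor concrete, here is a minimal sketch; it uses the stable std::alloc::dealloc, which plays the role of the Global.dealloc in the question:

use std::alloc::{dealloc, Layout};
use std::ptr;

fn main() {
    // Take manual ownership of a heap-allocated String.
    let p: *mut String = Box::into_raw(Box::new(String::from("hello")));

    unsafe {
        // 1. Run the destructor: frees the String's internal buffer,
        //    but the block holding the String struct is still allocated.
        ptr::drop_in_place(p);

        // 2. Free that block. This is roughly what Box's destructor
        //    does after its own drop_in_place.
        dealloc(p as *mut u8, Layout::new::<String>());

        // Touching *p after this point is undefined behavior.
    }
}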
About what happens if you call drop_in_place several times in a row... well, the function is unsafe for a reason: you most likely get undefined behavior. From the docs:
...if T is not Copy, using the pointed-to value after calling drop_in_place can cause undefined behavior.
Note the "can cause". I think it is perfectly possible to write a type that allows calling drop several times, but it doesn't sound like such a good idea.

When exactly is "Component.completed" fired?

When exactly is "Component.completed" fired?
The docs say this:
Emitted after the object has been instantiated.
And if this was C++, I'd know that, since the object has been instantiated, I can rely on the constructor to have been executed, with all the guarantees that come from that.
But in QML I don't know what guarantees I have about an object that "has been instantiated". That memory has been allocated for it? That its properties have been evaluated and received their initial values? That the whole descendant subtree has been loaded?
The guarantee is that it will be fired after the object has been completed. That includes the allocation of memory, construction of the object, rigging of property bindings, initial evaluations, and such.
What is not guaranteed is the order in which completed signals are handled when objects are nested in a tree. You should not rely on that order. An object will not be completed before its entire object tree is completed, but, for some inexplicable reason, you cannot expect the notifications to arrive in the tree-defined order.
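A minimal sketch illustrating this (the ids and log messages are arbitrary):

import QtQuick 2.0

Item {
    id: root

    // Fired only after the whole tree below root is completed.
    Component.onCompleted: console.log("root completed")

    Item {
        id: child
        // May run before or after root's handler; the relative order
        // of nested completed handlers is not guaranteed.
        Component.onCompleted: console.log("child completed")
    }
}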

push object originally on the stack to a vector, will the objects get lost?

I just started using the STL. Say I have a rabbit class, and now I'm creating a rabbit army...
#include <vector>
std::vector<rabbit> rabbitArmy(numOfRabbits, rabbit());
//Q1: these rabbits are on the heap right?
rabbit* rabbitOnHeap = new rabbit();
//Q2: rabbitOnHeap is on the heap right?
rabbit rabbitOnStack;
//Q3: this rabbit is on the stack right?
rabbitArmy.push_back(rabbitOnStack);
//Q4: rabbitOnStack will remain stored on the stack?
//And it will be deleted automatically, though it's put in the rabbitArmy now?
Q4 is the one I'm most concerned about: should I always use the new keyword to add rabbits to my army?
Q5: Is there a better way to add rabbits to the army than:
rabbitArmy.push_back(*rabbitOnHeap);
Since you haven't specified otherwise, the objects you put in the vector will be allocated with std::allocator<rabbit>, which uses new. For what it's worth, that's usually called the "free store" rather than the heap.¹
Again, the usual term is the free store.
Officially, that's "automatic storage", but yes, on your typical implementation that'll be the stack, and on an implementation that doesn't support a stack in hardware, it'll still be a stack-like (LIFO) data structure of some sort.
When you add an item to a vector (or other standard container) what's actually added to the container is a copy of the item you pass as a parameter. The item you pass as a parameter remains yours to do with as you please. In the case of something with automatic storage class, it'll be destroyed when it goes out of scope -- but the copy of it in the collection will remain valid until it's erased or the collection destroyed, etc.
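For instance, a small self-contained sketch of that copy behavior (rabbit here is a minimal stand-in):

#include <iostream>
#include <vector>

struct rabbit { int id = 0; };

int main() {
    std::vector<rabbit> army;

    rabbit scout;            // automatic storage ("the stack")
    scout.id = 1;
    army.push_back(scout);   // the vector stores a copy

    scout.id = 99;           // changing the original...
    std::cout << army[0].id << '\n'; // ...prints 1: the copy is independent
}                            // scout is destroyed here; the copy in army
                             // lives on in dynamically allocated storage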
No. In fact, you should only rarely use new to allocate items you're going to put in a standard collection. Since the item in the vector will be a copy of what you pass, you don't normally need to use new to allocate it.
Usually you just push back a local object.
For example:
for (int i = 0; i < 10; i++)
    rabbitArmy.push_back(rabbit());
This creates 10 temporary rabbit objects (the rabbit() part) and adds a copy of each to the rabbitArmy. Each temporary is then destroyed, but the copies of them in the rabbitArmy remain.
¹ In typical usage, "the heap" refers to memory managed by calloc, malloc, realloc, and free. What new and delete manage is the free store. A new expression, in turn, obtains memory from an operator new (either global or inside a class). operator new and operator delete are specified so they could be almost a direct pass-through to malloc and free respectively, but even when that's the case, the heap and the free store are normally thought of separately.

Finalisers in Adobe Flex 3

Using Adobe Flex 3, is there any way to specify a finaliser?
There is no concept of a finaliser/destructor in ActionScript 3, even at the AVM/bytecode level.
Even though there isn't such a thing as a destructor/finalizer in ActionScript per se, I would consider it good practice to have a method that frees all the resources held by your class when you no longer need them.
Garbage collection only picks up objects that are no longer referenced anywhere, and it uses reference counting to determine when this is the case. So as long as there are unremoved event listeners, circular dependencies (objects referencing each other), and the like, you may not notice it, but your memory usage will keep increasing, and the GC never frees up those resources at all.
Therefore, you should have a destroy() or finalize() method that:
removes all event listeners
calls destroy() or finalize() on nested objects
deletes all strong object keys in dictionaries
sets all object-type variables to null (it's okay for primitive values not to be reset)
For display objects, it is usually not a bad idea to call this method when Event.REMOVED_FROM_STAGE is dispatched.
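A sketch of what such a method might look like (the member names onEnterFrame, child, and data are hypothetical):

public function destroy():void {
    // remove all event listeners this object added
    removeEventListener(Event.ENTER_FRAME, onEnterFrame);

    // tear down nested objects first
    if (child) {
        child.destroy();
        child = null;
    }

    // null out object-type members so the GC can reclaim them;
    // primitive members (int, Number, Boolean) can be left alone
    data = null;
}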

Why are weak pointers useful?

I've been reading up on garbage collection looking for features to include in my programming language and I came across "weak pointers". From here:
Weak pointers are like pointers, except that references from weak pointers do not prevent garbage collection, and weak pointers must have their validity checked before they are used.
Weak pointers interact with the garbage collector because the memory to which they refer may in fact still be valid, but containing a different object than it did when the weak pointer was created. Thus, whenever a garbage collector recycles memory, it must check to see if there are any weak pointers referring to it, and mark them as invalid (this need not be implemented in such a naive way).
I've never heard of weak pointers before. I would like to support many features in my language, but in this case I cannot for the life of me think of a case where this would be useful. What would one use weak pointers for?
A really big one is caching. Let's think through how a cache would work:
The idea behind a cache is to store objects in memory until memory pressure becomes so great that some of the objects need to be pushed out (or are explicitly invalidated of course). So your cache repository object must hold on to these objects somehow. By holding onto them via weak reference, when the garbage collector goes looking for things to consume because memory is low, the items referred to only by weak reference will appear as candidates for garbage collection. Items in the cache that are currently being used by other code will have hard references still active, so those items will be protected from garbage collection.
In most situations you won't be rolling your own caching mechanism, but it is common to use a cache. Let's suppose you want to have a property which refers to an object in cache, and that property stays in scope for a long time. You would prefer to fetch the object from cache, but if it's not available, you can get it from persisted storage. You also don't want to force that particular object to stay in memory if pressure gets too high. So you can use a weak reference to that object, which will allow you to fetch it if it is available but also allow it to fall out of cache.
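In C++ terms, a minimal sketch of this pattern using std::weak_ptr (reference counting rather than a tracing GC, but the observable behavior is analogous):

#include <iostream>
#include <memory>

int main() {
    std::weak_ptr<int> cached;                 // the cache's non-owning handle

    {
        auto item = std::make_shared<int>(42); // "hard" reference held by a user
        cached = item;                         // the cache observes without owning
        if (auto hit = cached.lock())          // still alive: cache hit
            std::cout << "hit: " << *hit << '\n';
    }                                          // last strong reference released

    if (cached.expired())                      // object is gone
        std::cout << "miss: refetch from persisted storage\n";
}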
A typical use case is storage of additional object attributes. Suppose you have a class with a fixed set of members, and, from the outside, you want to add more members. So you create a dictionary object -> attributes, where the keys are weak references. Then, the dictionary doesn't prevent the keys from being garbage collected; removal of the object should also trigger removal of the values in the WeakKeyDictionary (e.g. by means of a callback).
If your language's garbage collector is incapable of collecting circular data structures, then you can use weak references to enable it to do so. Normally, if you have two objects which have references to each other, but no other outside object has a reference to those two, they would be candidates for garbage collection. But, a naïve garbage collector wouldn't collect them, since they contain references to each other.
To fix this, you make it so one object has a strong reference to the second, but the second has a weak reference to the first. Then, when the last outside reference to the first object goes away, the first object becomes a candidate for garbage collection, followed shortly thereafter by the second, since now its only reference is weak.
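For example, a minimal C++ sketch of that fix with std::shared_ptr/std::weak_ptr (reference counted rather than garbage collected, but the cycle problem and its fix are the same):

#include <memory>

struct Node {
    std::shared_ptr<Node> next; // strong: keeps the other node alive
    std::weak_ptr<Node>   prev; // weak: observes without owning
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b; // a strongly owns b
    b->prev = a; // b only observes a

    // When a and b go out of scope, a's count reaches zero, a is
    // destroyed, and that releases b. With two strong pointers the
    // pair would keep each other alive and leak.
}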
Another example... not quite caching, but similar: Suppose an I/O library provides an object which wraps a file descriptor and permits access to the file. When the object is collected, the file descriptor is closed. It is desired to be able to list all currently opened files. If you use strong pointers for this list, then files are never closed.
Use them when you want to keep a cached list of objects without preventing those objects from being garbage collected once the "real" owner of each object is done with it.
A web browser might have a history object that keeps references to image objects that the browser loaded elsewhere and saved in the history/disk cache. The web browser might expire one of those images (user cleared the cache, the cache timeout elapsed, etc.), but the page would still have the reference/pointer. If the page used a weak reference/pointer, the object would go away as expected and the memory would be garbage collected.
One important reason for having weak references is to deal with the possibility that an object may serve as a pipeline to connect a source of information or events to one or more listeners. If there aren't any listeners, there's no reason to keep sending information to the pipeline.
Consider, for example, an enumerable collection that allows updates during enumeration. The collection may need to notify any active enumerators that it has changed, so those enumerators can adjust themselves accordingly. If some enumerators are abandoned by their creators but the collection holds strong references to them, those enumerators will continue to exist (and process update notifications) for as long as the collection exists. If the collection itself exists for the lifetime of the application, those enumerators effectively become a permanent memory leak.
If the collection holds weak references to the enumerators, this problem can be largely solved. If an enumerator is abandoned, it will be eligible for garbage collection, even though the collection still holds a weak reference to it. The next time the collection is changed, it can look through its list of weak references, send updates to the ones that are still valid, and remove from its list the ones that are not.
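A minimal sketch of that pruning pattern, again in C++ with std::weak_ptr (the Listener interface is hypothetical):

#include <memory>
#include <vector>

struct Listener {
    virtual void notify() = 0;
    virtual ~Listener() = default;
};

struct Collection {
    std::vector<std::weak_ptr<Listener>> listeners;

    void changed() {
        for (auto it = listeners.begin(); it != listeners.end(); ) {
            if (auto alive = it->lock()) {
                alive->notify();          // still in use: send the update
                ++it;
            } else {
                it = listeners.erase(it); // abandoned: drop the dead entry
            }
        }
    }
};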
It would be possible to achieve many of the effects of weak references using finalizers along with some extra objects, and it's possible to make such implementations more efficient than those using weak references, but there are many pitfalls and it's hard to avoid bugs. It's much easier to build a correct solution using WeakReference. The approach may not be optimally efficient, but it won't fail badly.
Weak Pointers keep whatever holds them from becoming a form of "life support" for the object the pointer points to.
Say you had a Viewport class, two UI classes, and a bunch of Widget classes. You want your UI to control the lifespan of the Widgets it creates, so your UI keeps SharedPtrs to all the Widgets it controls. For as long as your UI object is alive, none of the Widgets it references will be garbage collected (thanks to the SharedPtrs).
However, the Viewport is the class that actually does the drawing, so your UI needs to pass the Viewport a pointer to the Widgets so that it can draw them. Now suppose that, for whatever reason, you want to switch your active UI class to the other one. Let's consider two scenarios: one where the UI passed the Viewport WeakPtrs and one where it passed SharedPtrs (pointing to the Widgets).
If you had passed the Viewport all the Widgets as WeakPtrs, then as soon as the UI class was deleted there would be no more SharedPtrs to the Widgets, so they would be garbage collected. The Viewport's references to the objects wouldn't keep them on "life support", which is exactly what you want, because you aren't even using that UI anymore, much less the Widgets it created.
Now, consider that you had passed the Viewport SharedPtrs instead: you delete the UI, and the Widgets are NOT garbage collected! Why? Because the Viewport, which is still alive, has an array (vector, list, whatever) full of SharedPtrs to the Widgets. The Viewport has in effect become a form of "life support" for them, even though you deleted the UI that was controlling those Widgets in favor of another UI object.
Generally, a language/system/framework will garbage collect anything unless there is a "strong" reference to it somewhere in memory. Imagine if everything had a strong reference to everything else: nothing would ever get garbage collected! Sometimes you want that behavior, and sometimes you don't. If you use a WeakPtr, and there are no Shared/StrongPtrs left pointing at the object (only WeakPtrs), then the object will be garbage collected despite the WeakPtr references, and the WeakPtrs should be set to NULL (or otherwise invalidated).
Again, when you use a WeakPtr, you're letting the object you give it to access the data, but the WeakPtr won't prevent garbage collection of the object it points to the way a SharedPtr would. When you think SharedPtr, think "life support"; when you think WeakPtr, think NO "life support". Garbage collection won't (generally) occur until the object has zero life support.
Weak references can for example be used in caching scenarios - you can access data through weak references, but if you don't access the data for a long time or there is high memory pressure, the GC can free it.
The reason for garbage collection at all is that in a language like C, where memory management is totally under the explicit control of the programmer, avoiding memory leaks and dangling pointers becomes very hard when object ownership is passed around, especially between threads or, even harder, between processes sharing memory. As if that weren't hard enough, you also have to deal with needing access to more objects than will fit in memory at one time: you need a way to free up some objects for a while so that other objects can be in memory.
So, some languages (e.g., Perl, Lisp, Java) provide a mechanism where you can just stop "using" an object, and the garbage collector will eventually discover this and free up the memory used for the object. It does this correctly without the programmer worrying about all the ways to get it wrong (albeit there are plenty of ways programmers can still screw it up).
If you conceptually multiply the number of times you access an object by the time it takes to compute its value, and perhaps weigh in the cost of not having the object readily available, or its size (since keeping one large object in memory can prevent keeping several smaller objects around), you can classify objects into three categories.
Some objects are so important that you want to manage their existence explicitly: they will not be managed by the garbage collector, or they must never be collected until explicitly freed. Other objects are cheap to compute, are small, are not accessed frequently, or have similar characteristics that allow them to be garbage collected at any time.
The third class consists of objects that are expensive to recompute but could be recomputed: they are accessed somewhat frequently (perhaps in short bursts of time), are large, and so on. You'd like to keep them in memory as long as possible, because they might be reused, but you don't want to run out of memory needed for critical objects. These are candidates for weak references.
You want these objects kept around as long as possible if they aren't conflicting with critical resources, but they should be dropped when memory is needed for a critical resource, since they can be recomputed when needed again. That is what weak pointers are for.
An example of this might be pictures. Say you have a photo web page with thousands of pictures to display. You need to know how many pictures to lay out, and maybe you have to do a database query to get the list. The memory to hold a list of a few thousand items is probably very small. You want to do the query once and keep the list around.
You can only physically show perhaps a few dozen pictures at a time, though, in a pane of a web page. You don't need to fetch the bits for the pictures the user can't be looking at. When the user scrolls the page, you fetch the actual bits for the pictures that become visible. Those pictures could require many megabytes. If the user scrolls back and forth between a few scroll positions, you'd like not to refetch those megabytes over and over again. But you can't keep all the pictures in memory all the time. So you use weak pointers.
If the user just looks at a few pictures over and over again, they may stay in cache, and you don't have to refetch them. But if the user scrolls enough, you need to free up some memory so the visible pictures can be fetched. With a weak reference, you check the reference just before you use it. If it's still valid, you use it. If it's not, you make the expensive calculation (the fetch) to get it again.
