How does QVariant work internally? (Qt)

I want to know how QVariant internally stores an int, a QMap, a QList, and so on.
I mean, what is the internal data structure/implementation? What is the overhead of storing and retrieving types (int, float) in a QVariant?

A quick look at the code reveals that a QVariant is basically a union of several primitive types (int, float, etc.), a QObject pointer, and a void* pointer for anything else that is neither a QObject nor a primitive. There is also a type member that records what is currently stored there. The overhead appears to be little more than storing to a member of a struct, checking the type for compatibility, and possibly making a conversion (int to float, for instance).
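To illustrate the idea, here is a minimal sketch of the tagged-union approach; this is illustrative only, not Qt's actual implementation:

#include <iostream>

struct MiniVariant {
    enum class Type { Empty, Int, Float, Pointer };
    Type type = Type::Empty;         // records which member is active

    union Data {
        int   i;
        float f;
        void *p;                     // anything else (QVariant also has a QObject* case)
    } data;

    MiniVariant() {}
    MiniVariant(int v)   : type(Type::Int)   { data.i = v; }
    MiniVariant(float v) : type(Type::Float) { data.f = v; }

    // Retrieval checks the tag and converts if necessary,
    // which is essentially the overhead described above.
    float toFloat() const {
        switch (type) {
            case Type::Int:   return static_cast<float>(data.i);
            case Type::Float: return data.f;
            default:          return 0.0f;
        }
    }
};

int main() {
    MiniVariant v(42);
    std::cout << v.toFloat() << "\n";   // prints 42
}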

Related

Is sendmsg allowed to write to the buffers passed to it?

sendmsg takes a const struct msghdr *, so it can't write to the struct msghdr itself. However, struct msghdr contains a non-const pointer to struct iovec, so in principle sendmsg could write to the struct iovec and to the buffers those iovecs point to.
It seems like sendmsg never writes to these data buffers, but is this guaranteed anywhere?
Additionally, can the ancillary data (control data) be written to during sendmsg? It seems like SCM_RIGHTS and SCM_CREDENTIALS don't write to the control data during sendmsg, but could there be other ancillary message types that do?
Background: Nightly Rust has send_vectored_with_ancillary_to, a safe abstraction for sendmsg. Rust distinguishes between mutable (non-const) and immutable (const) data. Currently it assumes that the sendmsg data is mutable, but I think these buffers should be immutable instead.
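For reference, the POSIX declarations look roughly like this (exact field types differ slightly between Linux and the POSIX spec):

struct iovec {
    void   *iov_base;    /* non-const: points at the data buffer */
    size_t  iov_len;
};

struct msghdr {
    void          *msg_name;        /* optional destination address */
    socklen_t      msg_namelen;
    struct iovec  *msg_iov;         /* non-const pointer to non-const iovecs */
    int            msg_iovlen;      /* size_t on Linux */
    void          *msg_control;     /* ancillary (control) data, also non-const */
    socklen_t      msg_controllen;  /* size_t on Linux */
    int            msg_flags;
};

ssize_t sendmsg(int sockfd, const struct msghdr *msg, int flags);

So the const only covers the top-level msghdr; nothing in the type system forbids the implementation from writing through msg_iov or msg_control, which is why the question is about documented guarantees rather than the signature.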

In Crystal FFI, how do I access a type in the C library?

The task I'm working on is to add support for the create_function interface to Crystal's SQLite binding: https://github.com/crystal-lang/crystal-sqlite3/issues/61
To access the parameters for a user-defined function, I need to access a C-style array (that is, a pointer to contiguous instances) of the sqlite3_value type, which if I'm not mistaken requires knowing the size of the type. But as far as I have found, there is no way to declare a Crystal type as an alias for a type defined in the C library.
Because it's a pointer, no, you don't necessarily need to know its layout. For opaque pointers this pattern is common in Crystal:
type Sqlite3Context = Void*
type Sqlite3Value = Void*

fun sqlite3_create_function(
  [...]
  xFunc : (Sqlite3Context, LibC::Int, Sqlite3Value*) ->,
  [...]
)

What's the point of unique_ptr?

Isn't a unique_ptr essentially the same as a direct instance of the object? I mean, there are a few differences with dynamic inheritance, and performance, but is that all unique_ptr does?
Consider this code to see what I mean. Isn't this:
#include <iostream>
#include <memory>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    unique_ptr<int> a(new int());
    print(*a);
    return 0;
}
Almost exactly the same as this:
#include <iostream>
#include <memory>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    int a{};
    print(a);
    return 0;
}
Or am I misunderstanding what unique_ptr should be used for?
In addition to the cases mentioned by Chris Pitman, one more case where you will want to use std::unique_ptr is when you instantiate sufficiently large objects: it then makes sense to allocate them on the heap rather than on the stack. The stack size is not unlimited, and sooner or later you might run into a stack overflow. That is where std::unique_ptr is useful.
The purpose of std::unique_ptr is to provide automatic, exception-safe deallocation of dynamically allocated memory (unlike a raw pointer, which must be explicitly deleted and is easy to leak when exceptions are thrown).
Your question, though, is more about the value of pointers in general than about std::unique_ptr specifically. For simple built-in types like int, there is generally very little reason to use a pointer rather than simply passing or storing the object by value. However, there are three cases where pointers are necessary or useful:
Representing a separate "not set" or "invalid" value.
Allowing modification.
Allowing for different polymorphic runtime types.
Invalid or not set
A pointer supports an additional nullptr value indicating that it has not been set. For example, if you want to support every value of a given type (say, the entire range of integers) but also represent the notion that the user never entered a value, that is a case for std::unique_ptr<int>: a null pointer means "not set", so you don't have to sacrifice a valid integer as a sentinel value denoting that it wasn't set.
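A minimal sketch of that idea (parse_optional_int is a hypothetical helper, not part of the original answer):

#include <iostream>
#include <memory>
#include <string>

// Returns nullptr when the user left the field empty, otherwise the value.
std::unique_ptr<int> parse_optional_int(const std::string& text) {
    if (text.empty())
        return nullptr;                           // "not set"
    return std::make_unique<int>(std::stoi(text));
}

int main() {
    auto value = parse_optional_int("42");
    if (value)
        std::cout << "user entered " << *value << "\n";
    else
        std::cout << "no value entered\n";
}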
Allowing modification
This can also be accomplished with references rather than pointers, but pointers are one way of doing it. If you use a regular value, you are dealing with a copy of the original, and any modifications affect only that copy. If you use a pointer or a reference, your modifications are visible to the owner of the original instance. With a unique pointer, you can additionally be assured that no one else holds the pointer, so it is safe to modify without locking.
Polymorphic types
This can likewise be done with references, not just with pointers, but there are cases where, due to the semantics of ownership or allocation, you would want to use a pointer. When it comes to user-defined types, it is possible to create a hierarchical "inheritance" relationship. If you want your code to operate on all variations of a given type, you need to use a pointer or reference to the base type. A common reason to use std::unique_ptr<> for this is when the object is constructed through a factory and the class you are defining maintains ownership of the constructed object. For example:
class Airline {
public:
    Airline(const AirplaneFactory& factory);
    // ...
private:
    // ...
    void AddAirplaneToInventory();

    // Can create many different types of airplanes, such as
    // a Boeing747 or an Airbus320.
    const AirplaneFactory& airplane_factory_;
    std::vector<std::unique_ptr<Airplane>> airplanes_;
};

// ...

void Airline::AddAirplaneToInventory() {
    airplanes_.push_back(airplane_factory_.Create());
}
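For the snippet above to compile, the factory presumably hands ownership back via std::unique_ptr; a minimal sketch of what such an interface could look like (the exact signatures are an assumption, not part of the original answer):

#include <memory>

class Airplane {
public:
    virtual ~Airplane() = default;
};

class Boeing747 : public Airplane { /* ... */ };

class AirplaneFactory {
public:
    virtual ~AirplaneFactory() = default;
    // Ownership of the newly constructed airplane transfers to the caller,
    // which is what lets Airline push it straight into its vector.
    virtual std::unique_ptr<Airplane> Create() const = 0;
};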
As you mentioned, virtual classes are one use case. Beyond that, here are two others:
Optional instances of objects. My class may delay instantiating an instance of the object. To do so, I need to use memory allocation but still want the benefits of RAII.
Integrating with C libraries or other libraries that love returning naked pointers. For example, OpenSSL returns pointers from many (poorly documented) functions, some of which you need to clean up. Having a non-copyable pointer container is perfect for this case, since I can protect it as soon as it is returned.
A unique_ptr functions the same as a normal pointer, except that you do not have to remember to delete it (in fact, it is simply a wrapper around a pointer). After you allocate the memory, you do not have to call delete on the pointer, since the destructor of unique_ptr takes care of that for you.
Two things come to my mind:
You can use it as a generic exception-safe RAII wrapper. Any resource that has a "close" function can easily be wrapped with unique_ptr by using a custom deleter (see the sketch below).
There are also times when you have to move a pointer around without knowing its lifetime explicitly. If the only constraint you know of is uniqueness, then unique_ptr is an easy solution. You could almost always do manual memory management in that case as well, but it is not automatically exception safe and you could forget to delete. Or the point in your code where you have to delete could change. The unique_ptr solution is usually more maintainable.
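A minimal sketch of the custom-deleter case, using C's FILE*/fclose as the "close"-style resource (the file name is made up; the same pattern applies to any C handle with a matching close/free function):

#include <cstdio>
#include <memory>

// Deleter that calls the C library's "close" function for the resource.
struct FileCloser {
    void operator()(std::FILE* f) const {
        if (f)
            std::fclose(f);
    }
};

using FilePtr = std::unique_ptr<std::FILE, FileCloser>;

int main() {
    // The file is closed automatically when `file` goes out of scope,
    // even if an exception is thrown in between.
    FilePtr file(std::fopen("example.txt", "r"));
    if (!file)
        return 1;   // fopen failed; nothing to clean up

    char buffer[256];
    if (std::fgets(buffer, sizeof(buffer), file.get()))
        std::puts(buffer);
    return 0;
}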

When should I use MPI_Datatype instead of serializing manually?

It all began when I needed to MPI_Bcast a 64-bit integer. Since MPI does not know how to handle it, I did:
template<typename T>
inline int BcastObjects(T* pointer,
                        int count,
                        int root,
                        MPI_Comm comm)
{
    return MPI_Bcast(pointer,
                     count * sizeof(*pointer),
                     MPI_BYTE,
                     root,
                     comm);
}
Now I can do:
int64_t i = 0;
BcastObjects(&i, 1, root_rank, some_communicator);
Then I started to use BcastObjects to send over an array of structures. I wonder if it's OK to do that?
The manuals about MPI_Datatype focus on how to do it, but not on why I would want to do it.
Why not just use MPI_INT64_T?
You can always mock up your own datatypes with MPI_BYTE or what have you; the datatype machinery is there so that you don't have to. And in many cases it's much easier: if you want to send data that has "holes" in it (e.g., a slice of a multidimensional array, or data in a structure that has gaps), you can map that out fairly straightforwardly with a datatype, whereas you'd have to manually count out byte strings and use something like MPI_Pack otherwise. And of course describing the data at a higher level is certainly less brittle if something in your data structure changes.
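As a concrete illustration of the "data with holes" point, here is a minimal sketch that broadcasts one column of a row-major 2D array with MPI_Type_vector instead of packing bytes by hand (the dimensions and the choice of column are made up):

#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    const int rows = 4, cols = 5;
    double grid[rows][cols] = {};          // row-major storage

    // One element per row, with a stride of `cols` elements between them.
    MPI_Datatype column_type;
    MPI_Type_vector(rows, 1, cols, MPI_DOUBLE, &column_type);
    MPI_Type_commit(&column_type);

    // Broadcast the third column (index 2) without any manual packing.
    MPI_Bcast(&grid[0][2], 1, column_type, 0, MPI_COMM_WORLD);

    MPI_Type_free(&column_type);
    MPI_Finalize();
}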

CUDA/C++: Passing __device__ pointers in C++ code

I am developing a Windows 64-bit application that will manage concurrent execution of different CUDA algorithms on several GPUs.
My design requires a way of passing pointers to device memory around C++ code (e.g. storing them as members of my C++ objects).
I know that it is impossible to declare class members with __device__ qualifiers.
However, I couldn't find a definite answer as to whether assigning a __device__ pointer to a normal C pointer and then using the latter works. In other words: is the following code valid?
__device__ float *ptr;
cudaMalloc(&ptr, size);
float *ptr2 = ptr;
some_kernel<<<1,1>>>(ptr2);
For me it compiled and behaved correctly, but I would like to know whether it is guaranteed to be correct.
No, that code isn't strictly valid. While it might work on the host side (more or less by accident), if you tried to dereference ptr directly from device code, you would find it would have an invalid value.
The correct way to do what your code implies would be like this:
__device__ float *ptr;

__global__ void some_kernel()
{
    float val = ptr[threadIdx.x];
    ....
}

float *ptr2;
cudaMalloc(&ptr2, size);
cudaMemcpyToSymbol("ptr", ptr2, sizeof(float *));
some_kernel<<<1,1>>>();
For CUDA 4.x or newer, change the cudaMemcpyToSymbol call to:
cudaMemcpyToSymbol(ptr, ptr2, sizeof(float *));
If the static device symbol ptr is really superfluous, you can just do something like this:
float *ptr2;
cudaMalloc(&ptr2, size);
some_kernel<<<1,1>>>(ptr2);
But I suspect that what you are probably looking for is something like the Thrust library's device_ptr class, which is a nice abstraction wrapping the naked device pointer and makes it absolutely clear in code what is in device memory and what is in host memory.
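A minimal sketch of that approach, assuming the Thrust headers that ship with the CUDA toolkit (the buffer size is made up):

#include <thrust/device_ptr.h>
#include <thrust/fill.h>
#include <cuda_runtime.h>

int main() {
    const size_t n = 1024;
    float* raw = nullptr;
    cudaMalloc(&raw, n * sizeof(float));

    // Wrapping the raw allocation makes "this lives on the device" explicit
    // in host code and lets Thrust algorithms operate on it directly.
    thrust::device_ptr<float> dev = thrust::device_pointer_cast(raw);
    thrust::fill(dev, dev + n, 1.0f);

    // The raw pointer is still available for kernel launches via dev.get().
    cudaFree(raw);
    return 0;
}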
