As described in this MSDN article, Microsoft provides these two type annotations for declaring native pointers on different architectures. However, the article goes on to say:
On a 32-bit system, a pointer declared with __ptr64 is truncated to a 32-bit pointer. On a 64-bit system, a pointer declared with __ptr32 is coerced to a 64-bit pointer.
This sounds to me like the declaration doesn't matter: if the architecture overrides a __ptrXX declaration to its default anyway, what's the point of marking a pointer __ptrXX in the first place?
I see that this answer says that it's for interop, but if the declarations are essentially overridden as above, how does that help with interop?
There's a big difference between declaring and assigning a 32-bit pointer and actually using it, in other words dereferencing it. If you do that in a 64-bit process then there is no other option but to sign-extend it to a 64-bit pointer, which is what "coerced" means. That may work by accident, but you'd have to be pretty lucky; it just doesn't make sense to try.
The point of declaring a __ptr32 is as described in the linked answer: it only makes sense when you interop with a 32-bit process, which uses 32-bit pointers. That is not common.
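Here is a minimal sketch of the interop case. The struct and field names are made up; the point is only that __ptr32 keeps the field 4 bytes wide even in a 64-bit build, so a 32-bit and a 64-bit process agree on the layout:

// Shared with a 32-bit process, e.g. through a memory-mapped file.
struct SharedBlock {
    int * __ptr32 data;   // stored as 32 bits in both builds
    unsigned int  length; // element count
};

The 64-bit side can store and pass such a value around freely, but per the answer above it should only dereference it if the pointed-to memory is actually mapped into its own address space.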
In the example below, why should we use int32_t instead of uint32_t? (The platform is a 32-bit ARM microcontroller.)
struct tcb {
    int32_t *stackPt;    // saved stack pointer for this thread
    struct tcb *nextPt;  // next thread control block in the list
};
It's part of an RTOS tutorial, and tcb stands for thread control block.
Why should we use int32_t* for the stack?
There is no particular reason why you should use a pointer to signed rather than unsigned.
You are probably never going to dereference this pointer directly to access the words on the stack. If you do want to access the data on the stack, some of it will be signed, some unsigned, and some neither (strings, etc.), so the pointer type will not help you with that.
When you want to pass around a pointer but never dereference it then one convention is to use a pointer to void, but that convention isn't so popular in embedded system code.
One reason to use a pointer to a 32-bit integer is to suggest that the pointer is at least word-aligned. If you intend to comply with the ARM EABI (which you should), then the stack must be doubleword (64-bit) aligned at the entry to every EABI-compliant function. To hint that this is the case, you might even want to use a (u)int64_t pointer. That could be misleading, though, because not everything on the stack is 64-bit or 32-bit aligned, just the whole frames.
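As an illustration, here is a minimal sketch of a stack reservation that satisfies that alignment requirement. The names and sizes are invented; the alignas(8) is the point:

#include <cstdint>

constexpr int NUM_THREADS = 3;   // hypothetical
constexpr int STACK_WORDS = 100; // hypothetical

// Doubleword-aligned backing storage, per the EABI requirement.
alignas(8) static int32_t stacks[NUM_THREADS][STACK_WORDS];

// Initial stack pointer for thread i: top of a full-descending stack,
// leaving room for the initial context frame.
int32_t *initialStackPt(int i) {
    return &stacks[i][STACK_WORDS - 16];
}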
I'm learning V8 internals now. I learned that V8 uses pointer tagging to store values, but I wondered why it does not use NaN boxing.
AFAIK, NaN boxing is better because it can also store doubles, not just SMIs. I've read this, and I understand (if it's true) why NaN boxing isn't used on 32-bit platforms. But on 64-bit platforms I don't see why not.
I suspect the reason has something to do with SMIs. Maybe they can't be stored using NaN boxing? I think they can: we have 52 spare payload bits for them (we could even use more than 32 bits). Maybe it would require additional masking operations that make integer math slower? But we already need to do a bitwise shift!
I don't know why. Thanks to anyone willing to answer.
(V8 developer here.) NaN boxing and pointer tagging are design choices with different tradeoffs; neither is strictly better than the other. V8's decision to use pointer tagging was made long before I joined the project, so I can only speculate what the specific reason(s) might have been at the time.
Advantages of pointer tagging are:
significantly less memory consumption (certainly on 32-bit platforms; with "pointer compression" on 64-bit platforms too)
slightly more efficient (small) integer operations, because most CPUs' integer operations are faster than their double operations. This may not matter at all once an optimizing compiler enters the picture.
slightly more efficient pointer operations, because you can simply add an adjusted offset when accessing object fields (which has the same performance as not playing any pointer tricks at all), as opposed to having to mask off irrelevant parts of a NaN. This may not matter at all once an optimizing compiler enters the picture.
As you point out, the main benefit of NaN boxing is that it supports the full double range, which is very nice in some situations. You can build a well-performing engine based on either technique.
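For illustration, here is a rough sketch of the two encodings on a 64-bit machine. This is not V8's actual code; the constants and helper names are made up:

#include <cstdint>
#include <cstring>

// Pointer tagging: heap objects are at least 2-byte aligned, so the low
// bit of a real address is always 0 and can serve as the tag.
// Low bit 0 = small integer (stored shifted), low bit 1 = heap pointer.
inline uint64_t tagSmi(int32_t v)    { return (uint64_t)(int64_t)v << 1; }
inline int32_t  untagSmi(uint64_t t) { return (int32_t)((int64_t)t >> 1); }
inline bool     isSmi(uint64_t t)    { return (t & 1) == 0; }

// NaN boxing: real doubles are stored as their own bit pattern; every
// other value lives in the payload of one chosen quiet-NaN pattern.
// (Engines canonicalize NaNs produced by arithmetic so they can never
// collide with boxed values.)
constexpr uint64_t kBoxTag  = 0xFFF8000000000000ull;
constexpr uint64_t kPayload = 0x0000FFFFFFFFFFFFull; // assumes 48-bit addresses

inline uint64_t boxPointer(void *p) {
    return kBoxTag | ((uint64_t)(uintptr_t)p & kPayload);
}
inline void *unboxPointer(uint64_t bits) {
    return (void *)(uintptr_t)(bits & kPayload); // the extra masking step
}
inline uint64_t boxDouble(double d) {
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    return bits;                                 // doubles need no tag
}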
Is there any reason to use Qt standard function wrappers like qstrncpy instead of strncpy?
I could not find any hint in the documentation, and I'm curious whether there is any functional difference. It seems to make code dependent on Qt even in places where that isn't necessary.
I found this: Qt wrapper for C libraries
But it doesn't answer my question.
These methods are part of Qt's efforts for platform-independence. Qt tries to hide platform differences and use the best each platform has to offer, replicating that functionality on platforms where it is not available. Here is what the documentation of qstrncpy has to say:
A safe strncpy() function.
Copies at most len bytes from src (stopping at len or the terminating '\0' whichever comes first) into dst and returns a pointer to dst. Guarantees that dst is '\0'-terminated. If src or dst is nullptr, returns nullptr immediately.
[…]
Note: When compiling with Visual C++ compiler version 14.00 (Visual C++ 2005) or later, internally the function strncpy_s will be used.
So qstrncpy is safer than strncpy.
The Qt wrappers for these functions are safer than the standard ones because they guarantee the destination string will always be null-terminated. strncpy() does not guarantee this.
In C11, strncpy_s() and the other _s()-suffixed functions were added (in the optional Annex K) as safer string functions. However, they are not available in any C++ standard; they are C-only. The Qt wrappers fix this.
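A tiny demonstration of the difference (a sketch; it needs to be built against QtCore):

#include <QByteArray> // declares qstrncpy
#include <cstdio>
#include <cstring>

int main() {
    char a[4];
    char b[4];

    std::strncpy(a, "abcdef", sizeof a); // fills all 4 bytes, no '\0'!
    qstrncpy(b, "abcdef", sizeof b);     // copies "abc" plus '\0'

    // printf("%s", a) would be undefined behaviour here; b, in
    // contrast, is guaranteed to be terminated.
    std::printf("%s\n", b);              // prints "abc"
}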
Do you know if there are pointers in Haskell?
If yes: how do you use them? Are there any problems with them? And why aren't they popular?
If no: is there any reason for it?
Yes, there are. Take a look at Foreign.Ptr or Data.IORef.
I suspect this isn't what you were asking about, though. Since Haskell is for the most part without state, pointers don't fit into the language design. Having a pointer to memory outside a function would mean that the function is no longer pure, and only allowing pointers to values within the current function would be useless.
Haskell does provide pointers, via the foreign function interface extension. Look at, for example, Foreign.Storable.
Pointers are used for interoperating with C code, not for everyday Haskell programming.
If you're looking for references -- pointers to objects you wish to mutate -- there are STRef and IORef, which serve many of the same uses as pointers. However, you should rarely -- if ever -- need Refs.
If you simply wish to avoid copying large values, as sepp2k supposes, then you need do nothing: in most implementations, all non-trivial values are allocated separately on the heap and refer to one another by machine-level addresses (i.e. pointers). But again, you need do nothing about any of this; it is taken care of for you.
To answer your question about how values are passed: they are passed in whatever way the implementation sees fit. Since you can't mutate the values anyway, it doesn't affect the meaning of the code (as long as strictness is respected); usually this works out to pass-by-need, unless you're passing in e.g. Int values that the compiler can see have already been evaluated.
Pass-by-need is like pass-by-reference, except that any given reference could refer either to an actual evaluated value (which cannot be changed), or to a "thunk" for a not-yet-evaluated value. Wikipedia has more.
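If it helps intuition, here is a rough C++ analogy of a thunk. It is purely illustrative; GHC's actual representation is very different:

#include <functional>
#include <optional>
#include <utility>

// A value that is computed the first time it is forced, then memoized:
// the essence of pass-by-need.
template <typename T>
class Thunk {
    std::function<T()> compute;
    std::optional<T> cached;
public:
    explicit Thunk(std::function<T()> f) : compute(std::move(f)) {}
    const T &force() {
        if (!cached) cached = compute(); // evaluated at most once
        return *cached;
    }
};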
I recently read a discussion about whether managed languages are slower (or faster) than native languages (specifically C# vs C++). One person who contributed to the discussion said that the JIT compilers of managed languages can make optimizations regarding references that simply aren't possible in languages that use pointers.
What I'd like to know is what kind of optimizations that are possible on references and not on pointers?
Note that the discussion was about execution speed, not memory usage.
In C++ there are two advantages of references related to optimization aspects:
A reference is constant (refers to the same variable for its whole lifetime)
Because of this it is easier for the compiler to infer which names refer to the same underlying variables - thus creating optimization opportunities. There is no guarantee that the compiler will do better with references, but it might...
A reference is assumed to refer to something (there is no null reference)
A reference that "refers to nothing" (the equivalent of a NULL pointer) can be created, but not as easily as a NULL pointer. Because of this, the NULL check on a reference can be omitted.
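A minimal sketch of what that buys the optimizer (hypothetical functions):

int read_ptr(const int *p) {
    return p != nullptr ? *p : 0; // defensive null check costs a branch
}

int read_ref(const int &r) {
    return r; // no check needed: a valid program never binds r to "nothing"
}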
However, none of these advantages carry over directly to managed languages, so I don't see the relevance of that in the context of your discussion topic.
There are some benefits of JIT compilation mentioned in Wikipedia:
JIT code generally offers far better performance than interpreters. In addition, it can in some or many cases offer better performance than static compilation, as many optimizations are only feasible at run-time:
The compilation can be optimized to the targeted CPU and the operating system model where the application runs. For example JIT can choose SSE2 CPU instructions when it detects that the CPU supports them. With a static compiler one must write two versions of the code, possibly using inline assembly.
The system is able to collect statistics about how the program is actually running in the environment it is in, and it can rearrange and recompile for optimum performance. However, some static compilers can also take profile information as input.
The system can do global code optimizations (e.g. inlining of library functions) without losing the advantages of dynamic linking and without the overheads inherent to static compilers and linkers. Specifically, when doing global inline substitutions, a static compiler must insert run-time checks and ensure that a virtual call would occur if the actual class of the object overrides the inlined method.
Although this is possible with statically compiled garbage collected languages, a bytecode system can more easily rearrange memory for better cache utilization.
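To make the quoted inlining point concrete, here is a hypothetical, hand-written version of a guarded inline substitution. A real JIT emits a much cheaper guard (e.g. comparing the object's hidden vtable pointer) rather than a dynamic_cast:

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.14159265 * r * r; }
};

double totalArea(const Shape *s) {
    // Guard: profiling showed s is almost always a Circle...
    if (auto *c = dynamic_cast<const Circle *>(s))
        return 3.14159265 * c->r * c->r; // ...so the body is inlined
    return s->area();                    // fallback: normal virtual call
}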
I can't think of something related directly to the use of references instead of pointers.
Generally speaking, references make it possible to refer to the same object from different places.
A 'pointer' is the name of one mechanism for implementing references. C++, Pascal, C, and others have pointers; C++ also offers another mechanism (with slightly different use cases) called a 'reference', but essentially these are all implementations of the general referencing concept.
So there is no reason why references are by definition faster/slower than pointers.
The real difference is between using a JIT and a classic up-front compiler: the JIT can take data into account that isn't available to the up-front compiler. It has nothing to do with the implementation of the concept 'reference'.
Other answers are right.
I would only add that any optimization won't make a hoot of difference unless it is in code where the program counter actually spends much time, like in tight loops that don't contain function calls (such as comparing strings).
An object reference in a managed framework is very different from a passed reference in C++. To understand what makes them special, imagine how the following scenario would be handled, at the machine level, without garbage-collected object references: Method "Foo" returns a string, which is stored into various collections and passed to different pieces of code. Once nothing needs the string any more, it should be possible to reclaim all memory used in storing it, but it's unclear what piece of code will be the last one to use the string.
In a non-GC system, every collection either needs to have its own copy of the string, or else needs to hold something containing a pointer to a shared object which holds the characters in the string. In the latter situation, the shared object needs to somehow know when the last pointer to it gets eliminated. There are a variety of ways this can be handled, but an essential common aspect of all of them is that shared objects need to be notified when pointers to them are copied or destroyed. Such notification requires work.
In a GC system, by contrast, programs are decorated with metadata that says which registers or parts of a stack frame will be used at any given time to hold rooted object references. When a garbage-collection cycle occurs, the garbage collector has to parse this data, identify and preserve all live objects, and nuke everything else. At all other times, however, the processor can copy, replace, shuffle, or destroy references in any pattern or sequence it likes, without having to notify any of the objects involved. Note also that with pointer-use notifications on a multi-processor system, if different threads might copy or destroy references to the same object, synchronization code is required to make those notifications thread-safe. In a GC system, by contrast, each processor may change reference variables at any time without having to synchronize its actions with any other processor.
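The "notification requires work" point can be made concrete with C++'s reference-counted smart pointer (only an analogy; managed runtimes differ in many details):

#include <memory>
#include <string>

// Copying a shared_ptr "notifies" the shared control block with an
// atomic increment; destroying a copy needs an atomic decrement.
void refcounted(std::shared_ptr<std::string> s) {
    std::shared_ptr<std::string> t = s; // atomic ref-count bump
}                                       // atomic decrements for t and s

// A GC reference, by contrast, is a plain machine word: copying or
// overwriting it requires no synchronization at all.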