I want to analyze the pointer values in LLVM IR.
As illustrated in LLVM Value Class,
Value is a very important LLVM class. It is the base class of all
values computed by a program that may be used as operands to other
values. Value is the super class of other important classes such as
Instruction and Function. All Values have a Type. Type is not a
subclass of Value. Some values can have a name and they belong to some
Module. Setting the name on the Value automatically updates the
module's symbol table.
To test whether a Value is a pointer, there is a->getType()->isPointerTy(). LLVM also provides a PointerType class; however, there is no direct API to compare the values of pointers.
So I wonder how to compare these pointer values, to test whether they are equal or not. I know there is AliasAnalysis, but I have doubts about the AliasAnalysis results, so I want to validate them myself.
The quick solution is to use IRBuilder::CreatePtrDiff. This will compute the difference between the two pointers, and return an i64 result. If the pointers are equal, this will be zero, and otherwise, it will be nonzero.
It might seem excessive, seeing as CreatePtrDiff will make an extra effort to compute the result in terms of number of elements rather than number of bytes, but in all likelihood that extra division will get optimized out.
The other option is to use a ptrtoint instruction, with a reasonably large result type such as i64, and then do an integer comparison.
From the online reference:
Value * CreatePtrDiff (Value *LHS, Value *RHS, const Twine &Name="")
Return the i64 difference between two pointer values, dividing out the size of the pointed-to objects.
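As a rough illustration (not code from the original post), both approaches can be emitted with an IRBuilder positioned at the desired insertion point. The CreatePtrDiff call below follows the signature quoted above; newer LLVM releases additionally take the pointee element type as a first argument, so adjust accordingly.
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Option 1: pointer difference, then compare against zero.
Value *emitEqViaPtrDiff(IRBuilder<> &Builder, Value *LHS, Value *RHS) {
  Value *Diff = Builder.CreatePtrDiff(LHS, RHS, "ptrdiff"); // i64, in elements
  return Builder.CreateICmpEQ(Diff, Builder.getInt64(0), "ptr.eq");
}

// Option 2: ptrtoint to a wide integer type, then an integer comparison.
Value *emitEqViaPtrToInt(IRBuilder<> &Builder, Value *LHS, Value *RHS) {
  Value *L = Builder.CreatePtrToInt(LHS, Builder.getInt64Ty(), "lhs.int");
  Value *R = Builder.CreatePtrToInt(RHS, Builder.getInt64Ty(), "rhs.int");
  return Builder.CreateICmpEQ(L, R, "ptr.eq");
}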
In Practical Common Lisp, Peter Seibel writes:
The mechanism by which multiple values are returned is implementation dependent just like the mechanism for passing arguments into functions is. Almost all language constructs that return the value of some subform will "pass through" multiple values, returning all the values returned by the subform. Thus, a function that returns the result of calling VALUES or VALUES-LIST will itself return multiple values--and so will another function whose result comes from calling the first function. And so on.
The "implementation dependent" part does worry me.
My understanding is that the following code might just return the primary value:
> (defun f ()
    (values 'a 'b))
> (defun g ()
    (f))
> (g) ; ==> a ? or a b ?
If so, does it mean that I should use this feature sparingly?
Any help is appreciated.
It's implementation-dependent in the sense that how multiple values are returned at the CPU level may vary from implementation to implementation. However, the semantics are well-specified at the language level and you generally do not need to be concerned about the low-level implementation.
See section 2.5, "Function result protocol", of The Movitz development platform for an example of how one implementation handles multiple return values:
The CPU’s carry flag (i.e. the CF bit in the eflags register) is used to signal whether anything other than precisely one value is being returned. Whenever CF is set, ecx holds the number of values returned. When CF is cleared, a single value in eax is implied. A function’s primary value is always returned in eax. That is, even when zero values are returned, eax is loaded with nil.
It's this kind of low-level detail that may vary from implementation to implementation.
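To make the pass-through behaviour concrete, here is a small REPL sketch using the F and G from the question (exact output formatting varies by REPL); both values do come back from G:
> (defun f ()
    (values 'a 'b))
F
> (defun g ()
    (f))
G
> (g)
A
B
> (multiple-value-bind (x y) (g)
    (list x y))
(A B)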
One thing to be aware of: there is a limit on the number of values that can be returned on a specific Common Lisp implementation.
The variable MULTIPLE-VALUES-LIMIT holds the implementation/machine-specific maximum number of values that can be returned. The standard says that it should not be smaller than 20. SBCL has a very large value on my computer, while LispWorks has only 51, ECL has 64 and CLISP has 128.
But I can't remember seeing Lisp code which wants to return more than 5 values.
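For reference, the limit can be inspected directly, and returning a handful of values stays well within it (a sketch; the printed limit is implementation-dependent):
> multiple-values-limit
20                       ; or (much) larger, depending on the implementation
> (multiple-value-call #'list (values-list (make-list 10)))
(NIL NIL NIL NIL NIL NIL NIL NIL NIL NIL)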
I have a pointer uvw(:,:) which is two-dimensional, and I have a 1-D buffer array x(:).
Now I need to point uvw(1,:)=>x(1:ncell) and uvw(2,:)=>x(ncell+1:ncell*2) etc.
I made a very simple example. I know that an array of pointers does not work, but does anybody have an idea how this can be worked around?
PS: For a pragmatic reason I do not want to wrap my uvw in a derived type. (I am changing a bit of code and need uvw as a 2-D pointer. Currently it is an array, and my idea is to avoid changing the way uvw is used, as it is used thousands of times.)
program test
  real, allocatable, target :: x(:)
  real, pointer :: ptr(:,:)
  allocate(x(100))
  x = 1.
  ptr(1,:) => x(1:10)
end program
The error message says:
error #8524: The syntax of this data pointer assignment is incorrect:
either 'bound spec' or 'bound remapping' is expected in this context.
[1]
ptr(1,:) => x(1:10)
----^
You are trying to perform pointer bounds remapping, but you have the incorrect syntax and approach.
Pointer bounds remapping is a way to have the shape of the pointer different from that of the target. In particular, the rank of the pointer and target may differ. However, in such an assignment it is necessary to explicitly specify the lower and upper bounds of the remapping; it isn't sufficient to use : by itself.
Also, you'll need to assign the whole pointer in one go. That is, you can't have "the first ten elements point to this slice, the next ten to this slice" and so on in multiple statements.
The assignment statement would be
ptr(1:10,1:10) => x
Note that this also means that you can't actually have what you want. You are asking for the elements ptr(1,1:10) to correspond to x(1:10) and ptr(2,1:10) to correspond to x(11:20). That isn't possible: the array elements must match in order: ptr(1:10,1), being the first ten elements of ptr, must instead be associated with the first ten elements x(1:10). The corrected pointer assignment above has this.
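Applied to the program from the question, a corrected version might look like the sketch below (the bounds assume ncell is 10 and there are 10 "rows"; adjust to taste):
program test
  implicit none
  real, allocatable, target :: x(:)
  real, pointer :: ptr(:,:)

  allocate(x(100))
  x = 1.

  ! Rank-changing pointer assignment with explicit bounds remapping:
  ! ptr(1:10,1) is associated with x(1:10), ptr(1:10,2) with x(11:20), ...
  ptr(1:10,1:10) => x

  print *, associated(ptr), shape(ptr)
end program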
If you prefer to avoid a pointer, then UNION/MAP is an option, depending on the compiler (it was added to gfortran a while ago). Then you can think of the array as rank-2 but also use the vector (rank-1) for SIMD operations.
All this assumes that one wants to avoid pointers...
I have a slice that contains pointers to values. In a performance-critical part of my program, I'm adding or removing values from this slice. For the moment, inserting a value is just an append (O(1) complexity), and removal consists of searching the slice for the corresponding pointer value, from 0 to n-1, until the pointer is found (O(n)). To improve performance, I'd like to keep the values in the slice sorted, so that searching can be done by binary search (O(log n)).
But how can I compare pointer values? Pointer arithmetic is forbidden in Go, so AFAIK to compare pointer values p1 and p2 I have to use the unsafe package and do something like
uintptr(unsafe.Pointer(p1)) < uintptr(unsafe.Pointer(p2))
Now, I'm not comfortable using unsafe, at least because of its name. So, is that method correct? Is it portable? Are there potential pitfalls? Is there a better way to define an order on pointer values? I know I could use maps, but maps are slow as hell.
As said by others, don't do this. Performance can't be that critical to resort to pointer arithmetic in Go.
Pointers are comparable; see Spec: Comparison operators:
Pointer values are comparable. Two pointer values are equal if they point to the same variable or if both have value nil. Pointers to distinct zero-size variables may or may not be equal.
Just use a map with the pointers as keys. Simple as that. Yes, indexing maps is slower than indexing slices, but then again, if you were to keep your slice sorted and perform binary searches in it, the performance gap would shrink, because the (hash) map implementation gives you O(1) lookup while binary search is only O(log n). With a big data set, the map might even be faster than searching the slice.
If you anticipate a large number of pointers in the map, pre-allocate a big one with make(), passing an estimated upper size; until your map exceeds this size, no reallocation will occur.
m := make(map[*mytype]struct{}, 1<<20) // Allocate map for 1 million entries
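A minimal, self-contained sketch of that map-as-set approach (the type name mytype and the sizes are placeholders, not taken from the original post):
package main

import "fmt"

type mytype struct{ v int }

func main() {
	set := make(map[*mytype]struct{}, 1024) // pre-size to limit rehashing

	p := &mytype{v: 1}
	set[p] = struct{}{} // insert: amortized O(1)

	if _, ok := set[p]; ok { // membership test uses pointer equality (==)
		fmt.Println("found")
	}

	delete(set, p) // removal: no linear search through a slice
	fmt.Println(len(set)) // 0
}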
In Julia, say I have an object_id for a variable but have forgotten its name, how can I retrieve the object using the id?
I.e. I want the inverse of some_id = object_id(some_object).
As @DanGetz says in the comments, object_id is a hash function and is designed not to be invertible. @phg is also correct that ObjectIdDict is intended precisely for this purpose (it is documented, although not discussed much in the manual):
ObjectIdDict([itr])
ObjectIdDict() constructs a hash table where the keys are (always)
object identities. Unlike Dict it is not parameterized on its key and
value type and thus its eltype is always Pair{Any,Any}.
See Dict for further help.
In other words, it hashes objects by === using object_id as a hash function. If you have an ObjectIdDict and you use the objects you encounter as the keys into it, then you can keep them around and recover those objects later by taking them out of the ObjectIdDict.
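As a small illustration of that idea (the names below are made up, and this uses the 0.6-era ObjectIdDict that the quoted docstring describes; later Julia versions call it IdDict):
# Stash objects you may want back later; the dictionaries keep them alive.
seen = ObjectIdDict()               # keys hashed by identity (===)
obj  = [1, 2, 3]
seen[obj] = object_id(obj)

# For the specific id -> object direction, an ordinary Dict keyed by the
# UInt returned by object_id also works:
registry = Dict{UInt, Any}()
registry[object_id(obj)] = obj

registry[object_id(obj)] === obj    # true, as long as the entry is kept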
However, it sounds like you want to do this without the explicit ObjectIdDict just by asking which object ever created has a given object_id. If so, consider this thought experiment: if every object were always recoverable from its object_id, then the system could never discard any object, since it would always be possible for a program to ask for that object by ID. So you would never be able to collect any garbage, and the memory usage of every program would rapidly expand to use all of your RAM and disk space. This is equivalent to having a single global ObjectIdDict which you put every object ever created into. So inverting the object_id function that way would require never deallocating any objects, which means you'd need unbounded memory.
Even if we had infinite memory, there are deeper problems. What does it mean for an object to exist? In the presence of an optimizing compiler, this question doesn't have a clear-cut answer. It is often the case that an object appears, from the programmer's perspective, to be created and operated on, but in reality – i.e. from the hardware's perspective – it is never created. Consider this function which constructs a complex number and then uses it for a simple computation:
julia> function foo(y::Real)
           z = Complex(0,y)
           w = 2z*im
           return real(w)
       end
foo (generic function with 1 method)

julia> foo(123)
-246
From the programmer's perspective, this constructs the complex number z and then constructs 2z, then 2z*im, and finally constructs real(2z*im) and returns that value. So all of those values should be inserted into the "Great ObjectIdDict in the Sky". But are they really constructed? Here's the LLVM code for this function applied to an Int:
julia> @code_llvm foo(123)
define i64 @julia_foo_60833(i64) #0 !dbg !5 {
top:
  %1 = shl i64 %0, 1
  %2 = sub i64 0, %1
  ret i64 %2
}
No Complex values are constructed at all! All of the work is inlined and eliminated rather than actually being done. The whole computation boils down to just doubling the argument (by shifting it left one bit) and negating it (by subtracting it from zero). This optimization can be done first and foremost because the intermediate steps have no observable side effects. The compiler knows that there's no way to tell the difference between actually constructing complex values and operating on them and just doing a couple of integer ops – as long as the end result is always the same. Implicit in the idea of a "Great ObjectIdDict in the Sky" is the assumption that all objects that seem to be constructed actually are constructed and inserted into a large, permanent data structure – which is a massive side effect. So not only is recovering objects from their IDs incompatible with garbage collection, it's also incompatible with almost every conceivable program optimization.
The only other way one could conceive of inverting object_id would be to compute its inverse image on demand instead of saving objects as they are created. That would solve both the memory and optimization problems. Of course, it isn't possible since there are infinitely many possible objects but only a finite number of object IDs. You are vanishingly unlikely to actually encounter two objects with the same ID in a program, but the finiteness of the ID space means that inverting the hash function is impossible in principle since the preimage of each ID value contains an infinite number of potential objects.
I've probably refuted the possibility of an inverse object_id function far more thoroughly than necessary, but it led to some interesting thought experiments, and I hope it's been helpful – or at least thought provoking. The practical answer is that there is no way to get around explicitly stashing every object you might want to get back later in an ObjectIdDict.
Assume I want to store a vector together with its norm. I expected the corresponding type definition to be straightforward:
immutable VectorWithNorm1{Vec <: AbstractVector}
    vec::Vec
    norm::eltype(Vec)
end
However, this doesn't work as intended:
julia> fieldtype(VectorWithNorm1{Vector{Float64}},:norm)
Any
It seems I have to do
immutable VectorWithNorm2{Vec <: AbstractVector, Eltype}
    vec::Vec
    norm::Eltype
end
and rely on the user to not abuse the Eltype parameter. Is this correct?
PS: This is just a made-up example to illustrate the problem. It is not the actual problem I'm facing.
Any calculation on a type parameter currently does not work (although I did discuss the issue with Jeff Bezanson at JuliaCon, and he seemed amenable to fixing it).
The problem is that the expression for the type of norm gets evaluated when the parameterized type is defined, at which point it is called with a TypeVar that is not yet bound to a value. What you really need is for it to be called at the time that parameter is actually bound, when a concrete type is created.
I've run into this a lot myself, where I want to do some calculation on the number of bits of a floating-point type, i.e. to calculate and use the number of UInts needed to store a floating-point value of a particular precision, and then use an NTuple{N,UInt} to hold the mantissa.
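Coming back to the workaround from the question, one way to keep the extra Eltype parameter from being set inconsistently is a convenience constructor that derives it from the vector type. A 0.6-era sketch (matching the question's immutable syntax, repeating its definition so the sketch is self-contained; the helper name is made up):
immutable VectorWithNorm2{Vec <: AbstractVector, Eltype}
    vec::Vec
    norm::Eltype
end

# Outer helper: Eltype is computed from Vec rather than supplied by the caller.
vectorwithnorm{Vec <: AbstractVector}(v::Vec) =
    VectorWithNorm2{Vec, eltype(Vec)}(v, norm(v))

# julia> fieldtype(typeof(vectorwithnorm([1.0, 2.0])), :norm)
# Float64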