I have been reading the Julia documentation, as well as some blogs, and I have found sentences that mention the concept of instantiation, for instance "Int64 can be instantiated". I found some information on this concept here:
https://www.computerhope.com/jargon/i/instantiation.htm
but I cannot see how this could be used in Julia or why it is relevant. Any comment would be welcome. Thanks.
We say that a type can be instantiated if it is possible to create an object that has this type.
For example, Int64 can be instantiated: the literal 1, on a 64-bit machine, has this type by default:
julia> typeof(1)
Int64
However, a supertype of Int64 is Signed:
julia> supertype(Int64)
Signed
The Signed type cannot be instantiated, because it is abstract (see https://docs.julialang.org/en/v1/manual/types/#man-abstract-types). This means that it is impossible to create an object whose concrete type is Signed (although 1 isa Signed still holds, because Int64 is a subtype of Signed).
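You can check this distinction programmatically; a minimal REPL illustration:

```julia
julia> isabstracttype(Signed)
true

julia> isconcretetype(Int64)
true

julia> 1 isa Signed  # values of concrete subtypes still belong to the abstract type
true
```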
According to the Golang tour, we're provided with the following integer types:
int int8 int16 int32 int64
uint uint8 uint16 uint32 uint64 uintptr
In theory, that means we could also have pointers to all of these types as follows:
*int *int8 *int16 *int32 *int64
*uint *uint8 *uint16 *uint32 *uint64 *uintptr
If this is the case, then we already have a pointer to a uint in the form of *uint. That would make uintptr redundant. The official documentation doesn't shed much light on this:
uintptr is an integer type that is large enough to hold the bit pattern of any pointer.
As I understand it, that means that the bit width of a uint is determined at compile time based on the target architecture (typically either 32-bit or 64-bit). It seems logical that the pointer width should scale to the target architecture as well (i.e., a 32-bit *uint points to a 32-bit uint). Is that the case in Golang?
Another thought was that maybe uintptr was added to make the syntax less confusing when doing multiple indirection (i.e., foo *uintptr vs. foo **uint)?
My last thought is that perhaps pointers and integers are incompatible data types in Golang. That would be pretty frustrating since the hardware itself doesn't make any distinction between them. For instance, a "branch to this address" instruction can use the same data from the same register that was just used in an "add this value" instruction.
What's the real point (pun intended) of uintptr?
The short answer is "never use uintptr". 😀
The long answer is that uintptr is there to bypass the type system and allow the Go implementors to write Go runtime libraries, including the garbage collection system, in Go, and to call C-callable code including system calls using C pointers that are not handled by Go at all.
If you're acting as an implementor—e.g., providing access to system calls on a new OS—you'll need uintptr. You will also need to know all the special magic required to use it, such as locking your goroutine to an OS-level thread if the OS is going to do stack-ish things to OS-level threads, for instance. (If you're using it with Go pointers, you may also need to tell the compiler not to move your goroutine stack, which is done with special compile-time directives.)
Edit: as kostix notes in a comment, the runtime system considers an unsafe.Pointer as a reference to an object, which keeps the object alive for GC. It does not consider a uintptr as such a reference. (That is, while unsafe.Pointer has a pointer type, uintptr has integer type.) See also the documentation for the unsafe package.
uintptr is simply an integer representation of a memory address, regardless of the actual type it points to. It is somewhat like void * in C, or like casting a pointer to an integer. Its purpose is unsafe black magic; it is not used in everyday Go code.
You are conflating uintptr and *uint. uintptr is used when you're dealing with raw pointer values: it is an integer type large enough to hold any pointer, used mainly for unsafe memory access (see the unsafe package). *uint, by contrast, is simply a pointer to an unsigned integer.
I have been looking at the Julia documentation on reflection and metaprogramming. It covers introspection broadly (the ability to check datatype fields, methods in generic functions, expanding macros, and lowering functions), but I have not seen any point where it talks about intercession (the ability of a program to change its own structure, e.g. edit data fields). Does this mean that Julia does not support intercession?
Julia's struct is immutable by default, so a struct cannot change fields or values. A mutable struct can have the values of its fields changed but fields cannot be added or deleted. Methods using the struct can be added but generally cannot be removed once added to a given scope.
So a struct in Julia supports only a small part of what you call "intercession."
If actually needed, a Julia struct can take a Dict field which can mimic full "intercession" with name-value pairs, at the price of efficiency of access.
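A sketch of that last point; the struct name and fields here are hypothetical:

```julia
mutable struct Flexible
    x::Int
    extra::Dict{Symbol,Any}  # name-value pairs that can be added or removed at runtime
end

f = Flexible(1, Dict{Symbol,Any}())
f.x = 2                       # allowed: a mutable struct's field values can change
f.extra[:newfield] = "added"  # mimics adding a field, at some cost in access efficiency
```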
I am developing an interpreter for a functional programming language that uses the Hindley-Milner type system.
The question is: where should type errors occur (be detected)?
For example, if I apply an Integer value to a function of type Bool -> Integer, this is obviously a type error. Can the type inferer always detect this?
My speculation is that the type inferer doesn't always fully know the types of expressions, i.e., while inference is still in progress. Therefore some errors reported by the type inferer would be wrong, or some errors would not be detected.
However, the expression evaluator should detect type errors properly, because the evaluator fully knows the types of expressions.
If the type inferer cannot detect type errors correctly, then how do statically typed interpreted languages such as OCaml perform static type error checking?
... a type error. Can type inferer always detect this?
If your type inference is sound, then yes, it will always detect such an error.
With the Hindley-Milner type system in particular, the algorithm relies on unification to find a principal type. If no such type exists, you end up with a unification error.
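To see why the error is always caught, here is a minimal, illustrative sketch of HM-style unification (not a full inferencer; it omits the occurs check). Applying an Integer to a Bool -> Integer function forces the inferer to unify Bool with Integer, which fails:

```python
def resolve(t, subst):
    """Follow variable bindings until we reach a non-variable or unbound variable."""
    while isinstance(t, tuple) and t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(t1, t2, subst=None):
    """Unify two types: strings are constants, ('var', name) is a type
    variable, ('fun', arg, res) is a function type."""
    if subst is None:
        subst = {}
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, tuple) and t1[0] == 'var':
        subst[t1[1]] = t2
        return subst
    if isinstance(t2, tuple) and t2[0] == 'var':
        subst[t2[1]] = t1
        return subst
    if isinstance(t1, tuple) and isinstance(t2, tuple) and t1[0] == t2[0] == 'fun':
        subst = unify(t1[1], t2[1], subst)   # unify argument types
        return unify(t1[2], t2[2], subst)    # then result types
    raise TypeError(f"cannot unify {t1} with {t2}")

# Applying an Integer argument to a Bool -> Integer function:
# the argument type must unify with the function's parameter type.
try:
    unify('Bool', 'Integer')
except TypeError as e:
    print("type error:", e)
```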
Julia's parametric types really define a family of types containing different layout in memory. I was wondering if this works also for the names and number of fields in a composite type? A simple example would be something like:
struct mytype{Float64}
    a::Float64
    b::Float64
end
struct mytype{Int64}
    a::Int64
end
This gives me an error for redefining mytype.
Here, I want mytype to have two fields if its type parameter is Float64 and just one if it is Int64. (Actually what I want is more complicated, but this is a basic example.) One could imagine having abstract types and <:, etc. in the above.
I realize this might not be possible in other languages, but to me it seems the compiler should be able to figure this out much the same way functions can be specialized. After all, real (compiled) code will involve concrete types and everything will be known by the compiler. (for truly dynamical types, perhaps an additional layer of encapsulation would be required in this case?)
Perhaps there is a different/better way of achieving similar results?
You could define the two types separately (mytypeF & mytypeI) and define a new type mytype as the union of the two. Then functions which really could statically determine which type they'd received would be specialized as you requested. But I'm not sure if that's sensible or what you're really after.
This is currently not possible, but the feature has been speculatively proposed as "generated types" in issue #8472. Sebastian's answer is a reasonable work around so long as you take care that the grouped mytype constructor is type-stable. For a more complete example, see how ImmutableArrays.jl programmatically defines a group of types around the abstract ImmutableArray locus.
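A sketch of that workaround (the names MytypeF, MytypeI, and Mytype are hypothetical):

```julia
struct MytypeF
    a::Float64
    b::Float64
end

struct MytypeI
    a::Int64
end

const Mytype = Union{MytypeF, MytypeI}

# Methods dispatch to the concrete type, so each variant can have its own fields:
describe(x::MytypeF) = (x.a, x.b)
describe(x::MytypeI) = (x.a,)
```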
I'm fairly new to C so be gentle.
I want to use the library interception method for Linux to replace calls to the OpenCL library with my own library. I understand that this can be done using LD_PRELOAD. So I can just re-implement the OpenCL functions as defined in the OpenCL header file within my own library which can then be linked against.
The problem is that this OpenCL header also contains some extern struct definitions, e.g.
typedef struct _cl_mem * cl_mem;
which are not defined within the OpenCL header. Is it possible these structs are defined within the OpenCL shared lib? If not, where might they be defined?
That typedef declares cl_mem as a pointer to a struct whose contents are undeclared. This means code using it can't do things like checking the struct's size, copying it, or inspecting its contents; it simply has no idea how big it is.
This is a traditional technique in C to create an opaque, or private, type. You can declare the struct inside your OpenCL library, and the official header puts no restrictions on what that struct contains. It could even be empty, if all you need is an ID you can store in the pointer itself, though this is rarely done.
An example of the same technique used in the standard C library is the FILE type. It might be as simple as an integer file descriptor, or as complex as a struct containing the entire filesystem state; standard C code won't know. The particulars are known to the library only.
In short, you can declare that struct however you like - as long as you implement every function that handles that struct. The program that links to your library never handles the struct, only pointers to it.