I have been looking at the Julia documentation on reflection and metaprogramming. It covers introspection broadly (the ability to inspect datatype fields, the methods of generic functions, macro expansion, and lowered code), but I have not seen anywhere that it discusses intercession (the ability of a program to change its own structure, e.g. edit data fields). Does this mean that Julia does not support intercession?
Julia structs are immutable by default, so a struct's fields and values cannot change. A mutable struct can have the values of its fields changed, but fields cannot be added or deleted. Methods using the struct can be added, but generally cannot be removed once added in a given scope.
So a struct in Julia supports only a small part of what you call "intercession."
If it is actually needed, a Julia struct can include a Dict field, which can mimic full "intercession" with name-value pairs, at the price of slower field access.
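For illustration, here is a minimal sketch of that idea (the DynamicRecord type and its property overloads are my own invention, not anything from Base):

struct DynamicRecord
    fields::Dict{Symbol,Any}
end
DynamicRecord() = DynamicRecord(Dict{Symbol,Any}())

# Route property syntax through the Dict, so "fields" can be
# added, changed, and deleted at runtime.
Base.getproperty(r::DynamicRecord, name::Symbol) =
    name === :fields ? getfield(r, :fields) : getfield(r, :fields)[name]
function Base.setproperty!(r::DynamicRecord, name::Symbol, value)
    getfield(r, :fields)[name] = value
end

r = DynamicRecord()
r.wheels = 4                 # "add" a field at runtime
r.wheels = 2                 # change its value
delete!(r.fields, :wheels)   # delete it again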
Why does F#'s idiomatic dictionary collection Map<K,V> require the key type K to be comparable, while C#'s Dictionary<K,V> doesn't?

I know this question is very similar to one I asked some time ago: Why F#'s default set collection is sorted while C#'s isn't?

However, I'd like to confirm whether the reason given there is the same in this case. And I wonder if there's an implementation of an immutable Map in F# out there, written by someone, that is less strict and doesn't require K to be comparable. I'd happily use it, because I don't care that much about performance.
F#'s Map<Key, Value> requires keys to be comparable because the map is implemented as a tree structure: you have to decide which subtree to descend into depending on the result of a comparison. C#'s Dictionary<Key, Value> is implemented as buckets of linked lists: you find a bucket by the key's hash code, then iterate the list until you find (or fail to find) an equal key. In both data structures keys are compared; the only difference is that for a dictionary, equality comparison is enough.
So the question really is: why does F#'s Map have an explicit comparison constraint, while C#'s Dictionary has only an implicit equality requirement?
Let's start with C#. What would happen if Dictionary had an IEquatable key constraint? You would have to manually implement this interface for every custom data type used as a dictionary key. But what if you want different implementations of equality? E.g. in some dictionaries you want your string keys to be case insensitive. Of course, you can pass an IEqualityComparer implementation to be used for key comparison (not only to the dictionary, but anywhere you need comparison). But why then force the key to be equatable if an external comparer may be used? Note that there is always a default comparer, which is used if you don't pass anything to the dictionary: it checks whether the key implements IEquatable<T> and uses that implementation, falling back to Object.Equals otherwise.
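For example (a small sketch; StringComparer.OrdinalIgnoreCase is one of the stock comparers shipped with .NET):

using System;
using System.Collections.Generic;

class Demo
{
    static void Main()
    {
        // Default comparer: ordinary case-sensitive string equality.
        var strict = new Dictionary<string, int> { ["Key"] = 1 };
        Console.WriteLine(strict.ContainsKey("key"));   // False

        // External comparer: case-insensitive keys, same Dictionary type.
        var loose = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
        {
            ["Key"] = 1
        };
        Console.WriteLine(loose.ContainsKey("key"));    // True
    }
}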
Why then does F# have an explicit comparable constraint on the key data type? Because this constraint does not force you to manually implement IComparable for every custom data type used as a map key. One big difference between the C# and F# type systems is that F# types are comparable and equatable by default: the F# compiler generates IComparable, IComparable<T>, and IStructuralComparable implementations unless you explicitly mark the type with the NoComparison attribute. So this constraint doesn't require you to write any additional code when you use F# data types.
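For example, an F# record can be used as a Map key with no extra code at all (a minimal sketch; the Point type is hypothetical):

// The compiler generates the comparison implementations for this record.
type Point = { X: int; Y: int }

let m = Map.ofList [ ({ X = 0; Y = 0 }, "origin"); ({ X = 1; Y = 2 }, "a") ]
printfn "%s" m.[{ X = 1; Y = 2 }]   // prints "a"

// Opting out with [<NoComparison>] makes the type unusable as a Map key,
// and the compiler reports it at compile time.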
One more benefit of using comparison/equality constraints: F# has a number of predefined generic operations on types that implement comparison or equality (=, <, <=, >, >=, min, max), which makes code with generic comparable/equatable types much more readable.
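A quick sketch of what that buys you:

// Using > infers the comparison constraint on 'a automatically,
// so maxOf works for ints, strings, tuples, records, ...
let maxOf a b = if a > b then a else b

printfn "%d" (maxOf 3 7)           // 7
printfn "%A" (max (1, 2) (0, 9))   // (1, 2): tuples compare element-wise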
I noticed that when a struct is declared, its fields are immutable by default unless the mutable keyword is used.
What is the reason for this? Is it good practice to use mutable structs?
This is discussed in this PR to Julia. I'll just quote from Jeff Bezanson:
We should steer people to immutable types by default. Immutable is generally better and faster, has better default behavior with ===, and it's really pretty rare to need to update object fields. Base has many more immutable types than mutable, and in making this change it seemed to me that even more types could be made immutable. But type Foo looks more natural than immutable Foo, so type tends to be the default choice. This change reverses that, with struct Foo being more natural, and immutable. Writing mutable struct Foo makes you ask "does this really need to be mutable?", which is a good thing!
Immutable struct types are the closest thing we have to value type structs in other languages. We inline them in arrays in at least some cases, and being immutable makes it much harder to tell whether they are value or reference types.
FWIW, Rust also uses struct and makes them immutable. Not that we need to copy Rust, but it adds confidence that this is a reasonable thing to do.
In short, immutable types are more performant, because they can be stack allocated and the compiler can reason better about them. They are also easier for humans to reason about, since they can't change out from under you.
Julia has this feature for a few reasons:
Structs can be packed efficiently into arrays, and in some cases the compiler can avoid allocating space for them entirely.
Code that uses immutable structs can be easier for the reader to follow.
The type's invariants will never be violated.
Generally, using mutable structs is bad practice because changing an object's internals can lead to violation of its invariants. Structs should only be declared with the mutable keyword when absolutely necessary.
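A small sketch of both kinds (the Point and Counter types here are hypothetical examples, not from Base):

struct Point            # immutable: the default, and usually what you want
    x::Float64
    y::Float64
end

mutable struct Counter  # mutable: only when in-place updates are really needed
    n::Int
end

p = Point(1.0, 2.0)
# p.x = 3.0             # ERROR: fields of an immutable struct cannot be changed

c = Counter(0)
c.n += 1                # fine: fields of a mutable struct can be reassigned

# The better default behavior with === mentioned above:
Point(1.0, 2.0) === Point(1.0, 2.0)   # true: compared by contents
Counter(0) === Counter(0)             # false: two distinct mutable objects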
The Racket FFI's documentation has types for _ptr, _cpointer, and _pointer.1
However, the documentation (as of writing this question) does not seem to compare the three different types. Obviously the first two are functions that produce ctype?s, whereas the last one is a ctype? itself. But when would I use one type over the other?
1 It also has other types such as _box, _list, _gcpointer, and _cpointer/null. These are all variants of those three types.
_ptr is a macro that is used to create types that are suitable for function types in which you need to pass data via a pointer passed as an argument (a very common idiom in C).
_pointer is a generic pointer ctype that can be used pretty much wherever a pointer is expected or returned. On the Racket side, it becomes an opaque value that you can't manipulate very easily (you can use ptr-ref if you need it). Note the docs have some caveats about interactions with GC when using this.
_cpointer constructs safer variants of _pointer that use tags to ensure that you don't mix up pointers of different types. It's generally more convenient to use define-cpointer-type instead of manually constructing these. In other words, these help you build abstractions represented by Racket's C pointers. You can do it manually with cpointer-push-tag! and _pointer but that's less convenient.
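To make that concrete, here is a rough sketch of where each type shows up in a binding (libmylib and its functions are hypothetical):

#lang racket/base
(require ffi/unsafe ffi/unsafe/define)

(define-ffi-definer define-mylib (ffi-lib "libmylib"))

;; _ptr in a _fun type: C signature `int get_count(int *out)`.
;; The FFI allocates the int, passes its address, and we return its value.
(define-mylib get_count
  (_fun (out : (_ptr o _int)) -> (r : _int) -> (values r out)))

;; _pointer: an opaque handle; C signature `void *make_raw(void)`.
(define-mylib make_raw (_fun -> _pointer))

;; define-cpointer-type: a tagged pointer type _widget, so a widget
;; pointer can't be passed where some other pointer type is expected.
(define-cpointer-type _widget)
(define-mylib make_widget (_fun -> _widget))
(define-mylib free_widget (_fun _widget -> _void))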
There's also a blog post I wrote that goes into more detail about some of these pointer issues: http://prl.ccs.neu.edu/blog/2016/06/27/tutorial-using-racket-s-ffi/
In Go I seem to have two options:
foo := Thing{}
foo.bar()
foo := &Thing{}
foo.bar()
func (self Thing) bar() {
}
func (self *Thing) bar() {
}
What's the better way to define my funcs: with self Thing or with self *Thing?
Edit: this is not a duplicate of the question about methods and functions. This question has to do with Thing and &Thing, and I think it's different enough to warrant its own URL.
Take a look at this item from the official FAQ:
For programmers unaccustomed to pointers, the distinction between these two examples can be confusing, but the situation is actually very simple. When defining a method on a type, the receiver (s in the above examples) behaves exactly as if it were an argument to the method. Whether to define the receiver as a value or as a pointer is the same question, then, as whether a function argument should be a value or a pointer. There are several considerations.

First, and most important, does the method need to modify the receiver? If it does, the receiver must be a pointer. (Slices and maps act as references, so their story is a little more subtle, but for instance to change the length of a slice in a method the receiver must still be a pointer.) In the examples above, if pointerMethod modifies the fields of s, the caller will see those changes, but valueMethod is called with a copy of the caller's argument (that's the definition of passing a value), so changes it makes will be invisible to the caller. By the way, pointer receivers are identical to the situation in Java, although in Java the pointers are hidden under the covers; it's Go's value receivers that are unusual.

Second is the consideration of efficiency. If the receiver is large, a big struct for instance, it will be much cheaper to use a pointer receiver.

Next is consistency. If some of the methods of the type must have pointer receivers, the rest should too, so the method set is consistent regardless of how the type is used. See the section on method sets for details.

For types such as basic types, slices, and small structs, a value receiver is very cheap so unless the semantics of the method requires a pointer, a value receiver is efficient and clear.
There isn't a clear answer, but they're completely different. When you don't use a pointer, the method receives a copy of the value, so anything it modifies is the copy and the caller's object is left unchanged; when you use the pointer, the method can modify the original. I would say you use the pointer variety more often, but it is completely situational; there is no "better way".
If you look at various programming frameworks/class libraries, you will see many examples where the authors have deliberately chosen to do things by value or by reference. For example, in C# .NET this is the fundamental difference between a struct and a class, and types like Guid and DateTime were deliberately implemented as structs (value types). Again, I think the pointer is more often the better choice (if you look through .NET, almost everything is a class, the reference type), but it definitely depends on what you wish to achieve with the type and/or how you want consumers/other developers to interact with it. You may also need to consider performance and concurrency (maybe you want everything to be by value so you don't have to worry about concurrent operations on a shared object; maybe you need a pointer because the object's memory footprint is large and copying it would make your program too slow or memory-hungry).
In Ada, primitive operations of a type T can only be defined in the package where T is defined. For example, if a Vehicles package defines Car and Bike tagged records, both inheriting from a common abstract tagged type Vehicle, then all operations that can dispatch on the class-wide type Vehicle'Class must be defined in this Vehicles package.

Let's say that you do not want to add primitive operations: you do not have permission to edit the source file, or you do not want to clutter the package with unrelated features.

Then you cannot define operations in other packages that implicitly dispatch on the type Vehicle'Class.
For example, you may want to serialize vehicles (define a Vehicles_XML package with a To_Xml dispatching function) or display them as UI elements (define a Vehicles_GTK package with Get_Label, Get_Icon, ... dispatching functions), etc.
The only way to perform dynamic dispatch is to write the code explicitly; for example, inside Vehicle_XML:
if V in Car'Class then
   return Car_XML (Car (V));
elsif V in Bike'Class then
   return Bike_XML (Bike (V));
else
   raise Constraint_Error
     with "Vehicle_XML is only defined for Car and Bike.";
end if;
(And a Visitor pattern defined in Vehicles and used elsewhere would work, of course, but that still requires the same kind of explicit dispatching code. Edit: in fact, no, but there is still some boilerplate code to write.)
My question is then:
is there a reason why operations dynamically dispatching on T are restricted to be defined in the defining package of T?
Is this intentional? Are there historical reasons behind this?
Thanks
EDIT:
Thanks for the current answers: basically, it seems that it is a matter of language implementation (freezing rules/virtual tables).
I agree that compilers are developed incrementally over time and that not all features fit nicely into an existing tool.

As such, isolating dispatching operations in a single package seems to be a decision guided more by existing implementations than by language design. Other languages outside of the C++/Java family provide dynamic dispatch without such a requirement (e.g. OCaml, Lisp (CLOS)); if that matters, those are also compiled languages, or more precisely, languages for which compilers exist.

When I asked this question, I wanted to know if there were more fundamental reasons, at the language specification level, behind this part of the Ada specification (otherwise, does it really mean that the specification assumes/enforces a particular implementation of dynamic dispatch?).
Ideally, I am looking for an authoritative source, like a rationale or guideline section in Reference Manuals, or any kind of archived discussion about this specific part of the language.
I can think of several reasons:
(1) Your example has Car and Bike defined in the same package, both derived from Vehicle. However, that's not the "normal" use case, in my experience; it's more common to define each derived type in its own package. (Which I think is close to how "classes" are used in other compiled languages.) Note also that it's not uncommon to define new derived types afterwards. That's one of the whole points of object-oriented programming, to facilitate reuse; it's a good thing if, when designing a new feature, you can find some existing type that you can derive from and reuse its features.
So suppose you have your Vehicles package that defines Vehicle, Car, and Bike. Now in some other package V2, you want to define a new dispatching operation on Vehicle. For this to work, you have to provide the overriding operations for Car and Bike, with their bodies; and assuming you are not allowed to modify Vehicles, the language designers have to decide where the bodies of the new operation must live. Presumably, you'd have to write them in V2. (One consequence is that the body you write in V2 would not have access to the private part of Vehicles, and therefore it couldn't access the implementation details of Car or Bike; so you could only write the body of that operation in terms of already-defined operations.)

So then the question is: does V2 need to provide operations for all types that are derived from Vehicle? What about types derived from Vehicle that don't become part of the final program (maybe they're derived to be used in someone else's project)? What about types derived from Vehicle that haven't yet been defined (see the preceding paragraph)? In theory, I suppose this could be made to work by checking everything at link time. However, that would be a major paradigm change for the language; it's not something that could be done easily. (It's pretty common, by the way, for programmers to think "it would be nice to add feature X to a language, and it shouldn't be too hard because X is simple to talk about", without realizing just what a vast impact such a "simple" feature would have.)
(2) A practical reason has to do with how dispatching is implemented. Typically, it's done with a vector of procedure/function pointers. (I don't know for sure what the exact implementation is in all cases, but I think this is basically the case for every Ada compiler as well as for C++ and Java compilers, and probably C#.) What this means is that when you define a tagged type (or a class, in other languages), the compiler will set up a vector of pointers, and based on how many operations are defined for the type, say N, it will reserve slots 1..N in the vector for the addresses of the subprograms. If a type is derived from that type and defines overriding subprograms, the derived type gets its own vector, where slots 1..N will be pointers to the actual overriding subprograms. Then, when calling a dispatching subprogram, a program can look up the address in some known slot index assigned to that subprogram, and it will jump to the correct address depending on the object's actual type. If a derived type defines new primitive subprograms, new slots are assigned N+1..N2, and types derived from that could define new subprograms that get slots N2+1..N3, and so on.
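A rough illustration of that slot assignment (hypothetical types; the actual layout is of course compiler-specific):

package Shapes is
   type Shape is tagged null record;
   procedure Draw (S : Shape);   --  slot 1 of Shape's dispatch table
   procedure Move (S : Shape);   --  slot 2 (so N = 2 for Shape)

   type Circle is new Shape with null record;
   overriding procedure Draw (C : Circle);        --  replaces the pointer in slot 1
   procedure Set_Radius (C : Circle; R : Float);  --  new primitive: slot 3
end Shapes;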
Adding new dispatching subprograms to Vehicle would interfere with this scheme. Since new types have already been derived from Vehicle, you can't insert a new area into the vector after N, because code has already been generated that assumes the slots starting at N+1 have been assigned to operations declared for derived types. And since we may not know all the types that have been derived from Vehicle, and we don't know what other types will be derived from Vehicle in the future or how many new operations will be defined for them, it's hard to pick some other location in the vector that could be used for the new operations. Again, this could be done if all of the slot assignment were deferred until link time, but that would be a major paradigm change, again.
To be honest, I can think of other ways to make this work, by adding new operations not in the "main" dispatch vector but in an auxiliary one; dispatching would probably require a search for the correct vector (perhaps using an ID assigned to the package that defines the new operations). Also, adding interface types to Ada 2005 has already complicated the simple vector implementation somewhat. But I do think this (i.e. it doesn't fit into the model) is one reason why the ability to add new dispatching operations like you suggest isn't present in Ada (or in any other compiled language that I know of).
Without having checked the rationale for Ada 95 (where tagged types were introduced), I am pretty sure the freezing rules for tagged types are derived from the simple requirement that all objects in T'Class should have all the dispatching operations of type T.
To fulfill that requirement, you have to freeze the type and say that no more dispatching operations can be added to type T once you:
Derive a type from T, or
Are at the end of the package specification where T was declared.
If you didn't do that, you could have a type derived from type T (i.e. in T'Class) which hadn't inherited all the dispatching operations of type T. If you passed an object of that type as a T'Class parameter to a subprogram which knew of one more dispatching operation on type T, a call to that operation would have to fail. We wouldn't want that to happen.
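Here is a sketch of those freezing rules in action (P, T, and D are hypothetical names):

package P is
   type T is tagged null record;
   procedure Op1 (X : T);            --  dispatching operation of T

   type D is new T with null record; --  deriving from T freezes T here

   --  procedure Op2 (X : T);        --  illegal: T is frozen, so no new
   --                                --  dispatching operations can be added
end P;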
Answering your extended question:
Ada comes with a Reference Manual (the ISO standard), a Rationale, and an Annotated Reference Manual. And a large part of the discussions behind these documents are public as well.
For Ada 2012 see http://www.adaic.org/ada-resources/standards/ada12/
Tagged types (dynamic dispatching) were introduced in Ada 95. The documents related to that version of the standard can be found at http://www.adaic.org/ada-resources/standards/ada-95-documents/