Passing auto typed vars to function in D?

This doesn't work in D:
void doSomething(auto a, auto b){
// ...
}
I'm just curious, will this ever work? Or is this just technically impossible? (Or just plain stupid?)
Anyway, can this be accomplished in any other way? I suppose I could use ... and look through the argument list, but I'm kinda making a library for lazy newbie people and want them to be able to create functions easily without really caring about data types. I'm playing with the idea of creating a struct called var, like
struct var{
byte type;
void* data;
// ...
}
// and overload like all operators so that a lazy devver can do
var doSomething(var a, var b){
if(a == "hello")
b = 8;
var c = "No.:" ~ b ~ " says:" ~ a;
return c;
}
But my head is already starting to hurt right there. And, I'm kinda feeling I'm missing something. I'm also painfully aware that this is probably what templates are for... Are they? From the little I know, a template would look like this (?)
void doSomething(T, U)(T a, U b){
// ...
}
But now it doesn't look so clean anymore. Maybe I'm getting all this backwards. Maybe my confusion stems from my belief that auto is a dynamic type, comparable to var in JavaScript, when in reality it's something else?
And if it isn't a dynamic type, and this is probably a whole other topic, is it possible to create one? Or is there maybe even an open source lib available? A liblazy maybe?
(PS. Yeah, maybe the lazy devver is me : )

If doSomething is generic over any type of a and b, then the template version is the right thing to do:
void doSomething(T, U)(T a, U b) {
// etc.
}
The compiler will detect the types represented by T and U at compile time and generate the proper code.

I'll just add that something close to your idea of runtime type polymorphism (the var structure) is already implemented in the std.variant module (D2 only).
Also, technically auto is really a keyword that does nothing - it's used where you need a type modifier without a type. If you don't specify a type, D will infer it for you from the initialization expression. For example, all of these work:
auto i = 5;
const j = 5;
static k = 5;

Stop, don't do that! I fear that if you ignore types early on it will be harder to grasp them, especially since D is not dynamically typed. I believe types are important even in dynamic languages.
You can ignore the type in D2 by using std.variant and it can make the code look like the language is dynamically typed.
You are correct on your template usage, but not auto. D lets you use that word in many places, such as the return type, because D is very good at inferring types. Parameters, however, don't come with any duplicate information from which a type could be inferred.
Additional info on std.variant:
Looking into this more, it is still more complicated than what you desire. For example, assigning the value to a typed variable requires a method call, and you can't access class methods through the Variant reference.
int b = a.get!(int);
a.goFish(); // Error: Variant type doesn't have a goFish method

Maybe my confusion stems from my belief that auto is a dynamic type, comparable to var in JavaScript, when in reality it's something else?
auto is a storage class. In fact, it's the default storage class; it doesn't mean anything beyond 'this has a storage class'.
Now if the compiler can infer the type unambiguously, then everything is fine. Otherwise, it's an error:
auto i = 42; // An integer literal is inferred to be of type 'int'.
auto j; // Error!
So auto isn't a type at all. Remember that this all has to happen at compile time, not run time. So let's look at a function:
auto add(auto a, auto b)
{
return a + b;
}
Can the compiler automatically infer the types of a, b, or the return value? Nope. In addition to the primitives that work with the + operator (int, byte, double, and so on), user-defined types could overload it. Because the compiler cannot unambiguously infer any of these types, it is an error. Types must be specified for parameters because, in general, they cannot be inferred (unlike declarations, for example).

Related

Why is fmt.Println not consistent when printing pointers?

I'm an experienced programmer but have never before touched Go in my life.
I just started playing around with it and I found that fmt.Println() will actually print the values of pointers prefixed by &, which is neat.
However, it doesn't do this with all types. I'm pretty sure it is because the types it does not work with are primitives (or at least, Java would call them that, does Go?).
Does anyone know why this inconsistent behaviour exists in the Go fmt library? I can easily retrieve the value by using *p, but for some reason Println doesn't do this.
Example:
package main
import "fmt"
type X struct {
S string
}
func main() {
x := X{"Hello World"}
fmt.Println(&x) // &{Hello World} <-- displays the pointed-to value prefixed with &
fmt.Println(*(&x)) // {Hello World}
i := int(1)
fmt.Println(&i) // 0x10410028 <-- instead of &1 ?
fmt.Println(*(&i)) // 1
}
The "technical" answer to your question can be found here:
https://golang.org/src/fmt/print.go?#L839
As you can see, when printing pointers to Array, Slice, Struct or Map types, the special rule of printing "&" + value applies, but in all other cases the address is printed.
As for why they decided to only apply the rule for those, it seems the authors considered that for "compound" objects you'd be interested in always seeing the values (even when using a pointer), but for other simple values this was not the case.
You can see that reasoning here, where they added the rule for the Map type which was not there before:
https://github.com/golang/go/commit/a0c5adc35cbfe071786b6115d63abc7ad90578a9#diff-ebda2980233a5fb8194307ce437dd60a
I would guess this has to do with the fact that it is very common to pass around pointers to structs (so you'd often just forget to dereference the pointer when printing the value), but not so common to pass around pointers to int or string (so if you were printing such a pointer, you were probably interested in the actual address).
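To make the rule concrete, here is a small hedged sketch (the exact addresses are illustrative; output will vary):
package main

import "fmt"

type X struct {
    S string
}

func main() {
    x := X{"Hello World"}
    m := map[string]int{"a": 1}
    s := []int{1, 2, 3}
    i := 1

    fmt.Println(&x) // &{Hello World}        -- pointer to struct: prints & plus the value
    fmt.Println(&m) // &map[a:1]             -- pointer to map: prints & plus the value
    fmt.Println(&s) // &[1 2 3]              -- pointer to slice: prints & plus the value
    fmt.Println(&i) // e.g. 0xc000012345     -- pointer to int: prints only the address
}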

Collection of Unique Functions in Go

I am trying to implement a set of functions in go. The context is an event server; I would like to prevent (or at least warn) adding the same handler more than once for an event.
I have read that maps are idiomatic to use as sets because of the ease of checking for membership:
if _, ok := set[item]; ok {
// don't add item
} else {
// do add item
}
I'm having some trouble with using this paradigm for functions though. Here is my first attempt:
// this is not the actual signature
type EventResponse func(args interface{})
type EventResponseSet map[*EventResponse]struct{}
func (ers EventResponseSet) Add(r EventResponse) {
if _, ok := ers[&r]; ok {
// warn here
return
}
ers[&r] = struct{}{}
}
func (ers EventResponseSet) Remove(r EventResponse) {
// if key is not there, doesn't matter
delete(ers, &r)
}
It is clear why this doesn't work: &r takes the address of the parameter r, which is a fresh copy on every call, so Add and Remove never see the same key. Functions are not reference types in Go, though some people will tell you they are; the language specification says that everything, maps, slices, and pointers included, is passed by value.
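To illustrate, a small hedged sketch (the addr helper is hypothetical, just to show the mismatch):
package main

import "fmt"

type EventResponse func(args interface{})

// addr returns the address of its parameter; r is a fresh copy on every call,
// so two calls with the same function never yield the same pointer.
func addr(r EventResponse) *EventResponse { return &r }

func main() {
    var h EventResponse = func(args interface{}) {}
    fmt.Println(addr(h) == addr(h)) // false: each call sees a different address
}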
Attempt 2:
func (ers EventResponseSet) Add(r *EventResponse) {
// ...
}
This has a couple of problems:
Any EventResponse has to be declared like fn := func(args interface{}){} because you can't address functions declared in the usual manner.
You can't pass a closure at all.
Using a wrapper is not an option because any function passed to the wrapper will get a new address from the wrapper - no function will be uniquely identifiable by address, and all this careful planning is for nought.
Is it silly of me to not accept defining functions as variables as a solution? Is there another (good) solution?
To be clear, I accept that there are cases that I can't catch (closures), and that's fine. The use case that I envision is defining a bunch of handlers and being relatively safe that I won't accidentally add one to the same event twice, if that makes sense.
You could use reflect.Value as presented by Uvelichitel, or the function address as a string acquired by fmt.Sprint(), or the address as a uintptr acquired by reflect.Value.Pointer() (more in the answer to How to compare 2 functions in Go?), but I recommend against all of these.
Since the language spec does not allow comparing function values, nor taking their addresses, you have no guarantee that something that works at one point in your program will keep working, either within a single run or across different (future) Go compilers. I would not use it.
Since the spec is strict about this, compilers are allowed to generate code that, for example, changes the address of a function at runtime (e.g. unloads an unused function, then loads it again later if needed). I don't know of such behaviour currently, but that doesn't mean a future Go compiler won't take advantage of it.
If you store a function address (in whatever format), that value no longer counts as keeping the function value alive. And if no one else "owns" the function value anymore, the generated code (and the Go runtime) would be "free" to modify or relocate the function (and thus change its address) without violating the spec or Go's type safety. So you could not rightfully blame the compiler, only yourself.
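Purely for illustration, here is a hedged sketch of the address-based comparison mentioned above, using reflect.Value.Pointer(); the handler function is hypothetical, and nothing in the spec guarantees this keeps working:
package main

import (
    "fmt"
    "reflect"
)

func handler(args interface{}) {}

func main() {
    f1 := handler
    f2 := handler
    // Compare the underlying code pointers. This works with current compilers,
    // but the Go spec does not guarantee it.
    p1 := reflect.ValueOf(f1).Pointer()
    p2 := reflect.ValueOf(f2).Pointer()
    fmt.Println(p1 == p2) // true (observed, not guaranteed)
}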
If you want to check against reusing, you could work with interface values.
Let's say you need functions with signature:
func(p ParamType) RetType
Create an interface:
type EventResponse interface {
Do(p ParamType) RetType
}
For example, you could have an unexported struct type, and a pointer to it could implement your EventResponse interface. Make an exported function to return the single value, so no new values may be created.
E.g.:
type myEvtResp struct{}
func (m *myEvtResp) Do(p ParamType) RetType {
// Your logic comes here
}
var single = &myEvtResp{}
func Get() EventResponse { return single }
Is it really necessary to hide the implementation in a package and only create and "publish" a single instance? Unfortunately yes, because otherwise you could create other values like &myEvtResp{}, which would be distinct pointers that still have the same Do() method, and the interface wrapper values would not be equal:
Interface values are comparable. Two interface values are equal if they have identical dynamic types and equal dynamic values or if both have value nil.
[...and...]
Pointer values are comparable. Two pointer values are equal if they point to the same variable or if both have value nil. Pointers to distinct zero-size variables may or may not be equal.
The type *myEvtResp implements EventResponse and so you can register a value of it (the only value, accessible via Get()). You can have a map of type map[EventResponse]bool in which you may store your registered handlers, the interface values as keys, and true as values. Indexing a map with a key that is not in the map yields the zero value of the value type of the map. So if the value type of the map is bool, indexing it with a non-existing key will result in false – telling it's not in the map. Indexing with an already registered EventResponse (an existing key) will result in the stored value – true – telling it's in the map, it's already registered.
You can then simply check whether a handler has already been registered:
type EventResponseSet map[EventResponse]bool
func (ers EventResponseSet) Add(r EventResponse) {
if ers[r] {
// warn here
return
}
ers[r] = true
}
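To see the whole approach end to end, here is a hedged, self-contained sketch; ParamType and RetType are hypothetical placeholders, and everything is collapsed into one package for brevity (in real use myEvtResp would stay unexported in its own package):
package main

import "fmt"

// Hypothetical placeholder types standing in for the real signature.
type ParamType int
type RetType int

type EventResponse interface {
    Do(p ParamType) RetType
}

// Unexported implementation; only the single published value should escape.
type myEvtResp struct{}

func (m *myEvtResp) Do(p ParamType) RetType { return RetType(p) }

var single = &myEvtResp{}

// Get returns the one instance, so callers cannot mint new, distinct values.
func Get() EventResponse { return single }

type EventResponseSet map[EventResponse]bool

func (ers EventResponseSet) Add(r EventResponse) {
    if ers[r] {
        fmt.Println("warning: handler already registered")
        return
    }
    ers[r] = true
}

func main() {
    set := make(EventResponseSet)
    set.Add(Get())
    set.Add(Get()) // detected: prints the warning instead of adding twice
}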
Closing: This may seem a little too much hassle just to avoid duplicated use. I agree, and I wouldn't go for it. But if you want to...
Which functions do you mean to be equal? Comparability is not defined for function types in the language specification. reflect.Value gives you the desired behaviour, more or less:
type EventResponseSet map[reflect.Value]struct{}
set := make(EventResponseSet)
if _, ok := set[reflect.ValueOf(item)]; ok {
// don't add item
} else {
// do add item
set[reflect.ValueOf(item)] = struct{}{}
}
This will treat as equal only items produced by assignment:
//for example
item1 := fmt.Println
item2 := fmt.Println
item3 := item1
//would have all same reflect.Value
But I don't think this behaviour is guaranteed by any documentation.
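A runnable version of that observation, hedged the same way (this reflects current compiler behaviour only, not a documented guarantee):
package main

import (
    "fmt"
    "reflect"
)

func main() {
    item1 := fmt.Println
    item2 := fmt.Println
    item3 := item1
    // With current compilers all three refer to the same underlying func value,
    // so their reflect.Values compare equal as map keys.
    set := map[reflect.Value]struct{}{}
    set[reflect.ValueOf(item1)] = struct{}{}
    _, ok2 := set[reflect.ValueOf(item2)]
    _, ok3 := set[reflect.ValueOf(item3)]
    fmt.Println(ok2, ok3) // true true (observed, not guaranteed)
}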

How can I convert type dynamically in runtime in Golang?

Here is my example:
http://play.golang.org/p/D608cYqtO5
Basically I want to do this:
theType := reflect.TypeOf(anInterfaceValue)
theConvertedValue := anInterfaceValue.(theType)
The notation
value.(T)
is called a type assertion. The type T has to be known at compile time, and it is always a type name; the special form value.(type) is only allowed inside a type switch.
In your playground example SetStruct2 could use a type-switch to handle different types for its second argument:
switch v := value.(type) {
case Config:
// code that uses v as a Config
case int:
// code that uses v as an int
}
You cannot, however, assert an interface value to be of some dynamically determined type (as in your code), because then the compiler could not type-check your program.
Edit:
I don't want to case them one by one if there is another way to do so?
You can use reflection to work type-agnostically. You can then set values dynamically, but it will panic if you perform an operation that is illegal for the given type.
If you want to benefit from the compiler's type checks you'll have to enumerate the different cases.
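For illustration, a minimal sketch of the reflection route mentioned above; the setField helper and the Config fields here are hypothetical, not taken from the question's code:
package main

import (
    "fmt"
    "reflect"
)

// setField assigns value to the exported field named name on the struct that
// target points to. reflect panics if the operation is illegal for the type
// (unknown field, type mismatch, unexported field, non-pointer target).
func setField(target interface{}, name string, value interface{}) {
    f := reflect.ValueOf(target).Elem().FieldByName(name)
    f.Set(reflect.ValueOf(value))
}

type Config struct {
    Host string
    Port int
}

func main() {
    var c Config
    setField(&c, "Host", "localhost")
    setField(&c, "Port", 8080)
    fmt.Printf("%+v\n", c) // {Host:localhost Port:8080}
    // setField(&c, "Port", "oops") // would panic: string is not assignable to int
}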

Go Programming - bypassing access privileges using pointers

Let's say I have the following hierarchy for my project:
fragment/fragment.go
main.go
And in the fragment.go I have the following code, with one getter and no setter:
package fragment
type Fragment struct {
number int64 // private variable - lower case
}
func (f *Fragment) GetNumber() *int64 {
return &f.number
}
And in the main.go I create a Fragment and try to change Fragment.number without a setter:
package main
import (
"fmt"
"myproject/fragment"
)
func main() {
f := new(fragment.Fragment)
fmt.Println(*f.GetNumber()) // prints 0
//f.number = 8 // error - number is private
p := f.GetNumber()
*p = 4 // works. Now f.number is 4
fmt.Println(*f.GetNumber()) // prints 4
}
So by using the pointer, I changed the private variable outside of the fragment package. I understand that in C, for example, pointers help avoid copying large structs/arrays and are supposed to let you change whatever they point to. But I don't quite understand how they are supposed to work with private variables.
So my questions are:
Shouldn't the private variables stay private, no matter how they are accessed?
How is this compared to other languages such as C++/Java? Is it the case there too, that private variables can be changed using pointers outside of the class?
My Background: I know a bit C/C++, rather fluent in Python and new to Go. I learn programming as a hobby so don't know much about technical things happening behind the scenes.
You're not bypassing any access privileges. If you acquire a *T from an imported package, then you can always mutate the pointee as a whole, as in an assignment. The package designer controls what you can get from the package, so the access control is not yours.
The caveat to the above concerns structured types (structs): the above still holds, but access to a particular field is controlled at a finer granularity by the case of the field's name, even when the field is reached through a pointer to the whole structure. A field name must start with an uppercase letter to be visible outside its package.
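As a hedged sketch of that last point (reusing the question's names): if the package author wants number to stay effectively read-only, the getter can return a copy instead of a pointer, and then the caller has nothing to write through:
package fragment

type Fragment struct {
    number int64 // unexported: invisible outside package fragment
}

// GetNumber returns a copy of the field, so callers outside the package
// can read the value but cannot modify f.number through the result.
func (f *Fragment) GetNumber() int64 {
    return f.number
}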
Wrt C++: I believe you can achieve the same with one of the dozens of C++ pointer types. Not sure which one, though.
Wrt Java: No, Java has no pointers. Not really comparable to pointers in Go (C, C++, ...).

Assigning block pointers: differences between Objective-C vs C++ classes

I’ve found that assigning blocks behaves differently with respect to Objective-C class parameters and C++ class parameters.
Imagine I have this simple Objective-C class hierarchy:
@interface Fruit : NSObject
@end
@interface Apple : Fruit
@end
Then I can write stuff like this:
Fruit *(^getFruit)();
Apple *(^getApple)();
getFruit = getApple;
This means that, with respect to Objective-C classes, blocks are covariant in their return type: a block which returns something more specific can be seen as a “subclass” of a block returning something more general. Here, the getApple block, which delivers an apple, can be safely assigned to the getFruit block. Indeed, if used later, it's always safe to receive an Apple * when you're expecting a Fruit *. And, logically, the converse does not work: getApple = getFruit; doesn't compile, because when we really want an apple, we're not happy getting just a fruit.
Similarly, I can write this:
void (^eatFruit)(Fruit *);
void (^eatApple)(Apple *);
eatApple = eatFruit;
This shows that blocks are contravariant in their argument types: a block that can process an argument that is more general can be used where a block that processes an argument that is more specific is needed. If a block knows how to eat a fruit, it will know how to eat an apple as well. Again, the converse is not true, and this will not compile: eatFruit = eatApple;.
This is all good and well — in Objective-C. Now let's try that in C++ or Objective-C++, supposing we have these similar C++ classes:
class FruitCpp {};
class AppleCpp : public FruitCpp {};
class OrangeCpp : public FruitCpp {};
Sadly, these block assignments don't compile any more:
FruitCpp *(^getFruitCpp)();
AppleCpp *(^getAppleCpp)();
getFruitCpp = getAppleCpp; // error!
void (^eatFruitCpp)(FruitCpp *);
void (^eatAppleCpp)(AppleCpp *);
eatAppleCpp = eatFruitCpp; // error!
Clang complains with an “assigning from incompatible type” error. So, with respect to C++ classes, blocks appear to be invariant in the return type and parameter types.
Why is that? Doesn't the same argument I made with Objective-C classes also hold for C++ classes? What am I missing?
This distinction is intentional, due to the differences between the Objective-C and C++ object models. In particular, given a pointer to an Objective-C object, one can convert/cast that pointer to point at a base class or a derived class without actually changing the value of the pointer: the address of the object is the same regardless.
Because C++ allows multiple and virtual inheritance, this is not the case for C++ objects: if I have a pointer to a C++ class and I cast/convert that pointer to point at a base class or a derived class, I may have to adjust the value of the pointer. For example, consider:
class A { int x; };
class B { int y; };
class C : public A, public B { };
B *getC() {
C *c = new C;
return c;
}
Let's say that the new C object in getC() gets allocated at address 0x10. The value of the pointer 'c' is 0x10. In the return statement, that pointer to C needs to be adjusted to point at the B subobject within C. Because B comes after A in C's inheritance list, it will (generally) be laid out in memory after A, so this means adding an offset of 4 bytes (== sizeof(A)) to the pointer, so the returned pointer will be 0x14. Similarly, casting a B* to a C* would subtract 4 bytes from the pointer, to account for B's offset within C. When dealing with virtual base classes, the idea is the same but the offsets are no longer known compile-time constants: they're accessed through the vtable during execution.
Now, consider the effect this has on an assignment like:
C *(^getC)();
B *(^getB)();
getB = getC;
The getC block returns a pointer to a C. To turn it into a block that returns a pointer to a B, we would need to adjust the pointer returned from each invocation of the block by adding 4 bytes. This isn't an adjustment to the block; it's an adjustment to the pointer value returned by the block. One could implement this by synthesizing a new block that wraps the previous block and performs the adjustment, e.g.,
getB = ^B *() { return getC(); };
This is implementable in the compiler, which already introduces similar "thunks" when overriding a virtual function with one that has a covariant return type needing adjustment. However, with blocks it causes an additional problem: blocks allow equality comparison with ==, so to evaluate whether getB == getC, we would have to be able to look through the thunk that would be generated by the assignment getB = getC to compare the underlying block pointers. Again, this is implementable, but it would require a much more heavyweight blocks runtime that can create (uniqued) thunks to perform these adjustments to the return value (as well as to any contravariant parameters). While all of this is technically possible, the cost (in runtime size, complexity, and execution time) outweighs the benefits.
Getting back to Objective-C, the single-inheritance object model never needs any adjustments to the object pointer: there's only a single address to point at a given Objective-C object, regardless of the static type of the pointer, so covariance/contravariance never requires any thunks, and the block assignment is a simple pointer assignment (+ _Block_copy/_Block_release under ARC).
The feature was probably overlooked. There are commits that show Clang people caring about making covariance and contravariance work in Objective-C++ for Objective-C types, but I couldn't find anything for C++ itself. The language specification for blocks doesn't mention covariance or contravariance for either C++ or Objective-C.
