I have a CustomQMLComponent. It has 3 properties. p3 is dependent on p1 & p2. p1 & p2 are set when an instance of CustomQMLComponent is created.
Questions:
By the time p3 is evaluated, will p1 & p2 always have the values set by the caller?
What is the recommended way to set p3, as shown below or as in the commented statement?
CustomQMLComponent.qml:
Item {
    required property string p1
    property bool p2: false
    property int p3: cppContextPropertyObj.slot(p1, p2)
    // Component.onCompleted: p3 = cppContextPropertyObj.slot(p1, p2)
}
main.qml:
CustomQMLComponent {
    p1: "my_string"
    p2: true
}
UPDATE:
p1 and p2 have static value assignments, whereas p3 has a binding value assignment.
As per this old article: https://www.kdab.com/qml-engine-internals-part-2-bindings/, static value assignments happen during creation phase and binding value assignments happen at the end of creation phase.
Case 1:
CustomQMLComponent{}
In this case, based on the above article, p1 & p2 values are set by the time p3 value is set.
Case 2:
CustomQMLComponent {
    p1: "my_string"
    p2: true
}
What happens in this case?
In a more general sense, what happens when properties of a component are set when creating an instance of the component? Are the properties initialized with default values and then overridden by the new instance's values? Or are the properties initialized just once, directly with the default/new values?
This line is not just an assignment, it is also a binding:
property int p3: cppContextPropertyObj.slot(p1, p2)
What that means is that it will be reevaluated every time a property referred to in it (in this case p1 and p2) changes, i.e. whenever their *Changed signals are emitted.
So it doesn't really matter, as long as anything dependent on p3 can handle p3 changing value multiple times, and the slot method can handle the various values of p1 and p2 as they settle.
To see the exact sequence of what will happen in this particular case, add onP1Changed, onP2Changed and onP3Changed event handlers and log their values to the console.
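For instance, a minimal instrumentation sketch (the same properties as above, with the suggested logging handlers added):

Item {
    required property string p1
    property bool p2: false
    property int p3: cppContextPropertyObj.slot(p1, p2)

    onP1Changed: console.log("p1 changed:", p1)
    onP2Changed: console.log("p2 changed:", p2)
    onP3Changed: console.log("p3 changed:", p3)
}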
Response from Qt Support:
As per this old article: https://www.kdab.com/qml-engine-internals-part-2-bindings/, static value assignments happen during creation phase and binding value assignments happen at the end of creation phase.
That is how it should be: literals first, then functions. But it is not actually documented, so in theory it could change. This is also only true for simple literal assignments. Anything even slightly hinting at complexity makes the QML engine postpone the assignment together with all those needing evaluation. For example, this happens if you wrap the value with {} like this: p2: {true}
Case 1:
CustomQMLComponent{}
In this case, based on the above article, p1 & p2 values are set by the time p3 value is set.
The order in which these "static" properties are set is undefined, and the order in which more complex expressions are evaluated is also undefined. So it is best to avoid making assumptions about the order.
Case 2:
CustomQMLComponent {
    p1: "my_string"
    p2: true
}
What happens in this case?
In a more general sense, what happens when properties of a component are set when creating an instance of the component? Are the properties initialized with default values and then overridden by the new instance's values? Or are the properties initialized just once, directly with the default/new values?
Just once. Although the properties do of course have some default value before the value in QML is assigned: the initial value set in the constructor of a C++ class or, in the case of a QML-defined property, the default-constructed value of the type (empty string, 0, false, or null in case it is a QObject* type).
And why this could be important: onXXXChanged signals are handled immediately when they occur, and thus a handler could run before all those "static" assignments are done. Consider for example:
onP1Changed: if (p2) {...} else {...}
The QML engine does not know that the p1 change handler has a dependency on p2, and in case p1 gets assigned before p2, this could take an unexpected path; if the p2 value change is not also explicitly handled properly in this case, it could lead to mismatched state.
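A minimal sketch of the defensive pattern this implies (the react() helper is hypothetical; the point is that both signals re-run the dependent logic, so the undefined assignment order cannot leave stale state):

Item {
    required property string p1
    property bool p2: false

    // Re-run the dependent logic on either change.
    function react() { console.log("p1/p2 now:", p1, p2) }
    onP1Changed: react()
    onP2Changed: react()
}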
Related
In Dave Thomas's book Programming Elixir he states "Elixir enforces immutable data" and goes on to say:
In Elixir, once a variable references a list such as [1,2,3], you know it will always reference those same values (until you rebind the variable).
This sounds like "it won't ever change unless you change it" so I'm confused as to what the difference between mutability and rebinding is. An example highlighting the differences would be really helpful.
Don't think of "variables" in Elixir as variables in imperative languages, "spaces for values". Rather look at them as "labels for values".
Maybe you will understand it better if you look at how variables ("labels") work in Erlang. Whenever you bind a "label" to a value, it remains bound to it forever (scope rules apply here, of course).
In Erlang (where variables start with an uppercase letter) you cannot write this:

V = 1,      % value "1" is now "labelled" "V"
            % wherever you write "1", you can write "V" and vice versa
            % the "label" and its value are interchangeable
V = V + 1,  % you cannot change the label (rebind it)
V = V * 10, % you cannot change the label (rebind it)

instead you must write this:

V1 = 1,       % value "1" is now labelled "V1"
V2 = V1 + 1,  % value "2" is now labelled "V2"
V3 = V2 * 10, % value "20" is now labelled "V3"

As you can see this is very inconvenient, mainly for code refactoring. If you want to insert a new line after the first line, you have to renumber all the V* or write something like "V1a = ...".
So in Elixir you can rebind variables (change the meaning of the "label"), mainly for your convenience:
v = 1 # value "1" is now labelled "v"
v = v+1 # label "v" is changed: now "2" is labelled "v"
v = v*10 # value "20" is now labelled "v"
Summary: In imperative languages, variables are like named suitcases: you have a suitcase named "v". First you put a sandwich in it. Then you put an apple in it (the sandwich is lost, and perhaps eaten by the garbage collector). In Erlang and Elixir, a variable is not a place to put something in; it's just a name/label for a value. In Elixir you can change the meaning of the label. In Erlang you cannot. That's the reason why it doesn't make sense to "allocate memory for a variable" in either Erlang or Elixir: variables do not occupy space, values do. Now perhaps you see the difference clearly.
If you want to dig deeper:
1) Look at how "unbound" and "bound" variables work in Prolog. This is the source of this maybe slightly strange Erlang concept of "variables which do not vary".
2) Note that "=" in Erlang really is not an assignment operator, it's just a match operator! When matching an unbound variable with a value, you bind the variable to that value. Matching a bound variable is just like matching a value it's bound to. So this will yield a match error:
V = 1,
V = 2, % in fact this is matching: 1 = 2
3) This is not the case in Elixir, so in Elixir there must be a special syntax to force matching:
v = 1
v = 2 # rebinding variable to 2
^v = 3 # matching: 2 = 3 -> error
Immutability means that data structures don't change. For example, the function HashSet.new returns an empty set, and as long as you hold on to the reference to that set, it will never become non-empty. What you can do in Elixir, though, is throw away a variable's reference to something and rebind it to a new reference. For example:
s = HashSet.new
s = HashSet.put(s, :element)
s # => #HashSet<[:element]>
What cannot happen is the value under that reference changing without you explicitly rebinding it:
s = HashSet.new
ImpossibleModule.impossible_function(s)
s # => #HashSet<[:element]> will never be returned, instead you always get #HashSet<[]>
Contrast this with Ruby, where you can do something like the following:
s = Set.new
s.add(:element)
s # => #<Set: {:element}>
Erlang, and obviously Elixir, which is built on top of it, embrace immutability. They simply don't allow the value in a certain memory location to change, ever, until the value gets garbage collected or goes out of scope.
Variables aren't the immutable thing. The data they point to is the immutable thing. That's why changing a variable is referred to as rebinding.
You're pointing it at something else, not changing the thing it points to.
x = 1 followed by x = 2 doesn't change the data stored in computer memory where the 1 was to a 2. It puts a 2 in a new place and points x at it.
x is only accessible by one process at a time, so this has no impact on concurrency; and concurrency is the main place where you'd even care whether something is immutable anyway.
Rebinding doesn't change the state of an object at all; the value is still in the same memory location, but its label (variable) now points to another memory location, so immutability is preserved. Rebinding is not available in Erlang; while it is in Elixir, this does not break any constraint imposed by the Erlang VM, thanks to its implementation.
The reasons behind this choice are well explained by José Valim in this gist.
Let's say you had a list
l = [1, 2, 3]
and you had another process that was taking lists and then repeatedly performing "stuff" against them, where changing them during this process would be bad. You might send that list like
send(worker, {:dostuff, l})
Now, your next bit of code might want to update l with more values for further work that's unrelated to what that other process is doing.
l = l ++ [4, 5, 6]
Oh no, now that first process is going to have undefined behavior because you changed the list, right? Wrong.
That original list remains unchanged. What you really did was make a new list based on the old one and rebind l to that new list.
The separate process never has access to l. The data l originally pointed at is unchanged and the other process (presumably, unless it ignored it) has its own separate reference to that original list.
What matters is you can't share data across processes and then change it while another process is looking at it. In a language like Java where you have some mutable types (all primitive types plus references themselves) it would be possible to share a structure/object that contained say an int and change that int from one thread while another was reading it.
In fact, it's possible in Java to partially change a large integer type while it's being read by another thread. Or at least, it used to be; I'm not sure whether they clamped that aspect of things down with the 64-bit transition. Anyway, the point is, you can pull the rug out from under other processes/threads by changing data in a place that both are looking at simultaneously.
That's not possible in Erlang and by extension Elixir. That's what immutability means here.
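A minimal runnable sketch of this scenario (the worker simply echoes back the list it received, to show that the later rebinding of l does not affect it):

l = [1, 2, 3]
parent = self()

# Worker: receive a list and report back exactly what was seen.
worker = spawn(fn ->
  receive do
    {:dostuff, list} -> send(parent, {:seen, list})
  end
end)

send(worker, {:dostuff, l})
l = l ++ [4, 5, 6]  # rebinds l in this process only

receive do
  {:seen, list} -> IO.inspect(list)  # prints [1, 2, 3]
end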
To be a bit more specific, in Erlang (the original language for the VM Elixir runs on) everything was single-assignment immutable variables and Elixir is hiding a pattern Erlang programmers developed to work around this.
In Erlang, if A = 3, then that was going to be A's value for the duration of that variable's existence, until it dropped out of scope and was garbage collected.
This was useful at times (nothing changes after assignment or pattern match, so it is easy to reason about what a function is doing) but also a bit cumbersome if you were doing multiple things to a variable or collection over the course of executing a function.
Code would often look like this:
A = input,
A1 = do_something(A),
A2 = do_something_else(A1),
A3 = more_of_the_same(A2)
This was a bit clunky and made refactoring more difficult than it needed to be. Elixir is doing this behind the scenes, but hiding it from the programmer via macros and code transforms performed by the compiler.
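For comparison, a sketch of the same pipeline in Elixir, where rebinding (or the pipe operator) hides the renumbering; the stage functions are hypothetical stand-ins for the Erlang pseudocode above:

defmodule Sketch do
  # Hypothetical stages mirroring do_something/do_something_else/more_of_the_same.
  def do_something(x), do: x + 1
  def do_something_else(x), do: x * 2
  def more_of_the_same(x), do: x - 3

  def run(input) do
    a = input
    a = do_something(a)
    a = do_something_else(a)
    more_of_the_same(a)
    # or, idiomatically:
    # input |> do_something() |> do_something_else() |> more_of_the_same()
  end
end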
Great discussion here: immutability-in-elixir
The variables really are immutable in the sense that every new rebinding (assignment) is only visible to accesses that come after it. All previous accesses still refer to the old value(s) from the time of their call.
foo = 1
call_1 = fn -> IO.puts(foo) end
foo = 2
call_2 = fn -> IO.puts(foo) end
foo = 3
foo = foo + 1
call_3 = fn -> IO.puts(foo) end
call_1.() #prints 1
call_2.() #prints 2
call_3.() #prints 4
To make it very simple: variables in Elixir are not like containers where you keep adding, removing, or modifying items in the container. Instead, they are like labels attached to a container: when you reassign a variable, it is as simple as peeling the label off one container and placing it on a new container with the expected data in it.
The following &greet function is pure, and can appropriately be marked with the is pure trait.
sub greet(Str:D $name) { say "Hello, $name" }
my $user = get-from-db('USER');
greet($user);
This one, however, is not:
sub greet {
    my $name = get-from-db('USER');
    say "Hello, $name"
}
greet();
What about this one, though?
sub greet(Str:D $name = get-from-db('USER')) { say "Hello, $name" }
greet();
From "inside" the function, it seems pure – when its parameters are bound to the same values, it always produces the same output, without side effects. But from outside the function, it seems impure – when called twice with the same argument list, it can produce different return values. Which perspective does Raku/Rakudo take?
There are at least two strategies a language might take when implementing default values for parameters:
Treat the parameter default value as something that the compiler, upon encountering a call without enough arguments, should emit at the callsite in order to produce the extra argument to pass to the callee. This means that it's possible to support default values for parameters without any explicit support for it in the calling conventions. This also, however, requires that you always know where the call is going at compile time (or at least know it accurately enough to insert the default value, and one can't expect to use different default values in method overrides in subclasses and have it work out).
Have a calling convention powerful enough that the callee can discover that a value was not passed for the parameter, and then compute the default value.
With its dynamic nature, only the second of these really makes sense for Raku, and so that is what it does.
In a language doing strategy 1 it could arguably make sense to mark such a function as pure, insofar as the code that calculates the default lives at each callsite, and so anything doing an analysis and perhaps transformation based upon the purity will already be having to deal with the code that evaluates the default value, and can see that it is not a source of a pure value.
Under strategy 2, and thus Raku, we should understand default values as an implementation detail of the block or routine that has the default in its signature. Thus if the code calculating the default value is impure, then the routine as a whole is impure, and so the is pure trait is not suitable.
More generally, the is pure trait is applicable if for a given argument capture we can always expect the same return value. In the example given, the argument capture \() contradicts this.
An alternative factoring here would be to use multi subs instead of parameter defaults, and to mark only one candidate with is pure.
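A sketch of that factoring (hypothetical, reusing get-from-db from the question; only the candidate whose result depends solely on its arguments carries the trait):

multi sub greet(Str:D $name) is pure {
    say "Hello, $name"
}
multi sub greet() {
    # The zero-argument candidate is the impure one: it consults the database.
    greet(get-from-db('USER'));
}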
When you say that a sub is pure, you are guaranteeing that any given input will always produce the same output. In your last example of sub greet, it looks to me that you cannot guarantee that for the default-value case, as the contents of the database may change, or get-from-db may have side effects.
Of course, if you are sure that the database doesn't change, and there aren't any side-effects, you could still apply is pure to the sub, but why would you be using a database then?
Why would you mark a sub as is pure anyway? Well, it allows the compiler to constant-fold a call to a subroutine at compile time. Take e.g.:
sub foo($a) is pure {
2 * $a
}
say foo(21); # 42
If you look at the code that is generated for this:
$ raku --target=optimize -e 'sub foo($a) is pure { 2 * $a }; say foo(21)'
then you will see this near the end:
│ │ - QAST::IVal(42)
The 42 is the constant folded call for foo(21). So this way the entire call is optimized away, because the sub was marked is pure and the parameter you provided was a constant.
It seems that both ways to create a new object pointer with all-zero member values return a pointer:
type T struct{}
...
t1 := &T{}
t2 := new(T)
So what is the core difference between t1 and t2? Is there anything that new can do that &T{} cannot, or vice versa?
[…] is there anything that "new" can do while &T{} cannot, or vice versa?
I can think of three differences:
The "composite literal" syntax (the T{} part of &T{}) only works for "structs, arrays, slices, and maps" [link], whereas the new function works for any type [link].
For a struct or array type, the new function always generates zero values for its elements, whereas the composite literal syntax lets you initialize some of the elements to non-zero values if you like.
For a slice or map type, the new function always returns a pointer to nil, whereas the composite literal syntax always returns an initialized slice or map. (For maps this is very significant, because you can't add elements to a nil map.) Furthermore, the composite literal syntax can even create a non-empty slice or map.
(The second and third bullet-points are actually two aspects of the same thing — that the new function always creates zero values — but I list them separately because the implications are a bit different for the different types.)
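A small runnable sketch of these differences (the type and field names are mine, for illustration):

package main

import "fmt"

type T struct{ A, B int }

func main() {
    t1 := &T{A: 1}      // composite literal: fields may be initialized
    t2 := new(T)        // new: always all zero values
    fmt.Println(t1, t2) // &{1 0} &{0 0}

    m1 := map[string]int{"x": 1} // initialized map, ready for use
    m2 := new(map[string]int)    // pointer to a nil map
    fmt.Println(m1, *m2 == nil)  // map[x:1] true
}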
For structs and other composites, both are the same.
t1 := &T{}
t2 := new(T)
// Both are the same
You cannot return the address of an unnamed variable initialized to the zero value of other basic types, like int, without using new. You would need to create a named variable and then take its address.
func newInt() *int {
    return new(int)
}

func newInt() *int {
    // return &int{} --> invalid
    var dummy int
    return &dummy
}
See ruakh's answer. I want to point out some of the internal implementation details, though. You should not make use of them in production code, but they help illuminate what really happens behind the scenes, in the Go runtime.
Essentially, a slice is represented by three values. The reflect package exports a type, SliceHeader:
SliceHeader is the runtime representation of a slice. It cannot be used safely or portably and its representation may change in a later release. Moreover, the Data field is not sufficient to guarantee the data it references will not be garbage collected, so programs must keep a separate, correctly typed pointer to the underlying data.
type SliceHeader struct {
    Data uintptr
    Len  int
    Cap  int
}
If we use this to inspect a variable of type []T (for any type T), we can see the three parts: the pointer to the underlying array, the length, and the capacity. Internally, a slice value v always has all three of these parts. There's a general condition that I think should hold, and if you don't use unsafe to break it, it seems by inspection that it will hold (based on limited testing anyway):
either the Data field is not zero (in which case Len and Cap can but need not be nonzero), or
the Data field is zero (in which case the Len and Cap should both be zero).
That slice value v is nil if the Data field is zero.
By using the unsafe package, we can break it deliberately (and then put it all back—and hopefully nothing goes wrong while we have it broken) and thus inspect the pieces. When this code on the Go Playground is run (there's a copy below as well), it prints:
via &literal: base of array is 0x1e52bc; len is 0; cap is 0.
Go calls this non-nil.
via new: base of array is 0x0; len is 0; cap is 0.
Go calls this nil even though we clobbered len() and cap()
Making it non-nil by unsafe hackery, we get [42] (with cap=1).
after setting *p1=nil: base of array is 0x0; len is 0; cap is 0.
Go calls this nil even though we clobbered len() and cap()
Making it non-nil by unsafe hackery, we get [42] (with cap=1).
The code itself is a bit long so I have left it to the end (or use the above link to the Playground). But it shows that the actual p == nil test in the source compiles to just an inspection of the Data field.
When you do:
p2 := new([]int)
the new function actually allocates only the slice header. It sets all three parts to zero and returns the pointer to the resulting header. So *p2 has three zero fields in it, which makes it a correct nil value.
On the other hand, when you do:
p1 := &[]int{}
the Go compiler builds an empty array (of size zero, holding zero ints) and then builds a slice header: the pointer part points to the empty array, and the length and capacity are set to zero. Then p1 points to this header, with the non-nil Data field. A later assignment, *p1 = nil, writes zeros into all three fields.
Let me repeat this with boldface: these are not promised by the language specification, they're just the actual implementation in action.
Maps work very similarly. A map variable is actually a pointer to a map header. The details of map headers are even less accessible than those of slice headers: there is no reflect type for them. The actual implementation is viewable here under type hmap (note that it is not exported).
What this means is that m2 := new(map[T1]T2) really allocates only the map variable itself, internally a pointer, and sets that pointer to nil. There is no actual map! The new function returns a pointer to this nil map, so *m2 is nil. Likewise var m1 map[T1]T2 just sets a simple pointer value in m1 to nil. But m3 := map[T1]T2{} allocates an actual hmap structure, fills it in, and makes m3 point to it. We can once again peek behind the curtain on the Go Playground, with code that is not guaranteed to work tomorrow, to see this in effect.
As someone writing Go programs, you don't need to know any of this. But if you have worked with lower-level languages (assembly and C for instance), these explain a lot. In particular, these explain why you cannot insert into a nil map: the map variable itself holds a pointer value, and until the map variable itself has a non-nil pointer to a (possibly empty) map-header, there is no way to do the insertion. An insertion could allocate a new map and insert the data, but the map variable wouldn't point to the correct hmap header object.
(The language authors could have made this work by using a second level of indirection: a map variable could be a pointer pointing to the variable that points to the map header. Or they could have made map variables always point to a header, and made new actually allocate a header, the way make does; then there would never be a nil map. But they didn't do either of these, and we get what we get, which is fine: you just need to know to initialize the map.)
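A short sketch of the practical consequence (the names are mine):

package main

import "fmt"

func main() {
    var m1 map[string]int // nil map: reads are fine, writes panic
    fmt.Println(m1 == nil, len(m1), m1["x"]) // true 0 0
    // m1["x"] = 1 // would panic: assignment to entry in nil map

    m3 := map[string]int{} // initialized empty map
    m3["x"] = 1            // fine
    fmt.Println(m3)        // map[x:1]
}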
Here's the slice inspector. (Use the playground link to view the map inspector: given that I had to copy hmap's definition out of the runtime, I expect it to be particularly fragile and not worth showing. The slice header's structure seems far less likely to change over time.)
package main

import (
    "fmt"
    "reflect"
    "unsafe"
)

func main() {
    p1 := &[]int{}
    p2 := new([]int)
    show("via &literal", *p1)
    show("\nvia new", *p2)
    *p1 = nil
    show("\nafter setting *p1=nil", *p1)
}

// This demonstrates that given a slice (p), the test
//     if p == nil
// is really a test on p.Data. If it's zero (nil),
// the slice as a whole is nil. If it's nonzero, the
// slice as a whole is non-nil.
func show(what string, p []int) {
    pp := unsafe.Pointer(&p)
    sh := (*reflect.SliceHeader)(pp)
    fmt.Printf("%s: base of array is %#x; len is %d; cap is %d.\n",
        what, sh.Data, sh.Len, sh.Cap)
    olen, ocap := len(p), cap(p)
    sh.Len, sh.Cap = 1, 1 // evil
    if p == nil {
        fmt.Println("   Go calls this nil even though we clobbered len() and cap()")
        answer := 42
        sh.Data = uintptr(unsafe.Pointer(&answer))
        fmt.Printf("   Making it non-nil by unsafe hackery, we get %v (with cap=%d).\n",
            p, cap(p))
        sh.Data = 0 // restore nil-ness
    } else {
        fmt.Println("Go calls this non-nil.")
    }
    sh.Len, sh.Cap = olen, ocap // undo evil
}
Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified? This forces us, if we want that behavior anyway, to go through a very artificial process, representing the data as a ridiculous one-element array.
Ada, which had the same kind of limitation, abandoned it in its 2012 redesign, to the great satisfaction of its users. A small keyword (like out in Ada) could very well indicate that modifications to a parameter should be kept after the call returns.
From my experience in Julia it is useful to understand the difference between a value and a binding.
Values
Each value in Julia has a concrete type and location in memory. Values can be mutable or immutable. In particular, when you define your own composite type, you can decide if objects of this type should be mutable (mutable struct) or immutable (struct).
Of course Julia has built-in types, some of which are mutable (e.g. arrays) and others immutable (e.g. numbers, strings), and of course there are design trade-offs between them. From my perspective, two major benefits of immutable values are:
if a compiler works with immutable values, it can perform many optimizations to speed up code;
a user can be sure that passing an immutable value to a function will not change it, and such encapsulation can simplify code analysis.
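For instance, a minimal sketch of the struct vs mutable struct distinction (the type names are mine):

struct Point          # immutable composite type
    x::Int
end

mutable struct MPoint # mutable composite type
    x::Int
end

p = Point(1)
# p.x = 2             # ERROR: immutable struct of type Point cannot be changed
m = MPoint(1)
m.x = 2               # fine: fields of a mutable struct can be updated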
In particular, if you want to wrap an immutable value in a mutable container, a standard way to do it is to use Ref, like this:
julia> x = Ref(1)
Base.RefValue{Int64}(1)
julia> x[]
1
julia> x[] = 10
10
julia> x
Base.RefValue{Int64}(10)
julia> x[]
10
You can pass such values to a function and modify them inside. Of course Ref introduces a different type, so the method implementation has to be a bit different.
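For example, a minimal sketch (the name increment! is mine):

function increment!(r::Ref{Int})
    r[] += 1  # mutates the wrapper's contents; visible to the caller
    return r
end

x = Ref(1)
increment!(x)
x[]  # 2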
Variables
A variable is a name bound to a value. In general, except for some special cases like:
rebinding a variable from module A in module B;
redefining some constants, e.g. trying to reassign a function name with a non-function value;
rebinding a variable that has a specified type of allowed values with a value that cannot be converted to this type;
you can rebind a variable to point to any value you wish. Rebinding is performed most of the time using = or some special constructs (like in for, let or catch statements).
Now - getting to the point - a function is passed a value, not a binding. You can modify the binding of a function parameter (in other words: you can rebind the value that a parameter points to), but this parameter is a fresh variable whose scope lies inside the function.
If, for instance, we wanted a call like:
x = 10
f(x)
to change the binding of variable x, that is impossible, because f does not even know of the existence of x. It only gets passed its value. In particular - as I have noted above - adding such functionality would break the rule that module A cannot rebind variables from module B, as f might be defined in a different module than the one where x is defined.
What to do
Actually, from my experience, it is easy enough to work without this feature:
What I typically do is simply return a value from a function that I assign to a variable. In Julia it is very easy because of tuple unpacking syntax like e.g. x,y,z = f(x,y,z), where f can be defined e.g. as f(x,y,z) = 2x,3y,4z;
You can use macros, which get expanded before code execution and thus can modify the binding of a variable, e.g. macro plusone(x) return esc(:($x = $x+1)) end; now writing y = 100; @plusone(y) will change the binding of y;
Finally you can use Ref as discussed above (or any other mutable wrapper - as you have noted in your question).
"Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified?" asked by Schemer
Your question is wrong because you assume the wrong things.
Parameters are variables
When you pass things to a function, often those things are values and not variables.
for example:
function double(x::Int64)
    2 * x
end
Now what happens when you call it using
double(4)
What would be the point of the function modifying its parameter x? It's pointless. Furthermore, the function has no idea how it is called.
Furthermore, Julia is built for speed.
A function that modifies its parameter will be hard to optimize because it causes side effects. A side effect is when a procedure/function changes objects/things outside of its scope.
If a function does not modify a variable that is passed to it as a parameter, then you can be safe in knowing that:
the variable will not have its value changed
the result of the function can be optimised to a constant
not calling the function will not break the program's behaviour
Those three factors above are what make FUNCTIONAL languages fast and NON-FUNCTIONAL languages slow.
Furthermore, when you move into parallel programming or multi-threaded programming, you absolutely DO NOT WANT a variable having its value changed without you (the programmer) knowing about it.
"How would you implement, with your proposed macro, the function F(x) which returns a boolean value and modifies c by c := c + 1? F can be used in the following piece of Ada code: c := 0; While F(c) Loop ... End Loop;" asked by Schemer
I would write
function F(x)
    boolean_result = perform_some_logic()
    return (boolean_result, x + 1)
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end

"Unfortunately no, because - and I should have said that - c has to take the value 0 again when F returns the value False (c increases as long as the loop lives and returns to 0 when it dies)," said Schemer.

Then I would write

function F(x)
    boolean_result = perform_some_logic()
    if boolean_result == true
        return (true, x + 1)
    else
        return (false, 0)
    end
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end
I have a question that may be simple and/or redundant, but I could not find an answer to my version of it. I hope someone will answer without flaming me.
I have two pointers p1 and p2 as follow:
1. p1 is created using new (p1 = new structObject;)
2. p2 is a copy of p1 (p2 = p1).
What is the effect of deleting any of the two pointers (e.g., delete p1)?
In other words, is it safe to use p2 after deleting p1?
No. It is not safe to access the object via p2. The memory is freed and any access to it (from any pointer) is undefined behavior.
Both p1 and p2 hold the same address. Since the object allocated at this address was deleted (no matter which pointer was used), the standard states that accessing it leads to undefined behavior, hence it is not safe to access it through any pointer.
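A minimal sketch of the situation, plus one common remedy (the value member is mine, for illustration):

#include <memory>

struct structObject { int value = 0; };

int main() {
    structObject* p1 = new structObject;
    structObject* p2 = p1;  // copies the pointer, not the object
    delete p1;              // destroys the one object both pointers refer to
    // p2->value = 1;       // undefined behavior: p2 now dangles
    p1 = nullptr;           // nulling p1 does not help p2

    // One common remedy is shared ownership via std::shared_ptr:
    auto s1 = std::make_shared<structObject>();
    auto s2 = s1;           // the object lives while either handle exists
    s1.reset();             // releases only s1's share
    s2->value = 1;          // still safe
}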