I would like to check whether a variable is a scalar in Julia, such as an Integer, String, or Number, but not an AbstractArray, Tuple, type, struct, etc. Is there a simple method to do this (i.e. isscalar(x))?
The notion of what is, or is not a scalar is under-defined without more context.
Mathematically, a scalar is defined as follows (Wikipedia):
A scalar is an element of a field which is used to define a vector space.
That is to say, you need to define a vector space, based on a field, before you can determine whether something is, or is not, a scalar (relative to that vector space).
For the right vector space, tuples could be a scalar.
Of course we are not looking for a mathematically rigorous definition, just a pragmatic one.
Base it on what broadcasting considers to be scalar
I suggest that the only meaningful way in which a scalar can be defined in Julia is in terms of the behavior of broadcast.
As of Julia 1.0:
using Base.Broadcast
isscalar(x::T) where T = isscalar(T)
isscalar(::Type{T}) where T = BroadcastStyle(T) isa Broadcast.DefaultArrayStyle{0}
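For example (a quick check; the output below is sketched from a Julia 1.x session):
julia> isscalar(1)
true
julia> isscalar([1, 2, 3])
false
julia> isscalar((1, 2))
false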
See the docs for Broadcast.
In Julia 0.7, Scalar is the default broadcast style, so a scalar is basically anything that doesn't have specific broadcasting behavior; this knocks out things like arrays and tuples:
using Base.Broadcast
isscalar(x::T) where T = isscalar(T)
isscalar(::Type{T}) where T = BroadcastStyle(T) isa Broadcast.Scalar
In Julia 0.6 this is a bit messier, but similar:
isscalar(x::T) where T = isscalar(T)
isscalar(::Type{T}) where T = Base.Broadcast._containertype(T)===Any
The advantage of using the Broadcast machinery to determine whether something is scalar, rather than inventing your own rules, is that anyone making a new type that is going to act in a scalar way must already make sure it works correctly with those methods (or rather, in a non-scalar way, since scalar is the default).
Structs are not necessarily non-scalar
That is to say: sometimes structs are scalar and sometimes they are not; it depends on the struct.
Note, however, that these methods do not consider structs to be non-scalar in general.
I think you are mistaken in wanting them to be.
Julia structs are not (necessarily or usually) a collection type.
Consider that BigInt, BigFloat, Complex128, etc. are all defined using structs.
I was tempted to say that having a start method makes a type non-scalar, but that would be incorrect, as start(::Number) is defined.
(This has been debated a few times)
For completeness, I am copying Tasos Papastylianou's answer from the comments here. If all you want to do is distinguish scalars from arrays, you can use:
isa(x, Number)
This will output true if x is a Number (like a float or an int), and output false if x is an Array (vector, matrix, etc.)
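For example:
julia> isa(1.5, Number)
true
julia> isa([1, 2, 3], Number)
false
julia> isa("hello", Number)
false
Note that this also returns false for strings and other non-numeric values, not just arrays.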
I recently found myself needing to capture the notion of whether something was scalar in MultiResolutionIterators.jl.
I found that the broadcasting-based rules from the other answer did not meet my needs.
In particular, I wanted to consider strings as non-scalar.
I defined a trait based on method_exists(start, (T,)), with some exceptions as mentioned, e.g. for Number.
abstract type Scalarness end
struct Scalar <: Scalarness end
struct NotScalar <: Scalarness end
isscalar(::Type{Any}) = NotScalar() # if we don't know the type we can't really know if scalar or not
isscalar(::Type{<:AbstractString}) = NotScalar() # We consider strings to be nonscalar
isscalar(::Type{<:Number}) = Scalar() # We consider Numbers to be scalar
isscalar(::Type{Char}) = Scalar() # We consider characters to be scalar
isscalar(::Type{T}) where T = method_exists(start, (T,)) ? NotScalar() : Scalar()
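In Julia 1.0 and later, start and method_exists were replaced by iterate and hasmethod; a rough sketch of the same trait in current syntax (my adaptation, not the exact code from MultiResolutionIterators.jl) would be:
abstract type Scalarness end
struct Scalar <: Scalarness end
struct NotScalar <: Scalarness end
isscalar(::Type{Any}) = NotScalar() # if we don't know the type we can't really know if scalar or not
isscalar(::Type{<:AbstractString}) = NotScalar() # strings are considered non-scalar
isscalar(::Type{<:Number}) = Scalar() # numbers are considered scalar
isscalar(::Type{Char}) = Scalar() # characters are considered scalar
isscalar(::Type{T}) where T = hasmethod(iterate, Tuple{T}) ? NotScalar() : Scalar()
isscalar(x) = isscalar(typeof(x))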
Something similar is also done by AbstractTrees.jl, which roughly amounts to:
isscalar(x) = !(applicable(start, x) && !isa(x, Integer) && !isa(x, Char) && !isa(x, Task))
Related
Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified? If we want that behavior anyway, we are forced to go through a rather artificial process of wrapping the data in a ridiculous single-element array.
Ada, which had the same kind of limitation, abandoned it in its 2012 redesign to the great satisfaction of its users. A small keyword (like out in Ada) could very well indicate that modifications to a parameter should remain visible at the output.
From my experience in Julia it is useful to understand the difference between a value and a binding.
Values
Each value in Julia has a concrete type and a location in memory. Values can be mutable or immutable. In particular, when you define your own composite type you can decide whether objects of this type should be mutable (mutable struct) or immutable (struct).
Julia also has built-in types, some of which are mutable (e.g. arrays) and others immutable (e.g. numbers, strings). There are design trade-offs between them. From my perspective the two major benefits of immutable values are:
if a compiler works with immutable values it can perform many optimizations to speed up code;
a user can be sure that passing an immutable value to a function will not change it, and such encapsulation can simplify code analysis.
In particular, if you want to wrap an immutable value in a mutable wrapper, the standard way to do it is to use Ref, like this:
julia> x = Ref(1)
Base.RefValue{Int64}(1)
julia> x[]
1
julia> x[] = 10
10
julia> x
Base.RefValue{Int64}(10)
julia> x[]
10
You can pass such values to a function and modify them inside it. Of course Ref introduces a different type, so the method implementation has to be a bit different.
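A minimal sketch of that pattern (the function name increment! is just for illustration):
function increment!(r::Ref{Int})
    r[] += 1 # mutates the value wrapped by the Ref; the caller sees the change
end

x = Ref(1)
increment!(x)
x[] # now 2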
Variables
A variable is a name bound to a value. In general, except for some special cases like:
rebinding a variable from module A in module B;
redefining some constants, e.g. trying to reassign a function name with a non-function value;
rebinding a variable that has a specified type of allowed values with a value that cannot be converted to this type;
you can rebind a variable to point to any value you wish. Rebinding is performed most of the time using = or some special constructs (like in for, let or catch statements).
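As a small illustration of the third special case (a sketch; the function exists only for demonstration), a variable declared with a type can only be rebound to values that can be converted to that type:
function typed_binding_demo()
    x::Int = 1     # x is a local that may only hold values convertible to Int
    x = 2.0        # fine: 2.0 is converted to the Int 2
    x = "three"    # throws at runtime: a String cannot be converted to an Int
end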
Now, getting to the point: a function is passed a value, not a binding. You can modify the binding of a function parameter (in other words, you can rebind the value that the parameter is pointing to), but this parameter is a fresh variable whose scope lies inside the function.
If, for instance, we wanted a call like:
x = 10
f(x)
to change the binding of variable x, that is impossible, because f does not even know of the existence of x. It only gets passed its value. In particular, as I have noted above, adding such functionality would break the rule that module A cannot rebind variables from module B, as f might be defined in a different module than the one where x is defined.
What to do
Actually, from my experience, it is easy enough to work without this feature:
What I typically do is simply return a value from a function and assign it to a variable. In Julia this is very easy thanks to tuple unpacking syntax, e.g. x, y, z = f(x, y, z), where f can be defined e.g. as f(x, y, z) = 2x, 3y, 4z;
You can use macros, which get expanded before code execution and thus can modify the binding of a variable, e.g. macro plusone(x) return esc(:($x = $x+1)) end; now writing y = 100; @plusone(y) will change the binding of y (see the sketch after this list);
Finally you can use Ref as discussed above (or any other mutable wrapper - as you have noted in your question).
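A sketch of the macro approach from the second bullet, as it would look in a REPL session:
julia> macro plusone(x)
           return esc(:($x = $x + 1))
       end
@plusone (macro with 1 method)
julia> y = 100;
julia> @plusone(y)
101
julia> y
101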
"Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified?" asked by Schemer
Your question is based on a mistaken assumption:
Parameters are variables
When you pass things to a function, often those things are values and not variables.
for example:
function double(x::Int64)
    2 * x
end
Now what happens when you call it using
double(4)
What would be the point of the function modifying its parameter x? It's pointless. Furthermore, the function has no idea how it is called.
Furthermore, Julia is built for speed.
A function that modifies its parameters will be hard to optimise because it causes side effects. A side effect is when a procedure/function changes objects/things outside of its scope.
If a function does not modify the variables passed to it as parameters, then you can be sure that:
the variable will not have its value changed
the result of the function can be optimised to a constant
not calling the function will not break the program's behaviour
Those three factors are what make functional languages fast and non-functional languages slow.
Furthermore, when you move into parallel or multi-threaded programming, you absolutely do not want a variable having its value changed without you (the programmer) knowing about it.
"How would you implement with your proposed macro, the function F(x) which returns a boolean value and modifies c by c:= c + 1. F can be used in the following piece of Ada code : c:= 0; While F(c) Loop ... End Loop;" asked by Schemer
I would write
function F(x)
    boolean_result = perform_some_logic()
    return (boolean_result, x + 1)
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end
"Unfortunately no, because, and I should have said that, c has to take again the value 0 when F return the value False (c increases as long the Loop lives and return to 0 when it dies). " said Schemer
Then I would write
function F(x)
    boolean_result = perform_some_logic()
    if boolean_result == true
        return (true, x + 1)
    else
        return (false, 0)
    end
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end
Assume I want to store a vector together with its norm. I expected the corresponding type definition to be straightforward:
immutable VectorWithNorm1{Vec <: AbstractVector}
    vec::Vec
    norm::eltype(Vec)
end
However, this doesn't work as intended:
julia> fieldtype(VectorWithNorm1{Vector{Float64}},:norm)
Any
It seems I have to do
immutable VectorWithNorm2{Vec <: AbstractVector, Eltype}
    vec::Vec
    norm::Eltype
end
and rely on the user to not abuse the Eltype parameter. Is this correct?
PS: This is just a made-up example to illustrate the problem. It is not the actual problem I'm facing.
Any calculations on a type parameter currently do not work
(although I did discuss the issue with Jeff Bezanson at JuliaCon, and he seemed amenable to fixing it).
The problem currently is that the expression for the type of norm is evaluated when the parameterized type is defined, at which point it is called with a TypeVar that is not yet bound to a value; what you really need is for it to be evaluated later, when that parameter is actually bound to create a concrete type.
I've run into this a lot, where I want to do some calculation on the number of bits of a floating-point type, e.g. to calculate and use the number of UInts needed to store an fp value of a particular precision, and then use an NTuple{N,UInt} to hold the mantissa.
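For the question's example, a common workaround (a sketch in current Julia syntax, not from the original discussion) is to keep the second type parameter but compute it in an inner constructor, so users cannot supply an inconsistent value for it:
using LinearAlgebra: norm

struct VectorWithNorm{Vec<:AbstractVector,T}
    vec::Vec
    norm::T
    # The inner constructor is the only way to build the type, so the second
    # parameter is always derived from the stored vector, never user-supplied.
    function VectorWithNorm(vec::Vec) where {Vec<:AbstractVector}
        n = norm(vec)
        new{Vec,typeof(n)}(vec, n)
    end
end

fieldtype(typeof(VectorWithNorm([1.0, 2.0, 3.0])), :norm) # Float64, not Any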
I would like to use a subtype of a function parameter in my function definition. Is this possible? For example, I would like to write something like:
g{T1, T2<:T1}(x::T1, y::T2) = x + y
So that g will be defined for any x::T1 and any y that is a subtype of T1. Obviously, if I knew, for example, that T1 would always be Number, then I could write g{T<:Number}(x::Number, y::T) = x + y and this would work fine. But this question is for cases where T1 is not known until run-time.
Read on if you're wondering why I would want to do this:
A full description of what I'm trying to do would be a bit cumbersome, but what follows is a simplified example.
I have a parameterised type, and a simple method defined over that type:
type MyVectorType{T}
    x::Vector{T}
end
f1!{T}(m::MyVectorType{T}, xNew::T) = (m.x[1] = xNew)
I also have another type, with an abstract super-type defined as follows
abstract MyAbstract
type MyType <: MyAbstract ; end
I create an instance of MyVectorType with vector element type set to MyAbstract using:
m1 = MyVectorType(Array(MyAbstract, 1))
I now want to place an instance of MyType in MyVectorType. I can do this, since MyType <: MyAbstract. However, I can't do this with f1!, since the function definition means that xNew must be of type T, and T will be MyAbstract, not MyType.
The two solutions I can think of to this problem are:
f2!(m::MyVectorType, xNew) = (m.x[1] = xNew)
f3!{T1, T2}(m::MyVectorType{T1}, xNew::T2) = T2 <: T1 ? (m.x[1] = xNew) : error("Oh dear!")
The first is essentially a duck-typing solution. The second performs the appropriate error check in the first step.
Which is preferred? Or is there a third, better solution I am not aware of?
The ability to define a function g{T, S<:T}(::Vector{T}, ::S) has been referred to as "triangular dispatch" as an analogy to diagonal dispatch: f{T}(::Vector{T}, ::T). (Imagine a table with a type hierarchy labelling the rows and columns, arranged such that the super types are to the top and left. The rows represent the element type of the first argument, and the columns the type of the second. Diagonal dispatch will only match the cells along the diagonal of the table, whereas triangular dispatch matches the diagonal and everything below it, forming a triangle.)
This simply isn't implemented yet. It's a complicated problem, especially once you start considering the scoping of T and S outside of function definitions and in the context of invariance. See issue #3766 and #6984 for more details.
So, practically, in this case, I think duck typing is just fine. You're relying upon the implementation of MyVectorType to do the error checking when it assigns its elements, which it should be doing in any case.
The solution in base julia for setting elements of an array is something like this:
f!{T}(A::Vector{T}, x::T) = (A[1] = x)
f!{T}(A::Vector{T}, x) = f!(A, convert(T, x))
Note that it doesn't worry about the type hierarchy or the subtype "triangle." It just tries to convert x to T… which is a no-op if x::S, S<:T. And convert will throw an error if it cannot do the conversion or doesn't know how.
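For example, with the definitions above (output sketched; exact error text varies by version):
julia> A = Integer[1, 2, 3];
julia> f!(A, 4.0) # 4.0 is not an Integer, so the second method converts it
4
julia> f!(A, 4.5) # convert(Integer, 4.5) cannot convert exactly
ERROR: InexactError: ...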
UPDATE: This is now implemented on the latest development version (0.6-dev)! In this case I think I'd still recommend using convert like I originally answered, but you can now define restrictions within the static method parameters in a left-to-right manner.
julia> f!{T1, T2<:T1}(A::Vector{T1}, x::T2) = "success!"
julia> f!(Any[1,2,3], 4.)
"success!"
julia> f!(Integer[1,2,3], 4.)
ERROR: MethodError: no method matching f!(::Array{Integer,1}, ::Float64)
Closest candidates are:
f!{T1,T2<:T1}(::Array{T1,1}, ::T2<:T1) at REPL[1]:1
julia> f!([1.,2.,3.], 4.)
"success!"
This is similar to my previous question, but a bit more complicated.
Before I was defining a type with an associated integer as a parameter, Intp{p}. Now I would like to define a type using a vector as a parameter.
The following is the closest I can write to what I want:
type Extp{g::Vector{T}}
    c::Vector{T}
end
In other words, Extp should be defined with respect to a Vector, g, and I want the contents, c, to be another Vector whose entries are of the same type as the entries of g.
Well, this does not work.
Problem 1: I don't think I can use :: in the type parameter list.
Problem 2: I could work around that by making the types of g and c arbitrary and just making sure the types in the vectors match up in the constructor. But even if I strip everything out and use
type Extp{g}
    c
end
it still doesn't seem to like this. When I try to use it the way I want to,
julia> Extp{[1,1,1]}([0,0,1])
ERROR: type: apply_type: in Extp, expected Type{T<:Top}, got Array{Int64,1}
So, does Julia just not like particular Vectors being associated with types? Does what I'm trying to do only work with integers, like in my Intp question?
EDIT: In the documentation I see that type parameters "can be any type at all (or an integer, actually, although here it’s clearly used as a type)." Does that mean that what I'm asking is impossible, and that that only types and integers work for Type parameters? If so, why? (what makes integers special over other types in Julia in this way?)
In Julia 0.4, you can use any "bitstype" as a parameter of a type. However, a vector is not a bitstype, so this is not going to work. The closest analog is to use a tuple: for example, (3.2, 1.5) is a perfectly valid type parameter.
In a sense vectors (or any mutable object) are antithetical to types, which cannot change at runtime.
Here is the relevant quote:
Both abstract and concrete types can be parameterized by other types and by certain other values (currently integers, symbols, bools, and tuples thereof).
So, your EDIT is correct. Widening this has come up on the Julia issues page (e.g., #5102 and #6081 were two related issues I found with some discussion), so this may change in the future; I'm guessing not in v0.4 though. It'd have to be an immutable type really to make any sense, so not a Vector. I'm not sure I understand your application, but would a Tuple work?
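For example, a sketch of the Extp type with a tuple parameter instead of a vector (written in current struct syntax; the outer constructor is just for illustration):
struct Extp{g, T}
    c::Vector{T}
end

# Outer constructor: require g to be a tuple whose element type matches c's.
Extp(g::NTuple{N,T}, c::Vector{T}) where {N,T} = Extp{g,T}(c)

Extp((1, 1, 1), [0, 0, 1]) # works: a tuple of Ints is a valid type parameter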
In OCaml, we have two kinds of equality comparisons:
x = y and x == y,
So what exactly is the difference between them?
Is that x = y in ocaml just like x.equals(y) in Java?
and x == y just like x == y (comparing the address) in Java?
I don't know exactly how x.equals(y) works in Java. If it does a "deep" comparison, then the analogy is pretty close. One thing to be careful of is that physical equality is a slippery concept in OCaml (and functional languages in general). The compiler and runtime system are going to move values around, and may merge and unmerge pure (non-mutable) values at will. So you should only use == if you really know what you're doing. At some level, it requires familiarity with the implementation (which is something to avoid unless necessary).
The specific guarantees that OCaml makes for == are weak. Mutable values compare as physically equal in the way you would expect (i.e., if mutating one of the two will actually mutate the other also). But for non-mutable values, the only guarantee is that values that compare physically equal (==) will also compare as equal (=). Note that the converse is not true, as sepp2k points out for floating values.
In essence, what the language spec is telling you for non-mutable values is that you can use == as a quick check to decide if two non-mutable values are equal (=). If they compare physically equal, they are equal value-wise. If they don't compare physically equal, you don't know if they're equal value-wise. You still have to use = to decide.
Edit: this answer delves into details of the inner working of OCaml, based on the Obj module. That knowledge isn't meant to be used without extra care (let me emphasis on that very important point once more: don't use it for your program, but only if you wish to experiment with the OCaml runtime). That information is also available, albeit perhaps in a more understandable form in the O'Reilly book on OCaml, available online (pretty good book, though a bit dated now).
The = operator is checking structural equality, whereas == only checks physical equality.
Equality checking is based on the way values are allocated and stored within memory. A runtime value in OCaml roughly fits into one of two categories: boxed or unboxed. The former means that the value is reachable in memory through an indirection, and the latter means that the value is directly accessible.
Since ints (int31 on 32-bit systems, or int63 on 64-bit systems) are unboxed values, both operators behave the same with them. A few other types or values, whose runtime implementations are actually ints, will also see both operators behave the same, like unit (), the empty list [], constants in algebraic datatypes and polymorphic variants, etc.
Once you start playing with more complex values involving structures, like lists, arrays, tuples, records (the C struct equivalent), the difference between these two operators emerges: values within structures will be boxed, unless they can be runtime represented as native ints (1). This necessity arises from how the runtime system must handle values, and manage memory efficiently. Structured values are allocated when constructed from other values, which may be themselves structured values, in which case references are used (since they are boxed).
Because of allocations, it is very unlikely that two values instantiated at different points of a program could be physically equal, although they'd be structurally equal. Each of the fields, or inner elements within the values could be identical, even up to physical identity, but if these two values are built dynamically, then they would end up using different spaces in memory, and thus be physically different, but structurally equal.
The runtime tries to avoid unnecessary allocations, though: for instance, if you have a function that always returns the same value (in other words, if the function is constant), either simple or structured, that function will always return the same physical value (i.e., the same data in memory), so testing the results of two invocations of that function for physical equality will succeed.
One way to observe when the physical equality operator will actually return true is to use the Obj.is_block function on a value's runtime representation (that is to say, the result of Obj.repr on it). This function simply tells whether its parameter's runtime representation is boxed.
A more contrived way is to use the following function:
let phy x : int = Obj.magic (Obj.repr x);;
This function will return an int which is the actual value of the pointer to the value bound to x in memory, if that value is boxed. If you try it on an int literal, you will get the exact same value! That's because ints are unboxed (i.e. the value is stored directly in memory, not through a reference).
Now that we know that boxed values are actually "referenced" values, we can deduce that these values can be modified, even though the language says that they are immutable.
Consider for instance the reference type:
# type 'a ref = {mutable contents : 'a };;
We could define an immutable ref like this:
# type 'a imm = {i : 'a };;
type 'a imm = {i : 'a; }
And then use the Obj.magic function to coerce one type into the other, because structurally, these types will be reduced to the same runtime representation.
For instance:
# let x = { i = 1 };;
val x : int imm = {i = 1}
# let y : int ref = Obj.magic x;;
val y : int ref = {contents = 1}
# y := 2;;
- : unit = ()
# x;;
- : int imm = {i = 2}
There are a few exceptions to this:
If values are objects, then even seemingly structurally identical values will return false on structural comparison:
# let o1 = object end;;
val o1 : < > = <obj>
# let o2 = object end;;
val o2 : < > = <obj>
# o1 = o2;;
- : bool = false
# o1 = o1;;
- : bool = true
Here we see that = reverts to physical equivalence.
If values are functions, you cannot compare them structurally, but physical comparison works as intended.
lazy values may or may not be structurally comparable, depending on whether they have been forced or not (respectively).
# let l1 = lazy (40 + 2);;
val l1 : lazy_t = <lazy>
# let l2 = lazy (40 + 2);;
val l2 : lazy_t = <lazy>
# l1 = l2;;
Exception: Invalid_argument "equal: functional value".
# Lazy.force l1;;
- : int = 42
# Lazy.force l2;;
- : int = 42
# l1 = l2;;
- : bool = true
Module or record values are also comparable if they don't contain any functional values.
In general, I guess that it is safe to say that values which are related to functions, or may hold functions inside are not comparable with =, but may be compared with ==.
You should obviously be very cautious with all this: relying on the implementation details of the runtime is incorrect (note: I jokingly used the word evil in my initial version of this answer, but changed it for fear of it being taken too seriously). As you aptly pointed out in the comments, the behaviour of the JavaScript implementation is different for floats (structurally equivalent in JavaScript, but not in the reference implementation; and what about the Java one?).
(1) If I recall correctly, floats are also unboxed when stored in arrays to avoid a double indirection, but they become boxed once extracted, so you shouldn't see a difference in behaviour with boxed values.
Is that x = y in ocaml just like x.equals(y) in Java?
and x == y just like x == y (comparing the address) in Java?
Yes, that's it. Except that in OCaml you can use = on every kind of value, whereas in Java you can't use equals on primitive types. Another difference is that floating point numbers in OCaml are reference types, so you shouldn't compare them using == (not that it's generally a good idea to compare floating point numbers directly for equality anyway).
So in summary, you basically should always be using = to compare any kind of values.
According to http://rigaux.org/language-study/syntax-across-languages-per-language/OCaml.html, == checks for shallow equality, and = checks for deep equality.