What do U and R stand for in UIO, RIO, and URIO? - zio

ZIO provides these convenient aliases for common ZIO usages:
UIO[A] // Equivalent to ZIO[Any, Nothing, A]
RIO[R, A] // Equivalent to ZIO[R, Throwable, A]
URIO[R, A] // Equivalent to ZIO[R, Nothing, A]
What do U, R, and UR stand for in these aliases? It seems like R is maybe a reference to the fact that the first type parameter of a ZIO is named R. Is that true?

U stands for “unexceptional”, as in it isn’t expected to produce exceptions in the error channel (thus the Nothing in the error channel of the underlying ZIO).
R stands for “Resource” or “Requirement”. I personally prefer to think of it as the latter because it fits better with my mental model of not being able to run an effect until all the requirements are satisfied. And yes, it is the same R as the environment type, the first type parameter of ZIO[R, E, A].

Related

Puzzling results for Julia typeof

I am puzzled by the following results of typeof in the Julia 1.0.0 REPL:
# This makes sense.
julia> typeof(10)
Int64
# This surprised me.
julia> typeof(function)
ERROR: syntax: unexpected ")"
# No answer at all for return example and no error either.
julia> typeof(return)
# In the next two examples the REPL returns the input code.
julia> typeof(in)
typeof(in)
julia> typeof(typeof)
typeof(typeof)
# The "for" word returns an error like the "function" word.
julia> typeof(for)
ERROR: syntax: unexpected ")"
The Julia 1.0.0 documentation says for typeof
"Get the concrete type of x."
The typeof(function) example is the one that really surprised me. I expected a function to be a first-class object in Julia and have a type. I guess I need to understand types in Julia.
Any suggestions?
Edit
Per some comment questions below, here is an example based on a small function:
julia> function test() return "test"; end
test (generic function with 1 method)
julia> test()
"test"
julia> typeof(test)
typeof(test)
Based on this example, I would have expected typeof(test) to return generic function, not typeof(test).
To be clear, I am not a hardcore user of the Julia internals. What follows is an answer designed to be (hopefully) an intuitive explanation of what functions are in Julia for the non-hardcore user. I do think this (very good) question could also benefit from a more technical answer provided by one of the more core developers of the language. Also, this answer is longer than I'd like, but I've used multiple examples to try and make things as intuitive as possible.
As has been pointed out in the comments, function itself is a reserved keyword, not an actual function per se, and so is orthogonal to the actual question. This answer is intended to address your edit to the question.
Since Julia v0.6+, Function is an abstract supertype, much in the same way that Number is an abstract supertype. All functions, e.g. mean, user-defined functions, and anonymous functions, are subtypes of Function, in the same way that Float64 and Int are subtypes of Number.
This structure is deliberate and has several advantages.
Firstly, for reasons I don't fully understand, structuring functions in this way was the key to allowing anonymous functions in Julia to run just as fast as in-built functions from Base. See here and here as starting points if you want to learn more about this.
Secondly, because each function is its own subtype, you can now dispatch on specific functions. For example:
f1(f::T, x) where {T<:typeof(mean)} = f(x)
and:
f1(f::T, x) where {T<:typeof(sum)} = f(x) + 1
are different dispatch methods for the function f1.
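For example, a quick sketch of how these two methods behave (note that on Julia 1.0 mean lives in the Statistics standard library, so it needs a using Statistics first):
using Statistics

f1(f::T, x) where {T<:typeof(mean)} = f(x)
f1(f::T, x) where {T<:typeof(sum)} = f(x) + 1

f1(mean, [1.0, 2.0, 3.0])  # 2.0, dispatches to the typeof(mean) method
f1(sum, [1.0, 2.0, 3.0])   # 7.0, dispatches to the typeof(sum) method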
So, given all this, why does, e.g., typeof(sum) return typeof(sum), especially given that typeof(Float64) returns DataType? The issue here is that, roughly speaking, from a syntactical perspective, sum needs to serve two purposes simultaneously. It needs to be a value, like e.g. 1.0, albeit one that is used to call the sum function on some input. But it also needs to be a type name, like Float64.
Obviously, it can't do both at the same time. So sum on its own behaves like a value. You can write f = sum ; f(randn(5)) to see how it behaves like a value. But we also need some way of representing the type of sum that will work not just for sum, but for any user-defined function, and any anonymous function. The developers decided to go with the (arguably) simplest option and have the type of sum print literally as typeof(sum), hence the behaviour you observe. Similarly if I write f1(x) = x ; typeof(f1), that will also return typeof(f1).
Anonymous functions are a bit more tricky, since they are not named as such. What should we do for typeof(x -> x^2)? What actually happens is that when you build an anonymous function, it is stored as a temporary global variable in the module Main and given a number that serves as its type for lookup purposes. So if you write f = (x -> x^2), you'll get something back like #3 (generic function with 1 method), and typeof(f) will return something like getfield(Main, Symbol("##3#4")), where Symbol("##3#4") is the temporary type of this anonymous function stored in Main. A side effect of this is that if you write code that keeps arbitrarily generating the same anonymous function over and over, you will eventually overflow memory, since each one is actually stored as a separate global variable with its own type. However, this does not prevent you from doing something like for n = 1:largenumber ; findall(y -> y > 1.0, x) ; end inside a function, since in that case the anonymous function is only compiled once, at compile time.
Relating all of this back to the Function supertype, you'll note that typeof(sum) <: Function returns true, showing that the type of sum, aka typeof(sum) is indeed a subtype of Function. And note also that typeof(typeof(sum)) returns DataType, in much the same way that typeof(typeof(1.0)) returns DataType, which shows how sum actually behaves like a value.
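In a 1.0 REPL session that looks like the following (a quick sanity check rather than anything new):
julia> typeof(sum) <: Function
true

julia> typeof(typeof(sum))
DataType

julia> typeof(typeof(1.0))
DataType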
Now, given everything I've said, all the examples in your question now make sense. typeof(function) and typeof(for) return errors as they should, since function and for are reserved syntax. typeof(typeof) and typeof(in) correctly return (respectively) typeof(typeof), and typeof(in), since typeof and in are both functions. Note of course that typeof(typeof(typeof)) returns DataType.

dealing with types in kwargs in Julia

How can I use kwargs in a Julia function and declare their types for speed?
function f(x::Float64; kwargs...)
    kwargs = Dict(kwargs)
    if haskey(kwargs, :c)
        c::Float64 = kwargs[:c]
    else
        c::Float64 = 1.0
    end
    return x^2 + c
end
f(0.0, c=10.0)
yields:
ERROR: LoadError: syntax: multiple type declarations for "c"
Of course I can define the function as f(x::Float64, c::Float64=1.0) to achieve the result, but I have MANY optional arguments with default values to pass, so I'd prefer to use kwargs.
Thanks.
Related post
As noted in another answer, this really only matters if you're going to have a type instability. If you do, the answer is to layer your functions. Have a top layer which does type checking and all sorts of setup, and then call a function which uses dispatch to be fast. For example,
function f(x::Float64; kwargs...)
    kwargs = Dict(kwargs)
    if haskey(kwargs, :c)
        c = kwargs[:c]
    else
        c = 1.0
    end
    return _f(x,c)
end
_f(x,c) = x^2 + c
If most of your time is spent in the inner function, then this will be faster (it might not be for very simple functions). This also allows for very general usage, where you can have a keyword argument default to nothing and do an if ... === nothing check that sets up a complicated default, without having to worry about type stability, since it will be shielded from the inner function.
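A minimal sketch of that pattern (the names solve, _solve, and tol here are just illustrative, not from any particular package):
function solve(x::Float64; tol = nothing, kwargs...)
    # Complicated default computed only when the caller didn't supply one;
    # the possibly type-unstable branch lives in this thin outer layer.
    if tol === nothing
        tol = sqrt(eps(typeof(x)))
    end
    return _solve(x, tol)  # the inner function gets concretely typed arguments
end

_solve(x, tol) = abs(x) < tol ? zero(x) : x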
This kind of high-level type-checking wrapper above a performance-sensitive inner function is used a lot in DifferentialEquations.jl. Check out the high-level wrapper for the SDE solvers, which led to nice speedups by ensuring type stability (the inner function is sde_solve), or check out solve for ODEProblem; it's much more complex since it handles conversions to different packages, but it's the same idea.
A simpler answer for small examples like yours may be possible after this PR merges.
To fix some confusion, here's a declaration form:
function f(x::Float64; kwargs...)
    local c::Float64 # Ensures the type of `c` will be `Float64`
    kwargs = Dict(kwargs)
    if haskey(kwargs, :c)
        c = float(kwargs[:c])
    else
        c = 1.0
    end
    return x^2 + c
end
This will force anything saved to c to be converted to a Float64 (or error if it can't be), giving type stability, but it is not as general a solution. Which form you use really depends on what you're doing.
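For instance, with the declaration form above:
julia> f(2.0, c = 3)  # the Int keyword argument is converted to Float64
7.0

julia> f(2.0)         # falls back to the default c = 1.0
5.0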
Lastly, there's also the type assert, as @TotalVerb showed:
function f(x::Float64; c::Float64=1.0, kwargs...)
    return x^2 + c
end
That's clean, or you could assert in the function:
function f(x::Float64; kwargs...)
    kwargs = Dict(kwargs)
    if haskey(kwargs, :c)
        c = float(kwargs[:c])::Float64
    else
        c = 1.0
    end
    return x^2 + c
end
which will cause conversions only on the lines where the assertion occurs (i.e. the @TotalVerb form won't dispatch, so you can't make another method with c::Int, and it will only assert (convert) when the keyword arg is first read in).
Summary
The first solution will dispatch to be type stable in _f no matter what type the user makes c, and so if _f is a long calculation, this will get pretty much optimal performance, but for really quick calls it will have dispatch overhead.
The second solution will fix any type stability by forcing anything you set c to be a Float64 (it will try to convert, and if it can't, error). Thus this gets speed by forcing type stability, or erroring.
The assert in the keyword spot (@TotalVerb's answer) is the cleanest, but it won't auto-convert later, so you could get a type instability (if you don't accidentally convert it later, then you have type stability, types can be inferred, and you'll get optimal performance), and you can't extend it to cases where the function has c passed in as other types (no dispatch).
The last solution is pretty much the same as 3, except not as nice. I wouldn't recommend it. If you're doing something complicated with asserts, you likely are designing something wrong or really want to do something like the first (dispatch in a longer function call which is type stable).
But note that dispatch with version 3 may be fixed in the near future, which would allow you to have a different function with c::Float64 and c::Int (if necessary). Hopefully your solution is in here somewhere.
Note that declaring types does not give you increased performance; you may wish to relax the type constraints on x and c for your code to be more generic. Anyway, this is probably what you want:
function f(x::Float64; c::Float64=1.0, kwargs...)
    return x^2 + c
end
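Called like the original example in the question, this version just works:
julia> f(0.0, c = 10.0)
10.0

julia> f(2.0)  # default c = 1.0
5.0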
See the keyword arguments section of the manual.

How can I dispatch on traits relating two types, where the second type that co-satisfies the trait is uniquely determined by the first?

Say I have a Julia trait that relates to two types: one type is a sort of "base" type that may satisfy a sort of partial trait, the other is an associated type that is uniquely determined by the base type. (That is, the relation from BaseType -> AssociatedType is a function.) Together, these types satisfy a composite trait that is the one of interest to me.
For example:
using Traits
@traitdef IsProduct{X} begin
    isnew(X) -> Bool
    coolness(X) -> Float64
end

@traitdef IsProductWithMeasurement{X,M} begin
    @constraints begin
        istrait(IsProduct{X})
    end
    measurements(X) -> M
    #Maybe some other stuff that dispatches on (X,M), e.g.
    #fits_in(X,M) -> Bool
    #how_many_fit_in(X,M) -> Int64
    #But I don't want to implement these now
end
Now here are a couple of example types. Please ignore the particulars of the examples; they are just meant as MWEs and there is nothing relevant in the details:
type Rope
    color::ASCIIString
    age_in_years::Float64
    strength::Float64
    length::Float64
end

type Paper
    color::ASCIIString
    age_in_years::Int64
    content::ASCIIString
    width::Float64
    height::Float64
end
function isnew(x::Rope)
    (x.age_in_years < 10.0)::Bool
end

function coolness(x::Rope)
    if x.color=="Orange"
        return 2.0::Float64
    elseif x.color!="Taupe"
        return 1.0::Float64
    else
        return 0.0::Float64
    end
end

function isnew(x::Paper)
    (x.age_in_years < 1.0)::Bool
end

function coolness(x::Paper)
    (x.content=="StackOverflow Answers" ? 1000.0 : 0.0)::Float64
end
Since I've defined these functions, I can do
@assert istrait(IsProduct{Rope})
@assert istrait(IsProduct{Paper})
And now if I define
function measurements(x::Rope)
    (x.length)::Float64
end

function measurements(x::Paper)
    (x.height,x.width)::Tuple{Float64,Float64}
end
Then I can do
@assert istrait(IsProductWithMeasurement{Rope,Float64})
@assert istrait(IsProductWithMeasurement{Paper,Tuple{Float64,Float64}})
So far so good; these run without error. Now, what I want to do is write a function like the following:
@traitfn function get_measurements{X,M;IsProductWithMeasurement{X,M}}(similar_items::Array{X,1})
    all_measurements = Array{M,1}(length(similar_items))
    for i in eachindex(similar_items)
        all_measurements[i] = measurements(similar_items[i])::M
    end
    all_measurements::Array{M,1}
end
Generically, this function is meant to be an example of "I want to use the fact that I, as the programmer, know that BaseType is always associated to AssociatedType to help the compiler with type inference. I know that whenever I do a certain task [in this case, get_measurements, but generically this could work in a bunch of cases] then I want the compiler to infer the output type of that function in a consistently patterned way."
That is, e.g.
do_something_that_makes_arrays_of_assoc_type(x::BaseType)
will always spit out Array{AssociatedType}, and
do_something_that_makes_tuples(x::BaseType)
will always spit out Tuple{Int64,BaseType,AssociatedType}.
AND, one such relationship holds for all pairs of <BaseType,AssociatedType>; e.g. if BatmanType is the base type to which RobinType is associated, and SupermanType is the base type to which LexLutherType is always associated, then
do_something_that_makes_tuple(x::BatmanType)
will always output Tuple{Int64,BatmanType,RobinType}, and
do_something_that_makes_tuple(x::SupermanType)
will always output Tuple{Int64,SupermanType,LexLutherType}.
So, I understand this relationship, and I want the compiler to understand it for the sake of speed.
Now, back to the function example. If this makes sense, you will have realized that while the function definition I gave as an example is 'correct' in the sense that it satisfies this relationship and does compile, it is un-callable because the compiler doesn't understand the relationship between X and M, even though I do. In particular, since M doesn't appear in the method signature, there is no way for Julia to dispatch on the function.
So far, the only thing I have thought to do to solve this problem is to create a sort of workaround where I "compute" the associated type on the fly, and I can still use method dispatch to do this computation. Consider:
function get_measurement_type_of_product(x::Rope)
    Float64
end

function get_measurement_type_of_product(x::Paper)
    Tuple{Float64,Float64}
end

@traitfn function get_measurements{X;IsProduct{X}}(similar_items::Array{X,1})
    M = get_measurement_type_of_product(similar_items[1]::X)
    all_measurements = Array{M,1}(length(similar_items))
    for i in eachindex(similar_items)
        all_measurements[i] = measurements(similar_items[i])::M
    end
    all_measurements::Array{M,1}
end
Then indeed this compiles and is callable:
julia> get_measurements(Array{Rope,1}([Rope("blue",1.0,1.0,1.0),Rope("red",2.0,2.0,2.0)]))
2-element Array{Float64,1}:
1.0
2.0
But this is not ideal, because (a) I have to redefine this map each time, even though I feel as though I already told the compiler about the relationship between X and M by making them satisfy the trait, and (b) as far as I can guess--maybe this is wrong; I don't have direct evidence for this--the compiler won't necessarily be able to optimize as well as I want, since the relationship between X and M is "hidden" inside the return value of the function call.
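For what it's worth, essentially the same workaround can be written so that it dispatches on the type itself rather than on an instance, in the spirit of Base.eltype (measurement_type and get_measurements2 are just names made up for illustration); it avoids peeking at similar_items[1], but it has exactly the same drawbacks (a) and (b):
#Type-level version of the same mapping, so no instance is needed
measurement_type(::Type{Rope}) = Float64
measurement_type(::Type{Paper}) = Tuple{Float64,Float64}

@traitfn function get_measurements2{X;IsProduct{X}}(similar_items::Array{X,1})
    M = measurement_type(X)
    all_measurements = Array{M,1}(length(similar_items))
    for i in eachindex(similar_items)
        all_measurements[i] = measurements(similar_items[i])::M
    end
    all_measurements::Array{M,1}
end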
One last thought: if I had the ability, what I would ideally do is something like this:
@traitdef IsProduct{X} begin
    isnew(X) -> Bool
    coolness(X) -> Float64
    ∃ ! M s.t. measurements(X) -> M
end
and then have some way of referring to the type that uniquely witnesses the existence relationship, so e.g.
@traitfn function get_measurements{X;IsProduct{X},IsWitnessType{IsProduct{X},M}}(similar_items::Array{X,1})
    all_measurements = Array{M,1}(length(similar_items))
    for i in eachindex(similar_items)
        all_measurements[i] = measurements(similar_items[i])::M
    end
    all_measurements::Array{M,1}
end
because this would be somehow dispatchable.
So: what is my specific question? I am asking, given that you presumably by this point understand that my goals are to

1. have my code exhibit this sort of structure generically, so that I can effectively repeat this design pattern across a lot of cases and then program in the abstract at the high level of X and M, and
2. do (1) in such a way that the compiler can still optimize to the best of its ability / is as aware of the relationship among types as I, the coder, am,
then, how should I do this? I think the answer is
1. Use Traits.jl
2. Do something pretty similar to what you've done so far
3. Also do ____some clever thing____ that the answerer will indicate,
but I'm open to the idea that in fact the correct answer is
1. Abandon this approach, you're thinking about the problem the wrong way
2. Instead, think about it this way: ____MWE____
I'd also be perfectly satisfied by answers of the form
What you are asking for is a "sophisticated" feature of Julia that is still under development, and is expected to be included in v0.x.y, so just wait...
and I'm less enthusiastic about (but still curious to hear) an answer such as
Abandon Julia; instead use the language ________ that is designed for this type of thing
I also think this might be related to the question of typing Julia's function outputs, which as I take it is also under consideration, though I haven't been able to puzzle out the exact representation of this problem in terms of that one.

What functional programming terminology distinguishes between avoiding modifying variables and objects?

In functional programming, what terminology is used to distinguish between avoiding modifying what a variable refers to, and avoiding modifying an object itself?
For example, in Ruby,
name += title
avoids modifying the object previously referred to by name, instead creating a new object, but regrettably makes name refer to the new object, whereas
full_title = name + title
not only avoids modifying objects, it avoids modifying what name refers to.
What terminology would you use for code that avoids the former?
Using a name to refer to something other than what it did in an enclosing/previous scope is known as "shadowing" that name. It is indeed distinct from mutation. In Haskell, for example I can write
return 1 >>= \x -> return (x + 1) >>= \x -> print x.
The x that is printed is the one introduced by the second lambda, i.e., 2.
In do notation this looks a bit more familiar:
foo = do
  x <- return 1
  x <- return (x + 1)
  print x
As I understand it, Erlang forbids aliasing altogether.
However, I suspect that mathepic is right in terms of Ruby -- it's not just shadowing the name but mutating some underlying object. On the other hand, I don't know Ruby that well...
I think functional programming languages simply do not have any operators that destructively update one of the source operands (is destructive update, perhaps, the term you're looking for?). A similar philosophy is seen in instruction set design: the RISC philosophy (increasingly used even in the x86 architecture, in the newer extensions) is to have three-operand instructions for binary operators, where you have to explicitly specify that the target operand is the same as one of the sources if you want a destructive update.
For the latter, some hybrid languages (like Scala; the same terminologies are used in X10) distinguish between values (val) and variables (var). The former cannot be reassigned, the latter can. If they point to a mutable object, then of course that object itself can still be modified.

Pointer equality in Haskell?

Is there any notion of pointer equality in Haskell? == requires things to be deriving Eq, and I have something which contains a (Value -> IO Value), and neither -> nor IO derive Eq.
EDIT: I'm creating an interpreter for another language which does have pointer equality, so I'm trying to model this behavior while still being able to use Haskell functions to model closures.
EDIT: Example: I want a function special that would do this:
> let x a = a * 2
> let y = x
> special x y
True
> let z a = a * 2
> special x z
False
EDIT: Given your example, you could model this with the IO monad. Just assign your functions to IORefs and compare them.
Prelude Data.IORef> z <- newIORef (\x -> x)
Prelude Data.IORef> y <- newIORef (\x -> x)
Prelude Data.IORef> z == z
True
Prelude Data.IORef> z == y
False
Pointer equality would break referential transparency, so NO.
Perhaps surprisingly, it is actually possible to compute extensional equality of total functions on compact spaces, but in general (e.g. functions on the integers with possible non-termination) this is impossible.
EDIT: I'm creating an interpreter for another language
Can you just keep the original program AST or source location alongside the Haskell functions you've translated them into? It seems that you want "equality" based on that.
== requires things to be deriving Eq
Actually (==) requires an instance of Eq, not necessarily a derived instance.
What you probably need to do is to provide your own instance of Eq which simply ignores the (Value -> IO Value) part. E.g.,
data D = D Int Bool (Value -> IO Value)
instance Eq D where
  D x y _ == D x' y' _ = x==x' && y==y'
Does that help, maybe?
Another way to do this is to exploit StableNames.
However, special would have to return its results inside of the IO monad unless you want to abuse unsafePerformIO.
The IORef solution requires IO throughout the construction of your structure. Checking StableNames only uses it when you want to check referential equality.
IORefs derive Eq. I don't understand what else you need to get the pointer equality of.
Edit: The only things that it makes sense to compare the pointer equality of are mutable structures. But as I mentioned above, mutable structures, like IORefs, already have an Eq instance, which allows you to see if two IORefs are the same structure, which is exactly pointer equality.
I'm creating an interpreter for another language which does have pointer equality, so I'm trying to model this behavior while still being able to use Haskell functions to model closures.
I am pretty sure that in order to write such an interpreter, your code should be monadic. Don't you have to maintain some sort of environment or state? If so, you could also maintain a counter for function closures. Thus, whenever you create a new closure you equip it with a unique id. Then for pointer equivalence you simply compare these identifiers.

Resources