How to reassign result of concatenation in Julia?

I need to create a Vector of Vectors of a predefined structure in Julia. As of now I am trying to do it via iterative concatenation:
struct Scenario
    prob::Float64   # probability
    time::Float64   # duration of visit
    profit::Int64   # profit of visit
end

possible_times = [60, 90, 120, 150, 180]
scenarios = Scenario[]
for point in 1:num_points
    profit = rand(1:4)
    new_scenario = [Scenario(0.2, possible_times[i], profit) for i = 1:5]
    scenarios = vcat(scenarios, new_scenario)
end
display(scenarios)
But I get the following:
Warning: Assignment to `scenarios` in soft scope is ambiguous because a global variable by the same name exists: `scenarios` will be treated as a new local. Disambiguate by using `local scenarios` to suppress this warning or `global scenarios` to assign to the existing global variable.
ERROR: LoadError: UndefVarError: scenarios not defined
So the first question is: how do I save the result of the intermediate concatenation? And the second question: is this the right way to achieve the goal, or am I doing it wrong and there is another way?

Normally use append! instead of vcat:
for point in 1:num_points
    profit = rand(1:4)
    new_scenario = [Scenario(0.2, possible_times[i], profit) for i = 1:5]
    append!(scenarios, new_scenario)
end
If you want to use vcat use the global keyword:
for point in 1:num_points
    profit = rand(1:4)
    new_scenario = [Scenario(0.2, possible_times[i], profit) for i = 1:5]
    global scenarios = vcat(scenarios, new_scenario)
end
The point is that in scenarios = vcat(scenarios, new_scenario) you reassign the scenarios variable, which lives in global scope.
In general, the situation is a bit more complex (Julia's behavior will depend on whether the code is run in an interactive or non-interactive session), as you can read in this section of the Julia Manual (bullet 3 in the section on soft scope). But if you do not want to dig into the details of scoping, a simple and safe rule is: if you assign to a global variable, prefix the assignment with global.
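Another way to sidestep the soft-scope issue entirely is to wrap the loop in a function, so scenarios is a local variable and no global assignment happens at all. A minimal sketch, assuming the Scenario struct from the question (the function name build_scenarios is illustrative):

function build_scenarios(num_points, possible_times)
    scenarios = Scenario[]          # local to the function, so no `global` needed
    for point in 1:num_points
        profit = rand(1:4)
        append!(scenarios, [Scenario(0.2, possible_times[i], profit) for i = 1:5])
    end
    return scenarios
end

scenarios = build_scenarios(10, [60, 90, 120, 150, 180])

Putting the work in a function is also generally faster in Julia, since code inside functions is compiled without touching untyped globals.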

Related

How to define that some arguments in a data frame might not be used?

I have this data frame:
df <- data.frame(ref_ws = ref_ws,
turb_ws = turb_ws,
ref_wd = ref_wd,
fcf = turb_ws/ref_ws,
ref_fi = ref_fi,
shear = shear,
turbulence_intensity = turbulence_intensity,
inflow = inflow,
veer = veer)
that is part of a function where I define optional arguments (shear, turbulence_intensity, inflow and veer):
trial_plots <- function(ref_ws, turb_ws, ref_wd, shear, turbulence_intensity, inflow, veer)
The variables ref_ws, turb_ws, ref_wd are mandatory, but the others are optional.
The optional ones will each generate an individual plot, in case we define the argument in the function call.
For example, if shear is not used, I want to continue and see if it can generate the next plot regarding turbulence_intensity, and so on.
At the moment this is the error:
Error in data.frame(ref_ws = ref_ws, turb_ws = turb_ws,ref_wd = ref_wd, :
argument "veer" is missing, with no default
How can I define these arguments to be optional?
Hadley recommends using NULL as the default argument value and an is.null() test in the function body:
Sometimes you want to add a non-trivial default value, which might take several lines of code to compute. Instead of inserting that code in the function definition, you could use missing() to conditionally compute it if needed. However, this makes it hard to know which arguments are required and which are optional without carefully reading the documentation. Instead, I usually set the default value to NULL and use is.null() to check if the argument was supplied.
From Advanced R book
I think it's useful advice and I personally use it a lot.

R: Enriched debugging for linear code chains

I am trying to figure out if it is possible, with a sane amount of programming, to create a certain debugging function by using R's metaprogramming features.
Suppose I have a block of code, such that each line uses as all or part of its input the output from the line before -- the sort of code you might build with pipes (though no pipe is used here).
{
  f1(args1) -> out1
  f2(out1, args2) -> out2
  f3(out2, args3) -> out3
  ...
  fn(out<n-1>, args<n>) -> out<n>
}
Where for example it might be that:
f1 <- function(first_arg, second_arg, ...){my_body_code},
and you call f1 in the block as:
f1(second_arg = 1:5, list(a1 ="A", a2 =1), abc = letters[1:3], fav = foo_foo)
where foo_foo is an object defined in the calling environment of f1.
I would like a function I could wrap around my block that would, for each line of code, create an entry in a list. Each entry would be named (line1, line2, ...) and each line entry would have a sub-entry for each argument and for the function output. The argument entries would consist, first, of the name of the formal to which the actual argument is matched; second, the expression or name supplied to that argument if there is one (and a placeholder if the argument is just a constant); and third, the value of that expression as if it were immediately forced on entry into the function. (I'd rather have the value as of the moment the promise is first kept, but that seems to me like a much harder problem, and the two values will most often be the same.)
All the arguments assigned to the ... (if any) would go in a dots = list() sublist, with entries named if they have names and appropriately labeled (..1, ..2, etc.) if they are assigned positionally. The last element of each line sublist would be the name of the output and its value.
The point of this is to create a fairly complete record of the operation of the block of code. I think of this as analogous to an elaborated version of purrr::safely that is not confined to iteration and keeps a more detailed record of each step, and indeed if a function exits with an error you would want the error message in the list entry as well as as much of the matched arguments as could be had before the error was produced.
It seems to me like this would be very useful in debugging linear code like this. This lets you do things that are difficult using just the RStudio debugger. For instance, it lets you trace code backwards. I may not know that the value in out2 is incorrect until after I have seen some later output. Single-stepping does not keep intermediate values unless you insert a bunch of extra code to do so. In addition, this keeps the information you need to track down matching errors that occur before promises are even created. By the time you see output that results from such errors via single-stepping, the matching information has likely evaporated.
I have actually written code that takes a piped function and eliminates the pipes to put it in this format, just using text manipulation. (Indeed, it was John Mount's "Bizarro pipe" that got me thinking of this). And if I, or we, or you, can figure out how to do this, I would hope to make a serious run at a second version where each function calls the next, supplying it with arguments internally rather than externally -- like a traceback where you get the passed argument values as well as the function name and formals. Other languages have debugging environments like that (e.g. GDB), and I've been wishing for one for R for at least five years, maybe 10, and this seems like a step toward it.
Just issue the trace call shown below for each function that you want to trace.
f <- function(x, y) {
  z <- x + y
  z
}
trace(f, exit = quote(print(returnValue())))
f(1, 2)
giving the following, which shows the function name, the input and the output. (The last 3 is from the function call itself.)
Tracing f(1, 2) on exit
[1] 3
[1] 3

What does the "Base" keyword mean in Julia?

I saw this example in the Julia language documentation. It uses something called Base. What is this Base?
immutable Squares
    count::Int
end

Base.start(::Squares) = 1
Base.next(S::Squares, state) = (state*state, state+1)
Base.done(S::Squares, s) = s > S.count;
Base.eltype(::Type{Squares}) = Int # Note that this is defined for the type
Base.length(S::Squares) = S.count;
Base is a module which defines many of the functions, types and macros used in the Julia language. You can view the files for everything it contains here or call whos(Base) to print a list.
In fact, these functions and types (which include things like sum and Int) are so fundamental to the language that they are included in Julia's top-level scope by default.
This means that we can just use sum instead of Base.sum every time we want to use that particular function. Both names refer to the same thing:
julia> sum === Base.sum
true

julia> @which sum   # show where the name is defined
Base
So why, you might ask, is it necessary to write things like Base.start instead of simply start?
The point is that start is just a name. We are free to rebind names in the top-level scope to anything we like. For instance start = 0 will rebind the name 'start' to the integer 0 (so that it no longer refers to Base.start).
Concentrating now on the specific example in the docs: if we simply wrote start(::Squares) = 1, then we find that we have created a new function with 1 method:
julia> start
start (generic function with 1 method)
But Julia's iterator interface (used by the for loop) requires the new method to be added to Base.start. We haven't done this, and so we get an error if we try to iterate:
julia> for i in Squares(7)
           println(i)
       end
ERROR: MethodError: no method matching start(::Squares)
By instead extending the Base.start function, i.e. writing Base.start(::Squares) = 1, the iterator interface can use the method for the Squares type and iteration will work as we expect (as long as Base.done and Base.next are also extended for this type).
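With the three Base methods from the docs example defined, the same loop works. The output below is what you would expect on a Julia version that uses the old start/next/done protocol this example targets (since Julia 0.7 the iteration interface is a single Base.iterate method instead):

julia> for i in Squares(4)
           println(i)
       end
1
4
9
16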
I'll grant that for something so fundamental, the explanation is buried a bit far down in the documentation, but http://docs.julialang.org/en/release-0.4/manual/modules/#standard-modules describes this:
There are three important standard modules: Main, Core, and Base.
Base is the standard library (the contents of base/). All modules
implicitly contain using Base, since this is needed in the vast
majority of cases.

Immutable dictionary

Is there a way to enforce a dictionary being constant?
I have a function which reads out a file for parameters (and ignores comments) and stores it in a dict:
function getparameters(filename::AbstractString)
    f = open(filename, "r")
    dict = Dict{AbstractString, AbstractString}()
    for ln in eachline(f)
        m = match(r"^\s*(?P<key>\w+)\s+(?P<value>[\w+-.]+)", ln)
        if m != nothing
            dict[m[:key]] = m[:value]
        end
    end
    close(f)
    return dict
end
This works just fine. Since I have a lot of parameters, which I will end up using in different places, my idea was to make this dict global. And as we all know, global variables are not that great, so I wanted to ensure that the dict and its members are immutable.
Is this a good approach? How do I do it? Do I have to do it?
Bonus answerable stuff :)
Is my code even OK? (It is the first thing I did with Julia, and coming from C/C++ and Python I have the tendency to do things differently.) Do I need to check whether the file is actually open? Is my reading of the file "Julia"-like? I could also readall and then use eachmatch. I don't see the "right way to do it" (like in Python).
Why not use an ImmutableDict? It's defined in Base but not exported. You use one as follows:
julia> id = Base.ImmutableDict("key1"=>1)
Base.ImmutableDict{String,Int64} with 1 entry:
"key1" => 1
julia> id["key1"]
1
julia> id["key1"] = 2
ERROR: MethodError: no method matching setindex!(::Base.ImmutableDict{String,Int64}, ::Int64, ::String)
in eval(::Module, ::Any) at .\boot.jl:234
in macro expansion at .\REPL.jl:92 [inlined]
in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at .\event.jl:46
julia> id2 = Base.ImmutableDict(id,"key2"=>2)
Base.ImmutableDict{String,Int64} with 2 entries:
"key2" => 2
"key1" => 1
julia> id.value
1
You may want to define a constructor which takes in an array of pairs (or keys and values) and uses that algorithm to define the whole dict (that's the only way to do so, see the note at the bottom).
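As a rough sketch of such a constructor (the helper name buildimmutabledict is made up here, and no error handling is included), you can fold a collection of Pairs into nested ImmutableDicts:

function buildimmutabledict(pairs)
    k, v = first(pairs)                       # seed the chain with the first pair
    d = Base.ImmutableDict(k => v)
    for (key, val) in Iterators.drop(pairs, 1)
        d = Base.ImmutableDict(d, key => val) # each insertion wraps the previous dict
    end
    return d
end

buildimmutabledict(["key1" => 1, "key2" => 2])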
Just an added note: the actual internal representation is that each dictionary contains only one key-value pair plus a reference to another dictionary. The get method simply walks through the chain of dictionaries, checking each one for the right key. The reason for this is that arrays are mutable: if you did a naive construction of an immutable type with a mutable field, the field would still be mutable, so while id["key1"] = 2 wouldn't work, id.keys[1] = 2 would. They get around this by not using a mutable container for the values (each node holds only a single value) and also holding an immutable dict as the parent. If you wanted to make this work directly on arrays, you could use something like ImmutableArrays.jl, but I don't think you'd get a performance advantage, because you'd still have to loop through the array when checking for a key...
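You can see the linked-list layout described above by peeking at the fields of id2 from the earlier example. The field names parent, key and value reflect the current Base implementation; they are internal details, not part of the public API:

julia> id2.key, id2.value
("key2", 2)

julia> id2.parent.key, id2.parent.value
("key1", 1)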
First off, I am new to Julia (I have been using/learning it for only two weeks). So do not put any confidence in what I am going to say unless it is validated by others.
The dictionary data structure Dict is defined here:
julia/base/dict.jl
There is also a data structure called ImmutableDict in that file. However, as const variables aren't actually const, why would immutable dictionaries be immutable?
The comment states:
ImmutableDict is a Dictionary implemented as an immutable linked list,
which is optimal for small dictionaries that are constructed over many individual insertions
Note that it is not possible to remove a value, although it can be partially overridden and hidden
by inserting a new value with the same key
So let us call what you want to define UnmodifiableDict, to avoid confusion. Such an object would probably have:
a data structure similar to Dict.
a constructor that takes a Dict as input to fill its data structure.
a specialization (a new dispatch?) of the method setindex!, which is called by the operator [] =,
in order to forbid modification of the data structure. The same goes for all other functions that end with ! and hence modify the data.
As far as I understood, it is only possible to subtype abstract types. Therefore you can't make UnmodifiableDict a subtype of Dict and only redefine functions such as setindex!.
Unfortunately this is a needed restriction for having run-time types and not compile-time types. You can't have such a good performance without a few restrictions.
Bottom line:
The only solution I see is to copy-paste the code of the type Dict and its functions, replace Dict with UnmodifiableDict everywhere, and modify the functions that end with ! to raise an exception if called.
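As a rough illustration of the ingredients listed above, here is a minimal sketch that uses composition around a Dict rather than copying its code. The type name UnmodifiableDict is illustrative, only a handful of read-only methods are forwarded, and this is not a drop-in replacement for Dict's full interface:

struct UnmodifiableDict{K,V}
    data::Dict{K,V}
end
UnmodifiableDict(d::Dict{K,V}) where {K,V} = UnmodifiableDict{K,V}(copy(d))

Base.getindex(u::UnmodifiableDict, k) = u.data[k]
Base.haskey(u::UnmodifiableDict, k) = haskey(u.data, k)
Base.keys(u::UnmodifiableDict) = keys(u.data)
Base.length(u::UnmodifiableDict) = length(u.data)
Base.iterate(u::UnmodifiableDict, state...) = iterate(u.data, state...)
# No setindex!, delete! or empty! methods are defined, so u[k] = v raises a MethodError.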
You may also want to have a look at these threads:
https://groups.google.com/forum/#!topic/julia-users/n-lqjybIO_w
https://github.com/JuliaLang/julia/issues/1974
REVISION
Thanks to Chris Rackauckas for pointing out the error in my earlier response. I'll leave it below as an illustration of what doesn't work. But, Chris is right, the const declaration doesn't actually seem to improve performance when you feed the dictionary into the function. Thus, see Chris' answer for the best resolution to this issue:
D1 = [i => sind(i) for i = 0.0:5:3600];
const D2 = [i => sind(i) for i = 0.0:5:3600];

function test(D)
    for jdx = 1:1000
        # D[2] = 2
        for idx = 0.0:5:3600
            a = D[idx]
        end
    end
end

## Times given after an initial run to allow for compiling
@time test(D1); # 0.017789 seconds (4 allocations: 160 bytes)
@time test(D2); # 0.015075 seconds (4 allocations: 160 bytes)
Old Response
If you want your dictionary to be a constant, you can use:
const MyDict = getparameters( .. )
Update: Keep in mind, though, that in Julia, unlike in some other languages, it's not that you cannot redefine constants; it's just that you get a warning when doing so.
julia> const a = 2
2
julia> a = 3
WARNING: redefining constant a
3
julia> a
3
It is odd that you don't get the constant redefinition warning when adding a new key-val pair to the dictionary. But, you still see the performance boost from declaring it as a constant:
D1 = [i => sind(i) for i = 0.0:5:3600];
const D2 = [i => sind(i) for i = 0.0:5:3600];

function test1()
    for jdx = 1:1000
        for idx = 0.0:5:3600
            a = D1[idx]
        end
    end
end

function test2()
    for jdx = 1:1000
        for idx = 0.0:5:3600
            a = D2[idx]
        end
    end
end

## Times given after an initial run to allow for compiling
@time test1(); # 0.049204 seconds (1.44 M allocations: 22.003 MB, 5.64% gc time)
@time test2(); # 0.013657 seconds (4 allocations: 160 bytes)
To add to the existing answers, if you like immutability and would like to get performant (but still persistent) operations which change and extend the dictionary, check out FunctionalCollections.jl's PersistentHashMap type.
If you want to maximize performance and take maximal advantage of immutability, and you don't plan on doing any operations on the dictionary whatsoever, consider implementing a perfect hash function-based dictionary. In fact, if your dictionary is a compile-time constant, these can even be computed ahead of time (using metaprogramming) and precompiled.
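A perfect hash is more involved, but here is a hedged sketch of the simpler "bake a compile-time-constant table in with metaprogramming" idea from the paragraph above. It dispatches on Val types rather than using a perfect hash, and the table PARAMS and the function name param are made up for illustration:

# Hypothetical constant table; in practice this might come from getparameters at build time.
const PARAMS = Dict("alpha" => 1.0, "beta" => 2.5)

# Generate one method per key, so the lookup is resolved by dispatch on a
# compile-time-constant key rather than by hashing at runtime.
for (k, v) in PARAMS
    @eval param(::Val{Symbol($k)}) = $v
end

param(Val(:alpha))   # returns 1.0

Whether this beats an ordinary const Dict depends on how the keys are used; it only pays off when the key is known at compile time.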

Variable Names in SWI Prolog

I have been using the chr library along with the jpl interface. I have a general inquiry though. I send the constraints from SWI Prolog to an instance of a java class from within my CHR program. The thing is if the input constraint is leq(A,B) for example, the names of the variables are gone, and the variable names that appear start with _G. This happens even if I try to print leq(A,B) without using the interface at all. It appears that whenever the variable is processed the name is replaced with a fresh one. My question is whether there is a way to do the mapping back. For example whether there is a way to know that _G123 corresponds to A and so on.
Thank you very much.
(This question has nothing to do with CHR nor is it specific to SWI).
The variable names you use when writing a Prolog program are discarded completely by the Prolog system. The reason is that this information cannot be used to print variables accurately. There might be several independent instances of that variable. So one would need to add some unique identifier to the variable name. Also, maintaining that information at runtime would incur significant overheads.
To see this, consider a predicate mylist/1.
?- [user].
|: mylist([]).
|: mylist([_E|Es]) :- mylist(Es).
|: % user://2 compiled 0.00 sec, 4 clauses
true.
Here, we have used the variable _E for each element of the list. The toplevel now prints all those elements with a unique identifier:
?- mylist(Fs).
Fs = [] ;
Fs = [_G295] ;
Fs = [_G295, _G298] .
Fs = [_G295, _G298, _G301] .
The second answer might be printed as Fs = [_E] instead. But what about the third? It cannot be printed as Fs = [_E,_E] since the elements are different variables. So something like Fs = [_E_295,_E_298] is the best we could get. However, this would imply a lot of extra bookkeeping.
But there is also another reason why associating source-code variable names with runtime variables would lead to extreme complexity: in different places, the same variable might have a different name. Here is an artificial example to illustrate this:
p1([_A,_B]).
p2([_B,_A]).
And the query:
?- p1(L), p2(L).
L = [_G337, _G340].
What names would you like these two elements to have? The first element might have the name _A or _B, or maybe even better: _A_or_B. Or even _Ap1_and_Bp2. For whom would this be a benefit?
Note that the variable names mentioned in the query at the toplevel are retained:
?- Fs = [_,F|_], mylist(Fs).
Fs = [_G231, F] ;
Fs = [_G231, F, _G375] ;
Fs = [_G231, F, _G375, _G378]
So there is a way to get that information. On how to obtain the names of variables in SWI and YAP while reading a term, please refer to this question.
