I believe there is a bug in this code. For the sake of brevity, though, I will just write the function that defines the ODE:
function clones(du,u,p,t)
    (Nmut,f) = p
    # average fitness
    phi = sum(f.*u)
    # constructing mutation kernel
    eps = 0.01
    Q = Qmatrix(Nmut,eps)
    # defining differential equations
    Nclones = 2^Nmut;
    du = zeros(Nclones)
    ufQ = transpose(transpose(u.*f)*Q)
    du = ufQ .- phi*u
end
If the whole code is needed I can provide it, but it is messy and I'm not sure how to create a minimal example. I tried this with Nmut = 2 so I could compare against a hard-coded version. The values of du at the first time step are identical, but this version never seems to update u: it stays at the prescribed u0.
Does anyone have an idea why this might be the case? I can also provide the full script, but wanted to avoid that if someone could just see why u would not update.
EDIT:
maxdim=4;
for i in 1:maxdim
    du[i] = 0.0;
    for j in 1:maxdim
        du[i] += u[j].*w[j].*Q[j,i]
    end
    du[i] -= u[i].*phi
end
The du is updated correctly if we use this version. Why would that be the case?
You are using the in-place form. With this, you have to update the values of du. In your script you are using
du = ufQ .- phi*u
That rebinds the name du to a new array; it does not mutate the values in the original du array. A quick fix is to use the mutating (broadcast) assignment:
du .= ufQ .- phi*u
Notice that this is .=.
To see what this means with a concrete example, consider the following. We have an array:
a = [1,2,3,4]
Now we point a new variable to that same array
a2 = a
When we change a value of a we see that reflected in a2 since they point to the same memory:
a[1] = 5
println(a2) # [5,2,3,4]
But if we now rebind a, nothing happens to a2, since the two names no longer refer to the same array:
a = [1,2,3,4]
println(a2) # [5,2,3,4]
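The in-place counterpart is the broadcasting assignment .=: it writes new values into the existing array, so every name bound to that array sees the change. Continuing the example (the values here are just for illustration):
a = [1,2,3,4]
a2 = a
a .= [9,8,7,6] # overwrites the contents of the array both names point to
println(a2) # [9,8,7,6]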
Packages like DifferentialEquations.jl utilize mutating forms so that users can avoid array allocations by repeatedly updating the values in cached arrays. As a consequence, this means you should be updating the values of du, not rebinding the name to a new array.
If you feel more comfortable not using mutation, you can use the out-of-place function syntax f(u,p,t), although there will be a performance hit for doing so if the state variables are (non-static) arrays.
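For illustration, a sketch of what the out-of-place form of the function in the question might look like (it assumes the same Qmatrix helper and parameter tuple p = (Nmut, f) as above); the derivative is returned instead of written into du:
function clones(u,p,t)
    (Nmut,f) = p
    phi = sum(f.*u)           # average fitness
    Q = Qmatrix(Nmut, 0.01)   # mutation kernel
    ufQ = transpose(transpose(u.*f)*Q)
    return ufQ .- phi*u       # return the derivative
end
The problem is then constructed exactly as before, e.g. ODEProblem(clones, u0, tspan, p); the solver detects which form is being used from the number of arguments.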
Using top, I manually measured the following memory usages at the specific points designated in the comments of the following code block:
x <- matrix(rnorm(1e9),nrow=1e4)
#~15gb
gc()
# ~7gb after gc()
y <- as.vector(x)
gc()
#~15gb after gc()
It's pretty clear that rnorm(1e9) is a ~7 GB vector that's then copied to create the matrix. gc() removes the original vector since it's not assigned to anything. as.vector(x) then coerces the matrix and copies the data into a new vector.
My question is, why can't these three objects all point to the same memory block (at least until one is modified)? Isn't a matrix really just a vector with some additional metadata?
This is in R version 3.6.2
edit: also tested in 4.0.3, same results.
The question you're asking is about the reasoning behind this behaviour. That seems more suited for R-devel, and I am assuming the answer there would be "no one knows". The relevant function in the R source is do_asvector.
Going down the source code of a call to as.vector(matrix(...)), it is important to note that the default argument for mode is "any". This translates to ANYSXP (see R Internals). This lets us find the evil culprit (line 1524) of the copy behaviour.
// source reference: do_asvector
...
if (type == ANYSXP || TYPEOF(x) == type) {
    switch (TYPEOF(x)) {
    case LGLSXP:
    case INTSXP:
    case REALSXP:
    case CPLXSXP:
    case STRSXP:
    case RAWSXP:
        if (ATTRIB(x) == R_NilValue) return x;
        ans = MAYBE_REFERENCED(x) ? duplicate(x) : x; // <== evil culprit
        CLEAR_ATTRIB(ans);
        return ans;
    case EXPRSXP:
    case VECSXP:
        return x;
    default:
        ;
    }
...
Going one step further, we can find the definition for MAYBE_REFERENCED in src/include/Rinternals.h, and by digging a bit we can find that it checks whether sxpinfo.named is equal to 0 (false) or not (true). What I am guessing here is that the assignment operator <- increments the sxpinfo.named counter and thus MAYBE_REFERENCED(x) returns TRUE and we get a duplicate (deep copy).
However, is this behaviour necessary?
That is a great question. If we had given mode an argument other than "any" or class(x) (the same as our input class), we would skip the duplicate line and continue down the function until we hit a call to ascommon. So I dug a bit further and took a look at the source code for ascommon: if we were to convert to a list manually (setting mode = "list"), ascommon only calls shallow_duplicate.
// Source reference: ascommon
---
    if ((type == LISTSXP) &&
        !(TYPEOF(u) == LANGSXP || TYPEOF(u) == LISTSXP ||
          TYPEOF(u) == EXPRSXP || TYPEOF(u) == VECSXP)) {
        if (MAYBE_REFERENCED(v)) v = shallow_duplicate(v); // <=== ascommon duplication behaviour
        CLEAR_ATTRIB(v);
    }
    return v;
}
---
So one could imagine that the call to duplicate in do_asvector could be replaced by a call to shallow_duplicate. Perhaps a "better safe than sorry" strategy was chosen when the code was originally implemented (prior to R-2.13.0 according to a comment in the source code), or perhaps there is a scenario in one of the types not handled by ascommon that requires a deep-copy.
For now, I would test whether the function does a deep copy if we set mode = "list" or pass the matrix without assignment. In either case it might not be a bad idea to send a follow-up question to the R-devel mailing list.
Edit: <- behaviour
I took the liberty of confirming my suspicion and looked at the source code for <-. I previously assumed that <- increments sxpinfo.named, and we can confirm this by looking at do_set (the C source code for <-). When assigning as x <- ..., x is a SYMSXP, and we can see that the source code calls INCREMENT_NAMED, which in turn calls SET_NAMED(x, NAMED(x) + 1). So, everything else being equal, we should see copy behaviour for x <- matrix(...); y <- as.vector(x), while we shouldn't for y <- as.vector(matrix(...)).
At the final gc(), you have x pointing to a vector with a dim attribute, and y pointing to a vector without any dim attribute. The data is an intrinsic part of the object, it's not an attribute, so those two vectors have to be different.
If matrices had been implemented as lists, e.g.
x <- list(data = rnorm(1e9), dim = c(1e4, 1e5))
then a shallow copy would be possible, but that's not how it was done. You can read the details of the internal structure of objects in the R Internals manual. For the current release, that's here: https://cloud.r-project.org/doc/manuals/r-release/R-ints.html#SEXPs .
You may wonder why things were implemented this way. I suspect it's intended to be efficient for the common use cases. Converting a matrix to a vector isn't generally necessary (you can treat x as a vector already, e.g. x[100000] and y[100000] will give the same value), so there's no need for "convert to vector" to be efficient. On the other hand, extracting elements is very common, so you don't want to have an extra pointer dereference slowing that down.
Does anyone know what is going on with the following R code?
library(inline)
increment <- cfunction(c(a = "integer"), "
INTEGER(a)[0]++;
return R_NilValue;
")
print_one = function(){
    one = 0L
    increment(one)
    print(one)
}
print_one() # prints 1
print_one() # prints 2
The printed results are 1, 2. Replacing one = 0L with one = integer(1) gives result 1, 1.
Just to clarify: I know I'm passing the variable one by reference to the C function, so its value should change (become 1). What I don't understand is how resetting one = 0L seems to have no effect at all after the first call to print_one (the second call to print_one prints 2 instead of 1).
This really (as the last comment hinted) has little to do with Rcpp. It's mostly about the .C() interface used by the old inline package, and here by the cfunction approach in the question.
That leads to two answers.
First, the consensus among R developers is that .C() is deprecated and should no longer be used. Statements to that effect can be found on the r-devel and r-package-devel lists. .C() uses plain old types as pointers in the interface so here the integer value is passed as int* by reference and can be altered.
If we switch to Rcpp use, and hence to the underlying .Call() interface using only SEXP types for input and output, then an int is no longer passed by reference. So the code behaves and prints only 0:
Rcpp::cppFunction("void increment2(int a) { a++; }")
print_two <- function(){
    two <- 0L
    increment2(two)
    print(two)
}
print_two() # prints 0
print_two() # prints 0
Lastly, Rcpp (capital R) is of course not the "successor" to inline (it does a whole lot more than inline), but among all its functionality it has offered (since around 2013) a quasi-replacement for inline in the form of Rcpp Attributes. So with Rcpp as-is, since about 2013, you no longer need the examples and approach from inline.
Is there a way to enforce a dictionary being constant?
I have a function which reads out a file for parameters (and ignores comments) and stores it in a dict:
function getparameters(filename::AbstractString)
    f = open(filename,"r")
    dict = Dict{AbstractString, AbstractString}()
    for ln in eachline(f)
        m = match(r"^\s*(?P<key>\w+)\s+(?P<value>[\w+-.]+)", ln)
        if m != nothing
            dict[m[:key]] = m[:value]
        end
    end
    close(f)
    return dict
end
This works just fine. Since I have a lot of parameters, which I will end up using in different places, my idea was to make this dict global. And as we all know, global variables are not that great, so I wanted to ensure that the dict and its members are immutable.
Is this a good approach? How do I do it? Do I have to do it?
Bonus answerable stuff :)
Is my code even ok? (It is the first thing I did with Julia, and coming from C/C++ and Python I have a tendency to do things differently.) Do I need to check whether the file is actually open? Is my reading of the file "Julia"-like? I could also readall and then use eachmatch. I don't see the "right way to do it" (like in Python).
Why not use an ImmutableDict? It's defined in base but not exported. You use one as follows:
julia> id = Base.ImmutableDict("key1"=>1)
Base.ImmutableDict{String,Int64} with 1 entry:
"key1" => 1
julia> id["key1"]
1
julia> id["key1"] = 2
ERROR: MethodError: no method matching setindex!(::Base.ImmutableDict{String,Int64}, ::Int64, ::String)
in eval(::Module, ::Any) at .\boot.jl:234
in macro expansion at .\REPL.jl:92 [inlined]
in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at .\event.jl:46
julia> id2 = Base.ImmutableDict(id,"key2"=>2)
Base.ImmutableDict{String,Int64} with 2 entries:
"key2" => 2
"key1" => 1
julia> id.value
1
You may want to define a constructor which takes in an array of pairs (or keys and values) and uses that algorithm to define the whole dict (that's the only way to do so, see the note at the bottom).
Just an added note: the actual internal representation is that each dictionary contains only one key-value pair plus a reference to another dictionary. The get method just walks through this chain of dictionaries checking for the right key. The reason for this is that arrays are mutable: if you did a naive construction of an immutable type with a mutable field, the field would still be mutable, and thus while id["key1"]=2 wouldn't work, id.keys[1]=2 would. They get around this by not using a mutable type for holding the values (thus holding only single values) and also holding an immutable dict. If you wanted to make this work directly on arrays, you could use something like ImmutableArrays.jl, but I don't think you'd get a performance advantage, because you'd still have to loop through the array when checking for a key.
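Returning to the constructor idea above, a minimal sketch of such a helper (the name immutable_dict_from is made up for illustration); it simply folds the pairs into the linked-list dictionary one entry at a time:
function immutable_dict_from(pairs)
    d = Base.ImmutableDict(first(pairs))
    for p in Iterators.drop(pairs, 1)
        d = Base.ImmutableDict(d, p) # each step wraps the previous dict with one more entry
    end
    return d
end
params = immutable_dict_from(["key1" => 1, "key2" => 2])
params["key2"] # 2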
First off, I am new to Julia (I have been using/learning it for only two weeks). So do not put any confidence in what I am going to say unless it is validated by others.
The dictionary data structure Dict is defined here
julia/base/dict.jl
There is also a data structure called ImmutableDict in that file. However, as const variables aren't actually const, why would immutable dictionaries be immutable?
The comment states:
ImmutableDict is a Dictionary implemented as an immutable linked list,
which is optimal for small dictionaries that are constructed over many individual insertions
Note that it is not possible to remove a value, although it can be partially overridden and hidden
by inserting a new value with the same key
So let us call what you want to define as a dictionary UnmodifiableDict, to avoid confusion. Such an object would probably have:
a similar data structure as Dict.
a constructor that takes a Dict as input to fill its data structure.
specialization (a new dispatch?) of the method setindex!, which is called by the operator [] =,
in order to forbid modification of the data structure. The same should go for all other functions that end with ! and hence modify the data.
As far as I understood, it is only possible to have subtypes of abstract types. Therefore you can't make UnmodifiableDict a subtype of Dict and only redefine functions such as setindex!.
Unfortunately this is a needed restriction for having run-time types and not compile-time types. You can't have such a good performance without a few restrictions.
Bottom line:
The only solution I see is to copy-paste the code of the type Dict and its functions, replace Dict with UnmodifiableDict everywhere, and modify the functions that end with ! to raise an exception if called.
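A lighter-weight alternative, sketched below with a made-up name and in current (1.x) syntax, is to wrap a Dict in a new type that forwards read-only operations and raises an error for the mutating ones. Note that this only forbids modification through the wrapper; the wrapped Dict itself is still a mutable object reachable through the data field:
struct UnmodifiableDict{K,V}
    data::Dict{K,V}
end
# forward the read-only operations to the wrapped Dict
Base.getindex(d::UnmodifiableDict, k) = d.data[k]
Base.haskey(d::UnmodifiableDict, k) = haskey(d.data, k)
Base.keys(d::UnmodifiableDict) = keys(d.data)
Base.length(d::UnmodifiableDict) = length(d.data)
Base.iterate(d::UnmodifiableDict, state...) = iterate(d.data, state...)
# refuse the mutating operations
Base.setindex!(::UnmodifiableDict, v, k) = error("UnmodifiableDict cannot be modified")
Base.delete!(::UnmodifiableDict, k) = error("UnmodifiableDict cannot be modified")

params = UnmodifiableDict(Dict("eps" => "0.01"))
params["eps"] # "0.01"
# params["eps"] = "0.1" would raise an error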
You may also want to have a look at these threads:
https://groups.google.com/forum/#!topic/julia-users/n-lqjybIO_w
https://github.com/JuliaLang/julia/issues/1974
REVISION
Thanks to Chris Rackauckas for pointing out the error in my earlier response. I'll leave it below as an illustration of what doesn't work. But, Chris is right, the const declaration doesn't actually seem to improve performance when you feed the dictionary into the function. Thus, see Chris' answer for the best resolution to this issue:
D1 = [i => sind(i) for i = 0.0:5:3600];
const D2 = [i => sind(i) for i = 0.0:5:3600];
function test(D)
    for jdx = 1:1000
        # D[2] = 2
        for idx = 0.0:5:3600
            a = D[idx]
        end
    end
end
## Times given after an initial run to allow for compiling
@time test(D1); # 0.017789 seconds (4 allocations: 160 bytes)
@time test(D2); # 0.015075 seconds (4 allocations: 160 bytes)
Old Response
If you want your dictionary to be a constant, you can use:
const MyDict = getparameters( .. )
Update: Keep in mind, though, that in base Julia, unlike some other languages, it's not that you cannot redefine constants; it's just that you get a warning when doing so.
julia> const a = 2
2
julia> a = 3
WARNING: redefining constant a
3
julia> a
3
It is odd that you don't get the constant redefinition warning when adding a new key-val pair to the dictionary. But, you still see the performance boost from declaring it as a constant:
D1 = [i => sind(i) for i = 0.0:5:3600];
const D2 = [i => sind(i) for i = 0.0:5:3600];
function test1()
    for jdx = 1:1000
        for idx = 0.0:5:3600
            a = D1[idx]
        end
    end
end
function test2()
    for jdx = 1:1000
        for idx = 0.0:5:3600
            a = D2[idx]
        end
    end
end
## Times given after an initial run to allow for compiling
@time test1(); # 0.049204 seconds (1.44 M allocations: 22.003 MB, 5.64% gc time)
@time test2(); # 0.013657 seconds (4 allocations: 160 bytes)
To add to the existing answers, if you like immutability and would like to get performant (but still persistent) operations which change and extend the dictionary, check out FunctionalCollections.jl's PersistentHashMap type.
If you want to maximize performance and take maximal advantage of immutability, and you don't plan on doing any operations on the dictionary whatsoever, consider implementing a perfect hash function-based dictionary. In fact, if your dictionary is a compile-time constant, these can even be computed ahead of time (using metaprogramming) and precompiled.
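As a very rough sketch of the "known ahead of time" idea (this is not a perfect hash, just a constant lookup structure fixed at load time, with made-up parameter names), a small set of fixed parameters can also live in a const NamedTuple, which is immutable and allows allocation-free lookups:
const PARAMS = (eps = 0.01, Nmut = 2) # immutable, contents fixed at definition time
PARAMS.eps    # 0.01
PARAMS[:Nmut] # 2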
I have a recursive function which utilizes a global dict to store values already obtained when traversing the tree. However, at least some of the values stored in the dict seem to disappear! This simplified code shows the problem:
type id
    level::Int32
    x::Int32
end
Vdict = Dict{id,Float64}()
function getV(w::id)
    if haskey(Vdict,w)
        return Vdict[w]
    end
    if w.level == 12
        return 1.0
    end
    w.x == -111 && println("dont have: ",w)
    local vv = 0.0
    for j = -15:15
        local wj = id(w.level+1,w.x+j)
        vv += getV(wj)
    end
    Vdict[w] = vv
    w.x == -111 && println("just stored: ",w)
    vv
end
getV(id(0,0))
The output has many lines like this:
just stored: id(11,-111)
dont have: id(11,-111)
just stored: id(11,-111)
dont have: id(11,-111)
just stored: id(11,-111)
dont have: id(11,-111)
...
Do I have a silly error, or is there a bug in Julia's dict?
By default, custom types come with implementations of equality and hashing by object identity. Since your id type is mutable, Julia is conservative and assumes that you care about distinguishing each instance from another (since they could potentially diverge):
julia> type Id # There's a strong convention to capitalize type names in Julia
           level::Int32
           x::Int32
       end

julia> x = Id(11, -111)
       y = Id(11, -111)
       x == y
false
julia> x.level = 12; (x,y)
(Id(12,-111),Id(11,-111))
Julia doesn't know whether you care about the object's long-term behavior or its current value.
There are two ways to make this behave as you'd like:
Make your custom type immutable. It looks like you don't need to mutate the contents of Id. The simplest and most straightforward way to solve this is to define it as immutable Id. Now Id(11, -111) is completely indistinguishable from any other construction of Id(11, -111) since its values can never change. As a bonus, you may see better performance, too.
If you do need to mutate the values, you could alternatively define your own implementations of == and Base.hash so they only care about the current value:
import Base: ==, hash  # needed to extend these methods for our own type
==(a::Id, b::Id) = a.level == b.level && a.x == b.x
hash(a::Id, h::UInt) = hash(a.level, hash(a.x, h))
As @StefanKarpinski just pointed out on the mailing list, this isn't the default for mutable values "since it makes it easy to stick something in a dict, then mutate it, and 'lose it'." That is, the object's hash value has changed, but the dictionary stored it in a place based upon its old hash value, so you can no longer access that key/value pair by key lookup. Even if you create a second object with the same original properties as the first, the dictionary won't be able to find it, since it checks equality only after finding a hash match. The only ways to look up that key are to mutate it back to its original value or to explicitly ask the dictionary to Base.rehash! its contents.
In this case, I highly recommend option 1.
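For reference, a minimal sketch of option 1 in current syntax (immutable Id is spelled struct Id in Julia 1.x): because the type is immutable and holds only bits values, two instances built from the same numbers hash and compare as equal, so dictionary lookup behaves as expected:
struct Id
    level::Int32
    x::Int32
end
Vdict = Dict{Id,Float64}()
Vdict[Id(11, -111)] = 3.5
haskey(Vdict, Id(11, -111)) # true: a freshly constructed Id with the same fields finds the entry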
I have been using the chr library along with the jpl interface. I have a general inquiry, though. I send the constraints from SWI-Prolog to an instance of a Java class from within my CHR program. The thing is, if the input constraint is leq(A,B), for example, the names of the variables are gone, and the variable names that appear start with _G. This happens even if I try to print leq(A,B) without using the interface at all. It appears that whenever the variable is processed, its name is replaced with a fresh one. My question is whether there is a way to do the mapping back, i.e. whether there is a way to know that _G123 corresponds to A, and so on.
Thank you very much.
(This question has nothing to do with CHR nor is it specific to SWI).
The variable names you use when writing a Prolog program are discarded completely by the Prolog system. The reason is that this information cannot be used to print variables accurately. There might be several independent instances of that variable. So one would need to add some unique identifier to the variable name. Also, maintaining that information at runtime would incur significant overheads.
To see this, consider a predicate mylist/1.
?- [user].
|: mylist([]).
|: mylist([_E|Es]) :- mylist(Es).
|: % user://2 compiled 0.00 sec, 4 clauses
true.
Here, we have used the variable _E for each element of the list. The toplevel now prints all those elements with a unique identifier:
?- mylist(Fs).
Fs = [] ;
Fs = [_G295] ;
Fs = [_G295, _G298] ;
Fs = [_G295, _G298, _G301] .
The second answer might be printed as Fs = [_E] instead. But what about the third? It cannot be printed as Fs = [_E,_E] since the elements are different variables. So something like Fs = [_E_295,_E_298] is the best we could get. However, this would imply a lot of extra bookkeeping.
But there is also another reason, why associating source code variable names with runtime variables would lead to extreme complexities: In different places, that variable might have a different name. Here is an artificial example to illustrate this:
p1([_A,_B]).
p2([_B,_A]).
And the query:
?- p1(L), p2(L).
L = [_G337, _G340].
What names would you like these two elements to have? The first element might have the name _A or _B, or maybe even better: _A_or_B. Or even _Ap1_and_Bp2. For whom would this be a benefit?
Note that the variable names mentioned in the query at the toplevel are retained:
?- Fs = [_,F|_], mylist(Fs).
Fs = [_G231, F] ;
Fs = [_G231, F, _G375] ;
Fs = [_G231, F, _G375, _G378]
So there is a way to get that information. On how to obtain the names of variables in SWI and YAP while reading a term, please refer to this question.