I have a function that maps integer inputs to integer outputs. The output values are relatively sparse: the function returns only around 2^14 unique outputs for the inputs 1..2^16. I want to create a dataset that lets me quickly find the inputs that produce any given output.
At present, I'm storing my dataset in a Map of Lists, with each output value serving as the key for a List of input values. This seems slow and appears to use a whole lot of stack space. Is there a more efficient way to create/store/access my dataset?
Added:
It turns out the time taken by my sparsearray() function varies hugely with the ratio of output values (i.e., keys) to input values (values stored in the lists). Here's the time taken for a function that requires many lists, each with only a few values:
? sparsearray(2^16,x->x\7);
time = 126 ms.
Here's the time taken for a function that requires only a few lists, each with many values:
? sparsearray(2^12,x->x%7);
time = 218 ms.
? sparsearray(2^13,x->x%7);
time = 892 ms.
? sparsearray(2^14,x->x%7);
time = 3,609 ms.
As you can see, the time roughly quadruples each time n doubles: the growth is quadratic, not linear!
Here's my code:
\\ sparsearray takes two arguments, an integer "n" and a closure "myfun",
\\ and returns a Map() in which each key is a number, and each key is associated
\\ with a List() of the input numbers for which the closure produces that output.
\\ E.g.:
\\ ? sparsearray(10,x->x%3)
\\ %1 = Map([0, List([3, 6, 9]); 1, List([1, 4, 7, 10]); 2, List([2, 5, 8])])
sparsearray(n,myfun=(x)->x)=
{
  my(m=Map(), output, oldvalue=List());
  for(loop=1, n,
    output=myfun(loop);
    if(!mapisdefined(m,output),
      /* then */
      oldvalue=List(),
      /* else */
      oldvalue=mapget(m,output));
    listput(oldvalue, loop);
    mapput(m, output, oldvalue));
  m
}
To some extent, the behavior you are seeing is to be expected. PARI appears to pass lists and maps by value rather than by reference, except to the special built-in functions for manipulating them. This can be seen by creating a wrapper function like mylistput(list,item)=listput(list,item);. When you try to use this function, you will discover that it doesn't work, because it operates on a copy of the list. Arguably this is a bug in PARI, but perhaps they have their reasons. The upshot of this behavior is that each time you add an element to one of the lists stored in the map, the entire list is copied, possibly twice.
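For example, a short GP session illustrating the copy behaviour just described (a sketch; history numbers will vary):
? mylistput(list,item)=listput(list,item);
? L = List();
? mylistput(L, 42);
? L \\ still empty: listput appended to a local copy of L
%4 = List([])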
The following is a solution that avoids this issue.
sparsearray(n,myfun=(x)->x)=
{
  my(vi=vector(n, i, i));            \\ input values
  my(vo=vector(n, i, myfun(vi[i]))); \\ output values
  my(perm=vecsort(vo,,1));           \\ obtain order of output values as a permutation
  my(list=List(), bucket=List(), key);
  for(loop=1, #perm,
    if(loop==1 || vo[perm[loop]]<>key,
      if(#bucket, listput(list, [key, Vec(bucket)]); bucket=List());
      key=vo[perm[loop]]);
    listput(bucket, vi[perm[loop]])
  );
  if(#bucket, listput(list, [key, Vec(bucket)]));
  Mat(Col(list))
}
The output is a matrix in the same format as a map. If you would rather have a map, it can be converted with Map(...), but you probably want a matrix for processing, since there is no built-in function on a map to get the list of keys.
I did a little reworking of the above to try to make something more akin to GroupBy in C# (a function that could have utility for many things):
VecGroupBy(v, f)={
  my(g=vector(#v, i, f(v[i]))); \\ groups
  my(perm=vecsort(g,,1));
  my(list=List(), bucket=List(), key);
  for(loop=1, #perm,
    if(loop==1 || g[perm[loop]]<>key,
      if(#bucket, listput(list, [key, Vec(bucket)]); bucket=List());
      key=g[perm[loop]]);
    listput(bucket, v[perm[loop]])
  );
  if(#bucket, listput(list, [key, Vec(bucket)]));
  Mat(Col(list))
}
You would use this like VecGroupBy([1..300],i->i%7).
There is no good native GP solution: because of the way garbage collection occurs, passing arguments by reference has to be restricted in GP's memory model (from version 2.13 on, it is supported for function arguments using the ~ modifier, but not for map components).
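For example (a sketch, assuming GP 2.13 or later), the ~ modifier makes the mylistput wrapper from the earlier answer behave as intended:
? mylistput(~list, item) = listput(~list, item);
? L = List();
? mylistput(~L, 42);
? L
%4 = List([42]) \\ the list was genuinely modified in place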
Here is a solution using the libpari function vec_equiv(), which returns the equivalence classes of identical objects in a vector.
install(vec_equiv,G);
sparsearray(n, f=x->x)=
{
  my(v = vector(n, x, f(x)), e = vec_equiv(v));
  [vector(#e, i, v[e[i][1]]), e];
}
? sparsearray(10, x->x%3)
%1 = [[0, 1, 2], [Vecsmall([3, 6, 9]), Vecsmall([1, 4, 7, 10]), Vecsmall([2, 5, 8])]]
(you have 3 values corresponding to the 3 given sets of indices)
The behaviour is linear, as expected:
? sparsearray(2^20,x->x%7);
time = 307 ms.
? sparsearray(2^21,x->x%7);
time = 670 ms.
? sparsearray(2^22,x->x%7);
time = 1,353 ms.
Use the mapput, mapget and mapisdefined functions on a map created with Map(). If multiple dimensions are required, use a polynomial or vector key.
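For example, a minimal GP sketch using a vector key as a two-dimensional index (history numbers will vary):
? m = Map();
? mapput(m, [2,3], 5); \\ the vector [2,3] acts as a 2-D key
? mapget(m, [2,3])
%3 = 5
? mapisdefined(m, [1,1])
%4 = 0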
I guess that is what you are already doing, and I'm not sure there is a better way. Do you have some code? From personal experience, 2^16 values with 2^14 keys should not be an issue with regard to speed or memory; there may be some unnecessary copying going on in your implementation.
I have a vector of type UInt8 and fixed length 10. I think it contains a null-terminated string, but when I do String(v) it shows the string plus all of the zeros of the rest of the vector.
v = zeros(UInt8, 10)
v[1:5] = Vector{UInt8}("hello")
String(v)
The output is "hello\0\0\0\0\0".
Either I'm packing it wrong or reading it wrong. Any thoughts?
I use this snippet:
"""
    nullstring(x::Vector{UInt8})

Interpret a vector as a null-terminated string.
"""
nullstring(x::Vector{UInt8}) = String(x[1:findfirst(==(0), x) - 1])
Although I bet there are faster ways to do this.
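For the vector from the question, a quick REPL check (assuming the definition above):
julia> v = zeros(UInt8, 10); v[1:5] = Vector{UInt8}("hello");

julia> nullstring(v)
"hello"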
You can use unsafe_string: unsafe_string(pointer(v)) does this without a copy, so it is very fast. But @laborg's solution is better in almost all cases, because it's safe.
If you want both safety and maximal performance, you have to write a manual function yourself:
function get_string(v::Vector{UInt8})
    # Find the first zero byte
    zeropos = 0
    @inbounds for i in eachindex(v)
        iszero(v[i]) && (zeropos = i; break)
    end
    iszero(zeropos) && error("Not null-terminated")
    GC.@preserve v unsafe_string(pointer(v), zeropos - 1)
end
But eh, what are the odds you REALLY need it to be that fast.
You can avoid copying bytes and preserve safety with the following code:
function nullstring!(x::Vector{UInt8})
    i = findfirst(iszero, x)
    SubString(String(x), 1, i - 1)
end
Note that after calling it, x will be empty (String steals the vector's buffer), and the returned value is a SubString rather than a String, but in many scenarios this does not matter. This code makes half the allocations of @laborg's version and is slightly faster (around 10-20%). Jacob's code is still unbeatable, though.
I'm creating a function to be minimized: basically a function of x1 that returns a value cmc. I would also like it to return an intermediate value w for later use. I've just learned that to create a function returning multiple values, you have to return a list, or use setClass (which I'm not very clear about, so I did not use it). The executable code is
t1=1; t2=0.6; x2=1;
c=c(0,1)
f<-function(x) c(x/(t2+x),-t1*x/(t2+x)^2)
phi_c<-function(x1){
  X=rbind(f(x1),f(x2))
  v=solve(X%*%t(X))%*%X%*%c
  w=abs(v)/sum(abs(v))
  cmc=t(c)%*%solve(t(X)%*%diag(c(w))%*%X)%*%c
  return(list(cmc,w))
}
phi_c(0.5)
The output is not very tidy, but acceptable. The problem is that I cannot optimize a function with such output. So now I am doing
t1=1; t2=0.6; x2=1;
c=c(0,1)
f<-function(x) c(x/(t2+x),-t1*x/(t2+x)^2)
phi_c<-function(x1){
  X=rbind(f(x1),f(x2))
  v=solve(X%*%t(X))%*%X%*%c
  w=abs(v)/sum(abs(v))
  cmc=t(c)%*%solve(t(X)%*%diag(c(w))%*%X)%*%c
}
x1=optimize(phi_c,c(0,x2))$min
X=rbind(f(x1),f(x2))
v=solve(X%*%t(X))%*%X%*%c
w=abs(v)/sum(abs(v))
which is very redundant. It's OK when the problem is as simple as this one, but obviously not good when things become complicated. Is there a way to create a function with multiple return values that still lets you designate a primary value to be optimized? I remember some base functions are like that: they give you various outputs, but you can still work with a primary value.
Thanks.
You can just wrap this function in a different function that only returns the intended output, such as
t1=1; t2=0.6; x2=1;
c=c(0,1)
f<-function(x) c(x/(t2+x),-t1*x/(t2+x)^2)
phi_c<-function(x1){
  X=rbind(f(x1),f(x2))
  v=solve(X%*%t(X))%*%X%*%c
  w=abs(v)/sum(abs(v))
  cmc=t(c)%*%solve(t(X)%*%diag(c(w))%*%X)%*%c
  return(list(cmc,w))
}
> optimize(function(x) phi_c(x)[[1]], lower = 0, upper = 5)
$minimum
[1] 4.999922
$objective
[,1]
[1,] 37.12268
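To also recover the intermediate value w without repeating the algebra, one option (a sketch) is to evaluate phi_c once more at the minimizer and pick out the second list element:
res <- optimize(function(x) phi_c(x)[[1]], lower = 0, upper = x2)
x1  <- res$minimum
w   <- phi_c(x1)[[2]]  # one extra evaluation at the optimum recovers w
If phi_c instead returned list(cmc = cmc, w = w), the last line would read more naturally as phi_c(x1)$w.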
I'm struggling to understand parametric type creation in Julia. I know that I can create a type with the following:
type EconData
    values
    dates::Array{Date}
    colnames::Array{ASCIIString}
    function EconData(values, dates, colnames)
        if size(values, 1) != size(dates, 1)
            error("Date/data dimension mismatch.")
        end
        if size(values, 2) != size(colnames, 2)
            error("Name/data dimension mismatch.")
        end
        new(values, dates, colnames)
    end
end
ed1 = EconData([1;2;3], [Date(2014,1), Date(2014,2), Date(2014,3)], ["series"])
However, I can't figure out how to specify how values will be typed. It seems reasonable to me to do something like
type EconData{T}
values::Array{T}
...
function EconData(values::Array{T}, dates, colnames)
...
However, this (and similar attempts) simply produces an error:
ERROR: `EconData{T}` has no method matching EconData{T}(::Array{Int64,1}, ::Array{Date,1}, ::Array{ASCIIString,2})
How can I specify the type of values?
The answer is that things get funky with parametric types and inner constructors; in fact, I think it's probably the most confusing thing in Julia. The immediate solution is to provide a suitable outer constructor:
using Dates
type EconData{T}
    values::Vector{T}
    dates::Array{Date}
    colnames::Array{ASCIIString}
    function EconData(values, dates, colnames)
        if size(values, 1) != size(dates, 1)
            error("Date/data dimension mismatch.")
        end
        if size(values, 2) != size(colnames, 2)
            error("Name/data dimension mismatch.")
        end
        new(values, dates, colnames)
    end
end
EconData{T}(v::Vector{T},d,n) = EconData{T}(v,d,n)
ed1 = EconData([1,2,3], [Date(2014,1), Date(2014,2), Date(2014,3)], ["series"])
What also would have worked is to have done
ed1 = EconData{Int}([1,2,3], [Date(2014,1), Date(2014,2), Date(2014,3)], ["series"])
My explanation might be wrong, but I think the problem is that there is no parametric outer constructor method made by default, so you have to call the constructor for a specific instantiation of the type (my second version) or add the outer constructor yourself (my first version).
Some other comments: you should be explicit about dimensions, i.e. if all your fields are vectors (1D), use Vector{T} or Array{T,1}, and if they are matrices (2D), use Matrix{T} or Array{T,2}. Make it parametric on the dimension if you need to (see the sketch below). If you don't, slow code could be generated, because functions using this type aren't sure about the actual data structure until runtime, and so will have lots of checks.
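For instance, a minimal sketch (in the same pre-1.0 Julia dialect as the question, with the validation checks omitted) that is parametric on both the element type and the dimension:
type EconData{T,N}
    values::Array{T,N}          # N = 1 for vectors, N = 2 for matrices
    dates::Vector{Date}
    colnames::Vector{ASCIIString}
end
Because there is no inner constructor here, the default outer constructor can infer T and N from the values argument.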
I found an answer on SO that explained how to write a randomly weighted drop system for a game. I would prefer to write this code in a more functional-programming style but I couldn't figure out a way to do that for this code. I'll inline the pseudo code here:
R = (some random int);
T = 0;
for o in os
    T = T + o.weight;
    if T > R
        return o;
How could this be written in a style that's more functional? I am using CoffeeScript and underscore.js, but I'd prefer this answer to be language agnostic because I'm having trouble thinking about this in a functional way.
Here are two more functional versions in Clojure and JavaScript, but the ideas here should work in any language that supports closures. Basically, we use recursion instead of iteration to accomplish the same thing, and instead of breaking in the middle we just return a value and stop recursing.
Original pseudo code:
R = (some random int);
T = 0;
for o in os
    T = T + o.weight;
    if T > R
        return o;
Clojure version (objects are just treated as clojure maps):
(defn recursive-version
  [r objects]
  (loop [t 0
         others objects]
    (let [obj (first others)
          new_t (+ t (:weight obj))]
      (if (> new_t r)
        obj
        (recur new_t (rest others))))))
JavaScript version (using underscore for convenience).
Be careful, because this could blow out the stack.
This is conceptually the same as the Clojure version.
var js_recursive_version = function(objects, r) {
    var main_helper = function(t, others) {
        var obj = _.first(others);
        var new_t = t + obj.weight;
        if (new_t > r) {
            return obj;
        } else {
            return main_helper(new_t, _.rest(others));
        }
    };
    return main_helper(0, objects);
};
You can implement this with a fold (aka Array#reduce, or Underscore's _.reduce):
An SSCCE:
items = [
  {item: 'foo', weight: 50}
  {item: 'bar', weight: 35}
  {item: 'baz', weight: 15}
]
r = Math.random() * 100
{item} = items.reduce (memo, {item, weight}) ->
  if memo.sum > r
    memo
  else
    {item, sum: memo.sum + weight}
, {sum: 0}
console.log 'r:', r, 'item:', item
You can run it many times at coffeescript.org and see that the results make sense :)
That being said, I find the fold a bit contrived, as you have to remember both the selected item and the accumulated weight between iterations, and it doesn't short-circuit when the item is found.
Maybe a compromise solution between pure FP and the tedium of reimplementing a find algorithm can be considered (using _.find):
total = 0
{item} = _.find items, ({weight}) ->
  total += weight
  total > r
Runnable example.
I find (no pun intended) this algorithm much more accessible than the first one (and it should perform better, as it doesn't create intermediate objects, and it short-circuits).
Update/side-note: the second algorithm is not "pure" because the function passed to _.find is not referentially transparent (it has the side effect of modifying the external total variable), but the algorithm as a whole is referentially transparent. If you were to encapsulate it in a findItem = (items, r) -> function, that function would be pure and would always return the same output for the same input. That's a very important thing, because it means you can get the benefits of FP while using some non-FP constructs (for performance, readability, or whatever reason) under the hood :D
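For concreteness, a sketch of that findItem encapsulation (the body is the same _.find call as above):
findItem = (items, r) ->
  total = 0 # reset on every call, so the impurity never leaks out
  _.find items, ({weight}) ->
    total += weight
    total > r

console.log findItem(items, Math.random() * 100)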
I think the underlying task is randomly selecting 'events' (objects) from array os with a frequency defined by their respective weights. The approach is to map (i.e. search) a random number (with uniform distribution) onto the stairstep cumulative probability distribution function.
With positive weights, the cumulative sum increases from 0 up to the total weight (1, if the weights are normalized). The code you gave us simply searches starting from the 0 end. To maximize speed with repeated calls, pre-calculate the sums, and order the events so the largest weights come first.
It really doesn't matter whether you search with iteration (looping) or recursion. Recursion is nice in a language that tries to be 'purely functional', but it doesn't help you understand the underlying mathematical problem, and it doesn't help you package the task into a clean function. The underscore functions are another way of packaging the iterations, but they don't change the basic functionality. Only any and all exit early when the target is found.
For a small os array this simple search is sufficient. But with a large array, a binary search will be faster. Looking in underscore, I find that sortedIndex uses this strategy. From Lo-Dash (an underscore drop-in): "Uses a binary search to determine the smallest index at which the value should be inserted into array in order to maintain the sort order of the sorted array"
The basic use of sortedIndex is:
os = [{name:'one',   weight:.7},
      {name:'two',   weight:.25},
      {name:'three', weight:.05}]
t=0; cumweights = (t+=o.weight for o in os)
i = _.sortedIndex(cumweights, R)
os[i]
You can hide the cumulative sum calculation with a nested function like:
osEventGen = (os) ->
  t = 0; xw = (t += y.weight for y in os)
  return (R) ->
    i = _.sortedIndex(xw, R)
    return os[i]
osEvent = osEventGen(os)
osEvent(.3)
# { name: 'one', weight: 0.7 }
osEvent(.8)
# { name: 'two', weight: 0.25 }
osEvent(.99)
# { name: 'three', weight: 0.05 }
In CoffeeScript, Jed Clinger's recursive search could be written like this:
foo = (x, r, t=0) ->
  [y, x...] = x
  t += y
  return [y, t] if x.length==0 or t>r
  return foo(x, r, t)
A loop version using the same basic idea is:
foo = (x, r) ->
  t = 0
  while x.length and t <= r
    [y, x...] = x  # the [first, rest] split
    t += y
  y
Tests on jsPerf http://jsperf.com/sortedindex
suggest that sortedIndex is faster when os.length is around 1000, but slower than the simple loop when the length is more like 30.
For a homework assignment, we've been instructed to complete a task without introducing any "side-effects". I've looked up "side-effects" on Wikipedia, and though I get that in theory it means "modifies a state or has an observable interaction with calling functions", I'm having trouble figuring out specifics.
For example, would creating a value that holds a result not known at compile time be introducing side effects?
Say I had (might not be syntactically perfect):
let myList = someFunction x y;;
if List.exists ((=) 7) myList then true else false;;
Would this introduce side-effects? I guess maybe I'm confused on what "modifies a state" means in the definition of side-effects.
No; a side-effect refers to e.g. mutating a ref cell with the assignment operator :=, or other things where the value referred to by a name changes over time. In this case, myList is an immutable value that never changes during the program, thus it is effect-free.
See also
http://en.wikipedia.org/wiki/Referential_transparency_(computer_science)
A good way to think about it is "have I changed anything which any later code (including running this same function again later) could ever possibly see other than the value I'm returning?" If so, that's a side effect. If not, then you can know that there isn't one.
So, something like:
let inc_nosf v = v+1
has no side effects because it just returns a new value which is one more than the integer v. So if you run the following code in the OCaml toplevel, you get the corresponding results:
# let x = 5;;
val x : int = 5
# inc_nosf x;;
- : int = 6
# x;;
- : int = 5
As you can see, the value of x didn't change. So, since we didn't save the return value, nothing really got incremented. Our function only produces a new return value; it doesn't modify x itself. To save the result into x, we'd have to do:
# let x = inc_nosf x;;
val x : int = 6
# x;;
- : int = 6
This works because the inc_nosf function has no side effects (that is, it only communicates with the outside world through its return value, not by making any other changes); the let simply binds the name x to a new value.
But something like:
let inc_sf r = r := !r+1
has side effects because it changes the value stored in the reference r. So if you run similar code in the toplevel, you get this instead:
# let y = ref 5;;
val y : int ref = {contents = 5}
# inc_sf y;;
- : unit = ()
# y;;
- : int ref = {contents = 6}
So, in this case, even though we still don't save the return value, it got incremented anyway. That means there must have been changes to something other than the return value. In this case, that change was the assignment using := which changed the stored value of the ref.
As a good rule of thumb in OCaml, if you avoid using refs, records, classes, strings, arrays, and hash tables, then you will avoid any risk of side effects. (You can safely use string literals as long as you avoid modifying the string in place with functions like String.set or String.fill.) Basically, any function which can modify a data type in place can cause a side effect.
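For instance, a small toplevel session (a sketch) showing an in-place array modification as a side effect:
# let a = [|1; 2; 3|];;
val a : int array = [|1; 2; 3|]
# let bump_first arr = arr.(0) <- arr.(0) + 1;;
val bump_first : int array -> unit = <fun>
# bump_first a;;
- : unit = ()
# a;;
- : int array = [|2; 2; 3|]
Note that bump_first returns unit, so all of its useful work happens through the mutation.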