Lua multidimensional array __index metatable error

I am trying to make a dynamic multidimensional array, but I have a problem understanding how metatables work. It seems like there is a bug: the table doesn't get and set values the way I expect. Here's an example:
function test(A)
  local G = {}
  local mt = {}
  mt.__index = function(self, i)
    self[i] = setmetatable({}, mt)
    return self[i]
  end
  setmetatable(G, mt)

  G[1] = 10           -- adds 10
  G[1][2][3] = 10     -- error, why? breaks on G[1]

  -- but when I do it in the other order
  G[1][2][3] = 10     -- adds 10
  G[1] = 10           -- erases the whole table and adds 10
  print(G[1][2][3])   -- error
end

You set G[1] to 10. 10 is not a table, so there's no way G[1][2][3] can work.
FYI, __index is only invoked when the key points to nil. If G[1] == 10, then G[1][2] cannot invoke __index.
I suspect that you believe G[1][2][3] and G[1] are two completely separate entities. They are not. When you write:
G[1][2][3] = 42
...it's parsed as:
((G[1])[2])[3] = 42
In other words, in table G, you access index 1, then in that table, you access index 2, then in that table, you assign to index 3.
Does that make it any clearer?

G[1][2][3] = 10 -- error, why? breaks on G[1]

It breaks on G[1][2], because G[1] is a number, not a table.
Try a proxy table: http://www.lua.org/pil/13.4.4.html
__index is only a fallback.
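For reference, here is a rough sketch of what that proxy idea could look like for nested tables (my own illustration under the assumptions noted in the comments, not a tested drop-in): the proxy table stays empty, so __index and __newindex see every read and write, and missing keys get a fresh nested proxy on demand.
-- Sketch only: all data lives in a hidden table; the empty proxy's
-- __index/__newindex intercept every access.
local function make_proxy(data)
  data = data or {}
  return setmetatable({}, {
    __index = function(_, k)
      if data[k] == nil then
        data[k] = make_proxy()   -- create nested proxies on demand
      end
      return data[k]
    end,
    __newindex = function(_, k, v)
      data[k] = v                -- every write lands in the hidden table
    end,
  })
end

local G = make_proxy()
G[1][2][3] = 10
print(G[1][2][3])  --> 10
G[1] = 10          -- still replaces the whole G[1] subtree with a number,
                   -- so G[1][2][3] would error again afterwards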

Related

Function only works when there is a println() statement at the end of each iteration

I'm trying to implement a 3-clique (triangle) finding algorithm from a paper (pp. 212-213) in Julia, but I'm running into a problem with the code:
function find_triangle(graph::AdjacencyListGraphs)
    new_graph = deepcopy(graph)
    sort!(new_graph.edges, by=x->new_graph.D[x], rev=true)
    cliques = Vector{Vector{Int64}}()
    marked_nodes = Set()
    for i in 1:new_graph.n - 2
        cur = new_graph.edges[new_graph.edges.keys[1]]
        # mark vertices adjacent to i
        for j in 1:length(cur)
            push!(marked_nodes, cur[j])
        end
        # search in marked nodes
        for j in 1:length(cur)
            u = cur[j]
            for w in new_graph.edges[u]
                if w in marked_nodes
                    cur_clique = [new_graph.edges.keys[1], u, w]
                    push!(cliques, cur_clique)
                end
                delete!(marked_nodes, u)
            end
        end
        # delete node
        for key in new_graph.edges.keys
            filter!(x->x≠new_graph.edges.keys[1], new_graph.edges[key])
        end
        delete!(new_graph.edges, new_graph.edges.keys[1])
        # this println() call is currently used to prevent an unknown error. Not sure why, but this fixes it
        println(new_graph)
    end
    return cliques
end
The input to the function is the following:
using DataStructures  # for OrderedDict
nodes = [1,2,3,4,5,6]
edges = OrderedDict{Int64, Vector{Int64}}()
edges[1] = [2,3,5]
edges[2] = [1,3,4,6]
edges[3] = [1,2,4,6]
edges[4] = [2,3,5,6]
edges[5] = [1,4]
edges[6] = [2,3,4]
degrees = [3,4,4,4,2,3]
graph = AdjacencyListGraphs(nodes, 6, 10, degrees, edges)
cliques = find_triangle(graph)
And the type definition for the graph is as follows:
mutable struct AdjacencyListGraphs{Int64}
    vals::Vector{Int64}                       # vertex list
    n::Int64                                  # number of vertices
    m::Int64                                  # number of edges
    D::Vector{Int64}                          # degree sequence
    edges::OrderedDict{Int64, Vector{Int64}}  # adjacency list
end
The function runs properly if I include the println() statement, but if I remove just that statement, I run into the following error:
ERROR: LoadError: KeyError: key 2 not found
The issue looks to me like an error in the deletion of a node, and somehow the println() statement fixes it. I need to fix this because I'm trying to run the code on a much bigger graph with about a million triangles, and the println() call at each step is literally crashing my computer.
Any help would be greatly appreciated; thank you!
The reason for the problem is that you use the keys field of OrderedDict, which is private. You should use an accessor function instead, e.g. like this:
function find_triangle(graph::AdjacencyListGraphs)
    new_graph = deepcopy(graph)
    sort!(new_graph.edges, by=x->new_graph.D[x], rev=true)
    cliques = Vector{Vector{Int64}}()
    marked_nodes = Set()
    for i in 1:new_graph.n - 2
        curk = first(keys(new_graph.edges))
        cur = new_graph.edges[curk]
        # mark vertices adjacent to i
        for j in 1:length(cur)
            push!(marked_nodes, cur[j])
        end
        # search in marked nodes
        for j in 1:length(cur)
            u = cur[j]
            for w in new_graph.edges[u]
                if w in marked_nodes
                    cur_clique = [curk, u, w]
                    push!(cliques, cur_clique)
                end
                delete!(marked_nodes, u)
            end
        end
        # delete node
        for key in keys(new_graph.edges)
            filter!(x->x≠curk, new_graph.edges[key])
        end
        delete!(new_graph.edges, curk)
        # the println(new_graph) workaround is no longer needed
    end
    return cliques
end
The reason for the problem is that you delete keys in the dictionary but do not call rehash! on it afterwards. Incidentally, rehash! is called when you call println, because println calls iterate, which in turn calls rehash!. So this would work:
function find_triangle(graph::AdjacencyListGraphs)
    new_graph = deepcopy(graph)
    sort!(new_graph.edges, by=x->new_graph.D[x], rev=true)
    cliques = Vector{Vector{Int64}}()
    marked_nodes = Set()
    for i in 1:new_graph.n - 2
        DataStructures.OrderedCollections.rehash!(new_graph.edges)
        cur = new_graph.edges[new_graph.edges.keys[1]]
        # mark vertices adjacent to i
        for j in 1:length(cur)
            push!(marked_nodes, cur[j])
        end
        # search in marked nodes
        for j in 1:length(cur)
            u = cur[j]
            for w in new_graph.edges[u]
                if w in marked_nodes
                    cur_clique = [new_graph.edges.keys[1], u, w]
                    push!(cliques, cur_clique)
                end
                delete!(marked_nodes, u)
            end
        end
        # delete node
        for key in new_graph.edges.keys
            filter!(x->x≠new_graph.edges.keys[1], new_graph.edges[key])
        end
        delete!(new_graph.edges, new_graph.edges.keys[1])
        # the println(new_graph) workaround is no longer needed
    end
    return cliques
end
However, you should not write code like this; use the public API instead.
Maybe somebody overwrote println or show for that custom type AdjacencyListGraphs; that is the only reason I could find for a println to change the state of the code.

Lua - writing iterator similar to ipairs, but selects indices

I'd like to write an iterator that behaves exactly like ipairs, except which takes a second argument. The second argument would be a table of the indices that ipairs should loop over.
I'm wondering if my current approach is inefficient, and how I could improve it with closures.
I'm also open to other methods of accomplishing the same thing. But I like iterators because they're easy to use and debug.
I'll be making references to and using some of the terminology from Programming in Lua (PiL), especially the chapter on closures (chapter 7 in the link).
So I'd like to have this,
ary = {10,20,30,40}
for i,v in selpairs(ary, {1,3}) do
  ary[i] = v+5
  print(string.format("ary[%d] is now = %g", i, ary[i]))
end
which would output this:
ary[1] is now = 15
ary[3] is now = 35
My current approach is this (in order: iterator, factory, then generic for):
iter = function (t, s)
  s = s + 1
  local i = t.sel[s]
  local v = t.ary[i]
  if v then
    return s, i, v
  end
end

function selpairs (ary, sel)
  local t = {}
  t.ary = ary
  t.sel = sel
  return iter, t, 0
end

ary = {10,20,30,40}
for _,i,v in selpairs(ary, {1,3}) do
  ary[i] = v+5
  print(string.format("ary[%d] is now = %g", i, ary[i]))
end
-- same output as before
It works. sel is the array of 'selected' indices. ary is the array you want to perform the loop on. Inside iter, s indexes sel, and i indexes ary.
But there are a few glaring problems.
I must always discard the first returned argument s (_ in the for loop). I never need s, but it has to be returned as the first argument since it is the "control variable".
The "invariant state" is actually two invariant states (ary and sel) packed into a single table. Pil says that this is more expensive, and recommends using closures. (Hence my writing this question).
The rest of this can be ignored; I'm just providing more context for what I want to use selpairs for.
I'm mostly concerned with the second problem. I'm writing this for a library I'm making for generating music. Doing simple stuff like ary[i] = v+5 won't really be a problem. But when I do stuff like accessing object properties and checking bounds, then I get concerned that the 'invariant state as a table' approach may be creating unnecessary overhead. Should I be concerned about this?
If anything, I'd like to know how to write this with closures just for the knowledge.
Of course, I've tried using closures, but I'm failing to understand the scope of "locals in enclosing functions" and how it relates to a for loop calling an iterator.
As for the first problem, I imagine I could make the control variable a table of s, i, and v. And at the return in iter, unpack the table in the desired order.
But I'm guessing that this is inefficient too.
Eventually, I'd like to write an iterator which does this, except nested into itself. My main data structure is arrays of arrays, so I'd hope to make something like this:
ary_of_arys = {
  {10, 20, 30, 40},
  {5, 6, 7, 8},
  {0.9, 1, 1.1, 1.2},
}
for aoa,i,v in selpairs_inarrays(ary_of_arys, {1,3}, {2,3,4}) do
  ary_of_arys[aoa][i] = v+5
end
And this too, could use the table approach, but it'd be nice to know how to take advantage of closures.
I've actually done something similar: a function that basically does the same thing by taking a function as its fourth and final argument. It works just fine, but would this be any more efficient than an iterator?
You can hide "control variable" in an upvalue:
local function selpairs(ary, sel)
  local s = 0
  return function()
    s = s + 1
    local i = sel[s]
    local v = ary[i]
    if v then
      return i, v
    end
  end
end
Usage:
local ary = {10,20,30,40}
for i, v in selpairs(ary, {1,3}) do
  ary[i] = v+5
  print(string.format("ary[%d] is now = %g", i, ary[i]))
end
Nested usage:
local ary_of_arys = {
  {10, 20, 30, 40},
  {5, 6, 7, 8},
  {0.9, 1, 1.1, 1.2},
}
local outer_indices = {1,3}
local inner_indices = {2,3,4}
for aoa, ary in selpairs(ary_of_arys, outer_indices) do
  for i, v in selpairs(ary, inner_indices) do
    ary[i] = v+5  -- This is the same as ary_of_arys[aoa][i] = v+5
  end
end
Not sure if I understand what you want to achieve, but why not simply write
local values = {"a", "b", "c", "d"}
for i, key in ipairs {3,4,1} do
  print(values[key])
end
and so forth, instead of implementing all that iterator stuff? I mean, your use case is rather simple. It can be easily extended to more dimensions.
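For example, applied to the nested case from the question, that approach might look like this (just a sketch of what "extended to more dimensions" could mean here):
local ary_of_arys = {
  {10, 20, 30, 40},
  {5, 6, 7, 8},
  {0.9, 1, 1.1, 1.2},
}
-- Loop only over the selected outer and inner indices, no custom iterator needed.
for _, aoa in ipairs({1, 3}) do
  for _, i in ipairs({2, 3, 4}) do
    ary_of_arys[aoa][i] = ary_of_arys[aoa][i] + 5
  end
end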
And here's a coroutine-based possibility:
function selpairs(t, selected)
  return coroutine.wrap(function()
    for _, k in ipairs(selected) do
      coroutine.yield(k, t[k])
    end
  end)
end
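Usage would be the same as with the closure version above, for example:
local ary = {10, 20, 30, 40}
for i, v in selpairs(ary, {1, 3}) do
  ary[i] = v + 5
  print(string.format("ary[%d] is now = %g", i, ary[i]))
end
-- ary[1] is now = 15
-- ary[3] is now = 35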

Push dictionary? How to achieve this in Lua?

Say I have this dictionary in Lua
places = {dest1 = 10, dest2 = 20, dest3 = 30}
In my program I check if the dictionary has met my size limit, in this case 3. How do I push the oldest key/value pair out of the dictionary and add a new one?
places["newdest"] = 50
--places should now look like this, dest3 pushed off and newdest added and dictionary has kept its size
places = {newdest = 50, dest1 = 10, dest2 = 20}
It's not too difficult to do this if you really need it, and it's easily reusable as well.
local function ld_next(t, i)  -- This is an ordered iterator, oldest first.
  if i <= #t then
    return i + 1, t[i], t[t[i]]
  end
end

local limited_dict = {
  __newindex = function(t, k, v)
    if #t == t[0] then           -- at the size limit: evict the oldest entry
      t[table.remove(t, 1)] = nil
    end
    table.insert(t, k)           -- record insertion order in the array part
    rawset(t, k, v)
  end,
  __pairs = function(t)
    return ld_next, t, 1
  end,
}

local t = setmetatable({[0] = 3}, limited_dict)
t['dest1'] = 10
t['dest2'] = 20
t['dest3'] = 30
t['dest4'] = 50

for i, k, v in pairs(t) do print(k, v) end
dest2 20
dest3 30
dest4 50
The order is stored in the numeric indices, with the 0th index holding the limit on the number of unique keys the table can have. (Note that the __pairs metamethod is honored only in Lua 5.2 and later.)
Given that dictionary keys do not record the order in which they were entered, I wrote something that should help you accomplish what you want regardless.
-- Note: this expects a fifo table (oldest key first) to be visible to push_old.
function push_old(t, k, v)
  local z = fifo[1]        -- oldest key
  t[z] = nil               -- evict it
  t[k] = v                 -- add the new pair
  table.insert(fifo, k)    -- remember the new key as the newest
  table.remove(fifo, 1)    -- forget the evicted key
end
You would need to create the fifo table first, based on the order you entered the keys (for instance, fifo = {"dest3", "dest2", "dest1"}, based on your post, from first entered to last entered), then use:
push_old(places, "newdest", 50)
and the function will do the work. Happy holidays!
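A quick usage sketch (my own example, assuming fifo is a variable that push_old can see):
local places = {dest1 = 10, dest2 = 20, dest3 = 30}
fifo = {"dest3", "dest2", "dest1"}  -- from first entered to last entered, as above

push_old(places, "newdest", 50)
-- places is now {newdest = 50, dest1 = 10, dest2 = 20}
-- fifo is now {"dest2", "dest1", "newdest"}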

Lua Table Comparisons Within Tables

So I have a table that holds references to other tables like:
local a = newObject()
a.collection = {}
for i = 1, 100 do
  local b = newObject()
  a.collection[#a.collection + 1] = b
end
Now if I want to see if a particular object is within "a" I have to use pairs like so:
local z = a.collection[ 99 ]
for i,j in pairs( a.collection ) do
  if j == z then
    return true
  end
end
The z object is in the 99th spot, and I would have to wait for pairs to iterate through the other 98 objects. This setup is making my program crawl. Is there a way to make some sort of key that isn't a string, or a table-to-table comparison that is a one-liner? Like:
if a.collection[{z}] then return true end
Thanks in advance!
Why are you storing the object in the value slot and not the key slot of the table?
local a = newObject()
a.collection = {}
for i = 1, 100 do
  local b = newObject()
  a.collection[b] = i
end
to see if a particular object is within "a"
return a.collection[b]
If you need integer indexed access to the collection, store it both ways:
local a = newObject()
a.collection = {}
for i = 1, 100 do
  local b = newObject()
  a.collection[i] = b
  a.collection[b] = i
end
Finding:
local z = a.collection[99]
if a.collection[z] then return true end
Don't know if it's faster or not, but maybe this helps:
Filling:
local a = {}
a.collection = {}
for i = 1, 100 do
  local b = {}
  a.collection[b] = true  -- Table / Object as index
end
Finding:
local z = a.collection[99]
if a.collection[z] then return true end
If that's not what you wanted to do, you can break your whole array into smaller buckets and use a hash to keep track of which object belongs to which bucket.
You might want to consider switching from using pairs() to using a regular for loop and indexing the table; pairs() seems to be slower on larger collections of tables.
for i = 1, #a.collection do
  if a.collection[i] == z then
    return true
  end
end
I compared the speed of iterating through a collection of 1 million tables using both pairs() and table indexing, and the indexing was a little bit faster every time. Try it yourself using os.clock() to profile your code.
I can't really think of a faster way than your solution other than using some kind of hashing function to assign unique indexes into the a.collection table. However, doing that would make getting a specific table out a non-trivial task (you wouldn't just be able to do a.collection[99]; you'd have to iterate through until you found the one you wanted), but then you could easily test whether a table is in a.collection by doing something like a.collection[hashFunc(z)] ~= nil...
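If you want to check the pairs() vs. numeric indexing claim yourself, a rough os.clock() sketch along these lines should do (the sizes and setup here are just an example, and results will vary by Lua version and machine):
-- Rough benchmark sketch: look for one table near the end of a large collection.
local collection = {}
for i = 1, 1000000 do collection[i] = {} end
local target = collection[999999]

local t0 = os.clock()
for _, v in pairs(collection) do
  if v == target then break end
end
print("pairs()     :", os.clock() - t0)

t0 = os.clock()
for i = 1, #collection do
  if collection[i] == target then break end
end
print("numeric for :", os.clock() - t0)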

MATLAB: What happens for a global variable when running in the parallel mode?

What happens for a global variable when running in the parallel mode?
I have a global variable, "to_be_optimized_parameterIndexSet", which is a vector of indexes that should be optimized using gamultiobj, and I have set its value only in the main script (nowhere else).
My code works properly in serial mode, but when I switch to parallel mode (using "matlabpool open" and setting proper values for 'gaoptimset'), the mentioned global variable becomes empty (= []) in the fitness function and causes this error:
??? Error using ==> parallel_function at 598
Error in ==> PF_gaMultiFitness at 15 [THIS LINE: constants(to_be_optimized_parameterIndexSet) = individual;]
In an assignment A(I) = B, the number of elements in B and
I must be the same.
Error in ==> fcnvectorizer at 17
parfor (i = 1:popSize)
Error in ==> gamultiobjMakeState at 52
Score =
fcnvectorizer(state.Population(initScoreProvided+1:end,:),FitnessFcn,numObj,options.SerialUserFcn);
Error in ==> gamultiobjsolve at 11
state = gamultiobjMakeState(GenomeLength,FitnessFcn,output.problemtype,options);
Error in ==> gamultiobj at 238
[x,fval,exitFlag,output,population,scores] = gamultiobjsolve(FitnessFcn,nvars, ...
Error in ==> PF_GA_mainScript at 136
[x, fval, exitflag, output] = gamultiobj(@(individual)PF_gaMultiFitness(individual, initialConstants), ...
Caused by:
Failure in user-supplied fitness function evaluation. GA cannot continue.
I have checked all the code to make sure I've not changed this global variable everywhere else.
I have a quad-core processor.
Where is the bug? Any suggestions?
EDIT 1: The MATLAB code in the main script:
clc
clear
close all
format short g
global simulation_duration % PF_gaMultiFitness will use this variable
global to_be_optimized_parameterIndexSet % PF_gaMultiFitness will use this variable
global IC stimulusMoment % PF_gaMultiFitness will use these variables
[initialConstants IC] = oldCICR_Constants; %initialize state
to_be_optimized_parameterIndexSet = [21 22 23 24 25 26 27 28 17 20];
LB = [ 0.97667 0.38185 0.63529 0.046564 0.23207 0.87484 0.46014 0.0030636 0.46494 0.82407 ];
UB = [1.8486 0.68292 0.87129 0.87814 0.66982 1.3819 0.64562 0.15456 1.3717 1.8168];
PopulationSize = input('Population size? ') ;
GaTimeLimit = input('GA time limit? (second) ');
matlabpool open
nGenerations = inf;
options = gaoptimset('PopulationSize', PopulationSize, 'TimeLimit',GaTimeLimit, 'Generations', nGenerations, ...
'Vectorized','off', 'UseParallel','always');
[x, fval, exitflag, output] = gamultiobj(@(individual)PF_gaMultiFitness(individual, initialConstants), ...
length(to_be_optimized_parameterIndexSet),[],[],[],[],LB,UB,options);
matlabpool close
% ... some other code to show the results ...
The MATLAB code of the fitness function, "PF_gaMultiFitness":
function objectives = PF_gaMultiFitness(individual, constants)
global simulation_duration IC stimulusMoment to_be_optimized_parameterIndexSet
% THIS FUNCTION RETURNS MULTI OBJECTIVES AND PUTS EACH OBJECTIVE IN A COLUMN

constants(to_be_optimized_parameterIndexSet) = individual;
[smcState, ~, Time] = oldCICR_CompCore(constants, IC, simulation_duration, 2);

targetValue = 1;                                      % [uM] desired [Ca]i peak concentration
afterStimulus = smcState(Time > stimulusMoment, 14);  % values of [Ca]i after stimulus
peak_Ca_value = max(afterStimulus);                   % smcState(:,14) is [Ca]i

if peak_Ca_value < 0.8 * targetValue
    objectives(1,1) = inf;
else
    objectives(1,1) = abs(peak_Ca_value - targetValue);
end

pkIDX = peakFinder(afterStimulus);
nPeaks = sum(pkIDX);

if nPeaks > 1
    peakIndexes = find(pkIDX);
    period = Time(peakIndexes(2)) - Time(peakIndexes(1));
    objectives(1,2) = 1e5 * 1/period;
elseif nPeaks == 1 && peak_Ca_value > 0.8 * targetValue
    objectives(1,2) = 0;
else
    objectives(1,2) = inf;
end
end
Global variables do not get passed from the MATLAB client to the workers executing the body of the PARFOR loop. The only data that does get sent into the loop body are variables that occur in the text of the program. This blog entry might help.
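One way around this (a sketch of a possible restructuring, not necessarily how you want to organize your code) is to drop the globals and let the anonymous fitness function capture the values it needs; captured variables are serialized and sent to the workers along with the function handle. This assumes PF_gaMultiFitness is changed to accept those values as ordinary input arguments instead of declaring them global:
% Sketch: capture formerly-global data in the anonymous function handle,
% so the workers receive copies of it automatically.
% (Assumes PF_gaMultiFitness now takes these extra input arguments.)
fitnessFcn = @(individual) PF_gaMultiFitness(individual, initialConstants, ...
    to_be_optimized_parameterIndexSet, simulation_duration, IC, stimulusMoment);
[x, fval, exitflag, output] = gamultiobj(fitnessFcn, ...
    length(to_be_optimized_parameterIndexSet), [], [], [], [], LB, UB, options);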
It really depends on the type of variable you're putting in. I would need to see more of your code to point out the flaw, but in general it is good practice to avoid assuming complicated variables will be passed to each worker. In other words, anything more than a primitive may need to be reinitialized inside a parallel routine or may need specific function calls (like using feval for function handles).
My advice: RTM
