I am using this piece of code and a stack overflow is triggered; if I use Extlib's Hashtbl the error does not occur. Any hints on using a specialized Hashtbl without a stack overflow?
module ColorIdxHash = Hashtbl.Make(
  struct
    type t = Img_types.rgb_t
    let equal = (==)
    let hash = Hashtbl.hash
  end
)
(* .. *)
let (ctable: int ColorIdxHash.t) = ColorIdxHash.create 256 in
for x = 0 to width - 1 do
  for y = 0 to height - 1 do
    let c = Img.get img x y in
    let rgb = Color.rgb_of_color c in
    if not (ColorIdxHash.mem ctable rgb) then ColorIdxHash.add ctable rgb (ColorIdxHash.length ctable)
  done
done;
(* .. *)
The backtrace points to hashtbl.ml:
Fatal error: exception Stack_overflow
Raised at file "hashtbl.ml", line 54, characters 16-40
Called from file "img/write_bmp.ml", line 150, characters 52-108
...
Any hints?
Well, you're using physical equality (==) to compare the colors in your hash table. If the colors are structured values (I can't tell from this code), none of them will be physically equal to each other. If all the colors are distinct objects, they will all go into the table, which could really be quite a large number of objects. On the other hand, the hash function is going to be based on the actual color R,G,B values, so there may well be a large number of duplicates. This will mean that your hash buckets will have very long chains. Perhaps some internal function isn't tail recursive, and so is overflowing the stack.
Normally the length of the longest chain will be 2 or 3, so it wouldn't be surprising that this error doesn't come up often.
Looking at my copy of hashtbl.ml (OCaml 3.12.1), I don't see anything non-tail-recursive on line 54. So my guess might be wrong. On line 54 a new internal array is allocated for the hash table. So another idea is simply that your hash table is getting too big (perhaps due to the unwanted duplicates).
One thing to try is to use structural equality (=) and see if the problem goes away.
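For example, here is a minimal sketch of the functor using structural equality, assuming Img_types.rgb_t is an ordinary immutable value (a record, tuple, or int), so that (=) and Hashtbl.hash agree with each other:

module ColorIdxHash = Hashtbl.Make(
  struct
    type t = Img_types.rgb_t
    let equal = (=)          (* structural equality, consistent with Hashtbl.hash *)
    let hash = Hashtbl.hash
  end
)

With this definition, two colors with the same R,G,B values hash to the same bucket and compare equal, so each distinct color is stored only once.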
One reason you may have non-termination or stack overflows is if your type contains cyclic values. (==) will terminate on cyclic values (while (=) may not), but Hashtbl.hash is probably not cycle-safe. So if you manipulate cyclic values of type Img_types.rgb_t, you have to devise your own cycle-safe hash function -- typically, calling Hashtbl.hash on only one of the non-cyclic subfields/subcomponents of your values.
I've already been bitten by precisely this issue in the past. Not a fun bug to track down.
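To illustrate the cycle-safe idea with a purely hypothetical type (not the actual Img_types.rgb_t): hash only the acyclic fields and keep the potentially cyclic one out of the computation.

(* Hypothetical record with a field that could form a cycle. *)
type rgb_like = { r : int; g : int; b : int; mutable next : rgb_like option }

module CycleSafeHash = Hashtbl.Make(
  struct
    type t = rgb_like
    let equal = (==)                            (* terminates even on cyclic values *)
    let hash c = Hashtbl.hash (c.r, c.g, c.b)   (* never follows the cyclic field *)
  end
)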
My goal is to understand how Eva shrinks the intervals for a variable. For example:
unsigned int nondet_uint(void);

int main()
{
    unsigned int x = nondet_uint();
    unsigned int y = nondet_uint();
    //@ assert x >= 20 && x <= 30;
    //@ assert y <= 60;
    //@ assert x >= y;
    return 0;
}
So, we have x=[20,30] and y=[0,60]. However, Eva's results shrink y to [0,30], which is the part of y's domain where the assertion can hold.
[eva] ====== VALUES COMPUTED ======
[eva:final-states] Values at end of function main:
x ∈ [20..30]
y ∈ [0..30]
__retres ∈ {0}
I tried some options for the Eva plugin, but none showed the steps for it. May I ask you to point me to the method or a publication describing how these values are computed?
Showing values during abstract interpretation
I tried some options for the Eva plugin, but none showed the steps for it.
The most efficient way to follow the evaluation is not via command-line options, but by adding Frama_C_show_each(exp) statements in the code. These are special function calls which, during the analysis, emit the values of the expressions they contain. They are especially useful in loops, for instance to see when a widening is triggered, or what happens to the loop counter values.
Note that displaying all of the intermediary evaluation and reduction steps would be very verbose, even for very small programs. By default, this information is not exposed, since it is too dense and rarely useful.
For starters, try adding Frama_C_show_each statements, and use the Frama-C GUI to see the result. It allows focusing on any expression in the code and, in the Values tab, shows the values for the given expression, at the selected statement, for each callstack. You can also press Ctrl+E and type an arbitrary expression to have its value evaluated at that statement.
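For illustration, here is a sketch of your example instrumented this way; the _xy suffix and the forward declaration are just one possible choice, since Eva treats any call whose name starts with Frama_C_show_each specially and prints the abstract values of its arguments each time the analysis reaches it:

/* Declared only to keep the C front-end quiet; Eva interprets the call itself. */
void Frama_C_show_each_xy(unsigned int x, unsigned int y);
unsigned int nondet_uint(void);

int main()
{
    unsigned int x = nondet_uint();
    unsigned int y = nondet_uint();
    //@ assert x >= 20 && x <= 30;
    //@ assert y <= 60;
    Frama_C_show_each_xy(x, y);   /* abstract values before the last assertion */
    //@ assert x >= y;
    Frama_C_show_each_xy(x, y);   /* y is reduced to [0..30] at this point */
    return 0;
}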
If you want more details about the values, their reductions, and the overall mechanism, see the section below.
Detailed information about values in Eva
Your question is related to the values used by the abstract interpretation engine in Eva.
Chapter 3 of the Eva User Manual describes the abstractions used by the engine, which are, succinctly:
sets of integers, which are maximally precise but limited to a number of elements (modified by option -eva-ilevel, which on Frama-C 22 is set to 8 by default);
integer intervals with periodicity information (also called modulo, or congruence), e.g. [2..42],2%10 being the set containing {2, 12, 22, 32, 42}. In the simple case, e.g. [2..42], all integers between 2 and 42 are included;
sets of addresses (for pointers), with offsets represented using the above values (sets of integers or intervals);
intervals of floating-point variables (unlike integers, there are no small sets of floating-point values).
Why is all of this necessary? Because without knowing some of these details, you'll have a hard time understanding why the analysis is sometimes precise, sometimes imprecise.
Note that the term reduction is used in the documentation, instead of shrinkage. So look for words related to reduce in the Eva manual when searching for clues.
For instance, in the following code:
int a = Frama_C_interval(-5, 5);
if (a != 0) {
    //@ assert a != 0;
    int b = 5 / a;
}
By default, the analysis will not be able to remove the 0 from the interval inside the if, because [-5..-1];[1..5] is not an interval, but a disjoint union of intervals. However, if the number of elements drops below -eva-ilevel, then the analysis will convert it into a small set, and get a precise result. Therefore, changing some analysis options will result in different ranges, and different results.
In some cases, you can force Eva to compute using disjunctions, for instance by adding the split ACSL annotation, e.g. //@ split a < b || a >= b;. But you still need to give the analysis some "fuel" for it to evaluate both branches separately. The easiest way to do so is to use -eva-precision N, with N being an integer between 0 and 11. The higher N is, the more splitting is allowed to happen, but the longer the analysis may take.
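Here is a hedged sketch of what that could look like on the earlier division example (the exact annotation syntax is described in the Eva manual):

int a = Frama_C_interval(-5, 5);
//@ split a < 0;                  /* ask Eva to analyze a < 0 and a >= 0 separately */
if (a != 0) {
    //@ assert a != 0;
    int b = 5 / a;                /* each branch now sees a small set that excludes 0 */
}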
Note that, to ensure termination of the analysis, some mechanisms such as widening are used. Without it, a simple loop might require billions of evaluation steps to terminate. This mechanism may introduce extra values which lead to a less precise analysis.
Finally, there are also some abstract domains (option -eva-domains) which allow other kinds of values besides the default ones mentioned above. For instance, the sign domain allows splitting values between negative, zero and positive, and would avoid the imprecision in the above example. The Eva user manual contains examples of usage of each of the domains, indicating when they are useful.
I am trying to read a .tif file in Julia as a floating-point array. With the FileIO & ImageMagick packages I am able to do this, but the array I get is of type Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}.
I can convert this FixedPoint array to a Float32 array by multiplying it by 255 (because UInt8), but I am looking for a function to do this for any type of FixedPointNumber (e.g. reinterpret() or convert()).
using FileIO
# Load the tif
obj = load("test.tif");
typeof(obj)
# Convert to Float32-Array
objNew = real.(obj) .* 255
typeof(objNew)
The output is
julia> using FileIO
julia> obj = load("test.tif");
julia> typeof(obj)
Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
julia> objNew = real.(obj) .* 255;
julia> typeof(objNew)
Array{Float32,2}
I have been looking in the docs for quite a while and have not found a function to convert a given FixedPoint array to a floating-point array without multiplying it by the maximum value of the integer type.
Thanks for any help.
edit:
I made a small gist to see if the solution by Michael works, and it does. Thanks!
Note: I don't know why, but the real.(obj) .* 255 code does not work (see the gist).
Why not just Float32.(a)?
using ColorTypes
a = Gray.(convert.(Normed{UInt8,8}, rand(5,6)));
typeof(a)
#Array{ColorTypes.Gray{FixedPointNumbers.Normed{UInt8,8}},2}
Float32.(a)
The short answer is indeed the one given by Michael, just use Float32.(a) (for grayscale). Another alternative is channelview(a), which generally performs channel separation thus also stripping the color information from the array. In the latter case you won't get a Float32 array, because your image is stored with 8 bits per pixel, instead you'll get an N0f8 (= FixedPointNumbers.Normed{UInt8,8}). You can read about those numbers here.
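A small sketch of both options, assuming the usual image stack is installed (ColorTypes and FixedPointNumbers, plus ImageCore or Images for channelview):

using ColorTypes, FixedPointNumbers, ImageCore

a = Gray.(N0f8.(rand(5, 6)))   # same element type as the loaded TIFF
b = Float32.(a)                # Array{Float32,2}, values stay on the 0-to-1 scale
c = channelview(a)             # strips the Gray wrapper; element type is still N0f8
d = Float32.(c)                # convert the raw channel data if you need floats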
Your instinct to multiply by 255 is natural, given how other image-processing frameworks work, but Julia has made some effort to be consistent about "meaning" in ways that are worth taking a moment to think about. For example, in another programming language just changing the numerical precision of an array:
img = uint8(255*rand(10, 10, 3)); % an 8-bit per color channel image
figure; image(img)
imgd = double(img); % convert to double-precision, but don't change the values
figure; image(imgd)
produces the following surprising result (the second image renders as all white):
That second "all white" image represents saturation. In this other language, "5" means two completely different things depending on whether it's stored in memory as a UInt8 vs a Float64. I think it's fair to say that under any normal circumstances, a user of a numerical library would call this a bug, and a very serious one at that, yet somehow many of us have grown to accept this in the context of image processing.
These new types arise because in Julia we've gone to the effort to implement new numerical types (FixedPointNumbers) that act like fractional values (e.g., between 0 and 1) but are stored internally with the same bit pattern as the "corresponding" UInt8 (the one you get by multiplying by 255). This allows us to work with 8-bit data and yet allow values to always be interpreted on a consistent scale (0.0=black, 1.0=white).
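To make the "same bit pattern" claim concrete, here is a quick illustrative sketch:

using FixedPointNumbers

x = N0f8(0.5)             # stored internally as round(0.5 * 255) = 0x80
reinterpret(UInt8, x)     # 0x80 -- the very bit pattern of the "corresponding" UInt8
Float32(x)                # ≈ 0.5019608, the value on the 0.0 = black, 1.0 = white scale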
In Julia, say I have an object_id for a variable but have forgotten its name, how can I retrieve the object using the id?
I.e. I want the inverse of some_id = object_id(some_object).
As @DanGetz says in the comments, object_id is a hash function and is designed not to be invertible. @phg is also correct that ObjectIdDict is intended precisely for this purpose (it is documented although not discussed much in the manual):
ObjectIdDict([itr])
ObjectIdDict() constructs a hash table where the keys are (always)
object identities. Unlike Dict it is not parameterized on its key and
value type and thus its eltype is always Pair{Any,Any}.
See Dict for further help.
In other words, it hashes objects by === using object_id as a hash function. If you have an ObjectIdDict and you use the objects you encounter as the keys into it, then you can keep them around and recover those objects later by taking them out of the ObjectIdDict.
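A minimal sketch of that pattern, using the Julia 0.6-era ObjectIdDict quoted above (in later versions the equivalent type is IdDict):

registry = ObjectIdDict()

v = [1, 2, 3]
registry[v] = "keep me"        # keyed by identity (===), not by structural equality

registry[v]                    # "keep me" -- the very same object is found again
haskey(registry, [1, 2, 3])    # false -- a structurally equal copy is a different object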
However, it sounds like you want to do this without the explicit ObjectIdDict just by asking which object ever created has a given object_id. If so, consider this thought experiment: if every object were always recoverable from its object_id, then the system could never discard any object, since it would always be possible for a program to ask for that object by ID. So you would never be able to collect any garbage, and the memory usage of every program would rapidly expand to use all of your RAM and disk space. This is equivalent to having a single global ObjectIdDict which you put every object ever created into. So inverting the object_id function that way would require never deallocating any objects, which means you'd need unbounded memory.
Even if we had infinite memory, there are deeper problems. What does it mean for an object to exist? In the presence of an optimizing compiler, this question doesn't have a clear-cut answer. It is often the case that an object appears, from the programmer's perspective, to be created and operated on, but in reality – i.e. from the hardware's perspective – it is never created. Consider this function which constructs a complex number and then uses it for a simple computation:
julia> function foo(y::Real)
           z = Complex(0,y)
           w = 2z*im
           return real(w)
       end
foo (generic function with 1 method)
julia> foo(123)
-246
From the programmer's perspective, this constructs the complex number z and then constructs 2z, then 2z*im, and finally constructs real(2z*im) and returns that value. So all of those values should be inserted into the "Great ObjectIdDict in the Sky". But are they really constructed? Here's the LLVM code for this function applied to an Int:
julia> @code_llvm foo(123)
define i64 @julia_foo_60833(i64) #0 !dbg !5 {
top:
  %1 = shl i64 %0, 1
  %2 = sub i64 0, %1
  ret i64 %2
}
No Complex values are constructed at all! Instead, all of the work is inlined and eliminated instead of actually being done. The whole computation boils down to just doubling the argument (by shifting it left one bit) and negating it (by subtracting it from zero). This optimization can be done first and foremost because the intermediate steps have no observable side effects. The compiler knows that there's no way to tell the difference between actually constructing complex values and operating on them and just doing a couple of integer ops – as long as the end result is always the same. Implicit in the idea of a "Great ObjectIdDict in the Sky" is the assumption that all objects that seem to be constructed actually are constructed and inserted into a large, permanent data structure – which is a massive side effect. So not only is recovering objects from their IDs incompatible with garbage collection, it's also incompatible with almost every conceivable program optimization.
The only other way one could conceive of inverting object_id would be to compute its inverse image on demand instead of saving objects as they are created. That would solve both the memory and optimization problems. Of course, it isn't possible since there are infinitely many possible objects but only a finite number of object IDs. You are vanishingly unlikely to actually encounter two objects with the same ID in a program, but the finiteness of the ID space means that inverting the hash function is impossible in principle since the preimage of each ID value contains an infinite number of potential objects.
I've probably refuted the possibility of an inverse object_id function far more thoroughly than necessary, but it led to some interesting thought experiments, and I hope it's been helpful – or at least thought provoking. The practical answer is that there is no way to get around explicitly stashing every object you might want to get back later in an ObjectIdDict.
I just started learning Erlang and since I found out there is no for loop I tried recreating one with recursion:
display(Rooms, In) ->
Room = array:get(In, Rooms)
io:format("~w", [Room]),
if
In < 59 -> display(Rooms, In + 1);
true -> true
end.
With this code I need to display the content (false or true) of each element in Rooms until the number 59 is reached. However, this produces weird output which displays all of Rooms' contents about 60 times (?). When I drop the if statement and keep only the recursive call it works, except for an exception error: bad argument.
So basically my question is how do I put a proper end to my "for loop".
Thanks in advance!
Hmm, this code is rewritten and not pasted. It is missing a comma after Room = array:get(In, Rooms). The bad argument error is probably this:
exception error: bad argument
in function array:get/2 (array.erl, line 633)
in call from your_module_name:display/2
This means that you called array:get/2 with bad arguments: either Rooms is not an array or you used an index out of range. The second one is more likely the cause. You are checking if:
In < 59
and then calling display again, so it will get to 58, evaluate to true and call:
display(Rooms, 59)
which is too much.
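A sketch of the smallest fix, assuming the valid indices are 0 through 58:

display(Rooms, In) ->
    Room = array:get(In, Rooms),
    io:format("~p~n", [Room]),
    if
        In < 58 -> display(Rooms, In + 1);   % last valid index assumed to be 58
        true    -> ok
    end.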
There are also a couple of other things:
In io:format/2 it is usually better to use ~p instead of ~w. It does exactly the same thing, but with pretty printing, so it is easier to read.
In Erlang if is unnatural, because it evaluates guards and one of them has to match or you get an error... It is just really weird.
case is much more readable:
case In < 59 of
false -> do_something();
true -> ok
end
In a case you usually write one clause that always matches:
case Something of
{One, Two} -> do_stuff(One, Two);
[Head, RestOfList] -> do_other_stuff(Head, RestOfList);
_ -> none_of_the_previous_matched()
end
The underscore is really useful in pattern matching.
In functional languages you should never worry about details like indexes! The array module has a map function, which takes a function and an array as arguments and calls the given function on each array element.
So you can write your code this way:
display(Rooms) ->
DisplayRoom = fun(Index, Room) -> io:format("~p ~p~n", [Index, Room]) end,
array:map(DisplayRoom, Rooms).
This isn't perfect though, because apart from calling io:format/2 and displaying the contents, it will also construct a new array. io:format returns the atom ok after completion, so you will get an array of 58 ok atoms. There is also array:foldl/3, which doesn't have that problem.
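For instance, a sketch with array:foldl/3, which threads an accumulator instead of building a result array:

display(Rooms) ->
    Print = fun(Index, Room, Acc) ->
                    io:format("~p ~p~n", [Index, Room]),
                    Acc
            end,
    array:foldl(Print, ok, Rooms).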
If you don't have to have random access, it would be best to simply use lists.
Rooms = lists:duplicate(58, false),
DisplayRoom = fun(Room) -> io:format("~p~n", [Room]) end,
lists:foreach(DisplayRoom, Rooms)
If you are not comfortable with higher order functions, lists also allow you to easily write recursive algorithms with function clauses:
display([]) -> % always start with base case, where you don't need recursion
ok; % you have to return something
display([Room | RestRooms]) -> % pattern match on list splitting it to first element and tail
io:format("~p~n", [Room]), % do something with first element
display(RestRooms). % recursive call on rest (RestRooms is quite funny name :D)
To summarize - don't write forloops in Erlang :)
This is a general misunderstanding of recursive loop definitions. What you are trying to check for is called the "base condition" or "base case". This is easiest to deal with by matching:
display(0, _) ->
ok;
display(In, Rooms) ->
Room = array:get(In, Rooms),
io:format("~w~n", [Room]),
display(In - 1, Rooms).
This is, however, rather unidiomatic. Instead of using a hand-made recursive function, something like a fold or map is more common.
Going a step beyond that, though, most folks would probably have chosen to represent the rooms as a set or list, and iterated over it using list operations. When hand-written the "base case" would be an empty list instead of a 0:
display([]) ->
ok;
display([Room | Rooms]) ->
io:format("~w~n", [Room]),
display(Rooms).
Which would have been avoided in favor, once again, of a list operation like foreach:
display(Rooms) ->
lists:foreach(fun(Room) -> io:format("~w~n", [Room]) end, Rooms).
Some folks really dislike reading lambdas in-line this way. (In this case I find it readable, but the larger they get the more likely they are to become genuinely distracting.) An alternative representation of the exact same function:
display(Rooms) ->
Display = fun(Room) -> io:format("~w~n", [Room]) end,
lists:foreach(Display, Rooms).
Which might itself be passed up in favor of using a list comprehension as a shorthand for iteration:
_ = [io:format("~w~n", [Room]) || Room <- Rooms].
When only trying to get a side effect, though, I really think that lists:foreach/2 is the best choice for semantic reasons.
I think part of the difficulty you are experiencing is that you have chosen to use a rather unusual structure as your base data for your first Erlang program that does anything (arrays are not used very often, and are not very idiomatic in functional languages). Try working with lists a bit first -- it's not scary -- and some of the idioms and other code examples and general discussions about list processing and functional programming will make more sense.
Wait! There's more...
I didn't deal with the case where you have an irregular room layout. The assumption was always that everything was laid out in a nice even grid -- which is never the case when you get into the really interesting stuff (either because the map is irregular or because the topology is interesting).
The main difference here is that instead of simply carrying a list of [Room] where each Room value is a single value representing the Room's state, you would wrap the state value of the room in a tuple which also contained some extra data about that state such as its location or coordinates, name, etc. (You know, "metadata" -- which is such an overloaded, buzz-laden term today that I hate saying it.)
Let's say we need to maintain coordinates in a three-dimensional space in which the rooms reside, and that each room has a list of occupants. In the case of the array we would have divided the array by the dimensions of the layout. A 10*10*10 space would have an array index from 0 to 999, and each location would be found by an operation similar to
locate({X, Y, Z}) -> (1 * X) + (10 * Y) + (100 * Z).
and the value of each Room would be [Occupant1, Occupant2, ...].
It would be a real annoyance to define such an array and then mark arbitrarily large regions of it as "unusable" to give the impression of irregular layout, and then work around that trying to simulate a 3D universe.
Instead we could use a list (or something like a list) to represent the set of rooms, but the Room value would now be a tuple: Room = {{X, Y, Z}, [Occupants]}. You may have an additional element (or ten!), like the "name" of the room or some other status information or whatever, but the coordinates are the most certain real identity you're likely to get. To get the room status you would do the same as before, but mark what element you are looking at:
display(Rooms) ->
Display =
fun({ID, Occupants}) ->
io:format("ID ~p: Occupants ~p~n", [ID, Occupants])
end,
lists:foreach(Display, Rooms).
To do anything more interesting than printing sequentially, you could replace the internals of Display with a function that uses the coordinates to plot the room on a chart, check for empty or full lists of Occupants (use pattern matching, don't do it procedurally!), or whatever else you might dream up.
I have been playing with an implementation of lookandsay (OEIS A005150) in J. I have made two versions, both very simple, using while. type control structures. One recurs, the other loops. Because I am compulsive, I started running comparative timing on the versions.
Look and say is the sequence 1 11 21 1211 111221, that is, one one, two ones, etc.
For early elements of the list (up to around 20) the looping version wins, but only by a tiny amount. Timings around 30 cause the recursive version to win, by a large enough amount that the recursive version might be preferred if the stack space were adequate to support it. I looked at why, and I believe that it has to do with handling intermediate results. The 30th number in the sequence has 5808 digits. (32nd number, 9898 digits, 34th, 16774.)
When you are doing the problem with recursion, you can hold the intermediate results in the recursive call, and the unstacking at the end builds the results so that there is minimal handling of the results.
In the list version, you need a variable to hold the result. Every loop iteration causes you to need to add two elements to the result.
The problem, as I see it, is that I can't find any way in J to modify an extant array without completely reassigning it. So I am saying
try. o =. o,e,(0&{y) catch. o =. e,(0&{y) end.
to put an element into o where o might not have a value when we start. That may be notably slower than
o =. i.0
.
.
.
o =. (,o),e,(0&{y)
The point is that the result gets the wrong shape without the ravels, or so it seems. It is inheriting a shape from i.0 somehow.
But even functions like } (amend) don't modify a list, they return a list that has a modification made to it, and if you want to save the list you need to assign it. As the size of the assigned list increases (as you walk the number from the beginning to the end making the next number) the assignment seems to take more and more time. This assignment is really the only thing I can see that would make element 32, 9898 digits, take less time in the recursive version while element 20 (408 digits) takes less time in the loopy version.
The recursive version builds the return with:
e,(0&{y),(,lookandsay e }. y)
The above line is both the return line from the function and the recursion, so the whole return vector gets built at once as the call gets to the end of the string and everything unstacks.
In APL I thought that one could say something on the order of:
a[1+rho a] <- new element
But when I try this in NARS2000 I find that it causes an index error. I don't have access to any other APL, I might be remembering this idiom from APL Plus, I doubt it worked this way in APL\360 or APL\1130. I might be misremembering it completely.
I can find no way to do that in J. It might be that there is no way to do that, but the next thought is to pre-allocate an array that could hold results, and to change individual entries. I see no way to do that either - that is, J does not seem to support the APL idiom:
a<- iota 5
a[3] <- -1
Is this one of those side effect things that is disallowed because of language purity?
Does the interpreter recognize a=. a,foo or some of its variants as a thing that it should fastpath to a[>:#a]=.foo internally?
This is the recursive version, just for the heck of it. I have tried a bunch of different versions and I believe that the longer the program, the slower, and generally, the more complex, the slower. Generally, the program can be chained so that if you want the nth number you can do lookandsay^: n ] y. I have tried a number of optimizations, but the problem I have is that I can't tell what environment I am sending my output into. If I could tell that I was sending it to the next iteration of the program I would send it as an array of digits rather than as a big number.
I also suspect that if I could figure out how to make a tacit version of the code, it would run faster, based on my finding that when I add something to the code that should make it shorter, it runs longer.
lookandsay=: 3 : 0
if. 0 = # ,y do. return. end. NB. return on empty argument
if. 1 ~: ##$ y do. NB. convert rank 0 argument to list of digits
y =. (10&#.^:_1) x: y
f =. 1
assert. 1 = ##$ y NB. the converted argument must be rank 1
else.
NB. yw =. y
f =. 0
end.
NB. e should be a count of the digits that match the leading digit.
e=.+/*./\y=0&{y
if. f do.
o=. e,(0&{y),(,lookandsay e }. y)
assert. e = 0&{ o
10&#. x: o
return.
else.
e,(0&{y),(,lookandsay e }. y)
return.
end.
)
I was interested in the characteristics of the numbers produced. I found that if you start with a 1, the numerals never get higher than 3. If you start with a numeral higher than 3, it will survive as a singleton, and you can also get a 9 into the generated numbers by starting with something like 888888888, which will generate a number with one 9 in it and a single 8 at the end. But other than the singletons, no digit gets higher than 3.
Edit:
I did some more measuring. I had originally written the program to accept either a vector or a scalar, the idea being that internally I'd work with a vector. I had thought about passing a vector from one layer of code to the other, and I still might, using a left argument to control the code. When I pass the top level a vector the code runs enormously faster, so my guess is that most of the CPU is being eaten by converting very long numbers from vectors to digits. The recursive routine always passes down a vector when it recurs, which might be why it is almost as fast as the loop.
That does not change my question.
I have an answer for this which I can't post for three hours. I will post it then, please don't do a ton of research to answer it.
Assignments like
arr=. 'z' 15} arr
are executed in place. (See the JWiki article for other supported in-place operations.)
The interpreter determines that only a small portion of arr is updated and does not create an entire new list to reassign.
What happens in your case is not that the array is being reassigned, but that it grows many times in small increments, causing memory allocation and reallocation.
If you preallocate (by assigning it some large chunk of data), then you can modify it with } without too much penalty.
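A tiny sketch of that approach (the size and values here are arbitrary):

o =. 10000 $ 0       NB. preallocate the result vector once
o =. 7 (3}) o        NB. amend index 3; the  name =. x m} name  form is updated in place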
After I asked this question, to be honest, I lost track of this web site.
Yes, the answer is that the language has no form that means "update in place", but if you use two forms
x =: x , most anything
or
x =: most anything } x
then the interpreter recognizes those as special and does update in place unless it can't. There are a number of other specials recognized by the interpreter, like:
199(1000&|@^)199
That combined operation is modular exponentiation. It never calculates the whole exponentiation, as
199(1000&|^)199
would - that just ends as _ without the @.
So it is worth reading the article on specials. I will mark someone else's answer up.
The link that sverre provided above ( http://www.jsoftware.com/jwiki/Essays/In-Place%20Operations ) shows the various operations that support modifying an existing array rather than creating a new one. They include:
myarray=: myarray,'blah'
If you are interested in a tacit version of the lookandsay sequence see this submission to RosettaCode:
las=: ,@((# , {.);.1~ 1 , 2 ~:/\ ])&.(10x&#.inv)@]^:(1+i.@[)
5 las 1
11 21 1211 111221 312211