My goal is to understand how Eva shrinks the intervals for a variable. For example:
unsigned int nondet_uint(void);
int main()
{
unsigned int x=nondet_uint();
unsigned int y=nondet_uint();
//@ assert x >= 20 && x <= 30;
//@ assert y <= 60;
//@ assert x >= y;
return 0;
}
So, we have x=[20..30] and y=[0..60]. However, Eva shrinks y to [0..30], which is the part of its domain where the assertion can hold.
[eva] ====== VALUES COMPUTED ======
[eva:final-states] Values at end of function main:
x ∈ [20..30]
y ∈ [0..30]
__retres ∈ {0}
I tried some options of the Eva plugin, but none showed the steps for it. Could you point me to the method or publication describing how these values are computed?
Showing values during abstract interpretation
I tried some options for the Eva plugin, but none showed the steps for it.
The most efficient way to follow the evaluation is not via command-line options, but by adding Frama_C_show_each(exp) statements in the code. These are special function calls which, during the analysis, emit the values of the expression they contain. They are especially useful in loops, for instance to see when a widening is triggered, or what happens to the loop counter values.
Note that displaying all of the intermediary evaluation and reduction steps would be very verbose, even for very small programs. By default, this information is not exposed, since it is too dense and rarely useful.
For starters, try adding Frama_C_show_each statements, and use the Frama-C GUI to see the result. It allows focusing on any expression in the code and, in the Values tab, shows the values for the given expression, at the selected statement, for each callstack. You can also press Ctrl+E and type an arbitrary expression to have its value evaluated at that statement.
If you want more details about the values, their reductions, and the overall mechanism, see the section below.
Detailed information about values in Eva
Your question is related to the values used by the abstract interpretation engine in Eva.
Chapter 3 of the Eva User Manual describes the abstractions used by the engine, which are, succinctly:
sets of integers, which are maximally precise but limited to a number of elements (modified by option -eva-ilevel, which on Frama-C 22 is set to 8 by default);
integer intervals with periodicity information (also called modulo, or congruence), e.g. [2..42],2%10 being the set containing {2, 12, 22, 32, 42}. In the simple case, e.g. [2..42], all integers between 2 and 42 are included;
sets of addresses (for pointers), with offsets represented using the above values (sets of integers or intervals);
intervals of floating-point variables (unlike integers, there are no small sets of floating-point values).
Why is all of this necessary? Because without knowing some of these details, you'll have a hard time understanding why the analysis is sometimes precise, sometimes imprecise.
Note that the term reduction is used in the documentation, instead of shrinkage. So look for words related to reduce in the Eva manual when searching for clues.
For instance, in the following code:
int a = Frama_C_interval(-5, 5);
if (a != 0) {
//@ assert a != 0;
int b = 5 / a;
}
By default, the analysis will not be able to remove the 0 from the interval inside the if, because [-5..-1];[1..5] is not an interval, but a disjoint union of intervals. However, if the number of elements drops below -eva-ilevel, then the analysis will convert it into a small set, and get a precise result. Therefore, changing some analysis options will result in different ranges, and different results.
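To make the small-set/interval distinction concrete, here is a hypothetical Python sketch of the fallback described above (the names abstract and ILEVEL are made up; the real Eva abstraction is more involved):

```python
ILEVEL = 8  # mimics the default of -eva-ilevel on Frama-C 22

def abstract(values, ilevel=ILEVEL):
    """Keep an exact set if it is small enough, else fall back to the interval hull."""
    values = sorted(set(values))
    if len(values) <= ilevel:
        return ("set", values)
    return ("interval", (values[0], values[-1]))

# The values of `a` inside the `if`: [-5..5] with 0 removed, i.e. 10 values.
nonzero = [v for v in range(-5, 6) if v != 0]

print(abstract(nonzero))             # 10 > 8: interval hull, 0 sneaks back in
print(abstract(nonzero, ilevel=16))  # exact set, 0 stays excluded
```

With the default ilevel, the ten remaining values exceed the small-set limit, so the abstraction degrades to the hull [-5..5] and the division by zero alarm cannot be ruled out; raising the limit keeps the set exact.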
In some cases, you can force Eva to compute using disjunctions, for instance by adding the split ACSL annotation, e.g. //@ split a < b || a >= b;. But you still need to give the analysis some "fuel" for it to evaluate both branches separately. The easiest way to do so is to use -eva-precision N, with N being an integer between 0 and 11. The higher N is, the more splitting is allowed to happen, but the longer the analysis may take.
Note that, to ensure termination of the analysis, some mechanisms such as widening are used. Without it, a simple loop might require billions of evaluation steps to terminate. This mechanism may introduce extra values which lead to a less precise analysis.
Finally, there are also some abstract domains (option -eva-domains) which allow other kinds of values besides the default ones mentioned above. For instance, the sign domain allows splitting values between negative, zero and positive, and would avoid the imprecision in the above example. The Eva user manual contains examples of usage of each of the domains, indicating when they are useful.
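To tie this back to the original question, here is a toy Python sketch (not Eva's actual implementation; the function name reduce_by_ge is made up) of how backward propagation of the condition x >= y reduces an interval for y:

```python
def reduce_by_ge(x, y):
    """Given intervals x=(xlo, xhi) and y=(ylo, yhi) and the constraint x >= y,
    return the reduced intervals where the constraint can still hold."""
    xlo, xhi = x
    ylo, yhi = y
    # x >= y forces x >= ylo (reduce x from below) and y <= xhi (reduce y from above)
    new_x = (max(xlo, ylo), xhi)
    new_y = (ylo, min(yhi, xhi))
    return new_x, new_y

# The question's example: x in [20..30], y in [0..60], assert x >= y
x, y = reduce_by_ge((20, 30), (0, 60))
print(x, y)  # (20, 30) (0, 30)
```

Since y can never exceed the largest possible x, its upper bound is reduced from 60 to 30, matching the [0..30] that Eva reports.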
Related
Is there any (simple) random generation function that can work without variable assignment? Most functions I have read look like current = next(current). However, I currently have a restriction (from SQLite) that I cannot use any variables at all.
Is there a way to generate a number sequence (for example, from 1 to max) with only n (current number index in the sequence) and seed?
Currently I am using this:
cast(((1103515245 * Seed * ROWID + 12345) % 2147483648) / 2147483648.0 * Max as int) + 1
with Max being 47 and ROWID being n. However, for some seeds the repetition rate is too high (3 unique values out of 47).
In my requirements, repetition is OK as long as it's not too much (<50%). Is there any better function that meets my needs?
The question has sqlite tag but any language/pseudo-code is ok.
P.S.: I have tried linear congruential generators with some a/c/m triplets and Seed * ROWID as the seed, but it does not work well; it's even worse.
EDIT: I currently use this one, but I do not know where it's from. The rate looks better than mine:
((((Seed * ROWID) % 79) * 53) % "Max") + 1
I am not sure if you still have the same problem, but I might have a solution for you.
What you could do is use pseudo-random m-sequence generators based on shift registers, where you just have to take a high enough order of your primitive polynomial, and you don't really need to store any variables.
For more info you can check the wiki page.
What you would need to code is just the primitive-polynomial shifting equation, and I have checked in an online editor that it should be very easy to do. I think the easiest way for you would be to work in binary and use PRBS sequences; depending on how many elements you will have, you can choose your sequence length. For example, this is the implementation for a length of 2^15 = 32768 (PRBS15), with the primitive polynomial taken from the wiki page (there you can find primitive polynomials all the way to PRBS31, which would be 2^31 = 2.1475e+09).
Basically what you need to do is:
SELECT (((ROWID << 1) | (((ROWID >> 14) & 1) <> ((ROWID >> 13) & 1))) & 0x7fff)
(the feedback bit is the XOR of bits 14 and 13; since SQLite has no ^ operator, the <> test on the two masked bits computes it)
The beauty of this approach is that if you take a PRBS with a longer period than your largest ROWID value, you will get a unique pseudo-random index. Very simple. :)
If you need help with searching for primitive polynomials, you can see my github repo, which deals exactly with finding primitive polynomials and unique m-sequences. It is currently written in Matlab, but I plan to rewrite it in Python in the next few days.
Cheers!
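As a sanity check of the recurrence above, here is a quick sketch in Python (rather than SQL, where the bit twiddling is harder to inspect) that verifies the PRBS15 step visits every nonzero 15-bit state exactly once before repeating:

```python
def prbs15_next(state):
    """One step of the PRBS15 LFSR: x^15 + x^14 + 1 (taps at bits 14 and 13)."""
    feedback = ((state >> 14) ^ (state >> 13)) & 1
    return ((state << 1) | feedback) & 0x7fff

# Walk the sequence from an arbitrary nonzero seed until it cycles.
seen = set()
s = 1
while s not in seen:
    seen.add(s)
    s = prbs15_next(s)

assert len(seen) == 2**15 - 1  # full period: all 32767 nonzero states appear
```

Because the polynomial is primitive, the state sequence is a permutation of all nonzero 15-bit values, which is exactly why using it as an index gives no repeats within one period.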
What about using a good hash function and mapping the result into the [1..Max] range?
Along these lines (in pseudocode); sha1 was added to SQLite 3.17:
sha1(ROWID) % Max + 1
Or use any external C code for hash (murmur, chacha, ...) as shown here
A linear congruential generator with appropriately-chosen parameters (a, c, and modulus m) will be a full-period generator, such that it cycles pseudorandomly through every integer in its period before repeating. Although you may have tried this idea before, have you considered that m is equivalent to max in your case? For a list of parameter choices for such generators, see L'Ecuyer, P., "Tables of Linear Congruential Generators of Different Sizes and Good Lattice Structure", Mathematics of Computation 68(225), January 1999.
Note that there are some practical issues to implementing this in SQLite, especially if your SQLite version supports only 32-bit integers and 64-bit floating-point numbers (with 52 bits of precision). Namely, there may be a risk of—
overflow if an intermediate multiplication exceeds 32 bits for integers, and
precision loss if an intermediate multiplication results in a greater-than-52-bit number.
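As an illustrative sketch (the parameters below are made up for m = 47 and are not taken from L'Ecuyer's tables), the Hull-Dobell conditions give a full-period generator: c must be coprime to m, and a - 1 must be divisible by every prime factor of m (and by 4 if m is divisible by 4).

```python
M = 47   # the desired range size (Max in the question)
A = 48   # a - 1 = 47 is divisible by m's only prime factor, 47
C = 13   # any c coprime to 47 works

def lcg_value(n, seed):
    """n-th element of the full-period permutation of 1..M (use ROWID as n)."""
    x = seed % M
    for _ in range(n):
        x = (A * x + C) % M
    return x + 1  # shift 0..M-1 into 1..Max

values = [lcg_value(n, seed=5) for n in range(M)]
assert sorted(values) == list(range(1, M + 1))  # every value appears exactly once
```

For a prime m like 47 the condition forces a ≡ 1 (mod m), so the generator degenerates to a rotation; for a less trivial shuffle you would pick m larger than Max from the published tables and discard out-of-range outputs.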
Also, consider why you are creating the random number sequence:
Is the sequence intended to be unpredictable? In that case, a linear congruential generator alone is not enough, and you should generate unique identifiers by other means, such as by combining unique numbers with cryptographically random numbers.
Will the numbers generated this way be exposed in any way to end users? If not, there is no need to obfuscate them by "shuffling" them.
Also, depending on the SQLite API you're using (for your programming language), there may be a way to write a custom function to convert the seed and ROWID to a random unique number. The details, however, depend heavily on the specific SQLite API. Another answer shows an example for Perl.
I read the article below about constant propagation in HotSpot through lattices:
http://www.cliffc.org/blog/2012/02/27/too-much-theory-part-2/
And it described that " (top meet -1) == -1 == (-1 meet top), so this example works. So does: (1 meet 2,3) == bottom == (2,3 meet 1). So does: (0 meet 0,1) == 0,1 == (0,1 meet 0)"
However, I cannot understand why (top meet -1) == -1 and (-1 meet top) == -1. Also, why do (1 meet 2,3) and (2,3 meet 1) equal bottom? How is the meet calculated?
Jacky, from your question it seems that you don't get some basic concepts. Have you tried to read the linked Lattice wiki article?
I'm not sure if I can be better than collective mind of the Wiki but I'll try.
Let's start with a poset, aka "partially ordered set". Having a poset means that you have a set of some objects and some comparator <= that you can feed two objects to, and it will say which one is less (or rather "less or equal"). What differs a partially ordered set from a totally ordered one is that in the more usual totally ordered set, at least one of a <= b and a >= b holds true. "Partially ordered" means that both might be false at the same time, i.e. you have some elements that you can't compare at all.
Now lattice is a structure over poset (and potentially not every poset can be converted to a lattice). To define a lattice you need to define two methods meet and join. meet is a function from a pair of elements of the poset to an element of the poset such that (I will use meet(a, b) syntax instead of a meet b as it seems to be more friendly for Java-developers):
For every pair of elements a and b there is an element inf = meet(a,b) i.e. meet is defined for every pair of elements
For every pair of elements a and b meet(a,b) <= a and meet(a,b) <= b
For every pair of elements a and b, if inf = meet(a,b), there is no other element c in the set such that c <= a AND c <= b AND NOT c <= inf, i.e. meet(a,b) defines the greatest common lower bound (or more technically the infimum), and such an element is unique.
The same goes for join, but join finds the "maximum" of two elements, or more technically the supremum.
So now let's go back to the example you referenced. The poset in that example consists of 4 types, or rather layers, of elements:
Top - an artificial element added to poset such that it is greater than any other element
Single integers
Pairs of neighbor integers (range) such as "[0, 1]" (here unlike the author I will use "[" and "]" to define ranges to not confuse with application of meet)
Bottom - an artificial element added to poset such that it is less than any other element
Note that all elements in a single layer are not comparable(!) but all elements in any higher layer are greater than all elements in any lower layer. So no, 1 is not less than 2 under that poset structure, but [1,2] is less than both 1 and 2.
More precisely: Top is greater than anything; Bottom is less than anything; and a range [x,y] is comparable with a raw number z if and only if z lies inside the range, in which case the range is less; otherwise they are not comparable.
You may notice that the structure of the poset "induces" corresponding lattice. So given such structure it is easy to understand how to define the meet function to satisfy all the requirements:
meet(Top, a) = meet(a, Top) = a for any a
meet(Bottom, a) = meet(a, Bottom) = Bottom for any a
meet(x, y) where both x and y are integers (i.e. for layer #2) is either:
Just x if x = y
Range [x, y] if x + 1 = y
Range [y, x] if y + 1 = x
Bottom otherwise
(I'm not sure if this is the right definition; it might always be the range [min(x,y), max(x,y)] unless x = y. It is not clear from the examples, but it is not very important.)
meet([x,y], z) = meet(z, [x,y]) where x, y, and z are integers i.e. meet of an integer (layer #2) and range (layer #3) is:
Range [x, y] if x = z or y = z (in other words if [x,y] < z)
Bottom otherwise
So the meet of a range and an integer is almost always Bottom, except in the most trivial cases
meet(a, b) where both a and b are ranges i.e. meet of two ranges (layer #3) is:
Range a if a = b
Bottom otherwise
So the meet of two ranges is also Bottom, except in the most trivial cases
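The definition above is compact enough to sketch directly. Here is a small Python rendering (sentinel strings for Top/Bottom and tuples for ranges are my own choices, not the author's), checked against the blog's three examples:

```python
TOP, BOTTOM = "top", "bottom"

def meet(a, b):
    """Meet on the layered lattice: Top > integers > neighbor ranges > Bottom."""
    if a == TOP: return b
    if b == TOP: return a
    if a == BOTTOM or b == BOTTOM: return BOTTOM
    a_range, b_range = isinstance(a, tuple), isinstance(b, tuple)
    if not a_range and not b_range:        # two integers
        if a == b: return a
        if abs(a - b) == 1: return (min(a, b), max(a, b))
        return BOTTOM
    if a_range and b_range:                # two ranges
        return a if a == b else BOTTOM
    if a_range:                            # normalize: a = integer, b = range
        a, b = b, a
    return b if a in b else BOTTOM         # integer inside range -> the range

# The blog's examples:
assert meet(TOP, -1) == -1 == meet(-1, TOP)
assert meet(1, (2, 3)) == BOTTOM == meet((2, 3), 1)
assert meet(0, (0, 1)) == (0, 1) == meet((0, 1), 0)
```

Each case falls straight out of the layering: Top is neutral, Bottom is absorbing, and everything else depends on whether the two elements are comparable.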
What that part of the example is actually about is "inducing" the lattice from the structure and verifying that most of the desirable features hold (except for symmetry, which is added in the next example).
Hope this helps
Update (answers to comments)
It is hard to answer "why". This is because the author built his poset that way (probably because that way will be useful later). I think you are confused because the set of natural numbers has a "natural" (pun not intended) sort order that we are all used to. But there is nothing that could prohibit me from taking the same set (i.e. the same objects = all natural numbers) and defining some other sorting order. Are you familiar with the java.util.Comparator interface? Using that interface you can specify any sorting rule for the Integer type, such as "all even numbers are greater than all odd ones, and inside the even or odd classes the usual comparison rule applies", and you can use such a Comparator to sort a collection if for some reason such a sorting order makes sense for your task. This is the same case: for the author's task it makes sense to define an alternative (custom) sorting order. Moreover, he wants to make it only a partial order (which is impossible with Comparator). And the way he defines his order is the way I described.
Also, is it possible to compare [1,2] with 0 or 3?
Yes, you can compare and the answer directly follows from "all elements in single layer are not comparable(!) but all elements in any higher layer are greater than all elements in any lower layer": any number such as 0, 3, 42 or Integer.MAX_VALUE from layer #2 is greater than any range (layer #3) including the [1,2] range.
After some more thinking about it, my original answer was wrong. To satisfy author's goal range [1,2] should be not comparable with 0 or 3. So the answer is No. Actually my specification of the meet is correct but my description of sorting order is wrong.
Also, the explanations for top and bottom are different from yours. The original author explained that "bottom" == "we don't know what values this might take on", and "top" == "we can pick any value we like". I have no idea if both explanations of top and bottom actually refer to the same thing.
Here you mix up how the author defines top and bottom as a part of a mathematical structure called "lattice" and how he uses them for his practical task.
What this article is about is that there is an algorithm that analyses code for optimization based on "analysis of constants", and the algorithm is built upon the lattice of the described structure. The algorithm is based on processing different objects of the defined poset and involves finding the meet of them multiple times. What the quoted answer describes is how to interpret the final value that the algorithm produces, rather than how those values are defined.
AFAIU the basic idea behind the algorithm is the following: we have some variable and we see a few places where a value is assigned to it. For various optimizations it is good to know what the possible range of values is that the variable can take, without running the actual code with all possible inputs. So the suggested algorithm is based on a simple idea: if we have two (possibly conditional) assignments to the variable, and in the first we know that values are in the range [L1, R1] and in the second values are in the range [L2, R2], we can be sure that the value is now in the range [min(L1, L2), max(R1, R2)] (and this is effectively how meet is defined on that lattice). So now we can analyze all assignments in a function and assign to each a range of possible values. Note that this structure of numbers and unlimited ranges also forms a lattice, which the author describes in the first article (http://www.cliffc.org/blog/2012/02/12/too-much-theory/).
Note that Top is effectively impossible in Java because Java provides some guarantees, but in C/C++, as the author mentions, we can have a variable that is not assigned at all, and in that case the C language standard allows the compiler to treat the variable as having any value of the compiler's choice, i.e. the compiler might assume whatever is most useful for optimization; this is what Top stands for. On the other hand, if some value comes in as an argument to a method, it is Bottom, because it can be any value without any control by the compiler, i.e. the compiler cannot assume anything about the value.
In the second article the author points out that although the lattice from the first article is good theoretically, in practice it can be very inefficient computationally. Thus, to simplify computations, he reduces his lattice to a much simpler one, but the general theory stays the same: we assign ranges to the variables at each assignment so that later we can analyze the code and optimize it. And when we have finished computing all the ranges, the interpretation for optimization, assuming the analyzed line is the if in:
if (variable > 0) {
block#1
}
else {
block#2
}
is the following:
Top - if the line of code might be optimized assuming the variable has some specific value, compiler is free to do that optimization. So in the example compiler is free to eliminate that branch and decide that code will always go to block#1 and remove block#2 altogether OR decide that code will always go to block#2 and remove block#1 whichever alternative seems better to the compiler.
x - i.e. some constant value x - if the line of code might be optimized assuming the variable has value exactly x, compiler is free to do that optimization. So in the example compiler can evaluate x > 0 with that constant and leave only the code branch that corresponds to the calculated boolean value removing the other branch.
[x, y] - i.e. a range from x to y. If the line of code might be optimized assuming the variable has a value between x and y, the compiler is free to do that optimization. So in the example: if x > 0 (and thus y > 0), the compiler can remove block#2; if y <= 0 (and thus x <= 0), the compiler can remove block#1; if x <= 0 and y > 0, the compiler can't optimize that code
Bottom - compiler can't optimize that code.
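The four interpretations above can be condensed into a small decision function. This is only a sketch of the reading given here (the name optimize_branch and the string results are mine, not from the article):

```python
def optimize_branch(value):
    """Decide what the compiler may do with `if (variable > 0)` given the
    lattice value computed for `variable`."""
    if value == "top":            # compiler may assume any convenient value
        return "keep either block"
    if value == "bottom":         # no assumption possible: keep both blocks
        return "no optimization"
    if isinstance(value, int):    # single constant x: evaluate x > 0 statically
        return "keep block#1" if value > 0 else "keep block#2"
    x, y = value                  # range [x, y]
    if x > 0:
        return "keep block#1"     # whole range is positive
    if y <= 0:
        return "keep block#2"     # whole range is non-positive
    return "no optimization"      # range straddles zero

print(optimize_branch((1, 9)))    # keep block#1
print(optimize_branch((-1, 2)))   # no optimization
```

The function mirrors the list exactly: only Bottom (or a zero-straddling range) forces the compiler to keep both branches.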
I am using this piece of code and a stack overflow is triggered; if I use Extlib's Hashtbl the error does not occur. Any hints on using the specialized Hashtbl without a stack overflow?
module ColorIdxHash = Hashtbl.Make(
struct
type t = Img_types.rgb_t
let equal = (==)
let hash = Hashtbl.hash
end
)
(* .. *)
let (ctable: int ColorIdxHash.t) = ColorIdxHash.create 256 in
for x = 0 to width -1 do
for y = 0 to height -1 do
let c = Img.get img x y in
let rgb = Color.rgb_of_color c in
if not (ColorIdxHash.mem ctable rgb) then ColorIdxHash.add ctable rgb (ColorIdxHash.length ctable)
done
done;
(* .. *)
The backtrace points to hashtbl.ml:
Fatal error: exception Stack_overflow Raised at file "hashtbl.ml",
line 54, characters 16-40 Called from file "img/write_bmp.ml", line
150, characters 52-108 ...
Any hints?
Well, you're using physical equality (==) to compare the colors in your hash table. If the colors are structured values (I can't tell from this code), none of them will be physically equal to each other. If all the colors are distinct objects, they will all go into the table, which could really be quite a large number of objects. On the other hand, the hash function is going to be based on the actual color R,G,B values, so there may well be a large number of duplicates. This will mean that your hash buckets will have very long chains. Perhaps some internal function isn't tail recursive, and so is overflowing the stack.
Normally the length of the longest chain will be 2 or 3, so it wouldn't be surprising that this error doesn't come up often.
Looking at my copy of hashtbl.ml (OCaml 3.12.1), I don't see anything non-tail-recursive on line 54. So my guess might be wrong. On line 54 a new internal array is allocated for the hash table. So another idea is just that your hashtable is just getting too big (perhaps due to the unwanted duplicates).
One thing to try is to use structural equality (=) and see if the problem goes away.
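The diagnosis above can be demonstrated outside OCaml. Here is a Python analogy (my own construction, not the questioner's code): a hash table that hashes keys by value but compares them by identity, like (==). Every freshly built equal-valued key then lands in the same bucket without ever matching an existing entry, so one chain absorbs all insertions:

```python
class IdentityHashtbl:
    """Buckets chosen by value hash, but key comparison by identity (`is`),
    mimicking OCaml's physical equality (==)."""
    def __init__(self):
        self.buckets = {}

    def add(self, key, value):
        chain = self.buckets.setdefault(hash(key), [])
        for i, (k, _) in enumerate(chain):
            if k is key:                 # physical equality: almost never true
                chain[i] = (key, value)
                return
        chain.append((key, value))       # "new" key -> chain keeps growing

table = IdentityHashtbl()
for i in range(1000):
    rgb = tuple([200, 100, 50])          # fresh object each time, same value
    table.add(rgb, i)

longest = max(len(c) for c in table.buckets.values())
print(longest)  # 1000: one bucket holds every insertion
```

A single 1000-element chain is exactly the degenerate behavior that can blow the stack (or at least the table's size) in the OCaml version; switching to structural equality collapses it to one entry.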
One reason you may have non-termination or stack overflows is if your type contains cyclic values. (==) will terminate on cyclic values (while (=) may not), but Hashtbl.hash is probably not cycle-safe. So if you manipulate cyclic values of type Img_types.rgb_t, you have to devise your own cycle-safe hash function -- typically, calling Hashtbl.hash on only one of the non-cyclic subfields/subcomponents of your values.
I've already been bitten by precisely this issue in the past. Not a fun bug to track down.
Hey there,
I have a multidimensional mathematical function, meaning that there is an index, passed to the C++ function, that selects which single mathematical sub-function to evaluate. E.g. let's say I have a mathematical function like this:
f = Vector(x^2*y^2 / y^2 / x^2*z^2)
I would implement it like that:
double myFunc(int function_index)
{
switch(function_index)
{
case 1:
return PNT[0]*PNT[0]*PNT[1]*PNT[1];
case 2:
return PNT[1]*PNT[1];
case 3:
return PNT[2]*PNT[2]*PNT[1]*PNT[1];
}
}
where PNT is defined globally as double PNT[ NUM_COORDINATES ]. Now I want to implement the derivatives of each function for each coordinate, thus generating the derivative matrix (columns = coordinates; rows = single functions). I already wrote my kernel, which works so far and which calls myFunc().
The problem is: to calculate the derivative of the mathematical sub-function i with respect to coordinate j, in sequential mode (e.g. on CPUs) I would use the following code (simplified, because usually you would decrease h until you reach a certain precision for your derivative):
f0 = myFunc(i);
PNT[ j ] += h;
derivative = (myFunc(i)-f0)/h;
PNT[ j ] -= h;
now, as I want to do this on the GPU in parallel, the problem comes up: what to do with PNT? As I have to increase certain coordinates by h, calculate the value and then decrease them again, a question arises: how to do it without 'disturbing' the other threads? I can't modify PNT, because other threads need the 'original' point to modify their own coordinate.
The second idea I had was to save one modified point for each thread, but I discarded this idea quite fast, because with some thousand threads in parallel this is quite bad and probably slow (and perhaps not realizable at all because of memory limits).
'FINAL' SOLUTION
So what I currently do is the following, which adds the value 'add' at runtime (without storing it anywhere) via a preprocessor macro to the coordinate identified by coordinate_index.
#define X(n) ((coordinate_index == n) ? (PNT[n]+add) : PNT[n])
__device__ double myFunc(int function_index, int coordinate_index, double add)
{
//*// Example: f[i] = x[i]^3
return (X(function_index)*X(function_index)*X(function_index));
// */
}
That works quite nicely and fast. When using a derivative matrix with 10000 functions and 10000 coordinates, it takes just about 0.5 seconds. PNT is defined either globally or in constant memory as __constant__ double PNT[ NUM_COORDINATES ];, depending on the preprocessor variable USE_CONST.
The line return (X(function_index)*X(function_index)*X(function_index)); is just an example, where every sub-function follows the same scheme, mathematically speaking:
f = Vector(x0^3 / x1^3 / ... / xN^3)
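The X(n) trick can be rendered in plain Python (not CUDA; the names my_func and derivative are mine) to show why it is thread-safe: the shared array is only ever read, and the shifted coordinate is computed on the fly per call:

```python
PNT = [1.0, 2.0, 3.0]  # shared point; never written during the computation

def X(n, coordinate_index, add):
    """Read coordinate n, shifted by `add` only if it is the one being varied."""
    return PNT[n] + add if n == coordinate_index else PNT[n]

def my_func(i, coordinate_index=-1, add=0.0):
    """f[i] = x[i]^3, matching the example above."""
    x = X(i, coordinate_index, add)
    return x * x * x

def derivative(i, j, h=1e-6):
    """Forward difference of sub-function i with respect to coordinate j."""
    return (my_func(i, j, h) - my_func(i)) / h

# d(x1^3)/dx1 at x1 = 2.0 is 3 * 2.0^2 = 12
print(derivative(1, 1))
```

Since no call mutates PNT, any number of "threads" can evaluate different (i, j) pairs concurrently without seeing each other's perturbations; this is exactly what the preprocessor macro achieves on the device.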
NOW THE BIG PROBLEM ARISES:
myFunc is a mathematical function which the user should be able to implement as he likes to. E.g. he could also implement the following mathematical function:
f = Vector(x0^2*x1^2*...*xN^2 / x0^2*x1^2*...*xN^2 / ... / x0^2*x1^2*...*xN^2)
thus every function looks the same. You as a programmer should code only once, independent of the implemented mathematical function. So when the above function is implemented in C++, it looks like the following:
__device__ double myFunc(int function_index, int coordinate_index, double add)
{
double ret = 1.0;
for(int i = 0; i < NUM_COORDINATES; i++)
ret *= X(i)*X(i);
return ret;
}
And now the memory accesses are very 'weird' and bad for performance, because each thread needs access to each element of PNT twice. Surely, in such a case where each function looks the same, I could rewrite the complete algorithm surrounding the calls to myFunc, but as I already stated: I don't want to code depending on the user-implemented function myFunc...
Could anybody come up with an idea how to solve this problem??
Thanks!
Rewinding back to the beginning and starting with a clean sheet, it seems you want to be able to do two things:
compute an arbitrary scalar-valued function over an input array
approximate the partial derivative of an arbitrary scalar-valued function over the input array, using first-order accurate finite differencing
While the function is scalar valued and arbitrary, it seems that there are, in fact, two clear forms which this function can take:
A scalar valued function with scalar arguments
A scalar valued function with vector arguments
You appear to have started with the first type of function and have put together code to deal with computing both the function and the approximate derivative, and are now wrestling with the problem of how to deal with the second case using the same code.
If this is a reasonable summary of the problem, then please indicate so in a comment and I will continue to expand it with some code samples and concepts. If it isn't, I will delete it in a few days.
In comments, I have been trying to suggest that conflating the first type of function with the second is not a good approach. The requirements for correctness in parallel execution, and the best way of extracting parallelism and performance on the GPU are very different. You would be better served by treating both types of functions separately in two different code frameworks with different usage models. When a given mathematical expression needs to be implemented, the "user" should make a basic classification as to whether that expression is like the model of the first type of function, or the second. The act of classification is what drives algorithmic selection in your code. This type of "classification by algorithm" is almost universal in well designed libraries - you can find it in C++ template libraries like Boost and the STL, and you can find it in legacy Fortran codes like the BLAS.
What is the best way to find out whether two number ranges intersect?
My number range is 3023-7430, now I want to test which of the following number ranges intersect with it: <3000, 3000-6000, 6000-8000, 8000-10000, >10000. The answer should be 3000-6000 and 6000-8000.
What's the nice, efficient mathematical way to do this in any programming language?
Just a pseudo code guess:
Set<Range> determineIntersectedRanges(Range range, Set<Range> setofRangesToTest)
{
Set<Range> results;
foreach (rangeToTest in setofRangesToTest)
do
if (rangeToTest.end < range.start) continue; // skip this one, it's below our range
if (rangeToTest.start > range.end) continue; // skip this one, it's above our range
results.add(rangeToTest);
done
return results;
}
I would make a Range class and give it a method boolean intersects(Range). Then you can do a
foreach(Range r : rangeset) { if (range.intersects(r)) res.add(r) }
or, if you use some Java 8 style functional programming for clarity:
rangeset.stream().filter(range::intersects).collect(Collectors.toSet())
The intersection itself is something like
this.start <= other.end && this.end >= other.start
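That predicate can be checked against the original question's data. A minimal sketch in Python (the class layout and the unbounded endpoints for the open-ended ranges are my own choices):

```python
class Range:
    def __init__(self, start, end):
        self.start, self.end = start, end

    def intersects(self, other):
        # Two ranges overlap iff neither lies entirely before the other.
        return self.start <= other.end and self.end >= other.start

target = Range(3023, 7430)
candidates = {
    "<3000": Range(float("-inf"), 2999),
    "3000-6000": Range(3000, 6000),
    "6000-8000": Range(6000, 8000),
    "8000-10000": Range(8000, 10000),
    ">10000": Range(10001, float("inf")),
}
hits = sorted(name for name, r in candidates.items() if target.intersects(r))
print(hits)  # ['3000-6000', '6000-8000']
```

This reproduces the expected answer from the question: only 3000-6000 and 6000-8000 intersect 3023-7430.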
This heavily depends on your ranges. A range can be big or small, and clustered or not clustered. If you have large, clustered ranges (think of "all positive 32-bit integers that can be divided by 2"), the simple approach with Range(lower, upper) will not succeed.
I guess I can say the following:
if you have little ranges (clustering or not does not matter here), consider bit vectors. These little critters are blazingly fast with respect to union, intersection and membership testing, even though iteration over all elements might take a while, depending on the size. Furthermore, because they use just a single bit for each element, they are pretty small, unless you throw huge ranges at them.
if you have fewer, larger ranges, then a class Range as described by others will suffice. This class has the attributes lower and upper, and two ranges a and b intersect exactly when a.lower <= b.upper and b.lower <= a.upper. Union and intersection can be implemented in constant time for single ranges; for composite ranges, the time grows with the number of sub-ranges (thus you do not want too many little ranges).
If you have a huge space where your numbers can be, and the ranges are distributed in a nasty fashion, you should take a look at binary decision diagrams (BDDs). These nifty diagrams have two terminal nodes, True and False, and decision nodes for each bit of the input. A decision node has a bit it looks at and two successor graph nodes -- one for "bit is one" and one for "bit is zero". Given these conditions, you can encode large ranges in tiny space. All positive integers, for arbitrarily large numbers, can be encoded in 3 nodes in the graph -- basically a single decision node for the least significant bit which goes to False on 1 and to True on 0.
Intersection and union are pretty elegant recursive algorithms; for example, the intersection basically takes two corresponding nodes in each BDD, traverses the 1-edge until some result pops up, and checks: if one of the results is the False terminal, create a 1-branch to the False terminal in the result BDD; if both are the True terminal, create a 1-branch to the True terminal in the result BDD; if it is something else, create a 1-branch to this something-else in the result BDD. After that, some minimization kicks in (if the 0- and the 1-branch of a node go to the same following BDD/terminal, remove it and pull the incoming transitions to the target) and you are golden. We even went further than that; we worked on simulating addition of sets of integers on BDDs in order to enhance value prediction in order to optimize conditions.
These considerations imply that your operations are bounded by the number of bits in your number range, that is, by log_2(MAX_NUMBER). Just think of it: you can intersect arbitrary sets of 64-bit integers in almost constant time.
More information can be found, for example, in Wikipedia and the referenced papers.
Further, if false positives are bearable and you need only an existence check, you can look at Bloom filters. Bloom filters use a vector of hashes to check whether an element is contained in the represented set. Intersection and union are constant time. The major problem here is that you get an increasing false-positive rate if you fill up the Bloom filter too much.
Information, again, in the Wikipedia, for example.
Ah, set representation is a fun field. :)
In Python:
class nrange(object):
    def __init__(self, lower=None, upper=None):
        self.lower = lower
        self.upper = upper

    def intersection(self, aRange):
        if self.upper < aRange.lower or aRange.upper < self.lower:
            return None
        else:
            return nrange(max(self.lower, aRange.lower),
                          min(self.upper, aRange.upper))
If you're using Java, Commons Lang Range has an overlapsRange(Range range) method.