slevel option in value analysis of Frama-C

I do not quite understand the slevel option in Frama-C. Could anyone please explain more about how the value of slevel relates to user assertions (e.g. //@ assert ...;) or states, and what factors cause the number of states to go up?
For example, I have a program which does not contain loops, but contains if..else branches and goto statements, and there are also user assertions at some points of the program, such as:
...
a = Frama_C_unsigned_int_interval(0, 100);
//@ assert a == 0 || a == 1 || a == 2 || a == 3 || ... || a == 100;
...
b = Frama_C_unsigned_int_interval(a, 200);
//@ assert b == 0 || b == 1 || b == 2 || b == 3 || ... || b == 200;
...
In the output from the analysis (in which I set slevel to ~100000), there are messages like: Semantic level unrolling superposing up to 100 states, and this message may repeat several times with larger numbers of states.
Could anyone please explain how this semantic level unrolling is computed, and how I can obtain the optimal slevel for the analysis? Thanks.

Value analysis performs an abstract execution of the program. Instead of testing the program with arbitrary values, it executes it with abstract values encompassing all possible concrete executions. For instance, an abstract state associates x with the interval [0;10]. When arriving at an if statement, for instance if (x<=5) y = 1; else y=2;, the abstract state is split according to the condition: the then branch will be executed with the interval [0;5] and the else branch with the interval [6;10]. Conversely, at the end of the whole if, you have to join the two abstract states. This means that we associate to each variable the smallest interval that contains all possible values coming from both parent states (or more than two, in the case of a switch for instance).
For instance, after the if, we have x in [0;10] and y in [1;2]. In addition, note that Value sees a normalized version of the C code, in which an if always has two branches ({ if (c) { s } } in the source is seen by Value as if (c) { s } else { }).
Setting -slevel to N means that Value will not join states as above, but instead propagate up to N separate states. In our example, this means that we now have two states, one in which x is in [0;5] and y==1 and one in which x is in [6;10] and y==2. This allows for more precision (for instance it rules out states such as x==0 && y == 2), but at the expense of computation time and memory consumption. In addition, the number of possible distinct states grows exponentially with the number of points of choice in your program. Indeed, if you have a second, unrelated, if, both states coming from the first one will be split, resulting in 4 states afterwards (assuming N>=4). Value gives you an indication of the number of distinct states that it is using so that you can see in the log where the slevel is spent, and possibly adjust it to meet a better precision/computation time ratio.
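As an illustration (a minimal sketch of my own, using the Frama_C_interval builtin from Frama-C's __fc_builtin.h; this example is not part of the original question):

#include "__fc_builtin.h"

void Frama_C_show_each(int, int, int);

int main(void) {
    int x = Frama_C_interval(0, 10); /* abstract value [0..10] */
    int c = Frama_C_interval(0, 1);  /* a second, unrelated choice */
    int y, z;
    if (x <= 5) y = 1; else y = 2;   /* the state splits in two */
    if (c)      z = 1; else z = 2;   /* each state splits again: up to 4 */
    /* With -slevel 4, Value propagates the four combinations separately,
       so a state such as y == 1 && x > 5 is ruled out; with -slevel 0,
       after the joins it only knows x in [0..10], y in [1..2], z in [1..2]. */
    Frama_C_show_each(x, y, z);
    return 0;
}

Running the analysis with increasing -slevel values on a program like this is an easy way to see the "superposing up to N states" messages from your log appear and grow.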
Finally, ifs are not the only constructs that separate states. Any branching statement, in particular loops, will result in a state separation. Similarly, ACSL annotations expressed as disjunctions, such as the ones shown in your question, have the same effect: you end up with one state per element of the disjunction, provided you have enough slevel.

Related

Eva method to compute intervals [frama-c]

My goal is to understand how Eva shrinks the intervals for a variable. For example:
unsigned int nondet_uint(void);

int main()
{
    unsigned int x = nondet_uint();
    unsigned int y = nondet_uint();
    //@ assert x >= 20 && x <= 30;
    //@ assert y <= 60;
    //@ assert x >= y;
    return 0;
}
So we have x = [20,30] and y = [0,60]. However, Eva shrinks y to [0,30], which is the only part of the domain where the assertion x >= y can hold.
[eva] ====== VALUES COMPUTED ======
[eva:final-states] Values at end of function main:
  x ∈ [20..30]
  y ∈ [0..30]
  __retres ∈ {0}
I tried some options for the Eva plugin, but none showed the steps for it. Could you point me to the method, or a publication, describing how these values are computed?
Showing values during abstract interpretation
I tried some options for the Eva plugin, but none showed the steps for it.
The most efficient way to follow the evaluation is not via command-line options, but by adding Frama_C_show_each(exp) statements in the code. These are special function calls which, during the analysis, emit the values of the expression contained in them. They are especially useful in loops, for instance to see when a widening is triggered, or what happens to the loop counter values.
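For example (a minimal sketch of my own; the _i suffix is arbitrary and merely labels the log lines):

void Frama_C_show_each_i(int);

int main(void) {
    int i = 0;
    while (i < 1000) {
        /* Emits the abstract value of i each time the analysis passes
           here; you can watch the interval grow iteration by iteration
           and then jump when widening is triggered. */
        Frama_C_show_each_i(i);
        i++;
    }
    return 0;
}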
Note that displaying all of the intermediary evaluation and reduction steps would be very verbose, even for very small programs. By default, this information is not exposed, since it is too dense and rarely useful.
For starters, try adding Frama_C_show_each statements, and use the Frama-C GUI to see the result. It allows focusing on any expression in the code and, in the Values tab, shows the values for the given expression, at the selected statement, for each callstack. You can also press Ctrl+E and type an arbitrary expression to have its value evaluated at that statement.
If you want more details about the values, their reductions, and the overall mechanism, see the section below.
Detailed information about values in Eva
Your question is related to the values used by the abstract interpretation engine in Eva.
Chapter 3 of the Eva User Manual describes the abstractions used by the engine, which are, succinctly:
sets of integers, which are maximally precise but limited to a number of elements (modified by option -eva-ilevel, which on Frama-C 22 is set to 8 by default);
integer intervals with periodicity information (also called modulo, or congruence), e.g. [2..42],2%10 being the set containing {2, 12, 22, 32, 42}. In the simple case, e.g. [2..42], all integers between 2 and 42 are included;
sets of addresses (for pointers), with offsets represented using the above values (sets of integers or intervals);
intervals of floating-point variables (unlike integers, there are no small sets of floating-point values).
Why is all of this necessary? Because without knowing some of these details, you'll have a hard time understanding why the analysis is sometimes precise, sometimes imprecise.
Note that the term reduction is used in the documentation, instead of shrinkage. So look for words related to reduce in the Eva manual when searching for clues.
For instance, in the following code:
int a = Frama_C_interval(-5, 5);
if (a != 0) {
    //@ assert a != 0;
    int b = 5 / a;
}
By default, the analysis will not be able to remove the 0 from the interval inside the if, because [-5..-1];[1..5] is not an interval, but a disjoint union of intervals. However, if the number of elements drops below -eva-ilevel, then the analysis will convert it into a small set, and get a precise result. Therefore, changing some analysis options will result in different ranges, and different results.
In some cases, you can force Eva to compute using disjunctions, for instance by adding the split ACSL annotation, e.g. //@ split a < b || a >= b;. But you still need to give the analysis some "fuel" for it to evaluate both branches separately. The easiest way to do so is to use -eva-precision N, with N an integer between 0 and 11: the higher N is, the more splitting is allowed to happen, but the longer the analysis may take.
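For example, in the program from above (a sketch of my own; check the Eva manual for the exact split syntax accepted by your Frama-C version):

#include "__fc_builtin.h"

int main(void) {
    int a = Frama_C_interval(-5, 5);
    /* Split the state on the sign of a: one state with a in [-5..-1],
       one with a in [0..5]. In each state, 0 is now an endpoint of the
       interval, so the condition below can remove it exactly. */
    //@ split a < 0;
    if (a != 0) {
        int b = 5 / a;  /* no division-by-zero alarm, given enough fuel */
    }
    return 0;
}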
Note that, to ensure termination of the analysis, some mechanisms such as widening are used. Without it, a simple loop might require billions of evaluation steps to terminate. This mechanism may introduce extra values which lead to a less precise analysis.
Finally, there are also some abstract domains (option -eva-domains) which allow other kinds of values besides the default ones mentioned above. For instance, the sign domain allows splitting values between negative, zero and positive, and would avoid the imprecision in the above example. The Eva user manual contains examples of usage of each of the domains, indicating when they are useful.

In what order do we need to put weights on the scale?

I'm doing my homework in programming, and I don't know how to solve this problem:
We have a set of n weights, and we put them on a scale one by one until all weights are used. We also have a string of n letters "R" or "L" which tells which pan is heavier at that moment; the pans can never be in balance. No two weights have the same mass. Compute in what order we have to put the weights on the scale, and on which pan.
The goal is to find an order of putting the weights on the scale so that the input string is respected.
Input: a number 0 < n < 51, the number of weights, then the weights and the string.
Output: n lines, each with a weight and "R" or "L", the side where you put that weight. If there are many solutions, output any of them.
Example 1:
Input:
3
10 20 30
LRL
Output:
10 L
20 R
30 L
Example 2:
Input:
3
10 20 30
LLR
Output:
20 L
10 R
30 R
Example 3:
Input:
5
10 20 30 40 50
LLLLR
Output:
50 L
10 L
20 R
30 R
40 R
I already tried to solve it with recursion but was unsuccessful. Can someone please help me with this problem, or just give me hints on how to solve it?
Since you do not show any code of your own, I'll give you some ideas without code. If you need more help, show more of your work, and then I can show you Python code that solves your problem.
Your problem is suitable for backtracking. Wikipedia's definition of this algorithm is
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.
and
Backtracking can be applied only for problems which admit the concept of a "partial candidate solution" and a relatively quick test of whether it can possibly be completed to a valid solution.
Your problem satisfies those requirements. At each stage you need to choose one of the remaining weights and one of the two pans of the scale. When you place the chosen weight on the chosen pan, you determine if the corresponding letter from the input string is satisfied. If not, you reject the choice of weight and pan. If so, you continue by choosing another weight and pan.
Your overall routine first inputs and prepares the data. It then calls a recursive routine that chooses one weight and one pan at each level. Some of the information needed by each level could be put into mutable global variables, but it would be more clear if you pass all needed information as parameters. Each call to the recursive routine needs to pass:
the weights not yet used
the input L/R string not yet used
the current state of the weights on the pans, in a format that can easily be printed when finalized (perhaps an array of ordered pairs of a weight and a pan)
the current weight imbalance of the pans. This could be calculated from the previous parameter, but time is saved by passing it separately. This would be the total of the weights on the right pan minus the total of the weights on the left pan (or vice versa).
Your base case for the recursion is when the unused-weights and unused-letters are empty. You then have finished the search and can print the solution and quit the program. Otherwise you loop over all combinations of one of the unused weights and one of the pans. For each combination, calculate what the new imbalance would be if you placed that weight on that pan. If that new imbalance agrees with the corresponding letter, call the routine recursively with appropriately-modified parameters. If not, do nothing for this weight and pan.
You still have a few choices to make before coding, such as the data structure for the unused weights. Show me some of your own coding efforts then I'll give you my Python code.
Be aware that this could be slow for a large number of weights. For n weights and two pans, the total number of ways to place the weights on the pans is n! * 2**n (that is a factorial and an exponentiation). For n = 50 that is over 3e79, much too large to do. The backtracking avoids most groups of choices, since choices are rejected as soon as possible, but my algorithm could still be slow. There may be a better algorithm than backtracking, but I do not see it. Your problem seems to be designed to be handled by backtracking.
Now that you have shown more effort of your own, here is my un-optimized Python 3 code. This works for all the examples you gave, though I got a different valid solution for your third example.
def weights_on_pans():
    def solve(unused_weights, unused_tilts, placement, imbalance):
        """Place the weights on the scales using recursive
        backtracking. Return True if successful, False otherwise."""
        if not unused_weights:
            # Done: print the placement and note that we succeeded
            for weight, pan in placement:
                print(weight, 'L' if pan < 0 else 'R')
            return True  # success right now
        tilt, *later_tilts = unused_tilts
        for weight in unused_weights:
            for pan in (-1, 1):  # -1 means left, 1 means right
                new_imbalance = imbalance + pan * weight
                if new_imbalance * tilt > 0:  # both negative or both positive
                    # Continue searching since imbalance in proper direction
                    if solve(unused_weights - {weight},
                             later_tilts,
                             placement + [(weight, pan)],
                             new_imbalance):
                        return True  # success at a lower level
        return False  # not yet successful

    # Get the inputs from standard input. (This version has no validity checks)
    cnt_weights = int(input())
    weights = {int(item) for item in input().split()}
    letters = input()
    # Call the recursive routine with appropriate starting parameters.
    tilts = [(-1 if letter == 'L' else 1) for letter in letters]
    solve(weights, tilts, [], 0)

weights_on_pans()
The main way I can see to speed up that code is to avoid the O(n) operations in the call to solve in the inner loop. That means perhaps changing the data structure of unused_weights and changing how it, placement, and perhaps unused_tilts/later_tilts are modified to use O(1) operations. Those changes would complicate the code, which is why I did not do them.

Lattice puzzles in Constant Propagation for HotSpot

I read the article below about Constant Propagation for HotSpot through lattices.
http://www.cliffc.org/blog/2012/02/27/too-much-theory-part-2/
And it described that " (top meet -1) == -1 == (-1 meet top), so this example works. So does: (1 meet 2,3) == bottom == (2,3 meet 1). So does: (0 meet 0,1) == 0,1 == (0,1 meet 0)"
However, I cannot understand why (top meet -1) == -1 and (-1 meet top) == -1. Also, why do (1 meet 2,3) and (2,3 meet 1) equal bottom? How is the meet calculated?
Jacky, from your question it seems that you don't get some basic concepts. Have you tried to read the linked Lattice wiki article?
I'm not sure if I can do better than the collective mind of the Wiki, but I'll try.
Let's start with a poset, aka "partially ordered set". Having a poset means that you have a set of some objects and some comparator <= that you can feed two objects to, and it will say which one is less (or rather "less or equal"). What differs a partially ordered set from a totally ordered one is that in the more usual totally ordered set, at least one of a <= b and a >= b always holds. "Partially ordered" means that both might be false at the same time, i.e. you have some elements that you can't compare at all.
Now, a lattice is a structure over a poset (and potentially not every poset can be converted to a lattice). To define a lattice you need to define two methods, meet and join. meet is a function from a pair of elements of the poset to an element of the poset such that (I will use the meet(a, b) syntax instead of a meet b as it seems more friendly for Java developers):
For every pair of elements a and b there is an element inf = meet(a,b) i.e. meet is defined for every pair of elements
For every pair of elements a and b meet(a,b) <= a and meet(a,b) <= b
For every pair of elements a and b, if inf = meet(a,b), there is no other element c in the set such that c <= a AND c <= b AND NOT c <= inf, i.e. meet(a,b) defines the greatest common lower bound (or more technically, the infimum), and such an element is unique.
The same goes for join, but join finds the "maximum" of two elements, or more technically the supremum.
So now let's go back to the example you referenced. The poset in that example consists of 4 types, or rather layers, of elements:
Top - an artificial element added to poset such that it is greater than any other element
Single integers
Pairs of neighboring integers (ranges) such as [0, 1] (here, unlike the author, I will use "[" and "]" to denote ranges, to avoid confusion with the application of meet)
Bottom - an artificial element added to poset such that it is less than any other element
Note that all elements in a single layer are not comparable(!). So 1 is not less than 2 under that poset structure, but [1,2] is less than both 1 and 2. Top is greater than anything, Bottom is less than anything, and a range [x,y] is comparable with a raw number z if and only if z lies inside the range, in which case the range is less; otherwise they are not comparable.
You may notice that the structure of the poset "induces" the corresponding lattice. So given such a structure, it is easy to understand how to define the meet function so that it satisfies all the requirements (a small code sketch follows the list below):
meet(Top, a) = meet(a, Top) = a for any a
meet(Bottom, a) = meet(a, Bottom) = Bottom for any a
meet(x, y) where both x and y are integers (i.e. for layer #2) is either:
Just x if x = y
Range [x, y] if x + 1 = y
Range [y, x] if y + 1 = x
Bottom otherwise
(I'm not sure if this is the right definition; it might always be the range [min(x,y), max(x,y)] unless x = y. It is not clear from the examples, but it is not very important.)
meet([x,y], z) = meet(z, [x,y]) where x, y, and z are integers i.e. meet of an integer (layer #2) and range (layer #3) is:
Range [x, y] if x = z or y = z (in other words if [x,y] < z)
Bottom otherwise
So the meet of a range and an integer is almost always Bottom, except in the most trivial cases
meet(a, b) where both a and b are ranges i.e. meet of two ranges (layer #3) is:
Range a if a = b
Bottom otherwise
So the meet of two ranges is also Bottom, except in the most trivial case
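To make these rules concrete, here is a small C sketch of such a meet function (my own illustration, not code from the article):

typedef enum { TOP, INT, RANGE, BOTTOM } Kind;

typedef struct {
    Kind kind;
    int lo, hi;   /* INT uses lo only; RANGE uses lo and hi == lo + 1 */
} Elem;

static const Elem BOT = { BOTTOM, 0, 0 };

Elem meet(Elem a, Elem b) {
    if (a.kind == TOP) return b;                  /* meet(Top, a) = a */
    if (b.kind == TOP) return a;
    if (a.kind == BOTTOM || b.kind == BOTTOM) return BOT;
    if (a.kind == INT && b.kind == INT) {
        if (a.lo == b.lo) return a;               /* same constant */
        if (a.lo + 1 == b.lo) return (Elem){ RANGE, a.lo, b.lo };
        if (b.lo + 1 == a.lo) return (Elem){ RANGE, b.lo, a.lo };
        return BOT;                               /* not neighbors */
    }
    if (a.kind == RANGE && b.kind == INT)         /* range meets integer */
        return (b.lo == a.lo || b.lo == a.hi) ? a : BOT;
    if (a.kind == INT && b.kind == RANGE)
        return meet(b, a);
    return (a.lo == b.lo && a.hi == b.hi) ? a : BOT;  /* two ranges */
}

With these rules, meet(1, [2,3]) falls into the range-meets-integer case with 1 not being an endpoint, giving Bottom, which matches (1 meet 2,3) == bottom from the quote.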
That part of the example is actually about "inducing" the lattice from the structure and verifying that most of the desirable properties hold (except for symmetry, which is added in the next example).
Hope this helps
Update (answers to comments)
It is hard to answer "why". This is because the author built his poset that way (probably because that way will be useful later). I think you are confused because the set of natural numbers has a "natural" (pun not intended) sort order that we are all used to. But there is nothing prohibiting me from taking the same set (i.e. the same object = all natural numbers) and defining some other sorting order. Are you familiar with the java.util.Comparator interface? Using that interface you can specify any sorting rule for the Integer type, such as "all even numbers are greater than all odd ones, and inside the even or odd classes the 'usual' comparison rule applies", and you can use such a Comparator to sort a collection if such a sorting order makes sense for your task. This is the same case: for the author's task it makes sense to define an alternative (custom) sorting order. Moreover, he wants to make it only a partial order (which is impossible with a Comparator). And the way he defines his order is the way I described.
Also, is it possible to compare [1,2] with 0 or 3?
Yes, you can compare them, and the answer directly follows from "all elements in a single layer are not comparable(!) but all elements in any higher layer are greater than all elements in any lower layer": any number such as 0, 3, 42 or Integer.MAX_VALUE from layer #2 is greater than any range (layer #3), including the [1,2] range.
After some more thinking about it, my original answer was wrong. To satisfy the author's goal, the range [1,2] should not be comparable with 0 or 3. So the answer is no. My specification of the meet above is correct, but my description of the sorting order was wrong.
Also, the explanation for top and bottom is different from yours. The original author explained that "bottom == we don't know what values this might take on" and "top == we can pick any value we like". I have no idea if both your explanations of top and bottom actually refer to the same thing.
Here you mix up how the author defines top and bottom as part of a mathematical structure called a "lattice" and how he uses them for his practical task.
What this article is about is that there is an algorithm that analyzes code for optimization based on "analysis of constants", and the algorithm is built upon a lattice of the described structure. The algorithm is based on processing different objects of the defined poset and involves finding the meet of them multiple times. What the quoted answer describes is how to interpret the final value that the algorithm produces, rather than how those values are defined.
AFAIU the basic idea behind the algorithm is the following: we have some variable and we see a few places where a value is assigned to it. For various optimizations it is good to know the possible range of values that the variable can take, without running the actual code with all possible inputs. So the suggested algorithm is based on a simple idea: if we have two (possibly conditional) assignments to the variable, and in the first we know that values are in the range [L1, R1] and in the second values are in the range [L2, R2], we can be sure that the value is now in the range [min(L1, L2), max(R1, R2)] (and this is effectively how meet is defined on that lattice). So now we can analyze all assignments in a function and assign to each a range of possible values. Note that this structure of numbers and unlimited ranges also forms a lattice, which the author describes in the first article (http://www.cliffc.org/blog/2012/02/12/too-much-theory/).
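In code, that merging step might look like this (a toy sketch with illustrative names, not the article's implementation):

typedef struct { int lo, hi; } Range;

/* Combine the ranges of two assignments reaching the same variable:
   the smallest range covering both of them. */
Range merge(Range a, Range b) {
    Range r;
    r.lo = a.lo < b.lo ? a.lo : b.lo;   /* min(L1, L2) */
    r.hi = a.hi > b.hi ? a.hi : b.hi;   /* max(R1, R2) */
    return r;
}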
Note that Top is effectively impossible in Java because the language provides some guarantees, but in C/C++, as the author mentions, we can have a variable that is not assigned at all, and in such a case the C language standard allows the compiler to treat that variable as having any value of the compiler's choice, i.e. the compiler might assume whatever is most useful for optimization, and this is what Top stands for. On the other hand, if some value comes in as an argument to a method, it is Bottom because it can be any value without any control by the compiler, i.e. the compiler cannot assume anything about the value.
In the second article the author points out that although the lattice from the first article is good theoretically, in practice it can be very inefficient computationally. Thus, to simplify computations, he reduces his lattice to a much simpler one, but the general theory stays the same: we assign ranges to the variables at each assignment so that later we can analyze the code and optimize it. And when we have finished computing all the ranges, the interpretation for optimization, assuming the analyzed line is the if in:
if (variable > 0) {
    block#1
}
else {
    block#2
}
is the following:
Top - if the line of code might be optimized assuming the variable has some specific value, the compiler is free to do that optimization. So in the example the compiler is free to eliminate that branch and decide that the code will always go to block#1 and remove block#2 altogether, OR decide that the code will always go to block#2 and remove block#1, whichever alternative seems better to the compiler.
x - i.e. some constant value x - if the line of code might be optimized assuming the variable has value exactly x, the compiler is free to do that optimization. So in the example the compiler can evaluate x > 0 with that constant and leave only the code branch that corresponds to the calculated boolean value, removing the other branch.
[x, y] - i.e. a range from x to y - if the line of code might be optimized assuming the variable has a value between x and y, the compiler is free to do that optimization. So in the example: if x > 0 (and thus y > 0), the compiler can remove block#2; if y <= 0 (and thus x <= 0), the compiler can remove block#1; if x <= 0 and y > 0, the compiler can't optimize that code.
Bottom - the compiler can't optimize that code.

How can I bubble an "impossible" value up via a recursive algorithm?

I have a recursive algorithm with special cases, for example a path algorithm where I want to add 1 to the distance if the path is good, but return -1 if it hits a dead-end. This is problematic when solving maximization problems with a bunch of recursive calls.
Is there a better way to code the following:
def rec(n):
    if n == 1:
        return -1
    if n == 0:
        return 1
    val = rec(n - 2)
    if val == -1:
        return -1
    else:
        return val + 1
Therefore, rec(4) = 3 and rec(3) = -1.
In Python, not really. You could make it clearer in Python by returning None rather than -1; this has the advantage that erroneously adding to the invalid value will throw an exception.
A language that has a more rigorous type system and a good concept of 'maybe' or optional values makes it a snap. Say, Haskell:
rec 1 = Nothing
rec 0 = Just 1
rec n = fmap ((+) 1) $ rec (n - 2)
The fmap invocation means that it will add 1 to whatever is in the box if it is Just x, and return the invalid value (Nothing) unchanged. Of course, you can design your own more sophisticated type that allows for multiple error conditions or whatever and still obtain similarly simple results. This is pretty much just as easy in OCaml, F#, Standard ML, and Scala.
You can simulate this approach with None in Python by defining a helper function:
def mapMaybe(obj, f):
    if obj is None:
        return None
    else:
        return f(obj)
Then you can do:
return mapMaybe(val, lambda x: x + 1)
But I don't know that I would really recommend doing that.
Simulating a trick from Scala's book, it would also be possible to wrap all of this up in generator comprehensions (untested):
def maybe(x):
    if x is not None:
        yield x

def firstMaybe(it):
    try:
        return next(it)
    except StopIteration:
        return None
Then:
return firstMaybe(x + 1 for x in maybe(val))
But that's really non-standard, non-idiomatic Python.
A useful technique is to select a "no solution available" value such that processing it as though it represented a solution would still yield a "no solution available" value. If low numbers represent optimal solutions, for example, one could choose a value which is larger than any valid solution which would be of interest. If one is using integer types, one would have to make sure the "no solution available" value was small enough that operating on it as though it were a valid solution would not cause numerical overflow [e.g. if recursive calls always assume that the cost of a solution from some position will be one greater than the cost of a solution generated by the recursive call, then using values greater than 999,999,999 to represent "no solution available" should work; if code might regard the cost of the solution as being the sum of two other solutions, however, it may be necessary to choose a smaller value]. Alternatively, one might benefit from using floating-point types, since a value of "positive infinity" compares greater than any other value, and adding any positive or finite amount to it won't change that.
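A small C sketch of the idea (a toy analogue of the rec function above; the names and the cutoff value are mine, chosen per the assumptions just described):

#include <stdio.h>

/* Any value >= NO_SOLUTION means "no solution available". It is far
   above every real answer, yet leaves headroom so that the +1 added
   per recursion level cannot overflow for reasonable inputs. */
#define NO_SOLUTION 1000000000

/* Count steps of size 2 down to 0; landing on 1 is a dead end. */
int steps(int n) {
    if (n == 1) return NO_SOLUTION;   /* dead end */
    if (n == 0) return 0;             /* goal reached */
    /* No special-case test needed: adding 1 to the sentinel keeps
       it above NO_SOLUTION, so it still reads as "impossible". */
    return 1 + steps(n - 2);
}

int main(void) {
    int r = steps(3);
    if (r >= NO_SOLUTION) printf("no solution\n");
    else printf("%d steps\n", r);
    return 0;
}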

Find number range intersection

What is the best way to find out whether two number ranges intersect?
My number range is 3023-7430, now I want to test which of the following number ranges intersect with it: <3000, 3000-6000, 6000-8000, 8000-10000, >10000. The answer should be 3000-6000 and 6000-8000.
What's the nice, efficient mathematical way to do this in any programming language?
Just a pseudo code guess:
Set<Range> determineIntersectedRanges(Range range, Set<Range> setofRangesToTest)
{
    Set<Range> results;
    foreach (rangeToTest in setofRangesToTest)
    do
        if (rangeToTest.end < range.start) continue;  // skip this one, it's below our range
        if (rangeToTest.start > range.end) continue;  // skip this one, it's above our range
        results.add(rangeToTest);
    done
    return results;
}
I would make a Range class and give it a method boolean intersects(Range). Then you can do a
foreach(Range r : rangeset) { if (range.intersects(r)) res.add(r) }
or, if you use some Java 8 style functional programming for clarity:
rangeset.stream().filter(range::intersects).collect(Collectors.toSet())
The intersection itself is something like
this.start <= other.end && this.end >= other.start
This heavily depends on your ranges. A range can be big or small, and clustered or not clustered. If you have large, clustered ranges (think of "all positive 32-bit integers that can be divided by 2"), the simple approach with Range(lower, upper) will not succeed.
I guess I can say the following:
if you have little ranges (clustering or not clustering does not matter here), consider bitvectors. These little critters are blazing fast with respect to union, intersection and membership testing, even though iteration over all elements might take a while, depending on the size. Furthermore, because they just use a single bit for each element, they are pretty small, unless you throw huge ranges at them. (A small sketch follows at the end of this answer.)
if you have fewer, larger ranges, then a class Range as described by others will suffice. This class has the attributes lower and upper, and intersection(a,b) basically checks a.lower <= b.upper and b.lower <= a.upper. Union and intersection can be implemented in constant time for single ranges; for composite ranges, the time grows with the number of sub-ranges (thus you do not want too many little ranges)
If you have a huge space where your numbers can be, and the ranges are distributed in a nasty fashion, you should take a look at binary decision diagrams (BDDs). These nifty diagrams have two terminal nodes, True and False, and decision nodes for each bit of the input. A decision node has a bit it looks at and two following graph nodes -- one for "bit is one" and one for "bit is zero". Given these conditions, you can encode large ranges in tiny space. For example, the set of all even numbers, over arbitrarily large integers, can be encoded in 3 nodes in the graph -- basically a single decision node for the least significant bit which goes to False on 1 and to True on 0.
Intersection and union are pretty elegant recursive algorithms; for example, the intersection basically takes two corresponding nodes in each BDD, traverses the 1-edge until some result pops up, and checks: if one of the results is the False terminal, create a 1-branch to the False terminal in the result BDD. If both are the True terminal, create a 1-branch to the True terminal in the result BDD. If it is something else, create a 1-branch to this something else in the result BDD. After that, some minimization kicks in (if the 0- and the 1-branch of a node go to the same following BDD/terminal, remove it and pull the incoming transitions to the target) and you are golden. We even went further than that: we worked on simulating addition of sets of integers on BDDs in order to enhance value prediction and optimize conditions.
These considerations imply that your operations are bounded by the number of bits in your number range, that is, by log_2(MAX_NUMBER). Just think of it: you can intersect arbitrary sets of 64-bit integers in almost constant time.
More information can be for example in the Wikipedia and the referenced papers.
Further, if false positives are bearable and you only need an existence check, you can look at Bloom filters. Bloom filters use a vector of hashes to check if an element is contained in the represented set. Intersection and union are constant time. The major problem here is that you get an increasing false-positive rate if you fill up the Bloom filter too much.
Information, again, in the Wikipedia, for example.
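Coming back to the bitvector option above, here is a minimal C sketch for a small universe (illustrative only; real implementations use an array of words):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t BitSet;   /* universe limited to 0..63 here */

/* Build the set {lo, lo+1, ..., hi}. */
BitSet range_set(int lo, int hi) {
    BitSet s = 0;
    for (int i = lo; i <= hi; i++)
        s |= (BitSet)1 << i;
    return s;
}

int main(void) {
    BitSet a = range_set(3, 20);
    BitSet b = range_set(15, 40);
    BitSet both = a & b;   /* intersection is a single AND */
    printf("ranges %s\n", both ? "intersect" : "are disjoint");
    return 0;
}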
Hach, set representation is a fun field. :)
In Python:
class nrange(object):
    def __init__(self, lower=None, upper=None):
        self.lower = lower
        self.upper = upper

    def intersection(self, aRange):
        if self.upper < aRange.lower or aRange.upper < self.lower:
            return None
        else:
            return nrange(max(self.lower, aRange.lower),
                          min(self.upper, aRange.upper))
If you're using Java, Commons Lang Range has an overlapsRange(Range range) method.
