Can anyone please tell me the usage of the declarations shown below? I am a beginner in the Ada language. I have tried searching the internet, but what I found was not clear enough.
type Unsigned_4 is mod 2 ** 4;
for Unsigned_4'Size use 4;
Unsigned_4 is a "modular type" taking the values 0, 1, .. 14, 15, and wrapping round.
U : Unsigned_4;
begin
U := Unsigned_4'Last; -- 15
U := U + 1; -- 0
You only need 4 bits to implement the type, so it's OK to specify that as its Size (I think this may be simply a confirming spec, since the compiler clearly knows that already; if you were hoping to fit it into 3 bits and said for Unsigned_4'Size use 3; the compiler would tell you that you were wrong).
Most compilers will want to store values of the type in at least a byte, for efficient access. The minimum size comes into its own when you use the type in a packed record (pragma Pack).
The "is mod" is Ada's way of saying that this is a modular type. Modular types work a bit like unsigned types in C: They don't have negative values, and once you reach the largest representable value, if you add one you will get 0.
If you were to try the same with a normal (non-modular) integer in Ada, you'd get a Constraint_Error.
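To make the difference concrete, here is a minimal compilable sketch (the procedure name Wrap_Demo and the use of Ada.Text_IO are my own choices; the wraparound and the exception are exactly the behaviours described above):

with Ada.Text_IO; use Ada.Text_IO;

procedure Wrap_Demo is
   type Unsigned_4 is mod 2 ** 4;

   U : Unsigned_4 := Unsigned_4'Last;   -- 15
   I : Integer range 0 .. 15 := 15;     -- a non-modular integer for comparison
begin
   U := U + 1;                          -- wraps around to 0
   Put_Line ("Modular value after + 1:" & Unsigned_4'Image (U));

   I := I + 1;                          -- out of range: raises Constraint_Error
   Put_Line ("Never reached:" & Integer'Image (I));
exception
   when Constraint_Error =>
      Put_Line ("Constraint_Error from the non-modular integer");
end Wrap_Demo;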
I am a beginner trying to teach myself C and I had a problem the other day which I thought would be cool to try and solve with a short program. I found it a bit more difficult to solve than I initially thought. Basically the problem goes like this.
I want to be able to enter a single int value between 0..255 (never outside this range) into a function, and inside the function there is an array of 8 values (1, 2, 4, 8, 16, 32, 64, 128), which can be combined by adding them together to reach the single int value, and then return the different combinations possible. i.e.
Target 192
Returns
64, 128
From what I have read this is a subset-sum problem and can be solved with recursion, but I am really struggling to put the theory and examples I've found into practice. If someone could help me out, or even point me in the right direction, I would appreciate it.
Hint: try the "bitwise and" operator (&)
First of all, it's a good idea to keep I/O and algorithms separated, so you generally shouldn't design functions which take user input and perform some algorithm at the same time.
Next up, "can be solved with recursion" is not a goal of its own. Recursion is dangerous, inefficient and hard to read. There exist very few cases where it should be used in C programming and no cases where beginners should use it at all. Most of the time, recursion in C simply boils down to: "I could paint this barn while standing on my hands at the same time"... well maybe you could, maybe you could do it without risking breaking your neck, maybe you can even do it as quickly as if you were standing upright (not likely), but why would you do it?
Program design aside, the algorithm you are looking for is closely related to binary numbers. Any number in any base can be formed by:
digit_n * base^n + digit_(n-1) * base^(n-1) + ... + digit_0 * base^0.
In the case of binary (base 2) numbers, for example, 111 can be manually decoded to decimal as:
1 * 2^2 + 1 * 2^1 + 1 * 2^0 = 4 + 2 + 1 = 7 decimal.
Now if we compare this with your algorithm, the multipliers above for base 2 correspond to 1, 2, 4, 8...
Conveniently, all numbers in C are actually raw binary. They only get translated to other bases when doing user input/output. So what you need for your algorithm is simply a way to check whether the individual digits of a binary number are set or not.
This can be done with the & "bitwise AND" and << "bitwise left shift" operators. The bitwise left shift is used to shift the value 1 left to get the various multipliers: 1b = 1, 10b = 2, 100b = 4, 1000b = 8 and so on. And then bitwise AND is used to mask out an individual bit from the rest, to see if it is set or not. If it isn't set, well then by the above formula we get 0 * base^n for that digit, so it will be zero and can be ignored.
Writing the actual C code for that is actually quite easy:
for(int i=0; i<8; i++)
{
unsigned int mask = 1u << i;
if(mask & number)
{
printf("%u\n", mask);
}
}
(This is using unsigned numbers to avoid various common bugs, but that's a topic of its own.)
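For completeness, here is the same loop wrapped into a small self-contained program (the function name decompose and the test value 192 are my own choices, following the example from the question):

#include <stdio.h>

/* Print the powers of two (1, 2, 4, ... 128) whose sum is `number`. */
static void decompose(unsigned int number)
{
    for (int i = 0; i < 8; i++)
    {
        unsigned int mask = 1u << i;   /* 1, 2, 4, 8, ... 128 */
        if (mask & number)             /* is bit i of `number` set? */
        {
            printf("%u\n", mask);
        }
    }
}

int main(void)
{
    decompose(192);   /* prints 64 and then 128 */
    return 0;
}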
My goal is to understand how Eva shrinks the intervals for a variable. For example:
unsigned int nondet_uint(void);
int main()
{
unsigned int x=nondet_uint();
unsigned int y=nondet_uint();
//@ assert x >= 20 && x <= 30;
//@ assert y <= 60;
//@ assert (x >= y);
return 0;
}
So, we have x=[20,30] and y=[0,60]. However, Eva shrinks y to [0,30], which is the part of the domain where the last assertion may hold.
[eva] ====== VALUES COMPUTED ======
[eva:final-states] Values at end of function main:
x ∈ [20..30]
y ∈ [0..30]
__retres ∈ {0}
I tried some options for the Eva plugin, but none showed the steps for it. May I ask you to provide the method or publication on how to compute these values?
Showing values during abstract interpretation
I tried some options for the Eva plugin, but none showed the steps for it.
The most efficient way to follow the evaluation is not via command-line options, but by adding Frama_C_show_each(exp) statements in the code. These are special function calls which, during the analysis, emit the values of the expression contained in them. They are especially useful in loops, for instance to see when a widening is triggered, or what happens to the loop counter values.
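For example, in the program from the question, one possible sketch is the following (the suffix _xy is arbitrary: Eva treats any call whose name starts with Frama_C_show_each as a request to print the values of its arguments; the explicit prototype simply avoids an implicit-declaration warning):

void Frama_C_show_each_xy(unsigned int, unsigned int);
unsigned int nondet_uint(void);

int main()
{
    unsigned int x = nondet_uint();
    unsigned int y = nondet_uint();
    //@ assert x >= 20 && x <= 30;
    //@ assert y <= 60;
    Frama_C_show_each_xy(x, y);   /* values before the last assert */
    //@ assert x >= y;
    Frama_C_show_each_xy(x, y);   /* values after the reduction by x >= y */
    return 0;
}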
Note that displaying all of the intermediary evaluation and reduction steps would be very verbose, even for very small programs. By default, this information is not exposed, since it is too dense and rarely useful.
For starters, try adding Frama_C_show_each statements, and use the Frama-C GUI to see the result. It allows focusing on any expression in the code and, in the Values tab, shows the values for the given expression, at the selected statement, for each callstack. You can also press Ctrl+E and type an arbitrary expression to have its value evaluated at that statement.
If you want more details about the values, their reductions, and the overall mechanism, see the section below.
Detailed information about values in Eva
Your question is related to the values used by the abstract interpretation engine in Eva.
Chapter 3 of the Eva User Manual describes the abstractions used by the engine, which are, succinctly:
sets of integers, which are maximally precise but limited to a number of elements (modified by option -eva-ilevel, which on Frama-C 22 is set to 8 by default);
integer intervals with periodicity information (also called modulo, or congruence), e.g. [2..42],2%10 being the set containing {2, 12, 22, 32, 42}. In the simple case, e.g. [2..42], all integers between 2 and 42 are included;
sets of addresses (for pointers), with offsets represented using the above values (sets of integers or intervals);
intervals of floating-point variables (unlike integers, there are no small sets of floating-point values).
Why is all of this necessary? Because without knowing some of these details, you'll have a hard time understanding why the analysis is sometimes precise, sometimes imprecise.
Note that the term reduction is used in the documentation, instead of shrinkage. So look for words related to reduce in the Eva manual when searching for clues.
For instance, in the following code:
int a = Frama_C_interval(-5, 5);
if (a != 0) {
//@ assert a != 0;
int b = 5 / a;
}
By default, the analysis will not be able to remove the 0 from the interval inside the if, because [-5..-1];[1..5] is not an interval, but a disjoint union of intervals. However, if the number of elements drops below -eva-ilevel, then the analysis will convert it into a small set, and get a precise result. Therefore, changing some analysis options will result in different ranges, and different results.
In some cases, you can force Eva to compute using disjunctions, for instance by adding the split ACSL annotation, e.g. //@ split a < b || a >= b;. But you still need to give the analysis some "fuel" for it to evaluate both branches separately. The easiest way to do so is to use -eva-precision N, with N being an integer between 0 and 11. The higher N is, the more splitting is allowed to happen, but the longer the analysis may take.
Note that, to ensure termination of the analysis, some mechanisms such as widening are used. Without it, a simple loop might require billions of evaluation steps to terminate. This mechanism may introduce extra values which lead to a less precise analysis.
Finally, there are also some abstract domains (option -eva-domains) which allow other kinds of values besides the default ones mentioned above. For instance, the sign domain allows splitting values between negative, zero and positive, and would avoid the imprecision in the above example. The Eva user manual contains examples of usage of each of the domains, indicating when they are useful.
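As a concrete starting point, an invocation combining these options could look like this (main.c is a placeholder file name; check the exact option spellings against your Frama-C version):

frama-c -eva -eva-precision 3 -eva-domains sign main.c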
I would like to declare a speed range for a record type in Ada. The following won't work, but is there a way to make it work?
--Speed in knots, range 0 to unlimited
Speed : float Range 0.0 .. unlimited ;
I just want this value to be zero or positive...
You can't -- but since Speed is of type Float, its value can't exceed Float'Last anyway.
Speed : Float range 0.0 .. Float'Last;
(You'll likely want to declare an explicit type or subtype.)
Just for completeness, you can also define your own basic float types rather than use one called Float which may or may not have the range you require.
For example, Float is defined somewhere in the compiler or RTS (Runtime System) sources, probably as type Float is digits 7; alongside type Long_Float is digits 15;, giving you 7 and 15 digits precision respectively.
You can define yours likewise to satisfy the precision and range your application requires. The philosophy is, state what you need (in range and precision), and let the compiler satisfy it most efficiently. This is programming in the problem domain, stating what you want - rather than in the solution domain, binding your program to what a specific machine or compiler supports.
The compiler will either use the next highest precision native float (usually IEEE 32-bit or 64-bit floats) or complain that it can't do that
(e.g. if you declare
type Extra_Long_Float is digits 33 range 0.0 .. Long_Float'Last * Long_Float'Last;
your compiler may complain if it doesn't support 128-bit floats.)
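Applying that idea to the question, a minimal sketch could look like this (the type name Knots, the precision and the upper bound are arbitrary choices of mine, not requirements from the question):

--  Speed in knots: state the precision and range you actually need
type Knots is digits 6 range 0.0 .. 1_000.0;

Speed : Knots := 0.0;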
Unlimited isn't possible. It would require unlimited memory. I'm not aware of any platform that has that. It's possible to write a package that provides rational numbers as big as the available memory can handle (see PragmARC.Rational_Numbers in the PragmAda Reusable Components for an example), but that's probably not what you're interested in. You can declare your own type with the maximal precision supported by your compiler:
type Speed_Value_Base is digits System.Max_Digits;
subtype Speed_Value is Speed_Value_Base range 0.0 .. Speed_Value_Base'Last;
Speed : Speed_Value;
which is probably what you're after.
Consider the following code:
My_Constant : constant := 2;
Is "My_Constant" a variable, or is it something like a C-language macro? In other words, does it have storage in memory?
Note that constant just means that you cannot modify the object. It doesn't necessarily mean that the value of the constant is known at compile time. So there are three cases to consider:
(1) A constant with a type, whose value is known at compile time:
My_Constant : constant Integer := 3;
In this case, there's no reason for the compiler to allocate memory for the constant; it can use the value 3 whenever it sees My_Constant (and is likely to use 3 as the immediate operand of instructions, where possible; if it sees something like My_Constant * 2 then it can use the value 6 as an immediate operand). The compiler is allowed to allocate memory for the constant, but I don't think any decent compiler would do so, in a simple case like this with a small number. If it were a really large number that wouldn't fit into an immediate operand, it might make more sense to allocate space for the number somewhere (if it can do so in a way that saves on code space).
In a more complex case:
My_Record_Constant : constant Rec := (Field_1 => 100, Field_2 => 201, Field_3 => 44);
Here, a good compiler can decide whether or not to store the constant in memory based on how it's used. If the only uses are accesses of individual fields (My_Record_Constant.Field_1), the compiler could replace those with the integer values, as if they were integer constants, and there would be no need to store the entire record in memory.
However, using aliased will cause any constant to be forced into memory:
My_Constant : aliased constant Integer := 3;
Now memory has to be allocated because the program could say My_Constant'Access (the access type has to be access constant Integer).
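A small sketch of that last point (the access type name is mine):

My_Constant : aliased constant Integer := 3;

type Const_Int_Access is access constant Integer;
P : Const_Int_Access := My_Constant'Access;  -- legal only because My_Constant is aliased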
(2) A constant whose value is not known at compile time:
My_Constant : constant Integer := Some_Function_Call (Parameter_1);
The function is called once, when the integer's declaration is elaborated. Since it is not a macro expansion, uses of My_Constant do not generate calls to the function. Example:
procedure Some_Procedure is
My_Constant : constant Integer := Some_Function_Call (Parameter_1);
begin
Put_Line (Integer'Image (My_Constant));
Put_Line (Integer'Image (My_Constant));
end Some_Procedure;
Some_Function_Call is called each time Some_Procedure is called, but only once per call, not once for each of the two uses of My_Constant.
Most likely, this requires the value to be stored in memory to hold the function result, so space will be allocated for My_Constant. (This still isn't a requirement. If a good optimizing compiler can somehow figure out that Some_Function_Call will return a known value, it can use that information.)
(3) A named number. This is the example you have, where there is no type:
My_Constant : constant := 2;
The language rules say the value must be known at compile time. This is the equivalent of using that number every time My_Constant is seen, so it's the closest thing to a C macro you're going to get in Ada. But the effect is basically the same as in (1) [except with fewer restrictions on type compatibility]. The compiler probably will not allocate space for it, but it might do so for a larger value. Note that this syntax is allowed only for numeric values (integer or real).
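To illustrate the type-compatibility point, here is a short sketch (the object names are mine): a named number has no type of its own, so it can be used directly with any appropriate numeric type, without conversions:

My_Constant : constant := 2;          -- named number (universal_integer)

I : Integer      := My_Constant;      -- fine
L : Long_Integer := My_Constant;      -- also fine, no conversion needed
type Byte is mod 256;
B : Byte         := My_Constant;      -- works with modular types too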
Another variation on option (1) above is for a constant array.
primes : constant array(Integer range <>) of Integer := (2, 3, 5, 7, 11, 13, 17, 19, 23);
If the compiler can see it being accessed with an index that isn't known at compile time, it will have to store the array in memory.
I doubt the compiler writers would try and special case any other obscure corner condition to save some memory - they have enough other special corner cases to worry about in Ada!
I am using this piece of code and a Stack_overflow exception is triggered; if I use Extlib's Hashtbl the error does not occur. Any hints on using the specialized Hashtbl without the stack overflow?
module ColorIdxHash = Hashtbl.Make(
struct
type t = Img_types.rgb_t
let equal = (==)
let hash = Hashtbl.hash
end
)
(* .. *)
let (ctable: int ColorIdxHash.t) = ColorIdxHash.create 256 in
for x = 0 to width -1 do
for y = 0 to height -1 do
let c = Img.get img x y in
let rgb = Color.rgb_of_color c in
if not (ColorIdxHash.mem ctable rgb) then ColorIdxHash.add ctable rgb (ColorIdxHash.length ctable)
done
done;
(* .. *)
The backtrace points to hashtbl.ml:
Fatal error: exception Stack_overflow
Raised at file "hashtbl.ml", line 54, characters 16-40
Called from file "img/write_bmp.ml", line 150, characters 52-108
...
Any hints?
Well, you're using physical equality (==) to compare the colors in your hash table. If the colors are structured values (I can't tell from this code), none of them will be physically equal to each other. If all the colors are distinct objects, they will all go into the table, which could really be quite a large number of objects. On the other hand, the hash function is going to be based on the actual color R,G,B values, so there may well be a large number of duplicates. This will mean that your hash buckets will have very long chains. Perhaps some internal function isn't tail recursive, and so is overflowing the stack.
Normally the length of the longest chain will be 2 or 3, so it wouldn't be surprising that this error doesn't come up often.
Looking at my copy of hashtbl.ml (OCaml 3.12.1), I don't see anything non-tail-recursive on line 54. So my guess might be wrong. On line 54 a new internal array is allocated for the hash table. So another idea is just that your hashtable is just getting too big (perhaps due to the unwanted duplicates).
One thing to try is to use structural equality (=) and see if the problem goes away.
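A minimal sketch of that change, keeping everything else from the question as-is (Img_types.rgb_t is the question's own type):

module ColorIdxHash = Hashtbl.Make(
  struct
    type t = Img_types.rgb_t
    let equal = (=)            (* structural equality, consistent with Hashtbl.hash *)
    let hash = Hashtbl.hash
  end
)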
One reason you may have non-termination or stack overflows is if your type contains cyclic values. (==) terminates on cyclic values (while (=) may not), but Hashtbl.hash is probably not cycle-safe. So if you manipulate cyclic values of type Img_types.rgb_t, you have to devise your own cycle-safe hash function -- typically by calling Hashtbl.hash on only one of the non-cyclic subfields/subcomponents of your values.
I've already been bitten by precisely this issue in the past. Not a fun bug to track down.