From the following page by Sean Anderson:
http://graphics.stanford.edu/~seander/bithacks.html#MaskedMerge
He says that the result of (a & ~mask) | (b & mask)
can be written as a ^ ((a ^ b) & mask)
From the problem statement, it is intuitive to write (a & ~mask) | (b & mask). But how do I derive the latter expression from the former? Any clues?
Intuitively, & bits means "take these bits", and ^ bits means "flip these bits".
The original expression, (a & ~mask) | (b & mask), means "take bits from b selected by the mask, and take the rest of the bits from a".
Let's start with (b & mask), the bits of b selected by the mask.
a ^ (b & mask) flips some bits of that, namely the 1-bits of a. This is almost correct: outside the mask, b & mask contributes nothing, so those positions are just the bits of a, which is what we want. Inside the mask, however, we get a ^ b where we want b, so the positions where both a and mask are 1 have been flipped once too often. To correct them, we flip those positions again by also XORing with a & mask, which gives a ^ (b & mask) ^ (a & mask) = a ^ ((a ^ b) & mask).
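If you want to convince yourself, here is a minimal C sketch (the sample values are arbitrary) checking that the two forms agree:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0xDEADBEEF, b = 0x12345678, mask = 0x00FF00FF;

    uint32_t naive  = (a & ~mask) | (b & mask); /* bits of b where mask is 1, bits of a elsewhere */
    uint32_t merged = a ^ ((a ^ b) & mask);     /* same merge with one operation fewer */

    assert(naive == merged);
    printf("%08X\n", merged);                   /* prints DE34BE78 */
    return 0;
}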
I need to simplify this expression. I know the answer must be (Not A or Not C), but I keep getting Not C or (Not A and C).
Tried this in Lua:
local a = false
local b = true
local c = true
local f = (not b and not c) or (b and not c) or (not a and c)
local f_ = not c or (not a and c)
print(f, f_)
Output: true true
I also tried all the possibilities with all three variables and both 'f' and 'f_' remained identical.
F = (Not B and Not C) or (B and Not C) or (Not A and C)
-> (Not B and Not C) or (B and Not C) == Not C
F = Not C or (Not A and C)
I like using Karnaugh maps for boolean simplification:
https://en.wikipedia.org/wiki/Karnaugh_map
For your example, we build a 2D truth table:
Then fill in the terms from your question; they all get or'd together:
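With A on the rows and B,C on the columns (Gray-code order, so adjacent cells differ in one variable), a 1 marks each cell where F is true:

        BC=00  BC=01  BC=11  BC=10
  A=0     1      1      1      1
  A=1     1      0      0      1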
Then you find the smallest number of squares/rectangles that cover the needed 1s. The squares and rectangles must have powers of two as dimensions, so 2x2 and 1x4 are fine, but not 3x2, for example. Each rectangle groups minterms into a single product term, and the bigger the rectangle, the simpler the boolean expression it represents. In the map above, 'not A' covers the top row as a 1x4 rectangle, and the rectangle for 'not C' (the two outer columns) wraps off one end of the map and onto the other, but is still considered a 2x2 square.
You can also do it the other way round: cover the unused space (the 0s) with 'maxterms', here the single 1x2 rectangle A and C, and then invert to get the original expression, not (A and C).
The results of 'not A or not C' and 'not (A and C)' are equivalent by De Morgan's laws. (https://en.wikipedia.org/wiki/De_Morgan%27s_laws)
(Not B and Not C) or (B and Not C) or (Not A and C)
|
| Distributive Law
V
((Not B or B) and Not C) or (Not A and C)
|
| Complement Law (Not B or B = true; then true and Not C = Not C)
V
Not C or (Not A and C)
|
| Absorption Law (X or (Not X and Y) = X or Y)
V
Not C or Not A
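A quick brute-force check in C (mirroring the Lua experiment above) confirms that every step preserves the function, over all 8 combinations:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

int main(void) {
    for (int i = 0; i < 8; i++) {
        bool a = i & 4, b = i & 2, c = i & 1;
        bool f  = (!b && !c) || (b && !c) || (!a && c); /* original */
        bool f1 = !c || (!a && c); /* after distributive + complement laws */
        bool f2 = !c || !a;        /* after absorption */
        assert(f == f1 && f1 == f2);
    }
    printf("all 8 cases agree\n");
    return 0;
}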
I am trying to analyze C source code using the Eva plugin of Frama-C.
In the following example, I found that Eva can compute the value of a shift expression whose right-hand side is big, but not one whose right-hand side is small.
extern unsigned char a;

int main() {
    int b, c, d;
    b = (a >> 15) & 1;
    c = (a >> 0) & 1;
    d = b + c;
}
In the example above, Eva can calculate the value of b but not c.
Is (a >> 0) & 1 more complex than (a >> 15) & 1 to Eva?
Eva is giving the most precise answer here: in the initial context, a, an external variable, is in the range [0..255], so b is equal to 0 while c may be either 0 or 1.
If you want to explore what is going on, the Values panel at the bottom of the graphical user interface (launched with frama-c-gui -eva file.c) is the way to go. Its usage is documented in section 4.3 of the Eva user manual, but basically if you click on any (sub)-expression in the normalized code view (the left buffer), the abstract value computed by Eva for this expression at this point will be displayed in the panel.
In your example, we have the following intermediate values for the computation of b:
a -> [0..255]
(int)a -> [0..255] (implicit promotion of an arithmetic operand)
(int)a >> 15 -> {0} (only the low 8 bits of a can be nonzero)
(int)a >> 15 & 1 -> {0} (the left operand is 0, so the bitwise and is also 0)
For c, the first two steps are identical, but then we have
(int)a >> 0 -> [0..255] (no-op, we have the same values as (int)a)
((int)a >> 0) & 1 -> {0; 1} (the interval [0..255] contains both even and odd values, hence the result of the bitwise and can be either 0 or 1)
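If you want to double-check Eva's answer outside the tool, a plain C loop over every possible value of a verifies the same two facts (purely a sanity check, not something Eva needs):

#include <assert.h>

int main(void) {
    int saw0 = 0, saw1 = 0;
    for (int a = 0; a <= 255; a++) {  /* every value an unsigned char can take */
        assert(((a >> 15) & 1) == 0); /* b is exactly {0} */
        if ((a >> 0) & 1) saw1 = 1; else saw0 = 1;
    }
    assert(saw0 && saw1);             /* c really takes both values: {0; 1} */
    return 0;
}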
I'm trying to figure out equivalent expressions for the following equations using bitwise, addition, and/or subtraction operators. I know there's supposed to be an answer (which furthermore generalizes to work for any modulus 2^a-1, where a is a power of 2), but for some reason I can't seem to figure out what the relation is.
Initial expressions (pseudo-code, with ^ denoting exponentiation):
x = n % (2^32-1);
c = (int)n / (2^32-1); // ints are 32-bit, but x, c, and n may have a greater number of bits
My procedure for the first expression was to take the modulo of 2^32, then try to make up the difference between the two moduli. I'm having trouble with this second part.
x = (n & 0xFFFFFFFF) + difference // how do I calculate difference?
I know that the difference n%(2^32)-n%(2^32-1) is periodic (with a period of 2^32*(2^32-1)), and there's a "spike up" starting at multiples of 2^32-1 and ending at 2^32. After each 2^32 multiple, the difference plot decreases by 1 (hopefully my descriptions make sense).
Similarly, the second expression could be calculated in a similar fashion:
c = (n >> 32) + makeup // how do I calculate makeup?
I think makeup steadily increases by 1 at multiples of 2^32-1 (and decreases by 1 at multiples of 2^32), though I'm having trouble expressing this idea in terms of the available operators.
You can use these identities:
n mod (x - 1) = (((n div x) mod (x - 1)) + ((n mod x) mod (x - 1))) mod (x - 1)
n div (x - 1) = (n div x) + (((n div x) + (n mod x)) div (x - 1))
The first comes from writing n = (n div x) * x + (n mod x) and applying (ab+c) mod d = ((a mod d)(b mod d) + (c mod d)) mod d, noting that x mod (x-1) = 1.
The second comes from expanding n = ax + b = a(x-1) + (a + b), then dividing by x-1.
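Here is a sketch of both identities in C for x = 2^32, assuming a 64-bit n; the names mod_m and div_m are mine, and the second identity is applied once more inside div_m to reduce the small leftover sum:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define M 0xFFFFFFFFull            /* 2^32 - 1 */

/* n mod (2^32-1): fold the high half into the low half, because
   2^32 = 1 (mod 2^32-1). Two folds bring s into [0, 2^32-1]. */
static uint64_t mod_m(uint64_t n) {
    uint64_t s = (n >> 32) + (n & M);
    s = (s >> 32) + (s & M);       /* absorb the possible carry */
    return s == M ? 0 : s;
}

/* n div (2^32-1) = (n div 2^32) + ((n div 2^32) + (n mod 2^32)) div (2^32-1),
   with the identity applied again to compute the last small quotient. */
static uint64_t div_m(uint64_t n) {
    uint64_t q = n >> 32, r = n & M;
    uint64_t t = q + r;            /* at most 2*(2^32-1), no overflow */
    return q + (t >> 32) + ((t >> 32) + (t & M) >= M);
}

int main(void) {
    for (uint64_t n = 0; n < UINT64_MAX - 0x123456789abcdefull;
         n += 0x123456789abcdefull) {
        assert(mod_m(n) == n % M);
        assert(div_m(n) == n / M);
    }
    assert(mod_m(UINT64_MAX) == 0 && div_m(UINT64_MAX) == (1ull << 32) + 1);
    printf("identities hold\n");
    return 0;
}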
I think I've figured out the answer to my question:
Compute c first, then use the result to compute x. This assumes that a comparison returns 1 for true and 0 for false, and that the shifts are all logical shifts.
c = (n>>32) + ((n & 0xFFFFFFFF) >= (0xFFFFFFFF - (n>>32)))
x = (0xFFFFFFFE - (n & 0xFFFFFFFF) - ((c - (n>>32))<<32)-c) & 0xFFFFFFFF
edit: changed x (only need to keep the lower 32 bits; the rest is "junk")
I have to write a function dump which takes an expression
type expression =
| Int of int
| Float of float
| Add of expression * expression
| Sub of expression * expression
| Mult of expression * expression
| Div of expression * expression
;;
and returns a string representation of it.
For example:
dump (Add (Int 1, Int 2));;
dump (Sub (Mult (Int 5, Add (Int 2, Int 3)), Int 1));;
should return respectively
- : string = "1+2"
- : string = "5*(2+3)-1"
I've written something like this:
let rec dump e = match e with
  | Int a -> string_of_int a
  | Float a -> string_of_float a
  | Add (e1,e2) -> "(" ^ (dump e1) ^ "+" ^ (dump e2) ^ ")"
  | Sub (e1,e2) -> "(" ^ (dump e1) ^ "-" ^ (dump e2) ^ ")"
  | Mult (e1,e2) -> (dump e1) ^ "*" ^ (dump e2)
  | Div (e1,e2) -> (dump e1) ^ "/" ^ (dump e2)
;;
The returned strings are correct, but not minimal: for Add (Int 1, Int 2) it produces "(1+2)" where it should be "1+2". How can I fix this (without nested pattern matching, which isn't a good idea)?
Let's think about when you need parens:
First of all, always wrapping parens around certain operations is the wrong approach. Whether a term needs to be parenthesized depends not only on which operator is used in the term, but also on which operator the term is an operand of.
E.g. when 1+2 and 3+4 are operands to +, it should be 1+2+3+4, with no parens. However, if the operator is *, it needs to be (1+2) * (3+4).
So for which combinations of operators do we need parens?
The operands of + never need to be parenthesized: products and quotients have higher precedence anyway, and differences need no parens because x + (y - z) = x + y - z.
With - it's a bit different: * and / still don't need to be parenthesized because they have higher precedence, but + and - do iff they're in the second operand, because x + y - z = (x + y) - z, while x - (y + z) != x - y + z.
With Mult both operands need to be parenthesized if they're Add or Sub, but not if they're Mult or Div.
With Div the first operand needs to be parenthesized if it's Add or Sub and the second always needs to be parenthesized (unless it's an Int or Float, of course).
First, define a list of priority levels for your operators:
module Prio = struct
  let div = 4
  let mul = 3
  let sub = 2
  let add = 1
end
A useful construct is "wrap in brackets if this condition is true":
let wrap_if c str = if c then "("^str^")" else str
Finally, define an auxiliary printing function which is provided with a "priority" argument meaning "by the way, you're wrapped in an expression which has priority X, so protect your output accordingly":
let dump e =
  let rec aux prio = function
    | Int a -> string_of_int a
    | Float a -> string_of_float a
    | Add (e1,e2) ->
      wrap_if (prio > Prio.add) (aux Prio.add e1 ^ "+" ^ aux Prio.add e2)
    | Sub (e1,e2) ->
      wrap_if (prio > Prio.add) (aux Prio.add e1 ^ "-" ^ aux Prio.sub e2)
    | Mult (e1,e2) ->
      wrap_if (prio > Prio.mul) (aux Prio.mul e1 ^ "*" ^ aux Prio.mul e2)
    | Div (e1,e2) ->
      wrap_if (prio > Prio.mul) (aux Prio.mul e1 ^ "/" ^ aux Prio.div e2)
  in aux Prio.add e
;;
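With these priorities, dump (Sub (Mult (Int 5, Add (Int 2, Int 3)), Int 1)) produces "5*(2+3)-1": the inner Add is printed with required priority Prio.mul = 3, which exceeds its own level Prio.add = 1, so only it gets wrapped; the Mult and the Sub are printed unwrapped.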
It sounds to me like you want to build some set of reduction rules which can be applied to yield the "prettified" or most-reduced form of your expressions, based on order of operations and e.g. commutativity, associativity, etc. For instance (a + a) => a + a, (a * b) + c => a * b + c and so on.
A rather simple and yet rather generic answer (it works for syntaxes other than mathematical expressions): pick precedences (and, if you're picky, associativities) for your constructors, and only add parentheses when a subterm's constructor has lower precedence than the current constructor.
More precisely: when you want to print a constructor C(x1,x2,x3..), look at the head constructor of each xi (if x1 is D(y1,y2..), its head constructor is D) and compare the precedence levels of C and D. If the precedence of D is lower, add parentheses around the string representation of xi.
According to wiki, shifts can be used to calculate powers of 2:
A left arithmetic shift by n is equivalent to multiplying by 2^n (provided the value does not overflow), while a right arithmetic shift by n of a two's complement value is equivalent to dividing by 2^n and rounding toward negative infinity.
I was always wondering whether the other bitwise operators (~, |, &, ^) make any mathematical sense when applied to base-10. I understand how they work, but can the results of such operations be used to calculate anything useful in the decimal world?
"yep base-10 is what I mean"
In that case, yes, they can be extended to base-10 in several ways, though they aren't nearly as useful as in binary.
One idea is that &, |, etc. are the same as doing arithmetic mod-2 to the individual binary digits. If a and b are single binary-digits, then
a & b = a * b (mod 2)
a ^ b = a + b (mod 2)
~a = 1-a (mod 2)
a | b = ~(~a & ~b) = 1 - (1-a)*(1-b) (mod 2)
The equivalents in base-10 would be (note again that these are applied per digit, not to the whole number):
a & b = a * b (mod 10)
a ^ b = a + b (mod 10)
~a = 9-a (mod 10)
a | b = ~(~a & ~b) = 9 - (9-a)*(9-b) (mod 10)
The first three are useful when designing circuits which use BCD (~a being the 9's complement), such as non-graphing calculators, though we just use * and + rather than & and ^ when writing the equations. The first is also apparently used in some old ciphers.
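For illustration, here is a small C sketch of those per-digit definitions (the helper names dand, dxor, and dnot are made up for this example):

#include <stdio.h>

/* Apply the per-digit rules above to each decimal digit independently. */
unsigned dand(unsigned a, unsigned b) {   /* per-digit a*b mod 10 */
    unsigned r = 0, p = 1;
    while (a || b) {
        r += (a % 10) * (b % 10) % 10 * p;
        a /= 10; b /= 10; p *= 10;
    }
    return r;
}

unsigned dxor(unsigned a, unsigned b) {   /* per-digit (a+b) mod 10 */
    unsigned r = 0, p = 1;
    while (a || b) {
        r += (a % 10 + b % 10) % 10 * p;
        a /= 10; b /= 10; p *= 10;
    }
    return r;
}

unsigned dnot(unsigned a, int digits) {   /* per-digit 9-a: the 9's complement */
    unsigned r = 0, p = 1;
    while (digits--) {
        r += (9 - a % 10) * p;
        a /= 10; p *= 10;
    }
    return r;
}

int main(void) {
    printf("%u\n", dand(358, 226)); /* 608: 3*2=6, 5*2=10->0, 8*6=48->8 */
    printf("%u\n", dxor(358, 226)); /* 574: 3+2=5, 5+2=7, 8+6=14->4 */
    printf("%u\n", dnot(358, 3));   /* 641: 9's complement digit by digit */
    return 0;
}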
A fun trick to swap two integers without a temporary variable is by using bitwise XOR:
void swap(int &a, int &b) {
    a = a ^ b;
    b = b ^ a; // b now holds the original a
    a = a ^ b; // a now holds the original b
}
This works because XOR is associative and its own inverse, so a ^ b ^ b = a. (Beware of aliasing, though: swap(x, x) zeroes x, since x ^ x = 0.)
Yes, there are other useful operations, but they tend to be oriented towards operations involving powers of 2 (for obvious reasons), e.g. test for odd/even, test for power of 2, round up/down to nearest power of 2, etc.
See Hacker's Delight by Henry S. Warren.
In every language I've used (admittedly, almost exclusively C and C-derivatives), the bitwise operators are exclusively integer operations (unless, of course, you override the operation).
While you can twiddle the bits of a floating-point number (they have their own bits, after all), it's not necessarily going to get you the same result as twiddling the bits of an integer. See Single Precision and Double Precision for descriptions of the bits in floating-point numbers, and see Fast Inverse Square Root for an example of advantageous bit-twiddling of floating-point numbers.
EDIT
For integral numbers, bitwise operations always make sense; they are designed for integral numbers.
n << 1 == n * 2
n << 2 == n * 4
n << 3 == n * 8
n >> 1 == n / 2 // for non-negative n
n >> 2 == n / 4
n >> 3 == n / 8
n & 1 == {0, 1}             // the low bit of n
n & 2 == {0, 2}             // the second bit of n
n & 3 == {0, 1, 2, 3}       // the low two bits of n
n | 1 == {n, n+1}           // n with the low bit forced to 1
n | 2 == {n, n+2}           // n with the second bit forced to 1
n | 3 == {n, n+1, n+2, n+3} // n with the low two bits forced to 1
And so on.
You can calculate logarithms using just bitwise operators...
Finding the exponent of n = 2**x using bitwise operations [logarithm in base 2 of n]
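For instance, a naive shift-count version in C (the linked question shows cleverer branch-free variants):

#include <assert.h>
#include <stdint.h>

/* Base-2 logarithm of a power of two, by counting shifts. */
static unsigned log2_pow2(uint32_t n) {
    unsigned x = 0;
    while (n >>= 1) /* shift right until the single 1-bit falls off */
        x++;
    return x;
}

int main(void) {
    assert(log2_pow2(1) == 0);
    assert(log2_pow2(8) == 3);
    assert(log2_pow2(1u << 20) == 20);
    return 0;
}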
You can sometimes substitute bitwise operations for boolean operations. For example, the following code:
if ((a < 0) && (b < 0))
{
    do something
}
In C this can be replaced by:
if ((a & b) < 0)
{
    do something
}
This works because the most significant bit of an integer is the sign bit (1 indicates negative). The value a & b is a meaningless number, but its sign bit is the bitwise and of the sign bits of a and b, so checking the sign of the result is equivalent to checking both signs.
This may or may not benefit performance. Doing two boolean tests/branches will be worse on a number of architectures and compilers. Modern x86 compilers can probably generate a single branch using some of the newer instructions even with the normal syntax.
As always, if it does result in a performance increase, comment the code, i.e. put the "normal" way of doing it in a comment and say the replacement is equivalent but faster.
Likewise, ~, |, and ^ can be used in a similar way if all the conditions are of the form (x < 0).
For comparison conditions you can generally use subtraction:
if ((a < b) | (b < c))
{
}
becomes:
if (((a-b) | (b-c)) < 0)
{
}
because a-b is negative exactly when a is less than b (assuming no overflow). There can be issues with this one if you get within a factor of 2 of max int, i.e. arithmetic overflow, so be careful.
These are valid optimizations in some cases, but otherwise quite useless. And to get really ugly, floating point numbers also have sign bits... ;-)
EXAMPLE:
As an example, let's say you want to take action depending on the order of a, b, c. You can do some nested if/else constructs, or you can do this:
x = ((a < b) << 2) | ((b < c) << 1) | (c < a);
switch (x) { ... } // one case per ordering
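Here's a fleshed-out sketch of that dispatch (the case labels are worked out from the bit layout above; ties fall into the nearest case, so treat it as illustrative):

#include <stdio.h>

/* Pack the three comparisons into a 3-bit code and branch once. */
static const char *order3(int a, int b, int c) {
    int x = ((a < b) << 2) | ((b < c) << 1) | (c < a);
    switch (x) {
    case 6:  return "a <= b <= c"; /* 110 */
    case 4:  return "a <= c <= b"; /* 100 */
    case 5:  return "c <= a <= b"; /* 101 */
    case 2:  return "b <= a <= c"; /* 010 */
    case 3:  return "b <= c <= a"; /* 011 */
    case 1:  return "c <= b <= a"; /* 001 */
    default: return "all equal";   /* 000; 111 is impossible */
    }
}

int main(void) {
    printf("%s\n", order3(1, 2, 3)); /* a <= b <= c */
    printf("%s\n", order3(3, 1, 2)); /* b <= c <= a */
    return 0;
}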
I have used this in code with up to 9 conditions, and also with the subtractions mentioned above plus extra logic to isolate the sign bits instead of using less-than; it was faster than the branching equivalent. However, you no longer need the subtraction and sign-bit extraction, because the standard was updated long ago to specify that a true comparison yields 1, and with conditional moves and the like the plain less-than can be quite efficient these days.