How do I write a boolean expression that verifies how many variables are set to true? I will provide a simplified example of the problem I am trying to solve. Please read the whole question before answering, because the actual question is at the very end.
Declarations:
3 boolean variables: a b c
int expected = 2
How do I write a boolean expression that verifies that exactly 2 of the 3 boolean variables are set to true? Something along the lines of problem = (a + b + c) == 2, where problem would be true if exactly 2 of the boolean variables are set to true.
That is the simplified version of the problem: with exactly 3 boolean variables and expected = 2, we can solve it with problem = (a & b) | (b & c) | (c & a).
My question, however, is: how do we solve this for n boolean variables and a varying value of expected, using first-order logic, including logical connectives, predicates, and quantifiers?
I want to stress that I am not looking for actual code from any specific language but just a proposition/predicate expression.
If you assume that false resolves to 0 and true to 1, then you could do:
ans = ((a + b + c + ... + z) == expected)
Where a..z are boolean variables (possibly results from other expressions' evaluations) and expected is the number of true conditions that you want.
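If a concrete sketch helps, here is the same idea in Python (the helper name exactly_n_true is mine, not from the question; it relies on True summing as 1 and False as 0):

```python
def exactly_n_true(expected, *flags):
    """Return True when exactly `expected` of the boolean flags are true."""
    return sum(flags) == expected

# With three variables and expected = 2:
a, b, c = True, True, False
problem = exactly_n_true(2, a, b, c)  # exactly two flags are set
```

The same counting trick generalizes to any number of variables and any expected count.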
Since this is (at least it seems to me) tightly related to programming, I'm asking here rather than on Math or CS, but if you think it fits better there or on another site, please say so.
At the end of Chapter 2 of Bartosz Milewski's Category Theory for Programmers, there's this question:
How many different functions are there from Bool to Bool? Can you implement them all?
This is my reasoning:
Bool has only two elements in it, True and False;
different refers to what the functions do if considered blackboxes, regardless of what happens within them (for instance, two functions coding the sum of two Ints as arg1 + arg2 and arg2 + arg1 respectively, would be the same function from Int to Int);
so the different functions are those going from one of the two Bools to another of the two Bools:
1. T to T
2. T to F
3. F to T
4. F to F
What functions do I need to make those in-out scenarios possible? Well, I think I need only two: for instance, the identity function, which would cover 1 and 4, and negation, which would cover 2 and 3.
Is my reasoning correct?
the different functions are those going from one of the two Bools to another of the two Bools
No. A function maps every value from its domain to one value from its codomain. You need to consider all possible combinations of mappings. For this, it might make sense to look at the function as a relation, and list them all:
f -> f, t -> f
f -> f, t -> t
f -> t, t -> f
f -> t, t -> t
These correspond to the 4 functions
x => f (constant false)
x => x (identity)
x => not(x) (negation)
x => t (constant true)
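Written out in Python for concreteness (the function names are mine), and with a check that the four really are pairwise distinct as black boxes:

```python
def const_false(x: bool) -> bool:
    return False

def identity(x: bool) -> bool:
    return x

def negation(x: bool) -> bool:
    return not x

def const_true(x: bool) -> bool:
    return True

# Tabulate each function over both inputs; four distinct tables
# means four distinct functions.
tables = {(f(False), f(True))
          for f in (const_false, identity, negation, const_true)}
assert len(tables) == 4
```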
The fact that you asked on Programming, rather than math or CS, is important.
On Math, they'd tell you there are four such functions, listed by the other answers.
On CS, they'd tell you there are 27: one for each of three possible inputs T F and ⊥ to each of three possible outputs T F and ⊥.
Here in programming, I can tell you there are eleven:
(T->T, F->F, ⊥->⊥) identity
(T->F, F->T, ⊥->⊥) not
(T->T, F->T, ⊥->T) lazy constant true
(T->F, F->F, ⊥->F) lazy constant false
(T->T, F->T, ⊥->⊥) strict constant true
(T->F, F->F, ⊥->⊥) strict constant false
(T->⊥, F->F, ⊥->⊥) identity fail on true
(T->T, F->⊥, ⊥->⊥) identity fail on false
(T->⊥, F->T, ⊥->⊥) not fail on true
(T->F, F->⊥, ⊥->⊥) not fail on false
(T->⊥, F->⊥, ⊥->⊥) fail
(This answer is quite tongue-in-cheek: I think in reality most scholarly CS types would say either 4 or 11.)
There are four functions:
1
false -> false
true -> false
2
false -> false
true -> true
3
false -> true
true -> false
4
false -> true
true -> true
Explanation
Your reasoning is mostly correct. The functions are black boxes; we view them as values. Since the input is a boolean with two possible values, and the output for each input can independently be either of two values, the count is 2^2 = 4.
I am trying to write a recursive function that will return true if second number is power of first number.
For example:
find_power 3 9 will return true
find_power 2 9 will return false, because no power of 2 equals 9 (2^3 = 8)
This is what I have tried but I need a recursive solution
let rec find_power first second =
  if second mod first = 0 then
    true
  else
    false ;;
A recursive function has the following rough form
let rec myfun a b =
  if answer is obvious then
    obvious_answer
  else
    let (a', b') = smaller_example_of_same_problem a b in
    myfun a' b'
In your case, I'd say the answer is obvious if the second number is not a multiple of the first, or if it's 1. That is essentially all your code is doing now: it's testing the obvious part. (Except you're not handling the 0th power, i.e., 1.)
So, you need to figure out how to make a smaller example of the same problem. You know (by hypothesis) that the second number is a multiple of the first one. And you know that x * a is a power of a if and only if x is a power of a. Since x is smaller than x * a, this is a smaller example of the same problem.
This approach doesn't work particularly well in some edge cases, like when the first number is 1 (since x is not smaller than x * 1). You can probably handle them separately.
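For illustration only (in Python rather than the OCaml the question uses), the recursion described above could look like this; it assumes first > 1, per the edge-case caveat:

```python
def find_power(first, second):
    """Is `second` a power of `first`? Assumes first > 1."""
    if second == 1:          # first^0 = 1: obvious "yes"
        return True
    if second % first != 0:  # not a multiple: obvious "no"
        return False
    # second = first * x; second is a power of first iff x is,
    # and x is a strictly smaller instance of the same problem.
    return find_power(first, second // first)
```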
I have a vector of 2500 values composed of repeated values and NaN values. I want to remove all the NaN values and compute the number of occurrences of each other value.
y
2500-element Array{Int64,1}:
8
43
NaN
46
NaN
8
8
3
46
NaN
For example:
the number of occurrences of 8 is 3
the number of occurrences of 46 is 2
the number of occurrences of 43 is 1.
To remove the NaN values you can use the filter function. From the Julia docs:
filter(function, collection)
Return a copy of collection, removing elements for which function is false.
x = filter(y->!isnan(y),y)
filter!(y->!isnan(y),y)
Thus, we create as our function the condition !isnan(y) and use it to filter the array y. (Note: we could also have written filter(z->!isnan(z),y), using z or any other variable we chose, since the first argument of filter just defines an inline function.) We can then either save the result as a new object, or use the modify-in-place version, signaled by the !, to modify the existing object y.
Then, either before or after this, depending on whether we want to include the NaNs in our count, we can use the countmap() function from StatsBase. From the Julia docs:
countmap(x)
Return a dictionary mapping each unique value in x to its number of
occurrences.
using StatsBase
a = countmap(y)
You can then access specific elements of this dictionary; e.g., a[-1] will tell you how many occurrences there are of -1.
Or, if you wanted to then convert that dictionary to an Array, you could use:
b = hcat([[key, val] for (key, val) in a]...)'
Note: Thanks to #JeffBezanon for comments on correct method for filtering NaN values.
y=rand(1:10,20)
u=unique(y)
d=Dict([(i,count(x->x==i,y)) for i in u])
println("count for 10 is $(d[10])")
countmap is the best solution I've seen so far, but here's a written out version, which is only slightly slower. It only passes over the array once, so if you have many unique values, it is very efficient:
function countmemb1(y)
    d = Dict{Int, Int}()
    for val in y
        if isnan(val)
            continue
        end
        if val in keys(d)
            d[val] += 1
        else
            d[val] = 1
        end
    end
    return d
end
The solution in the accepted answer can be a bit faster if there are a very small number of unique values, but otherwise scales poorly.
Edit: Because I just couldn't leave well enough alone, here's a version that is more generic and also faster (countmap doesn't accept strings, sets or tuples, for example):
function countmemb(itr)
    d = Dict{eltype(itr), Int}()
    for val in itr
        if isa(val, Number) && isnan(val)
            continue
        end
        d[val] = get(d, val, 0) + 1
    end
    return d
end
Using boolean algebra (not a specific language implementation), can we evaluate 1 ^ 1 + 1 (or 1 XOR 1 OR 1) unambiguously?
I can derive two evaluations:
[I]: (1 ^ 1) + 1 = 0 + 1 = 1
[II]: 1 ^ (1 + 1) = 1 ^ 1 = 0
Perhaps there's some stated order of operations, or of a left-to-right evaluation? Or is this not defined in Boolean algebra?
We can use the rules of boolean algebra to attempt to evaluate the expression 1 XOR 1 OR 1.
Now:
XOR can be defined in terms of AND, OR, and NOT: A XOR B = (¬A AND B) OR (¬B AND A);
Associativity tells us that A OR (B OR C) = (A OR B) OR C;
Associativity also tells us that A AND (B AND C) = (A AND B) AND C
So, take the two possible interpretations of evaluation order, (1 XOR 1) OR 1 and 1 XOR (1 OR 1). Even though we have no left-to-right "evaluation order" defined, these rules are all we need to show that the two possible interpretations are not equivalent:

(1 XOR 1) OR 1
= (¬1 AND 1) OR (¬1 AND 1) OR 1
= (0 AND 1) OR (0 AND 1) OR 1
= 0 OR 0 OR 1
= 1

1 XOR (1 OR 1)
= (¬1 AND (1 OR 1)) OR (¬(1 OR 1) AND 1)
= (0 AND 1) OR (0 AND 1)
= 0 OR 0
= 0
Unless I'm forgetting some crucially pertinent axiom, this confirms that you need more context to evaluate the given expression.
(And examining the expression A XOR B OR C ∀ A,B,C is of course substantially more tricky! But if the expression is ambiguous for even one assignment of the three inputs, why bother checking any others?)
This context is usually provided in language-specific evaluation-order rules. C-family languages give XOR a low precedence (something Ritchie came to dislike); by contrast, mathematical convention dictates left-to-right evaluation where no other axiom can be applied and an ambiguity would otherwise be present.
So, basically, since we're ignoring language-specific rules, you'd probably go with [I].
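A quick way to see the two readings disagree, sketched in Python using its bitwise operators on 0/1:

```python
# Parenthesizing each way reproduces the two interpretations:
left_first = (1 ^ 1) | 1   # interpretation [I]
xor_first = 1 ^ (1 | 1)    # interpretation [II]
assert left_first == 1
assert xor_first == 0

# Python itself gives ^ higher precedence than |, so the bare
# expression parses as interpretation [I]:
assert (1 ^ 1 | 1) == 1
```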
Most languages would do XOR then OR; experienced coders will put parentheses in anyway just to make the intent clear.
Many more modern languages use what's called short-circuit evaluation: 0 & ? is always 0, so ? will not be evaluated; likewise 1 + ? is always 1. (XOR can never short-circuit, since both operands always matter.)
I don't think there is a generally agreed upon order of precedence with logical operators. The evaluation would be entirely specific to a language.
In c# for instance, the Operator precedence for bitwise operators is:
AND
XOR
OR
But this might be different for another language. If the precedence is the same, then the operations are executed from left to right. There is no logical XOR operator, but you can use NOT EQUAL (!=) instead.
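The != trick works the same way in other languages; a quick Python sanity check that inequality of booleans coincides with logical XOR:

```python
# a != b is true exactly when one of a, b is true and the other false,
# which is the definition of XOR.
for a in (False, True):
    for b in (False, True):
        assert (a != b) == ((a or b) and not (a and b))
```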
Just having some problems with a simple simplification. I am doing a simplification for the majority decoder with 3 inputs A, B and C. Its output Y assumes 1 if 2 or all 3 inputs assume 1. Y assumes 0 otherwise. Select its correct switching function Y=f(A,B,C).
So, after doing out a truth table I found the Canonical Sum of Products comes to
NOT(A).B.C + A.NOT(B).C + A.B.NOT(C) + A.B.C
This, simplified, apparently comes to Y = A * B + B * C + A * C
What are the steps taken to simplify an expression like this? How is it done? How was this value reached in this case?
First, note that for a Boolean expression:
A = A + A
Now, see that
NOT(A).B.C + A.NOT(B).C + A.B.NOT(C) + A.B.C
= NOT(A).B.C + A.NOT(B).C + A.B.NOT(C) + A.B.C + A.B.C + A.B.C
= (NOT(A)+A).B.C + A.(NOT(B)+B).C + A.B.(NOT(C)+C)
= B.C + A.C + A.B
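The derivation can also be sanity-checked by brute force over all eight input combinations; a small Python sketch:

```python
from itertools import product

# Verify that the canonical sum of products equals the simplified
# majority expression for every assignment of A, B, C.
for A, B, C in product([False, True], repeat=3):
    canonical = (((not A) and B and C) or (A and (not B) and C)
                 or (A and B and (not C)) or (A and B and C))
    simplified = (A and B) or (B and C) or (A and C)
    assert canonical == simplified
```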
Incidentally WolframAlpha is great for doing (checking) Boolean maths in which case the format for your example is:
~A && B && C || A && ~B && C || A && B && ~C || A && B && C
Also, your specific expression is actually on this page as an example, done differently from the other answer given.
You would benefit from understanding some basic logic concepts:
De Morgan's laws explain how to translate ANDed terms into ORed terms (and vice versa). This is a very powerful concept worth learning; it allows the translation of a logic expression into pure NAND or pure NOR form, for which there are very good practical reasons.
A Karnaugh map can be used to visually translate logic expressions into their first canonical form. Using a Karnaugh map is impractical in many real-life cases, but it is a really great learning technique.
One straightforward way of finding the first canonical form for any logic expression is to generate the appropriate truth table and then examine the inputs that result in an output of 1.
For each row in a truth table where the output is 1, you can relatively easily form a logic expression for that row only. The full logic expression comes from ORing the expressions for all such rows. Note that this canonical expression is generally not minimal; simplification, as in the other answers, can usually reduce it further.
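The row-by-row construction described above can be sketched in Python (canonical_sop is my name for the helper; it is not from the answer):

```python
from itertools import product

def canonical_sop(truth_fn, names):
    """Build the canonical sum-of-products for a boolean function by
    collecting one product term per truth-table row whose output is 1.
    `truth_fn` takes a tuple of bools; `names` labels the inputs."""
    terms = []
    for row in product([False, True], repeat=len(names)):
        if truth_fn(row):
            term = ".".join(n if bit else f"NOT({n})"
                            for n, bit in zip(names, row))
            terms.append(term)
    return " + ".join(terms)

# The majority-of-three function from the question:
majority = lambda row: sum(row) >= 2
expr = canonical_sop(majority, ["A", "B", "C"])
# expr is "NOT(A).B.C + A.NOT(B).C + A.B.NOT(C) + A.B.C"
```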
Another explanation.
We have (1):
(not(A) and B and C) or (A and not(B) and C) or (A and B and not(C)) or (A and B and C).
We know that:
A = A or A.
So we can rewrite (1) to (2):
(not(A) and B and C) or (A and B and C) or
(A and not(B) and C) or (A and B and C) or
(A and B and not(C)) or (A and B and C)
We also know that:
(A and B) or (A and not B) = A and (B or not B) = A
So we can rewrite (2) to (3):
(B and C) or (A and C) or (A and B)
The idea is to find groups that can be (partially) eliminated to simplify the equation.