What is the meaning of AB+C? - logical-operators

I want to know the truth table and the logic gate circuit for the following Boolean expression.
(A+B).(AB+C)
I have a doubt about the meaning of AB+C.
Please, I want some help to get through this. It's important for my ICT examination.

For AB+C, what you do is ((A and B) or C): AND (written as juxtaposition) binds tighter than OR (written +), so AB+C is not A and (B or C).
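As a quick check, here is a small Python sketch (not from the original thread) that enumerates the truth table of the full expression, reading juxtaposition AB as AND, + as OR, and . as AND:

from itertools import product

print("A B C | (A+B).(AB+C)")
for a, b, c in product([0, 1], repeat=3):
    result = (a or b) and ((a and b) or c)
    print(a, b, c, "|", int(bool(result)))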

Related

How to mimic logical XOR in ZX Spectrum basic?

Sometimes when coding in ZX Spectrum Basic I need to evaluate logical expressions that are formed by two operands and a logical xor like this:
IF (left operand) xor (right operand) THEN
Since ZX Basic only knows NOT, OR and AND, I have to resort to some sort of fancy calculation which includes multiple uses of the left/right operands. This is awkward since it consumes time and memory, both scarce if you're working on an 8-bit machine. I wonder if there's a neat trick to mimic the xor operator.
To test the outcome I provide a small code sample:
5 DEF FN x(a,b)=(a ??? b) : REM the xor formula, change here
10 FOR a=-1 TO 1 : REM left operand
20 FOR b=-1 TO 1 : REM right operand
30 LET r=FN x(a,b) : REM compute xor
40 PRINT "a:";a;" b:";b;" => ";r
50 NEXT b
60 NEXT a
Can you help me find a performant solution? So far I tried DEF FN x(a,b)=(a AND NOT b) OR (b AND NOT a) but it's somewhat clumsy.
Edit:
If you want to test your idea I suggest the BasinC v1.69 ZX emulator (Windows only).
As @Jeff pointed out, most BASICs, such as the ZX one, consider zero values as false and non-zero ones as true.
I have adapted the sample to test with a variety of non-zero values.
The logical xor is semantically equivalent to not equal.
IF (left operand) <> (right operand) THEN
should work.
Edit: In the case of integer operands you can use
IF ((left operand) <> 0) <> ((right operand) <> 0) THEN
DEF FN x(a,b)=((NOT a) <> (NOT b))
Using NOT as coercion to a boolean value.
EDIT Previously I had NOT NOT on each side, which is unnecessary for establishing a difference between the two, as a single NOT will still coerce!
EDIT 2 Added parens to sort out precedence issue.
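A quick way to sanity-check this formula off-device is a small Python model (Python standing in for ZX BASIC here, with BASIC's NOT assumed to return 1 for zero and 0 for anything else, and <> returning 1/0):

def basic_not(v):
    return 1 if v == 0 else 0

def fn_x(a, b):
    # DEF FN x(a,b) = ((NOT a) <> (NOT b))
    return 1 if basic_not(a) != basic_not(b) else 0

for a in (-1, 0, 1, 7):
    for b in (-1, 0, 1, 7):
        assert (fn_x(a, b) == 1) == ((a != 0) != (b != 0))
print("matches logical xor for all tested values")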
Finding this question and the answers here very interesting and fun, I would like to share the results of some performance tests (performed on an emulator):
Elapsed times are in seconds; less is better.
The x1 test only checks that the expression meets the requirements, and includes the printout of the results; the x256 test repeats the same test 256 times without printing any output. The "without FN" tests are the same but without factoring the expression into an FN statement.
I also share the code and test suite on GitHub (https://github.com/rondinif/XOR-in-ZX-Spectrum-basic) for the benefit of all retro computing fanatics (..like me), so we can share our opinions.
Keep in mind, the values are integers:
I think a mathematical operation could be fun: (A-B)*(A-B) should work.
It should be less time-consuming, being based on simple operations.
Or, with ABS: ABS (A-B)
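One caveat, easy to verify with a quick Python stand-in for the BASIC arithmetic: these arithmetic variants behave as xor only when A and B are already 0 or 1 (e.g. the results of comparisons); with other non-zero "true" values such as A=2, B=3, (A-B)*(A-B) is non-zero even though xor should be false.

for a in (0, 1):
    for b in (0, 1):
        assert ((a - b) * (a - b) != 0) == (bool(a) != bool(b))
        assert (abs(a - b) != 0) == (bool(a) != bool(b))
print("ok for 0/1 operands")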

Help to understand Google Code Jam 2011 Candy Splitting problem

I'm participating in Google Code Jam. Before anything, I want to say that I don't want anyone to solve a problem for me "to win", or anything like that. I just want some help understanding a problem I couldn't solve in a round that has already FINISHED.
Here is the link to the problem, called Candy Splitting. I won't explain it here because there is no sense in it; I won't be able to explain it better than Google does.
I would like to know some "good" solution to the problem. For example, I've downloaded the first English solution and I've seen the code has only 30 lines!!! That's amazing! (Anyone can download it, so I think there is no problem with saying it: the solution of theycallhimtom from here.) I can't understand the solution even looking at the code. (My ignorance of Java doesn't help.)
Thanks!
Google themselves provide discussions of the problems and their solutions.
See this link for the Candy Splitting problem: http://code.google.com/codejam/contest/dashboard?c=975485#s=a&a=2
Basically, the candies can be divided into two equal value piles (from Patrick's point of view) if
C[0] xor C[1] xor C[2] xor ... xor C[N] == 0.
One such split gives one pile the sum of all candy values except one. To maximise the value of that pile, take the lowest-value candy and put it in a pile of its own.
Why is it so?
The way I thought about it is that, by definition, Patrick's addition is actually equal to xoring the values. From the definition of the problem, we want
C[i] xor C[j] xor ... xor C[k] == C[x] xor C[y] xor ... xor C[z]
for some elements on each side.
XORing the RHS into both the LHS and the RHS yields
C[i] xor C[j] xor ... xor C[k] xor C[x] xor C[y] xor ... xor C[z] == 0
Since xoring a value with itself gives 0, and the order of xor operations is not important, the RHS becomes 0.
Any of the elements on the LHS can be moved over to the right side and the equality still holds. Picking the lowest-value element makes the best split between the piles.
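Putting the whole approach together, a minimal Python sketch (my own rendering of the algorithm described above; candy_splitting is a made-up name) would be:

def candy_splitting(values):
    x = 0
    for v in values:
        x ^= v                        # Patrick's carry-less addition
    if x != 0:
        return None                   # no split looks equal to Patrick
    return sum(values) - min(values)  # give the smallest candy its own pile

print(candy_splitting([3, 5, 6]))     # 11: piles {5,6} and {3}, both xor to 3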

algorithm to find derivative

I'm writing a program in Python and I need to find the derivative of a function (a function expressed as a string).
For example: x^2+3*x
Its derivative is: 2*x+3
Are there any scripts available, or is there something helpful you can tell me?
If you are limited to polynomials (which appears to be the case), there are basically three steps:
Parse the input string into a list of coefficients of x^n.
Convert that list of coefficients into a new list of coefficients according to the rules for differentiating a polynomial.
Take the list of coefficients of the derivative and create a nice string describing the derivative polynomial.
If you need to handle polynomials like a*x^15125 + x^2 + c, using a dict for the list of coefficients may make sense, but it requires a little more attention when iterating through the list.
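As a rough illustration of those three steps (my own sketch; it only handles simple integer-coefficient terms like 3*x^2, x and 5, and uses a dict of exponent -> coefficient as suggested):

import re

def parse_poly(s):
    coeffs = {}
    for term in filter(None, s.replace(' ', '').replace('-', '+-').split('+')):
        m = re.fullmatch(r'(-?\d*)\*?(x?)(?:\^(\d+))?', term)
        coef = int(m.group(1)) if m.group(1) not in ('', '-') else int(m.group(1) + '1')
        exp = int(m.group(3)) if m.group(3) else (1 if m.group(2) else 0)
        coeffs[exp] = coeffs.get(exp, 0) + coef
    return coeffs

def derive(coeffs):
    # power rule: c*x^e becomes c*e*x^(e-1); constants disappear
    return {e - 1: c * e for e, c in coeffs.items() if e != 0}

def to_string(coeffs):
    terms = [f"{c}*x^{e}" if e > 1 else (f"{c}*x" if e == 1 else str(c))
             for e, c in sorted(coeffs.items(), reverse=True)]
    return '+'.join(terms).replace('+-', '-')

print(to_string(derive(parse_poly("x^2+3*x"))))  # 2*x+3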
sympy does it well.
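For the example in the question:

import sympy
x = sympy.symbols('x')
print(sympy.diff(x**2 + 3*x, x))  # 2*x + 3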
You may find what you are looking for in the answers already provided. I, however, would like to give a short explanation of how to compute symbolic derivatives.
The business is based on operator overloading and the chain rule of derivatives. For instance, the derivative of v^n is n*v^(n-1)*dv/dx, right? So, if you have v=3*x and n=3, what would the derivative be? The answer: if f(x)=(3*x)^3, then the derivative is:
f'(x)=3*(3*x)^2*(d/dx(3*x))=3*(3*x)^2*(3)=3^4*x^2
The chain rule allows you to "chain" the operation: each individual derivative is simple, and you just "chain" the complexity. Another example, the derivative of u*v is v*du/dx+u*dv/dx, right? If you get a complicated function, you just chain it, say:
d/dx(x^3*sin(x))
u=x^3; v=sin(x)
du/dx=3*x^2; dv/dx=cos(x)
d/dx(u*v)=v*du/dx+u*dv/dx
As you can see, differentiation is only a chain of simple operations.
Now, operator overloading.
If you can write a parser (try Pyparsing) then you can request it to evaluate both the function and derivative! I've done this (using Flex/Bison) just for fun, and it is quite powerful. For you to get the idea, the derivative is computed recursively by overloading the corresponding operator, and recursively applying the chain rule, so the evaluation of "*" would correspond to u*v for function value and u*der(v)+v*der(u) for derivative value (try it in C++, it is also fun).
So there you go, I know you don't mean to write your own parser - by all means use existing code (visit www.autodiff.org for automatic differentiation of Fortran and C/C++ code). But it is always interesting to know how this stuff works.
Cheers,
Juan
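To make the operator-overloading idea above concrete, here is a rough Python sketch (my own; the Dual class and sin helper are made-up names, not from any of the answers), tracking each subexpression's value and derivative together:

import math

class Dual:
    # Carries a value and its derivative; operators apply the chain rule.
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    def __mul__(self, other):
        # product rule: d(u*v) = v*du + u*dv, as in the answer above
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    other.val * self.der + self.val * other.der)

def sin(u):
    # chain rule: d(sin u) = cos(u)*du
    return Dual(math.sin(u.val), math.cos(u.val) * u.der)

x = Dual(2.0, 1.0)         # seed dx/dx = 1
f = x * x * x * sin(x)     # f(x) = x^3 * sin(x)
print(f.val, f.der)        # value and derivative at x = 2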
Better late than never?
I've always done symbolic differentiation in whatever language by working with a parse tree.
But I also recently became aware of another method using complex numbers.
The parse tree approach consists of translating the following tiny Lisp code into whatever language you like:
(defun diff (s x)
  (cond
    ((eq s x) 1)
    ((atom s) 0)
    ((or (eq (car s) '+) (eq (car s) '-))
     (list (car s)
           (diff (cadr s) x)
           (diff (caddr s) x)))
    ; ... and so on for multiplication, division, and basic functions
    ))
and following it with an appropriate simplifier, so you get rid of additions of 0, multiplying by 1, etc.
But the complex method, while completely numeric, has a certain magical quality. Instead of programming your computation F in double precision, do it in double precision complex.
Then, if you need the derivative of the computation with respect to variable X, set the imaginary part of X to a very small number h, like 1e-100.
Then do the calculation and get the result R.
Now real(R) is the result you would normally get, and imag(R)/h = dF/dX
to very high accuracy!
How does it work? Take the case of multiplying complex numbers:
(a+bi)(c+di) = ac - bd + i(ad+bc)
Now suppose the imaginary parts are all zero, except we want the derivative with respect to a.
We set b to a very small number h. Now what do we get?
(a+hi)(c) = ac + hci
So the real part of this is ac, as you would expect, and the imaginary part, divided by h, is c, which is the derivative of ac with respect to a.
The same sort of reasoning seems to apply to all the differentiation rules.
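A quick Python demonstration of the trick (my own sketch, using the function from the earlier product-rule example):

import cmath

def f(x):
    return x**3 * cmath.sin(x)

h = 1e-100
r = f(complex(2.0, h))
print(r.real)      # f(2), the ordinary result
print(r.imag / h)  # df/dx at 2; compare with 3*2**2*sin(2) + 2**3*cos(2)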
Symbolic Differentiation is an impressive introduction to the subject - at least for a non-specialist like me :) The code is written in C++, btw.
Look up automatic differentiation. There are tools for Python. Also, this.
If you are thinking of writing the differentiation program from scratch, without utilizing other libraries as help, then the algorithm/approach for computing the derivative of any algebraic equation that I described in my blog will be helpful.
You can try creating a class that will represent a limit rigorously and then evaluate it for (f(x)-f(a))/(x-a) as x approaches a. That should give a pretty accurate value of the limit.
If you're using a string as input, you can separate the individual terms using the + or - characters as delimiters, which will give you the individual terms. Now you can use the power rule to solve each term: say you have x^3, which by the power rule gives you 3*x^2; or suppose you have a more complicated term like a/(x^3), i.e. a*x^-3; again you can single out the other variables as constants, and differentiating x^-3 gives you -3*a/(x^4). The power rule alone should be enough; however, it will require extensive use of factorization.
Unless you use an already-made library for the derivation, this is quite complex, because you need to parse and handle functions and expressions.
Differentiating by itself is an easy task, since it's mechanical and can be done algorithmically, but you need a basic structure to store a function.

How to prove by induction that a program does something?

I have a computer program that reads in an array of chars representing operands and operators written in postfix notation. The program then scans through the array and works out the result by using a stack, as shown:
get next char in array until there are no more
if char is operand
push operand into stack
if char is operator
a = pop from stack
b = pop from stack
perform operation using a and b as arguments
push result
result = pop from stack
How do I prove by induction that this program correctly evaluates any postfix expression? (taken from exercise 4.16 Algorithms in Java (Sedgewick 2003))
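For concreteness, here is a direct Python rendering of that pseudocode (a sketch assuming single-digit operands; note that b is popped second, so it is the left operand of non-commutative operators like - and /):

def eval_postfix(chars):
    stack = []
    ops = {'+': lambda b, a: b + a, '-': lambda b, a: b - a,
           '*': lambda b, a: b * a, '/': lambda b, a: b / a}
    for ch in chars:
        if ch.isdigit():                 # char is operand
            stack.append(int(ch))
        elif ch in ops:                  # char is operator
            a = stack.pop()
            b = stack.pop()
            stack.append(ops[ch](b, a))  # perform operation, push result
    return stack.pop()                   # result = pop from stack

print(eval_postfix("52+3*"))  # (5+2)*3 = 21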
I'm not sure which expressions you need to prove the algorithm against. But if they look like typical RPN expressions, you'll need to establish something like the following:
1) the algorithm works for 2 operands (and one operator)
and
algorithm works for 3 operands (and 2 operators)
==> that would be your base case
2) if algorithm works for n operands (and n-1 operators)
then it would have to work for n+1 operands.
==> that would be the inductive part of the proof
Good luck ;-)
Take heart concerning mathematical proofs, and also their sometimes confusing names. In the case of an inductive proof one is still expected to "figure out" something (some fact or some rule), sometimes by deductive logic, but then these facts and rules put together constitute a broader truth, by induction. That is: because the base case is established as true, and because we proved that if X is true for case n then X is also true for case n+1, we don't need to try every case (of which there could be a great number, or even infinitely many).
Back to the stack-based expression evaluator... One final hint (in addition to Captain Segfault's excellent explanation; you're gonna feel over-informed...).
The RPN expressions are such that:
- they have one fewer operator than operands
- they never provide an operator when the stack has fewer than 2 operands in it (if they did, this would be the equivalent of an unbalanced-parenthesis situation in a plain expression, i.e. an invalid expression).
Assuming that the expression is valid (and hence doesn't provide too many operators too soon), the order in which the operands/operators are fed into the algorithm does not matter; they always leave the system in a stable situation:
- either with one extra operand on the stack (but the knowledge that a matching operator will eventually come), or
- with one fewer operand on the stack (but the knowledge that the number of operands still to come is also one less).
So the order doesn't matter.
You know what induction is? Do you generally see how the algorithm works? (even if you can't prove it yet?)
Your induction hypothesis should say that, after processing the N'th character, the stack is "correct". A "correct" stack for a full RPN expression has just one element (the answer). For a partial RPN expression the stack has several elements.
Your proof is then to think of this algorithm (minus the result = pop from stack line) as a parser that turns partial RPN expressions into stacks, and prove that it turns them into the correct stacks.
It might help to look at your definition of an RPN expression and work backwards from it.

Logic question (universal and existential quantifications)

I have a logical statement that says "If everyone plays the game, we will have fun".
In formal logic we can write this as:
Let D mean the people playing.
Let G be the predicate for play the game.
Let F be the predicate for having fun.
Thus [VxeD, G(x)] -> [VyeD, F(y)]
V is the computer science symbol for universal quantification. E below is the existential quantifier.
I'm looking for a way to write a similar statement using only existential quantifiers. My best guess would be that we simply need a way to express the counter-example where it doesn't happen, i.e. to negate the above.
The problem is negating it doesn't make sense. We get:
[VxeD, G(x)] ^ [EyeD, !F(y)]
It's not a proper statement, since the universal quantifier is still in there, though it is also equivalent. Thus I need to re-fabricate my statement into something like VxeD, VyeD, G(x) ^ F(y); then I would get ExeD, EyeD, !G(x) v !F(y), which would mean "there exists someone who doesn't play the game or someone else who doesn't have fun", which doesn't seem correct to me.
Some guidance or clarification would be fantastic :-)
Thanks!
I don't understand your ^ symbol, but I believe you are looking for the contrapositive. In your example, if the original statement is:
[VxeD, G(x)] -> [VyeD, F(y)]
then the contrapositive is
[ExeD, !F(x)] -> [EyeD, !G(y)]
meaning "if there is someone who is not having fun, then there exists someone not playing the game." Note that this is different than the statement in your comment above: it may well be the case that everyone is having fun, but not everyone is playing.
In general, p -> q is equivalent to !q -> !p.
(Of course I may not have understood your notation correctly.)
I'm having trouble reading your notation. I'll use A for the universal quantifier, E for the existential quantifier, F for the predicate 'having fun', and G for the predicate 'playing the game'; then
AxG(x) -> AxF(x)
Now, you can just apply the usual gymnastics:
<==> !AxG(x) <- !AxF(x)
<==> Ex!G(x) <- Ex!F(x)
<==> Ex!F(x) -> Ex!G(x)
so, indeed, when someone's not having fun, it means not everybody played the game.
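A brute-force sanity check of that equivalence (my own sketch, enumerating every assignment of G and F over a three-element domain):

from itertools import product

D = range(3)
for G in product([False, True], repeat=len(D)):
    for F in product([False, True], repeat=len(D)):
        original = (not all(G)) or all(F)          # AxG(x) -> AxF(x)
        contrapositive = (not any(not f for f in F)) or any(not g for g in G)
        assert original == contrapositive          # Ex!F(x) -> Ex!G(x)
print("equivalent on every assignment")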
