Propositional logic, logical equivalence - math

a) Determine whether the following statement forms are logically equivalent:
p -> (q -> r) and (p -> q) -> r
b) Use the logical equivalence established in part (a) to rewrite the following sentence in two different ways. (Assume that n represents a fixed integer.)
If n is prime, then n is odd or n is 2.
Can someone help me with part (b)? It's really confusing.

If n is prime, then n is odd or n is 2.
The question is asking you to rewrite the sentence in two different ways in English:
If n is prime and n is not odd, then n is 2.
If n is prime and n is not 2, then n is odd.
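For reference, the equivalence being applied here is p -> (q or r) ≡ (p and ~q) -> r (and, by the same steps with q and r swapped, (p and ~r) -> q). A quick sketch of why it holds:
p -> (q or r)
≡ ~p or (q or r)        (writing the implication as a disjunction)
≡ (~p or q) or r        (associativity)
≡ ~(p and ~q) or r      (De Morgan)
≡ (p and ~q) -> r       (reading the disjunction back as an implication)
With p = "n is prime", q = "n is odd", r = "n is 2", the form (p and ~q) -> r gives the first rewrite above and (p and ~r) -> q gives the second.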
The following links do a better job of explaining it:
Logically_Equivalent_Statements
Exercises on Logic of Compound Statements and Valid Arguments

Related

How to solve ALICESIE on SPOJ, and why its answer follows a common pattern

What is the logic behind the pattern, i.e. ans = (N+1)/2, in the SPOJ problem ALICESIE?
Algorithm given:
1. Create a list of consecutive integers from N to 2 (N, N-1, N-2, ..., 3, 2). All of those N-1 numbers are initially unmarked.
2.Initially, let P equal N, and leave this number unmarked.
3.Mark all the proper divisors of P (i.e. P remains unmarked).
4.Find the largest unmarked number from 2 to P – 1, and now let P equal this number.
5.If there were no more unmarked numbers in the list, stop. Otherwise, repeat from step 3.
Find total number of unmarked numbers.
I know the O(sqrt(N)) solution, but the answer is expected in O(1); it can be found by spotting the common pattern, i.e. (N+1)/2.
But how can this be proved mathematically?
link: ALICESIE
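One way to see where the (N+1)/2 pattern comes from (a sketch of the argument, not an official solution): a number m with 2 <= m <= N can only get marked while some P is being processed, and then m is a proper divisor of P, so P >= 2m. If m > N/2, then 2m > N, so no such P exists in the list and m is never marked. If instead 2 <= m <= N/2, take k = floor(N/m) >= 2; then k*m lies in (N/2, N]. Every number in that range is itself never marked (by the first observation), so k*m is eventually processed as some P, and at that point it marks its proper divisor m. Hence the unmarked numbers are exactly those in (N/2, N], and there are N - floor(N/2) = floor((N+1)/2) of them, i.e. (N+1)/2 with integer division.
A quick brute-force simulation of the marking process (a sketch in F#) to check the pattern for small N:
// Simulate the marking process described above and count the unmarked numbers.
let unmarkedCount n =
    let marked = System.Collections.Generic.HashSet<int>()
    // Walking from N down to 2 over the still-unmarked numbers is the same as
    // "let P be the largest unmarked number below the current P and repeat".
    for p in n .. (-1) .. 2 do
        if not (marked.Contains p) then
            // Mark every proper divisor of p (p itself stays unmarked).
            for d in 2 .. p - 1 do
                if p % d = 0 then marked.Add d |> ignore
    (n - 1) - marked.Count      // the list 2..N holds N-1 numbers in total

for n in 2 .. 20 do
    printfn "N = %2d  unmarked = %2d  (N+1)/2 = %2d" n (unmarkedCount n) ((n + 1) / 2)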

White-box and Black-box testing of recursive functions

I learned white-box and black-box testing in terms of iterative functions. Now I need to do white-box and black-box testing of several recursive functions (in F#). Take the following recursive algorithm for gcd:
gcd (m, n)
    if (m % n) = 0 then
        n
    else
        gcd n (m % n)
For the white-box test: how exactly do I go about covering the different branches of the algorithm? Naively one could say there are two branches, but when the function is called more than once the possible branches will obviously increase. Should I do testing with arguments that result in different numbers of recursive calls, or how exactly do I determine which values to test with?
Black-box: I get the general idea of black-box testing. We should look at possible values we might want to call the function with, without having knowledge of its inner workings. In this case I am just not sure which values we might want to call it with. One way could be to start with two values m and n for which gcd = 1, then do the same for values m and n for which gcd = 2, and so on up to gcd = k for some arbitrary number k. Is this how one is supposed to go about this?
First of all, I don't think there is one single established definition of how to do white-box and black-box testing of recursive functions, but here is how I interpret it.
White-box testing. We want to test the function based on its inner workings. In the case of recursive functions, I think this means that we want to test that the recursive calls it makes are the ones we would expect. One way to do this is to log all recursive calls. A simple implementation of gcd that does this adds a parameter to keep a log and returns it with the result:
let rec gcd log m n =
    let log = (m, n) :: log
    if (m % n) = 0 then List.rev log, n
    else gcd log n (m % n)
Now, for any two parameters, say 54 and 22, you can do the calculation by hand, decide what the parameters of the recursive calls should be, and write a test for that:
let log, res = gcd [] 54 22
log |> shouldEqual [ (54, 22); (22, 10); (10, 2) ]
Black-box testing. Here, we assume we do not know how exactly the function works, so we cannot test its internals. All we can do is to test it using a number of inputs. It is probably a good idea to think of corner-case or tricky inputs because those are the ones that could cause problems. Given a simple implementation:
let rec gcd m n =
    if (m % n) = 0 then n
    else gcd n (m % n)
I would probably write tests for the following:
// A random case where one of the numbers is the result
gcd 100 50 |> shouldEqual 50
gcd 50 100 |> shouldEqual 50
// A random case where the only common divisor is 1
gcd 13 123 |> shouldEqual 1
gcd 123 13 |> shouldEqual 1
// The following are problematic and I'm not sure what the right behaviour is
gcd 0 0 // This probably should not be allowed
gcd 10 -5 // This returns -5, but I'm not sure that's what we want
Random testing.
You could also use random testing (which is a form of black box testing) to generate multiple test cases automatically. There are at least two random tests I can think of:
Generate two random numbers, a and b and check that gcd a b = gcd b a. This is testing only a very basic property, but it can cover quite a lot of cases.
Pick a random number a and a couple of primes p1, p2, .... Then split the primes into two groups and produce a*p1*p3*p5 and a*p2*p4*p6. Write a test that checks that the GCD of the two numbers is a.
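A minimal sketch of what these two random tests could look like in F#, using plain System.Random rather than a property-testing library; the prime groups in the second test are fixed as {3, 11} and {5, 7} purely for illustration:
// The same gcd as above, repeated so this snippet runs on its own.
let rec gcd m n =
    if (m % n) = 0 then n
    else gcd n (m % n)

let rnd = System.Random()

// Test 1: gcd should be symmetric in its arguments.
for _ in 1 .. 1000 do
    let a = rnd.Next(1, 10000)   // keep inputs positive to stay away from the corner cases above
    let b = rnd.Next(1, 10000)
    if gcd a b <> gcd b a then
        failwithf "symmetry failed for %d and %d" a b

// Test 2: a common factor times two coprime products should be recovered exactly.
for _ in 1 .. 1000 do
    let a = rnd.Next(1, 100)
    let x = a * 3 * 11           // a times the primes in the first group
    let y = a * 5 * 7            // a times the primes in the second group
    if gcd x y <> a then
        failwithf "expected gcd %d %d = %d" x y a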

Check whether prime or not in Prolog

I'm trying to learn Prolog and I've found an example where I need to implement a program to check whether a number is prime or not with a single predicate.
The logic I'm trying to follow is a recursive rule that divides the number by all the numbers less than it until it reaches the base case: X > 2 (because 0 and 1 aren't primes) and the number is divisible only by itself.
My code so far is:
isPrime(2).
isPrime(X) :-
    X > 2,           % 0 and 1 aren't primes
    1 is mod(X, 2),
Can someone help?
It's pretty easy provided you don't care about efficiency.
isPrime(X) :-
    X > 1,                                  % 0 and 1 are not prime
    succ(X0, X),                            % X0 is X - 1
    \+ (between(2, X0, N), 0 is X mod N).   % no N in 2..X-1 divides X
:)

Multiply without + or *

I'm working my way through How to Design Programs on my own. I haven't quite grasped complex linear recursion, so I need a little help.
The problem:
Define multiply, which consumes two natural numbers, n and x, and produces n * x without using Scheme's *. Eliminate + from this definition, too.
Straightforward with the + sign:
(define (multiply n m)
  (cond
    [(zero? m) 0]
    [else (+ n (multiply n (sub1 m)))]))

(= (multiply 3 3) 9)
I know to use add1, but I can't get the recursion right.
Thanks.
Split the problem into two functions. First, you need a function (add m n) which adds m to n. What is the base case? When n is zero, return m. What is the recursive step? Add one to the result of calling add again, but with n decremented. You guessed it, add1 and sub1 will be useful.
The other function, (mul m n), is similar. What is the base case? If either m or n is zero, return 0. What is the recursive step? Add (using the previously defined function) m to the result of calling mul again, but with n decremented. And that's it!
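To see the shape of the recursion this describes, here is how small calls to the hypothetical add and mul would unfold:
(add 3 2) = (add1 (add 3 1)) = (add1 (add1 (add 3 0))) = (add1 (add1 3)) = 5
(mul 3 2) = (add 3 (mul 3 1)) = (add 3 (add 3 (mul 3 0))) = (add 3 (add 3 0)) = (add 3 3) = 6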
Since this is almost certainly a homework-type question, hints only.
How do you add 7 and 2? While most people just come up with 9, is there a more basic way?
How about you increment the first number and decrement the second number until one of them reaches zero?
Then the other one is the answer. Let's try the sample:
7 2
8 1
9 0 <- bingo
This will work fine for natural numbers though you need to be careful if you ever want to apply it to negatives. You can get into the situation (such as with 10 and -2) where both numbers are moving away from zero. Of course, you could check for that before hand and swap the operations.
So now you know you can write + in terms of an increment and decrement instruction. It's not fantastic for recursion but, since your multiply-by-recursive-add already suffers the same problem, it's probably acceptable.
Now you just have to find out how to increment and decrement in LISP without using +. I wonder whether there might be some specific instructions for this :-)

Why is modulus defined the way it is in programming languages

I'm not asking about the definition but rather why the language creators chose to define modulus with asymmetric behavior in C++. (I think Java too)
Suppose I want to find the least number greater than or equal to n that is divisible by f.
If n is positive, then I do:
if (n % f)
    ans = n + f - n % f;
If n is negative:
ans = n - n % f;
Clearly, this definition is not the most expedient when dealing with both negative and positive numbers. So why was it defined like this? In what cases is it actually the more convenient choice?
Because it's using "modulo 2 arithmetic", where each binary digit is treated independently of the others. Look at the example on "division" here.
You're mistaken. When n is negative, C++ allows the result of the modulus operator to be either negative or positive as long as the results from % and / are consistent, so for any given a and b, the expression (a/b)*b + a%b will always yield a. C99 requires that the result of a % b will have the same sign as a. Some other languages (e.g., Python) require that the sign of a % b have the same sign as b.
This means the expression you've given for negative n is not actually required to work in C++. When/if n % f yields a positive number (even though n is negative), it will give an ans that's less than n.
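As a worked example of that identity, assuming truncating division (required since C99/C++11, and the common behaviour before that), take a = -7 and b = 3:
C++ (truncate toward zero):  -7 / 3 = -2,  -7 % 3 = -1,  and (-2)*3 + (-1) = -7
Python (floor):              -7 // 3 = -3,  -7 % 3 = 2,   and (-3)*3 + 2 = -7
Both conventions satisfy (a/b)*b + a%b == a; they just pick a different quotient/remainder pair. With the truncating convention and the question's n = -7, f = 3, the expression n - n % f gives -7 - (-1) = -6, which is indeed the least multiple of 3 that is >= -7.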
