Make-change: Beginner Trouble - recursion

I'm trying to create make-change, which will return a list of coins whose sum equals the input, and it needs to contain the fewest coins possible.
Ex: (make-change 99)
=> (quarter quarter quarter dime dime penny penny penny penny)

Here are the lines along which make-change should operate:
If the remaining amount is exactly equal to 1, 5, 10, or 25 then return the appropriate coin.
Otherwise, cons the largest coin you can use onto the result of (make-change (- x value)) where value is the amount of the coin that you just used.
You can tell this procedure will terminate, since the amount keeps getting smaller via step 2 until step 1 finally applies.
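
Here is a minimal sketch of that outline in Python (assuming the usual US coin values from your example; a Scheme version has the same shape, with cons in place of the list concatenation):

COINS = [("quarter", 25), ("dime", 10), ("nickel", 5), ("penny", 1)]

def make_change(amount):
    # walk the coins from largest to smallest
    for name, value in COINS:
        if amount == value:
            return [name]                    # step 1: exact coin, stop
        if amount > value:
            # step 2: take the largest usable coin, recurse on the rest
            return [name] + make_change(amount - value)
    raise ValueError("amount must be a positive integer")

# make_change(99)
# => ['quarter', 'quarter', 'quarter', 'dime', 'dime',
#     'penny', 'penny', 'penny', 'penny']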

Counting Unique Paths in an Array with Special Rules

Description of the Problem:
I have an array of two-digit numbers going from 00 to 99. I must choose a number at random anywhere in the array; let's call this result r. I may now take up to n steps within the array to "travel" to another number in the array, according to the following rules:
Adding or subtracting 1 from r takes one (1) step; I cannot add 1 if there is a 9 in the ones place (ex: 09, 19, 29, ...) and I cannot subtract 1 if there is a 0 in the ones place (ex: 00, 10, 20, ...)
Adding or subtracting 10 from r takes one (1) step; I cannot bring the result to lower than 00 or higher than 99.
By taking two (2) steps, I can swap the digits in the ones and tens place (ex: 13 -> 31, 72 -> 27); however, I can't perform the swap if the digits are the same (ex: can't swap 00, 11, 22, ...)
For a given number x (00 <= x <= 99) I want to count the set of unique values of r from which I can travel to x, given that I can take between 0 and n steps. I call this count how "accessible" x is. I'd like to express this as a formula, A(x, n), rather than just brute-forcing the results for each combination of x and n.
What I Have Tried:
A(x, 0) is easy enough to calculate: A(x, 0) = 1 for all values in the array, because the only way to reach x from r is for r = x; I take zero (0) steps to reach it.
A(x, 1) is trickier, but still simple: you just take into account the new paths available if I spend my one step on either Rule #1 or Rule #2, and add them to A(x, 0). A(x, 2) is where I have to start including Rule #3, but it also introduces the problem of backtracking. For instance, if I want to reach x and x = r, and I have two (2) steps available, I could perform the following operation: Step 1, r -> r' = r+1 (Rule #1); Step 2, r' -> r'' = r'-1 (Rule #1); so r'' = x and r'' = r. This does not add to my count of unique values from which I can travel to x.
Where I am Stuck:
I cannot figure out how to count the number of backtracking paths in order to remove them from the otherwise simple calculations of A(x, n), so my values of accessibility are coming out too high.
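
Not the closed form you're after, but note that every move above is its own inverse at the same cost (adding 1 undoes subtracting 1, and the swap undoes itself), so the set of r that can reach x within n steps is exactly the set reachable from x within n steps. That makes an exact brute-force checker cheap to write, which is useful for validating any candidate formula. A Python sketch (the helper names are mine):

def neighbors(v):
    # yield (next_value, step_cost) pairs under the three rules
    ones, tens = v % 10, v // 10
    if ones < 9:
        yield v + 1, 1               # Rule 1: can't add 1 past a 9
    if ones > 0:
        yield v - 1, 1               # Rule 1: can't subtract 1 past a 0
    if tens < 9:
        yield v + 10, 1              # Rule 2: stay at or below 99
    if tens > 0:
        yield v - 10, 1              # Rule 2: stay at or above 00
    if ones != tens:
        yield 10 * ones + tens, 2    # Rule 3: swap digits, two steps

def accessibility(x, n):
    # count distinct r from which x is reachable in at most n steps
    best = {x: 0}                    # value -> fewest steps found so far
    frontier = [x]
    while frontier:
        nxt = []
        for v in frontier:
            for w, cost in neighbors(v):
                d = best[v] + cost
                if d <= n and d < best.get(w, n + 1):
                    best[w] = d
                    nxt.append(w)
        frontier = nxt
    return len(best)                 # includes r = x itself (0 steps)

Because this counts distinct values rather than paths, the backtracking problem never arises here, and A(x, 0) = 1 falls out for free.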

Statistical probability of N contiguous true-bits in a sequence of bits?

Let's assume I have a stream of N generated bits (in my case, 64 kilobits).
What's the probability of finding a sequence of X "all true" bits within that stream of N bits, where X ranges from 2 to 16, N ranges from 16 to 1,000,000, and X < N?
For example:
If N=16 and X=5, what's the likelihood of finding 11111 within a 16-bit number?
Like this pseudo-code:
int N = 1 << 16; // 65536 (64K)
int X = 5;
int Count = 0;
for (int i = 0; i < N; i++) {
    int ThisCount = ContiguousBitsDiscovered(i, X);
    Count += ThisCount;
}
return Count;
That is, if we ran an integer in a loop from 0 to 64K-1... how many times would 11111 appear within those numbers.
Extra rule: 1111110000000000 doesn't count, because it has 6 true values in a row, not 5. So:
1111110000000000 = 0x // because it's 6 contiguous true bits, not 5.
1111100000000000 = 1x
0111110000000000 = 1x
0011111000000000 = 1x
1111101111100000 = 2x
I'm trying to do some work involving physically-based random-number generation, and detecting "how random" the numbers are. That's what this is for.
...
This would be easy to solve if N were less than 32 or so: I could just "run a loop" from 0 to 4G, then count how many contiguous runs were detected once the loop completed. Then I could store the numbers and use them later.
Considering that X ranges from 2 to 16, I'd literally only need to store 15 numbers, each less than 32 bits (if N=32)!
BUT in my case N = 65,536, so I'd need to run the loop for 2^65,536 iterations. Basically impossible :)
There's no way to experimentally calculate the values for a given X if N = 65,536. So I need maths, basically.
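
For reference, the small-N experiment is only a few lines in Python; a sketch that applies the exact-run rule above (so 1111110000000000 scores 0 for X = 5 and 1111101111100000 scores 2):

def count_exact_runs(N, X):
    # sum, over all N-bit values, the number of maximal runs of 1s
    # whose length is exactly X -- only feasible for small N
    total = 0
    for v in range(1 << N):
        runs = format(v, '0%db' % N).split('0')   # maximal groups of 1s
        total += sum(1 for r in runs if len(r) == X)
    return total

count_exact_runs(16, 5) computes exactly the Count from the pseudo-code for the N=16, X=5 example.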
Fix X and N, obviously with X < N. You have 2^N possible combinations of 0 and 1 in your N-bit number, and you have N-X+1 possible positions for a 1*X sequence (in this part I'm only looking for 1's together) contained in your bit number. Consider for example N = 5 and X = 2: a possible valid bit number is 01011, so with the last two characters fixed (the last two 1's) you have 2^2 possible combinations for that 1*X sequence. Then you have two cases:
Border case: your 1*X sequence is at the border; then you have (2^(N-X-1))*2 possible combinations.
Inner case: you have (2^(N-X-2))*(N-X-1) possible combinations.
So, the probability is (border + inner)/2^N.
Examples:
1) N = 3, X = 2: the probability is 2/2^3.
2) N = 4, X = 2: the probability is 5/16.
A bit brute force, but I'd do something like this to avoid getting mired in statistics theory:
Multiply the probabilities (1 bit = 0.5, 2 bits = 0.5*0.5, etc) while looping
Keep track of each X and when you have the product of X bits, flip it and continue
Start with a small example (N = 5, X = 1 to 5) to make sure you get the edge cases right, and compare to the brute-force approach.
This can probably be expressed as something like a double sum, Sum over n = 1 to 65536 of (Sum of 0.5^x for x = 1 to 16), but the edge cases need to be taken into account (i.e. if 7 bits don't fit, discard that probability), which gives me a bit of a headache. :-)
@Andrex's answer is plain wrong, as it counts some combinations several times.
For example, consider the case N=3, X=1. The combination 101 happens only 1/2^3 of the time, but the border calculation counts it twice: once as a sequence starting with 10 and once as a sequence ending with 01.
His calculation gives a (1+4)/8 probability, whereas there are only 4 unique sequences that contain a run of exactly one 1 (cases such as 011 don't count, since the run there has length 2):
001
010
100
101
and so the probability is 4/8.
To count the number of unique sequences you need to account for sequences in which the pattern appears multiple times. As long as X is smaller than N/2 this can happen. I'm not sure how you can count them, though.
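
The standard way to finish the job is dynamic programming over the bit positions rather than a closed formula. Here is a sketch (my own, not from the answers above) that counts the N-bit strings containing at least one maximal run of exactly X ones, i.e. the "unique sequences" notion used in the correction above:

from collections import defaultdict

def strings_with_exact_run(N, X):
    # count N-bit strings with at least one maximal run of exactly X
    # ones, in O(N * X) time; probability = result / 2**N
    # state: (length of the trailing run of 1s, capped at X + 1;
    #         whether an exact-X run has already terminated)
    dp = defaultdict(int)
    dp[(0, False)] = 1
    for _ in range(N):
        nxt = defaultdict(int)
        for (run, found), ways in dp.items():
            nxt[(0, found or run == X)] += ways         # append a 0
            nxt[(min(run + 1, X + 1), found)] += ways   # append a 1
        dp = nxt
    # a string also counts if it *ends* in an exact-X run
    return sum(w for (run, found), w in dp.items() if found or run == X)

As a sanity check, strings_with_exact_run(3, 1) returns 4, matching the 4/8 worked out above.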

Fibonacci Tree-Recursion in Structure and Interpretation of Computer Programs

In the classic text by Abelson/Sussman, Structure and Interpretation of Computer Programs, in Section 1.2.2 on tree recursion and the Fibonacci sequence, they show this image:
[Image: the tree-recursive process generated in computing the 5th Fibonacci number]
Then they write: "Notice that the entire computation of (fib 3) - almost half the work - is duplicated. In fact, it is not hard to show that the number of times the procedure will compute (fib 1) or (fib 0) (the number of leaves in the above tree, in general) is precisely Fib(n + 1)."
I understand that they're making a point about tree-recursion and how this classic case of the Fibonacci tree-recursion is inefficient because the recursive function calls itself twice:
[Image: the tree-recursive procedure for computing a Fibonacci number]
My question is, why is it obvious (i.e. "not hard to show") that the number of leaves is equal to the next Fibonacci number in the sequence? I can see visually that it is the case, but I'm not seeing the connection as to why the number of leaves (the reduced down fib 1 and fib 0 calculations) should be an indicator for the next Fibonacci number (in this case 8, which is Fib 6, i.e. the 6th Fibonacci number, i.e. Fib n+1 where n is 5).
It is obvious how the Fibonacci sequence is computed - the sum of the previous two numbers in the sequence yields the current number, but why does the number of leaves precisely equal the next number in the sequence? What is the connection there (other than the obvious, that looking at it and adding up the 1 and 0 leaves does, in fact, yield a total count of 8 in this case, which is the next (6th) Fibonacci number, and so on)?
"Not hard to show" is harder than "obvious".
Use induction with two base cases.
Let's call the number of times Fib(1) or Fib(0) gets computed during Fib(x), Fib01(x).
Then,
Fib01(0) = 1 by definition, which is Fib(1)
Fib01(1) = 1 by definition, which is Fib(2)
Now assume that Fib01(k) = Fib(k+1) for k < n:
Fib01(n) = Fib01(n-1) + Fib01(n-2)
= Fib(n) + Fib(n-1)
= Fib(n+1) by definition
QED.
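
If you want numerical reassurance before trusting the induction, a direct check is short (a sketch mirroring the book's tree recursion):

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def leaves(n):
    # count the calls that bottom out at fib(1) or fib(0)
    return 1 if n < 2 else leaves(n - 1) + leaves(n - 2)

# the claim Fib01(n) = Fib(n + 1), checked for small n
assert all(leaves(n) == fib(n + 1) for n in range(15))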
The number of fib(1) leaves must be equal to fib(n), because those leaves are the only place a non-zero number comes from, and if the sum of some number of 1s is equal to fib(n), there must be fib(n) of them.
Since fib(n+1) = fib(n) + fib(n-1), we just need to show that there are fib(n-1) leaves computing fib(0). It's less obvious to me how to show this, but perhaps it falls inductively out of the previous case?
Perhaps a simpler approach is to just do the whole thing inductively, then.
For our base cases:
N=0: there are fib(N+1)=fib(1)=1 leaves in the tree. Proof by inspection.
N=1: there are fib(N+1)=fib(2)=1 leaves in the tree. Proof by inspection.
Induction step: to compute fib(N) for an arbitrary N, we compute fib(N-1) once, and fib(N-2) once, and add their results. By induction, there are fib(N) leaves in the tree coming from our computation of fib(N-1), and fib(N-1) leaves in the tree coming from our computation of fib(N-2).
There are therefore fib(N) + fib(N-1) leaves in our overall tree, which is equal to fib(N+1). QED.
We can see this by working up from the base cases.
The number of leaves for Fib(0) = 1.
The number of leaves for Fib(1) = 1.
Now, the expression Fib(2) is basically the sum of Fib(1) + Fib(0), i.e., Fib(2) = Fib(1) + Fib(0). So from the tree itself, you can see that the number of leaves for Fib(2) is equal to the sum of leaves in case of Fib(1) and Fib(0). Therefore, the number of leaves for Fib(2) is equal to 2.
Next, for Fib(3) the number of leaves will be sum of leaves for Fib(2) and Fib(1), i.e., 2 + 1 = 3
As you must have observed by now, this follows a pattern similar to the Fibonacci series. In fact, if we define the number of leaves for Fib(n) to be FibLeaves(n), then we can see that this series is just Fib(n) shifted left by one place.
Fib(n) = 0, 1, 1, 2, 3, 5, 8, 13, 21, ..
FibLeaves(n) = 1, 1, 2, 3, 5, 8, 13, 21, ..
And thus, the number of leaves will be equal to Fib(n + 1)
Look at it this way:
It is true that we can generate a part of the Fibonacci sequence by picking any two consecutive terms from anywhere in the full Fibonacci sequence and following the rules for generating the next term,
i.e. full Fibonacci sequence = 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
So if I picked any two consecutive terms, e.g. 3 and 5, and followed the rules for generating the next term in the sequence, I would generate a part of the Fibonacci sequence,
i.e. part of the Fibonacci sequence = 3, 5, 8, 13, 21, 34, ...
It is also true that the number of leaves for a term is equal to the sum of the numbers of leaves of its two previous terms. This rule is the same as the rule for generating the next term in the Fibonacci sequence.
So let's try to get the number of leaves for the second term, i.e. the number of leaves for the zeroth term plus the number of leaves for the first term.
The number of leaves for the zeroth term and for the first term is 1.
The number of leaves for the second term therefore becomes 1 + 1 = 2.
Now, 1, 1 are consecutive terms from the full Fibonacci sequence, and the rule for getting the number of leaves is the same as the rule for generating a term in the Fibonacci sequence, so the leaf counts go on tracing out the Fibonacci sequence one position ahead.
Nice question! It is immediately obvious (after a moment's thought), because the number of leaves under a node in a binary tree is the sum of the respective numbers for its two branches, and, vacuously, 1 for leaves -- which is the definition of Fibonacci numbers ... with this specific shape of a tree.
Implicit in the above imprecise general statement is the proof by induction that
N(0) = 1
N(1) = 1
N(n+2) = N(n+1) + N(n)
which directly maps onto that statement, making it specific and concrete!

How to find n as a sum of distinct prime numbers (when n is an even number)

This problem gives you a positive integer which is less than or equal to 100000 (10^5). You have to find out the following things about the number:
i. Is the number a prime number? If it is a prime number, then print YES.
ii. If the number is not a prime number, can we express the number as a summation of unique prime numbers? If it is possible, then print YES. Here unique means you can use any prime number only one time.
If the above two conditions both fail for the number, then print NO. For more clarification please see the input and output sections and their explanations.
Input
At first you are given an integer T (T<=100), which is the number of test cases. For each case you will be given a positive integer X which is less than or equal to 100000.
Output
For every test case, print only YES or NO.
Sample
Input:
3
7
6
10

Output:
YES
NO
YES
Case – 1 Explanation: 7 is a prime number.
Case – 2 Explanation: 6 is not a prime number. 6 can be expressed as 6 = 3 + 3 or 6 = 2 + 2 + 2, but you can't use any prime number more than one time. There is also no way to express 6 as a summation of two or three unique primes.
Case – 3 Explanation: 10 is not a prime number, but 10 can be expressed as 10 = 3 + 7 or 10 = 2 + 3 + 5. In these two expressions, every prime number is used only one time.
Without employing any mathematical tricks (not sure if any exist...you'd think as a mathematician I'd have more insight here), you will have to iterate over every possible summation. Hence, you'll definitely need to iterate over every possible prime, so I'd recommend the first step being to find all the primes at most 10^5. A basic [Sieve of Eratosthenes](https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) will probably be good enough, though faster sieves exist nowadays. I know your question is language agnostic, but you could consider the following as vectorized pseudocode for such a sieve.
import numpy as np
def sieve(n):
    index = np.ones(n + 1, dtype=bool)         # start with everything marked prime
    index[:2] = False                          # 0 and 1 are not prime
    for i in range(2, int(np.sqrt(n)) + 1):    # the + 1 matters, or squares like 25 slip through
        if index[i]:
            index[i**2::i] = False             # strike out every multiple of i
    return np.where(index)[0]
There are some other easy optimizations, but for simplicity this assumes that we have an array index where the indices correspond exactly to whether the number is prime or not. We start with every number being prime, mark 0 and 1 as not prime, and then for every prime we find we mark every multiple of it as not prime. The np.where() at the end just returns the indices where our index corresponds to True.
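For instance:
>>> sieve(30)
array([ 2,  3,  5,  7, 11, 13, 17, 19, 23, 29])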
From there, we can consider a recursive algorithm for actually solving your problem. Note that you might feasibly have a huge number of distinct primes necessary. The number 26 is the sum of 4 distinct primes. It is also the sum of 3 and 23. Since the checks are more expensive for 4 primes than for 2, I think it's reasonable to start by checking the smallest number possible.
In this case, the way we're going to do that is to define an auxiliary function to find whether a number is the sum of precisely k primes and then sequentially test that auxiliary function for k from 1 to whatever the maximum possible number of addends is.
primes = sieve(10**5)
def sum_of_k_primes(x, k, excludes=()):
    if k == 1:
        # base case: is x itself a prime we haven't already used?
        if x not in excludes and x in primes:
            return (x,) + excludes
        else:
            return ()
    # otherwise pick a prime p and look for k - 1 addends summing to x - p
    for p in (p for p in primes if p not in excludes):
        if x - p < 2:
            break                              # no prime is smaller than 2
        temp = sum_of_k_primes(x - p, k - 1, (p,) + excludes)
        if temp:
            return temp
    return ()
Running through this, first we check the case where k is 1 (this being the base case for our recursion). That's the same as asking if x is prime and isn't in one of the primes we've already found (the tuple excludes, since you need uniqueness). If k is at least 2, the rest of the code executes instead. We check all the primes we might care about, stopping early if we'd get an impossible result (no primes in our list are less than 2). We recursively call the same function for smaller k, and if we succeed we propagate that result up the call stack.
Note that we're actually returning the smallest possible tuple of unique prime addends. This is empty if you want your answer to be "NO" as specified, but otherwise it allows you to easily come up with an explanation for why you answered "YES".
partial = np.cumsum(primes)
def max_primes(x):
    return np.argmax(partial > x)
def sum_of_primes(x):
    for k in range(1, max_primes(x) + 1):
        temp = sum_of_k_primes(x, k)
        if temp:
            return temp
    return ()
For the rest of the code, we store the partial sums of all the primes up to a given point (e.g. with primes 2, 3, 5 the partial sums would be 2, 5, 10). This gives us an easy way to check what the maximum possible number of addends is. The function just sequentially checks if x is prime, if it is a sum of 2 primes, 3 primes, etc....
As some example output, we have
>>> sum_of_primes(1001)
(991, 7, 3)
>>> sum_of_primes(26)
(23, 3)
>>> sum_of_primes(27)
(19, 5, 3)
>>> sum_of_primes(6)
()
At first glance, I thought caching some intermediate values might help, but I'm not convinced that the auxiliary function would ever be called with the same arguments twice. There might be a way to use dynamic programming to do roughly the same thing in a table, with a minimum number of computations, to prevent any duplicated effort from the recursion. I'd have to think more about it.
As far as the exact output your teacher is expecting and the language this needs to be coded in, that'll be up to you. Hopefully this helps on the algorithmic side of things a little.

Right shifting a carry save number

Carry-save arithmetic uses twice the number of bits: one word holds the "virtual sum" and one holds the "virtual carry", to avoid propagating the carry, which is the limiting factor in hardware speed.
I have a system that requires dividing these numbers by powers of two, but simply right-shifting both words does not work in all cases. E.g., take two 16-bit words forming a carry-save number that you add to produce 4000: C001 is the virtual sum, 7FFF is the virtual carry.
C001 + 7FFF = 4000 (discard overflow bits)
but after right shift
6000 + 3FFF = 9FFF (when it should be 2000)
In short: How do you divide a carry save number by a power of two? (While keeping it a carry save number)
First, a right shift by 1 effectively divides by 2, forgetting the remainder. But the remainder could be needed for an exact result. For instance, change your initial example to add C000 to 8000, or C002 to 7FFE. Both give the same sum, but the sum of the shifted values is A000 instead of your 9FFF, and this is definitely more correct. So you can do such shifting only if the sum of the LSBs can safely be lost. In your case, with 2 summands and a 1-bit shift, this means no more than 1 summand may have a 1 in its LSB.
Second, consider this fixed and say you've got A000. Simple ideal math says (a+b)/2 == a/2 + b/2. In your case, the carry bit you initially ignored weighed 0x10000, but after shifting by 1 it weighs 0x8000. That is exactly how A000 differs from your expected 2000. So, if you are sure about the other aspects of your method, finish it with a logical AND with ~0x8000 == 0x7FFF.
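
To make both fixes concrete with the question's numbers, a small Python walkthrough (assuming 16-bit wrap-around arithmetic, with the mask applied to the recombined value):

MASK = 0xFFFF

vs, vc = 0xC001, 0x7FFF
print(hex((vs + vc) & MASK))                # 0x4000, as in the question

# naive shift of both words goes wrong:
print(hex(((vs >> 1) + (vc >> 1)) & MASK))  # 0x9fff, not 0x2000

# Fix 1: redistribute the low bits so at most one word has LSB 1
# (0xC000 + 0x8000 is the same value, with no remainder to lose)
vs, vc = 0xC000, 0x8000
half = ((vs >> 1) + (vc >> 1)) & MASK       # 0xa000

# Fix 2: the carry out of bit 15 (weight 0x10000) that wrap-around
# addition discarded reappears after the shift with weight 0x8000,
# so clear it with ~0x8000 == 0x7FFF:
print(hex(half & 0x7FFF))                   # 0x2000, as expected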
There is a technique to correct the representation so that it is shiftable. It originates from the paper "Carry-save architectures for high-speed digital signal processing" by Tobias Noll. You can compute the new sign bits of the carry and sum vectors as
c' = c_out
s' = s xor c xor c_out
where s and c are the original sign-bits and c_out is the discarded carry-bit from the carry-save addition.
