Digital root recursive algorithm - recursion

There are a lot of ways to find the digital root of a number and they are all similar to each other, but I can't understand the following one:
int digitalRoot(int n)
{
    if (n < 10)
        return n;
    else
        return digitalRoot(n / 10 + n % 10);
}
I see what the algorithm does, but I can't understand why it works: how is the digital root of a number related to the sum n / 10 + n % 10? Maybe someone could explain it to me in simple terms, if there is a simple explanation?
I find it hard to see any relation between the following two ways of getting the digital root, yet they give the same result, and that's exactly what I'm trying to understand.
1729 => 1 + 7 + 2 + 9 = 19
19 => 1 + 9 = 10
10 => 1 + 0 = 1
and
1729 => 172 + 9 = 181
181 => 18 + 1 = 19
19 => 1 + 9 = 10
10 => 1 + 0 = 1

Maybe an example explains it better:
Let n be 1234.
The first call returns digitalRoot(123 + 4), so the next n is 127.
The second call returns digitalRoot(12 + 7), and that 7 is just 3 + 4.
The third call returns digitalRoot(1 + 9), and that 9 is just 2 + 3 + 4, so the next n is 10.
The fourth call returns digitalRoot(1 + 0), and the fifth call simply returns 1.
So it basically adds up all the digits, one step at a time: each call chops off the last digit and adds it back onto the rest. Since 10a + b and a + b leave the same remainder when divided by 9, each step in either method preserves the value modulo 9, which is why both processes end on the same single digit.
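If you want to see it in action, here is a small self-contained sketch (mine, not from the original post; the helper names digitSum and rootBySummingDigits are made up) that runs both routes side by side and shows they agree:
#include <iostream>

// Sum of all decimal digits of n.
int digitSum(int n) {
    int s = 0;
    while (n > 0) { s += n % 10; n /= 10; }
    return s;
}

// Digital root by repeatedly summing all digits: 1729 -> 19 -> 10 -> 1.
int rootBySummingDigits(int n) {
    while (n >= 10) n = digitSum(n);
    return n;
}

// The recursion from the question: 1729 -> 181 -> 19 -> 10 -> 1.
int digitalRoot(int n) {
    if (n < 10) return n;
    return digitalRoot(n / 10 + n % 10);
}

int main() {
    int tests[] = {1729, 1234, 99, 5};
    for (int n : tests)
        std::cout << n << ": " << rootBySummingDigits(n)
                  << " == " << digitalRoot(n) << '\n';
    return 0;
}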

Related

Alternating between reading forwards and backwards in a loop

My array is 1D, m in length; say m = 16:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
The way I actually interpret the array is as an n x n grid, where n x n = m:
0 1 2 3
4 5 6 7
8 9 10 11
12 13 14 15
I need to read the array in this order because of the way my physical environment is set up:
0 4 8 12 13 9 5 1 2 6 10 14 15 11 7 3
What I came up with works but I really don't think it is the best way to do this:
bool isFlipped = false;
for (int x = 0; x < m; x++) {
    if (x != 0 && x % n == 0)
        isFlipped = !isFlipped;   // new column: reverse the read direction
    if (isFlipped)
        newLine[x] = line[((n - 1) - x % n) * n + x / n];
    else
        newLine[x] = line[(x % n) * n + x / n];
}
This gives me the required result, but I really think there is a way to get rid of this boolean purely by using a math formula. I am stuffing this into an 8 KB microcontroller and I need to conserve as much space as I can, because I will have some Bluetooth communication and more math going into it later on.
Edit:
Thanks to a user I got to a one-line solution-ish (the lines below would replace the body of the for-loop):
int c = x / n;
newLine[x] = line[((c+1)%2)*((x%n)*n + c) + (c%2)*(((n-1)-(x%n))*n + c)];
You should be able to utilize the fact that odd columns in the n*n matrix are read from the bottom up, and even columns from the top down.
A number at index x in newLine is located in column number c=floor(x/n) in the n*n matrix. c%2 is 0 for even columns and 1 for odd columns. So something like this should work:
int c = x/n;
newLine[x] = line[(x%n)*n + (c%2)*((n-1)-2*(x%n))*n + c];
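For what it's worth, here is a tiny standalone sketch (mine, not from the answer) that plugs that formula into a complete loop, assuming n = 4, m = 16 and the line/newLine arrays from the question; it prints the required order 0 4 8 12 13 9 5 1 2 6 10 14 15 11 7 3:
#include <iostream>

int main() {
    const int n = 4, m = n * n;
    int line[m], newLine[m];
    for (int i = 0; i < m; ++i) line[i] = i;     // 0..15 laid out row by row

    for (int x = 0; x < m; ++x) {
        int c = x / n;                           // column currently being read
        int r = x % n;                           // position within that column
        // even columns are read top-down, odd columns bottom-up
        newLine[x] = line[r * n + (c % 2) * ((n - 1) - 2 * r) * n + c];
    }

    for (int x = 0; x < m; ++x) std::cout << newLine[x] << ' ';
    std::cout << '\n';
    return 0;
}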

Number of divisors up to 10^6

I have been trying to solve this problem.
http://www.spoj.com/problems/DIV/
For calculating integer factors, I tried two ways.
First: a normal sqrt(i) iteration.
int divCount = 2;
for (int j = 2; j * j <= i; ++j) {
    if (i % j == 0) {
        if (i / j == j)
            divCount += 1;
        else
            divCount += 2;
    }
}
Second: using prime factorization (primes from a sieve).
for (int j = 0; copy != 1; ++j) {
    int count = 0;
    while (copy % primes.get(j) == 0) {
        copy /= primes.get(j);
        ++count;
    }
    divCount *= (count + 1);
}
While the output is correct, I am getting TLE. Can any more optimization be done? Please help. Thanks.
You're solving the problem from the wrong end. For any number
X = p1^a1 * p2^a2 * ... * pn^an // p1..pn are prime
d(X) = (a1 + 1)*(a2 + 1)* ... *(an + 1)
For instance
50 = 2 * 25 = 2^1 * 5^2
d(50) = (1 + 1) * (2 + 1) = 6
99 = 3^2 * 11^1
d(99) = (2 + 1) * (1 + 1) = 6
So far so good. You need to generate all the numbers such that
X = p1^a1 * p2^a2 <= 1e6
where
(a1 + 1) is prime and
(a2 + 1) is prime.
Having a table of prime numbers from 1 to 1e6, it's a milliseconds task.
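As a rough illustration of that outline, here is a sketch of the generation step (my code, not a complete SPOJ submission, and it ignores whatever output format the judge expects): it collects every X = p1^a1 * p2^a2 <= 1e6 with p1 < p2 prime and both (a1 + 1) and (a2 + 1) prime.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const long long LIMIT = 1000000;

    // Plain Sieve of Eratosthenes for the prime bases.
    std::vector<bool> composite(LIMIT + 1, false);
    std::vector<long long> primes;
    for (long long i = 2; i <= LIMIT; ++i) {
        if (composite[i]) continue;
        primes.push_back(i);
        for (long long j = i * i; j <= LIMIT; j += i) composite[j] = true;
    }

    // Exponents a with (a + 1) prime and 2^a <= 1e6; larger exponents can never fit.
    const std::vector<int> exps = {1, 2, 4, 6, 10, 12, 16, 18};

    // p^a, capped at LIMIT + 1 so callers can simply compare against LIMIT.
    auto pw = [&](long long p, int a) {
        long long r = 1;
        while (a-- > 0) { r *= p; if (r > LIMIT) return LIMIT + 1; }
        return r;
    };

    std::vector<long long> found;
    for (std::size_t i = 0; i < primes.size(); ++i) {
        for (int a1 : exps) {
            long long x1 = pw(primes[i], a1);
            if (x1 > LIMIT) break;                     // a larger a1 cannot fit either
            for (std::size_t j = i + 1; j < primes.size(); ++j) {
                if (x1 * primes[j] > LIMIT) break;     // a larger p2 cannot fit either
                for (int a2 : exps) {
                    long long x = x1 * pw(primes[j], a2);
                    if (x > LIMIT) break;              // a larger a2 cannot fit either
                    found.push_back(x);
                }
            }
        }
    }
    std::sort(found.begin(), found.end());
    std::printf("%zu candidate numbers found below 1e6\n", found.size());
    return 0;
}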
It is possible to solve this problem without doing any factoring. All you need is a sieve.
Instead of a traditional Sieve of Eratosthenes that consists of bits (representing either prime or composite), arrange your sieve so each element of the array is a pointer to an initially-null list of factors. Then visit each element of the array, as you would with the Sieve of Eratosthenes. If the element is a non-null list, it is composite, so skip it. Otherwise, for each element and for each of its powers less than the limit, add the element to each multiple of the power. At the end of this process you will have a list of the prime factors of each number. That wasn't very clear, so let me give an example for the numbers up to 20. Here's the array, initially empty:
 2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
Now we sieve by 2, adding 2 to each of its multiples:
 2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
 2     2     2     2     2     2     2     2     2     2
Since we also sieve by powers, we add 2 to each multiple of 4:
 2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
 2     2     2     2     2     2     2     2     2     2
       2           2           2           2           2
And likewise, by each multiple of 8 and 16:
 2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
 2     2     2     2     2     2     2     2     2     2
       2           2           2           2           2
                   2                       2
                                           2
Now we're finished with 2, so we go to the next number, 3. The entry for 3 is null, so we sieve by 3 and its power 9:
 2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
 2     2     2     2     2     2     2     2     2     2
       2           2           2           2           2
                   2                       2
                                           2
    3        3        3        3        3        3
                      3                          3
Then we sieve by 5, 7, 11, 13, 17 and 19:
 2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
 2     2     2     2     2     2     2     2     2     2
       2           2           2           2           2
                   2                       2
                                           2
    3        3        3        3        3        3
                      3                          3
          5              5              5              5
                7                    7
                           11
                                 13
                                             17
                                                   19
Now we have a list of all the prime factors of all the numbers less than the limit, computed by sieving rather than factoring. It's easy then to calculate the number of divisors by scanning the lists; count the number of occurrences of each factor in the list, add 1 to each total, and multiply the results. For instance, 12 has 2 factors of 2 and 1 factor of 3, so take (2+1) * (1+1) = 3 * 2 = 6, and indeed 12 has 6 factors: 1, 2, 3, 4, 6 and 12.
The final step is to check if the number of divisors has exactly two factors. That's easy: just look at the list of prime divisors and count them.
Thus, you have solved the problem without doing any factoring. That ought to be very fast, just a little bit slower than a traditional Sieve of Eratosthenes and very much faster than factoring each number to compute the number of divisors.
The only potential problem is space consumption for the lists of prime factors. But you shouldn't worry too much about that; the largest list will have only 19 factors (since the smallest factor is 2, and 2^20 is greater than your limit), and 78498 of the lists will have only a single factor (the primes less than a million).
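A minimal sketch of that sieve (my code, not the answer author's) could look like the following; it stores the factor lists in a vector of vectors, which is exactly the space cost discussed above, and then derives d(12) from its list as a sanity check.
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const int LIMIT = 1000000;                        // bound from the problem
    std::vector<std::vector<int>> factors(LIMIT + 1); // factors[n] = prime factors of n, with multiplicity

    for (int p = 2; p <= LIMIT; ++p) {
        if (!factors[p].empty()) continue;            // p already has a factor, so it is composite
        for (long long pk = p; pk <= LIMIT; pk *= p)  // p, p^2, p^3, ... up to the limit
            for (long long m = pk; m <= LIMIT; m += pk)
                factors[m].push_back(p);              // one more factor of p for every multiple of p^k
    }

    // Example: d(12) from its factor list [2, 2, 3] -> (2 + 1) * (1 + 1) = 6.
    int n = 12, d = 1;
    for (std::size_t i = 0; i < factors[n].size(); ) {
        std::size_t j = i;
        while (j < factors[n].size() && factors[n][j] == factors[n][i]) ++j;
        d *= (int)(j - i + 1);                        // multiplicity + 1
        i = j;
    }
    std::printf("d(%d) = %d\n", n, d);
    return 0;
}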
Even though the above-mentioned problem doesn't require calculating the number of divisors, it can still be solved by calculating d(N) (the number of divisors of N) within the time limit (0.07 s).
The idea is pretty simple. Keep track of the smallest prime factor f(N) of every number; this can be done with a standard prime sieve. Now, for every number i, keep dividing it by its current smallest prime factor and incrementing a count until you reach 1. You then have the exponent of each prime factor of every number i.
#define MAX 1000001   // assuming the problem's bound of 10^6

int d[MAX], f[MAX];

void sieve() {
    for (int i = 2; i < MAX; i++) {
        if (!f[i]) {
            f[i] = i;                            // i is prime
            for (int j = i * 2; j < MAX; j += i) {
                if (!f[j]) f[j] = i;             // smallest prime factor of j
            }
        }
        d[i] = 1;
    }
    for (int i = 1; i < MAX; i++) {
        int k = i;
        while (k != 1) {
            int s = 0, fk = f[k];
            while (k % fk == 0) {
                k /= fk; s++;                    // s = exponent of fk in i
            }
            d[i] *= (s + 1);
        }
    }
}
Once d(N) is figured out, the rest of the problem becomes much simpler. Keeping the smallest prime factor of every number also helps in solving lots of other problems.

McCabe's Complexity Metric and Independent Paths

int maxValue = m[0][0];
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        if (m[i][j] > maxValue)
        {
            maxValue = m[i][j];
        }
    }
}
cout << maxValue << endl;

int sum = 0;
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        sum = sum + m[i][j];
    }
}
cout << sum << endl;
For the above code, if we draw a flow graph like this, the basic independent paths would be the following six:
Path 1: 1 2 3 10 11 12 13 19
Path 2: 1 2 3 10 11 12 13 14 15 18 13 19
Path 3: 1 2 3 10 11 12 13 14 15 16 17 15 18 13 19
Path 4: 1 2 3 4 5 9 3 10 11 12 13 19
Path 5: 1 2 3 4 5 6 8 5 9 3 10 11 12 13 14 15 16 17 15 18 13 19
Path 6: 1 2 3 4 5 6 7 8 5 9 3 10 11 12 13 14 15 16 17 15 18 13 19
So the question here is: according to the given code, paths 2, 3 and 4 cannot be tested (note the "N" in the loops). So is it okay not to have an actual execution path for some of the paths in the basic set?
Or, according to McCabe's complexity metric, do we have to change the code given above? A tutor of mine said we have to change the code; he also said that there are unstructured loops, so we have to change the code (I don't see an unstructured loop either).
But my feeling is that if we change the code, the actual output may differ from the expected output. So could someone please explain this?
1) McCabe's complexity can be calculated as the number of decision points + 1. In your case there are 5 decision points (nodes 3, 5, 6, 13 and 15) meaning that the McCabe complexity of the code fragment is 5+1 = 6. 6 is by no means too high in terms of McCabe complexity: one could, of course, still argue that it is too high given the functionality the implementation has to provide.
2) McCabe's complexity is related to testability of a method/procedure but not to testability of a specific path. Paths can be feasible (= there exist values of the variables that force the execution through this path) or not, but McCabe's complexity is happily unaware of such complications. If you really want to look into feasibility of paths keep in mind that the problem in general is undecidable but many practical data flow analysis algorithms are available.
3) "if we change the code actual output may differ to expected output": of course, you cannot introduce an arbitrary change and hope that the results will be the same. However, and this is probably what your tutor intended, there is a way of restructuring your code such that the output produced remains the same and the McCabe complexity goes down. Think, e.g., about whether you really need to separate the tasks of calculating the maximum and the sum.
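For illustration, one possible restructuring along those lines (a sketch with a small sample matrix assumed; not necessarily what the tutor had in mind) computes the maximum and the sum in a single pass, leaving three decision points instead of five and therefore a McCabe complexity of 4 instead of 6:
#include <iostream>
using namespace std;

int main()
{
    const int N = 3;
    int m[N][N] = { {1, 7, 3}, {4, 9, 6}, {2, 5, 8} };   // sample data, assumed for the sketch

    // One pass computes both results, so the duplicated loop nest disappears.
    int maxValue = m[0][0];
    int sum = 0;
    for (int i = 0; i < N; i++)
    {
        for (int j = 0; j < N; j++)
        {
            if (m[i][j] > maxValue)
            {
                maxValue = m[i][j];
            }
            sum = sum + m[i][j];
        }
    }
    cout << maxValue << endl;   // prints 9
    cout << sum << endl;        // prints 45
    return 0;
}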

Grouping R variables based on sub-groups

I have data formatted as
PERSON_A PERSON_B MEET LEAVE
That basically describes when PERSON_A met PERSON_B at time MEET and when they said "bye" to each other at moment LEAVE. The time is expressed in seconds, and a small part of the data is at http://pastie.org/2825794 (simple.dat).
What I need is to count the number of meetings, grouping them by day. At the moment I have code that works, but its appearance is not beautiful. Anyway, I'd like some help transforming it into code that reflects the grouping I'm trying to do, e.g. using ddply, etc. My main aim is to learn from this case; there are probably many mistakes in this code regarding good practices in R.
library(plyr)
data = read.table("simple.dat", stringsAsFactors=FALSE)
names(data)=c('PERSON_A','PERSON_B','MEET','LEAVE')
attach(data)
min_interval = min(MEET)
max_interval = max(LEAVE)
interval = max_interval - min_interval
day = 86400
number_of_days = floor(interval/day)
g = data.frame(MEETINGS=c(0:number_of_days)) # just to store the result
g[,1] = 0
start_offset = min_interval # start of the first day
for (interval in c(0:number_of_days)) {
    end_offset = start_offset + day
    meetings = (length(data[data$MEET >= start_offset & data$LEAVE <= end_offset, ]$PERSON_A) +
                length(data[data$MEET >= start_offset & data$LEAVE <= end_offset, ]$PERSON_B))
    g[interval+1, ] = meetings
    start_offset = end_offset # start next day
}
g
This code iterates over the days (intervals of 86400 seconds) and stores the number of meetings in the data frame g. The correct output of this code (shown below), when executed on the linked dataset, gives for each line (day) the number of meetings.
MEETINGS
1 38
2 10
3 16
4 18
5 24
6 6
7 4
8 10
9 28
10 14
11 22
12 2
13 .. 44 0 # I simplified the output here
45 2
Anyway, I know that I could use ddply to get the number of meetings for each pair of nodes:
contacts <- ddply(data, .(PERSON_A, PERSON_B), summarise
, CONTACTS = length(c(PERSON_A, PERSON_B)) /2
)
but there is a huge hill for me between this and the result I need.
As an end note, I read How to make a great R reproducible example? and tried my best :)
Thanks,
try this:
> d2 <- transform(data, m = floor(MEET/86400) + 1, l = floor(LEAVE/86400) + 1)
> d3 <- subset(d2, m == l)
> table(d3$m) * 2
1 2 3 4 5 6 7 8 9 10 11 12 45
38 10 16 18 24 6 4 10 28 14 22 2 2
floor(x/(60*60*24)) is a quick way to convert seconds into days.

Converting between numbering systems

I'm trying to understand the reason for a rule when converting.
I'm sure there must be a simple explanation, but I can't seem to wrap my head around it.
Appreciate any help!
Converting from base 10 to any other base is done like this:
number / desiredBase = quotient + remainder
You do this, feeding the quotient back in as the new number, until the quotient is 0.
But after all of the calculations, you have to take the remainders "upside down", i.e. read them in reverse order of how they were produced. I don't understand why.
For example, converting the base-10 number 11 to base 2:
11 / 2 = 5, remainder 1
5 / 2 = 2, remainder 1
2 / 2 = 1, remainder 0
1 / 2 = 0, remainder 1
Why is the correct answer 1011 and not 1101?
I know it's a little petty, but it would really help me remember better if I could understand this.
Think of the same thing in the decimal system, even if it doesn't make that much sense to actually do the math in this case :)
1234 / 10 = 123 | 4
123 / 10 = 12 | 3
12 / 10 = 1 | 2
1 / 10 = 0 | 1
Every time you divide, you strip off the least significant digit, so the first remainder is the least significant digit, the one on the right.
Because 11 =
1 * 2 ^ 3 + 0 * 2 ^ 2 + 1 * 2 ^ 1 + 1 * 2 ^ 0 (1011)
and not
1 * 2 ^ 3 + 1 * 2 ^ 2 + 0 * 2 ^ 1 + 1 * 2 ^ 0 (1101)
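To see the remainder order concretely, here is a small sketch (the function name toBase is my own invention) that mirrors the manual process: it collects the remainders as they are produced and reverses them at the end, precisely because the first remainder is the least significant digit.
#include <algorithm>
#include <iostream>
#include <string>

// Convert a non-negative base-10 number to the given base (2..10 here, to keep the
// digit-to-character step trivial). The remainders come out least significant first,
// so the collected string is reversed before returning.
std::string toBase(int number, int base) {
    if (number == 0) return "0";
    std::string digits;
    while (number > 0) {
        digits += static_cast<char>('0' + number % base);  // remainder = next digit
        number /= base;                                     // continue with the quotient
    }
    std::reverse(digits.begin(), digits.end());             // last remainder is most significant
    return digits;
}

int main() {
    std::cout << toBase(11, 2) << '\n';    // prints 1011
    std::cout << toBase(1729, 10) << '\n'; // prints 1729
    return 0;
}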

Resources