Help to understand Google Code Jam 2011 Candy Splitting problem - math

I'm participating in Google Code Jam. Before anything, I want to say that I don't want anyone to solve a problem for me "to win", or anything like that. I just want some help understanding a problem I couldn't solve in a round that has already FINISHED.
Here is the link to the problem, called Candy Splitting. I won't explain it here because it would be pointless; I won't be able to explain it better than Google does.
I would like to know some "good" solution to the problem. For example, I've downloaded the first English solution and seen that the code is only 30 lines! That's amazing! (Anyone can download it, so I think there is no problem with mentioning it: the solution by theycallhimtom, from here.) I can't understand the solution even after reading the code. (My ignorance of Java doesn't help.)
Thanks!

Google themselves provide discussions of the problems and their solutions.
See this link for the Candy Splitting problem: http://code.google.com/codejam/contest/dashboard?c=975485#s=a&a=2
Basically, the candies can be divided into two equal value piles (from Patrick's point of view) if
C[0] xor C[1] xor C[2] xor ... xor C[N] == 0.
One such split is all the candies except one in one pile, and that single candy in the other. To maximise the value of the larger pile, take the lowest-value candy and put it in a pile of its own.
Why is it so?
The way I thought about it is that, by definition, Patrick's addition is actually equal to XORing values. From the definition of the problem, we want
C[i] xor C[j] xor ... xor C[k] == C[x] xor C[y] xor ... xor C[z]
for some elements on each side.
XORing both sides with the RHS yields
C[i] xor C[j] xor ... xor C[k] xor C[x] xor C[y] xor ... xor C[z] == 0
Since xoring a value with itself gives 0, and the order of xor operations is not important, the RHS becomes 0.
Any of the elements on the LHS can be moved over to the right side and the equality still holds. Picking the lowest-value element makes the best split between the piles.
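For concreteness, here is a minimal OCaml sketch of the whole approach (the function name candy_split is mine, not from any contestant's solution):
let candy_split candies =
  (* If the xor of all values is non-zero, no fair split exists. *)
  if List.fold_left (lxor) 0 candies <> 0 then None
  else
    (* Otherwise give away the single smallest candy; the rest is the big pile. *)
    let sum = List.fold_left (+) 0 candies in
    let smallest = List.fold_left min max_int candies in
    Some (sum - smallest)
For example, candy_split [3; 5; 6] evaluates to Some 11 (3 lxor 5 lxor 6 = 0, so put the 3 alone and keep 5 + 6 = 11), while candy_split [1; 2] evaluates to None.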


asymmetric difference between two binary numbers (bitsets)

It is quick and easy to determine the shared/differing bits between two binary numbers with AND or XOR. Let's say we have A: 10011 and B: 11001; we can get the difference:
10011 XOR 11001 = 01010 (1s are different, 0s are the same.)
Are there any quick and easy logical or arithmetic operations that could produce similar but asymmetric output (1s showing bits that are, for example, present in A but missing in B, or vice versa)?
Example: 10011 ??? 11001 = 00010 (1s mean present in the left-hand operand, missing in the right)
Could it be done with some quick arithmetic/logic, or would I have to loop through the comparisons one by one?
I got to this question while contemplating storing some presence/absence data as bit flags in bytes (for memory efficiency) -- and I was already gleeful that, were I to do so, I could do quick and easy data-diffing operations; but for many applications the direction of the difference is also important.
The more canonical way to express this is A AND (NOT B), where NOT flips all bits.
tkausl's comment on the question answers it:
(A XOR B) AND A would really do the trick.
XOR generates the difference between A and B, and ANDing with A then masks the result to show only the bits present in A, i.e. the bits that are set in A but not in B.
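A quick OCaml sketch checking both formulas on the question's example values (purely illustrative):
let () =
  let a = 0b10011 and b = 0b11001 in
  let d1 = a land (lnot b) in     (* A AND (NOT B) *)
  let d2 = (a lxor b) land a in   (* (A XOR B) AND A *)
  Printf.printf "%d %d\n" d1 d2   (* both print 2, i.e. 00010 *)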

Finding all permutations for a given number of football games in OCaml

I have to write the function series : int -> int -> result list list, where the first int is the number of games and the second int is the number of points to earn.
I already thought about a brute-force solution that creates all permutations and filters the list, but I think that would be a very dirty solution in OCaml, with many lines of code, and I can't find another way to solve this problem.
The following type is given:
type result = Win (* 3 points *)
| Draw (* 1 point *)
| Loss (* 0 points *)
So if I call
series 3 4
the solution should be:
[[Win ;Draw ;Loss]; [Win ;Loss ;Draw]; [Draw ;Win ;Loss];
[Draw ;Loss ;Win]; [Loss ;Win ;Draw]; [Loss ;Draw ;Win]]
Maybe someone can give me a hint or a code example how to start.
Consider calls of the form series n (n / 2), and consider the cases where all the games were Draw or Loss. Under these restrictions the number of answers is proportional to 2^n / sqrt(n); this follows from Stirling's approximation applied to the binomial coefficient C(n, n/2).
This doesn't include any series where anybody wins a game, so the actual result lists will generally be longer than this.
I conclude that the number of possible answers is gigantic, and hence that your actual cases are going to be small.
If your actual cases are small, there might be no problem with using a brute-force approach.
Contrary to your claim, brute-force code is usually quite short and easy to understand.
You can easily write a function to list all possible sequences of length n drawn from Win, Draw, Loss. You can then filter them for the correct sum. Asymptotically this is probably only a little worse than the fastest possible algorithm, given the near-exponential growth described above.
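For what it's worth, a sketch of that brute-force idea (assuming OCaml 4.10+ for List.concat_map) is only a few lines:
type result = Win | Draw | Loss   (* the type from the question *)
let points = function Win -> 3 | Draw -> 1 | Loss -> 0
(* All 3^n sequences of n results. *)
let rec all_series n =
  if n = 0 then [ [] ]
  else List.concat_map (fun s -> [ Win :: s; Draw :: s; Loss :: s ])
         (all_series (n - 1))
(* Keep only the sequences whose points sum to p. *)
let series n p =
  List.filter
    (fun s -> List.fold_left (fun acc r -> acc + points r) 0 s = p)
    (all_series n)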
A simple recursive solution (sketched below) would go this way:
if there are 0 games to play and 0 points to earn, there is exactly one (empty) solution;
if there are 0 games to play and 1 or more points to earn, there is no solution;
otherwise, p points must be earned in g games: any solution for p points in g-1 games can be extended to a solution by adding a Loss in front of it. If p >= 1, you can similarly add a Draw to any solution for p-1 points in g-1 games, and if p >= 3, there may also be possibilities starting with a Win.
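A minimal sketch of that recursion, reusing the result type given in the question (the helper name extend is mine):
let rec series g p =
  match g, p with
  | 0, 0 -> [ [] ]   (* no games, no points left: one empty solution *)
  | 0, _ -> []       (* no games but points left: no solution *)
  | _ ->
      (* Decide the first game, then solve the rest recursively. *)
      let extend r cost =
        if p >= cost then List.map (fun s -> r :: s) (series (g - 1) (p - cost))
        else []
      in
      extend Win 3 @ extend Draw 1 @ extend Loss 0
With this, series 3 4 produces exactly the six sequences listed in the question.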

How to mimic logical XOR in ZX Spectrum basic?

Sometimes when coding in ZX Spectrum BASIC I need to evaluate logical expressions formed by two operands and a logical XOR, like this:
IF (left operand) xor (right operand) THEN
Since ZX BASIC only knows NOT, OR and AND, I have to resort to some sort of fancy calculation that uses the left/right operands multiple times. This is awkward, since it consumes time and memory, both scarce on an 8-bit machine. I wonder if there's a neat trick to mimic the XOR operator.
To test the outcome I provide a small code sample:
5 DEF FN x(a,b)=(a ??? b) : REM the xor formula, change here
10 FOR a=-1 TO 1 : REM left operand
20 FOR b=-1 TO 1 : REM right operand
30 LET r=FN x(a,b) : REM compute xor
40 PRINT "a:";a;" b:";b;" => ";r
50 NEXT b
60 NEXT a
Can you help me find a performant solution? So far I tried DEF FN x(a,b)=(a AND NOT b) OR (b AND NOT a) but it's somewhat clumsy.
Edit:
If you want to test your idea I suggest the BasinC v1.69 ZX emulator (Windows only).
As @Jeff pointed out, most BASICs, including the ZX Spectrum's, treat zero values as false and non-zero ones as true.
I have adapted the sample to test with a variety of non-zero values.
The logical xor is semantically equivalent to not equal.
IF (left operand) <> (right operand) THEN
should work.
Edit: In the case of integer operands you can use
IF ((left operand) <> 0) <> ((right operand) <> 0) THEN
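The equivalence is easy to sanity-check outside of BASIC; here is a throwaway OCaml sketch of the truth table (test pairs are my own):
let () =
  List.iter
    (fun (a, b) ->
      (* "(a <> 0) <> (b <> 0)" is the integer-operand trick from above. *)
      Printf.printf "a:%2d b:%2d => %b\n" a b ((a <> 0) <> (b <> 0)))
    [ (0, 0); (0, 1); (1, 0); (1, 1); (-1, 5) ]
It prints true exactly when one operand is zero and the other is not.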
DEF FN x(a,b)=((NOT a) <> (NOT b))
Using NOT as coercion to a boolean value.
EDIT: Previously each side had NOT NOT, which is unnecessary for establishing a difference between the two sides, as a single NOT still coerces!
EDIT 2 Added parens to sort out precedence issue.
Finding this question and its answers very interesting and fun, I would like to share the results of some performance tests (performed on an emulator):
Elapsed times are in seconds; lower is better.
The x1 test only checks that the expression meets the requirements and includes printing the results; the x256 tests repeat the same test 256 times without printing any output. The "without FN" tests are the same, but without factoring the expression into an FN statement.
I also share the code and test suite on GitHub (https://github.com/rondinif/XOR-in-ZX-Spectrum-basic) for the benefit of all retro-computing fanatics (like me), so we can share our opinions.
Keep in mind, the values are integers:
I think a mathematical operation could be fun: (A-B)*(A-B) should work.
It should be less time-consuming, since it relies on simple operations.
Or, with ABS: ABS (A-B)

Benefits of starting arrays at 0?

What's the purpose of array indices starting at 0 in most programming languages, in contrast to the ordinal way in which we refer to most things IRL (first, second, third, etc.)? What's the logic or utility behind that?
I'm completely used to it by now, but never stopped to think about the reason behind it.
Update: One benefit I read about while googling is that for loops can use i < n when you want to go up to n elements.
Dijkstra lays out the reasoning in Why numbering should start at zero.
When dealing with a sequence of length N, the elements of which we wish to distinguish by subscript, the next vexing question is what subscript value to assign to its starting element...
when starting with subscript 1, the subscript range is 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N. So let us let our ordinals start at zero: an element's ordinal (subscript) equals the number of elements preceding it in the sequence. And the moral of the story is that we had better regard —after all those centuries!— zero as a most natural number.
When we access an item by index, as in a[i], the compiler converts it to [a+i]. So the index of the first element is zero, because [a+0] gives us a, which points to the first item in the array. This is quite obvious for, say, C++, but not for more recent languages such as C#.
Dijkstra wrote a really interesting paper about this, in 1982: Why numbering should start at zero.
You may google for it; there have been a lot of discussions about it. I'd say that, since an index is really the element's offset from the beginning, having the first element at offset zero certainly makes sense.
Because Dijkstra said so.
In my old assembler days it was natural for the offset to start at zero.
dcl foo(9)    'declare an array of 9 elements
ldx0 0        'load offset 0 into index register 0
lda foo,x0    'get the first element
adx0 1,du     'bump the offset to get the 2nd
ldq foo,x0
When looking at it from the perspective of the hardware it makes more sense.

Big O Log problem solving

I have a question that comes from an algorithms book I'm reading, and I'm stumped on how to solve it (it's been a long time since I've done log or exponent math). The problem is as follows:
Suppose we are comparing implementations of insertion sort and merge sort on the same
machine. For inputs of size n, insertion sort runs in 8n^2 steps, while merge sort runs in 64n log n steps. For which values of n does insertion sort beat merge sort?
Log is base 2. I started out trying to solve for equality, but got stuck around n = 8 log n.
I would like the answer to discuss how to solve this mathematically (brute force with Excel is not admissible, sorry ;) ). Any links to a description of log math would also be very helpful for understanding your answer.
Thank you in advance!
http://www.wolframalpha.com/input/?i=solve%288+log%282%2Cn%29%3Dn%2Cn%29
(edited since old link stopped working)
Your best bet is to use Newton's method.
http://en.wikipedia.org/wiki/Newton%27s_method
One technique for solving this would be to simply grab a graphing calculator and graph both functions (see the Wolfram link in another answer). Find the intersection that interests you (in case there are multiple intersections, as there are in your example).
In any case, there isn't a simple expression to solve n = 8 log₂ n (as far as I know). It may be simpler to rephrase the question as: "Find a zero of f(n) = n - 8 log₂ n". First, find a region containing the intersection you're interested in, and keep shrinking that region. For instance, suppose you know your target n is greater than 42, but less than 44. f(42) is less than 0, and f(44) is greater than 0. Try f(43). It's less than 0, so try 43.5. It's still less than 0, so try 43.75. It's greater than 0, so try 43.625. It's greater than 0, so keep going down, and so on. This technique is called binary search.
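If it helps, here is that bisection written out as a tiny OCaml sketch (the bracket [42, 44] is taken from the reasoning above):
let () =
  (* f crosses zero where n = 8 * log2(n). *)
  let f n = n -. 8.0 *. (log n /. log 2.0) in
  let rec bisect lo hi =
    if hi -. lo < 1e-9 then lo
    else
      let mid = (lo +. hi) /. 2.0 in
      if f mid < 0.0 then bisect mid hi else bisect lo mid
  in
  Printf.printf "n ~ %.4f\n" (bisect 42.0 44.0)   (* prints roughly 43.56 *)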
Sorry, that's just a variation of "brute force with excel" :-)
Edit:
For the fun of it, I made a spreadsheet that solves this problem with binary search: binary-search.xls. The binary search logic is in the second data column, and I just auto-extended that.
