How to convert formula to CNF? [closed]

I know the 4 rules to convert formulas to CNF, but I'm not quite sure how to apply them to this formula:
((x ∨ y) ∧ ¬z) → w
Could someone give me a hand and a bit of an explanation? Thanks!

Recipe for converting an expression into CNF
One way to convert a formula to CNF is:
Write out the truth table (only the rows where the formula evaluates to false are relevant)
In every such row, invert all literals
Take the resulting terms as CNF clauses
This recipe can be made plausible as follows:
None of the clauses may be false unless the overall expression is false. Each clause therefore represents a minterm of the inverted expression; inverting the literals makes the clause false exactly on that row and true everywhere else.
For n input variables, a truth table has 2^n rows. Therefore, this method is only practical for small expressions.
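The recipe is mechanical enough to script. Here is a minimal sketch in Python (the function and variable names are my own, purely illustrative):
from itertools import product

def cnf_from_function(variables, func):
    """Build CNF clauses by inverting every row where func is false."""
    clauses = []
    for values in product([False, True], repeat=len(variables)):
        if not func(dict(zip(variables, values))):
            # one clause per false row, with every literal inverted
            clause = [name if not value else "!" + name
                      for name, value in zip(variables, values)]
            clauses.append("(" + " or ".join(clause) + ")")
    return " and ".join(clauses)

# ((x or y) and not z) -> w, encoded as (not antecedent) or consequent
formula = lambda v: (not ((v["x"] or v["y"]) and not v["z"])) or v["w"]
print(cnf_from_function(["w", "x", "y", "z"], formula))
# prints the same three clauses derived by hand below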
To demonstrate the steps with your sample formula
((x or y) and not z) implies w
Truth table (F = expression value):
wxyz|F
----+-
0000|1
1000|1
0100|0
1100|1
0010|0
1010|1
0110|0
1110|1
0001|1
1001|1
0101|1
1101|1
0011|1
1011|1
0111|1
1111|1
----+-
False terms (F = 0):
wxyz|F
----+-
0100|0
0010|0
0110|0
----+-
Literals inverted and output column omitted:
wxyz
----
1011
1101
1001
----
Resulting CNF clauses:
(w or !x or y or z) and (w or x or !y or z) and (w or !x or !y or z)
Resulting CNF clauses minimized by merging clauses:
(w or !x or z) and (w or !y or z)
If in doubt, you can ask Wolfram|Alpha.
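Or, if you have Python and sympy handy, you can double-check the result there as well (a quick sketch; as far as I know, to_cnf with simplify=True returns a minimized CNF):
from sympy import symbols, Implies, And, Or, Not
from sympy.logic.boolalg import to_cnf

w, x, y, z = symbols("w x y z")
expr = Implies(And(Or(x, y), Not(z)), w)
print(to_cnf(expr, simplify=True))   # expected: (w | z | ~x) & (w | z | ~y)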
From the Karnaugh map, it becomes clear how the three false terms can be merged into two clauses:
wx
00 01 11 10
+---+---+---+---+
00 | 1 | 0 | 1 | 1 |
+---+---+---+---+
01 | 1 | 1 | 1 | 1 |
yz +---+---+---+---+
11 | 1 | 1 | 1 | 1 |
+---+---+---+---+
10 | 0 | 0 | 1 | 1 |
+---+---+---+---+

Related

How to factor polynomial with other polynomials?

This is a question on the division algorithm. Consider the polynomial f = -4x^2y^2z^2 + y^6 + 3z^5 and the polynomials G = {y^6-z^5, x*z-y^2, x*y^4-z^4, x^2*y^2-z^3, x^3-z^2}.
How can you factor f with respect to G computationally, such that the linear combination f = \sum_i C_i*G_i is satisfied?
I know that the remainder is zero, but not what the coefficients C_i in the formula above are; an example with Macaulay2 would be appreciated.
This can be related to the more general mathematical question about ideals here.
This is a very late response. You probably already have the answer, but here it is anyway. "//" computes the coefficients using the division algorithm.
R=QQ[x,y,z,MonomialOrder=>Lex];
f=-4*x^2*y^2*z^2+y^6+3*z^5;
I=ideal(x*z-y^2,x^3-z^2);
G=gb(I);
f//(gens G)
o5 = {6} | 0 |
{2} | 3x2z2-xy2z-y4 |
{5} | 0 |
{4} | 0 |
{3} | -3z3 |
So
f=-4*x^2*y^2*z^2+y^6+3*z^5
=0*(y^6-z^5)+(3*x^2*z^2-x*y^2*z-y^4)(xz-y^2)+0*(xy^4-z^4)+0(x^2*y^2-z^3)+(-3*z^3)*(x^3-z^2).
Another tip is to copy and paste your code, so that others can copy and paste it. If you post an image then we have to type it out manually. If you put four spaces before each line then it will appear as code, as I have done here.
Maybe it's enough to just do a repeated polynomial division, something
like this (rough pseudocode; the division is done leading term by leading term):
order G lexicographically
total_rest = 0
coefficients = {g[0]: 0, g[1]: 0, ...}
p = f
while p != 0:
    divided = False
    for g in G:
        if lt(g) divides lt(p):        # lt: leading term
            q = lt(p) / lt(g)
            coefficients[g] += q
            p = p - q * g              # cancels the leading term of p
            divided = True
            break
    if not divided:
        total_rest += lt(p)            # no lt(g) divides lt(p); move it to the rest
        p = p - lt(p)
# Now it should hold that
# f = coefficients[g[0]]*g[0] + ... + total_rest
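If you prefer to stay in Python, sympy has (as far as I know) a reduced function that performs this multivariate division and returns both the coefficients and the remainder. A sketch under that assumption:
from sympy import symbols, reduced

x, y, z = symbols("x y z")
f = -4*x**2*y**2*z**2 + y**6 + 3*z**5
G = [y**6 - z**5, x*z - y**2, x*y**4 - z**4, x**2*y**2 - z**3, x**3 - z**2]

Q, r = reduced(f, G, x, y, z, order="lex")   # f == sum(q*g for q, g in zip(Q, G)) + r
print(Q)   # one coefficient per generator in G
print(r)   # remainder (should come out 0 here)
Note that the particular coefficients depend on the ordering of the generators and on the monomial order, so they need not match Macaulay2's output exactly.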

Efficient access a non-linearly filled 1D array using (x, y) co-ords and bit manipulation

I'm making a quad tree to hold values for a large square grid. Many regions of the grid are filled with the same value, so instead of holding that value in many nodes I hold one node that tells any lookup in that region that the value is the one the node holds. When lots of neighbouring cells are very different I put them in a single node that holds an array of those values, to cut out the overhead of having many middle-man nodes.
For simplicity I'll use byte-sized examples, but my grid is much larger. A 16x16 grid of 256 values might look like this:
[ root ] <----16x16
/ | | \
/ [1][1] [64] <-+--8x8
[ node ] <-+
/ | | \
/[16][16][16] <-+--4x4
[ node ] <-+
| | | \
[1][1][1] [4] <----2x2
These values change frequently during the course of my application so the arrays in leaf nodes have to get divided and concatenated a lot. I started out using a standard 2D array but realized that if I try to take out a quadrant of that array it'd be grabbing data from many places because I'm essentially asking for one half of one half of the 1D arrays in the 2D arrays. My solution was to nest quadrants inside larger quadrants so that dividing a 1D array into quarters would give you the values for the four nested quadrants.
I've arranged them top to bottom, left to right. These grids illustrate the allocation scheme at two scales; the same scheme repeats at every scale.
    0   1             0     1     2     3
0 | 0 | 1 |       0 |    0    |    1    |
1 | 2 | 3 |       1 |_________|_________|
                  2 |    2    |    3    |
                  3 |_________|_________|
Here is what it'd look like if you printed the index of the 1D array out onto a 2D grid.
0 1 2 3 4 5 6
0 | 0| 1| 4| 5|16|17|20|
1 | 2| 3| 6| 7|18|19|
2 | 8| 9|12|13|
3 |10|11|14|15|
4 |32|33|
5 |34|35|
6 |40| etc.
So of course now that I've got a solution to cutting up the grid I've just made it annoying to retrieve anything from it. Here is how I currently get the index of the 1D array from (x, y) co-ords.
uint index( uint x, uint y ){
    uint l = sqrt(array.length);
    uint index;
    uint chunk = array.length;
    while ( l > 1 ){
        l /= 2;
        chunk /= 2;
        if( y >= l ){
            y -= l;
            index += chunk;
        }
        chunk /= 2;
        if( x >= l ){
            x -= l;
            index += chunk;
        }
    }
    return index;
}
It works but it's painful... while thinking about it after I'd written it, it occurred to me that I was manipulating bits at a high level. So it should theoretically be possible to look at the bits of (x, y) directly to determine bits of the index for the array without doing as much work.
I've been trying to work out what I need to do in binary by looking at x, y, and index binary together, but I'm not having any luck deriving a method beyond "If x ends in 1, the index is odd".
N7 N6 N5 N4 N3 N2 N1 N0
x 5 |--|--|--|--|00|00|01|01|
y 1 |--|--|--|--|00|00|00|01|
Index 19 |00|00|00|00|00|01|00|11|
N7 N6 N5 N4 N3 N2 N1 N0
x 1 |--|--|--|--|00|00|00|01|
y 6 |--|--|--|--|00|00|01|10|
Index 41 |00|00|00|00|01|00|10|01|
I'm certain that the x y values can tell me which quadrant the index is in with x giving me east or west and y giving me north or south at any scale. I think I might need to make a bit mask or something, idk, I've never had to deal with bits directly outside of college, well, beyond bit-flags. So if someone can help me out with what I can do to get the index that'd be a huge help!
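What you are describing is usually called Morton (Z-order) indexing: interleave the bits of x and y, with the y bit taking the more significant slot of each pair, matching your loop, which tests y before x at every scale. A sketch in Python under those assumptions (the function name and bit width are mine):
def interleave_index(x, y, bits=4):               # bits=4 covers a 16x16 grid
    index = 0
    for i in range(bits):
        index |= ((x >> i) & 1) << (2 * i)        # x bit -> even position
        index |= ((y >> i) & 1) << (2 * i + 1)    # y bit -> odd (more significant) position
    return index

print(interleave_index(5, 0))   # 17, matching the printed grid
print(interleave_index(1, 6))   # 41
Each pass of the loop consumes one bit of x and one bit of y, which is exactly what your while loop does one halving at a time; in C-like languages the same interleaving can also be done branch-free with shift-and-mask tricks.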

What does the ^ (XOR) operator do? [duplicate]

What mathematical operation does XOR perform?
XOR is a binary operation; it stands for "exclusive or", that is to say the resulting bit evaluates to one if exactly one of the bits is set.
This is its function table:
a | b | a ^ b
--|---|------
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
This operation is performed between every pair of corresponding bits of the two numbers.
Example: 7 ^ 10
In binary: 0111 ^ 1010
0111
^ 1010
======
1101 = 13
Properties: The operation is commutative, associative and self-inverse.
It is also the same as addition modulo 2.
^ is the Python bitwise XOR operator. It is how you spell XOR in Python:
>>> 0 ^ 0
0
>>> 0 ^ 1
1
>>> 1 ^ 0
1
>>> 1 ^ 1
0
XOR stands for exclusive OR. It is used in cryptography because it lets you 'flip' the bits using a mask in a reversible operation:
>>> 10 ^ 5
15
>>> 15 ^ 5
10
where 5 is the mask; (input XOR mask) XOR mask gives you the input again.
A little more information on the XOR operation.
XOR-ing together an odd number of copies of a number gives the number itself.
XOR-ing together an even number of copies of a number gives 0.
Also, XOR with 0 always gives the number itself.
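A quick way to convince yourself of these properties (a small illustrative snippet, not from the original answer):
from functools import reduce
from operator import xor

n = 0b1010                      # 10
print(reduce(xor, [n] * 3))     # odd number of copies  -> 10
print(reduce(xor, [n] * 4))     # even number of copies -> 0
print(n ^ 0)                    # XOR with 0            -> 10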
One thing that other answers don't mention here is XOR with negative numbers -
a | b | a ^ b
----|-----|------
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
While you can easily see how XOR works from the truth table above, it doesn't tell you how XOR behaves on negative numbers.
How XOR works with negative numbers:
Since this question is also tagged as python, I will be answering it with that in mind. The XOR ( ^ ) operator is a bitwise operator that returns 1 where the bits differ and 0 where they are the same.
A negative number is stored in binary as two's complement. In two's complement, the leftmost bit indicates the sign of the value: 0 for non-negative numbers and 1 for negative ones.
In Python (conceptually), negative numbers are written with leading ones instead of leading zeros. So if you are using only 8 bits for your two's-complement numbers, you treat the patterns from 00000000 to 01111111 as the whole numbers from 0 to 127, and reserve 1xxxxxxx for writing negative numbers.
With that in mind, let's understand how XOR works on negative numbers with an example. Let's consider the expression (-5 ^ -3).
In 8-bit two's complement, -5 is 11111011 and -3 is 11111101; with a wider type (32-bit, 64-bit, etc.) the leading ones simply extend further. The 1 at the MSB (most significant bit) indicates that the number is negative. The XOR operation is done on all bits as usual.
XOR operation:
    -5 : 11111011
             ^
    -3 : 11111101
         ========
Result : 00000110 = 6

∴ -5 ^ -3 = 6
Since the MSB becomes 0 after the XOR operation, the resulting number is positive. Similarly, for all negative numbers, we take their two's-complement representation (the most commonly used one) and XOR the bits as usual.
The MSB of the result gives the sign; interpreting the full bit pattern as two's complement gives the value of the final result.
The following table could be useful in determining the sign of result.
a | b | a ^ b
------|-------|------
+ | + | +
+ | - | -
- | + | -
- | - | +
The basic rules of XOR remain the same for negative operands as well, but knowing how the operation really works on negative numbers could be useful to someone someday 🙂.
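If it helps, the sign table is easy to spot-check in Python (sample values are my own):
for a, b in [(5, 3), (5, -3), (-5, 3), (-5, -3)]:
    print(a, "^", b, "=", a ^ b)
# 5 ^ 3 = 6, 5 ^ -3 = -8, -5 ^ 3 = -8, -5 ^ -3 = 6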
Another application for XOR is in circuits. It is used to sum bits.
When you look at a truth table:
x | y | x^y
---|---|-----
0 | 0 | 0 // 0 plus 0 = 0
0 | 1 | 1 // 0 plus 1 = 1
1 | 0 | 1 // 1 plus 0 = 1
1 | 1 | 0 // 1 plus 1 = 0 ; binary math with 1 bit
You can see that the result of XOR is x added to y without keeping track of the carry bit; the carry bit is obtained from the AND of x and y.
x^y // is actually ~xy + ~yx
// Which is the (negated x ANDed with y) OR ( negated y ANDed with x ).
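To make the adder idea concrete, here is a tiny one-bit half adder in Python (my own illustrative snippet, not from the answer): XOR produces the sum bit and AND produces the carry bit.
def half_add(x, y):
    s = x ^ y          # sum bit (ignores the carry)
    carry = x & y      # carry bit
    return s, carry

print(half_add(1, 1))  # (0, 1): 1 + 1 = 10 in binary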
The (^) XOR operator generates 1 when it is applied on two different bits (0 and 1). It generates 0 when it is applied on two same bits (0 and 0 or 1 and 1).

What does bitwise XOR (exclusive OR) mean?

I'm trying to understand the binary operators in C# or in general, in particular ^ - exclusive or.
For example:
Given an array of positive integers. All numbers occur an even number of times except one number, which occurs an odd number of times. Find the number in O(n) time and constant space.
This can be done with ^ as follows: Do bitwise XOR of all the elements. Finally we get the number which has odd occurrences.
How does it work?
When I do:
int res = 2 ^ 3;
res = 1;
int res = 2 ^ 5;
res = 7;
int res = 2 ^ 10;
res = 8;
What's actually happening? What are the other bit magics? Any reference I can look up and learn more about them?
I know this is a rather old post, but I wanted to simplify the answer since I stumbled upon it while looking for something else.
XOR (eXclusive OR / either-or) can be translated simply as toggle on/off.
It will either exclude (if present) or include (if absent) the specified bits.
Using 4 bits (1111) we get 16 possible results from 0-15:
decimal | binary | bits (expanded)
0 | 0000 | 0
1 | 0001 | 1
2 | 0010 | 2
3 | 0011 | (1+2)
4 | 0100 | 4
5 | 0101 | (1+4)
6 | 0110 | (2+4)
7 | 0111 | (1+2+4)
8 | 1000 | 8
9 | 1001 | (1+8)
10 | 1010 | (2+8)
11 | 1011 | (1+2+8)
12 | 1100 | (4+8)
13 | 1101 | (1+4+8)
14 | 1110 | (2+4+8)
15 | 1111 | (1+2+4+8)
The decimal value to the left of the binary value is the numeric value used in XOR and other bitwise operations; it represents the total value of the associated bits. See Computer Number Format and Binary Number - Decimal for more details.
For example: 0011 has bits 1 and 2 on, leaving bits 4 and 8 off. It is represented by the decimal value 3 and displayed in expanded form as 1+2.
As for what's going on with the logic behind XOR, here are some examples.
From the original post
2^3 = 1
2 is a member of 1+2 (3) remove 2 = 1
2^5 = 7
2 is not a member of 1+4 (5) add 2 = 1+2+4 (7)
2^10 = 8
2 is a member of 2+8 (10) remove 2 = 8
Further examples
1^3 = 2
1 is a member of 1+2 (3) remove 1 = 2
4^5 = 1
4 is a member of 1+4 (5) remove 4 = 1
4^4 = 0
4 is a member of itself remove 4 = 0
1^2^3 = 0 Logic: ((1^2)^(1+2))
(1^2) 1 is not a member of 2 add 2 = 1+2 (3)
(3^3) 1 and 2 are members of 1+2 (3) remove 1+2 (3) = 0
1^1^0^1 = 1 Logic: (((1^1)^0)^1)
(1^1) 1 is a member of 1 remove 1 = 0
(0^0) 0 is a member of 0 remove 0 = 0
(0^1) 0 is not a member of 1 add 1 = 1
1^8^4 = 13 Logic: ((1^8)^4)
(1^8) 1 is not a member of 8 add 1 = 1+8 (9)
(9^4) 1 and 8 are not members of 4 add 1+8 = 1+4+8 (13)
4^13^10 = 3 Logic: ((4^(1+4+8))^(2+8))
(4^13) 4 is a member of 1+4+8 (13) remove 4 = 1+8 (9)
(9^10) 8 is a member of 2+8 (10) remove 8 = 2
1 is not a member of 2+8 (10) add 1 = 1+2 (3)
4^10^13 = 3 Logic: ((4^(2+8))^(1+4+8))
(4^10) 4 is not a member of 2+8 (10) add 4 = 2+4+8 (14)
(14^13) 4 and 8 are members of 1+4+8 (13) remove 4+8 = 1
2 is not a member of 1+4+8 (13) add 2 = 1+2 (3)
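Another way to see the "add/remove members" reading above: XOR is the symmetric difference of the sets of bit values, and Python's set type happens to use ^ for symmetric difference as well. A small sketch (the helper name is mine):
def bit_set(n):
    """Set of powers of two present in n, e.g. 13 -> {1, 4, 8}."""
    return {1 << i for i in range(n.bit_length()) if (n >> i) & 1}

print(bit_set(2) ^ bit_set(10))       # {8}       -> 2 ^ 10 == 8
print(sum(bit_set(4) ^ bit_set(13)))  # 1 + 8 = 9 -> 4 ^ 13 == 9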
To see how it works, first you need to write both operands in binary, because bitwise operations work on individual bits.
Then you can apply the truth table for your particular operator. It acts on each pair of bits having the same position in the two operands (the same place value). So the leftmost bit (MSB) of A is combined with the MSB of B to produce the MSB of the result.
Example: 2^10:
0010 2
XOR 1010 8 + 2
----
1 xor(0, 1)
0 xor(0, 0)
0 xor(1, 1)
0 xor(0, 0)
----
= 1000 8
And the result is 8.
The other way to show this is to use the algebra of XOR; you do not need to know anything about individual bits.
For any numbers x, y, z:
XOR is commutative: x ^ y == y ^ x
XOR is associative: x ^ (y ^ z) == (x ^ y) ^ z
The identity is 0: x ^ 0 == x
Every element is its own inverse: x ^ x == 0
Given this, it is easy to prove the result stated. Consider a sequence:
a ^ b ^ c ^ d ...
Since XOR is commutative and associative, the order does not matter. So sort the elements.
Now any adjacent identical elements x ^ x can be replaced with 0 (self-inverse property). And any 0 can be removed (because it is the identity).
Repeat as long as possible. Any number that appears an even number of times has an integral number of pairs, so they all become 0 and disappear.
Eventually you are left with just one element, which is the one appearing an odd number of times. Every time it appears twice, those two disappear. Eventually you are left with one occurrence.
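Put together, this is the classic one-pass trick for the "every element appears an even number of times except one" problem. A minimal sketch (the sample data is mine):
from functools import reduce
from operator import xor

values = [2, 3, 7, 3, 2]          # 7 appears an odd number of times
print(reduce(xor, values, 0))     # 7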
[update]
Note that this proof only requires certain assumptions about the operation. Specifically, suppose a set S with an operator . has the following properties:
Associativity: x . (y . z) = (x . y) . z for any x, y, and z in S.
Identity: There exists a single element e such that e . x = x . e = x for all x in S.
Closure: For any x and y in S, x . y is also in S.
Self-inverse: For any x in S, x . x = e
As it turns out, we need not assume commutativity; we can prove it:
(x . y) . (x . y) = e (by self-inverse)
x . (y . x) . y = e (by associativity)
x . x . (y . x) . y . y = x . e . y (multiply both sides by x on the left and y on the right)
y . x = x . y (because x . x = y . y = e and the e's go away)
Now, I said that "you do not need to know anything about individual bits". I was thinking that any group satisfying these properties would be enough, and that such a group need not necessarily be isomorphic to the integers under XOR.
But #Steve Jessup proved me wrong in the comments. If you define scalar multiplication by {0,1} as:
0 * x = 0
1 * x = x
...then this structure satisfies all of the axioms of a vector space over the integers mod 2.
Thus any such structure is isomorphic to a set of vectors of bits under component-wise XOR.
This is based on the simple facts that XOR of a number with itself results in zero,
and XOR of a number with 0 results in the number itself.
So, if we have an array = {5,8,12,5,12}.
5 occurs 2 times.
8 occurs 1 time.
12 occurs 2 times.
We have to find the number occurring odd number of times. Clearly, 8 is the number.
We start with res=0 and XOR with all the elements of the array.
int res = 0;
for (int i : array)
    res = res ^ i;
1st Iteration: res = 0^5 = 5
2nd Iteration: res = 5^8
3rd Iteration: res = 5^8^12
4th Iteration: res = 5^8^12^5 = 0^8^12 = 8^12
5th Iteration: res = 8^12^12 = 8^0 = 8
The bitwise operators treat the bits inside an integer value as a tiny array of bits. Each of those bits is like a tiny bool value. When you use the bitwise exclusive or operator, one interpretation of what the operator does is:
for each bit in the first value, toggle the bit if the corresponding bit in the second value is set
The net effect is that a single bit starts out false and if the total number of "toggles" is even, it will still be false at the end. If the total number of "toggles" is odd, it will be true at the end.
Just think "tiny array of boolean values" and it will start to make sense.
The definition of the XOR (exclusive OR) operator, over bits, is that:
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
One of the ways to imagine it, is to say that the "1" on the right side changes the bit from the left side, and 0 on the right side doesn't change the bit on the left side. However, XOR is commutative, so the same is true if the sides are reversed.
As any number can be represented in binary form, any two numbers can be XOR-ed together.
To prove it being commutative, you can simply look at its definition, and see that for every combination of bits on either side, the result is the same if the sides are changed. To prove it being associative, you can simply run through all possible combinations of having 3 bits being XOR-ed to each other, and the result will stay the same no matter what the order is.
Now, as we proved the above, let's see what happens if we XOR the same number at itself. Since the operation works on individual bits, we can test it on just two numbers: 0 and 1.
0 XOR 0 = 0
1 XOR 1 = 0
So, if you XOR a number onto itself, you always get 0 (believe it or not, but that property of XOR has been used by compilers, when a 0 needs to be loaded into a CPU register. It's faster to perform a bit operation than to explicitly push 0 into a register. The compiler will just produce assembly code to XOR a register onto itself).
Now, since X XOR X is 0 and XOR is associative, suppose you need to find the number that hasn't been repeated in a sequence where every other number has been repeated twice (or any other even number of times). If we group the repeating numbers together, they XOR to 0. Anything that is XOR-ed with 0 remains itself. So, XOR-ing the whole sequence leaves you with the number that doesn't repeat (or, more generally, the one that appears an odd number of times).
This has a lot of samples of various functionality done by bit fiddling. Some of them can be quite complex, so beware.
What you need to do to understand the bit operations is, at least, this:
the input data, in binary form
a truth table that tells you how to "mix" the inputs to form the result
For XOR, the truth table is simple:
1^1 = 0
1^0 = 1
0^1 = 1
0^0 = 0
To obtain bit n in the result you apply the rule to bits n in the first and second inputs.
If you try to calculate 1^1^0^1 or any other combination, you will discover that the result is 1 if there is an odd number of 1's and 0 otherwise. You will also discover that any number XOR'ed with itself is 0 and that it doesn't matter in what order you do the calculations, e.g. 1^1^(0^1) = 1^(1^0)^1.
This means that when you XOR all the numbers in your list, the ones which are duplicates (or present an even number of times) will XOR to 0 and you will be left with just the one which is present an odd number of times.
As is obvious from the name (bitwise), it operates on bits.
Let's see how it works.
For example, take two numbers a=3 and b=4;
the binary representation of 3 is 011 and of 4 is 100. XOR of equal bits is 0, and of opposite bits it is 1.
So in the given example, 3^4 (where "^" is the XOR symbol) gives us 111, whose decimal value is 7.
For another example: suppose you are given an array in which every element occurs twice except one element, and you have to find that element.
How can you do that? XOR of identical numbers is always 0, so the paired numbers cancel out and the number which occurs exactly once is your output, because XOR-ing any number with 0 gives back that same number.

How to analyze this pseudocode

I have the following pseudocode:
sum <- 0
inc <- 0
for i from 1-n
    for j from 1 to i
        sum <- sum + inc
        inc <- inc + 1
I am asked to find a closed formula. The hint is to use common summations. No matter how I look at it I cannot write this code in summation form. Can someone give me an idea of what the summations would look like or even a recursive formula?
Assuming the for i from 1-n means:
for i from 1 to n
A closed formula for that can be obtained through some numerical analysis. Let's examine the number of times through the loop for a couple of values of n (5 and 6).
The outer loop is always n times and the inner loop is whatever i is for each iteration, so for values of n, here are the iteration counts:
n count
= ===========================================
1 (1) = 1
2 (1),(12) = 3
3 (1),(12),(123) = 6
4 (1),(12),(123),(1234) = 10
5 (1),(12),(123),(1234),(12345) = 15
6 (1),(12),(123),(1234),(12345),(123456) = 21
The closed formula for these is best illustrated as follows:
n = 5: 5 + 4 + 3 + 2 + 1
| | | | |
| | V | |
| | 3 | | Formula is: (n+1)*((n-1)/2) + ((n+1)/2)
| +-> 6 <-+ | [outer pair sets] + [inner value]
+-----> 6 <-----+
--
15
This is a formula for all odd values of n. For even values, a similar method can be used:
n = 6: 6 + 5 + 4 + 3 + 2 + 1
| | | | | |
| | +-> 7 <-+ | | Formula is: (n+1)*(n/2)
| +-----> 7 <-----+ | [outer pair sets]
+---------> 7 <---------+
--
21
This tells you the number of iterations of the nested loop for each value of n (we'll call this x).
The calculation of the final value of sum is very similar. On the first iteration you add zero. On the second iteration, you add one. On the third iteration, you add two. That's pretty much exactly the same thing you had to do to figure out the number of iterations, only now it's based on x rather than n and it's 0+1+2+... rather than 1+2+3+..., meaning we can use exactly the same formula just by applying it to x-1 rather than x.
So we can use:
if n is odd:
x <- (n+1) * ((n-1)/2) + ((n+1)/2)
else:
x <- (n+1) * (n/2)
x <- x - 1
if x is odd:
sum <- (x+1) * ((x-1)/2) + ((x+1)/2)
else:
sum <- (x+1) * (x/2)
Checking this against the algorithm for the first few values of n:
n algorithm formula
- --------- -------
0 0 0
1 0 0
2 3 3
3 15 15
4 45 45
5 105 105
So, a perfect match, at least within the sample space chosen. You could actually go further and turn that into a single formula based on n alone rather than working out an intermediate value, but I'll leave that as an exercise for the reader.
Hint: A C formula which works for both odd and even numbers is:
x <- ((n+1) * ((n-(n%2))/2)) + ((n%2) * ((n+1)/2))
(though still not tested for negative values of n - you should put a check for that before you use the formulaic version).
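If you want to sanity-check the formulas beyond the table above, here is a small sketch comparing the literal pseudocode with the piecewise closed form (the helper names are mine):
def brute(n):
    total = inc = 0
    for i in range(1, n + 1):          # for i from 1 to n
        for _ in range(1, i + 1):      # for j from 1 to i
            total += inc
            inc += 1
    return total

def closed(n):
    # number of iterations of the inner body
    if n % 2:
        x = (n + 1) * ((n - 1) // 2) + (n + 1) // 2
    else:
        x = (n + 1) * (n // 2)
    x -= 1
    # final value of sum
    if x % 2:
        return (x + 1) * ((x - 1) // 2) + (x + 1) // 2
    else:
        return (x + 1) * (x // 2)

for n in range(10):
    assert brute(n) == closed(n)
print([closed(n) for n in range(6)])   # [0, 0, 3, 15, 45, 105]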
Innermost loop (let's just call i a fixed number):
inc is incremented i times.
sum has inc added to it i times. (i*(i-1)/2, right?)
If we assume that inc and sum start at value 0, then that's valid. If we assume that they start at some different values, let's call them k and l, then we know that inc will end up at value k+i. We know that sum will end up at l + k*i + i*(i-1)/2.
Now, i itself is going from 1 to n. Um... hum. Let me think about this a bit more.
