Why does XOR have the first letter X? - logical-operators

I have started digging into the bit operations, and I have a simple question. I have worked through operations such as OR and XOR, but one thing is still unclear to me: why does XOR have the first letter X?

It is easier to remember that way, since XOR sounds more like eXclusive OR.
http://en.wikipedia.org/wiki/Exclusive_or

The XOR truth table for two variables is nothing but "either ... or".
In English, the construct "either ... or" is usually used to indicate exclusive or, while "or" is generally used for inclusive or. So XOR is eXclusive OR.
Source: Wikipedia
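
As a small illustration of that difference (my own sketch, not part of the original answer), a few lines of C print both truth tables using the | and ^ operators:

#include <stdio.h>

/* Print the two-variable truth tables for inclusive OR and exclusive OR. */
int main(void) {
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            printf("a=%d b=%d  a|b=%d  a^b=%d\n", a, b, a | b, a ^ b);
        }
    }
    return 0;
}

Only the a=1, b=1 row differs: inclusive OR gives 1, exclusive OR gives 0, which is exactly the "either ... or" reading.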

Related

What is a minimum set of math operators that can generate all calculations?

I am thinking about a minimal instruction set CPU, say ten or fewer opcodes. I want to find the smallest set of math opcodes that can still perform any general-purpose function.
For example, in logic the operators AND, OR, NOT are redundant: any one of them can be expressed using the other two, so a minimum set of logical operators needs only two.
For arithmetic functions, could I get by with two opcodes, perhaps ADD and bit-wise invert?
(assume a carry bit and a jump-on-carry instruction).
With ADD and BITINV I can do subtraction without an explicit SUB operator. Multiplication and division are easy extensions from addition and subtraction. SHL is multiply by two, and SHR is divide by two.
To cover logic and arithmetic it looks like AND, BITINV, ADD are a complete minimum set. Did I miss anything?
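
A minimal C sketch of the subtraction claim in the question (my own illustration, assuming 32-bit wraparound arithmetic):

#include <stdint.h>
#include <stdio.h>

/* Subtraction built only from bitwise invert and add: a - b == a + ~b + 1 (mod 2^32). */
uint32_t sub_via_add_inv(uint32_t a, uint32_t b) {
    return a + ~b + 1u;
}

int main(void) {
    printf("%u\n", (unsigned)sub_via_add_inv(10, 3));  /* prints 7 */
    return 0;
}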
You probably also need to include control instructions like conditionals and jumps to make your instruction set Turing complete (or maybe your instruction set should be able to construct all computable functions).
For example, in your question you say:
"Multiplication and division are easy extensions from addition and subtraction."
This is not true without the notion of loops and conditionals (consider, e.g., Euclidean division by repeated subtraction). You have to have loops somewhere to implement division.
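
A small sketch of that point (my code, not the answer's): even the simplest division, by repeated subtraction, already needs a loop and a conditional.

#include <stdint.h>
#include <stdio.h>

/* Unsigned division by repeated subtraction; assumes divisor != 0.
   The loop and the comparison are exactly the control flow the answer says you cannot avoid. */
uint32_t div_by_subtraction(uint32_t dividend, uint32_t divisor) {
    uint32_t quotient = 0;
    while (dividend >= divisor) {   /* conditional branch */
        dividend -= divisor;        /* expressible with ADD and invert */
        quotient++;
    }
    return quotient;                /* the remainder is left in dividend */
}

int main(void) {
    printf("%u\n", (unsigned)div_by_subtraction(17, 5));  /* prints 3 */
    return 0;
}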

Two's Complement -- How are negative numbers handled?

It is my understanding that numbers are negated using the two's complement, which as I understand it is: ~num + 1.
So my question is: does this mean that, for a variable 'foo' = 1, the negated 'foo' will be exactly the same as a variable 'bar' = 255?
If we were to check whether -'foo' == 'bar' or whether -'foo' == 255, would we get that they are equal?
I know that some languages, such as Java, keep a sign bit -- so the comparisons would yield false. What of languages that do not? And I'm assuming that assembler/native machine code does not have a sign bit.
In addition to all of this, I read about a zero flag or a carry-over flag that is set when a 'negative' number is added to another number (of any sign). The flag gets set in such additions because of the way two's complement works: 0x01 + 0xff = 0x00 (with the leading 1 truncated). What exactly is this flag used for?
And my last question: for other math operations (such as multiplication), would I have to re-negate the number (so it is positive again), perform the operation, and then negate the result? E.g., ~((~neg + 1) * pos) + 1.
Edit
Finished the question, so feel free to fire away.
Yes, in two’s complement, the negation of a number x is represented as ~x+1, where ~x is the bitwise complement of the binary numeral for x in some fixed number of bits. E.g., for eight bits, the binary numeral for 1 is 00000001, so the bitwise complement is 11111110, and adding one produces 11111111.
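
A short C sketch of that paragraph (my illustration, using uint8_t/int8_t as stand-ins for the eight-bit values):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 1;
    uint8_t neg = (uint8_t)(~x + 1);   /* bitwise complement, then add one */
    printf("%u\n", (unsigned)neg);     /* prints 255, i.e. the bits 11111111 */
    printf("%d\n", (int)(int8_t)neg);  /* the same bits read as two's complement: prints -1 */
    return 0;
}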
There is no way to distinguish -1 in eight-bit two’s complement from 255 in eight-bit binary (with no sign). They both have the same representation in bits: 11111111. If you are using both of these numbers, you must either separately remember which one is eight-bit two’s complement and which one is plain eight-bit binary or you must use more than eight bits. In other words, at the raw bit level, 11111111 is just eight bits; it has no value until we decide how to interpret it.
Java and typical other languages do not maintain a sign bit separate from the value of a number; the sign is part of the encoding of the number. Also, typical languages do not allow you to compare different types. If you have a two’s complement x and an unsigned y, then either one must be converted to the type of the other before comparison or they must both be converted to a third type. Thus, if you compare x and y, and one is converted to the type of the other, then the conversion will overflow or wrap, and you cannot expect to get the correct mathematical result. To compare these two numbers, we might convert each of them to a wider integer type, such as 32 bits, and then compare. Converting the eight-bit two’s complement 11111111 to a 32-bit integer produces -1, and converting the eight-bit plain binary 11111111 to a 32-bit integer produces 255, and then the comparison reports they are unequal.
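
The widening comparison described above, sketched in C (my code; the fixed-width types are an assumption, not part of the answer):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t  x = -1;   /* eight-bit two's complement, bits 11111111 */
    uint8_t y = 255;  /* eight-bit unsigned,         bits 11111111 */

    int32_t wx = x;   /* sign-extends to -1  */
    int32_t wy = y;   /* zero-extends to 255 */

    printf("%s\n", (wx == wy) ? "equal" : "not equal");  /* prints "not equal" */
    return 0;
}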
The zero flag and the carry flag you read about are flags that are set when a comparison or arithmetic instruction is executed in a computer processor. Most high-level languages do not give you direct access to these flags. Many processors have an instruction with a form like this:
cmp a, b
That instruction subtracts b from a and discards the difference but remembers several flags that describe the subtraction: Was the result zero (zero flag)? Did a borrow occur (borrow flag)? Was the result negative (sign flag)? Did an overflow occur (overflow flag)?
The compare instruction requires that the two things being compared be the same type (two’s complement or unsigned), but it does not care which type. The results can be tested later by checking particular combinations of the flags depending on the type. That is, the information recorded in the flags can distinguish whether one two’s complement number was greater than another or whether one unsigned number was greater than another, depending on what tests are made. There are conditional branch instructions that test the desired flag properties.
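
To make the "depending on what tests are made" point concrete, here is a small C sketch of my own: the same two bit patterns compare differently under unsigned and signed interpretation, which on real hardware corresponds to testing different flag combinations after one cmp.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t a = 0x01, b = 0xFF;

    /* Unsigned reading: 1 vs 255. */
    printf("unsigned: %s\n", (a > b) ? "a > b" : "a <= b");                  /* a <= b */

    /* Signed (two's complement) reading of the same bits: 1 vs -1. */
    printf("signed:   %s\n", ((int8_t)a > (int8_t)b) ? "a > b" : "a <= b");  /* a > b  */
    return 0;
}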
There is generally no need to “un-negate” a number to perform arithmetic operations. Processors include arithmetic instructions that work on two’s complement numbers. Usually the add and subtract instructions are type-agnostic, the same way the compare instruction is, but the multiply and divide instructions are not (except for some forms of multiply that return partial results). The add and subtract instructions can be type-agnostic because the wrapping that occurs in the arithmetic works for both two’s complement and unsigned. However, that wrapping does not work for multiplication and division.
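
A C sketch of that last distinction (mine, not the answer's): eight-bit addition wraps identically for both interpretations, but multiplication of the widened values does not.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Unsigned 255 + 2 and signed -1 + 2 both produce the bit pattern 00000001. */
    uint8_t sum = (uint8_t)(0xFFu + 2u);
    printf("%u %d\n", (unsigned)sum, (int)(int8_t)sum);   /* prints "1 1" */

    /* 255 * 2 = 510 but -1 * 2 = -2, which are different bit patterns,
       so processors need separate unsigned and signed multiply instructions. */
    printf("%d %d\n", 255 * 2, -1 * 2);                   /* prints "510 -2" */
    return 0;
}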

ternary operators for calculus class

I was wondering about the use of ternary operators outside of programming. For example, in those pesky calculus classes that are required for a CS degree. Could a person describe something like a hyperbolic function with a ternary operator like this:
1/x ? 1/x : infinity;
This assumes that x is a positive float and should say that if x != 0 then the function returns 1/x, otherwise it returns infinity. Would this circumvent the whole need for limits?
I'm not entirely certain as to the specific question, but yes, a ternary can answer any question posed as 'if/else' or 'if and only if, else'. Traditionally, however, math is not written in a conditional format with any real flow control. 'if' and other flow control mechanisms let code execute in different ways, but with most math, the flow is the same; just the results differ.
Mathematically, any operator can be equivalently described as a function, as in a + b = add(a,b); note that this is true for programming as well. In either case, binary operators are a common way to describe functions of two arguments because they are easy to read that way.
Ternary operators are more difficult to read, and they are correspondingly less common. But, since mathematical typography is not limited to a one-dimensional text string, many mathematical operators have large arity -- for instance, a definite integral arguably has 4 arguments (start, end, integrand, and differential).
To answer your second question: no, this does not circumvent the need for limits; you could just as easily say that the alternative was 42 instead of infinity.
I will also mention that your 1/x example doesn't really match the programming usage of the ?: ternary operator anyway. Note that 1/x is not a boolean; it looks like you're trying to use ?: to handle an exception-like condition, which would be better suited to a try/catch form.
Also, when you say "This assumes that x is a positive float", how is a reader supposed to know this? You may recall that there is mathematical notation that solves this specific problem by indicating limits from above....
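
For what it's worth, here is how the question's expression might look once the condition tests x itself, as suggested above (my reading of the intent, not the asker's code):

#include <math.h>
#include <stdio.h>

/* Return 1/x, falling back to infinity when x == 0. */
double recip_or_inf(double x) {
    return (x != 0.0) ? 1.0 / x : INFINITY;
}

int main(void) {
    printf("%f %f\n", recip_or_inf(4.0), recip_or_inf(0.0));  /* prints "0.250000 inf" */
    return 0;
}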

Help to understand Google Code Jam 2011 Candy Splitting problem

I'm participating in Google Code Jam. Before anything I want to say that I don't want anyone to solve a problem for me "to win", or anything like that. I just want some help to understand a problem I couldn't solve in a round that has already FINISHED.
Here is the link to the problem, called Candy Splitting. I won't explain it here because there is no sense in doing so; I won't be able to explain it better than Google does.
I would like to know some "good" solution to the problem. For example, I've downloaded the first English solution and I've seen that the code has only 30 lines!!! That's amazing! (Anyone can download it, so I think there is no problem with saying it: the solution of theycallhimtom from here.) I can't understand the solution even looking at the code. (My ignorance of Java doesn't help.)
Thanks!
Google themselves provide discussions about the problems and their solutions.
See this link for the Candy Splitting problem: http://code.google.com/codejam/contest/dashboard?c=975485#s=a&a=2
Basically, the candies can be divided into two equal value piles (from Patrick's point of view) if
C[0] xor C[1] xor C[2] xor ... xor C[N] == 0.
One such split is the sum of all candy values except one. To maximise the value of one pile, take the lowest-value candy and put it in a pile of its own.
Why is it so?
The way I thought about it is that, by definition, Patrick's addition is actually equal to XORing the values. From the definition of the problem, we want
C[i] xor C[j] xor ... xor C[k] == C[x] xor C[y] xor ... xor C[z]
for some elements on each side.
Adding the RHS to both the LHS and RHS yields
C[i] xor C[j] xor ... xor C[k] xor C[x] xor C[y] xor ... xor C[z] == 0
Since xoring a value with itself gives 0, and the order of xor operations is not important, the RHS becomes 0.
Any of the elements on the LHS can be moved over to the right side and the equality still holds. Picking the lowest-value element makes the best split between the piles.
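
A short C sketch of that reasoning (my own code, not the 30-line Java solution the question mentions):

#include <stdio.h>

/* If the XOR of all candy values is 0, the best achievable value of one pile is
   the total sum minus the smallest candy; otherwise no fair split exists. */
long long candy_split(const int *c, int n) {
    int xor_all = 0, min = c[0];
    long long sum = 0;
    for (int i = 0; i < n; i++) {
        xor_all ^= c[i];
        sum += c[i];
        if (c[i] < min) min = c[i];
    }
    return (xor_all == 0) ? sum - min : -1;    /* -1 meaning "NO" */
}

int main(void) {
    int candies[] = {3, 5, 6};                 /* 3 ^ 5 ^ 6 == 0 */
    printf("%lld\n", candy_split(candies, 3)); /* prints 11 (the 5 + 6 pile) */
    return 0;
}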

keyless ciphers of ROT13/47 ilk

Do you know of any other ciphers that perform like the ROT47 family?
My major requirement is that it'd be keyless.
Sounds like you might be looking for some "classical cryptography" solutions.
SUBSTITUTION CIPHERS are encodings where one character is substituted with another. E.g. A->Y, B->Q, C->P, and so on. The "Caesar Cipher" is a special case where the order is preserved, and the "key" is the offset. In the rot13/47 case, the "key" is 13 or 47, respectively, though it could be something like 3 (A->D, B->E, C->F, ...).
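A tiny C sketch of a Caesar-style substitution (my own illustration); with an offset of 13 it reproduces ROT13 for letters:

#include <stdio.h>

/* Rotate a letter by a fixed offset, leaving other characters untouched. */
char rotate(char c, int offset) {
    if (c >= 'a' && c <= 'z') return (char)('a' + (c - 'a' + offset) % 26);
    if (c >= 'A' && c <= 'Z') return (char)('A' + (c - 'A' + offset) % 26);
    return c;
}

int main(void) {
    const char *msg = "CRYPTOGRAPHY";
    for (const char *p = msg; *p; p++) putchar(rotate(*p, 13));
    putchar('\n');   /* prints PELCGBTENCUL */
    return 0;
}
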
TRANSPOSITION CIPHERS are ones that don't substitute letters, but instead rearrange letters in a pre-defined way. For example:
CRYPTOGRAPHY
may be written as
C Y T G A H
R P O R P Y
So the ciphered output is created by reading the two lines left to right
CYTGAHRPORPY
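
Here is that two-row transposition as a small C sketch (mine, matching the example above): even-indexed letters form the first row and odd-indexed letters the second.

#include <stdio.h>
#include <string.h>

/* Two-row "rail fence" transposition. out must have room for strlen(in) + 1 bytes. */
void rail_fence2(const char *in, char *out) {
    size_t n = strlen(in), k = 0;
    for (size_t i = 0; i < n; i += 2) out[k++] = in[i];   /* first row  */
    for (size_t i = 1; i < n; i += 2) out[k++] = in[i];   /* second row */
    out[k] = '\0';
}

int main(void) {
    char out[64];
    rail_fence2("CRYPTOGRAPHY", out);
    printf("%s\n", out);   /* prints CYTGAHRPORPY */
    return 0;
}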
Another property of rot13/47 is that it's REVERSIBLE:
encode(encode(plaintext)) == plaintext
If this is the property you want, you could simply XOR the message with a known (previously decided) XOR value. Then, XOR-ing the ciphertext with the same value will return the original plaintext. An example of this would be the memfrob function, which just XORs a buffer with the binary representation of the number 42.
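A memfrob-style sketch in C (my own; glibc's real memfrob has a different signature, this just shows the idea):

#include <stdio.h>
#include <string.h>

/* XOR every byte with 42; applying it twice restores the original, since x ^ 42 ^ 42 == x. */
void xor42(char *buf, size_t n) {
    for (size_t i = 0; i < n; i++) buf[i] ^= 42;
}

int main(void) {
    char msg[] = "CRYPTOGRAPHY";
    size_t n = strlen(msg);
    xor42(msg, n);            /* "encode" */
    xor42(msg, n);            /* "encode" again -> plaintext is back */
    printf("%s\n", msg);      /* prints CRYPTOGRAPHY */
    return 0;
}
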
You also might check out other forms of ENCODINGS, such as Base64 if that's closer to what you're looking for.
!! Disclaimer - if you have data that you're actually trying to protect from anyone, don't use any of these methods. While entertaining, all of these methods are trivial to break.
