Why are 8 and 256 such important numbers in computer science?

I don't know much about RAM and HDD architecture, or how electronics deal with chunks of memory, but this has always triggered my curiosity:
Why did we choose to stop at 8 bits for the smallest element of a computer value?
My question may look very dumb, because the answer seems obvious, but I'm not very sure...
Is it because 2^3 fits perfectly when addressing memory?
Are electronics especially designed to store chunks of 8 bits? If yes, why not use wider words?
Is it because it divides 32, 64 and 128, so that processor words can be made up of several of those units?
Is it just convenient to have 256 values in such a tiny space?
What do you think?
My question is a little metaphysical, but I want to make sure it's just a historical reason and not a technological or mathematical one.
As an anecdote, I was also thinking about the ASCII standard, in which most of the first characters are useless with things like UTF-8; I'm also trying to think about some tinier and faster character encoding...

Historically, bytes haven't always been 8-bit in size (for that matter, computers don't have to be binary either, but non-binary computing has seen much less action in practice). It is for this reason that IETF and ISO standards often use the term octet - they don't use byte because they don't want to assume it means 8-bits when it doesn't.
Indeed, when the word byte was coined it was defined as a unit of 1 to 6 bits. Byte sizes in use throughout history include 7, 9 and 36 bits, as well as machines with variable-sized bytes.
8 was a mixture of commercial success, it being a convenient enough number for the people thinking about it (which would have fed into each other) and no doubt other reasons I'm completely ignorant of.
The ASCII standard you mention assumes a 7-bit byte, and was based on earlier 6-bit communication standards.
Edit: It may be worth adding to this, since some are insisting that bytes are always octets and that those who say otherwise are confusing bytes with words.
An octet is a name given to a unit of 8 bits (from the Latin for eight). If you are using a computer (or, at a higher abstraction level, a programming language) where bytes are 8-bit, then working with octets is easy; otherwise you need some conversion code (or conversion in hardware).
The concept of octet comes up more in networking standards than in local computing, because in being architecture-neutral it allows for the creation of standards that can be used in communicating between machines with different byte sizes, hence its use in IETF and ISO standards (incidentally, ISO/IEC 10646 uses octet where the Unicode Standard uses byte for what is essentially - with some minor extra restrictions on the latter part - the same standard, though the Unicode Standard does specify that by byte it means octet, even though bytes may be different sizes on different machines). The concept of octet exists precisely because 8-bit bytes are common (hence the choice of using them as the basis of such standards) but not universal (hence the need for another word to avoid ambiguity).
Historically, a byte was the size used to store a character, a matter which in turn builds on practices, standards and de-facto standards which pre-date computers used for telex and other communication methods, starting perhaps with Baudot in 1870 (I don't know of any earlier, but am open to corrections).
This is reflected by the fact that in C and C++ the type used to store a byte is called char, whose size in bits is defined by CHAR_BIT in the standard limits.h header. Different machines would use 5, 6, 7, 8, 9 or more bits to define a character. These days of course we define characters as 21-bit and use different encodings to store them in 8-, 16- or 32-bit units (and non-Unicode-authorised ways like UTF-7 for other sizes), but historically that was the way it was.
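As a quick check, a minimal C sketch (nothing more than that) can print what a given implementation says about char; on almost any machine you will see 8, but the standard only requires at least 8:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* CHAR_BIT is the number of bits in a char on this implementation. */
        printf("CHAR_BIT     = %d\n", CHAR_BIT);
        printf("sizeof(char) = %zu (always 1 by definition)\n", sizeof(char));
        printf("CHAR range   = %d..%d\n", CHAR_MIN, CHAR_MAX);
        return 0;
    }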
In languages which aim to be more consistent across machines, rather than reflecting the machine architecture, byte tends to be fixed in the language, and these days this generally means it is defined in the language as 8-bit. Given the point in history when they were made, and that most machines now have 8-bit bytes, the distinction is largely moot, though it's not impossible to implement a compiler, run-time, etc. for such languages on machines with different sized bytes, just not as easy.
A word is the "natural" size for a given computer. This is less clearly defined, because it affects a few overlapping concerns that would generally coïncide, but might not. Most registers on a machine will be this size, but some might not. The largest address size would typically be a word, though this may not be the case (the Z80 had an 8-bit byte and a 1-byte word, but allowed some doubling of registers to give some 16-bit support including 16-bit addressing).
Again we see here a difference between C and C++, where int is defined in terms of word size and long is defined to take advantage of a processor with a "long word" concept should such exist, though long may be identical to int in a given case. The minimum and maximum values are again in the limits.h header. (Indeed, as time has gone on, int may be defined as smaller than the natural word size, as a combination of consistency with what is common elsewhere, reduction in memory usage for an array of ints, and probably other concerns I don't know of.)
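Again only as a sketch, the sizes and limits of int and long can be read straight out of limits.h; a typical 64-bit Linux build prints 4 and 8 bytes, but the standard only guarantees minimum ranges:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        printf("sizeof(int)  = %zu bytes, range %d..%d\n",
               sizeof(int), INT_MIN, INT_MAX);
        printf("sizeof(long) = %zu bytes, range %ld..%ld\n",
               sizeof(long), LONG_MIN, LONG_MAX);
        return 0;
    }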
Java and .NET languages take the approach of defining int and long as fixed across all architectures, making the differences an issue for the runtime (particularly the JITter) to deal with. Notably though, even in .NET the size of a pointer (in unsafe code) will vary depending on architecture to be the underlying word size, rather than a language-imposed word size.
Hence, octet, byte and word are all quite independent of each other, even though today it is common for octet == byte and for a word to be a whole number of bytes (and a whole binary-round number of them, like 2, 4 or 8).

Not all bytes are 8 bits. Some are 7, some 9, some other values entirely. The reason 8 is important is that, in most modern computers, it is the standard number of bits in a byte. As Nikola mentioned, a bit is the actual smallest unit (a single binary value, true or false).
As Will mentioned, this article http://en.wikipedia.org/wiki/Byte describes the byte and its variable-sized history in some more detail.
The general reasoning behind why 8, 256, and other numbers are important is that they are powers of 2, and computers run using a base-2 (binary) system of switches.

ASCII encoding required 7 bits, and EBCDIC required 8 bits. Extended ASCII codes (such as ANSI character sets) used the 8th bit to expand the character set with graphics, accented characters and other symbols. Some architectures made use of proprietary encodings; a good example of this is the DEC PDP-10, which had a 36-bit machine word. Some operating systems on this architecture used packed encodings that stored 6 characters in a machine word for various purposes such as file names.
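As an illustration only (C has no 36-bit type, so a 64-bit integer stands in for the PDP-10 word here, and the character codes are made up), packing six 6-bit codes into one word looks roughly like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack six 6-bit codes into the low 36 bits of a 64-bit stand-in
       "word", first code ending up in the most significant position. */
    uint64_t pack_sixbit(const uint8_t codes[6]) {
        uint64_t word = 0;
        for (int i = 0; i < 6; i++)
            word = (word << 6) | (codes[i] & 0x3F);
        return word;
    }

    int main(void) {
        const uint8_t name[6] = {6, 9, 12, 5, 0, 0};   /* made-up 6-bit codes */
        printf("packed word: %09llx\n", (unsigned long long)pack_sixbit(name));
        return 0;
    }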
By the 1970s, the success of the Data General Nova and DEC PDP-11, which were 16-bit architectures, and of IBM mainframes with 32-bit machine words, was pushing the industry towards an 8-bit character by default. The 8-bit microprocessors of the late 1970s were developed in this environment and this became a de facto standard, particularly as off-the-shelf peripheral chips such as UARTs, ROM chips and FDC chips were being built as 8-bit devices.
By the latter part of the 1970s the industry settled on 8 bits as a de facto standard, and architectures such as the PDP-8 with its 12-bit machine word became somewhat marginalised (although the PDP-8 ISA and derivatives still appear in embedded system products). 16- and 32-bit microprocessor designs such as the Intel 80x86 and MC68K families followed.

Since computers work with binary numbers, all powers of two are important.
8-bit numbers are able to represent 256 (2^8) distinct values, enough for all the characters of English and quite a few extra ones. That made the numbers 8 and 256 quite important.
The fact that many CPUs (used to, and still do) process data in 8-bit units helped a lot.
Other important powers of two you might have heard about are 1024 (2^10, or 1K) and 65536 (2^16, usually referred to as 64K).

Computers are built upon digital electronics, and digital electronics works with states. An element can be in one of 2 states, 1 or 0 (if the voltage is above some level then it is 1, if not then it is 0). To represent that behaviour the binary system was adopted.
So we come to the bit. A bit is the smallest unit in the binary system. It can take only 2 states, 1 or 0, and it represents the atomic fragment of the whole system.
To make our lives easier the byte (8 bits) was introduced. To give you an analogy, we don't usually express weight in grams, even though the gram is the base measure of weight; we use kilograms, because they are easier to work with. One kilogram is 1000 grams, which can be expressed as 10 to the power of 3. So when we go back to the binary system and use the same kind of power, we get 8 (2 to the power of 3 is 8). That was done because using bare bits was overly complicated in everyday computing.
That held on, so further into the future, when we realised that 8 bits was again becoming too small and complicated to use, we added +1 to the power (2 to the power of 4 is 16), then again 2^5 is 32, and so on; 256 is just 2 to the power of 8.
So your answer is: we follow the binary system because of the architecture of computers, and we go up in powers of two to get values that we can handle comfortably every day, and that is how you got from a bit to a byte (8 bits) and so on!
(2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and so on) (2^x for x = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and so on)

The important number here is binary 0 or 1. All your other questions are related to this.
Claude Shannon and George Boole did the fundamental work on what we now call information theory and Boolean arithmetic. In short, this is the basis of how a digital switch, with only the ability to represent 0 OFF and 1 ON can represent more complex information, such as numbers, logic and a jpg photo. Binary is the basis of computers as we know them currently, but other number base computers or analog computers are completely possible.
In human decimal arithmetic, the powers of ten have significance: 10, 100, 1000, 10,000 each seem important and useful. Once you have a computer based on binary, the powers of 2 likewise become important. 2^8 = 256 is enough for an alphabet, punctuation and control characters. (More importantly, 2^7 is enough for an alphabet, punctuation and control characters, and 2^8 leaves room for those ASCII characters plus a check bit.)
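To make the check-bit remark concrete, here is a small C sketch (one of several parity conventions actually used on old serial links; the function name is just illustrative) that packs a 7-bit ASCII code plus an even-parity bit into a single 8-bit byte:

    #include <stdint.h>
    #include <stdio.h>

    /* Put even parity over the low 7 bits into bit 7. */
    uint8_t add_parity(uint8_t ascii7) {
        uint8_t ones = 0;
        for (int i = 0; i < 7; i++)
            ones ^= (ascii7 >> i) & 1;      /* XOR of the 7 data bits */
        return (uint8_t)((ascii7 & 0x7F) | (ones << 7));
    }

    int main(void) {
        uint8_t byte = add_parity('A');     /* 'A' = 0x41 has two 1-bits */
        printf("0x%02X\n", byte);           /* even already, so prints 0x41 */
        return 0;
    }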

We normally count in base 10, a single digit can have one of ten different values. Computer technology is based on switches (microscopic) which can be either on or off. If one of these represents a digit, that digit can be either 1 or 0. This is base 2.
It follows from there that computers work with numbers that are built up as a series of 2 value digits.
1 digit: 2 values
2 digits: 4 values
3 digits: 8 values, and so on.
When processors are designed, they have to pick a size that the processor will be optimized to work with. To the CPU, this is considered a "word". Early CPUs were based on word sizes of four bits and soon after 8 bits (1 byte). Today, CPUs are mostly designed to operate on 32-bit and 64-bit words. But really, the two-state "switch" is why all computer numbers tend to be powers of 2.
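A tiny C loop makes the doubling explicit (just a sketch; the widths listed are simply the common historical ones):

    #include <stdio.h>

    int main(void) {
        int widths[] = {1, 2, 3, 4, 8, 16, 32};
        for (int i = 0; i < 7; i++) {
            /* n binary digits can represent 2^n distinct values */
            printf("%2d bits -> %llu values\n",
                   widths[i], 1ULL << widths[i]);
        }
        return 0;
    }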

I believe the main reason has to do with the original design of the IBM PC. The Intel 8080 CPU was the first precursor to the 8086, which would later be used in the IBM PC (in the form of the 8088 variant). It had 8-bit registers. Thus, a whole ecosystem of applications was developed around the 8-bit metaphor. In order to retain backward compatibility, Intel designed all subsequent architectures to retain 8-bit registers. Thus, the 8086 and all x86 CPUs after that kept their 8-bit registers for backwards compatibility, even though they added new 16-bit and 32-bit registers over the years.
The other reason I can think of is that 8 bits is perfect for fitting a basic Latin character set. You cannot fit it into 4 bits, but you can in 8. Thus, you get the whole 256-value extended ASCII charset. It is also the smallest power of 2 that gives you enough bits to fit a character set. Of course, these days most character sets are much wider than 8 bits (i.e. Unicode).

Charles Petzold wrote an interesting book called Code that covers exactly this question. See chapter 15, Bytes and Hex.
Quotes from that chapter:
Eight-bit values are inputs to the adders, latches and data selectors, and also outputs from these units. Eight-bit values are also defined by switches and displayed by lightbulbs. The data path in these circuits is thus said to be 8 bits wide. But why 8 bits? Why not 6 or 7 or 9 or 10?
... there's really no reason why it had to be built that way. Eight bits just seemed at the time to be a convenient amount, a nice biteful of bits, if you will.
... For a while, a byte meant simply the number of bits in a particular data path. But by the mid-1960s, in connection with the development of IBM's System/360 (their large complex of business computers), the word came to mean a group of 8 bits.
... One reason IBM gravitated toward 8-bit bytes was the ease in storing numbers in a format known as BCD. But as we'll see in the chapters ahead, quite by coincidence a byte is ideal for storing text because most written languages around the world (with the exception of the ideographs used in Chinese, Japanese and Korean) can be represented with fewer than 256 characters.

Historical reasons, I suppose. 8 is a power of 2; 4 bits (2^4 = 16 values) is far too little for most purposes, and 16-bit (the next power of two) hardware came much later.
But the main reason, I suspect, is the fact that they had 8-bit microprocessors, then 16-bit microprocessors, whose words could very well be represented as 2 octets, and so on. You know, historical cruft, backward compatibility, etc.
Another, similarly pragmatic reason against "scaling down": if we used, say, 4 bits as one word, we would basically get only half the throughput compared with 8 bits, aside from overflowing much faster.
You can always squeeze e.g. two numbers in the range 0..15 into one octet... you just have to extract them by hand. But unless you have, like, gazillions of data sets to keep in memory side by side, this isn't worth the effort.
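For completeness, a minimal C sketch of that "two values in one octet" idea (the function name is just illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Pack two 0..15 values into one byte: high nibble and low nibble. */
    uint8_t pack_nibbles(uint8_t hi, uint8_t lo) {
        return (uint8_t)(((hi & 0x0F) << 4) | (lo & 0x0F));
    }

    int main(void) {
        uint8_t packed = pack_nibbles(9, 3);    /* 0x93 */
        uint8_t hi = packed >> 4;               /* 9, extracted by hand */
        uint8_t lo = packed & 0x0F;             /* 3 */
        printf("packed=0x%02X hi=%u lo=%u\n", packed, hi, lo);
        return 0;
    }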

Related

What is the difference between AVX2 and AVX-512?

In terms of SIMD and parallelization, what is the difference between AVX2 and AVX-512? Are they the same thing or different? I just see that double8 is used in AVX-512 and double4 is used for AVX2?
I am using PyOpenCL to write kernel code in C and not sure what the difference would be.
AVX2 is a 256-bit vector instruction set. You have 256-bit registers which can be interpreted several ways (8 floats, 4 doubles, 32 bytes, etc.). AVX1 supports only floating-point operations; AVX2 adds 256-bit integer operations. AVX-512 is a set of 512-bit vector instructions. There are only 2 flavors of AVX, plain old AVX and AVX2. AVX-512 comes in many different flavors. You may find Intel's Intrinsics Guide interesting.
The biggest difference is simply getting twice as many operations processed per instruction. Though, there are certain instructions in AVX-512 which may make some specific things more optimal (exponent approximations, for example).
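As a rough C illustration of the width difference (intrinsics rather than OpenCL, and assuming a compiler and CPU with AVX2 and AVX-512F support, e.g. building with -mavx2 -mavx512f; the function names are just illustrative), this is roughly what OpenCL's double4 and double8 map to on this hardware:

    #include <immintrin.h>

    /* AVX2: add 4 doubles per instruction (256-bit registers). */
    void add4(const double *a, const double *b, double *out) {
        __m256d va = _mm256_loadu_pd(a);
        __m256d vb = _mm256_loadu_pd(b);
        _mm256_storeu_pd(out, _mm256_add_pd(va, vb));
    }

    /* AVX-512: add 8 doubles per instruction (512-bit registers). */
    void add8(const double *a, const double *b, double *out) {
        __m512d va = _mm512_loadu_pd(a);
        __m512d vb = _mm512_loadu_pd(b);
        _mm512_storeu_pd(out, _mm512_add_pd(va, vb));
    }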

Encoding DNA strand in Binary

Hey guys I have the following question:
Suppose we are working with strands of DNA, each strand consisting of
a sequence of 10 nucleotides. Each nucleotide can be any one of four
different types: A, G, T or C. How many bits does it take to encode a
DNA strand?
Here is my approach to it and I want to know if that is correct.
We have 10 spots. Each spot can have 4 different symbols. This means we need to represent 4^10 combinations with our binary digits.
4^10 = 1048576.
We will then find the log base 2 of that. What do you guys think of my approach?
Each nucleotide (aka base-pair) takes two bits (one of four states -> 2 bits of information). 10 base-pairs thus take 20 bits. Reasoning that way is easier than doing the log2(4^10), but gives the same answer.
It would be fewer bits of information if there were any combinations that couldn't appear. e.g. some codons (sequence of three base-pairs) that never appear. But ten independent 2-bit pieces of information sum to 20 bits.
If some sequences appear more frequently than others, and a variable-length representation is viable, then Huffman coding or other compression schemes could save bits most of the time. This might be good in a file-format, but unlikely to be good in-memory when you're working with them.
Densely packing your data into an array of 2-bit fields makes it slower to access a single base-pair, but comparing the whole chunk for equality with another chunk is still efficient (memcmp).
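A hedged C sketch of that dense 2-bit packing, applied to the 10-nucleotide strand from the question (the A/C/G/T-to-code mapping and the function name are arbitrary choices for illustration):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Map A,C,G,T to the 2-bit codes 0..3 and pack the strand into the
       low 2*len bits of a 32-bit integer (valid input, len <= 16). */
    uint32_t pack_dna(const char *strand, size_t len) {
        static const char codes[] = "ACGT";
        uint32_t packed = 0;
        for (size_t i = 0; i < len; i++) {
            uint32_t code = (uint32_t)(strchr(codes, strand[i]) - codes);
            packed = (packed << 2) | code;
        }
        return packed;
    }

    int main(void) {
        const char *strand = "ACGTACGTAC";          /* 10 nucleotides */
        printf("packed into 20 bits: 0x%05" PRIX32 "\n", pack_dna(strand, 10));
        return 0;
    }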
20 bits is unfortunately just slightly too large for a 16-bit integer (which computers are good at). Storing in an array of 32-bit zero-extended values wastes a lot of space. On hardware with good unaligned support, storing 24-bit zero-extended values is OK (do a 32-bit load and mask the high 8 bits. Storing is even less convenient though: probably a 16-bit store and an 8-bit store, or else load the old value and merge the high 8 bits, then do a 32-bit store. But that's not atomic.).
This is a similar problem for storing codons (groups of three base-pairs that code for an amino acid): 6 bits of information doesn't fill a byte. Only wasting 2 of every 8 bits isn't that bad, though.
Amino-acid sequences (where you don't care about mutations between different codons that still code for the same AA) have about 20 symbols per position, which means a symbol doesn't quite fit into a 4-bit nibble.
I used to work for the phylogenetics research group at Dalhousie, so I've sometimes thought about having a look at DNA-sequence software to see if I could improve on how they internally store sequence data. I never got around to it, though. The real CPU intensive work happens in finding a maximum-likelihood evolutionary tree after you've already calculated a matrix of the evolutionary distance between every pair of input sequences. So actual sequence comparison isn't the bottleneck.
Do the maths:
4^10 = (2^2)^10 = 2^20
Answer: 20 bits

How does a processor calculate with values bigger than its register size?

So far I learned that a processor has registers, for 32 bit processor
they are 32 bits, for 64 bit they are 64 bits. So can someone explain
what happens if I give to the processor a larger value than its register
size? How is the calculation performed?
It depends.
Assuming x86 for the sake of discussion, 64-bit integers can still be handled "natively" on a 32-bit architecture. In this case, the program often uses a pair of 32-bit registers to hold the 64-bit value. For example, the value 0xDEADBEEF2B84F00D might be stored in the EDX:EAX register pair:
eax = 0x2B84F00D
edx = 0xDEADBEEF
The CPU actually expects 64-bit numbers in this format in some cases (IDIV, for example).
Math operations are done in multiple instructions. For example, a 64-bit add on a 32-bit x86 CPU is done with an add of the lower DWORDs, and then an adc of the upper DWORDs, which takes into account the carry flag from the first addition.
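In C, the same add/adc idea looks roughly like this (a sketch of how a compiler or runtime library might synthesize a 64-bit add from 32-bit pieces; the function name is just illustrative):

    #include <stdint.h>

    /* Add two 64-bit values given as hi/lo 32-bit halves, the way a
       32-bit CPU does it: low halves first, then high halves plus carry. */
    void add64(uint32_t a_hi, uint32_t a_lo,
               uint32_t b_hi, uint32_t b_lo,
               uint32_t *r_hi, uint32_t *r_lo) {
        uint32_t lo = a_lo + b_lo;          /* like ADD on the low DWORDs    */
        uint32_t carry = (lo < a_lo);       /* carry flag: did the add wrap? */
        *r_lo = lo;
        *r_hi = a_hi + b_hi + carry;        /* like ADC on the high DWORDs   */
    }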
For even bigger integers, an arbitrary-precision arithmetic (or "big int") library is used. Here, a dynamically-sized array of bytes is used to represent the integer, with additional information (like the number of bits used). GMP is a popular choice.
Mathematical operations on big integers are done iteratively, probably in native word-size values at-a-time. For the gory details, I suggest you have a look through the source code of one of these open-source libraries.
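For the bignum case, a minimal GMP example (link with -lgmp; the numbers are arbitrary):

    #include <gmp.h>
    #include <stdio.h>

    int main(void) {
        mpz_t a, b, sum;
        /* Integers far larger than any machine register. */
        mpz_init_set_str(a, "123456789012345678901234567890", 10);
        mpz_init_set_str(b, "987654321098765432109876543210", 10);
        mpz_init(sum);
        mpz_add(sum, a, b);                 /* done piece by piece internally */
        gmp_printf("%Zd\n", sum);
        mpz_clears(a, b, sum, NULL);
        return 0;
    }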
The key to all of this, is that numeric operations are carried out in manageable pieces, and combined to produce the final result.

Assembler memory address representation

I'm trying to get into assembler and I often come across numbers in the following form:
org 7c00h
; initialize the stack:
mov ax, 07c0h
mov ss, ax
mov sp, 03feh ; top of the stack.
7c00h, 07c0h, 03feh - What is the name of this number notation? What do they mean? Why are they used over "normal" decimal numbers?
It's hexadecimal, the numeral system with 16 digits: 0-9 and A-F. Memory addresses are given in hex because it's shorter and easier to read, and the numbers that represent memory locations don't mean anything special to humans, so there's no sense in having long numbers. I would guess that somewhere in the past someone had to type in some addresses by hand as well; might as well have started there.
Worth noting also, 0:7C00 is the boot sector load address.
Further worth noting: 07C0:03FE is the same address as 0:7FFE due to the way segmented addressing works.
This guy's left himself a 510 byte stack (he made the very typical off-by-two error in setting up the boot sector's stack).
These are numbers in hexadecimal notation, i.e. in base 16, where A to F have the digit values 10 to 15.
One advantage is that there is a more direct conversion to binary numbers. With a little bit of practice it is easy to see which bits in the number are 1 and which are 0.
Another is that many numbers used internally, such as memory addresses, are round numbers in hexadecimal, i.e. contain a lot of zeros.
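A short C snippet shows that direct hex-to-binary correspondence, using the boot-sector address from the question (each hex digit maps to exactly 4 bits):

    #include <stdio.h>

    int main(void) {
        unsigned value = 0x7C00;            /* the boot sector load address */
        printf("hex: %04X  binary: ", value);
        for (int bit = 15; bit >= 0; bit--)
            putchar((value >> bit) & 1 ? '1' : '0');
        putchar('\n');                      /* prints 0111110000000000 */
        return 0;
    }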

Practical example of 32 bit unsigned, signed, 64 bit differences

I'm a fairly new programmer and I was wondering if someone could give me a practical explanation/example on the differences and uses between working with signed, unsigned and 32 bit vs 64 bit?
i.e. I read an article about how Twitter had developers switch to 64 bit last year but I wasn't sure the reasoning and the specific nature to it.
Thank you!
For n bits, you can have 2^n different numbers represented by those bits. So 32 bit unsigned numbers go from 0 to 4,294,967,295 (2^32-1, the -1 is because 0 is a valid number). Signed numbers divide that 4 billion evenly between positive and negative. 32-bit computers use this in their memory addresses, which means a program can natively access 4 GB of memory. 64-bit computers have a limit of 2^64, which is much, much higher.
You also run across that 4 billion limit if you're using 32-bit numbers to represent other things, like for example users, tweets, or seconds since a certain date. So 32-bit works just fine up to a certain scale, then above that, even though there are ways to work around the limit, it makes more sense to go to 64-bit.
The disadvantage is it takes twice as much memory to store your numbers.
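A small C illustration of hitting that 32-bit ceiling (the ID scenario is hypothetical; the point is just the wrap-around versus the 64-bit headroom):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t id32 = UINT32_MAX;         /* 4,294,967,295: the ceiling */
        uint64_t id64 = UINT32_MAX;

        uint32_t next32 = id32 + 1;         /* unsigned overflow wraps to 0 */
        uint64_t next64 = id64 + 1;         /* plenty of room: 4294967296   */

        printf("32-bit next id: %" PRIu32 "\n", next32);
        printf("64-bit next id: %" PRIu64 "\n", next64);
        return 0;
    }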
