Converting bytes to megabytes - math

I've seen three ways of doing conversion from bytes to megabytes:
1. megabytes = bytes / 1000000
2. megabytes = bytes / 1024 / 1024
3. megabytes = bytes / 1024 / 1000
Ok, I think #3 is totally wrong but I have seen it. I think #2 is right, but I am looking for some respected authority (like W3C, ISO, NIST, etc) to clarify which megabyte is a true megabyte. Can anyone cite a source that explicitly explains how this calculation is done?
Bonus question: if #2 is a megabyte what are #1 and #3 called?
BTW: Hard drive manufacturers don't count as authorities on this one!

Traditionally, by megabyte we mean your second option: 1 megabyte = 2^20 bytes. But strictly speaking that is not correct, because mega means 1 000 000. There is a newer standard name for 2^20 bytes, the mebibyte (http://en.wikipedia.org/wiki/Mebibyte), and it is gaining popularity.

There's an IEC standard that distinguishes the terms, e.g. mebibyte = 1024^2 bytes but megabyte = 1000^2 bytes (in order to be compatible with SI units like kilograms, where k/M/... means 1000/1,000,000). In practice, most people in IT prefer megabyte = 1024^2, and hard disk manufacturers prefer megabyte = 1000^2 (because hard disk sizes then sound bigger than they are).
As a matter of fact, most people are confused by the standards-based meaning (multiplier 1000) versus the traditional meaning (multiplier 1024). In general you shouldn't make assumptions about what people mean. For example, 128 kbit/s for MP3s usually means 128,000 bits per second, because the multiplier 1000 is mostly used with the unit bits. But often people then call 2048 kbit/s equal to 2 Mbit/s - confusing, eh?
So as a general rule, don't trust bit/byte units at all ;)

Divide by 2 to the power of 20: (1024 * 1024) bytes = 1 megabyte.
1024 * 1024 = 1,048,576
2^20 = 1,048,576
1,048,576 / 1,048,576 = 1
It is the same thing.

BTW: Hard drive manufacturers don't count as authorities on this one!
Oh, yes they do (and the definition they assume from the S.I. is the correct one). On a related issue, see this post on CodingHorror.

To convert bytes to megabytes (MB), use totalbyte / 1000 / 1000.
To convert bytes to mebibytes (MiB), use totalbyte / 1024 / 1024.
https://en.wikipedia.org/wiki/Byte#Multiple-byte_units
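To make the difference concrete, here is a minimal sketch in R (the byte count is just an example value):
bytes <- 123456789
bytes / 1000^2   # megabytes (MB, decimal SI prefix) -> about 123.46
bytes / 1024^2   # mebibytes (MiB, binary prefix)    -> about 117.74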

The answer is that #1 is technically correct, based on the real meaning of the mega prefix. However (and in life there is always a however), the math for that doesn't come out so nicely in base 2, which is how computers count, so #2 is what people really use.

Megabyte means 2^20 bytes. I know that technically that doesn't mesh with the SI units, and that some folks have come up with a new terminology to mean 2^20. None of that matters. Efforts to change the language to "clarify" things are doomed to failure.
Hard-drive manufacturers use it to mean 1,000,000 bytes, because that's what it means in SI so they figure technically they aren't lying (while actually they are). That falls under lies, damn lies, and marketing.

Use the computation your users will most likely expect. Do your users care to know how many actual bytes are on a disk or in memory or whatever, or do they only care about usable space? The answer to that question will tell you which calculation makes the most sense.
This isn't a precision question as much as it is a usability question. Provide the calculation that is most useful to your users.

In general, it's wrong to use decimal SI prefixes (e.g. kilo, mega) when referring to binary data sizes (except in casual usage). It's ambiguous and causes confusion. To be precise you can use binary prefixes (e.g. 1 mebibyte = 1 MiB = 1024 kibibytes = 2^20 bytes). When someone else uses decimal SI prefixes for binary data you need to get more information before you can know what is meant.

Microsoft Windows Explorer shows file size in the "Properties" window. This is a conversion from the byte count using 2^20.
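As a rough sketch of that conversion in R (the one-decimal formatting and the "MB" label are my assumptions, not Explorer's exact rounding rules):
explorer_mb <- function(bytes) sprintf("%.1f MB", bytes / 2^20)
explorer_mb(5242880)     # "5.0 MB"
explorer_mb(123456789)   # "117.7 MB"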

Related

What does sound data look like?

I read about how sounds are represented with numbers in a computer here.
And I figured out that the usual representation is that we get 44,100 numbers in the range [-32767, 32767] per second.
Then, to my imagination, there's got to be a big one-column matrix, right?
I'm an R user, so speaking in R, sound data of 3 seconds would be:
s <- 3
sound <- matrix(0, ncol = 1, nrow = 44100 * s)
nrow(sound)
#> [1] 132300
one-column matrix with 132,300 rows.
Is this really the case?
I want some analogous picture in my head: say, in the case of a 256 * 256 picture,
if we split that picture into RGB channels, we get 3 matrices, each 256 * 256.
And in the case of sound, we get one long, long column? As I think about this again, it's not even a matrix after all. It's a column.
Am I right? I can't find any similar dataset searching the Internet.
Any advice is welcome. Thanks.
The raw format that is created early in that linked question could look a lot like a single dimension array. And probably the signal that is sent to the speaker to make the sound could be represented similarly.
But you're unlikely to find a file on your computer that looks like that for several reasons:
Sound can be stored at different bit depths - that is, how many bits are used for each 'number'. CD audio tracks have a 16-bit depth, but you could have 8 or 32 bits, etc. In a straight stream of these numbers you need some way to know how far to read to get to the next number, so that information needs to be stored somewhere.
Sample rate can vary. If you've got a sequence of numbers representing an audio signal, then you need to know how long each number lasts for.
Most sounds are more complex. Instead of a single source, you have stereo, or 5 channels, or whatever, so the system needs to be able to store / decode multiple pieces of information for the sounds you want to hear at a particular time.
Much of sound is repetitive, and so can often benefit from compression.
So most sounds are stored in a compressed format that includes wrapper information about how to decode it. The wrapper information includes how to decode the different audio channels, what sort of compression was used etc.
The closest you're likely to find is a .wav file (Windows) or .aiff (Mac). But even these include some metadata (sample rate and bit depth, to start).
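Since you're working in R, here is a minimal sketch of what that raw column of samples looks like before any wrapper or compression is applied (the 440 Hz mono tone is just an assumption for illustration):
sample_rate <- 44100
duration <- 3
t <- (0:(sample_rate * duration - 1)) / sample_rate
samples <- as.integer(round(32767 * sin(2 * pi * 440 * t)))  # one integer per sample
length(samples)   # 132300 -- one long column, as in your matrix
range(samples)    # stays within [-32767, 32767]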

Human readable alternative for UUIDs

I am working on a system that makes heavy use of pseudonyms to make privacy-critical data available to researchers. These pseudonyms should have the following properties:
1. They should not contain any information (e.g. time of creation, relation to other pseudonyms, encoded data, …).
2. It should be easy to create unique pseudonyms.
3. They should be human readable. That means they should be easy for humans to compare, copy, and understand when read out aloud.
My first idea was to use UUID4. They are quite good on (1) and (2), but not so much on (3).
A variant is to encode UUIDs with a wider alphabet, resulting in shorter strings (see for example shortuuid). But I am not sure whether this actually improves readability.
Another approach I am currently looking into is a paper from 2005 titled "An optimal code for patient identifiers" which aims to tackle exactly my problem. The algorithm described there creates 8-character pseudonyms with 30 bits of entropy. I would prefer to use a more widely reviewed standard though.
Then there is also the git approach: only display the first few characters of the actual pseudonym. But this would mean that a pseudonym could lose its uniqueness after some time.
So my question is: Is there any widely-used standard for human-readable unique ids?
Not aware of any widely-used standard for this. Here’s a non-widely-used one:
Proquints
https://arxiv.org/html/0901.4016
https://github.com/dsw/proquint
A UUID4 (128 bit) would be converted into 8 proquints. If that’s too much, you can take the last 64 bits of the UUID4 (= just take 64 random bits). This doesn’t make it magically lose uniqueness; only increases the likelihood of collisions, which was non-zero to begin with, and which you can estimate mathematically to decide if it’s still OK for your purposes.
Here you go UUID Readable
Generate Easy to Remember, Readable UUIDs, that are Shakespearean and Grammatically Correct Sentences
This article suggests using the first few characters from a SHA-256 hash, similarly to what git does. Name-based UUIDs (version 5) are based on SHA-1, so this is not all that different. The tradeoff between properties (2) and (3) is in the number of characters.
With d being the number of hex digits, you get 2^(4d) identifiers in total, but the first collision is expected to happen after about 2^(2d).
The big question is really not about the kind of identifier you use, it is how you handle collisions.
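To put numbers on that trade-off, here is a small R sketch of the usual birthday-bound approximation (the n and d values are only examples):
# Approximate probability of at least one collision among n identifiers
# drawn uniformly from 16^d possible d-hex-digit values.
collision_prob <- function(n, d) 1 - exp(-n * (n - 1) / (2 * 2^(4 * d)))
collision_prob(1e5, 8)    # ~0.69 with 8 hex digits
collision_prob(1e5, 12)   # ~2e-5 with 12 hex digits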

Memory segmentation in the 8086 - about the less well-known memory segments

I know that the 1 MB memory of the 8086 is split into 16 logical sections, but I only know about 4 such locations. Would anyone tell me about the rest?
I know that the 1 MB memory of the 8086 is split into 16 logical sections
I understand what you're saying but I'm afraid it's worse than that!
The 1 MB memory actually has 65,536 logical sections, each overlapping the next by 65,520 bytes. Your 16 logical sections are just special cases that happen to start at linear addresses divisible by 65,536.
but I only know about 4 such locations
It's not clear what you mean by this, but I think you are referring to the segment registers CS, DS, ES, and SS. These are not locations as such; rather, they each provide a pointer to any one of the aforementioned sections. A linear address is calculated by multiplying the appropriate segment register by 16 and then adding the offset address. The result of this calculation is then truncated to 20 bits.
Would anyone tell me about the rest?
Simple enough. There's nothing else.
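A tiny sketch of that calculation in R (the segment:offset values are arbitrary examples):
# linear = (segment * 16 + offset) mod 2^20, as described above
linear_address <- function(segment, offset) (segment * 16 + offset) %% 2^20
linear_address(0x1234, 0x5678)   # 96696 (0x179B8)
linear_address(0xFFFF, 0x0010)   # wraps around to 0 on the 8086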

Why are 8 and 256 such important numbers in computer science?

I don't know very well about RAM and HDD architecture, or how electronics deals with chunks of memory, but this always triggered my curiosity:
Why did we choose to stop at 8 bits for the smallest element in a computer value?
My question may look very dumb, because the answer is obvious, but I'm not very sure...
Is it because 2^3 allows it to fit perfectly when addressing memory?
Are electronics especially designed to store chunks of 8 bits? If yes, why not use wider words?
Is it because it divides 32, 64 and 128, so that processor words can be built from several of those units?
Is it just convenient to have 256 values for such a tiny space?
What do you think?
My question is a little too metaphysical, but I want to make sure it's just a historical reason and not a technological or mathematical reason.
For the anecdote, I was also thinking about the ASCII standard, in which most of the first characters are useless with stuff like UTF-8. I'm also trying to think about some tinier and faster character encoding...
Historically, bytes haven't always been 8-bit in size (for that matter, computers don't have to be binary either, but non-binary computing has seen much less action in practice). It is for this reason that IETF and ISO standards often use the term octet - they don't use byte because they don't want to assume it means 8-bits when it doesn't.
Indeed, when byte was coined it was defined as a 1- to 6-bit unit. Byte sizes in use throughout history include 7, 9 and 36 bits, as well as machines with variable-sized bytes.
8 was a mixture of commercial success, it being a convenient enough number for the people thinking about it (which would have fed into each other) and no doubt other reasons I'm completely ignorant of.
The ASCII standard you mention assumes a 7-bit byte, and was based on earlier 6-bit communication standards.
Edit: It may be worth adding to this, as some are insisting that those who say bytes are always octets are confusing bytes with words.
An octet is a name given to a unit of 8 bits (from the Latin for eight). If you are using a computer (or at a higher abstraction level, a programming language) where bytes are 8-bit, then treating the two interchangeably is easy; otherwise you need some conversion code (or conversion in hardware). The concept of octet comes up more in networking standards than in local computing, because in being architecture-neutral it allows for the creation of standards that can be used in communicating between machines with different byte sizes, hence its use in IETF and ISO standards (incidentally, ISO/IEC 10646 uses octet where the Unicode Standard uses byte for what is essentially - with some minor extra restrictions on the latter part - the same standard, though the Unicode Standard does detail that they mean octet by byte even though bytes may be different sizes on different machines). The concept of octet exists precisely because 8-bit bytes are common (hence the choice of using them as the basis of such standards) but not universal (hence the need for another word to avoid ambiguity).
Historically, a byte was the size used to store a character, a matter which in turn builds on practices, standards and de-facto standards which pre-date computers used for telex and other communication methods, starting perhaps with Baudot in 1870 (I don't know of any earlier, but am open to corrections).
This is reflected by the fact that in C and C++ the unit for storing a byte is called char whose size in bits is defined by CHAR_BIT in the standard limits.h header. Different machines would use 5,6,7,8,9 or more bits to define a character. These days of course we define characters as 21-bit and use different encodings to store them in 8-, 16- or 32-bit units, (and non-Unicode authorised ways like UTF-7 for other sizes) but historically that was the way it was.
In languages which aim to be more consistent across machines, rather than reflecting the machine architecture, byte tends to be fixed in the language, and these days this generally means it is defined in the language as 8-bit. Given the point in history when they were made, and that most machines now have 8-bit bytes, the distinction is largely moot, though it's not impossible to implement a compiler, run-time, etc. for such languages on machines with different sized bytes, just not as easy.
A word is the "natural" size for a given computer. This is less clearly defined, because it affects a few overlapping concerns that would generally coïncide, but might not. Most registers on a machine will be this size, but some might not. The largest address size would typically be a word, though this may not be the case (the Z80 had an 8-bit byte and a 1-byte word, but allowed some doubling of registers to give some 16-bit support including 16-bit addressing).
Again we see this reflected in C and C++, where int is defined in terms of the word size and long is defined to take advantage of a processor which has a "long word" concept, should such exist, though possibly being identical in a given case to int. The minimum and maximum values are again in the limits.h header. (Indeed, as time has gone on, int may be defined as smaller than the natural word size, as a combination of consistency with what is common elsewhere, reduction in memory usage for an array of ints, and probably other concerns I don't know of.)
Java and .NET languages take the approach of defining int and long as fixed across all architectures, making dealing with the differences an issue for the runtime (particularly the JITter) to deal with. Notably though, even in .NET the size of a pointer (in unsafe code) will vary depending on the architecture to match the underlying word size, rather than a language-imposed word size.
Hence, octet, byte and word are all very independent of each other, despite the relationship of octet == byte and word being a whole number of bytes (and a whole binary-round number like 2, 4, 8 etc.) being common today.
Not all bytes are 8 bits. Some are 7, some 9, some other values entirely. The reason 8 is important is that, in most modern computers, it is the standard number of bits in a byte. As Nikola mentioned, a bit is the actual smallest unit (a single binary value, true or false).
As Will mentioned, this article http://en.wikipedia.org/wiki/Byte describes the byte and its variable-sized history in some more detail.
The general reasoning behind why 8, 256, and other numbers are important is that they are powers of 2, and computers run using a base-2 (binary) system of switches.
ASCII encoding required 7 bits, and EBCDIC required 8 bits. Extended ASCII codes (such as ANSI character sets) used the 8th bit to expand the character set with graphics, accented characters and other symbols. Some architectures made use of proprietary encodings; a good example of this is the DEC PDP-10, which had a 36-bit machine word. Some operating systems on this architecture used packed encodings that stored 6 characters in a machine word for various purposes such as file names.
By the 1970s, the success of the D.G. Nova and DEC PDP-11, which were 16-bit architectures, and IBM mainframes with 32-bit machine words was pushing the industry towards an 8-bit character by default. The 8-bit microprocessors of the late 1970s were developed in this environment and this became a de facto standard, particularly as off-the-shelf peripheral chips such as UARTs, ROM chips and FDC chips were being built as 8-bit devices.
By the latter part of the 1970s the industry settled on 8 bits as a de facto standard, and architectures such as the PDP-8 with its 12-bit machine word became somewhat marginalised (although the PDP-8 ISA and derivatives still appear in embedded system products). 16- and 32-bit microprocessor designs such as the Intel 80x86 and MC68K families followed.
Since computers work with binary numbers, all powers of two are important.
8-bit numbers are able to represent 256 (2^8) distinct values, enough for all the characters of English and quite a few extra ones. That made the numbers 8 and 256 quite important.
The fact that many CPUs (used to and still do) process data in 8-bit units helped a lot.
Other important powers of two you might have heard about are 1024 (2^10 = 1K) and 65536 (2^16 = 64K).
Computers are built upon digital electronics, and digital electronics works with states. One element can have 2 states, 1 or 0 (if the voltage is above some level then it is 1, if not then it is zero). To represent that behavior the binary system was introduced (well, not introduced, but widely accepted).
So we come to the bit. The bit is the smallest unit in the binary system. It can take only 2 states, 1 or 0, and it represents the atomic fragment of the whole system.
To make our lives easy, the byte (8 bits) was introduced. To give you an analogy: we don't express weight in grams, even though that is the base measure of weight; we use kilograms, because they are easier to use and to understand. One kilogram is 1000 grams, which can be expressed as 10 to the power of 3. So when we go back to the binary system and use the same kind of power, we get 8 (2 to the power of 3 is 8). That was done because using only bits was overly complicated in everyday computing.
That held on, so further in the future, when we realized that 8 bits was again too small and becoming complicated to use, we added +1 to the power (2 to the power of 4 is 16), then again 2^5 is 32, and so on, and 256 is just 2 to the power of 8.
So your answer is: we follow the binary system because of the architecture of computers, and we go up in the value of the power to get values that we can handle easily every day, and that is how you got from a bit to a byte (8 bits) and so on!
(2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and so on) (2^x, x = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 and so on)
The important number here is binary 0 or 1. All your other questions are related to this.
Claude Shannon and George Boole did the fundamental work on what we now call information theory and Boolean arithmetic. In short, this is the basis of how a digital switch, with only the ability to represent 0 OFF and 1 ON can represent more complex information, such as numbers, logic and a jpg photo. Binary is the basis of computers as we know them currently, but other number base computers or analog computers are completely possible.
In human decimal arithmetic, the powers of ten have significance. 10, 100, 1000, 10,000 each seem important and useful. Once you have a computer based on binary, there are powers of 2, likewise, that become important. 2^8 = 256 is enough for an alphabet, punctuation and control characters. (More importantly, 2^7 is enough for an alphabet, punctuation and control characters and 2^8 is enough room for those ASCII characters and a check bit.)
We normally count in base 10, a single digit can have one of ten different values. Computer technology is based on switches (microscopic) which can be either on or off. If one of these represents a digit, that digit can be either 1 or 0. This is base 2.
It follows from there that computers work with numbers that are built up as a series of 2 value digits.
1 digit, 2 values
2 digits, 4 values
3 digits, 8 values, etc.
When processors are designed, they have to pick a size that the processor will be optimized to work with. To the CPU, this is considered a "word". Earlier CPUs were based on word sizes of four bits and, soon after, 8 bits (1 byte). Today, CPUs are mostly designed to operate on 32-bit and 64-bit words. But really, the two-state "switch" is why all computer numbers tend to be powers of 2.
I believe the main reason has to do with the original design of the IBM PC. The Intel 8080 CPU was the first precursor to the 8086 which would later be used in the IBM PC. It had 8-bit registers. Thus, a whole ecosystem of applications was developed around the 8-bit metaphor. In order to retain backward compatibility, Intel designed all subsequent architectures to retain 8-bit registers. Thus, the 8086 and all x86 CPUs after that kept their 8-bit registers for backwards compatibility, even though they added new 16-bit and 32-bit registers over the years.
The other reason I can think of is that 8 bits is perfect for fitting a basic Latin character set. You cannot fit it into 4 bits, but you can in 8. Thus, you get the whole 256-value extended ASCII charset. It is also the smallest power of 2 for which you have enough bits to fit a character set. Of course, these days most character sets are actually 16 bits wide (i.e. Unicode).
Charles Petzold wrote an interesting book called Code that covers exactly this question. See chapter 15, Bytes and Hex.
Quotes from that chapter:
Eight-bit values are inputs to the adders, latches and data selectors, and also outputs from these units. Eight-bit values are also defined by switches and displayed by lightbulbs. The data path in these circuits is thus said to be 8 bits wide. But why 8 bits? Why not 6 or 7 or 9 or 10?
... there's really no reason why it had to be built that way. Eight bits just seemed at the time to be a convenient amount, a nice biteful of bits, if you will.
... For a while, a byte meant simply the number of bits in a particular data path. But by the mid-1960s, in connection with the development of IBM's System/360 (their large complex of business computers), the word came to mean a group of 8 bits.
... One reason IBM gravitated toward 8-bit bytes was the ease in storing numbers in a format known as BCD. But as we'll see in the chapters ahead, quite by coincidence a byte is ideal for storing text because most written languages around the world (with the exception of the ideographs used in Chinese, Japanese and Korean) can be represented with fewer than 256 characters.
Historical reasons, I suppose. 8 is a power of 2, 4 bits (2^4 = 16 values) is far too little for most purposes, and 16-bit hardware (the next power of two up from 8) came much later.
But the main reason, I suspect, is the fact that they had 8-bit microprocessors, then 16-bit microprocessors, whose words could very well be represented as 2 octets, and so on. You know, historical cruft, backward compatibility, etc.
Another, similarly pragmatic reason against "scaling down": if we used, say, 4 bits as one word, we would basically get only half the throughput compared with 8 bits, aside from overflowing much faster.
You can always squeeze e.g. 2 numbers in the range 0..15 into one octet... you just have to extract them by hand. But unless you have, like, gazillions of data sets to keep in memory side by side, this isn't worth the effort.
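For example, a minimal sketch in R of packing and unpacking two 4-bit values in one octet:
pack_nibbles   <- function(hi, lo) bitwOr(bitwShiftL(hi, 4), lo)            # hi, lo in 0..15
unpack_nibbles <- function(byte) c(bitwShiftR(byte, 4), bitwAnd(byte, 0x0F))
b <- pack_nibbles(9, 5)   # 149
unpack_nibbles(b)         # 9 5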

Hardware Cache Formulas (Parameter)

The image below was scanned (poorly) from Computer Systems: A Programmer's Perspective. (I apologize to the publisher). This appears on page 489.
Figure 6.26: Summary of cache parameters http://theopensourceu.com/wp-content/uploads/2009/07/Figure-6.26.jpg
I'm having a terribly difficult time understanding some of these calculations. At the current moment, what is troubling me is the calculation for M, which is supposed to be the number of unique addresses: "Maximum number of unique memory addresses." What is 2^m supposed to mean? I think m is calculated as log2(M). This seems circular....
For the sake of this post, assume the following in the event you want to draw up an example: 512 sets, 8 blocks per set, 32 words per block, 8 bits per word
Update: All of the answers posted thus far have been helpful, but I still think I'm missing something. cwrea's answer provides the biggest bridge for my understanding. I feel like the answer is on the tip of my mental tongue. I know it is there but I can't identify it.
Why does M = 2^m but then m = log2(M)?
Perhaps the detail I'm missing is that for a 32-bit machine, we'd assume M = 2^32. Does this single fact allow me to solve for m? m = log2(2^32)? But then this gets me back to 32... I have to be missing something...
m and M are related to each other, not defined in terms of each other. They call M a derived quantity, however, since usually the processor/controller is the limiting factor in terms of the word length it uses.
On a real system they are predefined. If you have an 8-bit processor, it generally can handle 8-bit memory addresses (m = 8). Since you can represent 256 values with 8 bits, you can have a total of 256 memory addresses (M = 2^8 = 256). As you can see, we start with the little m due to the processor constraints, but you could always decide you want a memory space of size M, and use that to select a processor that can handle it based on word size = log2(M).
Now if we take your assumptions for your example,
512 sets, 8 blocks per set, 32 words
per block, 8 bits per word
I have to assume this is an 8-bit processor given the 8-bit words. At that point your described cache is larger than your address space (256 words) and is therefore pretty meaningless.
You might want to check out Computer Architecture Animations & Java applets. I don't recall if any of the cache ones go into the cache structure (usually they focus on behavior), but it is a resource I saved in the past to tutor students in architecture.
Feel free to further refine your question if it still doesn't make sense.
The two equations for M are just a relationship. They are two ways of saying the same thing. They do not indicate causality, though. I think the assumption made by the author is that the number of unique address bits is defined by the CPU designer at the start via requirements. Then the M can vary per implementation.
m is the width in bits of a memory address in your system, e.g. 32 for x86, 64 for x86-64. Block size on x86, for example, is 4K, so b=12. Block size more or less refers to the smallest chunk of data you can read from durable storage -- you read it into memory, work on that copy, then write it back at some later time. I believe tag bits are the upper t bits that are used to look up data cached locally very close to the CPU (not even in RAM). I'm not sure about the set lines part, although I can make plausible guesses that wouldn't be especially reliable.
Circular ... yes, but I think it's just stating that the two variables m and M must obey the equation. M would likely be a given or assumed quantity.
Example 1: If you wanted to use the formulas for a main memory size of M = 4GB (4,294,967,296 bytes), then m would be 32, since M = 2^32, i.e. m = log2(M). That is, it would take 32 bits to address the entire main memory.
Example 2: If your main memory size assumed were smaller, e.g. M = 16MB (16,777,216 bytes), then m would be 24, which is log2(16,777,216).
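Both examples can be checked directly, e.g. in R:
log2(4 * 2^30)    # 4 GB  -> m = 32
log2(16 * 2^20)   # 16 MB -> m = 24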
It seems you're confused by the math rather than the architectural stuff.
2^m ("2 to the m'th power") is 2 * 2... with m 2's. 2^1 = 2, 2^2 = 2 * 2 = 4, 2^3 = 2 * 2 * 2 = 8, and so on. Notably, if you have an m bit binary number, you can only represent 2^m different numbers. (is this obvious? If not, it might help to replace the 2's with 10's and think about decimal digits)
log2(x) ("logarithm base 2 of x") is the inverse function of 2^x. That is, log2(2^x) = x for all x. (This is a definition!)
You need log2(M) bits to represent M different numbers.
Note that if you start with M=2^m and take log2 of both sides, you get log2(M)=m. The table is just being very explicit.
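Putting this together with the example parameters from the question, and assuming a 32-bit address space (m = 32) purely for illustration, a quick R sketch:
m <- 32            # address width in bits, so M = 2^m unique addresses
S <- 512           # number of sets
E <- 8             # blocks (lines) per set
B <- 32            # block size in bytes (32 words * 8 bits per word)
s <- log2(S)       # set index bits    = 9
b <- log2(B)       # block offset bits = 5
t <- m - s - b     # tag bits          = 18
C <- S * E * B     # cache capacity    = 131072 bytes (128 KiB)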
