In this paper, what does a 10 B payload mean?
"...a packet with 10B payload and 12.25 symbols preamble..."
[LoRa for the Internet of Things]
On the one hand, I vote to close that question because it has nothing to do with programming.
On the other hand, I got interested in the subject and had a look into that paper.
After some minutes of reading, I would say that this is one of those low-quality papers whose authors were in desperate need to publish something in a hurry. For example, they talk about preambles and headers without explaining what those are supposed to be, write "chips" where they obviously mean "chirps", claim the payload FEC is always hard-coded in the header as 4/8 but then use a FEC of 4/5 in their example, and so on.
Furthermore, the paper is full of orthographic errors.
In summary, I will never understand how such papers can pass peer review. This does not satisfy any reasonable academic or scientific standard.
The rest of my initial answer was wrong. So I have re-written it based on #arminb's answer.
As #arminb has pointed out, the correct answer is: It means 10 bytes.
In addition to #arminb's answer and for those who don't want to just believe a calculator output, I have tried to figure out the details of the calculation.
First of all, in their world, a symbol is as long as a byte. This can be concluded without any doubt from the following calculations.
Secondly, the preamble is not subject to FEC (coding rate): The preamble is 12.25 symbols, which is (12.25 * 4096) chirps = 50176 chirps, taking into account that the spreading factor SF = 12, i.e. there are 4096 chirps / symbol. Given a bandwidth of 125 kHz = 125000 chirps / s, and without FEC, the time the preamble takes is 50176 chirps / (125000 chirps / s) = 0.401408 s ≈ 401.41 ms. This is exactly the number the calculator in #arminb's answer shows.
Third, in the example, the header is three bytes and is likewise not subject to FEC. Thus, the time the header takes is (3 * 4096 chirps) / (125000 chirps / s) = 0.098304 s ≈ 98.30 ms, provided that a byte is as long as a symbol.
Fourth, the payload is 10 bytes and is subject to FEC; the coding rate is 4/5, so the payload expands by a factor of 5/4. Thus, the time the payload takes is (10 * (5/4) * 4096 chirps) / (125000 chirps / s) = 0.4096 s = 409.60 ms, provided that a byte is as long as a symbol.
Fifth, the CRC is two bytes and is also subject to FEC with the same 5/4 expansion. Thus, the time the CRC takes is (2 * (5/4) * 4096 chirps) / (125000 chirps / s) = 0.08192 s = 81.92 ms, provided that a byte is as long as a symbol.
Adding all these times, we get 0.401408 s + 0.098304 s + 0.4096 s + 0.08192 s = 0.991232 s ≈ 991.23 ms which is exactly the on-air time for that example packet in the calculator in #arminb's answer.
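For readers who prefer code over prose, here is a minimal Python sketch of the simplified model used above (one symbol per byte, preamble and header without FEC, payload and CRC expanded by the inverse coding rate). The function and parameter names are mine, not from the paper or from Semtech:

    def simplified_lora_airtime_s(payload_bytes=10, crc_bytes=2, header_bytes=3,
                                  preamble_symbols=12.25, spreading_factor=12,
                                  bandwidth_hz=125000, coding_rate=4/5):
        """On-air time in seconds under the simplified model described above."""
        chirps_per_symbol = 2 ** spreading_factor            # 4096 for SF12
        chirps_per_second = bandwidth_hz                      # 125 kHz -> 125000 chirps/s
        preamble = preamble_symbols * chirps_per_symbol       # no FEC
        header = header_bytes * chirps_per_symbol             # no FEC
        payload = payload_bytes * (1 / coding_rate) * chirps_per_symbol   # 5/4 expansion
        crc = crc_bytes * (1 / coding_rate) * chirps_per_symbol           # 5/4 expansion
        return (preamble + header + payload + crc) / chirps_per_second

    print(simplified_lora_airtime_s())   # ~0.991232 s, i.e. about 991.23 ms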
I hope that this helps further readers who want to understand what is going on behind the scenes.
Semtech provides a calculator for its LoRa chip SX1272. When you fill the parameters from the example (LoRa for the Internet of Things) into the calculator:
To give an example we assume SF12, BW125, CR4/5, and TX power 17 dBm
(An energy hungry setting allowing very long ranges which was used in
our experimental evaluation discussed later). A transmission of a
packet with 10 B payload and 12.25 symbols preamble has a transmission
duration of 991.23 ms.
You get exactly 991.23 ms. You can also see in the calculator that 10 B is supposed to mean bytes:
Generally, an upper-case B means bytes, while a lower-case b means bits.
Suppose an analog audio signal is sampled 16,000 times per second, and each sample is quantized into one of 1024 levels. What would be the resulting bit rate of the PCM digital audio signal?
This is a question from the Top-Down Approach book. I answered it, but just want to make sure it is correct.
My answer is:
1024 = 2^10
so PCM bit rate = 10 * 16000 = 160,000 bps
Is that correct?
Software often makes a trade-off between time and space. Your answer is correct; however, to write software you typically read/write data in storage units of bytes (8 bits). Since your answer says 10 bits, your code would use two bytes (16 bits) per sample. So the file consumption rate would be 16 * 16000 = 256,000 bits per second (32,000 bytes per second). This is mono, so stereo would double this. Writing your software to actually store 10 bits per sample instead of 16 bits would shift this time/space trade-off in the direction of increased computational time (and code complexity) to save storage space.
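To make the numbers above concrete, here is a small sketch of the arithmetic (my own illustration, using the figures from the question):

    import math

    sample_rate = 16000                               # samples per second
    levels = 1024                                     # quantization levels
    bits_per_sample = math.ceil(math.log2(levels))    # 10 bits per sample

    pcm_bit_rate = bits_per_sample * sample_rate      # 10 * 16000 = 160,000 bps
    stored_bit_rate = 16 * sample_rate                # 256,000 bps if each sample is padded to 2 bytes

    print(pcm_bit_rate, stored_bit_rate)              # 160000 256000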
I am simulating the IEEE802.11b PHY Model. I am building the header of the Packet in the Physical Layer.
As per the Literature
The PLCP LENGTH field shall be an unsigned 16-bit integer that indicates the number of microseconds to transmit the PPDU.
If I assume the packet size to be 1024 bytes, what should be the value of the LENGTH field (16 bits wide)?
The calculation of the LENGTH field depends on the number of bytes to send, as well as on the data rate (5.5 or 11 Mbps). The basic idea of the calculation is:
    LENGTH = Time (µs) = (Bytes * 8) / (Data rate in Mbps)
However, you need to read Section 18.2.3.5, Long PLCP LENGTH field in the 802.11b-1999 Standard, pages 15-17. It has the complete details of how to calculate this value, along with several examples. It unambiguously explains how to properly round the data, as well as when the length extension bit in the SERVICE field should be set.
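As a rough sketch of the basic formula only (this deliberately ignores the exact rounding rules and the length extension bit, which you must take from Section 18.2.3.5; the function name and rounding choice here are mine):

    import math

    def plcp_length_us(num_bytes, rate_mbps):
        """Rough LENGTH estimate in microseconds: ceil(bytes * 8 / rate).
        The precise rounding and the SERVICE length-extension bit handling
        are defined in IEEE 802.11b-1999, Section 18.2.3.5."""
        return math.ceil(num_bytes * 8 / rate_mbps)

    print(plcp_length_us(1024, 11))    # 745 microseconds for a 1024-byte PPDU at 11 Mbps
    print(plcp_length_us(1024, 5.5))   # 1490 microseconds at 5.5 Mbps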
I will not reproduce the text of the section here since it looks like IEEE might be strict about enforcing their copyright. However, if you don't have the standard already, I suggest you download it now from the link above -- it's free!
If you have any questions about interpreting the standard, don't hesitate to ask.
I've seen three ways of doing conversion from bytes to megabytes:
megabytes=bytes/1000000
megabytes=bytes/1024/1024
megabytes=bytes/1024/1000
Ok, I think #3 is totally wrong but I have seen it. I think #2 is right, but I am looking for some respected authority (like W3C, ISO, NIST, etc) to clarify which megabyte is a true megabyte. Can anyone cite a source that explicitly explains how this calculation is done?
Bonus question: if #2 is a megabyte what are #1 and #3 called?
BTW: Hard drive manufacturers don't count as authorities on this one!
Traditionally, by megabyte we mean your second option: 1 megabyte = 2^20 bytes. Strictly speaking this is not correct, because mega means 1,000,000. There is a newer standard name for 2^20 bytes, the mebibyte (http://en.wikipedia.org/wiki/Mebibyte), and it is gaining popularity.
There's an IEC standard that distinguishes the terms, e.g. mebibyte = 1024^2 bytes but megabyte = 1000^2 bytes (in order to be compatible with SI units like kilograms, where k/M/... means 1000/1,000,000). Actually, most people in the IT area will prefer megabyte = 1024^2, and hard disk manufacturers will prefer megabyte = 1000^2 (because hard disk sizes will sound bigger than they are).
As a matter of fact, most people are confused by the IEC standard (multiplier 1000) and the traditional meaning (multiplier 1024). In general you shouldn't make assumptions about what people mean. For example, 128 kBit/s for MP3s usually means 128,000 bits per second, because the multiplier 1000 is mostly used with the unit bits. But often people then call 2048 kBit/s equal to 2 MBit/s - confusing, eh?
So as a general rule, don't trust bit/byte units at all ;)
Divide by 2 to the power of 20: (1024*1024) bytes = 1 megabyte.
1024*1024 = 1,048,576
2^20 = 1,048,576
1,048,576/1,048,576 = 1
It is the same thing.
BTW: Hard drive manufacturers don't count as authorities on this one!
Oh, yes they do (and the definition they assume from the S.I. is the correct one). On a related issue, see this post on CodingHorror.
To convert bytes to megabytes (MB), use totalbyte / 1000 / 1000.
To convert bytes to mebibytes (MiB), use totalbyte / 1024 / 1024.
https://en.wikipedia.org/wiki/Byte#Multiple-byte_units
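If it helps, here is a tiny sketch with both conversions side by side (the function names are mine):

    def bytes_to_megabytes(n):     # decimal / SI: 1 MB = 1,000,000 bytes
        return n / 1000 / 1000

    def bytes_to_mebibytes(n):     # binary / IEC: 1 MiB = 1,048,576 bytes
        return n / 1024 / 1024

    print(bytes_to_megabytes(1048576))   # 1.048576
    print(bytes_to_mebibytes(1048576))   # 1.0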
The answer is that #1 is technically correct based on the real meaning of the mega prefix. However (and in life there is always a however), the math for that doesn't come out so nicely in base 2, which is how computers count, so #2 is what people really use.
Megabyte means 2^20 bytes. I know that technically that doesn't mesh with the SI units, and that some folks have come up with a new terminology to mean 2^20. None of that matters. Efforts to change the language to "clarify" things are doomed to failure.
Hard-drive manufacturers use it to mean 1,000,000 bytes, because that's what it means in SI so they figure technically they aren't lying (while actually they are). That falls under lies, damn lies, and marketing.
Use the computation your users will most likely expect. Do your users care to know how many actual bytes are on a disk or in memory or whatever, or do they only care about usable space? The answer to that question will tell you which calculation makes the most sense.
This isn't a precision question as much as it is a usability question. Provide the calculation that is most useful to your users.
In general, it's wrong to use decimal SI prefixes (e.g. kilo, mega) when referring to binary data sizes (except in casual usage). It's ambiguous and causes confusion. To be precise you can use binary prefixes (e.g. 1 mebibyte = 1 MiB = 1024 kibibytes = 2^20 bytes). When someone else uses decimal SI prefixes for binary data you need to get more information before you can know what is meant.
Microsoft Windows Explorer shows file size in the "Properties" window. This is a conversion from the byte count using 2^20.
A question in my university homework asks why the one's complement is used instead of just the sum of bits in the TCP checksum. I can't find it in my book and Google isn't helping. Any chance someone can point me in the right direction?
Thanks,
Mike
Since this is a homework question, here is a hint:
Suppose you calculated a second checksum over the entire packet, including the first checksum. Is there a mathematical expression that would determine the result?
Probably the most important reason is that it is endian-independent.
Little-endian computers (Intel processors, for example) store the least significant byte first; big-endian computers (IBM mainframes, for example) store it last. When the carry is added to the LSB to form the 1's complement sum, it doesn't matter whether we add 03 + 01 or 01 + 03: the result is the same.
Other benefits include the ease of verifying a transmission and of computing the checksum, plus a variety of ways to speed up the calculation by updating only the IP fields that have changed.
Ref: http://www.netfor2.com/checksum.html
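To illustrate the 03 + 01 example above, here is a small sketch of a 16-bit one's complement sum with end-around carry (my own illustration, not taken from the linked page):

    def ones_complement_sum16(words):
        """16-bit one's complement sum: the carry wraps around into the LSB."""
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return total

    # The order of the operands does not change the result.
    print(hex(ones_complement_sum16([0x0003, 0x0001])))   # 0x4
    print(hex(ones_complement_sum16([0x0001, 0x0003])))   # 0x4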
The image below was scanned (poorly) from Computer Systems: A Programmer's Perspective. (I apologize to the publisher). This appears on page 489.
Figure 6.26: Summary of cache parameters http://theopensourceu.com/wp-content/uploads/2009/07/Figure-6.26.jpg
I'm having a terribly difficult time understanding some of these calculations. At the current moment, what is troubling me is the calculation for M, which is supposed to be the number of unique addresses: "Maximum number of unique memory addresses." What is 2^m supposed to mean? I think m is calculated as log2(M). This seems circular...
For the sake of this post, assume the following in the event you want to draw up an example: 512 sets, 8 blocks per set, 32 words per block, 8 bits per word
Update: All of the answers posted thus far have been helpful, but I still think I'm missing something. cwrea's answer provides the biggest bridge for my understanding. I feel like the answer is on the tip of my mental tongue. I know it is there, but I can't identify it.
Why does M = 2^m but then m = log2(M)?
Perhaps the detail I'm missing is that for a 32-bit machine, we'd assume M = 2^32. Does this single fact allow me to solve for m? m = log2(2^32)? But then this gets me back to 32... I have to be missing something...
m & M are related to each other, not defined in terms of each other. They call M a derived quantity however since usually the processor/controller is the limiting factor in terms of the word length it uses.
On a real system they are predefined. If you have an 8-bit processor, it generally can handle 8-bit memory addresses (m = 8). Since you can represent 256 values with 8 bits, you can have a total of 256 memory addresses (M = 2^8 = 256). As you can see, we start with the little m due to the processor constraints, but you could always decide you want a memory space of size M and use that to select a processor that can handle it, based on word size = log2(M).
Now if we take your assumptions for your example,
512 sets, 8 blocks per set, 32 words
per block, 8 bits per word
I have to assume this is an 8-bit processor given the 8-bit words. At that point, the cache you describe is larger than your address space (256 words) and is therefore pretty meaningless.
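A quick sanity check of that claim (my own arithmetic, using the parameters from the question):

    sets = 512
    blocks_per_set = 8
    words_per_block = 32

    cache_capacity_words = sets * blocks_per_set * words_per_block   # 131072 words
    address_space_words = 2 ** 8                                     # 256 words for an 8-bit address

    print(cache_capacity_words, address_space_words)   # 131072 256 -- the cache far exceeds the address space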
You might want to check out Computer Architecture Animations & Java applets. I don't recall if any of the cache ones go into the cache structure (usually they focus on behavior), but it is a resource I saved in the past to tutor students in architecture.
Feel free to further refine your question if it still doesn't make sense.
The two equations for M are just a relationship. They are two ways of saying the same thing. They do not indicate causality, though. I think the assumption made by the author is that the number of unique address bits is defined by the CPU designer at the start via requirements. Then the M can vary per implementation.
m is the width in bits of a memory address in your system, e.g. 32 for x86, 64 for x86-64. Block size on x86, for example, is 4K, so b=12. Block size more or less refers to the smallest chunk of data you can read from durable storage -- you read it into memory, work on that copy, then write it back at some later time. I believe tag bits are the upper t bits that are used to look up data cached locally very close to the CPU (not even in RAM). I'm not sure about the set lines part, although I can make plausible guesses that wouldn't be especially reliable.
Circular ... yes, but I think it's just stating that the two variables m and M must obey the equation. M would likely be a given or assumed quantity.
Example 1: If you wanted to use the formulas for a main memory size of M = 4GB (4,294,967,296 bytes), then m would be 32, since M = 2^32, i.e. m = log2(M). That is, it would take 32 bits to address the entire main memory.
Example 2: If your main memory size assumed were smaller, e.g. M = 16MB (16,777,216 bytes), then m would be 24, which is log2(16,777,216).
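The two examples above, spelled out in code (variable names are mine):

    import math

    for M in (4 * 1024**3, 16 * 1024**2):    # 4 GB and 16 MB of main memory
        m = int(math.log2(M))                # address bits needed
        print(f"M = {M:,} bytes -> m = {m}")

    # M = 4,294,967,296 bytes -> m = 32
    # M = 16,777,216 bytes -> m = 24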
It seems you're confused by the math rather than the architectural stuff.
2^m ("2 to the m'th power") is 2 * 2... with m 2's. 2^1 = 2, 2^2 = 2 * 2 = 4, 2^3 = 2 * 2 * 2 = 8, and so on. Notably, if you have an m bit binary number, you can only represent 2^m different numbers. (is this obvious? If not, it might help to replace the 2's with 10's and think about decimal digits)
log2(x) ("logarithm base 2 of x") is the inverse function of 2^x. That is, log2(2^x) = x for all x. (This is a definition!)
You need log2(M) bits to represent M different numbers.
Note that if you start with M=2^m and take log2 of both sides, you get log2(M)=m. The table is just being very explicit.