IEEE 802.16e standard LDPC codes

I'm working on an article about trapping sets in LDPC codes. In the numerical-results section, the authors test their algorithm on codes C1 and C2, which are the girth-6 LDPC codes with block lengths 576 and 1056, respectively, used in the IEEE 802.16e standard, but I can't find the parity-check matrices of these two codes. Can anyone help me?

You can get the standard here: https://standards.ieee.org/standard/802_16e-2005.html
It should contain all the necessary information (assuming the codes really are part of the IEEE 802.16e standard).
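For what it's worth, the 802.16e LDPC codes are quasi-cyclic: the standard specifies each code as a small base matrix of circulant shift values (for the rate-1/2 code it is 12 x 24), expanded by a factor z = n/24, i.e. z = 24 for block length 576 and z = 44 for 1056. Below is a minimal R sketch of that expansion; the base-matrix entries themselves must be taken from the standard (which also specifies how the shift values are scaled for each z), so the function here is only the mechanical part:

expand_qc_ldpc <- function(base, z) {
  # base: matrix of shift values, with -1 marking an all-zero z x z block
  H <- matrix(0, nrow(base) * z, ncol(base) * z)
  for (i in seq_len(nrow(base))) {
    for (j in seq_len(ncol(base))) {
      s <- base[i, j]
      if (s >= 0) {
        # identity with columns cyclically shifted by s: a circulant permutation block
        perm <- diag(z)[, ((seq_len(z) - 1 + s) %% z) + 1]
        H[((i - 1) * z + 1):(i * z), ((j - 1) * z + 1):(j * z)] <- perm
      }
    }
  }
  H
}
# e.g. with the standard's rate-1/2 base matrix B: H1 <- expand_qc_ldpc(B, 24)  # 288 x 576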

Related

Are floating point numbers the same in different systems conforming to IEEE754?

This is related to my latest question: What can I do about the difference between real numbers in R versus PostgreSQL?
I know very little about precision issues and the IEEE754 standard. I read this link, from which I quote (emphasis mine):
Because of its wide use, the format used to store Floating Point numbers in memory has been standardized by the Institute of Electrical and Electronic Engineers in something called IEEE 754.
To me, that means that the number 4104.694 should be stored identically in two different systems conforming to the standard. However, from my previous question, R and Postgres seem to represent this number differently:
des_num <- 4094.694
sprintf("%.64f", des_num)
# "4094.6939999999999599822331219911575317382812500000000000000000000000"
psql_num <- RPostgreSQL::dbGetQuery(con, "select 4104.694;")
sprintf("%.64f", psql_num)
# [1] "4104.6940000000004147295840084552764892578125000000000000000000000000"
Should I expect the same floating point number to be stored in exactly the same way in different systems conforming to the standard?
I don't know whether R has an exact numeric type, but Postgres certainly does: the NUMERIC (also spelled DECIMAL) column type offers exact precision, see here. (Note that REAL, by contrast, is a 4-byte binary float and is not exact.) If you are working with Postgres as a data store behind your scripts, then if you use a NUMERIC column you should be able to store something from R and retrieve exactly the same thing later on.
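As a minimal sketch of such a round trip (assuming, as in the question, that con is an open RPostgreSQL connection; the scratch table is made up for illustration), note that casting back to text on the way out keeps R from coercing the value to a double:

library(RPostgreSQL)  # attaches DBI, providing dbSendQuery/dbGetQuery
dbSendQuery(con, "CREATE TEMPORARY TABLE scratch (val NUMERIC)")
dbSendQuery(con, "INSERT INTO scratch VALUES (4104.694)")
dbGetQuery(con, "SELECT val::text AS val FROM scratch")$val
# [1] "4104.694"  -- NUMERIC round-trips the decimal digits exactly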

Why is NA^0 = 1 in R? [duplicate]

Prompted by a spot of earlier code golfing, why would:
> NaN^0
[1] 1
It makes perfect sense for NA^0 to be 1, because NA is missing data and any number raised to the power 0 gives 1, including -Inf and Inf. However, NaN is supposed to represent not-a-number, so why would this be so? This is even more confusing/worrying given that the help page ?NaN states:
In R, basically all mathematical functions (including basic
Arithmetic), are supposed to work properly with +/- Inf and NaN as
input or output.
The basic rule should be that calls and relations with Infs really are
statements with a proper mathematical limit.
Computations involving NaN will return NaN or perhaps NA: which of
those two is not guaranteed and may depend on the R platform (since
compilers may re-order computations).
Is there a philosophical reason behind this, or is it just to do with how R represents these constants?
This is covered in the references given on the help page for ?'NaN':
"The IEC 60559 standard, also known as the ANSI/IEEE 754 Floating-Point Standard.
http://en.wikipedia.org/wiki/NaN."
And there you find this statement regarding what should create a NaN:
"There are three kinds of operations that can return NaN:
Operations with a NaN as at least one operand.
Indeterminate forms, such as 0/0, Inf/Inf, 0 × Inf, and Inf + (−Inf).
Real operations with complex results, such as the square root or logarithm of a negative number."
The R behavior probably comes from the particular C compiler, as signified by the Note you referenced. This is what the GNU C documentation says:
http://www.gnu.org/software/libc/manual/html_node/Infinity-and-NaN.html
" NaN, on the other hand, infects any calculation that involves it. Unless the calculation would produce the same result no matter what real value replaced NaN, the result is NaN."
So it seems that the GNU C people had a different standard in mind when writing their code. And the 2008 version of the ANSI/IEEE 754 Floating-Point Standard is reported to make that suggestion:
http://en.wikipedia.org/wiki/NaN#Function_definition
The published standard is not free, so if you have access rights or money you can look here:
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=4610933
The answer can be summed up by "for historical reasons".
It seems that IEEE 754 introduced two different power functions, pow and powr, with the latter preserving NaNs in the OP's case and also returning NaN for Inf^0, 0^0, and 1^Inf; but eventually the latter was dropped, as explained briefly here.
Conceptually, I'm in the NaN-preserving camp, because I come at the issue from the viewpoint of limits, but from a convenience point of view I expect the current conventions are slightly easier to deal with, even if they make little sense in some cases (e.g. sqrt(-1)^0 being equal to 1, while all operations stay on the real numbers, makes little sense if any).
Yes, I'm late here, but as an R Core member who was involved in this design, let me recall what I commented above: NaN-preserving and NA-preserving work "equivalently" in R, so if you agree that NA^0 should give 1, then NaN^0 |-> 1 is a consequence.
Indeed (as others said) you should really read R's help pages, not the C or IEEE standards, to answer such questions, and SimonO101 correctly cited
1 ^ y and y ^ 0 are 1, always
and I'm pretty sure that I was heavily involved in (if not the author of) that rule.
Note that it is good, not bad, to be able to provide non-NaN answers, including in cases where other programming languages do things differently.
The consequence of such a rule is that more things work automatically and correctly;
in the other case, the R programmer would have been forced to do more special-casing herself.
Or, put differently, a simple rule such as the above (returning non-NaN in all cases) is a good rule because it propagates continuity in a mathematical sense: lim_x f(x) = f(lim x).
We have had a few cases where it was clearly advantageous (i.e., it needed no special-casing, I'm repeating myself) to adhere to the above "= 1" rule rather than to propagate NaN. As I said further up, sqrt(-1)^0 is such an example, since 1 is the correct result as soon as you extend to the complex plane.
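As a quick sanity check, all of this is easy to verify in an R session:

NA^0             # [1] 1
NaN^0            # [1] 1
1^NaN            # [1] 1
1^NA             # [1] 1
sqrt(-1 + 0i)^0  # [1] 1+0i -- still 1 after extending to the complex plane
0/0              # [1] NaN  -- NaN still arises where no such limit rule applies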
Here's one line of reasoning. From Goldberg:
In IEEE 754, NaNs are often represented as floating-point numbers with
the exponent e_max + 1 and nonzero significands.
So NaN is a floating-point number, though with a special meaning. Raising a number to the power zero sets its exponent to zero, therefore it will no longer be NaN.
Also note:
> 1^NaN
[1] 1
One is a number whose exponent is zero already.
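You can peek at the bit pattern from within R to see this representation (a sketch; the exact payload bits of NaN can vary by platform, but the eleven exponent bits are all ones):

bytes <- writeBin(NaN, raw(), endian = "big")  # 8 bytes, most significant first
paste(sapply(bytes, function(b) paste(rev(as.integer(rawToBits(b))), collapse = "")),
      collapse = " ")
# e.g. "01111111 11111000 00000000 ..." -- sign 0, exponent all ones, nonzero significand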
Conceptually, the only problem with NaN^0 == 1 is that zero values can come about in at least four different ways, but the IEEE format uses the same representation for three of them. The equality makes sense for the most common case (which is one of the three), but not for the others.
BTW, the four cases I would recognize would be:
A literal zero
Unsigned zero: the difference between two numbers that are indistinguishable
Positive infinitesimal: The product or quotient of two numbers of matching sign, which is too small to be distinguished from zero.
Negative infinitesimal: The product or quotient of two numbers of opposite sign, which is too small to be distinguished from zero.
Some of these may be produced via other means (e.g. literal zero could be produced as the sum of two literal zeros; positive infinitesimal by the division of a very small number by a very large one, etc.).
If a floating-point format recognized the above, it could usefully regard raising NaN to a literal zero as yielding one, and raising it to any other kind of zero as yielding NaN; such a rule would allow a constant result to be assumed in many cases where something that might be NaN is raised to something the compiler can identify as a constant zero, without that assumption altering program semantics. Otherwise, I think the issue is that most code isn't going to care whether x^0 might be NaN when x is NaN, and there's not much point in having a compiler add code for conditions the code isn't going to care about. Note that the issue isn't just the code to compute x^0, but any computation based on it that would be constant if x^0 were.
If you look at the type of NaN, it is still a number; it's just not a specific number that can be represented by the numeric type.
EDIT:
For example, take 0/0. What is the result? If you tried to solve this equation on paper, you would get stuck at the very first digit: how many zeros fit into another 0? You could put 0, you could put 1, you could put 8; they all satisfy 0*x = 0, but it's impossible to know which one is the correct answer. However, that does not mean the answer is no longer a number; it's just not a number that can be represented.
Regardless, any number, even a number that you can't represent, raised to the power of zero is still 1. If you break down the math, x^8 * x^0 can be simplified to x^(8+0), which equals x^8. Where did the x^0 go? It makes sense if x^0 = 1, because then x^8 * 1 explains why the x^0 just sort of disappears from existence.

What is the use of hexadecimal values in programming?

This is something I have been wondering about while reading programming books and in my computer science class at school, where we learned how to convert decimal values into hexadecimal.
Can someone please tell me what the advantages of using hexadecimal values are, and why we use them in programming?
Thank you.
In many cases (e.g. bit masks) you need to work in binary, but binary is hard to read because of its length. Since hexadecimal values can be translated to and from binary much more easily than decimal ones, you can look at hex values as a kind of shorthand notation for binary values.
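As a small illustration in R (the flag values here are made up), hex constants make the masked bits obvious in a way decimal constants do not:

flags <- 0xA9               # binary 10101001
bitwAnd(flags, 0x0F)        # low nibble: 9 (binary 1001)
bitwAnd(flags, 0x80) != 0   # is the top bit set? TRUE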
It certainly depends on what you're doing. Hexadecimal comes as an extension of base 2, which you are probably familiar with as essential to computing. Check this out for a good discussion of several applications:
https://softwareengineering.stackexchange.com/questions/170440/why-use-other-number-bases-when-programming/
Each hexadecimal digit corresponds 1:1 to a pattern of 4 bits. With experience, you can map them from memory, e.g. 0x8 = 1000, 0xF = 1111, and correspondingly 0x8F = 10001111.
This is a convenient shorthand wherever the bit patterns matter, e.g. in bit masks or when working with I/O ports. Visualizing the bit pattern of the decimal value 169, by comparison, is much harder.
A byte consists of 8 binary digits and is the smallest piece of data that computers normally work with. All other values a computer works with are constructed from bytes; for example, a single character can be stored in a single byte, and a 32-bit integer consists of 4 bytes.
As bytes are so fundamental we want a way to write down their value as neatly and efficiently as possible. One option would be to use binary, but then we would need a lot of digits. This takes up a lot of space and can be confusing when many numbers are written in sequence:
200 201 202 == 11001000 11001001 11001010
Using hexadecimal notation, we can write every byte using just two digits:
200 == C8
Also, as 16 is a power of 2, it is easy to convert between hexadecimal and binary representations in your head. This is useful as sometimes we are only interested in a single bit within the byte. As a simple example, if the first digit of a hexadecimal representation is 0 we know that the first four binary digits are 0.
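A short R illustration of both points -- converting between the bases and reading off the four bits behind each hex digit:

sprintf("%X", 200)          # "C8"
strtoi("C8", base = 16L)    # 200
paste(rev(as.integer(intToBits(0x8L))[1:4]), collapse = "")  # "1000"
paste(rev(as.integer(intToBits(0xFL))[1:4]), collapse = "")  # "1111"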

Optimal integer encoding that still sorts

One of the neat characteristics of UTF-8 is that if you compare two strings (with <) byte-by-byte, you get the same answer as if you had compared them codepoint-by-codepoint. I was wondering if there was a similar encoding that was optimal in size (e.g. UTF-8 "wastes" space by tagging bytes with 10xxxxxx if they are not the first byte representing a codepoint).
The assumption for optimality here is that a non-negative number n is more frequent than a number m if n < m.
I am most interested in knowing if there is a (byte-comparable) encoding that works for integers, with n more frequent than m if |n| < |m|.
Have you considered a variant of Huffman coding? Traditionally one recursively merges the two least frequent symbols, but to preserve order one could instead merge the two adjacent symbols having the least sum.
Looks like this problem has been well studied (and the greedy algorithm is not optimal). The optimal algorithm was given by Hu and Tucker; it is described here, and in more detail in this thesis.
This paper discussing order-preserving dictionary-based compression also looks interesting.
There are very few standard encodings, and the answer is no. Any further optimization beyond UTF-8 should not be referred to as an "encoding" but as a "compression" -- and lexicographically comparable compression is a different department.
If you are solving a real-world (non-purely-academic) problem, I'd just stick with standard UTF-8. You can learn about its efficiency compared to other standard encodings on utf8everywhere.org.
To fully answer that question you need to know the frequency of the codepoints in the material.
UTF-8 is optimal for texts in English as multi-byte characters are very rare in typical English text.
To encode integers using UTF-8 as a base algorithm would entail mapping the first n integers to a 1-byte encoding, the next m to a 2-byte encoding and so on.
Whether that is an optimal encoding depends on the distribution. If the first n numbers are very frequent compared to higher numbers, then UTF-8 would be (near) optimal.
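As an illustration of the "first n integers in one byte, the next m in two bytes" idea, here is a hedged R sketch of an order-preserving encoding for small non-negative integers, loosely modeled on SQLite4's key encoding (the constants 240 and 2287 are illustrative, not a standard). Every one-byte code starts below 0xF0 and every two-byte code starts at 0xF0 or above, and no code is a prefix of another, so byte-wise lexicographic comparison reproduces numeric order:

encode_sortable <- function(n) {
  stopifnot(n >= 0, n <= 2287)
  if (n < 240) {
    as.raw(n)                          # one byte: 0x00 .. 0xEF
  } else {
    k <- n - 240                       # two bytes, first byte 0xF0 .. 0xF7
    as.raw(c(0xF0 + k %/% 256, k %% 256))
  }
}
encode_sortable(57)    # 39
encode_sortable(239)   # ef
encode_sortable(240)   # f0 00
encode_sortable(500)   # f1 04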

LENGTH Field in IEEE 802.11b

I am simulating the IEEE 802.11b PHY model and building the header of the packet in the physical layer.
As per the literature:
The PLCP LENGTH field shall be an unsigned 16-bit integer that indicates the number of microseconds to transmit the PPDU.
If I assume the packet size to be 1024 bytes, what should the value of the 16-bit LENGTH field be?
The calculation of the LENGTH field depends on the number of bytes to send, as well as on the data rate (5.5 or 11 Mbps). The basic idea of the calculation is:
                     Bytes * 8
LENGTH = Time (µs) = ----------------
                     Data rate (Mbps)
However, you need to read Section 18.2.3.5, Long PLCP LENGTH field in the 802.11b-1999 Standard, pages 15-17. It has the complete details of how to calculate this value, along with several examples. It unambiguously explains how to properly round the data, as well as when the length extension bit in the SERVICE field should be set.
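As a rough sketch of just the basic formula in R (the proper rounding and the length-extension rule must be taken from Section 18.2.3.5; rounding the microsecond count up is shown here, but verify against the standard's examples):

plcp_length_us <- function(bytes, rate_mbps) {
  ceiling(bytes * 8 / rate_mbps)  # transmit time in microseconds
}
plcp_length_us(1024, 11)   # 745
plcp_length_us(1024, 5.5)  # 1490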
I will not reproduce the text of the section here since it looks like IEEE might be strict about enforcing their copyright. However, if you don't have the standard already, I suggest you download it now from the link above -- it's free!
If you have any questions about interpreting the standard, don't hesitate to ask.
