I am simulating the IEEE 802.11b PHY model and building the packet header in the physical layer.
As per the literature:
The PLCP LENGTH field shall be an unsigned 16-bit integer that indicates the number of microseconds to transmit the PPDU.
If I assume the packet size to be 1024 bytes, what should be the value of the LENGTH field (16 bits wide)?
The calculation of the LENGTH field depends on the number of bytes to send, as well as on the data rate (5.5 or 11 Mbps). The basic idea of the calculation is:
                     Bytes * 8
LENGTH = Time (µs) = ----------------
                     Data rate (Mbps)
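For the concrete numbers in the question, here is a minimal Python sketch of that arithmetic (it only rounds up; the exact rounding and length-extension rules are in the section cited below, so treat this as an approximation to verify against the standard):

    import math

    def plcp_length_us(num_octets, rate_mbps):
        # LENGTH is the transmit time in microseconds, rounded up to an integer.
        # (At 11 Mbps the standard also defines a length extension bit in the
        # SERVICE field to resolve the ambiguity this rounding introduces.)
        return math.ceil(num_octets * 8 / rate_mbps)

    print(plcp_length_us(1024, 11))   # 745
    print(plcp_length_us(1024, 5.5))  # 1490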
However, you need to read Section 18.2.3.5, "Long PLCP LENGTH field", in the 802.11b-1999 standard, pages 15-17. It has the complete details of how to calculate this value, along with several examples. It unambiguously explains how to properly round the value, as well as when the length extension bit in the SERVICE field should be set.
I will not reproduce the text of the section here since it looks like IEEE might be strict about enforcing their copyright. However, if you don't have the standard already, I suggest you download it now from the link above -- it's free!
If you have any questions about interpreting the standard, don't hesitate to ask.
I've been working with DICOM files for a few days, using FO-DICOM.
I'm using a set of DICOM files for my tests, and I've been printing the "Photometric Interpretation" and the "Samples Per Pixel" values to get a better understanding of what kind of images I'm working with.
The result was "MONOCHROME2" for the Photometric Interpretation and "1" for the Samples Per Pixel.
What I understood by reading Part 3 of the standard is that MONOCHROME2 represents a grayscale, starting from black for its minimum values.
But what is the Samples Per Pixel exactly? I thought it was the number of bytes (and not bits) per pixel (it would be logical to have 8 bits per pixel for a grayscale, right?).
But my problem here is that my images actually seem to have 32 bpp.
I'm working with 512*512-pixel images, which I converted into byte arrays. So I was expecting arrays of 512*512 = 262144 bytes.
But I get arrays of 1048630 bytes (which is a bit more than 4*262144).
Does someone have an explanation?
EDIT:
Here are some of my data:
PhotometricInterpretation=MONOCHROME2
SamplesPerPixel=1
BitsAllocated=16
BitsStored=12
HighBit=11
PixelRepresentation=0
NumberOfFrames=0
The attribute (0028,0002) SamplesPerPixel refers to color images only and tells you the number of planes present in the image (e.g. 3 for RGB), so there you would have
PhotometricInterpretation=RGB
SamplesPerPixel=3
with 8 bits per sample (I will get to bits per pixel below). As long as you have PhotometricInterpretation = MONOCHROME1 or MONOCHROME2, you can expect SamplesPerPixel to be 1 and nothing else.
What you do have to take into consideration is the number of bits per pixel:
BitsAllocated (0028,0100)
BitsStored (0028,0101)
HighBit (0028,0102)
These tell you how many bits are used to encode a pixel value (BitsAllocated) and which of these bits really contain grayscale information (BitsStored, HighBit). HighBit is zero-based and is usually, but not necessarily, BitsStored - 1.
An example to illustrate this: for CT images, it is very common to express gray values in Hounsfield units, which range from -1000 to +3000. These are represented by 12 bits, stored with a 2-byte alignment, so:
BitsAllocated (0028,0100) = 16
BitsStored (0028,0101) = 12
HighBit (0028,0102) = 11
Another degree of freedom is PixelRepresentation, which tells you whether the pixel data is encoded unsigned (0) or in two's complement (1). I have seen both for CT images; however, signed pixel data is rather unusual for image types other than CT.
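To make the interplay of these attributes concrete, here is a minimal Python/NumPy sketch of how a raw MONOCHROME pixel buffer could be decoded. The helper is hypothetical (it is not the FO-DICOM API), and it assumes a little-endian transfer syntax with BitsAllocated of 8 or 16:

    import numpy as np

    def decode_monochrome_pixels(raw, bits_allocated, bits_stored, high_bit,
                                 pixel_representation, rows, cols):
        # Hypothetical helper, not part of FO-DICOM.
        dtype = np.uint8 if bits_allocated == 8 else np.dtype("<u2")
        pixels = np.frombuffer(raw, dtype=dtype, count=rows * cols).astype(np.int32)

        # Keep only the bits that really carry grayscale information:
        # bits (HighBit - BitsStored + 1) .. HighBit.
        shift = high_bit - bits_stored + 1
        mask = (1 << bits_stored) - 1
        pixels = (pixels >> shift) & mask

        if pixel_representation == 1:
            # Stored values are two's complement: sign-extend from BitsStored bits.
            sign_bit = 1 << (bits_stored - 1)
            pixels = (pixels ^ sign_bit) - sign_bit

        return pixels.reshape(rows, cols)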
In your example, I would assume that BitsAllocated == 32, or (not very likely) that you have a dataset containing multiple images ('frames'), i.e. NumberOfFrames (0028,0008) > 1. If Number of Frames is absent, you can safely assume there is only one frame.
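As a quick sanity check on the numbers you posted (a back-of-the-envelope sketch assuming uncompressed pixel data):

    def expected_pixel_bytes(rows, cols, samples_per_pixel, bits_allocated, frames=1):
        # Size of the raw pixel data only; the file as a whole also carries
        # the DICOM header, which accounts for a few extra bytes.
        return rows * cols * samples_per_pixel * (bits_allocated // 8) * frames

    print(expected_pixel_bytes(512, 512, 1, 16))  # 524288
    print(expected_pixel_bytes(512, 512, 1, 32))  # 1048576, close to your 1048630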
I have over-simplified a bit here, especially about color images, but I think this is complicated enough ;-). Basically, DICOM offers every conceivable degree of freedom to encode pixel data and to describe the encoding in the header.
I think I recommended in a recent post that you have a look at the DCMTK. Its DicomImage class features a nice interface (getInterData()) which takes care of all that stuff and provides the pixel data read from a DICOM file in a normalized format.
[EDIT]: Feel free to post a DICOM dump of your dataset here; I would have a look at it and tell you how to interpret the pixel data.
How does the 68000 internally represent instructions?
I've read that there are different instruction formats: the single effective-address operation word format, and the brief and full extension word formats. The single effective-address operation word seems to encode the instruction, with its lower 6 bits specifying the addressing mode and register. Do this addressing mode and register tell you whether a brief or full extension word follows, which in turn encodes the operands of the instruction? And do you know a better manual than the M68000 Programmer's Reference Manual?
Thanks in advance.
The actual internal representation is a combination of "microcode" and "nanocode". The 68000 has 544 17-bit microcode words, which dispatch to 366 68-bit nanocode words.
While this may not be what you wanted to know, this link may provide some insights:
http://www.easy68k.com/paulrsm/doc/dpbm68k1.htm
Right. On the 68000, the indexed modes use the brief extension word (BEW). In "Address Register Indirect with Index (8-Bit Displacement) Mode" (d8,An,Xn), the BEW is filled with D/A (whether Xn is a data or an address register), Xn (the register number), W/L (whether to treat Xn's contents as 16 or 32 bits), scale = 0 (see the note below), and the 8-bit displacement.
On the other hand, other modes, such as the 16-bit displacement mode "Address Register Indirect with Displacement" (d16,An), use an extension that is just one word holding the displacement.
The note: in the brief extension word, the 68000 doesn't support the 2 scale bits, so they are set to 0; scaling via the scale bits, and full extension words, are only supported on the 68020 and later CPUs. http://etd.dtu.dk/thesis/264182/bac10_19.pdf
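For illustration, here is a small Python sketch that picks apart a brief extension word, following the field layout documented in the Programmer's Reference Manual (on the 68000, the scale bits and bit 8 must be 0):

    def decode_brief_extension_word(bew):
        # Bit layout: 15 = D/A, 14-12 = Xn, 11 = W/L, 10-9 = scale, 8 = 0, 7-0 = d8.
        da    = (bew >> 15) & 0x1   # 0 = data register, 1 = address register
        xn    = (bew >> 12) & 0x7   # index register number
        wl    = (bew >> 11) & 0x1   # 0 = sign-extended word, 1 = longword
        scale = (bew >> 9) & 0x3    # always 0 on the 68000
        d8    = bew & 0xFF
        if d8 & 0x80:               # sign-extend the 8-bit displacement
            d8 -= 0x100
        return da, xn, wl, scale, d8

    # Example: index register D3.W with displacement -6.
    print(decode_brief_extension_word(0x30FA))  # (0, 3, 0, 0, -6)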
I want code to represent n bits with n + x bits, non-sequentially. I'd Google it, but my Google-fu isn't working because I don't know the term for it.
For example, the input value in the first column (2 bits) might be encoded as any of the output values in the comma-delimited second column (4 bits) below:
0 1,2,7,9
1 3,8,12,13
2 0,4,6,11
3 5,10,14,15
My goal is to take a list of integer IDs and transform them in a way that they can still be used for persistent URLs, but cannot be iterated/enumerated sequentially, and where a client cannot programmatically determine whether a URL in a search result set has been visited previously without visiting it again.
I would term this process "encoding". You'll see something similar done to permit the use of communications channels that have special symbols that are not permitted in data. Examples: uuencoding and base64 encoding.
That said, you still need to ensure that each output value decodes to exactly one input (which, at first blush, your table appears to do), and accept the increase in the size of the output (in the case above, the output will be double the size of the input, bit for bit).
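A minimal Python sketch of that idea, built directly on your example table (the emitted codeword is randomized, but every codeword decodes to exactly one input):

    import random

    ENCODE = {0: [1, 2, 7, 9], 1: [3, 8, 12, 13], 2: [0, 4, 6, 11], 3: [5, 10, 14, 15]}
    DECODE = {cw: v for v, cws in ENCODE.items() for cw in cws}  # unique decode

    def encode(value):
        return random.choice(ENCODE[value])  # any listed codeword is a valid encoding

    def decode(codeword):
        return DECODE[codeword]

    assert all(decode(encode(v)) == v for v in ENCODE for _ in range(10))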
I think you'd be better off encrypting the number with a cheap cipher plus a constant secret key stored on your server(s), adding a random character or four at the end, and a cheap checksum, and simply rejecting any responses that don't have a valid checksum.
<encrypt(secret)>
    <integer> + <random nonsense>
</encrypt>
+
<checksum()>
    <integer> + <random nonsense>
</checksum>
Then decrypt the first part (remember, cheap == fast), validate the decrypted payload against the checksum, throw away the random nonsense, and use the integer you stored.
There are probably some cryptographic no-nos here, but let's face it: the cost of this algorithm being broken is a touch on the low side.
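To sketch the shape of this in Python (a toy, not production crypto: the "cheap cipher" here is a hash-derived pad, the "cheap checksum" is a truncated HMAC, and all names are made up):

    import hashlib, hmac, os, struct

    SECRET = b"server-side secret key"  # assumption: stored only on the server

    def make_token(integer_id):
        nonce = os.urandom(4)                                # the "random nonsense"
        pad = hashlib.sha256(SECRET + nonce).digest()[:4]    # toy keystream
        ct = bytes(a ^ b for a, b in zip(struct.pack(">I", integer_id), pad))
        mac = hmac.new(SECRET, nonce + ct, hashlib.sha256).digest()[:4]  # checksum
        return (nonce + ct + mac).hex()

    def read_token(token):
        raw = bytes.fromhex(token)
        nonce, ct, mac = raw[:4], raw[4:8], raw[8:]
        expected = hmac.new(SECRET, nonce + ct, hashlib.sha256).digest()[:4]
        if not hmac.compare_digest(mac, expected):
            return None                                      # reject: bad checksum
        pad = hashlib.sha256(SECRET + nonce).digest()[:4]
        return struct.unpack(">I", bytes(a ^ b for a, b in zip(ct, pad)))[0]

    assert read_token(make_token(12345)) == 12345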
I'm making a network application which doesn't receive good data every time (most of the time it is corrupted), so I thought I'd add a checksum. At the end of the data I will append the checksum to verify that the data is valid. I'm not sure whether it's a good idea to multiply every value (they range from 1 to 100) by 100, 100^2, 100^3, ..., and sum them.
Do you have any suggestion what to do instead, without producing a really big number (there are many values in every packet)?
Example:
Data: 1,4,2,77,12,32,5,52,23
My solution: 1,4,2,77,12,32,5,52,23, then append 1*100 + 4*100^2 + 2*100^3 + 77*100^4 + ...
When the client receives the packet, it will check whether the last value equals the sum of the other values.
Is there any better solution?
Multiplying the data results in a very large number to transmit and not a lot of confidence that the numbers are correct, and addition runs into potential overflow issues. That is why it is customary to use an XOR.
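For example, an XOR checksum over your sample packet might look like this (a sketch; note that a single XOR byte only catches fairly simple errors):

    from functools import reduce

    def xor_checksum(values):
        # Fold the payload with XOR: cheap, constant-size, no overflow.
        return reduce(lambda a, b: a ^ b, values, 0)

    packet = [1, 4, 2, 77, 12, 32, 5, 52, 23]
    packet.append(xor_checksum(packet))             # sender appends the checksum
    assert xor_checksum(packet[:-1]) == packet[-1]  # receiver verifies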
Or you can read up on http://en.wikipedia.org/wiki/Error-correcting_code to find even fancier solutions that can detect, and sometimes correct, small numbers of errors.
Best explanation here:
http://www.textfiles.com/programming/crc.txt
CRC functions will be available in your language's networking library.
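In Python, for instance, a 32-bit CRC ships in the standard library as zlib.crc32; a sketch:

    import zlib

    payload = bytes([1, 4, 2, 77, 12, 32, 5, 52, 23])
    packet = payload + zlib.crc32(payload).to_bytes(4, "big")  # append CRC-32

    data, received_crc = packet[:-4], int.from_bytes(packet[-4:], "big")
    assert zlib.crc32(data) == received_crc  # receiver re-computes and compares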
We're going to subnet the Class C network address 192.168.10.0. Because 128 is 10000000 in binary, there is only 1 bit for subnetting, and there are 7 bits for hosts.
192.168.10.0    = Network address
255.255.255.128 = Subnet mask
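A quick way to see the resulting subnets, sketched with Python's standard ipaddress module:

    import ipaddress

    net = ipaddress.ip_network("192.168.10.0/24")
    for subnet in net.subnets(prefixlen_diff=1):  # borrow 1 bit -> two /25s
        hosts = list(subnet.hosts())
        print(subnet, "->", hosts[0], "-", hosts[-1], f"({len(hosts)} hosts)")

    # 192.168.10.0/25   -> 192.168.10.1   - 192.168.10.126 (126 hosts)
    # 192.168.10.128/25 -> 192.168.10.129 - 192.168.10.254 (126 hosts)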
I have some base-64 encoded encrypted data and noticed a fair amount of repetition. In an (approximately) 200-character-long string, a certain base-64 character is repeated up to 7 times in several separate repeated runs.
Is this a red flag that there is a problem in the encryption? According to my understanding, encrypted data should never show significant repetition, even if the plaintext is entirely uniform (i.e. even if I encrypt 2 GB of nothing but the letter A, there should be no significant repetition in the encrypted version).
According to the binomial distribution, there is about a 2.5% chance that you'd see one character from a set of 64 appear seven times in a series of 200 random characters. That's a small chance, but not negligible. With more information, you might raise your confidence from 97.5% to something very close to 100% … or find that the cipher text really is uniformly distributed.
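That figure is easy to reproduce with the exact binomial formula (a quick check in Python; math.comb needs Python 3.8+):

    from math import comb

    n, p, k = 200, 1 / 64, 7  # 200 base-64 characters, one specific symbol, 7 hits
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"{prob:.3%}")      # about 2.5%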
You say that the "character is repeated up to 7 times" in several separate repeated runs. That's not enough information to say whether the cipher text has a bias. Instead, tell us the total number of times the character appeared, and the total number of cipher text characters. For example, "it appeared a total of 3125 times in 1000 runs of 200 characters each."
Also, you need to be sure that you are talking about the raw output of a cipher. Cipher text is often encapsulated in an "envelope" like that defined by the Cryptographic Message Syntax. Of course, this enclosing structure will have predictable patterns.
Well, I guess it depends. Repetition in general is a bad thing if it represents the same data.
Since you are the one encoding the data, have you looked at it to see whether something in it repeats at those counts?
To understand this better, you have to know what kind of encryption is being used.
It could be just a coincidence that the characters repeat.
But if the repetition comes from the same data, then it can be a red flag, because frequency counts could then be used to decode it.
What kind of encryption are you using: home-made or an industry standard?
It depends on how you are encrypting your data.
Base64-encoding a string may count as light obfuscation, but it is NOT encryption. The purpose of Base64 encoding is to allow any sort of binary data to be encoded as a safe ASCII string.
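To see why it is not encryption: anyone can reverse it without any key, e.g. in Python:

    import base64

    secret = b"attack at dawn"
    encoded = base64.b64encode(secret)          # b'YXR0YWNrIGF0IGRhd24='
    assert base64.b64decode(encoded) == secret  # fully reversible, no key needed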