How is data stored on disk? - EFI GUID

I posted this question earlier on SuperUser but I feel it is more suited for programmers.
If I understand correctly, according to GPT, the first 16 bytes of LBA 2 are the partition type GUID for the first partition on the disk. In Windows Disk Management the first partition is designated as an EFI System Partition. However, upon further investigation, an EFI System Partition's type GUID is:
C12A7328-F81F-11D2-BA4B-00A0C93EC93B
And yet the first 16 bytes tell me otherwise:
28732AC1-1FF8-D211-BA4B-00A0C93EC93B
Interestingly, the first three groups read as little-endian while the other two read as big-endian.
Why is this the case?

The EFI_GUID datatype is declared as follows:
typedef struct {
    UINT32 Data1;
    UINT16 Data2;
    UINT16 Data3;
    UINT8  Data4[8];
} EFI_GUID;
Because the original EFI architectures (little-endian IA-64 and IA-32e) are little-endian, so are the three integer fields, which is why their bytes appear swapped on disk; Data4 is a plain byte array and has no byte order, which is why the last two groups read as written. I haven't seen a UEFI implementation on a big-endian machine, so I don't know whether standard GUIDs would be stored differently there.
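For illustration, here is a minimal C sketch (using the raw bytes from the question) that decodes the 16 on-disk bytes back into the canonical GUID string:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The 16 raw bytes of the partition type GUID as read from disk */
    uint8_t raw[16] = {0x28, 0x73, 0x2A, 0xC1, 0x1F, 0xF8, 0xD2, 0x11,
                       0xBA, 0x4B, 0x00, 0xA0, 0xC9, 0x3E, 0xC9, 0x3B};

    /* Data1..Data3 are little-endian integers on disk */
    uint32_t data1 = raw[0] | raw[1] << 8 | raw[2] << 16 | (uint32_t)raw[3] << 24;
    uint16_t data2 = (uint16_t)(raw[4] | raw[5] << 8);
    uint16_t data3 = (uint16_t)(raw[6] | raw[7] << 8);

    /* Data4 is a plain byte array: printed in the order it is stored */
    printf("%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X\n",
           data1, data2, data3,
           raw[8], raw[9], raw[10], raw[11],
           raw[12], raw[13], raw[14], raw[15]);
    /* Output: C12A7328-F81F-11D2-BA4B-00A0C93EC93B */
    return 0;
}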


Why is the program counter in the 8051 16 bits wide while the stack pointer is only 8 bits?

Why does the stack pointer hold only an 8-bit address in the 8051, whereas the program counter holds a 16-bit address?
The width of a processor's pointers is a deliberate design decision. Both the PC and the SP are pointers, pointing to the next instruction to be executed and to the saved contents on the stack, respectively.
The designers of the 8051 separated instruction memory from data memory. There are more memory sections, but the stack is located in the latter, so these two should suffice:
Instruction memory: It has a maximum size of 65536 bytes that can be accessed without further "tricks". To address this range you need 16 bits.
Data memory: It has a maximum size of 256 bytes, even though the standard 8051 has only 128 of them implemented. To address this range you need 8 bits.
Please remember, code and stack are different things!
Code contains all instructions (and constants, if present). It is mostly ROM, but can be RAM.
Stack stores return addresses and saved values. It has to be RAM.
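To see the arithmetic, here is a small, self-contained C sketch deriving both pointer widths from the memory sizes (the helper function is hypothetical, just for illustration):

#include <stdio.h>

/* How many address bits are needed to reach `size` distinct bytes? */
static int address_bits(unsigned long size) {
    int bits = 0;
    while ((1UL << bits) < size)
        bits++;
    return bits;
}

int main(void) {
    printf("instruction memory (65536 bytes) -> %2d-bit PC\n", address_bits(65536));
    printf("data memory        (256 bytes)   -> %2d-bit SP\n", address_bits(256));
    return 0;
}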

How many bits of integer data can be stored in a DynamoDB attribute of type Number?

DynamoDB's Number type supports 38 digits of decimal precision. This is not big enough to store a 128-bit integer, which would require 39 digits. The max value is 340,282,366,920,938,463,463,374,607,431,768,211,455 for unsigned 128-bit ints, or 170,141,183,460,469,231,731,687,303,715,884,105,727 for signed 128-bit ints. These are both 39-digit numbers.
If I can't store 128 bits, then how many bits of integer data can I store in a Number?
A DynamoDB attribute of type Number can store 126-bit signed integers (or 127-bit unsigned integers, with serious caveats).
According to Amazon's documentation:
Numbers can have up to 38 digits precision. Exceeding this results in an exception.
This means (verified by testing in the AWS console) that the largest positive integer and the smallest negative integer, respectively, that DynamoDB can store in a Number attribute are:
99,999,999,999,999,999,999,999,999,999,999,999,999 (aka 10^38-1)
-99,999,999,999,999,999,999,999,999,999,999,999,999 (aka -10^38+1)
These numbers require 126 bits of storage, using this formula:
bits = floor(ln(number) / ln(2))
     = floor(87.498 / 0.693)
     = floor(126.259)
     = 126
So you can safely store a 126-bit signed int in a DynamoDB Number.
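You can verify the 126 figure without floating point; a minimal sketch, assuming a compiler with the GCC/Clang __int128 extension:

#include <stdio.h>

int main(void) {
    /* Build 10^38 - 1, the largest integer a Number attribute can hold */
    unsigned __int128 n = 1;
    for (int i = 0; i < 38; i++)
        n *= 10;
    n -= 1;

    /* Find the position of the highest set bit, i.e. floor(log2(n)) */
    int bits = -1;
    while (n) {
        n >>= 1;
        bits++;
    }
    printf("floor(log2(10^38 - 1)) = %d\n", bits);  /* prints 126 */
    return 0;
}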
If you want to live dangerously, you can store a 127-bit unsigned int too, but there are some caveats:
You'd need to avoid using such a number as a sort key (or at least be very careful), because values with a most-significant bit of 1 will sort as negative numbers.
Your app will need to convert unsigned ints to signed ints when storing them or querying for them in DynamoDB, and will also need to convert them back to unsigned after reading data from DynamoDB.
If it were me, I wouldn't take these risks for one extra bit without a very, very good reason.
One logical question is whether 126 bits (or 127, given the caveats above) is enough to store a UUID. The answer is: it depends. If you are in control of the UUID generation, then you can always shave a bit or two from the UUID and store it. If you shave from the 4 "version" bits (see the format in RFC 4122), then you may not be losing any entropy at all, provided you always generate UUIDs with the same version.
However, if someone else is generating those UUIDs AND is expecting lossless storage, then you may not be able to use a Number to store the UUID. But you may be able to store it if you restrict clients to a whitelist of 4-8 UUID versions. The largest version now is 5 out of a 0-15 range, and some of the older versions are discouraged for privacy reasons, so this limitation may be reasonable depending on your clients and whether they adhere to the version bits as defined in RFC 4122.
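If you do control generation, shaving the version nibble can look like the following minimal C sketch (per RFC 4122 the version sits in the high nibble of octet 6; the function names are just for illustration):

#include <stdint.h>

/* Clear the 4 version bits (high nibble of octet 6) before storage... */
void strip_uuid_version(uint8_t uuid[16]) {
    uuid[6] &= 0x0F;
}

/* ...and put a known, fixed version back after reading from DynamoDB. */
void restore_uuid_version(uint8_t uuid[16], uint8_t version) {
    uuid[6] = (uint8_t)(((version & 0x0F) << 4) | (uuid[6] & 0x0F));
}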
BTW, I was surprised that this bit-limit question wasn't already online... at least not in an easily-Google-able place. So contributing this Q&A pair so future searchers can find it.

arduino wifi passphrase with \0

I'm trying to connect Arduino to a WiFi network.
const char* ssid = "ssid";
char* password = "some_hex_chars";
.
.
.
void setup(void){
WiFi.begin(ssid, password);
.
.
.
The problem is, I have the code 0x00 somewhere in the passphrase. Since the begin() method takes a char* argument, which is treated as a null-terminated string, the password gets truncated.
Is there a way to work around this? Where can I find the source of the begin() method to modify it?
Edit: WRONG.
It's not a passphrase, it's a PSK with 64 hexadecimal characters, and it still doesn't want to connect.
Update:
I solved the problem. It wasn't a PSK problem, but the WiFi router's advanced settings. When 54g™ Mode is set to 54g Performance, it doesn't want to connect. After I changed it to 54g Auto, it works fine.
I know nothing about Arduino - but more about 802.11 aka Wi-Fi. And so:
Don't do that. If you have a 0x00 in the middle of your passphrase, it is technically invalid, as per the IEEE 802.11 standards.
It will, I presume, be interpreted as the terminator of your passphrase (= the passphrase is everything before this 0x00) by your 802.11 stack if correctly implemented, and you're looking at undefined behavior - at best, interoperability problems; at worst, you're taking a bet.
How is that?
(warning: this is going to be boring, lots of "network lawyer" stuff)
The IEEE 802.11 standard relevant to this is IEEE Std 802.11i-2004 "Amendment 6: Medium Access Control (MAC) Security Enhancements"[0], aka "WPA2".
(I won't dig deeper down to WEP, which is clearly deprecated to no use, nor "basic" WPA, which was a transition waiting for this WPA2 standard to be complete).
The relevant part can be found in 802.11i's ASN MIB[1] (Annex D, normative), which on page 136 defines "dot11RSNAConfigPSKPassPhrase" as a "DisplayString". So what type of data exactly is a "DisplayString"?
RFC 1213, "Management Information Base for Network Management of TCP/IP-based internets: MIB-II", from 1991, on page 3, states that:
"A DisplayString is restricted to the NVT ASCII character set, as
defined in pages 10-11 of [6]."
OK...
This "[6]" is RFC 854, from 1983 (Wow! These IETF and IEEE design their standards seriously and really, really build upon). Are you still following me? :-) So having a look at it we learn that NVT stands for "Network Virtual Terminal", and in pointed to page 10 and 11, we found:
The NVT printer [sic! remember that's 1983] [...] can produce
representations of all 95 USASCII graphics (codes 32 through 126).
OK, ASCII codes 32 to 126. Now let's come back to IEEE 802.11i:
In Annex H (informative), "RSNA reference implementations and test vectors", there is a section "H.4 Suggested pass-phrase-to-PSK mapping". (Remember that the purpose of the passphrase, mathematically massaged with the SSID, is to derive a PSK (Pre-Shared Key), which is more useful for 802.11 operation but much less user-friendly than "a damned simple passphrase that I can type with a damned keyboard".) Which, phrased the IEEE way, gives this (page 165):
The RSNA PSK consists of 256 bits, or 64 octets when represented in
hex. It is difficult for a user to correctly enter 64 hex characters.
Most users, however, are familiar with passwords and pass-phrases and
feel more comfortable entering them than entering keys. A user is more
likely to be able to enter an ASCII password or pass-phrase, even
though doing so limits the set of possible keys. This suggests that
the best that can be done is to introduce a pass-phrase to PSK
mapping.
This clause defines a pass-phrase–to–PSK mapping that is the
recommended practice for use with RSNAs.
This pass-phrase mapping was introduced to encourage users unfamiliar
with cryptographic concepts to enable the security features of their
WLAN.
...so much for the purpose of a passphrase. And then on the following page 166:
Here, the following assumptions apply:
A pass-phrase is a sequence of between 8 and 63 ASCII-encoded characters. The limit of 63 comes from the desire to distinguish
between a pass-phrase and a PSK displayed as 64 hexadecimal
characters.
Each character in the pass-phrase must have an encoding in the range of 32 to 126 (decimal), inclusive. [emphasis mine]
And Voila! Indeed, "32 to 126 (decimal), inclusive".
So here we have again our passphrase as ASCII "in the range of 32 to 126 (decimal)", confirmed from IEEE to IETF back to IEEE. We also learn that it's supposed to be between 8 and 63 bytes long, which, I would infer, imply that if longer than 63 bytes it will be trimmed down (and not NULL terminated, which is not a problem), and if shorter, will be cut at the first character outside of the 32-126 ASCII code. Of course the C string NULL terminator 0x00 is the more practical, sensible to use for this BTW.
So, passphrase = a string consisting only of 32 to 126 (decimal) ASCII code.
So here we have again our passphrase as ASCII "in the range of 32 to 126 (decimal)", confirmed from IEEE to IETF and back to IEEE. We also learn that it's supposed to be between 8 and 63 bytes long, which, I would infer, implies that if it is longer than 63 bytes it will be trimmed down (and not NULL-terminated, which is not a problem), and if shorter, it will be cut at the first character outside of the 32-126 ASCII range. Of course, the C string NULL terminator 0x00 is the most practical, sensible character to use for this, BTW.
So, passphrase = a string consisting only of ASCII codes 32 to 126 (decimal).
Have a look at an ASCII table, and you'll see this range starts with the space character and ends with the tilde '~'.
And 0x00 is definitely not in it.
Hence, long story short: your passphrase is technically invalid standard-wise, and you're looking at undefined behavior.
Congratulations if you've read me this far!
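If you want to check a candidate passphrase against these rules yourself, here is a minimal C sketch of the Annex H constraints:

#include <stddef.h>

/* Returns 1 if `p` is a valid WPA2 passphrase per IEEE 802.11i Annex H:
   8 to 63 characters, each with an ASCII code of 32..126 inclusive.
   A 0x00 byte can never appear: in C it simply terminates the string.
   (On signed-char platforms, bytes above 127 show up negative and are
   rejected by the < 32 test as well.) */
int passphrase_is_valid(const char *p) {
    size_t len = 0;
    while (p[len] != '\0') {
        if (p[len] < 32 || p[len] > 126)
            return 0;
        len++;
    }
    return len >= 8 && len <= 63;
}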
Addendum:
When it comes to networking protocols, never, ever assume that what looks like "a string" is just "a string" in whatever encoding you may presuppose; always check the exact encoding/limitations.
Another example regarding Wi-Fi:
Another "string" is the SSID. Is this really a string? No. It is a raw array of 32 bytes, no ASCII, no UTF-8, Unicode, whatever, no termination, just 32 raw bytes, even if you "set it" as "foobar + NULL terminator" a whole 32 bytes will be used by the stack and go on the air (look at a wireshark trace, and doucle-click the SSID field in the dissection: 32 bytes long). So an SSID could consist of only ASCII spaces, tabs, CR, LF and a few 0x00 here and there, or only 0x00 BTW, it will be perfectly valid and managed as a full 32 bytes sequence anyway.
EDIT:
I wondered about your motivation for setting such a passphrase, and the only idea I could come up with - correct me if I'm wrong - is that your purpose was to play a neat trick to ensure that a regular user, using a regular keyboard, could never enter the passphrase. Sadly - or rather, hopefully - as I explained, this cannot work, because the IEEE designed the passphrase data type precisely to be 100% sure that anybody, using the most basic keyboard, could always type it. That was their motivation.
And so, what can you do?
As an alternative, you could directly use a PSK. That's plain raw 32 bytes (represented as 64 hex ASCII digits), with no typeable/printable consideration. For example, from the hostapd.conf file (of course the example PSK is represented here as "text", but it's actually raw bytes):
# WPA pre-shared keys for WPA-PSK. This can be either entered as a 256-bit
# secret in hex format (64 hex digits), wpa_psk, or as an ASCII passphrase
# (8..63 characters) that will be converted to PSK. This conversion uses SSID
# so the PSK changes when ASCII passphrase is used and the SSID is changed.
# wpa_psk (dot11RSNAConfigPSKValue)
# wpa_passphrase (dot11RSNAConfigPSKPassPhrase)
#wpa_psk=0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
#wpa_passphrase=secret passphrase
But then of course, 1/ this may not fit your use case (deployment-wise), and 2/ the Arduino Wi-Fi API may have no such capability.
Hope this helps.
[0]:
You can download it for free here:
http://standards.ieee.org/getieee802/download/802.11i-2004.pdf
[1]:
That's IEEE jargon for "Management Information Base" in "Abstract Syntax Notation", a formal, hierarchical notation of every datum with its name and type for a given standard. You can think of it as "XML", only it's not XML, and it is used by the IETF and IEEE (RFC 2578, RFC 1213).

what's the difference between int and cl_int in OpenCL? [duplicate]

There are many data types in OpenCL, such as int, cl_int, char, cl_char, short, and cl_short. But what is the difference between int and cl_int, and when should I use cl_int instead of int?
The size of an int in C/C++ is machine-dependent. It is guaranteed to be at least 16 bits, but these days it will usually be 32 bits, and it could also be 64. This poses a problem when passing data between a host and a device in OpenCL: if the device has a different idea about what the size of an int is, then passing int values to the device might not produce the expected result.
The OpenCL headers provide the cl_int definition to provide a datatype that is always 32 bits, which matches the size that an OpenCL device expects. This means that you can pass a cl_int value, or an array of cl_int values from the host to device (and back), without running into problems with the sizes being mismatched.
So, whenever you are writing host code that deals with values or buffers that will be passed to the device, you should always use the cl_ datatypes.
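For example, here is a minimal host-side sketch (assuming a cl_context and cl_kernel created elsewhere; error checking omitted for brevity) that passes a buffer and a scalar using the cl_ types:

#include <CL/cl.h>

void pass_ints(cl_context context, cl_kernel kernel) {
    cl_int host_data[16];                  /* exactly 32 bits per element */
    for (int i = 0; i < 16; i++)
        host_data[i] = i;

    cl_int err;
    cl_mem buf = clCreateBuffer(context,
                                CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                sizeof(host_data), host_data, &err);

    cl_int scalar = 42;
    /* Both sides agree on 32 bits, so the kernel's `int` args line up */
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
    clSetKernelArg(kernel, 1, sizeof(cl_int), &scalar);
}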

AES Rijndael and little/big endian?

I am using the public domain reference implementation of AES Rijndael, commonly distributed under the name "rijndael-fst-3.0.zip". I plan to use this to encrypt network data, and I am wondering whether the results of encryption will be different on big/little endian architectures? In other words, can I encrypt a block of 16 bytes on a little endian machine and then decrypt that same block on big endian? And of course, the other way around as well.
If not, how should I go about swapping bytes?
Thanks in advance for your help.
Kind regards.
Byte order issues are relevant only in the context of mapping multi-byte constructs to a sequence of bytes; e.g., mapping a 4-byte sequence to a signed integer value is sensitive to byte order.
The AES algorithm is byte centric and insensitive to endian issues.
Rijndael is oblivious to byte order; it just sees the string of bytes you feed it. You should do the byte swapping outside of it as you always would (with ntohs or whatever interface your platform has for that purpose).
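In practice that means serializing any multi-byte values into a fixed byte order before encryption; a minimal sketch, where aes_encrypt_block stands in for whatever your Rijndael implementation's block call is:

#include <stdint.h>

/* Write a 32-bit value in big-endian order, regardless of host endianness */
static void put_be32(uint8_t *out, uint32_t v) {
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)(v);
}

/* Pack four 32-bit values into one 16-byte block; LE and BE hosts now
   produce identical plaintext bytes, so the ciphertext matches too. */
void encrypt_values(const uint32_t vals[4], uint8_t block[16]) {
    for (int i = 0; i < 4; i++)
        put_be32(block + 4 * i, vals[i]);
    /* aes_encrypt_block(key, block);  -- hypothetical call into Rijndael */
}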
