Is there any difference between a GUID and a UUID?

I see these two acronyms being thrown around and I was wondering if there are any differences between a GUID and a UUID?

The simple answer is: **no difference, they are the same thing.**
2020-08-20 Update: While GUIDs (as used by Microsoft) and UUIDs (as defined by RFC 4122) look similar and serve similar purposes, there are subtle-but-occasionally-important differences. Specifically, some Microsoft GUID docs allow GUIDs to contain any hex digit in any position, while RFC 4122 requires certain values for the version and variant fields. Also, per that same documentation, GUIDs should be all upper case, whereas UUIDs should be "output as lower case characters and are case insensitive on input". This can lead to incompatibilities between code libraries.
(Original answer follows)
Treat them as a 16-byte (128-bit) value used as a unique identifier. In Microsoft-speak they are called GUIDs; call them UUIDs when not using Microsoft-speak.
Even the authors of the UUID specification and Microsoft claim they are synonyms:
From the introduction to IETF RFC 4122 "A Universally Unique IDentifier (UUID) URN Namespace": "a Uniform Resource Name namespace for UUIDs (Universally Unique IDentifier), also known as GUIDs (Globally Unique IDentifier)."
From the ITU-T Recommendation X.667, ISO/IEC 9834-8:2004 International Standard: "UUIDs are also known as Globally Unique Identifiers (GUIDs), but this term is not used in this Recommendation."
And Microsoft even claims a GUID is specified by the UUID RFC: "In Microsoft Windows programming and in Windows operating systems, a globally unique identifier (GUID), as specified in [RFC4122], is ... The term universally unique identifier (UUID) is sometimes used in Windows protocol specifications as a synonym for GUID."
But the correct answer depends on what the question means when it says "UUID"...
The first part depends on what the asker is thinking when they say "UUID".
Microsoft's claim implies that all UUIDs are GUIDs. But are all GUIDs real UUIDs? That is, is the set of all UUIDs just a proper subset of the set of all GUIDs, or is it the exact same set?
Looking at the details of RFC 4122, there are four different "variants" of UUIDs. This is mostly because such 16-byte identifiers were in use before those specifications were brought together in the creation of a UUID specification. From section 4.1.1 of RFC 4122, the four variants of UUID are:

1. Reserved, Network Computing System backward compatibility
2. The variant specified in RFC 4122 (of which there are five sub-variants, which are called "versions")
3. Reserved, Microsoft Corporation backward compatibility
4. Reserved for future definition.
If, according to RFC 4122, all of those variants are "real UUIDs", then all GUIDs are real UUIDs. To the literal question "is there any difference between GUID and UUID" the answer is definitely no for RFC 4122 UUIDs: no difference (but subject to the second part below).
But not all GUIDs are variant 2 UUIDs (e.g. Microsoft COM has GUIDs which are variant 3 UUIDs). If the question were "is there any difference between GUID and variant 2 UUIDs", then the answer would be yes -- they can be different. Someone asking the question probably doesn't know about variants and may only be thinking of variant 2 UUIDs when they say the word "UUID" (e.g. they vaguely know of the MAC address+time and the random-number forms of UUID, which are both versions of variant 2). In that case, the answer is yes, they can be different.
So the answer, in part, depends on what the person asking is thinking when they say the word "UUID". Do they mean variant 2 UUID (because that is the only variant they are aware of) or all UUIDs?
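For a concrete feel for the variants, Python's uuid module reports the variant of any value you hand it; a freshly generated UUID falls in the RFC 4122 variant, while the classic COM IUnknown interface ID falls in the reserved Microsoft variant (a small illustrative check, not part of the original answer):

```python
import uuid

u = uuid.uuid4()
print(u.variant, u.version)   # 'specified in RFC 4122' 4

iunknown = uuid.UUID("00000000-0000-0000-C000-000000000046")  # COM IUnknown IID
print(iunknown.variant)       # 'reserved for Microsoft compatibility'
```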
The second part depends on which specification is being used as the definition of UUID.
If you think that was confusing, read ITU-T X.667 | ISO/IEC 9834-8:2004, which is supposed to be aligned and fully technically compatible with RFC 4122. It has an extra sentence in Clause 11.2 that says, "All UUIDs conforming to this Recommendation | International Standard shall have variant bits with bit 7 of octet 7 set to 1 and bit 6 of octet 7 set to 0". This means that only variant 2 UUIDs conform to that standard (those two bit values mean variant 2). If that is true, then not all GUIDs are conforming ITU-T/ISO/IEC UUIDs, because conforming ITU-T/ISO/IEC UUIDs can only be variant 2 values.
Therefore, the real answer also depends on which specification of UUID the question is asking about. Assuming we are clearly talking about all UUIDs and not just variant 2 UUIDs: there is no difference between GUIDs and the IETF's UUIDs, but there is a difference between GUIDs and conforming ITU-T/ISO/IEC UUIDs!
Binary encodings could differ
When encoded in binary (as opposed to the human-readable text format), a GUID may be stored in a structure with four fields as follows. This format differs from the UUID standard only in the byte order of the first three fields.
| Bits | Bytes | Name  | Endianness (GUID) | Endianness (RFC 4122) |
|------|-------|-------|-------------------|-----------------------|
| 32   | 4     | Data1 | Native            | Big                   |
| 16   | 2     | Data2 | Native            | Big                   |
| 16   | 2     | Data3 | Native            | Big                   |
| 64   | 8     | Data4 | Big               | Big                   |
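Python's uuid module happens to expose both layouts (bytes for the RFC 4122 big-endian form, bytes_le for the GUID field order on the little-endian machines Windows typically runs on), which makes the difference easy to see:

```python
import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

print(u.bytes.hex())     # 00112233445566778899aabbccddeeff  (RFC 4122: all fields big-endian)
print(u.bytes_le.hex())  # 33221100554477668899aabbccddeeff  (GUID struct: Data1-3 byte-swapped)
```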

GUID is Microsoft's implementation of the UUID standard.
Per Wikipedia:
The term GUID usually refers to Microsoft's implementation of the Universally Unique Identifier (UUID) standard.
An updated quote from that same Wikipedia article:
RFC 4122 itself states that UUIDs "are also known as GUIDs". All this suggests that "GUID", while originally referring to a variant of UUID used by Microsoft, has become simply an alternative name for UUID…

Not really. GUID is more Microsoft-centric whereas UUID is used more widely (e.g., as in the urn:uuid: URN scheme, and in CORBA).

GUID has longstanding usage in areas where it isn't necessarily a 128-bit value in the same way as a UUID. For example, the RSS specification defines GUIDs to be any string of your choosing, as long as it's unique, with an "isPermaLink" attribute to specify that the value you're using is just a permalink back to the item being syndicated.

One difference between GUID in SQL Server and UUID in PostgreSQL is letter case; SQL Server outputs upper while PostgreSQL outputs lower.
The hexadecimal values "a" through "f" are output as lower case characters and are case insensitive on input. - rfc4122#section-3
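So if you parse the values instead of comparing raw strings, the case difference disappears; a small Python illustration (the example value is arbitrary):

```python
import uuid

a = uuid.UUID("0E984725-C51C-4BF4-9960-E1C80E27ABA0")  # SQL Server style, upper case
b = uuid.UUID("0e984725-c51c-4bf4-9960-e1c80e27aba0")  # PostgreSQL style, lower case

print(a == b)   # True: input is case-insensitive
print(a)        # canonical text output is lower case
```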

Related

Extending short key for AES256 (SNMPv3)

I am currently working on security of a switch that runs SNMPv3.
I am expected to code it in such a way that any SHA (SHA-1 up to SHA2-512) is compatible with any AES (AES-128 up to AES-256C).
Everything, like the algorithms on their own, works pretty well. The problem is that it's been established that we are going to use SHA for key generation for both authentication and encryption.
When I want to use, let's say, SHA-512 with AES-256, there's no problem, since SHA-512 has an output of 64 bytes and I need just 32 bytes for the AES-256 key.
But when I want to use SHA-1 with AES-256, SHA-1 produces only 20 bytes, which is insufficient for the key.
I've searched the internet through and through and found that it's common to use this combination (snmpget, openssl), but I haven't found a single word about how you are supposed to extend the key.
How can I extend the key from 20 bytes to 32 bytes so it works?
P.S.: Yes, I know SHA isn't a KDF, and yes, I know it's not that common to use this combination, but this is just how it is in my job assignment.
Here is a page discussing your exact question. In short, there is no standard way to do this (as you have already discovered), however, Cisco has adopted the approach outlined in section 2.1 of this document:
Chaining is described as follows. First, run the password-to-key algorithm with inputs of the passphrase and engineID as described in the USM document. This will output as many key bits as the hash algorithm used to implement the password-to-key algorithm. Secondly, run the password-to-key algorithm again with the previous output (instead of the passphrase) and the same engineID as inputs. Repeat this process as many times as necessary in order to generate the minimum number of key bits for the chosen privacy protocol. The outputs of each execution are concatenated into a single string of key bits.
When this process results in more key bits than are necessary, only the most significant bits of the string should be used.
For example, if password-to-key implemented with SHA creates a 40-octet string for use as key bits, only the first 32 octets will be used for usm3DESEDEPrivProtocol.
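A minimal Python sketch of that chaining, assuming the standard RFC 3414 password-to-key procedure; the function names and the example engine ID are mine, not from the cited documents:

```python
import hashlib

def password_to_key(secret: bytes, engine_id: bytes, hash_name: str = "sha1") -> bytes:
    """RFC 3414 password-to-key: hash 1 MiB of the repeated secret, then localize with the engine ID."""
    expanded = (secret * (1048576 // len(secret) + 1))[:1048576]
    ku = hashlib.new(hash_name, expanded).digest()
    return hashlib.new(hash_name, ku + engine_id + ku).digest()

def extend_key(secret: bytes, engine_id: bytes, needed: int, hash_name: str = "sha1") -> bytes:
    """Chaining per section 2.1: feed the previous output back in until enough key bits exist."""
    key = password_to_key(secret, engine_id, hash_name)
    while len(key) < needed:
        key += password_to_key(key, engine_id, hash_name)
    return key[:needed]          # keep only the most significant (leading) octets

# SHA-1 gives 20 bytes; extend to the 32 bytes AES-256 needs (engine ID here is a made-up example).
aes_key = extend_key(b"my passphrase", bytes.fromhex("80001f8880e9bd0e6d2a4b3d5c"), 32)
print(aes_key.hex())
```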

Can AES-128 have a key of 15 ASCII characters?

I'm trying to decrypt an encrypted h264 I-frame, and I was given a key of length 15, is this even valid?
Should not it be of length 16, so the binary representation would be 128 bits?
If you have a thing you could type on a keyboard, that is not a proper AES key, no matter the length. AES derives its power from the fact that its key is effectively random. Anything you can type on a keyboard is not an effectively random sequence of equivalent length. There are only about 96 characters you can type easily on a Latin-style keyboard. A byte has 256 values. 96^16 is a minuscule fraction of 256^16.
To convert a "password" that a human could type into an effectively random AES key, you need a password-based key derivation function (PBKDF). The most famous and widely available is PBKDF2. There are other excellent PBKDFs including scrypt and Argon2. All of them require a random salt, and all are (in cryptographic terms) very slow to compute.
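For example, a PBKDF2 derivation of a 128-bit AES key using only Python's standard library (the password, salt handling and iteration count here are illustrative choices, not a recommendation):

```python
import hashlib, os

password = b"correct horse battery staple"   # what the human actually types
salt = os.urandom(16)                         # random salt, stored alongside the ciphertext

# Derive a 128-bit AES key from the password.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=16)
print(key.hex())
```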
That said, regarding your framework, it is not possible to guess how they have converted this string into a key. You must consult the documentation or the implementation. There are an unbounded number of ways to convert strings into keys (most of them are terrible, but there is still an unbounded selection to pick from). As Michael Fehr noted, they might have done something insecure like padding with zeros. They might also have used a simple hashing function like SHA-256 and either used a 256-bit key or taken the top or bottom 128 bits. Or…almost literally anything else. There is no common practice here. Each encryption system has to document how it is implemented.
(Note that even if you see "AES-128," this is also ambiguous. It can mean "AES with a 128-bit key" or it can mean "AES with a 128-bit block and a key of 128, 192 or 256 bits." While the former meaning is a bit more common, the latter occurs often, for example in Apple documentation, despite being redundant (AES always has a 128-bit block). So even questions like "how long is the key" requires digging into the documentation or the implementation. Cryptography is unfortunately incredibly unstandardized.)
Should not it be of length 16, so the binary representation would be 128 bits?
You are right. For AES, only key lengths of 128, 192 or 256 bits are valid.
I commonly see two possibilities for having a key of a different length:
You were given a password, not a key. Then you also need to ask for a way to generate a key from the password (hash? PBKDF2? something else?).
Many frameworks will silently accept a key of a different length and then trim or zero-pad the value to fit the required key size. IMHO this is not a proper approach, as it gives developers the feeling that the key is good while in reality a different (padded or trimmed) value is used.
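For illustration only, this is the kind of silent padding such a framework might apply (an assumption about lenient frameworks in general, not something to imitate):

```python
# A 15-character "key" quietly zero-padded to the 16 bytes AES-128 actually uses.
supplied = b"fifteen chars!!"            # 15 bytes, as typed by a user
padded = supplied.ljust(16, b"\x00")     # the value the cipher really sees
print(padded.hex())
```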

how to identify encryption or hash value type

I have this hash or encrypted string
861004c2-a9e0-4dae-a436-f46cecf14591
Please tell me which encryption or hash algorithm was used to generate values like this and how I can decrypt it. I already searched the web for this string type and checked previous threads related to encryption and hash methods, but failed to identify this string.
thanks
Based on the byte values alone it is impossible to distinguish which algorithm was used. It is a desired characteristic of hashes and encryption algorithms that though they are deterministic, their output is indistinguishable from real randomness. It follows that they are also indistinguishable from one another.
Now the formatting may help, as in Hamed's post it may indicate a GUID. But there is no way to know based on the byte values alone.
It looks like a GUID. GUIDs have different versions, and each version's algorithm differs.
For example, version 1 GUIDs are generated based on the user's network card MAC address and the time at which the GUID was generated. Version 4 GUIDs use a pseudo-random number.
For more information check here.
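In this case the value parses cleanly as a version 4 (random) UUID, which you can confirm with Python's uuid module:

```python
import uuid

u = uuid.UUID("861004c2-a9e0-4dae-a436-f46cecf14591")
print(u.version)   # 4 -> randomly generated
print(u.variant)   # 'specified in RFC 4122'
```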

Manual change of GUID - how bad is it?

How bad is changing a generated GUID manually and using it? Is the probability of collision still insignificant, or is manipulating GUIDs dangerous?
Sometimes we just change some letter of a previously generated GUID and use it. Should we stop doing that?
This depends on the version of the GUID and where you are making the change. Let's dissect a little what a GUID actually looks like:
A GUID has a version.   The 13th hex digit in a GUID marks its version. Current GUIDs are usually generated with version 4. If you change the version backwards you risk collision with a GUID that already exists. Change it forwards and you risk collision with potential future GUIDs.
A GUID has a variant too.   The 17th hex digit in a GUID is the variant field. Some values of it are reserved for backward compatibility, one value is reserved for future expansion. So changing something there means you risk collision with previously-generated GUIDs or maybe GUIDs to be generated in the future.
A GUID is structured differently depending on the version.   Version 4 GUIDs use (for the most part – excepting the 17th hex digit) truly random or pseudo-random bits (in most implementations pseudo-random). Change something there and your probability of collision remains about the same.
It should be very similar for version 3 and 5 GUIDs which use hashes, although I don't recall ever seeing one in the wild. Not so much for versions 1 and 2, though. Those have a structure and depending on where you change something you make things difficult.
Version 1 GUIDs include a timestamp and a counter field which gets incremented if two GUIDs are generated in the same clock interval (and thus would lead to the same timestamp). If you change the timestamp you risk colliding with a GUID generated earlier or later on the same machine. If you change the counter you risk colliding with a GUID that was generated at the same time and thus needed the counter as a “uniquifier”.
Version 2 GUIDs expand on version 1 and include a user ID as well.   The timestamp is less accurate and contains a user or group ID while a part of the counter is used to indicate which one is meant (but which only has a meaning to the generating machine). So with a change in those parts you risk collision with GUIDs generated by another user on the same machine.
Version 1 and 2 GUIDs include a MAC address.   Specifically, the MAC address of the computer that generated them. This ensures that GUIDs from different machines are different even if generated in the very same instant. There is a fallback if a machine doesn't have a MAC address, but then there is no uniqueness guarantee. A MAC address also has a structure and consists of an "Organisationally Unique Identifier" (OUI; which is either locally administered or handed out by the IEEE) and a unique identifier for the network card.
If you make a change in the OUI you risk colliding with GUIDs generated in computers with network cards of other manufacturers. Unless you make the change so the second-least significant bit of the first octet is 1, in which case you're switching to a locally-administered OUI and only risk collision with GUIDs generated on computers that have an overridden MAC address (which might include most VMs with virtual network hardware).
If you change the card identifier you risk collision with GUIDs generated on computers with other network cards by the same manufacturer or, again, with those where the MAC address was overridden.
No other versions exist so far but the gist is the following: A GUID needs all its parts to ensure uniqueness; if you change something you may end up with a GUID which isn't necessarily unique anymore. So you're probably making it more of a GID or something. The safest to change are probably the current version 4 GUIDs (which is what Windows and .NET will generate) as they don't really guarantee uniqueness but instead make it very, very unlikely.
Generally I'd say you're much better off generating a new GUID, though. This also helps the person reading them because you can tell two GUIDs apart as different easily if they look totally different. If they only differ in a single digit a person is likely to miss the change and assume the GUIDs to be the same.
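A quick way to see the version and variant positions mentioned above for yourself (a small Python check, nothing more):

```python
import uuid

g = uuid.uuid4()
digits = g.hex                 # the 32 hex digits without dashes
print(digits[12], g.version)   # 13th hex digit is the version nibble ('4' here)
print(digits[16], g.variant)   # 17th hex digit carries the variant bits
```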
Further reading:
Wikipedia: GUID
Wikipedia: UUID
Eric Lippert: GUID guide. Part 1, part 2, part 3. (Read it; this guy can explain wonderfully and happens to be on SO too)
Wikipedia: MAC address
RFC 4122: The GUID versions
RFC 4122: The variant field
DCE 1.1: Authentication and security services – The description of version 2 GUIDs
Raymond Chen: GUIDs are globally unique, but substrings of GUIDs aren't
Raymond Chen: GUIDs are designed to be unique, not random
I have no idea how that would affect the uniqueness of the GUID, but it's probably not a good idea.
Visual Studio has a built-in GUID generator that takes a couple of seconds to spin up and create a new GUID. If you don't use VS then there are other easy ways to create a new one. This page has two scripts (VBScript and PHP) that will do the job, and here's a .NET version.
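If Visual Studio isn't at hand, almost any scripting environment can do it; in Python, for example:

```python
import uuid
print(uuid.uuid4())   # a fresh version 4 GUID; different on every run
```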

Generating short license keys with OpenSSL

I'm working on a new licensing scheme for my software, based on OpenSSL public / private key encryption. My past approach, based on this article, was to use a large private key size and encrypt an SHA1 hashed string, which I sent to the customer as a license file (the base64 encoded hash is about a paragraph in length). I know someone could still easily crack my application, but it prevented someone from making a key generator, which I think would hurt more in the long run.
For various reasons I want to move away from license files and simply email a 16-character base32 string the customer can type into the application. Even using small private keys (which I understand are trivial to crack), it's hard to get the encrypted hash this small. Would there be any benefit to using the same strategy to generate an encrypted hash, but simply using the first 16 characters as a license key? If not, is there a better alternative that will create keys in the format I want?
DSA signatures are significantly shorter than RSA ones. DSA signatures are twice the size of the Q parameter; if you use the OpenSSL defaults, Q is 160 bits, so your signatures fit into 320 bits.
If you can switch to a base-64 representation (which only requires upper- and lower-case letters, the digits and two other symbols) then you will need 54 symbols, which you could do with 11 groups of 5. Not quite the 16 that you wanted, but still within the bounds of being user-enterable.
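For a concrete feel for the numbers, here is a rough sketch using the pyca/cryptography package (my assumption; any DSA implementation would do). It only demonstrates the size of the raw signature, not a complete licensing scheme:

```python
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa, utils

key = dsa.generate_private_key(key_size=1024)            # 1024-bit P gives a 160-bit Q
der_sig = key.sign(b"licensed-to: alice@example.com", hashes.SHA1())

# Unpack the DER signature into the raw r and s values (20 bytes each).
r, s = utils.decode_dss_signature(der_sig)
raw = r.to_bytes(20, "big") + s.to_bytes(20, "big")       # 320 bits total

print(len(raw))                                           # 40 bytes
print(base64.b64encode(raw).decode().rstrip("="))         # ~54 base-64 characters
```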
Actually, it occurs to me that you could halve the number of bits required in the license key. DSA signatures are made up of two numbers, R and S, each the size of Q. However, the R values can all be pre-computed by the signer (you) - the only requirement is that you never re-use them. So this means that you could precalculate a whole table of R values - say 1 million of them (taking up 20MB) - and distribute these as part of the application. Now when you create a license key, you pick the next un-used R value, and generate the S value. The license key itself only contains the index of the R value (needing only 20 bits) and the complete S value (160 bits).
And if you're getting close to selling a million copies of the app - a nice problem to have - just create a new version with a new R table.
Did you consider using some existing protection + key generation scheme? I know that EXECryptor (I am not advertising it at all, this is just some info I remember) offers strong protection which, together with a complementary product from the same people, StrongKey (if memory serves), offers short keys and protection against cracking. Armadillo is another product name that comes to my mind, though I don't know what level of protection they offer now. But they also had short keys earlier.
In general, cryptographically strong short keys are based on some aspects of ECC (elliptic curve cryptography). A large part of ECC is patented, and overall ECC is hard to implement right, so an industry solution is the preferred way to go.
Of course, if you don't need strong keys, you can go with just a hash of "secret word" (salt) + user name, and verify them in the application, but this is crackable in minutes.
Why use public key crypto? It gives you the advantage that nobody can reverse-engineer the executable to create a key generator, but key generators are a somewhat secondary risk compared to patching the executable to skip the check, which is generally much easier for an attacker, even with well-obfuscated executables.
Eugene's suggestion of using ECC is a good one - ECC keys are much shorter than RSA or DSA for a given security level.
However, 16 characters in base 32 is still only 5*16=80 bits, which is low enough that brute-forcing for valid keys might be practical, regardless of what algorithm you use.
