What's the difference between a GUID and a UUID? [duplicate]

Possible Duplicate:
Is there any difference between a GUID and a UUID?
What's the difference between a GUID and a UUID, and which should I use for true uniqueness?
Update:
What else does the algorithm I choose need in order to guarantee uniqueness?
1) MAC address of network card
2) 128 bits (isn't 64 bits enough?)
3) What if I have a multicore machine, so several cores share the same MAC address? Isn't there a chance of duplicates?

A GUID is Microsoft's implementation of the universally unique identifier (UUID) standard.
A UUID is defined as
"Universally Unique Identifier (UUID) is an identifier standard used in software construction, standardized by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE)."
From Wikipedia.
If you need to create a new GUID/UUID, you can use my website :)
http://www.createnewguid.com

GUIDs are 128-bit UUIDs. You can use GUIDs and rely on their uniqueness for all practical purposes.
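If it helps to see this concretely, here is a minimal sketch of generating both kinds of values; Python's standard uuid module is assumed purely for illustration (on .NET the equivalent call would be Guid.NewGuid()):

```python
import uuid

# Version 1: built from the host's MAC address plus a timestamp/counter,
# so the MAC-address question in the update only applies here.
v1 = uuid.uuid1()

# Version 4: 122 random bits; uniqueness rests on the sheer size of the space,
# not on the MAC address, so multiple cores or machines are fine in practice.
v4 = uuid.uuid4()

print(v1, "version", v1.version)
print(v4, "version", v4.version)
```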


Why is the salt 8 characters long? [duplicate]

Possible duplicate of: What is the optimal length for user password salt?
Why is the salt not more than 8 ~ 16 characters long?
Also, why in most cases is it in the front or end of the password, and not in different positions?
Is this to make it harder for an attacker, or is it pointless?
Because more salt doesn't serve a useful purpose.
The point of salt is to prevent some parallel attacks from working with a reasonable amount of time, memory, and/or drive space. (You can no longer have a table that says Hash A => Password A, because even if you had enough disk space to construct a rainbow table, the salt makes the number of possible entries way beyond feasibility. And you can no longer hash a potential password once and compare it against a bunch of hashes at a time, because the salt is quite likely to be different for each hash.)
16 characters gives you somewhere between 10^16 and 96^16 times as many possibilities, which already fits the definition of "way beyond feasibility". Past a certain point, you're simply increasing your own storage requirements for no significant benefit.
An 8-character salt is enough against any dictionary attack imaginable in real life; it makes any precomputed dictionary or rainbow table useless.
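As a concrete sketch of the rainbow-table point (Python's hashlib is assumed; a single fast hash is used here only to show the effect of the salt, not as a recommendation for password storage):

```python
import hashlib
import os

password = b"hunter2"

# Two users with the same password get independent random salts...
salt_a = os.urandom(16)
salt_b = os.urandom(16)

# ...so their stored hashes are completely different, and a precomputed
# "hash => password" table would have to cover every possible salt value.
hash_a = hashlib.sha256(salt_a + password).hexdigest()
hash_b = hashlib.sha256(salt_b + password).hexdigest()

print(hash_a)
print(hash_b)
print(hash_a != hash_b)  # True, except with negligible probability
```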

How to identify encryption or hash value type

I have this hash or encrypted string:
861004c2-a9e0-4dae-a436-f46cecf14591
Please tell me which encryption or hash algorithm was used to generate values like this, and how I can decrypt it. I have already searched the web for this string type and checked previous threads related to encryption and hash methods, but failed to identify it.
Thanks.
Based on the byte values alone it is impossible to distinguish which algorithm was used. It is a desired characteristic of hashes and encryption algorithms that though they are deterministic, their output is indistinguishable from real randomness. It follows that they are also indistinguishable from one another.
The formatting, however, may help: as Hamed's answer points out, it may indicate a GUID. But there is no way to know based on the byte values alone.
It looks like a GUID. GUIDs have different versions and each version's algorithm differs.
For example, Version 1 GUIDs are generated based on the user's network card MAC address and the time while generating the GUID. Version 4 GUIDs use a pseudo-random number.
For more information check here.
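As a rough sketch of the "formatting may help" point, a string like this can be checked against the GUID format and its version digit read out; Python's standard uuid module is assumed here:

```python
import uuid

s = "861004c2-a9e0-4dae-a436-f46cecf14591"

try:
    u = uuid.UUID(s)
except ValueError:
    print("not formatted like a GUID/UUID")
else:
    # The 13th hex digit carries the version, the 17th the variant.
    print("parses as a GUID/UUID:", "version", u.version, "-", u.variant)
```

If it really is a version 4 GUID, the value is (pseudo-)random, so there is nothing to decrypt and no original input to recover.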

Which method of password storage is more secure [closed]

Which is the more secure method of storing passwords? I lack the mathematical background to determine the answer myself.
Let's assume, for the sake of argument, that all passwords and usernames generated for each of the following methods are random, known to be exactly six alphanumeric/special characters, and that each method uses the same hashing algorithm and the same number of passes.
1) The standard way: the username is stored in plain text and only the password has to be discovered. Hash(PlaintextPassword + UniqueRecordSalt) is what gets stored in the DB.
2) One field recognized as LoginInfo = Hash(Encryption(UserName, Password) + Shared Salt). Neither the username nor the password is ever stored in any other format, EVER.
Does forcing an attacker to try username/password combinations together offset the weakness of a shared salt as opposed to a unique per-record salt? This is, of course, completely IGNORING all effects on usability and focusing entirely on security.
Can anyone point me to any software to help me answer this question myself since I lack the cryptography and mathematical knowledge to arrive at the answer myself?
Please feel free to move this to a more appropriate forum; I didn't know where else to put it. However, I don't feel it is a topic irrelevant to programmers doing their everyday jobs.
Please read How to securely hash passwords? first. To summarize:
Never use a single pass of any hashing algorithm.
Never roll your own, which is what your example 2 is (and example 1 as well, if + means concatenation).
Username stored in the clear
Salt generated per user, 8-16 random bytes, stored in the clear
in pure binary or encoded into Base64 or hex or whatever you like.
Use BCrypt, SCrypt, or PBKDF2
Until some time after the results of the Password Hashing Competition, at least.
Use as high a work factor/cost/iteration count as your CPUs can handle during expected future peak times.
For PBKDF2 in particular, do not ask for more binary output bytes than the native hash produces. I would say not less than 20 binary bytes, regardless.
SHA-1: output = 20 bytes (40 hex digits)
SHA-224: 20 bytes <= output <= 28 bytes (56 hex digits)
SHA-256: 20 bytes <= output <= 32 bytes (64 hex digits)
SHA-384: 20 bytes <= output <= 48 bytes (96 hex digits)
SHA-512: 20 bytes <= output <= 64 bytes (128 hex digits)
For PBKDF2 in particular, SHA-384 and SHA-512 have a comparative advantage on 64-bit systems for the moment, as the 2014-vintage GPUs many attackers will use have a smaller margin of advantage over your defensive CPUs for 64-bit operations than they do for 32-bit operations.
If you want an example, then perhaps look at PHP source code, in particular the password_hash() and password_verify() functions, per the PHP.net Password Hashing FAQ.
Alternately, I have a variety of (currently very crude) password hashing examples at my github repositories. Right now it's almost entirely PBKDF2, but I will be adding BCrypt, SCrypt, and so on in the future.
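For a concrete sketch of the advice above (per-user salt stored in the clear, a tunable iteration count, and an output length no larger than the native hash), here is a minimal PBKDF2 example using Python's hashlib; treat the parameter values as placeholders to tune, not as recommendations:

```python
import hashlib
import hmac
import os

HASH_NAME = "sha256"      # native output: 32 bytes
ITERATIONS = 600_000      # tune to what your CPUs can handle at peak load
DKLEN = 32                # never request more bytes than the native hash produces

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # per-user random salt, stored in the clear
    dk = hashlib.pbkdf2_hmac(HASH_NAME, password.encode(), salt, ITERATIONS, DKLEN)
    return salt, dk

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    dk = hashlib.pbkdf2_hmac(HASH_NAME, password.encode(), salt, ITERATIONS, DKLEN)
    return hmac.compare_digest(dk, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```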
As you say, option 1 is the standard way to store passwords. As long as you use a secure hash function (e.g. the NIST-recommended PBKDF2) with a unique salt, your passwords are secure, so I would recommend this option.
Option 2 doesn't really make sense. You can't 'undo' a hash function, so why encrypt its input? You would then also have to store the encryption key somewhere, which is a different issue entirely.
Also what do you mean by a shared salt? If you always use the same salt then that defeats the point of salting your hashes. A unique salt per row is the way to go.
I would say that combining the username and password into a single hash is overcomplicating things, and limits your options in development, since you can't get a row from the DB given a username.
Say you want to lock out a user after 5 incorrect password attempts. With a standard plain-text username and hashed pw, you can just have a 'login_attempt_count' column and update the row for that user each time their password is incorrectly entered.
If your username and passwords are hashed together, you have no way of identifying which row to update with a login attempt count, since a hashed correct username with a wrong password won't match any hash.
I guess you could have some kind of mapping function to get a row_id given a username, but I would say it's just needlessly complicated, and with greater complication you have a bigger chance of security flaws.
As I said, I would just go with option 1. It's the industry-standard way to store passwords, and it's secure enough for pretty much any application (as long as you use a modern secure hash function).
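To make the lockout argument concrete, here is a small sketch (SQLite used purely for illustration) of why the plaintext username column in option 1 is what lets you find the row to update after a failed login:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        username TEXT PRIMARY KEY,          -- stored in the clear (option 1)
        salt BLOB,
        pw_hash BLOB,
        login_attempt_count INTEGER DEFAULT 0
    )
""")

def record_failed_login(username: str) -> None:
    # Only possible because the username is stored in the clear. With a single
    # Hash(Encryption(UserName, Password) + Shared Salt) column (option 2), a
    # wrong password changes the whole value and there is no key to find the row.
    conn.execute(
        "UPDATE users SET login_attempt_count = login_attempt_count + 1"
        " WHERE username = ?",
        (username,),
    )
    conn.commit()
```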

Manual change of GUID - how bad is it?

How bad is it to manually change a generated GUID and then use it? Is the probability of collision still insignificant, or is manipulating GUIDs dangerous?
Sometimes we just change some letter of a previously generated GUID and use it. Should we stop doing that?
This depends on the version of the GUID and where you are making the change. Let's dissect a little what a GUID actually looks like:
A GUID has a version.   The 13th hex digit in a GUID marks its version. Current GUIDs are usually generated with version 4. If you change the version backwards you risk collision with a GUID that already exists. Change it forwards and you risk collision with potential future GUIDs.
A GUID has a variant too.   The 17th hex digit in a GUID is the variant field. Some values of it are reserved for backward compatibility, one value is reserved for future expansion. So changing something there means you risk collision with previously-generated GUIDs or maybe GUIDs to be generated in the future.
A GUID is structured differently depending on the version.   Version 4 GUIDs use (for the most part – excepting the 17th hex digit) truly random or pseudo-random bits (in most implementations pseudo-random). Change something there and your probability of collision remains about the same.
It should be very similar for version 3 and 5 GUIDs which use hashes, although I don't recall ever seeing one in the wild. Not so much for versions 1 and 2, though. Those have a structure and depending on where you change something you make things difficult.
Version 1 GUIDs include a timestamp and a counter field which gets incremented if two GUIDs are generated in the same clock interval (and thus would lead to the same timestamp). If you change the timestamp you risk colliding with a GUID generated earlier or later on the same machine. If you change the counter you risk colliding with a GUID that was generated at the same time and thus needed the counter as a “uniquifier”.
Version 2 GUIDs expand on version 1 and include a user ID as well.   The timestamp is less accurate and contains a user or group ID while a part of the counter is used to indicate which one is meant (but which only has a meaning to the generating machine). So with a change in those parts you risk collision with GUIDs generated by another user on the same machine.
Version 1 and 2 GUIDs include a MAC address.   Specifically, the MAC address of the computer that generated them. This ensures that GUIDs from different machines are different even if generated in the very same instant. There is a fallback if a machine doesn't have a MAC address, but then there is no uniqueness guarantee. A MAC address also has a structure and consists of an "Organisationally Unique Identifier" (OUI; which is either locally-administered or handed out by the IEEE) and a unique identifier for the network card.
If you make a change in the OUI you risk colliding with GUIDs generated in computers with network cards of other manufacturers. Unless you make the change so the second-least significant bit of the first octet is 1, in which case you're switching to a locally-administered OUI and only risk collision with GUIDs generated on computers that have an overridden MAC address (which might include most VMs with virtual network hardware).
If you change the card identifier you risk collision with GUIDs generated on computers with other network cards by the same manufacturer or, again, with those where the MAC address was overridden.
No other versions exist so far but the gist is the following: A GUID needs all its parts to ensure uniqueness; if you change something you may end up with a GUID which isn't necessarily unique anymore. So you're probably making it more of a GID or something. The safest to change are probably the current version 4 GUIDs (which is what Windows and .NET will generate) as they don't really guarantee uniqueness but instead make it very, very unlikely.
Generally I'd say you're much better off generating a new GUID, though. This also helps the person reading them because you can tell two GUIDs apart as different easily if they look totally different. If they only differ in a single digit a person is likely to miss the change and assume the GUIDs to be the same.
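If you do want to see which field a manual edit would land in, here is a small sketch (Python's uuid module assumed) showing that "just changing some letter" of a version 4 GUID can silently change its version field:

```python
import uuid

original = str(uuid.uuid4())

# In the dashed text form, index 14 is the 13th hex digit, i.e. the version.
# Overwriting it turns a random version 4 GUID into one that claims to be a
# version 1 (timestamp + MAC) GUID, with none of version 1's guarantees.
edited = original[:14] + "1" + original[15:]

print(original, "-> version", uuid.UUID(original).version)
print(edited, "-> version", uuid.UUID(edited).version)
```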
Further reading:
Wikipedia: GUID
Wikipedia: UUID
Eric Lippert: GUID guide. Part 1, part 2, part 3. (Read it; this guy can explain wonderfully and happens to be on SO too)
Wikipedia: MAC address
RFC 4122: The GUID versions
RFC 4122: The variant field
DCE 1.1: Authentication and security services – The description of version 2 GUIDs
Raymond Chen: GUIDs are globally unique, but substrings of GUIDs aren't
Raymond Chen: GUIDs are designed to be unique, not random
I have no idea how that would affect the uniqueness of the GUID, but it's probably not a good idea.
Visual Studio has a built-in GUID generator that takes a couple of seconds to spin up and create a new GUID. If you don't use VS, there are other easy ways to create a new one. This page has two scripts (VBScript and PHP) that will do the job, and here's a .NET version.

Is there any difference between a GUID and a UUID?

I see these two acronyms being thrown around and I was wondering if there are any differences between a GUID and a UUID?
The simple answer is: no difference, they are the same thing.
2020-08-20 Update: While GUIDs (as used by Microsoft) and UUIDs (as defined by RFC4122) look similar and serve similar purposes, there are subtle-but-occasionally-important differences. Specifically, some Microsoft GUID docs allow GUIDs to contain any hex digit in any position, while RFC4122 requires certain values for the version and variant fields. Also, [per that same link], GUIDs should be all-upper case, whereas UUIDs should be "output as lower case characters and are case insensitive on input". This can lead to incompatibilities between code libraries (such as this).
(Original answer follows)
Treat them as a 16-byte (128-bit) value that is used as a unique value. In Microsoft-speak they are called GUIDs, but call them UUIDs when not using Microsoft-speak.
Even the authors of the UUID specification and Microsoft claim they are synonyms:
From the introduction to IETF RFC 4122 "A Universally Unique IDentifier (UUID) URN Namespace": "a Uniform Resource Name namespace for UUIDs (Universally Unique IDentifier), also known as GUIDs (Globally Unique IDentifier)."
From the ITU-T Recommendation X.667, ISO/IEC 9834-8:2004 International Standard: "UUIDs are also known as Globally Unique Identifiers (GUIDs), but this term is not used in this Recommendation."
And Microsoft even claims a GUID is specified by the UUID RFC: "In Microsoft Windows programming and in Windows operating systems, a globally unique identifier (GUID), as specified in [RFC4122], is ... The term universally unique identifier (UUID) is sometimes used in Windows protocol specifications as a synonym for GUID."
But the correct answer depends on what the question means when it says "UUID"...
The first part depends on what the asker is thinking when they are saying "UUID".
Microsoft's claim implies that all UUIDs are GUIDs. But are all GUIDs real UUIDs? That is, is the set of all UUIDs just a proper subset of the set of all GUIDs, or is it the exact same set?
Looking at the details of the RFC 4122, there are four different "variants" of UUIDs. This is mostly because such 16 byte identifiers were in use before those specifications were brought together in the creation of a UUID specification. From section 4.1.1 of RFC 4122, the four variants of UUID are:
Reserved, Network Computing System backward compatibility
The variant specified in RFC 4122 (of which there are five sub-variants, which are called "versions")
Reserved, Microsoft Corporation backward compatibility
Reserved for future definition.
According to RFC 4122, all UUID variants are "real UUIDs", so all GUIDs are real UUIDs. To the literal question "is there any difference between GUID and UUID" the answer is definitely no for RFC 4122 UUIDs: no difference (but subject to the second part below).
But not all GUIDs are variant 2 UUIDs (e.g. Microsoft COM has GUIDs which are variant 3 UUIDs). If the question was "is there any difference between GUID and variant 2 UUIDs", then the answer would be yes: they can be different. Someone asking the question probably doesn't know about variants and might be thinking only of variant 2 UUIDs when they say the word "UUID" (e.g. they vaguely know of the MAC address+time and the random-number algorithm forms of UUID, which are both versions of variant 2). In that case, the answer is yes, they are different.
So the answer, in part, depends on what the person asking is thinking when they say the word "UUID". Do they mean variant 2 UUID (because that is the only variant they are aware of) or all UUIDs?
The second part depends on which specification is being used as the definition of UUID.
If you think that was confusing, read ITU-T X.667 / ISO/IEC 9834-8:2004, which is supposed to be aligned and fully technically compatible with RFC 4122. It has an extra sentence in Clause 11.2 that says, "All UUIDs conforming to this Recommendation | International Standard shall have variant bits with bit 7 of octet 7 set to 1 and bit 6 of octet 7 set to 0". Which means that only variant 2 UUIDs conform to that Standard (those two bit values mean variant 2). If that is true, then not all GUIDs are conforming ITU-T/ISO/IEC UUIDs, because conformant ITU-T/ISO/IEC UUIDs can only be variant 2 values.
Therefore, the real answer also depends on which specification of UUID the question is asking about. Assuming we are clearly talking about all UUIDs and not just variant 2 UUIDs: there is no difference between a GUID and the IETF's UUIDs, but there is a difference between a GUID and a conforming ITU-T/ISO/IEC UUID!
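To see those variant bits concretely, here is a small check (Python assumed) on the byte that contains the 17th hex digit; an RFC 4122 / X.667-conforming UUID has the top two bits of that byte set to binary 10:

```python
import uuid

u = uuid.uuid4()

# u.bytes is the big-endian encoding; the 17th hex digit is the high nibble
# of the ninth byte. The RFC 4122 / X.667 variant means its top bits are 1 0.
variant_byte = u.bytes[8]
is_variant_2 = (variant_byte >> 6) == 0b10

print(u, is_variant_2, u.variant == uuid.RFC_4122)  # both checks True for uuid4()
```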
Binary encodings could differ
When encoded in binary (as opposed to the human-readable text format), the GUID may be stored in a structure with four different fields as follows. This format differs from the UUID standard only in the byte order of the first 3 fields.
Bits  Bytes  Name   Endianness (GUID)  Endianness (RFC 4122)
32    4      Data1  Native             Big
16    2      Data2  Native             Big
16    2      Data3  Native             Big
64    8      Data4  Big                Big
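Python's uuid module happens to expose both encodings, which makes the byte-order difference in the table easy to see (bytes is the big-endian RFC 4122 form, bytes_le the form with the first three fields in little-endian order, as used by the Windows GUID struct on little-endian hardware):

```python
import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

print(u.bytes.hex())     # 00112233445566778899aabbccddeeff  (RFC 4122: all big-endian)
print(u.bytes_le.hex())  # 33221100554477668899aabbccddeeff  (GUID struct: Data1-Data3 byte-swapped)
```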
GUID is Microsoft's implementation of the UUID standard.
Per Wikipedia:
The term GUID usually refers to Microsoft's implementation of the Universally Unique Identifier (UUID) standard.
An updated quote from that same Wikipedia article:
RFC 4122 itself states that UUIDs "are also known as GUIDs". All this suggests that "GUID", while originally referring to a variant of UUID used by Microsoft, has become simply an alternative name for UUID…
Not really. GUID is more Microsoft-centric whereas UUID is used more widely (e.g., as in the urn:uuid: URN scheme, and in CORBA).
GUID has longstanding usage in areas where it isn't necessarily a 128-bit value in the same way as a UUID. For example, the RSS specification defines GUIDs to be any string of your choosing, as long as it's unique, with an "isPermalink" attribute to specify that the value you're using is just a permalink back to the item being syndicated.
One difference between GUID in SQL Server and UUID in PostgreSQL is letter case; SQL Server outputs upper while PostgreSQL outputs lower.
The hexadecimal values "a" through "f" are output as lower case characters and are case insensitive on input. - rfc4122#section-3
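A practical consequence of that case difference: when comparing identifiers coming from the two systems, normalize or parse them first rather than comparing the raw strings. A tiny sketch (Python assumed):

```python
import uuid

from_sql_server = "861004C2-A9E0-4DAE-A436-F46CECF14591"   # upper case
from_postgres   = "861004c2-a9e0-4dae-a436-f46cecf14591"   # lower case

print(from_sql_server == from_postgres)                        # False: raw strings differ
print(uuid.UUID(from_sql_server) == uuid.UUID(from_postgres))  # True: parsed values are equal
```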
