Can roughtime be used as a timestamp authority? - trusted-timestamp

The Roughtime protocol gives you a secure timestamp for the present moment. Can it be used as a timestamp authority, i.e. to sign a document and later prove that it existed before a certain time?

The simplest way would be to use a cryptographic hash of your file as the nonce in your request to the Roughtime servers.
Somebody apparently even implemented a proof of concept, including a more sophisticated method.
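A minimal sketch of that simple approach in C#, assuming the original Roughtime spec's 64-byte nonce (conveniently the size of a SHA-512 digest); the Roughtime client call is hypothetical and stands in for whatever implementation you use:

using System;
using System.IO;
using System.Security.Cryptography;

class RoughtimeStamp
{
    // Derive the request nonce from the document itself (64 bytes = SHA-512 output).
    static byte[] NonceFromFile(string path)
    {
        using (var sha512 = SHA512.Create())
        using (var file = File.OpenRead(path))
            return sha512.ComputeHash(file);
    }

    static void Main(string[] args)
    {
        byte[] nonce = NonceFromFile(args[0]);

        // Hypothetical client call: send the nonce, keep the signed response.
        // var response = roughtimeClient.Request(nonce);
        // Anyone can later recompute the file's hash, check that it equals the nonce,
        // and verify the server's signature over the (nonce, timestamp) response.
        Console.WriteLine(BitConverter.ToString(nonce).Replace("-", ""));
    }
}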

Related

Is HMAC still needed if encrypted data is always saved and retrieved locally

My understanding of HMAC is that it can help to verify the integrity of encrypted data before the data is processed, i.e. it can be used to determine whether or not the data being sent to a decryption routine has been modified in any way.
That being the case, is there any advantage in incorporating it into an encryption scheme if the data is never transmitted outside of the application generating it? My use case is quite simple: a user submits data (in plaintext) to the scripts I've written to store customer details. The scripts encrypt this data and save it to the database, and they also provide a way for the user to retrieve and decrypt the data based on the record ID they supply. There is no way for my users to send encrypted data directly to the decryption routine, and I don't need to provide an external API.
Therefore, is it reasonable to assume that there is a chain of trust in the application by default because the same application is responsible for writing and retrieving the data? If I add HMAC to this scheme, is it redundant in this context or is it best practice to always implement HMAC regardless of the context? I'm intending to use the Defuse library but I'd like to understand what the benefit of HMAC is to my project.
Thanks in advance for any advice or input :)
First, you should understand that there are attacks that allow an attacker to modify encrypted data without decrypting it. See Is there an attack that can modify ciphertext while still allowing it to be decrypted? on Security.SE and Malleability attacks against encryption without authentication on Crypto.SE. If an attacker gets write access to the encrypted data -- even without any decryption keys -- they could cause significant havoc.
You say that the encrypted data is "never transmitted outside of the application generating it", but in the next two sentences you say that you "save it to the database", which appears (to me) to be something of a contradiction. Trusting the processing of encrypted data in memory is one thing; trusting its serialization to disk is another, especially if it is done by another program (such as a database system) and/or on a separate physical machine (now or in the future, as the system evolves).
The significant question here is: would it ever be possible for an attacker to modify or replace the encrypted data with alternate encrypted data, without access to the application and keys? If the attacker is an insider and runs the program as a normal user, then it's not generally possible to defend your data: anything the program allows the attacker to do is on the table. However, HMAC is relevant when write access to the data is possible for a non-user (or for a user in excess of their normal permissions). If the database is compromised, an attacker could possibly modify data with impunity, even without access to the application itself. Using HMAC verification severely limits the attacker's ability to modify the data usefully, even if they get write access.
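For illustration, a minimal encrypt-then-MAC sketch in C# of the kind of integrity check being described. This is not the Defuse library's actual format; it assumes a reasonably recent .NET (for CryptographicOperations.FixedTimeEquals), and management of the two separate keys is left out:

using System;
using System.Linq;
using System.Security.Cryptography;

// AES-CBC for confidentiality, HMAC-SHA256 over IV||ciphertext for integrity.
static class EncryptThenMac
{
    public static byte[] Encrypt(byte[] plaintext, byte[] encKey, byte[] macKey)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = encKey;
            aes.GenerateIV();

            byte[] ciphertext;
            using (var enc = aes.CreateEncryptor())
                ciphertext = enc.TransformFinalBlock(plaintext, 0, plaintext.Length);

            byte[] ivAndCiphertext = aes.IV.Concat(ciphertext).ToArray();
            using (var hmac = new HMACSHA256(macKey))
                return ivAndCiphertext.Concat(hmac.ComputeHash(ivAndCiphertext)).ToArray(); // IV || ct || tag
        }
    }

    public static byte[] Decrypt(byte[] blob, byte[] encKey, byte[] macKey)
    {
        byte[] ivAndCiphertext = blob.Take(blob.Length - 32).ToArray();
        byte[] tag = blob.Skip(blob.Length - 32).ToArray();

        // Verify the MAC before doing anything with the ciphertext.
        using (var hmac = new HMACSHA256(macKey))
        {
            byte[] expected = hmac.ComputeHash(ivAndCiphertext);
            if (!CryptographicOperations.FixedTimeEquals(expected, tag))
                throw new CryptographicException("MAC mismatch: data was modified.");
        }

        using (var aes = Aes.Create())
        {
            aes.Key = encKey;
            aes.IV = ivAndCiphertext.Take(16).ToArray();
            using (var dec = aes.CreateDecryptor())
                return dec.TransformFinalBlock(ivAndCiphertext, 16, ivAndCiphertext.Length - 16);
        }
    }
}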
My OCD usually dictates that implementing HMAC is always good practice, if for no other reason than to remove the warning from the logs.
In your case I do not believe there is a definite upside to implementing HMAC other than ensuring the integrity of the plaintext submission. Your script may encrypt the data, but that would not be useful in the unlikely event that bad data is passed to it.

bcrypt/Bcrypt.net strength and alternatives

Ok, after a LOT of research I've settled on using bcrypt (feel free to comment) to hash and store passwords in my PhoneGap app.
A few days ago I stumbled upon Bcrypt.net and it seems 'good enough' to me (again, feel free to comment). So, my question is what other alternative implementations of bcrypt are available in C#? Are there any SERIOUS flaws in the implementation of Bcrypt.net?
My security model is basically going to look like this:
User enters his pin/password/passphrase on the client
This is sent to my .NET app over SSL (so it is basically sent in plaintext from the client)
Use a library like bcrypt.net to hash the password and do the storage/comparison
Is there anything else that I really need to consider here?
Any help will be greatly appreciated.
Glad to see somebody here who did some research.
I haven't seen any good reasons why you should not use bcrypt. In general, using either bcrypt, PBKDF2, or scrypt on the server provides a good layer of security.
As always, the devil is in the details. You certainly require SSL, if possible TLS 1.2 using AES encryption. If you cannot do this, make sure you don't allow much other than username/password + the necessary HTML over your connection.
You should make a decision on the character encoding of the password. I would advise UTF-8, possibly narrowed down to printable ASCII characters. Either document the character encoding used or store it somewhere in the configuration.
Try to store all input parameters to bcrypt together with the "hashed" password. Certainly don't forget the iteration count. This makes it easier to upgrade to a higher iteration count when the user enters his/her password later on. You need to generate a secure random salt of 8-16 bytes to store with the password. A sketch of what this looks like with Bcrypt.net follows below.
In addition, you may want to apply an additional KBKDF (key-based key derivation function) to the output of any of the above PBKDFs. This makes it possible to use the output of bcrypt for additional keys etc. KBKDFs work on data that already has enough entropy, so they generally take little time (e.g. use a NIST SP 800-108 compatible counter mode KDF). I guess this should be considered "expert mode".
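A hedged sketch of the above with Bcrypt.net (the method names match common forks such as BCrypt.Net-Next; check your particular package). bcrypt generates its own random salt and embeds it, together with the cost factor, in the output string, so persisting that single string keeps all the parameters with the hash:

// The stored string looks like "$2a$12$<22-char salt><31-char hash>", so the
// salt and work factor travel with the hash automatically.
string password = Console.ReadLine();                              // user-supplied password
string stored = BCrypt.Net.BCrypt.HashPassword(password, 12);      // 12 = work factor (2^12 rounds)

// On login, Verify() reads the salt and work factor back out of the stored string.
bool matches = BCrypt.Net.BCrypt.Verify(password, stored);

// If the stored work factor is below your current target, rehash after a
// successful verification to upgrade the cost.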
The major reasons for doing password hashing are:
a. Password plaintexts are not transmitted over the wire (primary).
b. Password plaintexts are never persisted on server (secondary)
So with your setup you're not doing a., and are instead relying on the SSL. I think you should still hash on the client side if possible. It leaves you more margin for future changes, and in general passwords deserve higher security/protection than your content data.
Also, I don't know what kind of server apps / extensibility you may support, so insulating the password(s) from code might still be an additional concern.
As far as the actual algorithm / utility for doing the hash goes, I don't have the security expertise :)
You're good with bcrypt.
Great research from a cracker: https://crackstation.net/hashing-security.htm#faq
Additional verification from sophos: http://www.sophos.com/en-us/medialibrary/PDFs/other/sophossecuritythreatreport2013.pdf
Note that the Windows "BCrypt*" functions in the native crypto API share the name but are not the bcrypt password-hashing algorithm:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa375383(v=vs.85).aspx

Client side encryption - best practice

I wrote a "Password Locker" C# app a while ago as an exercise in encryption. I'd like to move the data to the web so that I can access it anywhere without compromising my password data. I'd just like to run my ideas by the community to ensure I'm not making a mistake as I'm not an encryption expert.
Here's what I envision:
In the C# app all the password data is encrypted as a single chunk of text using a user supplied password. I'm using Rijndael (symmetric encryption) in CBC mode. The password is salted using a hard coded value.
Encrypted data gets sent to my database
I go to a web page on my server and download the encrypted text. Using client-side JavaScript I input my password, and the JavaScript decrypts everything (still client-side)
Here are my assumptions:
I assume that all transmissions can be intercepted
I assume that the JavaScript (which contains the decryption algorithm and hard-coded salt) can be intercepted (since it's really just on the web)
The password cannot be intercepted (since it's only input client side)
The result is that someone snooping could have everything except the password.
So, based on those assumptions: Is my data safe? I realize that my data is only as safe as the strength of my password... Is there something I can do to improve that? Is Rijndael decryption slow enough to prevent brute force attacks?
I thought about using a random salt value, but that would still need to be transmitted and because of that, it doesn't seem like it would be any safer. My preference is to not store the password in any form (hashed or otherwise) on the web.
Edit:
I am considering using SSL, so my "interception" assumptions may not be valid in that case.
Edit 2:
Based on comments from Joachim Isaksson, I will be running with SSL. Please continue breaking apart my assumptions!
Edit 3:
Based on comments from Nemo I will use salt on a per user basis. Also, I'm using PBKDF2 to derive a key based on passwords, so this is where I'll get my "slowness" to resist brute force attacks.
Without even going into the crypto analysis in any way: if you're assuming all your information can be intercepted (i.e. you're running without SSL), you're not secure.
Since anyone can intercept the JavaScript, they can also change the JavaScript to make the browser pass the cleartext elsewhere once decrypted.
Also, anyone hacking into the site (or the site owner) can maliciously change the JavaScript to do the same thing even if SSL is on.
By "password data", I assume you mean "password-protected data"?
The salt does need to be random. It is fine that it is transmitted in the clear. The purpose of a salt is protection against dictionary attacks. That is, should someone manage to obtain your entire encrypted database, they could quickly try a large dictionary of passwords against all of your users. With random salts, they need to try the dictionary against each user separately.
Or, alternatively, even without compromising the database, they could generate a huge collection of pre-encrypted data for lots of dictionary words, and immediately be able to recognize any known plaintext encrypted by any of those keys.
Even with a salt, dictionary attacks can be faster than you would like, so deriving key data from a password is a lot more subtle than most people realize.
Bottom line: as always, never invent your own cryptography, not even your own modes of operation. To derive an encryption key from a password, use a well-known standard like PBKDF2 (a.k.a. PKCS #5).
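For reference, a minimal sketch of PBKDF2 key derivation in C# using Rfc2898DeriveBytes (.NET's PBKDF2 implementation). The iteration count and the SHA-256 choice are assumptions to tune, not part of the advice above:

using System;
using System.Security.Cryptography;

// Derive an AES key from a password with PBKDF2 (PKCS #5) and a random
// per-user salt. Pick the highest iteration count your hardware tolerates.
string password = "user-supplied passphrase";   // hypothetical input

byte[] salt = new byte[16];
using (var rng = RandomNumberGenerator.Create())
    rng.GetBytes(salt);

using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000, HashAlgorithmName.SHA256))
{
    byte[] aesKey = kdf.GetBytes(32);   // 256-bit AES key
    // Encrypt with aesKey; store the salt and iteration count next to the
    // ciphertext so the same key can be re-derived later.
}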
Well, as this is an open question:
Issue #1
What are you going to do if the password that is supplied is incorrect, or if the salt/ciphertext is altered? You will get an incorrect decryption result, but how are you going to test that? What happens if just the last part of the ciphertext is altered? Or removed altogether?
Solution: Provide integrity protection against such attacks. Add an HMAC using a different key, or use an authenticated mode such as GCM.
Issue #2
What happens if you change or add a few bytes to the password store (compare the encrypted store over time)?
Solution: Encrypt your key store with a different IV each time.
That's already several issues found :) Cryptography is hard.
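A sketch addressing both points with AES-GCM (requires .NET Core 3.0 or later; key management is assumed): GCM authenticates the ciphertext, so tampering is detected on decryption, and a fresh random nonce is used for every encryption of the store.

using System;
using System.Security.Cryptography;

static class StoreSealer
{
    public static byte[] Seal(byte[] key, byte[] plaintext)
    {
        byte[] nonce = new byte[12];                 // 96-bit nonce, fresh each time
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(nonce);

        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[16];                   // 128-bit authentication tag

        using (var gcm = new AesGcm(key))
            gcm.Encrypt(nonce, plaintext, ciphertext, tag);

        // Store nonce || tag || ciphertext; AesGcm.Decrypt throws if anything was altered.
        byte[] blob = new byte[nonce.Length + tag.Length + ciphertext.Length];
        Buffer.BlockCopy(nonce, 0, blob, 0, nonce.Length);
        Buffer.BlockCopy(tag, 0, blob, nonce.Length, tag.Length);
        Buffer.BlockCopy(ciphertext, 0, blob, nonce.Length + tag.Length, ciphertext.Length);
        return blob;
    }
}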

How to safely de-duplicate files encrypted at the client's side?

Bitcasa claims to provide infinite storage for a fixed fee.
According to a TechCrunch interview, Bitcasa uses client-side convergent encryption, so no unencrypted data ever reaches the server. With convergent encryption, the encryption key is derived from the source data being encrypted.
Basically, Bitcasa uses a hash function to identify identical files uploaded by different users to store them only once on their servers.
I wonder how the provider is able to ensure that no two different files get mapped to the same encrypted file or the same encrypted data stream, since hash functions aren't injective.
Technical question: what would I have to implement so that such a collision can never happen?
Most deduplication schemes make the assumption that hash collisions are so unlikely to happen that they can be ignored. This allows clients to skip reuploading already-present data. It does break down when you have two files with the same hash, but that's unlikely to happen by chance (and you did pick a secure hash function to prevent people from doing it intentionally, right?)
If you insist on being absolutely sure, all clients must reupload their data (even if it's already on the server), and once this data is reuploaded, you must check that it's identical to the currently-present data. If it's not, you need to pick a new ID rather than using the hash (and sound the alarm that a collision has been found in SHA1!)
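A sketch of that paranoid variant in C#, where a matching hash is treated only as a hint and the stored bytes are compared before the upload is treated as a duplicate. The blobStore index (hash to stored path) and the on-disk layout are hypothetical:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

static class DedupStore
{
    public static string StoreWithDedup(byte[] content, IDictionary<string, string> blobStore)
    {
        string hash;
        using (var sha256 = SHA256.Create())
            hash = BitConverter.ToString(sha256.ComputeHash(content)).Replace("-", "");

        if (blobStore.TryGetValue(hash, out string existingPath) &&
            File.ReadAllBytes(existingPath).SequenceEqual(content))
        {
            return hash;                              // true duplicate: nothing new to store
        }

        // Either a new file, or (astronomically unlikely) a real hash collision:
        // in the collision case fall back to a fresh identifier instead of the hash.
        string id = blobStore.ContainsKey(hash) ? Guid.NewGuid().ToString("N") : hash;

        Directory.CreateDirectory("blobs");
        string path = Path.Combine("blobs", id);
        File.WriteAllBytes(path, content);
        blobStore[id] = path;
        return id;
    }
}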

How do I prevent replay attacks?

This is related to another question I asked. In summary, I have a special case of a URL where, when a form is POSTed to it, I can't rely on cookies for authentication or to maintain the user's session, but I somehow need to know who they are, and I need to know they're logged in!
I think I came up with a solution to my problem, but it needs fleshing out. Here's what I'm thinking. I create a hidden form field called "username", and place within it the user's username, encrypted. Then, when the form POSTs, even though I don't receive any cookies from the browser, I know they're logged in because I can decrypt the hidden form field and get the username.
The major security flaw I can see is replay attacks. How do I prevent someone from getting ahold of that encrypted string, and POSTing as that user? I know I can use SSL to make it harder to steal that string, and maybe I can rotate the encryption key on a regular basis to limit the amount of time that the string is good for, but I'd really like to find a bulletproof solution. Anybody have any ideas? Does the ASP.Net ViewState prevent replay? If so, how do they do it?
Edit: I'm hoping for a solution that doesn't require anything stored in a database. Application state would be okay, except that it won't survive an IIS restart or work at all in a web farm or garden scenario. I'm accepting Chris's answer, for now, because I'm not convinced it's even possible to secure this without a database. But if someone comes up with an answer that does not involve the database, I'll accept it!
If you hash in a time-stamp along with the user name and password, you can close the window for replay attacks to within a couple of seconds. I don't know if this meets your needs, but it is at least a partial solution.
There are several good answers here and putting them all together is where the answer ultimately lies:
Block-cipher encrypt (with AES-256+) and hash (with SHA-2+) all state/nonce related information that is sent to a client. Hackers will otherwise just manipulate the data, view it to learn the patterns, and circumvent everything else. Remember ... it only takes one open window.
Generate a one-time random and unique nonce per request that is sent back with the POST request. This does two things: it ensures that the POST response goes with THAT request, and it allows tracking one-time use of a given set of GET/POST pairs (preventing replay).
Use timestamps to make the nonce pool manageable. Store the time-stamp in an encrypted cookie per #1 above. Throw out any requests older than the maximum response time or session for the application (e.g., an hour).
Store a "reasonably unique" digital fingerprint of the machine making the request with the encrypted time-stamp data. This will prevent another trick wherein the attacker steals the clients cookies to perform session-hijacking. This will ensure that the request is coming back not only once but from the machine (or close enough proximity to make it virtually impossible for the attacker to copy) the form was sent to.
There are ASPNET and Java/J2EE security filter based applications that do all of the above with zero coding. Managing the nonce pool for large systems (like a stock trading company, bank or high volume secure site) is not a trivial undertaking if performance is critical. Would recommend looking at those products versus trying to program this for each web-application.
If you really don't want to store any state, I think the best you can do is limit replay attacks by using timestamps and a short expiration time. For example, server sends:
{Ts, U, HMAC({Ts, U}, Ks)}
Where Ts is the timestamp, U is the username, and Ks is the server's secret key. The user sends this back to the server, and the server validates it by recomputing the HMAC on the supplied values. If it's valid, you know when it was issued, and can choose to ignore it if it's older than, say, 5 minutes.
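A sketch of issuing and validating such a token in C#. The pipe-delimited encoding and the 5-minute window are illustrative choices, not part of the scheme above, and it assumes a modern .NET for CryptographicOperations.FixedTimeEquals (usernames containing '|' would need escaping):

using System;
using System.Security.Cryptography;
using System.Text;

static class AuthToken
{
    const int MaxAgeSeconds = 300;   // illustrative 5-minute window

    // Issue "Ts|U|HMAC(Ts|U, Ks)" for the hidden form field.
    public static string Issue(string username, byte[] serverKey)
    {
        string payload = DateTimeOffset.UtcNow.ToUnixTimeSeconds() + "|" + username;
        using (var hmac = new HMACSHA256(serverKey))
            return payload + "|" + Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
    }

    // Recompute the HMAC, compare in constant time, then check the age.
    public static bool TryValidate(string token, byte[] serverKey, out string username)
    {
        username = null;
        string[] parts = token.Split('|');
        if (parts.Length != 3 || !long.TryParse(parts[0], out long issued))
            return false;

        string payload = parts[0] + "|" + parts[1];
        byte[] expected;
        using (var hmac = new HMACSHA256(serverKey))
            expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));

        byte[] supplied;
        try { supplied = Convert.FromBase64String(parts[2]); }
        catch (FormatException) { return false; }

        if (!CryptographicOperations.FixedTimeEquals(expected, supplied))
            return false;

        if (DateTimeOffset.UtcNow.ToUnixTimeSeconds() - issued > MaxAgeSeconds)
            return false;   // too old: reject to limit the replay window

        username = parts[1];
        return true;
    }
}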
A good resource for this type of development is The Do's and Don'ts of Client Authentication on the Web
You could use some kind of random challenge string that's used along with the username to create the hash. If you store the challenge string on the server in a database you can then ensure that it's only used once, and only for one particular user.
In one of my apps, to stop 'replay' attacks I have inserted IP information into my session object. Every time I access the session object in code I pass the Request.UserHostAddress with it and then compare to make sure the IPs match up. If they don't, then obviously someone other than the original person made this request, so I return null. It's not the best solution, but it is at least one more barrier against replay attacks.
Can you use memory or a database to maintain any information about the user or request at all?
If so, then on the request for the form, I would include a hidden form field whose contents are a randomly generated number. Save this token in the application context or some sort of store (a database, flat file, etc.) when the request is rendered. When the form is submitted, check the application context or database to see if that randomly generated number is still valid (however you define valid - maybe it can expire after X minutes). If so, remove this token from the list of "allowed tokens".
Thus any replayed requests would include this same token which is no longer considered valid on the server.
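An in-memory sketch of that one-time token pool (a stand-in for the database or flat-file store mentioned above; the names and expiry are hypothetical):

using System;
using System.Collections.Concurrent;
using System.Security.Cryptography;

static class FormTokens
{
    // token -> expiry time (UTC); an in-memory stand-in for a database table.
    static readonly ConcurrentDictionary<string, DateTime> Issued =
        new ConcurrentDictionary<string, DateTime>();

    public static string Issue()
    {
        byte[] bytes = new byte[32];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);

        string token = Convert.ToBase64String(bytes);
        Issued[token] = DateTime.UtcNow.AddMinutes(15);   // illustrative expiry
        return token;
    }

    // TryRemove makes redemption one-shot: a replayed token is rejected.
    public static bool Redeem(string token)
    {
        return Issued.TryRemove(token, out DateTime expiry) && expiry > DateTime.UtcNow;
    }
}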
I am new to some aspects of web programming but I was reading up on this the other day. I believe you need to use a nonce.
(Replay attacks can easily involve IP/MAC spoofing, and you're also challenged by dynamic IPs.)
It is not just replay you are after here; in isolation it is meaningless. Just use SSL and avoid handcrafting anything.
ASP.NET ViewState is a mess, avoid it. While PKI is heavyweight and bloated, at least it works without inventing your own security 'schemes'. So if I could, I'd use it and always go for mutual authentication. Server-only authentication is quite useless.
The ViewState includes security functionality. See this article about some of the built-in security features in ASP.NET. It does validation against the server machineKey in the machine.config on the server, which ensures that each postback is valid.
Further down in the article, you also see that if you want to store values in your own hidden fields, you can use the LosFormatter class to encode the value in the same way that the ViewState is encoded.
// Serializes the value with LosFormatter (System.Web.UI), producing the same
// base64 "limited object serialization" format that ViewState uses.
private string EncodeText(string text) {
    StringWriter writer = new StringWriter();
    LosFormatter formatter = new LosFormatter();
    formatter.Serialize(writer, text);
    return writer.ToString();
}
Use HTTPS... it has replay protection built in.
If you only accept each key once (say, make the key a GUID, and then check when it comes back), that would prevent replays. Of course, if the attacker responds first, then you have a new problem...
Is this WebForms or MVC? If it's MVC you could utilize the AntiForgery token. This seems like it's similar to the approach you mention except it uses basically a GUID and sets a cookie with the guid value for that post. For more on that see Steve Sanderson's blog: http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
Another thing, have you considered checking the referrer on the postback? This is not bulletproof but it may help.

Resources