I'm working on a Windows Phone 7 app that requires some data encryption. I'm having a hard time finding any documentation about a key ring or DPAPI for Windows Phone 7. I've come up with a few ways, but they all have downsides.
Generate a random encryption key and store it in IsolatedStorage. The pros are that it is simple and it "just works". The con is that this key will likely show up in backup files, etc.
Generate a random key and use the device ID to encrypt that key. The pro is that it offers SOME better protection, although it isn't hard to figure out the device ID. The con is that if the user backs the data up and restores it to another phone, the key will be invalid.
User input cannot be required (i.e. I can't derive the key from user input)
I don't like either of the two. There must be something like DPAPI or a key ring in Windows Phone 7. Or is it lacking that?
Unfortunately there is currently no keyring/DPAPI exposed to managed code. If you can't ask the user for input, then you're stuck with one of your two approaches. Personally, I like the approach where you use the IMEI as part of the key, i.e. (pseudocode):
shared_secret = sha512(newGuid + IMEI)
The benefit of doing it that way is that if the phone is lost or stolen and the SIM card is swapped out (for phones with SIM cards, such as GSM or Verizon's LTE handsets), the data will remain inaccessible, and if the old SIM card is left in so that the data can be accessed, there is a better chance the phone can be found and/or remotely wiped.
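Fleshed out a little, the idea looks roughly like this (a Python sketch purely for illustration rather than the C# you'd actually write on the phone; the stored GUID and the IMEI value are placeholders):

```python
import hashlib
import os

def derive_key(stored_guid: bytes, device_id: str) -> bytes:
    # Derive a device-bound key from a stored random value plus the device ID.
    return hashlib.sha512(stored_guid + device_id.encode("utf-8")).digest()

# One-time setup: generate the random half of the secret and persist it
# (on the phone this would live in isolated storage).
stored_guid = os.urandom(16)

# At runtime, combine it with the device identifier (hypothetical IMEI here).
key = derive_key(stored_guid, "356938035643809")

# Use (a slice of) `key` as the AES key. A backup restored to a different
# device yields a different device_id and therefore a different key.
print(key[:32].hex())
```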
I have a project for a website running on Django. One function of it needs to store a username/password for a third-party website, so it needs to use symmetric encryption, as it has to use these credentials in an automated process.
Storing credentials is never a good idea, I know, but for this case there is no other option.
My idea so far is to create a Django app that will save and use these passwords and do nothing else. With this I can have two "web servers" that will not receive any requests from outside, but only get tasks via Redis or something similar. Therefore I can isolate them to some degree (they are the only servers that will have access to this extra DB, they will not handle any web requests, etc.).
First question: Does this plan sound solid or is there a major flaw?
Second question is about the encryption itself:
AES requires an encryption key for all its work; OK, that needs to be "secured" in some way. But I am more interested in the IV.
Every user can have one or more credential sets saved in the extra DB. Would it be a good idea to use some sort of hash over the user ID or something similar to generate a per-user custom IV? Most of the time I see the IV just being randomly generated, but then I would also have to store it somewhere in addition to the key.
For me it gets a bit confusing here. I need the key and the IV to decrypt, but I would "store" them the same way. So if one gets compromised, isn't it likely that the IV will be too? Would it then make any difference if I generate the IV on the fly via a known procedure? The problem then is that everyone could know the IV if they know their user ID, as the code will be open source...
In the end, I need some guidance on how to handle the key and how best to get a unique IV per user. Thank you very much for reading this far :-)
Does this plan sound solid or is there a major flaw?
The need to store user credentials is IMHO a flaw by design; at least we all appreciate that you are aware of it.
Having a separate credential service with a dedicated datastore seems to be the best you can do under the stated conditions. I don't like the option of storing user credentials, but let's skip the academic discussion and get to practical things.
AES requires an encryption key for all its work; OK, that needs to be "secured" in some way.
Yes, there's the whole problem.
to generate a per-user custom IV?
The IV allows reusing the same key for multiple encryptions, so effectively it needs to be unique for each ciphertext (if a user has multiple passwords, you need an IV for each password). Very commonly the IV is prepended to the ciphertext, as it is needed to decrypt it.
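For example, a rough sketch of that convention in Python (using the `cryptography` package and AES-CBC purely for illustration): a fresh random IV is generated for every encryption and simply stored in front of the ciphertext, since it does not have to be secret.

```python
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)                               # fresh random IV per message
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()   # IV prepended to the ciphertext

def decrypt(key: bytes, blob: bytes) -> bytes:
    iv, ct = blob[:16], blob[16:]                     # read the IV back off the front
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()
```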
Would it then make any difference if I generate the IV on the fly via a known procedure?
IV doesn't need to be secret itself.
Some encryption modes require the IV to be unpredictable (e.g. CBC mode), so it's best to generate the IV randomly. There are some modes that use the IV as a counter or starting value (such as CTR or OFB), but it is still required that the IV be unique for each key and encryption.
I am using AES to encrypt some data, but the problem is that I have to use a key based on only 4 digits (like a PIN code), so anyone can loop through all 10,000 possibilities to find my key and decrypt my text. The data I am encrypting here is an SMS.
Is there any way to avoid this?
No, there isn't. You can add salts and iteration counts to a PBKDF all you want, but in the end the attacker only has 10K tries to go through, and that's peanuts.
The only sensible way to do this is to have a separate entity that performs the decryption. It can add secret entropy of its own to the key seed, and use a strong key. The entity would then place restrictions on the authentication with the PIN.
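As a rough sketch of that idea (Python, with hypothetical names): the decrypting entity mixes a secret of its own into the key material, so the 4-digit PIN alone is never enough to run the 10,000-guess search offline, and the entity can additionally rate-limit or lock out after a few wrong PINs.

```python
import hashlib
import os

# Held only by the decryption service; never stored on the device with the ciphertext.
SERVER_SECRET = os.urandom(32)

def derive_key(pin: str, salt: bytes) -> bytes:
    # The server-side secret supplies the entropy the PIN lacks; the PBKDF
    # iteration count slows down each individual guess on top of that.
    return hashlib.pbkdf2_hmac("sha256", pin.encode() + SERVER_SECRET, salt, 200_000, dklen=32)

salt = os.urandom(16)              # stored alongside the ciphertext
key = derive_key("1234", salt)     # 32-byte AES key; useless without SERVER_SECRET
```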
You might want to take a good look at your system's security architecture and see if you can change something to avoid this problem (access control, other login credentials etc. etc.).
Edit: Removed my comment about adding a salt, everyone who pointed this out was correct. You could perhaps increase the time complexity of decryption, such that a brute-force attack would take a prohibitively long time.
Edit: read this: https://security.stackexchange.com/questions/6719/how-would-you-store-a-4-digit-pin-code-securely-in-the-database
You can take the same approach as ATMs: after someone enters an incorrect PIN three times, that account is temporarily invalid (you can also set a long time-out) and that user will have to undertake some kind of action (e.g. click a confirmation link in an e-mail) in order to reactivate his/her account.
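A bare-bones sketch of that lockout logic (Python, with an in-memory dictionary standing in for real storage and `verify` standing in for your actual PIN check):

```python
from datetime import datetime, timedelta

MAX_ATTEMPTS = 3
LOCKOUT = timedelta(hours=24)
failures = {}   # account -> (attempt_count, time_of_last_failure)

def check_pin(account: str, pin: str, verify) -> bool:
    count, last = failures.get(account, (0, datetime.now()))
    if count >= MAX_ATTEMPTS and datetime.now() - last < LOCKOUT:
        raise PermissionError("account locked; confirm via e-mail link to reactivate")
    if verify(account, pin):
        failures.pop(account, None)      # a successful login clears the counter
        return True
    failures[account] = (count + 1, datetime.now())
    return False
```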
You'll also have to salt the PIN with a unique property of that user (preferably a string that was randomly generated when that user registered). I also recommend adding an additional salt to all hashes that is either hard-coded or read from a config file (useful in case your database is compromised but the rest isn't).
This approach still leaves you vulnerable to an attack where someone chooses a single PIN and brute-forces usernames. You can take some countermeasures against this by applying the same policy to IP addresses, but that's still far from optimal.
EDIT: If your goal is to encrypt traffic rather than to hash PINs, you should use HTTPS or another protocol based on public-key cryptography; that way you won't have to use your PIN for encrypting these SMS messages.
Assuming that you can only enter 4 digits, could you pad out the key in the application with something like the sender's phone number?
I'm building a secure payment portal.
We currently have two applications that will be using this. One is a web application, the other a desktop app. Both of these require users to login/authenticate, the same credentials can be used for either application.
I want to build an automatic login mechanism that will fill in all the various login/order details and be able to call this from either app mentioned above. I've been thinking that the best way to do this is to pass this information encrypted through the URL, i.e. https://mysite.com/TakePayment.aspx?id=GT2jkjh3....
Since we don't want to integrate the payment processing too tightly into the desktop app to reduce our PCI scope, we decided to have it open the browser to a central, secured payment page through a simple shell execute with the full URL causing the default browser to open that page.
Originally we were using AES for the encryption, but this is currently being re-examined as we would prefer not to have to give out the key to the end user (AES is symmetric; with symmetric encryption both parties need the same secret key, so why bother encrypting at all when we're going to be distributing the app?). So I'm looking at switching it over to public-key encryption using the built-in RSA routines in .NET.
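Roughly, the split we're after looks like this (a Python sketch with made-up values rather than our actual .NET code): only the server ever holds the private key, and the app that builds the URL needs nothing secret at all.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA1()), algorithm=hashes.SHA1(), label=None)

# Server side: generate the pair once; the private key never leaves the server.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
public_key = private_key.public_key()          # this part ships with the desktop app

# Desktop app / web app: encrypt the token with the public key only.
token = b"order=123&amount=49.95"              # made-up token contents
ciphertext = public_key.encrypt(token, OAEP)

# Payment page: decrypt with the private key.
assert private_key.decrypt(ciphertext, OAEP) == token
```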
After coding up the RSA portion I noticed most examples on the net used 1024 bits for the key length, so I went with that and now have our portal working with public-key encryption. However, the URLs generated are much, much longer than when I was using AES, which made me start researching what the maximum limits for URLs are. http://www.boutell.com/newfaq/misc/urllength.html says that IE is the limiting browser at about 2048 characters in the path portion. My initial tests with the RSA encryption show my URLs will be around 1400 characters long.
My questions boil down to this:
1) Is there a better way for passing information from a desktop app to a website that I'm not thinking of? I'd prefer it be just as easy to use from another web page as it is from the desktop, hence my current solution.
2) Are 1024-bit RSA keys necessary, or overkill for something like this? A shorter key would mean shorter encrypted text, right?
3) Are there any other unforeseen problems with URLs in the 1200-1400 character range? Proxies? Firewalls? Web-Accelerators?
Thanks
Update 12/11/2011:
Come to find out, the method we went with here ended up biting us in the ass recently (or rather, we only found out about it today; the problem was a very sporadic and difficult one to track down).
The plaintext token that we encrypted was originally rather small, only a hundred bytes or so. This is what resulted in my test URLs being approximately 1400 characters long. Through feature creep we've been required to add more data to the token, and the average URL length jumped to 1700-1800 characters.
Once the length of our plaintext hits 173 characters and above, however, the URL length jumps again, this time to 2080+ or so, which now causes problems for IE. After some investigation into how RSA encryption works, this should have been totally expected, but it was an oversight on my part originally.
We're using 1024-bit RSA encryption, which means the maximum data block size that can be encrypted is 1024/8 - 42 = 86 bytes (the 42 bytes being the OAEP padding overhead). The plaintext has to be "chopped up" into 86-byte chunks that are each encrypted separately, so at 86 * 2 = 172 characters we're only encrypting two blocks; above that we're encrypting three, four, five, etc. Once we passed 172, our ciphertext grew so long that the URLs are now too long. I'm probably messing up the explanation a little here, but that's the general gist of it.
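To sanity-check that block math, here's a rough sketch (Python, assuming 1024-bit RSA with OAEP/SHA-1 padding, which is where the 86-byte figure comes from; the base64 step is only an approximation of what ends up in the URL):

```python
import base64
import math

KEY_BYTES = 1024 // 8                  # each RSA block produces 128 bytes of ciphertext
MAX_CHUNK = KEY_BYTES - 2 * 20 - 2     # 86 bytes of plaintext per block with OAEP/SHA-1

def ciphertext_length(plaintext_len: int) -> int:
    blocks = math.ceil(plaintext_len / MAX_CHUNK)
    return blocks * KEY_BYTES

for n in (86, 172, 173, 258):
    raw = ciphertext_length(n)
    encoded = len(base64.b64encode(b"\x00" * raw))   # rough size once encoded for the URL
    print(f"{n:3d} plaintext bytes -> {raw} ciphertext bytes -> ~{encoded} URL characters")
```

The jump from 172 to 173 plaintext bytes adds a whole extra 128-byte block, which is exactly the step we ran into.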
It seems we'll be looking at designing a better way for this to work, as it can be expected they'll want "more features" to be added in the future and thus our token will grow ever larger...
Assuming this is all logged in a database, can you not pass the data back and forth using SSL web services? Then, to go quickly from the desktop app to the web app, make an RPC call to the website to generate a random key, pass that to the user, and call a web page using it. Make the key valid for, say, 10 seconds, meaning that should a key be captured and broken, it will already have become invalid.
I have little experience with this kind of thing so I'm expecting many holes to be poked in the idea.
We've had to extend our website to communicate user credentials to a supplier's website (in the query string) using AES with a 256-bit key; however, they are using a static IV when decrypting the information.
I've advised that the IV should not be static and that it is against our standards to do that, but if they change it on their end we would incur the [big] costs, so we have agreed to accept this as a security risk and use the same IV (much to my extreme frustration).
What I wanted to know is, how much of a security threat is this? I need to be able to communicate this effectively to management so that they know exactly what they are agreeing to.
UPDATE: We are also using the same KEY throughout.
Thanks
Using a static IV is always a bad idea, but the exact consequences depend on the mode of operation in use. In all of them, the same plaintext will produce the same ciphertext, but there may be additional vulnerabilities: for example, in CFB mode, given a static key and IV, the attacker can extract the keystream from a known plaintext and use it to decrypt all subsequent messages!
Using a static IV is always a bad idea. Using a static key is always a bad idea. I bet that your supplier had compiled the static key into their binaries.
Sadly, I've seen this before. Your supplier has a requirement that they implement encryption and they are attempting to implement the encryption in a manner that's as transparent as possible, or as "checkbox" as possible. That is, they aren't really using encryption to provide security, they are using it to satisfy a checkbox requirement.
My suggestion is that you see if the supplier would be willing to forsake this home-brewed encryption approach and instead run their system over SSL. Then you get the advantage of using a quality standard security protocol with known properties. It's clear from your question that neither your supplier nor you should be attempting to design a security protocol. You should, instead, use one that is free and available on every platform.
As far as I know (and I hope others will correct me if I'm wrong / the user will verify this), you lose a significant amount of security by keeping a static key and IV. The most significant effect you should notice is that when you encrypt a specific plaintext (say usernameA+passwordB), you get the same ciphertext every time.
This is great for pattern analysis by attackers, and seems like a password-equivalent that would give attackers the keys to the kingdom:
Pattern analysis: The attacker can see that the encrypted user+password combination "gobbbledygook" is used every night just before the CEO leaves work. The attacker can then leverage that information in the future to remotely detect when the CEO leaves.
Password equivalent: You are passing this username+password in the URL. Why can't someone else pass exactly the same value and get the same results you do? If they can, the encrypted data is a plaintext equivalent for the purposes of gaining access, defeating the purpose of encrypting the data.
What I wanted to know is, how much of a security threat is this? I need to be able to communicate this effectively to management so that they know exactly what they are agreeing to.
A good example of re-using the same nonce is Sony vs. Geohot (on a different algorithm, though); you can see how that turned out for Sony :) To the point: using the same IV might have mild or catastrophic consequences depending on the AES mode of operation you use. If you use CTR mode, then everything you encrypted is as good as plaintext. In CBC mode with a fixed IV, identical plaintexts produce identical ciphertexts, and identical plaintext prefixes produce identical leading ciphertext blocks, so repeated data is visible to anyone watching.
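To illustrate the CTR case (a Python sketch with made-up values): because the keystream depends only on the key and IV, reusing both means one known plaintext/ciphertext pair reveals the keystream, and every other message falls to a simple XOR.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)        # static key and IV, reused for every message

def encrypt(msg: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return enc.update(msg) + enc.finalize()

known_plain = b"user=alice&pass=hunter2#"       # a pair the attacker already knows
victim_ct = encrypt(b"user=ceo&pass=secret9999")

# Recover the keystream from the known pair, then strip it off any other ciphertext.
keystream = bytes(a ^ b for a, b in zip(encrypt(known_plain), known_plain))
recovered = bytes(a ^ b for a, b in zip(victim_ct, keystream))
print(recovered)                                # b'user=ceo&pass=secret9999'
```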
Our client wants to give us a database. The original database has a phone number column, but he doesn't want to give us the phone numbers; I'm not sure why. It has been decided that the client will give us the phone numbers encrypted with a 128-bit AES key.
We will tell the client which phone numbers are to be shortlisted for some purpose, but we will never know the actual phone numbers; we'll only know the encrypted values.
Here are things I don't understand:
1) Is using 128-bit AES encryption suitable for this purpose?
2) Should the client preserve the AES key used to convert the numbers, or should the client, instead of preserving the key, create a database mapping the original numbers to the encrypted numbers?
3) Should the same key be used to convert all numbers, or different keys?
4) If randomly generated keys are used to encrypt the numbers, isn't it possible that the encrypted text for two phone numbers could be the same?
IMO this is the wrong approach. Instead of encrypting the phone number, which still leaves a chance of you decrypting it (e.g. because someone leaks the key), the client should just replace them with an ID that points to a table with the real telephone numbers; of course, this lookup table stays with him, you never get it.
I.e., the original table:
Name   | Phone
-------+---------
Erich  | 555-4245
Max    | 1234-567
You get:
Name   | Phone
-------+---------
Erich  | 1
Max    | 2
Only your client has:
ID | Phone
---+---------
 1 | 555-4245
 2 | 1234-567
Addressing your concerns in order:
It may be, it may not be. You haven't really mentioned what the purpose is at all, in fact:
Why the need for encryption?
Who is it being protected from?
What's the value (or liability to you, if lost) of the data?
How motivated are the hypothetical attackers assumed to be?
What performance loss is acceptable for the security gain?
What hardware do you have available?
Who has what physical/logical access to various parts of the system?
And so on, and so forth. Without knowing the situation, it's not possible to say whether this is an appropriate encryption scheme. (Though it is likely to be a solid choice).
Surely that's for the client to decide? I will say, though, that the latter case seems to defeat the purpose of encryption entirely.
The same key ought to be used to convert all numbers, unless you fancy juggling keys around to try and remember which one to use to decrypt which phone numbers. If the security system is well designed, this wouldn't give any extra security and would just be a bizarre headache.
By definition of encryption, no. It's always a reversible mapping, which means there's no loss of information such as you would get with a hash. Consequently, every instance of ciphertext has a single unique plaintext that will encrypt to it (with a given key).
Though all in all this doesn't sound like it's needed. It sounds to me like someone's been making decisions based on appearances rather than technical merit - "We encrypt your phone numbers with the same 128-bit encryption used in browsers" sounds good but is it actually needed?