For our messaging app, if we send user messages directly to CloudKit (without doing any of our own encryption), can we claim that our app is end-to-end encrypted, "where only the communicating users can read the messages"?
Matt,
I presume this is a coding question, so I'll answer with a coded answer :)
You can offer your guarantee that messages are only readable by the communicating users by encrypting the messages with public/private key pairs, technology that has been around for quite a while now. The discussion referenced below walks through the process in detail.
Search for "Swift RSA Public Key Encryption Howto" on the https://forums.developer.apple.com forums.
I know this is an old and somewhat controversial question, but I thought I'd lend my two cents on the subject.
After some quick Googling, the most authoritative public and user-facing answer to this question I've found is on Apple's Privacy page. See the section on iCloud, which reads,
Your iCloud content [...] is encrypted when it’s transferred and when it’s stored on our servers.
It then talks about CloudKit.
That sounds pretty end-to-end-ish to me.
However, they go on to state that "some personal data, such as Home and Health data, is stored with end-to-end encryption." This contrasting passage does not bode well for your "We're end-to-end encrypted!" requirement, unless the quote above suffices for your purposes.
For my own present project, it does, as my data isn't necessarily business-class levels of sensitive; I don't need full control over what and how everything is encrypted. My biggest concern is that I, the developer, cannot see my users' data. This, CloudKit enforces for me, whereas tools like Firebase do not.
That is all.
Happy Googling! 😊
I am trying to design a pairing application for my university this Valentine's Day. How is it supposed to work, you ask? Clients submit their preferences to the server, and after a few days, if any two clients have the same preferences, they will be notified -- and not in any other case. A foolproof design needs to be built for this purpose.

What I want to assure my clients of is that even though they submit their favourite responses to me via my website, I will still not be able to see them; if I could, the application would have a privacy problem. I need to match the users' preferences against each other, but they will obviously be encrypted, and there is no way I can match any two unless I decrypt them at some point locally on my server (assuming RSA encryption of the hashed values has a negligible probability of collisions, I definitely cannot match them directly :) ). The hard constraint, then, is: never, ever decrypt the client preferences locally on the admin's machine/server.

One approach currently on my mind is to introduce a salt during encryption that stays safely in the hands of the client, but the values still need to be decrypted at some point to match these hashes. Can there be some alternative approach for this type of design? I think I might be missing something.
Turn things around. Design a mailbox-like system and use pseudonyms. Instead of getting an email once a match has been found, make people request it. That way you can leave the preferences unencrypted; only the actual user identity has to be hidden from the public. Start with an initial population of fake users to hide your early adopters and you will be done.
I.e.: in order to post preferences, I leave a public key under which I can be contacted. The server searches for matches and posts encrypted messages to a public site. Everyone can see these messages (or not, if you design it properly), but I am the only one who can read them.
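To make that concrete, here is a rough Python sketch of the mailbox idea, assuming each user registers a pseudonym plus an RSA public key; the pseudonyms, the `cryptography` package, and the in-memory dicts are all illustrative assumptions:

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Client side: each user keeps a private key and registers only the public half.
alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Server side: preferences are stored in the clear, but linked only to pseudonyms.
submissions = {
    "bluebird-42": {"preference": "hiking at dawn", "pubkey": alice_key.public_key()},
    "redfox-07":   {"preference": "hiking at dawn", "pubkey": bob_key.public_key()},
}

# When two pseudonyms submit the same preference, post a notice each can fetch,
# encrypted so that only the matched user can read it.
mailbox = {}
names = list(submissions)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if submissions[a]["preference"] == submissions[b]["preference"]:
            mailbox[a] = submissions[a]["pubkey"].encrypt(f"you matched with {b}".encode(), OAEP)
            mailbox[b] = submissions[b]["pubkey"].encrypt(f"you matched with {a}".encode(), OAEP)

# Client side again: only Alice's private key can open her mailbox entry.
print(alice_key.decrypt(mailbox["bluebird-42"], OAEP).decode())
```

The design choice here is that the server never learns who is behind a pseudonym, so it can match plaintext preferences freely without ever holding anything it needs to decrypt.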
I want to handle some confidential data in one of my web applications, such that the data cannot be read by the developer or the database administrator.
We can easily hide the data from the DB administrator by applying some encryption technique, but the developer can still see the data, since he is the one writing the decryption code. I want only the end user to be able to see his data.
I can't encrypt the data using algorithms like PBKDF2, or DB-side encryption methods like TDE and EKM, because I would still need to keep the encryption key somewhere. If I keep it on the server side or in the DB, the developer can access it and decrypt the data. If I keep it on the client side, the user can't access his information from a separate machine.
So How to handle this situation? Thanks in advance.
You are heading in the direction of Zero Knowledge Web Applications, such as the one implemented by SpiderOak (see also crypton). These applications typically work by deriving a key from the user's password using something like PBKDF2, and performing encryption/decryption on the client side. However, there are a number of complexities to overcome to make it truly zero-knowledge, and also to meet usability requirements. One could write an essay on this, but instead I suggest you start by reading the linked references. If you have any questions, let me know.
In a nutshell, the "more zero-knowledge" you want the system to be, the harder it is to realise without sacrificing usability (one example is overcoming the points made in Javascript Cryptography Considered Harmful). However, there are various tradeoffs you can make in order to make it sufficiently difficult to cheat without affecting usability too much.
I need to keep the encryption key somewhere
No you don't. The user only has to remember it. For convenience you could save it in the browser's local storage.
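Roughly, the client-side flow looks like the sketch below. Real zero-knowledge web apps do this in the browser in JavaScript; Python and the `cryptography` package are used here only to illustrate the idea, and the iteration count and field names are assumptions:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 256-bit key from the user's password; this runs on the client."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return kdf.derive(password.encode())

# Client side: encrypt before anything is sent to the server.
password = "correct horse battery staple"
salt, nonce = os.urandom(16), os.urandom(12)
key = derive_key(password, salt)
ciphertext = AESGCM(key).encrypt(nonce, b"my confidential notes", None)

# The server stores only (salt, nonce, ciphertext); it never sees the key or password.
record = {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

# Later, on any machine, the user re-derives the key from the password and decrypts.
plaintext = AESGCM(derive_key(password, record["salt"])).decrypt(
    record["nonce"], record["ciphertext"], None)
assert plaintext == b"my confidential notes"
```

Because the key is re-derivable from the password on any machine, nothing secret has to be stored server-side, which addresses the "separate machine" concern in the question.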
So I would like to modify a PHP/MySQL application in order to store credit card numbers (but not the CVV) and bank account info securely. PCI DSS requires 1024-bit RSA/DSA. A small number of users will be given the private key in order to decrypt the batch file of account info for submission to payment processors monthly. I'm unclear whether it is possible to have a system that would allow users who have signed in with normal 8-digit passwords to modify their own account info securely. It seems that this is not possible, and that the encryption should be one-way (i.e. each user -> admins; never allowing a user to decrypt their own info again), with account info never exposed back to users even over SSL connections. Or is there a proper and easy way to do this that I'm unaware of that is PCI DSS compliant?
PCI DSS does not require 1024-bit RSA to encrypt. Older versions of the specification mentioned AES and 3DES by name, but I believe newer versions just specify strong encryption. Most people are using AES-256.
Encrypting data at-rest with an asymmetric algorithm doesn't really work. Symmetric algorithms work best. This allows the application to access the card data when it needs to. This doesn't mean you have to show the data to the user ever again, it just means the data is there when you need to get to it. If you're storing credit card authorization information, you'll usually need the card number for settlement. (It really depends on the features your processor has. Some of the small-business level processors store the card for you, but this is infeasible for large scale processors like Paymentech and FDMS.)
The problem is that you will have to rotate your encryption keys periodically. This is usually what screws everyone up. If you roll your own encryption, you need to make sure that you can keep any number of keys accessible for as long as there is data encrypted with those keys, while at any point in time only one of them is used to encrypt new data. Unless you have a deep understanding of crypto and key management in PCI terms, you might want to go with a commercial offering. Yes, these are expensive, but you have to determine the best course with a build-or-buy decision-making process.
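To illustrate the key-rotation point, here is a minimal sketch of versioned data-encryption keys, assuming AES-GCM and an in-memory keyring purely for demonstration (a real PCI deployment would keep the keys in an HSM or key-management service, never in application code):

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A keyring of versioned data-encryption keys; key 2 is the current one.
keyring = {1: os.urandom(32), 2: os.urandom(32)}
CURRENT_KEY_ID = 2

def encrypt_card(pan: str) -> dict:
    """Encrypt a card number with the *current* key, recording which key was used."""
    nonce = os.urandom(12)
    ct = AESGCM(keyring[CURRENT_KEY_ID]).encrypt(nonce, pan.encode(), None)
    return {"key_id": CURRENT_KEY_ID, "nonce": nonce, "ciphertext": ct}

def decrypt_card(record: dict) -> str:
    """Decrypt with whichever key the record was written under, old or new."""
    key = keyring[record["key_id"]]
    return AESGCM(key).decrypt(record["nonce"], record["ciphertext"], None).decode()

# Rotation means adding key 3, encrypting all new data with it, and retiring keys
# 1 and 2 only once every record written under them has been re-encrypted or purged.
stored = encrypt_card("4111111111111111")
assert decrypt_card(stored) == "4111111111111111"
```

The important property is that every record remembers which key encrypted it, so old keys can be retired only once nothing references them.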
Ingrian (now SafeNet) has a decent offering for a network HSM. It will manage the keys for you and do the cryptographic operations. It may also be possible to use their DB level encryption integration so that you don't have to change your application at all. (Though DB level encryption is dubiously secure in my opinion.)
This is a very deep subject; I've done a lot with PCI and suggest you hire someone to guide you through doing it properly. You'll spend a lot of money on false starts and redone work, so get an auditor involved early to at least assess what you need and tell you how to implement the security properly.
You may have an easier time if you differentiate between data storage, access, and transmission.
Storage requires strong reversible encryption; the data is not useful unless you can retrieve it.
Access requires a user or process to authenticate itself before it is permitted to decrypt the data. Here's an example of a mechanism that would accomplish this:
Encrypt the stored data with a secret key that is never directly exposed to any user. Of course, you'll need to store that key somewhere, and you must be able to retrieve it.
When each user chooses a password, use that password to encrypt a personal copy of the secret key for that user. (Note: even though you're encrypting each copy of the key, security issues may arise from maintaining multiple copies of the same information.)
Do not store the user's password. Instead, hash it according to standard best practices (with salt, etc.) and store the hash.
When a user provides a password to log in, hash it and compare it to your stored value. If they match, use the (plaintext) password to decrypt that user's copy of the key, which is then used to decrypt the actual data. (A minimal sketch of this flow follows the list below.)
Transmit the data through a secure connection, such as SSL. It's reasonable (perhaps required) to allow users to access (and modify) their own data, as long as you continue to follow best practices.
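Here is a compact sketch of the access mechanism described above, assuming Python with the `cryptography` package for AES-GCM; the iteration counts, field names, and example password are illustrative, not prescriptive:

```python
# pip install cryptography
import os, hashlib, hmac
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

DATA_KEY = os.urandom(32)   # the secret key that actually encrypts the stored data

def enroll(password: str) -> dict:
    """At password-set time: wrap a personal copy of DATA_KEY under a key derived
    from the password, and keep only a salted hash of the password for login checks."""
    kdf_salt, login_salt, nonce = os.urandom(16), os.urandom(16), os.urandom(12)
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), kdf_salt, 600_000)
    return {
        "kdf_salt": kdf_salt,
        "nonce": nonce,
        "wrapped_key": AESGCM(kek).encrypt(nonce, DATA_KEY, None),
        "login_salt": login_salt,
        "login_hash": hashlib.pbkdf2_hmac("sha256", password.encode(), login_salt, 600_000),
    }

def login_and_unwrap(password: str, record: dict) -> bytes:
    """At login: verify the password against the stored hash, then recover the data key."""
    check = hashlib.pbkdf2_hmac("sha256", password.encode(), record["login_salt"], 600_000)
    if not hmac.compare_digest(check, record["login_hash"]):
        raise ValueError("bad password")
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), record["kdf_salt"], 600_000)
    return AESGCM(kek).decrypt(record["nonce"], record["wrapped_key"], None)

user_record = enroll("hunter2!")                               # when the password is set
assert login_and_unwrap("hunter2!", user_record) == DATA_KEY   # at login
```

Note that the password itself is never stored; only its salted hash and the wrapped copy of the key are, which is exactly what allows the key-rotation advantage mentioned in the comments below.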
Comments:
An 8-digit password implies a key space of 10^8 ≈ 2^27, i.e. about 27 bits of entropy, which by today's standards is fairly terrible. If you can't encourage longer (or alphanumeric) passwords, you may want to consider additional layers.
One advantage of the multiple-layer strategy (the user provides a password that is used to encrypt the "actual" key) is that you can change the encryption key transparently to the user, thereby satisfying any key-rotation requirements.
The standard admonition whenever you're designing a security solution is to remember that DIY security, even when following standards, is risky at best. You're almost always better off using an off-the-shelf package by a reputable vendor, or at least having a trained, certified security professional audit both your strategy and your implementation.
Good luck!
My coworker and I are having a fist-fight civilized discussion over password security. Please help us resolve our differences.
One of us takes the viewpoint that:
Storing passwords encrypted using a public key in addition to a one-way hashed version is OK and might be useful for integration with other authentication systems in the future in case of a merger or acquisition.
Only the CEO/CTO would have access to the private key, and it would only be used when necessary. Regular login validation would still occur via the hashed password.
I have/he has done this before in previous companies and there are many sites out there that do this and have survived security audits from Fortune 500 companies before.
This is a common and accepted practice, even for financial institutions, so there is no need to explicitly state this in the privacy policy.
Sites like Mint.com do this.
The other one of us takes the following viewpoint:
Storing passwords, even in encrypted form, is an unnecessary security risk and it's better to avoid exposure to this risk in the first place.
If the private key falls into the wrong hands, users that use the same password across multiple sites would risk having all of their logins compromised.
This is a breach of trust of our users, and if this practice is implemented, they should be explicitly informed of this.
This is not an industry-wide practice and no big name sites (Google, Yahoo, Amazon, etc.) implement this. Mint.com is a special case because they need to authenticate with other sites on your behalf. Additionally, they only store the passwords to your financial institutions, not your password to Mint.com itself.
This is a red flag in audits.
Thoughts? Comments? Have you worked at an organization that implemented this practice?
The first practice, storing a recoverable version of passwords, is plain wrong. It doesn't matter that big sites do this. It is wrong. They are wrong.
I automatically distrust any site that stores my password unhashed. Who knows what would happen if the employees of that big company decide to have fun? There was a case where a guy from Yahoo stole and sold user emails. What if someone steals/sells the whole database with my emails and passwords?
There is no need whatsoever for you to know my original password to perform authentication. Even if you decide later to split the system, add a new one or integrate with a third party, you still will be fine with just a hash of the password.
Why should CEOs be more reliable or trustworthy than other people? There are examples of high-ranking government people who have lost confidential data.
There's no reason a regular site has to store a password, not a single one.
What happens if, in the future, those private keys can be broken? What if the key used is a weak key, as happened just recently in Debian?
The bottom line is: why take such great risks for little to no benefit? Most companies are never going to need an encrypted password.
Hash Passwords
Storing passwords in a reversible form is unnecessary and risky.
In my opinion, a security breach seems much more likely than the need to merge password tables. Furthermore, the cost of a security breach seems far higher than the cost of implementing a migration strategy. I believe it would be much safer to hash passwords irreversibly.
Migration Strategy
In the case of a company merger, the original algorithm used to hash passwords can be recorded in the combined password table, and different verification routines can be called for different users, selected by this identifier. If desired, the stored hash (and its identifier) can be updated at that time too, since the user's clear-text password is available during the login operation. This allows a gradual migration to a single hash algorithm. Note that passwords should expire after some time anyway, so that puts an upper bound on how long the migration would take.
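As a sketch of that migrate-on-login idea (the legacy "md5"/"sha256" schemes, row layout, and iteration count below are hypothetical placeholders, not anyone's real system):

```python
import hashlib, hmac, os

# Hypothetical verifiers for the two legacy schemes being merged.
LEGACY_VERIFIERS = {
    "md5":    lambda pw, salt, h: hmac.compare_digest(hashlib.md5(salt + pw.encode()).digest(), h),
    "sha256": lambda pw, salt, h: hmac.compare_digest(hashlib.sha256(salt + pw.encode()).digest(), h),
}

def current_hash(pw: str, salt: bytes) -> bytes:
    """The single scheme every account is being migrated to."""
    return hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 600_000)

def login(row: dict, password: str) -> bool:
    """Verify against whichever algorithm the row is tagged with, then upgrade the row."""
    if row["alg"] == "pbkdf2":
        return hmac.compare_digest(current_hash(password, row["salt"]), row["hash"])
    ok = LEGACY_VERIFIERS[row["alg"]](password, row["salt"], row["hash"])
    if ok:
        # The clear-text password is in hand right now, so re-hash and re-tag the row.
        row["salt"] = os.urandom(16)
        row["hash"] = current_hash(password, row["salt"])
        row["alg"] = "pbkdf2"
    return ok

# Example row inherited from the hypothetical "sha256" system.
salt = os.urandom(16)
legacy_row = {"alg": "sha256", "salt": salt,
              "hash": hashlib.sha256(salt + b"s3cret").digest()}
assert login(legacy_row, "s3cret") and legacy_row["alg"] == "pbkdf2"
```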
Threats
There are a couple of avenues to attack encrypted passwords:
The decryption key custodian could be corrupt. They could decrypt the passwords and steal them. A custodian might do this on his own, or he could be bribed or blackmailed by someone else. An executive without special training is especially susceptible to social engineering too.
An attack can also be made on the public key used for encryption. By substituting the real public key with one of their own, any of the application administrators would be able to collect passwords. And if only the CEO has the real decryption key, this is unlikely to be discovered for a long time.
Mitigation
Supposing this battle is lost, and the passwords are encrypted, rather than hashed, I'd fight on for a couple of concessions:
At the very least, the decryption key should require the cooperation of multiple people to recover. A key-sharing technique like Shamir's secret sharing algorithm would be useful (a toy sketch follows this list).
Measures to protect the integrity of the encryption key are required too. Storage on a tamper-proof hardware token, or using a password-based MAC may help.
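For the first concession, here is a toy sketch of the threshold idea behind Shamir's scheme; in practice you would use a vetted implementation, and the prime, share counts, and integer-only secret are simplifying assumptions:

```python
import secrets

PRIME = 2**127 - 1   # a Mersenne prime, large enough for a 16-byte secret

def split(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares so that any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    evaluate = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, count + 1)]

def recover(points):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

key = secrets.randbelow(PRIME)               # the decryption key being protected
shares = split(key, threshold=3, count=5)    # e.g. one share per key custodian
assert recover(shares[:3]) == key            # any three shares suffice
assert recover(shares[1:4]) == key
```

With the key split this way, no single custodian (not even the CEO) can decrypt the password table alone, which blunts both the corrupt-custodian and the social-engineering threats above.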
and might be useful for integration with other authentication systems in the future
If there is no immediate need to store the password in a reversible, encrypted format, don't.
I work in a financial institution, and here the deal is: no one should ever know the user's password, so the default policy, implemented everywhere, is one-way hashed passwords with a strong hashing algorithm.
I, for one, stand in favor of this option: you do not want the trouble of handling the situation where you have lost the key to your two-way encryption, or someone has stolen it and could read the stored passwords.
If somebody loses their password you just change it and give it to them.
If a company needs to merge, they HAVE to keep hashed passwords the way they are: security is above everything else.
Think about it this way: would you store your home keys in a locked box whose key you carry around, or would you rather keep the home keys with you at all times?
In the first case, anybody with the proper key, or the power to break the box, could get at your home keys; in the second case, a potential housebreaker would have to threaten you or take the keys from you somehow... It's the same with passwords: if they are only stored hashed in a locked-down DB, it is as if nobody has a copy of them, and therefore no one can access your data.
I have had to move user accounts between sites (as might happen in a merger or acquisition) when the passwords were one-way hashed and it was not a problem. So I do not understand this argument.
Even if the two applications used different hashing algorithms, there will be a simple way to handle the situation.
The argument in favor of storing them seems to be that it might simplify integration in the case of a merger or acquisition. Every other statement in that side of the argument is no more than a justification: either "this is why it's not so bad" or "other people are doing it".
How much is it worth to be able to do automatic conversions that a client may not want done in event of merger or acquisition? How often do you anticipate mergers and/or acquisitions? Why would it be all that difficult to use the hashed passwords as they are, or to ask your customers to explicitly go along with the changes?
It looks like a very thin reason to me.
On the other side, when you store passwords in recoverable form there's always a danger that they'll get out. If you don't, there isn't; you can't reveal what you don't know. This is a serious risk. The CEO/CTO might be careless or dishonest. There might be a flaw in the encryption. There would certainly be a backup of the private key somewhere, and that could get out.
In short, in order to even consider storing passwords in recoverable form, I'd want a good reason. I don't think potential convenience in implementing a conversion that might or might not be required by a possible business maneuver qualifies.
Or, to put it in a form that software people might understand, YAGNI.
I would agree that the safest way remains the one-way hash (but with a salt of course!). I'd only resort to encryption when I'd need to for integrating with other systems.
Even when you have built a system that is going to need integration with other systems, it's best to ask your users for their password at the moment you integrate. That way the user feels 'in control' of his own data. Doing it the other way around, storing encrypted passwords while their use is not yet clear to the end user, will raise a lot of questions when you do start integrating at some point in time.
So I will definitely go with one-way hash, unless there is a clear reason (clear development-wise and clear to the end-user!) that the unencrypted password is immediately needed.
edit:
Even when integration with other systems is needed, storing recoverable passwords still isn't the best way. But that of course, depends on the system to integrate with.
Okay, first of all, giving the CEO/CTO access to plaintext passwords is just plain stupid. If you are doing things right, there is no need for it. If a hacker breaks into your site, what's stopping him from attacking the CEO next?
Both methods are wrong.
Comparing the hash of a received password against a stored hash means the user sends his plaintext password on every login, and a backdoor in your web app will capture it. If the hacker does not have sufficient privileges to plant a backdoor, he will just break the hashes with his 10K-GPU botnet. If the hashes cannot be broken that way, it means they have collisions, which means you have a weak hash, speeding up a blind brute-force attack by orders of magnitude. I am not exaggerating; this happens every day, on sites with millions of users.
Letting users type plaintext passwords to log in to your site means letting them use the same password on every site. This is what 99% of all public sites do today; it is a pathetic, malicious, anti-evolutionary practice.
The ideal solution is to use a combination of both SSL client certificates and server certificates. If you do this correctly, it will render the common MITM/phishing attack impossible; such an attack could not be used against the credentials OR the session. Furthermore, users are able to store their client certificates on cryptographic hardware such as smart cards, allowing them to log in on any computer without the risk of losing their credentials (although they'd still be vulnerable to session hijacking).
You may think I'm being unreasonable, but SSL client certificates were invented for a reason...
Every time I have anything to do with passwords, they are one-way hashed with a varying salt, i.e. hash(userId + clearPassword). I am happiest when no one at our company can access passwords in the clear.
If you're a fringe case, like mint.com, yes, do it. Mint stores your passwords to several other sites (your bank, credit card, 401k, etc), and when you login to Mint, it goes to all of those other sites, logs in via script as you, and pulls back your updated financial data into one easy-to-see centralized site. Is it tinfoil-hat secure? Probably not. Do I love it? Yes.
If you're not a fringe case, lord no, you shouldn't ever be doing this. I work for a large financial institution, and this is certainly not at all an accepted practice. This would probably get me fired.