Asymmetric Encryption: giving access without knowing the public key's owner

I am working for a big consulting firm and we have a platform that exchanges data with a couple of big companies. We are trying to enhance the platform so that enterprises will be able to deposit data on it. The workflow we are building is that companies upload data to the platform without us knowing who uploaded it (the reason: so we do not have to sign a complicated NDA every time).
I had a look at a couple of cryptographic systems. I thought of asymmetric encryption, but then you need to know the owner's public key in order to decrypt the message.
I also thought of zero-knowledge proofs, but then again you need to know the identity of the data's owner.
I even thought of a Merkle tree in order to whitelist the specific public keys that can access the platform, but then again you need to know each public key's owner.
So, any idea how I can give specific users access to a platform without knowing who accesses it afterwards? Thanks for your help, I am a bit stuck.

Related

Offline Encryption/Decryption without storing the private key? Is it possible?

I have a question on the limitations of cryptography. May seem like a stupid question. I apologise in advance.
This is for a client, and I am still trying to wrap my head around it myself.
The information will be encrypted and then encoded in an accessible format, e.g. a QR code or barcode. Decryption is done using the application our developers are creating. The problem is that the application will be offline the majority of the time it is in use, as the users will be in areas with intermittent or poor reception. So to be able to decrypt, the application has to have the private key present on the device itself, correct? Would this even be a good solution? Even the developers have concerns about all of the offline apps having the same private key present. Note that the application will be used by multiple groups.
Is there an alternative I can explore, which any of you can suggest, where we don't have to store the private key but still manage to secure the information for offline use? So far I've looked into DRM for restricting copying of the information, but I'm not sure how it would help. I'm also willing to look into other solutions for this.
The database holding the information will be updated when the devices have an internet connection. I'm only assuming here, since I'm not handling that part of the project.
Please and thank you in advance for your advice.
Maybe not the right way, but I found a suitable path.
I am using a combination of asymmetric and symmetric keys, where the symmetric key is used to decrypt the data on the offline device and the asymmetric keys are used to encrypt it. The asymmetric keys are only exchanged when the devices need to be synced. This puts the trust on the devices themselves, so I'm not worried about that.
This idea came from Sectigo - Why Automotive Key Fob Encryption Hacks Are Making Headlines?
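A minimal sketch of one way to read this hybrid scheme, assuming each offline device is provisioned with its own RSA key pair at sync time and that the encrypted payload is what ends up in the QR code; the names and the use of the Python `cryptography` library are illustrative, not the actual client design:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Device side, at sync/provisioning time: generate a key pair and hand the
# public key to the server. The private key never leaves the device.
device_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_public = device_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Server side: encrypt the record with a one-off AES-256-GCM key, then wrap
# that key for the device. Nonce, ciphertext and wrapped key go into the QR.
record = b"record 4711: restricted field data"
data_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, record, None)
wrapped_key = device_public.encrypt(data_key, oaep)

# Device side, fully offline: unwrap the AES key, then decrypt the record.
recovered_key = device_private.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == record
```

One caveat: a 2048-bit RSA-wrapped key plus an AES-GCM ciphertext makes the QR payload noticeably larger, so the scheme should be checked against the QR capacity you plan to use.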

Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data is only needed a couple of times a year, and then only by two employees.
I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place.
It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees can run the few times a year they need to access the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat client (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.
Do you have a better suggestion? Which method would you recommend? More importantly why?
Encryption in SQL is really only good for securing the data as it rests on the server, although that doesn't mean it is unimportant. Since you mention that a prime concern is injection attacks and the like, my concern would be whether or not the application uses a single account (SQL or otherwise) to connect to the database, which would be common for a public internet site. If you use integrated authentication, or connect to SQL using the same credentials supplied to the application, then SQL's encryption might work fine.
However, if you're using a single login, SQL's encryption is going to manage encrypting and decrypting the data for you, based on your login. So, if your application is compromised, SQL may not be able to protect that data for you, as it implicitly decrypts it and doesn't know anything is wrong.
You may want to, as you suggested, encrypt/decrypt the data in the application and store it as bytes in the database. That way you control who can decrypt the data and when (for example, you could assign the key for decrypting this data to those few employees you mentioned that are in a specific role). You could look into Microsoft's Security Application Block, or Bouncy Castle, etc. for good encryption utilities. Just be careful about how you manage the key.
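A minimal sketch of that application-side approach, assuming the key lives outside the database (an environment variable stands in here for whatever key store only the authorized role can read); the table, column, and `PII_KEY` names are made up for illustration:

```python
import os
import sqlite3
from cryptography.fernet import Fernet

# PII_KEY is a Fernet key generated once (Fernet.generate_key()) and handed
# only to code running under the authorized role; it never touches the DB.
fernet = Fernet(os.environ["PII_KEY"])

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS person (id INTEGER PRIMARY KEY, ssn BLOB)")

# Write path: encrypt in the application, store only ciphertext bytes.
conn.execute("INSERT INTO person (ssn) VALUES (?)", (fernet.encrypt(b"123-45-6789"),))
conn.commit()

# Read path: a SQL injection that dumps this table yields only ciphertext;
# decryption happens in application code that actually holds PII_KEY.
row = conn.execute("SELECT ssn FROM person ORDER BY id DESC LIMIT 1").fetchone()
print(fernet.decrypt(row[0]))   # b'123-45-6789'
```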
Update:
You could potentially use two connection strings: a normal one with no rights to the encrypted data, and one that has the key and the rights to the data, then have your application use the appropriate connection when the user has the rights. Of course, that's pretty kludgy.
Some practices that we follow:
Never use dynamic SQL. It's completely unnecessary.
Regardless of #1, always parameterize your queries (see the sketch after this list). This alone will get rid of SQL injection, but there are lots of other entry points.
Use the least-privileged account you can for accessing the database server. This typically means the account should NOT have the ability to run ad hoc queries (see #1). It also means that it shouldn't have access to run any DDL statements (CREATE, DROP, ...).
Don't trust the web application, much less any input received from a browser. Sanitize everything. Web App servers are cracked on a regular basis.
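A minimal sketch of point #2, using Python's built-in sqlite3 driver purely for illustration; the same placeholder idea applies to SqlCommand parameters in ADO.NET or any other driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

user_input = "alice@example.com' OR '1'='1"   # classic injection attempt

# BAD: string concatenation turns the input into SQL.
# conn.execute("SELECT id FROM users WHERE email = '" + user_input + "'")

# GOOD: the placeholder keeps the input as data, so the attempt matches nothing.
rows = conn.execute("SELECT id FROM users WHERE email = ?", (user_input,)).fetchall()
print(rows)   # []
```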
We also deal with a lot of PII and are extremely strict (to the point of paranoia) on how the data is accessed and by whom. Everything that comes through the server is logged. To make sure this happens we only allow access to the database through stored procedures. The procs always test to see if the user account is even authorized to execute the query. Further they log when, who, and what. We do not have any mass delete queries at all.
Our IDs are completely non-guessable. This is for every table in the system.
We do not use ORM tools. They typically require way too much access to the database server to work right and we just aren't comfortable with that.
We do background checks on the DBAs and our other production support people every 6 months. Access to production is tightly controlled and actively monitored. We don't allow contractors access to production for any reason and everything is code reviewed prior to being allowed into the code base.
For the encrypted data, allow specific users access to the decryption keys. Change those keys often, as in once a month if possible.
ALL data transfer between machines is encrypted. Kerberos between servers and desktops; SSL between IIS and browsers.
Recognize and architect for the fact that a LOT of data theft is from internal employees. Either by actively hacking the system, actively granting unauthorized users access, or passively by installing crap (like IE 6) on their machines. Guess how Google got hacked.
The main question in your situation is identifying all of the parts that need access to the PII.
Things like: how does the information get into your system? The key question here is where the initial encryption key gets stored.
Your issue is key management. No matter how many ways you turn the problem around, you'll end up with one simple elementary fact: the service process needs access to the keys to encrypt the data (it is important that this is a background service, because that implies it cannot obtain the root key of the encryption hierarchy from a human-entered password whenever it is needed). Therefore compromise of the process leads to compromise of the key(s). There are ways to obfuscate this issue, but no ways to truly hide it. To put this into perspective though, only a compromise of the SQL Server process itself could expose this problem, which is a significantly higher bar than a SQL injection vulnerability.
You are trying to circumvent this problem by relying on public/private key asymmetry: use the public key to encrypt the data so that it can only be decrypted by the owner of the private key. That way the service does not need access to the private key, so if it is compromised it cannot be used to decrypt the data. Unfortunately this works only in theory. In the real world, RSA encryption is so slow that it cannot be used for bulk data. This is why common RSA-based encryption schemes use a symmetric key to encrypt the data and encrypt the symmetric key with the RSA key.
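A small sketch of why that split exists, using the Python `cryptography` library for illustration: with a 2048-bit key and OAEP-SHA256, RSA can encrypt at most 256 - 2*32 - 2 = 190 bytes per operation and is orders of magnitude slower than AES, so the bulk data gets a symmetric key and only that key is RSA-encrypted.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

bulk = os.urandom(5_000_000)                    # 5 MB of "restricted" data
data_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, bulk, None)       # fast, any size
wrapped = rsa_key.public_key().encrypt(data_key, oaep)         # 32 bytes fits easily

# rsa_key.public_key().encrypt(bulk, oaep) would raise ValueError: the payload
# is far beyond the ~190-byte OAEP limit for a 2048-bit key.

restored = AESGCM(rsa_key.decrypt(wrapped, oaep)).decrypt(nonce, ciphertext, None)
assert restored == bulk
```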
My recommendation would be to stick with tried and tested approaches. Use a symmetric key to encrypt the data. Use an RSA key to encrypt the symmetric key(s). Have SQL Server own and control the RSA private key. Use the permission hierarchy to protect the RSA private key (really, there isn't anything better you could do). Use module signing to grant access to the encryption procedures. This way the ASP service itself does not even have the privileges to encrypt the data, it can only do so by the means of the signed encryption procedure. It would take significant 'creative' administration/coding mistakes from your colleagues to compromise such a scheme, significantly more than a mere 'operator error'. A system administrator would have an easier path, but any solution that is designed to circumvent a sysadmin is doomed.

Storing credit card info

I would like to modify a PHP/MySQL application in order to store credit card and bank account info (but not the CVV) securely. PCI DSS requires 1024-bit RSA/DSA. A small number of users will be given the private key in order to decrypt the monthly batch file of account info for submission to the payment processors. I'm unclear whether it is possible to have a system that would allow users who have signed in with normal 8-digit passwords to modify their own account info securely. It seems that this is not possible, and that the encryption should be one-way (i.e. each user -> admins; never allowing a user to decrypt their own info again), with account info never exposed back to users even over SSL connections. Or is there a proper and easy way to do this that I'm unaware of and that is PCI DSS compliant?
PCI DSS does not require 1024-bit RSA for encryption. Older versions of the specification mentioned AES and 3DES by name, but I believe newer versions just specify strong encryption. Most people are using AES-256.
Encrypting data at-rest with an asymmetric algorithm doesn't really work. Symmetric algorithms work best. This allows the application to access the card data when it needs to. This doesn't mean you have to show the data to the user ever again, it just means the data is there when you need to get to it. If you're storing credit card authorization information, you'll usually need the card number for settlement. (It really depends on the features your processor has. Some of the small-business level processors store the card for you, but this is infeasible for large scale processors like Paymentech and FDMS.)
The problem is that you will have to rotate your encryption keys periodically. This is usually what screws everyone up. If you roll your own encryption, you need to make sure that you can keep n keys accessible for as long as there is data encrypted with those keys, while at any point in time only one of those keys is used for encryption. Unless you have a deep understanding of crypto and key management in terms of PCI, you might want to go with a commercial offering. Yes, these are expensive, but you have to determine the best course with a build-or-buy decision-making process.
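A minimal sketch of that bookkeeping, with illustrative names: every ciphertext records which key version encrypted it, old keys stay readable for as long as data references them, and only the newest key is used for new writes.

```python
from cryptography.fernet import Fernet

# In practice the key ring lives in an HSM or key-management service,
# never alongside the data; a dict stands in for it here.
key_ring = {1: Fernet.generate_key(), 2: Fernet.generate_key()}
current_key_id = 2                         # only this key encrypts new data

def encrypt_card(pan: str) -> tuple[int, bytes]:
    token = Fernet(key_ring[current_key_id]).encrypt(pan.encode())
    return current_key_id, token           # persist both values with the row

def decrypt_card(key_id: int, token: bytes) -> str:
    return Fernet(key_ring[key_id]).decrypt(token).decode()

key_id, token = encrypt_card("4111111111111111")
assert decrypt_card(key_id, token) == "4111111111111111"

# Rotation: add key 3, point current_key_id at it, then re-encrypt (or age
# out) rows still tagged 1 or 2 before finally retiring those keys.
```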
Ingrian (now SafeNet) has a decent offering for a network HSM. It will manage the keys for you and do the cryptographic operations. It may also be possible to use their DB level encryption integration so that you don't have to change your application at all. (Though DB level encryption is dubiously secure in my opinion.)
This is a very deep subject; I've done a lot with PCI and suggest you hire someone to guide you through doing it properly. You'll spend a lot of money on false starts and redoing work, so get an auditor involved early to at least assess what you need and tell you how to implement the security properly.
You may have an easier time if you differentiate between data storage, access, and transmission.
Storage requires strong reversible encryption; the data is not useful unless you can retrieve it.
Access requires a user or process to authenticate itself before it is permitted to decrypt the data. Here's an example of a mechanism that would accomplish this:
Store the data with a secret key that is never directly exposed to any user. Of course, you'll need to store that key somewhere, and you must be able to retrieve it.
When each user chooses a password, use the password to encrypt a personal copy of that secret key for that user. (Note: even though you're encrypting each copy of the key, security issues may arise from maintaining multiple copies of the same information.)
Do not store the user's password. Instead, hash it according to standard best practices (with salt, etc.) and store the hash.
When a user provides a password to log in, hash it and compare it to your stored value. If they match, use the (plaintext) password to decrypt the key, which is then used to decrypt the actual data. (A minimal sketch of this flow appears below.)
Transmit the data through a secure connection, such as SSL. It's reasonable (perhaps required) to allow users to access (and modify) their own data, as long as you continue to follow best practices.
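Here is a rough sketch of that access mechanism, with illustrative names, no error handling, and PBKDF2/Fernet as assumed stand-ins: the data key is wrapped per user under a key derived from the user's password (with its own salt), and a separate salted hash of the password is kept for login.

```python
import os, base64, hashlib, hmac
from cryptography.fernet import Fernet

data_key = Fernet.generate_key()            # the secret key that encrypts the data
record = Fernet(data_key).encrypt(b"acct 12345678, routing 021000021")

def kdf(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

# Enrolment: store only the two salts, the login hash, and the wrapped key.
password = "correct horse battery staple"
auth_salt, wrap_salt = os.urandom(16), os.urandom(16)
login_hash = kdf(password, auth_salt)
wrap_key = base64.urlsafe_b64encode(kdf(password, wrap_salt))
wrapped_data_key = Fernet(wrap_key).encrypt(data_key)

# Login: check the hash first; only on a match, unwrap the key and decrypt.
attempt = "correct horse battery staple"
if hmac.compare_digest(kdf(attempt, auth_salt), login_hash):
    unwrapped = Fernet(base64.urlsafe_b64encode(kdf(attempt, wrap_salt))).decrypt(wrapped_data_key)
    print(Fernet(unwrapped).decrypt(record))
```

Note the two distinct salts: if the login hash and the wrapping key were derived identically, anyone who stole the stored hash could also derive the wrapping key.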
Comments:
An 8-digit password implies a key space of 10^8 ≈ 2^27, i.e. about 27 bits, which by today's standards is fairly terrible. If you can't encourage longer (or alphanumeric) passwords, you may want to consider additional layers.
One advantage of the multiple-layer strategy (the user provides a password that is used to encrypt the "actual" key) is that you can change the encryption key transparently to the user, thereby satisfying any key-rotation requirements.
The standard admonition whenever you're designing a security solution is to remember that DIY security, even when following standards, is risky at best. You're almost always better off using an off-the-shelf package by a reputable vendor, or at least having a trained, certified security professional audit both your strategy and your implementation.
Good luck!

Storing encrypted passwords

My coworker and I are having a civilized discussion (okay, a fist-fight) over password security. Please help us resolve our differences.
One of us takes the viewpoint that:
Storing passwords encrypted using a public key in addition to a one-way hashed version is OK and might be useful for integration with other authentication systems in the future in case of a merger or acquisition.
Only the CEO/CTO would have access to the private key, and it would only be used when necessary. Regular login validation would still occur via the hashed password.
I have/he has done this before in previous companies and there are many sites out there that do this and have survived security audits from Fortune 500 companies before.
This is a common and accepted practice, even for financial institutions, so there is no need to explicitly state it in the privacy policy.
Sites like Mint.com do this.
The other one of us takes the following viewpoint:
Storing passwords, even in encrypted form, is an unnecessary security risk and it's better to avoid exposure to this risk in the first place.
If the private key falls into the wrong hands, users that use the same password across multiple sites would risk having all of their logins compromised.
This is a breach of trust of our users, and if this practice is implemented, they should be explicitly informed of this.
This is not an industry-wide practice and no big name sites (Google, Yahoo, Amazon, etc.) implement this. Mint.com is a special case because they need to authenticate with other sites on your behalf. Additionally, they only store the passwords to your financial institutions, not your password to Mint.com itself.
This is a red flag in audits.
Thoughts? Comments? Have you worked at an organization that implemented this practice?
The first practice, storing a recoverable version of passwords, is plain wrong. Regardless of the fact that big sites do this, it is wrong. They are wrong.
I automatically distrust any site that stores my password unhashed. Who knows what would happen if the employees of that big company decide to have fun? There was a case where some guy from Yahoo stole and sold user emails. What if someone steals/sells the whole database with my emails and passwords?
There is no need whatsoever for you to know my original password to perform authentication. Even if you decide later to split the system, add a new one or integrate with a third party, you still will be fine with just a hash of the password.
Why should CEOs be more reliable or trustworthy than other people? There are examples of high-ranking government people who have lost confidential data.
There's no reason a regular site has to store a password, not a single one.
What happens if in the future those private keys can be broken? What if the key used is a weak key, as happened just recently in Debian?
The bottom line is: why would one take such great risks for little to no benefit? Most companies aren't ever going to need an encrypted password.
Hash Passwords
Storing passwords in a reversible form is unnecessary and risky.
In my opinion, a security breach seems much more likely than the need to merge password tables. Furthermore, the cost of a security breach seems far higher than the cost of implementing a migration strategy. I believe it would be much safer to hash passwords irreversibly.
Migration Strategy
In case of a company merger, the original algorithm used to hash each password can be noted in the combined password table, and a different verification routine called for each user, determined by this identifier. If desired, the stored hash (and its identifier) can be updated at that time too, since the user's clear-text password is available during the login operation. This would allow a gradual migration to a single hash algorithm. Note that passwords should expire after some time anyway, so this gives an upper bound on the time the migration would require.
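A minimal sketch of that per-algorithm verification and transparent upgrade, with made-up scheme names and PBKDF2 standing in for whatever the "current" algorithm is:

```python
import hashlib, hmac, os

def legacy_verify(password: str, stored: str) -> bool:
    # Company A's old scheme: unsalted MD5 (exactly the kind of thing to retire).
    return hmac.compare_digest(hashlib.md5(password.encode()).hexdigest(), stored)

def current_hash(password: str) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return f"pbkdf2${salt.hex()}${digest.hex()}"

def current_verify(password: str, stored: str) -> bool:
    _, salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), bytes.fromhex(salt_hex), 600_000)
    return hmac.compare_digest(digest.hex(), digest_hex)

VERIFIERS = {"md5-legacy": legacy_verify, "pbkdf2": current_verify}

def login(user_row: dict, password: str) -> bool:
    if not VERIFIERS[user_row["algo"]](password, user_row["hash"]):
        return False
    if user_row["algo"] != "pbkdf2":          # upgrade while the cleartext is in hand
        user_row["algo"], user_row["hash"] = "pbkdf2", current_hash(password)
    return True

user = {"algo": "md5-legacy", "hash": hashlib.md5(b"hunter2").hexdigest()}
assert login(user, "hunter2") and user["algo"] == "pbkdf2"
```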
Threats
There are a couple of avenues to attack encrypted passwords:
The decryption key custodian could be corrupt. They could decrypt the passwords and steal them. A custodian might do this on his own, or he could be bribed or blackmailed by someone else. An executive without special training is especially susceptible to social engineering too.
An attack can also be made on the public key used for encryption. By substituting the real public key with one of their own, any of the application administrators would be able to collect passwords. And if only the CEO has the real decryption key, this is unlikely to be discovered for a long time.
Mitigation
Supposing this battle is lost, and the passwords are encrypted, rather than hashed, I'd fight on for a couple of concessions:
At the very least, the decryption key should require the cooperation of multiple people to recover. A key-sharing technique like Shamir's secret sharing algorithm would be useful (a toy sketch follows this list).
Measures to protect the integrity of the encryption key are required too. Storage on a tamper-proof hardware token, or using a password-based MAC may help.
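A toy sketch of k-of-n secret sharing over a prime field, just to make that concession concrete; a vetted library would be used for real, and the "key" here is only a stand-in value:

```python
import random

PRIME = 2**521 - 1               # Mersenne prime, comfortably above a 256-bit key

def make_shares(secret: int, k: int, n: int):
    rng = random.SystemRandom()
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def f(x):                    # evaluate the degree-(k-1) polynomial mod PRIME
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):             # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = int.from_bytes(b"stand-in for the real decryption key", "big")
shares = make_shares(key, k=3, n=5)        # any 3 of the 5 custodians suffice
assert recover(shares[:3]) == key and recover(shares[2:]) == key
```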
"and might be useful for integration with other authentication systems in the future"
If there is no immediate need to store the password in a reversible encrypted format, don't.
I'm working at a financial institution, and here the deal is: no one should ever know the user's password, so the default policy implemented everywhere is one-way hashed passwords with a strong hashing algorithm.
I for one stand in favor of this option: you do not want the trouble of handling the situation where you have lost your two-way encryption key, or someone has stolen it and can read the stored passwords.
If somebody loses their password you just change it and give it to them.
If a company needs to merge, they HAVE to keep hashed passwords the way they are: security is above everything else.
Think about it this way: would you store your home keys in a box that has a lock whose key you keep, or would you rather keep them with you at all times?
In the first case, anybody could get to your home keys, given the proper key or the power to break the box; in the second case, to get your keys a potential home-breaker would have to threaten you or take them from you somehow. It's the same with passwords: if they are hashed in a locked-down DB, it is as if nobody has a copy of them, so no one can access your data.
I have had to move user accounts between sites (as might happen in a merger or acquisition) when the passwords were one-way hashed and it was not a problem. So I do not understand this argument.
Even if the two applications had used different hashing algorithms, there would have been a simple way to handle the situation.
The argument in favor of storing them seems to be that it might simplify integration in the case of a merger or acquisition. Every other statement in that side of the argument is no more than a justification: either "this is why it's not so bad" or "other people are doing it".
How much is it worth to be able to do automatic conversions that a client may not want done in event of merger or acquisition? How often do you anticipate mergers and/or acquisitions? Why would it be all that difficult to use the hashed passwords as they are, or to ask your customers to explicitly go along with the changes?
It looks like a very thin reason to me.
On the other side, when you store passwords in recoverable form there's always a danger that they'll get out. If you don't, there isn't; you can't reveal what you don't know. This is a serious risk. The CEO/CTO might be careless or dishonest. There might be a flaw in the encryption. There would certainly be a backup of the private key somewhere, and that could get out.
In short, in order to even consider storing passwords in recoverable form, I'd want a good reason. I don't think potential convenience in implementing a conversion that might or might not be required by a possible business maneuver qualifies.
Or, to put it in a form that software people might understand, YAGNI.
I would agree that the safest way remains the one-way hash (but with a salt of course!). I'd only resort to encryption when I'd need to for integrating with other systems.
Even when you have built a system that is going to need integration with other systems, it's best to ask your users for that password before integrating. That way the user feels 'in control' of his own data. The other way around, starting with encrypted passwords while the purpose is not clear to the end user, will raise a lot of questions when you start integrating at some point.
So I will definitely go with one-way hash, unless there is a clear reason (clear development-wise and clear to the end-user!) that the unencrypted password is immediately needed.
edit:
Even when integration with other systems is needed, storing recoverable passwords still isn't the best way. But that of course, depends on the system to integrate with.
Okay, first of all, giving the CEO/CTO access to plaintext passwords is just plain stupid. If you are doing things right, there is no need for this. If a hacker breaks into your site, what's stopping him from attacking the CEO next?
Both methods are wrong.
Comparing the hash of a received password against a stored hash means the user sends his plaintext password on every login, and a backdoor in your webapp will capture it. If the hacker does not have sufficient privileges to plant a backdoor, he will just break the hashes with his 10K-GPU botnet. If the hashes cannot be broken, it means they have collisions, which means you have a weak hash, speeding up a blind brute-force attack by orders of magnitude. I am not exaggerating: this happens every day, on sites with millions of users.
Letting users use plaintext passwords to log in to your site means letting them use the same password on every site. This is what 99% of all public sites do today; it is a pathetic, malicious, anti-evolutionary practice.
The ideal solution is to use a combination of both SSL client certificates and server certificates. If you do this correctly, it will render the common MITM/Phishing attack impossible; an attack of such could not be used against the credentials OR the session. Furthermore, users are able to store their client certificates on cryptographic hardware such as smart cards, allowing them to login on any computer without the risk of losing their credentials (although they'd still be vulnerable to session hijacking).
You may think I'm being unreasonable, but SSL client certificates were invented for a reason...
Every time I have anything to do with passwords, they are one-way hashed with a varying salt, i.e. hash(userId + clearPassword). I am happiest when no one at our company can access passwords in the clear.
If you're a fringe case, like mint.com, yes, do it. Mint stores your passwords to several other sites (your bank, credit card, 401k, etc), and when you login to Mint, it goes to all of those other sites, logs in via script as you, and pulls back your updated financial data into one easy-to-see centralized site. Is it tinfoil-hat secure? Probably not. Do I love it? Yes.
If you're not a fringe case, lord no, you shouldn't ever be doing this. I work for a large financial institution, and this is certainly not at all an accepted practice. This would probably get me fired.

Encrypt data from users in web applications

Some web applications, like Google Docs, store data generated by the users. Data that can only be read by its owner. Or maybe not?
As far as I know, this data is stored as-is in a remote database. So, if anybody with enough privileges on the remote system (a sysadmin, for instance) can snoop on my data, my privacy could be compromised.
What could be the best solution to store this data encrypted in a remote database and that only the data's owner could decrypt it? How to make this process transparent to the user? (You can't use the user's password as the key to encrypt his data, because you shouldn't know his password).
If encryption/decryption is performed on the server, there is no way you can make sure that the cleartext is not dumped somewhere in some log file or the like.
You need to do the encryption/decryption inside the browser using JavaScript/Java/ActiveX or whatever. As a user, you need to trust the client-side of the web service not to send back the info unencrypted to the server.
Carl
I think Carl nailed it on the head, but I wanted to say that with any website, if you are providing it any confidential/personal/privileged information, then you have to have a certain level of trust, and it is the responsibility of the service provider to establish this trust. This is one of those questions that has been asked many times across the internet since its inception, and it will only keep coming up until we all have our own SSL certs encoded on our fingerprints, and even then we will have to ask, 'How do I know that the finger is still attached to the user?'.
Well, I'd consider a process similar to Amazon's AWS. You authenticate with a private password that is not saved remotely; just a hash is used to validate the user. Then you generate a certificate with one of the mainstream, long-tested algorithms and provide it from a secure page. From there a public/private key algorithm can be used to encrypt things for the users.
But the main problem remains the same: if someone with enough privileges can access the data (say, they hacked your server), you're lost. Given enough time and power, everything can be broken. It's just a matter of time.
But I think algorithms and applications like GPG/PGP and similar are very well known and can be implemented in a way that secures web applications and keeps usability at a level the average user can handle.
Edit: I want to echo Carl and Unkwntech and add to their statement: if you don't trust the site itself, don't give it private data in the first place. That's even before someone hacks their servers... ;-)
Auron asked: How do you generate a key for the client to encrypt/decrypt the data? Where do you store this key?
Well, the key is usually derived from some password the user has chosen. You don't store it; you trust the user to remember it. What you can store is maybe a salt value associated with that user, to increase security against rainbow-table attacks, for instance.
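A minimal sketch of that derivation, assuming PBKDF2 and Fernet purely for illustration: only the random salt is stored; the key itself is recomputed from the password on the client each time and never persisted.

```python
import os, base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.fernet import Fernet

def key_from_password(password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

salt = os.urandom(16)                       # stored with the user's account
key = key_from_password("a long passphrase the user remembers", salt)
doc = Fernet(key).encrypt(b"my private document")

# Later, on any machine: re-derive the same key from password + stored salt.
same_key = key_from_password("a long passphrase the user remembers", salt)
assert Fernet(same_key).decrypt(doc) == b"my private document"
```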
Crypto is hard to get right ;-) I would recommend looking at the source code for AxCrypt and for Xecrets' off-line client.
Carl
No, you can't use passwords, but you could use password hashes. However, Google Docs are all about sharing, so such a method would require storing a copy of the document for each user.
