We are creating a Silverlight application and need to have a few parameters passed in with the URL from the calling site.
example: http://oursite.com/index.aspx?test=d53ae99b-06a0-4ba7-81ed-4556adc532b2
We want to give the calling website a 'test' string that links back to a GUID in our table, which tells the Silverlight application what its task is when the user arrives. We also use this GUID for authentication in our application, among other things.
The GUIDs look like this:
d53ae99b-06a0-4ba7-81ed-4556adc532b2
8354b838-99b3-4b4c-bb07-7cf68620072e
Encrypted, the values are much longer:
l5GyhPWSBUw8KdD+TpWJOsoOFDF0LzmGzd4uufLx+v/d3eByGZ6zPcRjvCRMG2tg
WVMN7B0FPa18/Q7+U4njb5AOKnx6Ga9xoAsvCET6MyjM5TV6dO86OexaCXDiXaES
My question is, with security in mind: should we give them the GUID encrypted, or as it is, unencrypted?
Does it matter?
What is everyone's experience with this type of parameter passing?
In matters of encryption, the key is to define your security context. What might someone be able to do if they had access to the original GUIDs? If they couldn't do anything hazardous, there's no point encrypting, and it's generally best not to encrypt. If there's any security risk posed by this information being publicly available, you'd better encrypt it.
Since you say:
We also use this GUID for authentication in our application, among other things
... I'm guessing you'll want to encrypt. But you may want to re-think your authentication strategy. It's often best to use time-tested, well-accepted methods for things like authentication and encryption, since you can be relatively certain that there aren't unknown exploits.
My case is that I want the data protected even from people who have access to the back end (the key store), so that they can't read it without assistance from the user (represented by the client app, in my case the browser).
One option is to have the decryption keys stored on the client and passed with each request, which sounds pretty messy to me, and I'm not sure I want my keys wandering around the net like this. What I imagine instead is that the client keeps some token (it might be a password the user knows) and decryption can't happen without it.
I thought about using the purpose string for this, but I have the feeling it is not a good idea, since its main purpose is isolation. On the other hand, it is part of the additional authenticated data used for subkey derivation (based on this article: https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/implementation/subkeyderivation?view=aspnetcore-2.1#additional-authenticated-data-and-subkey-derivation).
I came across some examples that implement their own symmetric encryption with lower-level classes (like this post: Encrypt and decrypt a string in C#?). Since I'm not an expert in this area, I would like to use built-in classes as much as possible.
What is the recommended way to achieve what I need with the classes from the Data Protection API? (I'm using .NET Core 1.1 on Ubuntu.)
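For illustration, here is a minimal sketch of folding a client-held token into the purpose chain (the key-ring path and purpose names are hypothetical). Note the caveat: this makes the purpose string behave like a secret, which is not what purposes are designed for, so treat it as raising the bar rather than a guarantee:

    using System.IO;
    using Microsoft.AspNetCore.DataProtection;

    public static class UserScopedProtection
    {
        // Decryption later requires re-creating a protector with the same
        // user token, since the token is part of the purpose chain (and thus
        // of the additional authenticated data used for subkey derivation).
        public static string Protect(string plaintext, string userToken)
        {
            var provider = DataProtectionProvider.Create(
                new DirectoryInfo("/var/myapp/keys"));   // hypothetical path
            var protector = provider.CreateProtector("MyApp.UserData", userToken);
            return protector.Protect(plaintext);
        }
    }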
I am using an AES encryption/decryption class that needs a key value and an initialization vector value to encrypt and decrypt data in an MVC3 application.
On saving the record, I encrypt the data and then store it in a database. When I retrieve the record, I decrypt it in the controller and pass the unencrypted value to the view.
The concern is not protecting data as it traverses the network, but protecting the database should it be compromised.
I have read many posts that say don't put the keys for encryption in your code.
OK, so where should they be kept? The file system? Another database?
Looking for some direction.
Common sense says that if an intruder gets access to your database, they will most likely also have access to your file system. It really comes down to you. For one, you can try to hide the key: in configuration files, in plain files somewhere in the file system, encrypted with another key that lives within the application... and so on and so forth.
Configuration files are a logical answer, but why take a chance? Mix it. Feel free to mix keys with multi-level encryption: one layer requiring something from the record itself, unique to every record; another requiring a configuration value; a third requiring an application-specific value; and perhaps a fourth from a library hidden well within your application's references. This way, even if one layer somehow gets compromised, you will have several others protecting it.
Yes, it adds overhead. Yes, it is relatively expensive. But is it worth it if you have sensitive data like user credit card details? You bet it is.
I'm using similar encryption and hashing techniques in one of my personal pet projects that is highly security-focused and carefully controlled. It depends how much data you need to display at any one time; for example, mine will only ever fetch 10 records at a time, most likely fewer.
... To clarify what I mean by mixing: encrypt once, then encrypt that data again with a different key and, ideally, a different algorithm.
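A minimal sketch of one such layer (where the keys come from is up to you; each layer prepends its IV so it can be undone later):

    using System.Security.Cryptography;

    static byte[] EncryptLayer(byte[] data, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;
            aes.GenerateIV();
            using (ICryptoTransform enc = aes.CreateEncryptor())
            {
                byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
                // prepend the IV so this layer can be decrypted later
                byte[] result = new byte[aes.IV.Length + cipher.Length];
                aes.IV.CopyTo(result, 0);
                cipher.CopyTo(result, aes.IV.Length);
                return result;
            }
        }
    }

    // two layers: an inner record-specific key, an outer configuration key
    // byte[] protectedData = EncryptLayer(EncryptLayer(plaintext, recordKey), configKey);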
I would use Registry Keys protected by ACL, so only the account under which your app pool is running can read them.
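The read side is then a one-liner (the registry path and value name below are hypothetical; the ACL itself is configured by an administrator, not in code):

    using Microsoft.Win32;

    // Succeeds only if the app pool identity was granted read access via the ACL
    string encryptionKey = (string)Registry.GetValue(
        @"HKEY_LOCAL_MACHINE\SOFTWARE\MyApp\Crypto",  // hypothetical path
        "EncryptionKey",
        null);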
An extremely secure ASP.NET application has to be written at my work, and instead of trawling through the Internet looking for best practices, I was wondering what considerations should be made and, generally, what should be done to ensure a public web application is safe.
Of course we've taken user/pass combinations into consideration, but it needs to go much deeper than that. I'm talking about every single level and layer of the application, e.g.:
Using URL rewrites
Masterpages
SiteMaps
Connection pooling
Session data
Encoding passwords
Using stored procedures instead of direct SQL statements
I'm making this a community wiki, as there wouldn't be one sole correct answer for such a vast topic of discussion. I will also point out that this is not my forte by any means, and previous security lockdowns have been achieved via non-public applications.
That's a bigger topic than I think you perhaps realise. The best advice is to get someone who already knows and can advise you. Failing that, I would start by reading the Microsoft document "Improving Web Application Security: Threats and Countermeasures", but be warned that it runs to 919 printed pages.
You should refine the idea of "stored procedures" into just using parameterized queries. That will take care of most of your problems there. You can also restrict fields on the UI and strip out or encode damaging characters like the pesky ';'...
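For instance, a minimal parameterized query (table and column names here are hypothetical); because the parameter travels separately from the SQL text, input like "'; DROP TABLE Users; --" stays inert data:

    using System.Data.SqlClient;

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT Id, Name FROM Users WHERE Name = @name", conn))
    {
        // bound as a value, never concatenated into the statement
        cmd.Parameters.AddWithValue("@name", userInput);
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read()) { /* ... */ }
        }
    }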
Use Forms Authentication instead of storing authentication data in session.
Obviously: hash passwords. If you want to be very cautious, use SHA1 hashing instead of MD5.
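A minimal sketch of salted hashing along those lines (note that current guidance favors deliberately slow schemes such as PBKDF2, e.g. Rfc2898DeriveBytes, over a single SHA1 pass):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static string HashPassword(string password, byte[] salt)
    {
        using (SHA1 sha1 = SHA1.Create())
        {
            byte[] input = Encoding.UTF8.GetBytes(password);
            byte[] salted = new byte[salt.Length + input.Length];
            salt.CopyTo(salted, 0);
            input.CopyTo(salted, salt.Length);
            // store the salt alongside the Base64 hash, never the password
            return Convert.ToBase64String(sha1.ComputeHash(salted));
        }
    }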
Say you have a bunch of files.
Say you can store metadata for these files.
Say one of these meta attributes was called "encryption".
Say everyone was allowed to look at these files, but since they are encrypted, only people who know how to decrypt them can actually read the contents.
Say, for every given value of "encryption", a group of people share the knowledge on how to decrypt files marked with that value.
Say you want to be able to do this programmatically, in an OS-agnostic way (if possible).
What are the values you would use for "encryption"?
How would you store the keys?
How would you organize access to the keys?
I am currently leaning towards the following implementation:
the value of the field "encryption" contains the name of a key, possibly also denoting the algorithm used
each user has access to a bunch of keys. This could be defined by roles the user has in an LDAP/Active Directory-like structure, or they could just be files in a secure directory in the user's profile/home directory.
on viewing a file, the viewer (I'm trying to build a document management system) checks the user's keys and decrypts the file if a matching key is found.
What encryption would you use? Symmetric (AES)? Or Asymmetric (what are the good ones)?
Using asymmetric keys would have the additional benefit of distinguishing between reading a file and writing a file: access to the private key would be necessary for writing the file, while access to the public key (only semi-public, as only certain roles have access to it) would allow reading the file. Am I totally mistaken here?
What are common systems to solve these problems used in small to medium sized businesses?
EDIT: It seems there are no universal solutions. So, I will state the problem I am trying to solve a little more clearly:
Imagine a Document Management System that operates in a distributed fashion: each document is copied to various nodes in a (company-controlled, private) P2P network. An algorithm for ensuring redundancy of documents is used to guarantee backups of all documents (including revisions). This system works as a service/daemon in the background and shovels documents to and fro.
This means that users will end up with documents on their local workstations that are probably not meant for them to see (a company-controlled PC, a laptop, or something similar; the setting is such that an SME IT guy sets this all up and controls who is part of the P2P network).
This rules out directory-access-based schemes, as the user will probably be able to get at the data. Am I mistaken here? Could a local folder be encrypted such that it can only be accessed by a domain user? How secure is that?
I am aware of users sharing decrypted versions of files - and that that is hard to suppress technically. This is not a problem I am trying to solve.
The encryption isn't the hard part, here. Understanding the business needs, and especially, what threats you're trying to protect against, is the hard part. Key management isn't a trivial thing.
I highly recommend the book "Applied Cryptography" to help you understand the protocol-level issues better.
This is a hard problem. If this is something really serious, you should not use the advice of amateur cryptographers on the internet.
That said, here's my musings:
I'd encrypt each file with a random symmetric key using AES. This encryption would be on a job that runs overnight, so the key changes overnight.
I'd encrypt the key of each file with the public key of everyone who has access to the file.
If someone loses access to files, they'd be unable to read the new copies the next day (they could still have copies locally of old versions).
I'd use gpg (it runs happily on nearly all OSes).
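The same hybrid idea sketched in C# rather than gpg (assuming each reader has an RSA keypair; in practice gpg handles all of this for you):

    using System.Security.Cryptography;

    static void ProtectFile(byte[] fileData, RSA readerPublicKey,
        out byte[] encryptedFile, out byte[] encryptedKey, out byte[] iv)
    {
        using (Aes aes = Aes.Create())   // fresh random key per file
        {
            iv = aes.IV;
            using (ICryptoTransform enc = aes.CreateEncryptor())
                encryptedFile = enc.TransformFinalBlock(fileData, 0, fileData.Length);
            // wrap the AES key for one reader; repeat for each authorized user
            encryptedKey = readerPublicKey.Encrypt(aes.Key, RSAEncryptionPadding.OaepSHA256);
        }
    }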
You misunderstand asymmetric crypto. The public key is given to everyone; the private key you keep yourself. If Alice encrypts something with Bob's public key, only Bob can decrypt it. If Bob encrypts something with his private key, everyone can decrypt it, and everyone knows it came from Bob because only he has his private key.
EDIT: However, if you ignored everything I said and went a different route, giving every FILE its own pub/priv keypair, then you would rely on the public key being available ONLY to those you want to read the file, and the private key being available to those you want to have read/write access. But that's a bit trickier, and it relies heavily on people not being able to distribute keys. Overnight jobs to change keys could mitigate that problem, but then you have the problem of distributing new keys to users.
If I understand you correctly, you could use GNU Privacy Guard. It's cross-platform and open source. Basically, every user has a copy of GPG and a local "keychain" with their private keys and public keys. When you want to encrypt something, you use the person's public key, and the result can only be decrypted with the associated private key. A user can have more than one keypair, so you could give all administrators access to the "administrator role" private key, and each holder of that private key could decrypt documents encrypted with the "administrator role" public key.
The cool part is that you can encrypt a file with multiple public keys, and any one of the corresponding private keys could then be used to decrypt it.
The difficulty of this problem is why many businesses default to using OS-specific solutions, such as Active Directory.
For OS-agnostic, you have to re-create a lot of user-management stuff that the specific OS and/or Network vendors have already built.
But it can be done. For the encryption itself - go with AviewAnew's answer.
I have to agree with Mark here:
Understanding the business needs, and especially, what threats you're trying to protect against, is the hard part
For example; are you worried that unauthorized users may gain access to sensitive files? You can use file-level access control on virtually any operating system to restrict users or groups from accessing files/directories.
Are you worried that authorized users may copy the files locally and then lose their laptop? There are a number of OS-level encryption facilities that provide varying degrees of protection. I personally recommend TrueCrypt for thumb drives and other portable media, and Windows Vista now includes BitLocker, which provides a different level of protection.
Another variation of the lost-laptop theme is the lost-backup theme, and many backup vendors now include encryption schemes for your tape backups for just this reason.
Finally, if you're worried that authorized users may share the files with unauthorized users then you may be trying to solve the wrong problem. Authorized users who can decrypt these files can just as easily share a new unencrypted version of the same document.
What you need is public-key encryption using either OpenPGP or X.509 certificates. In both cases you can encrypt a single block of data for multiple "recipients" using their OpenPGP keys or X.509 certificates, respectively. In the X.509 world, the standards for encrypting data this way are PKCS#7 and CMS (defined in some RFC, I forgot the number). You would need to employ some key revocation checking in order to prevent access for people who were given access before but don't have it now.
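A minimal sketch using the built-in CMS/PKCS#7 support in .NET (the recipient certificates are assumed to be loaded elsewhere):

    using System.Security.Cryptography.Pkcs;
    using System.Security.Cryptography.X509Certificates;

    static byte[] EncryptForRecipients(byte[] data, X509Certificate2Collection recipients)
    {
        EnvelopedCms cms = new EnvelopedCms(new ContentInfo(data));
        // one encrypted blob; any single recipient's private key can open it
        cms.Encrypt(new CmsRecipientCollection(
            SubjectIdentifierType.IssuerAndSerialNumber, recipients));
        return cms.Encode();
    }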
This is related to another question I asked. In summary, I have a special case of a URL where, when a form is POSTed to it, I can't rely on cookies for authentication or to maintain the user's session, but I somehow need to know who they are, and I need to know they're logged in!
I think I came up with a solution to my problem, but it needs fleshing out. Here's what I'm thinking. I create a hidden form field called "username", and place within it the user's username, encrypted. Then, when the form POSTs, even though I don't receive any cookies from the browser, I know they're logged in because I can decrypt the hidden form field and get the username.
The major security flaw I can see is replay attacks. How do I prevent someone from getting ahold of that encrypted string, and POSTing as that user? I know I can use SSL to make it harder to steal that string, and maybe I can rotate the encryption key on a regular basis to limit the amount of time that the string is good for, but I'd really like to find a bulletproof solution. Anybody have any ideas? Does the ASP.Net ViewState prevent replay? If so, how do they do it?
Edit: I'm hoping for a solution that doesn't require anything stored in a database. Application state would be okay, except that it won't survive an IIS restart or work at all in a web farm or garden scenario. I'm accepting Chris's answer, for now, because I'm not convinced it's even possible to secure this without a database. But if someone comes up with an answer that does not involve the database, I'll accept it!
If you hash in a time-stamp along with the user name and password, you can close the window for replay attacks to within a couple of seconds. I don't know if this meets your needs, but it is at least a partial solution.
There are several good answers here and putting them all together is where the answer ultimately lies:
Block-cipher encrypt (with AES-256+) and hash (with SHA-2+) all state/nonce-related information that is sent to a client. Hackers will otherwise just manipulate the data, view it to learn the patterns, and circumvent everything else. Remember... it only takes one open window.
Generate a one-time, random and unique nonce per request that is sent back with the POST request. This does two things: it ensures that the POST response goes with THAT request, and it allows tracking of one-time use of a given set of GET/POST pairs (preventing replay).
Use timestamps to make the nonce pool manageable. Store the time-stamp in an encrypted cookie per #1 above. Throw out any requests older than the maximum response time or session for the application (e.g., an hour).
Store a "reasonably unique" digital fingerprint of the machine making the request with the encrypted time-stamp data. This will prevent another trick wherein the attacker steals the clients cookies to perform session-hijacking. This will ensure that the request is coming back not only once but from the machine (or close enough proximity to make it virtually impossible for the attacker to copy) the form was sent to.
There are ASP.NET and Java/J2EE security-filter-based applications that do all of the above with zero coding. Managing the nonce pool for large systems (like a stock-trading company, a bank, or a high-volume secure site) is not a trivial undertaking if performance is critical. I would recommend looking at those products rather than trying to program this for each web application.
If you really don't want to store any state, I think the best you can do is limit replay attacks by using timestamps and a short expiration time. For example, server sends:
{Ts, U, HMAC({Ts, U}, Ks)}
Where Ts is the timestamp, U is the username, and Ks is the server's secret key. The user sends this back to the server, and the server validates it by recomputing the HMAC on the supplied values. If it's valid, you know when it was issued, and can choose to ignore it if it's older than, say, 5 minutes.
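A minimal sketch of issuing such a token (HMAC-SHA256 and the "|" delimiter are arbitrary choices for illustration):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static string IssueToken(string username, byte[] serverKey)
    {
        string payload = DateTimeOffset.UtcNow.ToUnixTimeSeconds() + "|" + username;
        using (HMACSHA256 hmac = new HMACSHA256(serverKey))
        {
            byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
            return payload + "|" + Convert.ToBase64String(mac);
        }
    }

    // to validate: recompute the HMAC over "Ts|U", compare the MACs, and
    // reject tokens whose timestamp is older than the chosen window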
A good resource for this type of development is The Do's and Don'ts of Client Authentication on the Web
You could use some kind of random challenge string that's used along with the username to create the hash. If you store the challenge string on the server in a database you can then ensure that it's only used once, and only for one particular user.
In one of my apps, to stop replay attacks, I have inserted IP information into my session object. Every time I access the session object in code I make sure to pass the Request.UserHostAddress with it, and then I compare to make sure the IPs match up. If they don't, then obviously someone other than that person made the request, so I return null. It's not the best solution, but it is at least one more barrier to stop replay attacks.
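In code, the check amounts to something like this (the session key name is a placeholder):

    // stored once at login
    Session["ClientIp"] = Request.UserHostAddress;

    // checked on each access to the session object
    if ((string)Session["ClientIp"] != Request.UserHostAddress)
    {
        return null; // IPs don't match: treat as a replayed/hijacked request
    }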
Can you use memory or a database to maintain any information about the user or request at all?
If so, then on the request for the form, I would include a hidden form field whose contents are a randomly generated number. Save this token in the application context or some sort of store (a database, flat file, etc.) when the request is rendered. When the form is submitted, check the application context or database to see if that randomly generated number is still valid (however you define valid; maybe it can expire after X minutes). If so, remove this token from the list of "allowed tokens".
Thus any replayed requests would include this same token which is no longer considered valid on the server.
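A minimal in-memory sketch of this scheme (a database or distributed cache would be needed to survive restarts or a web farm):

    using System;
    using System.Collections.Concurrent;

    static readonly ConcurrentDictionary<string, DateTime> Tokens =
        new ConcurrentDictionary<string, DateTime>();

    static string IssueFormToken()
    {
        string token = Guid.NewGuid().ToString("N");
        Tokens[token] = DateTime.UtcNow.AddMinutes(10); // expiry window
        return token; // rendered into the hidden form field
    }

    static bool ConsumeFormToken(string token)
    {
        DateTime expires;
        // TryRemove makes each token single-use by construction
        return Tokens.TryRemove(token, out expires) && expires > DateTime.UtcNow;
    }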
I am new to some aspects of web programming but I was reading up on this the other day. I believe you need to use a Nonce.
(Replay attacks can easily involve IP/MAC spoofing, plus you're challenged by dynamic IPs.)
It is not just replay you are after here; in isolation it is meaningless. Just use SSL and avoid handcrafting anything.
ASP.NET ViewState is a mess; avoid it. While PKI is heavyweight and bloated, at least it works without inventing your own security 'schemes'. So if I could, I'd use it, and always go for mutual authentication. Server-only authentication is quite useless.
The ViewState includes security functionality. See this article about some of the built-in security features in ASP.NET. It validates against the machineKey in machine.config on the server, which ensures that each postback is valid.
Further down in the article, you also see that if you want to store values in your own hidden fields, you can use the LosFormatter class to encode the value in the same way that the ViewState uses for encryption.
    // Serializes a value the same way ViewState does (via LosFormatter)
    private string EncodeText(string text) {
        StringWriter writer = new StringWriter();
        LosFormatter formatter = new LosFormatter();
        formatter.Serialize(writer, text);
        return writer.ToString();
    }
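The decode side is symmetric (a sketch using the same class):

    // Reverses EncodeText above
    private string DecodeText(string encoded) {
        LosFormatter formatter = new LosFormatter();
        return (string)formatter.Deserialize(encoded);
    }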
Use https... it has replay protection built in.
If you only accept each key once (say, make the key a GUID, and then check when it comes back), that would prevent replays. Of course, if the attacker responds first, then you have a new problem...
Is this WebForms or MVC? If it's MVC, you could utilize the AntiForgery token. This seems similar to the approach you mention, except that it basically uses a GUID and sets a cookie with that value for the post. For more on that, see Steve Sanderson's blog: http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
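In outline (the action and model names here are hypothetical):

    // In the view, inside the form:
    //   <%= Html.AntiForgeryToken() %>

    // In the controller, the attribute pairs the hidden field with the cookie:
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult SubmitOrder(OrderModel model)
    {
        // executes only when the form's token matches the cookie's token
        return View();
    }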
Another thing, have you considered checking the referrer on the postback? This is not bulletproof but it may help.