Why do I need to perform server side validation? - asp.net

Thanks to everyone who commented or posted an answer! I've kept my original question and update below for completeness.
[Feb 16, 2011 - Update 2] As some people point out - my question should have been: Given a standard asp.net 4 form, if I don't have any server side validation, what types of malicious attacks am I susceptible to?
Here is my take away on this issue.
If data isn't sensitive (comments on a page) - from an asp.net security standpoint, following standard best practices (SqlParameters, request validation enabled, etc) will protect you from malicious attacks.
For sensitive data/applications - it's up to you to decide what type of server side validation is appropriate for your application. You need to think about the end-to-end solution (webservices, other systems, etc). You can view a number of suggestions below - whitelist validation, etc.
If you are using ajax (xhr requests) to post user input you need to reproduce the protection from the other bullets in your code on the server. Again, lots of solutions below – like ensuring that the data does not contain any html/code, etc. (side note: the .net framework requestValidationMode="4.0" does afford some protection in this regard - but I can't speak to how complete a solution it is)
Please feel free to continue to comment...if any of the above is incorrect please let me know. Thanks!
[Feb 3, 2011 - Update 1] I want to thank everyone for their answers! Perhaps I should ask the reverse question:
Assume a simple asp.net 4.0 web form (formview + datasource with request validation enabled) that allows logged in users to post comments to a public page (comments stored in sql server db table). What type of data validation or cleansing should I perform on the new "comments" on the server side?
[Jan 19, 2011 - Original Question] Our asp.net 4 website has a few forms where users can submit data and we use jquery validate on the client side. Users have to be logged in with a valid account to access these forms.
I understand that our client side validation rules could easily be bypassed and clients could post data without required fields, etc. This doesn’t concern me very much - users have to be logged in and I don’t consider our data very “sensitive” nor would I say any of our validation is “critical”. The input data is written to the database using SqlParameters (to defend against sql injection) and we depend on asp.net request validation to defend against potentially dangerous html input.
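(For reference, the parameterized approach described above looks roughly like the sketch below; it is only illustrative and the table and column names are invented, not our actual code:)
using System.Data;
using System.Data.SqlClient;
// Hypothetical helper - table/column names are made up for illustration.
static void SaveComment(string connectionString, int userId, string commentBody)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "INSERT INTO Comments (UserId, Body) VALUES (@userId, @body)", connection))
    {
        // Values travel as parameters and are never concatenated into the SQL text,
        // so a malicious comment cannot change the shape of the query.
        command.Parameters.Add("@userId", SqlDbType.Int).Value = userId;
        command.Parameters.Add("@body", SqlDbType.NVarChar, 4000).Value = commentBody;
        connection.Open();
        command.ExecuteNonQuery();
    }
}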
Is it really worth our time to rewrite the various jquery validation rules on the server? Specifically how could a malicious user compromise our server or what specific attacks could we be open to?
I apologize as it appears that this question has been discussed a few times on this site – but I have yet to find an answer that cites specific risks or issues with not performing server side validation. Thanks in advance

Hypothetical situation:
Let's say you have a zip code field. On the client-side you validate that it must be in a "00000" or "00000-0000" pattern. Since you're allowing a hyphen, you decide to store the field as a varchar in the database.
So, some evil user comes along and decides to bypass all of your client-side validation and submit something that's not in the correct format and makes it past the request validation.
Ok, no big deal... you're encoding it before displaying it back to the user later anyway.
But what else are you doing with that zip code? Are you submitting it to web service for some sort of lookup? Are you uploading it to a GPS device? Will it ever be interpreted by something else in the future? Does your zipcode field now contain some JSON or something else weird?
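For instance, a minimal server-side check for that hypothetical zip code field might look like the sketch below (the method name is invented):
using System.Text.RegularExpressions;
// Hypothetical server-side check for the zip code scenario above: accept only
// "00000" or "00000-0000" shaped input, no matter what the client-side rules did.
static bool IsValidZip(string zip)
{
    return zip != null && Regex.IsMatch(zip, @"^\d{5}(-\d{4})?$");
}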
Or something like this: http://www.businessinsider.com/livingsocial-server-flaw-2011-1

Security is a dependability attribute, defined as the probability that the system resists an attack, or in other words the probability that a fault is not maliciously activated.
In order to implement security, you must perform a threat analysis. Complex computer systems are subject to deeper analyses (think of an aircraft's or a control tower's equipment) as they become more critical and threats put business or human life at risk.
You can perform your own threat analysis by asking yourself: what happens if a user bypasses validation?
Two groups of answers, by examples:
Group 1 (critical)
The user can buy articles for less than their price
The user can see information about other users
The user obtains privileges he/she is not supposed to have
Group 2 (non critical)
The user is shown inconsistent data on the next page
Processing continues, but the inconsistency leads to an error that requires human intervention
The user's data (but only that user's, not others') gets compromised
A strange error page is returned to the user, with lots of technical information that cannot be used anyway
In the first case, you must definitely fix your validation problem, because you could lose money after an attack, or lose the trust of your public (think about forging Facebook URLs and showing someone's photos even if you are not mutually friends).
In the second case, if you are sure that an inconsistent field doesn't put your business or the data at risk, you may still choose not to fix it.
The real problem is
How do you prove that any inconsistent data sent to your website is never supposed to have any consequence over the system that may pose a threat?
So that's why you lose less time fixing your validation than trying to prove that you don't need to.

Honestly, users don't care what you consider "sensitive" or "critical" data. Those criteria are up to them to decide.
I know that if I was a user of your application and I saw my data change without me directly doing something to cause the change...I would close my account up as fast as possible. It would be readily apparent that your system wasn't secure and none of my data was safe.
Keep in mind that you're forcing people to log in so you at least have their passwords somewhere. Whether or not they are easily accessed, a breach is a breach and I have lost my trust.
So...while you may not consider an input injection attack important, your users will and that is why you should still do server side input validation.

Your data may not be worth much, that's fine by me.
BUT, attackers could inject CSRF "cross site request forgery" attack code into your application; users of your site may have their data at other sites compromised. Yes, it would require those 'other sites' to have bugs, but that happens. Yes, it would require that users not use the 'logout' buttons on those sites, but not enough people use them. Think of all the tasty data your users have stored at other web sites. You wouldn't want something bad to happen to your users.
Attackers could inject HTML that invites users to download and install 'plugins necessary for viewing this content' -- plugins that are keyloggers, or search hard drives for credit card numbers or tax filings. Maybe a plugin to become spambots or porn hosts. Your users trust your site to not recommend plugins that are owned by the Yakuza, right? They might not feel friendly if your site recommends installing evil things.
Depending upon what kinds of bugs invalid data might trigger, you might find yourself a spambot or a porn host. It heavily depends on how defensively you have coded other aspects of your application. Too many applications blindly trust input data.
And the best part: your users aren't human. Your users are browsers, which might be executing attacks supplied by other sites that didn't bother to perform good input validation and output sanitizing. Your users are viruses or worms that happen to find you by chance or by design. You might trust the individuals, but how far do you trust their computers? Me, not very far.
Please write applications to be as secure as you can -- you may put a large button on the front page to drop all users' data if you want -- but please don't intentionally write insecure programs.

This is an excellent and brave question. The short (and possibly brave) answer is you don't. If you are aware of all the security vulnerabilities and you still don't believe it's necessary, then that's your choice.
It really depends on who your users are, who the site is exposed to (in terms of intranet or internet) and how easy it is to obtain an account. You say that your data is not sensitive yet you still require users to log in. How bad would it be if an unauthorised user were to access the system by hopping on another user's machine whilst they were elsewhere?
Bear in mind that relying on the request validation to look for malicious input can never be proved to be 100% safe so security is usually done at multiple levels with a fair bit of redundancy.
However it has to be your choice and you are doing the right thing to find out the consequences of leaving this out.

I believe that you need to validate both on the client side and on the server side, and here's why.
On the client side, you are often saving the user from submitting data that is obviously wrong. They have not filled in a required field. They have put letters in a field that is only supposed to contain numbers. They have provided a date in the future when only a date in the past will do (such as date of birth). And so on. By preventing these kinds of mistakes on the client side, you are avoiding user frustration, and also reducing the number of unnecessary hits to your web server.
On the server side, you should generally repeat all of the validation that you did on the client side. That is because, as you have observed, clever users can get around client-side validation and submit invalid data. In addition, there is some validation that is inefficient or impossible to do on the client side. Sometimes, you check that the data entry adheres to business rules. You might check it against existing data in the database. If you just let users enter anything (especially omitting required fields), the website won't function properly for them.
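As a rough sketch of what repeating those checks on the server might look like (the field names are invented, and a real form would collect the errors and redisplay them):
using System;
using System.Collections.Generic;
// Hypothetical server-side re-validation of rules the client already enforced.
static IList<string> ValidateSubmission(string name, string dateOfBirthText)
{
    var errors = new List<string>();
    // Required field: the client-side rule can be bypassed, so check again here.
    if (string.IsNullOrWhiteSpace(name))
        errors.Add("Name is required.");
    // Date of birth must parse and must be in the past.
    DateTime dateOfBirth;
    if (!DateTime.TryParse(dateOfBirthText, out dateOfBirth))
        errors.Add("Date of birth is not a valid date.");
    else if (dateOfBirth >= DateTime.Today)
        errors.Add("Date of birth must be in the past.");
    return errors;
}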

Check out the Tamper Data extension for Firefox. You can feed the server anything you want very easily.

Anyone performing HTTP POSTs to your server via your web site (with jQuery validation) can also perform HTTP POSTs via some other means that bypasses the jQuery validation. For example, I could use System.Net.HttpWebRequest to POST some data to your server, with the appropriate cookies, that injects malicious content into the form fields. I'd have to set up the __EVENTVALIDATION and __VIEWSTATE fields correctly, but if I succeed, I'd be bypassing the validation.
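Just to illustrate how little the jQuery rules matter to such a request, a rough sketch (the URL and form field are invented):
using System.IO;
using System.Net;
using System.Text;
// Hypothetical illustration: posting form data directly, bypassing all client-side
// (jQuery) validation. A real request against WebForms would also have to replay
// the __VIEWSTATE / __EVENTVALIDATION fields and the authentication cookies.
static void PostRaw(string url, string formBody)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";
    byte[] data = Encoding.UTF8.GetBytes(formBody); // e.g. "Comment=<script>...</script>"
    request.ContentLength = data.Length;
    using (Stream body = request.GetRequestStream())
        body.Write(data, 0, data.Length);
    request.GetResponse().Close();
}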
If you don't have server-side data validation, then you are effectively not validating the inputs at all. The jQuery validation is nice for user experience but not a real line of defense.
This is especially so with inputs like a free-form comments field. You definitely want to ensure that the field does not contain HTML or other malicious script. As an extra measure of defense, you should also escape the comment content when it is displayed in your web app with a library like AntiXss (see http://wpl.codeplex.com/).
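For example, a minimal sketch of encoding the comment at the point of output (HttpUtility.HtmlEncode from the framework is shown; the AntiXss encoder can be dropped into the same spot):
using System.Web;
// Hypothetical helper: encode the stored comment when it is displayed so any
// HTML or script it contains is rendered as inert text rather than executed.
static string RenderComment(string storedComment)
{
    return HttpUtility.HtmlEncode(storedComment);
}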

In terms of client-side vs. server-side validation, my opinion is that client-side validation is just there to make sure the form is filled in correctly; a user can tamper with the form and bypass any verification you do in JavaScript.
On the server side you can make sure that you actually want to store this data, validate it in depth, and check it against the relevant database tables to ensure that your database stays consistent with whatever data set you get from the client. I would even say the server side is more important than the client side, because it doesn't reveal to the user what you look for in the form or how you validate the data.
To summarize, I recommend validation on both sides, but if I had to choose between the two I would recommend server-side validation, even though that can mean your server ends up performing extra validation work that could have been headed off on the client side.

To answer your second question:
You need to use a whitelist to keep malicious input out of the incoming comments.
The .NET Framework request validation does a very good job of stopping XSS payloads in incoming POST requests. It may not, however, prevent other malicious or mischievous HTML from getting into the comments (image tags, hyperlinks, etc.).
So if possible I would set up whitelist validation on the server side for allowed characters. A regex should cover this just fine. You should allow A-Za-z0-9, whitespace, and a few punctuation marks. If the regex fails to match, return an error message to the user and stop the transaction. Regarding SQL Injection: I would allow apostrophes through in this case (unless you like terrible grammar in your comments), but put code comments around your parameterized SQL queries to the effect of: "This is the only protection against SQL injection, so be careful when modifying." You should also lock down the permissions of the database account used by the web process (read/write only, not database owner permissions). What I wouldn't do is try to do blacklist validation on the input, as that is very time consuming to do correctly (see RSnake's XSS Cheat Sheet at http://ha.ckers.org/xss.html for an idea of the number of things you would need to prevent just for XSS).
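A rough sketch of that kind of whitelist check (the exact character set and length limit are up to you):
using System.Text.RegularExpressions;
// Hypothetical whitelist validation for the comment field: letters, digits,
// whitespace, apostrophes and a few punctuation marks only. Anything else fails
// and the transaction is stopped with an error message.
static bool IsAllowedComment(string comment)
{
    return comment != null
        && Regex.IsMatch(comment, @"^[A-Za-z0-9\s'.,!?()-]{1,2000}$");
}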
Between the .NET framework and your own whitelist validation you should be safe from HTML-based attacks such as XSS and CSRF*. SQL injection will be prevented by using parameterized queries. If the comment data touches any other assets you may need to put more controls in place, but those cover the attacks relevant to the basic data submission form you've outlined.
Also, I wouldn't try to "cleanse" the data at all. It is very difficult to do properly and users (as was mentioned above) hate it when their data is modified without their permission. It is more secure and more usable to give users a clear error message when your data validation fails. If you put their comment back on the page for them to edit, HTML encode the output so you aren't vulnerable to a Reflected XSS attack.
And as always, OWASP.org (http://www.owasp.org) is a good reference for all things webappsec related. Check out their Top Ten and Development Guide projects.
*CSRF may not be a direct concern of yours, as fraudulent posts to your site may not matter to you, but preventing XSS has the side benefit of keeping CSRF payloads targeting other sites from being hosted from your site.

Related

CSRF protection while making use of server side caching

Situation
There is a site at examp.le that costs a lot of CPU/RAM to generate and a leaner examp.le/backend that will perform various tasks to read, write and serve user-specific data for authenticated requests. A lot of resources could be saved by utilizing a server side cache on the examp.le site (but not on examp.le/backend) and just asynchronously grabbing all user-specific data from the backend once the page arrives at the client. (Total loading time may even be lower, despite the need of an additional request.)
Threat model
CSRF attacks. Assuming (maybe foolishly) that examp.le is reliably safeguarded against XSS code injection, we still need to consider scripts on a malicious site exploit.me that cause the victim's browser to run a request against examp.le/backend with their authorization cookies included automagically and cause the server to perform some kind of data mutation on behalf of the user.
Solution / problem with that
As far as I understand, the commonly used countermeasure is to include another token in the generated examp.le page. The server can verify this token is linked to the current user's session and will only accept requests that can provide it. But I assume caching won't work very well if we are baking a random token into every response to examp.le?
So then...
I see two possible solutions: One would be some sort of "hybrid caching" where each response to examp.le is still programmatically generated but that program is just merging small dynamic parts into some cached output. This wouldn't work with caching systems that work on the higher layers of the server stack, let alone a CDN, but it still might have its merits. I don't know if there are standard ways or libraries to do this, or more specifically if there are solutions for WordPress (which happens to be the culprit in my case).
The other (preferred) solution would be to get an initial anti-CSRF token directly from examp.le/backend. But I'm not quite clear in my understanding about the implications of that. If the script on exploit.me could somehow obtain that token, the whole mechanism would make no sense to begin with. The way I understand it, if we leave exploitable browser bugs and security holes out of the picture and consider only requests coming from a non-obscure browser visiting exploit.me, then the HTTP_ORIGIN header can be absolutely trusted to be tamper proof. Is that correct? But then that begs the question: wouldn't we get mostly the same amount of security in this scenario by only checking authentication cookie and origin header, without throwing tokens back and forth?
I'm sorry if this question feels a bit all over the place, but I'm partly still in the process of getting the whole picture clear ;-)
First of all: Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) are two different categories of attacks. I assume you meant to tackle the CSRF problem only.
Second of all: it's crucial to understand what CSRF is about. Consider the following.
A POST request to examp.le/backend changes some kind of crucial data
The request to examp.le/backend is protected by authentication mechanisms, which generate valid session cookies.
I want to attack you. I do it by sending you a link to a page I have forged at cats.com/best_cats_evr.
If you are logged in to examp.le in one browser tab and you open cats.com/best_cats_evr in another, the code will be executed.
The code on the site cats.com/best_cats_evr will send a POST request to examp.le/backend. The cookies will be attached, as there is no reason why they should not be. You will perform a change on examp.le/backend without knowing it.
So, having said that, how can we prevent such attacks?
The CSRF case is very well known to the community and it makes little sense for me to write everything down myself. Please check the OWASP CSRF Prevention Cheat Sheet, as it is one of the best pages you can find in this topic.
And yes, checking the origin would help in this scenario. But checking the origin will not help if I find an XSS vulnerability in examp.le/somewhere_else and use it against you.
What would also help would be not using POST requests (as they can be sent cross-origin without any origin checks) but e.g. PUT, where CORS should help... But this quickly turns out to be too much rocket science for the dev team to handle, and sticking to good old anti-CSRF tokens (supported by default in every framework) should help.
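As a rough illustration of the origin check discussed above (a defence-in-depth measure alongside the token, not a replacement for it; how you read the headers depends on your server stack):
using System;
// Hypothetical check on examp.le/backend: reject state-changing requests whose
// Origin (or, as a fallback, Referer) header does not match our own site.
static bool IsSameOrigin(string originHeader, string refererHeader)
{
    const string allowed = "https://examp.le";
    string source = !string.IsNullOrEmpty(originHeader) ? originHeader : refererHeader;
    return source != null
        && source.StartsWith(allowed, StringComparison.OrdinalIgnoreCase);
}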

Do I need extra XSS security for ASP.NET 4 websites?

From what I understand about what ASP.NET does, and from my own personal testing with various XSS test strings, I found that my ASP.NET 4 website does not require any XSS prevention.
Do you think that an ASP.NET 4.0 website needs any XSS security beyond its default options? I cannot enter any javascript or any tags into my text fields that are then immediately printed onto the page.
Disclaimer - this is based on a very paranoid definition of what "trusted output" is, but when it comes to web security, I don't think you CAN be too paranoid.
Taken from the OWASP page linked to below: "Untrusted data is most often data that comes from the HTTP request, in the form of URL parameters, form fields, headers, or cookies. But data that comes from databases, web services, and other sources is frequently untrusted from a security perspective. That is, it might not have been perfectly validated."
In most cases, you do need more protection if you are taking input from ANY source and outputting it to HTML. This includes data retrieved from files, databases, etc - much more than just your textboxes. You could have a website that is perfectly locked down and have someone go directly to the database via another tool and be able to insert malicious script.
Even if you're taking data from a database where only a trusted user is able to enter the data, you never know if that trusted user will inadvertently copy and paste in some malicious script from a website.
Unless you absolutely positively trust any data that will be output on your website and there is no possible way for a script to inadvertently (or maliciously in case of an attacker or disgruntled employee) put dangerous data into the system, you should sanitize all output.
If you haven't already, familiarize yourself with the info here: https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
and go through the other known threats on the site as well.
In case you miss it, the Microsoft.AntiXss library is a very good tool to have at your disposal. In addition to a better version of the HtmlEncode function, it also has nice features like GetSafeHtmlFragment() for when you WANT to include untrusted HTML in your output and have it sanitized. This article shows proper usage: http://msdn.microsoft.com/en-us/library/aa973813.aspx The article is old, but still relevant.
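A minimal usage sketch (the exact class name depends on the AntiXSS version you install - older releases expose AntiXss.GetSafeHtmlFragment, the 4.x library exposes Sanitizer.GetSafeHtmlFragment in Microsoft.Security.Application):
using Microsoft.Security.Application;
// Hypothetical sketch: sanitize untrusted HTML before including it in output.
static string SanitizeUserHtml(string untrustedHtml)
{
    // Strips script and other dangerous constructs while keeping benign markup.
    return Sanitizer.GetSafeHtmlFragment(untrustedHtml);
}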
Sorry Dexter, ASP.NET 4 sites do require XSS protection. You're probably thinking that the inbuilt request validation is sufficient and whilst it does an excellent job, it's not foolproof. It's still essential that you validate all input against a whitelist of acceptable values.
The other thing is that request validation is only any good for reflective XSS, that is XSS which is embedded in the request. It won't help you at all with persistent XSS so if you have other data sources where the input validation has not been as rigorous, you're at risk. As such, you always need to encode your output and encode it for the correct markup context (HTML, JavaScript, CSS). AntiXSS is great for this.
There's lots more info specifically as it relates to ASP.NET in OWASP Top 10 for .NET developers part 2: Cross-Site Scripting (XSS).

ASP.NET - Hacking the yellow screen of death

In some of my books that I've read, it is stated that it is good to hide yellow screens of death (obviously), not only because they look unprofessional to users, but also because hackers can use the information to hack your website.
My question is this. How can a hacker use this information? How does a call stack of basic .NET operations help hackers?
I attached a yellow screen of death that I encountered on one of the websites that I created a long time ago and it sparked my interest.
(The error is that it fails when attempting to cast a query string parameter to an int. Yea, I know it's bad code, I wrote it many years ago ;)
If you're writing secure code, the YSOD shouldn't provide a hacker with the ability to hack your application. If however, your code is insecure, then the YSOD could provide the attacker with essential information to allow them to carry out their attack.
Say, for example, you have written your own forum software. You have put in lots of validation for when the user writes posts to prevent XSS attacks and such, but your validation is faulty. If a hacker can bring up the YSOD when they make a post, the stack trace shown could potentially show them the cracks in your validation and exploit them to create XSS attacks or obtain member details or passwords and such.
The YSOD on its own is no threat, but to a hacker, it can be a very useful way of finding flaws in your application's security.
There are several different ways this could compromise your application... but most of them would only make an attack easier... the vulnerability would probably already have to be there. For example, you could easily reveal a hard-coded password or salt, or reveal a line of code accepting user input without properly sanitizing it.
As mentioned by others, the YSOD itself is not necessarily always helpful to a hacker, but suppose that line 13 in your code above contained a hard-coded connection string or an inline SQL query.
I now know from your YSOD that the "id" in your querystring is actually artId and not just any random id number, which may be of some use to a hacker.
Also, if a hacker were able to collect several different YSODs, together they might reveal enough information to damage your app.
Some time back, MS reported a security vulnerability in ASP.NET where the workaround provided was to enable customErrors and hide the error code and any particular error-related detail from the user.
One thing that hasn't been mentioned yet is that an attacker would now have good reason to believe that you're using a MySql database (which they wouldn't be likely to guess about an ASP.NET app otherwise), helping them to narrow the range of potential attacks. No sense in making their job easier than it has to be.

What would you like to see in a beginner's ASP.NET security book

This is a shameless information gathering exercise for my own book.
One of the talks I give in the community is an introduction to web site vulnerabilities. Usually during the talk I can see at least two members of the audience go very pale; and this is basic stuff, Cross Site Scripting, SQL Injection, Information Leakage, Cross Site Form Requests and so on.
So, if you can think back to being one, as a beginning web developer (be it ASP.NET or not) what do you feel would be useful information about web security and how to develop securely? I will already be covering the OWASP Top Ten.
(And yes this means stackoverflow will be in the acknowledgements list if someone comes up with something I haven't thought of yet!)
It's all done now, and published, thank you all for your responses
First, I would point out the insecurities of the web in a way that makes them accessible to people for whom developing with security in mind may (unfortunately) be a new concept. For example, show them how to intercept an HTTP header and implement an XSS attack. The reason you want to show them the attacks is so they themselves have a better idea of what they're defending against. Talking about security beyond that is great, but without understanding the type of attack they're meant to thwart, it will be hard for them to accurately "test" their systems for security. Once they can test for security by trying to intercept messages, spoof headers, etc. then they at least know if whatever security they're trying to implement is working or not. You can teach them whatever methods you want for implementing that security with confidence, knowing if they get it wrong, they will actually know about it because it will fail the security tests you showed them to try.
Defensive programming as an archetypal topic which covers all the particular attacks, as most, if not all, of them are caused by not thinking defensively enough.
Make that subject the central column of the book. What would've served me well back then was knowing about techniques to never trust anything, not just one-off tips like "do not allow SQL comments or special chars in your input".
Another interesting thing I'd love to have learned earlier is how to actually test for them.
I think all vulnerabilities come down to programmers not thinking - either momentary lapses of judgement or something they haven't thought of. One big vulnerability in an application that I was tasked to "fix up" was the fact that the authentication method returned 0 (zero) when the user logging in was an administrator. Because the variable was originally initialized to 0, any problem such as the database being down would throw an exception, the variable would never be set to the proper "security code", and the user would end up with admin access to the site. Absolutely horrible thought went into that process. So, that brings me to a major security concept: never set the initial value of a variable representing a "security level" (or anything of that sort) to something that represents total god control of the site. Better yet, use existing libraries that have gone through the fire of being used in massive numbers of production environments over a long period of time.
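A minimal sketch of that fail-open pattern and its fix (every name here is invented):
using System;
// Hypothetical illustration of the bug described above.
class LoginExample
{
    const int Admin = 0;      // the value the original code happened to default to
    const int NoAccess = -1;  // a least-privileged value
    // BAD: the variable starts out as 0 (== admin); if the database is down the
    // exception is swallowed and the caller is treated as an administrator.
    static int GetSecurityCodeFailOpen(Func<int> lookUp)
    {
        int code = Admin;
        try { code = lookUp(); } catch (Exception) { /* swallowed */ }
        return code;
    }
    // BETTER: fail closed - any failure leaves the caller with the least privilege.
    static int GetSecurityCodeFailClosed(Func<int> lookUp)
    {
        try { return lookUp(); } catch (Exception) { return NoAccess; }
    }
}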
I would like to see how ASP.NET security is different from ASP Classic security.
Good to hear that you will have the OWASP Top Ten. Why not also include coverage of the SANS/CWE Top 25 Programming Mistakes?
How to make sure your security method is scalable with SQL Server. Especially how to avoid having SQL Server serialize requests from multiple users because they all connect with the same ID...
I always try to show the worst-case scenario on things that might go wrong. For instance on how a cross-site script injection can work as a black-box attack that even works on pages in the application that a hacker can’t access himself or how even an SQL injection can work as a black box and how a hacker can steal your sensitive business data, even when your website connects to your database with a normal non-privileged login account.

How do I prevent replay attacks?

This is related to another question I asked. In summary, I have a special case of a URL where, when a form is POSTed to it, I can't rely on cookies for authentication or to maintain the user's session, but I somehow need to know who they are, and I need to know they're logged in!
I think I came up with a solution to my problem, but it needs fleshing out. Here's what I'm thinking. I create a hidden form field called "username", and place within it the user's username, encrypted. Then, when the form POSTs, even though I don't receive any cookies from the browser, I know they're logged in because I can decrypt the hidden form field and get the username.
The major security flaw I can see is replay attacks. How do I prevent someone from getting ahold of that encrypted string, and POSTing as that user? I know I can use SSL to make it harder to steal that string, and maybe I can rotate the encryption key on a regular basis to limit the amount of time that the string is good for, but I'd really like to find a bulletproof solution. Anybody have any ideas? Does the ASP.Net ViewState prevent replay? If so, how do they do it?
Edit: I'm hoping for a solution that doesn't require anything stored in a database. Application state would be okay, except that it won't survive an IIS restart or work at all in a web farm or garden scenario. I'm accepting Chris's answer, for now, because I'm not convinced it's even possible to secure this without a database. But if someone comes up with an answer that does not involve the database, I'll accept it!
If you hash in a time-stamp along with the user name and password, you can close the window for replay attacks to within a couple of seconds. I don't know if this meets your needs, but it is at least a partial solution.
There are several good answers here and putting them all together is where the answer ultimately lies:
Block-cipher encrypt (with AES-256+) and hash (with SHA-2+) all state/nonce-related information that is sent to a client. Hackers will otherwise just manipulate the data, view it to learn the patterns, and circumvent everything else. Remember ... it only takes one open window.
Generate a one-time, random and unique nonce per request that is sent back with the POST request. This does two things: it ensures that the POST response goes with THAT request, and it allows tracking of one-time use of a given set of GET/POST pairs (preventing replay).
Use timestamps to make the nonce pool manageable. Store the timestamp in an encrypted cookie per #1 above. Throw out any requests older than the maximum response time or session for the application (e.g., an hour).
Store a "reasonably unique" digital fingerprint of the machine making the request with the encrypted timestamp data. This will prevent another trick wherein the attacker steals the client's cookies to perform session hijacking. This ensures that the request is coming back not only once, but from the machine (or close enough proximity to make it virtually impossible for the attacker to copy) the form was sent to.
There are ASP.NET and Java/J2EE security-filter-based applications that do all of the above with zero coding. Managing the nonce pool for large systems (like a stock trading company, bank or high volume secure site) is not a trivial undertaking if performance is critical. I would recommend looking at those products versus trying to program this for each web application.
If you really don't want to store any state, I think the best you can do is limit replay attacks by using timestamps and a short expiration time. For example, server sends:
{Ts, U, HMAC({Ts, U}, Ks)}
Where Ts is the timestamp, U is the username, and Ks is the server's secret key. The user sends this back to the server, and the server validates it by recomputing the HMAC on the supplied values. If it's valid, you know when it was issued, and can choose to ignore it if it's older than, say, 5 minutes.
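A rough sketch of that scheme (key handling and encoding are simplified, and it assumes the username contains no '|' character):
using System;
using System.Security.Cryptography;
using System.Text;
// Hypothetical sketch of the {Ts, U, HMAC({Ts, U}, Ks)} token described above.
static string IssueToken(string username, byte[] serverKey)
{
    string payload = DateTime.UtcNow.Ticks + "|" + username;
    using (var hmac = new HMACSHA256(serverKey))
    {
        byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
        return payload + "|" + Convert.ToBase64String(mac);
    }
}
static bool ValidateToken(string token, byte[] serverKey, TimeSpan maxAge)
{
    string[] parts = token.Split('|');
    if (parts.Length != 3) return false;
    string payload = parts[0] + "|" + parts[1];
    using (var hmac = new HMACSHA256(serverKey))
    {
        byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
        if (Convert.ToBase64String(expected) != parts[2]) return false; // tampered
    }
    // Reject tokens older than the allowed window (e.g. 5 minutes).
    long ticks;
    if (!long.TryParse(parts[0], out ticks)) return false;
    return DateTime.UtcNow - new DateTime(ticks, DateTimeKind.Utc) <= maxAge;
}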
A good resource for this type of development is The Do's and Don'ts of Client Authentication on the Web
You could use some kind of random challenge string that's used along with the username to create the hash. If you store the challenge string on the server in a database you can then ensure that it's only used once, and only for one particular user.
In one of my apps, to stop 'replay' attacks, I have inserted IP information into my session object. Every time I access the session object in code I make sure to pass the Request.UserHostAddress with it and then I compare to make sure the IPs match up. If they don't, then obviously someone other than the person made this request, so I return null. It's not the best solution but it is at least one more barrier to stop replay attacks.
Can you use memory or a database to maintain any information about the user or request at all?
If so, then on request for the form, I would include a hidden form field whose contents are a randomly generated number. Save this token in application context or some sort of store (a database, flat file, etc.) when the request is rendered. When the form is submitted, check the application context or database to see if that randomly generated number is still valid (however you define valid - maybe it can expire after X minutes). If so, remove this token from the list of "allowed tokens".
Thus any replayed requests would include this same token which is no longer considered valid on the server.
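A minimal sketch of that kind of one-time token store (in-process only, so - as the questioner notes - it would not survive an IIS restart or work across a web farm without moving the store to a database):
using System;
using System.Collections.Concurrent;
// Hypothetical in-memory one-time token store: issue a token when the form is
// rendered; a token validates exactly once and then disappears.
static class FormTokens
{
    static readonly ConcurrentDictionary<string, DateTime> Issued =
        new ConcurrentDictionary<string, DateTime>();
    public static string Issue()
    {
        string token = Guid.NewGuid().ToString("N");
        Issued[token] = DateTime.UtcNow;
        return token;
    }
    public static bool Redeem(string token, TimeSpan maxAge)
    {
        DateTime issuedAt;
        // TryRemove makes redemption atomic: a replayed token is already gone.
        if (token == null || !Issued.TryRemove(token, out issuedAt)) return false;
        return DateTime.UtcNow - issuedAt <= maxAge;
    }
}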
I am new to some aspects of web programming but I was reading up on this the other day. I believe you need to use a Nonce.
(Replay attacks can easily be all about IP/MAC spoofing, plus you're challenged by dynamic IPs.)
It is not just replay you are after here; in isolation it is meaningless. Just use SSL and avoid handcrafting anything.
ASP.NET ViewState is a mess, avoid it. While PKI is heavyweight and bloated, at least it works without inventing your own security 'schemes'. So if I could, I'd use it and always go for mutual authentication. Server-only authentication is quite useless.
The ViewState includes security functionality. See this article about some of the built-in security features in ASP.NET. It validates against the machineKey in the machine.config on the server, which ensures that each postback is valid.
Further down in the article, you also see that if you want to store values in your own hidden fields, you can use the LosFormatter class to encode the value in the same way that the ViewState uses for encryption.
private string EncodeText(string text) {
    // Requires System.IO (StringWriter) and System.Web.UI (LosFormatter).
    StringWriter writer = new StringWriter();
    LosFormatter formatter = new LosFormatter();
    // LosFormatter serializes the value in the same format ViewState uses.
    formatter.Serialize(writer, text);
    return writer.ToString();
}
Use https... it has replay protection built in.
If you only accept each key once (say, make the key a GUID, and then check when it comes back), that would prevent replays. Of course, if the attacker responds first, then you have a new problem...
Is this WebForms or MVC? If it's MVC you could utilize the AntiForgery token. This seems like it's similar to the approach you mention except it uses basically a GUID and sets a cookie with the guid value for that post. For more on that see Steve Sanderson's blog: http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
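For reference, a bare-bones sketch of how that helper is wired up (controller and action names are invented; the matching view emits the hidden field and cookie with Html.AntiForgeryToken()):
using System.Web.Mvc;
public class CommentsController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken] // rejects posts whose hidden token and cookie don't match
    public ActionResult Create(string commentBody)
    {
        // ... save the comment ...
        return RedirectToAction("Index");
    }
}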
Another thing, have you considered checking the referrer on the postback? This is not bulletproof but it may help.
