Using Microsoft's URLScan, do you know of any way to scan form data for malicious input? I'm looking for the equivalent of validateRequest (from ASP.NET), but for classic ASP. I want to block all data received from the browser that contains potentially malicious input, for example input that contains <script> elements or other forms of XSS.
Any clues?
There's nothing built in. Here are some suggestions:
http://aspalliance.com/1703_SQL_Injection_in_Classic_ASP_and_Possible_Solutions.all
I am scanning my web application, which I have built in ASP.NET. The scanner is injecting junk data into the system, trying to do blind SQL injection, but I am using SQL stored procedures with parameterized queries, which blocks the blind SQL injection. However, these junk entries are stored in the system as normal text. I am also sanitizing the inputs so they don't accept ' and other SQL-related characters. Now my questions are:
1) Are these junk entries any threat to the system?
2) Do I really need to sanitize the input if I am already using parameterized queries with stored procedures?
3) The scanner is not able to enter information into the system if you don't create a login sequence. Is that a good thing?
If there are any other precautions I should take, please let me know.
Thanks
As you correctly mentioned, the 'junk' entries in your database are form submissions that Acunetix is submitting when testing for SQL injection, XSS and other vulnerabilities.
To answer your questions specifically:
1) No, this junk data is just an artifact of the scanner submitting forms. You might want to consider applying stricter validation on these forms though -- remember, if a scanner can input a bunch of bogus data, an automated script (or a real user for that matter) can also insert a bunch of bogus data.
Some ideas for better validation could include restricting the kind of input based on what data should be allowed in a particular field. For example, if a user is expected to input a telephone number, there is no point in allowing alphabetic characters (digits, spaces, dashes, parentheses and a plus sign should be enough for a phone number).
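To make that concrete, a telephone field could be restricted with a simple whitelist pattern along these lines (the exact pattern and the 6-20 length limit are only an illustration):

```csharp
using System.Text.RegularExpressions;

public static class PhoneInput
{
    // Allow only digits, spaces, dashes, parentheses and a leading plus sign;
    // the 6-20 length limit is an arbitrary example.
    private static readonly Regex PhonePattern =
        new Regex(@"^\+?[0-9 ()\-]{6,20}$", RegexOptions.Compiled);

    public static bool IsValidPhone(string input)
    {
        return !string.IsNullOrEmpty(input) && PhonePattern.IsMatch(input);
    }
}
```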
Alternatively, you may also consider using a CAPTCHA for some forms. Too many CAPTCHAs may adversely affect the user experience, so be cautious where, when and how often you make use of them.
2) If you are talking about SQL injection: no, you shouldn't need to do anything else. Parameterized queries are the proper way to avoid SQLi.
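For reference, a parameterized stored-procedure call from C# looks roughly like the sketch below. The procedure name, parameter names and sizes are made up; the point is that user input only ever travels as parameter values, never as SQL text.

```csharp
using System.Data;
using System.Data.SqlClient;

public static class CommentRepository
{
    // Hypothetical stored procedure and parameters, shown only to illustrate
    // that untrusted input is passed as values, not concatenated into SQL.
    public static void SaveComment(string connectionString, int userId, string commentText)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.InsertComment", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@UserId", SqlDbType.Int).Value = userId;
            cmd.Parameters.Add("@CommentText", SqlDbType.NVarChar, 2000).Value = commentText;

            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```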
However, be careful of cross-site scripting (XSS). Filtering characters like <>'" is not the way to go when dealing with XSS. The best approach (most of the time) is context-dependent outbound encoding, which basically boils down to this: use the proper encoding for the XSS context you're in, and encode when data is printed onto the page (i.e. do not encode when saving data to the database; encode when you write that data out to the page). To read more about this, this is the easiest and most complete source I've come across: http://excess-xss.com/#xss-prevention
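As a rough illustration of what "context-dependent" means (the value and the rendering method here are invented; the System.Web encoders do the actual work):

```csharp
using System.IO;
using System.Web;

public static class OutputEncodingDemo
{
    // "name" stands in for any untrusted value read back from the database.
    // The encoder has to match the context the value is written into.
    public static void RenderUser(TextWriter writer, string name)
    {
        // HTML body context
        writer.WriteLine("<p>Hello, " + HttpUtility.HtmlEncode(name) + "</p>");

        // HTML attribute context
        writer.WriteLine("<input value=\"" + HttpUtility.HtmlAttributeEncode(name) + "\" />");

        // JavaScript string context
        writer.WriteLine("<script>var user = '" + HttpUtility.JavaScriptStringEncode(name) + "';</script>");
    }
}
```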
3) A login sequence is Acunetix's way of authenticating into your application. Without it, the scanner cannot scan the internals of your app. So unless you have forms (perhaps on the customer-facing portion of your site) that are reachable without logging in, the scanner is not going to be able to insert any data. Yes, this is generally a good thing :)
I was reading about ASP.NET script exploits, and one of the suggestions is the following (emphasis is mine; the suggestion is #3 in the section "Guarding Against Scripting Exploits" on the web page):
If you want your application to accept some HTML (for example, some formatting instructions from users), you should encode the HTML at the client before it is submitted to the server. For more information, see How to: Protect Against Script Exploits in a Web Application by Applying HTML Encoding to Strings.
Isn't that really bad advice? I mean, an attacker could send the HTML via curl or something similar, and the HTML would then reach the server unencoded, which can't be good(?)
Am I missing something here or mis-interpreting the statement?
Microsoft is not wrong in that sentence, but it is far from complete, and on its own it is dangerous.
Since validateRequest == true by default, you do indeed have to encode special HTML characters on the client just so they can reach the server in the first place and get past validateRequest.
But they should have emphasized that this is certainly not a replacement for server-side filtering and validation.
Specifically, if you must accept HTML, the strongest advice is to use whitelisting instead of blacklist filtering (i.e. allow only very specific HTML tags and eliminate all the others). Use of the Microsoft AntiXSS library is highly recommended for strong user-input filtering; it's far better than re-inventing the wheel yourself.
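To make the whitelisting idea concrete, here is a deliberately tiny sketch: HTML-encode everything, then re-enable a fixed handful of harmless tags. This is an illustration only, not a substitute for the AntiXSS library or another maintained sanitizer.

```csharp
using System.Web;

public static class TinyHtmlWhitelist
{
    // Toy example of whitelisting: encode everything, then re-enable a small
    // fixed set of harmless formatting tags (no attributes allowed).
    // Real applications should rely on a proven sanitizer instead.
    public static string AllowBasicFormatting(string untrusted)
    {
        string encoded = HttpUtility.HtmlEncode(untrusted ?? string.Empty);

        foreach (string tag in new[] { "b", "i", "em", "strong" })
        {
            encoded = encoded
                .Replace("&lt;" + tag + "&gt;", "<" + tag + ">")
                .Replace("&lt;/" + tag + "&gt;", "</" + tag + ">");
        }

        return encoded;
    }
}
```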
I don't think that advice is good...
From my experience I would totally agree with your thought and replace that advice with the following:
all input has to be checked server-side first thing on arrival
all input that can possibly contain "active content" (like HTML, JavaScript...) has to be escaped on arrival and never sent to any client until full sanitization has taken place
I would never trust the client to send trusted data. As you stated there are simply too many ways that data can be submitted. Even non-malicious users may be able to bypass the system on the client if they have JavaScript disabled.
However, the page linked from that item makes it clear what they mean by point 3:
You can help protect against script exploits in the following ways:
Perform parameter validation on form variables, query-string variables, and cookie values. This validation should include two types of verification: verification that the variables can be converted to the expected type (for example, convert to an integer, convert to date-time, and so on), and verification of expected ranges or formatting. For example, a form post variable that is intended to be an integer should be checked with the Int32.TryParse method to verify the variable really is an integer. Furthermore, the resulting integer should be checked to verify the value falls within an expected range of values.
Apply HTML encoding to string output when writing values back out to the response. This helps ensure that any user-supplied string input will be rendered as static text in the browser instead of executable script code or interpreted HTML elements. HTML encoding converts HTML elements using HTML-reserved characters so that they are displayed rather than executed.
I think this is just a case of a misplaced word: there is no way you can perform this level of validation on the client, and the examples in the link clearly present server-side code without any mention of the client.
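A minimal sketch of that kind of server-side check (the field name "quantity", the 1-100 range and the helper names are made up):

```csharp
using System;
using System.Web;

public static class ServerSideChecks
{
    // Verify the posted value really is an integer and falls in the expected range.
    public static bool TryReadQuantity(HttpRequest request, out int quantity)
    {
        return Int32.TryParse(request.Form["quantity"], out quantity)
               && quantity >= 1
               && quantity <= 100;
    }

    // Encode user-supplied text before writing it back into the response.
    public static string SafeDisplay(string userSuppliedText)
    {
        return HttpUtility.HtmlEncode(userSuppliedText ?? String.Empty);
    }
}
```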
Edit:
You also have request validation enabled by default right? So clearly the focus of protecting content is on the server as far as Microsoft is concerned.
I think the author of the article misspoke. If you go to the linked web page, it talks about encoding data before it's sent back to the client, not the other way around. I think this is just an editing error, and the author intended to say the opposite of what he wrote: encode the data before it's returned to the client.
A general question on where to put validation.
I have an ASP.NET form that gets/sets data from/to a DataSet.
Currently, the fields in the form are validated by the form itself (e.g. for invalid length, range, etc.).
Is it a good (or better) idea to move these validation checks into the DataSet?
The downside is that I need to trigger update calls on the DataSet in order to get the columns with errors.
With the form validation, I can catch errors earlier.
The main reason I'd prefer to do this is that I'll be using this DataSet assembly in another project (a WCF service?), and I'd like to reuse the same validation code where possible.
If you've seen anything similar to what I'm describing, please post a link. Thanks!
Validation needs to happen at the page level (i.e. using JavaScript) and also at the database level; put it in your database APIs (i.e. stored procedures). Don't rely solely on front-end validation, and don't commit any data without validation.
You can perform additional checks at the business-layer level if need be.
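One way to make the rules reusable from the ASP.NET page, the DataSet layer and a future WCF service is to keep them in a plain class in a shared assembly. A sketch, with invented field names and limits:

```csharp
using System;

// One place for the rules so the ASP.NET form, the DataSet layer and a
// WCF service can all call the same code.
public static class CustomerValidator
{
    // Returns null when the value is fine, otherwise an error message.
    public static string ValidateName(string name)
    {
        if (String.IsNullOrEmpty(name) || name.Trim().Length == 0)
            return "Name is required.";
        if (name.Length > 50)
            return "Name must be 50 characters or fewer.";
        return null;
    }

    public static string ValidateAge(int age)
    {
        if (age < 0 || age > 150)
            return "Age must be between 0 and 150.";
        return null;
    }
}
```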
Use both. DataSet validation is more reliable, but ASP.NET forms validation is faster: the user doesn't have to wait for a server response with the validation results. Form validation is easy to cheat, though; a request can be crafted manually and sent to the server without any client-side validation.
I am wondering if there's a good solution for preventing automated form submissions / POST attacks.
Would it be possible to add a form field with a token generated server-side that would be unique for each form displayed on the site, but with a way to check that this token was indeed generated by my app and not by the user, without having to save all tokens to a database?
IMO, if the app can tell that the data comes from a form it generated (using logic known only to the app), then it can go on to process the form.
Any suggestions for such an algorithm?
EDIT: In other words, can I generate a string that:
1) I can recognize, when I receive it back, as a string that I generated,
2) I can tell has been used only once,
3) would take a user too many attempts to generate on the client side,
4) does not have to be persisted by the application?
You'll probably be interested in Hashcash. I once tried to cheat on a blog by voting for a comment multiple times, but I gave up when I saw that this method makes it too complicated for a human to generate those values.
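If proof-of-work feels like overkill, the stateless token described in the question can also be sketched with an HMAC: the server signs a timestamp plus a nonce with a secret only it knows, so it can later recognize the value as its own without storing it. Note that requirement 2 (single use) still needs some server-side memory of consumed tokens; the names, format and secret handling below are illustrative only, not a drop-in anti-CSRF library.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class FormToken
{
    // Server-side secret: in a real application this comes from configuration,
    // never from a hard-coded string like this illustrative one.
    private static readonly byte[] Secret =
        Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");

    // Token layout: "<issued-at-ticks>:<random nonce>:<HMAC over the first two parts>"
    public static string Issue()
    {
        string payload = DateTime.UtcNow.Ticks + ":" + Guid.NewGuid().ToString("N");
        return payload + ":" + Sign(payload);
    }

    // Checks that we issued the token and that it is not older than maxAge.
    // Enforcing "used only once" would additionally require remembering
    // consumed nonces on the server until they expire.
    public static bool IsValid(string token, TimeSpan maxAge)
    {
        if (string.IsNullOrEmpty(token)) return false;

        int lastColon = token.LastIndexOf(':');
        if (lastColon <= 0) return false;

        string payload = token.Substring(0, lastColon);
        string mac = token.Substring(lastColon + 1);
        if (!string.Equals(mac, Sign(payload), StringComparison.Ordinal)) return false;

        long issuedTicks;
        if (!long.TryParse(payload.Split(':')[0], out issuedTicks)) return false;

        return DateTime.UtcNow - new DateTime(issuedTicks, DateTimeKind.Utc) <= maxAge;
    }

    private static string Sign(string payload)
    {
        using (var hmac = new HMACSHA256(Secret))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }
}
```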
I have always seen a lot of hidden fields used in web applications. I have worked with code that uses a lot of hidden fields, sending the values of the visible fields back and forth to them. Yet I fail to understand why hidden fields are used; I can almost always think of ways to solve the same problem without them. How do hidden fields help in design?
Can anyone tell me what exactly is the advantage that hidden fields provide? Why are hidden fields used?
Hidden fields are just the easiest way; that is why they are used quite a bit.
Alternatives:
storing data in a session server-side (with sessionid cookie)
storing data in a transaction server-side (with transaction id as the single hidden field)
using URL path instead of hidden field query parameters where applicable
Main concerns:
the value of the hidden field cannot be trusted to not be tampered with from page to page (as opposed to server-side storage)
big data needs to be posted every time, could be a problem, and is not possible for some data (for example uploaded images)
Main advantages:
no sticky sessions that spill between pages and multiple browser windows
no server-side cleanup necessary (for expired data)
accessible to client-side scripts
Suppose you want to edit an object. Now it's helpful to put the ID into a hidden field. Of course, you must never rely on that value (i.e. make sure the user has appropriate rights upon insert/update).
Still, this is a very convenient solution. Showing the ID in a visible field (e.g. read-only text box) is possible, but irritating to the user.
Storing the ID in a session/cookie is problematic, because it disallows multiple edit windows open at the same time and imposes lifetime restrictions (a session timeout leads to a broken edit operation, which is very annoying).
Using the URL is possible, but breaks design rules, i.e. use POST when modifying data. Also, since it is visible to the user it creates uglier URLs.
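A small sketch of the "never rely on the hidden value" point above: the ID comes from the posted form, but the permission check is redone on the server. The field name "recordId" and the injected authorization check are hypothetical.

```csharp
using System;
using System.Web;

public static class EditAuthorization
{
    // Parse the posted hidden value and re-check the current user's rights on
    // the server before performing the update; never trust the value as-is.
    public static bool TryGetAuthorizedRecordId(HttpRequest request,
                                                Func<int, bool> currentUserMayEdit,
                                                out int recordId)
    {
        if (!Int32.TryParse(request.Form["recordId"], out recordId))
            return false;

        return currentUserMayEdit(recordId);
    }
}
```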
The most typical use I see (and use myself) is for IDs and other things that don't need to be on the page for any reason other than that they need to be sent back to the server at some point.
-edit, should've included more detail-
Say, for instance, you have some object you want to update: the UI sends back a collection of values, and at that point the server may or may not know "hey, this is a customer object". So you fire off a request to the server and say "hey, give me ID 7", and now you have your customer object as the system knows it. The updates are applied, validated, whatever, and your UI gets the completed result.
I guess a good argument is LINQ: try to update an object in LINQ without getting it from the DB first. It has no real idea that it's something it can keep track of until you have the full object.
Here's one reason: it's a convenient way of passing data between client code (JavaScript) and the server side.
There are many useful scenarios.
One is to "store" some data on the page that should not be entered by a user. For example, store the user ID when generating the page; this value will then be submitted automatically with the form back to the server.
Another scenario is security. Add a hidden token to the page and check for its presence on the server. This helps identify whether a form was submitted via the browser or by some bot that just posted to a URL on your site.
It keeps things out of the URL (as in the querystring) so it keeps that clean. It also keeps things out of Session that may not necessarily need to be in there.
Other than that, I can't think of too many other benefits.
They are generally used to store state as an interaction progresses. Cookies could be used instead, but some people disable them. Could also use a single hidden field to point at server-side state, but then there are session-stickiness issues.
If you use a hidden field in the form, you increase the weight of the form by including another control.
If there is no need for a hidden field, you shouldn't use one: it is not great from a security point of view, relying on hidden fields is not generally good practice, and it also affects the performance of the application.