Acunetix Webscan - ASP.NET

I am scanning my web application, which I have built in ASP.NET. The scanner is injecting junk data into the system while attempting blind SQL injection. I am using SQL stored procedures with parameterized queries, which defeats the blind SQL injection, but these junk entries are still stored in the system as normal text. I am also sanitizing the inputs so they do not accept ' and other SQL-related characters. Now my questions are:
1) Are these junk entries any threat to the system?
2) Do I really need to sanitize the input if I am already using parameterized queries with stored procedures?
3) The scanner is not able to enter information into the system if you don't create a login sequence. Is that a good thing?
If there are any other precautions I should take, please let me know.
Thanks

As you correctly mentioned, the 'junk' entries in your database are form submissions that Acunetix is submitting when testing for SQL injection, XSS and other vulnerabilities.
To answer your questions specifically:
1) No, this junk data is just an artifact of the scanner submitting forms. You might want to consider applying stricter validation on these forms though -- remember, if a scanner can input a bunch of bogus data, an automated script (or a real user for that matter) can also insert a bunch of bogus data.
Some ideas for better validation could include restricting the kind of input based on what data should be allowed in a particular field. For example, if a user is expected to input a telephone number, then there is no point allowing the user to enter alphabetic characters (numbers, spaces, dashes, parentheses and a plus sign should be enough for a phone number).
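For instance, a minimal server-side check for a phone-number field might look like the sketch below (the method name and the exact pattern are just illustrative, not a prescription):

using System.Text.RegularExpressions;

// Allow only digits, spaces, dashes, parentheses and an optional leading plus sign,
// and cap the length so nobody can submit kilobytes of "phone number".
private static readonly Regex PhonePattern = new Regex(@"^\+?[0-9()\- ]{6,20}$");

private bool IsValidPhoneNumber(string input)
{
    return !string.IsNullOrEmpty(input) && PhonePattern.IsMatch(input);
}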
Alternatively, you may also consider using a CAPTCHA for some forms. Too many CAPTCHAs may adversely affect the user experience, so be cautious where, when and how often you make use of them.
2) If you are talking about SQL injection, no, you shouldn't need to do anything else. Parameterized queries are the proper way to avoid SQLi. However, be careful of Cross-site Scripting (XSS). Filtering characters like <>'" is not the way to go when dealing with XSS.
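For reference, a parameterized stored-procedure call in ADO.NET looks roughly like the following; the procedure name, parameter name and column size are made up for the example:

using System.Data;
using System.Data.SqlClient;

// Minimal sketch: saving a user-supplied comment via a stored procedure.
void SaveComment(string connectionString, string userComment)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("usp_SaveComment", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        // The value travels as a typed parameter and is never spliced into the SQL text,
        // so quotes or SQL keywords inside userComment are harmless.
        command.Parameters.Add("@Comment", SqlDbType.NVarChar, 500).Value = userComment;
        connection.Open();
        command.ExecuteNonQuery();
    }
}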
In order to deal with XSS, the best approach (most of the time) is to exercise context-dependent outbound encoding, which basically boils down to this -- use the proper encoding based on which XSS context you're in, and encode when data is printed onto the page (i.e. do not encode when saving data to the database; encode when you are writing that data to the page). To read more about this, this is the easiest and most complete source I've come across -- http://excess-xss.com/#xss-prevention
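As a concrete illustration, here is what encode-on-output can look like in Web Forms when writing stored (possibly attacker-supplied) text into an HTML body context; the control name and data-access helper below are hypothetical, and other contexts (attributes, JavaScript, URLs) need their own encoders:

using System;
using System.Web;

// The comment text is stored exactly as the user typed it;
// it is HTML-encoded only at the moment it is written into the page.
protected void Page_Load(object sender, EventArgs e)
{
    string storedComment = LoadCommentFromDatabase();            // hypothetical data-access helper
    commentLabel.Text = HttpUtility.HtmlEncode(storedComment);   // commentLabel is a hypothetical Label control
}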
3) A login sequence is Acunetix's way of authenticating into your application. Without it, the scanner cannot scan the internals of your app. So unless you have publicly accessible forms (perhaps on the customer-facing portion of your site), the scanner is not going to be able to insert any data -- yes, this is generally a good thing :)

Related

HTML form authentication to prevent spam / POST attacks

I am wondering if there's a good solution for preventing automated form submissions / POST attacks.
Would it be possible to add a form field with a token generated server-side that would be unique for each form displayed on the site, but with a way to check whether this token was indeed generated by my app and not by the user, without having to save all tokens to a database?
IMO, if the app can tell that the data comes from a form it generated (using logic known only to the app), then it would go on to process the form.
Any suggestions for such an algorithm?
EDIT: In other words, can I generate a string that:
1) when I receive it back, I can recognize as a string that I generated,
2) I can know has been used only once,
3) would take the user too many attempts to forge on the client side,
4) does not have to be persisted by the application?
You'll probably be interested in Hashcash. I once tried to cheat on a blog by voting a comment up multiple times, but I gave up when I saw how complicated it is for a human to generate those values by hand.
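To give a rough idea of the server-side half of the scheme (the real Hashcash stamp format carries more fields than this), the cheap check is simply whether the hash of the submitted stamp starts with enough zero bits; producing such a stamp costs the client CPU time:

using System;
using System.Security.Cryptography;
using System.Text;

// Minimal proof-of-work check in the spirit of Hashcash (names are illustrative).
static bool HasLeadingZeroBits(string stamp, int requiredBits)
{
    if (requiredBits <= 0)
        return true;

    byte[] digest;
    using (var sha1 = SHA1.Create())
        digest = sha1.ComputeHash(Encoding.UTF8.GetBytes(stamp));

    int bitsChecked = 0;
    foreach (byte b in digest)
    {
        for (int bit = 7; bit >= 0; bit--)
        {
            if (((b >> bit) & 1) != 0)
                return false;            // hit a 1-bit before reaching the target
            if (++bitsChecked >= requiredBits)
                return true;             // enough leading zero bits
        }
    }
    return true;
}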

Documents/links on preventing HTML form fiddling?

I'm using ASP.Net but my question is a little more general than that. I'm interested in reading about strategies to prevent users from fooling with their HTML form values and links in an attempt to update records that don't belong to them.
For instance, if my application dealt with used cars and had links to add/remove inventory, which included as part of the URL the userid, what can I do to intercept attempts to munge the link and put someone else's ID in there? In this limited instance I can always run a check at the server to ensure that userid XYZ actually has rights to car ABC, but I was curious what other strategies are out there to keep the clever at bay. (Doing a checksum of the page, perhaps? Not sure.)
Thanks for your input.
What you are describing is a vulnerability called "Insecure Direct Object References", listed as A4 in the OWASP Top 10 for 2010.
"what can I do to intercept attempts to munge the link and put someone else's ID in there?"
There are a few ways that this vulnerability can be addressed. The first is to store the User's primary key in a session variable so you don't have to worry about it being manipulated by an attacker. For all future requests, especially ones that update user information like password, make sure to check this session variable.
Here is an example (in pseudocode) of the scheme I am describing; in real code the session value should still be passed as a query parameter rather than concatenated:
"update users set password='new_pass_hash' where user_id='"&Session("user_id")&"'";
Edit:
Another approach is a Hashed Message Authentication Code (HMAC). This approach is much less secure than using the session, as it introduces a new attack pattern (brute-forcing the secret) instead of avoiding the problem altogether. An HMAC allows you to see whether a message has been modified by someone who doesn't have the secret key. The HMAC value could be calculated as follows on the server side and then stored in a hidden field.
hmac_value = hash('secret' & user_name & user_id & todays_date)
The idea is that if the user tries to change his username or user id, then the hmac_value will no longer be valid unless the attacker can obtain the 'secret', which can be brute forced. Again, you should avoid this security system if at all possible, although sometimes you don't have a choice (you do have a choice in your example vulnerability).
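For illustration, the HMAC over those fields might be computed with a proper keyed hash like this (field handling and key storage are simplified for the sketch; the secret key must live only on the server):

using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: sign the values placed in the hidden field, and verify them on postback.
static string ComputeFieldHmac(byte[] secretKey, string userName, string userId, string todaysDate)
{
    string message = userName + "|" + userId + "|" + todaysDate;
    using (var hmac = new HMACSHA256(secretKey))
        return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(message)));
}

// On postback, recompute and compare; a mismatch means the hidden fields were tampered with.
static bool FieldsAreUntampered(byte[] secretKey, string userName, string userId,
                                string todaysDate, string submittedHmac)
{
    return ComputeFieldHmac(secretKey, userName, userId, todaysDate) == submittedHmac;
}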
You want to find out how to use a session.
Sessions on tiztag.
If you keep track of the user session you don't need to keep looking at the URL to find out who is making a request/post.

Saving private data

Can anybody detail some approaches to how private data is saved on social websites like Facebook? They can't save all the updates and friend lists in clear text format because of privacy issues. So how do they actually save it?
Encrypting all the data with a key derived from the user's password, so that only a valid session can view it, is one possibility. But I think there are some problems with this approach, and there must be a better solution.
They can and probably do save it in plain text - it goes into a database on a server somewhere. There aren't really privacy issues there... and even if there were, Facebook has publicly admitted they don't care about privacy.
Most applications do not encrypt data like this in the database. The password will usually be stored as a salted hash, and the application architecture is responsible for limiting visibility based on appropriate rights/roles.
Most websites do in fact save updates and friend lists in clear text format---that is, they save them in an SQL database. If you are a Facebook developer you can access the database using FQL, the Facebook Query Language. Queries are restricted so that you can only look at the data of "friends" or of people running your application, or their friends, or what have you. (The key difference between SQL and FQL is that you must always include a WHERE X=id clause, where X is a keyed column.)
There are other approaches, however. You can store information in a Bloom filter or in some kind of hash. You might want to read Peter Wayner's book Translucent Databases---he goes into clever approaches for storing data so that you can detect if it is present or missing, but you can't do brute force searches.
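One of the simplest "translucent" tricks is to store only a salted one-way hash of a sensitive value: the application can still check whether a submitted value is present, but the stored column cannot be read back directly. A rough sketch, with made-up names and a server-side secret used as the salt:

using System;
using System.Security.Cryptography;
using System.Text;

// Store TranslucentToken(value) in the database instead of the value itself.
// Presence can be tested by recomputing the token for a candidate value;
// without the server secret, brute-forcing the column is much harder.
static string TranslucentToken(string serverSecret, string sensitiveValue)
{
    using (var sha256 = SHA256.Create())
    {
        byte[] digest = sha256.ComputeHash(Encoding.UTF8.GetBytes(serverSecret + ":" + sensitiveValue));
        return Convert.ToBase64String(digest);
    }
}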

Do query string parameters put my app at risk?

I'm writing an ASP.NET WebForms app where I am calling an edit page and passing in the data about the record to be edited using query string parameters in the URL.
Like:
http://myapp.path/QuoteItemEdit.aspx?PK=1234&DeviceType=12&Mode=Edit
On a previous page in the app, I present the user with a GridView of screened items he can edit based on his account privileges, and I call the edit page with the above parameter list; the page then knows what to do. I do NOT do any additional checking on the target page to validate whether the user has access to the passed-in PK record value, as I planned to rely on the previous page to filter the list down and assumed I would be fine.
However, it is clear the user can now type in a URL with a different PK and get access to edit that record. (Or he may have access to Mode=View, but not Mode=Edit or Mode=Delete.) Basically, I was hoping to avoid validating the record and access rights on the target page.
I have also tested the same workflow using Session variables to store PK, DeviceType, and Mode before calling the target page, and then reading them from Session in the target page, so there are no query string parameters involved. This would take control away from the user.
So, I'm looking for feedback on these two approaches so that I choose an accepted/standard way of dealing with this, as it seems like a very common app design pattern for CRUD apps.
Agreed, you'll want to validate permissions on the target page, it's the only way to be absolutely sure. When it comes to security, redundancy isn't a bad thing. Secure your database as if you don't trust the business layer, secure your business layer as if you don't trust the UI, and secure the UI as well.
You should always validate before the real execution of the action, especially if you are passing the parameters by query string. On the second page, the one that does the execution, you might not need as much feedback for the user, since you do not have to be nice to someone trying to circumvent your security, so error handling should be a lot easier.
Passing the variables via session is acceptable, but IMHO you should still validate the values.
We always use query strings so records can be bookmarked easily; however, always validate in both places. If you write your access-control code nicely, it should just be a case of re-using the existing code...
I believe the common practice is to do what you're avoiding: On the original page, you need to check to see what the user should have capabilities to do, and display their options appropriately. Then on the actual work page, you need to check the user again to verify they are allowed to be there, with access to that specific task.
From a usability standpoint, this is what the user would want (keeps it simple, allows them to bookmark certain pages, etc), and security on both pages is the only way to do this.
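In Web Forms terms, the target page's check can be as small as confirming that the requested record actually belongs to the authenticated user before rendering anything. A sketch with made-up table, column, session-key and configuration names:

using System;
using System.Configuration;
using System.Data.SqlClient;

// On the edit page: refuse to continue unless the requested quote item
// belongs to the logged-in user (whose id was stored in session at login).
protected void Page_Load(object sender, EventArgs e)
{
    int requestedPk;
    if (!int.TryParse(Request.QueryString["PK"], out requestedPk))
    {
        Response.Redirect("~/AccessDenied.aspx");            // hypothetical error page
        return;
    }
    int currentUserId = (int)Session["user_id"];             // hypothetical session key

    string connectionString =
        ConfigurationManager.ConnectionStrings["Default"].ConnectionString;   // assumed config entry

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT COUNT(*) FROM QuoteItems WHERE PK = @pk AND OwnerUserId = @userId", connection))
    {
        command.Parameters.AddWithValue("@pk", requestedPk);
        command.Parameters.AddWithValue("@userId", currentUserId);
        connection.Open();

        if ((int)command.ExecuteScalar() == 0)
        {
            Response.Redirect("~/AccessDenied.aspx");
            return;
        }
    }
    // ...load and bind the record as usual...
}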
If you really don't want to check access rights on the target page:
You could hash the PK with the UserID and then add the hash value to the query string.
string hash = hashFunction(PK.ToString() + UserID.ToString());
Then you have to make sure the hash in the query string equals the hash value calculated before loading the page.
Assuming this is an internal organization Web application.
Session values are harder to tamper with than query string parameters, but sessions can still be hijacked. Whatever authentication you're using throughout your site, you should definitely use on your target page as well. Otherwise, you'll be open to exposing data you may not want, as you have found out.
You could do the following to make your URLs a bit more secure:
- Use GUIDs for primary keys so users can't guess other record IDs.
- The Mode could be implicit: GUID = Edit, no GUID = New.
And...
- Server-side validation is the only way to go.

Restricting user password character set

Working on a login system - the point where customer chooses their password for site access.
Beyond using RegEx to ensure that the password is strong enough, normally on our system all data that will wind up in the database is checked against injection etc and a reasonably restricted character set is enforced on all fields. I don't really want a particularly restrictive character set for the password, as I think it is a bit of an anti-pattern on security to control it too much.
However in the case of the password, I'll be hashing it with a salted SHA-512 for insert anyway which raises a handful of questions:
Is there any point whatsoever in restricting the character set that the customer can use in the password - ie am I exposed to any vulnerabilities outside of injection which I am assuming would be circumvented completely by the hashing?
There must be negatives to an allow-all approach - I can think of the fact that in the future what is an innocent combination now could become a dangerous one - is that a real concern, and are there others I may have missed?
Are there any characters/strings that must be rejected - would they get through the native ASP.NET protection anyway?
A bit more subjective maybe, but given it is a SHA-512 hash, is there any point in restricting the maximum length of the password the user can choose (within reasonable parameters)? A password of significant size/complexity could simply raise a warning to confirm that they do want to set it.
Thanks for your help.
EDIT: This is an ASP.NET web application accessing a MSSQL2008 database using ADO.NET (not LINQ/EF).
From a non-English perspective - there should be no restrictions on the password.
For example, why restrict a Japanese-language speaker to using the US-ASCII character set? And why should a French speaker not use accented characters?
Given that your hash is persisted correctly there's no technical reason to restrict it.
There is little reason to worry about SQL injection attacks unless you're actually inserting the password into the database in plain text (Danger, Will Robinson, Danger!), and even then, if you parameterize the query it won't be an issue. You should allow [a-zA-Z0-9] plus some set of special characters. Probably the only character to restrict is '<', which will trigger the ASP.NET request validation warning. There are a number of fun tools out there to do password complexity checking on the client side. I like this one. It provides some instant feedback to the user as they are typing.
Since the password is hashed, it will be stored in the database in hexadecimal format. Therefore, I see no point in restricting the type of allowed characters. If I wanted to use a Chinese character in my password, I should be able to do so. If I have installed an extension for Firefox that generates random bytes and uses those for my passwords, I should be able to do so. The lesson here is to not limit your users' passwords.
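For instance, the salted SHA-512 scheme the question describes might look like the sketch below; the salt size and hex encoding are illustrative, and a purpose-built password-hashing scheme (e.g. PBKDF2 via Rfc2898DeriveBytes) is generally a better choice than a single SHA-512 pass:

using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: a random per-user salt is stored alongside the hex-encoded SHA-512 hash.
static void HashPassword(string password, out string saltHex, out string hashHex)
{
    byte[] salt = new byte[16];
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(salt);

    byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
    byte[] input = new byte[salt.Length + passwordBytes.Length];
    Buffer.BlockCopy(salt, 0, input, 0, salt.Length);
    Buffer.BlockCopy(passwordBytes, 0, input, salt.Length, passwordBytes.Length);

    using (var sha512 = SHA512.Create())
    {
        saltHex = BitConverter.ToString(salt).Replace("-", "");
        hashHex = BitConverter.ToString(sha512.ComputeHash(input)).Replace("-", "");
    }
}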
Also note that regular expressions have Unicode support capable of detecting whether the user has used a letter from any language. That might come in handy when you are validating the strength of the password.
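A rough sketch of such a Unicode-aware strength check (the length threshold and the rules are arbitrary examples, not a recommendation):

using System.Text.RegularExpressions;

// \p{L} matches a letter in any script (Latin, Cyrillic, CJK, ...).
// "Strong enough" here just means: at least 10 characters, with at least
// one letter and at least one character that is not a letter.
static bool LooksStrongEnough(string password)
{
    return password != null
        && password.Length >= 10
        && Regex.IsMatch(password, @"\p{L}")
        && Regex.IsMatch(password, @"[^\p{L}]");
}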

Resources