Error handling for query string parameters -1%27, getting bombarded - asp.net

I am not an expert, but not a rookie either. I'm using ASP.NET Web Forms. I set up an error-catching routine in global.asax to log the error info to a SQL table and redirect to a friendly error page. I began finding hundreds of exceptions per day with this in the query string: "?id=-1%27". I use query strings for items and categories, but only allow integers of 3 digits or less. Then I started geo-locating the IPs; the vast majority are from Russia and surrounding countries, so I started storing all of the IPs in a table. Is anyone else experiencing this, and how is it best handled? I want to catch legitimate errors, but this is a major annoyance. Any input would be appreciated. I have Googled for 2 days and can't find anything related specifically to this issue.

The %27 is the URL-encoded (percent-encoded) form of the single quote ('), and that is a red flag: someone is trying to perform SQL injection via the query string against your application's data access layer logic.
I would be less concerned about where the attacks are coming from and more focused on techniques for protecting/processing your data before it ever reaches your data access layer and data storage (read: database).
Using parameterized SQL and input sanitization (read: white-listing allowable text for strings) is a great first step in combating these attacks.
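For example, a minimal sketch of both steps in a Web Forms code-behind (System.Data and System.Data.SqlClient assumed; the Items table and page names are invented for illustration):

    protected void Page_Load(object sender, EventArgs e)
    {
        int id;
        string raw = Request.QueryString["id"];

        // White-list: only positive integers of 3 digits or less get through.
        if (!int.TryParse(raw, out id) || id < 0 || id > 999)
        {
            Response.Redirect("~/FriendlyError.aspx");
            return;
        }

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Name FROM Items WHERE Id = @Id", conn))
        {
            // The value travels as a typed parameter, never concatenated into
            // the SQL text, so a stray %27 (') cannot alter the statement.
            cmd.Parameters.Add("@Id", SqlDbType.Int).Value = id;
            conn.Open();
            object name = cmd.ExecuteScalar();
        }
    }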
UPDATE
It might be worth considering creating a custom exception for the invalid ID value being passed in as part of the query string. You can then test for the length being greater than three and throw that custom exception. Elsewhere you can catch/trap that custom exception type and do whatever you wish with it (read: potentially ignore that exception if it is becoming too large of an annoyance). Please understand that I would never advocate ignoring exceptions (empty catch block), but am merely stating that it is possible to do such a "bad" thing.
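A rough sketch of that idea (the exception type and page names are made up, and System.Linq is assumed for the digit check):

    // Custom exception for obviously bogus id values.
    public class InvalidQueryStringIdException : Exception
    {
        public InvalidQueryStringIdException(string rawValue)
            : base("Invalid id query string value: " + rawValue) { }
    }

    // Wherever the query string is read:
    string raw = Request.QueryString["id"] ?? "";
    if (raw.Length == 0 || raw.Length > 3 || !raw.All(char.IsDigit))
        throw new InvalidQueryStringIdException(raw);

    // Global.asax -- trap the custom type and treat it as probe noise:
    void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError().GetBaseException();
        if (ex is InvalidQueryStringIdException)
        {
            Server.ClearError();                  // skip the SQL error log
            Response.Redirect("~/FriendlyError.aspx");
        }
    }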

Related

Check if record exists - performance

I have a table that will contain large amounts of data. The purpose of this table is to store user transactions.
I will be inserting into this table from a web service, which a third party will be calling frequently.
The third party will be supplying a reference code (most probably a string).
The requirement here is that I will need to check whether this reference code has already been inserted. If it exists, just return the details and do nothing else. If it doesn't, create the transaction as expected. The reasoning behind this is the possibility of loss of communication with the service after the request is received.
I have some performance concerns with this, as the search will be done on a string value, and also on a large table. Most of the time the transaction will not exist in the database, as this is just a precaution.
I am not asking for code here, but for the best approach for performance.
As your subject indicates, if you are evaluating EXISTS (SELECT 1 FROM SomeTable ...), there will not be much of a performance penalty. The inner query never materializes rows; the engine can stop as soon as it finds a single match, so evaluating the result to a boolean is cheap.
The other aspect is the non-clustered index on the reference code field. If the reference code is a fixed-length string, say CHAR(50), the B-tree will also be close to optimal.
I am not sure about your data consistency requirements, but I expect the default READ COMMITTED isolation level will do no harm unless you have highly transactional reads and writes.
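A sketch of the idempotent check-then-insert in one round trip (the Transactions table, its columns, and a unique index on ReferenceCode are all assumed for the example):

    // If the reference code is already there, skip the insert; either way,
    // return the stored row so the caller can hand the details back.
    const string sql = @"
        IF NOT EXISTS (SELECT 1 FROM Transactions WHERE ReferenceCode = @Ref)
            INSERT INTO Transactions (ReferenceCode, Amount) VALUES (@Ref, @Amount);
        SELECT ReferenceCode, Amount FROM Transactions WHERE ReferenceCode = @Ref;";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        // A CHAR(50)-typed parameter keeps the index seek sargable.
        cmd.Parameters.Add("@Ref", SqlDbType.Char, 50).Value = referenceCode;
        cmd.Parameters.Add("@Amount", SqlDbType.Money).Value = amount;
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            if (reader.Read())
            {
                // Existing or newly created transaction details.
            }
        }
    }

Note that two concurrent calls can still race between the EXISTS check and the INSERT, so the unique index remains the real guarantee.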

Acunetix Webscan

I am scanning my web application, which I have built in ASP.NET. The scanner injects junk data into the system, attempting blind SQL injection. I am using SQL stored procedures with parameterized queries, which defeats the blind SQL injection, but the junk entries are stored in the system as normal text. I am also sanitizing the inputs to reject ' and other SQL-related characters. Now my questions are:
1) Are these junk entries any threat to the system?
2) Do I really need to sanitize the input if I am already using parameterized queries with stored procedures?
3) The scanner is not able to enter information into the system if you don't create a login sequence. Is that a good thing?
If there are any other precautions I should take, please let me know.
Thanks
As you correctly mentioned, the 'junk' entries in your database are form submissions that Acunetix is submitting when testing for SQL injection, XSS and other vulnerabilities.
To answer your questions specifically:
1) No, this junk data is just an artifact of the scanner submitting forms. You might want to consider applying stricter validation on these forms though -- remember, if a scanner can input a bunch of bogus data, an automated script (or a real user for that matter) can also insert a bunch of bogus data.
Some ideas for better validation could include restricting the kind of input based on what data should be allowed in a particular field. For example, if a user is expected to input a telephone number, then there is no point allowing alphabetic characters (digits, spaces, dashes, parentheses and a plus sign should be enough for a phone number).
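A small server-side sketch of that phone-number restriction (the pattern is only an example, not a full international-number validator):

    using System.Text.RegularExpressions;

    // Digits, spaces, dashes, parentheses and an optional leading plus only.
    static readonly Regex PhonePattern = new Regex(@"^\+?[0-9\s\-()]{3,20}$");

    static bool IsValidPhone(string input)
    {
        return !string.IsNullOrEmpty(input) && PhonePattern.IsMatch(input);
    }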
Alternatively, you may also consider using a CAPTCHA for some forms. Too many CAPTCHAs may adversely affect the user experience, so be cautious where, when and how often you make use of them.
2) If you are talking about SQL injection, no, you shouldn't need to do anything else. Parameterized queries are the proper way to avoid SQLi. However, be careful of Cross-site Scripting (XSS). Filtering characters like <>'" is not the way to go when dealing with XSS.
In order to deal with XSS, the best approach (most of the time) is to exercise context-dependent outbound encoding, which basically boils down to: use the proper encoding based on which XSS context you're in, and encode when data is printed onto the page (i.e. do not encode when saving data to the database; encode when you are writing that data to the page). To read more about this, this is the easiest and most complete source I've come across -- http://excess-xss.com/#xss-prevention
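As a small illustration of encoding on output rather than on input (HttpUtility lives in System.Web; the control and column names here are invented):

    // The database keeps the raw text; pick the encoder for the context the
    // value is emitted into.
    string comment = row["Comment"].ToString();

    litComment.Text = HttpUtility.HtmlEncode(comment);         // HTML body
    string attr = HttpUtility.HtmlAttributeEncode(comment);    // HTML attribute
    string js = "var c = '" + HttpUtility.JavaScriptStringEncode(comment) + "';"; // JS string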
3) A login sequence is Acunetix's way of authenticating into your application. Without it, the scanner cannot scan the internals of your app. So unless you have forms (perhaps on the customer-facing portion of your site), the scanner is not going to be able to insert any data -- yes, this is generally a good thing :)

How to catch multiple SQL exceptions?

There are two unique fields in my database and it is possible the user will try to add a value which already exists in one (or both) of these columns via my webpage. Quite literally, there are 2 textboxes and these will refer to the 2 columns in SQL.
I would like the error caught, and to show the user which word (or words) was rejected because a duplicate would be created.
SQL prevents me from entering the duplicate, but it errors on the first violation only. Meaning, if the user tried to enter 2 words and both were duplicates, SQL would error on the first textbox; the user could fix textBox1's value, try again, and only then be told off about the second textbox. This is not good for the user as it's slow, but I don't know what the best approach is.
In my opinion your last defence is the database! You should be preventing the query from getting there!
For me, I'd query the database and retrieve the values of just these 2 columns for all rows.
Then, when a user tries to submit, I'd check the list to see if it exists. This means no exception is thrown. I can then decide how to display this information to the screen.
Of course, how great this is depends on how many rows you have; if you had millions, maybe this isn't so good, and you may want to ping the database first with a scalar query to see if the value already exists!
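One way to do that pre-check in a single round trip, so both duplicates can be reported at once (table and column names are invented for the example):

    // Ask the database about both values together instead of letting the
    // first INSERT failure hide the second problem.
    const string sql = @"
        SELECT (SELECT COUNT(*) FROM Words WHERE Word1 = @W1) AS W1Taken,
               (SELECT COUNT(*) FROM Words WHERE Word2 = @W2) AS W2Taken;";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@W1", textBox1.Text.Trim());
        cmd.Parameters.AddWithValue("@W2", textBox2.Text.Trim());
        conn.Open();
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            reader.Read();
            bool dup1 = reader.GetInt32(0) > 0;
            bool dup2 = reader.GetInt32(1) > 0;
            // Surface both dup1 and dup2 to the user in one message.
        }
    }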
SqlExceptions are quite verbose, and you will get a full stack trace rather than the particular fields that caused the exception.
Also, sometimes a SqlException isn't specific enough, e.g. when a stored procedure fails internally.
The best idea would be to do all of the validations (field values, FK constraints, not-null checks, etc.) before hitting the DB.
Hitting the DB and then letting the user know via an exception is not advisable, as it is a performance bottleneck.
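That said, if you do let the INSERT run and want to interpret the failure, a hedged sketch (2601/2627 are SQL Server's unique-violation error numbers; the constraint names here are invented):

    try
    {
        cmd.ExecuteNonQuery();
    }
    catch (SqlException ex)
    {
        if (ex.Number != 2601 && ex.Number != 2627)
            throw;  // not a unique violation -- let it bubble up

        // The message names the violated index, e.g. "UQ_Words_Word1",
        // which maps back to the offending textbox.
        if (ex.Message.Contains("UQ_Words_Word1"))
            lblError.Text = "The first word already exists.";
        else if (ex.Message.Contains("UQ_Words_Word2"))
            lblError.Text = "The second word already exists.";
    }

Note this still reports only the first constraint SQL Server trips over, which is exactly the one-at-a-time behaviour the question complains about; the pre-check above avoids that.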

Is the ASP.NET cryptographic vulnerability workaround a BIG LIE?

This question is somewhat of a follow-up to How serious is this new ASP.NET security vulnerability and how can I workaround it? So if my question seems broken, read over that question and its accepted solution first, and then take that into the context of my question.
Can someone explain why returning the same error page and the same status code for custom errors matters? I find this to be immaterial, especially as it is advocated as part of the workaround.
Isn't it just as easy for the script/application to execute this attack without specifically caring whether it gets an HTTP status code, and focus on the outcome instead? I.e., after doing this 4000 times you get redirected to an error page, whereas on attempt 4001 you stay on the same page because the padding wasn't invalidated?
I see why adding the delay to the error page is somewhat relevant, but doesn't this also just add another layer to fool the script into thinking the site is an invalid target?
What could be done to prevent this if the script takes into account that, since the site is ASP.NET, it's running AES encryption, and so ignores the timing of error pages and watches the redirection (or lack of redirection) as the response vector? If a script does this, does that mean there's NO WAY to stop it?
Edit: I accept the timing-attack reduction, but the error page part is what really seems bogus. This attack vector puts the attacker's data into the ViewState. There are only 2 cases: pass or fail.
Either fail: they're on a page and the ViewState does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will simply never contain their inserted data unless they successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or pass: they're on a page and the ViewState contains their inserted data.
Summary of this vulnerability
The ciphertext from WebResource.axd / ScriptResource.axd is taken, and a first guess at the validation key is used to generate a potential key value from the ciphered text.
This value is passed back to WebResource.axd / ScriptResource.axd; if the decryption key was guessed correctly, the request will be accepted, but since the decrypted data is garbage, the handler will return a 404 error.
If the decryption key was not guessed successfully, it will get a 500 error for the invalid-padding exception. At this point the attack application knows to increment the candidate decryption key value and try again, repeating until it finds the first successful 404 from WebResource.axd / ScriptResource.axd.
After successfully deducing the decryption key, this can be used to exploit the site and find the actual machine key.
re:
How does this have relevance to whether they're redirected to a 200, 404 or 500? No one can answer this; this is the fundamental question, which is why I call shenanigans on needing to do this tomfoolery with the custom errors returning a 200. It just needs to return the same 500 page for both errors.
I don't think that was clear from the original question, I'll address it:
Who said the errors need to return 200? That's wrong; you just need all the errors to return the same code. Making all errors return 500 would work as well. The config proposed as a workaround just happened to use 200.
If you don't do the workaround (even if it's your own version that always returns 500), you will see 404 vs. 500 differences. That is particularly true of WebResource.axd and ScriptResource.axd, since invalid data that decrypts is treated as a missing resource / 404.
Just because you don't know which feature had the issue doesn't mean there aren't features in ASP.NET that give different response codes in different scenarios relating to padding vs. invalid data. Personally, I can't be sure whether any other feature gives different response codes as well; I can just tell you those 2 do.
Can someone explain why returning the same error page and the same status code for custom errors matters? I find this to be immaterial, especially as it is advocated as part of the workaround.
Sri already answered that very clearly in the question you linked to.
It's not about hiding that an error occurred; it's about making sure the attacker can't tell the difference between errors. Specifically, it's about making sure the attacker can't determine whether the request failed because it couldn't decrypt / the padding was invalid, vs. because the decrypted data was garbage.
You could argue: well, but I can make sure it isn't garbage to the app. Sure, but you'd need to find a mechanism in the app that allows you to do that, and the way the attack works, you always need at least a tiny bit of garbage in the message. Consider these:
ScriptResource and WebResource both throw, so the custom error hides it.
ViewState is by default not encrypted, so by default it's not involved in the attack vector. If you go through the trouble of turning encryption on, then you very likely also set it to sign/validate. When that's the case, a failure to decrypt vs. a failure to validate looks the same, so the attacker again can't know.
The auth ticket is also signed, so it's like the ViewState scenario.
Session cookies aren't encrypted, so they're irrelevant.
I posted on my blog about how far the attack can get, e.g. being able to forge authentication cookies.
Isn't it just as easy for the script/application to execute this attack without specifically caring whether it gets an HTTP status code, and focus on the outcome instead? I.e., after doing this 4000 times you get redirected to an error page, whereas on attempt 4001 you stay on the same page because the padding wasn't invalidated?
As mentioned above, you'd need to find a mechanism that behaves that way, i.e. decrypted garbage keeps you on the same page instead of throwing an exception and thus landing you on the same error page.
Either fail: they're on a page and the ViewState does not contain their data. No matter what you do here, there is no way to remove the fail case, because the page will simply never contain their inserted data unless they successfully cracked the key. This is why I can't justify the custom errors usage having ANY EFFECT AT ALL.
Or pass: they're on a page and the ViewState contains their inserted data.
Read what I mentioned about the ViewState above. Also note that the ability to re-encrypt more accurately is gained after they gain the ability to decrypt. That said, as mentioned above, by default ViewState is not encrypted, and when encryption is on, it is usually accompanied by signature/validation.
I am going to elaborate on my answer in the thread you referenced.
To pull off the attack, the application must respond in three distinct ways. Those three distinct ways can be anything - status codes, different html content, different response times, redirects, or whatever creative way you can think of.
I'll repeat again - the attacker should be able to identify three distinct responses without making any mistake, otherwise the attack won't work.
Now coming to the proposed solution: it works because it reduces the three outcomes to just two. How does it do that? The catch-all error page makes the status code/HTML/redirect all look identical, and the random delay makes it impossible to distinguish one from the other solely on the basis of time.
So, it's not a lie; it does work as advertised.
EDIT: You are mixing things up with a brute-force attack. There is always going to be a pass/fail response from the server, and you are right that it can't be prevented. But for an attacker to use that information to his advantage would take decades and billions of requests to your server.
The attack being discussed allows the attacker to reduce those billions of requests to a few thousand. This is possible because of the 3 distinct response states. The workaround being proposed reduces this back to a brute-force attack, which is unlikely to succeed.
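For reference, the published workaround boils down to a single catch-all error page plus a random delay: web.config routes every error to one page via customErrors (mode="On", redirectMode="ResponseRewrite", a single defaultRedirect), and that page sleeps for a random interval. A sketch of the page side (page name arbitrary; System.Security.Cryptography and System.Threading assumed):

    // Error.aspx code-behind: a cryptographically random sleep, so response
    // time reveals nothing about which error path produced the page.
    protected void Page_Load(object sender, EventArgs e)
    {
        byte[] delay = new byte[1];
        RandomNumberGenerator prng = new RNGCryptoServiceProvider();
        prng.GetBytes(delay);
        Thread.Sleep((int)delay[0]);   // 0-255 ms
    }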
The workaround works because:
You do not give any indication of "how far" the slightly adjusted request took you. If you get a different error message, that is information you can learn from.
With the delay, you hide how long the actual computation took, so you get no timing information that shows whether you got deeper into the system.
No, it isn't a big lie. See this answer in the question you referenced for a good explanation.

Auto-generated Form Value

Looking for guidance on how to achieve something in ASP.NET Web Forms - the behaviour is a bit like that seen in the ASP.NET AutocompleteExtender, but I can't find anything that gives the flexibility I need. Here is what I am trying to do:
- 2 TextBox fields on the form: CompanyName and CompanyRef (CompanyRef being an abbreviated unique company identifier)
- User types in the CompanyName
- As soon as there are 3 characters in the CompanyName, an internal web service is called (AJAX?)
- The web service checks what has been entered so far and evaluates a 3-character representation of it; for instance, "Stack" would be returned as STA0001. If there is already an STA0001 in the db, it would return STA0002 and so on
- The value returned would be targeted at the CompanyRef TextBox
- The user needs to be able to edit the CompanyRef if they so wish
I'm not looking for code per se, more high level guidance on how this can be done, or if there are any components available that I am missing that you may be able to point me in the direction of. Googling and searching on SO has returned nothing - not sure if I'm looking for the right thing though.
Generating the CompanyRef is easy enough. There are lots of articles etc. which cover combining, say, an autonumber or counter with a string. The difficulty I have with your approach is that you intend to let users fiddle with the ref and make their own up. What for?
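A back-of-the-envelope sketch of the generation step (table/column names are placeholders, and System.Linq is assumed):

    // Prefix: first three letters of the name, upper-cased (e.g. "Stack" -> "STA").
    string prefix = new string(companyName.Where(char.IsLetter).Take(3).ToArray())
                        .ToUpper();

    // Next free 4-digit counter for that prefix.
    const string sql = @"
        SELECT ISNULL(MAX(CAST(RIGHT(CompanyRef, 4) AS int)), 0) + 1
        FROM Companies WHERE CompanyRef LIKE @Prefix + '%';";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@Prefix", prefix);
        conn.Open();
        int next = (int)cmd.ExecuteScalar();
        string companyRef = prefix + next.ToString("0000");   // e.g. STA0002
    }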
[EDIT - Follow up to comment]
The comment box didn't allow for enough characters to answer your comment fully (and I'm still getting used to the conventions in place here....)
You could use AJAX to call the web service and return currently available values, and then use javascript to update the field. The problem with this is that once a user has decided he or she likes one, it may no longer be available when it is passed back to the database. That means you will have to do one final check, which may result in a message to the user that they can't now have the value they were told was available when they started the process. Only you know the likelihood of this happening. It will depend on the number of concurrent users you have.
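That final check can be a quick scalar query at submit time (again, invented names; in real code you would wrap the check and the INSERT in a single transaction):

    // Re-validate at save time: the ref the user was shown as free may have
    // been taken by someone else in the meantime.
    const string sql = "SELECT COUNT(*) FROM Companies WHERE CompanyRef = @Ref;";
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@Ref", txtCompanyRef.Text.Trim());
        conn.Open();
        if ((int)cmd.ExecuteScalar() > 0)
        {
            lblMessage.Text = "That reference has just been taken; please choose another.";
            return;
        }
        // Safe (enough) to proceed with the INSERT.
    }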
I've done an article on calling web services etc using jQuery which should give you a starting point for the AJAX part: http://www.mikesdotnetting.com/Article/104/Many-ways-to-communicate-with-your-database-using-jQuery-AJAX-and-ASP.NET
