Security Audit Issue [For ASP.NET WebForms]: Source code disclosed

After a security audit of our ASP.NET application, I received an error report, and one of the findings is "Source Code Disclosed".
How should I resolve this issue and prevent anyone from viewing the code?

This is JavaScript code, which is very commonly exposed/disclosed (*) simply because it is intended to be downloaded to the browser, where it then runs. Labeling this a risk might seem excessive, although there could be some risk depending on what you put in it.
The question is mainly: could this code be exploited, or could it be altered into something that is dangerous?
The answer is to not put secrets in it, and to never rely on client-side-only logic and validation. Always have a server-side equivalent that enforces whatever rules need to be enforced, use SSL/HTTPS so the connection is secure, and then you should be good.
(*) Just hit F12, go to the Sources or Debugger tab, and you'll see it there as well.
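For illustration, here is a minimal sketch of what that server-side equivalent might look like in a WebForms code-behind. The control and helper names (CommentTextBox, ErrorLabel, SaveComment) are invented for the example:

```csharp
// Minimal sketch, assuming a WebForms page with a 500-character comment box.
protected void SubmitButton_Click(object sender, EventArgs e)
{
    string comment = CommentTextBox.Text ?? string.Empty;

    // Re-enforce on the server the same rule the client-side script checks;
    // never trust that the browser actually ran the JavaScript validation.
    if (comment.Length == 0 || comment.Length > 500)
    {
        ErrorLabel.Text = "Comment must be between 1 and 500 characters.";
        return;
    }

    SaveComment(comment); // parameterized SQL inside, never string concatenation
}
```

The point is not this particular rule but the pattern: every rule the browser enforces gets enforced again on the server.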

Related

SQL Injection Detected By Firewall For SQL Code Submitted As Text For Storage

I have a head-scratcher here. Over a year ago, I wrote a website feature/form where I could submit SQL Code that is not executed but stored in a table. This feature worked when I created it, as I was able to upload several scripts into the database. I have not needed to use this feature for several months, and recent upgrades to my website had me re-checking features. The feature stopped working ... and after some research, it was determined that our company firewall was now blocking the form from submitting due to a detection of "SQL Injection".
They swear that no changes were made to the firewall; however, this seems unlikely, since the feature previously functioned. Regardless, my confusion is that I know many websites, like this one, that allow people to post "code" through a web form interface without it being flagged as SQL injection. I am sure websites (like this one) have firewalls protecting them as well.
Is there something that needs to be done when transmitting code on a page submit/postback to clear a firewall's SQL Injection checks?
For clarification ...
There is a form, with a LargeTextArea control, where a SQL script is entered. This SQL code is transmitted via postback to the server, and server-side code handles saving the script into a table. Very similar, I would assume, to what this website (Stack Overflow) does. We can post code here without it being intercepted and blocked by a firewall, and the code we post in our messages is eventually stored in a database on the server. That is the same behavior that I am performing.
Because of the firewall intervening between the client browser and the web server, the postback is never completed. Therefore, the server never receives the postback data to perform any processing. The client browser simply receives a "connection-reset" error.
I always thought of SQL Injection as something that should be handled server-side ... the responsibility of the programmer to ensure it is not abused. Having a firewall interfere prior to arriving at the server and having code execute to even check for SQL Injections ... feels wrong to me. Even if you have code that prevents SQL Injection, it would not matter if the firewall is intercepting and intervening prior to any server-side logic. Am I wrong?
Your firewall rules for SQL injection are blocking parameters that "look like" SQL injection - and that can lead to false positives for code that is not executed.
The correct way to get around this is to modify the firewall rule. See this answer for a way to do that in ModSecurity.
Since this doesn't seem to be an option for you, you might consider bypassing the rule with obfuscation. For example, encrypting with a simple fixed key before putting the script in the database (and decrypting on display) would hide this code from the firewall, and would also provide some guardrails against it being executed in the future.
By bypassing security checks you are taking on a great responsibility. You should be very careful to ensure (and warn in comments) that this code is never executed. This includes executing it just to check that it is valid SQL, which could also be abused for SQL injection.
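A minimal sketch of the reversible-encoding idea, assuming the encoding step runs before the request crosses the firewall (i.e., in the browser) and the server decodes on receipt. It is shown in C# purely to illustrate the transformation; this is obfuscation to dodge a false positive, not encryption:

```csharp
using System;
using System.Text;

// Trivially reversible encoding so the submitted SQL text never looks like
// SQL to a pattern-matching firewall. Offers no confidentiality.
public static class SqlTextCodec
{
    // Would need to run client-side, before the firewall sees the payload.
    public static string Encode(string sqlText) =>
        Convert.ToBase64String(Encoding.UTF8.GetBytes(sqlText));

    // Runs server-side, after the firewall has passed the opaque payload.
    public static string Decode(string encoded) =>
        Encoding.UTF8.GetString(Convert.FromBase64String(encoded));
}
```

Keeping the stored value encoded until display also gives you the "never executed by accident" guardrail mentioned above.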

Azure Front Door WAF is blocking .AspNet.ApplicationCookie

I'm wondering if anyone else has had this issue with Azure Front Door and the Azure Web Application Firewall and has a solution.
The WAF is blocking simple GET requests to our ASP.NET web application. The rule that is being triggered is DefaultRuleSet-1.0-SQLI-942440 SQL Comment Sequence Detected.
The only place that I can find an sql comment sequence is in the .AspNet.ApplicationCookie as per this truncated example: RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH. If I remove the 2 dashes '--' in the cookie value, the request successfully gets through the firewall. As soon as I add them back the request gets blocked by the same firewall rule.
It seems that I have 2 options. Disable the rule (or change it from Block to Log) which I don't want to do, or change the .AspNet.ApplicationCookie value to ensure that it does not contain any text that would trigger a firewall rule. The cookie is generated by the Microsoft.Owin.Security.Cookies library and I'm not sure if I can change how it is generated.
I ran into the same problem as well.
If you have a look at the cookie value RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH, it contains two occurrences of --, the potentially dangerous SQL sequence that comments out the rest of a statement. After commenting out your query, an attacker may run their own command instead of yours.
But obviously this cookie value will never be run as a query on the SQL side, and we are sure of that. So we can create rule exclusions so the rule is not applied under specific conditions.
Go to your WAF > click Managed Rules on the left blade > click Manage exclusions at the top > then click Add.
In your case, adding this rule would be fine:
Match variable: Request cookie name
Operator: Starts With
Selector: .AspNet.ApplicationCookie
However, I use ASP.NET Core 3.1 with ASP.NET Core Identity, and I encountered other flagged fields as well, such as __RequestVerificationToken.
Here is my full list of exclusions. I hope it helps.
PS: I think there is a glitch at the moment. If you have an IP restriction on an environment such as UAT, these exclusions cause the Web Application Firewall to bypass the IP restriction, and your UAT site becomes open to the public even if you still have a custom IP restriction rule on your WAF.
I ran into something similar and blogged about it here: Front Door incomplete first request.
To test this I created a web application and put it behind the Front Door service. In that test application I iterate over all the properties of the HttpContext.HttpRequest and print them out. As far as I can see right now, there are two properties that have differences between a direct request and a request through Front Door. Both the AcceptTypes and the UserLanguages property are empty for Front Door requests, while they are absolutely filled in when directly accessing the test application.
I’m not quite sure what the reason is for the first Front Door request to be different from a direct request. Is it a bug? Is it intentional and if so, why? Or is it because Front Door is developed using a framework that doesn’t support these properties, having them be empty when being forwarded?
Unfortunately I didn't find a solution to the issue, but to answer the question if anyone else is experiencing this: I did experience something similar.
It seems that the cookie got corrupted. Comparing the fields that existed before against a healthy cookie, my guess is that somewhere in the content of the field it is being interpreted as a truncated SQL statement, which is probably what triggers the rule. I still have to determine whether this is true and what caused it.
I ran into this issue, but the token was being passed through via the request query rather than via a cookie. In case it might help someone: for the specified host, I had to allow the request via a custom rule doing a regex match on the RequestUri, using the following regex (taken from the original managed rule):
:\/\\\\*!?|\\\\*\/|[';]--|--[\\\\s\\\\r\\\\n\\\\v\\\\f]|--[^-]*?-|[^\\u0026-]#.*?[\\\\s\\\\r\\\\n\\\\v\\\\f]|;?\\\\x00
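If you want to check locally whether a given value would trip that pattern, here is a rough sketch. The regex is adapted from the managed rule quoted above, with escaping adjusted for a C# verbatim string, so treat it as an approximation rather than the exact rule:

```csharp
using System;
using System.Text.RegularExpressions;

// Approximation of the SQLI-942440 comment-sequence pattern, re-escaped for C#.
static readonly Regex SqlCommentSequence = new Regex(
    @"/\*!?|\*/|[';]--|--[\s\r\n\v\f]|--[^-]*?-|[^&-]#.*?[\s\r\n\v\f]|;?\x00");

static bool WouldTripRule(string value) => SqlCommentSequence.IsMatch(value);

// The cookie value from the question matches because of the "--OO--" run:
// Console.WriteLine(WouldTripRule("RZI5CL3Uk8cJjmX3B8S-q0ou--OO--bctU5sx8FhazvyvfAH7wH")); // True
```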

Why do I need to perform server side validation?

Thanks to everyone who commented or posted an answer! I've kept my original question and update below for completeness.
[Feb 16, 2011 - Update 2] As some people point out - my question should have been: Given a standard asp.net 4 form, if I don't have any server side validation, what types of malicious attacks am I susceptible to?
Here is my take away on this issue.
If data isn't sensitive (comments on a page) - from an asp.net security standpoint, following standard best practices (SqlParameters, request validation enabled, etc) will protect you from malicious attacks.
For sensitive data/applications - it's up to you to decide what type of server-side validation is appropriate for your application. You need to think through the end-to-end solution (web services, other systems, etc.). You can view a number of suggestions below - whitelist validation, etc.
If you are using AJAX (XHR requests) to post user input, you need to reproduce the protection from the other bullets in your server-side code. Again, there are lots of solutions below - like ensuring that the data does not contain any HTML/code, etc. (Side note: the .NET Framework's requestValidationMode="4.0" does afford some protection in this regard, but I can't speak to how complete a solution it is.)
Please feel free to continue to comment...if any of the above is incorrect please let me know. Thanks!
[Feb 3, 2011 - Update 1] I want to thank everyone for their answers! Perhaps I should ask the reverse question:
Assume a simple asp.net 4.0 web form (formview + datasource with request validation enabled) that allows logged in users to post comments to a public page (comments stored in sql server db table). What type of data validation or cleansing should I perform on the new "comments" on the server side?
[Jan 19, 2011 - Original Question] Our asp.net 4 website has a few forms where users can submit data and we use jquery validate on the client side. Users have to be logged in with a valid account to access these forms.
I understand that our client side validation rules could easily be bypassed and clients could post data without required fields, etc. This doesn’t concern me very much - users have to be logged in and I don’t consider our data very “sensitive” nor would I say any of our validation is “critical”. The input data is written to the database using SqlParameters (to defend against sql injection) and we depend on asp.net request validation to defend against potentially dangerous html input.
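(For concreteness, here's a simplified sketch of the SqlParameters pattern I'm describing; the table name, column names, and connection string are made up for this example:)

```csharp
using System.Data.SqlClient;

// Parameterized insert: user input travels as data, never as SQL text.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "INSERT INTO Comments (UserId, Body) VALUES (@userId, @body)", conn))
{
    cmd.Parameters.AddWithValue("@userId", userId);
    cmd.Parameters.AddWithValue("@body", commentBody);
    conn.Open();
    cmd.ExecuteNonQuery();
}
```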
Is it really worth our time to rewrite the various jquery validation rules on the server? Specifically how could a malicious user compromise our server or what specific attacks could we be open to?
I apologize as it appears that this question has been discussed a few times on this site – but I have yet to find an answer that cites specific risks or issues with not performing server side validation. Thanks in advance
Hypothetical situation:
Let's say you have a zip code field. On the client-side you validate that it must be in a "00000" or "00000-0000" pattern. Since you're allowing a hyphen, you decide to store the field as a varchar in the database.
So, some evil user comes along and decides to bypass all of your client-side validation and submit something that's not in the correct format and makes it past the request validation.
Ok, no big deal..., you're encoding it before displaying it back to the user later anyway.
But what else are you doing with that zip code? Are you submitting it to web service for some sort of lookup? Are you uploading it to a GPS device? Will it ever be interpreted by something else in the future? Does your zipcode field now contain some JSON or something else weird?
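A minimal sketch of a server-side check for this zip code example, mirroring the client-side rule described above:

```csharp
using System.Text.RegularExpressions;

// Server-side mirror of the client-side "00000" or "00000-0000" rule.
static readonly Regex ZipPattern = new Regex(@"^\d{5}(-\d{4})?$");

static bool IsValidZip(string input) =>
    input != null && ZipPattern.IsMatch(input);
```

With this in place, the "evil user" who bypasses the browser still can't get JSON or anything else weird into the varchar column.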
Or something like this: http://www.businessinsider.com/livingsocial-server-flaw-2011-1
Security is a dependability attribute, defined as the probability that the system resists an attack, or equivalently the probability that a fault is not maliciously activated.
In order to implement security, you must perform a threat analysis. Complex computer systems are subject to deeper analyses (think of an aircraft's or a control tower's equipment) as they become more critical and threats put business or human life at risk.
You can perform your own threat analysis by asking yourself: what happens if a user bypasses validation?
Two groups of answers, by examples:
Group 1 (critical)
The user can buy articles paying less than their price
Information about other users can be revealed to the user
The user obtains privileges he/she is not supposed to have
Group 2 (non critical)
The user is shown inconsistent data on the next page
Processing continues, but the inconsistency leads to an error that requires human intervention
The user's data (but only of that user, not others) get compromised
A strange error page is returned to the user, with lots of technical information that cannot be used anyway
In the first case, you must definitely fix your validation problem, because you could lose money after an attack, or lose the trust of your public (think about forging Facebook URLs and showing someone's photos even if you are not mutually friends).
In the second case, if you are sure that an inconsistent field doesn't put your business or your data at risk, you may still choose not to fix it.
The real problem is:
How do you prove that inconsistent data sent to your website can never have a consequence for the system that poses a threat?
That is why you lose less time fixing your validation than reasoning about whether you can skip it.
Honestly, users don't care what you consider "sensitive" or "critical" data. Those criteria are up to them to decide.
I know that if I was a user of your application and I saw my data change without me directly doing something to cause the change...I would close my account up as fast as possible. It would be readily apparent that your system wasn't secure and none of my data was safe.
Keep in mind that you're forcing people to log in so you at least have their passwords somewhere. Whether or not they are easily accessed, a breach is a breach and I have lost my trust.
So...while you may not consider an input injection attack important, your users will and that is why you should still do server side input validation.
Your data may not be worth much, that's fine by me.
BUT attackers could inject CSRF ("cross-site request forgery") attack code into your application; users of your site may then have their data at other sites compromised. Yes, it would require those other sites to have bugs, but that happens. Yes, it would require that users not use the logout buttons on those sites, but not enough people use them. Think of all the tasty data your users have stored at other websites. You wouldn't want something bad to happen to your users.
Attackers could inject HTML that invites users to download and install 'plugins necessary for viewing this content' - plugins that are keyloggers, or that search hard drives for credit card numbers or tax filings, or that turn machines into spambots or porn hosts. Your users trust your site not to recommend plugins that are owned by the Yakuza, right? They might not feel friendly if your site recommends installing evil things.
Depending upon what kinds of bugs invalid data might trigger, you might find your own server acting as a spambot or a porn host. It heavily depends on how defensively you have coded other aspects of your application. Too many applications blindly trust input data.
And the best part: your users aren't human. Your users are browsers, which might be executing attacks supplied by other sites that didn't bother to perform good input validation and output sanitizing. Your users are viruses or worms that happen to find you by chance or by design. You might trust the individuals, but how far do you trust their computers? Me, not very far.
Please write applications to be as secure as you can -- you may put a large button on the front page to drop all users' data if you want -- but please don't intentionally write insecure programs.
This is an excellent and brave question. The short (and possibly brave) answer is: you don't. If you are aware of all the security vulnerabilities and you still don't believe it's necessary, then that's your choice.
It really depends on who your users are, who the site is exposed to (in terms of intranet or internet) and how easy it is to obtain an account. You say that your data is not sensitive yet you still require users to log in. How bad would it be if an unauthorised user were to access the system by hopping on another user's machine whilst they were elsewhere?
Bear in mind that relying on request validation to look for malicious input can never be proven 100% safe, so security is usually done at multiple levels with a fair bit of redundancy.
However it has to be your choice and you are doing the right thing to find out the consequences of leaving this out.
I believe that you need to validate both on the client side and on the server side, and here's why.
On the client side, you are often saving the user from submitting data that is obviously wrong. They have not filled in a required field. They have put letters in a field that is only supposed to contain numbers. They have provided a date in the future when only a date in the past will do (such as date of birth). And so on. By preventing these kinds of mistakes on the client side, you are avoiding user frustration, and also reducing the number of unnecessary hits to your web server.
On the server side, you should generally repeat all of the validation that you did on the client side. That is because, as you have observed, clever users can get around client-side validation and submit invalid data. In addition, there is some validation that is inefficient or impossible to do on the client side. Sometimes, you check that the data entry adheres to business rules. You might check it against existing data in the database. If you just let users enter anything (especially omitting required fields), the website won't function properly for them.
Check out the Tamper Data extension for Firefox. With it, you can feed the server anything you want very easily.
Anyone performing HTTP POSTs to your server via your web site (with jQuery validation) can also perform HTTP POSTs via some other means that bypasses the jQuery validation. For example, I could use System.Net.HttpWebRequest to POST some data to your server with the appropriate cookies, injecting malicious content into the form fields. I'd have to set up the __EVENTVALIDATION and __VIEWSTATE fields correctly, but if I succeeded, I'd be bypassing the validation.
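To make that concrete, here is a hedged sketch of such a request. The URL and field name are invented, and a real WebForms POST would also need the __VIEWSTATE and __EVENTVALIDATION values scraped from a prior GET:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

// A raw POST that never runs your jQuery validation.
var request = (HttpWebRequest)WebRequest.Create("https://example.com/comments.aspx");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";

byte[] body = Encoding.UTF8.GetBytes(
    "CommentTextBox=" + Uri.EscapeDataString("<script>alert(1)</script>"));
request.ContentLength = body.Length;
using (Stream stream = request.GetRequestStream())
    stream.Write(body, 0, body.Length);

using (var response = (HttpWebResponse)request.GetResponse())
    Console.WriteLine(response.StatusCode); // server-side validation is the only defense left
```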
If you don't have server-side data validation, then you are effectively not validating the inputs at all. The jQuery validation is nice for user experience but not a real line of defense.
This is especially so with inputs like a free-form comments field. You definitely want to ensure that the field does not contain HTML or other malicious script. As an extra measure of defense, you should also escape the comment content when it is displayed in your web app with a library like AntiXss (see http://wpl.codeplex.com/).
In terms of client-side vs. server-side validation, my opinion is that client-side validation is just there to make sure the form is filled in correctly; a user can tamper with the form and bypass the verifications you do in JavaScript.
On the server side you can make sure that you actually want to store this data, validate it in depth, and check related database tables to ensure that your database stays consistent with whatever data set you get from the client. I would even say the server side is more important than the client side, in that it does not reveal to the user what you look for in the form or how you validate the data.
To summarize, I recommend validation on both sides, but if I had to choose between the two, I would recommend server-side validation, even though that means your server may end up performing validation work that client-side checks could have headed off.
To answer your second question:
You need to use a whitelist to keep malicious input out of the incoming comments.
The .NET Framework request validation does a very good job of stopping XSS payloads in incoming POST requests. It may not, however, prevent other malicious or mischievous HTML from getting into the comments (image tags, hyperlinks, etc.).
So if possible I would set up whitelist validation on the server side for allowed characters. A regex should cover this just fine: allow A-Za-z0-9, whitespace, and a few punctuation marks. If the regex fails to match, return an error message to the user and stop the transaction.
Regarding SQL injection: I would allow apostrophes through in this case (unless you like terrible grammar in your comments), but put code comments around your parameterized SQL queries to the effect of: "This is the only protection against SQL injection, so be careful when modifying." You should also lock down the permissions of the database account used by the web process (read/write only, not database-owner permissions).
What I wouldn't do is try blacklist validation on the input, as that is very time-consuming to do correctly (see RSnake's XSS Cheat Sheet at http://ha.ckers.org/xss.html for an idea of the number of things you would need to prevent just for XSS).
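A rough sketch of such a whitelist check; the exact character set and length limit here are assumptions to adjust for your application:

```csharp
using System.Text.RegularExpressions;

// Whitelist: letters, digits, whitespace, and a few punctuation marks
// (apostrophe included, per the note above), capped at 500 characters.
static readonly Regex AllowedComment =
    new Regex(@"^[A-Za-z0-9\s.,!?:;'()-]{1,500}$");

static bool IsAllowedComment(string comment) =>
    comment != null && AllowedComment.IsMatch(comment);
```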
Between the .NET framework and your own whitelist validation you should be safe from HTML-based attacks such as XSS and CSRF*. SQL injection will be prevented by using parameterized queries. If the comment data touches any other assets you may need to put more controls in place, but those cover the attacks relevant to the basic data submission form you've outlined.
Also, I wouldn't try to "cleanse" the data at all. It is very difficult to do properly, and users (as was mentioned above) hate it when their data is modified without their permission. It is more secure and more usable to give users a clear error message when your data validation fails. If you put their comment back on the page for them to edit, HTML-encode the output so you aren't vulnerable to a reflected XSS attack.
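Encoding the echoed comment is essentially a one-liner in ASP.NET; the control name here is hypothetical:

```csharp
using System.Web;

// Encode user-supplied text before writing it back into the page, so a
// stored "<script>..." renders as inert text instead of executing.
CommentLiteral.Text = HttpUtility.HtmlEncode(userComment);
```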
And as always, OWASP.org (http://www.owasp.org) is a good reference for all things webappsec related. Check out their Top Ten and Development Guide projects.
*CSRF may not be a direct concern of yours, as fraudulent posts to your site may not matter to you, but preventing XSS has the side benefit of keeping CSRF payloads targeting other sites from being hosted from your site.

Does it suck to not provide graceful errors to a user who doesn't use the website properly?

I recently received feedback from a colleague about the source code of my website. He says that it is bad practice not to handle gracefully what the visual interface does not allow the user to do.
Since it's not very clear, here's an example.
Let's say a visitor can comment something.
A comment is saved into a database, in a nvarchar(500) column.
The <input /> field length is limited to 500.
But, of course, nothing forbids a more advanced user from disabling the length limit and typing 501 characters.
(Another example: submitting an option which does not even exist in a <select />. By contrast, there is a graceful error when the user is asked to enter a number and enters a non-number instead, since keypress events are controlled through JavaScript, and JavaScript may be disabled.)
If the visitor does so, there will be a failure at the code-contracts level. The AJAX request will fail with an unexpected error (or, on page submit, there will be an unexpected error). In all cases, the visitor will see that something went wrong, but will have no graceful message indicating that the submitted comment is too long.
Why is this bad practice? Why would I bother designing clear and explicit error messages for cases that a visitor who uses the website correctly will never encounter?
Note: I understand that it sucks to display a detailed .NET Framework error and a stack trace when something like this happens; doing so would be a serious security issue. But in my case, there is just an AJAX response with something very generic, or a redirect to a generic page with apologies about an error.
Since everyone appears to be missing your actual question, I'll put in my 2c (though I'll no doubt be downvoted in retaliation)
As long as your inputs are validated server side (your client-side maxlength is probably ok, though some obscure browsers may not support it), you can return a generic error message as long as it contains no exception information (which you have stated it doesn't).
If, however, it's possible to fail validation via lack of javascript or incorrect entry, then a custom error message should be provided for the sake of the user's sanity.
In short, what you are doing is fine.
First and most importantly:
You should validate everything the user supplies on the server! That means not letting 501 characters through.
Other than that, if an unhandled exception occurs, you should show the user a message that gives nothing away. If you were to return exception information, it would be gold dust to an attacker.
The best method is to display a general error such as "We're sorry, we're working on the problem straight away" and e-mail the exception information to the developers in order for them to fix it.
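A minimal Global.asax sketch of that pattern; the logging and mail helpers are hypothetical stand-ins for whatever your project uses:

```csharp
// Global.asax.cs sketch: record the real exception for developers,
// show the user only a generic page with no stack trace.
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    ErrorLog.Write(ex);   // hypothetical persistence helper
    ErrorMailer.Send(ex); // hypothetical "email the developers" helper

    Server.ClearError();
    Response.Redirect("~/Error.aspx"); // generic apology page
}
```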
Why would I bother to design clear and explicit error messages for the cases where the visitor who uses correctly the website will never have?
If everyone used the web correctly, we'd never need to have validation.
As Ronald Reagan once said, "Trust, but verify".
Put in server-side validation for the length of fields. Put in validation to make sure there aren't any XSS or SQL Injection attacks. It's not the people who use your site correctly that you have to worry about, it's the ones that use it maliciously.
I think the largest part of the problem is that you are assuming validation should only happen in the UI. It really is best to validate in the UI and in the backend. There is no need to return a stack trace or detailed exception information. On Page_Load(), you should always validate all user input again and display the resulting messages statically, as if the user had JavaScript disabled.
What you're describing isn't just bad practice, it's bad design. If you can anticipate an error or exception, then you should anticipate methods of handling it, mitigating it or alleviating it. This goes for any interface design whether it's for a website or a refrigerator. If a visitor gets a generic error and is given no insight as to how to fix it, then why should that person bother using your website? If they're forced to (for work reasons maybe), then all you've done is alienate your customer and give yourself a bad name.
I would suggest you ask yourself why you're not handling these very easy to control situations. Is it laziness or do you just lack experience as a user?
Server-side validation has two main purposes:
as graceful degradation if the client-side validation doesn't work for some reason; in this case, you want a nice user-friendly message
as a security measure to ensure malicious clients can't damage your system; in this case, you want no internal details displayed
If you want to take the route of true graceful degradation, it would be NICE if the server still gave the user a friendly message for each validation failure.
In the case of maxLength, this is not very likely to be needed. But many kinds of validation use Javascript, and there are still those people or platforms that don't support Javascript. Older mobile platforms would be the main suspects here.
However, these days, most of us assume that Javascript can be relied on, so a generic error message if server validation fails is fine.

ASP.Net, Capture image/screenshot of client error

We currently have fairly robust error-handling functionality in our ASP.NET application. We log all errors to the database and to a text file on the server, and also send automated emails containing the error details to our support people. This all happens on the server, of course.
We would like to capture (and retrieve) an image of the client browser at the time the error occurred, to provide additional info for troubleshooting. Is this at all possible? If so, what would be an elegant approach to this problem?
This is not technically impossible, but it is so impractical for nearly all purposes that it might as well be impossible. You would need a plugin running on the client's machine which can receive instructions from your error page to take the screenshot, connect to the server and upload it.
If your client screens have complex data which affects the state surrounding the exception, you should revisit your design to ensure all of that is recorded before it's sent to the client, so you can keep all relevant state tracked with a given exception.
Saying something is "impractical" is usually easier than actually trying to solve something that is difficult, but not technically impossible.
I have done some more research and have come across an approach that allows one to get hold of the rendered HTML server-side. Furthermore, there are ways to convert HTML to images. I will implement the solution using a combination of the two.
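For the first half, a common WebForms trick is to override Page.Render and buffer the output. A hedged sketch, where the persistence helper is hypothetical:

```csharp
using System.IO;
using System.Web.UI;

// Capture the HTML the server rendered for this page so it can be stored
// alongside the error details; converting that HTML to an image would be
// a separate step with a third-party tool.
protected override void Render(HtmlTextWriter writer)
{
    using (var buffer = new StringWriter())
    {
        base.Render(new HtmlTextWriter(buffer));
        string renderedHtml = buffer.ToString();
        SaveForTroubleshooting(renderedHtml); // hypothetical helper
        writer.Write(renderedHtml);           // still send the page to the client
    }
}
```

Note this captures what the server rendered, not what the client actually displayed after any client-side changes.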
Capturing a client browser screenshot is not possible, for security and privacy reasons. What you can (and IMHO should) do is capture the URL and the browser version, and try to reproduce the issue in the same environment.
