Are there any ASP.NET security tools and/or frameworks?

Given the increasing number of security threats, my site needs extreme care in terms of security in all aspects. I know ASP.NET has some built-in security measures (anti-forgery tokens, cross-site scripting protection, authentication, roles), but that is just not enough.
I need a tool to test all possible security threats (brute-force attacks, .... IP location, browser info ...)
and a framework (open source preferred) that handles all these concerns and lets you build on top of it.
EDIT
So, to narrow it down a bit: my primary concern is protecting the "login" page from all possible threats.
Help is highly appreciated!
P.S. If you cannot answer, please skip the question and spare the comments and negative votes. Thanks.

In terms of security, it sounds like you're building a pretty serious system.
When I build apps, I first analyze the usage: if I know the end client and they operate behind a firewall, I first restrict access to the site by IP address.
Always use SSL certificates for sensitive parts of your site.
If the site is public facing, use Microsoft Forms Authentication, but split the security elements out into a separate database so that no accidental changes can happen to the schema that may affect security.
Make sure that any client-side validation is also repeated on the server side; client-side validation is there to save round trips, but someone can spoof your site.
Make sure you set a limit on the number of times a password can be tried before the account locks out.
Enforce a strong password policy through the .NET membership provider.
Make sure you encrypt any important variables passed to JavaScript.
Don't do any of this stuff:
// SQL injection: user input concatenated straight into the query text
string sql = "select * from Test where userid = '" + textbox1.Text + "'";
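The safe alternative is a parameterized query; a minimal sketch of the same lookup (connectionString is assumed to come from your configuration):

using System.Data.SqlClient;

// The parameter value travels separately from the SQL text, so the
// input can never be interpreted as part of the query itself.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("select * from Test where userid = @userid", conn))
{
    cmd.Parameters.AddWithValue("@userid", textbox1.Text);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // read results...
    }
}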
The best starting point for testing your whole server for security vulnerabilities is below:
http://www.microsoft.com/en-us/download/confirmation.aspx?id=573
Regards
Steve

I think a general defence approach is what you should consider. By that I mean you must "seal your server", not just the web pages. On the server side, you first need to change the default ports, use a firewall to block port scanning, and monitor critical ports for unexpected inbound/outbound traffic.
Now, from the web/page side, I know at least one tool from Google that can help you with some attacks:
http://google-gruyere.appspot.com/
A second article, about SQL injection:
http://www.symantec.com/connect/articles/detection-sql-injection-and-cross-site-scripting-attacks
As for products, I know of Imperva's ThreatRadar, which is closest to what you are looking for:
http://www.imperva.com/products/wsc_threatradar.html
I am sure that there are more...
Also take some time to read:
Can some hacker steal the cookie from a user and login with that name on a web site?
How serious is this new ASP.NET security vulnerability and how can I workaround it?

Use the built-in ASP.NET membership system. It was designed by security professionals and is thoroughly tested and robust. If you use it properly, you have very little to worry about. It has a lot of built-in features, such as logging failed login attempts, which would probably benefit you.
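For example, the membership API exposes failed-attempt lockout out of the box; here is a minimal sketch, assuming a membership provider is already configured in web.config:

using System.Web.Security;

// ValidateUser checks the password and counts failed attempts toward lockout.
if (!Membership.ValidateUser(username, password))
{
    MembershipUser user = Membership.GetUser(username);
    if (user != null && user.IsLockedOut)
    {
        // Too many failed attempts; the thresholds come from
        // maxInvalidPasswordAttempts / passwordAttemptWindow in web.config.
    }
}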


OWASP ASVS V10.4 Verify that backend TLS connection failures are logged ---- How?

We are working on an ASP.NET project that is required to comply with the OWASP ASVS checklist. One of the items is "Verify that backend TLS connection failures are logged." I couldn't find a way to achieve this, but the customer is holding us to it. Any suggestions/references? Sample code would be even better.
Here's the link to the OWASP page:
http://code.google.com/p/owasp-asvs/wiki/Verification_V10
Thanks in advance.
I agree this is badly worded. There is a revamp of the ASVS at the moment. Come on by and help us make things more concrete and testable.
In your instance, TLS connections are typically maintained by the operating system on behalf of application and library code that rarely, if ever, makes any real effort to validate that the TLS connection is what it says on the tin. Browsers are getting better at warning their users, but libraries are shockingly bad at checking for end-to-end errors, and the applications that call them are worse at making wise security decisions about the state of the connection. Even basic things like certificate revocation should be a no-brainer, but very few libraries enable it by default.
We see this every day in mobile phone apps - it's trivial to get most mobile apps to connect via MITM proxies that don't provide any user feedback that the connection is untrustworthy.
I'd like to see this requirement to be:
"User agent software (mobile app, browser, web service, library) MUST make it clear to the end user in terms they can readily understand that the connection is untrustworthy, and furthermore reject the connection, or to require user intervention to establish an insecure connection. Such failed or insecure connections should be logged."
This would make sure that - regardless if it's the OS, the library or the application - someone owns this interaction, and that is has a clear security objective (no untrusted connections), a usability control that favors security, and lastly a detection control to allow pre-incident monitoring and post-incident re-construction if a user makes a terribly poor choice (for we know they always do given the chance).
I know this doesn't answer your question per se, but you don't mention if you use OpenSSL or WCF or ... I'd be happy to contribute a code snippet if you let me know the platform you'd like.
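In the meantime, here is a minimal sketch, assuming .NET Framework with outbound HTTPS going through System.Net, of one place where backend TLS failures can be caught and logged (the Trace call is a stand-in for whatever logging you use; wire this up once, e.g. in Application_Start):

using System.Net;
using System.Net.Security;

// Hook every outgoing SSL/TLS handshake made through System.Net.
ServicePointManager.ServerCertificateValidationCallback =
    (sender, certificate, chain, sslPolicyErrors) =>
    {
        if (sslPolicyErrors == SslPolicyErrors.None)
            return true; // certificate validated, allow the connection

        // ASVS V10.4: log the backend TLS failure before rejecting it.
        System.Diagnostics.Trace.TraceError(
            "Backend TLS failure talking to {0}: {1}", sender, sslPolicyErrors);
        return false; // reject the untrustworthy connection
    };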

Monitoring ASP.NET and SQL Server for Security

What is the best (or any good) way to monitor an ASP.NET application to ensure that it is secure and to quickly detect intrusion? How do we know for sure that, as of right now, our application is entirely uncompromised?
We are about to launch an ASP.NET 4 web application, with the data stored on SQL Server. The web server runs in IIS on a Windows Server 2008 instance, and the database server runs on SQL Server 2008 on a separate Win 2008 instance.
We have reviewed Microsoft's security recommendations, and I think our application is very secure. We have implemented "defense in depth" and considered a range of attack vectors.
So we "feel" confident, but have no real visibility yet into the security of our system. How can we know immediately if someone has penetrated? How can we know if a package of some kind has been deposited on one of our servers? How can we know if a data leak is in progress?
What are some concepts, tools, best practices, etc.?
Thanks in advance,
Brian
Additional Thoughts 4/22/11
Chris, thanks for the very helpful personal observations and tips below.
What is a good, comprehensive approach to monitoring current application activity for security? Beyond constant vigilance in applying best practices, patches, etc., I want to know exactly what is going on inside my system right now. I want to be able to observe and analyze its activity in a way that clearly shows me which traffic is suspect and which is not. Finally, I want this information to be totally accurate and easy to digest.
How do we efficiently get close to that? Wouldn't a good solution include monitoring logins, database activity, ASP.NET activity, etc. in addition to packets on the wire? What are some examples of how to assume a strong security posture?
Brian
The term you are looking for is Intrusion Detection System (IDS). There is a related term called Intrusion Prevention System (IPS).
IDSs monitor traffic coming into your servers at the IP level and will send alerts based on sophisticated analysis of the traffic.
IPSs are the next generation of IDSs, which actually attempt to block certain activities.
There are many commercial and open source systems available including Snort, SourceFire, Endace, and others.
In short, you should look at adding one of these systems to your mix for real time monitoring and potentially blocking of hazardous activities.
I wanted to add a bit more information here as the comments area is just a bit small.
The main thing you need to understand is the types of attacks you will see. These are going to range from relatively unsophisticated automated scripts up to highly sophisticated targeted attacks. They will also hit everything they can see, from the web site itself to IIS, .NET, the mail server, SQL (if accessible), right down to your firewall and other exposed machines/services. A holistic approach is the only way to really monitor what's going on.
Generally speaking, a new site/company is going to be hit with the automated scripts within a few minutes (I'd say 30 at most) of going live. That is the number one reason new installations of MS Windows keep the network severely locked down during installation. Heck, I've seen machines nailed within 30 seconds of being turned on for the first time.
The approach hackers/worms take is to constantly scan wide ranges of IP addresses; this is followed up with machine fingerprinting for those that respond. Based on the profile, they will send certain types of attacks your way. In some cases the profiling step is skipped and they attack certain ports regardless of response. Port 1433 (SQL Server) is a common one.
Although the most common form of attack, the automated ones are by far the easiest to deal with. Shutting down unused ports, turning off ICMP (ping response), and having a decent firewall in place will keep most of the scanners away.
For the scripted attacks, make sure you aren't exposing commonly installed packages like phpMyAdmin, IIS's web admin tools, or even Remote Desktop outside of your firewall. Also, get rid of any accounts named "admin", "administrator", "guest", "sa", "dbo", etc. Finally, make sure your passwords AREN'T allowed to be someone's name and are definitely NOT the defaults that shipped with a product.
Along these lines, make sure your database server is NOT directly accessible outside the firewall. If for some reason you have to have direct access, then at the very least change the port number it responds on and enforce encryption.
Once all of this is properly done and secured the only services that are exposed should be the web ones (port 80 / 443). The items that can still be exploited are bugs in IIS, .Net, or your web application.
For IIS and .NET you MUST install the Windows updates from MS pretty much as soon as they are released. MS has been extremely good about pushing quality updates for Windows, IIS, and .NET. Furthermore, a large majority of the updates are for vulnerabilities already being exploited in the wild. Our servers have been set to auto-install updates as soon as they are available, and we have never been burned by this (going back to at least when Server 2003 was released).
Also you need to stay on top of the updates to your firewall. It wasn't that long ago that one of Cisco's firewalls had a bug where it could be overwhelmed. Unfortunately it let all traffic pass through when this happened. Although fixed pretty quickly, people were still being hammered over a year later because admins failed to keep up with the IOS patches. Same issue with windows updates. A lot of people have been hacked simply because they failed to apply updates that would have prevented it.
The more targeted attacks are a little harder to deal with. A fair number of hackers go after custom web applications, things like posts to "contact us" and login forms. The posts might include JavaScript that, once viewed by an administrator, could cause credentials to be transferred out, or might lead to key loggers or Trojans being installed on the recipient's computer.
The problem here is that you could be compromised without even knowing it. Defenses include making sure HTML and JavaScript can't be submitted through your site; having rock solid (and constantly updated) spam and virus checks at the mail server, etc. Basically, you need to look at every possible way an external entity could send something to you and do something about it. A lot of Fortune 500 companies keep getting hit with things like this... Google included.
Hope the above helps someone. If so and it leads to a more secure environment then I'll be a happy guy. Unfortunately most companies don't monitor traffic so they have no idea just how much time is spent by their machines fending off this garbage.
I can suggest some things, but I will be glad to hear more ideas.
How can we know immediately if someone has penetrated?
This is not so easy. In my opinion, one idea is to set some traps inside your back office, together with monitoring for simultaneous logins from different IPs.
A trap can be anything you can think of: for example, a fake page in the back office that says "create new administrator" or "change administrator password". Anyone who gets in there and tries to create a new administrator is surely an intruder. Of course, this trap must be known only to you, or else there is no point to it.
For more security, any change to administrator accounts should require a second password, and anyone who tries to make a real change to an administrator account, or tries to add a new administrator, and fails on this second password should be considered an intruder.
way to monitor an ASP.NET application
I think any tool that monitors a page for text changes can help with that. For example, a network monitor can watch for specific text on your page and alert you, or take some action, if that text is not found, which would mean someone has changed the page.
So you can add some special hidden text, and if it is not found, you know for sure that someone has changed the core of your page, probably by altering files.
How can we know if a package of some kind has been deposited on one of our servers
This could be any .aspx page uploaded to your server that acts like a file browser. To prevent this from happening, I suggest adding a web.config file to each directory used for uploaded data, and in that web.config, denying everything:
<configuration>
  <system.web>
    <authorization>
      <!-- deny all users access to content served from this folder -->
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>
I have not tried it yet, but Lenny Zeltser directed me to OSSEC, which is a host-based intrusion detection system that continuously monitors an entire server to detect any suspicious activity. This looks like exactly what I want!
I will add more information once I have a chance to fully test it.
OSSEC can be found at http://www.ossec.net/

Why do I need to perform server side validation?

Thanks to everyone who commented or posted an answer! I've kept my original question and update below for completeness.
[Feb 16, 2011 - Update 2] As some people point out - my question should have been: Given a standard asp.net 4 form, if I don't have any server side validation, what types of malicious attacks am I susceptible to?
Here is my take away on this issue.
If data isn't sensitive (comments on a page) - from an asp.net security standpoint, following standard best practices (SqlParameters, request validation enabled, etc) will protect you from malicious attacks.
For sensitive data/applications - it's up to you to decide what type of server side validation is appropriate for your application. You need to think the end to end solution (webservices, other systems, etc). You can view a number of suggestions below - whitelist validation, etc.
If you are using ajax (xhr requests) to post user input you need to reproduce the protection from the other bullets in your code on the server. Again, lots of solutions below – like ensuring that the data does not contain any html/code, etc. (side note: the .net framework requestValidationMode="4.0" does afford some protection in this regard - but I can't speak to how complete a solution it is)
Please feel free to continue to comment...if any of the above is incorrect please let me know. Thanks!
[Feb 3, 2011 - Update 1] I want to thank everyone for their answers! Perhaps I should ask the reverse question:
Assume a simple asp.net 4.0 web form (formview + datasource with request validation enabled) that allows logged in users to post comments to a public page (comments stored in sql server db table). What type of data validation or cleansing should I perform on the new "comments" on the server side?
[Jan 19, 2011 - Original Question] Our asp.net 4 website has a few forms where users can submit data and we use jquery validate on the client side. Users have to be logged in with a valid account to access these forms.
I understand that our client side validation rules could easily be bypassed and clients could post data without required fields, etc. This doesn’t concern me very much - users have to be logged in and I don’t consider our data very “sensitive” nor would I say any of our validation is “critical”. The input data is written to the database using SqlParameters (to defend against sql injection) and we depend on asp.net request validation to defend against potentially dangerous html input.
Is it really worth our time to rewrite the various jquery validation rules on the server? Specifically how could a malicious user compromise our server or what specific attacks could we be open to?
I apologize as it appears that this question has been discussed a few times on this site – but I have yet to find an answer that cites specific risks or issues with not performing server side validation. Thanks in advance
Hypothetical situation:
Let's say you have a zip code field. On the client-side you validate that it must be in a "00000" or "00000-0000" pattern. Since you're allowing a hyphen, you decide to store the field as a varchar in the database.
So, some evil user comes along, decides to bypass all of your client-side validation, and submits something that's not in the correct format that makes it past request validation.
Ok, no big deal..., you're encoding it before displaying it back to the user later anyway.
But what else are you doing with that zip code? Are you submitting it to web service for some sort of lookup? Are you uploading it to a GPS device? Will it ever be interpreted by something else in the future? Does your zipcode field now contain some JSON or something else weird?
Or something like this: http://www.businessinsider.com/livingsocial-server-flaw-2011-1
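As a minimal sketch of what the missing server-side check could look like (the method name is hypothetical):

using System.Text.RegularExpressions;

// Re-apply the client-side rule on the server: "00000" or "00000-0000".
static bool IsValidZip(string zip)
{
    return zip != null && Regex.IsMatch(zip, @"^\d{5}(-\d{4})?$");
}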
Security is a dependability attribute, defined as the probability that the system resists an attack, or in other words the probability that a fault is not maliciously activated.
In order to implement security, you must perform a threat analysis. Complex computer systems are subject to deeper analyses (think about an aircraft's or a control tower's equipment) as they become more critical and threats put business or human life at risk.
You can perform your own threat analysis by asking yourself: what happens if a user bypasses validation?
Two groups of answers, by examples:
Group 1 (critical)
The user can buy articles paying less than their price
Information about other users can be revealed to the user
The user obtains privileges he/she is not supposed to have
Group 2 (non critical)
The user is shown inconsistent data on the next page
Processing continues, but the inconsistency leads to an error that requires human intervention
The user's own data (but only that user's, not others') gets compromised
A strange error page is returned to the user, with lots of technical information that cannot be used anyway
In the first case, you must definitely fix your validation problem, because you could lose money after an attack, or lose the trust of your public (think about forging Facebook URLs and showing someone's photos even if you are not mutually friends).
In the second case, if you are sure that an inconsistent field doesn't put your business or the data at risk, you may still choose not to fix it.
The real problem is:
How do you prove that inconsistent data sent to your website can never have any consequence on the system that poses a threat?
That's why you lose less time fixing your validation than proving you can safely skip it.
Honestly, users don't care what you consider "sensitive" or "critical" data. Those criteria are up to them to decide.
I know that if I was a user of your application and I saw my data change without me directly doing something to cause the change...I would close my account up as fast as possible. It would be readily apparent that your system wasn't secure and none of my data was safe.
Keep in mind that you're forcing people to log in so you at least have their passwords somewhere. Whether or not they are easily accessed, a breach is a breach and I have lost my trust.
So...while you may not consider an input injection attack important, your users will and that is why you should still do server side input validation.
Your data may not be worth much, that's fine by me.
BUT, attackers could inject CSRF (cross-site request forgery) attack code into your application; users of your site may then have their data at other sites compromised. Yes, it would require those other sites to have bugs, but that happens. Yes, it would require that users not use the logout buttons on those sites, but not enough people use them. Think of all the tasty data your users have stored at other web sites. You wouldn't want something bad to happen to your users.
Attackers could inject HTML that invites users to download and install 'plugins necessary for viewing this content' -- plugins that are keyloggers, or search hard drives for credit card numbers or tax filings. Maybe a plugin to become spambots or porn hosts. Your users trust your site to not recommend plugins that are owned by the Yakuza, right? They might not feel friendly if your site recommends installing evil things.
Depending upon what kinds of bugs invalid data might trigger, you might find yourself a spambot or a porn host. It heavily depends on how defensively you have coded other aspects of your application. Too many applications blindly trust input data.
And the best part: your users aren't human. Your users are browsers, which might be executing attacks supplied by other sites that didn't bother to perform good input validation and output sanitizing. Your users are viruses or worms that happen to find you by chance or by design. You might trust the individuals, but how far do you trust their computers? Me, not very far.
Please write applications to be as secure as you can -- you may put a large button on the front page to drop all users' data if you want -- but please don't intentionally write insecure programs.
This is an excellent and brave question. The short (and possibly brave) answer is: you don't. If you are aware of all the security vulnerabilities and you still don't believe it's necessary, then that's your choice.
It really depends on who your users are, who the site is exposed to (in terms of intranet or internet) and how easy it is to obtain an account. You say that your data is not sensitive yet you still require users to log in. How bad would it be if an unauthorised user were to access the system by hopping on another user's machine whilst they were elsewhere?
Bear in mind that relying on the request validation to look for malicious input can never be proved to be 100% safe so security is usually done at multiple levels with a fair bit of redundancy.
However it has to be your choice and you are doing the right thing to find out the consequences of leaving this out.
I believe that you need to validate both on the client side and on the server side, and here's why.
On the client side, you are often saving the user from submitting data that is obviously wrong. They have not filled in a required field. They have put letters in a field that is only supposed to contain numbers. They have provided a date in the future when only a date in the past will do (such as date of birth). And so on. By preventing these kinds of mistakes on the client side, you are avoiding user frustration, and also reducing the number of unnecessary hits to your web server.
On the server side, you should generally repeat all of the validation that you did on the client side. That is because, as you have observed, clever users can get around client-side validation and submit invalid data. In addition, there is some validation that is inefficient or impossible to do on the client side. Sometimes, you check that the data entry adheres to business rules. You might check it against existing data in the database. If you just let users enter anything (especially omitting required fields), the website won't function properly for them.
Check out the Tamper Data extension for Firefox. You can feed the server anything you want very easily.
Anyone performing HTTP POSTs to your server via your web site (with jQuery validation) can also perform HTTP POSTs via some other means that bypasses the jQuery validation. For example, I could use System.Net.HttpWebRequest to POST some data to your server with the appropriate cookies that injects malicious content into the form fields. I'd have to set up the __EVENT_VALIDATION and __VIEWSTATE fields correctly, but if I succeed, I'd be bypassing the validation.
If you don't have server-side data validation, then you are effectively not validating the inputs at all. The jQuery validation is nice for user experience but not a real line of defense.
This is especially so with inputs like a free-form comments field. You definitely want to ensure that the field does not contain HTML or other malicious script. As an extra measure of defense, you should also escape the comment content when it is displayed in your web app with a library like AntiXss (see http://wpl.codeplex.com/).
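To make that concrete, here is a rough sketch of such a hand-rolled POST (the URL and field values are hypothetical, and the elided viewstate/event-validation tokens would be scraped from a real page first):

using System.IO;
using System.Net;
using System.Text;

// Form fields crafted by hand; no browser and no jQuery validation involved.
byte[] body = Encoding.UTF8.GetBytes(
    "comment=<script>alert('xss')</script>&__VIEWSTATE=...&__EVENTVALIDATION=...");

var request = (HttpWebRequest)WebRequest.Create("https://example.com/comments.aspx");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = body.Length;
using (Stream s = request.GetRequestStream())
    s.Write(body, 0, body.Length);
var response = (HttpWebResponse)request.GetResponse();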
In terms of client-side vs. server-side validation, my opinion is that client-side validation is just there to make sure the form is filled in correctly; a user can tamper with the form and bypass the checks you perform in JavaScript.
On the server side you can make sure that you actually want to store this data, validate it in depth, and check it against related database tables to ensure that your database stays consistent with any data set you get from the client. I would even say the server side is more important than the client side, since it doesn't reveal to the user what you look for in the form or how you validate the data.
To summarize, I recommend validation on both sides, but if I had to choose between the two I would recommend server-side validation; the trade-off is that your server may end up performing validations that could have been caught earlier on the client.
To answer your second question:
You need to use a whitelist to keep malicious input out of the incoming comments.
The .NET Framework request validation does a very good job of stopping XSS payloads in incoming POST requests. It may not, however, prevent other malicious or mischievous HTML from getting into the comments (image tags, hyperlinks, etc.).
So, if possible, I would set up whitelist validation on the server side for allowed characters. A regex should cover this just fine. You should allow A-Za-z0-9, whitespace, and a few punctuation marks. If the regex fails to match, return an error message to the user and stop the transaction. Regarding SQL injection: I would allow apostrophes through in this case (unless you like terrible grammar in your comments), but put code comments around your parameterized SQL queries to the effect of: "This is the only protection against SQL injection, so be careful when modifying." You should also lock down the permissions of the database account used by the web process (read/write only, not database-owner permissions). What I wouldn't do is try blacklist validation on the input, as that is very time consuming to do correctly (see RSnake's XSS Cheat Sheet at http://ha.ckers.org/xss.html for an idea of the number of things you would need to prevent just for XSS).
Between the .NET framework and your own whitelist validation you should be safe from HTML-based attacks such as XSS and CSRF*. SQL injection will be prevented by using parameterized queries. If the comment data touches any other assets you may need to put more controls in place, but those cover the attacks relevant to the basic data submission form you've outlined.
Also, I wouldn't try to "cleanse" the data at all. It is very difficult to do properly, and users (as was mentioned above) hate it when their data is modified without their permission. It is more secure and more usable to give users a clear error message when your data validation fails. If you put their comment back on the page for them to edit, HTML-encode the output so you aren't vulnerable to a reflected XSS attack.
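A rough sketch of that whitelist-then-encode approach (the allowed character set and length cap are assumptions to adapt to your needs):

using System.Text.RegularExpressions;
using System.Web;

// Whitelist: letters, digits, whitespace, and a few punctuation marks
// (apostrophe included so contractions survive).
static readonly Regex AllowedComment = new Regex(@"^[A-Za-z0-9\s.,!?'()-]{1,500}$");

static bool TryAcceptComment(string comment, out string safeForHtml)
{
    safeForHtml = null;
    if (comment == null || !AllowedComment.IsMatch(comment))
        return false;                                // reject; don't "cleanse"
    safeForHtml = HttpUtility.HtmlEncode(comment);   // encode for display
    return true;
}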
And as always, OWASP.org (http://www.owasp.org) is a good reference for all things webappsec related. Check out their Top Ten and Development Guide projects.
*CSRF may not be a direct concern of yours, as fraudulent posts to your site may not matter to you, but preventing XSS has the side benefit of keeping CSRF payloads targeting other sites from being hosted from your site.

What would you like to see in a beginner's ASP.NET security book

This is a shameless information gathering exercise for my own book.
One of the talks I give in the community is an introduction to web site vulnerabilities. Usually during the talk I can see at least two members of the audience go very pale; and this is basic stuff: Cross-Site Scripting, SQL Injection, Information Leakage, Cross-Site Request Forgery and so on.
So, if you can think back to being one, as a beginning web developer (be it ASP.NET or not) what do you feel would be useful information about web security and how to develop securely? I will already be covering the OWASP Top Ten
(And yes this means stackoverflow will be in the acknowledgements list if someone comes up with something I haven't thought of yet!)
It's all done now, and published. Thank you all for your responses.
First, I would point out the insecurities of the web in a way that makes them accessible to people for whom developing with security in mind may (unfortunately) be a new concept. For example, show them how to intercept an HTTP header and implement an XSS attack. The reason you want to show them the attacks is so they themselves have a better idea of what they're defending against. Talking about security beyond that is great, but without understanding the type of attack they're meant to thwart, it will be hard for them to accurately "test" their systems for security. Once they can test for security by trying to intercept messages, spoof headers, etc., they at least know whether the security they're trying to implement is working or not. You can then teach them whatever methods you want for implementing that security with confidence, knowing that if they get it wrong, they will actually know about it, because it will fail the security tests you showed them.
Defensive programming as an archetypal topic which covers all the particular attacks, as most, if not all, of them are caused by not thinking defensively enough.
Make that subject the central column of the book. What would've served me well back then was knowing about techniques to never trust anything, not just one-stop tips like "do not allow SQL comments or special chars in your input".
Another interesting thing I'd love to have learned earlier is how to actually test for them.
I think all vulnerabilities come from programmers not thinking: either momentary lapses of judgement, or something they haven't thought of. One big vulnerability in an application that I was tasked to "fix up" was that the authentication method returned 0 (zero) when the user logging in was an administrator. Because the result variable was originally initialized to 0, any problem, such as the database being down and throwing an exception, meant the variable was never set to the proper "security code", and the user would then have admin access to the site. Absolutely horrible thought went into that process. That brings me to a major security concept: never initialize a variable representing a "security level", or anything of that sort, to a value that represents total god control of the site. Better yet, use existing libraries out there that have gone through the fire of being used in massive amounts of production environments for a long period of time.
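A minimal sketch of that fail-open bug and its fail-secure fix (names and level values are hypothetical):

// Hypothetical scheme from the story above: 0 means administrator.
const int Admin = 0;
const int Anonymous = 99; // least-privileged level

// Fail-open (the bug): if LookupSecurityLevel throws, level stays 0 = admin.
int level = Admin;
try { level = LookupSecurityLevel(user); } catch { /* swallowed */ }

// Fail-secure: initialize to the least-privileged value instead.
int safeLevel = Anonymous;
try { safeLevel = LookupSecurityLevel(user); } catch { /* stays Anonymous */ }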
I would like to see how ASP.NET security is different from ASP Classic security.
Foxes
Good to hear that you will have the OWASP Top Ten. Why not also include coverage of the SANS/CWE Top 25 programming mistakes?
How to make sure your security method is scalable with SQL Server. Especially how to avoid having SQL Server serialize requests from multiple users because they all connect with the same ID...
I always try to show the worst-case scenario on things that might go wrong. For instance on how a cross-site script injection can work as a black-box attack that even works on pages in the application that a hacker can’t access himself or how even an SQL injection can work as a black box and how a hacker can steal your sensitive business data, even when your website connects to your database with a normal non-privileged login account.

Public ASP.NET Application Security Considerations

An extremely secure ASP.NET application has to be written at my work, and instead of trawling through the Internet looking for best practices, I was wondering what considerations should be made and, generally, what should be done to ensure a public web application is safe.
Of course we've taken into consideration user/pass combinations, but there needs to be a much deeper level than this. I'm talking about every single level and layer of the application, i.e.:
Using URL rewrites
Masterpages
SiteMaps
Connection pooling
Session data
Encoding passwords
Using stored procedures instead of direct SQL statements
I'm making this a community wiki as there wouldn't be one sole answer which is correct as it's such a vast topic of discussion. I will point out also that this is not my forte by any means and previous security lockdown has been reached via non-public applications.
That's a bigger topic than I think you perhaps realise. The best advice is to get someone who already knows this area to advise you. Failing that, I would start by reading the Microsoft document "Improving Web Application Security: Threats and Countermeasures", but be warned that it runs to 919 printed pages.
You should refine the idea of "stored procedures" into just using parameterized queries. That will take care of most of your problems there. You can also restrict fields on the UI and strip out or encode damaging characters like the pesky ';'...
Use Forms Authentication instead of storing authentication data in session.
Obviously: hash passwords. If you want to be very cautious, use SHA-1 hashing instead of MD5.
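For instance, a minimal sketch using the built-in helper available in .NET Framework at the time (note it applies no per-user salt, which you would want to add yourself):

using System.Web.Security;

// Hash the password before storing it; never store the plain text.
string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(password, "SHA1");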
