This is the first time I have been faced with someone trying to penetrate a website I have created. What can I do to put a stop to the attempts?
As a side note, their SQL injection stands no chance of ever working, and there isn't any data we hold that isn't already available to anyone using the site normally.
Update:
I think the code side is covered for most XSS and SQL injection, but I am definitely considering a security audit. I was just curious about the response. Am I really limited to blocking IP addresses?
If you're already protected against SQL injection, you've got a major attack covered. The next biggest threat (in my opinion) would be Cross-Site Scripting (XSS), since it would allow an attacker to have another user do something malicious, making that activity hard to track.
You should also be aware of Cross-Site Request Forgery (CSRF), since that is one that many people miss.
I would take a look at OWASP's Top 10 Web Security Vulnerabilities and make sure you protect against all ten of them as well as possible. Any one of them could seriously expose you to attackers if you aren't careful.
Unless this is your first public website, all of the websites you have worked on were under attack within roughly three minutes of becoming accessible, whether you knew it or not.
A couple things you can start doing are:
Start blocking the IPs that attacks are coming from. This isn't always feasible as IP addresses frequently change and some types of attacks can work with a spoofed address.
Put an intrusion detection system (IDS) in place and start monitoring everything.
Verify your firewalls are working correctly and monitor the attack vectors. Make sure everything they are going after is pretty well secured.
This comes from another question I answered, about IIS getting hacked:
Hopefully you've had your IIS logfiles turned on, and hopefully the hacker didn't erase them. By default they're located in c:\winnt\system32\LogFiles\W3SVC1 and will generally be named after the date.
Then it's probably helpful to figure out how to use Log Parser (from Microsoft), which is free. Then use this guide to help you look forensically at your logfiles.
Do you have a firewall? Its syslogs might be helpful.
Another decent tool to help you find SQL injection issues is HP's Scrawlr; go here to download it.
If you have any more questions about what you've found, come back and ask.
Is it many sources or just a few IPs? We've had a few IPs do shady things and have used IIS to block them specifically. If it's a coordinated attack from multiple sources, this won't help.
Can someone explain to me the real danger XSS poses on internal sites using Windows Authentication? I know a firewall can be breached and a hacker can then access an internal site, but for now let's not focus on that. I just want to know what an internal employee (hacker) can do in a Windows Authentication environment using XSS.
Thanks,
Windows Authentication does not prevent XSS attacks or help detect them, so it is irrelevant to the question.
An internal employee (hacker) can do everything an external hacker can do... only about ten times more easily. Here is a quick comparison.
But practically speaking, because internal hackers can access information much more easily and in more ways than external hackers, they usually use less sophisticated methods. They focus more on covering their tracks.
But then, who said that internal hackers have to do it from the inside? Unlike external hackers, they have the option to do it from both inside and outside. An employee can go to a coffee shop and perform an XSS attack (or any other attack) just like any external hacker, and their much deeper knowledge of the internal systems will make any attack much easier. Also, IT employees can use the projects that are under their control to attack (via XSS or otherwise) other projects that are not under their control.
The possibilities are limitless. It's nearly impossible to protect yourself from internal hackers. Your only hope is to find the tracks. The good news is that if you find the tracks, it is usually much easier to reach and punish internal hackers as long as they haven't fled the country.
That, however, doesn't mean that you don't need to bother. Your coworker can probably access your office, and even break the lock on your drawer and reach the wallet you put there while you're having lunch in the kitchen, but you still don't put your wallet on the desk in plain sight, do you? Well, at least you don't leave it on the table in the conference room. The better you protect your assets, the more sophisticated the methods hackers have to use, and the more likely they are to leave tracks behind.
You can't ignore the security of your internal websites just because you believe they aren't visible to the outside world.
There have been many cases of companies ignoring the security of their internal apps just because they think they're safe. But...
You said your website is behind a firewall. Are you 100% sure of the firewall configuration and that no one would be able to access it?
Are you 100% sure that your firewall is safe and no one could break it?
Are you 100% sure of your server configuration and that this website is only accessible from your internal network?
Are you 100% sure that there's no possible way to access this internal Website/Database/App or whatever using the public Interface that is visible to the outside world?
Also, you mentioned that it's vulnerable to XSS. The fact that you ignored XSS could also mean that there are more dangerous vulnerabilities that you haven't found yet, or perhaps didn't bother to look for, because you believe it's just an internal website.
Also, as @Racil Hilan pointed out in his answer, an insider could exploit such a vulnerability.
Even for an internal website, XSS prevention is important because of what @shawkyz1 said, but, in my opinion, more so because of what @Racil Hilan said.
The easiest way to gain access to a company's data is to apply for a job. If you're inside, you're trusted. Imagine an employee who is angry with the company, for example because he's heard he'll be laid off. He can cause tremendous damage with XSS, for example by injecting a script which also pipes all input to a location from which he can steal it at his leisure.
Or perhaps he doesn't even steal the data, but causes tremendous misinformation by slightly altering the messages users try to send, for example by randomly incrementing numbers, or, with a small change, swapping "do" and "do not".
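To illustrate the standard defense against that scenario: HTML-encode user-supplied content before rendering it, so an injected script tag is displayed as text rather than executed. A minimal sketch (the class and member names below are made up):

using System;
using System.Web;

// Hypothetical helper: always encode user-supplied text before writing it into a page.
public static class SafeRender
{
    // Returns markup that displays the message body as plain text,
    // so an injected <script> tag is rendered literally instead of executed.
    public static string RenderMessageBody(string userSuppliedBody)
    {
        return HttpUtility.HtmlEncode(userSuppliedBody ?? string.Empty);
    }
}

// Usage in a page or view (hypothetical control and model names):
//   literalBody.Text = SafeRender.RenderMessageBody(message.Body);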
I have a web site that emails me a report about each unexpected server-side error.
Quite often (once every 1-2 weeks) somebody launches automated tools that bombard the web site with a ton of different URLs:
sometimes they (hackers?) think my site hosts phpMyAdmin and they try to access vulnerable (I believe) PHP pages...
sometimes they try to access pages that don't exist but belong to popular CMSs
last time they tried to inject an invalid ViewState...
It is clearly not search engine spiders, as 100% of the requests that generated errors are requests to invalid pages.
So far they haven't done much harm; the only cost is that I need to delete a ton of server error emails (200-300)... But at some point they could probably find something.
I'm really tired of that and am looking for a solution that will block such 'spiders'.
Is there anything ready to use? Any tools, DLLs, etc.? Or should I implement something myself?
In the second case, could you please recommend an approach to implement? Should I limit the number of requests per IP (say, no more than 5 requests per second and no more than 20 per minute)?
P.S. Right now my web site is written using ASP.NET 4.0.
Such bots are unlikely to find any vulnerabilities in your system if you just keep the server and software updated. They are generally looking for low-hanging fruit, i.e. systems that haven't been patched against known vulnerabilities.
You could make a bot trap to minimise such traffic. As soon as someone tries to access one of those non-existent pages that you know about, you could block all further requests from that IP address with the same user-agent string, for a while.
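Something like the following HttpModule could implement that idea. This is only a rough sketch: the trap paths, the 10-minute block window, and the in-memory dictionary are illustrative choices, and the module still needs to be registered in web.config.

using System;
using System.Collections.Concurrent;
using System.Web;

// Sketch of a bot trap: requests to known non-existent pages get the caller blocked for a while.
public class BotTrapModule : IHttpModule
{
    // Paths that don't exist on the real site but that scanners commonly probe (example values).
    private static readonly string[] TrapPaths = { "/phpmyadmin", "/wp-login.php", "/administrator" };
    private static readonly TimeSpan BlockWindow = TimeSpan.FromMinutes(10);

    // Key = client IP + user-agent, value = time the block expires.
    private static readonly ConcurrentDictionary<string, DateTime> Blocked =
        new ConcurrentDictionary<string, DateTime>();

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            string key = ctx.Request.UserHostAddress + "|" + ctx.Request.UserAgent;

            DateTime expires;
            if (Blocked.TryGetValue(key, out expires) && expires > DateTime.UtcNow)
            {
                ctx.Response.StatusCode = 403;               // already trapped: refuse quietly
                ctx.ApplicationInstance.CompleteRequest();
                return;
            }

            string path = ctx.Request.Path.ToLowerInvariant();
            foreach (string trap in TrapPaths)
            {
                if (path.StartsWith(trap))
                {
                    Blocked[key] = DateTime.UtcNow.Add(BlockWindow);  // start the block
                    ctx.Response.StatusCode = 404;                    // look like an ordinary miss
                    ctx.ApplicationInstance.CompleteRequest();
                    return;
                }
            }
        };
    }

    public void Dispose() { }
}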
There are a couple of things you can consider...
You can use one of the available Web Application Firewalls (WAFs). A WAF usually has a set of rules and an analytic engine that detect suspicious activity and react accordingly. For example, in your case it could automatically block attempts to scan your site, since it recognizes them as an attack pattern.
A simpler (but not 100%) approach is to check the Referer URL (see the description of the Referer header on Wikipedia) and reject any request that did not originate from one of your own pages; you would probably create an HttpModule for that purpose, as in the sketch below.
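A minimal sketch of that idea as an HttpModule ("www.example.com" is a placeholder for your own host). Note the Referer header is easily spoofed, so this only filters out lazy scanners.

using System;
using System.Web;

// Sketch of a Referer check: reject POSTs that don't originate from our own pages.
public class RefererCheckModule : IHttpModule
{
    private const string AllowedHost = "www.example.com";   // placeholder for your site's host

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;

            // Only guard form posts; ordinary page GETs legitimately arrive with no Referer.
            if (ctx.Request.HttpMethod != "POST")
                return;

            Uri referrer = ctx.Request.UrlReferrer;
            if (referrer == null || !referrer.Host.Equals(AllowedHost, StringComparison.OrdinalIgnoreCase))
            {
                ctx.Response.StatusCode = 403;   // request did not come from one of our pages
                ctx.ApplicationInstance.CompleteRequest();
            }
        };
    }

    public void Dispose() { }
}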
And of course you want to be sure that your site addresses all the known security issues from the OWASP Top 10 list. You can find a very comprehensive description of how to do this for ASP.NET here (the OWASP Top 10 for .NET book in PDF); I also recommend reading the blog of the author of that book: http://www.troyhunt.com/
There's nothing you can (reliably) do to prevent vulnerability scanning; the only real option is to stay on top of any vulnerabilities and prevent them from being exploited.
If your site is only used by a select few, from fixed locations, you could perhaps use an IP restriction.
What is the best (or any good) way to monitor an ASP.NET application to ensure that it is secure and to quickly detect intrusion? How do we know for sure that, as of right now, our application is entirely uncompromised?
We are about to launch an ASP.NET 4 web application, with the data stored on SQL Server. The web server runs in IIS on a Windows Server 2008 instance, and the database server runs on SQL Server 2008 on a separate Win 2008 instance.
We have reviewed Microsoft's security recommendations, and I think our application is very secure. We have implemented "defense in depth" and considered a range of attack vectors.
So we "feel" confident, but have no real visibility yet into the security of our system. How can we know immediately if someone has penetrated? How can we know if a package of some kind has been deposited on one of our servers? How can we know if a data leak is in progress?
What are some concepts, tools, best practices, etc.?
Thanks in advance,
Brian
Additional Thoughts 4/22/11
Chris, thanks for the very helpful personal observations and tips below.
What is a good, comprehensive approach to monitoring current application activity for security? Beyond constant vigilance in applying best practices, patches, etc., I want to know exactly what is going on inside my system right now. I want to be able to observe and analyze its activity in a way that clearly shows me which traffic is suspect and which is not. Finally, I want this information to be totally accurate and easy to digest.
How do we efficiently get close to that? Wouldn't a good solution include monitoring logins, database activity, ASP.NET activity, etc. in addition to packets on the wire? What are some examples of how to assume a strong security posture?
Brian
The term you are looking for is Intrusion Detection System (IDS). There is a related term called Intrusion Prevention System (IPS).
IDSs monitor traffic coming into your servers at the IP level and will send alerts based on sophisticated analysis of the traffic.
IPSs are the next generation of IDS, which actually attempt to block certain activities.
There are many commercial and open source systems available including Snort, SourceFire, Endace, and others.
In short, you should look at adding one of these systems to your mix for real time monitoring and potentially blocking of hazardous activities.
I wanted to add a bit more information here as the comments area is just a bit small.
The main thing you need to understand are the types of attacks you will see. These are going to range from relatively unsophisticated automated scripts up to highly sophisticated targeted attacks. They will also hit everything they can see, from the web site itself to IIS, .NET, the mail server, SQL (if accessible), right down to your firewall and other exposed machines/services. A holistic approach is the only way to really monitor what's going on.
Generally speaking, a new site/company is going to be hit by the automated scripts within a few minutes (I'd say 30 at most) of going live, which is the number one reason new installations of MS Windows keep the network severely locked down during installation. Heck, I've seen machines nailed within 30 seconds of being turned on for the first time.
The approach hackers/worms take is to constantly scan wide ranges of IP addresses; this is followed up with machine fingerprinting for those that respond. Based on the profile, they will send certain types of attacks your way. In some cases the profiling step is skipped and they attack certain ports regardless of response. Port 1433 (SQL Server) is a common one.
Although the most common form of attack, the automated ones are by far the easiest to deal with. Shutting down unused ports, turning off ICMP (ping response), and having a decent firewall in place will keep most of the scanners away.
For the scripted attacks, make sure you aren't exposing commonly installed packages like phpMyAdmin, IIS's web admin tools, or even Remote Desktop outside of your firewall. Also, get rid of any accounts named "admin", "administrator", "guest", "sa", "dbo", etc. Finally, make sure your passwords AREN'T allowed to be someone's name and are definitely NOT the defaults that shipped with a product.
Along these lines make sure your database server is NOT directly accessible outside the firewall. If for some reason you have to have direct access then at the very least change the port # it responds to and enforce encryption.
Once all of this is properly done and secured the only services that are exposed should be the web ones (port 80 / 443). The items that can still be exploited are bugs in IIS, .Net, or your web application.
For IIS and .NET you MUST install the Windows updates from MS pretty much as soon as they are released. MS has been extremely good about pushing quality updates for Windows, IIS, and .NET. Furthermore, a large majority of the updates are for vulnerabilities already being exploited in the wild. Our servers have been set to auto-install updates as soon as they are available and we have never been burned by this (going back to at least when Server 2003 was released).
Also, you need to stay on top of the updates to your firewall. It wasn't that long ago that one of Cisco's firewalls had a bug where it could be overwhelmed. Unfortunately it let all traffic pass through when this happened. Although fixed pretty quickly, people were still being hammered over a year later because admins failed to keep up with the IOS patches. Same issue with Windows updates: a lot of people have been hacked simply because they failed to apply updates that would have prevented it.
The more targeted attacks are a little harder to deal with. A fair number of hackers go after custom web applications, with things like posts to contact-us and login forms. The posts might include JavaScript that, once viewed by an administrator, could cause credentials to be exfiltrated or might lead to keyloggers or Trojans being installed on the recipients' computers.
The problem here is that you could be compromised without even knowing it. Defenses include making sure HTML and JavaScript can't be submitted through your site; having rock solid (and constantly updated) spam and virus checks at the mail server, etc. Basically, you need to look at every possible way an external entity could send something to you and do something about it. A lot of Fortune 500 companies keep getting hit with things like this... Google included.
Hope the above helps someone. If so and it leads to a more secure environment then I'll be a happy guy. Unfortunately most companies don't monitor traffic so they have no idea just how much time is spent by their machines fending off this garbage.
I can say some things, but I will be glad to hear more ideas.
How can we know immediately if someone has penetrated?
This is not so easy. In my opinion, one idea is to set some traps inside your back office, together with monitoring for simultaneous logins from different IPs.
A trap can be anything you can think of, for example a fake page in the back office that says "create new administrator" or "change administrator password". Anyone who gets in there and tries to create a new administrator is for sure an intruder. Of course, this trap must be known only to you, or else it is meaningless.
For extra security, any change to administrator accounts should require a second password, and anyone who tries to make a real change to an administrator account, or to add a new administrator, and fails on this second password should be considered an intruder.
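As an illustration, here is a rough sketch of such a trap as an ASP.NET handler. The page name, the e-mail addresses, and the "500 error" response are all made-up details; the SMTP settings are assumed to come from web.config.

using System;
using System.Net.Mail;
using System.Web;

// Sketch of a back-office trap ("CreateNewAdministrator.ashx" is a made-up page name).
// Nothing on the real site ever links to it, so any hit is treated as an intruder.
public class CreateNewAdministratorTrap : IHttpHandler
{
    public void ProcessRequest(HttpContext ctx)
    {
        string user = (ctx.User != null && ctx.User.Identity.IsAuthenticated)
            ? ctx.User.Identity.Name : "(anonymous)";

        string details = string.Format("Trap hit at {0:u} from {1}, logged-in user: {2}",
            DateTime.UtcNow, ctx.Request.UserHostAddress, user);

        // Alert yourself immediately (placeholder addresses; SMTP config assumed in web.config).
        new SmtpClient().Send("alerts@example.com", "admin@example.com",
            "Back-office trap triggered", details);

        // Return a plausible-looking error so the intruder doesn't realise it was a trap.
        ctx.Response.StatusCode = 500;
        ctx.Response.Write("Internal server error");
    }

    public bool IsReusable { get { return true; } }
}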
way to monitor an ASP.NET application
I think any tool that monitors the pages for a text change can help here. For example, this Network Monitor can watch for specific text on your page and alert you, or take some action, if the text is not found, which means someone has changed the page.
So you can add some special hidden text, and if it is not found you know for sure that someone has changed the core of your page, and has probably changed files.
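If you'd rather do that check yourself, here is a minimal sketch of a console program you could run from a scheduled task; the URL, marker text, and e-mail addresses are placeholders.

using System;
using System.Net;
using System.Net.Mail;

// Sketch of a page-integrity check: fetch the page and alert if the planted marker is missing.
class PageMarkerMonitor
{
    static void Main()
    {
        const string url = "http://www.example.com/";
        const string marker = "<!-- integrity-marker-7f3a -->";   // hidden text you planted in the page

        using (var client = new WebClient())
        {
            string html = client.DownloadString(url);

            if (!html.Contains(marker))
            {
                // Marker missing: someone (or something) changed the page, so raise an alert.
                new SmtpClient().Send("monitor@example.com", "admin@example.com",
                    "Page marker missing", "The integrity marker was not found on " + url);
            }
        }
    }
}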
How can we know if a package of some kind has been deposited on one of our servers
This could be any ASPX page placed on your server that acts like a file browser. To prevent this, I suggest adding a web.config file to each directory used for uploaded data, and in that web.config not allowing anything to run:
<configuration>
  <system.web>
    <!-- deny all requests to files in this upload directory -->
    <authorization>
      <deny users="*" />
    </authorization>
  </system.web>
</configuration>
I have not tried it yet, but Lenny Zeltser directed me to OSSEC, which is a host-based intrusion detection system that continuously monitors an entire server to detect any suspicious activity. This looks like exactly what I want!
I will add more information once I have a chance to fully test it.
OSSEC can be found at http://www.ossec.net/
I have a DB with user accounts information.
I've scheduled a cron job which updates the DB with any new user data it fetches from their accounts.
I was thinking that this may cause a problem since all requests are coming from the same IP address and the server may block requests from that IP address.
Is this the case?
If so, how do I avoid being banned? should I be using a proxy?
Thanks
You get banned for suspicious (or malicious) activity.
If you are running a normal business application inside a normal company intranet you are unlikely to get banned.
Since you have access to user accounts information, you already have a lot of access to the system. The best thing to do is to ask your systems administrator, since he/she defines what constitutes suspicious/malicious activity. The systems administrator might also want to help you ensure that your database is at least as secure as the original information.
should I be using a proxy?
A proxy might disguise what you are doing - but you are still doing it. So this isn't the most ethical way of solving the problem.
Is the cron job that fetches data from this "database" on the same server? Are you fetching data for a user from a remote server using screen scraping or something?
If this is the case, you may want to set up a few different cron jobs and do it in batches. That way you reduce the load on the remote server and lower the chance that wherever you are getting this data from will block your access.
Edit
Okay, so if you don't have permission to scrape, obviously you are going to want to do it responsibly (no matter the site). Try to gather as much data as you can from as few requests as possible, and spread them out over the course of the whole day, or even during times that are likely to have low load. I wouldn't try to use a proxy; that wouldn't really help the remote server, and it would be a pain for you.
I'm no iPhone programmer, and this might not be possible, but you could try having the individual iPhones grab the data so all the source traffic isn't from the same IP. Just an idea; otherwise, just try to be a bit discreet.
Here are some tips from Jeff regarding scraping Stack Overflow, but I'd imagine the rules are similar for any site (a short code sketch of such a request follows the list).
Use GZIP requests. This is important! For example, one scraper used 120 megabytes of bandwidth in only 3,310 hits which is substantial. With basic gzip support (baked into HTTP since the 90s, and universally supported) it would have been 20 megabytes or less.
Identify yourself. Add something useful to the user-agent (ideally, a link to an URL, or something informational) so we can see your bot as something other than "generic unknown anonymous scraper."
Use the right formats. Don't scrape HTML when there is a JSON or RSS feed you could use instead. Heck, why scrape at all when you can download our cc-wiki data dump??
Be considerate. Pulling data more than every 15 minutes is questionable. If you need something more timely than that ... why not ask permission first, and make your case as to why this is a benefit to the SO community and should be allowed? Our email is linked at the bottom of every single page on every SO family site. We don't bite... hard.
Yes, you want an API. We get it. Don't rage against the machine by doing naughty things until we build it. It's in the queue.
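To make the first two points concrete, here is a small C# sketch of a request that enables gzip, identifies itself, and paces its calls; the URLs, bot name, and 15-minute delay are placeholder values.

using System;
using System.IO;
using System.Net;
using System.Threading;

// Sketch of a "considerate" scraper: gzip on, identifiable user-agent, paced requests.
class PoliteFetcher
{
    static void Main()
    {
        string[] urls = { "http://example.com/feed1", "http://example.com/feed2" };  // placeholders

        foreach (string url in urls)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
            request.UserAgent = "MyCompanySyncBot/1.0 (+http://example.com/about-our-bot)";

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string body = reader.ReadToEnd();
                Console.WriteLine("{0}: {1} bytes", url, body.Length);
            }

            Thread.Sleep(TimeSpan.FromMinutes(15));   // space requests out; don't hammer the site
        }
    }
}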
This is a shameless information gathering exercise for my own book.
One of the talks I give in the community is an introduction to web site vulnerabilities. Usually during the talk I can see at least two members of the audience go very pale; and this is basic stuff: Cross-Site Scripting, SQL Injection, Information Leakage, Cross-Site Request Forgery and so on.
So, if you can think back to being one, as a beginning web developer (be it ASP.NET or not), what do you feel would be useful information about web security and how to develop securely? I will already be covering the OWASP Top Ten.
(And yes this means stackoverflow will be in the acknowledgements list if someone comes up with something I haven't thought of yet!)
It's all done now, and published. Thank you all for your responses.
First, I would point out the insecurities of the web in a way that makes them accessible to people for whom developing with security in mind may (unfortunately) be a new concept. For example, show them how to intercept an HTTP header and implement an XSS attack. The reason you want to show them the attacks is so they themselves have a better idea of what they're defending against. Talking about security beyond that is great, but without understanding the type of attack they're meant to thwart, it will be hard for them to accurately "test" their systems for security. Once they can test for security by trying to intercept messages, spoof headers, etc., they at least know whether whatever security they're trying to implement is working or not. You can then teach them whatever methods you want for implementing that security with confidence, knowing that if they get it wrong, they will actually know about it because it will fail the security tests you showed them.
Defensive programming is an archetypal topic that covers all the particular attacks, as most, if not all, of them are caused by not thinking defensively enough.
Make that subject the central column of the book. What would've served me well back then was knowing about techniques to never trust anything, not just one-off tips like "do not allow SQL comments or special chars in your input".
Another interesting thing I'd love to have learned earlier is how to actually test for them.
I think all vulnerabilities stem from programmers not thinking: either momentary lapses of judgement, or things they simply hadn't thought of. One big vulnerability in an application I was tasked to "fix up" was that the authentication method returned 0 (zero) when the user logging in was an administrator. Because the variable was originally initialized to 0, if anything went wrong, such as the database being down and an exception being thrown, the variable would never be set to the proper "security code" and the user would end up with admin access to the site. Absolutely horrible thought went into that process. So that brings me to a major security concept: never set the initial value of a variable representing a "security level", or anything of that sort, to something that represents total god control of the site. Better yet, use existing libraries that have gone through the fire of being used in massive numbers of production environments over a long period of time.
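A minimal sketch of that lesson (the names are hypothetical, with 0 standing for "administrator" as in the story above): the broken version defaults the level to the most privileged value, while the fixed version defaults to "no access" and only upgrades it on an explicit successful lookup.

using System;

// Hypothetical security levels; 0 meant "administrator" in the system described above.
enum SecurityLevel { Administrator = 0, User = 1, NoAccess = 99 }

static class Authenticator
{
    // Broken: if the database call throws, the initial value (Administrator) is returned.
    static SecurityLevel AuthenticateBroken(string userName, string password)
    {
        SecurityLevel level = SecurityLevel.Administrator;   // dangerous default
        try { level = LookUpLevel(userName, password); } catch { /* swallowed */ }
        return level;
    }

    // Fixed: default to the least privilege; a failure can only leave the caller locked out.
    static SecurityLevel AuthenticateFixed(string userName, string password)
    {
        SecurityLevel level = SecurityLevel.NoAccess;        // fail-safe default
        try { level = LookUpLevel(userName, password); } catch { /* log and fall through */ }
        return level;
    }

    // Stand-in for the real database lookup.
    static SecurityLevel LookUpLevel(string userName, string password)
    {
        throw new NotImplementedException("database lookup goes here");
    }
}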
I would like to see how ASP.NET security is different from ASP Classic security.
Foxes
Good to hear that you will have the OWASP Top Ten. Why not also include coverage of the SANS/CWE Top 25 Programming Mistakes?
How to make sure your security method is scalable with SQL Server. Especially how to avoid having SQL Server serialize requests from multiple users because they all connect with the same ID...
I always try to show the worst-case scenario for things that might go wrong. For instance, how a cross-site script injection can work as a black-box attack that even succeeds on pages of the application the hacker can't access himself, or how an SQL injection can also work as a black box and let a hacker steal your sensitive business data, even when your website connects to its database with a normal, non-privileged login account.