Stopping ZmEu attacks with ASP.NET MVC

Recently my Elmah exception logs have been full of attempts from people using that damn ZmEu security tool against my server.
For those thinking "what the hell is ZmEu?", here is an explanation...
"ZmEu appears to be a security tool used for discovering security holes in version 2.x.x of phpMyAdmin, a web-based MySQL database manager. The tool appears to have originated from somewhere in Eastern Europe. Like what seems to happen to all black hat security tools, it made its way to China, where it has been used ever since for non-stop brute force attacks against web servers all over the world."
Here's a great link about this annoying attack -> http://www.philriesch.com/articles/2010/07/getting-a-little-sick-of-zmeu/
I'm using .NET, so they aren't going to find phpMyAdmin on my server, but the fact that my logs are full of ZmEu attacks is becoming tiresome.
The link above provides a great fix using .htaccess, but I'm using IIS 7.5, not Apache.
I have an ASP.NET MVC 2 site, so I'm using the Global.asax file to create my routes.
Here is the .htaccess suggestion:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/path/to/your/abusefile.php
RewriteCond %{HTTP_USER_AGENT} (.*)ZmEu(.*)
RewriteRule .* http://www.yourdomain.com/path/to/your/abusefile.php [R=301,L]
</IfModule>
My question is: is there anything I can add to the Global.asax file that does the same thing?

An alternative answer to my other one... this one specifically stops Elmah from logging the 404 errors generated by ZmEu, while leaving the rest of your site's behaviour unchanged. This might be a bit less conspicuous than returning messages straight to the hackers.
You can control what sorts of things Elmah logs in various ways; one way is to add this to the Global.asax:
void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e)
{
    if (e.Exception.GetBaseException() is HttpException)
    {
        HttpException httpEx = (HttpException)e.Exception.GetBaseException();
        if (httpEx.GetHttpCode() == 404)
        {
            // UserAgent can be null for some malformed requests, so guard against that
            if (Request.UserAgent != null && Request.UserAgent.Contains("ZmEu"))
            {
                // stop Elmah from logging it
                e.Dismiss();
                // log it somewhere else instead
                logger.InfoFormat("ZmEu request detected from IP {0} at address {1}", Request.UserHostAddress, Request.Url);
            }
        }
    }
}
For this event to fire, you'll need to reference the Elmah DLL from your project, and add a using Elmah; to the top of your Global.asax.cs.
The line starting logger.InfoFormat assumes you are using log4net. If not, change it to something else.
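Elmah can also do this kind of filtering declaratively in web.config, which avoids the code-behind entirely. A sketch, assuming the <errorFilter> section is registered as described in Elmah's error-filtering docs (check the assertion element names and bindings against your Elmah version):
<elmah>
  <errorFilter>
    <test>
      <and>
        <!-- dismiss only 404s whose user agent mentions ZmEu -->
        <equal binding="HttpStatusCode" value="404" type="Int32" />
        <regex binding="Context.Request.UserAgent" pattern="ZmEu" />
      </and>
    </test>
  </errorFilter>
</elmah>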

The ZmEu attacks were annoying me too, so I looked into this. It can be done with an HttpModule.
Add the following class to your project:
using System;
using System.Web;
//using log4net;

namespace YourProject
{
    public class UserAgentBlockModule : IHttpModule
    {
        //private static readonly ILog logger = LogManager.GetLogger(typeof(UserAgentBlockModule));

        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        void context_BeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;
            HttpRequest request = application.Request;

            // UserAgent can be null (many bots send no user agent at all),
            // so check it before calling Contains
            if (request.UserAgent != null && request.UserAgent.Contains("ZmEu"))
            {
                //logger.InfoFormat("ZmEu attack detected from IP {0}, aiming for url {1}", request.UserHostAddress, request.Url.ToString());
                HttpContext.Current.Server.Transfer("RickRoll.htm");
            }
        }

        public void Dispose()
        {
            // nothing to dispose
        }
    }
}
and then add the following line to web.config
<httpModules>
...
<add name="UserAgentBlockFilter" type="YourProject.UserAgentBlockModule, YourProject" />
</httpModules>
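Note that <httpModules> applies to the classic pipeline. If your application pool runs in IIS 7.x integrated mode, the registration goes under <system.webServer> instead (same type string, different section):
<system.webServer>
  <modules>
    <add name="UserAgentBlockFilter" type="YourProject.UserAgentBlockModule, YourProject" />
  </modules>
</system.webServer>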
... and then add a suitable htm page to your project so there's somewhere to redirect them to.
Note that if you're using log4net, you can uncomment the log4net lines in the code to log the occasions when the filter kicks in.
This module has worked for me in testing (when I send the right userAgent values to it). I haven't tested it on a real server yet. But it should do the trick.
Although, as I said in the comments above, something tells me that returning 404 errors might be a less conspicuous response than letting the hackers know that you're aware of them. Some of them might see something like this as a challenge. But then, I'm not an expert on hacker psychology, so who knows.

Whenever I get a ZmEu, phpMyAdmin, or forgotten_password request, I redirect the query to:
<meta http-equiv='refresh' content='0;url=http://www.ripe.net$uri' />
[or apnic or arin]. I'm hoping the admins at ripe.net don't like getting hacked.

On IIS 6.0 you can also try this...
Set your website in IIS to use host headers. Then create a website in IIS using the same IP address, but with no host header definition. (I labeled mine "Rogue Site" because some rogue once deliberately set the DNS for his domain to resolve to my popular government site; I'm not sure why.) Anyway, using host headers on multiple sites is a good practice, and having a site defined for the case when no host header is included is a way to catch visitors who don't have your domain name in the HTTP request.
On the site with no host header, create a home page that returns a response header status of "HTTP 410 Gone". Or you can redirect them elsewhere.
Any bots that try to visit your server by IP address rather than by domain name will resolve to this site and get the "410 Gone" error.
I also use Microsoft's URLScan, and modified the URLScan.ini file to exclude the user agent string "ZmEu".
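For reference, here is roughly what that URLScan.ini change can look like using URLScan 3.x custom rules (the section and option names follow the URLScan 3.1 reference as I remember it, so verify them against your version):
[Options]
RuleList=DenyZmEu

[DenyZmEu]
ScanHeaders=User-Agent
DenyDataSection=ZmEu Strings

[ZmEu Strings]
ZmEu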

If you are using IIS 7.x, you can use Request Filtering to block the requests:
Scan Headers: User-agent
Deny Strings: ZmEu
To test whether it works, start Chrome with the parameter --user-agent="ZmEu".
This way ASP.NET is never invoked, which saves you some CPU and memory.
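The same Request Filtering rule can be set declaratively in web.config; a sketch, assuming IIS 7.5 or later (which introduced <filteringRules>):
<system.webServer>
  <security>
    <requestFiltering>
      <filteringRules>
        <filteringRule name="BlockZmEu" scanUrl="false" scanQueryString="false">
          <scanHeaders>
            <add requestHeader="User-Agent" />
          </scanHeaders>
          <denyStrings>
            <add string="ZmEu" />
          </denyStrings>
        </filteringRule>
      </filteringRules>
    </requestFiltering>
  </security>
</system.webServer>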

I added this pattern in Microsoft URL Rewrite Module:
^$|EasouSpider|Add Catalog|PaperLiBot|Spiceworks|ZumBot|RU_Bot|Wget|Java/1.7.0_25|Slurp|FunWebProducts|80legs|Aboundex|AcoiRobot|Acoon Robot|AhrefsBot|aihit|AlkalineBOT|AnzwersCrawl|Arachnoidea|ArchitextSpider|archive|Autonomy Spider|Baiduspider|BecomeBot|benderthewebrobot|BlackWidow|Bork-edition|Bot mailto:craftbot#yahoo.com|botje|catchbot|changedetection|Charlotte|ChinaClaw|commoncrawl|ConveraCrawler|Covario|crawler|curl|Custo|data mining development project|DigExt|DISCo|discobot|discoveryengine|DOC|DoCoMo|DotBot|Download Demon|Download Ninja|eCatch|EirGrabber|EmailSiphon|EmailWolf|eurobot|Exabot|Express WebPictures|ExtractorPro|EyeNetIE|Ezooms|Fetch|Fetch API|filterdb|findfiles|findlinks|FlashGet|flightdeckreports|FollowSite Bot|Gaisbot|genieBot|GetRight|GetWeb!|gigablast|Gigabot|Go-Ahead-Got-It|Go!Zilla|GrabNet|Grafula|GT::WWW|hailoo|heritrix|HMView|houxou|HTTP::Lite|HTTrack|ia_archiver|IBM EVV|id-search|IDBot|Image Stripper|Image Sucker|Indy Library|InterGET|Internet Ninja|internetmemory|ISC Systems iRc Search 2.1|JetCar|JOC Web Spider|k2spider|larbin|larbin|LeechFTP|libghttp|libwww|libwww-perl|linko|LinkWalker|lwp-trivial|Mass Downloader|metadatalabs|MFC_Tear_Sample|Microsoft URL Control|MIDown tool|Missigua|Missigua Locator|Mister PiX|MJ12bot|MOREnet|MSIECrawler|msnbot|naver|Navroad|NearSite|Net Vampire|NetAnts|NetSpider|NetZIP|NextGenSearchBot|NPBot|Nutch|Octopus|Offline Explorer|Offline Navigator|omni-explorer|PageGrabber|panscient|panscient.com|Papa Foto|pavuk|pcBrowser|PECL::HTTP|PHP/|PHPCrawl|picsearch|pipl|pmoz|PredictYourBabySearchToolbar|RealDownload|Referrer Karma|ReGet|reverseget|rogerbot|ScoutJet|SearchBot|seexie|seoprofiler|Servage Robot|SeznamBot|shopwiki|sindice|sistrix|SiteSnagger|SiteSnagger|smart.apnoti.com|SmartDownload|Snoopy|Sosospider|spbot|suggybot|SuperBot|SuperHTTP|SuperPagesUrlVerifyBot|Surfbot|SurveyBot|SurveyBot|swebot|Synapse|Tagoobot|tAkeOut|Teleport|Teleport Pro|TeleportPro|TweetmemeBot|TwengaBot|twiceler|UbiCrawler|uptimerobot|URI::Fetch|urllib|User-Agent|VoidEYE|VoilaBot|WBSearchBot|Web Image Collector|Web Sucker|WebAuto|WebCopier|WebCopier|WebFetch|WebGo IS|WebLeacher|WebReaper|WebSauger|Website eXtractor|Website Quester|WebStripper|WebStripper|WebWhacker|WebZIP|WebZIP|Wells Search II|WEP Search|Widow|winHTTP|WWWOFFLE|Xaldon WebSpider|Xenu|yacybot|yandex|YandexBot|YandexImages|yBot|YesupBot|YodaoBot|yolinkBot|youdao|Zao|Zealbot|Zeus|ZyBORG|Zmeu
The top listed one, "^$", is the regex for an empty string. I do not allow bots to access the pages unless they identify themselves with a user agent; I found that most often the only things hitting these applications without a user agent were security tools gone rogue.
When blocking bots, I advise you to be very specific. A generic word like "fire" could also match "Firefox". You can adjust the regex to fix that issue, but I found it much simpler to be more specific, and that has the added benefit of being more informative to the next person to touch this setting.
Additionally, you will see I have a rule for Java/1.7.0_25; in this case it happened to be a bot using this version of Java to slam my servers. Do be careful about blocking language-specific user agents like this: some languages, such as ColdFusion, run on the JVM and use the language user agent and web requests to localhost to assemble things like PDFs. JRuby, Groovy, or Scala may do similar things, though I have not tested them.
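For anyone wanting to reproduce this, here is a sketch of the URL Rewrite rule the pattern plugs into (the rule name is illustrative, and the condition pattern is shortened here for readability; paste the full pattern from above):
<rewrite>
  <rules>
    <rule name="BlockBadBots" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <!-- shortened for readability; use the full user-agent pattern from above -->
        <add input="{HTTP_USER_AGENT}" pattern="^$|ZmEu|Wget|EmailSiphon" />
      </conditions>
      <!-- drop the connection without spending time building a response -->
      <action type="AbortRequest" />
    </rule>
  </rules>
</rewrite>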

Set your server up properly and don't worry about the attackers :)
All they do is try some basic possibilities to see if you've overlooked an obvious pitfall.
There's no point filtering out this one hacker, who is nice enough to sign his work for you.
If you take a closer look at your log files, you'll see there are so many bots doing this all the time.

Related

IIS/ASP.NET receiving calls from external application to SOAP proxy server

This is a weird one, sorry :( I have a remote server (3rd party, not under my control) that calls a defined endpoint (http://myservice.com/service.asmx), but internally, before calling, it appends '.wsdl' to the URL string (so I see http://myservice.com/service.asmx.wsdl). The original server waiting for this request was expecting this, but it is no longer in service, and I'm hoping to replace it with a 'stub'.
Basically, I'm trying to put an ASP.NET application in place to receive the requests (all currently running locally with IIS). I've used wsdl.exe to create my stub code, and it's called service.asmx. Using Postman against this running service, it all works great: I can debug, see the responses, etc. But if I try to rename my service.asmx to service.asmx.wsdl to accommodate the real server making the request, I see a 405 - HTTP Verb error. I've been unable to figure out how to make this work and was thinking it's IIS handlers or something like that. I've looked at IIS handlers, but I can't seem to find one that would work (i.e., copying the .asmx profiles into newly created .wsdl profiles).
So my question is: "Can I make the endpoint at .wsdl behave like it's an .asmx, or am I approaching this all wrong?"
After much hair-pulling, I had to add a Global.asax file to my project and implement the following method therein...
protected void Application_BeginRequest(object sender, EventArgs e)
{
    var path = Request.Path;
    if (path.EndsWith(".asmx.wsdl"))
        Context.RewritePath(path.Replace(".asmx.wsdl", ".asmx"));
}
This allowed the default .asmx handlers in IIS to remain as-is and process the request, simply by rewriting the URL programmatically.

Securing HTTP headers

I have a website on a production server and it is supposed to be very secure, so I want to secure the HTTP headers so that no unwanted information is leaked.
I have searched the net about securing HTTP headers and so far found that we can remove unwanted information like:
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
I have found solutions for X-AspNet-Version and X-Powered-By:
1. For X-AspNet-Version, I added the code below in the system.web section:
<httpRuntime enableVersionHeader="false"/>
2. For X-Powered-By, I added the code below in the system.webServer section:
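<httpProtocol>
  <customHeaders>
    <!-- reconstructed: this is the standard X-Powered-By removal;
         the original snippet did not survive, but this is the usual code for that step -->
    <remove name="X-Powered-By" />
  </customHeaders>
</httpProtocol>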
But the code for removing the Server header is not working :(
The code I am using is as follows.
I have added a class named CustomHeaderModule, and inside that class the code is as below:
/// <summary>
/// Summary description for CustomHeaderModule
/// </summary>
public class CustomHeaderModule : IHttpModule
{
    public void Dispose()
    {
        // nothing to dispose
    }

    public void Init(HttpApplication context)
    {
        context.PostReleaseRequestState += PostReleaseRequestState;
    }

    void PostReleaseRequestState(object sender, EventArgs e)
    {
        // Note: Response.Headers is only accessible under the IIS 7+ integrated pipeline
        //HttpContext.Current.Response.Headers.Remove("Server");
        // Or you can set it to something funny
        HttpContext.Current.Response.Headers.Set("Server", "CERN httpd");
    }
}
and then registered it in web.config under the system.webServer section:
<modules runAllManagedModulesForAllRequests="true">
<add name="CustomHeaderModule" type="CustomHeaderModule" />
</modules>
Now this code is not working: I am still seeing the Server header in the Chrome browser.
How can I fix this? And apart from these three settings, is there anything else I can do to secure the headers further?
Considering your problem, I would suggest using ASafaWeb to test your website!
Second, read these articles from Troy Hunt and Paul Bouwer:
Shhh… don’t let your response headers talk too loudly
Clickjack attack – the hidden threat right in front of you
ASafaWeb, Excessive Headers and Windows Azure
Following these articles, you will finally have a look at NWebSec!
Sorry if this doesn't answer your question directly, but I wouldn't really bother removing those headers. Someone can easily find out what server you are using by looking at the HTML code on the browser side.
If I look at the source code and see things like __VIEWSTATE, I'll immediately know this is ASP.NET, and if I dig a little deeper I'll probably be able to figure out the version too.
What I'd suggest is that you focus on standard security and risk procedures: making sure you are not open to SQL injection, validating everything on the server side, making sure you have all backups in place and ready to be restored in minutes, adding an additional layer of authentication if needed, making sure you have all security updates on the server, and so on...
I have found one solution which works on IIS but not locally, but I am okay with that: Removing/Hiding/Disabling excessive HTTP response headers in Azure/IIS7 without UrlScan.
Anyway, apart from these three settings, is there any other way I can secure the HTTP headers further?

Replacement for ASP.NET Virtual Directory for Multi-tenancy

I am working on an ASP.NET WebForms Application, using ASP.NET 4.5
The application has multi-tenancy support. Each tenant has its own URL, like:
http://myApplication.net/DemoTenant1/
Very simplified: in Login.aspx, the application calls this method and translates the URL to an internal ID.
public static string getTenant(HttpRequest request)
{
    return request.Url.ToString();
}
The problem is that we now have more than 200 tenants, and for each one we need to define a web application, which is:
1. a bunch of work :-)
2. probably very inefficient, as a separate worker process is opened for each tenant
I am looking for a smart replacement that stays compatible with the old URLs.
I am looking for an idea of how to solve this via URL routing, or maybe by mixing WebForms with MVC and adding a Login controller?
Also open to other ideas...
I agree with what Alexander said, the proper way to do this would be with URL Routing.
But... If you are trying to save time...
First, remove all of your web applications;
So get rid of...
http://myApplication.net/DemoTenant1/
http://myApplication.net/DemoTenant2/
http://myApplication.net/DemoTenant3/
And then you need to make sure that typing in the following:
http://myApplication.net/
... takes you to the actual WebApplication you want to use.
Then, in the global.asax file... you need to capture 404 exceptions.
So when someone types in:
http://myApplication.net/DemoTenant1/
... it will throw a 404 exception which you could catch in your global.asax file like this:
void Application_Error(object sender, EventArgs e)
{
    string urlData = Request.ServerVariables["SCRIPT_NAME"];
    // do some string splitting to get the DemoTenant1 value
    // Response.Redirect("~/Login.aspx?tenant=DemoTenant1");
}
It's a bit messy, but I have done this in the past when I was in exactly the same situation as you. Although, you do now have the routing module built by Microsoft (which I did not have at the time). I am quite sure that you can use the routing modules within WebForms, without having to use MVC; see the sketch below.
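A minimal sketch of that routing approach, assuming .NET 4.0+ WebForms (the route name and the idea of one shared Login.aspx are illustrative, not from the original post):
// In Global.asax.cs; requires: using System.Web.Routing;
void Application_Start(object sender, EventArgs e)
{
    // Routes http://myApplication.net/DemoTenant1/ to a single physical page,
    // exposing "DemoTenant1" as a route value instead of needing a virtual directory.
    RouteTable.Routes.MapPageRoute(
        "TenantLogin",   // route name
        "{tenant}",      // URL pattern: the first segment is the tenant
        "~/Login.aspx"); // the one page that serves all tenants
}
Inside Login.aspx you can then read the tenant with (string)Page.RouteData.Values["tenant"].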

IIS 6: How to handle a space (%20) after .aspx

Occasionally, my IIS 6 server will receive a request which contains a space after ".aspx", like:
http://www.foo.com/mypage.aspx%20?param=value
The "%20" immediately following ".aspx" causes the server to result in a "404 Page Not Found".
Is there a way to configure IIS to accept ".aspx%20" and process the page as if the "%20" didn't exist?
I looked at the "Home Directory" / "Configuration" in the properties of the site in IIS Manager and I added an entry for ".aspx%20" but that didn't work. Any other suggestions are appreciated.
+1 for the custom HttpModule (as Frédéric Hamidi suggested). It's a clean, modular solution and may help you rewrite other URLs, should you need to do so.
Your OnBeginRequest (referring to the link Frédéric provided) might look more or less like this:
private void OnBeginRequest(object sender, EventArgs e)
{
    HttpContext context = ((HttpApplication)sender).Context;
    string url = context.Request.RawUrl;
    context.RewritePath(url.Replace(".aspx%20", ".aspx"), false);
}
You might want to consider writing an HTTP module to remove the trailing space from the URL.
Override the 404 page in your web.config and handle the situation you described in code.
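A sketch of that last suggestion, assuming a hypothetical Fix404.aspx page (redirectMode="ResponseRewrite" requires .NET 3.5 SP1 or later):
<customErrors mode="On" redirectMode="ResponseRewrite">
  <!-- Fix404.aspx is hypothetical: with ResponseRewrite it executes at the original URL,
       so it can inspect Request.RawUrl, strip the trailing "%20", and redirect cleanly -->
  <error statusCode="404" redirect="~/Fix404.aspx" />
</customErrors>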

Problem with a URL that ends with %20

I have a big problem. There are devices in the field that send the URL "/updates " with a trailing space; it's a typo by the developer of those devices. In the server logs, it shows up as "/updates+".
I have a managed URL-rewriting module that handles all requests without an extension, but this request causes an HttpException:
System.Web.HttpException:
at System.Web.Util.FileUtil.CheckSuspiciousPhysicalPath(String physicalPath)
at System.Web.HttpContext.ValidatePath()
at System.Web.HttpApplication.ValidatePathExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
As I see in the logs, the URL rewriting module does not even get this URL, so I cannot fix it there.
Is there a way to handle those URLs with ASP.NET?
OK, this is an old thread, but I'd like to add a workable solution that works for all ASP.NET versions. Have a look at this answer in a related thread. It basically comes down to subscribing to the PreSendRequestHeaders event in global.asax.cs.
Alternatively, when on ASP.NET 4.0 or higher, use <httpRuntime relaxedUrlToFileSystemMapping="true" /> in web.config.
According to some, this is in System.Web.dll:
internal static void CheckSuspiciousPhysicalPath(string physicalPath)
{
    if (((physicalPath != null) && (physicalPath.Length > 0))
        && (Path.GetFullPath(physicalPath) != physicalPath))
    {
        throw new HttpException(0x194, "");
    }
}
I guess you cannot change that, but can't one disable it in the IIS settings? Of course, that would also disable all other checks... :-(
Or write some ISAPI filter that runs before the above code? Writing your own module is said to be easy, according to Handle URI hacking gracefully in ASP.NET.
Or, create your own error page. In that page (as suggested in the URI-hacking link above), search for specific text in exception.TargetSite.Name, such as CheckSuspiciousPhysicalPath; if found (or simply always), look at current.Request.RawUrl or something like that, clear the error, and redirect to a repaired URL.
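A rough sketch of that error-page idea, as an Application_Error handler in Global.asax.cs (an untested illustration: the TargetSite check and the naive %20 stripping come straight from the suggestion above):
void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    if (ex == null) return;

    // The path validation throws from a method named CheckSuspiciousPhysicalPath
    Exception baseEx = ex.GetBaseException();
    if (baseEx is HttpException &&
        baseEx.TargetSite != null &&
        baseEx.TargetSite.Name == "CheckSuspiciousPhysicalPath")
    {
        Server.ClearError();
        // Naive repair: drop the encoded trailing space and retry
        Response.Redirect(Request.RawUrl.Replace("%20", ""));
    }
}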
You could run a URL-rewriting ISAPI filter, like IIRF.
If you have access to the code, why not just check for '+' at the end and remove it?
