Problem with a URL that ends with %20 - asp.net

I have a big problem. There are devices in the field that request the URL "/updates " (with a trailing space), a typo by the developer of those devices. In the server logs it shows up as "/updates+".
I have a ManageURL rewriting module that handles all requests without extension. But this request causes an HttpException:
System.Web.HttpException
at System.Web.Util.FileUtil.CheckSuspiciousPhysicalPath(String physicalPath)
at System.Web.HttpContext.ValidatePath()
at System.Web.HttpApplication.ValidatePathExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
As I see in the logs, the URL rewriting module does not even get this URL, so I cannot fix it there.
Is there a way to handle those URLs with ASP.NET?

OK, this is an old thread, but I'd like to add a workable solution that works for all ASP.NET versions. Have a look at this answer in a related thread. It basically comes down to registering for the PreSendRequestHeaders event in global.asax.cs.
Alternatively, when on ASP.NET 4.0 or higher, use <httpRuntime relaxedUrlToFileSystemMapping="true" /> in web.config.
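For those who'd rather see it in context, a minimal web.config sketch (only the relaxedUrlToFileSystemMapping attribute comes from the answer above; the surrounding elements are standard boilerplate):

<configuration>
  <system.web>
    <httpRuntime relaxedUrlToFileSystemMapping="true" />
  </system.web>
</configuration>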

According to some, this is in System.Web.dll:
internal static void CheckSuspiciousPhysicalPath(string physicalPath)
{
    if (((physicalPath != null) && (physicalPath.Length > 0))
        && (Path.GetFullPath(physicalPath) != physicalPath))
    {
        throw new HttpException(0x194, ""); // 0x194 == 404
    }
}
I guess you cannot change that, but can't one disable it in the IIS settings? Of course, that would also disable all other checks... :-(
Or write some ISAPI filter that runs before the above code? Writing your own module is said to be easy, according to Handle URI hacking gracefully in ASP.NET.
Or, create your own error page. In it (as suggested in the URI-hacking link above), search for specific text in exception.TargetSite.Name, such as CheckSuspiciousPhysicalPath; if found (or simply always), look at HttpContext.Current.Request.RawUrl or something like that, clear the error, and redirect to a repaired URL.
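A rough sketch of that last idea, inside Global.asax.cs (the method name CheckSuspiciousPhysicalPath comes from the stack trace above; the trimming regex and the redirect are my assumptions, not the original answer's code):

using System;
using System.Text.RegularExpressions;
using System.Web;

// Inside your HttpApplication class in Global.asax.cs:
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();

    // React only to the specific path check that rejects trailing spaces.
    if (ex is HttpException && ex.TargetSite != null &&
        ex.TargetSite.Name == "CheckSuspiciousPhysicalPath")
    {
        // Strip trailing "%20", whitespace, or '+' and retry with a clean URL.
        string repaired = Regex.Replace(Request.RawUrl, @"(%20|\s|\+)+$", "");
        Server.ClearError();
        Response.Redirect(repaired, false);
    }
}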

You could run a URL-rewriting ISAPI filter, like IIRF.

If you have access to the code, why not just check for '+' at the end and remove it?

Related

URL Routing, Image Handler & "A potentially dangerous Request.Path value"

I've been experiencing this problem for quite some time now and have decided to try and get to the bottom of it once and for all by posting the question here. I have an image handler in a .NET 4 website, located here:
https://www.amadeupurl.co.uk/ImageHandler.ashx?i=3604
(actual domain removed for privacy)
Now, this works fine and serves an image from the web server without problem. I say "without problem" because if I access the URL myself, the image loads and no exception is generated. However, someone did visit this exact URL yesterday, and an exception was raised along the following lines:
Exception Generated
Error Message:
A potentially dangerous Request.Path value was detected from the client (?).
Stack Trace:
at System.Web.HttpRequest.ValidateInputIfRequiredByConfig()
at System.Web.HttpApplication.PipelineStepManager.ValidateHelper(HttpContext context)
Technical Information:
DATE/TIME: 23/01/2013 03:50:01
PAGE: www.amadeupurl.co.uk/ImageHandler.ashx?i=3604
I understand the error message; that's not the problem. I just don't understand why it is being generated here. To make things worse, I'm unable to replicate it: like I said, I click the link, the image loads, no exception. I am using URL routing and registered the handler to be ignored, in case that was causing an issue, with the following code:
routes.Ignore("{resource}.ashx")
I'm not sure why else I would be getting the error or what else to try.
ASP.NET 4.0+ comes with very strict built-in request validation; part of it checks for potentially dangerous characters in the URL which may be used in XSS attacks. Here are the default invalid characters in the URL:
< > * % & : \ ?
You can change this behavior in your config file:
<system.web>
<httpRuntime requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,:,\,?" />
</system.web>
Or get back to .Net 2.0 validation:
<system.web>
<httpRuntime requestValidationMode="2.0" />
</system.web>
A very common invalid character is %, so if for any reason (an attack, a web crawler, or just some non-standard browser) the URL arrives escaped, you get this:
www.amadeupurl.co.uk/ImageHandler.ashx/%3Fi%3D3604
instead of this:
www.amadeupurl.co.uk/ImageHandler.ashx/?i=3604
Note that %3F is the escape sequence for ?. That character is considered invalid by the ASP.NET request validator, which throws an exception:
A potentially dangerous Request.Path value was detected from the client (?).
Note that in the error message you see the unescaped version of the character (%3F), which is ? again.
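A quick illustration of that escaping (a standalone snippet, not from the original answer):

using System;

class EscapeDemo
{
    static void Main()
    {
        Console.WriteLine(Uri.EscapeDataString("?"));     // prints "%3F"
        Console.WriteLine(Uri.UnescapeDataString("%3F")); // prints "?"
    }
}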
Here's a good article on Request Validation and how to deal with it.
I faced this issue too, but in my case I had accidentally typed & instead of ? in the URL.
For example:
example.com/123123&parameter1=value1&parameter2=value2
when it actually has to be:
example.com/123123?parameter1=value1&parameter2=value2
A super old thread, but this works:
return RedirectToAction("MyAction", new { @myParameterName = "MyParameterValue" });
You can also add the controller name after the action name if the request is going to a different controller, and add more query-string parameters simply by listing them in the anonymous object, separated by commas.
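For instance, a hypothetical redirect with a controller name and two route values (all names made up):

return RedirectToAction("MyAction", "MyController",
    new { myParameterName = "MyParameterValue", page = 2 });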

IIS 6: How to handle a space (%20) after .aspx

Occasionally, my IIS 6 server will receive a request which contains a space after ".aspx", like:
http://www.foo.com/mypage.aspx%20?param=value
The "%20" immediately following ".aspx" causes the server to result in a "404 Page Not Found".
Is there a way to configure IIS to accept ".aspx%20" and process the page as if the "%20" didn't exist?
I looked at the "Home Directory" / "Configuration" in the properties of the site in IIS Manager and I added an entry for ".aspx%20" but that didn't work. Any other suggestions are appreciated.
+1 for the custom HttpModule (as Frédéric Hamidi suggested). It's a clean, modular solution and may help you rewrite other URLs, should you need to do so.
Your OnBeginRequest (referring to the link Frédéric provided) might look more or less like this:
private void OnBeginRequest(object sender, EventArgs e)
{
    HttpContext context = ((HttpApplication)sender).Context;
    string url = context.Request.RawUrl;

    // Rewrite ".aspx%20" back to ".aspx" before the request is mapped to a file.
    context.RewritePath(url.Replace(".aspx%20", ".aspx"), false);
}
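If it helps, the full module around that method might look like this (a sketch; the class name is invented, and the wiring follows the standard IHttpModule pattern):

using System;
using System.Web;

public class TrailingSpaceRewriteModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Run the rewrite at the very start of each request,
        // before ASP.NET maps the URL to a physical file.
        context.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpContext context = ((HttpApplication)sender).Context;
        string url = context.Request.RawUrl;
        context.RewritePath(url.Replace(".aspx%20", ".aspx"), false);
    }

    public void Dispose()
    {
        // nothing to clean up
    }
}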
You might want to consider writing an HTTP module to remove the trailing space from the URL.
Override the 404 page in your web.config and handle the situation you described in code.

HttpHandler to download txt files (ASP.NET)?

Hey, I created an HttpHandler for downloading files from the server. It seems it is not handling anything... I put a breakpoint in ProcessRequest; it never goes there.
public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // download stuff and breakpoint
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
It never stops there, as mentioned. I also registered it in the web.config.
<add verb="*" path="????" type="DownloadHandler" />
I am not sure about the path part of that entry. What do I have to enter there? I am downloading txt files, but the URL does not contain the filename, so I somehow have to pass it to the handler. How would I do this? Session, maybe?
Thanks
Have you read How to register Http Handlers? Are you using IIS 6 or 7?
The path part should contain a (partial) URL, so if in your case you are using a static URL without the filenames, you should put that there. You can end the URL with the name of a non-existent resource and map that to path.
e.g. the url is http://myserver.com/pages/downloadfiles
and the path="downloadfiles"
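Putting that together, the registration from the question might become (a sketch for the classic <httpHandlers> section; the path value is the made-up example above):

<httpHandlers>
  <add verb="*" path="downloadfiles" type="DownloadHandler" />
</httpHandlers>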
If you do a POST, you can put the filename in a hidden field and extract it in the handler. If you're using GET, I'm not sure; either cross-post the viewstate or put the filename in the session like you said.
Any reason why you can't put the filename in the url?
The path for a handler needs to be the path you are trying to handle; a bit of a tautology, I know, but it's as simple as that. Whatever path on your site (real, or much more likely virtual) you want to be handled by this handler.
Now, unless the kind of file at the end of that path is normally handled by ASP.NET (e.g. .aspx or .asmx, but not .txt), ASP.NET will never see the request, so it will never go through the pipeline and end up at your handler. In that case you have to bind the extension to ASP.NET in IIS.
As far as identifying which file the handler is supposed to respond with, you could achieve this any number of ways. I would strongly recommend avoiding session state, cookies, or anything temporal and implicit; I would instead suggest using the query string or form values, basically anything that shows up explicitly in the request.
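For example, a query-string based handler might look like this (a sketch; the file parameter name and the App_Data/downloads folder are assumptions):

using System.IO;
using System.Web;

public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Take the file name from the query string; Path.GetFileName strips
        // any directory parts so the client cannot escape the folder.
        string name = Path.GetFileName(context.Request.QueryString["file"] ?? "");
        string path = context.Server.MapPath("~/App_Data/downloads/" + name);

        if (name.Length == 0 || !File.Exists(path))
        {
            context.Response.StatusCode = 404;
            return;
        }

        context.Response.ContentType = "text/plain";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + name);
        context.Response.TransmitFile(path);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}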
Finally, I have to ask why you're using a handler for this at all: .txt will serve just fine normally, so what additional feature are you trying to implement here? There might well be a better way.

stopping ZmEu attacks with ASP.NET MVC

Recently my Elmah exception logs have been filling up with attempts from people using this damn ZmEu security software against my server.
For those thinking "what the hell is ZmEu?", here is an explanation...
"ZmEu appears to be a security tool used for discovering security holes in version 2.x.x of PHPMyAdmin, a web-based MySQL database manager. The tool appears to have originated from somewhere in Eastern Europe. Like what seems to happen to all black hat security tools, it made its way to China, where it has been used ever since for non-stop brute force attacks against web servers all over the world."
Here's a great link about this annoying attack -> http://www.philriesch.com/articles/2010/07/getting-a-little-sick-of-zmeu/
I'm using .NET, so they aren't going to find PHPMyAdmin on my server, but the fact that my logs are full of ZmEu attacks is becoming tiresome.
The link above provides a great fix using .htaccess, but I'm using IIS 7.5, not Apache.
I have an ASP.NET MVC 2 site, so I'm using the global.asax file to create my routes.
Here is the .htaccess suggestion:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/path/to/your/abusefile.php
RewriteCond %{HTTP_USER_AGENT} (.*)ZmEu(.*)
RewriteRule .* http://www.yourdomain.com/path/to/your/abusefile.php [R=301,L]
</IfModule>
My question is: is there anything I can add to the Global.asax file that does the same thing?
An alternative answer to my other one... this one specifically stops Elmah from logging the 404 errors generated by ZmEu, while leaving the rest of your site's behaviour unchanged. This might be a bit less conspicuous than returning messages straight to the hackers.
You can control what sorts of things Elmah logs in various ways; one way is adding this to the Global.asax:
void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e)
{
    if (e.Exception.GetBaseException() is HttpException)
    {
        HttpException httpEx = (HttpException)e.Exception.GetBaseException();
        if (httpEx.GetHttpCode() == 404)
        {
            // Guard against a missing User-Agent header before checking it.
            if (Request.UserAgent != null && Request.UserAgent.Contains("ZmEu"))
            {
                // stop Elmah from logging it
                e.Dismiss();
                // log it somewhere else
                logger.InfoFormat("ZmEu request detected from IP {0} at address {1}", Request.UserHostAddress, Request.Url);
            }
        }
    }
}
For this event to fire, you'll need to reference the Elmah DLL from your project, and add a using Elmah; to the top of your Global.asax.cs.
The line starting logger.InfoFormat assumes you are using log4net. If not, change it to something else.
The ZmEu attacks were annoying me too, so I looked into this. It can be done with an HttpModule.
Add the following class to your project:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Security.Principal;
//using log4net;

namespace YourProject
{
    public class UserAgentBlockModule : IHttpModule
    {
        //private static readonly ILog logger = LogManager.GetLogger(typeof(UserAgentBlockModule));

        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        void context_BeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;
            HttpRequest request = application.Request;

            // Guard against a missing User-Agent header before checking it.
            if (request.UserAgent != null && request.UserAgent.Contains("ZmEu"))
            {
                //logger.InfoFormat("ZmEu attack detected from IP {0}, aiming for url {1}", request.UserHostAddress, request.Url.ToString());
                HttpContext.Current.Server.Transfer("RickRoll.htm");
            }
        }

        public void Dispose()
        {
            // nothing to dispose
        }
    }
}
and then add the following line to web.config
<httpModules>
...
<add name="UserAgentBlockFilter" type="YourProject.UserAgentBlockModule, YourProject" />
</httpModules>
... and then add a suitable .htm page to your project so there's somewhere to transfer them to.
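If you're running the IIS 7.x integrated pipeline, the module is registered under <system.webServer> instead (this is the standard IIS 7 registration; whether you need it depends on your setup):

<system.webServer>
  <modules>
    <add name="UserAgentBlockFilter" type="YourProject.UserAgentBlockModule, YourProject" />
  </modules>
</system.webServer>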
Note that if you're using log4net, you can uncomment the log4net lines in the code to log the occasions when the filter kicks in.
This module has worked for me in testing (when I send the right userAgent values to it). I haven't tested it on a real server yet. But it should do the trick.
Although, as I said in the comments above, something tells me that returning 404 errors might be a less conspicuous response than letting the hackers know that you're aware of them. Some of them might see something like this as a challenge. But then, I'm not an expert on hacker psychology, so who knows.
Whenever I get a ZmEu, phpMyAdmin, or forgotten_password request, I redirect the query to:
<meta http-equiv='refresh' content='0;url=http://www.ripe.net$uri' />
[or apnic or arin]. I'm hoping the admins at ripe.net don't like getting hacked.
On IIS 6.0 you can also try this...
Set your website in IIS to use host headers. Then create a web site in IIS, using the same IP address, but with no host header definition. (I labeled mine "Rogue Site" because some rogue once deliberately set the DNS for his domain to resolve to my popular government site; I'm not sure why.) Anyway, using host headers on multiple sites is good practice, and having a site defined for the case when no host header is included is a way to catch visitors who don't have your domain name in the HTTP request.
On the site with no host header, create a home page that returns a response header status of "HTTP 410 Gone". Or you can redirect them elsewhere.
Any bots that try to visit your server by IP address rather than by domain name will resolve to this site and get the "410 Gone" error.
I also use Microsoft's URLScan, and modified the URLScan.ini file to exclude the user agent string "ZmEu".
If you are using IIS 7.x, you can use Request Filtering to block the requests:
Scan Headers: User-agent
Deny Strings: ZmEu
To test whether it works, start Chrome with the parameter --user-agent="ZmEu".
This way ASP.NET is never invoked, and it saves you some CPU/memory.
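The same rule can also live in web.config rather than being clicked together in IIS Manager. A sketch of the request-filtering syntax (the rule name is made up; filteringRules requires IIS 7.5 or later):

<system.webServer>
  <security>
    <requestFiltering>
      <filteringRules>
        <filteringRule name="BlockZmEu" scanUrl="false" scanQueryString="false">
          <scanHeaders>
            <add requestHeader="User-Agent" />
          </scanHeaders>
          <denyStrings>
            <add string="ZmEu" />
          </denyStrings>
        </filteringRule>
      </filteringRules>
    </requestFiltering>
  </security>
</system.webServer>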
I added this pattern in Microsoft URL Rewrite Module:
^$|EasouSpider|Add Catalog|PaperLiBot|Spiceworks|ZumBot|RU_Bot|Wget|Java/1.7.0_25|Slurp|FunWebProducts|80legs|Aboundex|AcoiRobot|Acoon Robot|AhrefsBot|aihit|AlkalineBOT|AnzwersCrawl|Arachnoidea|ArchitextSpider|archive|Autonomy Spider|Baiduspider|BecomeBot|benderthewebrobot|BlackWidow|Bork-edition|Bot mailto:craftbot@yahoo.com|botje|catchbot|changedetection|Charlotte|ChinaClaw|commoncrawl|ConveraCrawler|Covario|crawler|curl|Custo|data mining development project|DigExt|DISCo|discobot|discoveryengine|DOC|DoCoMo|DotBot|Download Demon|Download Ninja|eCatch|EirGrabber|EmailSiphon|EmailWolf|eurobot|Exabot|Express WebPictures|ExtractorPro|EyeNetIE|Ezooms|Fetch|Fetch API|filterdb|findfiles|findlinks|FlashGet|flightdeckreports|FollowSite Bot|Gaisbot|genieBot|GetRight|GetWeb!|gigablast|Gigabot|Go-Ahead-Got-It|Go!Zilla|GrabNet|Grafula|GT::WWW|hailoo|heritrix|HMView|houxou|HTTP::Lite|HTTrack|ia_archiver|IBM EVV|id-search|IDBot|Image Stripper|Image Sucker|Indy Library|InterGET|Internet Ninja|internetmemory|ISC Systems iRc Search 2.1|JetCar|JOC Web Spider|k2spider|larbin|larbin|LeechFTP|libghttp|libwww|libwww-perl|linko|LinkWalker|lwp-trivial|Mass Downloader|metadatalabs|MFC_Tear_Sample|Microsoft URL Control|MIDown tool|Missigua|Missigua Locator|Mister PiX|MJ12bot|MOREnet|MSIECrawler|msnbot|naver|Navroad|NearSite|Net Vampire|NetAnts|NetSpider|NetZIP|NextGenSearchBot|NPBot|Nutch|Octopus|Offline Explorer|Offline Navigator|omni-explorer|PageGrabber|panscient|panscient.com|Papa Foto|pavuk|pcBrowser|PECL::HTTP|PHP/|PHPCrawl|picsearch|pipl|pmoz|PredictYourBabySearchToolbar|RealDownload|Referrer Karma|ReGet|reverseget|rogerbot|ScoutJet|SearchBot|seexie|seoprofiler|Servage Robot|SeznamBot|shopwiki|sindice|sistrix|SiteSnagger|SiteSnagger|smart.apnoti.com|SmartDownload|Snoopy|Sosospider|spbot|suggybot|SuperBot|SuperHTTP|SuperPagesUrlVerifyBot|Surfbot|SurveyBot|SurveyBot|swebot|Synapse|Tagoobot|tAkeOut|Teleport|Teleport Pro|TeleportPro|TweetmemeBot|TwengaBot|twiceler|UbiCrawler|uptimerobot|URI::Fetch|urllib|User-Agent|VoidEYE|VoilaBot|WBSearchBot|Web Image Collector|Web Sucker|WebAuto|WebCopier|WebCopier|WebFetch|WebGo IS|WebLeacher|WebReaper|WebSauger|Website eXtractor|Website Quester|WebStripper|WebStripper|WebWhacker|WebZIP|WebZIP|Wells Search II|WEP Search|Widow|winHTTP|WWWOFFLE|Xaldon WebSpider|Xenu|yacybot|yandex|YandexBot|YandexImages|yBot|YesupBot|YodaoBot|yolinkBot|youdao|Zao|Zealbot|Zeus|ZyBORG|Zmeu
The top entry, "^$", is the regex for an empty string. I do not allow bots to access the pages unless they identify with a user agent; I found that most often the only things hitting these applications without a user agent were security tools gone rogue.
I will advise you, when blocking bots, to be very specific. Simply using a generic word like "fire" could match "firefox". You can adjust the regex to fix that issue, but I found it much simpler to be more specific, and that has the added benefit of being more informative to the next person to touch that setting.
Additionally, you will see I have a rule for Java/1.7.0_25; in this case it happened to be a bot using this version of Java to slam my servers. Do be careful blocking language-specific user agents like this: some languages, such as ColdFusion, run on the JVM, use the language user agent, and make web requests to localhost to assemble things like PDFs. JRuby, Groovy, or Scala may do similar things, but I have not tested them.
Set your server up properly and don't worry about the attackers. :)
All they do is try some basic possibilities to see if you've overlooked an obvious pitfall.
There is no point filtering out this one hacker who is nice enough to sign his work for you.
If you have a closer look at your log files, you'll see there are so many bots doing this all the time.

Why does double.Parse throw an error on the live server, and how do I track it down?

I built a website that:
reads data from a website via HttpWebRequest
sorts all the data
parses values out of the data
and outputs it again
On the local server it works perfectly, but when I push it to my live server, double.Parse fails with an error.
So:
- how do I track what double.Parse is trying to parse?
- how do I debug the live server?
Language: ASP.NET / C# (.NET 2.0)
You probably have culture issues.
Pass CultureInfo.InvariantCulture to double.Parse and see if it helps.
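Something along these lines (a trivial standalone illustration):

using System;
using System.Globalization;

class ParseDemo
{
    static void Main()
    {
        // "1.5" parses the same way regardless of the server's regional
        // settings when an explicit culture is supplied.
        double d = double.Parse("1.5", CultureInfo.InvariantCulture);
        Console.WriteLine(d);
    }
}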
To see the exception on the server, add <customErrors mode="Off" /> to the <system.web> element in web.config. (And make sure to remove it afterwards.)
Alternatively, you can set up a real error logging system, such as ELMAH, or check the server's event log.
Sounds like a problem with regional settings and the decimal separator, which might differ between your development and live servers.
I would use TryParse instead of plain Parse. That way, you control what happens when the input fails to parse.
Like this.
double outval;
if (!double.TryParse(yourvar, out outval))
{
    // throw and manage the error on your website
}
// life goes on.
