Cookies. Case Sensitive Paths. How to rewrite URLs - asp.net

We have a sizable collection of applications (>50) all running under a single domain but with different virtual directories. Pretty standard stuff. We store cookies using paths to segregate cookies by application. Paths are set to the Application Path.
This seems to work fine as long as the casing of the URL is the same as the application path. If it is different, the browser fails to retrieve the collection of cookies.
Is there any very basic way (ISAPI? Global.asax?) to rewrite all URLs so that they match the Application Path? Ideally this is something that can be configured at the application level.
Currently stuck on IIS 6.
thanks

Wondering if this is a possible (or even a good) solution:
In Global.asax:
void Application_BeginRequest(object sender, EventArgs e)
{
    string url = HttpContext.Current.Request.Url.PathAndQuery;
    string application = HttpContext.Current.Request.ApplicationPath;

    // Redirect only when the URL matches the application path in everything
    // but casing; otherwise the Substring below could mangle unrelated paths.
    if (!url.StartsWith(application, StringComparison.Ordinal) &&
        url.StartsWith(application, StringComparison.OrdinalIgnoreCase))
    {
        HttpContext.Current.Response.Redirect(application + url.Substring(application.Length));
    }
}

Using relative URLs in conjunction with a BASE tag might also work?
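If you go the BASE tag route, here is a minimal sketch of the idea, assuming WebForms pages with a runat="server" head element (the Page_Init handler below is illustrative, not a drop-in fix):

// Hedged sketch: emit a <base> tag whose href is the correctly-cased
// application path, so relative URLs always resolve against one casing.
protected void Page_Init(object sender, EventArgs e)
{
    System.Web.UI.HtmlControls.HtmlGenericControl baseTag =
        new System.Web.UI.HtmlControls.HtmlGenericControl("base");
    baseTag.Attributes["href"] = Request.ApplicationPath.TrimEnd('/') + "/";
    Page.Header.Controls.AddAt(0, baseTag);
}

Note this only normalizes URLs the browser builds from relative links; a request that arrives with the wrong casing in the first place would still need the redirect above.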

Remove query strings from static resources

My website is running on the ASP.NET platform. Recently I tested my website on Pingdom and found the error below.
Resources with a "?" in the URL are not cached by some proxy caching
servers. Remove the query string and encode the parameters into the
URL for the following resources:
https://projectsdeal.co.uk/ScriptResource.axd?d ...
63Nawdr4rAt1lvT7c_zyBEkV9INg0&t=ffffffffe3663df5
https://projectsdeal.co.uk/ScriptResource.axd?d ...
JGTlZFM0WRegQM9wdaZV3fQWMKwg2&t=ffffffffe3663df5
Simply leave it as it is (it's not an error!) - you cannot remove the query string from these resources, because it is the identifier ASP.NET uses to load them.
The message you get actually refers to proxy caching servers. What is a proxy caching server? A machine in the middle that caches pages of your site - not the actual client computer - so it can serve those pages from cache and avoid slowing your site down in general.
So the client can hold that resource in cache if you set the headers correctly, and from what I can see ASP.NET takes care of this correctly and your resources are cached just fine.
Now, if you wish to apply even more aggressive caching, you can use the global.asax and do something like:
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    HttpApplication app = (HttpApplication)sender;
    string cTheFile = HttpContext.Current.Request.Path;
    // cover ScriptResource.axd as well, since that is what Pingdom flagged
    if (cTheFile.EndsWith("WebResource.axd", StringComparison.InvariantCultureIgnoreCase)
        || cTheFile.EndsWith("ScriptResource.axd", StringComparison.InvariantCultureIgnoreCase))
    {
        JustSetSomeCache(app);
    }
}

private static void JustSetSomeCache(HttpApplication app)
{
    // cache publicly for 32 hours without revalidating against the server
    app.Response.Cache.AppendCacheExtension("post-check=900, pre-check=3600");
    app.Response.Cache.SetExpires(DateTime.UtcNow.AddHours(32));
    app.Response.Cache.SetMaxAge(new TimeSpan(32, 0, 0));
    app.Response.Cache.SetCacheability(HttpCacheability.Public);
    app.Response.AppendHeader("Vary", "Accept-Encoding");
}
What is the difference? With this cache policy the browser does not check the server for file changes at all, as ASP.NET's default handling does, so you save one web server call per resource.
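For reference, the policy above should produce response headers roughly like these (a sketch of the expected output, not captured from a live server; 32 hours = 115200 seconds):

Cache-Control: public, max-age=115200, post-check=900, pre-check=3600
Expires: <now + 32 hours, as an HTTP-date>
Vary: Accept-Encoding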

Replacement for ASP.NET Virtual Directory for Multi-tenancy

I am working on an ASP.NET WebForms Application, using ASP.NET 4.5
The application has multi-tenancy support. Each tenant has its own URL, like:
http://myApplication.net/DemoTenant1/
Very much simplified: in Login.aspx the application calls this method and translates the URL to an internal ID.
public static string getTenant(HttpRequest request)
{
    return request.Url.ToString();
}
The problem is that we now have more than 200 tenants, and for each one we need to define a web application, which is
- a bunch of work :-)
- probably very inefficient, as a separate worker process is spawned for each tenant
I am looking for a smart replacement that stays compatible with the old URLs.
I am looking for an idea of how to solve this via URL routing, or maybe by mixing WebForms with MVC and adding a Login controller?
Also open to other ideas...
I agree with what Alexander said; the proper way to do this would be with URL routing.
But... If you are trying to save time...
First, remove all of your web applications;
So get rid of...
http://myApplication.net/DemoTenant1/
http://myApplication.net/DemoTenant2/
http://myApplication.net/DemoTenant3/
And then you need to make sure that typing in the following:
http://myApplication.net/
... takes you to the actual WebApplication you want to use.
Then, in the global.asax file... you need to capture 404 exceptions.
So when someone types in:
http://myApplication.net/DemoTenant1/
... it will throw a 404 exception which you could catch in your global.asax file like this:
void Application_Error(object sender, EventArgs e)
{
    HttpException httpEx = Server.GetLastError() as HttpException;
    if (httpEx != null && httpEx.GetHttpCode() == 404)
    {
        string urlData = Request.ServerVariables["SCRIPT_NAME"];
        // first path segment is the tenant, e.g. "/DemoTenant1/..." -> "DemoTenant1"
        string tenant = urlData.TrimStart('/').Split('/')[0];
        Server.ClearError();
        Response.Redirect("~/Login.aspx?tenant=" + Server.UrlEncode(tenant));
    }
}
It's a bit messy, but I have done this in the past when I was in exactly the same situation as you. Although you do now have the routing module built by Microsoft (which I did not have at the time), and I am quite sure that you can use the routing modules within WebForms without having to use MVC.
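For completeness, here is a minimal sketch of what WebForms routing could look like here, assuming .NET 4+ and a single Login.aspx that reads the tenant from the route data (the route name and pattern are illustrative, not the poster's code):

using System.Web.Routing;

void Application_Start(object sender, EventArgs e)
{
    // One route serves every tenant: /DemoTenant1/..., /DemoTenant2/..., etc.
    RouteTable.Routes.MapPageRoute(
        "TenantLogin",          // route name (illustrative)
        "{tenant}/{*pathInfo}", // first URL segment identifies the tenant
        "~/Login.aspx");        // one physical page serves all tenants
}

Login.aspx can then read the tenant with Page.RouteData.Values["tenant"] instead of parsing the raw URL.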

stopping ZmEu attacks with ASP.NET MVC

Recently my Elmah exception logs have been full of attempts from people using this damn ZmEu security tool against my server.
For those thinking “what the hell is ZmEu?”, here is an explanation...
“ZmEu appears to be a security tool used for discovering security holes in version 2.x.x of PHPMyAdmin, a web based MySQL database manager. The tool appears to have originated from somewhere in Eastern Europe. Like what seems to happen to all black hat security tools, it made its way to China, where it has been used ever since for non stop brute force attacks against web servers all over the world.”
Here's a great link about this annoying attack -> http://www.philriesch.com/articles/2010/07/getting-a-little-sick-of-zmeu/
I'm using .NET, so they ain't gonna find PHPMyAdmin on my server, but the fact that my logs are full of ZmEu attacks is becoming tiresome.
The link above provides a great fix using .htaccess, but I'm using IIS 7.5, not Apache.
I have an ASP.NET MVC 2 site, so I'm using the global.asax file to create my routes.
Here is the .htaccess suggestion:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/path/to/your/abusefile.php
RewriteCond %{HTTP_USER_AGENT} (.*)ZmEu(.*)
RewriteRule .* http://www.yourdomain.com/path/to/your/abusefile.php [R=301,L]
</IfModule>
My question: is there anything I can add to the Global.asax file that does the same thing?
An alternative answer to my other one... this one specifically stops Elmah from logging the 404 errors generated by ZmEu, while leaving the rest of your site's behaviour unchanged. This might be a bit less conspicuous than returning messages straight to the hackers.
You can control what sorts of things Elmah logs in various ways, one way is adding this to the Global.asax
void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e)
{
    if (e.Exception.GetBaseException() is HttpException)
    {
        HttpException httpEx = (HttpException)e.Exception.GetBaseException();
        if (httpEx.GetHttpCode() == 404)
        {
            // UserAgent can be null, so guard the Contains check
            if (Request.UserAgent != null && Request.UserAgent.Contains("ZmEu"))
            {
                // stop Elmah from logging it
                e.Dismiss();
                // log it somewhere else
                logger.InfoFormat("ZmEu request detected from IP {0} at address {1}", Request.UserHostAddress, Request.Url);
            }
        }
    }
}
For this event to fire, you'll need to reference the Elmah DLL from your project, and add a using Elmah; to the top of your Global.asax.cs.
The line starting logger.InfoFormat assumes you are using log4net. If not, change it to something else.
The ZmEu attacks were annoying me too, so I looked into this. It can be done with an HttpModule.
Add the following class to your project:
using System;
using System.Web;
//using log4net;

namespace YourProject
{
    public class UserAgentBlockModule : IHttpModule
    {
        //private static readonly ILog logger = LogManager.GetLogger(typeof(UserAgentBlockModule));

        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        void context_BeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;
            HttpRequest request = application.Request;
            // UserAgent can be null - many scanners send no user agent at all
            if (request.UserAgent != null && request.UserAgent.Contains("ZmEu"))
            {
                //logger.InfoFormat("ZmEu attack detected from IP {0}, aiming for url {1}", request.UserHostAddress, request.Url.ToString());
                HttpContext.Current.Server.Transfer("RickRoll.htm");
            }
        }

        public void Dispose()
        {
            // nothing to dispose
        }
    }
}
and then add the following line to web.config
<httpModules>
...
<add name="UserAgentBlockFilter" type="YourProject.UserAgentBlockModule, YourProject" />
</httpModules>
... and then add a suitable .htm page to your project so there's somewhere to transfer them to.
Note that if you're using log4net, you can uncomment the log4net lines in the code to log the occasions when the filter kicks in.
This module has worked for me in testing (when I send the right userAgent values to it). I haven't tested it on a real server yet. But it should do the trick.
Although, as I said in the comments above, something tells me that returning 404 errors might be a less conspicuous response than letting the hackers know that you're aware of them. Some of them might see something like this as a challenge. But then, I'm not an expert on hacker psychology, so who knows.
Whenever I get a ZmEu, phpMyAdmin, or forgotten_password request, I redirect the query to:
<meta http-equiv='refresh' content='0;url=http://www.ripe.net$uri' />
[or apnic or arin]. I'm hoping the admins at ripe.net don't like getting hacked.
On IIS 6.0 you can also try this...
Set your website in IIS to use host headers. Then create another web site in IIS, using the same IP address, but with no host header definition. (I labeled mine "Rogue Site" because some rogue once deliberately set the DNS for his domain to resolve to my popular government site; I'm not sure why.) Anyway, using host headers on multiple sites is a good practice, and having a site defined for the case when no host header is included is a way to catch visitors who don't have your domain name in the HTTP request.
On the site with no host header, create a home page that returns a response header status of "HTTP 410 Gone". Or you can redirect them elsewhere.
Any bots that try to visit your server by IP address rather than domain name will resolve to this site and get the "410 Gone" error.
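As a sketch, the home page of that host-header-less site could be an ASPX page whose code-behind does nothing but return the status (illustrative, not the poster's code):

protected void Page_Load(object sender, EventArgs e)
{
    // Anything arriving by bare IP address gets "410 Gone".
    Response.StatusCode = 410;
    Response.StatusDescription = "Gone";
    Response.End();
}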
I also use Microsoft's URLScan, and modified the URLScan.ini file to exclude the user agent string "ZmEu".
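A hedged sketch of what such a URLScan.ini rule could look like, assuming URLScan 3.x custom rules (the rule and section names are made up for illustration):

[Options]
RuleList=DenyZmEu

[DenyZmEu]
ScanHeaders=User-Agent
DenyDataSection=ZmEu Strings

[ZmEu Strings]
ZmEu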
If you are using IIS 7.x, you can use Request Filtering to block the requests:
Scan Headers: User-agent
Deny Strings: ZmEu
To test whether it works, start Chrome with the parameter --user-agent="ZmEu".
This way ASP.NET is never invoked at all, which saves you some CPU and memory.
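For reference, the equivalent web.config fragment could look roughly like this (a sketch based on the IIS request filtering schema; the rule name is illustrative):

<system.webServer>
  <security>
    <requestFiltering>
      <filteringRules>
        <filteringRule name="BlockZmEu" scanUrl="false" scanQueryString="false">
          <scanHeaders>
            <add requestHeader="User-Agent" />
          </scanHeaders>
          <denyStrings>
            <add string="ZmEu" />
          </denyStrings>
        </filteringRule>
      </filteringRules>
    </requestFiltering>
  </security>
</system.webServer>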
I added this pattern in Microsoft URL Rewrite Module:
^$|EasouSpider|Add Catalog|PaperLiBot|Spiceworks|ZumBot|RU_Bot|Wget|Java/1.7.0_25|Slurp|FunWebProducts|80legs|Aboundex|AcoiRobot|Acoon Robot|AhrefsBot|aihit|AlkalineBOT|AnzwersCrawl|Arachnoidea|ArchitextSpider|archive|Autonomy Spider|Baiduspider|BecomeBot|benderthewebrobot|BlackWidow|Bork-edition|Bot mailto:craftbot#yahoo.com|botje|catchbot|changedetection|Charlotte|ChinaClaw|commoncrawl|ConveraCrawler|Covario|crawler|curl|Custo|data mining development project|DigExt|DISCo|discobot|discoveryengine|DOC|DoCoMo|DotBot|Download Demon|Download Ninja|eCatch|EirGrabber|EmailSiphon|EmailWolf|eurobot|Exabot|Express WebPictures|ExtractorPro|EyeNetIE|Ezooms|Fetch|Fetch API|filterdb|findfiles|findlinks|FlashGet|flightdeckreports|FollowSite Bot|Gaisbot|genieBot|GetRight|GetWeb!|gigablast|Gigabot|Go-Ahead-Got-It|Go!Zilla|GrabNet|Grafula|GT::WWW|hailoo|heritrix|HMView|houxou|HTTP::Lite|HTTrack|ia_archiver|IBM EVV|id-search|IDBot|Image Stripper|Image Sucker|Indy Library|InterGET|Internet Ninja|internetmemory|ISC Systems iRc Search 2.1|JetCar|JOC Web Spider|k2spider|larbin|larbin|LeechFTP|libghttp|libwww|libwww-perl|linko|LinkWalker|lwp-trivial|Mass Downloader|metadatalabs|MFC_Tear_Sample|Microsoft URL Control|MIDown tool|Missigua|Missigua Locator|Mister PiX|MJ12bot|MOREnet|MSIECrawler|msnbot|naver|Navroad|NearSite|Net Vampire|NetAnts|NetSpider|NetZIP|NextGenSearchBot|NPBot|Nutch|Octopus|Offline Explorer|Offline Navigator|omni-explorer|PageGrabber|panscient|panscient.com|Papa Foto|pavuk|pcBrowser|PECL::HTTP|PHP/|PHPCrawl|picsearch|pipl|pmoz|PredictYourBabySearchToolbar|RealDownload|Referrer Karma|ReGet|reverseget|rogerbot|ScoutJet|SearchBot|seexie|seoprofiler|Servage Robot|SeznamBot|shopwiki|sindice|sistrix|SiteSnagger|SiteSnagger|smart.apnoti.com|SmartDownload|Snoopy|Sosospider|spbot|suggybot|SuperBot|SuperHTTP|SuperPagesUrlVerifyBot|Surfbot|SurveyBot|SurveyBot|swebot|Synapse|Tagoobot|tAkeOut|Teleport|Teleport Pro|TeleportPro|TweetmemeBot|TwengaBot|twiceler|UbiCrawler|uptimerobot|URI::Fetch|urllib|User-Agent|VoidEYE|VoilaBot|WBSearchBot|Web Image Collector|Web Sucker|WebAuto|WebCopier|WebCopier|WebFetch|WebGo IS|WebLeacher|WebReaper|WebSauger|Website eXtractor|Website Quester|WebStripper|WebStripper|WebWhacker|WebZIP|WebZIP|Wells Search II|WEP Search|Widow|winHTTP|WWWOFFLE|Xaldon WebSpider|Xenu|yacybot|yandex|YandexBot|YandexImages|yBot|YesupBot|YodaoBot|yolinkBot|youdao|Zao|Zealbot|Zeus|ZyBORG|Zmeu
The top listed one, “^$”, is the regex for an empty string. I do not allow bots to access the pages unless they identify with a user agent; I found that most often the only things hitting these applications without a user agent were security tools gone rogue.
When blocking bots, I advise you to be very specific. Simply using a generic word like “fire” could also match “firefox”. You can adjust the regex to fix that, but I found it much simpler to be more specific, which has the added benefit of being more informative to the next person to touch that setting.
Additionally, you will see I have a rule for Java/1.7.0_25; in this case it happened to be a bot using that version of Java to slam my servers. Do be careful blocking language-specific user agents like this: some languages such as ColdFusion run on the JVM, use the language's user agent, and make web requests to localhost to assemble things like PDFs. JRuby, Groovy, or Scala may do similar things, though I have not tested them.
Set your server up properly and don't worry about the attackers :)
All they do is try some basic possibilities to see if you've overlooked an obvious pitfall.
There is no point filtering out this one hacker who is nice enough to sign his work for you.
If you have a closer look at your log files, you'll see there are so many bots doing this all the time.

How to write redirect application in asp.net?

I need to move all requests from one domain to another. I want to change part of the URL, like subdomain.olddomain/url -> subdomain.newdomain/url.
I was sure this would be a piece of cake, and wrote Application_BeginRequest as:
void Application_BeginRequest(object sender, EventArgs e)
{
    string url = Request.Url.ToString().ToLower();
    string from = ConfigurationSettings.AppSettings["from"];
    if (url.IndexOf(from) >= 0)
    {
        url = url.Replace(from, ConfigurationSettings.AppSettings["to"]);
        Response.Redirect(url);
    }
    else
    {
        if (url.IndexOf("error.aspx") < 0)
        {
            Response.Redirect("Error.aspx?url=" + Server.UrlEncode(url));
        }
    }
}
However, I forgot that BeginRequest fires only when the requested file physically exists.
Any ideas how I can make such a redirect in ASP.NET without creating hundreds of old pages?
Not 100% sure, but I think if you uncheck the "Check that file exists" option in IIS, it should work. How you do this depends on the IIS version.
I would recommend using a tool like ISAPIRewrite [http://www.isapirewrite.com/] to manage this for IIS 6, or the built-in URL Rewriting for IIS 7.
No reason to reinvent the wheel...
I believe you can specify an ASPX to run on 404 errors. That page can perform the redirect.
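As a sketch of that approach, assuming a hypothetical custom 404 page named Redirect301.aspx (ASP.NET's customErrors redirect passes the original path in the aspxerrorpath query string parameter):

<customErrors mode="On">
  <error statusCode="404" redirect="~/Redirect301.aspx" />
</customErrors>

And in the Redirect301.aspx code-behind:

protected void Page_Load(object sender, EventArgs e)
{
    string originalPath = Request.QueryString["aspxerrorpath"] ?? "/";
    // Permanent redirect to the same path on the new domain.
    Response.StatusCode = 301;
    Response.AddHeader("Location", "http://subdomain.newdomain" + originalPath);
    Response.End();
}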
I would recommend doing this at the DNS level. I would redirect with a permanent 301 redirect to ensure that your search engine rankings are not affected.

Set Path dynamically in Forms Authentication

Here's the problem we're facing.
In a hosted environment setup, we're hosting the same project multiple times. We currently manually specify a Path in the forms config section of our web.config. However, to smooth out our deployment process, we'd like to set the Path depending on the Virtual Directory name.
Is there a way for us to dynamically set the Path in the web.config?
There's an overload of FormsAuthentication.SetAuthCookie that takes the cookie path as a parameter, so if you're handling the login process yourself then you can just pass the path of your choice.
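For example, something along these lines when you log the user in yourself (a minimal sketch; userName stands for whatever your login logic produced):

// Scope the auth cookie to this application's virtual directory.
FormsAuthentication.SetAuthCookie(userName, false, Request.ApplicationPath);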
The problem is that the standard System.Web.UI.WebControls.Login will only use the default path value. You could, however, handle the LoggedIn event to fix the path...
void FixCookie(object sender, EventArgs args)
{
    // Could also use Request.ApplicationPath here to derive the path dynamically.
    Response.Cookies[FormsAuthentication.FormsCookieName].Path = "/my-custom-path";
}
