Replacement for ASP.NET Virtual Directory for Multi-tenancy

I am working on an ASP.NET WebForms Application, using ASP.NET 4.5
The application has multi-tenancy support. Each tenant has its own URL, like:
http://myApplication.net/DemoTenant1/
Very simplified: in Login.aspx the application calls this method and translates the URL to an internal ID.
public static string getTenant(HttpRequest request)
{
    return request.Url.ToString();
}
The problem is that we now have more than 200 tenants, and for each one we need to define a web application in IIS, which is
a bunch of work :-)
probably very inefficient, as a separate worker process is opened for each tenant
I am looking for a smart replacement where I stay compatible with the old URLs.
I am looking for ideas on how to solve this via URL Routing, or maybe by mixing WebForms with MVC and adding a Login controller?
Also open to other ideas...

I agree with what Alexander said; the proper way to do this would be with URL Routing.
But... If you are trying to save time...
First, remove all of your web applications;
So get rid of...
http://myApplication.net/DemoTenant1/
http://myApplication.net/DemoTenant2/
http://myApplication.net/DemoTenant3/
And then you need to make sure that typing in the following:
http://myApplication.net/
... takes you to the actual WebApplication you want to use.
Then, in the global.asax file... you need to capture 404 exceptions.
So when someone types in:
http://myApplication.net/DemoTenant1/
... it will throw a 404 exception which you could catch in your global.asax file like this:
void Application_Error(object sender, EventArgs e)
{
    string urlData = Request.ServerVariables["SCRIPT_NAME"];
    // e.g. "/DemoTenant1/Login.aspx" -> take the first segment as the tenant
    string tenant = urlData.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries)[0];
    Server.ClearError();
    Response.Redirect("~/Login.aspx?tenant=" + tenant);
}
It's a bit messy, but I have done this in the past when I was in exactly the same situation as you. Although, you do now have the routing module built by Microsoft (which I did not have at the time). I am quite sure that you can use the routing modules within WebForms, without having to use MVC.
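For reference, here is roughly what that routing approach could look like (a minimal sketch using MapPageRoute from System.Web.Routing, available in .NET 4; the route and file names are my own illustration, not the asker's code):

// Global.asax.cs -- a sketch, assuming one shared WebForms application
// where the first URL segment identifies the tenant.
using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // Routes e.g. http://myApplication.net/DemoTenant1/ to a single shared
        // Login.aspx, instead of one IIS web application per tenant.
        RouteTable.Routes.MapPageRoute("TenantLogin", "{tenant}", "~/Login.aspx");
    }
}

// Login.aspx.cs can then read the tenant name from the route values:
// string tenant = (string)Page.RouteData.Values["tenant"];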

Related

How can I create a page that works like Stack Overflow

I want to know how these pages work!
Like this:
https://stackoverflow.com/questions/ask
There is no extension at the end of the address!
Is this a way to call web methods directly?!
I wrote this page, but I think it's not right!
public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string name = Request.QueryString["name"];
        if (Request.PathInfo == "/SayHi")
            Response.Write(SayHi(name));
    }

    [WebMethod]
    public static string SayHi(string name)
    {
        return "Hi " + name;
    }

    //[WebMethod]
    //public static string SayHi()
    //{
    //    return "Hi ";
    //}
}
For ASP.NET, you can use ASP.NET Routing, which will allow you to separately configure what the URLs should look like.
You can use it both for regular WebForms apps and with the newer ASP.NET MVC.
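As a quick illustration (a sketch only; route and file names are mine, not from the question), WebForms routing can serve an extensionless URL from an ordinary .aspx page:

// Global.asax.cs -- a sketch using System.Web.Routing (.NET 4+).
using System;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    void Application_Start(object sender, EventArgs e)
    {
        // http://example.com/SayHi/Joe is served by Default.aspx -- no extension.
        RouteTable.Routes.MapPageRoute("SayHi", "SayHi/{name}", "~/Default.aspx");
    }
}

// In Default.aspx.cs, read the route value instead of Request.PathInfo:
// string name = (string)Page.RouteData.Values["name"];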
Take a look at ASP.NET MVC. It's the framework that runs the Stack Overflow site, per this other question. MVC uses the routing engine to allow URLs without a trailing ".aspx".
Stack Overflow uses ASP.NET MVC as its core web technology, and you are right, there are no extensions, because a routing engine handles the requests.
In your example:
http://stackoverflow.com/questions/ask
This would equate to the Stack Overflow site invoking the ask action on its questions controller and displaying the corresponding view, based upon the rules set up for the routing engine.
Read ASP.NET MVC Routing Overview for more information on how ASP.NET MVC routing works.
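As a sketch of such rules, this is the stock route registration from the MVC project template (not Stack Overflow's actual configuration); with it, /questions/ask resolves to the Ask action on QuestionsController:

// Global.asax.cs of an ASP.NET MVC application (called from Application_Start).
using System.Web.Mvc;
using System.Web.Routing;

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default",                                    // route name
        "{controller}/{action}/{id}",                 // URL pattern, no extensions
        new { controller = "Home", action = "Index", id = UrlParameter.Optional });
}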
UPDATE:
For more information on what software and hardware the Stack Overflow site was originally built on, read What Was Stack Overflow Built With?. This is generally still correct, although some of the hardware, and the amount of it, may have changed with an increased user base.

ASP.NET Routing - GetRouteData does not work if path exists

I have an HttpModule which intercepts all requests and loads data from the database based on routing rules. However, I run into one problem all the time: GetRouteData only works if the path does not exist:
var routeData = RouteTable.Routes.GetRouteData(new HttpContextWrapper(HttpContext.Current));
Assuming a request comes in for the URL http://localhost/contact, I get the correct routing data relating to that URL if the path does not exist in the file system. The problem appears when I want to customize the page at that URL, which I do by creating an aspx page at the path ~/contact/default.aspx. Once I do that, GetRouteData returns null.
I have even tried creating a new HttpContext object, but I still cannot retrieve route data if the page exists.
Has anyone ever run into this problem? Is there a solution/workaround?
All help will be greatly appreciated.
Set RouteCollection.RouteExistingFiles to true.
public static void RegisterRoutes(RouteCollection routes)
{
    // Cause paths to be routed even if they exist physically
    routes.RouteExistingFiles = true;

    // Map routes
    routes.MapPageRoute("...", "...", "...");
}
Beware though: IIS 7 behaves a little differently than the development server used when debugging within Visual Studio. I got bitten by this when I deployed my application to the web. Check out this feedback I submitted to Microsoft Connect.

stopping ZmEu attacks with ASP.NET MVC

Recently my Elmah exception logs are full of attempts from people using this damn ZmEu security software against my server.
For those thinking "what the hell is ZmEu?", here is an explanation...
"ZmEu appears to be a security tool used for discovering security holes in version 2.x.x of PHPMyAdmin, a web-based MySQL database manager. The tool appears to have originated from somewhere in Eastern Europe. Like what seems to happen to all black hat security tools, it made its way to China, where it has been used ever since for non-stop brute force attacks against web servers all over the world."
Here's a great link about this annoying attack -> http://www.philriesch.com/articles/2010/07/getting-a-little-sick-of-zmeu/
I'm using .NET, so they aren't going to find PHPMyAdmin on my server, but the fact that my logs are full of ZmEu attacks is becoming tiresome.
The link above provides a great fix using .htaccess, but I'm using IIS 7.5, not Apache.
I have an ASP.NET MVC 2 site, so I'm using the Global.asax file to create my routes.
Here is the .htaccess suggestion:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/path/to/your/abusefile.php
RewriteCond %{HTTP_USER_AGENT} (.*)ZmEu(.*)
RewriteRule .* http://www.yourdomain.com/path/to/your/abusefile.php [R=301,L]
</IfModule>
My question: is there anything I can add like this in the Global.asax file that does the same thing?
An alternative answer to my other one... this one specifically stops Elmah from logging the 404 errors generated by ZmEu, while leaving the rest of your site's behaviour unchanged. This might be a bit less conspicuous than returning messages straight to the hackers.
You can control what sorts of things Elmah logs in various ways, one way is adding this to the Global.asax
void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e)
{
    if (e.Exception.GetBaseException() is HttpException)
    {
        HttpException httpEx = (HttpException)e.Exception.GetBaseException();
        if (httpEx.GetHttpCode() == 404)
        {
            // Guard against requests that send no User-Agent header at all
            if (Request.UserAgent != null && Request.UserAgent.Contains("ZmEu"))
            {
                // stop Elmah from logging it
                e.Dismiss();
                // log it somewhere else
                logger.InfoFormat("ZmEu request detected from IP {0} at address {1}", Request.UserHostAddress, Request.Url);
            }
        }
    }
}
For this event to fire, you'll need to reference the Elmah DLL from your project, and add a using Elmah; to the top of your Global.asax.cs.
The line starting logger.InfoFormat assumes you are using log4net. If not, change it to something else.
The ZmEu attacks were annoying me too, so I looked into this. It can be done with an HttpModule.
Add the following class to your project:
using System;
using System.Web;
//using log4net;

namespace YourProject
{
    public class UserAgentBlockModule : IHttpModule
    {
        //private static readonly ILog logger = LogManager.GetLogger(typeof(UserAgentBlockModule));

        public void Init(HttpApplication context)
        {
            context.BeginRequest += new EventHandler(context_BeginRequest);
        }

        void context_BeginRequest(object sender, EventArgs e)
        {
            HttpApplication application = (HttpApplication)sender;
            HttpRequest request = application.Request;

            // Guard against requests that send no User-Agent header at all
            if (request.UserAgent != null && request.UserAgent.Contains("ZmEu"))
            {
                //logger.InfoFormat("ZmEu attack detected from IP {0}, aiming for url {1}", request.UserHostAddress, request.Url.ToString());
                HttpContext.Current.Server.Transfer("RickRoll.htm");
            }
        }

        public void Dispose()
        {
            // nothing to dispose
        }
    }
}
and then add the following line to web.config
<httpModules>
...
<add name="UserAgentBlockFilter" type="YourProject.UserAgentBlockModule, YourProject" />
</httpModules>
... and then add a suitable .htm page to your project so there's somewhere to send them to.
Note that if you're using log4net you can uncomment the log4net lines in the code to log the occasions when the filter kicks in.
This module has worked for me in testing (when I send the right userAgent values to it). I haven't tested it on a real server yet. But it should do the trick.
Although, as I said in the comments above, something tells me that returning 404 errors might be a less conspicuous response than letting the hackers know that you're aware of them. Some of them might see something like this as a challenge. But then, I'm not an expert on hacker psychology, so who knows.
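One deployment note, in case the site runs in the IIS 7.x integrated pipeline (likely, since the question mentions IIS 7.5): there the module is registered under <system.webServer> instead of <system.web>:
<system.webServer>
  <modules>
    <add name="UserAgentBlockFilter" type="YourProject.UserAgentBlockModule, YourProject" />
  </modules>
</system.webServer>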
Whenever I get a ZmEu or phpMyAdmin or forgotten_password request, I redirect the query to:
<meta http-equiv='refresh' content='0;url=http://www.ripe.net$uri' />
[or apnic or arin]. I'm hoping the admins at ripe.net don't like getting hacked.
On IIS 6.0 you can also try this...
Set your website in IIS to use host headers. Then create a web site in IIS, using the same IP address, but with no host header definition. (I labeled mine "Rogue Site" because some rogue once deliberately set the DNS for his domain to resolve to my popular government site; I'm not sure why.) Anyway, using host headers on multiple sites is a good practice, and having a site defined for the case when no host header is included is a way to catch visitors who don't have your domain name in the HTTP request.
On the site with no host header, create a home page that returns a response header status of "HTTP 410 Gone". Or you can redirect them elsewhere.
Any bots that try to visit your server by IP address rather than by domain name will resolve to this site and get the "410 Gone" error.
I also use Microsoft's URLScan, and modified the URLScan.ini file to exclude the user agent string "ZmEu".
If you are using IIS 7.x you can use Request Filtering to block the requests:
Scan Headers: User-agent
Deny Strings: ZmEu
To test whether it works, start Chrome with the parameter --user-agent="ZmEu".
This way ASP.NET is never invoked, and it saves you some CPU/memory.
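If you prefer keeping this in source control rather than clicking through the IIS manager, this is roughly the equivalent web.config fragment (a sketch; filtering rules of this form need IIS 7.5 or later):
<system.webServer>
  <security>
    <requestFiltering>
      <filteringRules>
        <filteringRule name="BlockZmEu" scanUrl="false" scanQueryString="false">
          <scanHeaders>
            <add requestHeader="User-agent" />
          </scanHeaders>
          <denyStrings>
            <add string="ZmEu" />
          </denyStrings>
        </filteringRule>
      </filteringRules>
    </requestFiltering>
  </security>
</system.webServer>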
I added this pattern in Microsoft URL Rewrite Module:
^$|EasouSpider|Add Catalog|PaperLiBot|Spiceworks|ZumBot|RU_Bot|Wget|Java/1.7.0_25|Slurp|FunWebProducts|80legs|Aboundex|AcoiRobot|Acoon Robot|AhrefsBot|aihit|AlkalineBOT|AnzwersCrawl|Arachnoidea|ArchitextSpider|archive|Autonomy Spider|Baiduspider|BecomeBot|benderthewebrobot|BlackWidow|Bork-edition|Bot mailto:craftbot#yahoo.com|botje|catchbot|changedetection|Charlotte|ChinaClaw|commoncrawl|ConveraCrawler|Covario|crawler|curl|Custo|data mining development project|DigExt|DISCo|discobot|discoveryengine|DOC|DoCoMo|DotBot|Download Demon|Download Ninja|eCatch|EirGrabber|EmailSiphon|EmailWolf|eurobot|Exabot|Express WebPictures|ExtractorPro|EyeNetIE|Ezooms|Fetch|Fetch API|filterdb|findfiles|findlinks|FlashGet|flightdeckreports|FollowSite Bot|Gaisbot|genieBot|GetRight|GetWeb!|gigablast|Gigabot|Go-Ahead-Got-It|Go!Zilla|GrabNet|Grafula|GT::WWW|hailoo|heritrix|HMView|houxou|HTTP::Lite|HTTrack|ia_archiver|IBM EVV|id-search|IDBot|Image Stripper|Image Sucker|Indy Library|InterGET|Internet Ninja|internetmemory|ISC Systems iRc Search 2.1|JetCar|JOC Web Spider|k2spider|larbin|larbin|LeechFTP|libghttp|libwww|libwww-perl|linko|LinkWalker|lwp-trivial|Mass Downloader|metadatalabs|MFC_Tear_Sample|Microsoft URL Control|MIDown tool|Missigua|Missigua Locator|Mister PiX|MJ12bot|MOREnet|MSIECrawler|msnbot|naver|Navroad|NearSite|Net Vampire|NetAnts|NetSpider|NetZIP|NextGenSearchBot|NPBot|Nutch|Octopus|Offline Explorer|Offline Navigator|omni-explorer|PageGrabber|panscient|panscient.com|Papa Foto|pavuk|pcBrowser|PECL::HTTP|PHP/|PHPCrawl|picsearch|pipl|pmoz|PredictYourBabySearchToolbar|RealDownload|Referrer Karma|ReGet|reverseget|rogerbot|ScoutJet|SearchBot|seexie|seoprofiler|Servage Robot|SeznamBot|shopwiki|sindice|sistrix|SiteSnagger|SiteSnagger|smart.apnoti.com|SmartDownload|Snoopy|Sosospider|spbot|suggybot|SuperBot|SuperHTTP|SuperPagesUrlVerifyBot|Surfbot|SurveyBot|SurveyBot|swebot|Synapse|Tagoobot|tAkeOut|Teleport|Teleport Pro|TeleportPro|TweetmemeBot|TwengaBot|twiceler|UbiCrawler|uptimerobot|URI::Fetch|urllib|User-Agent|VoidEYE|VoilaBot|WBSearchBot|Web Image Collector|Web Sucker|WebAuto|WebCopier|WebCopier|WebFetch|WebGo IS|WebLeacher|WebReaper|WebSauger|Website eXtractor|Website Quester|WebStripper|WebStripper|WebWhacker|WebZIP|WebZIP|Wells Search II|WEP Search|Widow|winHTTP|WWWOFFLE|Xaldon WebSpider|Xenu|yacybot|yandex|YandexBot|YandexImages|yBot|YesupBot|YodaoBot|yolinkBot|youdao|Zao|Zealbot|Zeus|ZyBORG|Zmeu
The top listed one, "^$", is the regex for an empty string. I do not allow bots to access the pages unless they identify themselves with a user-agent; I found that most often the only things hitting these applications without a user agent were security tools gone rogue.
When blocking bots, I advise you to be very specific. Simply using a generic word like "fire" could match "Firefox". You could also adjust the regex to fix that issue, but I found it much simpler to be more specific, and that has the added benefit of being more informative to the next person to touch that setting.
Additionally, you will see I have a rule for Java/1.7.0_25; in this case it happened to be a bot using this version of Java to slam my servers. Do be careful blocking language-specific user agents like this: some languages, such as ColdFusion, run on the JVM and use the language user agent and web requests to localhost to assemble things like PDFs. JRuby, Groovy, or Scala may do similar things, though I have not tested them.
Set your server up properly and don't worry about the attackers :)
All they do is try some basic possibilities to see if you've overlooked an obvious pitfall.
There is no point filtering out this one hacker who is nice enough to sign his work for you.
If you take a closer look at your log files, you will see there are so many bots doing this all the time.

Cookies. Case Sensitive Paths. How to rewrite URLs

We have a sizable collection of applications (>50) all running under a single domain but with different virtual directories. Pretty standard stuff. We store cookies using paths to segregate cookies by application. Paths are set to the application path.
This seems to work fine as long as the casing of the URL is the same as the application path. If it is different, the browser fails to retrieve the collection of cookies.
Is there any very basic way (ISAPI? Global ASAX?) to rewrite all URLs so that they match the Application Path? Ideally this is something that can be configured at the application level.
Currently stuck on IIS6.
thanks
Wondering if this is a possible (even a good) solution:
In Global.asax:
void Application_BeginRequest(object sender, EventArgs e)
{
    string url = HttpContext.Current.Request.Url.PathAndQuery;
    string application = HttpContext.Current.Request.ApplicationPath;

    // Redirect only when the request path matches the application path
    // except for casing; otherwise leave the request alone.
    if (!url.StartsWith(application, StringComparison.Ordinal) &&
        url.StartsWith(application, StringComparison.OrdinalIgnoreCase))
    {
        HttpContext.Current.Response.Redirect(application + url.Substring(application.Length));
    }
}
Using relative URLs in conjunction with a BASE tag might work?
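For example (a sketch; the path is hypothetical), placing this in the <head> of each application's master page would make relative URLs resolve against the canonical casing:
<base href="http://myApplication.net/App1/" />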

How to defer to the "default" HttpHandler for ASP.NET MVC after handling a potential override?

I need to implement a custom handler for MVC that gives me the first look at URL requests, to determine whether it should rewrite them before submitting the URL to the routing engine. Any pattern is a candidate for the redirect, so I need to intercept the request before the standard MVC routing engine takes a look at it.
After looking at a whole bunch of examples, blogs, articles, etc. of implementing custom routing for ASP.NET MVC, I still haven't found a use-case that fits my scenario. We have an existing implementation for ASP.NET that works fine, but we're returning the "standard" handler when no overrides are matched. The technique we're currently using is very similar to that described in this MSDN article: http://msdn.microsoft.com/en-us/library/ms972974.aspx#urlrewriting_topic5 which says "the HTTP handler factory can return the HTTP handler returned by the System.Web.UI.PageParser class's GetCompiledPageInstance() method. (This is the same technique by which the built-in ASP.NET Web page HTTP handler factory, PageHandlerFactory, works.)".
What I'm trying to figure out is: how can I get the first look at the incoming request, then pass it to the MVC routing if the current request doesn't match any of the dynamically configured (via a data table) values?
You would need to:
Not use the standard MapRoute extension method in global.asax (this is what sets up the route handler).
Instead, write your own route subtype, like this.
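A minimal sketch of what such a route subtype could look like (RewriteTable.TryGetOverride stands in for the data-table lookup described in the question; it is hypothetical, not an existing API):

using System.Web;
using System.Web.Routing;

public class OverridableRoute : Route
{
    public OverridableRoute(string url, IRouteHandler routeHandler)
        : base(url, routeHandler) { }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
        // First look: consult the dynamically configured overrides.
        string rewritten;
        if (RewriteTable.TryGetOverride(httpContext.Request.Path, out rewritten))
        {
            httpContext.RewritePath(rewritten);
        }

        // Then let standard matching run against the (possibly rewritten) URL.
        return base.GetRouteData(httpContext);
    }
}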
Although I explained to Craig that I may not need the override scenario any more (favoring the future rather than the past), I realized there is an easy and reasonably clean place this override could be implemented: in the Default.aspx code-behind file. Here's the standard Page_Load method:
public void Page_Load(object sender, System.EventArgs e)
{
    // Change the current path so that the Routing handler can correctly interpret
    // the request, then restore the original path so that the OutputCache module
    // can correctly process the response (if caching is enabled).
    string originalPath = Request.Path;
    HttpContext.Current.RewritePath(Request.ApplicationPath, false);
    IHttpHandler httpHandler = new MvcHttpHandler();
    httpHandler.ProcessRequest(HttpContext.Current);
    HttpContext.Current.RewritePath(originalPath, false);
}
As you can see, it would be very easy to stick a URL-processing interceptor in front of the MVC handler. This changes the standard default page behavior and would need to be reflected in any other MVC app you'd want to create with this same method, but it sure looks like it would work.
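To make that concrete, a sketch of the interceptor (TryRewriteFromTable is a hypothetical lookup against the dynamically configured rules; the rest follows the stock Page_Load above):

public void Page_Load(object sender, System.EventArgs e)
{
    string originalPath = Request.Path;

    // First look: if a configured override matches, rewrite to its target;
    // otherwise fall through to the stock behavior so MVC routing sees the
    // request unchanged.
    string overridePath;
    string pathForMvc = TryRewriteFromTable(originalPath, out overridePath)
        ? overridePath
        : Request.ApplicationPath;

    HttpContext.Current.RewritePath(pathForMvc, false);
    IHttpHandler httpHandler = new MvcHttpHandler();
    httpHandler.ProcessRequest(HttpContext.Current);
    HttpContext.Current.RewritePath(originalPath, false);
}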
