I have a problem with a website. The server is IIS 7, running ASP.NET on the .NET 4.0 CLR. We are hosting a Sitecore application; I hesitated to add it as a tag, because I really feel it's more the 'why' of the question than something related to the underlying tech that's causing the problem.
One of the things Sitecore does is add a boatload of custom pipelines. One of these pipelines is called the LayoutResolverPipeline, which is responsible for determining the path to the layout file the requested page will use. We've come up with a terribly useful and complicated way of hosting global content items across multiple domains. Which domain will serve which items is completely configurable through the Administrator web GUI (aka the Sitecore shell). The end goal is to make it possible for our marketing/consumer experience team to run multivariate testing to find the best user experience.
To that end, we have a 'launch' page that is responsible for considering everything about the current user, everything about the current system and domain settings, and determines which experience to give the customer. For most domains, this comes down to a weighted roll of the dice - for the test results to be statistically sound, they have to be sufficiently random. It is written as an IHttpHandler and it stores its decisions in the HttpContext.Current.Session (which is accomplished by also having it implement the IRequiresSessionState interface). The decision is stored so that if the customer decides to backtrack, we don't roll the dice again and instead give them a consistent experience for the duration of their visit. The decision is carried out by the handler issuing a 302 redirect for the next page in the customer's visit.
The launch handler is defined in the web.config file in the usual way:
<system.webServer>
  <handlers>
    <add name="LaunchHandler" verb="*" path="launch.ashx"
         type="CMS.HttpHandlers.LaunchRequestHandler, CMS" />
  </handlers>
</system.webServer>
We occasionally do business with partners who, for whatever reason, don't want the resultant 302 between their page and ours. They will instead link directly to a certain customer experience. Over time, however, we deprecate, move or obsolete whole user experiences, which for certain demanding and lazy partners results in lingering links to unsupported or non-existent items. We also have to handle the cases of people mis-typing, mis-remembering, mis-linking, revisiting from their browser history or just trying random URLs.
These latter cases have resulted in some nasty exceptions in the LayoutResolverPipeline. I am trying to resolve these exceptions by having it fall back to the LaunchHandler when it can't figure out what to do. I have this implemented as a redirect, but I would like to simply invoke the LaunchHandler directly; it is going to issue a 301 to a different item anyway, and having multiple redirects on a single request is a costly waste of resources that I would like to avoid.
Enough background. The problem is that LayoutResolverPipeline is bound to the HttpBeginRequest portion of the IIS processing stack, which is well before the Session information is ready. This is a constraint of Sitecore's and it can't be moved without solving a whole load of other problems.
Questions:
Is there a way to pass control to a specific IHttpHandler other than redirecting to the URL it is bound to?
Is there a way to rejoin the code at a later point in the event pipeline? I suppose this would mean binding to the Application.PostAcquireRequestState event for a single request only, which sounds ludicrous.
Is there a way to acquire session state information early?
I'm of course open to suggestions for how I might be doing it completely wrong. Oh, and if you know of a more useful tag to throw on this for the ASP.NET/IIS pipeline specifically, I wasn't able to find one that wasn't a red herring. Thanks!
I don't think you want to go manually invoking any handlers... that sounds pretty hacky. What about using Server.Transfer() here instead of a 301 Redirect? Then it's transparent on the user's end. Of course the disadvantage there is that it doesn't update the apparent URL, but you can't do that without some sort of redirect going on.
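For example, a sketch (with the caveat that Server.Transfer must run while a handler is already executing, so it won't help from the BeginRequest-stage pipeline mentioned above):

// Server-side hand-off: no extra round-trip, browser URL stays the same.
HttpContext.Current.Server.Transfer("~/launch.ashx");

// Alternatively, hand off to a handler instance directly,
// preserving the form and query string:
HttpContext.Current.Server.Transfer(new CMS.HttpHandlers.LaunchRequestHandler(), true);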
Related
An old ASP.NET Web Forms application, developed in Visual Studio 2010, suddenly does not run anymore, and this message appears in the log file:
Session state has created a session id,
but cannot save it because the response was already flushed by the application.
No new deployment has been made, and no code modifications took place.
So far I haven't found any solution to this.
What should I check?
Note that the source code is no longer available, so it would be very difficult to change the code and proceed with a new deployment.
Thanks in advance.
Luis
This would suggest that someone might be hitting the site and jumping directly to some URL (and thus code) that, say, does a response redirect to another page or some such.
Remember, when code-behind runs and, say, redirects to another page, in most cases the running code for the current page is terminated, and that is normal behavior.
However, the idea that you're going to debug a web site when you don't have the code to debug? Gee, I don't see how that's going to work at all. As noted, if this just started, then it sounds like incoming requests are hitting pages that don't expect to be hit "first" - pages that expect to be called ONLY from other pages in the site, after some session() and other important values have been set up BEFORE such pages are hit.
It's also not clear whether the site is using SQL-based sessions or just in-memory sessions. In-memory can be (and is) faster, but it's also not particularly reliable. Now, if you deployed to a new web server or new hosting, session errors can often start to appear, and this is due to the massive difference between cloud-based hosting and older hosting solutions that run on a single server.
Cloud computing is real utility computing, and thus when you host a web site on such systems, in-memory session() cannot be used anymore, since multiple servers can and will be used to "dish out" web pages. Since more than one server might be used, obviously in-memory session() can't work: a few web pages might be served by one server, and then a few more pages might be served by another. And using shared memory for a session is limited to ONE server, since multiple servers don't and can't transfer their memory to other servers.
So, this suggests that you want to be sure that SQL Server based sessions are being used here - and for any kind of server farm, or any kind of system that load-balances between more than one server, you of course HAVE to use SQL Server based sessions, since in-memory can't work in that kind of environment.
The error could also be due to excessive server load - the session database is often "locked" for a short period of time and can thus become a bottleneck. So for, say, years you might not see an issue, but then as load and use of the web site increase, this becomes noticeable whereas in the past it was not. I suppose the database used for storing sessions could be checked, or looked at, since, as you note, you don't have the ability to test and debug the code. I would REALLY, REALLY work towards solving this lack of source code for the web site, since without it you have no real means to manage, maintain, and fix issues for that web site.
But abrupt terminations of web pages? As noted, this could be an error triggered in code, and thus the page never finished what it was supposed to do. And, as noted, a page that expects some session() values but does not have them, as explained above, could be the problem. It's not clear whether your errors also show which URL this was occurring for.
While nothing seems to have changed - something obviously did.
Ultimately, you need to get that source code, or deal with the people/vendor that supplies the code for that site. If you don't have a vendor and you don't have source code, you are pretty much attempting to work on a car whose hood you can't even open to check what's going on underneath.
So, one suggestion here? Someone is hitting a page that expected some value(s) in session to exist. Often the simple solution is to shove ANY simple dummy value into session so the session REALLY does get created, and then when the page attempts to save the session(), there is one to save!
In other words, this error often occurs when the session is saved but no session exists. For such pages, as noted, a tiny code change - doing this: session("zoozoo") = "my useless text" - will fix the error. So it sounds like session is being lost.
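In C# code-behind, the same throwaway trick is a one-liner:

// Shove a dummy value in so a session really exists and can be saved.
Session["zoozoo"] = "my useless text";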
As noted, an error on a web page can also trigger an app-pool restart. If the app-pool restarts, then session is lost (with in-memory sessions). With session lost, any page that terminates early AND has also used session() will trigger this error.
So, this sounds like the app-pool is being restarted and session is being lost (you can google the many reasons why an app-pool restarts). Critical to this issue, however, is whether you are using SQL-based sessions or in-memory (server) sessions. It sounds like some code is triggering an error; with an error triggered, the app-pool restarts; with the app-pool restarted, the in-memory session is blown away; and then, without ANY session at all, the attempt to save the session triggers the exact error message you see. (This is also why shoving a dummy value into the session can fix some pages - you can't save a "nothing" session, and if you try, you get that exact error message.)
But, as noted, you can't make these simple changes to the code anyway, right?
But first, on this issue: are you using memory-based sessions or not? That feature can be set up and configured in IIS, without changes to the code base. So one quick fix might be to turn on SQL Server based sessions. It will cost some web site performance (around 10%), but the increased reliability is more than worth the performance hit.
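For reference, a sketch of that switch in web.config (the server name and timeout are placeholders; the session database itself is created beforehand with the aspnet_regsql.exe tool):

<system.web>
  <!-- SQL-backed session state: survives app-pool restarts and works
       across multiple servers, at some performance cost. -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=YOURSQLSERVER;Integrated Security=SSPI;"
                timeout="20" />
</system.web>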
Another area to look at? Are AJAX calls being made to a page, again without any previous session having been created? Then once again we're down to a change in end-user behavior - possibly people hitting a page before having logged in or done other things - and again one would see this error crop up.
Using PerfMon, I can see that my ASP.NET Applications (Total)\Sessions Active is growing indefinitely to the tens of thousands, and I suspect this is causing a recent performance degradation we are observing.
The growth appears to be around a few dozen per minute.
We are using .NET 4.5 and IIS 7.5.
How can I get a sample of some details regarding these sessions using administrative tools? What could cause this? What next steps can I take to diagnose this odd behavior?
Place a hook on the Session_OnStart event (more on those events at MSDN: https://msdn.microsoft.com/en-us/library/ms178583.aspx).
From there you should examine and escalate depending on the situation.
First, simply place a breakpoint inside the event handler and do some normal browsing in your development environment. You can use incognito windows in Chrome to achieve anonymity for the sake of creating sessions. If you need to do this in production, then you should set up some sort of logging database, or leverage your existing logging database, to record the session requests (you can serialize them temporarily if you need all of the data).
Look at the request path for these sessions, and at some of the contextual data in general. If there are erroneous sessions, the handler will be called multiple times per request, and that should be immediately obvious. From there you can determine how to handle the extra paths or requests that are coming in.
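A minimal sketch of that hook in Global.asax.cs (Trace is just one possible sink; swap in whatever logging you already use):

protected void Session_Start(object sender, EventArgs e)
{
    // Record which request created the session; erroneous session churn
    // (favicons, bots, resource requests) shows up immediately in this log.
    var request = HttpContext.Current.Request;
    System.Diagnostics.Trace.WriteLine(string.Format(
        "Session {0} started by {1} {2} (UA: {3})",
        Session.SessionID, request.HttpMethod, request.RawUrl, request.UserAgent));
}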
Since you tagged this asp.net, it is hard to tell what exact version or framework you are using. However, in general I have noticed that many browsers will accidentally cause an extra session by requesting resources, especially the favicon.
It is highly recommended that you do not create sessions for favicons. In ASP.NET MVC you can do that by ignoring the route; you can also ignore other excessive resource requests. That is done in the Global.asax.cs file like this:
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.IgnoreRoute("favicon.ico");
See if you have a place in your application that you can do this if the extra sessions are being created as a result of these types of requests.
I am having some problems at the moment using a LoginServlet, running OpenLaszlo 4.9 on Tomcat 7.
I have Tomcat configured with crossContext set to true, which allows me to work with other app contexts on the same server - specifically a login servlet. My only other app is the OpenLaszlo presentation server, LPS (lps-4.9.0).
I am using a Tomcat request filter that snoops the incoming addresses and looks for a particular authentication cookie, which then makes its way to the LoginServlet, which in turn forwards to the OpenLaszlo page. This was done to KEEP the cookie alive when the request filter was awakened at the loading of the OpenLaszlo page.
All of that is working now.
There are no errors or warnings in the lps.log file, or in localhost.<date>.log either; however, the page loading goes on forever and never completes.
Could it be something that I am passing along in the forwarded URL? I am using at least two parameters, to set lzr to "dhtml" and lzt to "html".
I can't even get a simple <canvas> page with a simple button to load. Has anyone seen this, and been able to fix the problem?
Since I first wrote my description, I have written another plea for help to some friends and ex-coworkers, and this will help update the details of what I have discovered so far.
Here's the scenario: I am using Tomcat 7 and have installed the WAR file for OpenLaszlo 4.9. Alongside this, I created a LoginServlet hierarchy - code and web.xml file - just under "webapps", at the same level where lps-4.9.0 is installed.
The sequence of events is the following:
1. A login page comes up that takes the username and password, and sends them off to /LoginServlet to process. Note: I have also written and registered a request filter for Tomcat that halts traversal beyond /lps-4.9.0 and checks for proper authentication, retrieving the cookies from requests trying to access those levels.
2. In the LoginServlet, I create a MACH COOKIE that I send along with the response, so that the filter will allow me past the /lps-4.9.0 level. To do this I had to do a FORWARD operation to preserve the cookie; a REDIRECT would just drop it. Since you can't give a relative path higher than the servlet's root, I had to turn on Tomcat's "crossContext" feature, which allows me to do that within the same domain. And I have both contexts registered in Tomcat's conf directory in server.xml, I believe. Anyhow, it works: I can grab the /lps-4.9.0 context, get a RequestDispatcher, and then use that dispatcher to FORWARD the request/response pair to my OpenLaszlo file (the LZX file).
So it seems to get as far as LOADING the OpenLaszlo page, but when I perused the console messages in Chrome's Developer Tools debugger, it showed that it was actually trying to use the context of the original request (i.e. /LoginServlet), and of course that doesn't exist. I guess when I passed along the original request/response pair, the request still had the FIRST context attached, and the relative path to the file was derived from that.
QUESTION: Can I just copy the stuff from the original request, but change the context,
and forward THAT? Or architecturally should I try something else?
Thanks,
C
And the answer is..... You CAN'T DO IT... Period.
BTW. The Openlaszlo website server is DOWN, DEAD, KAPUT, NIX, GONE, NO MORE...
This will be the final project that I personally implement with this tool, now that there is no support.
It's very sad to see something that had the right idea about development cycle times, and about keeping client-side GUI construction simple, fast, and easy, die because of lack of interest. Say wha? It can't be because Flash was in jeopardy.
I'm pretty sure that we, as programmers, aren't so paranoid about losing our jobs that we think we must spend lots of hours CODING an interface to keep it secret. I'm certainly not paranoid about it. I know there is NetBeans for Swing-type GUIs, and I've heard that GWT has adopted something similar now, so I'll keep looking for that perfect invention and deal with what is left over.
Critical Path must have been purchased by someone else too, so the site sponsor has no motivation to keep it alive while it dies a slow death.
I have a web site that reports each unexpected server-side error to my email.
Quite often (once each 1-2 weeks) somebody launches automated tools that bombard the web site with a ton of different URLs:
sometimes they (hackers?) think my site hosts phpMyAdmin and they try to access vulnerable (I believe) PHP pages...
sometimes they try to access pages that don't actually exist but belong to popular CMSs
last time they tried to inject a bogus ViewState...
It is clearly not search engine spiders, as 100% of the requests that generated errors are requests to invalid pages.
So far they haven't done much harm; the only cost is that I need to delete a ton of server error emails (200-300)... But at some point they could probably find something.
I'm really tired of this and am looking for a solution that will block such 'spiders'.
Is there anything ready to use? Any tool, DLLs, etc.? Or should I implement something myself?
In the second case: could you please recommend an approach? Should I limit the number of requests from an IP per second (say, not more than 5 requests per second and not more than 20 per minute)?
P.S. Right now my web site is written using ASP.NET 4.0.
Such bots are not likely to find any vulnerabilities in your system if you just keep the server and software updated. They are generally just looking for low-hanging fruit, i.e. systems that have not been updated to fix known vulnerabilities.
You could build a bot trap to minimise such traffic. As soon as someone tries to access one of those non-existent pages that you know of, you could block all requests from that IP address with the same browser string for a while.
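A rough sketch of such a trap as an IHttpModule (the trap paths, block duration, and in-memory dictionary are all illustrative; a production version would need entry eviction, and a shared store if you run more than one server):

using System;
using System.Collections.Concurrent;
using System.Web;

// Illustrative bot trap: after a request for a known-bad path, block that
// client (IP + browser string) for a while.
public class BotTrapModule : IHttpModule
{
    private static readonly ConcurrentDictionary<string, DateTime> Blocked =
        new ConcurrentDictionary<string, DateTime>();
    private static readonly TimeSpan BlockFor = TimeSpan.FromMinutes(30);

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            var key = ctx.Request.UserHostAddress + "|" + ctx.Request.UserAgent;

            DateTime until;
            if (Blocked.TryGetValue(key, out until) && until > DateTime.UtcNow)
            {
                ctx.Response.StatusCode = 403;       // already trapped
                ctx.ApplicationInstance.CompleteRequest();
                return;
            }

            // Paths no legitimate visitor of this site ever requests.
            var path = ctx.Request.Path.ToLowerInvariant();
            if (path.Contains("/phpmyadmin") || path.Contains("/wp-admin"))
            {
                Blocked[key] = DateTime.UtcNow.Add(BlockFor);
                ctx.Response.StatusCode = 404;
                ctx.ApplicationInstance.CompleteRequest();
            }
        };
    }

    public void Dispose() { }
}

It would be registered under <modules> in the <system.webServer> section of web.config (IIS 7 integrated pipeline).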
There are a couple of things that you can consider...
You can use one of the available Web Application Firewalls. A WAF usually has a set of rules and an analytic engine that determine suspicious activity and react accordingly. For example, in your case it can automatically block attempts to scan your site, since it recognizes them as an attack pattern.
A simpler (but not 100% reliable) approach is to check the Referer URL (see the referer description on Wikipedia) and reject the request if it did not originate from one of your own pages; you should probably create an HttpModule for that purpose.
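A minimal sketch of that module (IsProtectedPath is a hypothetical helper standing in for your own rule; note the Referer header is client-supplied and trivially spoofed, so this is a speed bump rather than real security):

using System;
using System.Web;

// Sketch: reject requests for protected pages whose Referer is not one of
// our own pages.
public class ReferrerCheckModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            var referrer = ctx.Request.UrlReferrer;   // null on direct visits

            var fromOwnSite = referrer != null &&
                referrer.Host.Equals(ctx.Request.Url.Host,
                                     StringComparison.OrdinalIgnoreCase);

            if (!fromOwnSite && IsProtectedPath(ctx.Request.Path))
            {
                ctx.Response.StatusCode = 403;
                ctx.ApplicationInstance.CompleteRequest();
            }
        };
    }

    // Hypothetical rule: which URLs should only be reached from inside the site.
    private static bool IsProtectedPath(string path)
    {
        return path.StartsWith("/account", StringComparison.OrdinalIgnoreCase);
    }

    public void Dispose() { }
}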
And of course you want to be sure that your site addresses all known security issues from the OWASP Top 10 list (OWASP TOP 10). You can find a very comprehensive description of how to do this for ASP.NET here (OWASP Top 10 for .NET book in PDF); I also recommend reading the blog of the author of the aforementioned book: http://www.troyhunt.com/
There's nothing you can do (reliably) to prevent vulnerability scanning; the only real option is to stay on top of any vulnerabilities and prevent their exploitation.
If your site is only used by a select few, from fixed locations, you could maybe use an IP restriction.
I would be surprised if this is possible, but you never know.
Is there a way in which I could prioritise ASP.NET requests? For example, if a request is a NEW request (coming from location X), I would like it to take priority over a request coming from a known location.
This will be running under IIS 7, so can I make use of the integrated pipeline to pre-process requests before they take threads out of the ThreadPool?
Hmmm. Any feedback welcomed, even if it's to say No!
Thanks
Duncan
I don't think what you're after is possible in the truest sense of what you're asking for, but it might be possible to 'simulate' it at the application level. John's right: they're processed first come, first served. But you might be able to give some kind of priority within your web application by setting a cookie for all visitors and checking whether that cookie is present before you render your homepage. If it is not present, you could assume that the request is new and therefore continue to render your homepage (or whatever). If it is present, you might choose to redirect them to another page (or perhaps a cached copy of your page).
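A rough sketch of that check in the homepage's code-behind (the cookie name and the cached-copy URL are made up for illustration):

protected void Page_Load(object sender, EventArgs e)
{
    const string cookieName = "returning-visitor";   // made-up name

    if (Request.Cookies[cookieName] == null)
    {
        // Looks like a new visitor: mark them and render the full homepage.
        var cookie = new HttpCookie(cookieName, "1");
        cookie.Expires = DateTime.UtcNow.AddDays(30);
        Response.Cookies.Add(cookie);
    }
    else
    {
        // Known visitor: send them to a cheaper, cached copy of the page.
        Response.Redirect("/home-cached.aspx", false);
    }
}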
Like I said, this isn't the 'truest' sense of what you are after, but if your homepage is particularly process-intensive right now and you want some way to separate recurring visitors from new visitors, this might do the trick.
Since you've asked, though, I'd have to ask why it is necessary in your implementation to prioritise requests as you have mentioned. Is load on your web server a problem, and do you want to appear more responsive to new customers?
Just hazarding a guess - interesting question, though! :)
Best,
Richard.