I have a flex/LCDS stack, where I'm finding that after logout, I often (but not always) start receiving Duplicate HTTP Session errors on the client.
Here are the important facts about the stack:
The flex client has login/logout functionality within the app. The page does not refresh after the logout. (Therefore, the Flex app and the underlying mx.messaging.FlexClient remain initialised.)
A user may have multiple tabs open.
per-client-authentication is set to false - we're trying to achieve SSO (integrating with CAS), so the user principal is bound to the JSession.
The problem is most evident when using long-polling for messaging, and when there are two (or more) tabs open.
The problem is very difficult to reproduce when using RTMP or Streaming channels.
A user is bound to a JSession - i.e., if they log in on Tab1, they become logged in on Tab2.
When a user logs out from either tab, the JSession is invalidated.
Here's my current theory as to what's causing the issue:
Tab1 (T1) Starts client -> Issued ClientId1 (C1) -> JSession1 (J1) created
Tab2 (T2) Starts Client -> Issued ClientId2 (C2) -> Joins J1
T1 logs in -> J1 Unaffected
T2 logs in -> J1 Unaffected
T1 & T2 Both subscribe, start polling over amflongpolling
T1 sends logout -> J1 Invalidated -> J2 created
T2 sends poll (against J1)
T1 logout completes, returns with J2, updates cookie
The last two calls create a conflict, where LCDS sees the FlexClient as being related to two JSessions.
As a result, an error along the lines of the following is received:
Server.Processing.DuplicateSessionDetected Detected duplicate
HTTP-based FlexSessions, generally due to the remote host disabling
session cookies. Session cookies must be enabled to manage the client
connection correctly.
Note: I've been able to recreate the problem in a stand-alone project. I believe it's not an issue with our application-specific code; instead it's caused by the stateful, session-based nature of the setup and conflicts between multiple tabs sharing the same session.
In summary, I believe the issue is caused when the session is invalidated on the server as a result of calls from one tab but, before the response is sent to the browser to inform it of the new JSession, calls are issued under the old JSession.
What are some appropriate strategies to defend against this duplicate session issue?
Update
To clarify, while the scenario is similar to those discussed here, there are subtle differences which make the solutions in that article inappropriate.
Specifically, the article discusses preventing duplicate sessions by controlling the initial creation of JSessions across both browsers, using a JSP, or an orchestrated RemoteObject call.
Flex actually assists in this process by preventing outbound RemoteObject calls until the local FlexClient DSid variable is defined, showing that the initial session has been established.
My scenario differs because the JSession (and the associated LCDS FlexSession / client-side FlexClient objects) have already been established once (using the techniques discussed in that article) and subsequently invalidated via logout - which calls session.invalidate() - destroying the JSession.
The issue arises when Tab2 sends a call with a stale JSession, causing a duplicate HTTP session error. The situation then gets compounded: when LCDS throws the DuplicateHTTPSession error, it also invalidates all known JSessions attached to the client, meaning that Tab1, which had been fine, now has a stale JSession too. The next time Tab1 sends a call, it causes a DuplicateHTTPSession error, and the cycle repeats.
Unfortunately, the Flex framework hooks for delaying calls while sessions are established have no easy way (that I've found) of being re-enabled once set. I've tried the following, to no avail:
import mx.core.mx_internal;
import mx.messaging.FlexClient;

use namespace mx_internal;

// Reset DSid to get a new FlexSession established on LCDS
public function resetFlexSession():void
{
    FlexClient.getInstance().id = null;
    // Note: using FlexClient.NULL_ID instead also doesn't work.
}
I feel for you - I've fought this for a long time and never found a solution, but found a fix that worked for me so hopefully it will at least keep this issue under control until you can find the culprit. (And if you do, please post it here).
Now, I've got a slightly different environment than you (I'm using CF on the backend) so keep that in mind.
I also tried the whole "FlexClient.getInstance().id = null;" thing, and it didn't work by itself; it was how and where I implemented it that made it work.
So, this is what I did that made the problem go away.
On my main form, before ANY RemoteServer calls are made, I set up a creationComplete handler and placed this code you already know and love:
// Not sure if this is needed anymore, but I'm leaving it in
FlexClient.getInstance().id = null;
Next, in my very first call to the server, I gracefully handle the failure, and clear that stinking ID out again:
public function login(event:Event):void {
    Swiz.executeServiceCall(roUsers.login(),
        function (event:ResultEvent):void {
            // Handle a successful login here...
        },
        function (faultevent:FaultEvent):void {
            // This code fixes the issue with IE tabs dying and leaving
            // Flex with a duplicate session problem.
            // (indexOf() returns -1 when the substring is absent, so the
            // result has to be compared explicitly.)
            if (faultevent.fault.faultString.indexOf("duplicate") != -1) {
                FlexClient.getInstance().id = null;
                Swiz.dispatchEvent(event); // reissue the original call
            }
        });
}
And it worked.
Basically, try the call, and if it fails for the duplicate session thing, then clear out that ID and reissue the call.
The key point is that I don't think clearing out the ID works until you've made at least one call to the server. Once you have, it worked like a CHARM for me, in all of my apps.
Note that I'm using the SWIZ framework above so just translate it to your own world.
By the way, I've never seen this error in any browser but IE, and I believe it may have something to do with the infamous Dead Tab Issue that IE suffers from.
If the above doesn't work, I also know of a few changes to some config files on the server that might help.
Good luck my friend!
This article, titled Avoiding duplicate session detected errors in LCDS, gives an in-depth explanation of what's happening in your situation. Here is a relevant quote:
...[LCDS] believes that the FlexClient it received the request from was already
associated with a different session on the server.
For the client application to make sure that FlexClients in the
application don’t get into this bad state, the client application must
make sure that a session is already established on the server before
multiple FlexClients connect to the server at the same time.
There are several approaches recommended to fixing this, including:
calling a jsp page to load the application
"The jsp page could both create a session for the client application and return an html wrapper to the client which would load the swf."
calling a Remoting destination
"which would automatically create a session for the client application on the server"
An additional, unrelated cause to be aware of:
Some browsers (Internet Explorer, for example) apply domain-naming rules to cookies, which means that a domain like "my_clientX.server.com", although it may return valid BlazeDS responses, will continually trigger duplicate session notifications, because access to the cookie is blocked.
Changing the name to a valid hostname (without the underscore) will resolve the issue.
Related
I'm playing with the service worker API on my computer so I can grasp how I can benefit from it in my real-world apps.
I came across a weird situation where I registered a service worker which intercepts the fetch event so it can check its cache for the requested content before sending a request to the origin.
The problem is that this code has an error which prevents the function from making the request, so my page is left blank; nothing happens.
As the service worker has been registered, the second time I load the page it intercepts the very first request (the one which loads the HTML). Because of this bug, that fetch event fails and never requests the HTML, and all I see is a blank page.
In this situation, the only way I know to remove the bad service worker script is through chrome://serviceworker-internals/ console.
If this error gets to a live website, what is the best way to solve it?
Thanks!
I wanted to expand on some of the other answers here, and approach this from the point of view of "what strategies can I use when rolling out a service worker to production to ensure that I can make any needed changes"? Those changes might include fixing any minor bugs that you discover in production, or they might (but hopefully don't) include neutralizing the service worker due to an insurmountable bug—a so-called "kill switch".
For the purposes of this answer, let's assume you call
navigator.serviceWorker.register('service-worker.js');
on your pages, meaning your service worker JavaScript resource is service-worker.js. (See below if you're not sure of the exact service worker URL that was used—perhaps because you added a hash or versioning info to the service worker script.)
The question boils down to how you go about resolving the initial issue in your service-worker.js code. If it's a small bug fix, then you can obviously just make the change and redeploy your service-worker.js to your hosting environment. If there's no obvious bug fix, and you don't want to leave your users running the buggy service worker code while you take the time to work out a solution, it's a good idea to keep a simple, no-op service-worker.js handy, like the following:
// A simple, no-op service worker that takes immediate control.
self.addEventListener('install', () => {
// Skip over the "waiting" lifecycle state, to ensure that our
// new service worker is activated immediately, even if there's
// another tab open controlled by our older service worker code.
self.skipWaiting();
});
/*
self.addEventListener('activate', () => {
// Optional: Get a list of all the current open windows/tabs under
// our service worker's control, and force them to reload.
// This can "unbreak" any open windows/tabs as soon as the new
// service worker activates, rather than users having to manually reload.
self.clients.matchAll({type: 'window'}).then(windowClients => {
windowClients.forEach(windowClient => {
windowClient.navigate(windowClient.url);
});
});
});
*/
That should be all your no-op service-worker.js needs to contain. Because there's no fetch handler registered, all navigation and resource requests from controlled pages will end up going directly against the network, effectively giving you the same behavior you'd get if there were no service worker at all.
Additional steps
It's possible to go further, and forcibly delete everything stored using the Cache Storage API, or to explicitly unregister the service worker entirely. For most common cases, that's probably going to be overkill, and following the above recommendations should be sufficient to get you in a state where your current users get the expected behavior, and you're ready to redeploy updates once you've fixed your bugs. There is some degree of overhead involved with starting up even a no-op service worker, so you can go the route of unregistering the service worker if you have no plans to redeploy meaningful service worker code.
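If you do decide to go that far, here's a minimal sketch of the cache cleanup, as an activate handler you could add to the no-op worker (an assumption-laden example that deletes every cache on the origin, not a drop-in recipe):

self.addEventListener('activate', event => {
  // Delete every cache this origin has created via the Cache Storage API.
  event.waitUntil(
    caches.keys().then(cacheNames =>
      Promise.all(cacheNames.map(cacheName => caches.delete(cacheName)))
    )
  );
});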
If you're already in a situation in which you're serving service-worker.js with HTTP caching directives giving it a lifetime that's longer than your users can wait for, keep in mind that a Shift + Reload on desktop browsers will force the page to reload outside of service worker control. Not every user will know how to do this, and it's not possible on mobile devices, though. So don't rely on Shift + Reload as a viable rollback plan.
What if you don't know the service worker URL?
The information above assumes that you know what the service worker URL is—service-worker.js, sw.js, or something else that's effectively constant. But what if you included some sort of versioning or hash information in your service worker script, like service-worker.abcd1234.js?
First of all, try to avoid this in the future—it's against best practices. But if you've already deployed a number of versioned service worker URLs and you need to disable things for all users, regardless of which URL they might have registered, there is a way out.
Every time a browser makes a request for a service worker script, regardless of whether it's an initial registration or an update check, it will set an HTTP request header called Service-Worker:.
Assuming you have full control over your backend HTTP server, you can check incoming requests for the presence of this Service-Worker: header, and always respond with your no-op service worker script response, regardless of what the request URL is.
The specifics of configuring your web server to do this will vary from server to server.
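As one illustration, not an official recipe, a Node/Express front end could intercept those requests like this:

const express = require('express');
const app = express();

// Browsers send "Service-Worker: script" when fetching a service worker,
// both for initial registrations and for update checks.
app.use((req, res, next) => {
  if (req.get('Service-Worker') === 'script') {
    res.type('application/javascript');
    return res.send("self.addEventListener('install', () => self.skipWaiting());");
  }
  next();
});

app.listen(3000);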
The Clear-Site-Data: response header
A final note: some browsers will automatically clear out specific data and potentially unregister service workers when a special HTTP response header is returned as part of any response: Clear-Site-Data:.
Setting this header can be helpful when trying to recover from a bad service worker deployment, and kill-switch scenarios are included in the feature's specification as an example use case.
It's important to check the browser support story for Clear-Site-Data: before you rely solely on it as a kill switch. As of July 2019, it's not supported in 100% of the browsers that support service workers, so at the moment, it's safest to use Clear-Site-Data: along with the techniques mentioned above if you're concerned about recovering from a faulty service worker in all browsers.
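For reference, such a response might include a header like the following (verify the exact directives against the specification; "storage" is the directive specified to cover service worker registrations):

Clear-Site-Data: "cache", "storage"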
You can 'unregister' the service worker using JavaScript.
Here is an example:
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.getRegistrations().then(function (registrations) {
    // getRegistrations() resolves with every service worker
    // registered for this origin.
    for (let registration of registrations) {
      registration.unregister();
    }
  });
}
That's a really nasty situation, that hopefully won't happen to you in production.
In that case, if you don't want to go through each browser's developer tools (chrome://serviceworker-internals/ for Blink-based browsers; about:serviceworkers, which will become about:debugging#workers, in Firefox), there are two things that come to mind:
Use the service worker update mechanism. Your user agent will check whether there is any change in the registered worker, fetch it, and go through the activate phase again. So you can potentially change the service worker script, fix any weird situation (purge caches, etc.), and continue working. The only downside is that you will need to wait until the browser updates the worker, which could take up to a day.
Add some kind of kill switch to your worker: a special URL you can point users to that restores the status of your caches, etc. A minimal sketch of this idea follows below.
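For the kill-switch idea, here is a sketch (the /sw-reset path and the reset behavior are made-up examples, not an established pattern):

self.addEventListener('fetch', event => {
  const url = new URL(event.request.url);
  if (url.pathname === '/sw-reset') {
    event.respondWith((async () => {
      // Purge every cache, then unregister this worker.
      const keys = await caches.keys();
      await Promise.all(keys.map(key => caches.delete(key)));
      await self.registration.unregister();
      return new Response('Service worker reset.');
    })());
  }
});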
I'm not sure if clearing your browser data will remove the worker, so that could be another option.
I haven't tested this, but there are an unregister() and an update() method on the ServiceWorkerRegistration object. You can get this from navigator.serviceWorker.
navigator.serviceWorker.getRegistration('/').then(function(registration) {
registration.update();
});
update() should then immediately check whether there is a new service worker and, if so, install it. This bypasses the 24-hour waiting period and downloads serviceworker.js every time this JavaScript is encountered.
For live situations you need to alter the service worker at the byte level (put a comment on the first line, for instance), and it will be updated within the next 24 hours. You can emulate this with chrome://serviceworker-internals/ in Chrome by clicking the Update button.
This should work even in situations where the service worker itself got cached, as step 9 of the update algorithm sets a flag to bypass the service worker.
We had moved a site from godaddy.com to a regular WordPress install. The client (not us) had a service worker file (sw.js) cached in all their browsers, which completely messed things up. Our site, a normal WordPress site, has no service workers.
It's like a virus: it's on every page, it does not come from our server, and there is no easy way to get rid of it.
We made a new empty file called sw.js on the root of the server, then added the following to every page on the site.
<script>
if (navigator && navigator.serviceWorker && navigator.serviceWorker.getRegistration) {
navigator.serviceWorker.getRegistration('/').then(function(registration) {
if (registration) {
registration.update();
registration.unregister();
}
});
}
</script>
In case it helps someone else, I was trying to kill off service workers that were running in browsers that had hit a production site that used to register them.
I solved it by publishing a service-worker.js that contained just this:
self.registration.unregister();
After a previous post about an issue with Session State being locked on every request (normal behavior for ASP.NET), I tried fully disabling Session State (via <sessionState mode="Off" /> in web.config). This does in fact disable the Session object, and it throws exceptions if you try to use it. However, as stated in the post linked below, all requests are still serviced in a serialized fashion: a second "simultaneous" request doesn't get served until the previous one has finished. The related documentation states that disabling Session State avoids the lock on the session but, in my case, requests are still serviced serially.
This is not MVC.
This is my previous post: Custom handler processes multiple requests serially and not simultaneously
Any help would be appreciated.
Turns out this always happens inside the development environment (Cassini). When session state is disabled there is no possible access to the Session object, but there still seems to be a request lock somewhere.
Debugging in IIS, this doesn't happen.
Hope this helps.
When POSTing data - either using AJAX or from a mobile device or what have you - there is often a "retry" condition, so that should something like a timeout occur, the data is POSTed again.
Is this actually a good idea?
POST requests are not meant to be idempotent, so if you:
make a POST to the server,
the server receives the request,
takes time to execute and
then sends the data back
and the timeout is hit sometime after step 3, then the retry will re-send a request whose effects may already have been applied.
The question, then, is: should a retry (when calling from the client side) be set up for POST data, or should the server be designed to always handle POST data appropriately (with tokens and so on)? Or am I missing something?
Update, as per the questions: this is for a mobile app. As it happens, during testing it was noticed that with too short a timeout, the app would retry. Meanwhile, the back-end server had in fact accepted and processed the initial request, and got very upset when the new (otherwise identical) re-request came in.
Nonces are a (partial) solution to this. The server generates a nonce and gives it to the client. The client sends the POST including the nonce; the server checks whether the nonce is valid and unused and, if so, acts on the POST and invalidates the nonce; if not, it reports back that the nonce is used and discards the data. This is also very useful for avoiding the "double post" problem of users clicking a submit button twice.
However, this moves the problem from the client to a different one on the server. If you invalidate the nonce before the action, the action might still fail or hang; if you invalidate it after, the nonce is still valid for requests that arrive during processing. So a possible flow on the server, on receiving a request, becomes:
Lock nonce
Do action
On any processing error preventing action completion, rollback, release lock on nonce.
On no errors, invalidate / remove nonce.
Semaphores on the server side are most helpful with this; most backend languages have libraries for them. A rough sketch of the whole flow follows below.
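As an illustration only, here is a sketch in Node/Express (the endpoint names and the in-memory store are made up; a real deployment would keep nonce state in a shared store such as a database or Redis so every instance sees it):

const crypto = require('crypto');
const express = require('express');
const app = express();
app.use(express.json());

// In-memory nonce store: nonce -> 'issued' | 'locked' | 'used'.
const nonces = new Map();

app.get('/nonce', (req, res) => {
  const nonce = crypto.randomUUID(); // Node 14.17+
  nonces.set(nonce, 'issued');
  res.json({ nonce });
});

app.post('/action', (req, res) => {
  const state = nonces.get(req.body.nonce);
  if (state === 'used') {
    // The original POST already succeeded; treat this as confirmation.
    return res.status(409).json({ status: 'already-done' });
  }
  if (state !== 'issued') {
    return res.status(400).json({ status: 'invalid-nonce' });
  }
  nonces.set(req.body.nonce, 'locked'); // 1. lock the nonce
  try {
    // 2. do the action (perform the real work here)
    nonces.set(req.body.nonce, 'used'); // 4. on success, invalidate the nonce
    res.json({ status: 'done' });
  } catch (err) {
    nonces.set(req.body.nonce, 'issued'); // 3. on error, release the lock
    res.status(500).json({ status: 'failed' });
  }
});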
So, implementing all these:
It is safe to retry, if the action is already performed it won't be done again.
A reply that the nonce has already been used can be understood as a confirmation that the original POST has been acted upon.
If you need the result of an action where the second request shows that the first came through, a short-lived cache is needed on the server side.
Up to you to set a sane limit on subsequent tries (what if the 2nd fails? or the 3rd?).
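On the client side, the matching bounded retry might look like this sketch (same made-up endpoints as above; the key point is that every attempt reuses the same nonce):

async function postWithRetry(url, body, nonce, maxTries = 3) {
  for (let attempt = 1; attempt <= maxTries; attempt++) {
    try {
      const res = await fetch(url, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ ...body, nonce }),
      });
      // A 409 means the nonce was already used: the first POST got through.
      if (res.ok || res.status === 409) {
        return res;
      }
    } catch (err) {
      // Network error or timeout: loop and retry with the same nonce.
    }
  }
  throw new Error('POST failed after ' + maxTries + ' attempts');
}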
The issue with automatic retries is the server needs to know if the prior POST was successfully processed to avoid unintended consequences. For example, if each POST inserts a DB record, but the last POST timed out after the insert, the automatic re-POST will cause a duplicate DB record, which is probably not what you want.
In a robust setup you may be able to catch that there was a timeout and roll back any POSTed updates. That would safely allow an automatic re-POST. But many (most?) web systems aren't this elaborate, so you'll typically require human intervention to determine whether the POST should happen again.
If you can't avoid the long wait until the server responds, a better practice would be to return an immediate response (200 OK with the message "processing") and have the client, later on, send a new request that checks whether the action was performed.
AJAX was not designed to be used in such a way ("heavy" actions).
By the way, the default HTTP timeout is 7200 seconds, so I don't think you'll reach it easily - but regardless, you should avoid having the user wait for long periods of time.
Providing more information about the process (like what exactly you're trying to do) would help in suggesting ways to avoid such obstacles.
If your requests are failing often enough to instigate something like this, you have major problems that have nothing to do with implementing a retry condition. I have never seen a web app that needed this type of functionality. Programming your app to automatically beat an overloaded server with the F5 hammer is not the solution to anything.
If this ajax request is triggered from a button click, disable that button until it returns, successful or not. If it failed, let the user click it again.
I'm using Spring MVC 3 + Tiles for a webapp. I have a slow operation, and I'd like a please wait page.
There are two main approaches to please wait pages, that I know of:
Long-lived requests: render and flush the "please wait" bit of the page, but don't complete the request until the action has finished, at which point you can stream out the rest of the response with some javascript to redirect away or update the page.
Return immediately, and start processing on a background thread. The client polls the server (in javascript, or via page refreshes), and redirects away when the background thread finishes.
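For illustration, the client-side poll in approach (2) might look like the sketch below (the /task-status endpoint and its response shape are assumptions):

function pollUntilDone(taskId) {
  fetch('/task-status?id=' + encodeURIComponent(taskId))
    .then(response => response.json())
    .then(status => {
      if (status.done) {
        // Leave the "please wait" page once the background work finishes.
        window.location.href = status.redirectUrl;
      } else {
        setTimeout(() => pollUntilDone(taskId), 2000); // poll again in 2s
      }
    });
}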
(1) is nice as it keeps the action all single-threaded, but doesn't seem possible with Tiles, as each JSP must complete rendering in full before the page is assembled and returned to the client.
So I've started implementing (2). In my implementation, the first request starts the operation on a background thread, using Spring's @Async annotation, which returns a Future<Result>. It then returns a "please wait" page to the user, which refreshes every few seconds.
When the please wait page is refreshed, the controller needs to check on the progress of the background thread. What is the best way of doing this?
If I put the Future object in the Session directly, then the poll request threads can pull it out and check on the thread's progress. However, doesn't this mean my Sessions are not serializable, so my app can't be deployed with more than one web server (without requiring sticky sessions)?
I could put some kind of status flag in the Session, and have the background thread update the Session when it is finished. I'm very concerned that passing an HttpSession object to a non-request thread will result in hard to debug errors. Is this allowed? Can anyone cite any documentation either way? It works fine when the sessions are in-memory, of course, but what if the sessions are stored in a database? What if I have more than one web server?
I could put some kind of status flag in my database, keyed on the session id, or some other aspect of the slow operation. It seems weird to have session data in my domain database, and not in the session, but at least I know the database is thread-safe.
Is there another option I have missed?
The Spring MVC part of your question is rather easy, since the problem has nothing to do with Spring MVC. See a possible solution in this answer: https://stackoverflow.com/a/4427922/734687
As you can see in the code, the author is using a tokenService to store the future. The implementation is not included, and here the problems begin, as you are already aware, when you want failover.
It is not possible to serialize the future and let it jump to a second server instance. The thread is executed within a certain instance and therefore has to stay there. So session storage is not an option.
As in the linked example, you could use a token service. This is normally just a HashMap where you can store your object and access it later via the token (the String identifier). But again, this works only within the same web application, when the tokenService is a singleton.
The solution is not to save the future, but instead the state of the work (in progress, finished, failed) together with the result. Even when the querying session and the executing threads are on different machines, the state should be accessible and serializable. But how would you do that? It could be stored in a database, on the file system (in the example above, you could check whether the zip file is available), in a key/value store, in a cache, or in a common object store (Terracotta), ...
In fact, every batch framework (Spring Batch, for example) works this way: it stores the current state of the jobs in the database. You are concerned about mixing domain data with operational data, but most applications do. In large applications there is the possibility of using two database instances, one for operational data and one for domain data.
So I recommend that you save the state and the result of the work in a database.
Hope that helps.
I am not able to make more than one request at a time in asp.net while the session is active. Why does this limitation exist? Is there a way to work around it?
This issue can be demonstrated with a WebForms app with just 3 simple aspx pages (although the limitation still applies in asp.net mvc).
Create an asp.net 3.5 web application.
There should be just three pages:
NoWait.aspx, Wait.aspx, and SessionStart.aspx
NoWait.aspx has this single nugget added between the default div tags: <%=DateTime.Now.Ticks %>. The code-behind for this page is the default (empty).
Wait.aspx looks just like NoWait.aspx, but it has one line added to Page_Load in the code-behind: Thread.Sleep(3000); //wait 3 seconds
SessionStart.aspx also looks just like NoWait.aspx, but it has this single line in its code-behind: Session["Whatever"] = "Anything";
Open a browser and go to NoWait.aspx. It properly shows a number in the response, such as: "633937963004391610". Keep refreshing and it keeps changing the number. Great so far! Create a new tab in the same browser and go to Wait.aspx. It sits for 3 seconds, then writes the number to the response. Great so far! Now, try this: go to Wait.aspx, and while it's spinning, quickly tab over to NoWait.aspx and refresh. Even while Wait.aspx is sleeping, NoWait.aspx WILL provide a response. Great so far. You can continue to refresh NoWait.aspx while Wait.aspx is spinning, and the server happily sends a response each time. This is the behavior I expect.
Now is where it gets weird.
In a 3rd tab, in the same browser, visit SessionStart.aspx. Next, tab over to Wait.aspx and refresh. While it's spinning, tab over to NoWait.aspx and refresh. NoWait.aspx will NOT send a response until Wait.aspx is done running!
This proves that while a session is active, you can't make concurrent requests as the same user. Requests are all queued up and served synchronously. I do not expect or understand this behavior. I have tested this on Visual Studio 2008's built-in web server, and also on IIS 7 and IIS 7.5.
So I have a few questions:
1) Am I correct that there is indeed a limitation here, or is my test above invalid because I am doing something wrong?
2) Is there a way to work around this limitation? In my web app, certain things take a long time to execute, and I would like users to be able to do things in other tabs while they wait for a big request to complete. Can I somehow configure the session to allow "dirty reads"? That could prevent it from being locked during the request.
3) Why does this limitation exist? I would like to gain a good understanding of why this limitation is necessary. I think I'd be a better developer if I knew!
Here is a link talking about session state and locking. It does perform an exclusive lock.
The easiest way around this is to make the long-running tasks asynchronous. You can make the long-running tasks run on a separate thread, or use an asynchronous delegate and return a response to the browser immediately. The client-side page can send requests to the server to check whether it is done (through AJAX, most likely), and when the server tells the client it's finished, notify the user. That way, although the server requests have to be handled one at a time by the server, it doesn't look like that to the user.
This does have its own set of problems, and you'll have to make sure that you account for the HTTP context closing, as that will dispose certain functionality in the ASP.NET session. One example you'll probably have to account for is releasing a lock on the session, if that is actually occurring.
It isn't too surprising that this is a limitation. Each browser has its own session; before the advent of AJAX, postback requests were synchronous. Making the same session handle concurrent requests could get really ugly, and I can see how that wouldn't be a priority for the IIS and ASP.NET teams to add.
For reasons Kevin described, users can't access two pages that might write to their session state at the same time - the framework itself can't exert fine-grained control over the locking of the session store, so it has to lock it for entire requests.
To work around this, pages that only read session data can declare that they do so; ASP.NET won't obtain a session state write lock for them:

<%@ Page EnableSessionState="ReadOnly" %>

(Or EnableSessionState="false" if the page doesn't need access to session state at all.)