I'm using Spring MVC 3 + Tiles for a webapp. I have a slow operation, and I'd like a "please wait" page.
There are two main approaches to "please wait" pages that I know of:
1. Long-lived requests: render and flush the "please wait" part of the page, but don't complete the request until the action has finished, at which point you can stream out the rest of the response with some JavaScript to redirect away or update the page.
2. Return immediately, and start processing on a background thread. The client polls the server (in JavaScript, or via page refreshes), and redirects away when the background thread finishes.
(1) is nice as it keeps the action all single-threaded, but doesn't seem possible with Tiles, as each JSP must complete rendering in full before the page is assembled and returned to the client.
So I've started implementing (2). In my implementation, the first request starts the operation on a background thread, using Spring's @Async annotation, which returns a Future<Result>. It then returns a "please wait" page to the user, which refreshes every few seconds.
When the please wait page is refreshed, the controller needs to check on the progress of the background thread. What is the best way of doing this?
If I put the Future object in the Session directly, then the poll request threads can pull it out and check on the thread's progress. However, doesn't this mean my Sessions are not serializable, so my app can't be deployed with more than one web server (without requiring sticky sessions)?
I could put some kind of status flag in the Session, and have the background thread update the Session when it is finished. I'm very concerned that passing an HttpSession object to a non-request thread will result in hard-to-debug errors. Is this allowed? Can anyone cite any documentation either way? It works fine when the sessions are in-memory, of course, but what if the sessions are stored in a database? What if I have more than one web server?
I could put some kind of status flag in my database, keyed on the session id, or some other aspect of the slow operation. It seems weird to have session data in my domain database, and not in the session, but at least I know the database is thread-safe.
Is there another option I have missed?
The Spring MVC part of your question is rather easy, since the problem has nothing to do with Spring MVC. See a possible solution in this answer: https://stackoverflow.com/a/4427922/734687
As you can see in the code, the author is using a tokenService to store the future. The implementation is not included, and this is where the problems begin, as you are already aware, when you want failover.
It is not possible to serialize the future and let it jump to a second server instance. The thread executes within one particular instance and therefore has to stay there. So session storage is not an option.
As in the linked example, you could use a token service. This is normally just a HashMap where you can store your object and access it again later via the token (a String identifier). But again, this works only within the same web application, where the tokenService is a singleton.
The solution is not to save the future, but instead the state of the work (in progress, finished, failed, plus the result). Even when the querying session and the executing thread are on different machines, the state must be accessible and serializable. But how would you do that? You could store it in a database, on the file system (in the example above you could check whether the zip file is available yet), in a key/value store, in a cache, or in a common object store (Terracotta), ...
In fact, every batch framework (Spring Batch, for example) works this way: it stores the current state of its jobs in the database. You are concerned about mixing operational data with domain data, but most applications do. Large applications sometimes use two database instances, one for operational data and one for domain data.
So I recommend that you save the state and the result of the work in a database.
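For concreteness, here is a minimal sketch of that pattern. It is written in C# to match the rest of this thread; a Spring application would do exactly the same thing with, say, JdbcTemplate or JPA. The class, table, and column names are all illustrative, not from any framework:

```csharp
using System;
using System.Data.SqlClient;

public enum JobState { InProgress, Finished, Failed }

// Persists the *state* of the work rather than the Future itself, so any
// server in the cluster can answer a poll request.
// Table assumed: JobStatus(JobId NVARCHAR(64) PRIMARY KEY, State INT, Result NVARCHAR(MAX) NULL)
public class JobStatusStore
{
    private readonly string _connectionString;
    public JobStatusStore(string connectionString) { _connectionString = connectionString; }

    public void Save(string jobId, JobState state, string result)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE JobStatus SET State = @s, Result = @r WHERE JobId = @id " +
            "IF @@ROWCOUNT = 0 INSERT INTO JobStatus (JobId, State, Result) VALUES (@id, @s, @r)",
            conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            cmd.Parameters.AddWithValue("@s", (int)state);
            cmd.Parameters.AddWithValue("@r", (object)result ?? DBNull.Value);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public JobState? Find(string jobId)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("SELECT State FROM JobStatus WHERE JobId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            object state = cmd.ExecuteScalar();
            return state == null ? (JobState?)null : (JobState)(int)state;
        }
    }
}
```

The background worker calls Save as it progresses; the poll requests call Find with the token and redirect once the state is Finished.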
Hope that helps.
Related
I am writing a web application using ASP.NET (not MVC), with .NET v4 (not v4.5).
I fetch some of the data which I must display from a 3rd-party web service, one of whose methods takes a long time (several seconds) to complete. The information to be fetched/prefetched varies depending on the users' initial requests (because different users ask for details about different objects).
In a single-user desktop application, I might:
Display my UI as quickly as possible
Have a non-UI background task to fetch the information in advance
Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it
To do something similar using ASP.NET, I figured I could:
Use a BackgroundWorker, passing the Session instance as a parameter to the worker
On completion of the worker's task, write fetched data to the Session
If the user's request for data arrives before the task is complete, then block until it has completed
Do you foresee problems, can you suggest improvements?
[There are other questions on StackOverflow about ASP.NET and background tasks, but these all seem to be about fetching and updating global application data, not session-specific data.]
Why not use the same discipline as in a desktop application:
Load the page without the data from the service (= Display my UI as quickly as possible)
Fetch the service data using an AJAX call (= Have a non-UI background task to fetch the information in advance)
This is actually the same, although you can show an animated GIF indicating the fetch is still in progress (= Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it)
In order to post example code, it would be helpful to know: are you using jQuery? Plain JavaScript? Something else? No JavaScript at all?
Edit
I am not sure if this was your plan, but another idea is to fetch the data on the server side as well, and cache it there for future requests.
In this case the stages will be:
1. Get a request.
2. Is the service data cached?
   a. Yes? Serve the page with full data.
   b. No? Serve the page without the service data, then:
      i. On the server side, fetch the service data and cache it for future requests (see the sketch below).
      ii. On the client side, fetch the service data and cache it for the current session.
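If it helps, here is a rough sketch of the server side of that flow (C# for ASP.NET; all names are made up, and CallSlowService stands in for your 3rd-party web service call):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class ServiceDataCache
{
    private const string CacheKey = "ThirdPartyServiceData";

    // Step 2: returns the cached data if present, or null so the page can
    // render without it and kick off the background fetch (step 2.b.i).
    public static string TryGet()
    {
        return HttpRuntime.Cache[CacheKey] as string;
    }

    public static void StartBackgroundFetch()
    {
        System.Threading.ThreadPool.QueueUserWorkItem(_ =>
        {
            string data = CallSlowService();
            // Cache for future requests; expire so stale data eventually refreshes.
            HttpRuntime.Cache.Insert(CacheKey, data, null,
                DateTime.UtcNow.AddMinutes(10), Cache.NoSlidingExpiration);
        });
    }

    private static string CallSlowService() { /* the slow 3rd-party call */ return "data"; }
}
```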
Edit 2:
Bear in mind that the downside of this discipline is that if the way you fetch the data changes, you will have to remember to modify it on both the server and client side.
I am trying to make 6 asynchronous jQuery AJAX calls to my .NET page methods, all at once on document.ready, to request different sets of data from the database and render them as charts for the user.
The problem is that when one chart takes a long time to generate, it holds up generation of the next 5 charts. For instance, when each chart takes 1 minute to generate, the user ends up waiting roughly 6 minutes, instead of the 1-2 minutes I expected from async AJAX calls with the page methods processed in parallel.
After reading a lot of useful posts in this forum, I found that this is because I read and write session objects within the page methods, and ASP.NET locks the session for the whole request, as a result making them run sequentially.
I have seen people suggest setting the session state to read-only in the @ Page directive, but that will not solve my problem because I need to write to the session as well. I have considered moving from in-proc session to SQL database session, but my session object is not serializable and is used across the whole project. I also cannot switch to the Cache instead, because the session contains user-specific details.
Can anyone please help and point me in the right direction? I have been spending days investigating this page inefficiency and still haven't found a nice way around it.
Thanks in advance
From my personal experience, switching to SQL session will NOT help this problem, as all of the concurrent threads will block in SQL: the first thread in will hold an exclusive lock on one or more rows in the database.
I'm curious as to why your session object isn't serializable. The only solution I can think of is to use a database table to store the user-specific data that you are keeping in session, and then hold a database lock only for as long as it takes to update that user data.
You can use the ASP.NET session id or other unique cookie value as the database key.
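As a sketch of what that can look like (names are illustrative; UserDataTable stands in for the table keyed by session id): once the user-specific data lives outside the session, the page method can opt out of session state entirely, and the six calls stop queuing on the session lock.

```csharp
using System.Web;
using System.Web.Services;

public partial class Charts : System.Web.UI.Page
{
    // EnableSession = false: this method never takes the session lock.
    [WebMethod(EnableSession = false)]
    public static string GetChartData(string chartName)
    {
        // The session id still arrives in the default ASP.NET session cookie
        // and serves as the database key for the user-specific data.
        string sessionId = HttpContext.Current.Request.Cookies["ASP.NET_SessionId"].Value;
        string userData = UserDataTable.Load(sessionId, chartName);
        return BuildChart(chartName, userData);
    }

    private static string BuildChart(string chartName, string userData) { /* ... */ return ""; }
}

public static class UserDataTable
{
    public static string Load(string sessionId, string key) { /* SELECT by (SessionId, Key) */ return null; }
}
```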
The problem may not be server side at all.
Browsers have a built-in limit on how many concurrent HTTP requests they will make; this goes back to the HTTP/1.1 spec, which suggests a limit of 2.
In IE7 the limit is 2; in IE8 it is 6. But when a page loads you could easily hit 6 due to the concurrent requests for CSS, JS, images, etc.
A good source of info about these limits is BrowserScope (see Connections per Hostname column).
What about combining those 6 requests into 1 request? This will also load a little faster.
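A sketch of what the combined call might look like (names are illustrative):

```csharp
using System.Collections.Generic;
using System.Web.Services;

public partial class Dashboard : System.Web.UI.Page
{
    // One round trip instead of six: the browser's connection limit and the
    // session lock are each paid once, and the six queries can still be run
    // in parallel on the server if you want.
    [WebMethod(EnableSession = true)]
    public static Dictionary<string, object> GetAllChartData()
    {
        return new Dictionary<string, object>
        {
            { "chart1", LoadChart(1) },
            { "chart2", LoadChart(2) },
            // ... charts 3 to 6 ...
        };
    }

    private static object LoadChart(int id) { /* one of the six DB queries */ return null; }
}
```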
Say, for example, you are caching data within your ASP.NET web app that isn't often updated. You have another process running outside of the app which occasionally updates this data; when it does, you would like the cached data to be cleared immediately so that the next request picks up the new data straight away.
The caching service is running in the context of your web app and not externally - what is a good method of calling into the web app to get it to update the cache?
You could, of course, just hack together a page or web service called ClearTheCache that does it, which can then be called by your other process. Of course you don't want this endpoint to be externally usable or visible on your web app, so perhaps you could check that incoming requests to this page come from localhost, and if not throw a 404. Is this acceptable? Could it be spoofed at all (for instance if you used HttpApplication.Request.Url.Host)?
I can think of many different ways to go about this, mainly revolving around creating a page or web service and limiting requests to it somehow, but I'm not sure any are particularly elegant. Neither do I like the idea of the web app routinely polling out to another service to check if it needs to execute something, I'd really like a PUSH solution.
Note: The caching scenario is just an example, I could use out-of-process caching here if needed. The question is really concentrating on invoking code, for any given reason, within a web app externally but in a controlled context.
Don't worry about limiting to localhost; you may want to push from a different server in the future. Instead, share a key between the two (asymmetric or symmetric doesn't really matter), have the PUSH service encrypt a block of data (control data, for example), and have the receiver decrypt it. If the block decrypts correctly and the data is readable, you can safely assume that only the service that was supposed to call you did so, and you can perform the required actions. Not the neatest solution, but it allows you to scale beyond a single server.
EDIT
Having said that, an asymmetric key would be better: have the PUSH service hold the private part and the website the public part.
EDIT 2
Have the PUSH service put the date/time at which it generated the ciphertext into the data block; then the client can be sure a replay attack hasn't taken place by checking that the date/time is within an acceptable window (say, a minute).
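A minimal sketch of the symmetric variant with that replay check (it assumes both sides already share the key and IV out of band; in practice you would also authenticate the ciphertext, e.g. with an HMAC):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class PushAuth
{
    public static byte[] Encrypt(string controlData, byte[] key, byte[] iv)
    {
        // Prepend the current UTC time so the receiver can reject replays.
        string payload = DateTime.UtcNow.ToString("o") + "|" + controlData;
        using (var aes = Aes.Create())
        using (var enc = aes.CreateEncryptor(key, iv))
        {
            byte[] plain = Encoding.UTF8.GetBytes(payload);
            return enc.TransformFinalBlock(plain, 0, plain.Length);
        }
    }

    public static string DecryptAndVerify(byte[] cipher, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var dec = aes.CreateDecryptor(key, iv))
        {
            byte[] plain = dec.TransformFinalBlock(cipher, 0, cipher.Length);
            string payload = Encoding.UTF8.GetString(plain);
            int sep = payload.IndexOf('|');
            DateTime sent = DateTime.Parse(payload.Substring(0, sep), null,
                System.Globalization.DateTimeStyles.RoundtripKind);
            // Reject anything outside the acceptable window (a minute here).
            if (Math.Abs((DateTime.UtcNow - sent).TotalSeconds) > 60)
                throw new InvalidOperationException("Stale message: possible replay.");
            return payload.Substring(sep + 1); // the control data
        }
    }
}
```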
Consider an external caching mechanism like the Enterprise Library's Caching Application Block, which would be available to both the web app and the service, or a file to cache data to.
HTH.
I am working on a web application (ASP.NET) game that would consist of a single page, and on that page, there would be a game board akin to Monopoly. I am trying to determine what the best architectural approach would be. The main requirements I have identified thus far are:
Up to six users share a single game state object.
The users need to keep (relatively) up to date on the current state of the game, i.e. whose turn it is, what did the active user just roll, how much money does each other user have, etc.
I have thought about keeping the game state in a database, but it seems like overkill to keep updating the database when a game state object (say, in a cache) could be kept up to date. For example, the flow might go like this:
Receive request for data from a user.
Look up data in database. Create object from that data.
Verify the user has permission to perform the request based on the game's state (i.e. make sure it's really their turn, or that they have enough money to buy that property).
Update the game object.
Write the game object back to the database.
Repeat for every single request.
Consider that a single server would be serving several concurrent games.
I have thought about using AJAX to make requests to an ASP.NET page.
I have thought about using AJAX requests to a web service using Silverlight.
I have thought about using WCF duplex channels in Silverlight.
I can't figure out what the best approach is. All seem to have their drawbacks. Does anyone out there have experience with this sort of thing and care to share those experiences? Feel free to ask your own questions if I am being too ambiguous! Thanks.
Update: Does anyone have any suggestions for how to implement this connection to the server based on the three options I mention above?
You could use the ASP.NET Cache or Application state to store the game object, since these are shared between users. The Cache would probably be the better place, since objects can be removed from it to save memory.
If you store the game object in the Cache using a unique key, you can then store the key in each visitor's Session and use it to retrieve the shared game object. If the cache entry has been cleared, you recreate the object from the database.
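A rough sketch of that flow (GameStore, Game, and LoadGameFromDatabase are illustrative names):

```csharp
using System.Web;

public static class GameStore
{
    public static Game GetGame(HttpContext ctx)
    {
        // The unique key lives in the visitor's Session; the shared object in the Cache.
        string key = (string)ctx.Session["GameKey"];
        var game = HttpRuntime.Cache[key] as Game;
        if (game == null)
        {
            // The cache entry was evicted: rebuild the game from the database.
            game = LoadGameFromDatabase(key);
            HttpRuntime.Cache.Insert(key, game);
        }
        return game;
    }

    private static Game LoadGameFromDatabase(string key) { /* ... */ return new Game(); }
}

public class Game { /* board, players, whose turn it is, ... */ }
```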
While updating a database seems like overkill, it has advantages when it comes time to scale up, as you can have multiple webheads talking to one backend.
A larger concern is how you communicate the game state to the clients. While a full update of the game state from time to time ensures that any changes are caught and all clients remain in synchronization even if they miss a message, gamestate is often quite large.
Consider as well that you usually want gamestate messages to trigger animations or other display updates to portray the action (for example, if a piece moves, it shouldn't just appear at the destination in most cases... it should move across the board).
Because of that, one solution that combines the best of both worlds is to keep a database table that collects all of the actions performed, with sequential IDs. When a client requests an update, it can ask for all the actions after the last one it knew about, and "act out" the moves. This means that even if a request fails, the client can simply retry and none of the actions will be lost.
The server can then maintain an internal view of the gamestate as well, from the same data. It can also reject illegal actions and prevent them from entering the game action table (and thus prevent other clients from being incorrectly updated).
Finally, because the server does have the "one true" gamestate, the clients can periodically check against that (which will allow you to find errors in your client or server code). Because the server database should be considered the primary, you can retransmit the entire gamestate to any client that gets incorrect state, so minor client errors won't (potentially) ruin the experience (except perhaps a pause while the state is downloaded).
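A sketch of the action-log query described above (table and class names are made up):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Table assumed: GameAction(Id INT IDENTITY PRIMARY KEY, GameId INT, Action NVARCHAR(MAX))
public class GameActionLog
{
    private readonly string _connStr;
    public GameActionLog(string connStr) { _connStr = connStr; }

    // Clients poll with the last id they have seen; retrying a failed request
    // just asks again from the same id, so no actions are lost.
    public List<KeyValuePair<int, string>> GetActionsSince(int gameId, int lastSeenId)
    {
        var actions = new List<KeyValuePair<int, string>>();
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand(
            "SELECT Id, Action FROM GameAction WHERE GameId = @g AND Id > @last ORDER BY Id",
            conn))
        {
            cmd.Parameters.AddWithValue("@g", gameId);
            cmd.Parameters.AddWithValue("@last", lastSeenId);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    actions.Add(new KeyValuePair<int, string>(reader.GetInt32(0), reader.GetString(1)));
        }
        return actions;
    }
}
```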
Why don't you just create an application-level object to store your details? See Application State and Global Variables in ASP.NET for details. You can use the session ID as a key for each player's data.
You could also use the Cache to do the same thing with a long timeout. This has the advantage that older data can be flushed from the Cache after a period of time, e.g. 6 hours.
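For the Cache variant, a sliding expiration gives you that flushing for free; a minimal sketch (the key scheme is up to you):

```csharp
using System;
using System.Web;
using System.Web.Caching;

public static class GameCache
{
    public static void Store(string gameId, object game)
    {
        // Sliding expiration: the entry is evicted 6 hours after its last access.
        HttpRuntime.Cache.Insert("game-" + gameId, game, null,
            Cache.NoAbsoluteExpiration, TimeSpan.FromHours(6));
    }
}
```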
I am thinking of the following approach, but I'm not sure if it's the best way:
Step 1 (server side): a TaskManager class creates a new thread and starts the task.
Step 2 (server side): store a reference to the TaskManager object in the cache, for future use.
Step 3 (client side): use periodic AJAX calls to check the status of the task.
Basically the intention is to have a framework to run a background task (5mins approx) and provide regular feedback on the web UI for the percentage of task completed.
Is there a neat way around this, or an existing ASP.NET API that would help?
Edit 1: I want to run the task in-proc with the app.
Edit 2: It looks like the badge implementation on Stack Overflow also uses the cache to track a background task. https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
I think the problem with storing the result in the cache is that ASP.NET might scavenge that cache entry for other purposes (i.e. if it's short on memory, if it's grumpy, etc.). Something served from the cache should be something you can recreate on demand if it's not found there; the ASP.NET runtime is free to dump cache entries whenever it feels like it.
The usage of the cache in the badge discussion seems fundamentally different: in that case the task was short-lived, and the cache was just being used as a hacky timer to fire the task off periodically.
Can you confirm this is a task that is going to take 5 minutes and require its own thread the whole time? That is a performance concern in itself: you will only be able to support a limited number of such requests if each requires its own thread for so long. Only if that's acceptable would I let the task camp a thread for that long.
If it's OK for these tasks to camp a thread, then I'd just go ahead and store the result in a dictionary global to the process. The key of the dictionary would correlate to the client request / AJAX callback series, and it should incorporate the user ID as well if security is at all important.
If you need to scale up to many users, then I think you need to break the task down into asynchronous steps, and in that case I'd probably use a DB table to store the results (again keyed per request / user).
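A sketch of the global-dictionary suggestion above (names are illustrative; this assumes .NET 4's Task and ConcurrentDictionary, and on earlier versions a Thread plus a locked Dictionary works the same way):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class TaskProgress
{
    // Keyed per request/user; values are percent complete (-1 = unknown/missing).
    private static readonly ConcurrentDictionary<string, int> Progress =
        new ConcurrentDictionary<string, int>();

    public static string Start(string userId)
    {
        string key = userId + ":" + Guid.NewGuid(); // incorporates the user id
        Progress[key] = 0;
        Task.Factory.StartNew(() =>
        {
            for (int step = 1; step <= 100; step++)
            {
                DoSliceOfWork();      // ~5 minutes of work in total
                Progress[key] = step; // what the AJAX poll reads
            }
        }, TaskCreationOptions.LongRunning); // camps its own thread, as discussed
        return key; // handed to the client for its polling calls
    }

    public static int GetProgress(string key)
    {
        int pct;
        return Progress.TryGetValue(key, out pct) ? pct : -1;
    }

    private static void DoSliceOfWork() { /* one chunk of the real task */ }
}
```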
Microsoft Message Queuing was built for scenarios like the one you are trying to solve:
http://www.microsoft.com/windowsserver2003/technologies/msmq/default.mspx
Windows Communication Foundation also has message queuing support.
Hope this helps.
Thomas
One approach for doing this is to use application state. When you spawn a worker thread, pass it a request ID that you generate, and return this to the client. The client will then pass that request ID back to the server in its AJAX calls. The server will then fetch the status using the request ID from application state. (The worker thread would be updating the application state based on its status).
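A sketch of that approach (names are illustrative; note that application state needs explicit Lock/UnLock around writes):

```csharp
using System.Web;

public static class RequestStatus
{
    // Called by the worker thread as it makes progress.
    public static void Update(HttpApplicationState app, string requestId, string status)
    {
        app.Lock();
        try { app["task-" + requestId] = status; }
        finally { app.UnLock(); }
    }

    // Called by the AJAX polling requests.
    public static string Read(HttpApplicationState app, string requestId)
    {
        return app["task-" + requestId] as string;
    }
}
```

The worker thread should be handed the application state object (e.g. HttpContext.Current.Application, captured before the thread is spawned), since HttpContext.Current is not available on background threads.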
I saw an approach to a similar problem somewhere. The solution was something like:
1. Start the background task on the server. Return immediately with a URL for the result.
2. Until the result is posted, this URL will return 404.
3. The client checks the URL periodically.
4. The client reads the results when they are finally posted.
The url will be something like http://mysite/myresults/cffc6c30-d1c2-11dd-ad8b-0800200c9a66.
The best document format is probably JSON.
If feedback on progress is important, extend the document to also contain a status ("in progress"/"finished") and a progress value (e.g. 42%).
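A sketch of the result endpoint (an IHttpHandler here; wiring the /myresults/<guid> URL to the handler is not shown, and ResultStore is a stand-in for wherever the background task posts its results):

```csharp
using System.Web;

public class ResultHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"]; // the GUID from the URL
        string json = ResultStore.Find(id);            // null until something is posted
        if (json == null)
        {
            context.Response.StatusCode = 404; // not ready yet: the client retries later
            return;
        }
        context.Response.ContentType = "application/json";
        // e.g. {"status":"in progress","progress":42} or {"status":"finished",...}
        context.Response.Write(json);
    }
}

public static class ResultStore
{
    public static string Find(string id) { /* look the document up by id */ return null; }
}
```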