We are using Spring Web Flow (2.0.9) in a clustered WebLogic 10 environment, and in production we are getting a lot of LockTimeoutException: Unable to acquire conversation lock after 30 seconds.
I have been trying to figure out why the above exception occurs in some cases when there is only a single click, or when we are simply accessing the home page of the site.
Below is the code in SWF that acquires the lock for FlowController. What I can't figure out is whether the lock is on the servlet being accessed or on something else.
Please help me understand which resource is actually locked in SWF when this lock is taken in a web application.
To understand the concept of ReentrantLock, please refer to the link below.
What is the Re-entrant lock and concept in general?
Thanks in advance.
Exception Stack Trace
org.springframework.webflow.conversation.impl.LockTimeoutException: Unable to acquire conversation lock after 30 seconds
at org.springframework.webflow.conversation.impl.JdkConcurrentConversationLock.lock(JdkConcurrentConversationLock.java:44)
at org.springframework.webflow.conversation.impl.ContainedConversation.lock(ContainedConversation.java:69)
at org.springframework.webflow.execution.repository.support.ConversationBackedFlowExecutionLock.lock(ConversationBackedFlowExecutionLock.java:51)
at org.springframework.webflow.executor.FlowExecutorImpl.resumeExecution(FlowExecutorImpl.java:166)
at org.springframework.webflow.mvc.servlet.FlowHandlerAdapter.handle(FlowHandlerAdapter.java:183)
at org.springframework.webflow.mvc.servlet.FlowController.handleRequest(FlowController.java:174)
at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:48)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:96)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:76)
at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
Lock Implementation in SWF
package org.springframework.webflow.conversation.impl;

import java.io.Serializable;
import java.util.concurrent.locks.ReentrantLock;

/**
 * A conversation lock that relies on a {@link ReentrantLock} within Java 5's <code>util.concurrent.locks</code>
 * package.
 *
 * @author Keith Donald
 */
class JdkConcurrentConversationLock implements ConversationLock, Serializable {

    /**
     * The lock.
     */
    private ReentrantLock lock = new ReentrantLock();

    public void lock() {
        // ensure non-reentrant behaviour
        if (!lock.isHeldByCurrentThread()) {
            lock.lock();
        }
    }

    public void unlock() {
        // ensure non-reentrant behaviour
        if (lock.isHeldByCurrentThread()) {
            lock.unlock();
        }
    }
}
Spring Webflow operates as a state-machine, executing transitions between different states which might have associated views. It doesn't make sense to have multiple concurrently executing transitions, so SWF uses a locking system to make sure that each flow execution (or conversation) only handles one HTTP request at a time.
Don't get too hung up on the concept of ReentrantLock; here it just prevents the same thread from waiting on a lock that it already holds.
In answer to your question, it is only the flow execution (the specific conversation instance) that is locked by Spring Webflow for the duration of the request handling. The server will still handle requests from other users, or even requests from the same user to a different flow execution.
LockTimeoutException is tricky to troubleshoot because the root problem is not the thread throwing the exception. The LockTimeoutException occurs because another earlier request is taking longer than 30 seconds, so it would be a good idea to find out why the earlier request took so long.
Troubleshooting ideas:
Implement a FlowExecutionListener which measures how long each request takes, and log long requests along with the flowId, stateId and transition event. This will allow you to home in on long-running requests.
One good way to avoid the LockTimeoutException itself is to disable submit buttons and links using JavaScript once a button or link has been clicked. Obviously this doesn't solve the problem of the initial 30-second+ request.
You could increase the timeout for LockTimeoutException, but that doesn't solve the actual problem and leads to a worse user experience. The 30-second requests are the problem.
Finally, you mentioned:
I have been trying to figure out why the above exception occurs in
some cases when there is only a single click, or when we are simply
accessing the home page of the site.
I suggest that you try to re-create the problem with the browser's developer tools window open, watching the 'Network' tab; maybe there is an AJAX request running in the background which is holding the lock.
Try adjusting the timeout. How to do this is described here: https://jira.springsource.org/browse/SWF-1059. Maybe this will help you find where the real problem is.
Related
I use an asynchronous XMLHttpRequest to call a function in an ASP.NET web service.
When I call the abort method on the XMLHttpRequest after the server has received the request and is processing it, the server continues processing the request.
Is there a way to stop the request processing on the server?
Generally speaking, no, you can't stop the request being processed by the server once it has started. After all, how would the server know when a request has been aborted?
It's like if you navigated to a web page but browsed to another one before the first one had loaded. That initial request will, at least to some extent (any client-side work will of course not take place), be fulfilled.
If you do wish to stop a long-running operation on the server, the service that is being invoked will need to be architected such that it can support being interrupted. Some pseudo code:
void MyLongRunningMethod(opId, args)
{
    work = GetWork(args)
    foreach(workItem in work)
    {
        DoWork(workItem)

        // Has this invocation been aborted?
        if(LookUpSet.Contains(opId))
        {
            LookUpSet.Remove(opId)
            return
        }

        // Or try this: stop if the client has disconnected
        if(!Response.IsClientConnected)
        {
            HttpContext.Current.Response.End()
            return
        }
    }
}

void AbortOperation(opId)
{
    LookUpSet[opId] = true
}
So the idea here is that MyLongRunningMethod periodically checks to see if it has been aborted, returning if so. It is intended that opId is unique, so you could generate it based on the session ID of the client appended with the current time or something (in JavaScript, new Date().getTime() will get you the number of milliseconds since the epoch).
With this sort of approach, the server must maintain state (the LookUpSet in my example), so you will need some way of doing that, such as a database or just storing it in memory. The service will also need to be architected such that calling abort does not leave things in a non-working state, which of course depends very heavily on what it does.
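If you go with the in-memory option, a minimal sketch of the LookUpSet could be a ConcurrentDictionary keyed by opId; the AbortRegistry name here is just for illustration and is not part of the service above:

using System.Collections.Concurrent;

// Hypothetical in-memory registry backing the LookUpSet from the pseudo code.
// A database table would be needed instead if the service runs on a web farm.
public static class AbortRegistry
{
    private static readonly ConcurrentDictionary<string, bool> _aborted =
        new ConcurrentDictionary<string, bool>();

    // Called by AbortOperation when the client cancels.
    public static void RequestAbort(string opId)
    {
        _aborted[opId] = true;
    }

    // Called inside the worker loop; removes the flag once it has been observed.
    public static bool IsAborted(string opId)
    {
        bool ignored;
        return _aborted.TryRemove(opId, out ignored);
    }
}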
The other really important requirement is that the data can be split up and worked on in chunks. This is what allows the service to be interruptable.
Finally, if some operation is to be aborted, then AbortOperation must be called - simply aborting the XMLHttpRequest invocation won't help, as the operation will continue until completion.
Edit
From this question: ASP.Net: How to stop page execution when browser disconnects?
You could also check the Response.IsClientConnected property to try and determine whether the invocation had been aborted.
Generally speaking, the server isn't going to know that a client has disconnected until it attempts to send data to it. See Best practice to detect a client disconnection in .NET? and Instantly detect client disconnection from server socket.
As nick_w wrote, you can't stop the request being processed by the server once it has started. But it is possible to implement a solution which will give you the ability to cancel a server task. Dino Esposito has several great articles about how such things can be implemented:
Canceling Server Tasks with ASP.NET AJAX
And in the following articles Dino Esposito describes how to use the SignalR library to implement polling to the server:
Build a Progress Bar with SignalR;
Long Polling and SignalR
So if you really need to cancel some task on the server, these articles can be used as a starting point for implementing the required solution.
I need to execute an infinite while loop and want to initiate the execution in global.asax.
My question is how exactly should I do it? Should I start a new Thread or should I use Async and Task or anything else? Inside the while loop I need to do await TaskEx.Delay(5000);
How do I do this so it will not block any other processes and will not create memory leaks?
I use VS10, AsyncCTP3, MVC4.
EDIT:
public async void SignalRConnectionRecovery()
{
    while (true)
    {
        Clients.SetConnectionTimeStamp(DateTime.UtcNow.ToString());
        await TaskEx.Delay(5000);
    }
}
All I need to do is run this as a singleton instance globally, for as long as the application is available.
EDIT: SOLVED
This is the final solution in Global.asax
protected void Application_Start()
{
    Thread signalRConnectionRecovery = new Thread(SignalRConnectionRecovery);
    signalRConnectionRecovery.IsBackground = true;
    signalRConnectionRecovery.Start();
    Application["SignalRConnectionRecovery"] = signalRConnectionRecovery;
}

protected void Application_End()
{
    try
    {
        Thread signalRConnectionRecovery = (Thread)Application["SignalRConnectionRecovery"];
        if (signalRConnectionRecovery != null && signalRConnectionRecovery.IsAlive)
        {
            signalRConnectionRecovery.Abort();
        }
    }
    catch
    {
        // ignore errors during shutdown
    }
}
I found this nice article about how to use an async worker: http://www.dotnetfunda.com/articles/article613-background-processes-in-asp-net-web-applications.aspx
And this:
http://code.msdn.microsoft.com/CSASPNETBackgroundWorker-dda8d7b6
But I think for my needs this one will be perfect:
http://forums.asp.net/t/1433665.aspx/1
ASP.NET is not designed to handle this kind of requirement. If you need something to run constantly, you would be better off creating a Windows service.
Update
ASP.NET is not designed for long running tasks. It's designed to respond quickly to HTTP requests. See Cyborgx37's answer or Can I use threads to carry out long-running jobs on IIS? for a few reasons why.
Update
Now that you finally mentioned you are working with SignalR, I see that you are trying to host SignalR within ASP.NET, correct? I think you're going about this the wrong way; see the example NuGet package referenced on the project wiki. This example uses an IHttpAsyncHandler to manage tasks.
You can start a thread in your global.asax, however it will only run until your ASP.NET process gets recycled. This will happen at least once a day, or when no one uses your site. Once the process has been recycled, the thread is only restarted again when there is a hit on your site, so the thread is not running continuously.
To get a continuous process it is better to start a Windows service.
If you do the 'in process' solution, it really depends on what you are doing. The thread itself will not cause you any problems with memory or deadlocks, but you should add a mechanism to stop your thread when the application stops; otherwise restarting will take a long time, because the runtime will wait for your thread to stop.
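A rough sketch of such a stop mechanism, using a ManualResetEvent so Application_End can ask the loop to exit instead of aborting the thread (the class and member names here are illustrative, not from the question):

using System;
using System.Threading;

public class RecoveryWorker
{
    private readonly ManualResetEvent _stopSignal = new ManualResetEvent(false);
    private Thread _thread;

    public void Start()
    {
        _thread = new Thread(Run) { IsBackground = true };
        _thread.Start();
    }

    // Called from Application_End instead of Thread.Abort.
    public void Stop()
    {
        _stopSignal.Set();                       // ask the loop to exit
        _thread.Join(TimeSpan.FromSeconds(10));  // give it a moment to finish
    }

    private void Run()
    {
        // WaitOne returns true once Stop() has been called, false on timeout,
        // so this loops every 5 seconds until shutdown is requested.
        while (!_stopSignal.WaitOne(TimeSpan.FromSeconds(5)))
        {
            // periodic work goes here, e.g. pushing a timestamp to clients
        }
    }
}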
This is an old post, but as I was searching for this, I would like to report that in .NET 4.5.2 there is a native way to do it with QueueBackgroundWorkItem.
Take a look at this post: https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
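For illustration, a sketch of that approach might look like the following; the class name, the 5-second interval and the call from Application_Start are assumptions, not part of the linked post:

using System;
using System.Threading.Tasks;
using System.Web.Hosting;

public static class BackgroundLoop
{
    // Schedules a recurring task that observes the shutdown token ASP.NET
    // passes in, so it stops cleanly when the application recycles.
    // Call this once, e.g. from Application_Start.
    public static void Start()
    {
        HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
        {
            while (!cancellationToken.IsCancellationRequested)
            {
                // periodic work goes here
                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        });
    }
}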
MarianoC
It depends what you are trying to accomplish in your while loop, but in general this is the kind of situation where a Windows Service is the best answer. Installing a Windows Service is going to require that you have admin privileges on the web server.
With an infinite loop you end up with a lot of issues regarding the Windows message pump. This is the thing that keeps a Windows application alive even when the application isn't "doing" anything. Without it, a program simply ends.
The problem with an infinite loop is that the application is stuck "doing" something, which prevents other applications (or threads) from "doing" their thing. There have been a few workarounds, such as the DoEvents in Windows Forms, but they all have some serious drawbacks when it comes to responsiveness and resource management. (Acceptable on a small LOB application, maybe not on a web server.) Even if the while-loop is on a separate thread, it will use up all available processing power.
Asynchronous programming is really designed more for long-running processes, such as waiting for a database to return a result or waiting for a printer to come online. In these cases, it's the external process that is taking a long time, not a while-loop.
If a Windows Service is not possible, then I think your best bet is going to be setting up a separate thread with its own message pump, but it's a bit complicated. I've never done it on a web server, but you might be able to start an Application. This will provide a message pump for you and allow you to respond to Windows events, etc. The only problem is that this is going to start a Windows application (either WPF or WinForms), which may not be desirable on a web server.
What are you trying to accomplish? Is there another way you might go about it?
I have an ASP .NET website running on GoDaddy in a shared environment. The application is a subscription-based service with options for recurring billing to users.
Every hour, we need to synchronize user data with our payment processor to update users who have upgraded or cancelled their accounts. The payment processor does not have a mechanism for calling a URL or otherwise notifying us of changes.
The problem: We need to create a background thread that runs some code at a predefined interval. There are some good articles about background tasks in .NET, but I am sure there could be a simpler way around this. Maybe an application-wide timer that can call a function, etc.
The limitation: The shared environment does not allow Windows services, external applications, full trust, etc.
Since this is a production application, I would like to use the safest approach possible rather than arm-twisting IIS.
I had a similar problem: I'm developing an ASP.NET proof of concept and use a background thread that performs a task that could take several hours. The problem is, ASP.NET can recycle the AppDomain at any time (killing my background thread).
To prevent this, you can register your background thread with ASP.NET so it will notify your thread to shut down. To do this, implement the following interface:
public interface IRegisteredObject
{
    void Stop(bool immediate);
}
And register your object with ASP.NET using the following static method:
HostingEnvironment.RegisterObject(this);
When ASP.NET tears down the AppDomain, it will first attempt to call Stop method on all registered objects. In most cases, it’ll call this method twice, once with immediate set to false. This gives your code a bit of time to finish what it is doing. ASP.NET gives all instances of IRegisteredObject a total of 30 seconds to complete their work, not 30 seconds each. After that time span, if there are any registered objects left, it will call them again with immediate set to true.
By preventing the Stop method from returning (by locking a field while the worker is busy), we stop ASP.NET from shutting down the AppDomain until our work is finished.
public void Stop(bool immediate)
{
    // Taking the lock means we wait for any work currently running inside
    // DoWork to finish before acknowledging the shutdown request.
    lock (_lock)
    {
        _shuttingDown = true;
    }
    HostingEnvironment.UnregisterObject(this);
}

public void DoWork(Action work)
{
    // Skip new work once shutdown has started; holding the lock keeps Stop
    // from returning while work is still in progress.
    lock (_lock)
    {
        if (_shuttingDown)
        {
            return;
        }
        work();
    }
}
Use a Task instead of an Action to benefit from the cancellation options. For your specific case you could start a timer that executes tasks like this.
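For example, a sketch of that timer, funnelled through the registered object's DoWork guard; the BackgroundWorkGuard and SyncWithPaymentProcessor names are placeholders for the class and work shown above:

using System;
using System.Threading;

public class HourlySync
{
    private readonly BackgroundWorkGuard _worker; // the IRegisteredObject implementation above
    private Timer _timer;

    public HourlySync(BackgroundWorkGuard worker)
    {
        _worker = worker;
    }

    public void Start()
    {
        // Fire immediately, then once an hour. Keep the timer in a field so
        // it is not garbage collected.
        _timer = new Timer(_ => _worker.DoWork(SyncWithPaymentProcessor),
                           null, TimeSpan.Zero, TimeSpan.FromHours(1));
    }

    private void SyncWithPaymentProcessor()
    {
        // call the payment processor and update subscriptions here
    }
}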
PS. This is a hack and ASP.NET isn't meant to run background tasks, so use a Windows service or WCF service when possible! I use this since it simplifies development, maintenance and installation.
For more information see my source: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
To update for 2018 - The Hangfire NuGet package is perfect for this
Since there were no answers, I thought I'd post my solution in case it helps others.
Not the ideal approach by any means, but for those who might gain from it: I created a cron job on another Linux hosting account we had to call the required ASP.NET URL. A management horror, but it does the job.
We have a web front end on our business layer server.
Certain pages in our web application instantiate very long running tasks (could be up to 10+ minutes). The way that these requests are handled is like so:
(on the HTTP request thread)
We make a connection to the business server.
We create a new thread to make the long running call, passing in the connection object.
The HTTP request then completes, passing a handle back to the browser.
The browser periodically polls the web server to get updates on the long running task's progress.
All requests to the business server are authenticated - the connection's user principal must have permission to call the method on the business server.
This mechanism works fine as long as our web application is running in Classic mode.
When we run in pipeline mode, we get ObjectDisposedExceptions when the browser polls.
System.ObjectDisposedException: Safe handle has been closed
at System.StubHelpers.StubHelpers.SafeHandleC2NHelper(Object pThis, IntPtr CleanupWorkList)
at Microsoft.Win32.Win32Native.GetTokenInformation(SafeTokenHandle TokenHandle, UInt32 TokenInformationClass, SafeLocalAllocHandle TokenInformation, UInt32 TokenInformationLength, ref UInt32 ReturnLength)
at System.Security.Principal.WindowsIdentity.GetTokenInformation(SafeTokenHandle tokenHandle, TokenInformationClass tokenInformationClass, ref UInt32 dwLength)
at System.Security.Principal.WindowsIdentity.get_User()
at System.Security.Principal.WindowsIdentity.GetName()
at System.Security.Principal.WindowsIdentity.get_Name()
The problem appears to be that the Windows principal used to make the connection is disposed when the original request ends (which is understandable - in fact I am surprised that the code worked at all!).
As a way around this problem I was wondering if it was possible to either create a duplicate of the HTTP request principal and use that to create the connection (and dispose of it when the long running task completes), or whether it would be possible to impersonate the HTTP request principal on the worker thread even after the original principal is disposed?
Update
(My comment under Aliostad's question was incorrect: the test page did fail. I managed to confuse myself sufficiently that I wrote my test page so that it did not exercise the same code path as the real (faulting) code. Nevermind!)
I have written a "workaround" for this problem:
I am in the fortunate position of knowing what roles/groups the business server logic will be querying for before the call to the business server is made. So my workaround is to create a new generic principal based upon the request's principal's membership of these roles. The long running task is run using the generic principal.
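A sketch of that workaround; the role names and the helper name are placeholders, and the roles have to be evaluated while the request's Windows identity is still alive:

using System.Linq;
using System.Security.Principal;
using System.Web;

public static class PrincipalHelper
{
    // Builds a detached principal from the request principal's membership of
    // the roles the business server is known to query for.
    public static IPrincipal CreateDetachedPrincipal(HttpContext context, string[] candidateRoles)
    {
        IPrincipal requestPrincipal = context.User;

        // Evaluate the roles now, before the request (and its identity) is disposed.
        string[] grantedRoles = candidateRoles
            .Where(requestPrincipal.IsInRole)
            .ToArray();

        // GenericIdentity/GenericPrincipal hold no OS handles, so they remain
        // usable on the worker thread after the original request has ended.
        return new GenericPrincipal(
            new GenericIdentity(requestPrincipal.Identity.Name),
            grantedRoles);
    }
}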
I am not 100% happy with this workaround because it is very much a "hack" - i.e. I can see that it would easily fall down if some logic did the (eminently sensible) check of verifying that the principal's identity is authenticated.
So I would still very much appreciate any help / insight into this issue.
Thanks
OK, here is my take on this.
First of all, if you create a thread, all of the current thread's security context will be copied to the new thread - by default. This operation is heavy but much needed (as you can imagine, most things will not work without it). In case you do not need the copying of context and want to prevent it, there is a way to do it, and it has been explained in Richter's CLR via C#. Luckily, he has shared this very bit of the book here; it basically comes down to calling a static method to prevent the context from being flowed:
ExecutionContext.SuppressFlow();
I cannot see this being called in WCF, although using Reflector I found a single use of it here:
[SecuritySafeCritical]
private IAsyncResult BeginGetContext(bool startListening)
{
    Exception exception;
    do
    {
        exception = null;
        try
        {
            try
            {
                if (ExecutionContext.IsFlowSuppressed())
                {
                    return this.listener.BeginGetContext(this.onGetContext, null);
                }
                using (ExecutionContext.SuppressFlow())
                {
                    return this.listener.BeginGetContext(this.onGetContext, null);
                }
            }
            // .... the rest
Interestingly enough, this is used in three places, one of them in SharedHttpTransportManager.
Now all this might look like we have found the issue and it is a bug, but I very much doubt it.
My hunch is that there is a process recycle happening in between and the context is lost. The way to prove or disprove this would be to use perfmon to register all process recycles and find out if any happened in between.
My solution is basically - which you might not like! - to simply insert an item into a queue (MSMQ or a simple database queue) and have a Windows service reading it. With an operation this important, I would never trust IIS to carry it out to the finish.
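For what it's worth, the MSMQ side of that hand-off could be sketched as below; the queue path and payload are placeholders, and the Windows service would read from the same queue:

using System.Messaging;

public static class WorkItemQueue
{
    private const string QueuePath = @".\private$\longRunningWork";

    // The web request returns as soon as the message is stored; a Windows
    // service picks it up and performs the long-running business call.
    public static void Enqueue(string payload)
    {
        if (!MessageQueue.Exists(QueuePath))
        {
            MessageQueue.Create(QueuePath);
        }

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Send(payload, "long-running work item");
        }
    }
}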
Hope this is useful to you.
I am writing a custom Windows Workflow Foundation activity, that starts some process asynchronously, and then should wake up when an async event arrives.
All the samples I have found (e.g. this one by Kirk Evans) involve a custom workflow service that does most of the work and then posts an event to the activity-created queue. The main reason for that seems to be that the only method to post an event [that works from a non-WF thread] is WorkflowInstance.EnqueueItem, and the activities don't have access to workflow instances, so they can't post events (from the non-WF thread where I receive the result of the async operation).
I don't like this design, as this splits functionality into two pieces, and requires adding a service to a host when a new activity type is added. Ugly.
So I wrote the following generic service that I call from the activity's async event handler, and that can be reused by various async activities (error handling omitted):
class WorkflowEnqueuerService : WorkflowRuntimeService
{
    public void EnqueueItem(Guid workflowInstanceId, IComparable queueId, object item)
    {
        this.Runtime.GetWorkflow(workflowInstanceId).EnqueueItem(queueId, item, null, null);
    }
}
Now in the activity code, I can obtain and store a reference to this service, start my async operation, and when it completes, use this service to post an event to my queue. The benefits of this: I keep all the activity-specific code inside the activity, and I don't have to add new services for each activity type.
But seeing the official and internet samples doing it with specialized non-reusable services, I would like to check whether this approach is OK, or whether I'm creating some problems here.
There is a potential problem here with regard to workflow persistence.
If you create long running workflows that are persisted to a database, they are not reloaded into memory until there is some external event that reloads them. But with your approach they are responsible for triggering that event themselves, which they cannot do until they are reloaded. And we have a catch-22 :-(
The proper way to do this is using an external service. And while this might feel like dividing the code into two places, it really isn't. The reason is that the workflow is responsible for the big picture, i.e. what should be done. And the runtime service is responsible for the actual implementation, or how it should be done. That way you can change the how without changing the why and when part.
A follow-up: regardless of all the reasons why it "should be done" using a service, this is directly supported in .NET 4.0, which provides a clean way for an activity to start asynchronous work while suspending the persistence of the activity.
See
http://msdn.microsoft.com/en-us/library/system.activities.codeactivitycontext.setupasyncoperationblock(VS.100).aspx
for details.
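For reference, a sketch of what this looks like with the AsyncCodeActivity base class that shipped in .NET 4 (a related but different API from the pre-release SetupAsyncOperationBlock method linked above); CallExternalSystem is a placeholder for the real asynchronous work:

using System;
using System.Activities;

public sealed class StartExternalProcess : AsyncCodeActivity<string>
{
    protected override IAsyncResult BeginExecute(
        AsyncCodeActivityContext context, AsyncCallback callback, object state)
    {
        // While this asynchronous work is in flight, the runtime keeps the
        // activity in a no-persist zone instead of persisting the instance.
        Func<string> work = CallExternalSystem;
        context.UserState = work;
        return work.BeginInvoke(callback, state);
    }

    protected override string EndExecute(AsyncCodeActivityContext context, IAsyncResult result)
    {
        var work = (Func<string>)context.UserState;
        return work.EndInvoke(result);
    }

    private static string CallExternalSystem()
    {
        // placeholder for the real external call
        return "done";
    }
}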