I sometimes get this error on a session-scoped component and still haven't figured out what is causing it to fail. Any ideas?
ERROR [Exceptions] handled and logged exception
javax.el.ELException: org.jboss.seam.core.LockTimeoutException: could not acquire lock on @Synchronized component: importUser
Session-scoped components are synchronized by default. That means Seam ensures that only one request at a time may access such a component; all other requests have to wait until the first is finished. To prevent starvation, the waiting requests have a timeout (see org.jboss.seam.core.SynchronizationInterceptor for the corresponding implementation). If a waiting request has not gained access to the component by the time the timeout is reached, the SynchronizationInterceptor throws an org.jboss.seam.core.LockTimeoutException.
Assume two requests, A and B, both need your importUser component and A arrives first. If A takes a long time to finish, B will end in a LockTimeoutException. To find the cause of your issue, try to find out how a request to importUser could take longer than the defined timeout.
I had a page where this would happen infrequently under heavy load. I was able to reduce the frequency of this occurring by putting this annotation on the offending Seam component class:
@Synchronized(timeout=5000)
That increases the timeout to five seconds instead of the default one second Seam gives them. It's just a band-aid, but I wasn't up for rewriting that behemoth.
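For reference, the annotation sits on the component class itself. A minimal sketch of what that looks like (the class body is illustrative; only the component name comes from the error above):
import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;
import org.jboss.seam.annotations.Synchronized;

// Session-scoped components are synchronized by default with a one-second timeout;
// raising it gives a slow first request more time before waiting requests fail
// with a LockTimeoutException.
@Name("importUser")
@Scope(ScopeType.SESSION)
@Synchronized(timeout = 5000) // timeout in milliseconds
public class ImportUser {
    // component logic unchanged
}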
I am using an Axon tracking event processor. Sometimes events take longer than 10 seconds to process.
This seems to cause the message to be processed again, and this appears in the log: "Releasing claim of token X/0 failed. It was owned by another node."
If I increase the number of segments it does not log this, BUT the event is still processed twice, so I think this might be misleading. (I think I was mistaken about this.)
I have tried adjusting fetchDelay, cleanupDelay and tokenClaimInterval, none of which has fixed this. Is there a property or something that I am missing?
Edit
The scenario taking longer than 10 seconds is making an HTTP request to an external service.
I'm using Axon 4.1.2 with all default configuration via Spring auto-configuration. I cannot see the "Releasing claim on token and preparing for retry in [timeout]s" log message.
I was having this issue with a single segment and 2 instances of the application. I realised I hadn't increased the number of segments like I thought I had.
After further investigation I have discovered that adding an additional segment seems to have stopped this. Even if I have, for example, 2 segments and 6 application instances, it still doesn't reappear; however, I'm not sure how this is different from my original scenario of 1 segment and 2 instances.
I didn't realise it would be possible for multiple threads to grab the same tracking token and process the same event. It sounds like the best action would be to put an idempotency check before the HTTP call (something like the sketch below)?
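For what it's worth, here is a plain-Java sketch of the kind of idempotency check I mean (all names are hypothetical; a real implementation would persist the processed event identifiers in the same database as the handler's other work rather than in memory):
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentExternalCallHandler {

    private final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();

    // Runs the HTTP call only if this event identifier has not been seen before.
    public void handle(String eventIdentifier, Runnable httpCall) {
        // add() returns false when the identifier was already present,
        // i.e. the event was delivered (at least) once before.
        if (!processedEventIds.add(eventIdentifier)) {
            return; // duplicate delivery: skip the external call
        }
        httpCall.run(); // the long-running request to the external service
    }
}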
The "Releasing claim of token [event-processor-name]/[segment-id] failed. It was owned by another node." message can only occur in three scenarios:
1. You are performing a merge operation of two segments, which fails because the given thread doesn't own both segments.
2. The main event processing loop of the TrackingEventProcessor is stopped, but releasing the token claim fails because the token is already claimed by another thread.
3. The main event processing loop has caught an Exception, making it retry with an exponential back-off, and it tries to release the claim (which might fail with the given message).
I am guessing it's not options 1 and 2, so that would leave us with option 3. This should also mean you are seeing other WARN level messages, like:
Releasing claim on token and preparing for retry in [timeout]s
Would you be able to share whether that's the case? That way we can pinpoint a little better what the exact problem is you are encountering.
By the way, very likely you have several processes (event handling threads of the TrackingEventProcessor) stealing the TrackingToken from one another. As they're stealing an un-updated token, both (or more) will handle the same event. Hence you see the event handler being invoked twice.
This is obviously undesirable behavior and something we should resolve for you. I would like to ask you to provide answers to my comments under the question, as right now I have too little to go on. Let us figure this out, @Dan!
Update
Thanks for updating your question @dan, that's very helpful.
From what you've shared, I am fairly confident that both instances are stealing the token from one another. This does depend though on whether both are using the same database for the token_entry table (although I am assuming they are).
If they are using the same table, then they should "nicely" share their work, unless one of them takes too long. If it takes too long, the token will be claimed by another process. This other process, in this case, is the thread of the TrackingEventProcessor (TEP) of your other application instance. The claim timeout defaults to 10 seconds, which also corresponds with your long-running event handling process.
This claimTimeout is adjustable though, by invoking the Builder of the JpaTokenStore/JdbcTokenStore (depending on which you are using / auto-wiring) and calling the JpaTokenStore.Builder#claimTimeout(TemporalAmount) method. And I think this would be required on your end, given the fact that you have a long-running operation; see the sketch below.
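To make that concrete, here is a minimal sketch of overriding the auto-configured token store in Spring, assuming you are on the JPA token store and that the EntityManagerProvider and Serializer beans come from the Axon Spring Boot auto-configuration (the 30-second value is just an example; pick something comfortably above your slowest handler):
import java.time.Duration;

import org.axonframework.common.jpa.EntityManagerProvider;
import org.axonframework.eventhandling.tokenstore.TokenStore;
import org.axonframework.eventhandling.tokenstore.jpa.JpaTokenStore;
import org.axonframework.serialization.Serializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TokenStoreConfig {

    // Raise the claim timeout above the duration of the slow HTTP call so the
    // other instance's processor thread does not steal the token mid-handling.
    @Bean
    public TokenStore tokenStore(EntityManagerProvider entityManagerProvider,
                                 Serializer serializer) {
        return JpaTokenStore.builder()
                            .entityManagerProvider(entityManagerProvider)
                            .serializer(serializer)
                            .claimTimeout(Duration.ofSeconds(30))
                            .build();
    }
}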
There are of course different ways of tackling this, like making sure the TEP is only run on a single instance (not really fault tolerant though), or offloading this long-running operation to a scheduled task which is triggered by the event; a rough sketch of that idea follows below.
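Purely as an illustration (the names are assumptions, and this simpler variant hands the call to a background executor rather than a true scheduled task), the idea would be something like:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadingEventHandler {

    // Background thread that performs the slow HTTP request, so the event
    // handler itself returns well within the 10-second claim timeout.
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void on(Object event) {
        executor.submit(() -> callExternalService(event));
    }

    private void callExternalService(Object event) {
        // long-running HTTP request to the external service goes here
    }
}
Note that with this approach a failure of the HTTP call no longer makes the event processor retry the event, so you would need your own error handling (and ideally the idempotency check mentioned above) around the call.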
But I think we've found the issue at least, so I'd suggest tweaking the claimTimeout and seeing if the problem persists.
Let us know if this resolves the problem on your end, @dan!
After a previous post about an issue with Session State being locked on every request (normal behavior for ASP.NET), I tried fully disabling Session State. This does, in fact, disable the Session object and throws exceptions if I try to use it. However, as stated in the named post, all requests are still serviced in a serialized fashion. That is, a second "simultaneous" request doesn't get served until the previous one has finished. The related documentation states that disabling Session State avoids the session lock but, in my case, requests are still serviced serially.
This is not MVC.
This is my previous post: Custom handler processes multiple requests serially and not simultaneously.
Any help would be appreciated.
Turns out this always happens inside the development web server (Cassini). When session state is disabled there is no access to the Session object, but there still seems to be a lock on the request somewhere.
When debugging in IIS, this doesn't happen.
Hope this helps.
I have a request to my own service that takes 15 seconds for the service to complete. Should not be a problem, right? We actually have a service-side timer so that it will take at most 15 seconds to complete. However, the client is seeing "the connection was forcibly closed" and is automatically retrying the GET request twice (within the System.Net layer; I have seen it by turning on the diagnostics).
Oh, BTW, this is a non-SOAP situation (WCF 4 REST Service) so there is none of that SOAP stuff in the middle. Also, my client is a program, not a browser.
If I shrink the time down to 5 seconds (which I can do artificially), the retries stop, but I am at a loss to explain why the connection should be dropped so quickly. The HttpWebRequest.KeepAlive flag is true by default and is not being modified, so the connection should be kept open.
The timing of the retries is interesting. They come at the end of whatever timeout we choose (e.g. 10 or 15 seconds), so the client side seems to be reacting only after getting the first response.
Another thing: there is no indication of a problem on the service side. It works just fine but sees a surprising (to me) couple of retries of the request from the client.
I have Googled this issue and come up empty. The standard for keep-alive is over 100 seconds AFAIK, so I am still puzzled why the client is acting the way it is, and the behavior is within the System.Net layer so I cannot step through it.
Any help here would be very much appreciated!
== Tevya ==
Change your service so it sends a timeout indication to the client before closing the connection.
Sounds like a piece of hardware (router, firewall, load balancer?) is sending an RST because of some configuration choice.
I found the answer and it was almost totally unrelated to the timeout. Rather, the problem was related to the use of custom serialization of the response data. The response data structures have some dynamically appearing types and thus cannot be serialized by the normal ASP.NET mechanisms.
To solve that problem, we create an XmlObjectSerializer object and then pass it, plus the objects to serialize, to System.ServiceModel.Channels.Message.CreateMessage(). Here, two things went wrong:
1. An unexpected type was added to the message (how and why I will not go into here), which caused the serialization to fail, and
2. it turns out that the CreateMessage() method does not immediately serialize the contents but defers the serialization until sometime later (probably just-in-time).
The two facts together caused an uncatchable serialization failure on the server side because the actual attempt to serialize the objects did not occur until the user-written service code had returned control to the WCF infrastructure.
Now why did it look like a timeout? Well, it turns out that not all the objects being returned had the unexpected type in them. In particular, the first N objects did not. So, when the time limit was lengthened beyond 5 seconds, the (N+1)th object did reference the unknown type and was included in the download, which broke the serialization. Later testing confirmed that the failure could happen even when only one object was being passed back.
The solution was to pre-process the objects so that no unexpected types are referenced. Then all goes well.
== Tevya ==
I have written a small ASHX handler that uses a small state object that I'd like to persist through the lifetime of a series of requests. I have the handler putting the object in the server-side cache (HttpContext.Current.Cache) and retrieving it at the start of ProcessRequest.
How long can I expect that object to remain in the cache? I expect handler instances to come and go, so I wanted something to persist across all of them (until no longer needed as determined by the requests themselves). However, if I have the handler write to the application log when it has to create a new state object due to it not being in the cache, I'm seeing it create it 2-3 times.
You can specify the lifetime and priority when you add the item to the cache.
An item isn't guaranteed to stay in the cache for the entire requested lifetime. For example, if there's memory pressure then the cache might be purged, but setting a higher priority for your item makes it more likely that it will remain in the cache.
I have a WCF web service that uses ASP.NET session state. WCF sets a read-write lock on the session for every request. What this means is that my web service can only process one request at a time per user, which hurts perceived performance of our AJAX application.
So I'm trying to find a way to get around this limitation.
Using a read-only lock (which would allow concurrent access to the session) isn't supported by WCF.
I haven't found a way to release the read-write lock manually during the processing of a request.
So now I'm thinking that there may be some way to set the read-write lock timeout to a very short interval, so that waiting requests don't need to wait very long. See the relevant sentence in the quote below.
From MSDN:
http://msdn.microsoft.com/en-us/library/ms178581.aspx
"If two concurrent requests are made for the same session, the first request gets exclusive access to the session information. The second request executes only after the first request is finished. (The second session can also get access if the exclusive lock on the information is freed because the first request exceeds the lock time-out.) If the EnableSessionState value in the # Page directive is set to ReadOnly, a request for the read-only session information does not result in an exclusive lock on the session data."
...But I haven't found any information on how long this lock time-out is, or how to change it.
I can tell you that the httpRuntime executionTimeout controls this lock time; however, the documentation for this field states that the thread should be terminated at that point. I know from experience that the thread is not terminated and will eventually return data, but a new thread is spawned to handle requests in the queue.
By default this value is 110 seconds in ASP.NET 2.0 and later; before that it was 90 seconds. I would be concerned about this behavior changing in the future and being "fixed".
Has anyone tried using the SQLSessionStateProvider and modifying the stored procedures? I did this in dev and it seems to get around the locking issues, but I'm not sure whether it has any side effects. Basically, what I did was change the three stored procedures which obtain exclusive locks so that the Lock column is always set to 0.