Teradata Database 3130 Response limit exceeded?

[Teradata Database] [3130] Response limit exceeded
I have no idea what is causing this random error message. It happens when I am making a call to the database for a SELECT or to execute a stored procedure. I wish I had more information on how to reproduce this, but it appears to be intermittent.
What does this error actually mean? What types of conditions could cause this?
Edit: I've discovered that the issue goes away when I build my ASP.NET app (vs2012). It's like something related to connections is being cached somewhere on my machine. After I recycle the app pool with a rebuild, it resets everything. The next time this happens, I will try saving the web.config file which automatically recycles the app pool without rebuilding the DLL.

This is a cut & paste from the Messages manual:
3130 Response limit exceeded.
Explanation: There is a TDBMS limit of 16 outstanding responses for a
single session. If responses are allowed to pile up by an application,
this error will occur. A response is the response set from a SELECT
statement. A response is kept by the TDBMS until we know the user is
done with it at which point it is cancelled. There are two scenarios:
If KeepResp is OFF, the response is automatically cancelled when all rows have been returned to the application and the host has been notified of the end of the response.
If KeepResp is ON, the response is held until explicitly cancelled by the user.
In either case, the response can be explicitly cancelled by the application as soon as it is no longer needed.
Generated By: Dispatcher.
For Whom: End user.
Remedy: The responses are the property of the session and will automatically be cancelled if the session is logged off.
Cancel an old response and resubmit the request or transaction.
As you already noticed, this is usually caused by a misbehaving application leaving too many open result sets on the server side. It's the client's responsibility to close them :-)

In my case the error came when I tried to create more than 15 Statements/PreparedStatements on the same Connection instance and then executed queries on them.
So you should check that more than 15 Statements are not being created on the same connection, or make sure they are closed before creating another.
The result set is generally out of the picture when we are talking about the 3130 response limit exceeded error, since a Statement automatically closes its existing ResultSet if it is reused.
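The answer above is in JDBC terms, but the same rule applies to the asker's ASP.NET app: dispose every command and data reader as soon as it is no longer needed so the server-side response can be cancelled. A minimal ADO.NET-style sketch (written against the abstract DbConnection; the concrete Teradata .NET provider is assumed to follow the usual ADO.NET conventions):

using System.Data.Common;

// Hedged sketch: open, consume, and promptly dispose commands and readers so
// responses do not pile up on the session. Works against any ADO.NET provider.
public static class QueryHelper
{
    public static void RunQuery(DbConnection connection, string sql)
    {
        using (DbCommand command = connection.CreateCommand())
        {
            command.CommandText = sql;
            using (DbDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ... consume the row ...
                }
            }   // reader disposed here: the server-side response can be released
        }       // command disposed here
    }
}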

Related

Error during GENERAL_REQUEST_ENTITY for POST request causes ASP .NET session state to never get unlocked

(Cross posting from Server Fault where I wasn't getting any traction):
I have been trying to chase down the root cause of a condition where ASP .NET session state remains locked after a web request has been terminated due to an unexpected error. We use the SQL Server session state provider because we have several servers in a web farm. This issue first presented itself in the form of many requests getting stuck on the 'AcquireRequestState' event of their lifecycle for no apparent reason. I was able to find corresponding entries for these requests in the session state database in SQL Server that were all locked (column Locked = 1). I was also able to correlate these requests to entries in the IIS log with HTTP status codes of 500 (with a sub status of 0). These findings led me to believe that, in some cases, a request was erroring out but was NOT releasing its lock on session state like it should.
I enabled Failed Request Tracing in IIS for the website in question for status code 500 with all available providers selected each with the 'Verbose' setting for verbosity. I've since gathered several failed traces that have caused permanently locked ASP .NET sessions. They all share the same characteristics:
They are all 'POST' requests where the browser is posting data to be processed/saved.
They all have events indicating that the 'Session' module was invoked during the REQUEST_ACQUIRE_STATE event. At this point the request would have marked the row in the session state database as being "locked". This is normal and expected.
They all have GENERAL_READ_ENTITY_START, GENERAL_READ_ENTITY_END, and GENERAL_REQUEST_ENTITY entries that appear to be reading in the data that was posted to the server as part of the request. This appears to be a buffered operation as these events get repeated over and over with each one reading in some subset of the posted data.
At some point during the 'read entity' related events an error occurs. Some have the error code "Incorrect function. (0x80070001)" and others have "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)".
Once the error has been encountered, they all jump directly to the END_REQUEST events.
The issue here is that, under normal circumstances, there should be a RELEASE_REQUEST_STATE event that will allow the Session module to release the lock it has on the session. This event is being skipped in this scenario. Just to be sure, I enabled failed request tracing for the '200' status code as well and generated several traces of successful requests that do have the RELEASE_REQUEST_STATE event being handled by the Session module.
A co-worker pointed out that you can also cause a request to skip directly to the 'END_REQUEST' event by calling HttpContext.Current.ApplicationInstance.CompleteRequest(). I tested this out and saw that using this method during a post request creates a trace very similar to the ones I've been capturing when this issue has been happening, but session does still get cleaned up properly. This led me to running SQL Profiler on the SQL Server database where the session is stored to trace all calls to stored procedures. When we skip directly to END_REQUEST due to calling CompleteRequest(), a call is made to update the session state (and release the lock) as expected. When we skip to END_REQUEST as a result of an error during GENERAL_REQUEST_ENTITY, the call to update or release the lock on session state is never made.
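For concreteness, the experiment described above amounts to something like this minimal sketch (hypothetical page and handler names, not the production code):

using System;
using System.Web;

// Hypothetical code-behind sketch: short-circuit the pipeline from inside a
// handler so the request jumps straight to END_REQUEST.
public partial class SavePage : System.Web.UI.Page
{
    protected void SaveButton_Click(object sender, EventArgs e)
    {
        // ... handle the POSTed data ...

        // Skips the remaining pipeline events; in the traces this looked very
        // similar to the error case, yet session state was still released.
        HttpContext.Current.ApplicationInstance.CompleteRequest();
    }
}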
My theory at this point is that some kind of network issue is causing the 'Incorrect function' and 'I/O operation has been aborted because of either a thread exit or an application request' errors, but I don't understand why this seems to be causing the request handling to skip over releasing the lock on session state. If the request went through REQUEST_ACQUIRE_STATE, it seems like it should also release the lock at some point toward the end of the request as well. I'm loath to say that this is a bug in IIS or ASP .NET, but it certainly appears that way to me at this point.
Are there any known conditions under which errors will lead to a session state lock not being released?
As it turns out, this was related to this question: ManagedPipelineHandler for an AJAX POST crashes if an IE9 user navigates away from a page while that call was in progress
The workaround specified in the accepted answer on that question does work, but Microsoft has also since released a hotfix (not yet publicly available as of this writing) that patches the session handling logic to avoid the issue altogether.

Should POST data have a re-try on timeout condition or not?

When POSTing data - either using AJAX or from a mobile device or what have you - there is often a "retry" condition, so that should something like a timeout occur, the data is POSTed again.
Is this actually a good idea?
POST requests are not meant to be idempotent, so if you:
1. make a POST to the server,
2. the server receives the request,
3. takes time to execute, and
4. then sends the data back,
and the timeout is hit sometime after step 3, then the retry will re-submit a request whose effect was never meant to be applied twice.
The question, then, is: should a retry (when calling from the client side) be set for POST data, or should the server be designed to always handle POST data appropriately (with tokens and so on), or am I missing something?
Update, as per the questions: this is for a mobile app. As it happens, during testing it was noticed that with too short a timeout, the app would retry. Meanwhile, the back-end server had in fact accepted and processed the initial request, and got very upset when the new (otherwise identical) re-request came in.
Nonces are a (partial) solution to this. The server generates a nonce and gives it to the client. The client sends the POST including the nonce; the server checks whether the nonce is valid and unused and, if so, acts on the POST and invalidates the nonce; if not, it reports back that the nonce is used and discards the data. This is also very useful for avoiding the 'double post' problem of users clicking a submit button twice.
However, this moves the problem from the client to a different one on the server. If you invalidate the nonce before the action, the action might still fail or hang; if you invalidate it after, the nonce is still valid for requests that arrive during the processing. So, a possible scenario on the server, on receiving the POST, becomes:
Lock nonce
Do action
On any processing error preventing action completion, rollback, release lock on nonce.
On no errors, invalidate / remove nonce.
Semaphores on the server side are most helpful with this; most backend languages have libraries for them.
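A minimal in-memory sketch of that lock/act/invalidate flow (hypothetical names; a real web farm would need a shared store such as the database or a distributed cache rather than a static dictionary):

using System;
using System.Collections.Concurrent;

// Hypothetical nonce store illustrating the flow above; in-memory only.
public static class NonceStore
{
    private static readonly ConcurrentDictionary<string, bool> Issued =
        new ConcurrentDictionary<string, bool>();

    // Server generates a nonce and hands it to the client along with the form.
    public static string Issue()
    {
        string nonce = Guid.NewGuid().ToString("N");
        Issued[nonce] = true;
        return nonce;
    }

    // Called when the POST arrives. Atomically "locks" (consumes) the nonce;
    // returns false if it is unknown or already used, i.e. the original POST
    // was (or is being) processed.
    public static bool TryConsume(string nonce)
    {
        return Issued.TryRemove(nonce, out _);
    }

    // If the action fails before completing, re-issue the nonce so a retry can succeed.
    public static void Release(string nonce)
    {
        Issued[nonce] = true;
    }
}

In the POST handler: if TryConsume fails, report back that the nonce has already been used and discard the data; otherwise perform the action, calling Release only in the error path (on success the nonce is already gone).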
So, implementing all these:
It is safe to retry, if the action is already performed it won't be done again.
A reply that the nonce has already been used can be understood as a confirmation that the original POST has been acted upon.
If you need the result of an action where the second request shows that the first came through, a short-lived cache would be needed server-side.
Up to you to set a sane limit on subsequent tries (what if the 2nd fails? or the 3rd?).
The issue with automatic retries is the server needs to know if the prior POST was successfully processed to avoid unintended consequences. For example, if each POST inserts a DB record, but the last POST timed out after the insert, the automatic re-POST will cause a duplicate DB record, which is probably not what you want.
In a robust setup you may be able to catch that there was a timeout and roll back any POSTed updates. That would safely allow an automatic re-POST. But many (most?) web systems aren't this elaborate, so you'll typically require human intervention to determine if the POST should happen again.
If you can't avoid the long wait until the server responds, a better practice would be to return an immediate response (200 OK with the message "processing") and have the client, later on, send a new request that checks whether the action was performed.
AJAX was not designed to be used in such a way ("heavy" actions).
By the way, the default HTTP timeout is 7200 secs so I don't think you'll reach it easily - but regardless, you should avoid having the user wait for long periods of time.
Providing more information about the process (like what exactly you're trying to do) would help in suggesting ways to avoid such obstacles.
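A rough sketch of that accept-then-poll idea mentioned above (hypothetical job store and method names, framework-agnostic):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical sketch: accept the POST immediately, do the heavy work in the
// background, and let the client poll for the outcome with a separate request.
public static class LongRunningActions
{
    private static readonly ConcurrentDictionary<Guid, string> Status =
        new ConcurrentDictionary<Guid, string>();

    // Called by the POST handler; returns a token the client can poll with.
    public static Guid Start(Action work)
    {
        Guid token = Guid.NewGuid();
        Status[token] = "processing";
        Task.Run(() =>
        {
            try
            {
                work();
                Status[token] = "done";
            }
            catch
            {
                Status[token] = "failed";
            }
        });
        return token;
    }

    // Called by the client's follow-up request to check whether the action completed.
    public static string Check(Guid token)
    {
        return Status.TryGetValue(token, out string state) ? state : "unknown";
    }
}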
If your requests are failing often enough to instigate something like this, you have major problems that have nothing to do with implementing a retry condition. I have never seen a Web app that needed this type of functionality. Programming your app to automatically beat an overloaded server with the F5 hammer is not the solution to anything.
If this ajax request is triggered from a button click, disable that button until it returns, successful or not. If it failed, let the user click it again.

Connection forcibly closed on 15 second request

I have a request to my own service that takes 15 seconds for the service to complete. Should not be a problem, right? We actually have a service-side timer so that it will take at most 15 seconds to complete. However, the client is seeing "the connection was forcibly closed" and is automatically (within the System.Net layer; I have seen it by turning on the diagnostics) retrying the GET request twice.
Oh, BTW, this is a non-SOAP situation (WCF 4 REST Service) so there is none of that SOAP stuff in the middle. Also, my client is a program, not a browser.
If I shrink the time down to 5 seconds (which I can do artificially), the retries stop but I am at a loss for explaining how the connection should be dropped so quickly. The HttpWebRequest.KeepAlive flag is, by default, true and is not being modified so the connection should be kept open.
The timing of the retries is interesting. They come at the end of whatever timeout we choose (e.g. 10, 15 seconds or whatever), so the client side seems to be reacting only after getting the first response.
Another thing: there is no indication of a problem on the service side. It works just fine but sees a surprising (to me) couple of retries of the request from the client.
I have Googled this issue and come up empty. The standard for keep-alive is over 100 seconds AFAIK so I am still puzzled why the client is acting the way it is--and the behavior is within the System.Net layer so I cannot step through it.
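For what it's worth, the client-side knobs being discussed can be set explicitly, which makes it easier to rule them in or out when reproducing; a hedged sketch (the URL is a placeholder):

using System;
using System.Net;

// Hedged sketch: make the client-side timeouts and keep-alive explicit so they
// can be ruled out while reproducing the retries.
public static class SlowCallTest
{
    public static void Call()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/service/slow");
        request.Method = "GET";
        request.KeepAlive = true;            // the default, stated explicitly here
        request.Timeout = 30000;             // ms to wait for the response to start (default 100000)
        request.ReadWriteTimeout = 300000;   // ms allowed while reading the response body

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine((int)response.StatusCode);
        }
    }
}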
Any help here would be very much appreciated!
== Tevya ==
Change your service so it sends a timeout indication to the client before closing the connection.
Sounds like a piece of hardware (router, firewall, load balancer?) is sending a RST because of some configuration choice.
I found the answer and it was almost totally unrelated to the timeout. Rather, the problem is related to the use of custom serialization of response data. The response data structures have some dynamically appearing types and thus cannot be serialized by the normal ASP.NET mechanisms.
To solve that problem, we create an XmlObjectSerializer object and then pass it plus the objects to serialize to System.ServiceModel.Channels.Message.CreateMessage(). Here two things went wrong:
An unexpected type was added to the message [how and why I will not go into here] which caused the serialization to fail and
It turns out that the CreateMessage() method does not immediately serialize the contents but defers the serialization until sometime later (probably just-in-time).
The two facts together caused an uncatchable serialization failure on the server side because the actual attempt to serialize the objects did not occur until the user-written service code had returned control to the WCF infrastructure.
Now why did it look like a timeout? Well, it turns out that not all the objects being returned had the unexpected object type in them. In particular, the first N objects did not. So, when the time limit was lengthened beyond 5 seconds, the N+1th object did reference the unknown type and was included in the download which broke the serialization. Later testing confirmed that the failure could happen even when only one object was being passed back.
The solution was to pre-process the objects so that no unexpected types are referenced. Then all goes well.
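Not the exact fix described above (the actual fix was to pre-process the objects), but since CreateMessage() defers serialization, one way to turn such failures into catchable exceptions in the service code is to run the serializer eagerly over the payload first; a sketch assuming the custom serializer derives from XmlObjectSerializer:

using System.IO;
using System.Runtime.Serialization;

// Hedged sketch: force serialization eagerly so a bad payload throws here, in
// user code, instead of later inside the WCF send pipeline where it surfaces
// as a dropped connection. Assumes an XmlObjectSerializer-derived serializer.
public static class SerializationGuard
{
    public static void AssertSerializable(XmlObjectSerializer serializer, object payload)
    {
        using (var scratch = new MemoryStream())
        {
            serializer.WriteObject(scratch, payload);   // throws if an unexpected type is referenced
        }
    }
}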
== Tevya ==

How to set the ASP.NET SessionState read-write LOCK time-out?

I have a WCF web service that uses ASP.NET session state. WCF sets a read-write lock on the session for every request. What this means is that my web service can only process one request at a time per user, which hurts perceived performance of our AJAX application.
So I'm trying to find a way to get around this limitation.
Using a read-only lock (which then allows concurrent access to the session) isn't supported by WCF.
I haven't found a way to release the read-write lock manually during processing of a request.
So now I'm thinking that there may be some way to set the read-write lock timeout to some very short interval, so that waiting requests don't have to wait very long. See the relevant part of the quote below.
From MSDN:
http://msdn.microsoft.com/en-us/library/ms178581.aspx
"If two concurrent requests are made for the same session, the first request gets exclusive access to the session information. The second request executes only after the first request is finished. (The second session can also get access if the exclusive lock on the information is freed because the first request exceeds the lock time-out.) If the EnableSessionState value in the # Page directive is set to ReadOnly, a request for the read-only session information does not result in an exclusive lock on the session data."
...But I haven't found any information on how long this lock time-out is, or how to change it.
I can tell you that the httpRuntime executionTimeout setting controls this lock time; however, the documentation for this field states that the thread should be terminated at that point. I know from experience that the thread is not terminated and will eventually return data, but a new thread is spawned to handle requests in the queue.
By default this value is 110 seconds as of ASP.NET 2.0; before that it was 90 seconds. I would be concerned about this behavior changing in the future and being "fixed".
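For reference, that setting lives under httpRuntime in web.config; a minimal snippet showing the default value (note that shortening it affects the execution timeout of every request, not just the session-lock wait, so treat this as a sketch rather than a recommendation):

<configuration>
  <system.web>
    <!-- executionTimeout is in seconds; 110 is the default from ASP.NET 2.0 onward -->
    <httpRuntime executionTimeout="110" />
  </system.web>
</configuration>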
Has anyone tried using SQLSessionStateProvider and modifying the SPs? I did this in dev and seems to get around the locking issues but not sure if it has any side-effects. Basically what I did was change the 3 SPs which obtain Exclusive locks so that the Lock column is always set to 0.

Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction

Just curious if anyone else has got this particular error and know how to solve it?
The scenario is as follow...
We have an ASP.NET web application using Enterprise Library running on Windows Server 2008 IIS farm connecting to a SQL Server 2008 cluster back end.
MSDTC is turned on. DB connections are pooled.
My suspicion is that somewhere along the line there is a failed MSDTC transaction, the connection got returned to the pool, and the next query on a different page is picking up the misbehaving connection and getting this particular error. The funny thing is we got this error on a query that has no need whatsoever for a distributed transaction (committing to two databases, etc.). We were only doing a select query (no transaction) when we got the error.
We did SQL Profiling and the query got ran on the SQL Server, but never came back (since the MSDTC transaction was already aborted in the connection).
Some other related errors to accompany this are:
New request is not allowed to start because it should come with valid transaction descriptor.
Internal .Net Framework Data Provider error 60.
MSDTC has a default 90-second timeout; if one query's execution exceeds this time limit, you will encounter this error when the transaction tries to commit.
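If a long-running query inside the transaction is the trigger, one related knob (this controls the System.Transactions scope timeout, not MSDTC's own configuration) can be set explicitly; a hedged C# sketch, with whether raising it is appropriate depending on the workload:

using System;
using System.Transactions;

// Hedged sketch: give the transaction scope a longer timeout than the default
// so a slow query does not leave the scope aborted at commit time.
public static class TransactionHelper
{
    public static void DoWorkInTransaction(Action work)
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromMinutes(5)
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            work();
            scope.Complete();   // without this, the transaction rolls back on Dispose
        }
    }
}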
A bounty may help get the answer you seek, but you're probably going to get better answers if you give some code samples and give a better description of when the error occurs.
Does the error only intermittently occur? It sounds like it from your description.
Are you enclosing the code that you want to be done as a transaction in a using TransactionScope block, as Microsoft recommends? This should help avoid weird transaction behavior. Recall that a using block makes sure that the object is always disposed regardless of exceptions thrown. See here: http://msdn.microsoft.com/en-us/library/ms172152.aspx
If you're using TransactionScope there is an argument System.TransactionScopeOption.RequiresNew that tells the framework to always create a new transaction for this block of code:
Using ts As New Transactions.TransactionScope(Transactions.TransactionScopeOption.RequiresNew)
    ' Do Stuff
    ' Mark the scope complete; without this call the transaction rolls back on Dispose.
    ts.Complete()
End Using
Also, if you're suspicious that a connection is getting faulted and then put back into the connection pool, the likely solution is to enclose the code that may fault the connection in a Try-Catch block and Dispose the connection in the catch block.
Old question ... but ran into this issue past few days.
Could not find a good answer until now. Just wanted to share what I found out.
My scenario contains multiple sessions being opened by multiple session factories. I had to correctly roll back, wait, and make sure the other transactions were no longer active. It seems that just rolling back one of them will roll back everything.
But after adding the Thread.Sleep() between rollbacks, it doesn't do the other and continues fine with the rollback. Subsequent hits that trigger the method don't result in the "New request is not allowed to start because it should come with valid transaction descriptor." error.
https://gist.github.com/josephvano/5766488
I have seen this before and the cause was exactly what you thought. As Rice suggested, make sure that you are correctly disposing of the db related objects to avoid this problem.
