Having spent more time chasing my tail on this than is healthy, I'm just going to have to ask:
In R, how can one figure out what error handlers are in force at a given point in the code? Somebody somewhere is gracefully intercepting my errors, and I want them to stop!
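The situation being described - an enclosing handler silently swallowing errors - can be sketched in Python for illustration (all function names here are invented; in R the interception would typically be a tryCatch or withCallingHandlers somewhere up the call stack):

```python
import traceback

def noisy():
    raise RuntimeError("something went wrong")

def silent_wrapper(fn):
    # An enclosing handler like this "gracefully" intercepts the error,
    # so the caller never learns that anything failed.
    try:
        return fn()
    except Exception:
        return None  # error swallowed here

def debugging_wrapper(fn):
    # Temporary diagnostic: print the traceback, then re-raise so the
    # error surfaces past whoever was catching it.
    try:
        return fn()
    except Exception:
        traceback.print_exc()
        raise

print(silent_wrapper(noisy))  # None: the error vanished silently
```

The debugging wrapper is the generic trick: re-raise with a printed traceback so you can see which frame was doing the intercepting.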
I have a difficult time classifying an “infinite loop error”. It does not necessarily cause a system crash, but it sometimes does.
Is there a standard answer in the IT community?
I am writing a PinTool that manipulates certain register/memory values. One challenge I am facing after the manipulation is deadloops.
In particular, because certain register values are manipulated frequently, it is quite common to create a deadloop in the execution trace. I am thinking of detecting such cases and terminating the execution.
So here is my question: what is a good practice for detecting a deadloop in a PinTool? I can come up with some naive solutions, say, record the executed instructions, and if a certain instruction has been executed a large number of times, just terminate the execution.
Could anyone help me with this issue? Thank you.
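For what it's worth, the naive counting heuristic can be sketched outside of Pin. This is Python, with every name and threshold invented for illustration; in a real PinTool the counting would live in a per-instruction analysis routine written in C++:

```python
from collections import Counter

class DeadloopGuard:
    """Flag a possible deadloop when any single instruction address is
    executed more than `threshold` times (the naive heuristic above)."""

    def __init__(self, threshold=1_000_000):
        self.threshold = threshold
        self.counts = Counter()

    def on_instruction(self, address):
        self.counts[address] += 1
        if self.counts[address] > self.threshold:
            raise RuntimeError(f"possible deadloop at {address:#x}")

# Simulate a trace stuck on one address:
guard = DeadloopGuard(threshold=5)
try:
    for _ in range(10):
        guard.on_instruction(0x401000)
except RuntimeError as e:
    print(e)  # fires once the threshold is crossed
```

The obvious caveat (per the answer below) is that any fixed threshold produces false positives on legitimately hot loops, so the threshold is a tuning knob, not a correctness guarantee.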
Detecting whether a program will terminate isn't a computable problem in general, so no, I don't think it's a good idea.
I'm trying to optimise an HTML parser for a Roku application I'm helping to develop. The parser currently takes too long to parse the data (8 seconds); it works by recursively traversing the children of each tag encountered within a for-each loop.
parser(nodes):
    for each node in nodes
        if node.isTag
            parser(node.nodes)
        else if node.isBlock
            text.push(node)
Something akin to that, although much more convoluted! I'm assuming it's slow because it's recursive and there is no tail-recursion optimisation on the platform, etc.
I'm not too sure how to implement a stack to remove the recursion from this - I've tried using GoTo, but that didn't seem to work :/
Can anyone provide some insight into whether the problem might be caused by the recursion?
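I can't test BrightScript here, but the explicit-stack rewrite can be sketched in Python, using dicts with the isTag/isBlock/nodes fields from the pseudocode above (assumption: each node is one or the other):

```python
def parse_iterative(nodes):
    """Explicit-stack rewrite of the recursive parser sketch:
    same visit order for block nodes, but no call-stack growth."""
    text = []
    stack = list(reversed(nodes))  # reversed so we pop in original order
    while stack:
        node = stack.pop()
        if node.get("isTag"):
            stack.extend(reversed(node.get("nodes", [])))
        elif node.get("isBlock"):
            text.append(node)
    return text
```

That said, on interpreted platforms the per-node function-call overhead is usually a much smaller cost than the per-node work itself, so it's worth profiling before assuming the recursion is the bottleneck.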
What are you trying to achieve?
Some boxes are slow; there is only a certain amount of optimisation you can apply. If you need to parse the whole document, it usually takes what it takes. We had a similar problem: the app froze for a few seconds, looking like it had crashed, but we got around it by displaying an asynchronous spinning icon.
I know it's by design that in the finally block I should do resource cleanup - that's why the finally block is always executed no matter what the exception handling code does.
But "WHY" will it execute? That is my question. This was asked of my friend in an interview, and even I got confused after discussing it with him. Please clarify; thanks in advance.
"WHY" here could be summarised as "because that is what the specification says; that is why it was designed, specified, implemented, tested, and supported: because they wanted something that would always execute, no matter what the exception handling code is". It is a bit like asking "WHY does execution flow to the else block (if there is one) when the condition in an if test fails?"
The uses of finally include:
resource cleanup (Dispose() being an important one, but not the only one)
logging / tracing / profiling the fact that we finished (whether successful or not)
making the state consistent again (for example, resetting an isRunning flag)
Anecdotally, I make much more use of finally than I do catch. It is pretty common that I want something to happen while leaving, but often with exceptions the best thing to do is to let them bubble upwards. The only thing I usually need to be sure to do during an exception is to clean up any mess I made - which I need to do either way - so I might as well do that using a combination of finally and using (which is really just a wrapper around finally anyway).
It's almost always around resource cleanup - or sometimes logical cleanup of getting back to a reasonable state.
If I open a file handle or a database connection (or whatever) then when I leave that piece of code I want the handle to be closed regardless of how I leave it - whether it's "normally" or via an exception.
finally blocks are simply there to give that "execute this no matter what [1]" behaviour, which can often be useful.
[1] Well, within reason. Not if the process dies abruptly, for example, or if the power cable is kicked out.
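The guarantee is the same across mainstream languages. Here's a minimal Python sketch (the function and its log list are invented for illustration) showing that the finally block runs on the normal path, on the exception path, and even after a return has already been reached:

```python
def with_cleanup(fail):
    log = []
    try:
        log.append("work")
        if fail:
            raise ValueError("boom")
        return log          # finally still runs before the return completes
    except ValueError:
        log.append("caught")
        return log
    finally:
        log.append("cleanup")   # runs on every exit path

print(with_cleanup(False))  # ['work', 'cleanup']
print(with_cleanup(True))   # ['work', 'caught', 'cleanup']
```

Note that "cleanup" appears in both outputs: the finally clause fires after the return statement has selected its value but before control actually leaves the function.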
Working on an application that makes very heavy use of the local SQLite db. Initially it was set up for synchronous database communication, but with such heavy usage we were seeing the application "freeze" for brief periods fairly often.
After refactoring to asynchronous communication we are seeing a different issue: the application seems to be far less reliable, and jobs seem to simply not complete. After much debugging and tweaking, the problem appears to be that the database events are not always being caught by their handlers. I'm seeing this specifically when beginning a transaction or closing the connection.
Here is an example:
con.addEventListener(SQLErrorEvent.ERROR, tran_ErrorHandler);
con.addEventListener(SQLEvent.BEGIN, con_beginHandler);
con.begin(SQLTransactionLockType.IMMEDIATE);
Most of the time this works just fine. But every now and then con_beginHandler isn't hit after con.begin is called. This makes it so we have an open transaction that never gets committed and can really hang up future requests. When investigating this same issue with the connection close handler, one of the solutions was to simply delay it. In that context it was OK to wait even several seconds.
setTimeout(function():void{ con.begin(SQLTransactionLockType.IMMEDIATE); }, 1000);
Changing to something like this does seem to make the transaction more reliable; however, it really stretches out the time it takes for the application to complete actions. This is a very db-heavy application, so even adding 200ms has a noticeable effect. But something as short as 200ms also doesn't seem to fully solve the issue; it has to be 500-1000ms or higher before I stop seeing the problem.
I've written a separate AIR application to try and stress test our code and the transactions, but am unable to reproduce this in that environment. I even have it try to do something that will "freeze" the application (long loops that do some math or other processing) to see if application strain is what makes them misfire, but everything seems reliable.
I'm at a loss for how to resolve this at this point. I even tried running con.begin off of a binding event, just to add more time. The only thing that seems to work is excessively long timers/timeouts, which I don't think is an acceptable solution.
Has anybody else run into this? Is there some trick to async that I'm missing?
I had a few more ideas to try after the refreshing weekend, none of which panned out; however, during these attempts and further investigation I finally found a pattern to the issue. Even though it doesn't happen consistently, when it does happen it is fairly consistent about where it happens.
There are one or two spots during the problematic processes that try to compact the DB after clearing data, in order to help keep the file sizes smaller. I think the issue is that compact wasn't worked into the async flow properly, so while we are trying to compact the db, we are also trying to start up the new transaction. If the compact occasionally takes a bit of time, we get a hang-up. I had assumed the async event would still dispatch once the transaction finally started, rather than never firing at all, but the observed behaviour does make some amount of sense.
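The fix - making compact part of the async flow so it can no longer race the next begin - translates to any async framework. A minimal Python asyncio analogue, with all names and delays invented as stand-ins for the AIR SQLConnection operations:

```python
import asyncio

async def compact_db():
    # Stand-in for SQLConnection.compact(); the sleep models the
    # occasional slow compaction that caused the hang-ups.
    await asyncio.sleep(0.01)

async def begin_transaction():
    # Stand-in for SQLConnection.begin() plus its completion event.
    await asyncio.sleep(0.01)
    return "transaction open"

async def clear_and_begin():
    # Sequenced flow: the new transaction only starts once compact
    # has finished, instead of racing the two operations with timers.
    await compact_db()
    return await begin_transaction()

print(asyncio.run(clear_and_begin()))
```

The point of the sketch is the ordering, not the syntax: replacing a fixed setTimeout guess with an explicit "compact finished, now begin" dependency removes the timing sensitivity entirely.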