Kill turtle which causes runtime error

I'm curious as to whether there is a way of reporting which turtle causes a runtime error?
I have a model with many agents that will run fine for hours, but occasionally a runtime error occurs. I have tried a few different things to fix it, but an error always seems to creep back in, and with deadlines looming I can't spare the time to keep chasing it.
As the occurrence is so rare, the easiest workaround is to type ask turtle X [die] in the command center, after which I click GO and the problem is 'fixed'.
However, I was wondering if anyone knows of a way to kill the turtle producing the error automatically every time a runtime error occurs, to save me entering this manually.
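One way to automate this is NetLogo's carefully command, which runs a block of commands and, if a runtime error occurs, runs a recovery block instead (where error-message reports the message). A minimal sketch - do-turtle-stuff is a hypothetical stand-in for whatever behavior in your model sometimes errors:

to go
  ask turtles [
    carefully [
      do-turtle-stuff   ;; hypothetical: the code that sometimes errors
    ] [
      show (word "runtime error, dying: " error-message)
      die               ;; remove the offending turtle and carry on
    ]
  ]
  tick
end

Note that carefully catches the error before it stops the run, so GO keeps going; the trade-off is that you have to wrap the code paths where the errors actually occur.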

Related

In the debugger, can we make `q` choose a given restart?

I am trying out a program. On error, I get into the debugger with several custom restarts. The first one retries the operation (and thus does nothing); the fourth one is the one that quits correctly. Pressing q leads to a memory error.
How can the developer make sure, programmatically, that when the user presses q, the right restart is called, and not the one bound to q that leads to a memory error? Is that possible?
That may be too specific to the library I'm trying, or totally the wrong approach.
I only found that q is sldb-quit and that it "invoke[s] a restart which restores to a known program state". q doesn't call the first restart. What does it do? Is it possible to make it call a given restart?
thanks
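If you control the calling code (as opposed to SLIME's key bindings), the portable Common Lisp way is to pick the restart yourself with handler-bind, find-restart and invoke-restart, instead of relying on what q does. A sketch, where my-quit and run-the-operation are hypothetical stand-ins for the library's actual restart name and entry point:

(handler-bind ((error
                 (lambda (c)
                   ;; assumption: the "quit correctly" restart is named MY-QUIT;
                   ;; check (compute-restarts c) in the debugger for the real name
                   (let ((restart (find-restart 'my-quit c)))
                     (when restart
                       (invoke-restart restart))))))
  (run-the-operation))

With such a handler installed, the debugger is never entered for matching conditions, so what q is bound to no longer matters.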

MiniProfiler not logging all steps - only with active breakpoints

While using MiniProfiler to hunt down a performance issue, I ran into a situation where MiniProfiler would log only a few of the calls to MiniProfiler.Step().
This is the code:
The breakpoints are set to only count the number of hits (313 per run); they do not interrupt execution. Notice that they are deactivated in the screenshot above. After running the application, I get a very incomplete log from MiniProfiler which, from run to run, has a differing number of entries, usually 2 to 5.
However, when I activate the breakpoints, the log is complete. Remember that the breakpoints still do not interrupt execution.
Is this a bug in MiniProfiler?
Click "show trivial" - I suspect the ones you're not seeing are "trivial". When you hit the breakpoints, you make them appear not-so-trivial by increasing their execution time.
EDIT:
... as I was poking around their code I stumbled on this
public bool IsTrivial...
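From memory, the gist of that property is roughly the following (paraphrased, not a verbatim copy of the MiniProfiler source):

public bool IsTrivial
{
    // a timing counts as trivial when it completed faster than the
    // configurable threshold; trivial timings are hidden by default
    get { return DurationMilliseconds <= MiniProfiler.Settings.TrivialDurationThresholdMilliseconds; }
}

So if this is what's happening, either clicking "show trivial" or lowering MiniProfiler.Settings.TrivialDurationThresholdMilliseconds should make the missing steps appear.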

sensu: "previous check command execution in progress"

My client-side sensu metric is reporting a WARN and the data is not getting to my OpenTSDB.
It seems to be stuck, but I don't understand what the message is telling me. Can someone translate?
The command is a ruby script.
In /var/log/sensu/sensu-client.log:
{"timestamp":"2014-09-11T16:06:51.928219-0400",
"level":"warn",
"message":"previous check command execution in progress",
"check":{"handler":"metric_store","type":"metric",
"standalone":true,"command":"...",
"output_type":"json","auto_tag_host":"yes",
"interval":60,"description":"description here",
"subscribers"["system"],
"name":"foo_metric","issued":1410466011,"executed":1410465882
}
}
My questions:
What does this message mean?
What causes this?
Does it really mean we are waiting for the same check to run? If so, how do we clear it?
This error means that sensu is (or thinks it is) currently executing this check:
https://github.com/sensu/sensu/blob/4c36d2684f2e89a9ce811ca53de10cc2eb98f82b/lib/sensu/client.rb#L115
This can be caused by checks stacking up because they take longer than their interval to run (60 seconds in this case).
You can try setting the "timeout" option in the check definition:
https://github.com/sensu/sensu/blob/4c36d2684f2e89a9ce811ca53de10cc2eb98f82b/lib/sensu/client.rb#L101
to make sensu time out on that check after a while. You could also add internal logic to your check so that it cannot hang.
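For example, a standalone check definition with a timeout shorter than its interval might look like the following (the script path and names here are made up for illustration; "timeout" is in seconds):

{
  "checks": {
    "foo_metric": {
      "type": "metric",
      "standalone": true,
      "command": "/etc/sensu/plugins/foo_metric.rb",
      "interval": 60,
      "timeout": 30,
      "handlers": ["metric_store"]
    }
  }
}

With something like this in place, a run that hangs past 30 seconds should be killed instead of blocking the next interval.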
In my case, I had accidentally configured two sensu-client instances to have the same name. I think that caused one of them to always think its checks were already running when in reality they were not. Giving them unique names solved the problem for me.

"random" kernel crash after running for minutes.... HEP!? -- [same question posted on Khronos]

I have a thoroughly complex kernel processing audio input data. It will run, 60 times a second, for a couple of minutes and then hang. That's on the GPU; on the CPU it will run for hours. The input data are constantly changing, but each variable always stays within a prescribed range. I have inserted test code before uploading the inputs to the kernel each frame; in this test code, I can force these inputs to be well below their valid input range, but it still will eventually crash. (Say the valid range for a particular input is 0->400; I can force it to 0->1 and it will STILL eventually crash. I can force it to be below 0.1 and it will still ultimately bite the dust.) However, if I force the input variables to zero, the GPU will happily dance for hours. Of course, that input-free dance is not particularly interesting.
I'm at a loss so far, though I have clues. I can make it crash much faster than 2 minutes if an input variable is high in its approved range. I can make it crash in less than 10 seconds under the right circumstances. BUT, I can't seem to back off of those certain circumstances such that they go away. As said above, I can force the input vars into ridiculously small portions of their valid range, and the kernel (let's call him Harlan Sanders) will eventually go belly-up. BUT, if they're forced to actual zero, no problems puppy, we can run all day long.
To repeat, I'm a bit at a loss - although I have things that look like clues, I have not yet figured out what they are hinting at, though I've been trying for a few days. Frankly, I do not expect to find a real solution by asking here; whenever I stumble over a problem in OpenCL it seems that my fate is to be the first to articulate that particular problem. I guess this is part of the fun of being in on a technology during its infancy! BUT, I want to do some serious, sustainable work with this "baby" (or, maybe, "toddler").
Op details: MacBook Pro 2010, OS 10.6.8, nv 330M GPU, xcode 3.2.5, shorts, teeshirt.
bonus P.S. for those who've read this far, including a related question:
My laptop, soldier that it has proved to be, is not powerful enough for the next stage. I must sell some stocks/bonds and purchase a Mac Pro. I'm looking at the ATI 5870. So, PERHAPS my problem will simply go away when I compile the .cl for the ATI??? Maybe I have run into a bug in the nV implementation. Maybe my kernel is so complex that I'm running into undetected resource limits (it's 1300 lines of code). So, SINCE I run fine on the CPU, perhaps I'll have no bugs, or different bugs, on the ATI card???
Any thoughts?
Thanks, guys & dolls --
Dave
Use "cl_" data types on the CPU side, because maybe you are not coping data the right way, or it is not being understood by the GPU. This could lead to GPU hangs on invalid pointers while handing the data.
You should also try -Werror, and read the error output. You can be doing smt wrong.
Without any code, we can only guess. But I haven't found any bug in the actual OpenCL NV or ATI implementations.
Make sure you release all resources. Events returned by Enqueue functions must be released. This error sometimes occurs after accessing buffers out of range.
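To make those points concrete, here is a sketch in C (assuming a program, device, queue and kernel already set up elsewhere; the struct and names are illustrative, not from the asker's code):

#include <CL/cl.h>
#include <stdio.h>

/* Host-side data should use the cl_ typedefs so sizes and alignment
   match what the kernel sees on the device. */
typedef struct { cl_float gain; cl_int frames; } params_t;

void run_once(cl_program program, cl_device_id dev,
              cl_command_queue queue, cl_kernel kernel, size_t gws)
{
    /* Build with -Werror so warnings in the .cl source become errors. */
    cl_int err = clBuildProgram(program, 1, &dev, "-Werror", NULL, NULL);
    if (err != CL_SUCCESS) {
        char log[8192];
        clGetProgramBuildInfo(program, dev, CL_PROGRAM_BUILD_LOG,
                              sizeof log, log, NULL);
        fprintf(stderr, "build failed:\n%s\n", log);
        return;
    }

    /* Release every event an Enqueue call hands back, or they pile up
       frame after frame until the driver runs out of resources. */
    cl_event ev;
    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &gws, NULL,
                                 0, NULL, &ev);
    if (err == CL_SUCCESS) {
        clWaitForEvents(1, &ev);
        clReleaseEvent(ev);
    }
}

In a 60-frames-per-second loop like the one described, one leaked event per frame adds up to thousands of leaked objects per minute, which could match the "runs for a couple of minutes and then hangs" pattern.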

Flex error- White exclamation point in gray circle: What does it mean?

We have a Flex app that will typically run for long periods of time (could be days or weeks). When I came in this morning, I noticed that the app had stopped running and a white exclamation point in a gray circle was in the center of the app. I found a post about it on the Adobe forums, but no one seems to know exactly what the symbol means, so I thought I'd reach out to the SO community.
Adobe forum post: http://forums.adobe.com/message/3087523
Screen shot of the symbol:
Any ideas?
Here's an answer in the post you linked to from an Adobe employee:
The error you are seeing is the new out of memory notification. It is basically shielding the user when memory usage gets near the system resource cap. The best course of action here (if you own the content) is to check your application for high memory usage and correct the errors. If you don't own the content, it would probably be best to contact the owners and make them aware of the issue you are seeing.
He also says this in a later response:
Developers can use the System.totalMemory property in AS3 to monitor the memory usage that the Flash Player is taking up. This will allow you to see how much memory is used and where the leaks are, and allow you to optimize your content based on this property.
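For reference, a minimal way to watch that property from inside the app (System.totalMemory is in bytes):

import flash.system.System;
import flash.utils.Timer;
import flash.events.TimerEvent;

// Log the player's memory usage once per second; a steady climb
// over hours or days points at a leak.
var memTimer:Timer = new Timer(1000);
memTimer.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
    trace("totalMemory (bytes): " + System.totalMemory);
});
memTimer.start();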
I work for a digital signage company and we have also come across this error; however, it is not only memory-leak related, because it can also be triggered by the Vector code described on that page. We have also noted that it occurs without any kind of memory spike whatsoever, and sometimes appears randomly. However, when we replicated the bug with the Vector error, the player reported it as an out of memory error - which clearly was not the case.
In our internal tests we noted that this bug only occurs with Flash Player 10.1 and up; Flash Player 10 does not seem to have this issue. Further, there seems to be a weak connection between the error occurring and the use of video. I know this might not be too much help, but just thought you should know it is not only a memory-leak related issue. I have submitted this bug to Adobe, and hopefully they will resolve it soon.
This can occur when using a Vector.<int> which is initialized using an array of a single negative int, with code such as:
Vector.<int>([-2])
The -2 gets passed to the Vector class as its initial length, like Array(5) would be. This causes an error somehow (and is not checked and raised as an exception).
I have also seen the issue when passing negative values to the length of a Vector.
A possible explanation would be that the Vector tries to allocate the length it's been given immediately.
Since the negative value is being forced into a uint, it automatically translates to a very large positive value. This causes the Vector to attempt to allocate far too much memory (about 4GB), hence the immediate crash.
If you pass a negative value to the length of an Array, nothing happens, because apparently it does not attempt to allocate that length - but you can inspect the value and see that it is a very large positive number.
This explanation is pure conjecture; I did not hear it anywhere, but it is consistent with AS3 semantics and the meaning of the exclamation mark.
That said, I have searched our entire code base for uses of the length setter and could not find it used with a Vector. Still, we are experiencing crashes of this sort very often - some of them are caused by actual high memory consumption (probably leaks), but other times it just happens when memory usage is relatively low.
I cannot explain it. Perhaps there are other operations that can potentially lead to allocation of large amounts of memory other than the length setter?
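For what it's worth, the conjecture above is easy to try directly (don't run this in a movie you care about): Vector's length property is typed uint, so a negative assignment wraps around into an enormous allocation request:

var v:Vector.<int> = new Vector.<int>();
v.length = -2;   // -2 coerced to uint is 4294967294, so the player
                 // immediately tries to grow the vector to ~4 billion slots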
