Flex error - White exclamation point in gray circle: What does it mean? - apache-flex

We have a Flex app that will typically run for long periods of time (could be days or weeks). When I came in this morning I noticed that the app had stopped running and a white exclamation point in a gray circle was in the center of the app. I found a post about it on the Adobe forums, but no one seems to know exactly what the symbol means, so I thought I'd reach out to the SO community.
Adobe forum post: http://forums.adobe.com/message/3087523
Screenshot of the symbol: [image]
Any ideas?

Here's an answer in the post you linked to from an Adobe employee:
The error you are seeing is the new out-of-memory notification. It is basically shielding the user when memory usage gets near the system resource cap. The best course of action here (if you own the content) is to check your application for high memory usage and correct the errors. If you don't own the content, it would probably be best to contact the owners and make them aware of the issue you are seeing.
He also says this in a later response:
Developers can use the System.totalMemory property in AS3 to monitor the memory usage that the Flash Player is taking up. This will allow you to see how much memory is used and where leaks are, and allow you to optimize your content based on this property.

I work for a digital signage company and we have also come across this error. However, it is not only memory-leak related, because it can be triggered by the Vector code provided on this page. We have also noted that it occurs without any kind of memory spike whatsoever, and sometimes appears randomly. However, we noticed that when we replicated the bug with the Vector error, it was reported as an out-of-memory error, which clearly was not the case.
In our internal tests we noted that this bug only occurs with Flash Player 10.1 and up; Flash Player 10 does not seem to have this issue. Further, there seems to be a weak connection between the error occurring and the use of video. I know this might not be too much help, but just thought you should know it is not only a memory-leak related issue. I have submitted this bug to Adobe, and hopefully they resolve it soon.

This can occur when a Vector.<int> is initialized from an array containing a single negative int, with code such as:
Vector.<int>([-2])
The -2 gets passed to the Vector class as its initial length, the way Array(5) would be. This causes an error somehow (and is not checked and raised as a proper exception).

I have also seen the issue occur when passing negative values to the length of a Vector.
A possible explanation is that the Vector tries to allocate the length it's been given immediately.
Since the negative value is forced into a uint, it automatically becomes a very large positive value. This causes the Vector to attempt to allocate too much memory (about 4GB), hence the immediate crash.
If you pass a negative value to the length of an Array, nothing happens, because the Array apparently does not allocate its length up front; but you can inspect the value and see that it is a very large positive number.
This explanation is pure conjecture; I did not hear it anywhere, but it is consistent with ActionScript semantics and the meaning of the exclamation mark.
That said, I have searched our entire code base for uses of the "length" setter and could not find it used with a Vector. Still, we are experiencing frequent crashes of this sort. Some are caused by genuinely high memory consumption (probably leaks), but other times it just happens when memory usage is relatively low.
I cannot explain it. Perhaps there are other operations, besides the "length" setter, that can potentially lead to allocation of large amounts of memory?
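The coercion described above is easy to demonstrate outside of Flash. Here is a minimal C sketch of the same mechanism (this models the conjecture, not Flash Player's actual internals):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        int requested = -2;
        /* Forcing the negative length into an unsigned type turns -2
           into 4294967294, i.e. a request for roughly 4 GB of ints. */
        uint32_t length = (uint32_t)requested;
        printf("allocating %lu elements\n", (unsigned long)length);
        int *elems = malloc((size_t)length * sizeof(int));
        if (elems == NULL) {
            /* On most systems an allocation this large fails outright. */
            puts("allocation failed: the \"out of memory\" path");
            return 1;
        }
        free(elems);
        return 0;
    }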

Related

How can the processor discern a far return from a near return?

Reading Intel's big manual, I see that if you want to return from a far call, that is, a call to a procedure in another code segment, you simply issue a return instruction (possibly with an immediate argument that moves the stack pointer up n bytes after the pointer is popped).
This, apparently, if I'm interpreting things correctly, is enough for the hardware to pop both the segment selector and offset into the correct registers.
But, how does the system know that the return should be a far return and that both an offset AND a selector need to be popped?
If the hardware just pops the offset pointer and not the selector after it, then you'll be pointing to the right offset but wrong segment.
There is nothing special about the far return command compared to the near return version.
They both look identical as far as I can tell.
I assume then that the processor, perhaps at the micro-architecture level, keeps track of which calls are far and which are near, so that when they're returned from, the system knows how many bytes to pop and where to pop them (pointer registers and segment selector registers).
Is my assumption correct?
What do you guys know about this mechanism?
The processor doesn't track whether or not a call should be far or near; the compiler decides how to encode the function call and return using either far or near opcodes.
As it is, FAR calls have no use on modern processors because you don't need to change any segment register values; that's the point of a flat memory model. Segment registers still exist, but the OS sets them up with base=0 and limit=0xffffffff so just a plain 32-bit pointer can access all memory. Everything is NEAR, if you need to put a name on it.
Normally you just don't even think about segmentation, so you don't actually call it NEAR either. But the manual still describes the call/ret opcodes we use for normal code as the NEAR versions.
FAR and NEAR were used on old x86 processors, which used a segmented memory model. Programs at that time needed to choose which memory model they wished to use, ranging from "tiny" to "large". If your program was small enough to fit in a single segment, then it could be compiled using NEAR calls and returns exclusively. If it was "large", the opposite was true. For anything in between, you could choose whether individual functions needed to be callable/returnable from code in another segment.
Most modern programs (besides bootloaders and the like) run on a different construct: they expect a flat memory model. Behind the scenes the OS will swap out memory as needed (with paging not segmentation), but as far as the program is concerned, it has its virtual address space all to itself.
But, to answer your question, the difference in the call/return is the opcode used; the processor obeys the command given to it. If you make a mistake (say, give it a FAR return opcode when running in flat mode), it'll fail.
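To make this concrete, here are the actual x86 return encodings (per the Intel SDM); the only distinction the CPU ever sees is the opcode byte. A small C enumeration for illustration:

    /* The far/near distinction lives entirely in the opcode; the CPU
       keeps no hidden record of how the current frame was called. */
    enum x86_ret_opcode {
        RET_NEAR       = 0xC3, /* ret        : pop EIP only                     */
        RET_NEAR_IMM16 = 0xC2, /* ret imm16  : pop EIP, add imm16 to ESP        */
        RET_FAR        = 0xCB, /* retf       : pop EIP, then pop CS             */
        RET_FAR_IMM16  = 0xCA, /* retf imm16 : pop EIP and CS, add imm16 to ESP */
    };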

Disassemble 68xx code without entry point vector

I am trying to disassemble the code from an old radio containing a 68xx (68HC12-like) microcontroller. The problem is, I don't have access to the interrupt vectors of the micro at the top of the ROM, so I don't know where to start looking. I only have the code below the top. Is there any suggestion of where or how I can find meaningful routines in the code data?
You can't really disassemble reliably without knowing where the reset vector points. What you can do, however, is try to narrow down the possible reset addresses by eliminating all those other addresses that cannot possibly be a starting point.
So, given that any address in the memory map that contains a valid opcode is a potential reset point, you need to either eliminate it, or keep it for further analysis.
For the 68HC11 case, you could try to narrow down the entry point by looking for LDS instructions with a legitimate operand value (i.e., pointing at or near the top of available RAM; if there are multiple RAM banks, then at any of them).
It may help a bit if you know the device's full memory map, i.e., if external memory is used, its mapping and possible mapped peripherals (e.g., LCD). Do you also know CONFIG register contents?
The LDS instruction is usually either the very first instruction, or close thereafter (so look back a few instructions when you feel you have finally singled out your reset address). The problem here is some data may, by chance, appear as LDS instructions so you could end up with multiple potentially valid entry points. Only one of them is valid, of course.
You can eliminate further by disassembling a few instructions starting from each of these LDS instructions until you either hit an illegal opcode (i.e. obviously not a valid code sequence but an accidental data arrangement that looks like opcodes), or you see a series of instructions that are commonly used in 68HC11 initialization. These involve (usually) initialization of any one or more of the registers BPROT, OPTION, SCI, INIT ($103D in most parts, but for some $3D), etc.
You could write a relatively small script (e.g., in Lua) to do the basic scanning of the memory map and produce a (hopefully small) set of potential reset points to be examined further with a true disassembler for hints like the ones I mentioned.
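As a sketch of what that scanner might look like in C (rather than Lua), here is one approach. The assumptions are hedged: 0x8E is the 68HC11 LDS-immediate opcode, the operand is big-endian, and RAM_TOP is a placeholder you would set from your part's actual memory map:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    #define RAM_TOP 0x01FF /* assumption: top of on-chip RAM for your part */

    /* Report every LDS #imm16 whose operand points at (or just below)
       the top of RAM -- each hit is a candidate reset entry point. */
    void find_candidates(const uint8_t *rom, size_t len, uint16_t base) {
        for (size_t i = 0; i + 2 < len; i++) {
            if (rom[i] == 0x8E) { /* LDS immediate */
                uint16_t operand = (uint16_t)((rom[i + 1] << 8) | rom[i + 2]);
                if (operand >= RAM_TOP - 1 && operand <= RAM_TOP)
                    printf("candidate at $%04X: LDS #$%04X\n",
                           (unsigned)(base + i), operand);
            }
        }
    }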
Now, once you have the reset vector figured out, the job becomes somewhat easier, but you still need to figure out where any interrupt handlers are located. For this your hint is an RTI instruction, and whatever preceding code normally acknowledges the specific interrupt being handled.
Hope this helps.

What happens when I divide by zero?

Now I'm not asking about the mathematical answer to dividing by zero; I want to know what my computer does when it tries to divide by zero.
I can see there being different answers depending on what level we look at:
Looking at a high level, I expect the language specification may just say "hey, you can't do that, throw an error".
Looking at an assembly level, will the CPU try to call the divide instruction when we try to divide by 0?
If it does, that'll take us to the machine code level. What happens now?
Now if that doesn't happen and we force it to happen, what would the result be?
I think what you're asking is what's going to happen if we perform the binary division algorithm with 0 in the denominator.
The algorithm will go into an infinite loop, and the quotient will grow larger and larger until it exhausts all available memory.
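For illustration, the naive repeated-subtraction version of division looks like this in C. With d == 0 the loop condition never becomes false; with a fixed-width quotient it simply spins forever and wraps, while the "exhausts all available memory" outcome applies to an arbitrary-precision quotient:

    /* Naive division by repeated subtraction. */
    unsigned divide(unsigned n, unsigned d) {
        unsigned q = 0;
        while (n >= d) { /* with d == 0 this is always true */
            n -= d;      /* subtracting zero makes no progress */
            q++;         /* quotient grows without bound */
        }
        return q;
    }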
Looking at an assembly level, will the CPU try to call the divide instruction when we try to divide by 0?
Yep, the instruction gets fetched and decoded. What happens then? The divisor is found to be zero, and the processor stops what it's doing. It throws some kind of exception, the pipeline (if there is one) is flushed, and control most likely jumps to some predefined error-handling code. Often the OS controls a machine-level jump table called the interrupt vector table (though the divide fault may go through a table separate from the interrupt vector table).
There are many architectures, however, and things like error handling vary. Intel x86 follows the above procedure, at least.
If it does that'll take us to the machine code level. What happens now?
I have no idea what that means. From the CPU's perspective, it is all machine code level.
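You can watch the machine-level path from C on x86/Linux: the compiler emits a real div instruction, the CPU raises the #DE fault, and the kernel delivers it to the process as SIGFPE. A minimal sketch:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void on_fpe(int sig) {
        (void)sig;
        puts("caught SIGFPE: hardware divide error");
        exit(1); /* returning would just re-execute the faulting div */
    }

    int main(void) {
        signal(SIGFPE, on_fpe);
        volatile int zero = 0;    /* volatile stops constant folding */
        printf("%d\n", 1 / zero); /* executes an actual div instruction */
        return 0;
    }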

"random" kernel crash after running for minutes.... HEP!? -- [same question posted on Khronos]

I have a thoroughly complex kernel processing audio input data. It will run for a couple of minutes, 60 times a second, and then hang. That's on the GPU; on the CPU it will run for hours. The input data are constantly changing, but each variable is always within prescribed ranges. I have inserted test code before uploading the inputs to the kernel each frame; in this test code, I can force these inputs to be well below their valid input range, but it still will eventually crash. (Say the valid range for a particular input is 0->400; I can force it to 0->1 and it will STILL eventually crash. I can force it to be below 0.1 and it will still ultimately bite the dust.) However, if I force the input variables to zero, the GPU will happily dance for hours. Of course, that input-free dance is not so particularly interesting.
I'm at a loss so far, though I have clues. I can make it crash much faster than 2 minutes if an input variable is high in its approved range. I can make it crash in less than 10 seconds under the right circumstances. BUT, I can't seem to _back_off_of_ those certain circumstances such that they go away. As said above, I can force the input vars into ridiculously small portions of their valid range, and the kernel (let's call him Harlan Sanders) will eventually go belly-up. BUT, if they're forced to actual zero, no problems puppy, we can run all day long.
To repeat, I'm a bit at a loss - although I have things that look like clues, I have not yet figured out what they are hinting at, though I've been trying for a few days. Frankly, I do not expect to find a real solution by asking here; whenever I stumble over a problem in opencl it seems that my fate is to be the first to articulate that particular problem. I guess this is part of the fun of being in on a technology during its infancy!!!!!!!!!! BUT, I want to do some serious, sustainable work with this "baby" (or, maybe, "toddler").
Op details: MacBook Pro 2010, OS 10.6.8, nv 330M GPU, xcode 3.2.5, shorts, teeshirt.
bonus P.S. for those who've read this far, including a related question:
My laptop, soldier that it has proved to be, is not powerful enough for the next stage. I must sell some stocks/bonds and purchase a Mac Pro. I'm looking at the ATI 5870. So, PERHAPS my problem will simply go away when I compile the .cl for the ATI??? Maybe I have run into a bug in the nV implementation. Maybe my kernel is so complex that I'm running into undetected resource limits (it's 1300 lines of code). So, SINCE I run fine on the CPU, perhaps I'll have no bugs, or different bugs, on the ATI card???
Any thoughts?
Thanks, guys & dolls --
Dave
Use "cl_" data types on the CPU side, because maybe you are not coping data the right way, or it is not being understood by the GPU. This could lead to GPU hangs on invalid pointers while handing the data.
You should also try -Werror, and read the error output. You can be doing smt wrong.
Without any code, we can only guess. But I haven't found any bug in the actual OpenCL NV or ATI implementations.
Make sure you release all resources. Events returned by Enqueue functions must be released. This error sometimes occurs after accessing buffers out of range.
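Putting those suggestions together, a host-side C sketch might look like the following. It is illustrative only: run_frame is a hypothetical name, and the queue, kernel, and buffer are assumed to be created elsewhere:

    #include <CL/cl.h>

    /* Per-frame work: cl_ types on the host so host and device agree on
       sizes, and the enqueue event released so frames don't slowly leak. */
    void run_frame(cl_command_queue queue, cl_kernel kernel,
                   cl_mem input, size_t global_size) {
        cl_float gain = 0.5f; /* cl_float, not float */
        clSetKernelArg(kernel, 0, sizeof(cl_mem), &input);
        clSetKernelArg(kernel, 1, sizeof(cl_float), &gain);

        cl_event done;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                               &global_size, NULL, 0, NULL, &done);
        clWaitForEvents(1, &done);
        clReleaseEvent(done); /* easy to forget; leaks accumulate per frame */
    }

    /* When building the .cl source, promote warnings to errors:
       clBuildProgram(program, 1, &device, "-Werror", NULL, NULL); */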

Questions about maxJsonLength in ASP.NET

Recently, I ran into a problem with my application: the size of the JSON string returned from the server was exceeding the default maxJsonLength. I've done some research and implemented some fixes including a variation of paging. Everything looks great at the moment. However, I still have some questions unanswered.
First of all, the majority of the sources point to this article:
http://geekswithblogs.net/frankw/archive/2008/08/05/how-to-configure-maxjsonlength-in-asp.net-ajax-applications.aspx
1. Why 2,097,152 (2MB)? 2MB is way too much data to be loaded for a web page. (Unless the user is downloading something, but that's a different story.) Even 1MB is too much.
2. Then the author goes on with an example of a maxJsonLength of 500,000. Why this number? Is this just an example of how to set the property? Some sources state that 500,000 is the limit. Well, it's not, because I tested my application with 2,097,152 (2MB, roughly 4 times the 500,000) and it worked.
3. Some other sources state that 4MB is the limit... So, what is the limit? Is there a limit? Does it have something to do with the limit of the response from the server?
4. Finally, I'd like to get a strong suggestion on the length of the JSON string received from the server. Not the number to which maxJsonLength should be set, but the actual length of the JSON string, kind of "what to strive for".
Thank you in advance.
There is no hard and fast rule here. Your JSON length is going to depend on your application and what information you are returning to the client.
If you really want a rule of thumb, it should be as SMALL as possible while still communicating the data that you need.
For max values, the true limitation is most likely going to depend on browser requirements, but I personally would never go with more than 2MB for a JSON message, simply due to what it would take to send that down.
I understand that the total limit is determined by the lesser of the maxJsonLength that you have mentioned and the HttpRuntimeSection.MaxRequestLength. I am currently testing this and I will get back to you.
Of course, the big issue here is that it is seldom a good idea to return such large amounts of data. Whenever I have a response that starts to exceed about 100KB, I take another look at my overall design and find ways to serve out smaller chunks as they are needed. Even 100KB is high for most pure-data scenarios, by which I mean textual data, not images or scripts.
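For reference, both limits mentioned above live in web.config. A typical configuration looks like the following; the numbers are examples, not recommendations (note that maxRequestLength is in KB while maxJsonLength is in characters):

    <configuration>
      <system.web>
        <httpRuntime maxRequestLength="4096" />
      </system.web>
      <system.web.extensions>
        <scripting>
          <webServices>
            <jsonSerialization maxJsonLength="2097152" />
          </webServices>
        </scripting>
      </system.web.extensions>
    </configuration>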
