Adobe AIR SQLite Async Events Not Dispatching

Working on an application that makes very heavy use of the local SQLite db. Initially it was set up for synchronous database communication, but with such heavy usage we were seeing the application "freeze" for brief periods fairly often.
After refactoring to asynchronous communication we are seeing a different issue: the application seems to be far less reliable, and jobs seem to simply not complete. After much debugging and tweaking, the problem seems to be that the database event handlers are not always caught. I'm seeing this specifically when beginning a transaction or closing the connection.
Here is an example:
con.addEventListener(SQLErrorEvent.ERROR, tran_ErrorHandler);
con.addEventListener(SQLEvent.BEGIN, con_beginHandler);
con.begin(SQLTransactionLockType.IMMEDIATE);
Most of the time this works just fine, but every now and then con_beginHandler isn't hit after con.begin is called. That leaves us with an open transaction that never gets committed, which can really hang up future requests. When investigating this same issue with the connection close handler, one of the solutions was to simply delay it; in that context it was OK to wait even several seconds.
setTimeout(function():void{ con.begin(SQLTransactionLockType.IMMEDIATE); }, 1000);
Changing to something like this does seem to make the transaction more reliable; however, it really stretches out the time it takes for the application to complete actions. This is a very db-heavy application, so even adding 200ms has a noticeable effect. And something as short as 200ms doesn't seem to fully solve the issue anyway; it has to be 500-1000ms or higher before I stop seeing it.
I've written a separate AIR application to try to stress-test our code and the transactions, but I am unable to reproduce this in that environment. I even have it do things that "freeze" the application (long loops that do some math or other processing) to see if application strain is what makes the events misfire, but everything is reliable there.
I'm at a loss for how to resolve this at this point. I even tried running con.begin off of a binding event, just to add more time. The only thing that seems to work is excessively long timers/timeouts, which I don't think is an acceptable solution.
Has anybody else run into this? Is there some trick to async that I'm missing?

I had a few more ideas to try after a refreshing weekend, none of which panned out; however, during these attempts and further investigation I finally found a pattern to the issue. Even though it doesn't happen consistently, when it does happen it is fairly consistent about where it happens. There are one or two spots in the problematic processes that compact the DB after clearing data, in order to help keep the file size down. I think the issue is that compact wasn't worked into the async flow properly: while we are still compacting the db, we are also trying to start up the new transaction, so whenever the compact takes a bit of time we get a hang-up. I had assumed the BEGIN event would simply be dispatched late, once the transaction finally started, rather than never firing at all, but the actual behavior does make some amount of sense.
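In other words, the fix is to chain each step off the previous step's completion event instead of issuing the calls back to back. A rough sketch of what I mean (the handler names here are made up, and con is assumed to be a connection already opened with openAsync()):

import flash.data.SQLConnection;
import flash.data.SQLTransactionLockType;
import flash.events.SQLErrorEvent;
import flash.events.SQLEvent;

var con:SQLConnection; // assumed already open in async mode

function compactThenBegin():void {
    con.addEventListener(SQLEvent.COMPACT, compactHandler);
    con.addEventListener(SQLErrorEvent.ERROR, errorHandler);
    con.compact(); // returns immediately; COMPACT fires when done
}

function compactHandler(event:SQLEvent):void {
    con.removeEventListener(SQLEvent.COMPACT, compactHandler);
    con.addEventListener(SQLEvent.BEGIN, con_beginHandler);
    con.begin(SQLTransactionLockType.IMMEDIATE); // db is idle now
}

function con_beginHandler(event:SQLEvent):void {
    con.removeEventListener(SQLEvent.BEGIN, con_beginHandler);
    // run the statements for this transaction, then commit
}

function errorHandler(event:SQLErrorEvent):void {
    // log and recover rather than leaving the transaction pending
}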

Related

CPU Usage and OutOfMemory Exception

We have a web application based on ASP.NET 1.1. We deployed it on a web server, but there is a problem with it.
On the web server, CPU usage sometimes climbs to 100% and an OutOfMemory exception occurs.
I think there is some faulty code in the project, but I don't know where it is.
I'd like to hear your advice on how to find the problem and what kinds of code drive CPU usage up.
It looks like the garbage collector is not doing its work as it's supposed to for some reason. I suggest looking for places in the code where you have variable declarations inside long loops. For example, check for loops that look like this:
Dim c As Car
For i As Integer = 0 To 20
    c = New Car()    ' a new Car is allocated on every iteration
    c.Brand = ""     ' used briefly, then immediately becomes garbage
Next
The above loop creates a lot of garbage, so make sure to call Dispose() when you finish using a disposable object.
Another issue to check for is recursion: if you have recursive calls, make sure the breaking condition is correct, and call Dispose() as well before making the next recursive call.
If you have no idea how to debug something once it's deployed, the first place you should look to learn is Tess Ferrandez's blog. Click, and read. A lot. :) May I suggest you start with the debugging labs.

MSMQ - Message Queue Abstraction and Pattern

Let me define the problem first and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to design my application from the ground up with this in mind.
I have decided to tackle this problem by using Microsoft Message Queuing and performing inserts asynchronously as time permits. However, I quickly ran into a problem: certain inserts that I perform may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system, and what happens if you need to recall the last transaction, one that still hasn't been inserted).
The way I decided to tackle this is by abstracting the MessageQueue and combining it with my data access layer, creating the illusion of a single set of data being returned to the user of the data layer. (I have considered the other issues that occur in such a scenario, essentially dirty reads and such, and have concluded that for my purposes I can control them.)
However, this is where things get a little nasty... I've worked out how to get the messages back and such (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue, one that minimizes the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly.
Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have any of their own ideas about how I can tackle this problem? Does anyone even understand what I am talking about? :-)
Any and ALL input would be highly appreciated and seriously considered…
Thanks again.
For anyone interested: I decided in the end to simply cache the transaction in another location and use MSMQ as intended and described below.
If the queue has a large-ish number of messages on it, then enumerating those messages will become a serious bottleneck. MSMQ was designed for first-in-first-out kind of access and anything that doesn't follow that pattern can cause you lots of grief in terms of performance.
The answer depends greatly on the sort of queries you're going to be executing, but the answer may be some kind of no-sql database (CouchDB or BerkeleyDB, etc)

How to debug issues with differing execution times in different contexts

The following question seems to be haunting me more consistently than most others recently. What kinds of things would you suggest I tell people to look for when trying to debug "performance issues" like this?
ok, get this - running this in query analyzer takes < 1 second
exec usp_MyAccount_Allowance_Activity '1/1/1900', null, 187128
debugging locally, this takes 10 seconds:
DataSet allowanceBalance = SqlHelper.ExecuteDataset(
    WebApplication.SQLConn(),
    CommandType.StoredProcedure,
    "usp_MyAccount_Allowance_Activity",
    Params);
same parameters
Horrible question to answer, really: code in the debugger vs. code not in the debugger introduces all manner of Heisenbug timing problems, most of which you'll never know about, into the soup of things that can muck things up for you.
Debuggers tend to put their fingers in all the tasty places that may affect performance.
Debug Events. The debugger gets special events during the application load, execution, dll load/unload, shutdown. The debugger will do whatever it wants in these events. That will be a source of slowdown.
Debug Output. OutputDebugString() and all the code that uses it (the trace output in .Net, for example) suddenly become active. This is slow.
The HeapAlloc() family of functions, when run under a debugger, starts to check for all sorts of heap inconsistencies, which consumes more time.
If you have Symbol Discovery turned on, there may be delays as various Symbol Servers are queried for symbols and downloaded if required (you'll notice the delay if they are downloaded).

Is it acceptable to keep a db connection open for the life of the page?

Everybody knows that you should close a connection immediately after you finish using it.
Due to a flaw in my domain object model design, I've had to leave the connection open for the full page life cycle. Essentially, I have a just-in-time property which opens a connection on first call; then on Page.Unload(..) it checks whether a db connection was ever opened and closes it if it was. Since the whole page only takes a second, I've been of the opinion that it's not much different from closing the connection immediately.
Is this ok? Or should it still be closed immediately after every single use?
Thanks in advance.
No, it is not OK.
If your application will ever need to grow or scale, you'll want to fix this issue. By holding that connection open you're reducing your ability to scale. Keep in mind that open connections take up memory on the server, memory on the client, hold open locks, etc.
What if your page crashes before reaching the Page.Unload event? You will be left with an open connection. To me it is better to always close the connection as soon as possible.
It's not ideal but I wouldn't re-write my application over it. Unless your page is doing a large amount of time-consuming work in various methods, the whole page lifecycle should execute quickly. In practice it may just mean that your connection object is open a few milliseconds longer than it would have been otherwise. That might be significant in some scenarios, but it doesn't sound like it would be in your case.
Yes, it is ok.
Closing the connection as soon as you can is a best practice for preventing orphaned open connections, but if you are sure that the connection does get closed, there is nothing wrong with that.
Every decent ASP.NET app uses connection pooling nowadays, and a pool is basically a bunch of open connections. In your case that would mean that the connection you're holding on to is "occupied" and can't be used to serve other requests.
As far as I see it would be a scalability issue depending on the amount of time your page needs to do work/render. If you expect only 100 users, like you say, then probably it's not an issue - unless it's 100 req/sec of course.
From the technological perspective it's OK. As far as I remember, most client-server applications (web and non-web), including classic ASP code, used to work like that, e.g. you declare one connection for the entire page and work with it.
Page crashes? This is what using and finally are for.
That said, for the sake of DB performance (i.e. scaling)*, it's best to keep connections open for as short a period as possible, allowing only that you don't want to open-close-open-close repeatedly for rapidly sequential and predictable work.
* I was told this by a mentor early in my career. I must say I've not actually tested it myself, but it sounds right theoretically.
Of course you can keep them open, but no, no: close them after use in finally blocks. A fair trade-off from "after every single use" is to close after every block of use; if you're apt to run a stored proc, update a column, then delete some other row, you could open/close around those three operations, presuming they're all wrapped in a try/catch/finally.
You should certainly keep the connection open across the lifetime of the page, if you're doing multiple queries during it. Generally, one re-uses connections across many pages, actually.
I think you'd get much more informed and productive feedback if you provided some snippets of what you're doing (code) and expanded on the reasons why you've made this choice. There is most likely a better solution that doesn't require keeping the connection open so long, but at the least, for pragmatic reasons, you could get some feedback on whether it's worth revamping.
In future, you definitely want to move away from data access in your code-behind.
I find it convenient to keep the connection open when using ORM (Open Session in View) so that after an initial eager fetch, other data can be lazily loaded as needed. This works well when page response times are reasonable so as not to tie up connections.

Adobe Flex App page file usage going through the roof!

I have been working on an Adobe Flex application for some months now, and the application is meant to run 24/7 for days (weeks!) continuously. However, I'm now seeing that after a few days of running nonstop, the computer it runs on tells me that the system is low on virtual memory and gives me an error about page file usage. Once I close the Flex app, page file usage goes down from 1.9 GB to 100 MB (or less). It seems the app is using up all this memory and not freeing it, although I have been very careful in my app not to keep huge arrays.
The app does some graphing and draws a lot of shapes (to create a 'gauge') and then gets rid of them by re-declaring that object as another 'gauge'.
Any idea why my page file usage is climbing so high?!
You most probably have eventListeners that are not being removed. They keep references to objects and prevent them from being garbage collected.
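For example, if each 'gauge' subscribes to some long-lived dispatcher, it has to unsubscribe before being discarded. A rough sketch (the Gauge class and its Timer are made-up stand-ins for whatever your gauges actually listen to):

package {
    import flash.display.Sprite;
    import flash.events.TimerEvent;
    import flash.utils.Timer;

    public class Gauge extends Sprite {
        private var clock:Timer;

        public function Gauge(clock:Timer) {
            this.clock = clock;
            // The timer now references this instance through the
            // listener; until the listener is removed, the old gauge
            // can never be collected, no matter how you "re-declare" it.
            clock.addEventListener(TimerEvent.TIMER, redraw);
            // Or register with useWeakReference=true (5th argument) so
            // a forgotten listener won't pin the object in memory:
            // clock.addEventListener(TimerEvent.TIMER, redraw, false, 0, true);
        }

        public function dispose():void {
            clock.removeEventListener(TimerEvent.TIMER, redraw);
        }

        private function redraw(event:TimerEvent):void {
            graphics.clear();
            // ...redraw the gauge shapes...
        }
    }
}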
You can use the profiler in Flex Builder Professional to see where your memory usage is going. As another poster mentioned, event listeners are a lot of the time the culprits in cases like this, but more generally, just because you think you are getting rid of (destroying or deleting) a variable doesn't mean it is really taken care of by the garbage collector. If any reference (like an event listener) to that variable (or object) still exists, it will not be collected. The profiler will point out these things.
I've heard rumors that putting anything on the Stage will create memory leaks. In other words, you can be as careful as possible with your code, but you'll still leak memory. This has not been validated by Adobe, as far as I know. A good test might be to instantiate a Shape and a Sprite and a MovieClip, add them to the display list, and then let the app run overnight. Would love to hear the results if you do end up testing this.
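If anyone does run that test, a rough sketch of the harness (assuming it lives in the document class, where stage is available):

import flash.display.MovieClip;
import flash.display.Shape;
import flash.display.Sprite;
import flash.events.TimerEvent;
import flash.utils.Timer;

// Add and remove the three display types repeatedly overnight, then
// watch System.totalMemory and the OS page file for steady growth.
var ticker:Timer = new Timer(100); // every 100 ms, repeating forever
ticker.addEventListener(TimerEvent.TIMER, tick);
ticker.start();

function tick(event:TimerEvent):void {
    var shape:Shape = new Shape();
    var sprite:Sprite = new Sprite();
    var clip:MovieClip = new MovieClip();
    stage.addChild(shape);
    stage.addChild(sprite);
    stage.addChild(clip);
    stage.removeChild(shape);
    stage.removeChild(sprite);
    stage.removeChild(clip);
}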
