I have a Flex web application where I am visualizing data (for different countries) in the form of charts. The data comes from CSV files, one file per chart, i.e. one file holds all the data for that chart across all countries.
I have a left navigation menu that lets the user view data on a country-by-country basis. As I view more and more countries, the web application becomes progressively slower until it freezes completely. The problem goes away if I refresh the browser and empty the cache.
I am using the URLLoader class in Flex to read the CSV data into a string, and then I parse the string to generate the charts.
I realize this is happening because more and more data is somehow accumulating in the browser's memory. Is there any way in Flex to rectify this? Any pointers/help would be appreciated.
Thanks
- Vinayak
Like #OXMO456 said before me, I would use the profiler to check this issue.
To refine my answer, I would also say: make sure you are following all of the rules for keeping memory usage low in Flex, such as:
1. clearing out (removing) event listeners
2. nulling out static variables
and so on.
I would use the "snapshot" feature of the profiler and see what is happening at minute 1 and then at minute 2; the difference between the two snapshots is probably the source of your leak.
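As a minimal illustration of rules 1 and 2, here is a sketch in TypeScript syntax (the same pattern applies to ActionScript's EventDispatcher and URLLoader; the class and member names here are hypothetical, not any real Flex API):

    // Hypothetical chart view that loads CSV data; the pattern, not the API, is the point.
    class ChartView {
      // A static cache like this keeps every parsed CSV alive for the whole
      // session unless it is explicitly nulled out (rule 2).
      private static lastCsv: string | null = null;

      // Store the listener in a field so the *same* function reference can be
      // removed later; an inline anonymous listener cannot be removed.
      private readonly onComplete = () => this.render();

      constructor(private readonly loader: EventTarget) {
        this.loader.addEventListener("complete", this.onComplete);
      }

      render(): void {
        // ...parse ChartView.lastCsv and draw the chart...
      }

      dispose(): void {
        // Rule 1: remove every listener you added, otherwise the loader keeps a
        // reference to this view and neither can be garbage-collected.
        this.loader.removeEventListener("complete", this.onComplete);
        // Rule 2: null out static references so old data can be collected.
        ChartView.lastCsv = null;
      }
    }

The key discipline is that every load of a new country's data should be paired with a dispose of the previous one; otherwise each navigation adds another set of listeners and strings that can never be reclaimed.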
I'm learning how to use NSOperation and NSOperationQueue for my networking calls, to deliver a more responsive UI in my app's table view.
The results of the networking operation get stored in the Realm and displayed in the table view.
This is an infinite-scroll table view, and as the user gets to the end, more data is pulled into the app.
I am wondering what the best design paradigm to use here is, and where the best spot to clear the Realm is. I don't want to bloat the app with useless data; I just want users to have data if they log back in with no network (airplane mode).
I would also like to know where the best spot to trigger these networking operations is. cellForRowAtIndexPath, perhaps? I am not too sure, since I usually just use Alamofire and trigger a network request in viewDidLoad, but those are not cancellable calls.
I've gone through the great tutorials on Ray Wenderlich, but other than the playground examples, I am still not finding a real-world application tutorial. If anyone knows of a good one on this subject, let me know.
thanks
This might be tricky to answer, since it all depends on your app, the size/type of data it's displaying, and how often you want to perform network fetches. In the end, it will most likely be a compromise between what 'feels good' and how many system resources need to be consumed to make it happen.
In this particular scenario, Realm is being used as a caching mechanism and nothing more, so when to clear it should probably depend on how aggressively you wish to clear it.
If I were building a system like this, I would decide on a set number of the latest items I would always want available and save them in Realm. If the user then scrolled down beyond that limit, more data would be downloaded and appended to the Realm database as they went. Eventually the user will get tired and scroll back to the top (or they might even just quit the app and restart from the top). At that point, it would be appropriate to trigger an operation that reviews the size of the Realm cache and removes as many items as necessary to bring it back to the desired size. If they start scrolling down again, it's appropriate to simply re-download that data.
Unlike SQLite, where items are copied into memory, Realm is very good at lazily loading resources mapped from disk, so it's not necessary to worry about the number of Realm items in memory, only about the size of the Realm file on disk, which again depends on how big the data you're downloading is.
As for when to trigger another network operation to request more data, it's probably best to do it in tableView(_:willDisplay:forRowAt:). Depending on how large the data to download is (and how large your table cells are), you should play with it until it feels natural when scrolling at a normal speed. As a starting point, I'd recommend triggering about a whole screen's worth of table cells before hitting the bottom of the scroll view.
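To make the two policies concrete, here is a rough sketch (in TypeScript syntax purely for brevity; in the actual app this logic would live in your Swift table view delegate, and the limit/threshold numbers are assumptions to tune):

    const CACHE_LIMIT = 200;        // assumed: newest items always kept in Realm
    const PREFETCH_THRESHOLD = 10;  // assumed: roughly one screen of cells

    // Call this from the equivalent of tableView(_:willDisplay:forRowAt:).
    function willDisplayRow(row: number, totalRows: number, fetchNextPage: () => void): void {
      // Start downloading the next page one screen before the user hits the end.
      if (row >= totalRows - PREFETCH_THRESHOLD) {
        fetchNextPage();
      }
    }

    // Run this when the user scrolls back to the top or relaunches the app.
    function trimCache<T extends { fetchedAt: Date }>(items: T[]): T[] {
      // Keep only the newest CACHE_LIMIT items; anything older can simply be
      // re-downloaded if the user ever scrolls that far again.
      return [...items]
        .sort((a, b) => b.fetchedAt.getTime() - a.fetchedAt.getTime())
        .slice(0, CACHE_LIMIT);
    }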
Good luck!
I know there are lots of similar questions out there like this, but all of the solutions are either ones I cannot use or ones that do not work. The basic issue is that I have to make a web service call that returns a typed DataSet. This DataSet can have 30,000 rows or more in some cases. So my question is: how do I make the page more responsive and perhaps load everything while the web service is still downloading the DataSet?
Please note that normally I would never return this amount of data and would instead do paging on the server side, but the requirements here really lock down what I can do. I can make the web service return JSON if need be, but my problem at that point is how to get the JSON data back into a format that the GridView could use to bind the data. I know there is an external library out there, but that is out as well.
Sad to say that the restrictions I have here are pretty obscene, but they are what they are and I cannot really change them.
TIA
-Stanley
A common approach to this kind of scenario is to page (chunk) your data as it comes back, and to do so asynchronously on a separate thread. You might even be able to do this in only two chunks: the first 1,000 rows, then the rest. It will seem very responsive to your users. If there is any way to require the users to filter the result set, reducing its size, that would be ideal.
#Lostdreamer is right. Use jQuery to do two AJAX calls: the first call gets the first 1,000 rows, then kicks off the second call, and so on. Honestly, this is simply simulating what HTTP typically does (limiting packet sizes and loading multiple chunks).
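A rough sketch of that two-chunk approach (TypeScript with plain fetch rather than jQuery for brevity; the endpoint name and paging parameters are made-up assumptions):

    // Load a small first page immediately, then the remainder in the background.
    async function loadGrid(bindRows: (rows: unknown[]) => void): Promise<void> {
      // First call: 1,000 rows the user can see and interact with right away.
      const first = await fetch("/DataService.asmx/GetRows?offset=0&count=1000");
      bindRows(await first.json());

      // Second call: the remaining rows, appended once they arrive. The page
      // stays responsive because it has already rendered the first chunk.
      const rest = await fetch("/DataService.asmx/GetRows?offset=1000");
      bindRows(await rest.json());
    }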
Given the chart here, what should I be looking at to identify the bottleneck? As you can see, requests are averaging nearly 14 seconds under load and the bulk of that time is attributed to the CLR in New Relic's profiling data. In the performance breakdown for a particular page, it attributes the bulk of the time to the WebTransaction/.aspx page.
I see that the database is also being read (the orange part), and it seems that one page delays all of the other pages because of the lock that the session places on them.
You can also read:
Replacing ASP.Net's session entirely
My suggestion is to remove the session calls entirely, and if that is not possible, find another way to save the values in the database yourself.
Actually, in my pages I have implemented all three possible options:
1. I call the page without a session.
2. I use a totally custom session, i.e. values keyed to the user's cookie.
3. I use threads that run detached from the session and do their calculations in the background; when they finish, I show the results.
In some cases the calculations are done in an iframe that calls a page without a session, and I show the results there later.
In the Pro version, you can use Transaction Traces, which help pinpoint exactly where the issue is happening.
I'm thinking about an architectural way of displaying messages in our application (Flex-ASP.NET-SQL Server), mostly messages that announce, for instance, a downtime.
Currently I was thinking of creating a table FlexMessage that holds the name of a message (based on that name, I know where to put it in Flex) and the value (the message itself). As a result, however, someone would have to create these messages and also delete them when they are no longer valid. So, thinking further, I thought of giving messages a start date and an end date, i.e. an interval during which they need to be displayed. That way, someone could log in to the management part and create a message that needs to be displayed from a certain date until a certain date.
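For illustration, the interval idea might look like this (a TypeScript sketch; the field names are assumptions, not an existing schema):

    // One row of the proposed FlexMessage table.
    interface FlexMessage {
      name: string;       // tells the Flex app where to display the message
      value: string;      // the message text itself
      startDate: Date;    // first moment the message should be shown
      endDate: Date;      // last moment the message should be shown
    }

    // A message is displayed only while "now" falls inside its window, so
    // expired messages disappear without anyone having to delete them.
    function activeMessages(all: FlexMessage[], now: Date = new Date()): FlexMessage[] {
      const t = now.getTime();
      return all.filter(m => m.startDate.getTime() <= t && t <= m.endDate.getTime());
    }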
I could also hardcode it in the Flex application, but that would mean putting a new build of the SWF online each time something changes with a certain message, which is not a good idea, I guess.
Is there a better way for this that I haven't thought about?
One way to do this is to place your messages in an RSS feed, then read that feed from the Flex application.
There is an example of how to do this here: http://www.artima.com/weblogs/viewpost.jsp?thread=23819
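A minimal sketch of the consuming side (TypeScript/DOM syntax for illustration; in Flex you would likely use URLLoader and E4X instead, and the feed URL is an assumption):

    // Poll the feed and extract one announcement per <item>.
    async function loadMessages(feedUrl: string = "/messages.rss"): Promise<{ title: string; body: string }[]> {
      const xmlText = await (await fetch(feedUrl)).text();
      const doc = new DOMParser().parseFromString(xmlText, "application/xml");
      return Array.from(doc.querySelectorAll("item")).map(item => ({
        title: item.querySelector("title")?.textContent ?? "",
        body: item.querySelector("description")?.textContent ?? "",
      }));
    }

A nice side effect of this design is that the "management part" becomes whatever publishes the feed, and the Flex client never needs a new build when messages change.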
I'm having a bit of a time trying to get Dojo grids (1.5) to play nice. Specifically, I've spent about two weeks of work trying to implement a grid that allows our result-set data to collapse into rows, where rows can be expanded. Data comes in as a full set in JSON format, using ItemFileReadStore as the store. Any subsequent sorts or pagings are handled by GETting a new JSON from the application and passing new query parameters in the URL.
The nested data is only two layers deep: a top layer that is always displayed, and an array of child data with a structure identical to the top layer. Each node has a unique ID and a cluster ID; on a parent node, the unique ID and cluster ID match.
I was initially very excited about TreeGrid, but I couldn't see how to format it to do what I needed, namely eliminate the 'summary row' and one extra row full of null cells (???) that I just couldn't figure out how to remove unless I narrowed the query to only one cluster. I studied the test examples, built many test pages myself, and tried to understand the forestModel, which as far as I could tell was unnecessary... I found very little documentation, and sources I found online hinted that TreeGrid might not be reliable...
So I decided I would try to implement the expandable/collapsible rows in dataGrid.
I flattened the JSON data and added an attribute to indicate a top-level node ('alwaysShow' = true). I built my grid programmatically and applied grid.filter() to pull only those top-level nodes. I modified that filter by extending the ItemFileReadStore _FetchItems "filter" method to allow OR queries instead of AND, and also modified it to allow keys to point to arrays; when a top-level node's +/- icon in the cell is clicked, the cluster ID of the parent node is added to grid.filter.allowed[] and the filter is updated, allowing nodes with that cluster_id value to be displayed.
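Stripped of the Dojo internals, the filter logic described above amounts to something like this (a generic TypeScript sketch of the idea, not ItemFileReadStore's actual API):

    interface Row {
      id: string;
      clusterId: string;   // matches id on a parent (top-level) node
      alwaysShow: boolean; // true for top-level nodes
    }

    const expandedClusters = new Set<string>();

    // Called when the +/- icon on a parent row is clicked.
    function toggleCluster(clusterId: string): void {
      if (expandedClusters.has(clusterId)) {
        expandedClusters.delete(clusterId);
      } else {
        expandedClusters.add(clusterId);
      }
    }

    // OR semantics: show a row if it is top-level OR its cluster is expanded.
    function visibleRows(rows: Row[]): Row[] {
      return rows.filter(r => r.alwaysShow || expandedClusters.has(r.clusterId));
    }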
This worked fine on my small test set of five records (although I'd say a little sluggish...), but now I am pulling ~900 rows back from the application, and on expanding large clusters (~80 rows) I see a very long flash of blue and white on filter updates. I've spent most of my day trying to step through it in Firebug to find where it's happening, but the Dojo logic is so spread out. It seems to happen before the call to _Grid.js defaultUpdate.
It's so bad that I am considering trying again with TreeGrid. I'm also considering just doing this by hand... I'm kicking myself for spending so much time trying to get Dojo to work to begin with. I would also consider a commercial "JSON -> table with collapsible rows" library if anyone has any recommendations...
Any suggestions or insights? Familiarity with the flashing problem, or ideas on how I could adapt TreeGrid to my needs? I'm aware this is a bit of a rant... Many thanks for any help.
-robbie
EDIT: I eventually gave up trying to get Dojo to do what I needed and coded it myself in less than a day. Not the best use of three weeks...
EDIT: I just found a solution that works for me; I added the following CSS:
.dojoxGridSummaryRow {
    visibility: collapse;
}
Basically, the summaries are probably still created, but they are neither visible nor taken into account in the table layout. That's good enough for me. Hope this solves your issue.
This won't help, but just to let you know:
"but I couldn't see how to format it to do what I needed, namely eliminate the 'summary row'"
is the very EXACT same thing I'm trying to achieve, and I did not find a solution even though this looks like a very simple feature... I will let you know if I find one...