Thousands of polygons and the IE JavaScript statement limit - google-maps-api-3

I am writing an application which needs to display up to 4000 polygons at once, some with hundreds of coordinates. Each polygon requires mouseover, mouseout, and double-click event handlers, as well as an infobox label.
I hit a problem initially with the JavaScript statement limit in IE 8. IE restricts JavaScript to processing 5,000,000 VM statements in a single execution block before throwing up an ugly dialog box warning the user of a long-running script (regardless of how quickly the script actually executes). I got around this by placing setTimeout calls in my code and breaking the processing into chunks. The script now executes to completion in a reasonable amount of time without hitting the ugly error.
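For reference, a minimal sketch of that chunking approach (polygonData and addPolygonToMap are hypothetical stand-ins for the app's own data and per-polygon setup logic):

function processInChunks(items, chunkSize, processItem, onDone) {
  var index = 0;
  function nextChunk() {
    var end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      // Yield back to the browser so IE's per-block statement
      // counter resets before the next chunk runs.
      setTimeout(nextChunk, 0);
    } else if (onDone) {
      onDone();
    }
  }
  nextChunk();
}

// Usage: build the polygons 100 at a time.
processInChunks(polygonData, 100, addPolygonToMap);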
But the problem I am having now comes after the polygons are already built and displayed. If the user then tries to zoom, the 5,000,000-statement limit is exceeded again. This time, though, the limit is reached during execution of the Google Maps API code itself, where I have no control and can't insert setTimeouts or break the work into chunks.
In Chrome and Firefox, there is no issue whatsoever, and the polygons load surprisingly quickly, within a few seconds. Panning and zooming likewise cause no problem.
Things I have tried:
1) reducing the number of JavaScript statements. I've tried making my code leaner, but the limit is being hit during the redrawing of the polygons inside the Google Maps API code, so I'm not sure leaner code of my own would have an effect anyway.
2) reducing the number of points per polygon by requiring less accurate polygons. This helped a little but didn't fix the problem.
3) changing the IE "MaxScriptStatements" registry entry so that IE stops enforcing the limit. This worked, but it's not a practical solution for my application.
Has anyone else run into this problem?

Related

DynamoDB writing broken items when putting items too fast?

I'm facing a weird phenomenon when putting items to DynamoDB.
It seems like if I put items too fast, DynamoDB doesn't write the whole item to the table (the item ends up broken: it has only some of its attributes, and some of those hold strange values).
I'm using the AWS JavaScript SDK to put the items. No errors show up and everything seems to work fine, but once I check the data from the web console, some of the inserted items are broken. Is this related to write capacity units? (No errors tell me it's caused by write capacity units, though.) I can confirm the spike in my write capacity was about 60 writes/min, and the table is set to "on-demand" capacity.
When I slowed the writes down to a one-second interval, the exact same data was inserted correctly...
Does anyone know why this happens and how to fix it?
The answer is no: if DynamoDB decides to throttle your requests because you exceeded your provisioned capacity, or exceeded its own hardware's capacity, or whatever, it will refuse whole requests; or, in the case of BatchWriteItem, it will do some of the writes and not others (and it will tell you which it did and which it didn't). DynamoDB will never write part of a request or corrupt part of an attribute.
If you are seeing that, the most likely culprit is a bug in your own code that does the write. Maybe your code is not thread-safe, so when it tries to prepare two items for writing concurrently, the preparation has a data race and a corrupt item gets written. It is of course also possible that DynamoDB has a bug causing this, but it can't be as simple a bug as "writing more than 60 items a minute causes corruption"; if that were the case, everyone would have encountered it already...
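To illustrate with the AWS JavaScript SDK the asker mentions: a throttled BatchWriteItem either fails outright or hands back whole unwritten items in UnprocessedItems; nothing partial is ever written. A minimal sketch (the table name and item are made up):

var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

function batchWriteWithRetry(requestItems, done) {
  docClient.batchWrite({ RequestItems: requestItems }, function (err, data) {
    if (err) return done(err); // throttling surfaces as an error, never as corruption
    if (Object.keys(data.UnprocessedItems).length > 0) {
      // Retry only the items DynamoDB reported as not written
      // (production code should add exponential backoff here).
      return batchWriteWithRetry(data.UnprocessedItems, done);
    }
    done(null);
  });
}

batchWriteWithRetry({
  MyTable: [{ PutRequest: { Item: { id: '1', value: 'a' } } }]
}, function (err) {
  if (err) console.error('write failed:', err);
});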

Using realm with infinite scrolling table view in swift

I'm learning about using NSOperation and NSOperationQueue for my networking calls to deliver a more responsive UI in my app's table view.
The results of the networking operations get stored in the Realm and displayed in the table view.
This is an infinite-scroll table view, and as the user nears the end, more data is pulled into the app.
I am wondering what the best design paradigm to use here is, and where the best spot to clear the Realm is. I don't want to bloat the app with useless data; I just want users to have data if they log back in with no network (airplane mode).
I would also like to know where the best spot to trigger these networking operations is. cellForRowAtIndexPath, perhaps? I'm not too sure, since I usually just use Alamofire and trigger a network request in viewDidLoad. But those are not cancellable calls.
I've gone through the great tutorials on Ray Wenderlich, but other than the playground examples, I'm still not finding a real-world application tutorial. If anyone knows of a good one on this subject, let me know.
Thanks
This might be tricky to answer since it all depends on your app, the size/type of data it's displaying, and how often you want to perform network fetches. In the end, it will most likely be a compromise between what 'feels good' and how many system resources need to be consumed to make it happen.
In this particular scenario, Realm is being used as a caching mechanism and nothing more, so when to clear it should probably depend on how aggressively you wish to clear it.
If I were building a system like this, I would decide on a set number of the latest items I'd always want available and save them in Realm. If the user then starts scrolling down beyond that limit, more data is downloaded and appended to the Realm database as they go. Eventually the user will get tired and scroll back to the top (or they might even just quit the app and restart from the top). At that point, it would be appropriate to trigger an operation that reviews the size of the Realm cache and removes as many items as necessary to bring it back to the desired size. If they start scrolling down again, it's appropriate to simply re-download that data.
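The question is about Swift, but the trimming idea looks the same in any Realm SDK. A minimal sketch using Realm's JavaScript API, where realm is an open Realm instance and the 'Message' model with its 'fetchedAt' property are hypothetical stand-ins for whatever the app caches:

function trimCache(realm, maxItems) {
  // Newest first, so everything past index maxItems - 1 is stale.
  var all = realm.objects('Message').sorted('fetchedAt', true);
  if (all.length <= maxItems) return;
  var stale = [];
  for (var i = maxItems; i < all.length; i++) {
    stale.push(all[i]);
  }
  realm.write(function () {
    realm.delete(stale); // drop the cache back down to maxItems
  });
}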
Unlike SQLite, where items are copied into memory, Realm is very good at lazily loading resources mapped from disk, so it's not necessary to worry about the number of Realm items in memory, only about the size of the Realm file on disk, which again depends on how big the data you're downloading is.
As for when to trigger another network operation to request more data, it's probably best to do it in tableView(_:willDisplay:forRowAt:). Depending on how large the downloads are (and how tall your table cells are), you should play with the threshold until it feels natural when scrolling at a normal speed. As a starting point, I'd recommend triggering at maybe a whole screen-worth of table cells before hitting the bottom of the scroll view.
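The trigger condition itself is tiny; sketched here in language-neutral JavaScript (in UIKit this check would live inside tableView(_:willDisplay:forRowAt:), and rowsPerScreen is whatever approximates one screen of cells):

function shouldFetchMore(displayedRow, loadedCount, rowsPerScreen) {
  // Start the next fetch one screen-worth of rows before the end.
  return displayedRow >= loadedCount - rowsPerScreen;
}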
Good luck!

Explanation of XDMP-EXTIME in MarkLogic

I need a lucid explanation of why XDMP-EXTIME happens in MarkLogic. In my case it's happening during a search (a read operation). The exception message prints a line from the code:
XDMP-EXTIME: wsssearch:options($request, $req-config) -- Time limit exceeded
This gives me the impression that the execution does not go beyond that line. But it seems to be a pretty harmless line of code: it does not fetch any data from the DB, it just sets certain search options. How can I pinpoint which part of the code is causing this? I have heard that increasing the max time limit of the task server solves such problems, but that's not an option for me. Please let me know how such problems are tackled. It would be very hard for me to show you the code base. Still hoping to hear something helpful from you guys.
The error message can sometimes put you on the wrong foot because of lazy evaluation. The execution can actually be further down the road than the error message seems to indicate. Could be one line, could be several. Look for where the returned value is being used.
Profiling can sometimes help getting a clearer picture of where most time is spent, but the lazy evaluation can throw things off here as well.
The bottom-line meaning of the message is pretty simple: the execution of your code takes too long. The actual search in which the options are being used is the most likely candidate of where it goes wrong.
If you are using cts:search or search:search under the covers, that should normally perform well. A search typically gets slow when you end up returning many results, e.g. when you don't apply pagination. search:search applies pagination by default, however.
A search can also get slow if you are running it in update mode: you could end up with MarkLogic trying to apply many (unnecessary) read locks. Put the following declaration in your search endpoint code, or in the XQuery main module that does the search:
declare option xdmp:update "false";
HTH!
You could try profiling the code to see what specifically is taking so long. This might require temporarily increasing the session time limit to prevent the timeout from occurring while profiling. Note that unless this is being executed on the Task Server via xdmp:spawn or xdmp:spawn-function, you would need to increase the value on the App Server hosting the script.
If your code is in a module, the easiest thing to do is call the function that times out from Query Console using the Profile tab. Alternatively, you could begin the function with prof:enable(xdmp:request()) and later output the contents of prof:report(xdmp:request()) to a file on the filesystem, or insert it somewhere in the database.
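A minimal sketch of that second approach, assuming the server-side JavaScript mirrors of the XQuery functions named above (prof.enable, prof.report, xdmp.save; the output path is just an example):

// Turn on profiling for the current request.
prof.enable(xdmp.request());

// ... run the search that is timing out ...

// Write the collected profile somewhere you can inspect it later.
xdmp.save('/tmp/search-profile.xml', prof.report(xdmp.request()));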

What perfmon counters are useful for identifying ASP.NET bottlenecks?

Given the chart here, what should I be looking at to identify the bottleneck? As you can see, requests are averaging nearly 14 seconds under load and the bulk of that time is attributed to the CLR in New Relic's profiling data. In the performance breakdown for a particular page, it attributes the bulk of the time to the WebTransaction/.aspx page.
I see that the database is also being read (the orange), and it seems that one page delays the rest of the pages because of the lock that the session places on them.
You can also read:
Replacing ASP.Net's session entirely
My suggestion is to totally remove the session calls, and if that is not possible, find another way to save that state somewhere in the database yourself.
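If full removal isn't feasible, ASP.NET also lets you relax the session lock per page; a read-only session doesn't take the exclusive lock that serializes requests:

<%@ Page EnableSessionState="ReadOnly" %>

or you can disable it outright in web.config:

<pages enableSessionState="false" />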
Actually, in my pages I have implemented all three possible options:
1. I call the page without session.
2. I have made a totally custom session: values tied to the user's cookie.
3. I have made threads that run outside the session and do the calculations in the background; when they finish, I show the results.
In some cases the calculations are done in an iframe that calls a page without session, and I show the results there later.
In New Relic's Pro version, you can use Transaction Traces, which help pinpoint exactly where the issue is happening.

Flex web application gets progressively slower and freezes

I have a Flex web application where I am visualizing data (for different countries) in the form of charts. The data is in the form of CSV files. There are individual files for individual charts i.e. one file has all data pertaining to one chart for all countries.
I have a left navigation menu that allows one to see data on a country by country basis. As I view more and more countries, the web application becomes progressively slower till it freezes completely. The problem goes away if I refresh the browser and empty the cache.
I am using the URLLoader class in Flex to read the CSV data into a string, and then I parse the string to generate the charts.
I realize this is happening because more and more data is somehow accumulating in the browser. Is there any way in Flex to rectify this? Any pointers/help would be appreciated.
Thanks
- Vinayak
Like @OXMO456 said before me, I would use the profiler to check this issue.
To refine my answer, I would also say: please make sure you are following all of the rules for low memory usage in Flex, like
1. clearing out (removing) event listeners
2. nulling out static variables
and so on (see the sketch below).
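A minimal sketch of those two rules in ECMAScript syntax, which ActionScript 3 shares (loader, onDataLoaded, and cachedCsv are hypothetical names based on the URLLoader setup in the question):

function disposeCountryView(loader) {
  // 1. Remove the listener added when the CSV load was started,
  //    so the loader and its data can be garbage-collected.
  loader.removeEventListener("complete", onDataLoaded);
  // 2. Null out static/long-lived references to the parsed data.
  cachedCsv = null;
}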
I would use the "snapshot" feature of the profiler and see what is happening in minute 1 and then in minute 2; the difference between the two is probably the source of your leak.
