Using RSLs with Adobe AIR

Does anyone know how exactly RSLs work with AIR? I have a terminal server that runs several instances of a very large AIR application, which unfortunately uses 100 MB of RAM on startup and 200 MB after a bit of use. This is obviously not really workable, and I'm thinking that RSLs may be a solution if they're cached on the machine. However, I haven't been able to find much of anything on this, and I'd really like to know if anyone has.
On a second note, what are some good ways to reduce the initial memory footprint of an AIR application?

RSLs will only help with download size, not RAM usage. To use less memory I recommend AMF instead of XML, as XML parsing has some overhead.
Hope that helps.
-James

Try using the profiler that comes with Flex Builder. It will help you see what is eating up the memory, and then you can change your code accordingly.

After a good bit of research, I found that the answer is simply this: using RSLs with Flex gives you the advantage of caching most of the core Flex libraries within the browser, thus greatly reducing download time after the initial one. This is not, however, the case for AIR. It does you basically no good.
Like James said, it'll help with the download size only.
As far as memory goes... check out articles by Grant Skinner - he's helped me out a lot.
Thanks!

Related

RAMDisk reliability

I was looking at the existing RAMDisk discussions ... and none seem to bring up any reliability issues. I recently started using a Dataram RAMDisk for my source code and am wondering if there are any risks I should be concerned about.
It did speed up the compile time by 30%.
I am not fully familiar with that product, but the answer probably depends on whether you have a (good) UPS and what you are using to sync changes to your hard drive. I looked into this a while ago (on a Linux machine), mapping a portion of RAM as a disk and using rsync to persist changes to the hard drive, but I dropped the idea and got a faster hard drive instead :) I would be very interested in seeing this working...

Memory problems in ASP.NET

I've got memory problems in my ASP.NET application. The problem is that I can't see any issues when running it locally (between 100 and 200 MB), but on the production system I get 503 errors because the memory limit (512 MB) is being reached (it's running on shared hosting).
How can I pin down the problem? I don't think I have access to the current memory usage; at least I have not found any way to get it, and the company that hosts my site says there is none.
I have absolutely no experience tracking down memory leaks. :)
Thanks
Use a trial version of Red Gate's ANTS Memory Profiler
http://www.red-gate.com/products/ants_memory_profiler/
or JetBrains dotTrace
http://www.jetbrains.com/profiler/
Both tools are very simple and easy to use and do a great job of identifying potential memory leaks, etc.
The most common sources of leaks are missed Dispose calls or poor management of event handlers... depending on the size of your code base, you may be able to just "spot" the trouble spots, but I find using a tool speeds up the process greatly, as both will present before/after snapshots of the object graphs so you can see what is and is not being cleaned up by the GC.
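To make the event-handler point concrete, here is a minimal sketch of the classic leak and its fix (all the type names are made up for illustration):

    using System;

    // A long-lived publisher's event keeps a reference to every
    // subscriber, so subscribers are never collected unless they
    // unsubscribe.
    class DataFeed
    {
        public event EventHandler Updated;

        public void RaiseUpdated()
        {
            EventHandler handler = Updated;
            if (handler != null)
            {
                handler(this, EventArgs.Empty);
            }
        }
    }

    class PageViewModel : IDisposable
    {
        private readonly DataFeed _feed;

        public PageViewModel(DataFeed feed)
        {
            _feed = feed;
            _feed.Updated += OnUpdated;   // the feed now holds a reference to us
        }

        private void OnUpdated(object sender, EventArgs e)
        {
            // refresh state
        }

        public void Dispose()
        {
            // The commonly missed step: without this, a long-lived feed
            // keeps every view model it ever saw alive.
            _feed.Updated -= OnUpdated;
        }
    }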
Good overview of memory management:
http://msdn.microsoft.com/en-us/library/ee817660.aspx
I don't know that this is completely answerable here, but here's a start for you... The other answers address specific memory issues, but first, you need to understand how memory is allocated and deallocated (reserved, used, and released) by the computer, the .NET runtime, and in turn, your program.
Then you need to understand your code well enough to know which functions happen on a per-user basis, and look at how much memory each is using. From there, you can get into your code and track down issues, but you need a firm understanding of the basics.
If I were you, I'd start with this article and plan on spending some more time researching and learning. Hopefully, the article will not only answer questions but give you enough knowledge to ask more specific/better questions. It's a good article, and I believe it will really help you, but it's not the whole kit and caboodle. There's quite a bit to learn.
http://msdn.microsoft.com/en-us/magazine/cc188781.aspx
The article is a bit old, and I'm assuming you're using more recent tools, so when you're done digesting that article, jump to http://msdn.microsoft.com/en-us/library/ms182372.aspx to learn about the Visual Studio Profiler.
This isn't necessarily an answer to your problem, per se, but more of a suggestion as to how to track things like this down.
One thing that I've found helps in tracking down these sorts of issues is to build some sort of instrumentation into your application. It could start as simple as a cache that keeps track of page request durations. You could create a static cache class to hold either all requests (not recommended) or just long-running requests that you define (a safer approach), and have it populated in the OnBegin and OnEnd events (an HTTP module would be ideal; see the sketch below). You could then create a basic dashboard page that lists the contents of the cache to highlight potential trouble spots.
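For illustration, here's a rough sketch of what such a module might look like (the class names, the 2-second threshold, and the 500-entry cap are all illustrative choices, not from any real product):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Web;

    public class RequestTimingModule : IHttpModule
    {
        // Only keep requests slower than this; the value is arbitrary.
        private const int ThresholdMs = 2000;

        public void Init(HttpApplication app)
        {
            app.BeginRequest += OnBegin;
            app.EndRequest += OnEnd;
        }

        private void OnBegin(object sender, EventArgs e)
        {
            HttpContext.Current.Items["timer"] = Stopwatch.StartNew();
        }

        private void OnEnd(object sender, EventArgs e)
        {
            Stopwatch sw = HttpContext.Current.Items["timer"] as Stopwatch;
            if (sw == null) return;
            sw.Stop();
            if (sw.ElapsedMilliseconds >= ThresholdMs)
                SlowRequestLog.Add(HttpContext.Current.Request.RawUrl,
                                   sw.ElapsedMilliseconds);
        }

        public void Dispose() { }
    }

    // Static cache of slow requests; a dashboard page can enumerate it.
    public static class SlowRequestLog
    {
        private static readonly object Sync = new object();
        private static readonly List<KeyValuePair<string, long>> Entries =
            new List<KeyValuePair<string, long>>();

        public static void Add(string url, long milliseconds)
        {
            lock (Sync)
            {
                Entries.Add(new KeyValuePair<string, long>(url, milliseconds));
                if (Entries.Count > 500) Entries.RemoveAt(0);   // cap memory use
            }
        }

        public static KeyValuePair<string, long>[] Snapshot()
        {
            lock (Sync) { return Entries.ToArray(); }
        }
    }

The module would be registered under system.web/httpModules (or system.webServer/modules on IIS 7).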
First things first... a 503 is not only because of memory. If your application crashes 5 times in 5 minutes, rapid-fail protection shuts down the application pool and you get a 503 Service Unavailable error.
A limit of around 500 MB seems pretty low to me, so memory could well be adding to your problem. If it is a 503 error, you have to troubleshoot the issue from a crash perspective. Link
If you are having memory issues, you will typically get OutOfMemoryExceptions, in which case you should take multiple memory dumps of your process (w3wp.exe) and analyze them. Link has many posts on how to analyze memory dumps for memory leaks. Right now, it would be too early for you to call it a memory leak.

Cassandra and ASP.NET (C#)

I am interested in building a portal on Cassandra, since I have faced some performance and scaling issues starting at around 1 million records.
It could definitely be solved, but I am interested in other options.
My main issue is the cost of updating all the necessary indexes to keep reads fast.
First, is Cassandra a good way to go for ASP.NET programmers? Maybe there are other projects worth a look.
Second, can you provide any documentation or samples on how to start programming Cassandra from C#?
since I have faced some performance and scaling issues starting at around 1 million records.
Maybe your design was not that good; NoSQL is not a magic bullet for bad design. I have multi-billion-row tables and 95% of responses are sub-second. Also, what do you mean by updating indexes: updating statistics or rebuilding indexes?
since I have faced some performance and scaling issues starting at around 1 million records.
You know, for modern databases the one-million mark is where the data stops being "totally ridiculously small" and you can no longer ignore actually knowing what you are doing. Below one million is "tiny". I have an 800-million-row table and run a LOT of SQL through it, with no problem at all.
First, is Cassandra a good way to go for ASP.NET programmers?
I would sooner suggest a basic book about SQL, reading the documentation, and POSSIBLY throwing some hardware at the problem. As in: having totally bad hardware will kill any data management system.
If you are using Cassandra from your .NET application, take a look at Aquiles. I developed it based on my company's needs. If you find it useful or need any help, let me know.
You can't really speak of Cassandra documentation; there's a myriad of partial tutorials on the web.
You may want to set up Linux in a virtual machine, because the Windows build process is quite challenging, to say the least. (http://www.virtualbox.org, http://www.ubuntu.com)
Here's the howto:
http://www.ridgway.co.za/archive/2009/11/06/net-developers-guide-to-getting-started-with-cassandra.aspx
Note that the Cassandra SVN URL and the code sample have changed since this tutorial was written.
Here's another C# client:
http://github.com/mattvv/hectorsharp
And here is some sample code:
http://www.copypastecode.com/26752/
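Since that linked sample may have moved, here is roughly what a minimal Thrift-based insert and read looked like in the Cassandra 0.6 era (a sketch only; the class and property names come from the Thrift-generated C# bindings and may differ in your version, and Keyspace1/Standard1 are the default sample schema):

    using System;
    using System.Text;
    using Apache.Cassandra;      // Thrift-generated Cassandra bindings
    using Thrift.Protocol;
    using Thrift.Transport;

    class CassandraHello
    {
        static void Main()
        {
            // Connect to a local node on the default Thrift port.
            TTransport transport = new TSocket("localhost", 9160);
            TProtocol protocol = new TBinaryProtocol(transport);
            Cassandra.Client client = new Cassandra.Client(protocol);
            transport.Open();
            try
            {
                // Write one column into the default sample schema...
                ColumnPath path = new ColumnPath();
                path.Column_family = "Standard1";
                path.Column = Encoding.UTF8.GetBytes("name");

                client.insert("Keyspace1", "row1", path,
                              Encoding.UTF8.GetBytes("value"),
                              DateTime.UtcNow.Ticks, ConsistencyLevel.ONE);

                // ...and read it back.
                ColumnOrSuperColumn result =
                    client.get("Keyspace1", "row1", path, ConsistencyLevel.ONE);
                Console.WriteLine(Encoding.UTF8.GetString(result.Column.Value));
            }
            finally
            {
                transport.Close();
            }
        }
    }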
Note that you need to download the latest Java Development Kit (JDK) from Sun for Linux.
It's not in the repositories of Ubuntu 10.04.
Then you need to type
export JAVA_HOME="/path/to/jdk"
in order for Cassandra to find your Java installation.
You might also want to take a look at:
http://en.wikipedia.org/wiki/NoSQL
The taxonomy section in particular is interesting.
Make sure Cassandra is the right type of NoSQL solution for your problem, e.g. use Neo4J if your problem actually is a graph problem.
Also, you need to make sure your NoSQL solution is ACID-compliant.
For example, Neo4J is the only ACID-compliant NoSQL graph engine.
Edit: Here are some jump-start guides for Windows, without compiling:
http://coderjournal.com/2010/03/cassandra-jump-start-for-the-windows-developer/
http://www.ronaldwidha.net/2010/06/23/running-cassandra-on-windows-first-attempt/
http://www.yafla.com/dforbes/Getting_Started_with_Apache_Cassandra_a_NoSQL_frontrunner_on_Windows/
Instead of Cassandra you might take a look at RavenDB. Supposedly it is a document store built with and for .NET. It has LINQ integration and is (again, supposedly) very fast.
As with any new technology, check whether it helps with your specific case, and whether it is proven technology (do they have mainstream clients using it?).
Before you go down this route, see if you can't optimize your current solution first. Check whether your queries are fast, whether the indexes are set up correctly, and whether you can remove load by adding caching.
Last but not least, if adding some processors to your SQL machine would fix the issues, that is typically a much cheaper solution.
If you want to do something new, then instead of going for noSQL, you might want to consider trying a database cluster.
The idea is that when two machines each search half of the original database at the same time, you halve the search time without totally redesigning your existing database.

How do you profile a production ASP.NET application?

We have some performance problems with one of our applications. I thought about using something like dotTrace to find out where the problems are, but dotTrace would probably degrade performance even more.
What's the best way to profile an application in a production environment without affecting performance too much?
The general answer is "don't do it".
Other than that, you can gain a lot by using performance counters. If the built-in counters don't help, you can create your own.
Among other things, the performance counters may give you an idea of how to reproduce the performance problems through load testing.
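As a concrete example of creating your own, here's a minimal sketch of defining and updating a custom counter (the "MyApp" category and "Orders/sec" counter names are made up for illustration):

    using System.Diagnostics;

    class CounterSetup
    {
        // Creating a category requires admin rights; it is normally done
        // once at install time, not on every application start.
        public static void EnsureCounters()
        {
            if (!PerformanceCounterCategory.Exists("MyApp"))
            {
                CounterCreationDataCollection counters =
                    new CounterCreationDataCollection();
                counters.Add(new CounterCreationData(
                    "Orders/sec",
                    "Orders processed per second",
                    PerformanceCounterType.RateOfCountsPerSecond32));
                PerformanceCounterCategory.Create(
                    "MyApp", "Application counters",
                    PerformanceCounterCategoryType.SingleInstance, counters);
            }
        }
    }

    class OrderProcessor
    {
        // false = writable counter instance (the default is read-only).
        private static readonly PerformanceCounter OrdersPerSec =
            new PerformanceCounter("MyApp", "Orders/sec", false);

        public void Process()
        {
            // ... actual work ...
            OrdersPerSec.Increment();
        }
    }

You can then watch the counter in Performance Monitor alongside the built-in ASP.NET and CLR counters.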
The next idea is to narrow down the area you're interested in. There's no sense impacting performance for the entire application if it turns out to be your web service access that's slow.
Next, be sure to have instrumented your application, preferably via configuration. The Enterprise Library Logging Application Block is great for that, as it allows you to add logging to your application but ship with it configured off. Then you can configure what kind of information to log and where to log it.
This gives you choices about how expensive the logging should be, from logging to the event log to logging to an XML file, and you can decide all of this at runtime.
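For example, with the Logging Application Block the call sites can stay in the code permanently while configuration decides whether and where anything gets written (a rough sketch; the class and category names are illustrative):

    using Microsoft.Practices.EnterpriseLibrary.Logging;

    public class CheckoutService
    {
        public void PlaceOrder(int orderId)
        {
            // Cheap no-op when logging is switched off in configuration;
            // the "Diagnostics" category is a name that configuration maps
            // to a sink (event log, flat file, and so on).
            if (Logger.IsLoggingEnabled())
            {
                Logger.Write("Placing order " + orderId, "Diagnostics");
            }

            // ... actual work ...
        }
    }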
Finally, you're not going to be able to use dotTrace or anything else that requires restarting IIS and adding code to your running application. Not in production. The ideas above are for the purpose of not needing to do so.
Profiling memory or CPU?
Memory: the best way would be to create a memory dump of the w3wp process (launch Task Manager, right-click the process, then "Create dump"), then copy the dump to your local machine and analyze it with WinDbg, looking at which classes consume the most memory. There are lots of questions/answers here on Stack Overflow on how to do that (how to use WinDbg and analyze the .NET heap).
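If you haven't used WinDbg before, the usual SOS workflow looks roughly like this (on .NET 2.0/3.5 the module to load by is mscorwks rather than clr):

    .loadby sos clr        (load the SOS debugging extension)
    !dumpheap -stat        (summarize the managed heap by type, count and size)
    !dumpheap -mt <MT>     (list instances of a suspect type by its method table)
    !gcroot <address>      (show the roots keeping a given instance alive)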
CPU: we use a short command-line profiler by Sam Saffron (woohoo, one of the creators of Stack Overflow!). His project is abandoned, but we forked it and maintain it here: https://github.com/jitbit/cpu-analyzer. Everyone's welcome to contribute. It attaches to your threads using Microsoft's DbgManager and finds the call stacks that take the longest time to execute.
Did you load-test the application with a number of sessions anywhere near the actual load of the production environment?
The first thing that comes to mind is that your app is not scaling well under load, or that your DB is not scaling well with an increase in size (causing problems even with a very limited number of concurrent sessions), but it could be anything really.
My suggestion is to replicate the production environment and run proper load testing, then look at the data; it'll give you some clues.
You don't want to play games with your production environment, but if you don't have it already, you could use logging to keep track of the sequence and duration of key events and take it from there.
You could use ANTS Profiler:
http://www.red-gate.com/products/ants_performance_profiler/index2.htm
They claim that "the overhead was hardly noticeable".
There is a 14 day free trial so you could give it a try.
Edit: I agree with John's comment; it will disrupt, and require some downtime, to get it started/stopped. Best to use it in a test environment to identify the bottlenecks.
You can use ANTS Profiler as well as the system's performance counters. They will help you determine what the problem is.
Here are some details about performance counters:
http://msdn.microsoft.com/en-us/library/fxk122b4.aspx
http://msdn.microsoft.com/en-us/library/ms979204.aspx
http://www.codeproject.com/KB/dotnet/perfcounter.aspx
I would recommend taking several memory dumps of the process in production, looking at all the stack traces, and seeing if you find a pattern.

What are you using for Distributed Caching in web farms running ASP.NET?

I am curious as to what others are using in this situation. I know a couple of the options that are out there, like a memcached port or ScaleOut Software. The memcached ports don't seem to be actively worked on (correct me if I'm wrong). ScaleOut Software is too expensive for me (I don't doubt it is worth it). This is not to say that I don't want to hear about people using memcached or ScaleOut Software; I'm just stating what I "know" at this point.
So my question is basically this: for those of you ACTIVELY using distributed caching, what are you using, are you happy with it, and what should I look out for?
I am moving to two servers very soon...both will be at the same location. I use caching fairly heavily (but carefully) to reduce the load on my database server.
Edit: I downloaded ScaleOut Software's solution. I've coded against it and it seems to work really well. I just have to decide if my wallet will part with the cash for it. :) Anyone have experiences, good or bad, with ScaleOut Software?
Edit again: It's been a little while since I asked this. Any more thoughts on it? We ended up buying the solution from ScaleOut Software and have been happy with it, but I'm curious what others are doing.
Microsoft has a product in the works code-named Velocity. It's still a CTP and moving slowly, but it looks like it will be pretty good. We'll be beating it up in the near future to see how it handles what we want it to do (> 2 million reads/writes per hour). Will post back with results.
There is a 100% native .NET, well documented, open source (LGPL) project called Shared Cache. It looks like it has not yet been mentioned on SO, but it's promising and should be able to do what most people expect from a distributed cache. It even supports different strategies like distributed or replicated caching.
I will update this post with more details as soon as I have had a chance to try it on a real project.
We're currently using an incredibly simple cache that I wrote in a couple of hours, based on re-hosting the ASP.NET cache in a Windows Service (more info and source code here). I won't pretend it's anywhere near as optimised as something like Memcached but we were just looking for something simple and free until Velocity came along, and it's held up extremely well even under fairly heavy load.
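For anyone curious about the idea rather than the linked source, the trick is that the ASP.NET Cache class works outside IIS via HttpRuntime.Cache, so a service can host it behind a thin wrapper (a minimal sketch under that assumption; the remoting plumbing that exposes it to the farm is omitted):

    using System;
    using System.Web;
    using System.Web.Caching;

    // MarshalByRefObject hints at the .NET Remoting exposure described
    // in the linked article; that wiring is left out here.
    public class RehostedCache : MarshalByRefObject
    {
        private readonly Cache _cache = HttpRuntime.Cache;

        public void Set(string key, object value, TimeSpan ttl)
        {
            _cache.Insert(key, value, null,
                          DateTime.UtcNow.Add(ttl),
                          Cache.NoSlidingExpiration);
        }

        public object Get(string key)
        {
            return _cache.Get(key);
        }

        public void Remove(string key)
        {
            _cache.Remove(key);
        }
    }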
It comes down to our personal preference for core components - i.e. ones that affect whether the site is available or not - that they are either (a) supported by a vendor with a history of rapid and high quality support, or (b) written by us so that if something goes wrong we can fix it quickly. Open source is all well and good, and indeed we do use some OSS, but if your site is offline then unfortunately newsgroups et al don't have a 1 hour SLA, and just because it's OSS doesn't mean you have the necessary understanding or ability to fix it yourself.
We are using the memcached port for Windows and we are very pleased with it. The enyim.com memcached client API is great and easy to work with. It's also open source, which is a big advantage, if you ask me.
We are now using this setup in a production web-app and it has helped a lot in improving its performance.
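For reference, basic usage of the enyim client is about this simple (a sketch; the server list normally comes from the enyim.com/memcached section of app.config or web.config rather than code):

    using System;
    using Enyim.Caching;
    using Enyim.Caching.Memcached;

    class MemcachedExample
    {
        static void Main()
        {
            // The parameterless constructor reads the configured server list.
            using (MemcachedClient client = new MemcachedClient())
            {
                client.Store(StoreMode.Set, "user:42:name", "Alice");

                string name = client.Get<string>("user:42:name");
                Console.WriteLine(name ?? "(cache miss)");
            }
        }
    }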
There's a great .NET wrapper/port found here on CodePlex. Awesomesauce!
We use memcached with the enyim library in a production environment (www.funda.nl). It works fine and we're very pleased with it, but we did notice a substantial rise in CPU usage on the clients, presumably due to the serialization/deserialization going on. We do around 1000 reads per second.
One tried and tested product, used by hundreds of customers worldwide, is NCache. It's a feature-rich product that lets you store session state in a redundant and highly available manner, lets you share data within the enterprise, provides bridging for WAN communication (essentially acting as a data fabric), and lets you build an elastic caching tier so that when your application scales, you can add servers to the cache and actually boost performance further.
