How much is too much ASP.NET session size?

I have an application on the corporate intranet that uses session state to store values between the steps of a wizard (a string of pages/user controls). I'm measuring the size of the session as I navigate around to make sure things don't get out of hand.
At worst, I can get the size up to 900 bytes.
Is this too much?
I know it all depends on other factors such as the number of users and the amount of memory in the server. So let's set some parameters around these... The server has 1 GB of RAM allocated to ASP.NET (the rest is allocated to the OS and other items). I have at most 10 users on the system concurrently.
Thanks for the help

Personally I'd say 900 bytes is nothing. Call it 1 kB, which means with 1 GB of RAM you should be able to hold roughly a million of those sessions (not counting anything else).
Personally I think you shouldn't look at the raw numbers. What matters is whether the stuff in the session really belongs there: put information in the session that's useful while the user is browsing your website and that can be discarded if the user leaves.
As long as you don't store big data objects in your session, you should be fine.
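If you want to sanity-check the number yourself, here's a rough sketch (class and method names are just illustrative) that serializes whatever is currently in the session to get an approximate byte count. It assumes the objects you store are [Serializable], and the serialized size is only an approximation of the in-memory cost of an InProc session:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Web.SessionState;

public static class SessionSizeHelper
{
    // Serializes every value currently in the session and adds up the bytes.
    // BinaryFormatter output is only a rough proxy for the true in-memory size.
    public static long EstimateSessionBytes(HttpSessionState session)
    {
        var formatter = new BinaryFormatter();
        long total = 0;
        foreach (string key in session.Keys)
        {
            object value = session[key];
            if (value == null)
                continue;
            using (var stream = new MemoryStream())
            {
                formatter.Serialize(stream, value);   // values must be [Serializable]
                total += stream.Length;
            }
        }
        return total;
    }
}

Calling EstimateSessionBytes(Session) at the end of each wizard step gives you a trend you can log as you click through the wizard.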

Too much is when you run out of memory attempting to serve whatever user load you want to be able to serve on each machine. We can't tell you how much is too much, but we can tell you that 900 bytes isn't very much at all.

Given your user load, I think you should be alright...
Don't forget to handle the situation where your session drops out halfway through for some reason.

900 bytes per session, total? No, that's hardly a problem with only 10 users. Even 900 KB per user wouldn't be much of an issue, as you would only be talking about ~10 MB of session state.
900 KB per page per user would be something you need to worry about.

Related

Out of Memory with large data in Codename One

My Codename One application downloads around 16000 records of data (approx 10 fields in each record).
On my Android phone (OS 6.0, 2 GB RAM) it's able to load 8000 to 9000 records but then shows an out-of-memory error.
From the trace, it looks like it ran out of the heap memory allocated to the app.
Any suggestions on the ideal way to handle that amount of data, please?
Here is the log file
The amount of RAM on the phone doesn't mean much. The OS takes about half and then divides the rest among the various apps running in parallel. You would typically have much less available; see "What is the maximum amount of RAM an app can use?"
You need to review your code and check what is eating up memory. 16k records of 1 KB each would be 16 MB, which probably shouldn't crash an app, so the question is where the memory is actually going. I would suggest reading the performance section of the developer guide to figure out memory usage.
This might not apply to your situation, but would it be possible to only download x records at a time? Then, when the user takes some action (scrolls, hits next page, etc.) it loads the next batch? Codename One has a great endless scroller implementation. See here for an example - https://www.codenameone.com/blog/property-cross-revisited.html

Is it fine to use ViewState when there are plenty of variables to store

I know the differences between SessionState and ViewState:
SessionState persists for the whole session, whereas ViewState is scoped to a single page.
SessionState stays on the server, but ViewState travels back and forth between client and server.
Now taking the above into account, if I have plenty of variables (which means that much more bandwidth) that I need to keep across postbacks, which one should I pick? I'm stuck in the middle because:
I know that I'm going to use those variables only on one page, and ViewState is appropriate for that case.
On the other hand, it seems it's going to take a lot of bandwidth, as there are quite a few variables.
Unless you are speaking of a few thousand variables, there is nothing to worry about.
Most asp.net controls store a lot of their state variables in the ViewState.
You can easily use a page performance tool to see the increase in your page size after you put the variables in ViewState. In most cases it is not something to worry about.
Variables usually do not take much space (a few KB or even less). Putting data in session unnecessarily can degrade server performance: as the number of clients increases, the load on the server machine is multiplied. ViewState, on the other hand, does not take up space on the server, which leaves memory free for other useful operations.
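As a concrete illustration of the trade-off discussed above, here is a minimal Web Forms sketch (the class and property names are made up for the example) showing the usual pattern for a page-scoped value kept in ViewState next to a user-scoped value kept in Session:

public partial class OrderWizard : System.Web.UI.Page
{
    // Page-scoped: round-trips to the client in the __VIEWSTATE hidden field,
    // costing bandwidth but no server memory.
    private int WizardStep
    {
        get { return ViewState["WizardStep"] == null ? 0 : (int)ViewState["WizardStep"]; }
        set { ViewState["WizardStep"] = value; }
    }

    // User-scoped: stays on the server for the lifetime of the session,
    // costing server memory but no bandwidth.
    private string CustomerId
    {
        get { return (string)Session["CustomerId"]; }
        set { Session["CustomerId"] = value; }
    }
}

Viewing the page source and checking the size of the __VIEWSTATE field is the quickest way to see how much the extra variables actually cost per postback.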

How can I find the average number of concurrent users for IIS to simulate during a load/performance test?

I'm using JMeter for load testing. I'm going through an exercise of finding the max number of concurrent threads (users) that our webserver can handle by simply increasing the # of threads in my distributed JMeter test case and firing off the test.
Then it struck me that while the MAX number may be useful, the REAL number of users my website actually handles on average is the number I need to make the test fruitful.
Here are a few pieces of information about our setup:
This is a mixed .NET/Classic ASP site. Upon login, a session (with a timeout) is created in both for the user.
Each session times out after 60 minutes.
Is there a way using this information, IIS logs, performance counters, and/or some calculation that will help me determine the average # of concurrent users we handle on our production site?
You might use logparser with the QUANTIZE function to determine the peak number of requests over a suitable interval.
For a 10 second window, it would be something like:
logparser "select quantize(to_localtime(to_timestamp(date,time)), 10) as Qnt,
count(*) as Hits from yourLogFile.log group by Qnt order by Hits desc"
The reported counts won't be exactly the same as threads or users, but they should help get you pointed in the right direction.
The best way to do exact counts is probably with performance counters, but I'm not sure any of the standard ones works like you would want -- you'd probably need to create a custom counter.
I can see a couple options here.
Use Performance Monitor to get the current numbers, or have it log all day and get an average. ASP.NET has a Requests Current counter. According to this page, Classic ASP also has a Requests Current counter, but I've never used it myself.
Run the IIS logs through Log Parser to get the total number of requests and how long each took. I'm thinking that if you know how many requests come in each hour and how long each took, you can get an average of how many were running concurrently.
Also, keep in mind that concurrent users isn't quite the same as concurrent threads on the server. For one, multiple threads will be active per user while content like images is being downloaded. And after that the user will be on the page for a few minutes while the server is idle.
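To make the "requests per hour and how long each took" idea concrete, here is a small back-of-the-envelope calculation (Little's Law: concurrency is roughly arrival rate times average duration). The numbers are placeholders; substitute the totals you pull out of your IIS logs:

using System;

class ConcurrencyEstimate
{
    static void Main()
    {
        double requestsInWindow = 36000;     // requests counted in the window (from the logs)
        double windowSeconds = 3600;         // a one-hour window
        double avgDurationSeconds = 0.250;   // average time-taken from the logs

        double arrivalRate = requestsInWindow / windowSeconds;    // requests per second
        double avgConcurrent = arrivalRate * avgDurationSeconds;  // ~2.5 in this example

        Console.WriteLine("Roughly {0:F1} requests in flight on average", avgConcurrent);
    }
}

As noted above, this counts concurrent requests rather than concurrent users, so treat it as a lower bound on the number of active sessions.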
My suggestion is that you define the stop conditions first, such as
Maximum CPU utilization
Maximum memory usage
Maximum response time for requests
Other key parameters you like
Choosing the parameters is really subjective, and I personally cannot offer much experience on that.
Secondly, see whether performance counters or IIS logs can be mapped to those parameters, then set up the proper mappings.
Thirdly, start testing by simulating N users (threads) and see whether the stop conditions are hit. If not, go to a higher number; if they are, use a smaller number. Iterating this way, you will converge on a rough number.
However, that never means your website in the real world can handle that many users. No simulation can cover all the edge cases.

ASP.NET guaranteed response time

Does anybody have any hints as to how to approach writing an ASP.net app that needs to have a guaranteed response time?
When under high load that would normally cause us to exceed our desired response time, we want to throw out an appropriate number of requests so that the rest of the requests can return within the max response time. Throwing out requests based on exceeding a fixed requests-per-second threshold is not viable, as other external factors that affect response time cause the max RPS we can safely support to drift and fluctuate fairly drastically over time.
It's OK if a few requests take a little too long, but we'd like the great majority of them to meet the required response time window. We want to throw out the minimal, or near-minimal, number of requests so that we can process the rest within the allotted response time.
It should account for ASP.NET queuing time, and ideally the network request time, but that is less important.
We'd also love to do adaptive work, like making a DB call if we have plenty of time, but doing some computation instead if we're shorter on time.
Thanks!
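One way to prototype the kind of load shedding described above is an IHttpModule that tracks a rolling average of recent response times and starts returning 503s while the average is over budget. This is only a sketch under assumed numbers (the 500 ms budget, the 200-sample window, and the class name are all illustrative), not a production-ready throttle:

using System;
using System.Collections.Concurrent;
using System.Web;

public class LoadSheddingModule : IHttpModule
{
    private static readonly ConcurrentQueue<double> Samples = new ConcurrentQueue<double>();
    private const int WindowSize = 200;     // keep the last N response times
    private const double BudgetMs = 500;    // target response time

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            if (CurrentAverageMs() > BudgetMs)
            {
                // Over budget: shed this request so the others stay fast.
                ctx.Response.StatusCode = 503;
                ctx.ApplicationInstance.CompleteRequest();
                return;
            }
            ctx.Items["reqStart"] = DateTime.UtcNow;
        };

        app.EndRequest += (sender, e) =>
        {
            var ctx = ((HttpApplication)sender).Context;
            if (ctx.Items["reqStart"] is DateTime)
            {
                double ms = (DateTime.UtcNow - (DateTime)ctx.Items["reqStart"]).TotalMilliseconds;
                Samples.Enqueue(ms);
                double discarded;
                while (Samples.Count > WindowSize)
                    Samples.TryDequeue(out discarded);
            }
        };
    }

    private static double CurrentAverageMs()
    {
        double sum = 0;
        int count = 0;
        foreach (double ms in Samples) { sum += ms; count++; }
        return count == 0 ? 0 : sum / count;
    }

    public void Dispose() { }
}

The same rolling average could drive the adaptive-work idea: check CurrentAverageMs() before deciding whether to make the DB call or fall back to a cheaper computation.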
SLAs with a guaranteed response time require a bit of work.
First off you need to spend a lot of time profiling your application. You want to understand exactly how it behaves under various load scenarios: light, medium, heavy, crushing. When doing this profiling, it is critical that it's done on the exact same hardware/software configuration that production uses. Results from one set of hardware have no bearing on results from an even slightly different set of hardware. This isn't just about the servers either; I'm talking routers, switches, cable lengths, hard drives (make/model), everything, even BIOS revisions on the machines, RAID controllers and any other device in the loop.
While profiling, make sure the workload mixes represent an actual slice of what you are going to see. Obviously certain load mixes will execute faster than others.
I'm not entirely sure what you mean by "throw out an appropriate number of requests". That sounds like you want to drop those requests... which sounds wrong on a number of levels. Doing this usually kills an SLA as being an "outage".
Next, you are going to have to actively monitor your servers for load. If load levels get within a certain percentage of your max then you need to add more hardware to increase capacity.
Another thing: monitoring response times internally is only part of it. You'll need to monitor them from various external locations as well, depending on where your clients are.
And that's just about your application. There are other forces at work such as your connection to the Internet. You will need multiple providers with active failover in case one goes down... Or, if possible, go with a solid cloud provider.
Yes, in the last mvcConf one of the speakers compared the performance of various view engines for ASP.NET MVC. I think it was Steven Smith's presentation that did the comparison, but I'm not 100% sure.
You have to keep in mind, however, that ASP.NET itself will really only play a very minor role in the performance of your app; the DB is likely to be your biggest bottleneck.
Hope the video helps.

Questions about maxJsonLength in ASP.NET

Recently, I ran into a problem with my application: the size of the JSON string returned from the server was exceeding the default maxJsonLength. I've done some research and implemented some fixes including a variation of paging. Everything looks great at the moment. However, I still have some questions unanswered.
First of all, the majority of the sources point to this article:
http://geekswithblogs.net/frankw/archive/2008/08/05/how-to-configure-maxjsonlength-in-asp.net-ajax-applications.aspx
1. Why 2,097,152 (2 MB)? 2 MB is way too much data to be loaded for a web page. (Unless the user is downloading something, but that's a different story.) Even 1 MB is too much.
2. Then, the author goes on with an example of a maxJsonLength of 500,000. Why this number? Is it just an example of how to set the property? Some sources state that 500,000 is the limit. Well, it's not, because I tested my application with 2,097,152 (2 MB, roughly 4 times 500,000) and it worked.
3. Some other sources state that 4 MB is the limit... So, what is the limit? Is there a limit? Does it have something to do with the limit on the size of the response from the server?
4. Finally, I'd like a strong suggestion on the length of JSON string to receive from the server. Not the number maxJsonLength should be set to, but the actual length of the JSON string, the kind of size to strive for.
Thank you in advance.
There is no hard and fast rule here. Your Json length is going to depend on your application and what information you are returning to the client.
If you really want a "rule of thumb", it should be as SMALL as possible to communicate the data that you need.
For max values, the true limitation is again most likely going to depend on browser requirements, but I personally would never go with more than 2 MB for a JSON message, simply due to what it would take to send that down.
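For what it's worth, the 2,097,152 default the question mentions is the default of JavaScriptSerializer.MaxJsonLength, and it is measured in characters, which is also where the "4 MB" figure in some sources comes from (about 2 million UTF-16 characters is roughly 4 MB of string data). If you build the JSON yourself rather than relying on the ScriptServices defaults, you can simply set that property; the 4 MB value below is only an example, not a recommendation:

using System.Web.Script.Serialization;

public static class JsonHelper
{
    public static string Serialize(object data)
    {
        var serializer = new JavaScriptSerializer
        {
            MaxJsonLength = 4 * 1024 * 1024   // default is 2,097,152 characters
        };
        return serializer.Serialize(data);
    }
}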
I understand that the total limit is determined by the lesser of the maxJsonLength that you have mentioned and the HttpRuntimeSection.MaxRequestLength. I am currently testing this and I will get back to you.
Of course, the big issue here is that it is seldom a good idea to return such large amounts of data. Whenever I have a response that starts to exceed about 100 KB, I take another look at my overall design and find ways to serve out smaller chunks as they are needed. Even 100 KB is high for most pure-data scenarios, by which I mean textual data, not images or scripts.
