1000 users a day for 10 minutes equals what concurrent load? - asp.net

Is there a formula that will tell me the max/average # of concurrent users I would expect to have with a population of 1000 users a day using an app for 10 minutes?

1000 users X 10 minutes = 10,000 user minutes
10,000 user minutes / 1440 minutes in a day = 6.944 average # of concurrent users
If you are looking for better estimates of concurrent users, I would suggest putting Google Analytics on your site. It would give you an accurate reading of highs, lows, and averages for your site.

In the worst-case scenario, all 1000 users use the app at the same time, so the max # of concurrent users is 1000.
1000 users * 10 minutes = 10000 total minutes
One day has 24 hours * 60 minutes = 1440 minutes.
Assuming a uniform distribution, you would expect 10000 / 1440 = 6.9 users on average using your app concurrently. However, a uniform distribution is not a valid assumption, since you probably won't see many users in the middle of the night.

Well, assuming a steady arrival pattern, with people visiting the site in an even distribution, you would have:
( Users * Visit Length in Minutes) / (Minutes in a Day)
or
(1000 * 10 ) / 1440
Which would be about 7 concurrent users
Of course, you would never have an even distribution like that. You can take a stab at the anticipated pattern and distribute the load based on that pattern. Your best bet would be to monitor the traffic for a while, with a decent sampling of user traffic.
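For what it's worth, here is the same uniform-arrival estimate as a tiny Python sketch (the 1000 users and 10 minutes are taken from the question; the rest is just the arithmetic above):

users_per_day = 1000        # daily visitors (from the question)
visit_minutes = 10          # average session length in minutes
minutes_per_day = 24 * 60   # 1440

# Average concurrency if visits were spread perfectly evenly over the day
avg_concurrent = (users_per_day * visit_minutes) / minutes_per_day
print(round(avg_concurrent, 1))   # ~6.9 concurrent users on average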

That depends quite a bit on their usage pattern and no generic formula can cover it.
As an extreme example, if this is a timecard application, you would have a large peak at the start and stop of each work day, with scattered access between start and stop as people work on different projects, and almost no access outside of working hours.
Can you describe the usage pattern you expect?

Not accurately.
Will the usage be spread evenly over the day, or are there events that will cause everyone to use their 10 minutes at the same time?
For example, let's compare a general-purpose web site with a time card entry system.
A general-purpose web site will have ebbs and flows throughout the day, with clusters of time when you have lots of users (during work hours...).
The time card entry system might have all 1000 people hitting the system within a 15 minute span of time twice a day.
Simple math can show you an average, but people don't generally behave "on average"...

As the other answers have said, it depends on the distribution of when people "arrive." If it's equally likely for someone to arrive at 3:23 AM as at 9:01 AM, max_concurrent is low; if everyone checks in between 8:55 AM and 9:30 AM, max_concurrent is high (and if response time slows under load, so that the 'average' time on the site goes up significantly when there are lots of people on the site, the peak gets even worse).
A model is only as good as its inputs, but having said that, if you have a good sense of usage patterns, a Monte Carlo model might be a good idea. (If you have access to a person with a background in statistics or probability, they can do the math just based on the distribution parameters, but a Monte Carlo model is easier for most people to create and manipulate.)
In comments, you say that it's a "self-serve reference app similar to Wikipedia," but your relatively low usage means that you cannot rely on the power of large numbers to "dampen out" your arrival curves over 24 hours.
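If you do go the Monte Carlo route, a minimal sketch might look like the Python below. The arrival weighting (90% of visits between 08:00 and 18:00) is purely an assumption for illustration; swap in whatever pattern you actually expect.

import random

MINUTES_PER_DAY = 24 * 60
USERS = 1000
SESSION_MINUTES = 10

def arrival_minute():
    # Assumed pattern: 90% of arrivals land during business hours
    if random.random() < 0.9:
        return random.randint(8 * 60, 18 * 60 - 1)
    return random.randint(0, MINUTES_PER_DAY - 1)

def simulate_day():
    load = [0] * MINUTES_PER_DAY
    for _ in range(USERS):
        start = arrival_minute()
        for minute in range(start, min(start + SESSION_MINUTES, MINUTES_PER_DAY)):
            load[minute] += 1
    return load

peaks = [max(simulate_day()) for _ in range(100)]
print("uniform average:", (USERS * SESSION_MINUTES) / MINUTES_PER_DAY)
print("typical simulated peak:", sum(peaks) / len(peaks))

With those made-up weights, the simulated peak comes out well above the ~7-user uniform average, which is exactly the point the answers above are making.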

Related

AWS DynamoDB metrics don't work correctly

I am using DynamoDB and am curious about this:
When I view the table metrics with a period of 10 or 30 seconds, they seem to exceed the provisioned value.
Then, when I view the table metrics with a period of 1 minute, they don't reach the provisioned value at all.
I want to know why this happens.
This isn't a DynamoDB thing. It's a CloudWatch rendering thing.
Consumed capacity has a natural period of 1 minute. When you set your graph period to 1 minute everything is correct. Consumed is below provisioned.
If you change the graph period to 30 seconds, your consumed view adjusts and you see consumption that's double what's real. The math behind the scenes divides by the wrong period. Graph period of 10 seconds, you get 6x reality. Graph period of 5 minutes, you get 1/5th reality.
The Provisioned line isn't based on an equation involving Period so it's not affected by the chosen period.
Maybe someone can comment on why the user is allowed to control the Period of the view but it just messes things up when it doesn't match the natural period of the data.
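A quick back-of-the-envelope sketch of that rendering math (Python, with an assumed consumption figure) shows where the 6x, 2x, and 1/5th numbers come from:

# DynamoDB publishes ConsumedCapacity once per minute, so the metric's natural
# period is 60 seconds. The graph's per-second view divides by the graph
# period instead, which skews the displayed rate.
consumed_per_minute = 300              # assumed capacity units used in one minute
real_rate = consumed_per_minute / 60   # true consumption: 5 units/sec

for graph_period in (10, 30, 60, 300):             # graph period in seconds
    displayed_rate = consumed_per_minute / graph_period
    print(f"{graph_period:>3}s period -> {displayed_rate / real_rate:g}x reality")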

CPU memory access time

Does the average data and instruction access time of the CPU depend on the execution time of an instruction?
For example, if the miss ratio is 0.1, 50% of instructions need memory access, the L1 access time is 3 clock cycles, the miss penalty is 20 cycles, and instructions execute in 1 cycle, what is the average memory access time?
I assume you're talking about a CISC architecture where compute instructions can have memory references. If you have a sequence of ADDs that access memory, then memory requests will come more often than in a sequence of the same number of DIVs, because the DIVs take longer. This won't affect the time of the memory access -- only locality of reference will affect the average memory access time.
If you're talking about a RISC arch, then we have separate memory access instructions. If memory instructions have a miss rate of 10%, then the average access latency will be the L1 access time (3 cycles for hit or miss) plus the L1 miss penalty times the miss rate (0.1 * 20), totaling an average access time of 5 cycles.
If half of your instructions are memory instructions, then that would factor into clocks per instruction (CPI), which would depend on miss rate and also dependency stalls. CPI will also be affected by the extent to which memory access time can overlap computation, which would be the case in an out-of-order processor.
I can't answer your question a lot better because you're not being very specific. To do well in a computer architecture class, you will have to learn how to figure out how to compute average access times and CPI.
Well, I'll go ahead and answer your question, but then, please read my comments below to put things into a modern perspective:
Time = Cycles * (1/Clock_Speed) [ unit check: seconds = clocks * seconds/clocks ]
So, to get the exact time you'll need to know the clock speed of your machine; for now, my answer will be in terms of cycles.
Avg_mem_access_time_in_cycles = cache_hit_time + miss_rate*miss_penalty
= 3 + 0.1*20
= 5 cycles
Remember, here I'm assuming your miss rate of 0.1 means 10% of cache accesses miss the cache. If you mean 10% of instructions, then you need to double the per-access rate (because only 50% of instructions are memory ops, 0.1/0.5 = 0.2).
Now, if you want the average CPI (cycles per instr)
CPI = mem_instr% * Avg_mem_access_time + non_mem_instr% * Avg_instr_execution_time
= 0.5*5 + 0.5*1 = 3 cycles per instruction
Finally, if you want the average instr execution time, you need to multiply 3 by the reciprocal of the frequency (clock speed) of your machine.
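Putting those numbers into a small Python sketch (the 2 GHz clock at the end is only an assumed example, since the question doesn't give a clock speed):

# Same arithmetic as above, in cycles
l1_hit_time = 3          # cycles
miss_rate = 0.1          # 10% of cache accesses miss
miss_penalty = 20        # cycles
mem_instr_fraction = 0.5 # 50% of instructions access memory
alu_instr_cycles = 1     # non-memory instructions execute in 1 cycle

amat = l1_hit_time + miss_rate * miss_penalty   # 3 + 0.1*20 = 5 cycles
cpi = mem_instr_fraction * amat + (1 - mem_instr_fraction) * alu_instr_cycles
print(amat, cpi)         # 5 cycles, 3.0 cycles per instruction

clock_hz = 2e9           # assumed 2 GHz machine, purely for illustration
print(cpi / clock_hz)    # average instruction time in seconds (1.5e-09 here)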
Comments:
Comp. Arch classes basically teach you a very simplified model of what the hardware is doing. Current architectures are much, much more complex, and such a model (i.e., the equations above) is very unrealistic. For one thing, access time to the various levels of cache can be variable (depending on where the responding cache physically sits on a multi- or many-core CPU); access time to memory (which typically takes 100s of cycles) is also variable, depending on contention for resources (e.g., bandwidth), etc. Finally, in modern CPUs instructions typically execute in parallel (ILP), depending on the width of the processor pipeline. This means that adding up instruction execution latencies is basically wrong (unless your processor is a single-issue processor that only executes one instruction at a time and blocks other instructions on miss events such as cache misses and branch mispredicts...). However, for educational purposes and for "average" results, the equations are okay.
One more thing, if you have a multi-level cache hierarchy, then the miss_penalty of level 1 cache will be as follows:
L1$ miss penalty = L2 access time + L2_miss_rate*L2_miss_penalty
If you have an L3 cache, you do a similar thing for L2_miss_penalty, and so on.
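As a sketch of that multi-level case (all of the L2 numbers below are assumed, just to show the shape of the calculation):

# Effective L1 miss penalty when an L2 cache sits behind L1
l2_access_time = 12      # assumed cycles
l2_miss_rate = 0.05      # assumed: 5% of L2 accesses miss
l2_miss_penalty = 200    # assumed main-memory latency in cycles

l1_miss_penalty = l2_access_time + l2_miss_rate * l2_miss_penalty   # 22 cycles
amat = 3 + 0.1 * l1_miss_penalty                                    # 5.2 cycles
print(l1_miss_penalty, amat)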

Difference between load and spike testing

How is a load test different from a spike test, considering the scenarios below?
Load test: Using an automation tool (JMeter in my case), I create a load of 1000 virtual users loaded in 1 sec (ramp-up period).
Spike test: Using an automation tool (JMeter in my case), I create a continuous load of 400 virtual users loaded every 1 sec and a spike load of 600 virtual users loaded in 1 sec at a certain point in time.
When a spike load is induced, is it not the same as the load test described above?
So my point is: what is the need for a spike test if load tests can be carried out continuously under varied load conditions?
Test scenario:
Application tested: a website.
Automation tool: JMeter.
Internet speed used while testing: 3 Mbps.
Thanks in advance.
According to "Performance Testing Guidance for Web Applications", a "spike test is a type of performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time." So I think of an analogy with a geometric or arithmetic progression, because the volumes are repeatedly (and rapidly) increased. Also, this and other definitions pay attention to the short period of time.
Load testing is a more general term, without a specified test duration (short or long) or a specified pattern for increasing load volumes.
Load testing: it helps us to know how much load an application/system can bear at a point in time.
Ex: Say a normal man can drink a maximum of 3 liters of water at a time.
Spike testing: it helps us to know the behaviour of a system when a suddenly high amount of load is applied.
Ex: For spike testing, we try to find out whether a normal man can drink 4 liters or more at a time.
A spike test is a kind of load test, used to simulate bursty traffic patterns.
For example, you might want to support 1 million client requests an hour. That's an average of 277 requests/sec. However, that doesn't account for varying usage patterns, like a sudden burst of traffic followed by a lull period. A spike test would simulate these bursts, where the short-term request rate can be much higher or lower than the expected average.
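To make that concrete, here is a rough Python sketch comparing the flat average against an assumed spike shape (a 5-minute burst carrying half the hour's traffic; both the burst length and its share are made up for illustration):

total_requests = 1_000_000
seconds_per_hour = 3600

flat_rate = total_requests / seconds_per_hour   # ~278 req/sec on average

burst_seconds = 5 * 60      # assumed 5-minute spike
burst_share = 0.5           # assumed: half the hour's traffic arrives in the spike
burst_rate = (total_requests * burst_share) / burst_seconds
quiet_rate = (total_requests * (1 - burst_share)) / (seconds_per_hour - burst_seconds)

print(round(flat_rate), round(burst_rate), round(quiet_rate))
# ~278 req/sec on average, but ~1667 req/sec during the spike and ~152 req/sec the rest of the time

A server sized only for the flat average could still fall over during the burst, which is what a spike test is meant to expose.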

How can I find the average number of concurrent users for IIS to simulate during a load/performance test?

I'm using JMeter for load testing. I'm going through an exercise of finding the max number of concurrent threads (users) that our web server can handle by simply increasing the # of threads in my distributed JMeter test case and firing off the test.
Then it struck me that while the MAX number may be useful, the REAL number of users my website actually handles on average is the number I need to make the test fruitful.
Here are a few pieces of information about our setup:
This is a mixed .NET/Classic ASP site. Upon login, a session (with a timeout) is created in both for the user.
Each session times out after 60 minutes.
Is there a way using this information, IIS logs, performance counters, and/or some calculation that will help me determine the average # of concurrent users we handle on our production site?
You might use logparser with the QUANTIZE function to determine the peak number of requests over a suitable interval.
For a 10 second window, it would be something like:
logparser "select quantize(to_localtime(to_timestamp(date,time)), 10) as Qnt,
count(*) as Hits from yourLogFile.log group by Qnt order by Hits desc"
The reported counts won't be exactly the same as threads or users, but they should help get you pointed in the right direction.
The best way to do exact counts is probably with performance counters, but I'm not sure any of the standard ones works like you would want -- you'd probably need to create a custom counter.
I can see a couple of options here.
Use Performance Monitor to get the current numbers, or have it log all day and get an average. ASP.NET has a Requests Current counter. According to this page, Classic ASP also has a Requests current counter, but I've never used it myself.
Run the IIS logs through Log Parser to get the total number of requests and how long each took. I'm thinking that if you know how many requests come in each hour and how long each took, you can get an average of how many were running concurrently.
Also, keep in mind that concurrent users isn't quite the same as concurrent threads on the server. For one, multiple threads will be active per user while content like images is being downloaded. And after that the user will be on the page for a few minutes while the server is idle.
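As a sketch of the second idea (estimating average concurrency from request counts and durations, which is just Little's law), assuming you have already pulled (hour, time-taken) pairs out of the IIS logs; the sample values below are made up:

from collections import defaultdict

# (hour_of_day, time_taken_ms) pairs parsed from the IIS logs -- sample data only
requests = [
    (9, 250), (9, 480), (9, 120),
    (10, 900), (10, 310),
]

per_hour = defaultdict(lambda: [0, 0.0])   # hour -> [request count, total busy seconds]
for hour, time_taken_ms in requests:
    per_hour[hour][0] += 1
    per_hour[hour][1] += time_taken_ms / 1000.0

for hour, (count, busy_seconds) in sorted(per_hour.items()):
    # Little's law: average concurrent requests = total busy time / window length
    avg_concurrent = busy_seconds / 3600.0
    print(f"{hour:02d}:00  {count} requests, ~{avg_concurrent:.4f} avg concurrent requests")

As the previous paragraph points out, concurrent requests are not the same thing as concurrent users, so treat a number like this as a rough lower bound.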
My suggestion is that you define the stop conditions first, such as
Maximum CPU utilization
Maximum memory usage
Maximum response time for requests
Other key parameters you like
Choosing the parameters is really subjective, and I personally cannot provide much experience with that.
Secondly, check whether performance counters or IIS logs can map to those parameters, then set up the proper mappings.
Thirdly, start testing by simulating N users (threads) and see whether the stop conditions are hit. If not, go to a higher number; if they are, use a smaller number. Iterating like this (as sketched below), you will find a rough number.
However, that never means your web site in the real world can take that many users. No simulation can cover all the edge cases.
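The third step is essentially a search over user counts. A hypothetical sketch of that loop in Python (run_load_test is a placeholder you would implement yourself, for example by driving JMeter and then checking your stop conditions):

def find_rough_capacity(run_load_test, low=1, high=10_000):
    # Binary search between a user count that passes and one that trips a stop
    # condition; assumes "low" users pass and "high" users fail.
    # run_load_test(n) should run the test with n users and return True if any
    # stop condition (CPU, memory, response time, ...) was hit.
    while low + 1 < high:
        mid = (low + high) // 2
        if run_load_test(mid):
            high = mid          # too many users -- search lower
        else:
            low = mid           # still within limits -- search higher
    return low                  # rough maximum user count that stayed within limits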

How to interpret values of the ASP.NET Requests/sec performance counter?

If I monitor the ASP.NET Requests/sec performance counter every N seconds, how should I interpret the values? Is it the number of requests processed during the sample interval divided by N, or is it the current requests/sec regardless of the sample interval?
It depends on your N-seconds value, which incidentally is also why the performance counter will always read 0 on its first NextValue() call after being initialized. It makes all of its calculations relative to the last point at which you called NextValue().
You may also want to beware of calling your counter's NextValue() function at intervals of less than a second, as that can tend to produce some very outlandish outlier results. For my uses, calling the function every 5 seconds provides a good balance between up-to-date information and a smooth average.
