CPU stress test on Azure Data Explorer

Recently I have been facing query slowness on our ADX cluster. I need to test query performance under roughly 70% CPU utilization. I am thinking of triggering some high-CPU queries from a Logic App or ADF to achieve this. Is that the right approach, or are there better approaches?

You can also use the client SDK to trigger the queries; this would likely be simpler.
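If you go the SDK route, a small script is usually enough to keep the cluster busy. Below is a minimal sketch using the Python azure-kusto-data package; the cluster URI, database name, the heavy query itself, and the loop count are placeholders or arbitrary choices you would replace with your own:

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Placeholders: point these at your own cluster, database, and a query you
# already know is CPU-intensive.
CLUSTER = "https://<yourcluster>.<region>.kusto.windows.net"
DATABASE = "<your-database>"
HEAVY_QUERY = "<your CPU-heavy query>"

# Azure CLI authentication is just one option; device-code or AAD application
# authentication work the same way.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(CLUSTER)
client = KustoClient(kcsb)

# Fire the query repeatedly to push cluster CPU up. Run several copies of this
# script (or add threads) if a single stream is not enough to reach ~70%.
for i in range(100):
    client.execute(DATABASE, HEAVY_QUERY)
    print(f"iteration {i} done")
```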

Related

Dhrystone results are the same

I am executing Dhrystone 2.1 on a Freescale i.MX6 quad processor at 1 GHz. Below are the things I tried.
1. Executed Dhrystone alone.
2. Executed Dhrystone with an application running in the background.
In both cases I get the same DMIPS value. I do not understand this; in the second case the DMIPS should be lower. Please let me know why.
You should think about why you expect the Dhrystone benchmark to perform worse with another program running in the background. If you want them to fight for CPU time, you need to make sure they are both scheduled on the same core, because if they are scheduled on different cores, they will each receive 100% CPU time. Other reasons you could expect Dhrystone to run slower with an app in the background would be shared cache collisions or memory bandwidth contention. I would rule both of those out for this discussion, though, because Dhrystone is an extremely simple benchmark that doesn't require much memory bandwidth or cache space. So your best way of slowing it down is to schedule the other app on the same core and restrict them both so they can't be scheduled elsewhere. For more info on how to perform Dhrystone benchmarking on Arm, refer to this document:
http://infocenter.arm.com/help/topic/com.arm.doc.dai0273a/DAI0273A_dhrystone_benchmarking.pdf
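On Linux the usual tool for that kind of pinning is taskset, but the same thing can be done programmatically. A minimal Python sketch (Linux-only; the two PIDs are placeholders you would look up yourself, e.g. with pidof):

```python
import os

# Pin two already-running processes (e.g. the dhrystone run and the background
# app) to the same core so they actually compete for CPU time.
DHRYSTONE_PID = 1234       # placeholder PID
BACKGROUND_APP_PID = 5678  # placeholder PID

for pid in (DHRYSTONE_PID, BACKGROUND_APP_PID):
    os.sched_setaffinity(pid, {0})  # restrict this process to CPU core 0
    print(pid, "now limited to cores", os.sched_getaffinity(pid))
```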

Detecting W3WP CPU issues using JetBrains dotTrace

The CPU usage of the W3WP process on our production server is constantly high. It doesn't max out at 100%, but it jumps up into the 90%s a fair bit. To help look into this I profiled the live application using JetBrains dotTrace.
The results were as expected. All the slow methods were NHibernate functions that queried our database. My question is, would these slow methods actually affect the CPU on our web server, given that our database server is on a separate machine? Surely if the database server is doing some work then the web server just waits for a response, and its CPU shouldn't go up?
If this is the case, how do I use dotTrace (or another tool if necessary) to work out where the CPU is being used, as opposed to the server just waiting for a response from elsewhere?
dotTrace screenshot of hot spots
You can see from the screenshot that most of the time is spent waiting for external HTTP requests to complete. However, I'd have thought these shouldn't affect the CPU usage on the web server.
It may well be NHibernate itself that is doing the hard work on your web server, and that the database is actually doing relatively little.
I would recommend running a SQL profiler to see whether it is really the case that the database is taking a long time on a single call (from NHibernate).
My guess is that you will see NHibernate making lots and lots of calls to the database and then processing them (on your web server), and that it is this that is responsible for the high CPU.
If you have a lot of lazy fetching on joins, you can end up in a situation where NHibernate makes many, many calls to the database to get the data for one request.
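To make the "many, many calls" point concrete, here is a small self-contained sketch of the classic N+1 pattern that lazy fetching can produce, written in Python with sqlite3 purely for illustration (it is not NHibernate code): one query for the parent rows, then one extra query per row, with the application doing the stitching work itself.

```python
import sqlite3

# Tiny in-memory schema purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE order_lines (id INTEGER PRIMARY KEY, order_id INTEGER, item TEXT);
    INSERT INTO orders VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO order_lines VALUES (1, 1, 'widget'), (2, 1, 'gadget'), (3, 2, 'sprocket');
""")

# N+1 style: one query for the orders, then one query per order for its lines.
# With lazy fetching and thousands of rows, that is a lot of round trips and a
# lot of per-row processing on the web server.
orders = conn.execute("SELECT id, customer FROM orders").fetchall()
for order_id, customer in orders:
    lines = conn.execute(
        "SELECT item FROM order_lines WHERE order_id = ?", (order_id,)
    ).fetchall()
    print(customer, [item for (item,) in lines])

# The alternative: a single joined query, letting the database do the work once.
rows = conn.execute(
    "SELECT o.customer, l.item FROM orders o JOIN order_lines l ON l.order_id = o.id"
).fetchall()
print(rows)
```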

How many requests per second should my ASP (classic) app handle?

I'm profiling an ASP (classic) web service. The web service makes database calls, reads/writes files, and processes XML. On a Windows Server 2003 box (2.7 GHz, 4 cores, 4 GB RAM), how many requests per second should I be able to handle before things start to fail?
I'm building a tool to test this, but I'm looking for a number of requests per second to shoot for.
I know this is fairly vague, but please give the best estimate you can. If you need more information, please ask.
95% of the performance of any data-driven app is dependent on the database: 1) the way you do your calls, 2) the indexes, 3) the hardware under the database (disk subsystem in particular).
I have seen a machine like you are describing handle 40 requests per second (2,500/minute), but numbers like 10 per second (600/minute) are more common. I would expect even lower if you are running your DB on the same machine, and lower still if that DB is SQL Server Express or MS Access.
Also, at capacity, your app will probably not fail outright; IIS will queue requests once it is saturated, and may time out some of those requests if it can't service them before the timeout expires.
Btw, instead of building a tool to test your app, you may want to look into using a test tool such as Microsoft WCAT. It is pretty smooth and easy to use.
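If you do end up writing your own tool rather than using WCAT, something as small as the sketch below is enough to get a rough requests-per-second number. A minimal Python sketch; the URL, request count, and concurrency level are placeholders or arbitrary values:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://your-server/your-endpoint"  # placeholder
TOTAL_REQUESTS = 500                      # arbitrary
CONCURRENCY = 20                          # arbitrary

def hit(_: int) -> int:
    # Issue one request and return its HTTP status code.
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
        return resp.status

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(hit, range(TOTAL_REQUESTS)))
elapsed = time.perf_counter() - start

print(f"{TOTAL_REQUESTS} requests in {elapsed:.1f}s = "
      f"{TOTAL_REQUESTS / elapsed:.1f} req/s, {statuses.count(200)} returned 200")
```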
How fast should it be? Fast enough.
How fast is fast enough? That's a question that only you and your users can answer. If your service is horrifically inefficient and keeps up with demand, it's fast enough. If your service is assembly-optimized, lightning-fast, and overwhelmed with requests, it's not fast enough.
If the server is handling its actual workload, then don't worry about how fast it "should" be. When the server is having trouble, or when you anticipate that it soon will, then you should look at improving the code or upgrading the hardware. Remember Knuth's law: premature optimization is the root of all evil. Any work you do now to make it faster may never pay off, and you may be forced to make compromises with flexibility or maintainability. Remember, too, an older adage: if it ain't broke, don't fix it.
Yes, I would also say 10 per second is a good benchmark. For a high-performance app you would want more than this, but if you have no specific goal you should generally be able to get at least 10 requests per second for a typical web page with a handful of database queries.

Should non-blocking code push the CPU to 100%?

I've been thinking today about Node.js and its attitude towards blocking, and it got me wondering: if a block of code is purely non-blocking, say it is calculating some very long algorithm with all the variables already on the stack, should it push a single non-hyperthreaded core to 100% CPU (as Windows Task Manager defines it) as it aims to complete the task as quickly as possible? Assume this is a calculation that can take minutes.
Yes, it should. The algorithm should run as fast as it can. It's the operating system's job to schedule time to other processes if necessary.
If your non-blocking, computation-intensive code doesn't use 100% of the CPU, then you are wasting cycles in the idle task. It always irritates me to see the idle task using 99% of the CPU.
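For illustration, a minimal Python sketch of the kind of purely CPU-bound work being discussed (the function and iteration count are arbitrary): run it and watch one core sit at or near 100% in Task Manager until it finishes.

```python
# A purely CPU-bound loop: no I/O, no blocking, everything in local variables.
def busy_work(iterations: int) -> int:
    total = 0
    for i in range(iterations):
        total += (i * i) % 7  # arbitrary arithmetic just to keep the core busy
    return total

if __name__ == "__main__":
    print(busy_work(200_000_000))
```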
As long as the CPU is "given" to other processes when there are some that need it to do their calculations, I suppose it's OK: why not use the CPU if it's available and there is some work to do?
As RAM can be paged out to disk, all applications are potentially blocking. This would happen if the algorithm uses more RAM than is available on the system; in that case, it won't hit 100%.

Best way to determine the number of servers needed

How much traffic can one web server handle? What's the best way to see if we're beyond that?
I have an ASP.Net application that has a couple hundred users. Aspects of it are fairly processor intensive, but thus far we have done fine with only one server to run both SqlServer and the site. It's running Windows Server 2003, 3.4 GHz with 3.5 GB of RAM.
But lately I've started to notice slowdowns at various times, and I was wondering what the best way is to determine whether the server is overloaded by the application's usage or whether I need to fix the application (I don't really want to spend a lot of time hunting down little optimizations if I'm just expecting too much from the box).
What you need is some info on capacity planning.
Capacity planning is the process of planning for growth and forecasting peak usage periods in order to meet system and application capacity requirements. It involves extensive performance testing to establish the application's resource utilization and transaction throughput under load. First, you measure the number of visitors the site currently receives and how much demand each user places on the server, and then you calculate the computing resources (CPU, RAM, disk space, and network bandwidth) that are necessary to support current and future usage levels.
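As a back-of-the-envelope example of that calculation (all numbers below are made up for illustration):

```python
# Rough capacity arithmetic: peak request rate and the CPU it implies.
concurrent_users = 200          # peak concurrent users (example value)
requests_per_user_per_s = 0.1   # each user clicks roughly once every 10 seconds
cpu_ms_per_request = 50         # average CPU time one request costs (measured)

peak_rps = concurrent_users * requests_per_user_per_s    # -> 20 req/s
cpu_cores_needed = peak_rps * cpu_ms_per_request / 1000   # -> 1.0 core busy

print(f"peak load: {peak_rps:.0f} req/s, "
      f"~{cpu_cores_needed:.1f} cores of CPU needed before headroom")
```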
If you have access to some profiling tools (such as those in the Team Suite edition of Visual Studio), you can try setting up a testing server, running some synthetic requests against it, and seeing whether any specific part of the code takes unreasonably long to run.
You should probably check some graphs of CPU and memory usage over time before doing this, to see if it can even be that. (A number akin to the UNIX "load average" could be a useful metric; I don't know if Windows has anything like it. Basically, it's the average number of threads that want CPU time in each time slice.)
Also check the obvious, that you aren't running out of bandwidth.
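If you want a quick way to capture CPU and memory over time before investing in a full profiling run, a small script is enough. A minimal Python sketch using the psutil library (the sampling interval, sample count, and log format are arbitrary choices):

```python
import time
import psutil  # third-party: pip install psutil

# Sample overall CPU and memory usage once per second and print a simple log line.
# Leave it running during a busy period, then eyeball (or graph) the output.
def monitor(samples: int = 60, interval: float = 1.0) -> None:
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=interval)  # % over the last interval
        mem = psutil.virtual_memory().percent        # % of physical RAM in use
        print(f"{time.strftime('%H:%M:%S')}  cpu={cpu:5.1f}%  mem={mem:5.1f}%")

if __name__ == "__main__":
    monitor()
```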
Measure, measure, measure. Rico Mariani always says this, and he's right.
Measure req/sec, RAM, CPU, Sessions, etc.
You may come up with a caching strategy (Output caching, data caching, caching dependencies, and so on.)
See also how your SQL Server is doing... indexes are a good place to start, but not the only thing to look at.
On that hardware, a .NET application should be able to serve about 200-400 requests per second. If you have only a few hundred users, I doubt you are seeing even 2 requests per second, so I think you have a lot of capacity on that box, even with SQL server running.
Without knowing all of the details, I would say no, you will not see any performance improvement by adding servers.
By the way, if you're not using the Output Cache, I would start there.