When considering using performance counters for my company's .NET-based site, I was wondering how big the overhead of using them is.
Do I want to have my site continuously update its counters, or am I better off only doing so when I measure?
The overhead of setting up the performance counters is generally not high enough to worry about (setting up a shared memory region and some .NET objects, along with CLR overhead because the CLR actually does the management for you). Here I'm referring to classes like PerformanceCounter.
The overhead of registering the performance counters can be noticeably slow, but it is generally not a concern because registration is intended to happen once, at setup time, since it changes machine-wide state; it will be dwarfed by whatever copying you do during setup. It's not generally something you want to do at runtime. Here I'm referring to PerformanceCounterInstaller.
The overhead of updating a performance counter generally comes down to the cost of performing an interlocked operation on the shared memory. This is slower than normal memory access, but it is a processor primitive (that's how you get atomic operations across the entire memory subsystem, including caches). Generally this cost is not high enough to worry about: it could be 10 times a normal memory operation, potentially worse depending on the update and on what contention is like across threads and CPUs. But consider this: it's literally impossible to do better than interlocked operations for cross-process communication with atomic updates, and no locks are held. Here I refer to PerformanceCounter.Increment and similar methods.
The overhead of reading a performance counter is generally a read from shared memory. As others have said, you want to sample at a reasonable period (just like any other sampling); think of PerfMon and try to keep the sampling on a human scale (seconds rather than milliseconds) and you probably won't have any problems.
Finally, an appeal to experience: Performance counters are so lightweight that they are used everywhere in Windows, from the kernel to drivers to user applications. Microsoft relies on them internally.
Advice: the real questions with performance counters are the learning curve in understanding them (which is moderate) and measuring the right things (which seems easy, but you often get it wrong).
The performance impact of updating is negligible; Microsoft's intent is that you always write to the performance counters. It's the monitoring (or capturing) of those counters that will cause a degradation of performance, i.e. only when you use something like PerfMon to capture the data.
In effect, the performance counter objects already give you the behavior of only "doing it when you measure."
I've tested these a LOT.
On an old Compaq 1 GHz single-processor machine, I was able to create about 10,000 counters and monitor them remotely for about 20% CPU usage. These weren't custom counters, just standard ones (CPU usage and the like).
Basically, you can monitor all the counters on any decent newer machine with very little impact.
The instantiation of the counter objects can take a long time, though: a few seconds to a few minutes. I suggest you multithread this for all the counters you collect, otherwise your app will sit there forever creating these objects. I'm not sure what MS does on creation that takes so long, but you can do it for 1,000 counters with 1,000 threads in the same time it takes for 1 counter and 1 thread.
A performance counter is just a pointer to 4/8 bytes in shared memory (i.e. a memory-mapped file), so the cost of updating one is very similar to that of accessing an int/long variable.
I agree with famoushamsandwich, but would add that as long as your sampling rate is reasonable (5 seconds or more) and you monitor a reasonable set of counters, then the impact of measuring is negligible as well (in most cases).
What I have found is that updating counters is not that slow for the majority of applications. I wouldn't put one in a tight loop, or in something that is called thousands of times a second.
Secondly, I found that programmatically creating the performance counters is very slow, so make sure that you create them beforehand (at install time) and not at runtime.
I want to build a MariaDB server.
I am researching DB performance in relation to CPU specifications before purchasing a server.
Is there any benefit to be gained as the CPU cache goes from 8 MB to 12 MB?
Is purchasing a CPU with a large cache a good option?
No.
Don't pay extra for more cores. MariaDB essentially uses part of only one core for each active connection.
Don't pay extra for faster CPU or anything else to do with the CPU. If you do become CPU-bound, then come back here for help with optimizing your query; it is usually as simple as adding a composite index or reformulating a query.
Throwing hardware at something is a one-time, partial fix of low benefit -- a few percent: like 2% or, rarely, 20%.
Reformulation or indexing can (depending on the situation) give you 2x or 20x or even 200x.
More RAM is beneficial, up to a point. Make a crude estimate of your dataset size; let us know that value, plus some clue about what type of app this is.
SSD instead of HDD is rather standard now, and it can give a noticeable benefit.
The major difference between RISC and CISC is that in RISC we must use registers for any arithmetic or logic operation, whereas in CISC we can perform such operations directly on memory locations. So what is the advantage of implementing register banking in microcontroller architectures? The question is not about the advantages of RISC; it is about why registers are needed in a RISC architecture at all. Since in a CISC architecture the operation can be done directly on a memory location, we don't need to load the value into a register and then move it back to memory. Below is an example:
CISC: MUL A,B
RISC:
LDA R0,A    ; load the value at memory location A into register R0
LDA R1,B    ; load the value at memory location B into register R1
MUL R0,R1   ; multiply the registers, result left in R0
STR A,R0    ; store R0 back to memory location A
So in the above example, what is the advantage of using R0 and R1, i.e. registers? What is the advantage of a load/store architecture?
Register banking is something else; I assume you are simply asking about using a register directly or not. Well, memory access takes an eternity, even if cached: several to hundreds of clock cycles for each of the operands. (In RISC I am assuming a pure register-based scheme, which not all are; the lines are getting fuzzy.) With CISC, if the instruction is microcoded, the operands are going into registers anyway before the operation happens; if it is not microcoded, they still get latched into internal temporary storage (registers) before the operation can begin. With RISC you have a couple or three extra, simpler instructions, and the latching into registers takes the same amount of time as it does in CISC. Now, if the algorithm never uses that result, or does not use it for a while, it might be a win for CISC (if not microcoded); but if the value is an intermediate value in an algorithm, it is a clear win for RISC. Even if everything is cached, it takes half a dozen to a dozen clock cycles to get each parameter and write the result back, and any cache miss is an eternity. The same applies to RISC, but with more registers and significantly faster access to those registers: zero or one clock to fetch each value and to store it back, for some percentage of the algorithm if not the whole thing.
As with any benchmarking it is trivial to show a RISC winning case and to show a CISC winning case.
The major difference between RISC and CISC is that CISC instructions are complicated and time-consuming, whereas RISC instructions are much simpler: you arrange the tasks you need to do yourself and have tighter control over those tasks, so you don't have a lot of waste per step. One could argue caches were created to deal with the inefficiencies of CISC, or at least of one popular CISC. Both benefit from caches, sure, but one relies on them more than the other does. The same goes for VLIW and others.
RISC designs are simpler and smaller, the pipelines can be shorter, the compiler has more control over the performance, and so on. So with microcontrollers you can have a very nice processor core with a 3-stage pipeline that is really low power and still quite efficient. The 6502, Z80, 8051, etc. have mostly died off (you still see a lot of 8051s if you look; the desktop/laptop you might be reading this on probably has one, but that is down to royalties rather than its size or performance), and you probably have several to dozens of ARM cores for every x86, within the same box or certainly around the house. A CISC core is going to be relatively massive and inefficient; it might be possible to get its power consumption down to RISC levels, and that may just be a matter of design rather than CISC vs RISC, but the RISC implementations are doing a much better job at watts per MHz than the CISC implementations.
Using registers can simplify the operand-fetching logic of the functional units. With CISC, the functional units must be able to fetch data from memory. With RISC, all the functional units operate on registers, and it is guaranteed that the data will already be there, so they are less complicated.
Also, think of a case where you have multiple MUL operations, some of which use the data at location B, as shown below.
'MUL A, B'
'MUL C, B'
When you perform those operations in CISC, you will be reading B twice. But in RISC, you load it into a register once and can use it multiple times, so there are fewer memory (cache) accesses.
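Using the same hypothetical load/store syntax as the question, the second multiply can reuse the copy of B that is already sitting in a register:

LDA R0,A    ; load A
LDA R1,B    ; the only memory read of B
MUL R0,R1   ; A * B
STR A,R0
LDA R2,C    ; load C
MUL R2,R1   ; C * B, reusing B from R1 with no second memory access
STR C,R2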
Also, think of the number of bits needed to encode that MUL in CISC. Since A, B, and C are memory locations, they could be anywhere within your address space. With registers in RISC, on the other hand, fewer bits are needed to encode your operands, hence a less complicated instruction set.
From the responses above, we can conclude that using registers instead of direct memory locations gives a benefit in efficiency, in terms of clock cycles and therefore power consumption. Registers also give a benefit in terms of instruction complexity.
I'm writing a business case for implementing the lruskips parameter. I know the benefits (and they are likely sizeable). What I don't know is the performance overhead in terms of memory etc I should consider as the downsides. We need a balanced viewpoint after all!
OpenEdge 10.2B08
Various flavours of Windows that have not been patched with Linux.
The feature eliminates overhead by avoiding the housekeeping associated with maintaining the LRU chain. Using it does not add any memory requirements and it reduces CPU consumption.
Instead of moving a block to the head of the LRU chain every time it is referenced, it only does that every X references. So rather than evicting the block that is absolutely the "least recently used", a block that is "probably not very recently used" will be evicted.
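As a toy illustration of the idea (this is not OpenEdge's implementation; the class and names below are made up), here is a Python sketch of a buffer pool that only pays the LRU-promotion cost on every Nth hit:

from collections import OrderedDict

class SkippingLRU:
    """Toy buffer pool: promote a block to the MRU end only every skips+1 hits."""
    def __init__(self, capacity, skips=0):
        self.capacity = capacity
        self.skips = skips
        self.blocks = OrderedDict()          # key -> [value, hits since last promotion]

    def get(self, key, load):
        if key in self.blocks:
            entry = self.blocks[key]
            entry[1] += 1
            if entry[1] > self.skips:        # only do the housekeeping every skips+1 hits
                entry[1] = 0
                self.blocks.move_to_end(key) # promote to the MRU end
            return entry[0]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict from the (approximate) LRU end
        value = load(key)
        self.blocks[key] = [value, 0]
        return value

With skips=0 this behaves like an ordinary LRU; with skips=99 only about 1% of hits touch the chain, which corresponds to the -lruskips 100 case.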
The only potential downside is that there could, theoretically, be a perverse case where setting it too high might result in some poor eviction decisions. IOW the "probably" part of things turns out to be untrue because you aren't checking often enough (you have basically converted the -B management to a FIFO queue).
For instance, setting it to 1000000 with a -B of 1000 might not be very smart. (But the bigger issue there is probably -B 1000.)
Suppose that you did that (set -lruskips 1000000 -B 1000) and your application has a mix of "normal" access along with a few background processes doing some sort of sequential scan to support reporting.
The reporting stuff is going to read lots of data that it only looks at once (or a very few times). This data is going to be immediately placed at the MRU end of the queue and will move towards the LRU end as new data is read.
Even though it is being frequently accessed, the working set of "normal" data will be pushed to the LRU end because the access counter update is being skipped. With a smallish -B it is going to fall off the end pretty quickly and be evicted. And then the next access will cause an IO to occur and the cycle will restart.
The higher you set -lruskips the more sensitive you will become to that sort of thing. -lruskips 10 will eliminate 90% of lru housekeeping and should be a very, very low risk setting. -lruskips 100 will eliminate 99% of the housekeeping and should still be awfully low risk (unless -B is crazy small). Unless you have a very special set of circumstances setting it higher than 100 seems mostly pointless. You are well into "diminishing returns" and possibly pushing your luck vis a vis perverse outcomes.
I was reading this excellent article, which gives an introduction to asynchronous programming (http://krondo.com/blog/?p=1209), and I came across the following line, which I find hard to understand.
Since there is no actual parallelism (in async), it appears from our diagrams that an asynchronous program will take just as long to execute as a synchronous one, perhaps longer as the asynchronous program might exhibit poorer locality of reference.
Could someone explain how locality of reference comes into picture here?
Locality of reference, like that Wikipedia article mentions, is the observation that when some data is accessed (on disk, in memory, whatever), other data near that location is often accessed as well. This observation makes sense since developers tend to group similar data together. Since the data are related, they're often processed together. Specifically, this is known as spatial locality.
For a weak example, imagine computing the sum of an array or doing a matrix multiplication. The data representing the array or matrix are typically stored in contiguous memory locations, so in this example, once you access one specific location in memory, you will soon be accessing others close to it as well.
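For instance, here is a rough Python sketch of the array-sum example (in CPython the cache effect is muted by interpreter overhead, so treat this as an illustration of the access pattern rather than a benchmark):

import array

N = 1 << 20
data = array.array("l", range(N))    # one contiguous block of machine-sized ints

# Good spatial locality: walk the array in order, so each chunk of nearby
# memory (cache line, page) that gets pulled in is fully used before moving on.
total = 0
for i in range(N):
    total += data[i]

# Poor spatial locality: jump around with a large stride, so nearly every
# access lands on a different cache line (and possibly a different page).
stride = 4096
total_strided = 0
for start in range(stride):
    for i in range(start, N, stride):
        total_strided += data[i]

assert total == total_strided        # same result, very different access pattern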
Computer architecture takes locality of reference into account. Operating systems have the notion of "pages" which are (roughly) 4KB chunks of data that can be paged in and out individually (moved between physical memory and disk). When you touch some memory that's not resident (not physically in RAM), the OS will bring the entire page of data off disk and into memory. The reason for this is locality: you're likely to touch other data around what you just touched.
Additionally, CPUs have the concept of caches. For example, a CPU might have an L1 (level 1) cache, which is really just a big block of on-CPU memory that the CPU can access faster than RAM. If a value is in the L1 cache, the CPU will use that instead of going out to RAM. Following the principle of locality of reference, when the CPU accesses some value in main memory, it will bring that value and the values near it into the L1 cache. This set of values is known as a cache line. Cache lines vary in size, but the point is that when you access the first value of an array, the CPU might have to get it from RAM, but subsequent accesses (close in proximity) will be faster, since the CPU brought the whole bundle of values into the L1 cache on the first access.
So, to answer your question: if you imagine a synchronous process computing the sum of a very large array, it will touch memory locations in order one after the other. In this case, your locality is good. In the asynchronous case, however, you might have n threads each taking a slice of the array (of size 1/n) and computing the sub-sum. Each thread is touching a potentially very different location in memory (since the array is large) and since each thread can be switched in and out of execution, the actual pattern of data access from the point of view of the OS or CPU is poor. The L1 cache on a CPU is finite, so if Thread 1 brings in a cache line (due to an access), this might evict the cache line of Thread 2. Then, when Thread 2 goes to access its array value, it has to go to RAM, which will bring in its cache line again and potentially evict the cache line of Thread 1, and so on. Depending on the system resources and usage as a whole, this pattern could happen on the OS/page level as well.
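Here is a rough Python sketch of that threaded case (plain threads; in CPython the GIL keeps them from running in parallel, but the interleaved access pattern, with each thread touching a distant region of the array, is what matters here):

import threading

N = 1 << 20
data = list(range(N))
nthreads = 4
partial = [0] * nthreads

def sum_slice(t):
    # Each thread walks its own 1/n slice, which lives in a different region
    # of memory from the slices the other threads are touching.
    lo, hi = t * N // nthreads, (t + 1) * N // nthreads
    s = 0
    for i in range(lo, hi):
        s += data[i]
    partial[t] = s

threads = [threading.Thread(target=sum_slice, args=(t,)) for t in range(nthreads)]
for th in threads:
    th.start()
for th in threads:
    th.join()

# As the scheduler switches between threads, the memory accesses the CPU sees
# jump between those distant regions, so data cached for one thread keeps
# being evicted by another.
print(sum(partial) == sum(data))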
The poorer locality of reference results in poorer cache usage -- each time you do a thread switch, you can expect that most of what's in the cache relates to that previous thread, not the current one, so most reads will get data from main memory instead of the cache.
He's ultimately wrong though, at least for quite a few programs. The reason is pretty simple: even though you gain nothing on CPU-bound code, when you can combine some CPU-bound code with some I/O bound code, you can expect an overall speed improvement. You can, for example, initiate a read or write, then switch to doing computation while the disk is busy, then switch back to the I/O bound thread when the disk finishes its work.
Everything I've read and experienced (Tornado-based apps) leads me to believe that epoll is a natural replacement for select- and poll-based networking, especially with Twisted. Which makes me paranoid; it's pretty rare for a better technique or methodology not to come with a price.
Reading a couple dozen comparisons between epoll and the alternatives shows that epoll is clearly the champion for speed and scalability, specifically that it scales in a linear fashion, which is fantastic. That said, what about processor and memory utilization: is epoll still the champ?
For very small numbers of sockets (varies depending on your hardware, of course, but we're talking about something on the order of 10 or fewer), select can beat epoll in memory usage and runtime speed. Of course, for such small numbers of sockets, both mechanisms are so fast that you don't really care about this difference in the vast majority of cases.
One clarification, though: both select and epoll scale linearly. A big difference is that their userspace-facing APIs have complexities that are based on different things. The cost of a select call goes roughly with the value of the highest-numbered file descriptor you pass it. If you select on a single fd whose value is 100, that's roughly twice as expensive as selecting on a single fd whose value is 50. Adding more fds below the highest isn't quite free, so it's a little more complicated than this in practice, but this is a good first approximation for most implementations.
The cost of epoll is closer to the number of file descriptors that actually have events on them. If you're monitoring 200 file descriptors, but only 100 of them have events on them, then you're (very roughly) only paying for those 100 active file descriptors. This is where epoll tends to offer one of its major advantages over select. If you have a thousand clients that are mostly idle, then when you use select you're still paying for all one thousand of them. However, with epoll, it's like you've only got a few - you're only paying for the ones that are active at any given time.
All this means that epoll will lead to less CPU usage for most workloads. As far as memory usage goes, it's a bit of a toss-up. select does manage to represent all the necessary information in a highly compact way (one bit per file descriptor). And the FD_SETSIZE (typically 1024) limitation on how many file descriptors you can use with select means that you'll never spend more than 128 bytes for each of the three fd sets you can use with select (read, write, exception). Compared to those 384 bytes max, epoll is sort of a pig; each file descriptor is represented by a multi-byte structure. However, in absolute terms, it's still not going to use much memory. You can represent a huge number of file descriptors in a few dozen kilobytes (roughly 20k per 1000 file descriptors, I think). And you can also throw in the fact that you have to spend all 384 of those bytes with select if you only want to monitor one file descriptor but its value happens to be 1024, whereas with epoll you'd only spend 20 bytes. Still, all these numbers are pretty small, so it doesn't make much difference.
And there's also that other benefit of epoll, which perhaps you're already aware of, that it is not limited to FD_SETSIZE file descriptors. You can use it to monitor as many file descriptors as you have. And if you only have one file descriptor, but its value is greater than FD_SETSIZE, epoll works with that too, but select does not.
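To make that concrete, here is a minimal sketch using Python's select module (which wraps select(2) and, on Linux, epoll(7)); the socketpair just stands in for real client connections:

import select, socket

a, b = socket.socketpair()
a.setblocking(False)
b.send(b"ping")                            # make one end readable

ep = select.epoll()
ep.register(a.fileno(), select.EPOLLIN)    # interest set is maintained in the kernel

# epoll: each poll() returns only the descriptors that are actually ready,
# so mostly-idle connections cost (almost) nothing per call.
for fd, events in ep.poll(timeout=1.0):
    print("epoll: fd %d is readable" % fd)

# select: every call re-scans the whole set, and the work grows with the
# highest-numbered descriptor passed in, ready or not.
readable, _, _ = select.select([a], [], [], 1.0)
print("select says readable:", readable)

ep.close(); a.close(); b.close()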
Randomly, I've also recently discovered one slight drawback to epoll as compared to select or poll. While none of these three APIs supports normal files (i.e., files on a file system), select and poll present this lack of support by reporting such descriptors as always readable and always writable. This makes them unsuitable for any meaningful kind of non-blocking filesystem I/O, but a program which uses select or poll and happens to encounter a file descriptor from the filesystem will at least continue to operate (or, if it fails, it won't be because of select or poll), albeit perhaps not with the best performance.
On the other hand, epoll will fail fast with an error (EPERM, apparently) when asked to monitor such a file descriptor. Strictly speaking, this is hardly incorrect. It's merely signalling its lack of support in an explicit way. Normally I would applaud explicit failure conditions, but this one is undocumented (as far as I can tell) and results in a completely broken application, rather than one which merely operates with potentially degraded performance.
In practice, the only place I've seen this come up is when interacting with stdio. A user might redirect stdin or stdout from/to a normal file. Whereas previously stdin and stdout would have been a pipe -- supported by epoll just fine -- it then becomes a normal file and epoll fails loudly, breaking the application.
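You can see that behaviour directly from Python's select module on Linux (the path below is just an arbitrary regular file):

import os, select

ep = select.epoll()

# A pipe is a genuinely pollable object: registering it works fine.
r, w = os.pipe()
ep.register(r, select.EPOLLIN)

# A regular file is not supported: epoll_ctl fails with EPERM, which Python
# surfaces as PermissionError.
fd = os.open("/etc/hostname", os.O_RDONLY)
try:
    ep.register(fd, select.EPOLLIN)
except PermissionError as exc:
    print("epoll refused the regular file:", exc)

# select, by contrast, just reports the file as always ready.
readable, _, _ = select.select([fd], [], [], 0)
print("select says readable:", readable)

os.close(fd); os.close(r); os.close(w); ep.close()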
In tests at my company, one issue with epoll() came up: a cost it has relative to select.
When attempting to read from the network with a timeout, creating an epoll_fd (instead of an FD_SET) and adding the fd to the epoll_fd is much more expensive than creating an FD_SET (which is a simple malloc).
As per the previous answer, as the number of FDs in the process becomes large, the cost of select() becomes higher, but in our testing, even with fd values in the 10,000s, select was still a winner. These are cases where there is only one fd that a thread is waiting on, simply to overcome the fact that network reads and writes don't time out when using a blocking thread model. Of course, blocking thread models are low-performance compared to non-blocking reactor systems, but there are occasions where, to integrate with a particular legacy code base, they are required.
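In Python terms, the one-descriptor-with-timeout pattern being described looks roughly like this (a sketch of the shape of it, not the code that was actually benchmarked):

import select

def recv_with_timeout(sock, nbytes, timeout):
    # One short-lived wait per call: with select() the fd set is rebuilt each
    # time, which is cheap; an epoll-based version would have to create and
    # register a fresh epoll object here, which is the extra cost noted above.
    ready, _, _ = select.select([sock], [], [], timeout)
    if not ready:
        raise TimeoutError("no data within %s seconds" % timeout)
    return sock.recv(nbytes)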
This kind of use case is rare in high performance applications, because a reactor model doesn't need to create a new epoll_fd every time. For the model where an epoll_fd is long-lived --- which is clearly preferred for any high performance server design --- epoll is the clear winner in every way.