CPU usage different?

I have a basic question.
If I run an executable file (Release, Visual Studio 2010) on two computers with the same CPU speed but running two different Windows operating systems, e.g. Windows 7 vs. XP, should I expect to see different CPU usage when I measure it using the task manager? Is CPU speed the only factor in measuring CPU usage?
Thanks.
Sar

Different OS's? Yes.
Operating systems are the go-between for the programs you run and the bare metal they run on. As OSes change and evolve, they naturally add and remove features that consume resources: things that run in the background, or changes in the manner in which the OS speaks to the hardware.
Also, the measurement of CPU usage is done by the OS. There isn't a tachometer on chips saying "running at 87% of redline", but rather that "tach" is constructed largely by the OS.
After better understanding your situation: I would suggest taking a look at the Performance Monitor (perfmon.exe), which ships with both XP and Win7 and gives you much finer-grained detail about processor usage levels. Another (very good) option would be to run a profiler on your application on both OSes and compare the results; specifically benchmarking your application on both OSes is likely the best option.
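If you do go the benchmarking route, the simplest approach is to time one fixed piece of work with wall-clock time on each machine and compare the numbers directly, rather than relying on Task Manager percentages. A minimal sketch (illustrated in Java here; the loop is just a stand-in for your real workload, which in your case is a native executable):

    // Time a fixed amount of work; run the same class on both machines and
    // compare the elapsed figures.
    public class Bench {
        public static void main(String[] args) {
            long start = System.nanoTime();
            long acc = 0;
            for (int i = 0; i < 100000000; i++) {
                acc += i * 31L;                      // placeholder for your real work
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("elapsed: " + elapsedMs + " ms (checksum " + acc + ")");
        }
    }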

Even on the same OS you should expect to see different usages, because there are so many factors that determine CPU usage.

The percentage of CPU usage listed in the task manager is not a very good indication of much of anything, except to say that a program either is or is not using the CPU. That particular figure is derived from task-switching statistics, and task switching is very sensitive to basically every single thing that's going on in a computer, from network access to memory speed to CPU temperature.
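A steadier number is the CPU time your process has actually accumulated, which most platforms will report (for a native executable on Windows, GetProcessTimes gives the same information). A small sketch of reading it from inside a JVM; note that getProcessCpuTime() comes from the com.sun.management extension and may not exist on every JVM, hence the check:

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;

    // Reads the CPU time this process has consumed, as accounted by the OS.
    public class CpuTime {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof com.sun.management.OperatingSystemMXBean) {
                long nanos = ((com.sun.management.OperatingSystemMXBean) os).getProcessCpuTime();
                System.out.println("process CPU time: " + nanos / 1000000 + " ms");
            } else {
                System.out.println("process CPU time not exposed by this JVM");
            }
        }
    }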

Related

Out-of-memory error on Minecraft Server with 16G RAM

Please excuse my inexperience, this is my first time on the site.
I have a Dell PowerEdge r710 with 2 Xeon L5630 CPUs and 16G RAM installed. I'm trying to host a Minecraft 1.7.10 Forge Server that runs perfectly fine on my Desktop, but refuses to run properly on the server.
This machine is running Java 8 and runs perfectly otherwise. When running the application without the mods, it loads up without a hitch. As I add more mods, it gets worse. As far as my (very, very limited) knowledge goes, the order of JVM arguments doesn't matter, and it didn't on my Desktop, but in order to get the application to even run I had to change the order in my .bat file. With all mods installed, the Out Of Memory error occurs with a chunk-loading error when the spawn is around 41% loaded.
This is the .bat file that I've made to start the server:
java -jar minecraft_server.jar -Xms512M -Xmx8192M nogui -XX:+HeapDumpOnOutOfMemory
This should load up perfectly fine, everything is compatible and tested on another machine, but the exact same setup will not run on the r710, saying Out Of Memory with more than double the desktop's allocated memory.
First you should use Task Manager or a similar utility to make sure that the Java process is indeed using more than the amount you allocated with your arguments. Then I would recommend reading through this lovely post written by Cpw and posted on Reddit. If it doesn't help you with your current situation, it should at least give you a bit more information on Minecraft's memory footprint.
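One quick way to do that check from inside the JVM itself is to print the heap limits it actually ended up with. This is worth doing because in the posted command the -Xms/-Xmx options appear after -jar, and anything after the jar file name is normally handed to the application rather than to the JVM, so those options may not have taken effect at all. A minimal sketch (run as a standalone test with the same flags you give the server):

    // Prints the heap limits the running JVM actually received, so you can
    // confirm whether -Xmx took effect.
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
        }
    }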
In a normal situation where you would be running Minecraft as a local server from your computer I would suggest taking a look at how much memory your GPU is taking up. Since you are running a server this is not relevant, but might still be useful to someone who stumbles upon this post so I will leave it here:
Your graphics card is probably the biggest address hog. Today's graphics adapters often contain a gigabyte or more of RAM, and every one of those bytes needs an address. To be fair, I doubt that many of those multi-gigabyte graphics cards are in 32-bit PCs, but even a 512mb video card will take a sizeable bite out of 4GB.
I am not quite familiar with running dedicated servers, but another important thing worth mentioning is that if you are on a 32-bit operating system you will only be able to take advantage of 4GB of your RAM due to architecture constraints.
Every byte of RAM requires its own address, and the processor limits the length of those addresses. A 32-bit processor uses addresses that are 32 bits long. There are only 4,294,967,296, or 4GB, possible 32-bit addresses.
If all else fails you should try to seek help on one of the available Discord channels dedicated to Minecraft modding. This should be a rule in general actually, especially for general purpose problems that are difficult for others to reproduce. Here is a small list of three Discord communities dedicated to Minecraft modding that I have experience with:
Modded Minecraft - The one with most traffic so it can be a bit more difficult for your question to get noticed on busy days, but definitely the best moderated one from this list.
Modding Help - The smallest of the three. I don't have much experience with this one.
Mod Dev Cafe - This one has a decent size and a pretty good response rate, but be prepared for the usual facepalms and other unpleasantness common to younger admins and moderators. However, if you are willing to look past that, this is a good choice.

Optimize mathematical library (libm)

Has anyone tried to compile glibc with -march=corei7 to see if there's any performance improvement over the version that comes by default with any Linux x86_64 distribution? GCC is compiled with -march=i686. I think (not sure) that the mathematical library is also compiled the same way. Can anybody confirm this?
Most Linux distributions for x86 compile using only i686 instructions, but ask for the code to be scheduled for later processors. I haven't really followed later developments.
A long while back, shipping different versions of system libraries for different processor lines was common, but the performance differences were soon deemed too small for the cost. And machines got more uniform in performance in the meantime.
One thing that always has to be remembered is that today's machines are memory bound. That is, a memory access takes a few hundred times longer than an instruction, and the gap is growing. Not to mention that this machine (an oldish laptop, top-of-the-line some 2 years back) has 4 cores (8 threads), all battling to get data/instructions from memory. Making the code run a tiny bit faster, so the CPU can wait longer for RAM, isn't very productive.
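You can see the memory wall for yourself with a rough experiment: compare a loop that stays in registers with one that chases scattered array indices. An illustrative sketch in Java (array size and constants are arbitrary, and it may need a larger heap, e.g. -Xmx512m):

    // The first loop does arithmetic only; the second does comparable arithmetic
    // but reads from pseudo-random positions in an array much larger than the
    // caches, so it spends most of its time waiting on RAM.
    public class MemoryBound {
        public static void main(String[] args) {
            final int n = 1 << 26;                 // 64M ints, roughly 256 MB
            int[] data = new int[n];

            long t0 = System.nanoTime();
            long sum1 = 0;
            for (int i = 0; i < n; i++) {
                sum1 += i * 31L;                   // compute only, no memory traffic
            }
            long t1 = System.nanoTime();

            long sum2 = 0;
            for (int i = 0; i < n; i++) {
                sum2 += data[(i * 0x9E3779B1) & (n - 1)];  // scattered reads, mostly cache misses
            }
            long t2 = System.nanoTime();

            System.out.println("compute only: " + (t1 - t0) / 1000000 + " ms (" + sum1 + ")");
            System.out.println("memory bound: " + (t2 - t1) / 1000000 + " ms (" + sum2 + ")");
        }
    }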

VMWare - network applications

I am developing a distributed file system using Java; I cannot give many details at this moment. I need to test some things on Linux, so I will use VMware Server and install Linux inside a virtual machine. Is there any difference between the simulated network card and a real Ethernet interface?
I am developing a distributed file system ... I will use VMware Server and install Linux inside a virtual machine.
VMware is great for this sort of thing. There should be no difference except, as RichieHindle said, in performance, especially if you're planning to run multiple VMs on the same server.
Use real hardware if you want usable performance benchmark results.
Java is its own 'VM'... on top of a layer of virtualization in the guest OS... on top of VMware... on a virtual execution model CPU. Take a little virtualization here, add a little virtualization there, and pretty soon we're talking about some real abstraction!
From the point of view of application code, no, there's no difference.
The only visible difference might be in performance - the speed of response and exact timings of things might be different, but you're talking microseconds.
There's so much general-purpose software that works flawlessly under VMs that the answer to almost every question of the form "Are VMs different from real machines?" at the application level is "No".
(Things might be different if you were talking about kernel-level driver software.)
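If you want to put a number on that timing difference for your file system, one rough approach (a sketch in Java, since that is what you're using; the address and port are placeholders) is to bounce a byte back and forth and average the round trip on both the real and the virtual setup:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Run with "server" on one host and with no arguments on the other; the
    // client reports the average round-trip time in microseconds.
    public class PingPong {
        public static void main(String[] args) throws Exception {
            if (args.length > 0 && args[0].equals("server")) {
                try (ServerSocket listener = new ServerSocket(5000);
                     Socket s = listener.accept()) {
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    int b;
                    while ((b = in.read()) != -1) {
                        out.write(b);                          // echo each byte straight back
                    }
                }
            } else {
                try (Socket s = new Socket("192.168.1.10", 5000)) {   // placeholder address
                    s.setTcpNoDelay(true);
                    InputStream in = s.getInputStream();
                    OutputStream out = s.getOutputStream();
                    int rounds = 10000;
                    long start = System.nanoTime();
                    for (int i = 0; i < rounds; i++) {
                        out.write(42);
                        in.read();
                    }
                    long avgMicros = (System.nanoTime() - start) / 1000 / rounds;
                    System.out.println("average round trip: " + avgMicros + " us");
                }
            }
        }
    }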

Does hyperthreading lead to unstable systems?

I'm building a PC with the new Intel i7 quad-core processor. With hyperthreading turned on it will report 8 cores in Task Manager.
Some of my colleagues are saying that hyperthreading will make the system unreliable and suggest turning it off.
Can any of you good people enlighten me and the rest of the stackoverflow users?
Follow-up: I've been using hyperthreading constantly, and it's been spot on. No instability whatsoever. I'm using:
Microsoft Server 2008 64 bit
Microsoft SQL Server 2008 64 bit
Microsoft Visual Studio 2008
Diskeeper Server
Lots of controls (Telerik, Dundas, Rebex, Resharper)
Stability isn't likely to be affected, since the abstraction is very low level and the OS just sees it as another CPU to provide work to. However, performance is another matter.
In all honesty I can't say if this is still the case, but at least when the HT-enabled CPUs first came out, there were known problems with at least some applications. For example, MySQL and multi-threaded apps like the Java application I support for my day job were known to have decreased performance when HT was enabled. We always recommended it be disabled, at least for our particular use case of a server-side enterprise application.
It's possible that this is no longer an issue, and in a desktop environment this is less likely to be a problem for most use cases. The ability to split work on the CPU would generally lead to more responsive applications when the CPU is heavily utilized. However, the context switching and overhead could be a detriment when the app is already heavily threaded and CPU-intensive, such as in the case of a database server.
Off the top of my head I can think of a few reasons your colleagues might say this.
Several articles about SQL performance suffering under hyperthreading. I believe it winds up doing too much context switching or cache thrashing; I can't remember exactly.
Early on, going from single proc to multi-proc (or, more likely for most people, hyperthreaded procs) brought many threading issues into the open: race conditions, deadlocks, etc. that they had never seen before. Even though it's a code problem, some people blamed the procs.
Are they making the same claims about multi-core/multi-proc or just about hyperthreaded?
As for me, I've been developing on a hyperthreaded box for 4 years now, only problem has been a UI deadlock issue of my own making.
Hyperthreading will mainly make a difference in scheduler behaviour/performance when dispatching threads to the same CPU as opposed to a different CPU...
It will show in a badly coded application that does not handle race conditions between threads...
So it is usually bad design/code... that suddenly finds a failure mode.
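A minimal illustration of the kind of latent bug that extra hardware threads tend to expose (a sketch, not taken from any application mentioned here): the unsynchronized counter below almost never reaches the expected total once the two threads genuinely run in parallel.

    // "count++" is a read-modify-write, so updates from the two threads can
    // overwrite each other; the printed total usually falls short of 2,000,000.
    public class RaceDemo {
        static int count = 0;   // shared and unsynchronized on purpose

        public static void main(String[] args) throws InterruptedException {
            Runnable work = new Runnable() {
                public void run() {
                    for (int i = 0; i < 1000000; i++) {
                        count++;                    // not atomic
                    }
                }
            };
            Thread a = new Thread(work);
            Thread b = new Thread(work);
            a.start();
            b.start();
            a.join();
            b.join();
            System.out.println("expected 2000000, got " + count);
        }
    }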
Unreliable? I doubt it. The only disadvantage of hyperthreading that I can think of is the fact that if the OS is not aware of it, it might schedule two threads on one physical processor when other physical processors are idle, which will degrade performance.
There was a problem with SQL Server and hyperthreading for some queries because SQL Server has its own scheduler; MAXDOP 1 would solve that.
To whatever degree Windows is unstable, it's highly unlikely that hyperthreading contributes significantly (or it would have made big news by now.)
I've had a hyperthreading PC for a couple years now. Not that many cores, but it's worked fine for me.
Wish I had test data to prove your colleagues wrong, but it sounds like it's just my opinion versus theirs at this point. ;)
The threads in a hyperthreaded CPU share the same cache, and as such don't suffer from the cache coherency problems that a multiple-CPU architecture can. Though, if the developer of a piece of software is programming with multiple CPUs in mind, they will (or should) be writing with write-through semantics (iirc, that's the term), i.e. all writes are flushed from the cache immediately.
As far as I know, the OS doesn't see hyperthreading as any different from having actual multiple cores; from its point of view there is no difference.
So, aside from the fact that hyperthreading's "extra cores" aren't "real" (in the strictly technical sense) and don't have the full performance of "real" CPU cores, I can't see that it'd be any less reliable. Slower, perhaps, in some rare instances, but not less reliable.
Of course, it depends on what you're running - I suppose some applications might get "down & dirty" with the CPU and hyperthreading might confuse them, but that's probably pretty rare.
I myself have been running a PC with hyperthreading for several years now, and I have seen no stability problems.
Sorry I don't have more concrete data!
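As a small illustration of the point above that the OS just sees more CPUs: the JVM reports logical processors the same way Task Manager does, so a quad-core i7 with hyperthreading enabled prints 8 here (a trivial sketch):

    // availableProcessors() returns logical processors, hyperthreads included.
    public class Cores {
        public static void main(String[] args) {
            System.out.println("logical processors: " + Runtime.getRuntime().availableProcessors());
        }
    }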
I own an i7 system, and I haven't had any issues.
If it works w/ multiple cores, it works with hyperthreading.
The short answer: yes.
The long answer, as with almost every question, is "it depends". Depends on the OS, the software, the CPU revision, etc. I have personally had to disable hyperthreading on two occasions to get software working properly (one, with the Synergy application, and two, with the Windows NT 4.0 installer), but your mileage may vary.
As long as you get windows installed detecting multiple HT cores from the beginning (it loads some relevant drivers and such), you can always disable (and re-enable) HT "after the fact". If you have bizarre stability issues with specific software that you can't resolve, it's not hard to disable HT to see if it has any impact.
I wouldn't disable it to start with because, frankly, it will probably work fine in 99.99% of your daily use. But be aware that yes, it can occasionally cause bizarre behaviors, so don't rule it out if you happen to be troubleshooting something very odd down the road.
Personally, I've found that hyperthreading, while not causing any problems, doesn't actually help all that much either. It might be like having an extra .1 of a processor. On my HT machine at work, I only very seldom see my CPU go above 50%. I don't know if HT has gotten any better with newer processors like the i7, but I'm not optimistic.
Other than hearing a few reports about SQL Server, all I can report is positive. I get about 25% better performance on heavy multi-threaded apps with HT on. Have never run into a problem with it, and I'm using a first generation HT processor...
Late to the party, but for future reference:
I'm currently having an issue with this with SQL Server. Basically, my understanding is that the hyperthreads on the same physical processor share the same L1 & L2 cache, which can cause issues between the two. Citrix also appears to have this problem, from what I'm reading.
Slava Ok wrote a good blog post on it.
I'm here very late but found this page via Google. I may have discovered a very subtle problem. I have an i7 950 running 2003 Server and it's great. Initially I left hyperthreading on in the BIOS, but during some testing and pushing things hard, I ran a program called "crashme" by Carrette. This program tries to crash an OS by spawning processes and feeding them garbage to try to run. My dual Opteron setup ran it forever without a problem, but the 950 crashed within the hour. It didn't crash for anything else unless I did something stupid, so it was very surprising. On a whim I turned off HT and ran the program again. It runs all night, even multiple instances of it. One anecdote doesn't mean much, but try it and see what happens. Also, it seems that the processor runs slightly cooler at any given load if HT is turned off. YMMV.

Build Server Hardware Configuration

So I've seen this question, but I'm looking for some more general advice: How do you spec out a build server? Specifically what steps should I take to decide exactly what processor, HD, RAM, etc. to use for a new build server. What factors should I consider to decide whether to use virtualization?
I'm looking for general steps I need to take to come to the decision of what hardware to buy. Steps that lead me to specific conclusions - think "I will need 4 gigs of ram" instead of "As much RAM as you can afford"
P.S. I'm deliberately not giving specifics because I'm looking for the teach-a-man-to-fish answer, not an answer that will only apply to my situation.
The answer is: whatever requirements the machine will need in order to "build" your code. That is entirely dependent on the code you're talking about.
If it's a few thousand lines of code, then just pull that old desktop out of the closet. If it's a few billion lines of code, then speak to the bank manager about giving you a loan for a blade enclosure!
I think the best place to start with a build server, though, is to buy yourself a new developer machine and then rebuild your old one to be your build server.
I would start by collecting some performance metrics on the build on whatever system you currently use to build. I would specifically look at CPU and memory utilization, the amount of data read and written from disk, and the amount of network traffic (if any) generated. On Windows you can use perfmon to get all of this data; on Linux, you can use tools like vmstat, iostat and top. Figure out where the bottlenecks are -- is your build CPU bound? Disk bound? Starved for RAM? The answers to these questions will guide your purchase decision -- if your build hammers the CPU but generates relatively little data, putting in a screaming SCSI-based RAID disk is a waste of money.
You may want to try running your build with varying levels of parallelism as you collect these metrics as well. If you're using gnumake, run your build with -j 2, -j 4 and -j 8. This will help you see if the build is CPU or disk limited.
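If you want to automate that comparison, a rough sketch (assuming a make-based build in the current directory; targets and -j levels are placeholders for your own build) that times a clean rebuild at each parallelism level:

    // A build whose times stop improving past -j2 is probably disk- rather than
    // CPU-limited.
    public class BuildTiming {
        public static void main(String[] args) throws Exception {
            for (int jobs : new int[] {2, 4, 8}) {
                run("make", "clean");                          // start from a clean tree
                long start = System.nanoTime();
                int exit = run("make", "-j" + jobs);
                long seconds = (System.nanoTime() - start) / 1000000000L;
                System.out.println("-j" + jobs + ": " + seconds + " s (exit " + exit + ")");
            }
        }

        private static int run(String... cmd) throws Exception {
            return new ProcessBuilder(cmd).inheritIO().start().waitFor();
        }
    }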
Also consider the possibility that the right build server for your needs might actually be a cluster of cheap systems rather than a single massive box -- there are lots of distributed build systems out there (gmake/distcc, pvmgmake, ElectricAccelerator, etc) that can help you leverage an array of cheap computers better than you could a single big system.
Things to consider:
How many projects are going to be expected to build simultaneously? Is it acceptable for one project to wait while another finishes?
Are you going to do CI or scheduled builds?
How long do your builds normally take?
What build software are you using?
Most web projects are small enough (build times under 5 minutes) that buying a large server just doesn't make sense.
As an example,
We have about 20 devs actively working on 6 different projects. We are using a single TFS Build server running CI for all of the projects. They are set to build on every check in.
All of our projects build in under 3 minutes.
The build server is a single quad core with 4GB of RAM. The primary reason we use it is to perform dev and staging builds for QA. Once a build completes, that application is auto-deployed to the appropriate server(s). It is also responsible for running unit and web tests against those projects.
The type of build software you use is very important. TFS can take advantage of each core to parallel build projects within a solution. If your build software can't do that, then you might investigate having multiple build servers depending on your needs.
Our shop supports 16 products that range from a few thousand lines of code to hundreds of thousands of lines (maybe a million+ at this point). We use 3 HP servers (about 5 years old), dual quad core with 10GB of RAM. The disks are 7200 RPM SCSI drives. All compiled via msbuild on the command line with parallel compilation enabled.
With that setup, our biggest bottleneck by far is the disk I/O. We will completely wipe our source code and re-checkout on every build, and the delete and checkout times are really slow. The compilation and publishing times are slow as well. The CPU and RAM are not remotely taxed.
I am in the process of refreshing these servers, so I am going the route of workstation-class machines, going with 4 instead of 3, and replacing the SCSI drives with the best/fastest SSDs I can afford. If you have a setup similar to this, then disk I/O should be a consideration.
