Please excuse my inexperience, this is my first time on the site.
I have a Dell PowerEdge R710 with 2 Xeon L5630 CPUs and 16 GB of RAM installed. I'm trying to host a Minecraft 1.7.10 Forge server that runs perfectly fine on my desktop, but refuses to run properly on the server.
This machine is running Java 8 and runs perfectly otherwise. When running the application without the mods, it loads up without a hitch. As I add more mods, it gets worse. As far as my (very, very limited) knowledge goes, the order of the JVM arguments shouldn't matter, and it didn't on my desktop, but in order to get the application to even run I had to change the order in my .bat file. With all mods installed, the Out Of Memory error occurs along with a chunk loading error when the spawn area is around 41% loaded.
This is the .bat file that I've made to start the server:
java -jar minecraft_server.jar -Xms512M -Xmx8192M nogui -XX:+HeapDumpOnOutOfMemory
This should load up perfectly fine; everything is compatible and was tested on another machine. Yet the exact same setup will not run on the R710, reporting Out Of Memory with more than double the memory the desktop had allocated.
First, you should use Task Manager or a similar utility to make sure that the Java process is indeed using more than the amount you allocated with your arguments. Then I would recommend reading through this lovely post written by Cpw and posted on Reddit. If it doesn't help with your current situation, it should at least give you a bit more information on Minecraft's memory footprint.
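One quick way to confirm this (a sketch, reusing the jar name and flags from the question): JVM options placed after -jar minecraft_server.jar are handed to the server as program arguments rather than to the JVM, so the heap flags should come before -jar, and the full spelling of the dump flag is -XX:+HeapDumpOnOutOfMemoryError. The jcmd tool bundled with Java 8 will show the values the running process actually picked up:

java -Xms512M -Xmx8192M -XX:+HeapDumpOnOutOfMemoryError -jar minecraft_server.jar nogui
jcmd <pid> VM.flags

Running jcmd with no arguments lists the Java process IDs you can substitute for <pid>.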
In a normal situation where you would be running Minecraft as a local server from your computer I would suggest taking a look at how much memory your GPU is taking up. Since you are running a server this is not relevant, but might still be useful to someone who stumbles upon this post so I will leave it here:
Your graphics card is probably the biggest address hog. Today's graphics adapters often contain a gigabyte or more of RAM, and every one of those bytes needs an address. To be fair, I doubt that many of those multi-gigabyte graphics cards are in 32-bit PCs, but even a 512mb video card will take a sizeable bite out of 4GB.
I am not that familiar with running dedicated servers, but another important thing worth mentioning is that if you are on a 32-bit operating system you will only be able to take advantage of 4 GB of your RAM due to architecture constraints.
Every byte of RAM requires its own address, and the processor limits the length of those addresses. A 32-bit processor uses addresses that are 32 bits long. There are only 4,294,967,296, or 4GB, possible 32-bit addresses.
If all else fails, you should try to seek help on one of the Discord channels dedicated to Minecraft modding. This should be the rule in general, actually, especially for problems that are difficult for others to reproduce. Here is a small list of three Discord communities dedicated to Minecraft modding that I have experience with:
Modded Minecraft - The one with the most traffic, so it can be a bit more difficult for your question to get noticed on busy days, but definitely the best moderated one on this list.
Modding Help - The smallest of the three. I don't have much experience with this one.
Mod Dev Cafe - This one has a decent size and a pretty good response rate, but be prepared for the usual facepalms and other unpleasantness common to younger admins and moderators. However, if you are willing to look past that, this is a good choice.
When I'm building my web project it takes about 20 seconds to compile. Then when I try to browse to a web page in the project, ASP.NET does its runtime compilation (another 20 seconds). I know I can't escape these steps because that's how ASP.NET works; I just want to see if anyone has some kind of optimization to make these builds faster.
Trying to improve my Edit-Compile-Test loop
My machine details:
- Intel Core i7 processor @ 2.80 GHz
- 8 GB of RAM
- HD @ 7200 RPM
Buy a faster machine? Sounds like a smart answer. I know that the compiler can take advantage of multi-core machines. Also, during compilation there's a lot of hard drive access, so it may make sense to get a solid-state drive. Maybe not the answer you are looking for, but it's a definite solution.
The other thing you can do is configure your project to allow "Edit and Continue". This will allow small things to be changed, and debugging to continue, without doing a full recompile.
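If you want to check whether the compiler is actually spreading the build across your cores, a small sketch (the solution name is just a placeholder) is to build from the command line with MSBuild's multi-processor switch and compare timings against a plain build:

msbuild MySolution.sln /m /p:Configuration=Debug

Inside Visual Studio, the equivalent knob is Tools > Options > Projects and Solutions > Build and Run > "maximum number of parallel project builds".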
Here are a couple of thoughts:
Disable any "realtime" virus / malware protection, at least during this process (one way to scope this to your project folders is sketched a bit further down).
Disable indexing (Windows, Google desktop, etc.) for the folders that VS uses during this process.
Disable / stop other processes that may be accessing the hard disk. The biggest issue here is latency - even if other applications are accessing / writing tiny files, it is the access time that kills speed.
As the original poster suggested, your biggest bang will come from hardware: get an SSD and a processor with at least 4 cores. If you were to buy 4 cheap 64 GB SSDs and put them in RAID 0, you would be shocked at the difference, and you may even discover that your CPU and RAM have suddenly become the bottlenecks.
Move your code onto a RAMDisk, or buy an SSD drive.
Suspend Resharper - R# helps tremendously when you're just coding but really slows down the Edit-Compile-Test loop.
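For the antivirus point above, one hedged example (this assumes Windows Defender on a newer version of Windows and a made-up project path; other scanners have their own exclusion settings) is to exclude your source and build output folders from realtime scanning instead of switching protection off entirely:

powershell -Command "Add-MpPreference -ExclusionPath 'C:\dev\MyProject'"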
SandCastle is unable to process our class library; it crashes with an OutOfMemoryException during XSL transformation.
What can we do about that, except for the obvious: buy more memory? The problem is that these are our developer machines with 3.3 GB of memory on a 32-bit OS, so basically we either have to upgrade to 64-bit and more memory (which won't happen for a while) or set up a virtual server with lots of memory to do this (which will impact the production servers).
I seriously doubt we have the biggest class library in the world that requires help files, so what options do we have? Is there a magic "Do not crash with out of memory errors" setting that we forgot to turn off?
If you're on 32-bit Windows, your user process will only be able to address 2 GB of memory by default (3 GB if run as large address aware). Those 2 GB are used for everything in the process, so the .NET runtime, standard libraries, bookkeeping and so forth all take their chunk. In my experience that leaves about 1.5 GB for .NET applications on 32-bit Windows.
You can get access to more memory by using the large address aware switch, but it doesn't come for free if you're on 32 bit Windows. Moving to 64 bit Windows will let your 32 bit application access the entire 32 bit address space and thus give you 4 GB addressable space.
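If the tool that runs out of memory is a 32-bit executable you control (or can patch), you can set the large-address-aware flag yourself with editbin from the Visual Studio command prompt; this is only a sketch, and the executable name below is a placeholder for whichever SandCastle process is actually failing:

editbin /LARGEADDRESSAWARE SomeTool.exe

Remember that on 32-bit Windows this only helps once the OS is booted with the extended user address space (the /3GB switch in boot.ini on XP/2003, or bcdedit /set increaseuserva 3072 on Vista and later); on 64-bit Windows the flag alone gives the process the full 4 GB.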
I've recently written a blog entry with details about memory usage for .NET applications, but since I blog in Danish, you may not be able to read it. However, if you want to have a look, the link is: http://kodehoved.dk/?p=156
You may also find this MSDN blog post relevant: http://blogs.msdn.com/maoni/archive/2007/05/15/64-bit-vs-32-bit.aspx
I'm building a PC with the new Intel I7 quad core processor. With hyperthreading turned on it will report 8 cores in Task Manager.
Some of my colleagues are saying that hyperthreading will make the system unreliable and suggest turning it off.
Can any of you good people enlighten me and the rest of the Stack Overflow users?
Follow-up: I've been using hyperthreading constantly, and it's been spot on. No instability whatsoever. I'm using:
Microsoft Server 2008 64 bit
Microsoft SQL Server 2008 64 bit
Microsoft Visual Studio 2008
Diskeeper Server
Lots of controls (Telerik, Dundas, Rebex, Resharper)
Stability isn't likely to be affected, since the abstraction is very low level and the OS just sees it as another CPU to provide work to. However, performance is another matter.
In all honesty I can't say if this is still the case, but at least when HT-enabled CPUs first came out, there were known problems with at least some applications. For example, MySQL and multi-threaded apps like the Java application I support for my day job were known to show decreased performance when HT was enabled. We always recommended it be disabled, at least for our particular use case of a server-side enterprise application.
It's possible that this is no longer an issue, and in a desktop environment it is less likely to be a problem for most use cases. The ability to split work on the CPU would generally lead to more responsive applications when the CPU is heavily utilized. However, the context switching and overhead can be a detriment when the app is already heavily threaded and CPU-intensive, such as in the case of a database server.
Off the top of my head I can think of a few reasons your colleagues might say this.
Several articles about SQL performance suffering under hyperthreading. I believe it winds up doing too much context switching or cache thrashing; I can't remember exactly.
Early on, going from single-proc to multi-proc (or, more likely for most people, to hyperthreaded procs) brought many threading issues into the open: race conditions, deadlocks, and so on that they had never seen before. Even though it's a code problem, some people blamed the procs.
Are they making the same claims about multi-core/multi-proc or just about hyperthreaded?
As for me, I've been developing on a hyperthreaded box for 4 years now; the only problem has been a UI deadlock issue of my own making.
Hyperthreading will mainly make a difference in the scheduler's behaviour/performance when dispatching threads to the same CPU as opposed to different CPUs.
It will show in a badly coded application that does not handle race conditions between threads.
So it is usually bad design/code that suddenly finds a failure mode.
Unreliable? I doubt it. The only disadvantage of hyperthreading that I can think of is that if the OS is not aware of it, it might schedule two threads on one physical processor while other physical processors are idle, which will degrade performance.
There was a problem with SQL Server and hyperthreading for some queries because SQL Server has its own scheduler; setting maxdop 1 would solve that.
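For reference, a hedged sketch of how to apply that server-wide from a command prompt (a per-query OPTION (MAXDOP 1) hint is the less drastic alternative):

sqlcmd -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
sqlcmd -Q "EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;"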
To whatever degree Windows is unstable, it's highly unlikely that hyperthreading contributes significantly (or it would have made big news by now.)
I've had a hyperthreading PC for a couple years now. Not that many cores, but it's worked fine for me.
Wish I had test data to prove your colleagues wrong, but it sounds like it's just my opinion versus theirs at this point. ;)
The threads in a hyperthreaded CPU share the same cache, and as such don't suffer from the cache-consistency problems that a multiple-CPU architecture can. That said, if the developer of a piece of software is programming with multiple CPUs in mind, they will (or should) be writing with the appropriate write-visibility / memory-barrier semantics, i.e. making sure writes are flushed from the cache and become visible to other processors.
As far as I know, the OS doesn't see hyperthreading as any different from having actual multiple cores; from its point of view the logical processors are just more CPUs.
So, aside from the fact that hyperthreading's "extra cores" aren't "real" (in the strictly technical sense) and don't have the full performance of "real" CPU cores, I can't see that it'd be any less reliable. Slower, perhaps, in some rare instances, but not less reliable.
Of course, it depends on what you're running - I suppose some applications might get "down & dirty" with the CPU and hyperthreading might confuse them, but that's probably pretty rare.
I myself have been running a PC with hyperthreading for several years now, and I have seen no stability problems.
Sorry I don't have more concrete data!
I own an i7 system, and I haven't had any issues.
If it works w/ multiple cores, it works with hyperthreading.
The short answer: yes.
The long answer, as with almost every question, is "it depends". Depends on the OS, the software, the CPU revision, etc. I have personally had to disable hyperthreading on two occasions to get software working properly (one, with the Synergy application, and two, with the Windows NT 4.0 installer), but your mileage may vary.
As long as Windows was installed with multiple HT cores detected from the beginning (so it loads the relevant drivers and such), you can always disable (and re-enable) HT after the fact. If you have bizarre stability issues with specific software that you can't resolve, it's not hard to disable HT to see if it has any impact.
I wouldn't disable it to start with because, frankly, it will probably work fine in 99.99% of your daily use. But be aware that yes, it can occasionally cause bizarre behaviors, so don't rule it out if you happen to be troubleshooting something very odd down the road.
Personally, I've found that hyperthreading, while not causing any problems, doesn't actually help all that much either. It might be like having an extra 0.1 of a processor. On my HT machine at work, I only very seldom see my CPU go above 50%. I don't know if HT has gotten any better with newer processors like the i7, but I'm not optimistic.
Other than hearing a few reports about SQL Server, all I can report is positive. I get about 25% better performance on heavy multi-threaded apps with HT on. Have never run into a problem with it, and I'm using a first generation HT processor...
Late to the party, but for future reference:
I'm currently having an issue with this with SQL Server. Basically, my understanding is that the two hyperthreads on the same physical processor share the same L1 and L2 cache, which can cause issues between the two. Citrix also appears to have this problem, from what I'm reading.
Slava Ok wrote a good blog post on it.
I'm here very late but found this page via Google. I may have discovered a very subtle problem. I have an i7 950 running 2003 Server and it's great. Initially I left hyperthreading on in the BIOS, but during some testing and pushing things hard, I ran a program called "crashme" by Carrette. This program tries to crash an OS by spawning a process and feeding it garbage to try to run. My dual Opteron setup ran it forever without a problem, but the 950 crashed within the hour. It didn't crash for anything else unless I did something stupid, so it was very surprising. On a whim I turned off HT and ran the program again. It ran all night, even multiple instances of it. One anecdote doesn't mean much, but try it and see what happens. Also, it seems that the processor runs slightly cooler at any given load with HT turned off. YMMV.
So I've seen this question, but I'm looking for some more general advice: How do you spec out a build server? Specifically what steps should I take to decide exactly what processor, HD, RAM, etc. to use for a new build server. What factors should I consider to decide whether to use virtualization?
I'm looking for general steps I need to take to come to the decision of what hardware to buy. Steps that lead me to specific conclusions - think "I will need 4 gigs of ram" instead of "As much RAM as you can afford"
P.S. I'm deliberately not giving specifics because I'm looking for the teach-a-man-to-fish answer, not an answer that will only apply to my situation.
The answer comes down to what the machine will need in order to build your code, and that is entirely dependent on the code you're talking about.
If it's a few thousand lines of code, then just pull that old desktop out of the closet. If it's a few billion lines of code, then speak to the bank manager about a loan for a blade enclosure!
I think the best place to start with a build server, though, is to buy yourself a new developer machine and then rebuild your old one as the build server.
I would start by collecting some performance metrics on the build on whatever system you currently use to build. I would specifically look at CPU and memory utilization, the amount of data read and written from disk, and the amount of network traffic (if any) generated. On Windows you can use perfmon to get all of this data; on Linux, you can use tools like vmstat, iostat and top. Figure out where the bottlenecks are -- is your build CPU bound? Disk bound? Starved for RAM? The answers to these questions will guide your purchase decision -- if your build hammers the CPU but generates relatively little data, putting in a screaming SCSI-based RAID disk is a waste of money.
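On Windows, a quick sketch of capturing those counters from the command line with typeperf while a build runs (the counters are the standard ones; adjust the disk instance and output path to taste):

typeperf "\Processor(_Total)\% Processor Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Memory\Available MBytes" -si 5 -o buildmetrics.csv

Press Ctrl+C when the build finishes and graph the CSV to see which resource saturates first.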
You may want to try running your build with varying levels of parallelism as you collect these metrics as well. If you're using gnumake, run your build with -j 2, -j 4 and -j 8. This will help you see if the build is CPU or disk limited.
Also consider the possibility that the right build server for your needs might actually be a cluster of cheap systems rather than a single massive box -- there are lots of distributed build systems out there (gmake/distcc, pvmgmake, ElectricAccelerator, etc) that can help you leverage an array of cheap computers better than you could a single big system.
Things to consider:
How many projects are going to be expected to build simultaneously? Is it acceptable for one project to wait while another finishes?
Are you going to do CI or scheduled builds?
How long do your builds normally take?
What build software are you using?
Most web projects are small enough (build times under 5 minutes) that buying a large server just doesn't make sense.
As an example,
We have about 20 devs actively working on 6 different projects. We are using a single TFS Build server running CI for all of the projects. They are set to build on every check in.
All of our projects build in under 3 minutes.
The build server is a single quad core with 4 GB of RAM. The primary reason we use it is to produce dev and staging builds for QA. Once a build completes, that application is auto-deployed to the appropriate server(s). It is also responsible for running unit and web tests against those projects.
The type of build software you use is very important. TFS can take advantage of each core to parallel build projects within a solution. If your build software can't do that, then you might investigate having multiple build servers depending on your needs.
Our shop supports 16 products that range from a few thousand lines of code to hundreds of thousands of lines (maybe a million+ at this point). We use 3 HP servers (about 5 years old), dual quad core with 10 GB of RAM. The disks are 7,200 RPM SCSI drives. Everything is compiled via msbuild on the command line with parallel compilation enabled.
With that setup, our biggest bottleneck by far is disk I/O. We completely wipe our source code and re-check it out on every build, and the delete and checkout times are really slow. The compilation and publishing times are slow as well. The CPU and RAM are not remotely taxed.
I am in the process of refreshing these servers, so I am going the route of workstation-class machines, going with 4 instead of 3 and replacing the SCSI drives with the best/fastest SSDs I can afford. If you have a setup similar to this, then disk I/O should be a consideration.