When I make a change to a JavaScript file and save it, it takes over 5 seconds to build and restart the development server, even for a simple 10-line example app. I am new to Meteor.js, so I don't know if this is normal, but I thought changes were supposed to appear almost instantly (within a second or two) in the browser? 5-6 seconds feels like a pretty long time to me.
Selecting package versions and downloading packages seem to take up most of the time.
There is one pending websocket (Chrome DevTools Network tab) while it's restarting. I'm using Meteor 1.0.
That's a known problem they're working on. You can read about it and follow the progress in issue #2846.
There is a new issue about this: https://github.com/meteor/meteor/issues/4284
Upgrading to a pre-release of 1.3 seems like one of the best options right now:
meteor update --release METEOR#1.3-modules-beta.8
And then upgrading to 1.3 when it comes out (it should arrive around March or April 2016).
Quick measurements to get a rough idea:
BryanARivera says on the thread that updating to 1.3-modules-beta.8 took him from 6-10s down to 1-2s.
I tried on the https://github.com/wekan/wekan project by changing a view component (on an early-2013 MacBook Pro with an SSD):
with `METEOR#1.2.1` ~10s to reload
with `METEOR#1.2.2-faster-rebuilds.0` ~5s to reload
with `METEOR#1.3-modules-beta.8` ~4s to reload
This was my purely hardware solution to the problem of slow Meteor build times.
When I decided to get into development I was adamant that I wasn't going to run out and buy the latest and greatest in hardware until I actually knew how to code and knew what my requirements were in the long term.
So I went out and bought a used 15" Acer laptop; it had the following spec:
Memory: 6 GB RAM
Processor: Intel Pentium P6200 @ 2.13GHz × 2
OS: Ubuntu 16.04.1 LTS 32-bit
Storage: 153.5 GB HDD
With this setup I saw rebuild times of between 15s and 30s (including the browser refresh) on a side project using Meteor 1.4, React, and a MongoDB instance with around 1500 records. I found these times excruciatingly slow when making multiple changes. You can see the initial version of the project I was working on here.
After trying to get work done in cafés and libraries I realised that I was much happier working at home, and once the side project was complete I decided I would reward myself with an upgrade.
My initial choices were between a MacBook Pro and a gaming PC such as the Asus ROG, but since the former is very expensive and the latter's graphics capabilities were of no use to me (I'm not a gamer), I ruled them both out. These are both quality machines, and when I compared them against reviews of other PCs and laptops I noticed that where other systems scored higher on performance they scored lower on build quality, and vice versa; no overall winner had me sold.
I decided a self-build was in order, and that it would be great if I could fit it all on a mini-ITX board and case. My requirements became:
An SSD (faster read/write times than an HDD)
Ubuntu: on account of it being free and faster than macOS
Mini-ITX: so that it can sit unobtrusively on my desk
Quiet operation
16 GB DDR4 RAM: this seemed the bare minimum for development.
After some searching I came across these instructions for a $682 Skylake Mac Mini Hackintosh Build, which formed the basis of my build.
As I currently have no intention of working with macOS, I did not swap out the Wi-Fi card as the Hackintosh instructions suggest.
My new spec list along with what I paid for each item were as follows:
Intel Core i7-6700 Quad-Core 3.4GHz Processor £279.97
Crucial Ballistix Sport 16 GB kit (8 GB x 2) DDR4 2400MT/s UDIMM 288-Pin Memory £121.00
Samsung 850 Pro 250GB SSD £121.00
Gigabyte H-170 Motherboard £109.00
Noctua NH-L9i CPU Cooler £35.00
External PicoPSU Power Brick £24.00
MiniBox 160W PicoPSU £40.00
Streacom F1C-WS £90.00
Noctua NF-A4x10 FLX £10.99
Making a total spend of £830 (correct at time of writing).
The case comes in two finishes, black and silver; I went with the former to keep my monitor the main point of focus on my desk. It took me about an hour as a newbie to put it all together. I imagine it would be much faster if I did this for a living.
Pros:
My rebuild times were drastically reduced to only 2s using Meteor 1.4.
The PC as a whole takes only 15 seconds to boot up whereas the laptop could take as long as 2 minutes.
With the laptop, a Meteor reload would fully load the CPU, whereas on the PC the four cores barely rose above a few percent, as viewed in the System Monitor tool.
It has a footprint of 19cm × 19cm, which is smaller than my laptop's.
It was super quiet; the fans were barely noticeable on most days.
Cons:
The drawback of such a small case is that there is no room for a graphics card should I want to start gaming in the future.
All the ports are at the rear. However, this does give the front a clean look.
There is no reset button, but the power switch can be configured via the BIOS to act as a "soft" switch, so that pressing it brings up a dialogue giving you the option to restart/suspend/shutdown.
Conclusion:
Overall I am very happy with the choice of components I made and the money I saved by self-building. The PC is powerful enough for video editing, and I am working towards editing my first YouTube tutorials.
Being wiser about my needs, I am now using Docker to create isolated development environments and avoid the software conflicts that can themselves chew up time to resolve.
The component research I did made me aware of the massive progress that PC cases, coolers, cables, memory, and fans have made since I left college. You can now build something not only small and powerful but also attractive, with options like tempered glass, liquid coolers, custom braided cables, case strobe lighting, and shaped cases.
Related
Please excuse my inexperience, this is my first time on the site.
I have a Dell PowerEdge R710 with 2 Xeon L5630 CPUs and 16GB of RAM installed. I'm trying to host a Minecraft 1.7.10 Forge server that runs perfectly fine on my desktop, but refuses to run properly on the server.
This machine is running Java 8 and runs perfectly otherwise. When running the application without the mods, it loads up without a hitch. As I add more mods, it gets worse. As far as my (very, very limited) knowledge goes, the order of JVM arguments shouldn't matter, and didn't on my desktop, but in order to get the application to even run I had to change the order in my .bat file. With all mods installed, an OutOfMemoryError occurs with a chunk-loading error when the spawn is around 41% loaded.
This is the .bat file that I've made to start the server:
java -jar minecraft_server.jar -Xms512M -Xmx8192M nogui -XX:+HeapDumpOnOutOfMemory
This should load up perfectly fine; everything is compatible and tested on another machine, but the exact same setup will not run on the R710, reporting Out Of Memory with more than double the desktop's allocated memory.
First you should use Task Manager or a similar utility to make sure that the Java process is indeed getting the amount of memory you allocated with your arguments. Then I would recommend reading through this lovely post written by Cpw and posted on Reddit. If it doesn't help with your current situation, it should at least give you a bit more information on Minecraft's memory footprint.
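Incidentally, the argument order in the .bat file above matters more than it may seem: anything placed after -jar is passed to the application rather than to the JVM, so flags like -Xmx can end up silently ignored and the server runs with the default heap. A corrected launch line (note the full flag name is -XX:+HeapDumpOnOutOfMemoryError) would look something like this:

java -Xms512M -Xmx8192M -XX:+HeapDumpOnOutOfMemoryError -jar minecraft_server.jar nogui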
In a normal situation where you would be running Minecraft as a local server from your computer I would suggest taking a look at how much memory your GPU is taking up. Since you are running a server this is not relevant, but might still be useful to someone who stumbles upon this post so I will leave it here:
Your graphics card is probably the biggest address hog. Today's graphics adapters often contain a gigabyte or more of RAM, and every one of those bytes needs an address. To be fair, I doubt that many of those multi-gigabyte graphics cards are in 32-bit PCs, but even a 512mb video card will take a sizeable bite out of 4GB.
I am not quite familiar with running dedicated servers but another important thing that is worth mentioning is that in case you are on a 32-bit operating system you will only be able to take advantage of 4GB of your RAM due to architecture constraints.
Every byte of RAM requires its own address, and the processor limits the length of those addresses. A 32-bit processor uses addresses that are 32 bits long. There are only 4,294,967,296, or 4GB, possible 32-bit addresses.
If all else fails, you should try seeking help on one of the Discord channels dedicated to Minecraft modding. This is good advice in general, especially for problems that are difficult for others to reproduce. Here is a small list of three Discord communities dedicated to Minecraft modding that I have experience with:
Modded Minecraft - The one with most traffic so it can be a bit more difficult for your question to get noticed on busy days, but definitely the best moderated one from this list.
Modding Help - The smallest of the three. I don't have much experience with this one.
Mod Dev Cafe - This one has a decent size and a pretty good response rate, but be prepared for the usual facepalms and other unpleasantness common to younger admins and moderators. However, if you are willing to look past that, this is a good choice.
I am working on a project which is a standalone JavaFX application. It will run continuously, 24/7/365.
So I have a question in mind:
What do we need to consider so that this application runs smoothly, and with high performance, 24/7/365?
Please guide me regarding this.
Details of the environment, for reference:
Java version: 1.8.0_121
Available RAM: 2GB
Allocated memory for the application: -Xmx1524M
Hardware configuration: Processor - Intel Atom D425 @ 1.80GHz × 2
OS: 32-bit Fedora 15
I will probably state the obvious here, but OutOfMemory errors are the main thing you should worry about. A small glitch in your code could make your app die quickly or run extremely slowly under memory pressure.
I would say that you need to enable garbage-collection logs and monitor them. Also, is there a way for a JavaFX app to fail over to another instance if the current one runs into trouble? Tools exist for this for other kinds of apps, but I'm not sure about JavaFX. I mean: can you automatically shut down the current running application (collecting its heap data so you can analyze later what actually happened) and automatically start a new one? It might not be feasible, and if it's not, you should run enough stress tests before you actually launch it into production.
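As a rough sketch (these are standard HotSpot flags for Java 8; the jar name and file paths are placeholders), enabling GC logging plus an automatic heap dump on OutOfMemoryError could look like:

java -Xmx1524M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myapp/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/myapp/heap.hprof -jar myapp.jar

The gc.log can then be inspected (or fed to a GC log analyzer) to spot steadily growing heap usage long before the app actually dies.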
One thing you should check first is whether your system suffers from the notorious memory problems that some Linux graphics drivers have. See for example my answer to this question here on SO:
Javafx growing memory usage when drawing image
I've been using R 3.1.2 on an early-2014 13" MacBook Air with 8GB of RAM and a 1.7GHz Intel Core i7, running OS X Mavericks.
Recently, I've started to work with substantially larger data frames (2+ million rows and 500+ columns) and I am running into performance issues. In Activity Monitor, I'm seeing virtual memory sizes of 64GB, 32GB paging files, etc. and the "memory pressure" indicator is red.
Can I "throw more hardware" at this problem? Since the MacBook Air tops out at 8GB of physical memory, I was thinking about buying a Mac Pro with 64GB of memory. Before I spend $5K+, I wanted to ask if there are any inherent limitations in R other than the ones I've read about here: R Memory Limits, or if anyone who has a Mac Pro has experienced any issues running R/RStudio on it. I've searched using Google and haven't come up with anything specific about running R on a Mac Pro.
Note that I realize I'll still be using 1 CPU core unless I rewrite my code. I'm just trying to solve the memory problem first.
Several thoughts:
1) It's a lot more cost-effective to use a cloud service like https://www.dominodatalab.com (not affiliated). Amazon AWS would also work; the benefit of Domino is that it takes the work out of managing the environment so you can focus on the data science.
2) You might want to redesign your processing pipeline so that not all your data needs to be loaded in memory at the same time (soon you will find you need 128GB, and then what?). Read up on memory mapping, using databases, and separating your pipeline into several steps that can be executed independently of each other, as sketched below (googling brought up http://user2007.org/program/presentations/adler.pdf). Running out of memory is a common problem when working with real-life datasets; throwing more hardware at the problem is not always your best option (though sometimes it really can't be avoided).
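For instance, a staged pipeline can be driven from the shell, with each step writing an intermediate file to disk so that only one step's working set has to fit in memory at a time (the script names here are hypothetical):

Rscript 01_filter.R     # reduce the raw data to just the rows/columns needed
Rscript 02_aggregate.R  # summarise the filtered data into a much smaller table
Rscript 03_model.R      # fit the model on the reduced dataset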
Do Chromebooks offer adequate programming capabilities offline?
I can never guarantee my WiFi access.
I know I can access local files, and ChromeOS is Linux-based; what does this mean for programming offline?
Also, I am returning to obtain my MSc in IT. Would this be a good purchase for such a cause? I am focusing on web development (HTML, JavaScript, Rails).
I want to know specifically whether a Chromebook (I have my eye on the Acer C720) can get the work done. True, I'll probably rarely ever be offline, but I want to know if I'll be able to both edit code and run it to troubleshoot.
My main points: editing and running code on a Chromebook. Also, could I work around the drawbacks by running Windows or Linux (e.g. Ubuntu, Mint)? Thanks for any advice.
I use an Acer C720 Chromebook (2GB RAM, 16GB SSD) as my Meteor (JavaScript, HTML, CSS, MongoDB) development machine. The specs may sound poor, but in reality - thanks to the fantastic Haswell chip - the laptop is great.
I have Xubuntu installed instead of ChromeOS... so maybe that is not a real answer to your question.
It's a fantastic little machine - long battery life and boots in a few seconds. I tried Bodhi Linux first but find Xubuntu better for my needs.
I expanded the storage with a tiny leave-in UltraFit 64GB USB 3.0 flash drive. Amazing device.
I use an HDMI monitor when doing longer coding sessions.
Device cost me $150 on eBay and around $25 for the USB key.
I use the free http://komodoide.com/komodo-edit/ as my editor.
If you feel like taking the plunge and converting from ChromeOS to Xubuntu, these two links may help:
BIOS changes: https://blogs.fsfe.org/the_unconventional/2014/09/19/c720-coreboot/
Xubuntu distribution: https://www.distroshare.com/distros/get/14/
Good luck and enjoy!
So I've seen this question, but I'm looking for some more general advice: How do you spec out a build server? Specifically what steps should I take to decide exactly what processor, HD, RAM, etc. to use for a new build server. What factors should I consider to decide whether to use virtualization?
I'm looking for the general steps I need to take to reach a decision on what hardware to buy - steps that lead to specific conclusions, think "I will need 4 gigs of RAM" instead of "as much RAM as you can afford".
P.S. I'm deliberately not giving specifics because I'm looking for the teach-a-man-to-fish answer, not an answer that will only apply to my situation.
The answer is: whatever requirements the machine needs in order to "build" your code. That is entirely dependent on the code you're talking about.
If it's a few thousand lines of code, then just pull that old desktop out of the closet. If it's a few billion lines of code, then speak to the bank manager about a loan for a blade enclosure!
I think the best place to start with a build server, though, is to buy yourself a new developer machine and then rebuild your old one as your build server.
I would start by collecting some performance metrics on the build on whatever system you currently use to build. I would specifically look at CPU and memory utilization, the amount of data read and written from disk, and the amount of network traffic (if any) generated. On Windows you can use perfmon to get all of this data; on Linux, you can use tools like vmstat, iostat and top. Figure out where the bottlenecks are -- is your build CPU bound? Disk bound? Starved for RAM? The answers to these questions will guide your purchase decision -- if your build hammers the CPU but generates relatively little data, putting in a screaming SCSI-based RAID disk is a waste of money.
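On Linux, for example, you could sample the system from another terminal while a build runs (these are the standard tools mentioned above; the 5 is the sampling interval in seconds):

vmstat 5       # CPU, memory and swap activity
iostat -x 5    # per-device disk utilisation and wait times
top            # per-process CPU and memory usage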
You may want to try running your build with varying levels of parallelism as you collect these metrics as well. If you're using gnumake, run your build with -j 2, -j 4 and -j 8. This will help you see if the build is CPU or disk limited.
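A quick way to run that experiment, assuming a make-based build, is to time a clean build at each level of parallelism:

make clean && time make -j2
make clean && time make -j4
make clean && time make -j8   # if wall time stops improving here, you're likely disk bound rather than CPU bound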
Also consider the possibility that the right build server for your needs might actually be a cluster of cheap systems rather than a single massive box -- there are lots of distributed build systems out there (gmake/distcc, pvmgmake, ElectricAccelerator, etc) that can help you leverage an array of cheap computers better than you could a single big system.
Things to consider:
How many projects are going to be expected to build simultaneously? Is it acceptable for one project to wait while another finishes?
Are you going to do CI or scheduled builds?
How long do your builds normally take?
What build software are you using?
Most web projects are small enough (build times under 5 minutes) that buying a large server just doesn't make sense.
As an example,
We have about 20 devs actively working on 6 different projects. We are using a single TFS Build server running CI for all of the projects. They are set to build on every check-in.
All of our projects build in under 3 minutes.
The build server is a single quad-core with 4GB of RAM. The primary reason we use it is to perform dev and staging builds for QA. Once a build completes, that application is auto-deployed to the appropriate server(s). It is also responsible for running unit and web tests against those projects.
The type of build software you use is very important. TFS can take advantage of each core to build the projects within a solution in parallel. If your build software can't do that, then you might investigate having multiple build servers, depending on your needs.
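With msbuild, for example, parallel project builds within a solution are enabled with the /m (maxcpucount) switch; the solution name here is a placeholder:

msbuild MySolution.sln /m:4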
Our shop supports 16 products that range from a few thousand lines of code to hundreds of thousands of lines (maybe a million+ at this point). We use 3 HP servers (about 5 years old), dual quad-core with 10GB of RAM. The disks are 7200 RPM SCSI drives. Everything is compiled via msbuild on the command line with parallel compilation enabled.
With that setup, our biggest bottleneck by far is disk I/O. We completely wipe the source tree and check it out again on every build, and the delete and checkout times are really slow. The compilation and publishing times are slow as well. The CPU and RAM are not remotely taxed.
I am in the process of refreshing these servers, so I am going the route of workstation-class machines, going with 4 instead of 3, and replacing the SCSI drives with the best/fastest SSDs I can afford. If you have a setup similar to this, then disk I/O should be a consideration.