I have upwards of 30 Dell GX2xx models doing nothing. I've decided on using them to build a cluster, but I am lost as far as getting started. I've used ClusterKnoppix, and even straight OpenMosix in the past, but those projects are, very sadly, dead now.
I've checked out Xen, to an extent. I don't know if Xen is the solution I need. I'd like the ability to spin up a few VMs (when I need them) in a server pool, with the VMs drawing on the pool's resources so that I don't have to care which node they run on.
I need some insight here... Thanks all!
Xen is not itself going to manage the whole cluster; it acts on each individual machine to instantiate, manage, and delete the VMs.
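For a sense of what that per-machine management looks like, here is a hedged sketch using the classic xm toolstack (the guest name and config path are placeholders):

xm create /etc/xen/vm1.cfg    # start a guest on this host
xm list                       # list the guests running on this host
xm shutdown vm1               # stop the guest again

Each of these commands only affects the host it is run on, which is why you need a separate layer to manage the pool as a whole.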
You can have a look at Eucalyptus if you want to build that kind of private cloud solution with open-source software.
I would also recommend looking at OpenStack, which is widely regarded as the successor to Eucalyptus.
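To illustrate the workflow you asked for (VMs scheduled onto whichever node has capacity), here is a hypothetical request using OpenStack's nova client; the image, flavor, and VM name are placeholders, and the scheduler picks the node for you:

nova boot --image ubuntu-12.04 --flavor m1.small myvm   # ask the cloud for a VM
nova list                                               # check its state; no node was chosen by hand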
Have you checked out XCP (Xen Cloud Platform)? I find it really easy to start up a virtual cluster with that software.
Have you looked at such projects as OpenAIS, Corosync, DRBD and Pacemaker? They are all part of the Linux High Availability project (http://www.linux-ha.org). They offer many different configuration options for numerous types of services (e.g. MySQL, Apache, Xen).
They provide resource agents (LSB and OCF scripts) that run in place of your standard init scripts and assume their roles. I have included a detailed guide for setting up a Xen HA cluster on openSUSE 11.1 below for your reference. The configuration of the Linux HA components should be the same from distro to distro; only the package names to be installed and the locations of the specific configuration files will vary. The command-line tools and their functionality should be the same as well. Hope this helps.
http://www.howtoforge.com/installation-and-setup-guide-for-drbd-openais-pacemaker-xen-on-opensuse-11.1
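To give a flavour of the end result, here is a minimal sketch (not a drop-in config) of a Pacemaker primitive for a Xen guest using the ocf:heartbeat:Xen resource agent, entered via the crm shell; the resource name and xmfile path are placeholders:

crm configure primitive vm1 ocf:heartbeat:Xen \
    params xmfile="/etc/xen/vm/vm1" \
    op monitor interval="10s" timeout="30s" \
    meta allow-migrate="true"

With allow-migrate set, Pacemaker can live-migrate the guest between nodes instead of stopping and restarting it.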
We have a Linux production server and a number of scripts we are writing to run on it to collect data, which will then be put into a Spark data lake.
My background is SQL Server / Fortran and there are very specific best practices that should be followed.
Production environments should be stable in terms of version control, both from the code point of view and in terms of installed applications, operating system, etc.
Changes to code/applications/operating system should be made either in a separate environment or in a way that is controlled and can be backed out.
If a second environment exists, system changes can be tested by running the two in parallel.
Developers are (largely) restricted from changing the production environment.
In reviewing the R code, there are a number of things that I have questions on.
library(), install.packages() - how can I exclude the possibility of installing newer versions of packages each time the scripts are run?
what is the best way to call R scripts scheduled through a cron job? There are a number of choices here (see the sketch after this list).
when using RSelenium, what is the most efficient way to use a GUI web browser versus a virtualised/headless one?
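On the cron question, a common pattern (the paths here are placeholders) is to invoke scripts non-interactively with Rscript from the crontab:

# Hypothetical crontab entry: run the collection script nightly at 02:00.
# --vanilla keeps site/user profiles and saved workspaces from leaking in.
0 2 * * * /usr/bin/Rscript --vanilla /opt/scripts/collect.R >> /var/log/collect.log 2>&1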
In any case I would scratch any notion of updating the packages automatically. Expect the maintainers of the packages you rely on to introduce backward-incompatible changes; if you auto-update, your code will stop working out of the blue. Do not assume anything is sacred.
Past that, you need to ask yourself how hands-on your deployment is. If you're OK with manually setting up each deployment, then you can probably get away with using the packrat package to pull down and keep sources of the exact versions you are using. This way reproducing your deployment is painful, but at least possible. If you want fully automated, reproducible deployments, I suggest you start building Docker images with your packages and tagging them with dates or versions.
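A rough sketch of the packrat route, assuming packrat is already installed and the project lives at a placeholder path:

cd /opt/myproject
Rscript -e 'packrat::init()'       # give the project its own private library
Rscript -e 'packrat::snapshot()'   # record the exact package versions in use
# Later, on the production box, restore exactly those versions:
Rscript -e 'packrat::restore()'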
If you make no provisions for reproducing your environment, you are asking for trouble. It may seem OK at first to simply fix incompatibilities as updates introduce them - and that does indeed seem to be the official workflow from the powers that be, however misguided that is - but eventually, as your codebase grows, that will be all you end up doing.
The powers-that-be have decided to bestow memory upgrades on our developer team. We're all now in control of Mac Pros with 32GB of RAM. I would have preferred an SSD instead of half of that RAM, and I'm struggling to think of ways to make the most of it all. To date I have installed the x64 version of Windows 7 and also set up a 4GB RAM drive for temp files, browser cache, etc., as well as the code files for the various apps I'm working on.
Despite this, even in the middle of a heavy-duty debug session with a massively multi-project solution, I always seem to have what seem to me obscene amounts of free memory left, and I was wondering if there was anything else I could do to make the most of the available RAM. The only other thing I could think of was to run a virtual Windows server on my workstation for 'proper' local deployment/testing (i.e. in a mirror of our production environment) and so on, but any tools or tricks that could put the 4-6GB to good use in developer- or user-friendly ways would be very welcome.
I work with ASP.NET and SQL Server and use VS2010/12, so any 'tricks' specific to this set-up are especially welcome. I was saddened to see that all that RAM has not made VS2010 any less prone to fits of unresponsiveness.
Some ideas:
use a RAM disk and put your dev environment on it... This will do wonders! Way quicker than the quickest SSD... But be careful: it is volatile! You could have a 16GB or even 24GB RAM disk and still have enough room to play with. Project switching has never been quicker, not to mention all disk-based activities (see the sketch after this list).
you can run multiple virtual machines. For example, if you use a DB for development, you could have a local copy and not have to rely on shared resources. This has a lot of benefits, though it has some drawbacks too (replicating changes made by other developers, etc.)
combine the above! Put your VMs and VS on a RAM disk and run them from there! This involves a lot of copying when starting work, but only once per day... I think a coffee break and reading through the emails would cover it. Benefit: quick... quicker than anything - once it has started.
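As a hedged illustration of the RAM disk idea, assuming the third-party ImDisk driver is installed (the drive letter and paths are placeholders):

rem Create a 16GB NTFS RAM disk mounted as R:
imdisk -a -s 16G -m R: -p "/fs:ntfs /q /y"
rem Mirror the working copy onto the RAM disk at the start of the day...
robocopy C:\dev R:\dev /MIR
rem ...and mirror it back before shutting down, since R: is volatile.
robocopy R:\dev C:\dev /MIR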
Starting a new project. It's basically a blogging/commenting system.
We're considering node.js as the back end server. Is node.js ready for this sort of thing or is it too early and experimental?
We need HTTPS and gzip compression - perhaps a front end nginx server could provide this?
What's missing from node.js that would make developing a web app difficult?
From a production-readiness perspective, we're wondering if it is stable enough to build a commercial app on top of.
Thanks
UPDATE:
Almost a year has passed and now I'd definitely use node.js for live systems.
It's not ready. It sure is an awesome piece of software, but it's not suitable for production use yet.
The developer of node.js himself stated in a talk that it's probably full of bugs and security issues.
This is the talk: http://www.yuiblog.com/blog/2010/05/20/video-dahl/
He recommends that IF it is to be used in a production environment, you should place it behind a stable HTTP proxy like nginx - but he discourages production use at all for now.
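For reference, the proxy setup asked about in the question (HTTPS termination and gzip in front of node) might look roughly like this in nginx - a minimal sketch, with the domain, certificate paths and backend port all placeholders:

server {
    listen 443 ssl;
    server_name example.com;                          # placeholder domain
    ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;
    gzip on;                                          # compress responses
    gzip_types text/css application/javascript application/json;
    location / {
        proxy_pass http://127.0.0.1:3000;             # the node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}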
I'll wait for a production release and until then, play with it on my local machine.
Node.js is really great, but it's complicated to use in production right now. The API changes from version to version and may well change again many times, so you need to pin to a particular version; migrating later can be painful.
I'm using it for a production site. It's been live for a few months and I've had no issues with the node runtime. Stick with the latest stable release (currently 0.2.6).
The 3rd party modules written by the community are where you may run into issues. Some modules are more stable than others. The node community has standardized on github, so it's pretty easy to fork and fix things you run into. But be prepared to roll up your sleeves and hack -- it's probable that you'll need to fix a few bugs in the modules you use.
Overall I've been happy using node.js
It's just another tool, with different pros and cons. If your project is planned carefully, you shouldn't run into major problems. Node.js is a very active project, and it shouldn't be long before it reaches a stable release. If your team finally decides to use node.js, please contribute any findings/solutions/code or any other kind of valuable information back to the community while you're at it. That would really help. The more people active, the faster node.js will progress.
It's still got some rough edges, but I'd say it's ready to use (I'm about to launch a production site based on it). Here's an article describing how 3 companies are using it in production.
You may still find yourself finding/fixing the occasional bug, but that's where the community really shines.
(Updated answer) On June 2013 (version 0.10.12):
Node.js is ready for production, it's stable and really fast.
I am using it on live servers with Redis, using a SmartOS VM with dtrace and flamegraph for profiling (on a dev server). It has also replaced my Apache/PHP stack quite well for creating websites.
The best ways to find up-to-date modules are Nipster and npmjs.
As some modules are not mature enough, finding the right one is sometimes an iterative process.
--
(Old answer) On May 2012 (version 0.6.18):
Node.js and its API seem stable enough for production use.
However, its ecosystem isn't: most modules are not stable yet, and a lot of them aren't maintained anymore (last commits 8 to 18 months ago - you can check the modules' GitHub pages).
Currently, using a module often requires active participation: subscribing to its mailing list and patching it when needed.
I'm operating a neighbourhood WiFi network in a rural environment.
Now I'm looking for a monitoring tool to run on a server (Windows or Linux) which would track bandwidth, uptime (of clients as well as the internet connection), etc.
Most of this information is exposed via SNMP by my routers and access points, so SNMP support is required.
Additional features should be:
Graphical data representation
free license
So what's the best choice for me?
Edit: These are the tools mentioned so far:
MRTG
Munin
Nagios
Zenoss Core
ntop
cacti
ZABBIX
MRTG is probably the easiest to set up. If your router has SNMP (as you mention), setup is a single command:
cfgmaker --output=mrtg_myrouter.cfg public@1.2.3.4
MRTG is good for high-bandwidth routers and the like. It's not great for other data (it can be coerced into graphing most things, but it's a little unintuitive to set up).
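To complete the picture, a hedged sketch of the usual follow-up steps (the paths are placeholders): build an index page and let cron regenerate the graphs every five minutes.

indexmaker --output=/var/www/mrtg/index.html mrtg_myrouter.cfg
# then in /etc/crontab (or a file under /etc/cron.d):
*/5 * * * * root mrtg /etc/mrtg/mrtg_myrouter.cfg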
For monitoring other stuff I like Munin. I would describe it again, but I posted an answer a while ago here (about graphing disk usage).
Munin can of course graph network usage, and it easily pulls data via SNMP (in fact that's the recommended setup for grabbing data from Windows-based servers: run an SNMP daemon on the Windows machine and have Munin connect to it). The graphs are also prettier than MRTG's, I would say (clearly the most important factor...)
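As an illustration of the SNMP side, munin-node-configure can discover SNMP-capable hosts and print the plugin links it would create (the host and community here are placeholders; this is a sketch, so review the output before piping it to sh):

munin-node-configure --snmp 1.2.3.4 --snmpcommunity public --shell | sh
/etc/init.d/munin-node restart   # pick up the newly linked plugins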
There's an example installation of MRTG here, and Munin here
IMHO, Cacti is easiest to install and use.
Zabbix is interesting, but harder to use.
And here is a very comprehensive list of all network monitoring tools.
Not sure if this fits your usage, but a lot of web hosting providers use Nagios for network monitoring.
Zenoss Core is free and open source. It keeps RRD graphs (like other monitoring tools mentioned here). To monitor parameters other than basic network bandwidth (and up state), the switch or router SNMP definitions and MIBs should be available as a ZenPack. Runs on a Linux (virtual?) server. Uses Google Maps to display link status.
I have been using ntop; it is free on Linux (and for purchase if you want a Windows binary) and has worked pretty well for us.
I had the same question last week and tried several options.
For basic SNMP graphing needs, cacti is great, but graphing Apache, MySQL, etc. is a bit too hard, I think.
ntop is also a nice tool, but it has a different use case than the other ones in your list.
You should look at Zenoss. The Core version is FOSS, user-friendly, and very powerful. I had no need for the Enterprise version, but your needs may differ.
It does graphing, monitoring and alerting of all the basic stats, but download some ZenPacks and you can easily add Apache, MySQL or many other stats. All configuration can be done via the GUI. The interface is clear and responsive and allows for easy management of very large networks.
In short, I'm glad I never spent much time on Nagios, because I believe Zenoss is the best option available.
Also consider CactiEZ on a VM or small server; it is a bare-metal CentOS 6-based system.
At my job we make & sell websites. Usually we install our .NET C#-based site on a customer's server and maintain and support it remotely. However, every once in a while, for bigger development work and just to make things simpler (and faster!), we will copy the site to a local server.
This is great, but has one pain: moving the site back to the customer. Now, if nothing was changed on the customer's copy - no problem. However, the sad truth is that sometimes (read: more often than I would like) fixes needed to be applied on the production server, either because the customer needed it NOW or simply because it was a major bug.
I know that you can easily apply those bug fixes to the local copy as well, but this is an error-prone process. So I'm setting my hopes on a distributed version control system to help synchronize the two copies.
Here is what I need:
Easy to install - nothing else needed except the installer and admin rights.
Can be integrated into an existing website as a virtual directory and work on port 80 - no hassle with new DNS entries required.
Excellent software
That's it. Any ideas?
Some comments on the answers
First, thanks! much appreciated.
I've looked at Mercurial and Bazaar and both look very good. The only caveat is the installation as a virtual directory on IIS: Mercurial, as far as I understand, uses a special (wire) protocol, and Bazaar needs additional Python extensions. Is there another system that is easier to integrate with IIS? I'm willing to take a performance hit for that.
I'd look at either Mercurial or Bazaar. I'm told Git also works on Windows, but I suspect the Windows port is still a second-class port at best.
You'll probably need to be able to run Python scripts on your webserver to host either of them.
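If IIS integration proves painful, note that Mercurial also ships a built-in HTTP server that is handy for ad-hoc syncing between two machines (the port and URL below are placeholders; fine on a trusted network, but not hardened for the open internet):

hg serve --port 8000          # run inside the repository on one machine
hg pull http://server:8000/   # then pull from the other machine
hg update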
Maybe not exactly what you asked for, but check out DeltaCopy, which is a Windows version of rsync. You can also read about another rsync solution here.
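For illustration, a one-way push of the local copy to the customer's server with rsync might look like this (the host and paths are placeholders; DeltaCopy wraps the same mechanism on Windows):

# -a preserves permissions/times, -z compresses, --delete removes files
# that no longer exist locally; add -n first for a dry run.
rsync -avz --delete ./site/ user@customer-server:/var/www/site/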
I can also vouch for Mercurial. Simple to use and powerful to boot!