What would be a good, Windows and IIS (HTTP) based distributed version control system? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
At my job we make & sell websites. Usually we install our .NET C# based site on a customer's server and maintain and support it remotely. However, every once in a while, for bigger development works and just to make things simpler (and faster!), we will copy the site to a local server.
This is great, but has one pain point - moving the site back to the customer. Now, if nothing was changed on the customer's copy - no problem. However, it is the sad truth that sometimes (read: more often than I would like) fixes had to be applied on the production server, either because the customer needed it NOW or simply because it was a major bug.
I know that you can easily apply those bug fixes to the local copy as well, but this is an error prone process. So I'm setting my hopes on a distributed version control system to help synchronize the two copies.
Here is what I need:
Easy to install - nothing else needed except the installer and admin rights.
Can be integrated into an existing website as a virtual directory and work over port 80 - no hassle with new DNS entries required.
Excellent software
That's it. Any ideas?
Some comments on the answers
First, thanks! much appreciated.
I've looked at Mercurial and Bazaar and both look very good. The only caveat is the installation as a virtual directory on IIS. Mercurial, as far as I understand, uses a special (wire) protocol, and Bazaar needs additional Python extensions. Is there another system that is easier to integrate with IIS? I'm willing to take a performance hit for that.

I'd look at either Mercurial or Bazaar. I'm told Git also works on Windows, but I suspect the Windows port is still a second-class port at best.
You'll probably need to be able to run Python scripts on your web server to host either of them.
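For the virtual-directory concern raised above: Mercurial ships an hgweb.cgi script that IIS can serve as a plain CGI handler, so the repository can sit inside the existing site on port 80. A rough sketch of the handler mapping in the virtual directory's web.config, assuming Python 2.x at C:\Python27 and the IIS CGI feature enabled (paths are illustrative, not the only way to do it):
<configuration>
  <system.webServer>
    <handlers>
      <!-- Run hgweb.cgi through Python as a CGI script -->
      <add name="MercurialCGI" path="*.cgi" verb="*" modules="CgiModule"
           scriptProcessor="C:\Python27\python.exe -u &quot;%s&quot;" resourceType="File" />
    </handlers>
  </system.webServer>
</configuration>
hgweb.cgi itself just points at an hgweb.config listing the repositories; pushing over HTTP then works against that URL once allow_push is enabled.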

Maybe not exactly what you requested, but check out DeltaCopy, which is a Windows version of rsync. You can also read about another rsync solution here.
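For context, the one-way synchronisation that rsync (and therefore DeltaCopy) performs looks roughly like this on the command line; the server name and paths are made up for illustration:
# Preview what would change, then push the local site to the customer's server
rsync -avzn --delete ./site/ deploy@customer-server:/cygdrive/c/inetpub/wwwroot/site/
rsync -avz --delete ./site/ deploy@customer-server:/cygdrive/c/inetpub/wwwroot/site/
The --delete flag removes remote files that no longer exist locally, which is exactly why the dry-run (-n) pass first is worth it.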

I can also vouch for Mercurial. Simple to use and powerful to boot!
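To make the Mercurial suggestion concrete, the synchronisation asked about in the question boils down to pull/merge/push between the two copies once both are repositories. A minimal sketch, with a hypothetical URL and assuming the served repository allows pushes:
hg clone http://customer-server/site site-local    # one-time: bring the customer copy down
# ... develop and commit locally ...
hg pull                                            # pick up any hotfixes committed on the customer copy
hg merge
hg commit -m "Merge production hotfixes"
hg push                                            # publish the combined history back to the customer server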

Related

What should I do with all this RAM? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
The powers-that-be have decided to bestow memory upgrades on our developer team. We're all now in control of Mac Pros with 32GB of RAM. I would have preferred an SSD instead of half of that RAM, and I'm struggling to think of ways to make the most of it all. To date I have installed the x64 version of Windows 7 and also set up a 4GB RAM drive for temp files, browser cache etc., as well as code files for the various apps I'm working on.
Despite this, even in the middle of a heavy-duty debug session with a massively multi-project solution, I always seem to have what looks to me like an obscene amount of free memory left, and I was wondering if there was anything else I could do to make the most of the available RAM. The only other thing I could think of was to run a virtual Windows server on my workstation for 'proper' (i.e. mirroring our production environment) local deployment/testing and so on, but any tools or tricks that could put the 4-6GB to good use in any developer- or user-friendly ways would be very welcome.
I work with ASP.NET and SQL Server and use VS2010/12, so any 'tricks' specific to this set-up are especially welcome. I was saddened to see that all that RAM has not made VS2010 any less prone to fits of unresponsiveness.
Some ideas:
Use a RAM disk and put your dev environment on it... This will do wonders! Way quicker than the quickest SSD... But be careful: it is volatile! You could have a 16GB, or even 24GB, RAM disk and still have enough room to play with. Project switching has never been quicker, not to mention all disk-based activities. (A creation sketch follows this list.)
You can run multiple virtual machines. For example, if you use a DB for development you could have a local copy and not have to rely on shared resources. This has a lot of benefits, though it has some drawbacks too (replication of changes by other developers, etc.).
Combine the above! Use a RAM disk to run your VMs and Visual Studio from. This involves a lot of copying when starting work, but that happens once a day... I think a coffee break and reading through the emails would be enough. Benefits: quick... quicker than anything - once it has started.
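As a concrete example, a RAM disk on Windows can be created with a tool such as ImDisk; the size and drive letter below are arbitrary, and this is only a sketch of the idea:
rem Create a 16 GB NTFS-formatted RAM disk mounted as R:
imdisk -a -s 16G -m R: -p "/fs:ntfs /q /y"
rem Remove it again when done (everything on it is lost - it is volatile)
imdisk -d -m R: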

Xen Server Cluster [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have upwards of 30 Dell GX2xx models doing nothing. I've decided to use them to build a cluster, but I am lost as to how to get started. I've used ClusterKnoppix, and even straight OpenMosix in the past, but those projects are, very sadly, dead now.
I've checked out Xen, to an extent, but I don't know if Xen is the solution I need. I'd like the ability to spin up a few VMs (when I need to) in a server pool, with the VMs running off whatever resources are available in the pool, without me having to care which node they run on.
I need some insight here... Thanks all!
Xen is not itself going to manage the whole cluster.
It will act on every single machine to instantiate/manage/delete the VMs.
You can have a look at Eucalyptus if you want to build that kind of private cloud solution with open-source software.
I would also recommend OpenStack, which is shaping up to be the successor of Eucalyptus.
Have you checked out XCP? I find that it's really easy to start up a virtual cluster with this software.
Have you looked at such projects as OpenAIS, Corosync, DRBD and Pacemaker? They are all part of the Linux High Availability project (http://www.linux-ha.org). They offer many different configuration options for numerous types of servers (e.g. MySQL, Apache, Xen, etc.).
They have custom scripts (LSB and OCF) that are run in place of your standard init scripts and take over their roles. I have included a detailed guide for setting up a Xen HA cluster on OpenSuse 11.1 below for your reference. The configuration of the Linux HA components should be the same from distro to distro; only the package names to be installed and the location of the specific configuration files will vary. The command line tools and functionality should be the same as well. Hope this helps.
http://www.howtoforge.com/installation-and-setup-guide-for-drbd-openais-pacemaker-xen-on-opensuse-11.1
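To give a flavour of how Pacemaker manages a Xen guest once the stack from that guide is in place, a resource definition in the crm shell looks roughly like the following; the VM name and config path are placeholders:
# Define a Xen domain as a cluster resource so Pacemaker can start,
# stop, monitor and live-migrate it between nodes
crm configure primitive vm-web1 ocf:heartbeat:Xen \
    params xmfile="/etc/xen/vm-web1.cfg" \
    op monitor interval="30s" timeout="60s" \
    meta allow-migrate="true"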

Is node.js ready for production use? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Starting a new project. It's basically a blogging/commenting system.
We're considering node.js as the back end server. Is node.js ready for this sort of thing or is it too early and experimental?
We need HTTPS and gzip compression - perhaps a front end nginx server could provide this?
What's missing from node.js that would make developing a web app difficult?
From a production ready perspective, we're wondering if it is stable enough for building a commercial app on top of.
Thanks
UPDATE:
Almost a year has passed and now I'd definitely use node.js for live systems.
It's not ready. It sure is an awesome piece of software but it's not suitable for production use yet.
The developer of node.js himself stated in a talk that it's probably full of bugs and security issues.
This is the talk: http://www.yuiblog.com/blog/2010/05/20/video-dahl/
He recommends that IF it is to be used in a production environment, you should place it behind a stable HTTP proxy like nginx, but he discourages using it in production at all.
I'll wait for a production release and until then, play with it on my local machine.
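For reference, the nginx-in-front setup mentioned above (terminating HTTPS and handling gzip, then proxying to the node process) is roughly the following; the certificate paths and the node port are assumptions:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    gzip on;
    gzip_types text/plain text/css application/json application/javascript;

    location / {
        # Forward everything to the node.js app listening locally
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}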
Node.js is really great, but it's complicated to use in production right now. The API still changes several times in each version and will keep changing for a while, so you need to pin yourself to a particular version; migrating can be painful.
I'm using it for a production site. It's been live for a few months and I've had no issues with the node runtime. Stick with the latest stable release (currently 0.2.6).
The 3rd party modules written by the community are where you may run into issues. Some modules are more stable than others. The node community has standardized on github, so it's pretty easy to fork and fix things you run into. But be prepared to roll up your sleeves and hack -- it's probable that you'll need to fix a few bugs in the modules you use.
Overall I've been happy using node.js
It's just another tool, with different pros and cons. If your project is planned carefully you shouldn't run into major problems. Node.js is a very active project and it shouldn't be long before it reaches stable. If your team finally decides to use node.js please contribute any findings / solutions / code or any kind of valuable information back to the community while you're at it. That would really help. The more people active, the faster node.js will progress.
It's still got some rough edges, but I'd say it's ready to use (I'm about to launch a production site based on it). Here's an article describing how 3 companies are using it in production.
You may still find yourself finding/fixing the occasional bug, but that's where the community really shines.
(Updated answer) On June 2013 (version 0.10.12):
Node.js is ready for production, it's stable and really fast.
I am using it on live servers with Redis, using a SmartOS VM with dtrace and flamegraph for profiling (on a dev server). It also replaced quite well my Apache/PHP stack for creating websites.
The best ways to find up-to-date modules are Nipster and npmjs.
As some modules are not mature enough, finding the right one is sometimes an iterative process.
--
(Old answer) On May 2012 (version 0.6.18):
Node.js and its API seem stable enough for production use.
However, its ecosystem isn't: most modules are not stable yet and a lot of them aren't maintained anymore (last commits from 8 to 18 months ago - you can check on the modules' GitHub pages).
Currently, using a module often requires active participation: subscribing to its mailing list and patching it when needed.

What's a good tool to monitor network activity [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I'm operating a neighbourhood WIFI network in a rural environment.
Now I'm looking for a monitoring tool to run on a server (Windows or Linux) which would track bandwidth, uptime (of clients as well as of the internet connection), etc...
Most of this information is exposed via SNMP by my routers and access points, so SNMP support is required.
Additional features should be:
Graphical data representation
free license
So what's the best choice for me?
Edit: These are the tools mentioned so far:
MRTG
Munin
Nagios
Zenoss Core
ntop
cacti
ZABBIX
MRTG is probably the easiest to set up. If your router has SNMP (as you mention), setting it up is a single command:
cfgmaker --output=mrtg_myrouter.cfg public#1.2.3.4
MRTG is good for high-bandwidth routers and the like. It's not great for other data (it can be coerced into graphing most things, but it's a little unintuitive to set up).
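After cfgmaker, the rest of a minimal MRTG setup is typically an index page plus a cron entry to poll every five minutes; the paths below are common defaults, adjust to your installation:
# Build an overview page from the generated config
indexmaker --output=/var/www/mrtg/index.html /etc/mrtg/mrtg_myrouter.cfg
# Then poll the router every five minutes, e.g. from /etc/cron.d/mrtg:
*/5 * * * * root env LANG=C mrtg /etc/mrtg/mrtg_myrouter.cfg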
For monitoring other stuff I like Munin. I would describe it again, but I posted an answer a while ago here (about graphing disc-usage).
Munin can of course graph network usage, and easily pull data via SNMP (in fact that's the recommended setup for grabbing data from Windows-based servers - run an SNMP daemon on the Windows machine and have Munin connect to it). The graphs are also prettier than MRTG's, I would say (clearly the most important factor...).
There's an example installation of MRTG here, and Munin here
IMHO, Cacti is easiest to install and use.
Zabbix is interesting, but harder to use.
And here is a very comprehensive list of all network monitoring tools.
Not sure if this fits your usage, but a lot of web hosting providers use Nagios for network monitoring.
Zenoss Core is free and open source. It keeps RRD graphs (like other monitoring tools mentioned here). To monitor parameters other than basic network bandwidth (and up state), the switch or router SNMP definitions and MIBs should be available as a ZenPack. Runs on a Linux (virtual?) server. Uses Google Maps to display link status.
I have been using ntop; it is free on Linux (and for purchase if you want a Windows binary) and it has worked pretty well for us.
I had the same question last week and tried several options.
For basic SNMP graphing needs, Cacti is great, but graphing Apache, MySQL, etc. is a bit too hard, I think.
ntop is also a nice tool, but has a different use case than the other ones in your list.
You should look at Zenoss. The Core version is FOSS, user-friendly, and very powerful. I had no need for the Enterprise version, but your needs may differ.
It does graphing, monitoring and alerting of all the basic stats, but download some ZenPacks and you can easily add Apache, MySQL or many other stats. All configuration can be done via the GUI. The interface is clear and responsive and allows for easy management of very large networks.
In short, I'm glad I never spent much time on Nagios, because I believe Zenoss is the best option available.
Also consider CactiEZ on a VM or small server; it is a bare-metal CentOS 6 based system.

Best Source Control Solution for Oracle/ASP.NET Environment? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I am trying to plan a way for 5 developers to use Visual Studio 2005/2008 to collaboratively develop an ASP.NET web app on a development web server against an Oracle 8i (soon to be 10g) database.
The developers are either on the local network or coming in over a VPN (not a very fast connection).
I evaluated the latest Visual SourceSafe, but ran into the following gotchas:
1) We can't use decentralized development because we can't replicate a development Oracle database to all developers' computers. Also, the VPN is too slow to let their local app instances connect to the database server.
2) Since the VSS source code is not on the file system, the only way to debug it is to build the app and run the debugger, which only one developer can do at a time on a centralized development server. This is unacceptable. We tried using shadow folders so that every time a file is checked in it gets published to the app instance on the development server, but this failed for remote developers on the VPN.
3) Since the developers do a lot of web code, it is important for productivity reasons that when they SAVE a file, they should be able to immediately see the change working on the development server.
4) No easy way to implement a controlled process for pushing files to the production server.
Any suggestions on a source control solution that would work under these constraints?
Update: I guess since development is forced to be on the server, we need to go with a "Lock and Check In" model. So which source control solution would work best for "Lock and Check In" scenarios?
Update: Does Visual SVN support developing centrally against a development server? As in, the dev can immediately see his update on the development server after saving in VS?
I have used Subversion and TortoiseSVN and was very pleased.
Is point 1 due to an issue with your database schema (or data) ?
We can't use decentralized development because we can't replicate a development oracle database to all developers computers.
If not, I strongly suggest that every developer has his or her own environment (Visual Studio, Oracle...) and that you use your development server for integration purposes. Maybe you could just give them a subset of the data, or maybe just the schema scripts.
Oracle Express Edition is perfectly fit for this scenario. Besides, sharing the same database violates rule #1 for database work, which in my experience should be enforced anywhere possible.
As Guy suggested, have an automated build allowing any developer to recreate his or her database schema at any time.
More very useful guidelines can be found here (including rule #1 above).
Define your development process so that parallel development is possible, and only use locks as a last resort.
I'm sorry if you already envisioned these solutions and found them unfit to your situation, but I really felt the urge to express them just in case...
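A minimal sketch of the "recreate your schema at any time" idea against a local Oracle XE instance; the connection details and script names are hypothetical and would live in source control alongside the app:
rem Rebuild the developer's private schema from versioned scripts
sqlplus dev_user/dev_pass@XE @drop_all.sql
sqlplus dev_user/dev_pass@XE @create_schema.sql
sqlplus dev_user/dev_pass@XE @load_test_data.sql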
Visual Source Safe is the spawn of Satan.
Look at Subversion, and Visual SVN (with TortoiseSVN). Sure, Visual SVN costs a bit - $49 per seat - but it is a great tool. We have a development team of 6 programmers, and it has been a great boon to us.
If you can spend the money, then Team Foundation Server is the one that works best in a Visual Studio dev environment.
And based on personal experience, it works beautifully over VPN connections. And you can of course have automated builds going on it.
I would say SVN on price (free), Perforce on ease of integration.
You will undoubtedly hear about Git and CVS as well, and there are good reasons to look at them.
Interesting -- it sounds like you are working on a web site project on the server, and everyone is working on the same physical files. I agree that SVN is far superior to VSS and really good to work with, but in my experience it's really geared toward developers working on a copy of the code locally.
VSS is a "lock and check in" type of source control, while SVN, TFS and most others are "edit and merge" -- devs all get copies of the source, edit the files as needed, and later merge their changes into source control; if someone else has edited the file in the meantime they merge the changes together.
From a database standpoint, I assume you are checking in your database scripts, then have some automated build packaging and running them (or maybe just a dev or DBA running them manually every so often). In this case, having the developers have a local copy of the scripts that they can edit and merge using SVN or TFS makes sense.
For a team working on a shared copy of the source code on a development server, though, you may get into problems using edit and merge -- a "lock and check in" model of source control may work better for you. Just not VSS, from a corruption and stability standpoint.
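If you do end up on SVN but want lock-and-check-in behaviour on the shared files, Subversion supports that explicitly; a small sketch, with an illustrative file name:
# Mark a file so clients check it out read-only until someone takes the lock
svn propset svn:needs-lock '*' Default.aspx
svn commit -m "Require locking on Default.aspx" Default.aspx

# Take the lock before editing, then commit (the commit releases the lock by default)
svn lock -m "Editing header markup" Default.aspx
svn commit -m "Update header" Default.aspx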
