Can I check in Unix binaries (compiled executables, libraries, etc.) into PVCS on Unix?

We are starting up a Unix development engagement and evaluating version control options.
Specific question: Does PVCS deployed on a Unix platform support checking in compiled code from a Unix build environment?
If so, example command perhaps?
Not looking to hear about other SCM systems at this point.

I doubt that PVCS distinguishes between binary and text files; even if it does, it should support the notion of a binary file.
PVCS apparently doesn't do merging (not as a built-in operation), so there really isn't much that it needs to do to "support" checking in of unix binaries.
You may have problems dealing with the file permissions; however, I would consider that a security feature. The files shouldn't be marked as executable unless you intend them to be executed, and a deploy script would more than achieve this.
That said, it is semantically problematic to ask whether a system "supports" checking in Unix binaries: can a system that happens to allow such files to be checked in be claimed to "support" those files if it provides no features that ease management of those specific files, as distinct from other types of files?
Unfortunately their website is so full of marketing information that it is next to impossible to find this out. Seriously, pick a different VCS if at all possible. Heck, even Perforce would be a better choice; they provide tools for almost every current operating system and many levels of documentation. (Personally, I'm inclined to recommend Git, although Perforce would be the better choice in this case if you are more interested in versioning many binary files.)
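Since the question asks for an example: PVCS Version Manager's command line revolves around put and get. The sketch below is from memory, and flag spellings vary between PVCS versions, so treat it as a rough outline and check your manual (in particular for the archive attribute that disables keyword expansion, which you'll want for binaries):

put -m"nightly build of mytool" mytool
get -r1.3 mytool
chmod +x mytool    # deploy step: restore the execute bit the checkout may not preserve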

Related

Encrypting R script under MS-Windows

I have a bunch of R scripts which I am running on a Windows machine and want to ensure that the code remains unread by those not intended to see it. On a Linux box, I could wrap the R code in a #! bash script and make an encrypted (and perhaps even a limited-life) executable shell script. What are my options to do something along similar lines under Windows?
My answer is a bit late, but I believe this is a good question. Unfortunately, I don't believe that there is a solution, or at least an easy one, at the present time.
The difficulty is common because, for most interpreted languages, including R, it is often possible to turn on logging and inspection of all commands being run. This can negate many tricks to obfuscate the code.
For those who prefer to think of code being open == good, one should know that a common reason to obfuscate the code is if one is consulting with a client that hires multiple vendors. It is not uncommon for a client to take scripts from vendor A and ask vendor B why it doesn't work with their system. (This may be done by a low-level IT flunkie, rather than someone responsible for the NDA contracts.) If A & B are competitors, A's code has just been handed to B. When scripts == serious programs, then serious code has been given away.
The ways I've seen this addressed are:
Make a call to a compiled language, and use standard protections available there.
Host the executable on a different server, and use calls to the server to execute the calculations. (In R, there are multiple server-side options.)
Use compiled (preprocessed / bytecode) code within the language.
Option 2 is actually easier and better when the code may be widely distributed, not just for IP reasons. A major advantage is that it lets you upgrade the code without having to go through the pain of a site-wide release process. If new libraries are needed, no problem - update the server.
Option 3 is done in Matlab with .p files, and can be done with py2exe for Python on Windows. In R, the new bytecode compilation may be analogous, but I am not familiar enough with it to address any differences between .Rc files in the R context and .p files in the Matlab context. For more info on the compiler, see: http://www.inside-r.org/r-doc/compiler/compile
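As a rough illustration, assuming the compiler package that ships with R 2.13 and later (and bearing in mind that bytecode is obfuscation at best, not encryption):

Rscript -e 'compiler::cmpfile("secret.R", "secret.Rc")'    # compile the script to a bytecode .Rc file
Rscript -e 'compiler::loadcmp("secret.Rc")'                # load and evaluate the compiled file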
Hosting computations on the server is great for working with unsophisticated users, because it is easier to iterate quickly in response to bugs or feature requests. The IP protection is simply a benefit.
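One concrete route, assuming the Rserve package (one of the several server-side options mentioned above): start a compute server on the machine that holds the code, and have clients send only data and receive only results.

Rscript -e 'library(Rserve); Rserve(args="--no-save")'    # start an R compute server; clients connect over TCP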
This is not a specifically R-oriented strategy. (And it's a bit unclear what your constraints or goals really are anyway.) If you want a cross-platform encryption method, you should look into the open-source program TrueCrypt. It supports creating encrypted files that can be mounted as volumes on any machine that supports the volume formatting method. I have tested this across the Mac/PC divide, since the Mac can read FAT files, but have no experience with how it might work across the Linux/PC chasm.
(Their TODO list for Windows includes "Command line options for volume creation (already implemented in Linux and Mac OS X versions)", so I don't see any clear way to use this from within R without running the program from the OS.)
I don't think this is possible because the R interpreter has to be able to decrypt and read the code in order to execute it which means that whoever is using that interpreter will also be able to decrypt and read the code.
I am by no means an expert, so I reserve the right to be 100% wrong about that statement.
I believe the best solution is to ensure value comes from the expertise and services provided by your company and its employees---not from keeping secrets.
Failing that, you could try separating the code into a client/server model. That way the client just sends data and receives results---they never have access to the code that runs on the server.
However, the scientist in me just said "that solution sucks and I would never trust results provided under such conditions".

Are there specific tools used to configure, build and install source programs?

Are there specific tools used to configure, build and install source programs?
Lots of different specific tools depending on the platform, language(s) and frameworks you may be using...
As almost all my development is in compiled languages on Unix systems, I mostly use make, which many people will tell you is sufficiently old these days that you shouldn't clean the patina off lest you ruin its value at auction...
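For anyone who hasn't met it, a minimal Makefile looks like this (file names are illustrative, and the recipe line must start with a tab):

CC = cc
app: main.o util.o
	$(CC) -o app main.o util.o    # link step; make's built-in rules compile the .c files to .o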

What is currently the best build system [closed]

A few years ago I looked into using some build system that isn't Make, and tools like CMake and SCons seemed pretty primitive. I'd like to find out if the situation has improved. So, under the following criteria, what is currently the best build tool:
platform agnostic: should work on windows, linux, mac
language agnostic: should have built-in support for common things like building C/C++ and other static langs. I guess it doesn't need to support the full autotools suite.
extensible: I need to be able to write rules to generate files, like from reStructuredText, LaTeX, custom formats, etc. I don't really care what language I have to write the rules in, but I would prefer a real language rather than a DSL.
I would prefer to avoid writing any XML by hand, which I think, for example, Ant requires.
Freely available (preferably open source)
The term "best" is slightly subjective, but I think answers can be rated objectively by the criteria above.
I'd definitely put my vote up for premake. Although it is not as powerful as its older brothers, its main advantage is absurd simplicity and ease of use. It makes writing multi-compiler, multi-platform code a breeze, and natively generates Visual Studio solutions, XCode projects, Makefiles, and others, without any additional work needed.
So, judging purely by the criteria set forth in the question, the build system that seems like the best fit is probably waf - pure Python, provides support for C++ and other languages, general, powerful, not a DSL.
However, from my personal experience, I prefer CMake for C++ projects. (I tried CMake, SCons, and waf, and liked them in roughly that order). CMake is a general solution, but it has built-in support for C++ that makes it nicer than a more generic solution when you're actually doing C++.
CMake's build model for C++ is more declarative and less imperative, and thus, to me, easier to use. The CMake language syntax isn't great, but a declarative build with odd syntax beats an imperative build in Python. Of the three, CMake also seems to have the best support for "advanced" things like precompiled headers. Setting up precompiled headers reduced my rebuild time by about 70%.
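To make "declarative" concrete, here is a complete CMakeLists.txt for a toy C++ program (names are illustrative; target_precompile_headers is the modern built-in PCH mechanism and needs CMake 3.16+, so older versions used other means):

cmake_minimum_required(VERSION 3.16)
project(demo CXX)
add_executable(demo main.cpp)
target_precompile_headers(demo PRIVATE pch.h)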
Other pluses for CMake include decent documentation and a sizable community. Many open source libraries have CMake build files either in-tree or provided by the CMake community. There are major projects that already use CMake (OGRE comes to mind), and other major projects, like Boost and LLVM, are in the process of moving to CMake.
Part of the issue I found when experimenting with build systems is that I was trying to build a NPAPI plugin on OS X, and it turns out that very few build systems are set up to give XCode the exact combination of flags required to do so. CMake, recognizing that XCode is a complex and moving target, provides a hook for manually setting commands in generated XCode projects (and Visual Studio, I think). This is Very Smart as far as I'm concerned.
Whether you're building a library or an application may also determine which build system is best. Boost still uses a jam-based system, in part because it provides the most comprehensive support for managing build types that are more complex than "Debug" and "Release." Most boost libraries have five or six different versions, especially on Windows, anticipating people needing compatible libraries that link against different versions of the CRT.
I didn't have any problems with CMake on Windows, but of course your mileage may vary. There's a decent GUI for setting up build dependencies, though it's clunky to use for rebuilds. Luckily there's also a command-line client. What I've settled on so far is to have a thin wrapper Makefile that invokes CMake from an objdir; CMake then generates Makefiles in the objdir, and the original Makefile uses them to do the build. This ensures that people don't accidentally invoke CMake from the source directory and clutter up their repository. Combined with MinGW, this "CMake sandwich" provides a remarkably consistent cross-platform build experience!
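For reference, the bare objdir flow that the "CMake sandwich" wraps looks like this in modern CMake (the -S/-B flags need 3.13+; older versions required cd-ing into the objdir first):

cmake -S . -B build     # configure out-of-source into ./build
cmake --build build     # drive the generated native build (make, MSBuild, ...)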
Of course that depends on what your priorities are. If you are looking primarily for ease of use, there are at least two new build systems that hook into the filesystem to automatically track dependencies in a language agnostic fashion.
One is tup:
http://gittup.org/tup/
and the other is fabricate:
http://code.google.com/p/fabricate/
The one that seems to be the best performing, portable, and mature (and the one I have actually used) is tup. The guy who wrote it even maintains a toy Linux distro where everything is a git submodule, and everything (including the kernel) is built with tup. From what I've read about the kernel's build system, this is quite an accomplishment.
Also, Tup cleans up old targets and other cruft, and can automatically maintain your .gitignore files. The result is that it becomes trivial to experiment with the layout and names of your targets, and you can confidently jump between git revisions without rebuilding everything. It's written in C.
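For flavor, tup rules live in a Tupfile with a ": inputs |> command |> outputs" syntax; this sketch follows tup's documented examples, so treat the details as approximate:

: foreach *.c |> gcc -c %f -o %o |> %B.o
: *.o |> gcc %f -o %o |> hello

You run tup init once at the project root, then tup upd (or plain tup in newer releases) to build.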
If you know haskell and are looking for something for very advanced use cases, check out shake:
http://community.haskell.org/~ndm/shake/
Update: I haven't tried it, but this new "buildsome" tool also hooks into the filesystem, and was inspired by tup, so is relevant:
https://github.com/ElastiLotem/buildsome
CMake
CMake is an extensible, open-source system that manages the build process in an operating system and compiler-independent manner.
Gradle seems to match all the criteria mentioned above.
It's a build system which took the best of Maven and Ant combined. To me, that's the best.
The Selenium project is moving over to Rake, not because it's the best but because it handles multiple languages slightly better than all the other build tools and is cross-platform (developed in Ruby).
All build tools have their issues and people learn to live with them. Something that runs on the JVM tends to be really good for building apps, so: Ant, Maven (I know it's hideous), Ivy, Rake.
FinalBuilder is well known in the Windows world.
smooth build matches most of your requirements.
platform agnostic: yes, it's written in Java
language agnostic: it doesn't support C/C++ yet, only Java, but it is extensible via plugins written in Java, so adding support for more compilers is not a problem
extensible: yes, you can implement a smooth function via a Java plugin; you can also create a smooth function by defining it as an expression built of other smooth functions.
I would prefer to avoid writing any XML: you won't see a single line of it in smooth build
Freely available: yes, Apache 2 license
disclosure: I'm the author of smooth build.

Is PowerShell ready to replace my Cygwin shell on Windows? [closed]

I'm debating whether I should learn PowerShell, or just stick with Cygwin/Perl scripts/Unix shell scripts, etc.
The benefit of PowerShell would be that the scripts could be more easily used by teammates that don't have Cygwin; however, I don't know if I'd really be writing that many general purpose scripts, or if people would even use them.
Unix scripting is so powerful, does PowerShell come close enough to warrant switching over?
Here are some of the specific things (or equivalents) I would be looking for in PowerShell:
grep
sort
uniq
Perl (how close does PowerShell come to Perl's capabilities?)
AWK
sed
file (the command that gives file information)
etc.
Tools are just tools.
They help or they don't.
You need help or you don't.
If you know Unix and those tools do what you need them to do on Windows - then you are a happy guy and there is no need to learn PowerShell (unless you want to explore).
My original intent was to include a set of Unix tools in Windows and be done with it (a number of us on the team have deep Unix backgrounds and a healthy dose of respect for that community.)
What I found was that this didn't really help much. The reason for that is that AWK/grep/sed don't work against COM, WMI, ADSI, the Registry, the certificate store, etc., etc.
In other words, UNIX is an entire ecosystem self-tuned around text files. As such, text processing tools are effectively management tools. Windows is a completely different ecosystem self-tuned around APIs and Objects. That's why we invented PowerShell.
What I think you'll find is that there will be lots of occasions when text processing won't get you what you want on Windows. At that point, you'll want to pick up PowerShell. NOTE - it is not an all-or-nothing deal. Within PowerShell, you can call out to your Unix tools (and use their text processing or PowerShell's). Also you can call PowerShell from your Unix tools and get text.
Again - there is no religion here - our focus is on giving you the tools you need to succeed. That is why we are so passionate about feedback. Let us know where we are falling down on the job or where you don't have a tool you need and we'll put it on the list and get to it.
In all honesty, we are digging ourselves out of a 30-year hole, so it is going to take a while. That said, if you pick up the beta of Windows Server 2008/R2 and/or the betas of our server products, I think you'll be shocked at how quickly that hole is getting filled.
With regard to usage - we've had > 3.5 million downloads to date. That does not include the people using it in Windows Server 2008, because it is included as an optional component and does not need a download.
V2 will ship in all versions of Windows. It will be on-by-default for all editions except Server core where it is an optional component. Shortly after Windows 7/Windows Server 2008 R2 ships, we'll make V2 available on all platforms, Windows XP and above. In other words - your investment in learning will be applicable to a very large number of machines/environments.
One last comment. If/when you start to learn PowerShell, I think you'll be pretty happy. Much of the design is heavily influenced by our Unix backgrounds, so while we are quite different, you'll pick it up very quickly (after you get over cussing that it isn't Unix :-) ).
We know that people have a very limited budget for learning - that is why we are super hard-core about consistency. You are going to learn something, and then you'll use it over and over and over again.
Experiment! Enjoy! Engage!
grep
Select-String cmdlet and -match operator work with regexes. Also you can directly make use of .NET's regex support for more advanced functionality.
sort
Sort-Object is more powerful (than I remember *nix's sort being), allowing multi-level sorting on arbitrary expressions. Here PowerShell's retention of the underlying type helps; e.g., a DateTime property will be sorted as a DateTime without having to ensure formatting into a sortable format.
uniq
Select-Object -Unique
Perl (how close does PowerShell come to Perl capabilities?)
In terms of Perl's breadth of domain specific support libraries: nowhere close (yet).
For general programming, PowerShell is certainly more cohesive and consistent, and easier to extend. The one gap for text munging is something equivalent to Perl's .. operator.
AWK
It has been long enough since I used AWK (must be >18 years, since later I just used Perl) that I can't really comment.
sed
[See above]
file (the command that gives file information)
PowerShell's strength here isn't so much what it can do with filesystem objects (and it gets full information here: dir returns FileInfo or DirectoryInfo objects as appropriate) as the whole provider model.
You can treat the registry, certificate store, SQL Server, Internet Explorer's RSS cache, etc. as an object space navigable by the same cmdlets as the filesystem.
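For example, the same cmdlet walks both the filesystem and the registry:

Get-ChildItem C:\Temp           # filesystem provider
Get-ChildItem HKLM:\SOFTWARE    # registry provider, same verbs and syntax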
PowerShell is definitely the way forward on Windows. Microsoft has made it part of their requirements for future non-home products. Hence rich support in Exchange, support in SQL Server. This is only going to expand.
A recent example of this is the TFS PowerToys. Many TFS client operations can be done without having to start up tf.exe each time (which requires a new TFS server connection, etc.), and it is notably easier to further process the data. It also allows wide access to the whole TFS client API, in greater detail than is exposed in either Team Explorer or tf.exe.
As someone whose career focused on Windows enterprise development from 1997 - 2010, the obvious answer would be PowerShell for all the good reasons given previously (e.g., it is part of Microsoft's enterprise strategy; it integrates well with Windows/COM/.NET; and using objects instead of files provides for a "richer" coding model). For that reason I'd been using and promoting PowerShell for the last two years or so, with the express belief I was following the "Word of Bill."
However, as a pragmatist I'm no longer sure PowerShell is such a great answer. While it's an excellent Windows tool and provides a much needed step towards filling the historic hole that is the Window command line, as we all watch Microsoft's grip on consumer computing slip it seems increasingly likely that Microsoft has a massive battle ahead to keep its OS as important to the enterprise of the future.
Indeed, given I find my work is increasingly in heterogeneous environments, I'm finding it much more useful to use Bash scripts at the moment, as they not only work on Linux, Solaris and Mac OS X, but they also work—with the help of Cygwin—on Windows.
So if you buy into the belief that the future of the OS is commoditized rather than monopolized, then it seems to make sense to opt for an agile development tool strategy that keeps away from proprietary tools where feasible. If, however, you see your future being dominated by all-that-is-Redmond, then go for PowerShell.
I have used a bit of PowerShell for script automation. While it is very nice that the environment seems to have been thought out much more than Unix shells, in practice the use of objects instead of text streams is much more clunky, and a lot of the Unix facilities that have been developed in the last 30 years are still missing.
Cygwin is still my scripting environment of choice for Windows hosts. It certainly beats the alternatives in terms of getting things done.
There are lots of great answers here, and here is my take. PowerShell is ready if you are... Examples:
grep = "Select-String -Pattern"
sort = "Sort-Object"
uniq = "Get-Unique"
file = "Get-Item"
cat = "Get-Content"
Perl/AWK/sed are not commands but utilities, hence hard to compare; still, you can do almost everything in PowerShell.
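A quick taste of those equivalents chained together (the log file name is made up; note that Get-Unique, like uniq, expects sorted input):

Get-Content server.log | Select-String -Pattern "error" | ForEach-Object { $_.Line } | Sort-Object | Get-Unique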
I have only recently started dabbling in PowerShell with any degree of seriousness. Although for the past seven years I've worked in an almost exclusively Windows-based environment, I come from a Unix background and find myself constantly trying to "Unix-fy" my interaction experience on Windows. It's frustrating to say the least.
It's only fair to compare PowerShell to something like Bash, tcsh, or zsh since utilities like grep, sed, awk, find, etc. are not, strictly speaking, part of the shell; they will always, however, be part of any Unix environment. That said, a PowerShell command like Select-String has a very similar function to grep and is bundled as a core module in PowerShell ... so the lines can be a little blurred.
I think the key thing is culture, and the fact that the respective tool-sets will embody their respective cultures:
Unix is a file-based, (in general, non-Unicode) text-based culture. Configuration files are almost exclusively text files. Windows, on the other hand, has always been far more structured in respect of configuration formats--configurations are generally kept in proprietary databases (e.g., the Windows registry) which require specialised tools for their management.
The Unix administrative (and, for many years, development) interface has traditionally been the command line and the virtual terminal. Windows started off as a GUI and administrative functions have only recently started moving away from being exclusively GUI-based. We can expect the Unix experience on the command line to be a richer, more mature one given the significant lead it has on PowerShell, and my experience matches this. On this, in my experience:
The Unix administrative experience is geared towards making things easy to do in a minimal amount of key strokes; this is probably as a result of the historical situation of having to administer a server over a slow 9600 baud dial-up connection. Now PowerShell does have aliases which go a long way to getting around the rather verbose Verb-Noun standard, but getting to know those aliases is a bit of a pain (anyone know of something better than: alias | where {$_.ResolvedCommandName -eq "<command>"}?).
An example of the rich way in which history can be manipulated:
iptables commands are often long-winded and repeating them with slight differences would be a pain if it weren't for just one of many neat features of history manipulation built into Bash, so inserting an iptables rule like the following:
iptables -I camera-1-internet -s 192.168.0.50 -m state --state NEW -j ACCEPT
a second time for another camera ("camera-2"), is just a case of issuing:
!!:s/-1-/-2-/:s/50/51
which means "perform the previous command, but substitute -1- with -2- and 50 with 51.
The Unix experience is optimised for touch-typists; one can pretty much do everything without leaving the "home" position. For example, in Bash, using the Emacs key bindings (yes, Bash also supports vi bindings), cycling through the history is done using Ctrl-P and Ctrl-N whilst moving to the start and end of a line is done using Ctrl-A and Ctrl-E respectively ... and it definitely doesn't end there. Try even the simplest of navigation in the PowerShell console without moving from the home position and you're in trouble.
Simple things like versatile paging (a la less) on Unix don't seem to be available out-of-the-box in PowerShell which is a little frustrating, and a rich editor experience doesn't exist either. Of course, one can always download third-party tools that will fill those gaps, but it sure would be nice if these things were just "there" like they are on pretty much any flavour of Unix.
The Windows culture, at least in terms of system API's is largely driven by the supporting frameworks, viz., COM and .NET, both of-which are highly structured and object-based. On the other hand, access to Unix APIs has traditionally been through a file interface (/dev and /proc) or (non-object-oriented) C-style library calls. It's no surprise then that the scripting experiences match their respective OS paradigms. PowerShell is by nature structured (everything is an object) and Bash-and-friends file-based. The structured API which is at the disposal of a PowerShell programmer is vast (essentially matching the vastness of the existing set of standard COM and .NET interfaces).
In short, although the scripting capabilities of PowerShell are arguably more powerful than Bash (especially when you consider the availability of the .NET BCL), the interactive experience is significantly weaker, particularly if you're coming at it from an entirely keyboard-driven, console-based perspective (as many Unix-heads are).
I am not a very experienced PowerShell user by any means, but the little bit of it that I was exposed to impressed me a great deal. You can chain the built-in cmdlets together to do just about anything that you could do at a Unix prompt, and there's some additional goodness for doing things like exporting to CSV, HTML tables, and for more in-depth system administration types of jobs.
And if you really needed something like sed, there's always UnixUtils or GnuWin32, which you could integrate with PowerShell fairly easily.
As a longtime Unix user, I did however have a bit of trouble getting used to the command naming scheme, and I certainly would have benefitted more from it if I knew more .NET.
So essentially, I say it's well worth learning it if the Windows-only-ness of it doesn't pose a problem.
If you like shell scripting you will love PowerShell!
Start at A guided tour of the Microsoft Command Shell (Ars Technica).
As my recent experiments led me into depths of PowerShell and .NET calls, I must say that PowerShell can replace Cygwin and Unix shell.
I'm not sure about Perl, but since both PowerShell and Perl are Turing-complete as programming languages, I'll give a yes to replacing Perl too.
One thing that PowerShell has above Cygwin and ordinary Bash under *nix, is its ability to perform sandboxed DLL calls, manipulating the operating system via direct API calls, WMI methods and even COM objects. How about launching Internet Explorer via code, then doing whatever you want with its displayed document, effectively emulating a back-end for a Web server?
How about gathering data from SQL servers and other data providers, parsing it, and exporting it as CSV, mail messages, text, and practically any kind of existing (or not-yet-existing) file format? (With the proper skill of creating a valid file out of the data received, of course, but CSV is readily available.)
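For instance, the CSV export alluded to is a single built-in cmdlet:

Get-Service | Export-Csv services.csv -NoTypeInformation    # dump service objects to CSV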
And there is extra security available via signed cmdlets and scripts, group policies, and execution policies that help prevent malicious code from running on your system, even if you run it as administrator.
As for which commands are implemented: the answer by Richard lists them, along with PowerShell's capability of emulating their functionality.
As for whether PowerShell is strong enough to warrant switching over: this is more a matter of personal preference, although as more and more Windows services provide PowerShell cmdlets to control them, not using PowerShell where these services are present becomes a hindrance. (Hyper-V server is the primary such service, and it also provides the ability to do more with PowerShell cmdlets than with the GUI!)
Probably this answer is five years late, but still, if someone performs administrative tasks or general scripting of various stuff on Windows, they should definitely try harnessing PowerShell for their purposes.
When you compare PowerShell to the combination Cygwin/Perl/Shell, be aware that PowerShell only represents the "Shell" part of that combination.
You can however invoke any command from PowerShell just as you do from cmd.exe or Cygwin. It does not re-implement the specified functions, and it is certainly not comparable to Perl.
It's "just" a shell, but it makes programming easier providing a comfortable interface to the .NET universe.
Also keep in mind that PowerShell requires Windows XP, Windows Server 2003 or higher, which may pose a problem depending on your IT infrastructure.
Update:
I had no idea what kind of philosophical debate my answer would spark.
I posted my answer in the context of the question: Compare PowerShell to Cygwin and Perl and Bash.
PowerShell is a shell, as it makes no syntactic difference between built-in commands, commandlets, user functions, and external commands (.exe, .bat, .cmd). Only invoking .NET methods differ by adding a namespace or an object in the call.
Its programmability derives from the .NET framework, not from anything specific to the PowerShell "language".
I'd say I believe PowerShell is a "scripting language" as soon as Bugzilla or MediaWiki are implemented as PowerShell scripts running on a web server ;)
Until then, enjoy the comparisons.
TL;DR -- I don't hate Windows or PowerShell. I just can't do anything in Windows or on PowerShell.
I personally still find PowerShell underwhelming at best.
Tab completion of directory paths does not compound, requiring the user to enter a path separator after every name completion.
I still feel like Windows doesn't even have the concept of what a path is, with no accessible user-home indicator (~/) short of some #environment://somejibberish/%user_home%.
NTFS is still a mess and seemingly always will be. Good luck navigating.
The cmd-esque interface: the dinosaur cmd.exe is still visible in PowerShell. Edit → Mark is still the only way to copy information, and it copies only rectangular blocks of visible terminal space; Edit → Paste is still the only way to paste strings into the terminal.
Painting it blue doesn't make it any more attractive. I don't mind Microsoft developers having a taste in color though.
Windows always opens at top left corner of screen. For somebody who uses vertical task bars this is incredibly annoying, especially considering that the Windows task bar will cover the only corner of the window that gives access to copy/paste functionality.
I can't speak much on the tools Windows includes, but given that there is a whole set of open-source, freely licensed CLI tools, the fact that PowerShell ships with, to my knowledge, none of them is an utter disappointment.
PowerShell's wget takes arguments seemingly incomparable to GNU wget's. Thanks, glimmer of hope: portably useless.
PowerShell is not POSIX- or Bash-compatible; in particular, the && operator is not handled, making the simplest conditional command chaining not a thing.
I don't know, man; I gave it a shot, I really did. I still try to give it a shot in the hope that the next time I open it, it will be any less useless. I cannot do anything in PowerShell, and I can barely do things with a real project to bring GNU tools to Windows.
msysGit gives me the dinosaur cmd.exe prompt with a couple of GNU tools, and it is still very underwhelming, but at least path completion works. And the Git command will run in Git Bash.
Mintty for msysGit gives the Cygwin interface over msysGit's environment, making copy and paste a thing (select to copy (mouse), Shift+Ins to paste; how modern...). However, things like git push are broken in Mintty.
I don't mean to rant, but I still see huge problems with command-line usability on Windows even given tools like Cygwin.
P.S.: Just because something can be done in PowerShell, doesn't make it usable. Usability is deeper than ability and is what I tend to focus on when trying to use a product as a consumer.
The cmdlets in PowerShell are very nice and work reliably. Their object-orientedness appeals to me a lot since I'm a Java/C# developer, but it's not at all a complete set. Since it's object oriented, it's missed out on a lot of the text stream maturity of the POSIX tool set (awk and sed to name a few).
The best answer I've found to the dilemma of loving OO techniques and loving the maturity in the POSIX tools is to use both! One great aspect of PowerShell is that it does an excellent job piping objects to standard streams. PowerShell by default uses an object pipeline to transport its objects around. These aren't the standard streams (standard out, standard error, and standard in). When PowerShell needs to pass output to a standard process that doesn't have an object pipeline, it first converts the objects to a text stream. Since it does this so well, PowerShell makes an excellent place to host POSIX tools!
The best POSIX tool set is GnuWin32. It does take more than 5 seconds to install, but it's worth the trouble, and as far as I can tell, it doesn't modify your system (registry, c:\windows\* folders, etc.) except copying files to the directories you specify. This is extra nice because if you put the tools in a shared directory, many people can access them concurrently.
GnuWin32 Installation Instructions
Download and execute the exe (it's from the SourceForge site), pointing it to a suitable directory (I'll be using C:\bin). It will create a GetGnuWin32 directory there, in which you run download.bat and then install.bat (without parameters). After that, there will be a C:\bin\GetGnuWin32\gnuwin32\bin directory, the most useful folder that has ever existed on a Windows machine. Add that directory to your path, and you're ready to go.
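Once that directory is on the path, the object-to-text conversion described above lets you pipe cmdlet output straight into the POSIX tools; a made-up example:

Get-Process | grep -i svchost    # PowerShell renders the objects as text before grep sees them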
I haven't seen that PowerShell has really taken off, at least not yet. So it might not be worth the effort of learning it unless the others on your team already know it.
For your predicament, you might be better off with a scripting language that others could get behind: Perl, like you mentioned, or others like Ruby or Python.
I think a lot of it depends on what you need to do. Personally I've been using Python for my own personal scripts, but I know when I start writing something that I'll never be able to pass it on - so I try not to do anything too revolutionary.
Why not use both? Call PowerShell scripts in Cygwin just like any other interpreted scripts like Perl, etc.
I do this enough that I wrote https://bitbucket.org/jbianchi/powershell for a Bash wrapper to call powershell.exe in Cygwin. It can be used in a shebang line as the first line of a .ps1 script (since PowerShell also uses "#" as a comment). See https://bitbucket.org/jbianchi/powershell/wiki/Home for examples.
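Without the wrapper, the direct form from a Cygwin prompt is roughly the following (cygpath translates the POSIX path to Windows form; the flags are standard powershell.exe options):

powershell.exe -ExecutionPolicy Bypass -File "$(cygpath -w ./script.ps1)"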
In a couple of lines: Cygwin and PowerShell are different tools; however, if you have Cygwin installed, you can run the Cygwin executables within a PowerShell session. I've gotten so used to PowerShell that now I no longer use grep, sort, awk, etc. There are pretty much built-in alternatives in PowerShell, and if not, you can find a cmdlet out there.
The main tool I find myself using is ssh.exe, but within a PowerShell session.
It works great.
I found PowerShell programming to be not worth the effort.
I have several years of experience with shell scripting under Unix, but I found it enormously difficult to do much of anything with PowerShell.
It seems like many functions require you to interrogate Windows Management Instrumentation (WMI) and issue SQL-like commands to get the information you need.
For example, I wanted to write a script to remove all files with a specific suffix from a directory tree. Under Unix, this would be a simple ...
find . -name \*.xyz -exec rm {} \;
After a couple of hours dicking around with Scripting.FileSystemObject and WScript.Shell and issuing "SELECT * FROM Win32_ShortcutFile WHERE Drive = '" & drive & "' AND Path = '" & searchFolder & "'", I finally gave up and settled for Windows Explorer's Search command, just doing it manually. There's probably some way to do what I wanted, but I didn't see anything obvious, and all the examples on the MSDN site were so trivial as to be worthless.
EDIT: Heh, of course as soon as I wrote this I poked around some more and found what I had been missing: the -recurse option of the remove-item command is faulty (revealed if you use get-help remove-item -detailed).
I had been trying remove-item -filter '*.xyz' -recurse, and it wasn't working, so I gave up on it.
Turns out you need to use: get-childitem -filter '*.xyz' -recurse | remove-item
You can also try running Bash scripts on Windows using BashWin at
https://github.com/skanga/BashWin.
PowerShell is very powerful, more powerful than the standard built-ins of the Unix shells (but only because it includes much of the functionality usually shelled out to subprograms). Also, consider that you can write cmdlets in any .NET language, including IronPython, IronRuby, PerlNet, etc. Or you can simply call your Cygwin commands from PowerShell, ignoring all the extra functionality, and it will work similarly to Bash, KornShell, or whatever...

How do small software patches correct big software?

One thing I've always wondered about is how software patches work. A lot of software seems to just release new versions of their binaries that need to be installed over older versions, but some software (operating systems like Windows in particular) seems to be able to release very small patches that correct bugs or add functionality to existing software.
Most of the time the patches I see can't possibly replace entire applications, or even small files that are used within applications. To me it seems like the actual binary is being modified.
How are these kinds of patches actually implemented? Could anyone point me to any resources that explain how this works, or is it just as simple as replacing small components such as linked libraries in an application?
I'll probably never need to do a deployment in this manner, but I am curious to find out how it works. If I'm correct in my understanding that patches can really modify only portions of binary files, is this possible to do in .NET? If it is I'd like to learn it since that's the framework I'm most familiar with and I'd like to understand how it works.
This is usually implemented using binary diff algorithms: diff the most recently released version against the new code. If the user is running the most recent version, you only need to apply the diff. This works particularly well for software, because compiled code is usually pretty similar between versions. Of course, if the user is not running the most recent version, you'll have to download the whole thing anyway.
There are a couple implementations of generic binary diff algorithms: bsdiff and xdelta are good open-source implementations. I can't find any implementations for .NET, but since the algorithms in question are pretty platform-agnostic it shouldn't be too difficult to port them if you feel like a project.
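For a concrete feel, bsdiff's command-line usage takes the old file, new file, and patch file in that order (names are illustrative):

bsdiff app-1.0.bin app-1.1.bin app.patch     # vendor side: produce the delta
bspatch app-1.0.bin app-1.1.bin app.patch    # user side: rebuild the new binary from old + patch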
If you are talking about patching Windows applications, then what you want to look at are .MSP files. These are similar to an .MSI but just patch an application.
Take a look at Patching and Upgrading in the MSDN documents.
What an .MSP file does is load updated files into an application install. Typically these are updated DLLs and resource files, but it could include any file.
In addition to patching the installed application, the repair files located in C:\WINDOWS\Installer are updated as well. Then, if the user selects "Repair" from Add/Remove Programs, the updated patch files are used.
I'm thinking that the binary diff method discussed by John Millikin must be used on other operating systems. Although you could make it work in Windows, it would be somewhat alien.
