I have created an R script that:
reads some data from a database,
makes some transformations, and
exports the modified table to a CSV file.
This code needs to run on a client's machine, but we need to "hide" the actual code from the user.
Are there any useful suggestions on how we can achieve that?
Up front
... it will be nearly impossible to deploy an R <something> to another computer in a way that prevents curious users from accessing the source code.
From a mailing list conversation in 2011, in response to "I would not like anyone to be able to read the code.",
R is an open source project, so providing ways for you to do this is not
one of our goals.
Duncan Murdoch https://stat.ethz.ch/pipermail/r-help/2011-July/282755.html
(Prof Murdoch was on the R Core Team and R Foundation for many years.)
Background
Several (many?) programming languages provide the ability to compile a script or program into an executable, the .exe you reference. For example, Python has tools like py2exe and PyInstaller. These tools range from merely compactifying the script into a zip-ball, perhaps obfuscating it, to actually creating an .exe with the script tightly embedded. (This part could use some more citations/research.)
This is usually good enough for many people, since it keeps the honest out. I say it that way because all you need to do is google phrases like decompile py2exe and you'll find tools, howtos, tutorials, etc., whose intent might honestly be to help somebody recover lost code. Regardless of the intentions, they will only slow curious users.
Unfortunately, there are no tools that do this easily for R.
There are tools whose intent is to make it easy for non-R-users to use R-based tools. For instance, RInno and DesktopDeployR are two tools intended to create Windows (not macOS/Linux) installers that support R or R/Shiny tools. But the intent of tools like this is to facilitate the IT tasks involved in getting a user/client to install and maintain R on their computer, not to protect the code that it runs.
Constrain R.exe?
There have been questions (elsewhere?) asking whether one can modify the R interpreter itself so that it does not do everything it is intended to do. For instance, one could redefine base::print in such a way that functions' contents cannot be dumped, make debug not show the code it's about to execute, and perhaps take several other protective steps.
There are a few problems with this approach:
There is always another way to get at a function's contents. Even if you stop print.default and the debugger from doing this, there are other ways to get to the functions (body(.), for one; see the R sketch after this list). How many of these rabbit holes do you think you can accurately traverse and seal off, every one of them, with no adverse effect on normal R code?
Even if you feel you can get to them all, are you encrypting the source .R files that contain your proprietary content? Okay, encrypting is good, except you need to decrypt the contents somehow. Many tools that have encrypted contents do so to thwart reverse-engineering, so they also embed (obfuscatedly, of course) the decryption key in the application itself. Just give it time, somebody will find and extract it.
You might think that you can download the key on start-up (not stored within the app), so that the code is decrypted in real-time. Sorry, network sniffers will get the key. Even if you retrieve it over https://, tools such as https://mitmproxy.org/ will render this step much less effective.
Let's say you have recompiled R to mask print and such, have a way to distribute source code encrypted, and are able to decrypt it in a way that does not easily reveal the key (for full decryption of the source code files). It takes a dedicated user to wade through everything above to get to the source code, but none of those steps may even be necessary: because R is GPL (next point), a client may legally compel you to release your changes to the R interpreter itself (the ones you put in place to prevent printing function contents). This doesn't reveal your source code, but it will reveal many of your methods, which might be sufficient. (Or consider just the risk of legal costs.)
R is GPL, and that means that anything that links to it is also "tainted" with the GPL. This means that anything compiled with Rcpp, for instance, will also be constrained/liberated (your choice) by the GPL. This includes thoughts of using RInside: it is also GPL (>= 2).
To do it without touching the GPL, you'd need to write your own interpreter, likely from scratch, without code from the R project.
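To make the first problem concrete, here is a minimal R sketch (the function and its "secret" logic are hypothetical): even with print.default masked, the source is one call away.

secret <- function(x) {
  x * 42   # stand-in for the proprietary logic
}

body(secret)           # prints the function body, bypassing print()
deparse(secret)        # reconstructs the source as character strings
getAnywhere("secret")  # locates the definition even in unattached namespaces

Each of these (and more, such as trace() or sys.function()) would have to be disabled without breaking ordinary R code.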
Alternatives
Ultimately, if you want to release R-based utilities/apps/functionality to clients, the only sure-fire way to allow them to use your code without seeing it is to ... control the computers on which R will run (and source code will reside). I'll add more links supporting this claim as I find them, but a small start:
https://stat.ethz.ch/pipermail/r-help/2011-July/282717.html
https://www.researchgate.net/post/How_to_make_invisible_the_R_code
Options include anything that keeps the R code and R interpreter completely under your control. Simple examples:
Shiny apps, self-hosted (or on shinyapps.io if you trust their security); servers include Shiny Server (both free and commercial versions), RStudio Connect (commercial only), and ShinyProxy. (The list is not exhaustive.)
Rplumber is an API server, not a Shiny server. The intent is single HTTP(S) endpoint calls, possibly authenticated, supporting whatever HTTP supports (POST, GET, etc.); see the sketch after this list. It can be served in various ways; see its hosting page for options.
Rserve. I know less about this, but from what I've experienced with it, I've not had as much luck integrating with enterprise systems (where, e.g., authentication and fine-control over authorization is important). This does allow near-raw access to R, so it might not be what you want (especially when the intent is to give to clients who may not be strong R users themselves).
OpenCPU should be discussed, but not as a viable candidate for "protect your code". It is very similar to rplumber in that it provides HTTP endpoints, but it supports endpoints for every exported function in every package installed in its R library. This includes the base package, so it is not at all difficult to get the source code of any function that you could get on the R console. I believe this is a design feature, even if it is perfectly at odds with your intent to protect your code.
Anything that can call R or Rscript. This might be PHP or mod_python or similar. Any web-page serving language that can exec("/usr/bin/Rscript",...) can take its output and turn it around to the calling agent. (It might also be possible, for example, for a PHP front-end to call an opencpu endpoint that only permits connections from the PHP-serving host.)
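As a sketch of the plumber option above (the endpoint path, port, and transformation are all hypothetical): the R source stays on a server you control, and the client only ever sees an HTTP endpoint.

# plumber.R - lives only on the server you control
#* @post /transform
function(req) {
  input <- jsonlite::fromJSON(req$postBody)
  # the proprietary transformation happens server-side
  data.frame(result = input$value * 42)
}

# started on the server with, e.g.:
# plumber::plumb("plumber.R")$run(port = 8000)

The client can then POST data and receive results without any access to the code that produced them.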
I'm working on a closed-source game that uses a scripting language for automation. Almost all of the game logic is handled by scripts. Scripts can be compiled to a bytecode format, but due to the nature of the language, identifiers must be preserved. Compiled scripts can be embedded in other text-based resource formats using a binary-to-text encoding.
I want to encrypt the compiled scripts to protect the source during distribution, but because the language, bytecode format, and binary-to-text encoding scheme are all proprietary, do I need to worry about encryption at all? If so, should I simply perturb some bytes and call it a day, or should I make use of a fully featured encryption solution? Encryption should not increase the size of the executable unduly, because scripts can be large and load times are important.
On Windows, the size of the executable has no impact on load times because the exe is just mapped into memory and then paged in as needed. I can't imagine why that would not be true for *nix as well.
So, if the scripts don't need to change separately from your .exe, you could embed them into the .exe; that would make them difficult for users to change even if they could find them. I once wrote a little tool that turned data files into .obj files, which made it really easy to embed data into my .exe - it turned out to be pretty easy to write an object file that contains only data.
Of course, if you really care about protecting this data, then full encryption is your only choice; but if you are just trying to discourage casual hacking, making the files hard to get at might be good enough.
You shouldn't assume that people can't read binary proprietary formats. There are many people who are very good at reverse-engineering protocols without any documentation.
So if you want to keep your source safe, you need some real security. The only problem is that if you encrypt the files, you'll need to give your users the decryption key in order to play the game, and when you do that then it's only a matter of time before someone works out how to get the key and uses it to decrypt all the files.
So basically, there's not much you can do unfortunately. You could try obfuscating your code, but even that's not going to stop everyone.
What you're talking about isn't going to be encryption, because you're going to have to ship the decryption key with it. It's just obfuscation. No matter how much you try to hide the decryption key, if your program can find it, so can the user.
So once you understand that we're just talking about various obfuscation schemes, the question is how much obfuscation you need. Likely the proprietary byte-compiling is a higher barrier than the encryption would be, and I'd call it a day. Anyone who wants to trace the logic can just put a debugger on it whether you encrypt or not. If they've already reverse-engineered your run-time engine to work out the byte-codes, then they're already in the portion of the code that has the unencrypted data.
That said, if you find the identifiers in the file to be problematic, you can mechanically convert them to random strings prior to byte-compiling.
Encryption would not buy you much here.
Basically, whatever encryption layer you add, the executable itself must be able to perform the decryption in order to run the scripts. You lock the door, but you leave the key in the lock. This is unavoidable.
What encryption does is somewhat raise the bar on who may access the data. It requires some disassembly skills. Simply embedding the files in the executable will already filter out the casual, not-very-good hacker. Those who are not deterred by such embedding are also those who will be able to follow the data processing path, find the decryption logic, and siphon out the decrypted code at will. A layer of encryption may also increase the feeling of importance: that which was encrypted is certainly worthwhile. Hence, trying anything too funky may just make your situation worse, not better.
On the other hand, embedding the files in the executable binary is probably a good idea. It removes the need to locate them on the filesystem at runtime (locating things at runtime is known to be somewhat more difficult on Unix systems than on Windows because of hard links: on Windows, an executable can easily obtain its own path, but on Unix the presence of hard links means that "the" executable path is ill-defined).
I'm debating whether I should learn PowerShell, or just stick with Cygwin/Perl scripts/Unix shell scripts, etc.
The benefit of PowerShell would be that the scripts could be more easily used by teammates who don't have Cygwin; however, I don't know if I'd really be writing that many general-purpose scripts, or if people would even use them.
Unix scripting is so powerful; does PowerShell come close enough to warrant switching over?
Here are some of the specific things (or equivalents) I would be looking for in PowerShell:
grep
sort
uniq
Perl (how close does PowerShell come to Perl's capabilities?)
AWK
sed
file (the command that gives file information)
etc.
Tools are just tools.
They help or they don't.
You need help or you don't.
If you know Unix and those tools do what you need them to do on Windows - then you are a happy guy and there is no need to learn PowerShell (unless you want to explore).
My original intent was to include a set of Unix tools in Windows and be done with it (a number of us on the team have deep Unix backgrounds and a healthy dose of respect for that community.)
What I found was that this didn't really help much. The reason for that is that AWK/grep/sed don't work against COM, WMI, ADSI, the Registry, the certificate store, etc., etc.
In other words, UNIX is an entire ecosystem self-tuned around text files. As such, text processing tools are effectively management tools. Windows is a completely different ecosystem self-tuned around APIs and Objects. That's why we invented PowerShell.
What I think you'll find is that there will be lots of occasions when text-processing won't get you what you want on Windows. At that point, you'll want to pick up PowerShell. NOTE - it is not an all-or-nothing deal. Within PowerShell, you can call out to your Unix tools (and use their text processing or PowerShell's). Also you can call PowerShell from your Unix tools and get text.
Again - there is no religion here - our focus is on giving you the tools you need to succeed. That is why we are so passionate about feedback. Let us know where we are falling down on the job or where you don't have a tool you need and we'll put it on the list and get to it.
In all honesty, we are digging ourselves out of a 30-year hole, so it is going to take a while. That said, if you pick up the beta of Windows Server 2008 R2 and/or the betas of our server products, I think you'll be shocked at how quickly that hole is getting filled.
With regard to usage - we've had > 3.5 million downloads to date. That does not include the people using it in Windows Server 2008, because it is included as an optional component and does not need a download.
V2 will ship in all versions of Windows. It will be on-by-default for all editions except Server core where it is an optional component. Shortly after Windows 7/Windows Server 2008 R2 ships, we'll make V2 available on all platforms, Windows XP and above. In other words - your investment in learning will be applicable to a very large number of machines/environments.
One last comment. If/when you start to learn PowerShell, I think you'll be pretty happy. Much of the design is heavily influenced by our Unix backgrounds, so while we are quite different, you'll pick it up very quickly (after you get over cussing that it isn't Unix :-) ).
We know that people have a very limited budget for learning - that is why we are super hard-core about consistency. You are going to learn something, and then you'll use it over and over and over again.
Experiment! Enjoy! Engage!
grep
The Select-String cmdlet and the -match operator work with regexes. You can also directly use .NET's regex support for more advanced functionality.
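A few illustrative invocations (file names and patterns are made up):

Select-String -Path *.log -Pattern 'ERROR'                   # roughly: grep ERROR *.log
Get-ChildItem -Recurse -Filter *.cs | Select-String 'TODO'   # roughly: grep -r TODO
'foo123' -match '\d+'                                        # regex test; captures land in $Matches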
sort
Sort-Object is more powerful (than I remember *nix's sort being), allowing multi-level sorting on arbitrary expressions. Here PowerShell's preservation of the underlying type helps; e.g., a DateTime property will be sorted as a DateTime without having to ensure it is formatted into a sortable representation.
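For instance, a two-level sort (the properties are chosen for illustration): by extension ascending, then by last-write time descending, with LastWriteTime compared as a DateTime rather than as formatted text.

Get-ChildItem |
    Sort-Object Extension, @{ Expression = { $_.LastWriteTime }; Descending = $true }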
uniq
Select-Object -Unique
Perl (how close does PowerShell come to Perl capabilities?)
In terms of Perl's breadth of domain specific support libraries: nowhere close (yet).
For general programming, PowerShell is certainly more cohesive and consistent, and easier to extend. The one gap for text munging is something equivalent to Perl's .. operator.
AWK
It has been long enough since I used AWK (must be >18 years, since later I just used Perl) that I can't really comment.
sed
[See above]
file (the command that gives file information)
PowerShell's strength here isn't so much what it can do with filesystem objects (and it gets full information: dir returns FileInfo or DirectoryInfo objects as appropriate) as the whole provider model.
You can treat the registry, certificate store, SQL Server, Internet Explorer's RSS cache, etc. as an object space navigable by the same cmdlets as the filesystem.
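A quick illustration (the registry path is arbitrary):

Get-ChildItem HKLM:\SOFTWARE\Microsoft     # registry keys, like a directory listing
Get-ChildItem Cert:\CurrentUser\My         # certificates in the personal store
Get-ChildItem Env:                         # environment variables
Get-PSProvider                             # list the providers available in the session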
PowerShell is definitely the way forward on Windows. Microsoft has made it part of their requirements for future non-home products. Hence rich support in Exchange, support in SQL Server. This is only going to expand.
A recent example of this is the TFS PowerToys. Many TFS client operations can be done without having to start tf.exe each time (which requires a new TFS server connection, etc.), and it is notably easier to further process the resulting data. It also allows wide access to the whole TFS client API, in greater detail than is exposed in either Team Explorer or tf.exe.
As someone whose career focused on Windows enterprise development from 1997 - 2010, the obvious answer would be PowerShell for all the good reasons given previously (e.g., it is part of Microsoft's enterprise strategy; it integrates well with Windows/COM/.NET; and using objects instead of files provides for a "richer" coding model). For that reason I'd been using and promoting PowerShell for the last two years or so, with the express belief I was following the "Word of Bill."
However, as a pragmatist I'm no longer sure PowerShell is such a great answer. While it's an excellent Windows tool and provides a much-needed step towards filling the historic hole that is the Windows command line, as we all watch Microsoft's grip on consumer computing slip, it seems increasingly likely that Microsoft has a massive battle ahead to keep its OS as important to the enterprise of the future.
Indeed, given I find my work is increasingly in heterogeneous environments, I'm finding it much more useful to use Bash scripts at the moment, as they not only work on Linux, Solaris and Mac OS X, but they also work—with the help of Cygwin—on Windows.
So if you buy into the belief that the future of the OS is commoditized rather than monopolized, then it seems to make sense to opt for an agile development tool strategy that keeps away from proprietary tools where feasible. If, however, you see your future being dominated by all-that-is-Redmond, then go for PowerShell.
I have used a bit of PowerShell for script automation. While it is very nice that the environment seems to have been thought out much more than Unix shells, in practice the use of objects instead of text streams is much more clunky, and a lot of the Unix facilities that have been developed in the last 30 years are still missing.
Cygwin is still my scripting environment of choice for Windows hosts. It certainly beats the alternatives in terms of getting things done.
There are lots of great answers here, and here is my take. PowerShell is ready if you are... Examples:
grep = "Select-String -Pattern"
sort = "Sort-Object"
uniq = "Get-Unique"
file = "Get-Item"
cat = "Get-Content"
Perl/AWK/sed are not commands but utilities, hence hard to compare; still, you can do almost everything in PowerShell.
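For instance, a rough grep | sort | uniq pipeline using those equivalents (the log file and pattern are made up):

Get-Content access.log |
    Select-String '404' |
    ForEach-Object { $_.Line } |
    Sort-Object |
    Get-Unique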
I have only recently started dabbling in PowerShell with any degree of seriousness. Although for the past seven years I've worked in an almost exclusively Windows-based environment, I come from a Unix background and find myself constantly trying to "Unix-fy" my interaction experience on Windows. It's frustrating to say the least.
It's only fair to compare PowerShell to something like Bash, tcsh, or zsh since utilities like grep, sed, awk, find, etc. are not, strictly speaking, part of the shell; they will always, however, be part of any Unix environment. That said, a PowerShell command like Select-String has a very similar function to grep and is bundled as a core module in PowerShell ... so the lines can be a little blurred.
I think the key thing is culture, and the fact that the respective tool-sets will embody their respective cultures:
Unix is a file-based, (in general, non-Unicode) text-based culture. Configuration files are almost exclusively text files. Windows, on the other hand, has always been far more structured with respect to configuration formats - configurations are generally kept in proprietary databases (e.g., the Windows registry) which require specialised tools for their management.
The Unix administrative (and, for many years, development) interface has traditionally been the command line and the virtual terminal. Windows started off as a GUI, and administrative functions have only recently started moving away from being exclusively GUI-based. We can expect the Unix command-line experience to be richer and more mature given the significant lead it has on PowerShell, and my experience matches this:
The Unix administrative experience is geared towards making things easy to do in a minimal number of keystrokes; this is probably a result of the historical situation of having to administer a server over a slow 9600 baud dial-up connection. Now PowerShell does have aliases, which go a long way towards getting around the rather verbose Verb-Noun standard, but getting to know those aliases is a bit of a pain (anyone know of something better than: alias | where {$_.ResolvedCommandName -eq "<command>"}?).
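For a concrete cmdlet, that lookup looks like this (Get-ChildItem chosen as an example; Get-Alias -Definition is a shorter built-in route to the same answer):

alias | where { $_.ResolvedCommandName -eq 'Get-ChildItem' }
Get-Alias -Definition Get-ChildItem    # same result with less typing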
An example of the rich way in which history can be manipulated:
iptables commands are often long-winded and repeating them with slight differences would be a pain if it weren't for just one of many neat features of history manipulation built into Bash, so inserting an iptables rule like the following:
iptables -I camera-1-internet -s 192.168.0.50 -m state --state NEW -j ACCEPT
a second time for another camera ("camera-2"), is just a case of issuing:
!!:s/-1-/-2-/:s/50/51
which means "perform the previous command, but substitute -1- with -2- and 50 with 51.
The Unix experience is optimised for touch-typists; one can pretty much do everything without leaving the "home" position. For example, in Bash, using the Emacs key bindings (yes, Bash also supports vi bindings), cycling through the history is done using Ctrl-P and Ctrl-N whilst moving to the start and end of a line is done using Ctrl-A and Ctrl-E respectively ... and it definitely doesn't end there. Try even the simplest of navigation in the PowerShell console without moving from the home position and you're in trouble.
Simple things like versatile paging (a la less) on Unix don't seem to be available out-of-the-box in PowerShell which is a little frustrating, and a rich editor experience doesn't exist either. Of course, one can always download third-party tools that will fill those gaps, but it sure would be nice if these things were just "there" like they are on pretty much any flavour of Unix.
The Windows culture, at least in terms of system APIs, is largely driven by the supporting frameworks, viz. COM and .NET, both of which are highly structured and object-based. On the other hand, access to Unix APIs has traditionally been through a file interface (/dev and /proc) or (non-object-oriented) C-style library calls. It's no surprise, then, that the scripting experiences match their respective OS paradigms: PowerShell is by nature structured (everything is an object) and Bash-and-friends file-based. The structured API at the disposal of a PowerShell programmer is vast (essentially matching the vastness of the existing set of standard COM and .NET interfaces).
In short, although the scripting capabilities of PowerShell are arguably more powerful than Bash (especially when you consider the availability of the .NET BCL), the interactive experience is significantly weaker, particularly if you're coming at it from an entirely keyboard-driven, console-based perspective (as many Unix-heads are).
I am not a very experienced PowerShell user by any means, but the little bit of it that I was exposed to impressed me a great deal. You can chain the built-in cmdlets together to do just about anything that you could do at a Unix prompt, and there's some additional goodness for doing things like exporting to CSV, HTML tables, and for more in-depth system administration types of jobs.
And if you really needed something like sed, there's always UnxUtils or GnuWin32, which you could integrate with PowerShell fairly easily.
As a longtime Unix user, I did however have a bit of trouble getting used to the command naming scheme, and I certainly would have benefitted more from it if I knew more .NET.
So essentially, I say it's well worth learning it if the Windows-only-ness of it doesn't pose a problem.
If you like shell scripting you will love PowerShell!
Start at A guided tour of the Microsoft Command Shell (Ars Technica).
As my recent experiments led me into the depths of PowerShell and .NET calls, I must say that PowerShell can replace Cygwin and the Unix shell.
I'm not sure about Perl, but since both PowerShell and Perl are Turing complete as programming languages, I give this as a yes to replacing Perl too.
One thing that PowerShell has above Cygwin and ordinary Bash under *nix is its ability to perform sandboxed DLL calls, manipulating the operating system via direct API calls, WMI methods, and even COM objects. How about launching Internet Explorer via code, then doing whatever you want with its displayed document, effectively emulating a back-end for a web server?
How about gathering data from SQL servers and other data providers, parsing it, and exporting it as CSV, mail messages, text, and actually any kind of existing or non-existing file format? (With the proper skills to create a valid file out of the data received, of course - but CSV export is readily available.)
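A small sketch of both ideas (the URL, file names, and selected properties are arbitrary):

# drive Internet Explorer through its COM interface
$ie = New-Object -ComObject InternetExplorer.Application
$ie.Visible = $true
$ie.Navigate('https://example.com')

# gather data and export it as CSV
Get-Process | Select-Object Name, Id, WorkingSet |
    Export-Csv -Path processes.csv -NoTypeInformation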
And there is extra security available via signed cmdlets and scripts, group policies, and execution policies that help prevent malicious code from running on your system even if you run it as administrator.
As for which commands are implemented: the answer by Richard already lists them and PowerShell's means of emulating their functionality.
As for whether PowerShell is strong enough to warrant switching over: this is more a matter of personal preference, although as more and more Windows services provide PowerShell cmdlets to control them, not using PowerShell where these services are present is a hindrance. (Hyper-V Server is the primary such service, and it also lets you do more with PowerShell cmdlets than with the GUI!)
Probably this answer is five years late, but still, if someone performs administrative tasks or general scripting of various stuff on Windows, they should definitely try harnessing PowerShell for their purposes.
When you compare PowerShell to the combination Cygwin/Perl/Shell, be aware that PowerShell only represents the "Shell" part of that combination.
You can, however, invoke any command from PowerShell just as you do from cmd.exe or Cygwin. It does not re-implement the functions you listed, and it is certainly not comparable to Perl.
It's "just" a shell, but it makes programming easier providing a comfortable interface to the .NET universe.
Also keep in mind that PowerShell requires Windows XP, Windows Server 2003 or higher, which may pose a problem depending on your IT infrastructure.
Update:
I had no idea what kind of philosophical debate my answer would spark.
I posted my answer in the context of the question: Compare PowerShell to Cygwin and Perl and Bash.
PowerShell is a shell, as it makes no syntactic difference between built-in commands, commandlets, user functions, and external commands (.exe, .bat, .cmd). Only invoking .NET methods differs, by adding a namespace or an object to the call.
Its programmability derives from the .NET framework, not from anything specific to the PowerShell "language".
I'd say I believe PowerShell is a "scripting language" as soon as Bugzilla or MediaWiki are implemented as PowerShell scripts running on a web server ;)
Until then, enjoy the comparisons.
TL;DR -- I don't hate Windows or PowerShell. I just can't do anything in Windows or on PowerShell.
I personally still find PowerShell underwhelming at best.
Tab completion of directory paths does not compound, requiring the user to enter a path separator after every name completion.
I still feel like Windows doesn't even have the concept of a path or of what a path is, with no accessible user home indicator ~/ short of some #environment://somejibberish/%user_home%
NTFS is still a mess and seemingly always will be. Good luck navigating.
A cmd-esque interface: the dinosaur cmd.exe is still visible in PowerShell. Edit → Mark is still the only way to copy information, and only in the form of rectangular blocks of visible terminal space; Edit → Paste is still the only way to paste strings into the terminal.
Painting it blue doesn't make it any more attractive. I don't mind Microsoft developers having a taste in color though.
The window always opens at the top left corner of the screen. For somebody who uses vertical task bars this is incredibly annoying, especially considering that the Windows task bar will cover the only corner of the window that gives access to the copy/paste functionality.
I can't speak much to the tools Windows includes. But given that there is a whole set of open-source, freely licensed CLI tools, the fact that PowerShell ships with, to my knowledge, none of them is an utter disappointment.
PowerShell's wget takes arguments seemingly incompatible with GNU wget's. Thanks - a glimmer of hope rendered portably useless.
PowerShell is not POSIX- or Bash-compatible; in particular, the && operator is not handled, making the simplest conditional command chaining not a thing.
I don't know, man; I gave it a shot, I really did; I still try to give it a shot in the hope that the next time I open it, it will be any less useless. I cannot do anything in PowerShell, and I can barely do things with a real project to bring GNU tools to Windows.
msysGit gives me the dinosaur cmd.exe prompt with a couple of GNU tools, and it is still very underwhelming, but at least path completion works. And the Git command will run in Git Bash.
Mintty for msysGit gives the Cygwin interface over msysGit's environment, making copy and paste a thing (select to copy with the mouse, Shift+Ins to paste - how modern...). However, things like git push are broken in Mintty.
I don't mean to rant, but I still see huge problems with command-line usability on Windows even given tools like Cygwin.
P.S.: Just because something can be done in PowerShell doesn't make it usable. Usability is deeper than ability and is what I tend to focus on when trying to use a product as a consumer.
The cmdlets in PowerShell are very nice and work reliably. Their object-orientedness appeals to me a lot since I'm a Java/C# developer, but it's not at all a complete set. Since it's object oriented, it's missed out on a lot of the text stream maturity of the POSIX tool set (awk and sed to name a few).
The best answer I've found to the dilemma of loving OO techniques and loving the maturity in the POSIX tools is to use both! One great aspect of PowerShell is that it does an excellent job piping objects to standard streams. PowerShell by default uses an object pipeline to transport its objects around. These aren't the standard streams (standard out, standard error, and standard in). When PowerShell needs to pass output to a standard process that doesn't have an object pipeline, it first converts the objects to a text stream. Since it does this so well, PowerShell makes an excellent place to host POSIX tools!
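For example (a hypothetical setup, assuming grep.exe and sed.exe are on the PATH): when a pipeline crosses from a cmdlet to a native executable, PowerShell renders the objects to text lines first, so the POSIX tools behave as if they were reading a normal text stream.

Get-Process | grep.exe -i 'powershell'          # objects become text at the boundary
Get-Content data.txt | sed.exe 's/foo/bar/'     # classic sed over a file's contents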
The best POSIX tool set is GnuWin32. It does take more than 5 seconds to install, but it's worth the trouble, and as far as I can tell, it doesn't modify your system (registry, c:\windows\* folders, etc.) except copying files to the directories you specify. This is extra nice because if you put the tools in a shared directory, many people can access them concurrently.
GnuWin32 Installation Instructions
Download and execute the exe (it's from the SourceForge site), pointing it to a suitable directory (I'll be using C:\bin). It will create a GetGnuWin32 directory there, in which you will run download.bat and then install.bat (without parameters). After that, there will be a C:\bin\GetGnuWin32\gnuwin32\bin directory - the most useful folder that has ever existed on a Windows machine. Add that directory to your path, and you're ready to go.
I haven't seen PowerShell really take off, at least not yet. So it might not be worth the effort of learning it unless others on your team already know it.
For your predicament you might be better off with a scripting language that others could get behind: Perl, as you mentioned, or others like Ruby or Python.
I think a lot of it depends on what you need to do. Personally I've been using Python for my own personal scripts, but I know when I start writing something that I'll never be able to pass it on - so I try not to do anything too revolutionary.
Why not use both? Call PowerShell scripts in Cygwin just like any other interpreted scripts like Perl, etc.
I do this often enough that I wrote https://bitbucket.org/jbianchi/powershell, a Bash wrapper to call powershell.exe in Cygwin. It can be used in a shebang as the first line of a powershell.exe .ps1 script (since PowerShell also uses "#" as a comment character). See https://bitbucket.org/jbianchi/powershell/wiki/Home for examples.
In a couple of lines: Cygwin and PowerShell are different tools; however, if you have Cygwin installed, you can run the Cygwin executables within a PowerShell session. I've gotten so used to PowerShell that now I no longer use grep, sort, awk, etc. - there are pretty much built-in alternatives in PowerShell, and if not, you can find a cmdlet out there.
The main tool I find myself using is ssh.exe, but within a PowerShell session.
It works great.
I found PowerShell programming to be not worth the effort.
I have several years of experience with shell scripting under Unix, but I found it enormously difficult to do much of anything with PowerShell.
It seems like many functions require you to interrogate Windows Management Instrumentation (WMI) and issue SQL-like commands to get the information you need.
For example, I wanted to write a script to remove all files with a specific suffix from a directory tree. Under Unix, this would be a simple ...
find . -name \*.xyz -exec rm {} \;
After a couple of hours dicking around with Scripting.FileSystemObject and WScript.Shell and issuing "SELECT * FROM Win32_ShortcutFile WHERE Drive = '" & drive & "' AND Path = '" & searchFolder & "'", I finally gave up, settled for Windows Explorer's Search command, and just did it manually. There's probably some way to do what I wanted, but I didn't see anything obvious, and all the examples on the MSDN site were so trivial as to be worthless.
EDIT Heh, of course as soon as I wrote this I poked around some more and found what I had been missing: the -recurse option to the remove-item command is faulty (revealed if you use get-help remove-item -detailed).
I had been trying "remove-item -filter '*.xyz' -recurse" and it wasn't working, so I gave up on it.
Turns out you need to use get-childitem -filter '*.xyz' -recurse | remove-item
You can also try running Bash scripts on Windows using BashWin at
https://github.com/skanga/BashWin.
PowerShell is very powerful - more powerful than the standard built-ins of the Unix shells (but only because it includes much of the functionality usually shelled out to subprograms). Also, consider that you can write cmdlets in any .NET language, including IronPython, IronRuby, PerlNet, etc. Or you can simply call your Cygwin commands from PowerShell, ignoring all the extra functionality, and it will work similarly to Bash, KornShell, or whatever...