Looking to add UI testing to my WinForms 3.5 project. Currently using MSTest for unit testing and MSBuild to build it.
One option I am looking at is Test Automation FX.
The product seems to be a bit new and not fully polished, but it seems to work. So I'm curious whether anyone else is using it and has good or bad things to say about it.
It is quite a bit cheaper ($450) than TestComplete ($2000), so I'm also trying to figure out what, if anything, Test Automation FX is lacking.
I have recently gone through the process of choosing a GUI testing solution and finally decided to go with TestAutomationFX. Here are the main reasons I made this choice:
It creates real code (in my case C#), which is invaluable for me: for maintainability, archivability, flexibility and so on. It is much easier to write in C# (I can ask my developers for support) than in a proprietary script language I would have to learn from scratch (or worse: endless grids of non-maintainable drop-down boxes). It also lets me build a good testing framework
It has seamless integration with NUnit (which my team uses for unit and integration tests). My data-driven tests come from the same CSVs, and GUI test reports are simply appended to unit test reports, which makes archiving and maintenance easy
It has much better recognition of the complex UI objects my developers use (Telerik, Infragistics, home-made): 25% of my clicks are in x/y mode, versus 67% with TestComplete or Ranorex
Their sales engineers gave me excellent support (at least during the evaluation period)
It has no major bugs and no complex license setup (yes, I'm looking at you, TestComplete guys, see my other post), no runtime license issues, and no virtual machine licensing problems either
Though this was not that important to me, it's four times cheaper than other commercial solutions
On the other hand, there is one moderate flaw in the application:
The mapping system (i.e. mapping AUT object properties to test application objects) is really touchy: code refactoring needs special attention. I work around this by committing to my VCS before every code refactoring. Then again, does TestComplete even offer code refactoring?
OK, as you can see, I'm pretty enthusiastic about this solution. I've been using it for only a few days and may run into bigger problems later. But right now it gives me exactly what I wanted, so let me be happy :)
The company I work for uses SilkTest, which works very well. In general, when using automated testing, you will be doing lots of regression testing. What matters most is that when you've modified an existing project, the test software must still be able to run those tests without any errors (or with the errors you'd expect).
But the market does have lots and lots of other test solutions. In the past, I even saw a test setup that required two computers and additional hardware. The hardware connected to the monitor, mouse and keyboard of the test system; the other end connected to a special extension card in the test server. The hardware was there so the server could send keyboard commands to the test system and record anything that happened on the screen. With some additional OCR software, it was quite capable of analysing any errors. Then again, it had a six-digit price tag, and to be honest, I'd rather buy a Porsche for that price and probably have some cash left to bring two beautiful dates with me while driving along the boulevards in Nice, France...
There's a Wiki page with an overview of all kinds of test software. It doesn't compare them, but you can find Test Automation FX there, although it doesn't provide much information. It seems limited to testing Windows GUIs only.
TestComplete provides more information. Then again, comparing the wiki entries, it also supports a lot more. Really a lot more. Enough to explain why it's that expensive...
I have just started evaluating different GUI automation testing tools. I have looked at Test Automation FX, Ranorex and TestComplete, and their prices are in that order.
These are some of my conclusions:
Test Automation FX - Coded in C#, fully integrated with Visual Studio, but very slow at finding components, uses a lot of memory, and doesn't fully support DevExpress components
Ranorex - Coded in C#. Has a studio for maintaining tests but can also be fully integrated into VS. Has better object support, and you can find objects in your software by regular expression on several properties. Has some problems with DevExpress components but is rather fast to work with.
TestComplete - Uses its own scripting language; VBScript is the easiest one (C#Script is just awkward notation). It has really good support for DevExpress components and runs the tests really fast, but is very expensive.
Right now I don't know which one I should use. Ranorex is a little better than Test Automation FX, but both lack full support for DevExpress components. TestComplete is nice, but it introduces a new language to the development and is very expensive. On the other hand, the test scripts are small and the program has more logic for finding where to click.
I have evaluated Test Automation FX. Although it recognizes all the controls of my application (we use third-party controls from Infragistics, i.e. NetAdvantage controls for WPF), it is very slow at recognizing the controls, and even playback is quite slow compared to QTP or Ranorex. I would recommend Ranorex over Test Automation FX.
We are currently using FitNesse for subsystem testing.
We are having a lot of issues using the tool; to mention a few:
Development time for writing fixtures is more than for writing the actual code
Issues around checking in the DLLs so that QA can test them
Issues running FitNesse for a project that uses NHibernate
Limited help online
We are planning to use some other tool for the testing.
A few options we know of are:
SOAP UI
Story teller
I am not sure whether we will have similar problems with these tools.
It would be great to know if someone has experience using these tools and could guide us.
In our project we have adopted TDD, so we have NUnit for unit testing.
It would be great if anyone is aware of tools/ideas that could extend NUnit to subsystem testing as well.
Component testing tools are all about calling functions. Your tests cause functions to be called in "fixtures" that then call into the SUT. Any tool based on this premise will encounter the problems you reference above.
However, most of those problems are manageable. For example, you should not be writing lots of fixtures; if you are, something is wrong. Secondly, your fixtures ought to be little more than wiring code to call the APIs in your application. If your fixtures are doing significant work, then something is wrong.
In most FitNesse environments the number of fixtures is rather small. For example, there are over two hundred acceptance tests for FitNesse itself, but the number of fixtures is on the order of a dozen, and they are all relatively simple.
Get help on the fitnesse@yahoogroups.com list. The folks there are usually very responsive to questions.
If you can communicate with your software using text, then I have had success on past projects rolling my own framework using expect.
The framework I cooked up stored tests as XML files, using simple xUnit-style markup. The XML files were then transformed into executable tests using a stylesheet. I ended up transforming the tests into Tcl/Expect, but you could transform them into anything. In fact, if you wanted, you could transform them into multiple languages, depending on your needs.
Several people have kindly reminded me (in the same way you remind your poor doddering grandfather about the drool on his chin) that we are in the 21st century when they inquire why I would choose Tcl over some more modern language. As it turns out, for the purposes of this kind of testing, I haven't yet found a better choice. The Tcl language still kicks butt in this area. Trust me, I didn't wake up one day and say to myself, "Self, what I need is a test framework implemented in a scripting language everyone will hate!"
Believe it or not, I really was looking for a tool, any tool, that had the following characteristics:
Cross platform. This was non-negotiable. We do a lot of cross platform development and we already use WAY too many tools that don't support cross platform development.
Simple syntax. Say what you want about Tcl, but the syntax is very regular. I knew that some native code would probably creep even into the XML files (and originally it was Tcl only, no XML) and I wanted the syntax to be comprehensible to a non-programmer. This simplicity is a core strength of Tcl. As it turns out, it also made transforming the XML easier too.
Free. My favorite price ;-)
Writing tests as simple xml files allowed non-programmers to write customer acceptance level tests - no programming required.
Easily extended.
I did not set out to home-grow this to the extent I have. Initially, I looked at established test frameworks like DejaGnu and android. Mostly they had way too many features. They were so feature-laden that I didn't think they would be easy for a project to start using without a lot of up-front training. Looking at DejaGnu got me interested in Tcl in general, and after a brief look at tcltest, I almost gave up. Both DejaGnu and tcltest assume you are an advanced Tcl scripter, which I didn't think anyone at my company ever would be. In addition, I wanted the test framework (if possible) to support an xUnit type of test framework, and neither of these tools did.
Eventually I found TclTkUnit, a Tcl based testing framework that is designed along xUnit lines. It was only a short leap of logic to realize I could run TclTkUnit in Expect instead of tclsh and get everything I needed.
As it ended up getting used more, I added another stylesheet to render the XML files nicely in a web browser. The test framework generated its own documentation.
On another project we needed a very basic sim/stim environment to emulate a person throwing switches and pushing buttons on a piece of hardware we didn't have. It only took a few hours to hack the test framework to function as a simulator. Creating the framework took some work, but we felt that it did pay benefits in the long run. I really believe that these types of unforeseen consequences of creating your own tools are why people in the agile community, and XP in particular, have always been such strong advocates.
We have adopted a Fitnesse-based but practically-code-free approach using GenericFixture (google for Anubhava to find his wordpress site) for Fitnesse.
What this allows us to do is to create "executable test narratives" using a language that is friendly to the business side (as opposed to the technical side). This language, which is very easily defined, practically without coding, in Generic Fixture, is called a DSL (domain-specific language). So we can write our test narratives using e.g. medical terms, or even in a language other than English. Basically, we end up transforming our use cases into executable narratives.
We are starting to use it in a large project (15 people for 2 years) and it seems (so far) to have a good future.
It easily allows Test Driven Development or test-creation after development (traditional approach).
It is wiki-based (FitNesse) and its versioning and refactoring functionality has proven sufficient so far.
I can give more info if anyone is interested.
best regards,
Aristotelis.
We use unit-testing frameworks like NUnit to drive our subsystem tests as well - the tests don't care how they are run. It doesn't have FitNesse's document-based approach, though.
We're in the initial stages of a large project, and have decided that some form of automated UI testing is likely going to be useful for us, but have not yet sorted out exactly how this is going to work...
The primary goal is to automate a basic install and run-through of the app, so if a developer causes a major breakage (eg: app won't install, network won't connect, window won't display, etc) the testers don't have to waste their time (and get annoyed by) installing and configuring a broken build
A secondary goal is to help testers when dealing with repetitive tasks.
My question is: Who should create these kinds of tests? The implicit assumption in our team has been that the testers will do it, but everything I've read on the net always seems to imply that the developers will create them, as a kind of 'extended unit test'.
Some thoughts:
The developers seem to be in a much better position to do this, given that they know control IDs, classes, etc., and have a much better picture of how the app works
The testers have the advantage of NOT knowing how the app is working, and hence can produce tests which may be much more useful
I've written some initial scripts using IronRuby and White. This has worked really well, and is powerful enough to do literally anything, but then you need to be able to write code to write the UI tests
All of the automated UI test tools we've tried (TestComplete, etc) seem to be incredibly complex and fragile, and while the testers can use them, it takes them about 100 times longer and they're constantly running into "accidental complexity" caused by the UI test tools.
Our testers can't code, and while they're plenty smart, all I got were funny looks when I suggested that testers could potentially write simple ruby scripts (even though said scripts are about 100x easier to read and write than the mangled mess of buttons and datagrids that seems to be the standard for automated UI test tools).
I'd really appreciate any feedback from others who have tried UI automation in a team of both developers and testers. Who did what, and did it work well? Thanks in advance!
Edit: The application in question is a C# WPF "rich client" application which connects to a server using WCF
Ideally it should really be QA who end up writing the tests. The problem with using a programmatic solution is the learning curve involved in getting the QA people up to speed with using the tool. Developers can certainly help with this learning curve and help the process by mentoring, but it still takes time and is a drag on development.
The alternative is to use a simple GUI tool which backs a language (and data scripts) and enables QA to build scripts visually, delving into the finer details of the language only when really necessary - development can also get involved here also.
The most successful attempts I've seen have definitely been with the latter, but setting this up is the hard part. Selenium has worked well for simple web applications and simple threads through the application. JMeter (for scripted web conversations with web services) has also worked well... Another option is an in-house-built test harness: a simple tool on top of a scripting language (Groovy, Python, Ruby) that allows QA to feed test data into the application, either via a GUI or via data files. The data files can be simple properties files or, in more complex cases, structured data files (something like YAML or even Excel). That way they can build basic smoke tests to start, and later expand those into various scenario-driven tests.
Finally... I think rich client apps are way more difficult to test in this way, but it depends on the nature of the language and the tools available to you...
In my experience, testers who can code will switch jobs for a pay raise as developers.
I agree with you on the automated UI testing tools. Every place I've worked that was rich enough to afford WinRunner or LoadRunner couldn't afford the staff to actually use it. The prices may have changed, but back then, these were in the high 5-digit to low 6-digit price tags (think of the price of a starter home). The products were hard to use, and were usually kept uninstalled in a locked cabinet because everyone was afraid of getting in trouble for breaking them.
I worked over 7 years as an application developer before I finally switched to testing and test automation. Testing is much more challenging than coding, and any automation developer who wants to succeed should master testing skills.
Some time ago I put my thoughts on skill matrices in a couple of blog posts.
If interested to discuss:
http://automation-beyond.com/2009/05/28/qa-automation-skill-matrices/
Thanks.
I think having the developers write the tests will be of the most use. That way, you can get "breakage checking" throughout your dev cycle, not just at the end. If you do nightly automated builds, you can catch and fix bugs when they're small, before they grow into huge, mean, man-eating bugs.
What about the testers proposing the tests, and the developers actually writing them?
I believe at first it largely depends on the tools you use.
Our company currently uses Selenium (We're a Java shop).
The Selenium IDE (which records actions in Firefox) works OK, but developers need to manually correct mistakes it makes against our webapps, so it's not really appropriate for QA to write tests with.
One thing I tried in the past (with some success) was to write library functions as wrappers for Selenium functions. They read as plain English:
selenium.clickButton("Button Text")
...but behind the scenes they check for proper layout and tags on the button, that it has an ID, etc.
Unfortunately this required a lot of set up to allow easy writing of tests.
I recently became aware of a tool called Twist (from Thoughtworks, built on the Eclipse engine), which is a wrapper for Selenium, allowing plain English style tests to be written. I am hoping to be able to supply this to the testers, who can write simple assertions in plain English!
It automatically creates stubs for new assertions too, so the testers could write the tests, and pass them to developers if they need new code.
I've found the most reasonable option is to have enough specs such that the QA folks can stub out the tests: basically figure out what they want to test at each 'screen' or on each component, and stub those out. The stubs should be named such that they're very descriptive as to what they're testing. This also offers a way to crystallize functional requirements. In fact, doing the requirements in this fashion is particularly easy, and it helps non-technical people really work through the muddy waters of their own thought process.
The stubs can be filled in via a combination of QA/dev people. This allows you to CHEAPLY train QA people as to how to write tests, and they typically slurp it up as it furthers their job security.
I think it depends mostly on the skill level of your test team, the tools available, and the team culture with respect to how developers and testers interact with each other. My current situation is that we have a relatively technical test team. All testers are expected to have development skills. In our case, testers write the UI automation. If your test team doesn't have those skills, they will not be set up for success. In that case, it may be best for developers to write your UI automation.
Other factors to consider:
What other testing tasks are on the testers' plate?
Who are your customers and what are their expectations related to quality?
What is the skill level of the development team and what is their willingness to take on test automation work?
-Ron
I'm debating whether I should learn PowerShell, or just stick with Cygwin/Perl scripts/Unix shell scripts, etc.
The benefit of PowerShell would be that the scripts could be more easily used by teammates that don't have Cygwin; however, I don't know if I'd really be writing that many general purpose scripts, or if people would even use them.
Unix scripting is so powerful, does PowerShell come close enough to warrant switching over?
Here are some of the specific things (or equivalents) I would be looking for in PowerShell:
grep
sort
uniq
Perl (how close does PowerShell come to Perl's capabilities?)
AWK
sed
file (the command that gives file information)
etc.
Tools are just tools.
They help or they don't.
You need help or you don't.
If you know Unix and those tools do what you need them to do on Windows - then you are a happy guy and there is no need to learn PowerShell (unless you want to explore).
My original intent was to include a set of Unix tools in Windows and be done with it (a number of us on the team have deep Unix backgrounds and a healthy dose of respect for that community.)
What I found was that this didn't really help much. The reason for that is that AWK/grep/sed don't work against COM, WMI, ADSI, the Registry, the certificate store, etc., etc.
In other words, UNIX is an entire ecosystem self-tuned around text files. As such, text processing tools are effectively management tools. Windows is a completely different ecosystem self-tuned around APIs and Objects. That's why we invented PowerShell.
What I think you'll find is that there will be lots of occasions when text processing won't get you what you want on Windows. At that point, you'll want to pick up PowerShell. NOTE - it is not an all-or-nothing deal. Within PowerShell, you can call out to your Unix tools (and use their text processing or PowerShell's). You can also call PowerShell from your Unix tools and get text.
Again - there is no religion here - our focus is on giving you the tools you need to succeed. That is why we are so passionate about feedback. Let us know where we are falling down on the job or where you don't have a tool you need and we'll put it on the list and get to it.
In all honesty, we are digging ourselves out of a 30-year hole, so it is going to take a while. That said, if you pick up the beta of Windows Server 2008 R2 and/or the betas of our server products, I think you'll be shocked at how quickly that hole is getting filled.
With regard to usage - we've had > 3.5 million downloads to date. That does not include the people using it in Windows Server 2008, because it is included as an optional component and does not need a download.
V2 will ship in all versions of Windows. It will be on-by-default for all editions except Server core where it is an optional component. Shortly after Windows 7/Windows Server 2008 R2 ships, we'll make V2 available on all platforms, Windows XP and above. In other words - your investment in learning will be applicable to a very large number of machines/environments.
One last comment. If/when you start to learn PowerShell, I think you'll be pretty happy. Much of the design is heavily influenced by our Unix backgrounds, so while we are quite different, you'll pick it up very quickly (after you get over cussing that it isn't Unix :-) ).
We know that people have a very limited budget for learning - that is why we are super hard-core about consistency. You are going to learn something, and then you'll use it over and over and over again.
Experiment! Enjoy! Engage!
grep
Select-String cmdlet and -match operator work with regexes. Also you can directly make use of .NET's regex support for more advanced functionality.
sort
Sort-Object is more powerful than I remember *nix's sort being, allowing multi-level sorting on arbitrary expressions. Here PowerShell's preservation of the underlying type helps; e.g. a DateTime property will be sorted as a DateTime without your having to ensure it is formatted into a sortable representation.
uniq
Select-Object -Unique
Perl (how close does PowerShell come to Perl capabilities?)
In terms of Perl's breadth of domain specific support libraries: nowhere close (yet).
For general programming, PowerShell is certainly more cohesive and consistent, and easier to extend. The one gap for text munging is something equivalent to Perl's .. operator.
AWK
It has been long enough since using AWK (must be >18 years, since later I just used Perl), so can't really comment.
sed
[See above]
file (the command that gives file information)
PowerShell's strength here isn't so much what it can do with filesystem objects (and it gets full information here: dir returns FileInfo or DirectoryInfo objects as appropriate); it is the whole provider model.
You can treat the registry, certificate store, SQL Server, Internet Explorer's RSS cache, etc. as an object space navigable by the same cmdlets as the filesystem.
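For what it's worth, here is a minimal sketch of what that provider model looks like in practice; the paths and property names below are just illustrative examples, not anything specific to your setup:

# Filesystem: returns FileInfo / DirectoryInfo objects, not just text
Get-ChildItem C:\Windows\System32 -Filter *.dll |
    Sort-Object Length -Descending | Select-Object -First 5
# Registry: same cmdlet, different provider
Get-ChildItem HKLM:\SOFTWARE\Microsoft | Select-Object -First 5
# Certificate store: again the same cmdlet
Get-ChildItem Cert:\LocalMachine\My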
PowerShell is definitely the way forward on Windows. Microsoft has made it part of their requirements for future non-home products. Hence rich support in Exchange, support in SQL Server. This is only going to expand.
A recent example of this is the TFS PowerToys. Many TFS client operations can be done without having to start up tf.exe each time (which requires a new TFS server connection, etc.), and it is notably easier to further process the data. It also gives wide access to the whole TFS client API, in greater detail than is exposed in either Team Explorer or tf.exe.
As someone whose career focused on Windows enterprise development from 1997 to 2010, the obvious answer would be PowerShell, for all the good reasons given previously (e.g., it is part of Microsoft's enterprise strategy; it integrates well with Windows/COM/.NET; and using objects instead of files provides for a "richer" coding model). For that reason I'd been using and promoting PowerShell for the last two years or so, in the express belief that I was following the "Word of Bill."
However, as a pragmatist I'm no longer sure PowerShell is such a great answer. While it's an excellent Windows tool and provides a much-needed step towards filling the historic hole that is the Windows command line, as we all watch Microsoft's grip on consumer computing slip, it seems increasingly likely that Microsoft has a massive battle ahead to keep its OS as important to the enterprise of the future.
Indeed, given I find my work is increasingly in heterogeneous environments, I'm finding it much more useful to use Bash scripts at the moment, as they not only work on Linux, Solaris and Mac OS X, but they also work—with the help of Cygwin—on Windows.
So if you buy into the belief that the future of the OS is commoditized rather than monopolized, then it seems to make sense to opt for an agile development tool strategy that keeps away from proprietary tools where feasible. If, however, you see your future being dominated by all-that-is-Redmond, then go for PowerShell.
I have used a bit of PowerShell for script automation. While it is very nice that the environment seems to have been thought out much more than Unix shells, in practice the use of objects instead of text streams is much more clunky, and a lot of the Unix facilities that have been developed in the last 30 years are still missing.
Cygwin is still my scripting environment of choice for Windows hosts. It certainly beats the alternatives in terms of getting things done.
There are lots of great answers here, and here is my take. PowerShell is ready if you are... Examples:
grep = "Select-String -Pattern"
sort = "Sort-Object"
uniq = "Get-Unique"
file = "Get-Item"
cat = "Get-Content"
Perl/AWK/sed are not commands but utilities, so they are hard to compare, but you can do almost everything in PowerShell (a quick example below).
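A minimal sketch of those equivalents in a pipeline; the file name and pattern are made up for illustration:

# Rough equivalent of: grep ERROR app.log | sort | uniq
Get-Content .\app.log | Select-String -Pattern 'ERROR' |
    ForEach-Object { $_.Line } | Sort-Object | Get-Unique
# Rough equivalent of: file app.log (basic file information)
Get-Item .\app.log | Select-Object Name, Length, LastWriteTime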
I have only recently started dabbling in PowerShell with any degree of seriousness. Although for the past seven years I've worked in an almost exclusively Windows-based environment, I come from a Unix background and find myself constantly trying to "Unix-fy" my interaction experience on Windows. It's frustrating to say the least.
It's only fair to compare PowerShell to something like Bash, tcsh, or zsh since utilities like grep, sed, awk, find, etc. are not, strictly speaking, part of the shell; they will always, however, be part of any Unix environment. That said, a PowerShell command like Select-String has a very similar function to grep and is bundled as a core module in PowerShell ... so the lines can be a little blurred.
I think the key thing is culture, and the fact that the respective tool-sets will embody their respective cultures:
Unix is a file-based, (in general non-Unicode) text-based culture. Configuration files are almost exclusively text files. Windows, on the other hand, has always been far more structured with respect to configuration formats: configurations are generally kept in proprietary databases (e.g., the Windows registry) which require specialised tools for their management.
The Unix administrative (and, for many years, development) interface has traditionally been the command line and the virtual terminal. Windows started off as a GUI and administrative functions have only recently started moving away from being exclusively GUI-based. We can expect the Unix experience on the command line to be a richer, more mature one given the significant lead it has on PowerShell, and my experience matches this. On this, in my experience:
The Unix administrative experience is geared towards making things easy to do in a minimal number of keystrokes; this is probably a result of the historical situation of having to administer a server over a slow 9600 baud dial-up connection. Now PowerShell does have aliases which go a long way towards getting around the rather verbose Verb-Noun standard, but getting to know those aliases is a bit of a pain (anyone know of something better than: alias | where {$_.ResolvedCommandName -eq "<command>"}?).
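(For what it's worth, one slightly shorter way to answer that parenthetical question is the -Definition parameter; Get-ChildItem is just an example cmdlet here:

# List all aliases defined for a given cmdlet
Get-Alias -Definition Get-ChildItem    # e.g. dir, gci, ls on Windows PowerShell

It still isn't as terse as the Unix equivalents, which rather proves the point being made.)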
An example of the rich way in which history can be manipulated:
iptables commands are often long-winded and repeating them with slight differences would be a pain if it weren't for just one of many neat features of history manipulation built into Bash, so inserting an iptables rule like the following:
iptables -I camera-1-internet -s 192.168.0.50 -m state --state NEW -j ACCEPT
a second time for another camera ("camera-2"), is just a case of issuing:
!!:s/-1-/-2-/:s/50/51
which means "perform the previous command, but substitute -1- with -2- and 50 with 51".
The Unix experience is optimised for touch-typists; one can pretty much do everything without leaving the "home" position. For example, in Bash, using the Emacs key bindings (yes, Bash also supports vi bindings), cycling through the history is done using Ctrl-P and Ctrl-N whilst moving to the start and end of a line is done using Ctrl-A and Ctrl-E respectively ... and it definitely doesn't end there. Try even the simplest of navigation in the PowerShell console without moving from the home position and you're in trouble.
Simple things like versatile paging (a la less) on Unix don't seem to be available out-of-the-box in PowerShell which is a little frustrating, and a rich editor experience doesn't exist either. Of course, one can always download third-party tools that will fill those gaps, but it sure would be nice if these things were just "there" like they are on pretty much any flavour of Unix.
The Windows culture, at least in terms of system APIs, is largely driven by the supporting frameworks, viz. COM and .NET, both of which are highly structured and object-based. On the other hand, access to Unix APIs has traditionally been through a file interface (/dev and /proc) or (non-object-oriented) C-style library calls. It's no surprise, then, that the scripting experiences match their respective OS paradigms: PowerShell is by nature structured (everything is an object) and Bash-and-friends file-based. The structured API at the disposal of a PowerShell programmer is vast (essentially matching the vastness of the existing set of standard COM and .NET interfaces).
In short, although the scripting capabilities of PowerShell are arguably more powerful than Bash (especially when you consider the availability of the .NET BCL), the interactive experience is significantly weaker, particularly if you're coming at it from an entirely keyboard-driven, console-based perspective (as many Unix-heads are).
I am not a very experienced PowerShell user by any means, but the little bit of it that I was exposed to impressed me a great deal. You can chain the built-in cmdlets together to do just about anything that you could do at a Unix prompt, and there's some additional goodness for doing things like exporting to CSV, HTML tables, and for more in-depth system administration types of jobs.
And if you really needed something like sed, there's always UnixUtils or GnuWin32, which you could integrate with PowerShell fairly easily.
As a longtime Unix user, I did however have a bit of trouble getting used to the command naming scheme, and I certainly would have benefitted more from it if I knew more .NET.
So essentially, I say it's well worth learning it if the Windows-only-ness of it doesn't pose a problem.
If you like shell scripting you will love PowerShell!
Start at A guided tour of the Microsoft Command Shell (Ars Technica).
As my recent experiments led me into depths of PowerShell and .NET calls, I must say that PowerShell can replace Cygwin and Unix shell.
I'm not sure about Perl, but since both PowerShell and Perl are Turing complete as programming languages, I give this as a yes to replacing Perl too.
One thing that PowerShell has above Cygwin and ordinary Bash under *nix, is its ability to perform sandboxed DLL calls, manipulating the operating system via direct API calls, WMI methods and even COM objects. How about launching Internet Explorer via code, then doing whatever you want with its displayed document, effectively emulating a back-end for a Web server?
How about gathering data from SQL servers and other data providers, parsing it, and exporting it as CSV, mail messages, text, or practically any existing or not-yet-existing file format? (With the proper skills for creating a valid file out of the data received, of course, but CSV is readily available.)
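As a small sketch of that kind of thing (the output path and URL are just placeholders; the WMI class and COM ProgID are standard ones):

# Query the OS via WMI and export the result as CSV
Get-WmiObject -Class Win32_Service |
    Select-Object Name, State, StartMode |
    Export-Csv -Path .\services.csv -NoTypeInformation
# Drive Internet Explorer through its COM object
$ie = New-Object -ComObject InternetExplorer.Application
$ie.Visible = $true
$ie.Navigate('http://example.com')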
And there is extra security available via signed cmdlets and scripts, group policies, and execution policies that help prevent malicious code from running on your system, even if you run it as administrator.
As for which commands are implemented: the answer by Richard lists them, along with PowerShell's ability to emulate their functionality.
As for whether PowerShell is strong enough to warrant switching over: this is more a matter of personal preference, although as more and more Windows services provide PowerShell cmdlets to control them, not using PowerShell where these services are present becomes a hindrance. (Hyper-V Server is the primary such service, and it also lets you do more with PowerShell cmdlets than with the GUI!)
Probably this answer is five years late, but still, if someone performs administrative tasks or general scripting of various stuff on Windows, they should definitely try harnessing PowerShell for their purposes.
When you compare PowerShell to the combination Cygwin/Perl/Shell, be aware that PowerShell only represents the "Shell" part of that combination.
You can however invoke any command from PowerShell just as you do from cmd.exe or Cygwin. It does not re-implement the specified functions, and it is certainly not comparable to Perl.
It's "just" a shell, but it makes programming easier providing a comfortable interface to the .NET universe.
Also keep in mind that PowerShell requires Windows XP, Windows Server 2003 or higher, which may pose a problem depending on your IT infrastructure.
Update:
I had no idea what kind of philosophical debate my answer would spark.
I posted my answer in the context of the question: Compare PowerShell to Cygwin and Perl and Bash.
PowerShell is a shell, as it makes no syntactic difference between built-in commands, commandlets, user functions, and external commands (.exe, .bat, .cmd). Only invoking .NET methods differ by adding a namespace or an object in the call.
Its programmability derives from the .NET framework, not from anything specific to the PowerShell "language".
I'd say I believe PowerShell is a "scripting language" as soon as Bugzilla or MediaWiki are implemented as PowerShell scripts running on a web server ;)
Until then, enjoy the comparisons.
TL;DR -- I don't hate Windows or PowerShell. I just can't do anything in Windows or on PowerShell.
I personally still find PowerShell underwhelming at best.
Tab completion of directory paths does not compound, requiring the user to enter a path separator after every name completion.
I still feel like Windows doesn't even have the concept of a path or of what a path is, with no accessible user home indicator ~/ short of some #environment://somejibberish/%user_home%
NTFS is still a mess and seemingly always will be. Good luck navigating.
A cmd-esque interface: the dinosaur cmd.exe is still visible in PowerShell. Edit → Mark is still the only way to copy information, and copying only works in rectangular blocks of visible terminal space; Edit → Paste is still the only way to paste strings into the terminal.
Painting it blue doesn't make it any more attractive. I don't mind Microsoft developers having a taste in color though.
Windows always open at the top-left corner of the screen. For somebody who uses vertical task bars this is incredibly annoying, especially considering that the Windows task bar will cover the only corner of the window that gives access to copy/paste functionality.
I can't speak much about the tools Windows includes. Given that there is a whole set of open-source, freely licensed CLI tools, the fact that PowerShell ships with, to my knowledge, none of them is an utter disappointment.
PowerShell's wget takes arguments that are seemingly incompatible with GNU wget's. Thanks, glimmer of hope; portably useless.
PowerShell is not POSIX- or Bash-compatible; in particular, the && operator is not handled, making the simplest conditional command chaining not a thing.
I don't know, man; I gave it a shot, I really did; I still try to give it a shot in the hope that the next time I open it, it will be any less useless. I cannot do anything in PowerShell, and I can barely do things with the real projects that bring GNU tools to Windows.
msysGit gives me the dinosaur cmd.exe prompt with a couple of GNU tools, and it is still very underwhelming, but at least path completion works. And the Git command will run in Git Bash.
Mintty for msysGit gives the Cygwin interface over msysGit's environment, making copy and paste a thing (select to copy (mouse), Shift+Ins to paste, how modern...). However, things like git push are broken in Mintty.
I don't mean to rant, but I still see huge problems with command-line usability on Windows even given tools like Cygwin.
P.S.: Just because something can be done in PowerShell, doesn't make it usable. Usability is deeper than ability and is what I tend to focus on when trying to use a product as a consumer.
The cmdlets in PowerShell are very nice and work reliably. Their object-orientedness appeals to me a lot since I'm a Java/C# developer, but it's not at all a complete set. Since it's object oriented, it's missed out on a lot of the text stream maturity of the POSIX tool set (awk and sed to name a few).
The best answer I've found to the dilemma of loving OO techniques and loving the maturity in the POSIX tools is to use both! One great aspect of PowerShell is that it does an excellent job piping objects to standard streams. PowerShell by default uses an object pipeline to transport its objects around. These aren't the standard streams (standard out, standard error, and standard in). When PowerShell needs to pass output to a standard process that doesn't have an object pipeline, it first converts the objects to a text stream. Since it does this so well, PowerShell makes an excellent place to host POSIX tools!
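A small sketch of what that hosting looks like in practice, assuming the GnuWin32 binaries (grep.exe here) are on your PATH and the output is in English:

# PowerShell renders the objects as text before handing them to grep.exe
Get-Process | grep.exe -i svchost
# Text coming back from a native tool can be picked apart again on the PowerShell side
ipconfig | grep.exe IPv4 | ForEach-Object { ($_ -split ':')[1].Trim() }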
The best POSIX tool set is GnuWin32. It does take more than 5 seconds to install, but it's worth the trouble, and as far as I can tell, it doesn't modify your system (registry, c:\windows\* folders, etc.) except copying files to the directories you specify. This is extra nice because if you put the tools in a shared directory, many people can access them concurrently.
GnuWin32 Installation Instructions
Download and execute the exe (it's from the SourceForge site) pointing it to a suitable directory (I'll be using C:\bin). It will create a GetGnuWin32 directory there in which you will run download.bat, then install.bat (without parameters), after which, there will be a C:\bin\GetGnuWin32\gnuwin32\bin directory that is the most useful folder that has ever existed on a Windows machine. Add that directory to your path, and you're ready to go.
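For example, to put that bin directory on the PATH (the C:\bin location is the one used above; adjust if you installed elsewhere):

# Current session only
$env:Path += ';C:\bin\GetGnuWin32\gnuwin32\bin'
# One way to persist it for the current user
[Environment]::SetEnvironmentVariable(
    'Path',
    [Environment]::GetEnvironmentVariable('Path', 'User') + ';C:\bin\GetGnuWin32\gnuwin32\bin',
    'User')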
I haven't seen that PowerShell has really taken off, at least not yet. So it might not be worth the effort of learning it unless those others on your team already know it.
For your predicament you might be better off with a scripting language that others could get behind: Perl, as you mentioned, or others like Ruby or Python.
I think a lot of it depends on what you need to do. Personally I've been using Python for my own personal scripts, but I know when I start writing something that I'll never be able to pass it on - so I try not to do anything too revolutionary.
Why not use both? Call PowerShell scripts in Cygwin just like any other interpreted scripts like Perl, etc.
I do this often enough that I wrote a Bash wrapper, https://bitbucket.org/jbianchi/powershell, to call powershell.exe in Cygwin. It can be used as a shebang in the first line of a powershell.exe .ps1 script (since PowerShell also uses "#" as a comment). See https://bitbucket.org/jbianchi/powershell/wiki/Home for examples.
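A sketch of the idea; the shebang below assumes the wrapper is installed as "powershell" somewhere on the Cygwin PATH, which is a hypothetical name for illustration:

#!/usr/bin/env powershell
# hello.ps1 - the shebang is just a comment to PowerShell itself,
# so this file is still a normal .ps1 script when run from Windows
Write-Output "Hello from PowerShell, invoked via a Cygwin shebang"

From a Cygwin prompt you would then chmod +x the file and run it like any other script.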
In a couple of lines: Cygwin and PowerShell are different tools; however, if you have Cygwin installed, you can run the Cygwin executables within a PowerShell session. I've gotten so used to PowerShell that I no longer use grep, sort, awk, etc. There are pretty much built-in alternatives in PowerShell, and if not, you can find a cmdlet out there.
The main tool I find myself using is ssh.exe, but within a PowerShell session.
It works great.
I found PowerShell programming to be not worth the effort.
I have several years of experience with shell scripting under Unix, but I found it enormously difficult to do much of anything with PowerShell.
It seems like many functions require you to interrogate Windows Management Instrumentation (WMI) and issue SQL-like commands to get the information you need.
For example, I wanted to write a script to remove all files with a specific suffix from a directory tree. Under Unix, this would be a simple ...
find . -name \*.xyz -exec rm {} \;
After a couple of hours dicking around with Scripting.FileSystemObject and WScript.Shell and issuing "SELECT * FROM Win32_ShortcutFile WHERE Drive = '" & drive & "' AND Path = '" & searchFolder & "'", I finally gave up, settled for Windows Explorer's Search command, and just did it manually. There's probably some way to do what I wanted, but I didn't see anything obvious, and all the examples on the MSDN site were so trivial as to be worthless.
EDIT Heh, of course as soon as I wrote this I poked around some more and found what I had been missing: the -recurse option to the remove-item command is faulty (revealed if you use get-help remove-item -detailed).
I had been trying "remove-item -filter '*.xyz' -recurse" and it wasn't working, so I gave up on it.
Turns out you need to use get-childitem -filter '*.xyz' -recurse | remove-item
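Put together, a sketch of the working equivalent, with -WhatIf added as a safety net while trying it out:

# Unix: find . -name '*.xyz' -exec rm {} \;
Get-ChildItem -Path . -Filter *.xyz -Recurse | Remove-Item -WhatIf
# Drop -WhatIf once the preview of what would be removed looks right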
You can also try running Bash scripts on Windows using BashWin at
https://github.com/skanga/BashWin.
PowerShell is very powerful; more powerful than the standard built-ins of the Unix shells (but only because it includes much of the functionality usually shelled out to subprograms). Also, consider that you can write cmdlets in any .NET language, including IronPython, IronRuby, PerlNet, etc. Or you can simply call your Cygwin commands from PowerShell, ignoring all the extra functionality, and it will work similarly to Bash, KornShell, or whatever...
As recently as several years ago, the developers actually made the builds that went to clients. This was obviously a disaster for reasons too numerous to list.
Then when we started to learn the errors of our ways, we looked for a way to auto-build the entire application on a dedicated build machine. The culture at that time was very averse to bringing in outside tools, so we built our own autobuild system by writing a VB app.
This worked fine for a while, until the project's structure started to change, new projects were added, and we needed to build the application in different ways. Then the weaknesses of our hand-rolled autobuilder became apparent and, over time, increasingly onerous. This disease has progressed to the point where QA (who own our build process) can't even maintain the autobuilder, because it requires more and more programming skill. Every time we add a project or change something in an existing project, it consumes more developer time just to make it work. There have been days when we were unable to produce a build because the system was broken.
I'm now in a position where I can change this process, and I'm looking to scrap the entire system and put something else in its place. My goals are:
Have an autobuild system that can run with zero human interaction at a specific time every day. It should be able to gather all the source code, compile all the apps, create the setups, put the finished products on a network share, and possibly trigger the automated testing system to kick in (we use QTP).
The autobuild system should be flexible enough to easily adapt to changes in the project without requiring a major overhaul.
It should be simple enough so that QA can own the system and not require developer resources to make changes to how builds are made.
What are your experiences? Can you recommend an autobuild system? Should I have different goals?
I'm currently using CruiseControl integrated with Ant to control project builds. This allows flexibility of build schedules and means you can automate the entire build process fairly easily using Ant scripts. Also, during defect fixing periods you can have CruiseControl set up to watch for source control submissions instead of time periods and build when these occur. This allows developers very quick feedback on defect fixes.
I use FinalBuilder and FinalBuilder Server for nightly builds. It's a bit buggy at times, but if you think it through, it's quite easy to create extensible projects that can build a given project type, build its database from change scripts, and deploy it to a testing server.
It can also handle all kinds of weird and wonderful things like zipping a nightly build and uploading it to an FTP server, or creating ISO images automatically.
Definitely look into MSBuild if you're on the Microsoft stack.
Joel is always going on and on about how great FinalBuilder is, so that might be worth a look as well.
We just migrated from a hand-rolled set of Perl scripts to a Buildbot setup. I found it because that's what Google's using for Chrome.
You can do nightlies, or it can integrate with source control to do an isolated test build whenever anybody does a checkin, or a variety of other things. It's also parallel; you can have more than one machine in the build farm, either for specialized duties or just to handle more load.
The entire system is written in Python, so it's platform-agnostic, which is important if you need to do builds on more than one platform. It can do anything you can do from the command line; we have it calling MSBuild for user-mode components, a DDK build for kernel-mode pieces, and running products for unit test builds.
Out of the box it supports most OSS source control tools, but if you're using TFS or something else you may need to modify the package that you install on the slave machines.
I think you are on the right track here.
Whoever looks after your automated build process needs to have a fundamental understanding of how your solution fits together. This doesn't necessarily mean knowing how to write code or architect solutions, but they will require a solid understanding of how the solution compiles, packages itself etc.
You might need to share responsibility for builds between people or teams to accomplish this. I'd say that a daily build is a "team responsibility".
I'd look at establishing a baseline build configuration which can be extended for "special use" builds (besides just building a release version), e.g. internationalized releases, fxCop/Quality Tools config, build + run Unit Tests, continuous integration builds, a build config to run on developer workstations, etc.
Instead, I'd aim to achieve the following:
Automatic versioning, signing etc
Ability to produce verbose output (logging) to help debug build breaks
On that point - it should handle errors properly, capture as much information and log it properly
Consistency - It should work the same way each time to produce repeatable outcomes
Run in a clean, limited access environment
Well commented/documented so that it can be understood by new staff, etc.
Option to generate release notes, compile metrics, produce reports (if this option is available)
Ability to deploy to multiple environments
Support different ways to obtain source code from source control, e.g. by changeset, label, date, etc
As for tool recommendations, I've used FinalBuilder, Visual Build Pro, MSBuild/Team Build, NAnt, CruiseControl and CI Factory, plus good old-fashioned batch files.
Each has its pros and cons. I'm not going to make a recommendation, except to say that the products with decent UI support were a little easier to work with, but at times were far less powerful. If you're working with Visual Studio, MSBuild is very powerful, but has a somewhat steep learning curve.
As for tools delivered with MS Visual Studio, you might want to use MSBuild. Additional community task libraries for MSBuild will even let you check out code from Subversion and zip the output.
We're using it successfully in our company. Projects consist of several solutions with 100+ subprojects. Works like a charm.
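As a very rough sketch of the kind of nightly driver script this ends up being (the solution path, drop share and output patterns are made up for illustration, and msbuild.exe is assumed to be on the PATH):

# Nightly build driver - all paths are placeholders
$solution  = 'C:\build\src\MyProduct.sln'
$dropShare = '\\buildserver\drops\MyProduct\nightly'
# Compile everything in Release mode; fail loudly on errors
& msbuild.exe $solution /t:Rebuild /p:Configuration=Release /m
if ($LASTEXITCODE -ne 0) { throw "Build failed with exit code $LASTEXITCODE" }
# Publish the build outputs to the network share
New-Item -ItemType Directory -Path $dropShare -Force | Out-Null
Get-ChildItem -Path 'C:\build\src' -Recurse -Include '*.exe','*.msi' |
    Copy-Item -Destination $dropShare -Force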
Visual Build Pro is nice, if your build machines are Windows. I think this would fill the requirement you have about QA owning the system. But don't get me wrong, it's pretty powerful.
We use CruiseControl.NET and UppercuT (which uses NAnt) to do this. UppercuT uses conventions for building so it makes it really easy for someone to get started by answering three questions (What is the solution named? What is the path to source control? What is your company's name?) and you are building.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
We use Hudson for building a big Java web app from Ant build scripts. Hudson is pretty sweet for our purposes. It has a master/slave setup so builds can be done concurrently (on a timer or on demand). Slave nodes can be any OS/hardware combo, provided the needed build tools are already on them and they are on the network (and won't crash every 10 minutes).
Full web-based interface including live console output, change logs, artifacts from the build are available across the network including previous builds (if successful). Awesomesauce!