I've got a custom FxCop rule and I need to be able to create an integration test for it. However, I'm having a hard time finding decent examples of this, since MS changed the API. Osherove's FxCopUnit looks like the perfect framework, but it also relies on the old FxCop API. Any thoughts?
True unit testing of FxCop rules isn't necessarily worth the investment required to build a proper unit testing framework for rules. Too much depends on the data and logic provided by the FxCop engine itself for the dependency to be mocked without introducing potentially serious deviations from the behaviour of the actual engine. Most folks who test rules run mainly integration tests (including FxCopUnit, despite its name).
If you feel that an integration testing framework for FxCop rules would be useful, there is one included in the Bordecal FxCop rules framework. Documentation for its use is in the "The rule testing framework" section at http://bordecalfxcop.codeplex.com/documentation.
I've also rolled a custom FxCop testing framework based on the Roslyn CTP. You can find it in the FxCopContrib project at http://FxCopContrib.codeplex.com/ - just grab the latest version of the sources.
I recently learned how to use Robot Framework - a testing framework for software / web app testing. It has a very simple yet expressive syntax.
I wonder whether Robot Framework can be used for anything other than testing - I could imagine creating some kind of scraping or monitoring bot. But so far it looks to me like it was created strictly for testing (you basically write all your logic in test cases).
So my question:
Can robot framework be used outside testing? If yes, can you provide any resources / examples?
Feel free to share any personal experience with this.
Yes, Robot Framework can be used outside of testing. Version 3.1 added preliminary support for RPA (robotic process automation): instead of tests, you can create tasks.
See Creating tasks in the Robot Framework User Guide for a bit more information.
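Assuming Robot Framework 3.1 or newer is installed, a minimal task file looks just like a test suite except that it uses a `*** Tasks ***` section instead of `*** Test Cases ***` (the task name and file pattern below are made up for illustration):

```robotframework
*** Settings ***
Documentation     A task suite, not a test suite: run it with `robot tasks.robot`.
Library           OperatingSystem

*** Tasks ***
Collect Report Files
    ${files}=    List Files In Directory    ${CURDIR}    *.csv
    Log Many     @{files}
```

Note that mixing tests and tasks in the same file is an error; otherwise tasks behave like tests, so the standard libraries and the rest of the ecosystem remain available.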
Hi, I am looking for predefined (common) step definitions for Meteor-cucumber/Chimp.
I have used PHP's Behat (a BDD Cucumber framework). There is this extension and this class, which give you common step definitions out of the box, so you don't need to write those step definitions yourself.
Below is the list of step definitions you get from Behat.
Short Answer
This sort of step-def library doesn't exist and we (the authors of Chimp) won't be adding them because we have seen they are very harmful in the long run.
It looks like you want to write test scripts, in which case you would be better off using Chimp with Mocha + custom WebdriverIO commands, and not Cucumber.
Long Answer
Features files with plain language scenarios and steps are intended to discover and express the domain of your application. The natural freeform text encourages you to use language that you can use with the entire team - otherwise known as the ubiquitous domain language.
You are about to make one of the most common mistakes with Cucumber, which is to use it as a UI testing tool. Using UI-based steps breaks the ubiquitous-language principle.
Step reuse should be centered on the business domain so that you create a ubiquitous domain language. If you use UI steps instead of specs, you end up creating technical debt without knowing it. Gherkin syntax is not easy to refactor, and if you change your step implementations, you need to update them in multiple places. For domain concerns this is usually not a big issue, but for UI tests you will likely reuse steps heavily.
It sounds like you are interested in good code reuse. If you think about it, WebdriverIO already has a great API, and most of the steps you want would just be wrappers around it.
Rather than creating this extraneous translation layer, you should just use Mocha to write the tests and access WebdriverIO's API directly. This way, you have the full JavaScript language to apply sound software engineering practices, instead of the simplistic Gherkin parser.
WebdriverIO also has a great custom commands feature that allows you to create all of the methods you mentioned above. An extension file that adds a ton of these commands would be VERY useful.
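To illustrate the reuse being described: WebdriverIO's real API registers helpers with `browser.addCommand(name, fn)`. The sketch below imitates that pattern with a plain stub object so it runs without a browser; the command name, selectors, and values are made up:

```javascript
// Minimal stand-in for a WebdriverIO-style browser object. In real
// WebdriverIO you would call browser.addCommand() the same way; the
// _store / setValue / getValue parts are only a fake page for the demo.
const browser = {
  _store: {},
  addCommand(name, fn) { this[name] = fn.bind(this); },
  setValue(selector, value) { this._store[selector] = value; },
  getValue(selector) { return this._store[selector]; },
};

// A reusable helper, registered once and shared by every test --
// the role the "common step definitions" would otherwise play.
browser.addCommand('fillLoginForm', function (user, pass) {
  this.setValue('#username', user);
  this.setValue('#password', pass);
});

// Usage inside a test body:
browser.fillLoginForm('alice', 's3cret');
console.log(browser.getValue('#username')); // -> 'alice'
```

Because the helper is plain JavaScript rather than Gherkin, renaming or refactoring it is an ordinary editor operation instead of a multi-file step-definition hunt.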
We have written a repository with best practices and some do's and don'ts lessons. In particular, you should see:
Lesson #1: Test Scripts !== Executable Specifications
Lesson #2: Say No To Natural Language Test Scripts
You might also want to read:
Aslak's view of BDD
BDD Tool Cucumber is Not a Testing Tool
To test my UI I will use Mocha. I don't need Cucumber specs.
As a task runner I will use Chimp (Chimp uses webdriver.io).
Here is a quick Mocha + Chimp how-to.
I have a requirement to write a .NET application that allows business customers to define their own rules. I have been looking into the BRE (Business Rule Engine) by Microsoft that comes with BizTalk Server. What I've understood so far is that BRE provides a flexible rule composer where you drag and drop properties from your .NET entities and assert them against some condition (predicate). However, this is a pretty basic and straightforward idea which, in my mind, could also be achieved by defining my own domain-specific language for writing easy-to-understand business rules. All I would have to do is create a grammar using ANTLR or Coco/R and an interface where you can write and compile rules, and I'd be good to go.
Can someone shed some light on how BRE is offering more and why one should prefer it over custom made solution?
The answer really depends on factors such as:
The enterprise need: BizTalk is an integration platform, and if your enterprise is already using BizTalk, there should be no question of writing your own rule engine vs. BRE, since BRE provides most of the functionality you'd need from a rule engine, with great performance, including the ability to cache long-term facts at run time if required. The Business Rule Composer makes building rules easy and can also be installed for business users to compose rules themselves; in more complex scenarios it can be customized as well. You can use these rules within BizTalk from orchestrations or from .NET class libraries. So with BizTalk you get a great rule engine with lots of flexibility.
If BizTalk is not your integration platform and you are considering it only for BRE, then you need to think twice: BRE itself requires a BizTalk license, and using the BizTalk product only for BRE may not be cost-effective. That is something you need to think about.
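For context, invoking a deployed BRE policy from a .NET class library is a short call via the `Policy` class in Microsoft.RuleEngine.dll. The policy name and `Order` fact type below are made up; this is only a sketch of the calling pattern, not a complete sample:

```csharp
// Sketch: executing a BRE policy from .NET code outside an orchestration.
// Requires a BizTalk/BRE installation and a reference to Microsoft.RuleEngine.dll.
using Microsoft.RuleEngine;

public class OrderProcessor
{
    public void ApplyRules(Order order)
    {
        // Resolves the latest deployed version of the named policy.
        using (Policy policy = new Policy("OrderDiscountPolicy"))
        {
            // The engine matches the fact against the policy's conditions
            // and runs the actions; the fact instance is updated in place.
            policy.Execute(order);
        }
    }
}
```

The point of the comparison above: with a custom DSL you would have to build this execution, versioning, and deployment story yourself; with BRE it comes with the platform.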
I want to test a pipeline which includes a custom component using properties in the message property bag, which at runtime the File Adapter creates. How do I inject those properties in a unit test ?
I think that starting with BizTalk Server 2009, there is already built-in support in Visual Studio for unit testing BizTalk Server pipelines.
See this link http://msdn.microsoft.com/en-us/library/dd792682%28v=bts.10%29.aspx
Also, here is a very good blog post in which the author summarizes several ways of testing BizTalk Server artifacts: http://santoshbenjamin.wordpress.com/2009/02/05/biztalk-testing-and-mocks/
Personally, as mentioned by hugh above, I'm also using Tomas Restrepo's pipeline testing framework combined with the Moq mocking framework; it has given me a more stable and more fluent way of testing BizTalk applications.
Another advantage of http://winterdom.com/2007/08/pipelinetesting11released is that if you're using BizUnit for your unit or integration testing, it is already supported.
Hope this helps
Sounds like you need to use Tomas Restrepo's pipeline testing framework:
http://winterdom.com/2007/08/pipelinetesting11released
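With that framework, injecting the context properties the File adapter would normally supply comes down to writing or promoting them on the input message before executing the pipeline. A sketch of the pattern (the pipeline type, file name, and property choice are made up; the `ReceivedFileName` property lives in the File adapter's `file-properties` namespace):

```csharp
// Sketch using Tomas Restrepo's PipelineTesting library to simulate
// the properties the File adapter promotes at run time.
using Winterdom.BizTalk.PipelineTesting;

class MyPipelineTest
{
    const string FilePropsNs =
        "http://schemas.microsoft.com/BizTalk/2003/file-properties";

    public void ExecuteWithFileAdapterProperties()
    {
        var pipeline = PipelineFactory.CreateReceivePipeline(typeof(MyReceivePipeline));

        var input = MessageHelper.CreateFromString("<root/>");
        // Simulate what the File adapter does before the pipeline runs:
        input.Context.Promote("ReceivedFileName", FilePropsNs, @"C:\drop\order-001.xml");

        var output = pipeline.Execute(input);
        // output[0].Context now reflects whatever your custom component
        // read from or wrote to the property bag.
    }
}
```

Use `Context.Write` instead of `Promote` if your component only reads the property and you don't need it promoted for routing assertions.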
We are looking to build a cube in Microsoft SQL Server Analysis Services, but we would like to be able to use some of the automated testing infrastructure we already have, such as CruiseControl for automated builds, deployments, and tests.
I am looking for anyone who can give me pointers on building tests against Analysis Services, along with any experience of adding these to a build pipeline.
Also, if automation is not possible, some manual test methods would be welcome.
Recently I came upon the BI.Quality project on CodePlex, and from what I can tell it's very easy to learn and to integrate into an existing deployment process.
There is another framework named NBi. It gives you additional features compared to BI.Quality, such as checking for the existence of a measure, dimension, or attribute, the ordering of members, and the count of members. Also, when comparing two result sets, it's often easier to spot the differences between them with NBi. Editing the test suites is also done in a single XML file validated by an XSD (a better user experience).
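To give a feel for that single-XML-file style, here is a sketch of what an NBi structural test looks like. The exact element names and attributes should be checked against the NBi documentation and its XSD; the cube, perspective, dimension, and connection string below are invented:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch of an NBi test suite checking that a dimension exists. -->
<testSuite name="SSAS structure checks"
           xmlns="http://NBi.Timis.com/TestSuite">
  <test name="The Customer dimension exists in the Sales perspective">
    <system-under-test>
      <structure>
        <dimension caption="Customer" perspective="Sales"
                   connectionString="Provider=MSOLAP;Data Source=MyServer;Initial Catalog=MyCube"/>
      </structure>
    </system-under-test>
    <assert>
      <exists/>
    </assert>
  </test>
</testSuite>
```

Because the suite is one XML file, it runs from NUnit-compatible runners and so slots into a CruiseControl build with no extra plumbing.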