How to start with unit testing? - ASP.NET

First, I don't want to do TDD. I'd like to write some code and then just write unit tests for the important stuff.
I know the tools. I know how to write a unit test.
I wrote some code (a couple of classes) and I don't know where to start. I'm missing the semantics.
How do I pick what to unit test?
Should I unit test every class extra?
Should I try to simulate every possible variation of a method's parameters?

The idea with unit testing is to test pieces of code. We use TDD at my company, which works great. We write tests for every possible option of a function. So, to answer your three questions:
How do I pick what to unit test?
It's useless to write unit tests for code or functions that contain no intelligence. Strictly following TDD, you would test those too, but if it's obvious what the function returns and you're sure nothing can go wrong, you probably don't have to write a test for it. It's still better to do so, though.
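For illustration (a hypothetical Invoice class), compare a plain property, which has no intelligence, with a method that has branching logic worth testing:

    public class Invoice
    {
        // No intelligence: a test here would only re-test the compiler.
        public decimal Subtotal { get; set; }

        // Branching logic: this is the kind of function worth a unit test.
        public decimal TotalWithShipping()
        {
            return Subtotal >= 100m ? Subtotal : Subtotal + 5.95m;
        }
    }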
Should I unit test every class extra?
What exactly do you mean by this? If the question is whether you need to test classes, the answer is no. Unit testing is for testing the smallest pieces of code, mostly functions and constructors. What you want to know is whether your function gives the result you want: it should return the desired result, or throw a nicely handled exception, no matter what parameter value you send to it.
Should I try to simulate every possible variation of a method's parameters?
You should. That's the whole idea of writing a unit test. You want to test a piece of code and rule out every single thing that can go wrong. Parameters are the most important here. What happens if a string parameter contains HTML, for example? And what if a required parameter is NULL? Or an empty string? Every option should be tested to rule out the things that can go wrong.
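As a sketch of what that can look like in MSTest (NameFormatter here is a hypothetical class invented for this example), with one test per parameter variation:

    using System;
    using System.Web;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public static class NameFormatter
    {
        // Hypothetical method under test: trims and HTML-encodes a name.
        public static string Format(string name)
        {
            if (name == null) throw new ArgumentNullException("name");
            return HttpUtility.HtmlEncode(name.Trim());
        }
    }

    [TestClass]
    public class NameFormatterTests
    {
        [TestMethod]
        public void Format_TrimsWhitespace()
        {
            Assert.AreEqual("John Doe", NameFormatter.Format("  John Doe  "));
        }

        [TestMethod]
        public void Format_EncodesHtmlInput()
        {
            Assert.AreEqual("&lt;b&gt;John&lt;/b&gt;", NameFormatter.Format("<b>John</b>"));
        }

        [TestMethod]
        [ExpectedException(typeof(ArgumentNullException))]
        public void Format_ThrowsOnNull()
        {
            NameFormatter.Format(null);
        }

        [TestMethod]
        public void Format_ReturnsEmptyForEmptyString()
        {
            Assert.AreEqual(string.Empty, NameFormatter.Format(""));
        }
    }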
If you're using the .NET framework, it is very interesting to look at the Moq framework. Put shortly, it is a framework that allows you to create a fake object of some type, which you can validate your test against to check that the result is as expected for different parameter and return values. You can read about it in Scott Hanselman's blog post.
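A minimal sketch of what that looks like with Moq (IQuoteService is an invented interface used only for illustration):

    using Moq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public interface IQuoteService
    {
        decimal GetQuote(string symbol);
    }

    [TestClass]
    public class QuoteServiceTests
    {
        [TestMethod]
        public void FakeService_ReturnsCannedQuote()
        {
            // Create a fake implementation and script its behavior.
            var mock = new Mock<IQuoteService>();
            mock.Setup(s => s.GetQuote("MSFT")).Returns(25.50m);

            // Code under test would normally receive mock.Object via its constructor.
            decimal quote = mock.Object.GetQuote("MSFT");

            Assert.AreEqual(25.50m, quote);
            // Verify the expected call actually happened, exactly once.
            mock.Verify(s => s.GetQuote("MSFT"), Times.Once());
        }
    }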

Should I try to simulate every possible variation of a method's parameters?
You should have a look at Pex, which can generate input values for your parameterized tests that provide the greatest code coverage.
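Roughly, a Pex parameterized unit test is an ordinary test method that takes arguments, and Pex searches for argument values that exercise every branch. A sketch, assuming the Pex attributes as documented (the division logic here is invented):

    using Microsoft.Pex.Framework;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [PexClass]
    [TestClass]
    public partial class ArithmeticTests
    {
        // Pex generates (numerator, denominator) pairs aiming for maximum
        // branch coverage, and reports inputs such as denominator == 0
        // that make the code under test throw.
        [PexMethod]
        public void DivideRoundTrips(int numerator, int denominator)
        {
            int quotient = numerator / denominator;
            Assert.AreEqual(numerator, quotient * denominator + numerator % denominator);
        }
    }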

Related

Karate - how to stop following test(s) when current test fails

I am using Karate and I have written a test scenario with multiple runs (using Scenario Outline).
Suppose that I have defined variables in a table (the Examples section). It contains five rows -> the scenario will be run 5 times.
Suppose that the first two scenarios (runs) passed and the third failed.
Is there some way to set the behavior so that after a test scenario fails, all following test scenarios (runs) are skipped (not processed)?
I am not sure if something like that is supported.
Thank you!
No. This request is a surprise, because everybody seems to want the opposite behavior: https://stackoverflow.com/a/54108755/143475
You can write a custom loop and handle this the way you want, but I wouldn't recommend it.
If you are OK with doing some Java coding, this behavior can be implemented as an ExecutionHook: https://stackoverflow.com/a/59080128/143475

How to complete an existing phpunit test with a generator

PHPUnit has a skeleton generator that works from an existing class.
But it only works once.
If new methods are added later (because a dev doesn't work with TDD), the test file is incomplete.
Is there a tool to generate a skeleton for uncovered methods?
I don't know of any, and I also don't see the need. That skeleton generator creates one test method per function it finds, but you cannot test all use cases of a slightly advanced function within only one test function.
Also, the name of the test function is generated, but better names can and should be created to describe the intended test case or behavior of the tested function. Like "testGetQuoteFromStockMarket" vs. "testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject" and "testGettingUmbrellaCorporationFromStockMarketShouldFailWithException".
Note that you cannot test the throwing of exceptions in the same test method as cases that do not throw exceptions.
So all in all there simply is no use case for creating "one test method per method" at all. If you add new methods, it is your task to manually add the appropriate number of new tests for them; the generated code coverage statistics will tell you how well you did, or which functions remain untested.
AFAIK there is no built-in PHPUnit functionality to update the auto-generated test code; that is typical of most code generators.
The good news is that each of the functions is added quite cleanly and independently. So what I would do is rename your existing unit test file to *.old, regenerate a fresh test file, then use meld (or the visual diff tool of your choice) to merge the new functions in.
Aside: automatic test generation is really only needed at the start of a new class anyway. The idea of exactly one unit test per function is more about generating nice coverage stats to please your boss. From the point of view of building good software, some functions will need multiple tests, some functions (getters and setters come to mind) do not really need any, and sometimes multiple functions are best covered by a single unit test (getters and setters again come to mind).

Any other alternative similar to @covers to mark that a particular method of a class is covered in a unit test

Please note that I am not instantiating my code in my unit test; rather, I am using curl to test a web service operation and then asserting the actual result against an expected value. I have no issues with my testing. I just want a way to show that the class is covered in the 'CodeCoverage' report of PHPUnderControl. I tried @covers; it just puts the class in the Code Coverage list of classes but reports its coverage as 0, pulling down my overall coverage. I wonder if there is a way to indicate explicitly that a unit test covers a few methods in a class.
Marking a test with @covers tells PHPUnit to use Xdebug to track the code coverage in that area during that test. It does not state that the code is covered. I assume that the code under test is running in a separate PHP process, invoked via Apache or some other HTTP server.
If you can find a way to bypass curl from within your test and call the code directly, the coverage would get tracked. That would leave only the request layer uncovered, which gets you closer. Otherwise you need to find a way to simulate the same calls that come in to your web service.

How to write Use case(s) & Test Driven Development for binding a gridview Example

I'm still trying to understand and use use cases and Test-Driven Development, but I'm having a hard time crossing the line. I'm hoping someone can provide a good example of how setting a datasource and/or databinding a GridView could be accomplished using Test-Driven Development.
Here is my pseudo-approach:
Create an .aspx page and add the GridView control to it.
Create a method in the code-behind called BindGrid(datacollection, gridview) that passes the collection and GridView to a method in a class outside my website (so I can actually write the unit test for the method) and returns a databound GridView.
On the BindGrid method, I right-click and select "Create Unit Test", which creates a new test project for me with an outline test for my BindGrid() method.
Now I guess there are a number of tests I could write, for example testTrueDataCollectionBindsToGridView() to see if the collection datatype actually binds? I'm not sure what other tests to write.
This is how I currently understand I would go about TDD and unit testing in my example. It feels very clumsy, and I'm hoping for some feedback on what I'm doing wrong, and ideas for improvement.
Thanks
Update:
I've decided to try to simplify my question in hopes of getting more ideas.
How would you go about writing a test for binding a collection to a control? For example, say I wanted to bind a dictionary to a drop-down list. What tests should I be writing, and how would I go about writing them?
Thanks
To some extent, your question describes why the ASP.NET MVC framework was created. It is very difficult to write tests against the ASP.NET Web Forms model (of which the GridView is a part). Also, you seem to be close to writing tests that test the framework itself, not the code you are writing. If you go too far down that path you will write tests forever and never release any code. It would be more productive, in this case, to separate out whatever code you use to generate the collection you are binding to the GridView and thoroughly test the scenarios around creating it (especially any exception conditions and possible conditional logic), and trust that the framework binds the data to the grid correctly. If the data generation method can produce any exceptions that you want to handle at the UI level, test how your code handles that. But again, this is where the MVC frameworks are helpful, because it is extremely difficult to test code that needs to run within the Web Forms lifecycle.
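As a sketch of that separation (Product and ProductRepository are invented names), the logic that produces the collection lives in a plain class that can be tested without any page, and the code-behind only does the binding:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public class Product
    {
        public string Name { get; set; }
        public bool IsActive { get; set; }
    }

    // Plain class: testable without the Web Forms lifecycle.
    public class ProductRepository
    {
        private readonly IList<Product> _source;
        public ProductRepository(IList<Product> source) { _source = source; }

        public IList<Product> GetActiveProducts()
        {
            if (_source == null)
                throw new InvalidOperationException("No product source configured.");
            return _source.Where(p => p.IsActive).ToList();
        }
    }

    [TestClass]
    public class ProductRepositoryTests
    {
        [TestMethod]
        public void GetActiveProducts_FiltersOutInactiveItems()
        {
            var repo = new ProductRepository(new List<Product>
            {
                new Product { Name = "A", IsActive = true },
                new Product { Name = "B", IsActive = false }
            });

            var result = repo.GetActiveProducts();

            Assert.AreEqual(1, result.Count);
            Assert.AreEqual("A", result[0].Name);
        }
    }

    // In the code-behind, the untested part stays trivial:
    //   grid.DataSource = new ProductRepository(LoadProducts()).GetActiveProducts();
    //   grid.DataBind();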
Just speaking in general terms, since I'm mostly doing Java these days and not .NET stuff: I find that TDD feels rather clumsy at first for most people, but give it some time, and see if you like it or not.
As far as writing tests goes, take this general approach. Create a method that you want to test, but without any body to it. Create a test that exercises some particular behavior, and keep it simple. Run the test, and it should fail (run red, assuming your IDE does that). Then write the line or couple of lines of code that make the test succeed (go green). Now write another test, and repeat. If you have conditionals, make sure all paths are tested. That is, write one path, then the other, writing the test for each first.
When you have a method that works the way you want, you can refactor it to your heart's content, since the tests will always be there to check your work. Look at the method. Perhaps there are 4-5 lines that go together and could be pulled out into a method. Give that method a good name, so that when you're reading the calling method, the name tells you what's going to happen without drilling in. There are other possible refactorings, especially where you can see design patterns that you can leverage.
Make sure you re-run the tests frequently as you proceed with your refactoring.
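As a concrete sketch of that cycle (the Discount class is invented for illustration): write the failing tests first, one per conditional path, then add just enough code to turn them green.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class DiscountTests
    {
        // Step 1: these tests are written first and fail (red)
        // while Discount.For has no real body yet.
        [TestMethod]
        public void RepeatCustomers_GetTenPercentDiscount()
        {
            Assert.AreEqual(0.10m, Discount.For(isRepeatCustomer: true));
        }

        [TestMethod]
        public void NewCustomers_GetNoDiscount()
        {
            Assert.AreEqual(0m, Discount.For(isRepeatCustomer: false));
        }
    }

    // Step 2: just enough code to make both tests pass (green);
    // each branch of the conditional is covered by its own test.
    public static class Discount
    {
        public static decimal For(bool isRepeatCustomer)
        {
            return isRepeatCustomer ? 0.10m : 0m;
        }
    }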
TDD is about adding functionality. It encourages minimal logic in UI components: mostly they should be pure delegates to the code that does the real work, which you can test.
I would recommend, for the purposes of learning TDD, not bothering with tests of UI behaviour (e.g. binding data to a control).
To learn TDD effectively, I think a good place to start is try a few code katas. A code kata is a small problem which you do repeatedly. The solution you develop is not the important thing - learn from the process of getting to the solution. So do the problem 10 times, and deliberately choose different designs for the solution each time. TDD is about the process.
There is a list of katas here: http://codingdojo.org/cgi-bin/wiki.pl?KataCatalogue
The Roman Numerals kata is a good one, but just pick one that appeals to you.

How can I make my Selenium tests less brittle?

We use Selenium to test the UI layer of our ASP.NET application. Many of the test cases test longer flows that span several pages.
I've found that the tests are very brittle, broken not just by code changes that actually change the pages but also by innocuous refactorings such as renaming a control (since I need to pass the control's clientID to Selenium's Click method, etc.) or replacing a GridView with a Repeater. As a result I find myself "wasting" time updating string values in my test cases in order to fix broken tests.
Is there a way to write more maintainable Selenium tests? Or a better web UI testing tool?
Edited to add:
Generally the first draft is created by recording a test in the IDE. (This first step may be performed by QA staff.) Then I refactor the generated C# code (extract constants, extract methods for repeated code, maybe repeat the test case with different data, etc). But the general flow of code for each test case remains reasonably close to the originally generated code.
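For example, the kind of refactoring involved might look like this (invented locators and values):

    // Raw IDE recording: literal locators and values inline.
    selenium.Type("ctl00_LoginPanel_tUserName", "testuser1");
    selenium.Type("ctl00_LoginPanel_tPassword", "secret");
    selenium.Click("ctl00_LoginPanel_btnLogin");

    // After refactoring: constants extracted, repeated steps behind a method.
    private const string UserNameField = "ctl00_LoginPanel_tUserName";
    private const string PasswordField = "ctl00_LoginPanel_tPassword";
    private const string LoginButton = "ctl00_LoginPanel_btnLogin";

    private void LogIn(string userName, string password)
    {
        selenium.Type(UserNameField, userName);
        selenium.Type(PasswordField, password);
        selenium.Click(LoginButton);
    }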
I've found the PageObject pattern very helpful.
http://code.google.com/p/webdriver/wiki/PageObjects
More info:
- What's the Point of Selenium?
- Selenium Critique
Maybe a good way to start is to incrementally refactor your test cases.
I use the same scenario you have: Selenium + C#.
Here is how my code looks.
A test method will look something like this:
[TestMethod]
public void RegisterSpecialist(UserInfo usrInfo, CompanyInfo companyInfo)
{
    var RegistrationPage = new PublicRegistrationPage(selenium)
        .FillUserInfo(usrInfo)
        .ContinueSecondStep();
    RegistrationPage.FillCompanyInfo(companyInfo).ContinueLastStep();
    RegistrationPage.FillSecurityInformation(usrInfo).ContinueFinishLastStep();
    Assert.IsTrue(RegistrationPage.VerifySpecialistRegistrationMessagePayPal());
    selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
    paypal.LoginSandboxPage(usrInfo.sandboxaccount, usrInfo.sandboxpwd);
    Assert.IsTrue(paypal.VerifyAmount(usrInfo));
    paypal.SubmitPayment();
    RegistrationPage.GetSpecialistInformation(usrInfo);
    var bphome = new BPHomePage(selenium, string.Format(Resources.GlobalResources.LoginBPHomePage, usrInfo.AccountName, usrInfo.Password));
    Assert.IsTrue(bphome.VerifyPageWasLoaded(usrInfo));
    Assert.IsTrue(bphome.VerifySpecialistProfile());
    bphome.Logout();
}
A page object will look something like this:
public class PublicRegistrationPage
{
    public ISelenium selenium { get; set; }

    #region Constructors
    public PublicRegistrationPage(ISelenium sel)
    {
        selenium = sel;
        selenium.Open(Resources.GlobalResources.PublicRegisterURL);
    }
    #endregion

    #region Methods
    public PublicRegistrationPage FillUserInfo(UserInfo usr)
    {
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserFirstName", usr.FirstName);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserLastName", usr.LastName);
        selenium.Select("ctl00_cphComponent_ctlContent_wizRegister_ddlUserCountry", string.Format("label={0}", usr.Country));
        selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserEmail", usr.Email);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserDirectTel", usr.DirectTel);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserMobile", usr.Mobile);
        return this;
    }
    #endregion
}
Hope this helps.
How are you creating your Selenium tests? By recording them and playing them back? What we have done is build an object model around pages, so that you call a method like "clickSubmit()" rather than clicking on an id (with a naming convention for these ids), which allows Selenium tests to survive many changes.
You may or may not be able to write tests that are resilient to refactoring. Here's how to make the refactoring less painful: continuous integration is essential.
- Run the tests every day or every build. The sooner a break is found, the easier it is to fix.
- Ensure devs can run the tests themselves. Again, the sooner it's seen and fixed, the easier.
- Keep Selenium tests few. They should focus on critical-path / pri-1 test scenarios. Deep testing should be done at the unit test level (or as jsunit tests). Integration tests are always expensive and less valuable.
Hooking onto low-level concepts like XPaths, CSS selectors, or IDs in end-to-end tests is a recipe for unstable tests.
I advise using testRigor to produce tests that won't break every time you run/change/improve your application a little bit.
The code analogous to the page object example above would look like this:
enter "Peter" into "First Name"
enter "Pen" into "Last Name"
enter "US" into "Country" below "User Data"
enter stored value "email" into "Email"
enter stored value "password" into "Password"
enter "415-123-4567" into "Direct Telephone"
enter "415-123-4568" into "Mobile Number"
click "Submit"
testRigor associates text that looks like a label with the nearby input, so as long as the page looks the same from an end-user's perspective, the testRigor scripts will stay green. Here is the doc.
Disclaimer: I'm a co-founder of testRigor. I co-founded it because we had these exact issues ourselves.
Hope this helps.
There are no innocuous changes when it comes to test automation ;)
We use the SAFS framework with Rational Robot (RRAFS) to minimize the impact to our automation scripts. There's still work to maintain the application map, but the scripts remain stable for the most part. The SAFS framework sounds very similar to the method cynicalman mentions, but it already packages up the generic methods you would use in your scripts.
The SAFS site says there's partial support for Selenium, so this may work for you.
I've found that using XPath expressions in Selenium RC adds a lot to the robustness of a test.
I write my tests in a similar manner. The first pass is often written via the IDE/record feature to get most of my page flow and click operations. Once I've got that, I begin stepping through the test via Selenium RC, adding assertions and changing absolute widget locators to more readable and friendly XPath expressions (as well as documenting the test! :) )
One thing to be aware of: if your tests are XPath-heavy, they may run a little slower in IE6 due to its poor JavaScript execution abilities. (I have some test suites that take almost an hour longer to execute under IE than under FF. It's manageable, but just something to keep in mind when you're writing the tests.)
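For example (invented locators), trading a generated clientID for an XPath keyed on what the user actually sees:

    // Brittle: breaks whenever the control is renamed or its naming container changes.
    selenium.Click("ctl00_cphComponent_ctlContent_btnSubmit");

    // More robust: tied to the rendered markup the user sees, not generated IDs.
    selenium.Click("//input[@type='submit' and @value='Submit']");
    selenium.Type("//label[text()='First Name']/following-sibling::input", "Peter");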
Selenium, in theory, has an abstraction called UI Element (the documentation is here).
The features would be:
- abstract locators, independent of the actual HTML implementation; this would map well to the concept of a component or widget in a web framework,
- rollup rules, allowing several commands to be merged into a single, more abstract command.
I struggled for a couple of days to leverage this feature, but in the end I decided to abandon it, for the following reasons:
- some concepts, such as that of offset locators (think of them as parts of a component), are not fully or usefully developed;
- the feature is not fully supported in formatters, and the more recent the formatter, the less the feature is supported, hinting that core Selenium evolution is leaving this feature behind;
- it's not fully integrated in Selenium 2.0 (WebDriver).
I think XPath is the best way to ensure robust Selenium tests.
I am currently working on a library to help make writing XPath expressions easier.
If interested, you can check it out here:
http://www.unit-testing.net/CurrentArticle/How-To-Write-XPath-for-Selenium-Tests.html
