How can I make my Selenium tests less brittle?

We use Selenium to test the UI layer of our ASP.NET application. Many of the test cases test longer flows that span several pages.
I've found that the tests are very brittle, broken not just by code changes that actually alter the pages but also by innocuous refactorings such as renaming a control (since I need to pass the control's clientID to Selenium's Click method, etc.) or replacing a GridView with a Repeater. As a result I find myself "wasting" time updating string values in my test cases in order to fix broken tests.
Is there a way to write more maintainable Selenium tests? Or a better web UI testing tool?
Edited to add:
Generally the first draft is created by recording a test in the IDE. (This first step may be performed by QA staff.) Then I refactor the generated C# code (extracting constants, extracting methods for repeated code, perhaps repeating the test case with different data, etc.). But the general flow of each test case remains reasonably close to the originally generated code.

I've found the PageObject pattern very helpful.
http://code.google.com/p/webdriver/wiki/PageObjects
More info:
- What's the Point of Selenium?
- Selenium Critique
Maybe a good way to start is to incrementally refactor your test cases.
I use the same scenario you have: Selenium + C#.
Here is how my code looks.
A test method will look something like this:
[TestMethod]
public void RegisterSpecialist(UserInfo usrInfo, CompanyInfo companyInfo)
{
    var registrationPage = new PublicRegistrationPage(selenium)
        .FillUserInfo(usrInfo)
        .ContinueSecondStep();
    registrationPage.FillCompanyInfo(companyInfo).ContinueLastStep();
    registrationPage.FillSecurityInformation(usrInfo).ContinueFinishLastStep();
    Assert.IsTrue(registrationPage.VerifySpecialistRegistrationMessagePayPal());
    selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);

    // "paypal" is another page object, for the PayPal sandbox (class not shown here)
    paypal.LoginSandboxPage(usrInfo.sandboxaccount, usrInfo.sandboxpwd);
    Assert.IsTrue(paypal.VerifyAmount(usrInfo));
    paypal.SubmitPayment();

    registrationPage.GetSpecialistInformation(usrInfo);
    var bphome = new BPHomePage(selenium,
        string.Format(Resources.GlobalResources.LoginBPHomePage, usrInfo.AccountName, usrInfo.Password));
    Assert.IsTrue(bphome.VerifyPageWasLoaded(usrInfo));
    Assert.IsTrue(bphome.VerifySpecialistProfile());
    bphome.Logout();
}
A page object will look something like this:
public class PublicRegistrationPage
{
    public ISelenium selenium { get; set; }

    #region Constructors
    public PublicRegistrationPage(ISelenium sel)
    {
        selenium = sel;
        selenium.Open(Resources.GlobalResources.PublicRegisterURL);
    }
    #endregion

    #region Methods
    public PublicRegistrationPage FillUserInfo(UserInfo usr)
    {
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserFirstName", usr.FirstName);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserLastName", usr.LastName);
        selenium.Select("ctl00_cphComponent_ctlContent_wizRegister_ddlUserCountry", string.Format("label={0}", usr.Country));
        selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserEmail", usr.Email);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserDirectTel", usr.DirectTel);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserMobile", usr.Mobile);
        return this;
    }
    #endregion
}
Hope this helps.

How are you creating your Selenium tests, by recording them and playing them back? What we have done is build an object model around pages, so that you call a method like clickSubmit() rather than clicking on an ID (with a naming convention for those IDs), which allows Selenium tests to survive many changes.
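For illustration, a minimal sketch of what such a page method could look like with the Selenium RC C# client (the SearchPage class, the ID constant, and the timeout value are all hypothetical, not the answerer's actual code):

using Selenium;

// Hypothetical page class; IDs follow a project-wide naming convention
public class SearchPage
{
    private readonly ISelenium selenium;

    // The only place that knows this control's client ID; a rename touches one constant
    private const string SubmitButtonId = "ctl00_cphContent_btnSubmit";

    public SearchPage(ISelenium selenium)
    {
        this.selenium = selenium;
    }

    public void ClickSubmit()
    {
        selenium.Click(SubmitButtonId);
        selenium.WaitForPageToLoad("30000");
    }
}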

You may or may not be able to write tests that are resilient to refactoring. Here's how to make the refactoring less painful; continuous integration is essential.
- Run the tests every day or every build. The sooner a break is found, the easier it is to fix.
- Ensure devs can run the tests themselves. Again, the sooner a problem is seen and fixed, the easier.
- Keep Selenium tests few. They should focus on critical-path / priority-1 scenarios. Deep testing should be done at the unit-test level (or with jsUnit tests). Integration tests are always expensive and less valuable.

Relying on low-level concepts like XPaths, CSS selectors, or IDs in end-to-end tests is a recipe for unstable tests.
I advise using testRigor to produce tests that won't break every time you run/change/improve your application a little bit.
The code analogous to the page object one above would look like this:
enter "Peter" into "First Name"
enter "Pen" into "Last Name"
enter "US" into "Country" below "User Data"
enter stored value "email" into "Email"
enter stored value "password" into "Password"
enter "415-123-4567" into "Direct Telephone"
enter "415-123-4568" into "Mobile Number"
click "Submit"
testRigor associates text that looks like a label with the corresponding input, so as long as the page looks the same from an end user's perspective, the testRigor scripts will stay green. Here is the doc.
disclaimer: I'm a co-founder of testRigor. I co-founded it because we had those exact issues ourselves.
Hope this helps.

There are no innocuous changes when it comes to test automation ;)
We use the SAFS framework with Rational Robot (RRAFS) to minimize impact to our automation scripts. There's still work to maintain the application map, but the scripts remain stable for the most part. The SAFS framework sounds very similar to the method cynicalman mentions, but already packages up the generic methods you would use in your scripts.
The SAFS site says there's partial support for Selenium, so this may work for you.

I've found that using XPath expressions in Selenium RC adds a lot to the robustness of a test.
I write my tests in a similar manner. The first pass is often written via the IDE/record feature to get most of my page flow and click operations. Once I've got that, I begin stepping through the test via Selenium RC, adding assertions and changing absolute widget locators to more readable and friendly XPath expressions. (As well as documenting the test! :) )
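For example, a recorded locator keyed to an auto-generated ID can often be swapped for an XPath that targets what the user actually sees (both locators below are made-up illustrations, not from the question):

// Brittle: breaks when the control is renamed or its naming container changes
selenium.Click("ctl00_cphComponent_ctlContent_wizRegister_btnContinue");

// More robust: locate the button by its visible text
selenium.Click("//input[@type='submit' and @value='Continue']");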
One thing to be aware of: if your tests are XPath-heavy, they may run a little slower in IE6 due to its poor JavaScript execution abilities. (I have some test suites that take almost an hour longer to execute under IE than under FF. It's manageable, but just something to keep in mind when you're writing the tests.)

Selenium in theory has an abstraction called UI Element (the documentation is here).
The features would be:
- abstract locators, independent of the underlying HTML implementation; this maps well to the concept of a component or widget in a web framework,
- rollup rules, allowing several commands to be merged into a single more abstract command.
I struggled for a couple of days to leverage this feature, but in the end I decided to abandon it, for the following reasons:
- some concepts, such as that of offset locators (think of them as parts of a component), are not fully or usefully developed;
- the feature is not fully supported in formatters, and the more recent the formatter the less the feature is supported, hinting that core Selenium evolution is leaving this feature behind;
- it's not fully integrated in Selenium 2.0 (WebDriver).

I think XPath is the best way to ensure robust Selenium tests.
I am currently working on a library to make writing XPath expressions easier.
If interested, you can check it out here:
http://www.unit-testing.net/CurrentArticle/How-To-Write-XPath-for-Selenium-Tests.html
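To illustrate the idea only (this is a hypothetical helper sketched for this answer, not the API of the library linked above), a small wrapper can keep XPath strings readable and centralized:

// Hypothetical helper class, shown only to illustrate readable XPath construction
public static class XPathFor
{
    // Finds the first input that follows a label with the given visible text
    public static string InputWithLabel(string label)
    {
        return string.Format("//label[normalize-space(text())='{0}']/following::input[1]", label);
    }
}

// Usage with Selenium RC:
// selenium.Type(XPathFor.InputWithLabel("First Name"), "Peter");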

Related

Customized JSON report for Karate framework [duplicate]

I want an option on the Cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On the "overview-features.html" I would like an option added to the top right (alongside "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it provides the same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specified tag, "#bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build so I can quickly look at the number of passed features/scenarios and see whether it's 100%. If it is, great, that build is good. If not, I need to look into it further as we appear to have some regression. Without excluding the scenarios that are expected to fail (and will continue to fail until they're resolved), it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios, and then look into why. I don't want them removed completely: when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
- after the Runner completes you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can implement a new SuiteReports in theory. And in the Runner, there is a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047

Web test recording: automatically insert assertions during recording?

I need to automate as much as possible the recording of web test scenarios. Selenium IDE, or better, the Katalon plugin for Chrome, seems very effective for this. However, what's missing in the recordings are the assertions. I've so far found no real alternative to adding them by hand after the recording is done.
Now I know which parts of my pages contain relevant output text, i.e. are subject to test. For instance based on ID patterns, class names, tag hierarchy etc.
So given that my web app is in a "known good state", I could theoretically grab the text content of the relevant tags during the recording, and insert my assertions in the recorded scenario right there and then. My aim is to automate this.
Is there any way to do this in Katalon plugin, Selenium IDE or any other automated web recording tool? I've read about Katalon Extension Scripts but as far as I understand it, these cannot do what I want?
Edit: trying to rephrase and be more concrete.
During my recording, on certain events (e.g. on page load) I want the tool to find all elements that match certain selectors, and for each match store an assertion in the scenario that asserts the actual current value (e.g. div.innerText or input.value) of the element on the page. I want to define the events and the selectors that should trigger the insertion of assertions and the expression that defines the asserted value.
Example:
Suppose my webapp has a search page. I enter data in input fields, and hit the "search" button. These actions are recorded by most tools like Katalon Recorder. Now on the next page, the search results will show. Each search result will be in a div class="result". Suppose while recording I got two search results "foo" and "bar". So I want the tool to store in the scenario, while recording, an assertion that the first result should be "foo" and the second should be "bar", based on my rule that all $("div.result") should have their "innerText" asserted upon page load.
Avoid using Selenium IDE: compatibility with Firefox has been discontinued since Firefox version 55, so you will not be able to run your tests on recent versions of Firefox.
When performing actions in the browser, it is relatively easy to record those actions to re-run them again. It is 100% clear what button you just pressed.
You can probably do a million different assertions on a page; it would be difficult for any tool to guess which things you would like to assert and then automatically add those assertions, so I would be surprised if you found a tool that does exactly what you want.
What is keeping you from writing your own automated tests in code from scratch? From my experience, coding your own tests is not that much slower, but once you are used to doing this you will be able to tackle more complex problems with much more ease.
I have no experience with Katalon.
You can't add assertions at recording time, but you can use Selenese after recording too.
Check official reference here: https://docs.katalon.com/display/KD/Selenese+%28Selenium+IDE%29+Commands+Reference
For what it's worth, I've managed to get what I needed as follows:
- locate the extension directory of Katalon Recorder in my Chrome
- copy the entire contents to Eclipse
- modify the source content/recorder.js, method Recorder.attach(), by adding the following:
var self = this;
$(...).each(function(i, el) {
    var target = self.locatorBuilders.buildAll(el);
    if (el.tagName == "SELECT" || el.tagName == "INPUT")
        recorder.record("assertValue", target, el.value, false);
    else
        recorder.record("assertText", target, el.innerText, false);
});
(Note: ... stands for the jQuery selectors that define the areas I know will contain relevant data in the application. This could be tweaked either in this source, e.g. by adding more selectors, or in the application itself, e.g. by adding a signaling class to certain tags in the HTML just to trigger assertions.)
- in Chrome, activate "developer mode" and load the modified extension.
While recording, assertions are now automatically added for the relevant parts (the ... above) of my web app, on each page load.
Happy!

How to create functional / UI-Tests for an existing web application

I've got the interesting task of building complex workflow tests for an existing web application which has never had unit or integration tests (all testing done by developers / users without structure or guidelines).
Setup
The target is an ASP.NET (no MVC) web application that has been built over several years. There is no clean MVC or any other pattern, so the HTML output is very bad (generated IDs, few CSS classes, inline CSS styles), which makes it hard to test.
The application is data-centric, so there is a lot to test and the status of the database is very important for the tests.
I'm thinking of the following workflow:
1. Reset the database to the test start data
2. Run tests which create and verify data by simulating user interactions
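A minimal sketch of that workflow in C# (MSTest plus the Selenium RC client; the TestDatabase.ResetTestData helper, the connection string, and the URLs are hypothetical placeholders):

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Selenium;

[TestClass]
public class WorkflowTests
{
    private ISelenium selenium;

    [TestInitialize]
    public void SetUp()
    {
        // 1. Reset the database to known test start data (hypothetical helper)
        TestDatabase.ResetTestData("Server=localhost;Database=AppTest;Integrated Security=true");

        // 2. Start a browser session for the user-interaction tests
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost/app/");
        selenium.Start();
    }

    [TestCleanup]
    public void TearDown()
    {
        selenium.Stop();
    }
}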
Tools
I was playing around with Selenium IDE, but it does not feel structured enough.
I'm dreaming of a toolset where I can write tests in (literate) CoffeeScript / JavaScript that can be executed in the browser (no need for headless testing), but all the tools I can find aim to test JavaScript functions rather than user interactions.
My test steps need to be:
- page loads
- test if the page has loaded correctly by searching for text A or counting elements in ul B
- click on button C, wait for popup D, type text in field E
- click the save button and check that the text from field E is shown in element F
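Those steps map fairly directly onto Selenium RC calls; as a rough C# sketch (every locator, window name, and text below is a placeholder standing in for A-F above):

selenium.Open("/some-page");
selenium.WaitForPageToLoad("30000");

// Page loaded correctly: text A is present, or ul B contains the expected items
Assert.IsTrue(selenium.IsTextPresent("text A"));
Assert.AreEqual(3, (int)selenium.GetXpathCount("//ul[@id='B']/li"));

// Click button C, wait for popup D, type text into field E
selenium.Click("id=C");
selenium.WaitForPopUp("D", "30000");
selenium.SelectWindow("D");
selenium.Type("id=E", "some text");

// Save, then check that the text from field E shows up in element F
selenium.Click("id=save");
selenium.WaitForPageToLoad("30000");
Assert.AreEqual("some text", selenium.GetText("id=F"));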
Is Jasmine able to test this kind of user interaction? I can only find Jasmine tests for JavaScript functions or the existence of HTML elements, but not for complex workflows with test steps that rely on each other.
Thanks in advance!
Selenium IDE is not the way to go for this.
Selenium is built on the WebDriver JSON Wire Protocol. Having this 'base' means it has been very easily plugged into many different languages, all using the same kind of API.
One of them being JavaScript:
https://code.google.com/p/selenium/wiki/WebDriverJs
Disclaimer: the JS Bindings are very new!
I'm not sure I understand why you must do it in JavaScript, especially as I can see this severely limits your options.
I suggest IBM Rational Functional Tester. It's Java based, making it very extensible. It's not JavaScript, so be warned that tests are written in Java code. For most of it there's a recorder: you simply click around and it records your actions.
Regarding automated tests with it, here are my opinions.
It is a commercial product. I'm not affiliated with IBM but I work with RFT a lot.
You may want to look at RIATest for cross-platform cross-browser testing of web applications. It uses ECMAScript based script language which is very similar to JavaScript.
It works on Windows and Mac, supported browsers are Firefox, IE and Chrome. Automated testing scripts written on one platform/browser can be run against all other supported platforms/browsers.
It is certainly possible to automate the steps that you described. Dynamically generated IDs are not good but you should be able to use other properties (such as text, element type, etc.) to identify the HTML objects that you want to automate and test for.
(Disclaimer: I am a RIATest team member).

How to write Use case(s) & Test Driven Development for binding a gridview Example

I'm still trying to understand and use use cases and Test Driven Development, but I'm having a hard time crossing the line. I'm hoping someone can provide a good example of how setting a datasource and/or data-binding a GridView could be accomplished using Test Driven Development.
Here is my pseudo approach at it.
1. Create an aspx page and add the GridView control to it.
2. Create a method in the code-behind called BindGrid(datacollection, gridview) that passes the collection and GridView to a method in a class outside my website (so I can actually write the unit test for the method) and returns a data-bound GridView.
3. On the BindGrid method, I right-click and select "Create Unit Test", which creates a new test project for me with an outline test for my BindGrid() method.
Now I guess there are a number of tests I could write, for example testTrueDataCollectionBindstoGridView() to see if the collection datatype actually binds. I'm not sure what other tests to write.
This is how I currently understand I would go about TDD and Unit testing my example. It feels very clumsy and I'm hoping for some feedback as to what I'm doing wrong, and ideas for improvement.
Thanks
Update:
I've decided to try to simplify my question in hopes of getting more ideas.
How would you go about writing a test for a collection binding to a control? For example, say I wanted to bind a dictionary to a drop-down list. What tests should I be writing, and how would I go about writing them?
Thanks
To some extent, your question describes why the ASP.NET MVC framework was created. It is very difficult to write tests against the ASP.NET Web Forms model (of which the GridView is a part). Also, you seem to be close to writing tests to test the framework itself, not the code you are writing. If you go too far down that path you will write tests forever and never release any code.
It would be more productive, in this case, to separate out whatever code you use to generate the collection you are binding to the GridView and thoroughly test the scenarios around creating that (especially any exception conditions and possible conditional logic), and trust that the framework binds the data to the grid correctly. If the data-generation method can produce any exceptions that you want to handle at the UI level, test how your code handles that. But again, this is where the MVC frameworks are helpful, because it is extremely difficult to test code that needs to run within the Web Forms lifecycle.
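As a concrete illustration, a test against the data-producing code might look like this (UserRepository, GetActiveUsers, the IsEnabled property, and the connection string are invented for the example; the GridView never appears):

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestMethod]
public void GetActiveUsers_ExcludesDisabledAccounts()
{
    // Hypothetical data-access class; no Web Forms lifecycle involved
    var repository = new UserRepository("Server=localhost;Database=AppTest;Integrated Security=true");

    var users = repository.GetActiveUsers();

    // Verify the collection you would bind to the grid, not the binding itself
    Assert.IsTrue(users.All(u => u.IsEnabled));
}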
Just speaking in general terms, since I'm mostly doing Java these days and not .NET stuff. I find that TDD does feel rather clumsy at first for most people, but give it some time, and see if you like it or not.
As far as writing tests, take this general approach. Create a method that you want to test, but without any body to it. Create a test that exercises some particular behavior, and keep it simple. Run the test, and it should fail (run red, assuming your IDE does that). Then, write the line or couple lines of code that make the test succeed (go green). Now, write another test, and repeat. If you have conditionals, make sure all paths are tested. That is, write one path, then the other, writing the test for each first.
When you have a method that works the way you want, you can now refactor it to your heart's content, since the test will always be there to check your work. Look at the method. Perhaps there's 4-5 lines that go together, and could be pulled out into a method. Give the method a good name, so that when you're reading the calling method, the name tells you what's going to happen without drilling in. There are other possible refactorings, especially inasmuch as you can see design patterns that you can leverage.
Make sure you re-run the tests frequently as you proceed with your refactoring.
TDD is about adding functionality. It encourages minimal logic in UI components: mostly they should be pure delegates to the code that does the real work, which you can test.
I would recommend, for the purposes of learning TDD, don't bother writing tests about UI behaviour (eg binding data to a control).
To learn TDD effectively, I think a good place to start is try a few code katas. A code kata is a small problem which you do repeatedly. The solution you develop is not the important thing - learn from the process of getting to the solution. So do the problem 10 times, and deliberately choose different designs for the solution each time. TDD is about the process.
There is a list of katas here: http://codingdojo.org/cgi-bin/wiki.pl?KataCatalogue
The Roman Numerals kata is a good one, but just pick one that appeals to you.
