Web test recording: automatically insert assertions during recording? - automated-tests

I need to automate the recording of web test scenarios as much as possible. Selenium IDE, or better, the Katalon plugin for Chrome, seems very effective for this. However, what's missing from the recording are the assertions. So far I've found no real alternative to adding them by hand after the recording is done.
Now, I know which parts of my pages contain relevant output text, i.e. are subject to test: for instance, based on ID patterns, class names, tag hierarchy, etc.
So, given that my web app is in a "known good state", I could theoretically grab the text content of the relevant tags during the recording and insert my assertions into the recorded scenario right there and then. My aim is to automate this.
Is there any way to do this in the Katalon plugin, Selenium IDE, or any other automated web recording tool? I've read about Katalon Extension Scripts, but as far as I understand, they cannot do what I want.
-- edit -- trying to rephrase and be more concrete --
During my recording, on certain events (e.g. on page load), I want the tool to find all elements that match certain selectors and, for each match, store an assertion in the scenario that asserts the actual current value (e.g. div.innerText or input.value) of the element on the page. I want to define the events and selectors that should trigger the insertion of assertions, and the expression that defines the asserted value.
example
Suppose my webapp has a search page. I enter data in input fields, and hit the "search" button. These actions are recorded by most tools like Katalon Recorder. Now on the next page, the search results will show. Each search result will be in a div class="result". Suppose while recording I got two search results "foo" and "bar". So I want the tool to store in the scenario, while recording, an assertion that the first result should be "foo" and the second should be "bar", based on my rule that all $("div.result") should have their "innerText" asserted upon page load.
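To sketch the idea in JavaScript (the names are made up, not tied to any particular tool), the kind of rule I have in mind could be expressed as data: an event, a selector, and an expression for the asserted value:

    /* Hypothetical rule set: on page load, assert the innerText of every
       div.result, and the current value of every input and select. */
    var assertionRules = [
        { event: "load", selector: "div.result", value: function(el) { return el.innerText; } },
        { event: "load", selector: "input, select", value: function(el) { return el.value; } }
    ];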

Avoid Selenium IDE: its compatibility with Firefox was discontinued as of Firefox 55, so you will not be able to run your tests on recent versions of Firefox.
When performing actions in the browser, it is relatively easy to record those actions and re-run them later. It is 100% clear which button you just pressed.
You can probably make a million different assertions on a page. It would be difficult for any tool to guess which things you would like to assert and then add those assertions automatically, so I would be surprised if you found a tool that does exactly what you want.
What is keeping you from writing your own automated tests in code, from scratch? In my experience, coding your own tests is not that much slower, and once you are used to it you will be able to tackle more complex problems with much more ease.
I have no experience with Katalon.

You can't add assertions at recording time, but you can use Selenese commands after recording.
Check official reference here: https://docs.katalon.com/display/KD/Selenese+%28Selenium+IDE%29+Commands+Reference

For what it's worth, I've managed to get what I needed as follows:
Locate the extension directory of Katalon Recorder in my Chrome installation.
Copy the entire contents to Eclipse.
Modify the source file content/recorder.js, adding the following to the Recorder.attach() method:
var self = this;
$(...).each(function(i, el) {
    var target = self.locatorBuilders.buildAll(el);
    if (el.tagName == "SELECT" || el.tagName == "INPUT")
        recorder.record("assertValue", target, el.value, false);
    else
        recorder.record("assertText", target, el.innerText, false);
});
(Note: the ... stands for the jQuery selectors that define the areas that I know will contain relevant data in the application. This could be tweaked either in this source (e.g. by adding more selectors) or in the application itself (e.g. by adding a signaling class to certain tags in the HTML, just to trigger assertions).)
In Chrome, activate "Developer mode" on the extensions page and load the modified extension unpacked.
While recording, assertions are now automatically added for the relevant parts (... in the above) of my web app, on each page load.
happy!

Related

Customized json report for karate framework [duplicate]

I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a bamboo build that runs our karate repository of features and scenarios. At the end it produces nice cucumber HTML reports. On "overview-features.html" I would like an option added to the top-right menu (which currently includes "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would provide exactly the same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why do I need this? We have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specified tag, "#bug=abc-12345". I want them muted/excluded from the cucumber report that's produced at the end of the bamboo build for karate, so I can quickly look at the number of passed features/scenarios and see whether it's 100% or not. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. With these scenarios that are expected to fail (and will continue to fail until they're resolved) left in, it is very tedious and time-consuming to go through all the individual feature file reports, look at the failing scenarios, and then look into why. I don't want them removed completely: when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
after the Runner completes, you can massage the results and even re-try some tests
you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can, in theory, implement a new SuiteReports. In the Runner, there is a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
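For orientation, here is a minimal Karate 1.0 runner sketch (the feature path is a placeholder, and the post-processing step is the part you would write yourself) that produces the Results object you would then massage:

    // Sketch only: Runner.path(...).parallel(n) and Results are Karate 1.0 APIs;
    // filtering scenario results by your #bug tag is left as your own code.
    import com.intuit.karate.Results;
    import com.intuit.karate.Runner;

    public class ReportRunner {
        public static void main(String[] args) {
            Results results = Runner.path("classpath:features")
                    .outputCucumberJson(true) // keeps feeding the cucumber HTML reports
                    .parallel(5);
            // post-process "results" here before rendering a custom report
            System.out.println("failed: " + results.getFailCount());
        }
    }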


What's the idea behind W-, P- and I-files?

I'm working with the AppBuilder and Procedure Editor in Progress release 11.6.
As mentioned in some previous questions, I regularly have problems with the AppBuilder: it doesn't want to open files, it corrupts them (deleting parts of the source code), and so on. One of the reasons now seems to be the limit that a procedure cannot exceed 32K, comments included.
At first I thought "Are we back in the stone age?", pardon my reaction.
But now I'm starting to think that we are completely abusing the whole concept, so I'd like to lay out my view of W-, P- and I-files; please confirm (or correct):
W-files are meant only to contain GUI definitions, like a form with some frames, buttons, fill-in fields, ...; any real programming needs to be done in P-files.
P-files contain the real intelligence: that is where the procedures and functions are elaborated, which can be used by the rest of the P-files, or ultimately by the W-files.
I-files are just there to include general behaviour.
Let me give you an example:
W-file:
DEFINE VARIABLE combo_information AS CHARACTER VIEW-AS COMBO-BOX. /* with some information on the content, if this is static */
...
ON CHOOSE OF combo_information DO:
RUN very_large_procedure.
END.
...
{about.i} /* see here-after */
...
P-file:
PROCEDURE very_large_procedure:
    DO: /* a lot */
    END.
END PROCEDURE.
I-file (about.i):
/* Describes the help-about menu item */
Working like this (putting only the GUI-related things in the W-file and letting the "real" programming be done in the P-files), the mentioned 32K limit will never be reached. On top of that, a procedure can be added easily, and the AppBuilder will not delete it, as the AppBuilder won't ever open the P-file.
Is my view correct (and what about the I-files)?
If so, one technical question: how can I launch a procedure from a P-file inside a W-file? (Obviously, the example above can't work, as in the W-file I did not say where to look for very_large_procedure.)
The naming is arbitrary and you may occasionally find other extensions being used. Having said that:
"W" is for "window", it is supposed to contain code that is related to making a GUI work. It is very often abused to contain any sort of code. It is usually abused in that way by people who learned to code on the app builder or who have never coded on anything but Windows.
"P" is for "Progress" and retconned to "Procedure". It was the standard back in the old days prior to the appearance of Windows GUI code. Any "headless" code or character mode code would typically go into a dot-p file.
"I" is for "include". This is a very old school way to create reusable code snippets and common "header files". Include files are commonly parameterized. Potentially with either named or positional arguments.
Another major extension is ".cls" files. These are for OO4GL classes (OpenEdge 10 and above).
Launching procedures is achieved by running them:
RUN myproc.p.
or
RUN guiproc.w.
Or, if by "launch", you mean "start a session" then you use the "-p procedureName" startup parameter along with prowin32.exe or prowin.exe for Windows GUI code or _progres.exe for batch or character code.

Two instances of Modernizr on one page

I'm currently trying to make some speed improvements to one of my sites, and I'm looking at Modernizr usage.
Previously, all of my JavaScript (including Modernizr) was lumped into one big JS file. I've now pulled Modernizr out, and it sits inline in the head section of the page. For clarity, it is a custom build.
However, not all feature detects are equal: some features benefit from being detected quickly, while others can wait.
For instance, detecting WebP support is pretty important, because downloading a JPEG and then a WebP version as well sort of defeats the object of the feature.
Then there are things like pointer/touch support, which don't affect layout as such and are more to do with interaction, so they can wait.
With that in mind, the obvious thing is to put two instances of Modernizr on the page: one for the important stuff at the top, and one for the rest at the bottom.
However, I've been unable to find anything on this topic. I guess that leads me to ask two questions: is it possible? And if it is, is it a sensible idea?
It definitely is possible to have two instances of Modernizr on one page, but in order to do that you have to manually rename the global object to something else, since Modernizr is exposed on the window object directly:
e.Modernizr = Modernizr // e is internal ref. to window object
}(window, document);
This, however, may be considered a dirty patch, since you have to alter the production code (and maintain that alteration manually through update cycles), and you download and execute exactly the same basic functionality twice, which is suboptimal.
Another approach would be to build all that is needed immediately for the first batch of (essential) tests, and then to utilize Modernizr.addTest (it has to be included in the build) later on for the non-essential functionality.
(See the Modernizr source and its doc-like comments.)
Of course, you'd have to write your own tests. You may rely on the official Modernizr tests, but addTest called outside of Modernizr's factory method lacks some useful helpers (for example, Modernizr's internal createElement()).
You have to make choices, since there is no way to subsequently add other tests out of the box.
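For illustration, a deferred detect registered with Modernizr.addTest might look like this (a sketch; the detect name is made up, and it assumes addTest was included in the inline head build):

    // Runs after the essential inline build, e.g. from the bundle at the
    // bottom of the page; registers a non-essential detect.
    Modernizr.addTest('dompointerevents', function() {
        return !!window.PointerEvent;
    });

    // Afterwards the result is available like any other detect:
    if (Modernizr.dompointerevents) {
        // wire up pointer-based interaction enhancements here
    }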

How can I make my Selenium tests less brittle?

We use Selenium to test the UI layer of our ASP.NET application. Many of the test cases test longer flows that span several pages.
I've found that the tests are very brittle, broken not just by code changes that actually change the pages but also by innocuous refactorings such as renaming a control (since I need to pass the control's clientID to Selenium's Click method, etc) or replacing a gridview with a repeater. As a result I find myself "wasting" time updating string values in my test cases in order to fix broken tests.
Is there a way to write more maintainable Selenium tests? Or a better web UI testing tool?
Edited to add:
Generally the first draft is created by recording a test in the IDE. (This first step may be performed by QA staff.) Then I refactor the generated C# code (extract constants, extract methods for repeated code, maybe repeat the test case with different data, etc). But the general flow of code for each test case remains reasonably close to the originally generated code.
I've found the PageObject pattern very helpful:
http://code.google.com/p/webdriver/wiki/PageObjects
More info:
- What's the Point of Selenium?
- Selenium Critique
Maybe a good way to start is to incrementally refactor your test cases.
I use the same setup you have: Selenium + C#.
Here is what my code looks like.
A test method will look something like this:
[TestMethod]
public void RegisterSpecialist(UserInfo usrInfo, CompanyInfo companyInfo)
{
    var RegistrationPage = new PublicRegistrationPage(selenium)
        .FillUserInfo(usrInfo)
        .ContinueSecondStep();
    RegistrationPage.FillCompanyInfo(companyInfo).ContinueLastStep();
    RegistrationPage.FillSecurityInformation(usrInfo).ContinueFinishLastStep();
    Assert.IsTrue(RegistrationPage.VerifySpecialistRegistrationMessagePayPal());
    selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
    paypal.LoginSandboxPage(usrInfo.sandboxaccount, usrInfo.sandboxpwd);
    Assert.IsTrue(paypal.VerifyAmount(usrInfo));
    paypal.SubmitPayment();
    RegistrationPage.GetSpecialistInformation(usrInfo);
    var bphome = new BPHomePage(selenium, string.Format(Resources.GlobalResources.LoginBPHomePage, usrInfo.AccountName, usrInfo.Password));
    Assert.IsTrue(bphome.VerifyPageWasLoaded(usrInfo));
    Assert.IsTrue(bphome.VerifySpecialistProfile());
    bphome.Logout();
}
A page object will look something like this:
public class PublicRegistrationPage
{
    public ISelenium selenium { get; set; }

    #region Constructors
    public PublicRegistrationPage(ISelenium sel)
    {
        selenium = sel;
        selenium.Open(Resources.GlobalResources.PublicRegisterURL);
    }
    #endregion

    #region Methods
    public PublicRegistrationPage FillUserInfo(UserInfo usr)
    {
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserFirstName", usr.FirstName);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserLastName", usr.LastName);
        selenium.Select("ctl00_cphComponent_ctlContent_wizRegister_ddlUserCountry", string.Format("label={0}", usr.Country));
        selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserEmail", usr.Email);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserDirectTel", usr.DirectTel);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserMobile", usr.Mobile);
        return this;
    }
    #endregion
}
Hope this helps.
How are you creating your Selenium tests, by recording them and playing them back? What we have done is build an object model around pages, so that you call a method like clickSubmit() rather than clicking on an ID (with a naming convention for these IDs). This allows Selenium tests to survive many changes.
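A sketch of that idea (C#, matching the page-object answer above; the method, ID and naming convention are made up):

    // Page-object method: callers never see the underlying control ID.
    public void ClickSubmit()
    {
        // naming convention: "btn" + logical name, kept stable across refactorings
        selenium.Click("btnSubmit");
        selenium.WaitForPageToLoad("30000");
    }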
You may or may not be able to write tests that are resilient to refactoring, but here's how to make the refactoring less painful. Continuous integration is essential:
Run the tests every day or every build. The sooner a break is found, the easier it is to fix.
Ensure devs can run the tests themselves. Again, the sooner it's seen and fixed, the easier.
Keep Selenium tests few. They should focus on critical-path / pri-1 test scenarios. Deep testing should be done at the unit test level (or in jsunit tests). Integration tests are always expensive and less valuable.
Hooking your end-to-end tests up to any low-level concepts like XPaths, CSS selectors or IDs is a recipe for unstable tests.
I advise using testRigor to produce tests that won't break every time you run/change/improve your application a little bit.
The code analogous to the page object one above would look like this:
enter "Peter" into "First Name"
enter "Pen" into "Last Name"
enter "US" into "Country" below "User Data"
enter stored value "email" into "Email"
enter stored value "password" into "Password"
enter "415-123-4567" into "Direct Telephone"
enter "415-123-4568" into "Mobile Number"
click "Submit"
testRigor associates text that looks like a label with the corresponding input, so as long as your page looks the same from an end-user's perspective, the testRigor scripts will stay green. Here is the doc.
disclaimer: I'm a co-founder of testRigor. I co-founded it because we had those exact issues ourselves.
Hope this helps.
There are no innocuous changes when it comes to test automation ;)
We use the SAFS framework with Rational Robot (RRAFS) to minimize impact to our automation scripts. There's still work to maintain the application map, but the scripts remain stable for the most part. The SAFS framework sounds very similar to the method cynicalman mentions, but already packages up the generic methods you would use in your scripts.
The SAFS site says there's partial support for Selenium, so this may work for you.
I've found that using XPath expressions in Selenium-RC adds a lot to the robustness of a test.
I write my tests in a similar manner. The first pass is often written via the IDE/record feature to get most of my page flow and click operations. Once I've got that, I begin stepping through the test via Selenium-RC, adding assertions and changing absolute widget locators to more readable and friendly XPath expressions (as well as documenting the test! :) ).
One thing to be aware of: if your tests are XPath-heavy, they may run a little slower in IE6 due to its poor JavaScript execution abilities. (I have some test suites that take almost an hour longer to execute under IE than under FF. It's manageable, but just something to keep in mind when you're writing the tests.)
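As a hypothetical illustration (C# Selenium-RC client, as used elsewhere in this thread; both locators are made up), compare a locator tied to an auto-generated client ID with one keyed to what the user actually sees:

    // Brittle: breaks whenever the auto-generated ASP.NET client ID changes.
    selenium.Click("ctl00_cphComponent_ctlContent_btnSubmit");

    // More robust: keyed to the rendered submit button's visible value.
    selenium.Click("xpath=//input[@type='submit' and @value='Search']");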
Selenium in theory has an abstraction called UI Element (the documentation is here).
The features would be:
abstract locators, independent of the actual HTML implementation; this would map well to the concept of a component or widget in a web framework;
rollup rules, allowing you to merge several commands into a single, more abstract command.
I struggled for a couple of days to leverage this feature, but in the end I decided to abandon it, for the following reasons:
some concepts, such as that of offset locators (think of them as parts of a component), are not fully or usefully developed;
the feature is not fully supported by the formatters, and the more recent the formatter, the less the feature is supported, hinting that core Selenium evolution is leaving this feature behind;
it's not fully integrated in Selenium 2.0 (WebDriver).
I think XPath is the best way to ensure robust Selenium tests.
I am currently working on a library to make writing XPath expressions easier.
If you're interested, you can check it out here:
http://www.unit-testing.net/CurrentArticle/How-To-Write-XPath-for-Selenium-Tests.html
