Verify XHTML is valid in a JUnit test?

I have a Spring controller that returns XHTML. What's the easiest way to set up a JUnit test that verifies that the XHTML is valid? I'd also like to verify certain elements are present.

The only way is to parse the output. I've used dom4j in my JUnit tests. You can then use XPath or the DOM to extract the elements you want and test them.
If you're not already using a parser, it can take a little messing around to get going. But once you have it working, it's very handy, and you can write all sorts of great tests. If parsers are new to you, perhaps take a look at the dom4j quick start guide.
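For illustration, a minimal JUnit 4 sketch along those lines. The renderControllerOutput() helper is hypothetical, standing in for however you invoke the controller (e.g. Spring's MockMvc); note that by default SAXReader only checks well-formedness, while full DTD validation would require the validating constructor plus DTD resolution.

import java.io.StringReader;
import org.dom4j.Document;
import org.dom4j.io.SAXReader;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

public class XhtmlResponseTest {

    @Test
    public void outputIsWellFormedAndHasTitle() throws Exception {
        String xhtml = renderControllerOutput();
        // read() throws DocumentException if the markup is not well-formed;
        // new SAXReader(true) would additionally validate against the DTD
        Document doc = new SAXReader().read(new StringReader(xhtml));
        // XPath with local-name() sidesteps the XHTML default namespace
        assertNotNull("missing <title> element",
                doc.selectSingleNode("//*[local-name()='title']"));
    }

    // hypothetical helper: stands in for invoking the Spring controller
    private String renderControllerOutput() {
        return "<html xmlns=\"http://www.w3.org/1999/xhtml\">"
                + "<head><title>Hello</title></head><body/></html>";
    }
}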

Related

Customized json report for karate framework [duplicate]

I want to have an option on the Cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top right (alongside "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would show exactly the same information that overview-features.html does, except that any scenario tagged with a special tag, for example @bug=abc-12345, is removed from the report and excluded from the numbers.
Why do I need this? We have some existing scenarios that fail due to defects in our own software, defects that might not get fixed for six months to a year. We've tagged them with a specific tag, "@bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build, so I can quickly check whether the number of passed features/scenarios is 100% or not. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. With these expected failures left in the report, it is very tedious and time-consuming to go through all the individual feature-file reports, find the failing scenarios, and work out why each one failed. I don't want them removed completely: when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
- after the Runner completes, you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) An example of how to "post process" result-data before rendering a report: RetryTest.java, and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where in theory you can implement a new SuiteReports. The Runner also has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
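If the simpler option of skipping the tagged scenarios entirely is acceptable (weaker than the post-processing approach above, since they neither run nor appear in the report), the documented Runner tag-selector syntax already covers it. A minimal sketch, assuming JUnit 5, a classpath:features folder, and a plain @bug tag on the affected scenarios (selectors over tags with values like @bug=abc-12345 may behave differently):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class BuildGateTest {

    @Test
    void testAllExceptKnownBugs() {
        // 'classpath:features' is a placeholder for your feature root;
        // '~@bug' selects scenarios NOT tagged @bug, so the known-broken
        // ones never run and never show up in the report numbers
        Results results = Runner.path("classpath:features")
                .tags("~@bug")
                .parallel(5);
        assertEquals(0, results.getFailCount(), results.getErrorMessages());
    }
}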

Allure: Organizing tests in one Feature

Could you please help me with two questions about organizing tests and using Allure's "feature" tags?
If I have a few different tests but need all of them included in one feature, do I have to write the @Features("My Feature") annotation above each test method? Is there a way to write the @Features("My Feature") annotation once and include all required tests in it?
If I have a few logically separated classes with my @Test methods, is there an easy way to call all required tests from one TestSuite class, in order to simply manage the test queue?
You can write the @Features annotation once per class. But do you really need such a feature? Maybe you should think a bit more and split your tests some other way?
Allure is not a test framework, it is just a reporting tool; Allure does not run tests. To answer this part of the question I need to know more about the test framework you use and your environment (Ant, Maven, Jenkins, TeamCity, etc.).
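A minimal sketch of both ideas, assuming JUnit 4 and Allure 2's Java annotations (the Allure 1.x era of this question used @Features from ru.yandex.qatools.allure.annotations instead); class names are made up, one class per file:

// LoginTests.java -- the class-level annotation applies to every @Test in it
import io.qameta.allure.Feature; // Allure 2; Allure 1.x used @Features
import org.junit.Test;

@Feature("My Feature")
public class LoginTests {

    @Test
    public void validLogin() { /* ... */ }

    @Test
    public void invalidLogin() { /* ... */ }
}

// MyFeatureSuite.java -- a plain JUnit 4 suite to manage the test queue,
// answering the second question
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ LoginTests.class /* , OtherTests.class */ })
public class MyFeatureSuite {
}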

Testing for valid HTML5 output using PHPUnit 4

How do you make unit tests for the HTML output of your PHP functions/scripts, specifically to check that the output is HTML5 valid?
Currently I can test functionality in PHPUnit and presentation with online copy/paste validators. But it would be much nicer if this could be integrated into the PHPUnit testing.
Is there a standard way to go about such things, or is it mainly a matter of using regular unit tests on functions which create the inserted content, and then making sure it looks correct in the browser/W3C Validator?
Similar question for older version of PHPUnit that no longer applies:
Unit tests for HTML Output?
What you're looking for is behaviour testing. Take a look at Behat.
The Twine project (http://twineproject.sourceforge.net/doc/phphtml.html) replaces the copy/paste manual process. It might be useful, but it still sends the HTML to the W3C site each time, which is not ideal for unit tests. (The W3C says all their stuff is open source, so you might be able to download it and run it locally... I couldn't find the download link though!)
An alternative approach is to use DomDocument::validate(). However, it requires the DTD to be referenced inside the document, and as this answer https://stackoverflow.com/a/15245834/841830 explains, HTML5 has no DTD.
(I'm assuming you mean that you have functions that return HTML5 strings, and you want to unit test those functions. If you want to test the whole output of a web app, e.g. as run through Apache and seen in a browser, I would use CasperJS or Selenium. But that is high-level functional testing, notably slower to run than unit tests, so I recommend unit testing whatever can be unit tested: and I still cannot find an offline HTML5 validator for Casper/Phantom/Slimer, nor for Selenium!)

How can I make my Selenium tests less brittle?

We use Selenium to test the UI layer of our ASP.NET application. Many of the test cases test longer flows that span several pages.
I've found that the tests are very brittle, broken not just by code changes that actually change the pages but also by innocuous refactorings such as renaming a control (since I need to pass the control's clientID to Selenium's Click method, etc.) or replacing a GridView with a Repeater. As a result I find myself "wasting" time updating string values in my test cases in order to fix broken tests.
Is there a way to write more maintainable Selenium tests? Or a better web UI testing tool?
Edited to add:
Generally the first draft is created by recording a test in the IDE. (This first step may be performed by QA staff.) Then I refactor the generated C# code (extract constants, extract methods for repeated code, maybe repeat the test case with different data, etc). But the general flow of code for each test case remains reasonably close to the originally generated code.
I've found the PageObject pattern very helpful.
http://code.google.com/p/webdriver/wiki/PageObjects
more info:
- What's the Point of Selenium?
- Selenium Critique
Maybe a good way to start is to incrementally refactor your test cases.
I use the same setup you have: Selenium + C#.
Here is what my code looks like.
A test method will look something like this:
[TestMethod]
public void RegisterSpecialist(UserInfo usrInfo, CompanyInfo companyInfo)
{
    var RegistrationPage = new PublicRegistrationPage(selenium)
        .FillUserInfo(usrInfo)
        .ContinueSecondStep();
    RegistrationPage.FillCompanyInfo(companyInfo).ContinueLastStep();
    RegistrationPage.FillSecurityInformation(usrInfo).ContinueFinishLastStep();
    Assert.IsTrue(RegistrationPage.VerifySpecialistRegistrationMessagePayPal());
    selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
    paypal.LoginSandboxPage(usrInfo.sandboxaccount, usrInfo.sandboxpwd);
    Assert.IsTrue(paypal.VerifyAmount(usrInfo));
    paypal.SubmitPayment();
    RegistrationPage.GetSpecialistInformation(usrInfo);
    var bphome = new BPHomePage(selenium, string.Format(Resources.GlobalResources.LoginBPHomePage, usrInfo.AccountName, usrInfo.Password));
    Assert.IsTrue(bphome.VerifyPageWasLoaded(usrInfo));
    Assert.IsTrue(bphome.VerifySpecialistProfile());
    bphome.Logout();
}
A page object will look something like this:
public class PublicRegistrationPage
{
    public ISelenium selenium { get; set; }

    #region Constructors
    public PublicRegistrationPage(ISelenium sel)
    {
        selenium = sel;
        selenium.Open(Resources.GlobalResources.PublicRegisterURL);
    }
    #endregion

    #region Methods
    // Fills the first wizard step and returns 'this' so calls can be chained
    public PublicRegistrationPage FillUserInfo(UserInfo usr)
    {
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserFirstName", usr.FirstName);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserLastName", usr.LastName);
        selenium.Select("ctl00_cphComponent_ctlContent_wizRegister_ddlUserCountry", string.Format("label={0}", usr.Country));
        selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserEmail", usr.Email);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserDirectTel", usr.DirectTel);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserMobile", usr.Mobile);
        return this;
    }
    #endregion
}
Hope this helps.
How are you creating your Selenium tests: by recording them and playing them back? What we have done is build an object model around pages, so that you call a method like "clickSubmit()" rather than clicking on an id (with a naming convention for these ids). This allows Selenium tests to survive many changes.
You may or may not be able to write tests that are resilient to refactoring. Here's how to make the refactoring less painful: continuous integration is essential.
- Run the tests every day or every build. The sooner a break is found and fixed, the easier.
- Ensure devs can run the tests themselves. Again, the sooner it's seen and fixed, the easier.
- Keep Selenium tests few. They should focus on critical-path / pri-1 test scenarios. Deep testing should be done at the unit-test level (or as jsunit tests). Integration tests are always expensive and less valuable.
Relying on low-level concepts like XPaths, CSS selectors or IDs for end-to-end tests is a recipe for unstable tests.
I advise using testRigor to produce tests that won't break every time you change or improve your application a little bit.
The code analogous to the page-object example above would look like this:
enter "Peter" into "First Name"
enter "Pen" into "Last Name"
enter "US" into "Country" below "User Data"
enter stored value "email" into "Email"
enter stored value "password" into "Password"
enter "415-123-4567" into "Direct Telephone"
enter "415-123-4568" into "Mobile Number"
click "Submit"
testRigor associates text that looks like a label with the corresponding input, so as long as your page looks the same from an end-user's perspective, the testRigor scripts stay green. Here is the doc.
disclaimer: I'm a co-founder of testRigor. I co-founded it because we had those exact issues ourselves.
Hope this helps.
There are no innocuous changes when it comes to test automation ;)
We use the SAFS framework with Rational Robot (RRAFS) to minimize impact to our automation scripts. There's still work to maintain the application map, but the scripts remain stable for the most part. The SAFS framework sounds very similar to the method cynicalman mentions, but already packages up the generic methods you would use in your scripts.
The SAFS site says there's partial support for Selenium, so this may work for you.
I've found that using XPath expressions in Selenium-RC adds a lot to the robustness of a test.
I write my tests in a similar manner. The first pass is often written via the IDE/record feature to get most of my page flow and click operations. Once I've got that, I begin stepping through the test via Selenium-RC, adding assertions and changing absolute widget locators to more readable and friendly XPath expressions (as well as documenting the test! :))
One thing to be aware of: if your tests are XPath-heavy, they may run a little slower in IE6 due to its poor JavaScript execution abilities. (I have some test suites that take almost an hour longer to execute under IE than under FF. It's manageable, but just something to keep in mind when you're writing the tests.)
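To make the idea concrete, a small Java sketch in the newer WebDriver style (rather than the C# Selenium-RC API shown above; the URL and locators are made up): a locator anchored on a stable attribute or visible text survives refactorings that an auto-generated client ID does not.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class XpathLocatorSketch {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("https://example.com/register"); // made-up URL
        // Brittle: breaks as soon as the auto-generated client ID changes
        // driver.findElement(By.id("ctl00_cphComponent_ctlContent_wizRegister_tUserFirstName"));
        // More robust: anchored on a stable name attribute and on visible text
        driver.findElement(By.xpath("//input[@name='firstName']")).sendKeys("Peter");
        driver.findElement(By.xpath("//button[normalize-space()='Register']")).click();
        driver.quit();
    }
}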
Selenium in theory has an abstraction called UI Element (the documentation is here).
The features would be:
- abstract locators, independent of the underlying HTML implementation; this maps well to the concept of a component or widget in a web framework,
- rollup rules, allowing you to merge several commands into a single, more abstract command.
I struggled for a couple of days to leverage this feature, but in the end I decided to abandon it, for the following reasons:
- some concepts, such as that of offset locators (think of them as parts of a component), are not fully or usefully developed;
- the feature is not fully supported in formatters, and the more recent the formatter, the less the feature is supported, hinting that core Selenium evolution is leaving this feature behind;
- it's not fully integrated into Selenium 2.0 (WebDriver).
I think XPath is the best way to ensure robust Selenium tests.
I am currently working on a library to make writing XPath expressions easier.
If interested, you can check it out here:
http://www.unit-testing.net/CurrentArticle/How-To-Write-XPath-for-Selenium-Tests.html
