CSS Regression testing with VCS and CI

BackstopJS generates a single page with regression test results. Is there any tool that works with CI (doesn't matter which one) and VCS (git) and assigns the test results to commits? I imagine the main page has a list of commits with passed/failed numbers next to them and links to a page with full results. Doesn't have to use BackstopJS.
It doesn't have to be as good as what I just described; I'd be happy with anything better than a script that copies BackstopJS results into a folder stamped with the date and time.

You'll want to look into pre-commit hooks and work them into your build process. That might involve a Grunt-based git hook module and some bash scripting.
On the projects where I use CSS regression testing, it is still run manually prior to commit/push.
https://www.npmjs.com/package/grunt-githooks
http://codeinthehole.com/writing/tips-for-using-a-git-pre-commit-hook/
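For illustration, here is a minimal pre-commit hook along those lines. It is a sketch under assumptions (BackstopJS installed and configured in the repository, npx on the PATH), not a drop-in solution:

#!/bin/sh
# save as .git/hooks/pre-commit and make it executable (chmod +x)
# run the BackstopJS suite; a non-zero exit status aborts the commit
npx backstop test || {
    echo "CSS regression tests failed; commit aborted." >&2
    exit 1
}

The same script can be wired up through grunt-githooks instead of being copied into .git/hooks by hand.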

Related

Customized json report for karate framework [duplicate]

I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a bamboo build that runs our karate repository of features and scenarios. At the end it produces nice cucumber html reports. On the "overview-features.html" page I would like an option added to the top right (alongside "Features", "Tags", "Steps" and "Failures") that says "Excluded Fails" or something like that. When clicked, it would show exactly the same information as overview-features.html, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why do I need this? We have some existing scenarios that fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a specific tag, "#bug=abc-12345". I want them muted/excluded from the cucumber report produced at the end of the bamboo build so I can quickly check whether the number of passed features/scenarios is 100%. If it is, great: that build is good. If not, I need to look into it further, as we appear to have some regression. Without excluding these scenarios, which are expected to keep failing until they're resolved, it is very tedious and time-consuming to go through all the individual feature file reports, find the failing scenarios, and then look into why each one failed. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
- after the Runner completes, you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can implement a new SuiteReports in theory. And in the Runner, there is a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
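To make option (a) concrete, here is a rough Java sketch of the post-processing idea. It assumes the Karate 1.0 Runner API (Runner.path(...).parallel(...) returning a Results object with a getScenarioResults() stream); the isKnownBug() helper is hypothetical and would need to be written against the tag model of your Karate version.

import com.intuit.karate.Results;
import com.intuit.karate.Runner;
import com.intuit.karate.core.ScenarioResult;

class ExcludedFailsRunner {

    public static void main(String[] args) {
        // run the whole suite as usual
        Results results = Runner.path("classpath:features").parallel(5);

        // count only the failures that are NOT tagged as a known bug
        long unexpectedFailures = results.getScenarioResults()
                .filter(ScenarioResult::isFailed)
                .filter(sr -> !isKnownBug(sr))
                .count();

        System.out.println("failures excluding known bugs: " + unexpectedFailures);
    }

    // hypothetical helper: examine sr.getScenario() for a tag like "bug=abc-12345"
    // (the exact tag API differs between Karate versions, so check your version)
    static boolean isKnownBug(ScenarioResult sr) {
        return false; // placeholder
    }
}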

Making webdriverio to run test cases for multiple similar pages?

I am using webdriverio to automate certain test cases, organised in the POM (page object model) style. I want to use the same test code for n pages, all of which are clones (only the theme changes). Is there any way to set webdriverio up to run the test cases against all of the sites and give me results for each one individually?
If you execute the tests using TestNG, use DataProviders: make your tests accept parameters (like the name of the page you want to test) and create a function that will provide these parameters. If you annotate these functions properly, TestNG records separate results for each iteration. There is an extension for JUnit that provides the same thing for that framework as well.
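As a minimal sketch of that pattern (the URLs and names here are illustrative, and the WebDriver setup is omitted):

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class ClonedPagesTest {

    // each row becomes one independent test invocation with its own result
    @DataProvider(name = "sites")
    public Object[][] sites() {
        return new Object[][] {
            { "https://site-a.example.com" },
            { "https://site-b.example.com" },
            { "https://site-c.example.com" },
        };
    }

    // TestNG reports pass/fail per URL rather than one aggregate result
    @Test(dataProvider = "sites")
    public void sharedChecksAgainstEachClone(String baseUrl) {
        // open baseUrl with the shared page objects and run the assertions
    }
}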

Robot Framework Test Flow

Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and it's possible that a test requires the execution of another test placed in another folder (see the image below).
Any suggestions?
There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
This is not a good / recommended / even possible way to go.
Robot Framework doesn't support it, and for good reason: such dependencies are not sustainable in the long term (or even the short term).
Tests shouldn't depend on other tests, especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called __init__.robot in a directory. The suite setup and suite teardown in that file will run before anything in the underlying folders.
You can also turn the other test into a keyword, so that:
Test C simply calls a keyword that performs Test C's steps and also updates a global variable (Test_C_already_runs).
Test B would then issue something like
Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword
You would have to assign a value to Test_C_already_runs beforehand anyway (as part of a variable import, or as part of a suite setup) to prevent a variable-not-found error.
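Putting the second workaround together, a rough sketch (the keyword and variable names are illustrative, and the real steps of Test C would replace the comment):

*** Variables ***
${Test_C_already_runs}    false

*** Keywords ***
Test_C_Keyword
    # the actual steps of Test C go here
    Set Global Variable    ${Test_C_already_runs}    true

*** Test Cases ***
Test B
    Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword
    # ...the steps of Test B, which can now rely on Test C having run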

Adding additional make/test targets to an autotools project

We have an autotools project that has a mixture of unit and integration tests, all of which run via 'make check'. This isn't ideal, as some of the integration tests take a while, and have all sorts of dependencies (database, etc.)
I'd like to separate the integration tests and assign them their own make target. That way, unit tests can still be run often (via make check), and the integration tests can be run as needed in a similar fashion.
Is there a straightforward (or otherwise) way to add an additional make target?
NOTE: I should probably also add that this is a large project, so editing/maintaining every makefile by hand is not desirable. I'd like to do it the 'autotools way' if possible.
-- UPDATE 1 --
I've attempted Jon's solution, and it's a step closer, but not quite there. I still have a couple of issues:
1) Recursion - I'm OK with modifying the Makefile.am in the root of the build tree, as well as in any directory that contains tests, but it seems like there should be a way to do this where I don't have to change every Makefile.am in the hierarchy (the check target works this way, after all).
2) .PHONY - I keep getting messages about .PHONY being redefined. Which is understandable, because it's being set by another package (specifically, doxygen). How do I make the two play nice together?
In your .am files, any raw make syntax is passed straight through into the generated Makefile. So if you want a new target, just create it as you would in a plain Makefile and it will appear in the generated one. Put the following at the bottom of the relevant .am files:
integration-tests: prerequisites....
	commands to run test
.PHONY: integration-tests
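As a sketch of how a recursive variant might look: automake only generates recursion for its own targets (such as check), so you would hand-roll it. This is untested, and the leaf commands are illustrative:

# in each Makefile.am that only has subdirectories
integration-tests:
	@for d in $(SUBDIRS); do \
	  $(MAKE) -C $$d integration-tests || exit 1; \
	done
.PHONY: integration-tests

# in a leaf Makefile.am that actually contains the tests
integration-tests: $(check_PROGRAMS)
	./run-integration-tests.sh
.PHONY: integration-tests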
Since there haven't been any more responses, I'm going to answer with my solution.
I solved the recursion issue by eliminating the recursion altogether. Using this page as a guide, I switched the entire project from recursive make to non-recursive make. I then cloned the non-recursive check-related targets (check, check-am, check-TESTS, etc.) into a new set of targets for the integration tests. So far, this works extremely well.
Note: you may be wondering why I didn't just clone the recursive targets instead. Quite frankly, I couldn't find them. Either I didn't know where to look (the rules weren't in the generated Makefile) or something is happening implicitly, and I don't understand autotools well enough to follow it.
As for the issue with .PHONY being redefined, I still haven't found a solution, other than to conditionally exclude the other definition when I'm doing the integration tests.
