I am using the Velocity HTML reporter and the console reporter to view the test reports, but I am unable to find the code coverage.
Is there any way to see the code coverage?
Running
npm run coverage:watch
works perfectly well for us, is amazingly fast, and provides you with a nicely formatted HTML report.
I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios, and at the end it produces nice cucumber HTML reports. On "overview-features.html", next to the existing "Features", "Tags", "Steps" and "Failures" options at the top right, I would like an added option that says something like "Excluded Fails". When clicked, it would show exactly the same information as overview-features.html, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail due to defects in our own software, defects that might not get fixed for six months to a year. We've tagged them with a dedicated tag, "#bug=abc-12345". I want them muted/excluded from the cucumber report produced at the end of the Bamboo build, so I can look at the number of passed features/scenarios and quickly see whether it is 100%. If it is, the build is good; if not, I need to look further, as we appear to have a regression. Without excluding these scenarios, which are expected to keep failing until the defects are resolved, it is tedious and time-consuming to go through the individual feature-file reports, find the failing scenarios, and work out why each one failed. I don't want them removed completely: when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
- after the Runner completes, you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) An example of how to "post process" result-data before rendering a report: RetryTest.java; also see https://stackoverflow.com/a/67971681/143475 (a minimal sketch of this idea follows below).
b) The code responsible for "pluggable" reports: in theory you can implement a new SuiteReports, and the Runner has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
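For (a), here is a minimal sketch of the post-processing idea, assuming Karate 1.0+. The feature path, thread count and tag name are placeholders, and the result-API method names (getScenarioResults, getTagsEffective and the tag-matching call) are taken from the Karate 1.0 sources, so verify them against your version:

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class BugAwareRunner {

    public static void main(String[] args) {
        // run everything, including the known-broken scenarios, so they
        // still appear in the regular cucumber HTML report
        Results results = Runner.path("classpath:features")
                .outputCucumberJson(true)
                .parallel(5);

        // post-process: count only failures that are NOT tagged @bug=...
        // (check how your Karate version matches key=value tags)
        long unexpectedFailures = results.getScenarioResults()
                .filter(sr -> sr.isFailed())
                .filter(sr -> !sr.getScenario().getTagsEffective().contains("bug"))
                .count();

        // fail the Bamboo build only on real regressions, not on known bugs
        if (unexpectedFailures > 0) {
            throw new RuntimeException(unexpectedFailures + " unexpected failure(s)");
        }
    }
}

The same loop is the natural place to plug in a custom SuiteReports implementation from (b), if you want the rendered HTML itself, and not just the build's pass/fail decision, to exclude the tagged scenarios.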
I am new to Robot Framework. I want to know how to capture screenshots on failure.
Doesn't Robot Framework automatically take screenshots if a script fails?
An example would be a great help!
This is actually a feature of the Selenium2Library, which is required with Robot Framework if you are doing Selenium-based tests.
More information can be found here: http://robotframework.org/Selenium2Library/doc/Selenium2Library.html
As the documentation says, setting up screenshots on failure is very easy; for example, here is the library import from a test suite I'm working with:
Library    Selenium2Library    timeout=10    implicit_wait=1.5    run_on_failure=Capture Page Screenshot
You can use the keyword below to capture a screenshot after any step you want:
Capture Page Screenshot
Hope this was helpful!
The following teardown is executed regardless of whether the test case ran to completion, and if the test case failed it takes a screenshot:
[Teardown]    Run Keyword If Test Failed    Capture Page Screenshot
Or you can do it at suite level, if you don't need different teardowns for particular tests; in the Settings table (note: no brackets there) this reads:
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot
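Put together, a minimal suite using the suite-level variant could look like the sketch below (it assumes Selenium2Library is installed; the URL and the expected text are placeholders):

*** Settings ***
Library           Selenium2Library
# runs after every test in this suite; the screenshot is taken only on failure
Test Teardown     Run Keyword If Test Failed    Capture Page Screenshot
Suite Teardown    Close All Browsers

*** Test Cases ***
Example That May Fail
    Open Browser    http://example.com    firefox
    Page Should Contain    text that may be missing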
So far, all of the other answers assume that you are using Selenium.
If you are not, there is a "Screenshot" library that has the keyword "Take Screenshot". To include this library, all you need to do is put "Library    Screenshot" in your Settings table.
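In other words, the Settings table only needs:

*** Settings ***
Library    Screenshot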
In my Robot Framework code, my teardowns all just reference a keyword I made called "Default Teardown", which is defined as:
*** Keywords ***
Default Teardown
    Run Keyword If Test Failed    Take Screenshot
    Close All
(I think Close All might be one of my custom keywords.)
I have noticed a few issues with the Take Screenshot keyword. Some of this may be configurable, but I don't know. First off, it will take a screenshot of your screen, not necessarily just the application that you are interested in. So if you're using this and are allowing other people to view the resulting screenshots, make sure that you don't have anything else on your screen that you wouldn't want to share.
Also, if you kick off your tests and then lock your screen so you can take a quick break while it runs, all of your screenshots will just be pictures of your lock screen.
I'm using this on my Jenkins server as well, which uses the xvfb-run command to create a sort of fake GUI (a virtual framebuffer) in which the Robot Framework tests run. If you're doing this too, make sure that your xvfb-run command includes something along the lines of
xvfb-run --server-args="-screen 0 1024x768x24" <rest of your command>
You'll have to decide what screen resolution works the best for you, but I found that with the default screen resolution, only a small portion of my app was captured.
Long story short: if you're using Selenium, I think you're better off using Capture Page Screenshot. However, if you're not, this may be your best (or only) solution.
How do you make unit tests for the HTML output of your PHP functions/scripts, specifically to check that the output is HTML5 valid?
Currently I can test functionality in PHPUnit and presentation with online copy/paste validators, but it would be much nicer if this could be integrated into the PHPUnit testing.
Is there a standard way to go about such things, or is it mainly a matter of using regular unit tests on functions which create the inserted content, and then making sure it looks correct in the browser/W3C Validator?
A similar question, for an older version of PHPUnit, which no longer applies:
Unit tests for HTML Output?
What you are looking for is behavior testing. Take a look at Behat.
The Twine project (http://twineproject.sourceforge.net/doc/phphtml.html) replaces the manual copy/paste process. It might be useful, but it still sends the HTML to the W3C site each time, which is not ideal for unit tests. (The W3C says all their stuff is open source, so you might be able to download it and run it locally... I couldn't find the download link though!)
An alternative approach is to use DomDocument::validate(). However, it requires the DTD to be referenced inside the document, and as this answer https://stackoverflow.com/a/15245834/841830 explains, HTML5 has no DTD.
(I'm assuming you mean that you have functions that return HTML5 strings, and you want to unit test those functions. If you want to test the whole output of a web app, e.g. as run through Apache and seen in a browser, I would use CasperJS or Selenium; but that is high-level functional testing, notably slower to run than unit tests, so I recommend unit testing whatever can be unit tested. Either way, I still cannot find an offline HTML5 validator, whether for Casper/Phantom/Slimer or for Selenium!)
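If an online round-trip per test is acceptable, one way to wire validation into PHPUnit is to POST the markup to the W3C's Nu HTML Checker and assert that it reports no errors. Below is a sketch: the endpoint and JSON shape are those of the public checker at validator.w3.org/nu, but render_page() is a made-up stand-in for your own HTML-producing function, and hammering the public service from CI is discouraged (the checker can also be run locally as vnu.jar):

<?php
use PHPUnit\Framework\TestCase;

class HtmlOutputTest extends TestCase
{
    // send $html to the Nu HTML Checker and return its decoded JSON report
    private function validateHtml5($html)
    {
        $ch = curl_init('https://validator.w3.org/nu/?out=json');
        curl_setopt_array($ch, array(
            CURLOPT_POST => true,
            CURLOPT_POSTFIELDS => $html,
            CURLOPT_HTTPHEADER => array('Content-Type: text/html; charset=utf-8'),
            CURLOPT_RETURNTRANSFER => true,
        ));
        $body = curl_exec($ch);
        curl_close($ch);
        return json_decode($body, true);
    }

    public function testPageIsValidHtml5()
    {
        $html = render_page(); // hypothetical: your function returning an HTML5 string
        $errors = array();
        foreach ($this->validateHtml5($html)['messages'] as $message) {
            if ($message['type'] === 'error') {
                $errors[] = $message;
            }
        }
        $this->assertSame(array(), $errors, print_r($errors, true));
    }
}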
I've got some GitHub projects that I want to test with code coverage. The only way I have found (see blog post) is to write a custom script that counts lines in the coverage XML and outputs something like "Code coverage is 74.32%, which is below the accepted 80%". Displaying code coverage as HTML would be much better, but is that possible on Travis CI?
You can use https://coveralls.io/ together with Travis to display coverage nicely. An example can be found here: https://coveralls.io/r/phpmyadmin/error-reporting-server
PS: I know this is quite an old question, but I found it just now when searching for something else.
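For a PHP project the wiring is roughly: have PHPUnit write clover XML during the Travis run, then push it to Coveralls in after_success. A sketch of the .travis.yml follows; the php-coveralls package (and its default clover path, build/logs/clover.xml) is an assumption to check against the current client's documentation:

language: php
php:
  - "7.4"
install:
  - composer install --no-interaction
script:
  - vendor/bin/phpunit --coverage-clover build/logs/clover.xml
after_success:
  - vendor/bin/php-coveralls -v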
Travis CI doesn't support any persistent storage. One suggestion would be to create a custom script that runs phpunit --coverage-html and then sends the contents of the output directory to your own server using something like rsync.