Robot Framework lint-rule: Too few keyword steps

I have a test suite containing one test case, which in turn contains only one test step (keyword). When I try to commit it to our repo, a lint rule prevents that.
Question: What is the motivation for keeping this rule? Is the intention that things should be done differently from how I have done them in my test suite (robot file) shown below?
In my Jenkins log, when trying to commit, I see the following:
10:53:31 [INFO]: E: 10, 0: Too few steps (1) in test case (TooFewTestSteps)
In file "rf_lint.args" in our "commit_gate"-folder in our repo, I commented following rule:
#-e TooFewTestSteps
Then my commit went through the rf-lint.
This is how my .robot-file look that was not allowed:
*** Settings ***
Documentation     My Documentation
Library           libraries/my_library.py
Suite Setup       suite precondition
Suite Teardown    suite postcondition
Force Tags        my_tag

*** Test Cases ***
My Test Case
    [Documentation]    My Test Case Documentation
    my keyword

*** Keywords ***

What would be the motivation for keeping this rule?
The motivation is to discourage creating a test with just one or two steps; for some teams this is considered a code smell.
A test that calls only one keyword can be hard to understand, because it may hide what the test is actually verifying. Unless the name of the keyword is self-documenting, you have to dig into the implementation of the keyword to understand what the test is trying to accomplish.
If you and your team don't consider this to be a problem, there's nothing wrong with removing the rule.
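If you decide to keep the rule but make it less strict rather than remove it from the argument file, rflint itself has a few command-line knobs. The invocations below are only a sketch; the exact option names and whether this particular rule is configurable depend on your rflint version, so verify them against rflint --help:

rflint --warning TooFewTestSteps my_suite.robot      # downgrade the rule from error to warning
rflint --ignore TooFewTestSteps my_suite.robot       # skip the rule entirely
rflint --configure TooFewTestSteps:1 my_suite.robot  # keep it, but lower the minimum step count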

Related

How do I suppress console errors when testing a class that catches an error in a dependency, with PHPUnit 9.5 and PHP 8.0

I have a class that takes in a dependency. One of the things the class does is catch errors thrown by that dependency (it does something useful to log them, and goes on to return a value the rest of the code finds useful).
In PHPUnit, I can mock up the dependency and test as follows:
$stubqueryrunner = $this->createStub(QueryRunner::class);
$stubqueryrunner->method('getArray')->will($this->throwException(new \Error));
$inner = new myClass("mark", $stubqueryrunner);
$this->assertEquals(["what" => "", "why" => ""], $inner->getReasons());
The class I'm testing catches any error raised in its dependencies (and returns some default values) for the getReasons method.
My code passes my tests (i.e. the method I'm testing has indeed caught the error and substituted the default values).
However, the act of raising the error writes many lines of stack trace to the output of PHPUnit.
I can't use the @ operator at the point where I call it, because I'm on PHP 8, where the stfu operator is deprecated for errors.
I can't use
/**
 * @expectedError PHPUnit\Framework\Error
 */
... because that's not in PHPUnit 9.
So how do I actually suppress all the stack trace stuff from appearing in the PHPUnit console?
Update: Adding
$this->expectException(\Error::class); just creates an additional assertion, which then immediately fails precisely because the class I'm testing has successfully caught the error.
The code I'm testing never actually produces errors. The only thing producing an error is the stub I've written in the unit test when it runs throwException.
Adding error_reporting(0); in the test doesn't help.
I can't use the @ operator at the point where I call it, because I'm on PHP 8, where the stfu operator is deprecated for errors.
The @ operator may be deprecated, but what it does is not.
Suppress the errors by adjusting error reporting, or by setting a blank error handler, around the lines of code where you previously used the @ operator, and you should get it to work (functions that come to mind: error_reporting(), set_error_handler()).
This suppresses the error before PHPUnit's test runner can notice it. Even if the PHPUnit run configuration is strict and would mark the test as failed on an error (or at least warn, which is what you normally want in testing), there is no further handling of it by PHPUnit: its error handler is replaced while your own handler is set, and becomes active again after you unset yours.
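As a minimal sketch of that suggestion ($inner is taken from the question; everything else is illustrative rather than a prescribed API):

$previousLevel = error_reporting(0);          // stop error reporting around the risky call
set_error_handler(static function (int $errno, string $errstr): bool {
    return true;                              // report the error as handled so nothing bubbles up to PHPUnit
});
try {
    $reasons = $inner->getReasons();          // the call that used to be wrapped in @
} finally {
    restore_error_handler();                  // give PHPUnit its own error handler back
    error_reporting($previousLevel);          // restore the original reporting level
}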
So how do I actually suppress all the stack trace stuff from appearing in the PHPUnit console?
The stack trace may not be fully gone after this change if some internal handling adds it. One extension I know of that does this is Xdebug; the stack trace might then be caused by its runtime configuration.
Take care that you're using the right settings in testing, or at least don't worry if you still see a stack trace and can't explain it to yourself.
For Xdebug it is the develop mode: remove it, and the stack traces Xdebug prints to aid development are gone.
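For example, assuming Xdebug 3 (which honours the XDEBUG_MODE environment variable), you can switch develop mode off just for the test run; the paths are only illustrative:

XDEBUG_MODE=off vendor/bin/phpunit
php -d xdebug.mode=off vendor/bin/phpunit    # equivalent ini override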
You should first get it working at the level of the test; that is the first milestone. Let's assume you are already there and add some more context.
You now have two tests:
The original test (which is not sufficient for you under PHP 8, and maybe even fails).
The new regression test, where you take care of the error handling.
I'd say the original test shows that the code under test itself is not forward compatible with PHP 8. So what has been patched into the regression test to get it working should ideally become part of the implementation.
The original test should then pass again, regardless of PHP version. So you would have the old test, plus the new test guarding against the regression.
I'd consider this a bit of a luxury problem. Not because it is superfluous, but because, going by the description of the error handling, this is not straightforward code: it needs extra effort in development, testing and maintenance, and should have enough people to actually take care of those.
Look at the testing side alone: it's hard. Even something as simple as what the @ operator does becomes puzzling, even though it has already been identified as the cause of the change - why?
It might be because error handling is hard, and testing error handling is maybe even harder.
Q/A Reference:
How to simulate and properly write test with error_get_last() function in phpunit

Can phpunit start with a certain test and then keep going?

I use stopOnFailure in PHPUnit; just my preference. I'm wondering if there is a way to isolate one test and run it, like --filter, but then keep going, so that if my test suite finds a failure halfway through all the tests, I can just pick up where I left off once I'm done isolating and fixing.
(Not looking for solutions like paratest)
No. But you can configure PHPUnit to start the execution with the tests that failed last time: --order-by=defects.
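A minimal sketch of how that can be combined with stopping on failure; --cache-result is the default in recent PHPUnit versions and is listed here only as a reminder that the result cache must exist for defect ordering to have anything to go on:

vendor/bin/phpunit --cache-result --order-by=defects --stop-on-failure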

Taking screenshots on failure using Robot framework

I am new to Robot Framework. I want to know how to capture screenshots on failure.
Doesn't Robot Framework automatically take screenshots if a script fails?
An example would be of great help!
This is actually a feature of the Selenium2Library, which you would be using with Robot Framework if you were doing Selenium-based tests.
More information can be found here: http://robotframework.org/Selenium2Library/doc/Selenium2Library.html
As it says in the documentation, setting up screenshots on failure is very easy. For example, here is mine from a test suite I'm working with:
Library    Selenium2Library    timeout=10    implicit_wait=1.5    run_on_failure=Capture Page Screenshot
You can use the keyword below to capture a screenshot after any step you want:
Capture Page Screenshot
Hope this was helpful!
In this case the teardown is executed regardless of the test case's outcome, and if the test case failed it takes a screenshot:
[Teardown]    Run Keyword If Test Failed    Capture Page Screenshot
Or you can do it even better at suite level, if you don't need different teardowns for particular tests, by setting the default test teardown in the *** Settings *** section:
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot
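Put together, a minimal sketch of the suite-level variant (the URL, expected title and browser are placeholders):

*** Settings ***
Library           Selenium2Library
Test Teardown     Run Keyword If Test Failed    Capture Page Screenshot
Suite Teardown    Close All Browsers

*** Test Cases ***
Open Example Page
    Open Browser    https://example.com    browser=firefox
    Title Should Be    Example Domain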
So far, all of the other answers assume that you are using Selenium
If you are not, there is a "Screenshot" library that has the keyword "Take Screenshot". To include this library, all you need to do is put "Library    Screenshot" in your settings table.
In my Robot Framework code, my teardowns all just reference a keyword I made called "Default Teardown", which is defined as:
Default Teardown
    Run Keyword If Test Failed    Take Screenshot
    Close All
(I think that the Close All might be one of my custom keywords).
I have noticed a few issues with the Take Screenshot keyword. Some of this may be configurable, but I don't know. First off, it will take a screenshot of your screen, not necessarily just the application that you are interested in. So if you're using this and are allowing other people to view the resulting screenshots, make sure that you don't have anything else on your screen that you wouldn't want to share.
Also, if you kick off your tests and then lock your screen so you can take a quick break while it runs, all of your screenshots will just be pictures of your lock screen.
I'm using this on my Jenkins server as well which is using the xvfb-run command to create sort of a fake GUI to run the robot framework tests. If you're doing this as well, make sure that in your xvfb-run command you include something along the lines of
xvfb-run --server-args="-screen 0 1024x768x24" <rest of your command>
You'll have to decide what screen resolution works the best for you, but I found that with the default screen resolution, only a small portion of my app was captured.
Long story short, I think that you're better off using Capture Page Screenshot if you're using selenium. However if you're not, this may be your best (or only) solution.
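For the non-Selenium case, a minimal sketch using the standard Screenshot library (the failing assertion is only there to trigger the teardown):

*** Settings ***
Library          Screenshot
Test Teardown    Run Keyword If Test Failed    Take Screenshot

*** Test Cases ***
Example That Fails On Purpose
    Should Be Equal As Integers    1    2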

Robot Framework Test Flow

Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and a test may require the execution of another test placed in another folder (see the image below).
Any suggestions?
There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
This is not a good / recommended / possible way to go.
Robot Framework doesn't support it, and for a good reason: it is not sustainable to create such dependencies in the long term (or even the short term).
Tests shouldn't depend on other tests, and especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called
__init__.robot
in a directory. The suite setup and suite teardown in that file run before anything in the underlying folders.
You can also turn the other test into a keyword, as sketched below:
Test C simply calls a keyword that performs Test C's steps and also updates a global variable (Test_C_already_runs).
Test B would then issue
Run Keyword If    '${Test_C_already_runs}' == 'False'    Test_C_Keyword
You have to set a value for Test_C_already_runs beforehand anyway (as part of a variable import, or of some suite setup) to prevent a variable-not-found error.
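A minimal sketch of that second workaround; the flag and keyword names are illustrative only:

*** Variables ***
${TEST_C_ALREADY_RUN}    ${False}

*** Test Cases ***
Test B
    # Run Test C's steps first unless they already ran in this execution
    Run Keyword If    '${TEST_C_ALREADY_RUN}' == 'False'    Test C Steps
    Log    ...the rest of Test B...

*** Keywords ***
Test C Steps
    Log    ...the steps that used to live in Test C...
    Set Global Variable    ${TEST_C_ALREADY_RUN}    ${True}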

Handling Test Workflows in R testthat

I have two files, test_utils.r and test_core.r; they contain tests for various utilities and some core functions, separated into different 'contexts'. I can control the flow of tests within each file by moving my test_that() statements around.
But I am looking for a way to create different workflows, say, ensuring that at run time tests from context A_utils run first, followed by tests from context B_Core, followed by context B_Utils.
Any ideas on how this can be achieved?
BrajeshS,
I have an idea. Have you tried the skip() function in version 0.9 or later? See the testthat documentation on page 7:
Description
This function allows you to skip a test if it’s not currently available.
This will produce an informative message, but will not cause the test suite
to fail.
It was introduced to skip tests when an internet connection or an API is not available. Depending on your workflow, you could then jump over tests.
To see example code using skip_on_cran, look at wibeasley's answer where he provides test code in Rappster's reply - https://stackoverflow.com/a/26068397/4606130
I am still getting to grips with testthat. Hope this helps you.
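As a minimal sketch of that idea (the a_utils_done option is purely illustrative, not something testthat defines):

library(testthat)

test_that("B_Core behaviour that needs A_utils to have run first", {
  # Skip until an earlier part of the workflow has flagged that A_utils ran
  if (!isTRUE(getOption("a_utils_done"))) {
    skip("A_utils context has not run yet")
  }
  expect_equal(1 + 1, 2)
})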
