Robot Framework Test Flow - robotframework

Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and a test may require the execution of another test placed in another folder (see the image below).
Any suggestions?

There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
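If ordering between whole suites does matter, the usual way to force it is with numeric prefixes on the suite files or directories; Robot Framework executes them in alphabetical order and strips the prefix up to the double underscore from the reported suite name. A purely illustrative layout:

tests/
    01__smoke/
        login_tests.robot
    02__orders/
        order_tests.robot

Within a single file, the tests run in the order they are written.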

This is not a good / recommended / possible way to go.
Robot Framework doesn't support it, and for good reason: it is not sustainable to create such dependencies in the long term (or even the short term).
Tests shouldn't depend on other tests, especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called
__init__.robot
in a directory. The Suite Setup and Suite Teardown defined in that file run before (and after) everything in the underlying folders.
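For example, a minimal __init__.robot might look like this (the keyword names are just placeholders for whatever preparation and cleanup you need):

*** Settings ***
Suite Setup       Prepare Test Environment
Suite Teardown    Clean Up Test Environment

*** Keywords ***
Prepare Test Environment
    Log    e.g. start services, create test data
Clean Up Test Environment
    Log    e.g. remove test data, stop services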
You can also turn the other test into a keyword, so that:
Test C simply calls a keyword that performs the steps of Test C and also updates a global variable (${Test_C_already_runs})
Test B would then issue
Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword
You would have to set a value for Test_C_already_runs before that anyway (as part of a variable import, or as part of some suite setup) to prevent a "variable not found" error.
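A sketch of that workaround, assuming the default value comes from a variables table (every name here is illustrative):

*** Variables ***
${Test_C_already_runs}    false

*** Test Cases ***
Test B
    # make sure the steps of Test C have run before Test B's own steps
    Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword
    Log    ...the actual steps of Test B...

*** Keywords ***
Test_C_Keyword
    Log    ...the steps that used to be Test C...
    Set Global Variable    ${Test_C_already_runs}    true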

Related

Which way is better for test data preparation in robot framework?

I’m using Robot Framework and SeleniumLibrary to test a web application; which way is better for test data preparation?
Writing test data directly into the test cases, so the test data acts as the user keyword arguments. This way is simple and needs no test data file, but some user keywords take quite a few arguments, and the test cases look strange to people not familiar with them.
Preparing a test data file per test case, then loading the file into variables at execution time. This way removes the user keyword arguments and makes it easier to build higher-level user keywords, but you can't tell directly where the variables used in the keywords come from, and you need to open and check the test data file whenever you edit the test data.
There is no single best way in general; it depends on the context (how many tests, how many keywords, how many arguments, etc.). Writing Robot tests is like writing code in any other language: you have to refactor it again and again as it grows.
Though in the specific case of Robot, I agree there is a tension between having short/readable keywords with few/no arguments (solution 1) and more detailed keywords with more arguments (solution 2). My strategy is usually to keep the most important/relevant arguments (like 1 or 2) clearly provided in the test itself and take the other ones from data/lib files. This way you can see what this test is specifically doing without having to check other files.
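A sketch of that middle ground, where only the argument the test is really about stays visible in the test and the routine data comes from a variable file (the file checkout_data.py and all keyword names are made up):

*** Settings ***
Variables    checkout_data.py    # would define ${DEFAULT_USER}, ${DEFAULT_ADDRESS}, ...

*** Test Cases ***
Checkout With Express Shipping
    Place Order    shipping_method=express

*** Keywords ***
Place Order
    [Arguments]    ${shipping_method}=standard
    # the routine data comes straight from the variable file
    Log    Logging in as ${DEFAULT_USER}
    Log    Shipping to ${DEFAULT_ADDRESS} via ${shipping_method}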
The best approach depends on the volume and variety of the data. If the expectation is to crunch huge amounts of data, a document DB such as MongoDB is extremely good; in all other cases Excel should be good enough as well.

How to access methods in forms from a test method in a different solution file?

I have two different projects in two separate solution files. One is a test project containing my unit test methods and the other is the main project. I know I need to create an instance of the individual GUI form first in order to access its methods, but I'm not sure how to do this.
Also, I have certain variable values which get initialized only when I run the entire original application. These values are used in many of the methods in the main project; without them I just get null values and the test method fails. Is there any way to get values for those variables without running the application? I tried placing the logic which fetches the values for those variables in my test method and then calling the actual method, but this still doesn't work. How do I resolve this problem?
The problems you have run into are quite common when trying to write unit tests for an application that was not written with testability in mind.
Usually the code you write tests for does not reside in forms or UI classes.
The business logic should be located in separate classes within your main project. The business logic can then be called from your UI and from your unit tests too.
So you have to move any business logic to separate classes first.
The next thing you need to do is remove, from the newly created classes, any dependencies on other classes which prevent you from writing a unit test, and replace those dependencies with interfaces.
For instance, you can change your application so that every environment variable you mention in your question is retrieved by a dedicated class.
If you create interfaces for these classes, you can set up a so-called mock (a fake version of your real class) in your unit test, which you can configure to behave as desired for the particular test scenario.
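A hedged C# sketch of that idea; OrderProcessor and ISettingsProvider are made-up names standing in for your form logic and for the values that normally only exist once the whole application has started:

// Business logic pulled out of the form into a plain class.
public interface ISettingsProvider
{
    string ConnectionString { get; }
}

public class OrderProcessor
{
    private readonly ISettingsProvider _settings;

    public OrderProcessor(ISettingsProvider settings)
    {
        _settings = settings;
    }

    public bool CanProcess(decimal amount)
    {
        // Logic that used to live in a form event handler.
        return !string.IsNullOrEmpty(_settings.ConnectionString) && amount > 0;
    }
}

The form creates an OrderProcessor with the real settings provider; a unit test can hand it a fake one instead.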
Some general advice:
Often refactoring the code to be tested (i.e. making it more testable) takes more time than writing the unit tests themselves. There are whole books about refactoring so-called "legacy" code.
Refactoring an application for testability often means decoupling dependencies between concrete classes and using interfaces instead. In production code you pass your production class for the interface; in a unit test you create a "mock" class which implements the same interface but behaves the way your test needs.
When writing unit tests, you might consider using a mocking framework like Moq. It will save you a lot of time and make the test code smaller. You will find an introduction here: Unit Testing .NET Application with Moq Framework
Ideally you should design your application from the beginning with testability in mind. The "TDD" (test-driven development) approach even goes one step further: there you write your tests before you write the actual code, though in my experience this approach is not used very often.
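To illustrate the Moq suggestion, here is a minimal test for the OrderProcessor sketch above (NUnit is assumed as the test framework; any other would work the same way):

using Moq;
using NUnit.Framework;

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void CanProcess_ReturnsTrue_ForPositiveAmountWithValidSettings()
    {
        // No running application needed: the mock supplies the value
        // that is normally initialized at application start-up.
        var settings = new Mock<ISettingsProvider>();
        settings.Setup(s => s.ConnectionString).Returns("Server=test;");

        var processor = new OrderProcessor(settings.Object);

        Assert.IsTrue(processor.CanProcess(10m));
    }
}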

Handling Test Workflows in R testthat

I have two files, test_utils.r and test_core.r; they contain tests for various utilities and some core functions, separated into different contexts. I can control the flow of tests within each file by moving my test_that() statements around.
But I'm looking for a way to create different workflows, say ensuring that at run time the tests from context A_utils run first, followed by the tests from context B_Core, followed by context B_Utils.
Any ideas on how this can be achieved?
BrajeshS,
I have an idea. Have you tried the skip() function in version 0.9 or later? See the testthat documentation (page 7):
Description
This function allows you to skip a test if it’s not currently available.
This will produce an informative message, but will not cause the test suite to fail.
It was introduced to skip tests if an internet connection or API is not available. Depending on your workflow, you could then jump over tests.
To see example code using skip_on_cran, look at wibeasley's answer where he provides test code in Rappster's reply - https://stackoverflow.com/a/26068397/4606130
I am still getting to grips with testthat. Hope this helps you.
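A minimal sketch of that idea in testthat, assuming the A_utils tests set a flag when they have run (the flag name utils_ready is made up):

library(testthat)

context("B_Core")

test_that("core behaviour that needs the utils step first", {
  # skip instead of fail if the A_utils context has not run yet
  if (!exists("utils_ready") || !isTRUE(utils_ready)) {
    skip("A_utils context has not been run yet")
  }
  expect_equal(1 + 1, 2)
})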

How to complete an existing phpunit test with a generator

PHPUnit has a skeleton generator based on an existing class.
But it only works once.
If new methods are added later (because a dev doesn't work with TDD), the test file becomes incomplete.
Is there a tool to generate a skeleton for the uncovered methods?
I don't know of any, and I also don't see the need. That skeleton generator generates one test method per function it finds, but you cannot test all the use cases of a slightly advanced function within only one test function.
Also, the name of the test function is generated - but better names can and should be created to describe the intended test case or behavior of the tested function. Like "testGetQuoteFromStockMarket" vs. "testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject" and "testGettingUmbrellaCorporationFromStockMarketShouldFailWithException".
Note that you cannot test the throwing of an exception in the same test method as cases that do not throw exceptions.
So all in all there simply is no use case to create "one test method per method" at all, and if you add new methods, it is your task to manually add the appropriate number of new tests for that - the generated code coverage statistics will tell you how well you did, or which functions are untested.
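As an illustration of those names with a current PHPUnit (StockMarket, Quote and UnknownSymbolException are hypothetical classes):

<?php
use PHPUnit\Framework\TestCase;

class StockMarketTest extends TestCase
{
    public function testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject()
    {
        $market = new StockMarket();

        $this->assertInstanceOf(Quote::class, $market->getQuote('MSFT'));
    }

    public function testGettingUmbrellaCorporationFromStockMarketShouldFailWithException()
    {
        $market = new StockMarket();

        // the exception case lives in its own test method
        $this->expectException(UnknownSymbolException::class);
        $market->getQuote('UMBRELLA');
    }
}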
AFAIK there is no built-in PHPUnit functionality to update the auto-generated test code; that is typical of most code generators.
The good news is that each of the functions is added quite cleanly and independently. So what I would do is rename your existing unit test file to *.old, regenerate a fresh test file, then use meld (or the visual diff tool of your choice) to merge in the new functions.
Aside: automatic test generation is really only needed at the start of a new class anyway; the idea of exactly one unit test per function is more about generating nice coverage stats to please your boss; from the point of view of building good software, some functions will need multiple tests, and some functions (getters and setters come to mind) do not really need any, and sometimes multiple functions are best covered by a single unit test (getters and setters again come to mind).

Adding additional make/test targets to an autotools project

We have an autotools project that has a mixture of unit and integration tests, all of which run via 'make check'. This isn't ideal, as some of the integration tests take a while, and have all sorts of dependencies (database, etc.)
I'd like to separate the integration tests and assign them their own make target. That way, unit tests can still be run often (via make check), and the integration tests can be run as needed in a similar fashion.
Is there a straightforward (or otherwise) way to add an additional make target?
NOTE: I should probably also add that this is a large project, so editing/maintaining every makefile by hand is not desirable. I'd like to do it the 'autotools way' if possible.
-- UPDATE 1 --
I've attempted Jon's solution, and it's a step closer, but not quite there. I still have a couple of issues:
1) Recursion - I'm OK with modifying the Makefile.am in the root of the build tree, as well as in any directory that contains tests, but it seems like there should be a way to do this where I don't have to change every Makefile.am in the hierarchy (the check target works this way, after all).
2) .PHONY - I keep getting messages about .PHONY being redefined. That is understandable, because it's also being set by another package (specifically, doxygen). How do I make the two play nicely together?
In your .am files, all make syntax is passed through into the generated Makefile. So if you want a new target, just create it like you would in a Makefile and it will appear in the auto-generated Makefile. Put the following at the bottom of your .am files.
integration-tests: prerequisites....
	commands to run test
.PHONY: integration-tests
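A slightly fuller, hedged sketch of how this could be wired up in a Makefile.am; the program names and the INTEGRATION_TESTS variable are purely illustrative, the *_SOURCES lines are omitted, and remember that the recipe line must start with a tab:

# Unit tests keep running under the standard 'make check'.
check_PROGRAMS = unit_parser unit_cache
TESTS = $(check_PROGRAMS)

# Integration test binaries are only built on demand.
EXTRA_PROGRAMS = integration_db integration_api
INTEGRATION_TESTS = integration_db integration_api

integration-tests: $(INTEGRATION_TESTS)
	@for t in $(INTEGRATION_TESTS); do \
	  echo "Running $$t"; ./$$t || exit 1; \
	done

.PHONY: integration-tests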
Since there haven't been any more responses, I'm going to answer with my solution.
I solved the recursion issue by eliminating the recursion altogether. Using this page as a guide, I switched the entire project from recursive make to non-recursive make. I then cloned the non-recursive check-related targets (check, check-am, check-TESTS, etc.) into a new set of targets for the integration tests. So far, this works extremely well.
Note: you may be wondering why I didn't just clone the recursive targets instead. Quite frankly, I couldn't find them. Either I didn't know where to look (the rules weren't in the generated Makefile) or something is happening implicitly, and I don't understand autotools well enough to follow it.
As for the issue with .PHONY being redefined, I still haven't found a solution, other than to conditionally exclude the other definition when I'm doing the integration tests.
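For anyone following the same route, a rough sketch of what the non-recursive layout might look like (the directory names and the INTEGRATION_TESTS variable are illustrative): the top-level Makefile.am drops SUBDIRS and pulls in per-directory fragments with automake's include directive, and the integration fragment appends its tests to the list used by the integration-tests target shown above.

# Top-level Makefile.am, non-recursive
INTEGRATION_TESTS =
include src/local.mk
include tests/unit/local.mk
include tests/integration/local.mk

# tests/integration/local.mk then appends its tests, e.g.:
#   INTEGRATION_TESTS += tests/integration/db_roundtrip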
