Allure: Organizing tests in one Feature

Could you please help me with two questions regarding organizing tests and using Allure's "feature" tags?
If I have a few different tests but I need all of them to be included in one feature, do I have to write the @Features("My Feature") annotation above each test method? Is there a way to write @Features("My Feature") once and include all the required tests in it?
If I have a few logically separated classes with my @Test methods, is there an easy way to call all the required tests from one TestSuite class in order to easily manage the test queue?

You can write the @Features annotation once per class. But do you really need such a feature? Maybe you should think a bit more and split your tests in some other way?
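For example, with Allure 1.x and JUnit 4, a class-level annotation tags every test method in the class (the class and test names below are hypothetical):

    import org.junit.Test;
    import ru.yandex.qatools.allure.annotations.Features;

    // Every @Test in this class is reported under "My Feature".
    @Features("My Feature")
    public class MyFeatureTests {

        @Test
        public void firstTest() { /* ... */ }

        @Test
        public void secondTest() { /* ... */ }
    }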
Allure is not a test framework; it is just a reporting tool. Allure does not run tests. To answer this part of the question I need to know more about the test framework you use and your environment (Ant, Maven, Jenkins, TeamCity, etc.).
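For instance, if it turns out you run your tests with JUnit 4, a suite class lets you pick and order test classes explicitly (the class names below are hypothetical):

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Runs the listed test classes in the given order.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        LoginTests.class,
        CheckoutTests.class
    })
    public class MyTestSuite {
        // Intentionally empty; the annotations do all the work.
    }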

Related

How to import external Java libraries in OpenTest framework?

I would like to find out how I can import external libraries into my tests. For example, if I use a Java library for random name/number generation, how do I go about using it in my tests?
Thanks
Before I answer, I would advise you to avoid using Java code if you can. For example, a random name/number generator is very easy to implement in JavaScript and you can find plenty of ready-made examples out there. If it's JS code, you can easily embed it in your tests using one of the techniques described here. Even better, you should use the capabilities that are provided out-of-the-box with OpenTest: $random and $randomString.
If you really need to use Java code, there are two ways to do it:
The recommended way: create one or more custom OpenTest keywords as described here. This will make it easier for you to maintain your test suite in the future and it also makes it easier for other members of your team to leverage this work in their own tests, especially if they are not familiar with Java.
The "quick and dirty" way: create a user-jars directory in your test actor's working directory and drop the JAR file in there. Then, call your Java code from JavaScript as described here.

How to access methods in forms from a test method in a different solution file?

I have two different projects in two separate solution files. One is a test project containing my unit test methods, and the other is the main project. I know I need to create an instance of the individual GUI form first in order to access its methods, but I'm not sure how to do this.
Also, I have certain variable values which get initialized only when I run the entire original application. These values are used in many of the methods in the main project. Without them I will just get null values and the test method will simply fail. Is there any way to get values for those variables without running the application? I tried to place the logic which fetches the values for those variables in my test method and then call the actual method, but this still doesn't work. How do I resolve this problem?
The problems you have run into are quite common when trying to write unit tests for an application that was not written with testability in mind.
Usually, the code you write tests for should not reside in forms or UI classes.
The business logic should be located in separate classes within your main project. It can then be called from your UI and from your unit tests alike.
So you have to move any business logic to separate classes first.
The next thing you need to do is remove any dependencies of the newly created classes on other classes which prevent you from writing a unit test, and replace those dependencies with interfaces.
For instance, you can change your application so that every variable you mention in your question is retrieved through dedicated classes.
If you create interfaces for these classes, you can set up a so-called mock (a fake version of your real class) in your unit test and configure it in the test to behave as desired for the particular test scenario.
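A minimal sketch of this idea, with hypothetical names:

    // Hypothetical interface extracted from the code that read the values directly.
    public interface ISettingsProvider
    {
        string GetSetting(string name);
    }

    // The business logic now depends only on the interface,
    // not on a running application.
    public class OrderProcessor
    {
        private readonly ISettingsProvider settings;

        public OrderProcessor(ISettingsProvider settings)
        {
            this.settings = settings;
        }

        public bool IsDiscountEnabled()
        {
            return settings.GetSetting("DiscountMode") == "On";
        }
    }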
Some general advice:
Often, refactoring the code to be tested (i.e. making it more testable) takes more time than writing the unit tests themselves. There are whole books about refactoring so-called "legacy" code.
Refactoring an application for testability often means de-coupling dependencies between concrete classes and using interfaces instead. In production code you pass your production class for the interface; in a unit test you can create a "mock" class which implements the same interface but behaves as needed in your test.
When writing unit tests, consider using a mocking framework like Moq. It will save you a lot of time and make the test code smaller (see the sketch after this list). You will find an introduction here: Unit Testing .NET Application with Moq Framework
Ideally, you should design your application from the beginning with testability in mind. The "TDD" (test-driven development) approach goes one step further: there you write your tests before you write the actual code, though this approach is not used very often in my experience.
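As a rough sketch of a Moq-based test (assuming NUnit, and reusing the hypothetical OrderProcessor and ISettingsProvider from above):

    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class OrderProcessorTests
    {
        [Test]
        public void IsDiscountEnabled_ReturnsTrue_WhenSettingIsOn()
        {
            // The mock stands in for the real settings source,
            // so no running application is required.
            var settings = new Mock<ISettingsProvider>();
            settings.Setup(s => s.GetSetting("DiscountMode")).Returns("On");

            var processor = new OrderProcessor(settings.Object);

            Assert.IsTrue(processor.IsDiscountEnabled());
        }
    }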

Robot Framework Test Flow

Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and a test may require the execution of another test placed in a different folder.
Any suggestions?
There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
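For example, Robot Framework executes suites in alphabetical order by file name, so a common convention is a numeric prefix; the part up to the double underscore is stripped from the reported suite name (the file names below are hypothetical):

    tests/
        01__login_suite.robot       # runs first, reported as "Login Suite"
        02__checkout_suite.robot    # runs second, reported as "Checkout Suite"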
This is not a good / recommended / possible way to go.
Robot Framework doesn't support it, and for good reason: creating such dependencies is not sustainable in the long term (or even the short term).
Tests shouldn't depend on other tests, especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called __init__.robot in a directory. The suite setup and suite teardown in that file run before anything in the underlying folders.
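A minimal sketch of such a file (the Log calls are placeholders for your real setup and cleanup keywords):

    *** Settings ***
    # Runs once before any test underneath this directory.
    Suite Setup       Log    Preparing shared preconditions
    # Runs once after all tests underneath this directory.
    Suite Teardown    Log    Cleaning up shared state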
You can also turn the other test into a keyword:
Test C simply calls a keyword that performs Test C's steps and also updates a global variable (${Test_C_already_runs}).
Test B would then issue:
Run Keyword If    '${Test_C_already_runs}' == 'false'    Test C Keyword
You would have to assign a value to ${Test_C_already_runs} before that anyway (as part of a variable import, or as part of some suite setup) to prevent a "variable not found" error.
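Putting the pieces together, a sketch of this workaround (the test and keyword names are hypothetical):

    *** Variables ***
    ${Test_C_already_runs}    false

    *** Keywords ***
    Test C Keyword
        # ... the actual steps of Test C go here ...
        Set Global Variable    ${Test_C_already_runs}    true

    *** Test Cases ***
    Test C
        Test C Keyword

    Test B
        # Run Test C's steps first if they have not run yet.
        Run Keyword If    '${Test_C_already_runs}' == 'false'    Test C Keyword
        # ... the rest of Test B ...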

Handling Test Workflows in R testthat

I have two files, test_utils.r and test_core.r. They contain tests for various utilities and some core functions, separated into different 'contexts'. I can control the flow of tests within each file by moving my test_that() statements around.
But I am looking for a way to create different workflows, say ensuring that at run time tests from context A_utils run first, followed by tests from context B_Core, followed by context B_Utils.
Any ideas on how this can be achieved?
BrajeshS,
I have an idea. Have you tried the skip() function, available in version 0.9 or later? See the testthat documentation on page 7:
Description: This function allows you to skip a test if it's not currently available. This will produce an informative message, but will not cause the test suite to fail.
It was introduced to skip tests if an internet connection or API is not available. Depending on your workflow, you could then jump over tests.
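For example (a minimal sketch; utils_done is a hypothetical flag that your A_utils tests would set when they finish):

    library(testthat)

    test_that("core behaviour that relies on the utils setup", {
      # Jump over this test when the earlier workflow step has not run.
      if (!exists("utils_done") || !isTRUE(utils_done)) {
        skip("A_utils tests have not run yet")
      }
      expect_equal(1 + 1, 2)
    })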
To see example code using skip_on_cran, look at wibeasley's answer where he provides test code in Rappster's reply - https://stackoverflow.com/a/26068397/4606130
I am still getting to grips with testthat. Hope this helps you.

How to complete an existing PHPUnit test with a generator

PHPUnit has a skeleton generator that builds a test class from an existing class.
But it only works once.
If new methods are added later (because a dev doesn't work with TDD), the test file becomes incomplete.
Is there a tool to generate a skeleton for the uncovered methods?
I don't know of any, and I also don't see the need. That skeleton generator creates one test method per function it finds, but you cannot test all use cases of a slightly advanced function within only one test function.
Also, the name of the test function is generated - but better names can and should be created to describe the intended test case or behavior of the tested function. Like "testGetQuoteFromStockMarket" vs. "testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject" and "testGettingUmbrellaCorporationFromStockMarketShouldFailWithException".
Note that you cannot test the throwing of an exception and a non-throwing case within the same test method.
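For illustration, the throwing and non-throwing cases get their own test methods (class and method names are hypothetical, using current PHPUnit syntax):

    <?php
    use PHPUnit\Framework\TestCase;

    final class StockMarketTest extends TestCase
    {
        public function testGettingMicrosoftQuoteReturnsQuoteObject(): void
        {
            $market = new StockMarket();
            $this->assertInstanceOf(Quote::class, $market->getQuote('MSFT'));
        }

        public function testGettingUnknownSymbolFailsWithException(): void
        {
            $market = new StockMarket();
            $this->expectException(UnknownSymbolException::class);
            $market->getQuote('UMBRELLA');
        }
    }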
So, all in all, there simply is no use case for creating "one test method per method" at all. If you add new methods, it is your task to manually add the appropriate number of new tests; the generated code-coverage statistics will tell you how well you did, and which functions remain untested.
AFAIK there is no built-in PHPUnit functionality to update the auto-generated test code; that is typical of most code generators.
The good news is that each of the functions is added quite cleanly and independently. So what I would do is rename your existing unit test file to *.old, regenerate a fresh test file, then use meld (or the visual diff tool of your choice) to merge in the new functions.
Aside: automatic test generation is really only needed at the start of a new class anyway. The idea of exactly one unit test per function is more about generating nice coverage stats to please your boss. From the point of view of building good software, some functions will need multiple tests, some functions (getters and setters come to mind) do not really need any, and sometimes multiple functions are best covered by a single unit test (getters and setters again come to mind).
