Making webdriverio run test cases for multiple similar pages? - automated-tests

I am using webdriverio to automate certain test cases. I am using the POM style to organise my project. I am facing an issue where I want to use the same test code for n pages, all of which are clones (only the theme changes). Is there any way I can set up webdriverio to run the test cases for all of the websites and give me results individually?

If you execute the tests using TestNG, use DataProviders, i.e. make your tests accept parameters (like the name of the page you want to test) and create a function that will provide these parameters. If you annotate these functions properly, TestNG will record separate results for each iteration. There is an extension for JUnit that provides the same capability for that framework as well.
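A minimal sketch of that idea in TestNG (the URLs and the test body here are invented for illustration):

    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class ClonedPagesTest {

        // Each row is one parameter set; TestNG runs the test once per
        // row and reports every iteration as a separate result.
        @DataProvider(name = "pages")
        public Object[][] pages() {
            return new Object[][] {
                {"https://example.com/theme-a"},
                {"https://example.com/theme-b"},
                {"https://example.com/theme-c"},
            };
        }

        @Test(dataProvider = "pages")
        public void pageSmokeTest(String pageUrl) {
            // Open pageUrl with your driver and run the shared page-object
            // assertions; the same body is reused for every cloned page.
        }
    }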

Related

How to access methods in forms from a test method in a different solution file?

I have two different projects in two separate solution files. One is a test project containing my unit test methods and the other is the main project. I know I need to create an instance of the individual GUI form first in order to access its methods, but I am not sure how to do this.
Also, I have certain variable values which are initialized only when I run the entire original application. These values are used in many of the methods in the main project. Without them I just get null values and the test method fails. Is there any way to get values for those variables without running the application? I tried to place the logic which fetches the values for those variables in my test method and then call the actual method, but this still doesn't work. How do I resolve this problem?
The problems you have run into are quite common when trying to write unit tests for an application that was not written with testability in mind.
Usually the code you write tests for should not reside in forms or UI classes.
The business logic should be located in separate classes within your main project. The business logic can then be called from your UI and from your unit tests.
So you have to move any business logic to separate classes first.
The next thing you need to do is remove any dependencies of the newly created classes on other classes which prevent you from writing a unit test, and replace those dependencies with interfaces.
For instance, you can change your application so that every environment variable you mention in your question is retrieved through dedicated classes.
If you create interfaces for these classes, you can set up a so-called mock (a fake version of your real class) in your unit test, which you can configure to behave as desired for the particular test scenario.
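A rough sketch of that pattern (shown here in Java for brevity; the same idea applies in C# with Moq, and every name below is invented for illustration):

    // The dependency is hidden behind an interface instead of a concrete class.
    interface SettingsProvider {
        String get(String name);
    }

    // Production implementation: reads the real environment.
    class EnvironmentSettingsProvider implements SettingsProvider {
        public String get(String name) {
            return System.getenv(name);
        }
    }

    // Business logic moved out of the form; it only knows the interface.
    class OrderService {
        private final SettingsProvider settings;

        OrderService(SettingsProvider settings) {
            this.settings = settings;
        }

        boolean isFeatureEnabled() {
            return "on".equals(settings.get("FEATURE_FLAG"));
        }
    }

    // In the unit test, a fake implementation replaces the real provider.
    class FakeSettingsProvider implements SettingsProvider {
        public String get(String name) {
            return "on"; // behave exactly as this test scenario requires
        }
    }

A unit test can then construct OrderService with the fake provider, so none of those variable values require the full application to be running.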
Some general advice:
Often, refactoring the code to be tested (i.e. making it more testable) takes more time than writing the unit tests themselves. There are whole books about refactoring so-called "legacy" code.
Refactoring an application for testability often means decoupling dependencies between concrete classes and using interfaces instead. In production code you pass your production class where the interface is expected; in a unit test you can create a "mock" class which implements the same interface but behaves as needed by your test.
When writing unit tests, you might consider using a mocking framework like Moq. It will save you a lot of time and make the test code smaller. You will find an introduction here: Unit Testing .NET Application with Moq Framework
Ideally you should design your application from the beginning with testability in mind. The "TDD" (test-driven development) approach goes one step further: here you write your tests before you write the actual code, though in my experience this approach is not used very often.

CSS Regression testing with VCS and CI

BackstopJS generates a single page with regression test results. Is there any tool that works with CI (doesn't matter which one) and VCS (git) and assigns the test results to commits? I imagine the main page has a list of commits with passed/failed numbers next to them and links to a page with full results. Doesn't have to use BackstopJS.
It doesn't have to be as good as what I just described; I'd be happy with anything better than a script that copies results from BackstopJS to a folder named with the time and date.
You'll want to look into 'pre-commit hooks' and work them into your build process. That might involve a grunt-based git hook module and some bash scripting.
On projects where I use CSS regression testing, it is still run manually prior to commit/push.
https://www.npmjs.com/package/grunt-githooks
http://codeinthehole.com/writing/tips-for-using-a-git-pre-commit-hook/

Allure: Organizing tests in one Feature

Could you please help me with two questions regarding organizing tests and using Allure's "feature" tags?
If I have a few different tests but I need all of them to be included in one feature, do I have to write the @Features("My Feature") annotation above each test method? Is there a way to write the @Features("My Feature") annotation once and include all required tests in it?
If I have a few logically separated classes with my @Test methods, is there an easy way to call all required tests from one TestSuite class in order to simply manage the test queue?
You can write the @Features annotation once per class. But do you really need such a feature? Maybe you should think a bit more and split your tests in some other way?
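Assuming the Allure 1.x adapter for TestNG (the class and method names below are illustrative), a class-level annotation covers every test method in the class:

    import ru.yandex.qatools.allure.annotations.Features;
    import org.testng.annotations.Test;

    // One class-level @Features annotation applies to all tests below,
    // so it does not have to be repeated above each method.
    @Features("My Feature")
    public class MyFeatureTests {

        @Test
        public void firstScenario() { /* ... */ }

        @Test
        public void secondScenario() { /* ... */ }
    }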
Allure is not a test framework; it is just a reporting tool. Allure does not run tests. To answer this part of the question I need to know more about the test framework you use and your environment (Ant, Maven, Jenkins, TeamCity, etc.).

Robot Framework Test Flow

Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and it is possible that a test requires the execution of another test placed in another folder.
Any suggestions?
There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
This is not a good / recommended / possible way to go.
Robot Framework doesn't support it, and for a good reason: it is not sustainable to create such dependencies in the long term (or even the short term).
Tests shouldn't depend on other tests, especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called __init__.robot in a directory. The suite setup and suite teardown in that file will run before anything in the underlying folders.
You can also turn the other test into a keyword:
Test C simply calls a keyword (say, Test_C_Keyword) that performs Test C's steps and also updates a global variable (Test_C_already_runs).
Test B would then issue:

    Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword

You would have to assign a value to Test_C_already_runs before that anyway (as part of a variable file import, or as part of some suite setup) to prevent a variable-not-found error.

How to complete an existing phpunit test with a generator

PHPUnit has a skeleton generator that works from an existing class.
But it works only once.
If new methods are added later (because a dev doesn't work with TDD), the test file is incomplete.
Is there a tool to generate a skeleton for the uncovered methods?
I don't know any, and I also don't see the need. That skeleton generator generates one test method per function it finds, but you cannot test all use cases of a slightly advanced function within only one test function.
Also, the name of the test function is generated - but better names can and should be created to describe the intended test case or behavior of the tested function. Like "testGetQuoteFromStockMarket" vs. "testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject" and "testGettingUmbrellaCorporationFromStockMarketShouldFailWithException".
Note that you cannot test the throwing of an exception in the same test method as cases that do not throw exceptions.
So all in all there simply is no use case to create "one test method per method" at all, and if you add new methods, it is your task to manually add the appropriate number of new tests for that - the generated code coverage statistics will tell you how well you did, or which functions are untested.
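To illustrate that point, a sketch in JUnit 4 (Java rather than PHP, since the point is framework-agnostic; StockMarket, Quote, and UnknownSymbolException are invented names, stubbed so the sketch compiles on its own):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class StockMarketTest {

        // Minimal invented types so the sketch is self-contained.
        static class UnknownSymbolException extends RuntimeException {}
        static class Quote {
            private final String symbol;
            Quote(String symbol) { this.symbol = symbol; }
            String getSymbol() { return symbol; }
        }
        static class StockMarket {
            Quote getQuote(String symbol) {
                if (!"MSFT".equals(symbol)) throw new UnknownSymbolException();
                return new Quote(symbol);
            }
        }

        // The happy path and the failure path need separate test methods:
        // an expected exception ends a test method immediately, so both
        // behaviors cannot be verified within one method.
        @Test
        public void gettingMicrosoftQuoteShouldReturnQuoteObject() {
            assertEquals("MSFT", new StockMarket().getQuote("MSFT").getSymbol());
        }

        @Test(expected = UnknownSymbolException.class)
        public void gettingUmbrellaCorporationShouldFailWithException() {
            new StockMarket().getQuote("UMBRELLA");
        }
    }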
AFAIK there is no built-in PHPUnit functionality to update the auto-generated test code; that is typical of most code generators.
The good news is that each of the functions is added quite cleanly and independently. So what I would do is rename your existing unit test file to *.old, regenerate a fresh test file, then use meld (or the visual diff tool of your choice) to merge in the new functions.
Aside: automatic test generation is really only needed at the start of a new class anyway; the idea of exactly one unit test per function is more about generating nice coverage stats to please your boss; from the point of view of building good software, some functions will need multiple tests, and some functions (getters and setters come to mind) do not really need any, and sometimes multiple functions are best covered by a single unit test (getters and setters again come to mind).
