How to complete an existing phpunit test with a generator - phpunit

PHPUnit has a skeleton generator that builds a test class from an existing class.
But it only works once.
If new methods are added later (because a dev doesn't work with TDD), the test file becomes incomplete.
Is there a tool to generate skeletons for the uncovered methods?

I don't know any, and I also don't see the need. That skeleton generator generates one test method per function it finds, but you cannot test all use cases of a slightly advanced function within only one test function.
Also, the name of the test function is generated - but better names can and should be created to describe the intended test case or behavior of the tested function. Like "testGetQuoteFromStockMarket" vs. "testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject" and "testGettingUmbrellaCorporationFromStockMarketShouldFailWithException".
Note that you cannot test the throwing of an exception in the same test method as cases that do not throw exceptions.
So, all in all, there simply is no use case for creating "one test method per method" at all. If you add new methods, it is your task to manually add the appropriate number of new tests for them - the generated code coverage statistics will tell you how well you did, and which functions are still untested.
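For illustration, here is a minimal sketch (class, method and exception names are made up) of what "several descriptively named tests per method" looks like in current PHPUnit syntax, with the exception case kept in its own test method:

use PHPUnit\Framework\TestCase;

class StockMarketTest extends TestCase
{
    public function testGettingMicrosoftQuoteFromStockMarketShouldReturnQuoteObject()
    {
        $market = new StockMarket();

        // Happy path: a known symbol yields a Quote object (hypothetical classes).
        $this->assertInstanceOf(Quote::class, $market->getQuote('MSFT'));
    }

    public function testGettingUmbrellaCorporationFromStockMarketShouldFailWithException()
    {
        $market = new StockMarket();

        // The failure case lives in a separate test method.
        $this->expectException(UnknownSymbolException::class);
        $market->getQuote('UMBR');
    }
}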

AFAIK there is no built-in PHPUnit functionality to update the auto-generated test code; that is typical of most code generators.
The good news is that each of the functions is added quite cleanly and independently. So what I would do is rename your existing unit test file to *.old, regenerate a fresh test file, then use meld (or the visual diff tool of your choice) to merge in the new functions.
Aside: automatic test generation is really only needed at the start of a new class anyway; the idea of exactly one unit test per function is more about generating nice coverage stats to please your boss; from the point of view of building good software, some functions will need multiple tests, and some functions (getters and setters come to mind) do not really need any, and sometimes multiple functions are best covered by a single unit test (getters and setters again come to mind).
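For reference, the generated skeleton typically contains one incomplete stub per public method, roughly like this (the exact output depends on your phpunit-skelgen version), which makes the new stubs easy to spot in a diff:

/**
 * @covers SomeClass::newMethod
 * @todo   Implement testNewMethod().
 */
public function testNewMethod()
{
    // Remove the following line when you implement this test.
    $this->markTestIncomplete('This test has not been implemented yet.');
}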

Related

How to access methods in forms from a test method in a different solution file?

I have two different projects in two separate solution files. One is a test project containing my unit test methods and the other is the main project. I know I need to create an instance of the individual GUI form first in order to access the methods in it, but I'm not sure how to do this.
Also, I have certain variable values which get initialized only when I run the entire original application. These values are used in many of the methods in the main project. Without them I will just get null values and the test method will just fail. Is there any way to get values for those variables without running the application? I tried to place the logic which fetches the values for those variables in my test method and then call the actual method, but this still doesn't work. How do I resolve this problem?
The problems you have run into are quite common when trying to write unit tests for an application that is not written with testability in mind.
Usually the code you write tests for does not reside in forms or UI classes.
The business logic should be located in separate classes within your main project. The business logic can then be called from your UI and from your unit tests too.
So you have to move any business logic to separate classes first.
The next thing you need to do is remove any dependencies of the newly created classes on other classes which prevent you from writing a unit test, and replace those dependencies with interfaces.
For instance, you can change your application so that every environment-dependent value which you mention in your question is retrieved by dedicated classes.
If you create interfaces for these classes, you can set up a so-called mock (a fake version of your real class) in your unit test, which you can configure to behave as desired for the particular test scenario.
Some general advice:
Often refactoring the code to be tested (i.e. making it more testable) takes more time than writing the unit tests themselves. There are whole books about refactoring so-called "legacy" code.
Refactoring an application for testability often means de-coupling dependencies between concrete classes and using interfaces instead. In production code you pass your production class for the interface; in a unit test you can create a "mock" class which implements the same interface but behaves the way your test needs.
When writing unit tests, you might consider using a mocking framework like Moq. It will save you a lot of time and make the test code smaller. You will find an introduction here: Unit Testing .NET Application with Moq Framework
Ideally you should design your application from the beginning in a way which has testability in mind. The "TDD" (test driven development) approach even goes one step further: there you write your tests before you write the actual code, but this approach is not used very often in my experience.
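The question is about .NET (where Moq fills the mocking role), but the extract-an-interface-and-mock-it pattern is language-neutral. To keep the examples on this page in one language, here is a minimal PHP/PHPUnit sketch of the same idea, with made-up class and method names:

use PHPUnit\Framework\TestCase;

// Hypothetical interface extracted from the form's hard-wired dependency.
interface SettingsProvider
{
    public function getConnectionString(): string;
}

// Business logic moved out of the form into its own class.
class ReportBuilder
{
    private $settings;

    public function __construct(SettingsProvider $settings)
    {
        $this->settings = $settings;
    }

    public function targetDatabase(): string
    {
        return $this->settings->getConnectionString();
    }
}

class ReportBuilderTest extends TestCase
{
    public function testTargetDatabaseComesFromSettings()
    {
        // The mock stands in for values normally initialized by the running app.
        $settings = $this->createMock(SettingsProvider::class);
        $settings->method('getConnectionString')->willReturn('db://test');

        $builder = new ReportBuilder($settings);

        $this->assertSame('db://test', $builder->targetDatabase());
    }
}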

Robot Framework Test Flow

Is it possible to require the execution of a specific test case before the execution of the current test case?
My test cases are organized in several folders, and it is possible that a test requires the execution of another test placed in another folder.
Any suggestions?
There is nothing you can do if the test cases are in different files, short of reorganizing your tests.
You can control the order that suites are run, and you can control the order of tests within a file, but you can't control the order of tests between files.
Best practices suggest that tests should be independent and not depend on other tests. In practice that can be difficult, but at the very least you should strive to make test suites independent of one another.
This is not a good / recommended / possible way to go.
Robot Framework doesn't support it, and for a good reason: it is not sustainable to create such dependencies in the long term (or even the short term).
Tests shouldn't depend on other tests, and especially not on tests from a different suite. What if the other suite was not run?
You can work around the issue in two ways:
You can define a file called
__init__.robot
in a directory. The suite setup and suite teardown in that file will run before anything in the underlying folders.
You can also turn the other test into a keyword:
Test C simply calls a keyword that does the actual work and also updates a global variable (Test_C_already_runs).
Test B would then issue
Run Keyword If    '${Test_C_already_runs}' != 'true'    Test_C_Keyword
You would have to set a value for Test_C_already_runs before that anyway (as part of a variable import, or as part of some suite setup) to prevent a variable-not-found error.

Symfony2 testing: Why should I use fixtures instead of managing data directly in test?

I'm writing some functional tests for an API in Symfony that depend on having data in the database. It seems the generally accepted method for doing this is to use Fixtures and load the fixtures before testing.
It seems pretty daunting and impractical to create a robust library of Fixture classes that are suitable for all my tests. I am using the LiipFunctionalTestBundle so I only load the Fixtures I need, but it only makes things slightly easier.
For instance, for some tests I may need 1 user to exist in the database, and for other tests I may need 3. Beyond that, I may need each of those users to have slightly different properties, but it all depends on the test.
I'd really like to just create the data I need for each test on-demand as I need it. I don't want to pollute the database with any data I don't need which may be a side effect of using Fixtures.
My solution is to use the container to get access to Doctrine and set up my objects in each test before running assertions.
Is this a terrible decision for any reason which I cannot foresee? This seems like a pretty big problem and is making writing tests a pain.
Another possibility is to try to use this port of Factory Girl for PHP, but it doesn't seem to have a large following, even though Factory Girl is used so widely in the Ruby community to solve this same problem.
The biggest reason to set up test data as fixtures in a single place (rather than in each test as needed) is that it isolates test failures from BC-breaking schema changes: only the tests directly affected by the change will fail.
A simple example:
Suppose you have a User class, in which you have a required field name, for which you provide getName() and setName(). You write your functional tests without fixtures (each test creating Users as needed) and all is well.
Sometime down the road, you decide you actually need firstname and lastname fields instead of just name. Naturally you change the schema and replace getName/setName with new get and set methods for these fields, and go ahead and run your tests. Tests fail all over the place (anything that sets up a User), since even tests that don't use the name field have called setName() during setup, and now they all need to be changed.
Compare this to using a fixture, where the User objects needed for testing are all created in one fixture. After making your breaking change, you update the fixture (in one place / class) to set up the Users properly, and run your tests again. Now the only failures you get should be directly related to the change (i.e. tests that use getName() for something), and every other test that doesn't care about the name field proceeds as normal.
For this reason it's highly preferable to use fixtures for complicated functional tests where possible. If your particular schema / testing needs make that really inconvenient, you can set entities up manually in tests, but I'd do my best to avoid it except where you really need to.
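As a concrete sketch (entity and fixture names are illustrative, and the ObjectManager namespace differs between Doctrine versions), a single fixture class that centralises User creation for the functional tests could look like this:

use Doctrine\Common\DataFixtures\FixtureInterface;
use Doctrine\Common\Persistence\ObjectManager;

// Every functional test that needs users loads this one class, so a schema
// change to User (e.g. name -> firstname/lastname) is fixed here only.
class LoadTestUsers implements FixtureInterface
{
    public function load(ObjectManager $manager)
    {
        foreach ([['Alice', 'Smith'], ['Bob', 'Jones'], ['Carol', 'Doe']] as $name) {
            $user = new User();
            $user->setFirstname($name[0]);
            $user->setLastname($name[1]);
            $manager->persist($user);
        }

        $manager->flush();
    }
}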

Handling Test Workflows in R testthat

I have two files, test_utils.r and test_core.r; they contain tests for various utilities and some core functions, separated into different 'contexts'. I can control the flow of tests within each file by moving my test_that() statements around.
But I am looking for a way to create different workflows, say ensuring that at run time tests from context A_utils run first, followed by tests from context B_Core, followed by context B_Utils.
Any ideas on how this can be achieved?
BrajeshS,
I have an idea. Have you tried the skip() function, available in version 0.9 or later? See the testthat documentation on page 7:
Description: This function allows you to skip a test if it's not currently available. This will produce an informative message, but will not cause the test suite to fail.
It was introduced to skip tests if an internet connection or API is not available. Depending on your workflow, you could then jump over tests.
To see example code using skip_on_cran, look at wibeasley's answer where he provides test code in Rappster's reply - https://stackoverflow.com/a/26068397/4606130
I am still getting to grips with testthat. Hope this helps you.

How to start with unit testing?

First, I don't want to do TDD. I'd like to write some code and then just write unit tests for the important stuff.
I know the tools. I know how to write a unit test.
I wrote some code (a couple of classes) and I don't know where to start. I'm missing the semantics.
How do I pick what to unit test?
Should I unit test every class separately?
Should I try to simulate every possible variation of a method's parameters?
The idea with unit testing is to test pieces of code. We use TDD at my company, which works great. We write tests for every possible option of a function. So, to answer your three questions:
How do I pick what to unit test?
It's useless to write unit tests for code or functions that contain no intelligence. This is actually what needs to happen when you strictly follow TDD. But if it's obvious what the function returns and you're sure nothing can go wrong, then you probably don't have to write a test for it. Though it's better to do so.
Should I unit test every class separately?
What exactly do you mean by this? If the question is whether you need to test classes, the answer is no. Unit testing is for testing the smallest pieces of code, mostly functions and constructors. What you want to know is whether your function gives the result you want, and you want it to return the desired result or throw a nicely handled exception no matter what parameter value you send to it.
Should I try to simulate every possible variation of a method's parameters?
You should. That's the whole idea of writing a unit test. You want to test a piece of code and rule out every single thing that can go wrong. Parameters are the most important here. What happens if a string parameter contains HTML, for example? And what if a required parameter is NULL? Or an empty string? Every option should be tested to rule out the things that can go wrong.
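One practical way to cover many parameter variations without copy-pasting tests is a data provider; sketched here in PHPUnit (the framework from the original question at the top of this page), with a made-up Sanitizer class under test:

use PHPUnit\Framework\TestCase;

class SanitizerTest extends TestCase
{
    // Each row is one variation of the input parameter: plain text, HTML, empty string.
    public static function inputProvider(): array
    {
        return [
            'plain text'   => ['hello', 'hello'],
            'html payload' => ['<b>hi</b>', 'hi'],
            'empty string' => ['', ''],
        ];
    }

    /**
     * @dataProvider inputProvider
     */
    public function testSanitizeHandlesEveryVariation(string $input, string $expected)
    {
        $sanitizer = new Sanitizer(); // hypothetical class under test

        $this->assertSame($expected, $sanitizer->sanitize($input));
    }

    public function testSanitizeRejectsNull()
    {
        // Assuming the hypothetical sanitize() validates null input itself.
        $this->expectException(InvalidArgumentException::class);
        (new Sanitizer())->sanitize(null);
    }
}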
If you're using the .NET Framework, it is very interesting to look at the Moq framework. Put shortly, it is a framework that allows you to create a fake object of some type, and you can validate your test against it to check whether the result is as expected using several different parameter and return values. You can read about it in Scott Hanselman's blog post.
Should I try to simulate every possible variation of a method's parameters?
You should have a look at Pex which can generate input values for your parameterised tests that provide the greatest code coverage.
