Using many slots to test many classes in Qt unit testing

I've seen a variety of workarounds posted in various places that suggest writing a custom main function instead of relying on the Qt QTEST_MAIN() macro when building a single test executable that runs through many different tests of different classes.
Correct me if I'm wrong, but couldn't you just have a single test class with as many slots as you need to test as many classes as you want? Just instantiate the class you want to test inside the slot's implementation and run your tests in that slot. Then, a different slot might instantiate a different class and run different tests. The single QTEST_MAIN is supposed to run through all your slot tests, so everything gets tested, right?
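Something like this is what I have in mind (Calculator and StringFormatter are just placeholder classes standing in for whatever I actually want to test):

    #include <QtTest>
    #include "calculator.h"       // hypothetical classes under test
    #include "stringformatter.h"

    class AllTests : public QObject
    {
        Q_OBJECT
    private slots:
        // Each slot instantiates and exercises a different class.
        void testCalculator()
        {
            Calculator calc;
            QCOMPARE(calc.add(2, 3), 5);
        }
        void testStringFormatter()
        {
            StringFormatter fmt;
            QCOMPARE(fmt.toUpper("abc"), QString("ABC"));
        }
    };

    QTEST_MAIN(AllTests)
    #include "alltests.moc"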
Here are some of the alternate techniques I've read about that don't use QTEST_MAIN:
https://sites.google.com/a/embeddedlab.org/community/technical-articles/qt/qt-posts/creatingandexecutingasingletestprojectwithmultipleunittests
https://stackoverflow.com/a/12207504/768472

Of course you can have as many slots as you want in a test class. But sooner or later you will need to separate and group your tests simply because there are too many to keep in one class, and then you will have to create several test classes.
The original intent of the QTEST_MAIN macro is to run only one test class. If you want to test several classes and can do so independently of each other, you can put them in separate test classes, add a QTEST_MAIN macro for each of them, and compile each class into a separate executable. The plus is that if one test case crashes, the other tests continue to run properly. The minus is that you need a test runner to run all the tests and check their results, and qtestlib doesn't provide one. You can write your own runner or use one of the existing ones (example).
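As a rough sketch of that layout (TestFoo and foo.h are placeholder names), each test class lives in its own source file with its own QTEST_MAIN, and each file is built as a separate executable:

    // tst_foo.cpp -- built as its own test executable
    #include <QtTest>
    #include "foo.h"   // hypothetical class under test

    class TestFoo : public QObject
    {
        Q_OBJECT
    private slots:
        void isValidByDefault()
        {
            Foo foo;
            QVERIFY(foo.isValid());
        }
    };

    QTEST_MAIN(TestFoo)   // generates main() for this one test class
    #include "tst_foo.moc"

A second file, tst_bar.cpp, would look the same but end with QTEST_MAIN(TestBar), and the build system would list it as a separate target.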
The options are:
to obey the QTestLib paradigm: separate tests into different executables so that one failing or crashing test cannot affect the others.
to store all tests in one class. If your app is not tiny, this will become very inconvenient.
to run all tests manually from a custom main function (see the sketch after this list). It's not too bad, but it's inconvenient because you need to list the test classes by hand.
to use another test library. I prefer Google Test: it's much more powerful than qtestlib, it supports death tests, and it automatically registers tests, runs them and counts their results, so none of the problems above apply. Note that you can still use many useful qtestlib features (like QSignalSpy) alongside another test framework.
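Here is a rough sketch of the custom-main option mentioned above (TestFoo and TestBar are placeholder test classes). Each test object is run with QTest::qExec and the failure counts are accumulated, which is why every class has to be listed by hand:

    #include <QtTest>
    #include "testfoo.h"   // hypothetical QObject-based test classes with private test slots
    #include "testbar.h"

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);   // some tests need an application object

        int failures = 0;

        TestFoo foo;
        failures += QTest::qExec(&foo, argc, argv);

        TestBar bar;
        failures += QTest::qExec(&bar, argc, argv);

        return failures;   // non-zero if any test failed
    }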

Related

Intention of GUI-Unit Testing in QML

I am doing some research on Qt Quick Tests, especially GUI unit tests.
I'd like to know: what is the intention? Is it for triggering the functions that are written in QML, or for observing the behaviour of the UI, or is it something completely different that I haven't mentioned yet?
I'd like to know: what is the intention?
I would guess that the main reason Qt Test was written was to test Qt itself via regression testing. Qt has a continuous integration (CI) system that runs automated tests against batches of changes submitted by contributors through code review. You can see all of these tests in the tests directory of each Git repository. For example, here are qtbase.git's automated tests.
From those tests, you will see that there are several kinds of tests users can write, some of which are:
Basic logic tests. For example, when I call foo(), I expect the bar() signal to be emitted (see the sketch after this list). This is not specific to GUI applications, and all applications can benefit from these. A lot of tests will fall under this category.
Render tests. Checking that the user interface is rendered correctly by comparing images "grabbed" from the screen.
User interaction tests. If I click this button, does it perform some action? If I type into this text field, does it then contain that text?
These are terms that I just made up, although they are accurate enough. If you're researching this subject, you will find resources online about the various types of automated testing. This is not specific to Qt.
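As an illustration of the first category, a basic logic test in Qt Test might look roughly like this (Counter, increment() and valueChanged() are made-up names); QSignalSpy records the emitted signals so the test can assert on them:

    #include <QtTest>
    #include <QSignalSpy>
    #include "counter.h"   // hypothetical class with an increment() function and a valueChanged(int) signal

    class TestCounter : public QObject
    {
        Q_OBJECT
    private slots:
        void incrementEmitsValueChanged()
        {
            Counter counter;
            QSignalSpy spy(&counter, &Counter::valueChanged);

            counter.increment();

            // Calling increment() should emit valueChanged(1) exactly once.
            QCOMPARE(spy.count(), 1);
            QCOMPARE(spy.takeFirst().at(0).toInt(), 1);
        }
    };

    QTEST_MAIN(TestCounter)
    #include "testcounter.moc"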
Is it for triggering the functions that are written in QML
Yes.
or do I want to see the behavior of the UI
It can also test that.
Note that some applications are too large or complex to test via Qt's libraries. There are products like Squish that automate GUI testing for these types of applications.

Write Unit Tests for Static Methods

In my project there are lots of static methods, and all of them in turn hit the database. I am supposed to write unit tests for the project, but I am often stuck because all the methods are static and they hit the DB. Is there any way to overcome this? Sorry for being abstract in the question, but my concern is how to write unit tests for static methods and for methods that hit the database. Moq is not useful when the methods are static, and in my project one method also calls another method within the same class, so I cannot mock the inner method as both are in the same class.
The project I'm currently on is a lot worse than what you have described; it is a blueprint of an untestable system. There are a couple of options, I think, but it all depends on your situation.
Write integration tests that hit the database and test multiple components together. I know this is not ideal, but it at least gives some confidence in the work you do. Then try to refactor your code one small step at a time (be sure to take baby steps) and write unit tests around that code. Make sure your integration tests continue to pass. You are still allowed to refactor those integration-style tests if the semantics change.
This might not be easy, as I said, and it takes time. That's why I said it depends on your situation.
Another option (I know many people do this with legacy code) is to use one of those pricey isolation frameworks, such as Isolator or perhaps MS Fakes, to fake out those untestable dependencies. Once those tests are written, you can look at refactoring the code to make it more testable.
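The question is about C#, but the refactoring idea behind the first option is language-neutral. As a rough sketch (written here in C++, with made-up names), the point is to pull the database access behind an interface so the caller can be unit tested with a hand-rolled fake, while the production implementation keeps doing whatever the static method did before:

    #include <string>

    // Seam introduced by the refactoring: the database access hides behind an interface.
    struct IOrderStore {
        virtual ~IOrderStore() = default;
        virtual double priceOf(const std::string &orderId) = 0;
    };

    // Production code now takes the dependency instead of calling a static DB method.
    double totalWithTax(IOrderStore &store, const std::string &orderId)
    {
        return store.priceOf(orderId) * 1.2;
    }

    // In the unit test, a hand-rolled fake stands in for the database.
    struct FakeOrderStore : IOrderStore {
        double priceOf(const std::string &) override { return 100.0; }
    };
    // totalWithTax(fake, "any-id") can now be asserted to return 120.0 without touching a database.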

Should I write integration test or unit test?

I have a function which saves photos (stored in the database; the app gives the user the option to save them to a directory) to a given directory. This was not working correctly, and I just fixed it. Now, should I write a unit test or an integration test for the function?
For your case, you want to write an integration test to cover the scenario you mention. I have a full post on this topic. However, here's a summarized version specific to your question:
In his book The Art of Unit Testing, Roy Osherove describes a key principle: a unit test must be “trustworthy”. On the surface, this seems fairly obvious. However, this underlying principle highlights some of the key differences between a unit test and an integration test.
With a trustworthy test, you must be able to trust the results 100% of the time. If the test fails, you want to be certain that the code is broken and must be fixed. You shouldn’t have to ask things like “Was the database down?”, “Was the connection string OK?”, “Was the stored procedure modified?”. If you are asking these questions, it shows that you aren't able to trust the results and you likely have a poorly designed “unit test”.
As your scenario describes a situation with multiple such dependencies, you want to cover it with an integration test. Again, for more details, see my full post here as well.
Good luck!
Integration tests and unit tests have different scopes and purposes:
Unit tests test small pieces of code (like a function) in isolation from the rest of the program, ideally covering all possible edge cases (like exceptions, null parameters, etc.)
Integration tests test an entire application from a use-case point of view. They can never cover all edge cases, but they can catch problems with the interaction between parts of the code, and with the glue code that joins them together, which unit tests often miss.
For a single function, you can really only have a unit test, and you should. But you could also have an integration test that shows that when the user presses a certain button, a photo is written into the directory and can be opened in the program as well.
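To illustrate the unit-test side, the saving routine could be exercised against a temporary directory. This sketch uses Qt Test only because that framework appears earlier on this page; PhotoSaver and savePhoto() are made-up names standing in for the function in question:

    #include <QtTest>
    #include <QTemporaryDir>
    #include <QFile>
    #include "photosaver.h"   // hypothetical class with savePhoto(const QByteArray &, const QString &dir)

    class TestPhotoSaver : public QObject
    {
        Q_OBJECT
    private slots:
        void savesPhotoIntoGivenDirectory()
        {
            QTemporaryDir dir;              // isolated directory, cleaned up automatically
            QVERIFY(dir.isValid());

            PhotoSaver saver;
            const QString path = saver.savePhoto(QByteArray("fake image bytes"), dir.path());

            QVERIFY(QFile::exists(path));           // the file really was written
            QVERIFY(path.startsWith(dir.path()));   // ...and into the requested directory
        }
    };

    QTEST_MAIN(TestPhotoSaver)
    #include "testphotosaver.moc"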
Integration tests help you to validate if your software is working properly.
Unit tests help you to find why your software is breaking.
Unit tests to some extent also contribute to the first goal. Plus it has a couple of advantages:
It's generally way cheaper to write and run a unit test with a much smaller scope.
It's easier to get coverage for the combinatorial explosion of states of your components using unit tests than with an integration test. Say you have a setup involving three components, each of which has 3 different states. Integration testing the entire setup would involve checking 3 * 3 * 3 = 27 conditions, while unit testing the individual components would require testing only 3 + 3 + 3 = 9 conditions. (This is oversimplified, but you will hopefully see the point.)
Because of this, unit tests are generally more popular than integration tests. However, you really cannot do without integration tests. Integration tests should be the cornerstone used for acceptance of your software. Having unit tests only just proves that you have a bunch of stuff doing something. An integration test proves that you have working software.
Some people would call a test for a DAO an integration test; others would say it's a unit test.
Whatever you call it, I'd say you should have a unit test for all the DAO functionality and an integration test for the front-to-back behavior embodied in the use case that says "give the user the option to save to the file system." I'd have integration tests for both scenarios, since it sounds like both are possible in your system.
I think it depends on the source of your problem.
If the function itself may have problems in different scenarios, you can write unit tests that cover those scenarios.
If the integration of your function with other parts of your program may cause problems, you should think about an integration test.
Sometimes a function like yours needs external resources to do its job; it's not a bad idea to have some unit tests that check what happens when some of those resources are not available.

Code Coverage generic functions/parameters?

I am working on some code coverage for my applications. Now, I know that code coverage is an activity linked to the type of tests that you create and the language for which you wish to do the code coverage.
My question is: is there any possible way to do some generic code coverage? That is, can we have a set of features/test cases which can be run (along with a lot more specific tests for the application under test) to get code coverage of, say, 10% or more of the code?
More like, if I wish to build a framework for code coverage, what is the best possible way to go about making a generic one? Is it possible to have some functionality automated or generalized?
I'm not sure that generic coverage tools are the holy grail, for a couple of reasons:
Coverage is not a goal, it's an instrument. It tells you which parts of the code are not entirely hit by a test. It does not say anything about how good the tests are.
Generated tests cannot guess the semantics of your code. Frameworks that generate tests for you can only deduce meaning from reading your code, which in essence could be wrong, because the whole point of unit testing is to see whether the code actually behaves the way you intended it to.
Because the automated framework will generate artificial coverage, you can never tell whether a piece of code is tested with a proper unit test or only superficially tested by a framework. I'd rather have untested code show up as uncovered, so I can fix that.
What you could do (and I've done ;-) ) is write a generic test for Java beans. Using reflection, you can test a Java bean against the Sun spec for beans: assert that equals and hashCode are either both implemented or neither, check that the getter actually returns the value you pushed in with the setter, and check whether all properties have getters and setters.
You can do the same basic trick for anything that implements Comparable, for instance.
It's easy to do, easy to maintain, and forces you to have clean beans. As for the rest of the unit tests, I try to focus on getting the important parts tested first and thoroughly.
Coverage can give a false sense of security. Common sense can not be automated.
This is usually achieved by combining static code analysis (Coverity, Klocwork, or their free analogues) with dynamic analysis, i.e. running tests against an instrumented application (profiler + memory checker). Unfortunately, it is hard to automate the test algorithms themselves; most tools are a kind of "recorder" able to record traffic/keys/signals (depending on the domain) and replay them, with minimal changes/substitutions such as session ID, user, etc.

What's the point of automated integration test here?

Yes, I did read the 'Related Questions' in the box above after I typed this =). They still didn't help me as much as I'd like, as I understand what the difference between the two is - I'm just not sure if I need it in my specific case.
So I have a fully unit tested (simple & small) application. I have a 'Job' class with a single public Run() method plus ctors, which takes in an Excel spreadsheet as a parameter, extracts the data, checks the database to see if we already have that data, and if not, makes a request to a third-party vendor, takes that response, puts it in the database and then completes the job (another DB update).
I have IConnection to talk to the vendor, IParser to parse Excel/vendor files, and IDataAccess to do all database access. My Job class is lean and mean and doesn't do much logic itself; even though in a sense it drives all of the logic, it really just 'chains' data along through the composite objects...
So all the composite objects are unit tested themselves, including the DAL, and even my Run() method on the Job class is fully unit tested using mocks for all possible code paths.
So - do I need to do any type of integration test at this point, other than running the app to see if it works? Are my tests of the Run() method with mocks considered my integration tests? Or should my integration test use real instances instead of mocks, and then assert database values at the end, based on known Excel spreadsheet input? But that's what all my unit tests are doing already (just in separate places, and the mocked Run test makes sure those places 'connect')! Following the DRY methodology, I just don't see a need for an integration test here...
Am I missing something obvious, guys? Many thanks again...
I think the biggest thing you're missing is the actual behaviour of your external systems. While your unit tests may certainly assert that the individual steps perform the expected action, they do little to reveal the run-time issues that may arise when accessing external systems. Your external systems may also contain data you do not know about.
So yes, I think you need both. You do not necessarily need to be equally detailed in both kinds of test; sometimes you can just let the integration test be a smoke test.
