In Ruby, specifically RSpec, you can tell the test runner to abort on the first failing test with the command-line flag --fail-fast. This helps a lot to avoid wasting time or losing focus when fixing a lot of tests in a row, for example when doing test-driven or behavior-driven development.
Now, in Elixir with ExUnit, I am looking for a way to do exactly that. Is there a way to do this?
There is such an option since Elixir 1.8.
Use the --max-failures switch to set the maximum number of failures before the suite stops. To halt the test suite after the first failure, run this:
mix test --max-failures 1
Unfortunately there is (to my knowledge) no such flag implemented.
However, you can run a single test by
mix test path/to/testfile.exs:12
where 12 is the line number of the test.
Hope that helps!
That does not make much sense, since tests in Elixir are a) meant to run blazingly fast and b) in most cases meant to run asynchronously. Immediately terminating the test suite on the first failed test is an anti-pattern, and that's why the ExUnit authors don't allow it.
One still has the option to shoot themselves in the foot: just implement a custom handler for the EventManager and kill the whole application on the "test failed" event.
For BDD, one preferably uses tags and runs the test suite with only that feature included. That way you keep the ability to run tests per feature at any time in the future.
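For illustration, here is a minimal sketch of the tag approach; the module name, the :login tag, and MyApp.greet are hypothetical:

defmodule MyApp.LoginTest do
  use ExUnit.Case, async: true

  # Tag every test that belongs to this feature
  @tag :login
  test "greets the user after signing in" do
    assert MyApp.greet("admin") == "Welcome Admin"
  end
end

You can then run only that feature with mix test --only login, or leave it out of a run with mix test --exclude login.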
Also, as a last resort one might run a specific case only by passing the file name to mix test and/or a specific test only by passing the file name followed by a colon and a line number.
I wanted to ask about this in general, but I had concerns about switching my frontend automation suite from a Java framework to a JavaScript one, mainly around individual tests running asynchronously.
Could cases potentially arise where tests take steps out of order, or produce a false positive by passing a test before the last expect is resolved?
If they can, in general, how do I resolve this issue?
You have two options to handle the asynchronous nature of JS with Protractor:
1) Use async/await (see the sketch after this list)
or
2) Use the control flow (this basically means that if you are not using async/await you are using the control flow; the two cannot be combined, as far as I know)
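A minimal sketch of option 1 (async/await), reusing the same hypothetical URL and selectors as the synchronous example below; note that async/await requires disabling the control flow (SELENIUM_PROMISE_MANAGER: false in the Protractor config):

it("logs in as admin", async () => {
  await browser.get("https://myurl.com");
  await element(by.id("login")).sendKeys("admin");
  await element(by.id("password")).sendKeys("secretpassword");
  await element(by.buttonText("Login!")).click();
  // Each step is awaited explicitly, then the assertion runs on the resolved value
  expect(await element(by.cssContainingText("header", "Welcome Admin")).isPresent()).toBe(true);
});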
If you are not familiar with async/await, you can simply write your tests in a synchronous manner, e.g.:
browser.get("https://myurl.com")
element(by.id("login").sendKeys("admin")
element(by.id("password").sendKeys("secretpassword")
element(by.buttonText("Login!").click()
expect(element(by.cssContainingText("header", "Welcome Admin")).toBePresent()
Protractor will execute this code in a synchronous manner, and the biggest plus is that it will actually wait for your Angular application to be ready (nevertheless, you will have to use smart waits (Protractor ExpectedConditions) from time to time).
Before executing your tests, Protractor builds a queue of your steps and executes them one by one, since most of these steps are built upon promises. All non-Protractor/WebDriver methods are added to the call stack in the usual asynchronous manner. E.g. if you add console.log("foo") at the end of the previous code snippet, it will print before the browser steps execute.
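To illustrate that ordering, a small sketch (same hypothetical URL and selectors as above):

browser.get("https://myurl.com")            // queued on the control flow
element(by.id("login")).sendKeys("admin")   // queued on the control flow
console.log("foo")                          // not queued: prints immediately, before the steps above run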
This is a pretty rough explanation, but I hope it helps.
I have a WCF service which interacts with a database, the file system, and a few external web services, then creates the result, XML-serializes it, and finally returns it.
I'd like to write tests for this solution and I'm thinking about how (it all uses dependency injection and design by contract).
There are 3 main approaches I can take.
1) I can pick the smallest units of code/methods and write tests for them: pick one class and isolate it from its dependencies (other classes, etc.). Although it helps guarantee quality, writing the tests takes lots of time and that's slow.
2) Only make the interactions with external systems mockable and write some tests that cover the main scenarios from when the request is made until the response is serialized and returned. This will test all the interactions between my classes but mock all external resource access.
3) I can set up a test environment where the interactions with external web services, file access, database access, etc. actually happen, then write the tests end to end. This requires environment setup and depends on all the other systems being up and running.
About #1, I see no point in investing the time/money/energy in writing tests for every single method or piece of code that I have. I mean, it's a waste of time.
About #3, since it depends on external resources/systems, it's hard to set up and keep running.
#2 sounds like the best option to me, since it will test what should be tested: only my system and all its classes, while mocking all external systems.
So basically, my conclusion after some years of experience with unit tests is that writing unit tests is a waste to be avoided, and that isolated system tests instead give the best return on investment.
Even if I were going to write the tests first (TDD) and then the production code, I still think #2 would be best.
What's your view on this? Would you write small unit tests for your application? Would you consider it a good practice and the best use of time/budget/energy?
If you want to talk about quality, you should have all 3:
Unit tests to ensure your code does what you think it does, expose edge cases, and help with regression. You (the developer) should write such tests.
Integration tests to verify the correctness of the entire process, whether components talk to each other correctly, and so on. Again, you as a developer write such tests.
System-wide tests in a production-like environment (with some limitations, naturally - you might not have access to the client database, but you should have its exact copy on your local machines). Those tests are usually written by dedicated testers (often in programming languages different from the application code), but of course they can be written by you.
The second and third types of tests (integration and system) take way too much effort to cover the edge cases of smaller components; that is what you usually want unit tests for. You need integration tests because something might fail in the hooking-up of tested, verified, and correct modules. And of course, system testing is what you do daily during development, or what you have assigned people (manual testers) do.
Going for one selected type of test from the list might work up to a point, but it is far from a complete solution or quality software.
All 3 are important and target different test types: a matrix of unit/integration/system categories with positive and negative testing in each category.
For code coverage, unit testing will yield the highest percentage, followed by integration, then system.
You also need to consider whether the purpose of a test is validation (will it meet the final user/customer requirements, i.e. value) or verification (is it written to specification, i.e. correct).
In summary, the answer is 'it depends'. I would recommend following the SEI CMMI model for verification and validation (i.e. testing), which begins with the goals (value) of each activity and then subjects that activity to measures that will ultimately allow the whole process to be continuously improved. In this way you have isolated the What and Why from the How, and you will be able to answer time and value questions for your given environment (which could be a life-support system or a tweet-of-the-day app for your favorite aunt).
Summary: #2 (integration testing) seems most logical, but you shouldn't hesitate to use a variety of tests to achieve the best coverage for pieces of your codebase that need it most. Shooting for having tests for "everything" is not a worthy goal.
Long version
There is a school of thought out there where devs are convinced that adopting unit/integration/system tests means striving to have every single chunk of code tested. It's either no test coverage at all, or committing to testing "everything". This binary thinking always makes adopting any kind of testing strategy seem very expensive.
The truth is, forcing every single line of code/function/module to be tested is about as sound as writing all your code to be as fast as possible. It takes too much time and effort, and most of it nets very little return. Another truth is that you can never achieve true 100% coverage in a non-trivial project.
Testing is not a goal unto itself. It's a means to achieve other things: final product quality, maintainability, interoperability, and so on, all while expending the least amount of effort possible.
With that in mind, step back and evaluate your particular circumstances. Why do you want to "write tests for this solution"? Are you unhappy with the overall quality of the project today? Have you experienced high regression rates? Are you perhaps unsure about how some module works (and more importantly, what bugs it might have)? Regardless of what your exact goal is, you should be able to select pieces that pose particular challenges and focus your attention on them. Depending on what those pieces are, an appropriate testing approach can be selected.
If you have a particularly tricky function or a class, consider unit testing them. If you're faced with a complicated architecture with multiple, hard to understand interactions, consider writing integration tests to establish a clean baseline for your trickiest scenarios and to better understand where the problems are coming from (you'll probably flush out some bugs along the way). System testing can help if your concerns are not addressed in more localized tests.
Based on the information you provided for your particular scenario, external-facing unit testing/integration testing (#2) looks most promising. It seems like you have a lot of external dependencies, so I'd guess this is where most of the complexity hides. Comprehensive unit testing (#1) is a superset of #2, with all the extra internal stuff carrying questionable value. #3 (full system testing) will probably not allow you to test external edge cases/error conditions as well as you would like.
I have a function which saves photos (stored in the database; the app gives the user the option to save them in a directory) to a given directory. Now, this was not working correctly. I just fixed it. Should I write a unit test or an integration test for the function?
For your case, you want to write an integration test to cover the scenario you mention. I have a full post on this topic. However, here's a summarized version specific to your question:
In his book The Art of Unit Testing, Roy Osherove describes a key principle: a unit test must be “trustworthy”. On the surface, this seems fairly obvious. However, this underlying principle highlights some of the key differences between a unit test and an integration test.
With a trustworthy test, you must be able to trust the results 100% of the time. If the test fails, you want to be certain that the code is broken and must be fixed. You shouldn't have to ask things like “Was the database down?”, “Was the connection string OK?”, “Was the stored procedure modified?”. If you are asking these questions, it shows that you aren't able to trust the results and you likely have a poorly designed “unit test”.
As your scenario describes a situation with similar multiple dependencies, you want to cover it with an integration test. Again, for more details, see my full post here as well.
Good luck!
Integration tests and unit tests have different scopes and purposes:
Unit tests test small pieces of code (like a function) in isolation from the rest of the program, ideally covering all possible edge cases (like exceptions, null parameters, etc.)
Integration tests test an entire application from a use case point of view. They can never cover all edge cases, but they can catch problems with the interaction between parts of the code, and with the glue code that joins them together, which unit tests often miss.
For a single function, you can really only have a unit test, and you should. But you could also have an integration test that shows that when the user presses a certain button, a photo is written into the directory and can be opened in the program as well.
Integration tests help you to validate if your software is working properly.
Unit tests help you to find why your software is breaking.
Unit tests to some extent also contribute to the first goal. Plus, they have a couple of advantages:
It's generally way cheaper to write and run a unit test with a much smaller scope.
It's easier to get coverage for the combinatoric explosion of states of your components using unit tests than an integration test. Say you have a setup involving three components, each with 3 different states. Then integration testing the entire setup would involve checking 3 * 3 * 3 = 27 conditions, while unit testing the individual components would require testing only 3 + 3 + 3 = 9 conditions. (This is oversimplified, but you will hopefully see the point.)
Because of this, unit tests are generally more popular than integration tests. However, you really cannot do without integration tests. Integration tests should be the cornerstone used for acceptance of your software. Having unit tests only just proves that you have a bunch of stuff doing something. An integration test proves that you have working software.
Some people would call a test for a DAO an integration test; others would say it's a unit test.
Whatever you call it, I'd say you should have a unit test for all the DAO functionality and an integration test for the front-to-back behavior embodied in the use case that says "give the user the option to save to the file system." I'd have integration tests for both scenarios, since it sounds like both are possible in your system.
I think it depends on the source of your problem.
If the function itself may have problems in different scenarios, you can have unit tests to cover those scenarios for your function.
If the integration of your function with other parts of your program may cause problems, you should think of an integration test.
Sometimes a function like yours needs some external resources to do its job; it's not a bad idea to have some unit tests to see what will happen if some of those resources are not available.
Yes, I did read the 'Related Questions' in the box above after I typed this =). They still didn't help me as much as I'd like, as I understand what the difference between the two is - I'm just not sure if I need it in my specific case.
So I have a fully unit tested (simple & small) application. I have a 'Job' class with a single public Run() method plus ctors, which takes in an Excel spreadsheet as a parameter, extracts the data, checks the database to see if we already have that data, and if not, makes a request to a third-party vendor, takes that response, puts it in the database, and then completes the job (db update again).
I have IConnection to talk to the vendor, IParser to parse excel/vendor files, and IDataAccess to do all database access. My Job class is lean & mean and doesn't do much logic; even though in reality it is driving all of the logic, it's really just 'chaining along' data through to the composite objects...
So all the composite objects are unit tested themselves, including the DAL, and even my Run() method on the Job class is fully unit tested, using mocks for all possible code paths.
So - do I need to do any type of integration test at this point, other than running the app to see if it works? Are my tests of the Run() method with mocks considered my integration tests? Or should my integration test use real instances instead of mocks, and then assert database values at the end, based on known Excel spreadsheet input? But that's what all my unit tests are doing already (just in separate places, and the mocked Run test makes sure those places 'connect')! Following the DRY methodology, I just don't see a need to do an integration test here...
Am I missing something obvious guys? Many thanks again...
I think the biggest thing you're missing is the actual behaviour of your external systems. While your unit tests may certainly assert that the individual steps perform the expected action, they do little to reveal the run-time issues that may arise when accessing external systems. Your external systems may also contain data you do not know about.
So yes, I think you need both. You do not necessarily need to be equally detailed in both kinds of tests; sometimes you can just let the integration test be a smoke test.
During software development, there may be bugs in the codebase which are known issues. These bugs will cause the regression/unit tests to fail, if the tests have been written well.
There is constant debate in our teams about how failing tests should be managed:
Comment out failing test cases with a REVISIT or TODO comment.
Advantage: We will always know when a new defect has been introduced, and not one we are already aware of.
Disadvantage: May forget to REVISIT the commented-out test case, meaning that the defect could slip through the cracks.
Leave the test cases failing.
Advantage: Will not forget to fix the defects, as the script failures will constantly remind you that a defect is present.
Disadvantage: Difficult to detect when a new defect is introduced, due to failure noise.
I'd like to explore what the best practices are in this regard. Personally, I think a tri-state solution is the best for determining whether a script is passing. For example when you run a script, you could see the following:
Percentage passed: 75%
Percentage failed (expected): 20%
Percentage failed (unexpected): 5%
You would basically mark any test cases which you expect to fail (due to some defect) with some metadata. This ensures you still see the failure result at the end of the test, but immediately know if there is a new failure which you weren't expecting. This appears to take the best parts of the 2 proposals above.
Does anyone have any best practices for managing this?
I would leave your test cases in. In my experience, commenting out code with something like
// TODO: fix test case
is akin to doing:
// HAHA: you'll never revisit me
In all seriousness, as you get closer to shipping, the desire to revisit TODO's in code tends to fade, especially with things like unit tests because you are concentrating on fixing other parts of the code.
Leave the tests in, perhaps with your "tri-state" solution. However, I would strongly encourage fixing those cases ASAP. My problem with constant reminders is that after people see them, they tend to gloss over them and say "oh yeah, we get those errors all the time..."
Case in point -- in some of our code, we have introduced the idea of "skippable asserts" -- asserts which are there to let you know there is a problem, but allow our testers to move past them on into the rest of the code. We've come to find out that QA started saying things like "oh yeah, we get that assert all the time and we were told it was skippable" and bugs didn't get reported.
I guess what I'm suggesting is that there is another alternative, which is to fix the bugs that your test cases find immediately. There may be practical reasons not to do so, but getting in that habit now could be more beneficial in the long run.
Fix the bug right away.
If it's too complex to do right away, it's probably too large a unit for unit testing.
Lose the unit test, and put the defect in your bug database. That way it has visibility, can be prioritized, etc.
I generally work in Perl and Perl's Test::* modules allow you to insert TODO blocks:
TODO: {
    local $TODO = "This has not been implemented yet.";
    # Tests expected to fail go here
}
In the detailed output of the test run, the message in $TODO is appended to the pass/fail report for each test in the TODO block, so as to explain why it was expected to fail. For the summary of test results, all TODO tests are treated as having succeeded, but, if any actually return a successful result, the summary will also count those up and report the number of tests which unexpectedly succeeded.
My recommendation, then, would be to find a testing tool which has similar capabilities. (Or just use Perl for your testing, even if the code being tested is in another language...)
We did the following: Put a hierarchy on the tests.
Example: You have to test 3 things.
Test the login (login, retrieve the user name, get the "last login date" or something similar, etc.)
Test the database retrieval (search for a given "schnitzelmitkartoffelsalat" - tag, search the latest tags)
Test web services (connect, get the version number, retrieve simple data, retrieve detailed data, change data)
Every testing point has subpoints, as stated in brackets. We split these hierarchically. Take the last example:
3. Connect to a web service
...
3.1. Get the version number
...
3.2. Data:
3.2.1. Get the version number
3.2.2. Retrieve simple data
3.2.3. Retrieve detailed data
3.2.4. Change data
If a point fails (while developing), give one exact error message, e.g. "3.2.2 failed". The testing unit will then not execute the tests for 3.2.3 and 3.2.4. This way you get one (exact) error message: "3.2.2 failed", leaving the programmer to solve that problem first and not worry about 3.2.3 and 3.2.4, because those would not work anyway.
That helped a lot to clarify the problem and to make clear what had to be done first.
I tend to leave these in, with an Ignore attribute (this is using NUnit) - the test is mentioned in the test run output, so it's visible, hopefully meaning we won't forget it. Consider adding the issue/ticket ID in the "ignore" message. That way it will be resolved when the underlying problem is considered to be ripe - it'd be nice to fix failing tests right away, but sometimes small bugs have to wait until the time is right.
I've considered the Explicit attribute, which has the advantage of being able to be run without a recompile, but it doesn't take a "reason" argument, and in the version of NUnit we run, the test doesn't show up in the output as unrun.
I think you need a TODO watcher that extracts the "TODO" comments from the code base. The TODO is your test metadata. It's one line in front of the known failure message and very easy to correlate.
TODOs are good. Use them. Actively manage them by actually putting them into the backlog on a regular basis.
#5 on Joel's "12 Steps to Better Code" is fixing bugs before you write new code:
When you have a bug in your code that you see the first time you try to run it, you will be able to fix it in no time at all, because all the code is still fresh in your mind.
If you find a bug in some code that you wrote a few days ago, it will take you a while to hunt it down, but when you reread the code you wrote, you'll remember everything and you'll be able to fix the bug in a reasonable amount of time.
But if you find a bug in code that you wrote a few months ago, you'll probably have forgotten a lot of things about that code, and it's much harder to fix. By that time you may be fixing somebody else's code, and they may be in Aruba on vacation, in which case, fixing the bug is like science: you have to be slow, methodical, and meticulous, and you can't be sure how long it will take to discover the cure.
And if you find a bug in code that has already shipped, you're going to incur incredible expense getting it fixed.
But if you really want to ignore failing tests, use the [Ignore] attribute or its equivalent in whatever test framework you use. In MbUnit's HTML output, ignored tests are displayed in yellow, compared to the red of failing tests. This lets you easily notice a newly-failing test, but you won't lose track of the known-failing tests.