When writing UI tests, how does one test one thing that lies at the end of a long sequence? - automated-tests

I just got started running UI tests against my ASP.NET MVC application using WatiN. It's a great tool and really intuitive, but I find myself wondering what belongs in an individual test.
I found a number of people suggesting that these tests should be treated like unit tests, and as such there should be no expectations about ordering or side effects.
I run into problems when the user story assumes that the user has completed a series of steps before completing the activity I want to test.
Some examples...
A user must register, log out, and enter the wrong password 3 times to verify that the system won't let them log in again with the right password
A user must register, add some foos, add some bars, submit a form that allows them to select among their foos and bars, and see their submission on another page
With unit tests, I can use mocking to take care of the prerequisite tasks.
What are some good ways of handling this scenario such that I can avoid writing individual tests that go through the same prerequisite steps yet have tests that complete reliably every time?

Hey.
I would split these into integration tests and story acceptance tests.
Check out the PageObject pattern: you create a LoginPage class with methods like loginAs(String username, String password) and loginAsExpectingError(String username, String password). You write other classes like that, and together they give you an automation framework for your app. You can use it in the following way:
At the integration level, you check that the application components work properly when you provide correct credentials (loginAs), and that they fail as expected when you provide wrong credentials (loginAsExpectingError).
At the acceptance level, you use LoginPage.loginAs() as the first step of your acceptance test. The second step could be something like MainPage.addSomeFoos(), then MainPage.addSomeBars(), then MainPage.logOut().
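Here is a minimal sketch of what such a page object might look like with WatiN, the tool from the question. The element names and the MainPage class are illustrative assumptions, not part of the original answer:

```csharp
// Minimal PageObject sketch using WatiN. Element names are assumptions.
using WatiN.Core;

public class LoginPage
{
    private readonly Browser _browser;

    public LoginPage(Browser browser)
    {
        _browser = browser;
    }

    // Happy path: log in and hand back the page the user lands on.
    public MainPage LoginAs(string username, string password)
    {
        SubmitCredentials(username, password);
        return new MainPage(_browser);
    }

    // Failure path: submit bad credentials and stay on the login page.
    public LoginPage LoginAsExpectingError(string username, string password)
    {
        SubmitCredentials(username, password);
        return this;
    }

    private void SubmitCredentials(string username, string password)
    {
        _browser.TextField(Find.ByName("username")).TypeText(username);
        _browser.TextField(Find.ByName("password")).TypeText(password);
        _browser.Button(Find.ById("login")).Click();
    }
}

public class MainPage
{
    private readonly Browser _browser;

    public MainPage(Browser browser)
    {
        _browser = browser;
    }

    // AddSomeFoos(), AddSomeBars() and LogOut() would live here, each
    // returning the page object for the page the user ends up on.
}
```

An acceptance test then reads as a chain of user-visible steps, e.g. new LoginPage(browser).LoginAs("bob", "secret"), while the prerequisite-heavy setup lives in one reusable place instead of being repeated in every test.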
If your unit tests pass, run the integration tests; if those pass, run the acceptance tests.

Related

Run Pact Provider Test Class in Sequence

I am using pact-jvm provider spring. I have two different pact (.json) files, say order.json and irs.json, and I need to run them sequentially (order followed by irs). But the test classes are picked up in alphabetical order, so irs runs first and order runs second. Is there a way to execute a particular test class's provider states, or to define the test class execution order?
Pact is not a tool for end-to-end testing; in fact, one of Pact's stated objectives is to reduce, or in some cases completely remove, the need for E2E testing.
Instead of doing end-to-end testing, we use contract tests to avoid the need for it. Doing this has a lot of benefits, including the ability to test and release things separately, avoiding the need to manage test environments and data, and reducing coupling/ordering in the tests themselves. Furthermore, contract tests can run on your laptop or on a CI build; you don't need to test against a running provider deployed to a real environment.
If you have to run a set of these tests in a particular sequence, you're doing it wrong.
Here are some links to help you understand what I mean a bit better:
https://docs.pact.io/consumer/contract_tests_not_functional_tests
https://docs.pact.io/faq/#do-i-still-need-end-to-end-tests
https://docs.pact.io/getting_started/what_is_pact_good_for
I would also recommend completing one of our workshops, probably https://github.com/DiUS/pact-workshop-jvm.
It takes about 1 hour, but is well worth your time as all of the core concepts are clearly explained.
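To make "contract test" concrete without tying it to pact-jvm's real API, here is a deliberately toy C# sketch of the idea: each contract is plain data recording what the consumer expects, and provider verification replays contracts independently, so ordering between files like order.json and irs.json never matters. Contract and ProviderVerifier are made-up names for illustration, not Pact's API:

```csharp
// Toy illustration of consumer-driven contracts (NOT Pact's real API).
// Each contract is plain data; verification replays them one by one,
// so no contract depends on another and execution order is irrelevant.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public record Contract(HttpMethod Method, string Path, int ExpectedStatus);

public static class ProviderVerifier
{
    public static async Task VerifyAsync(Uri providerBaseUri, params Contract[] contracts)
    {
        using var http = new HttpClient { BaseAddress = providerBaseUri };
        foreach (var contract in contracts)
        {
            var response = await http.SendAsync(
                new HttpRequestMessage(contract.Method, contract.Path));
            if ((int)response.StatusCode != contract.ExpectedStatus)
                throw new Exception(
                    $"{contract.Method} {contract.Path}: expected " +
                    $"{contract.ExpectedStatus}, got {(int)response.StatusCode}");
        }
    }
}

// Usage: each pact file's contracts verify on their own, in any order, e.g.
// await ProviderVerifier.VerifyAsync(new Uri("http://localhost:8080"),
//     new Contract(HttpMethod.Get, "/orders/1", 200),
//     new Contract(HttpMethod.Get, "/irs/refunds/1", 200));
```

If two contracts only pass in one particular order, they are hiding shared state in the provider, which is exactly the coupling the answer warns against.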

How to retry R testthat test on API error?

Some tests rely on external services (e.g. APIs). Sometimes these external services go down, which can cause tests to fail and (worse) continuous integration tools to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt coming 5 minutes after the first?
Ideally you would write your tests in a way that they don't call the API or the database.
Instead, you mock the API endpoints according to their specification, and you also write tests for cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do so:
https://github.com/nealrichardson/httptest
If you are worried that your vendor might change the API, talk to them and extract details of their API change management.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw the line between API validation and testing of your own code.
You will need two separate processes:
Unit/acceptance tests that are executed against mocks of the API endpoints. These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking change in the API anyway, so this is largely redundant; in exceptional cases it can be useful, but only with a very bad vendor.
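The question is about R, where httptest above does exactly this, but the mock-the-endpoint idea is language-agnostic. Here is a hedged C# sketch of the same principle, stubbing HttpClient's message handler with Moq so the test never touches the real API; WeatherClient is a hypothetical class under test:

```csharp
// Sketch: a unit test that stubs the HTTP layer instead of calling the real API.
// WeatherClient is a hypothetical class under test; the JSON shape is made up.
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Moq;
using Moq.Protected;
using NUnit.Framework;

public class WeatherClient
{
    private readonly HttpClient _http;

    public WeatherClient(HttpClient http) { _http = http; }

    public async Task<int> GetTemperatureAsync(string city)
    {
        var json = await _http.GetStringAsync("https://api.example.com/weather/" + city);
        // Minimal parsing to keep the sketch short; a real client would use a JSON library.
        return int.Parse(json.Split(':')[1].TrimEnd('}'));
    }
}

[TestFixture]
public class WeatherClientTests
{
    [Test]
    public async Task ParsesTemperatureFromApiResponse()
    {
        // Stub the handler so any request gets a canned 200 response.
        var handler = new Mock<HttpMessageHandler>();
        handler.Protected()
               .Setup<Task<HttpResponseMessage>>("SendAsync",
                   ItExpr.IsAny<HttpRequestMessage>(),
                   ItExpr.IsAny<CancellationToken>())
               .ReturnsAsync(new HttpResponseMessage(HttpStatusCode.OK)
               {
                   Content = new StringContent("{\"tempC\":21}")
               });

        var client = new WeatherClient(new HttpClient(handler.Object));

        Assert.AreEqual(21, await client.GetTemperatureAsync("Oslo"));
    }
}
```

The same split applies here: this test pins down your own parsing logic, while a separate scheduled job can validate the live endpoint if you really need to.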

isolating database operations for integration tests

I am using NHibernate and ASP.NET/MVC. I use one session per request to handle database operations. For integration testing I am looking for a way to have each test run in an isolated mode that will not change the database or interfere with other tests running in parallel, something like a transaction that can be rolled back at the end of the test. The main challenge is that each test can make multiple requests. If one request changes data, the next request must be able to see those changes, etc.
I tried binding the session to the auth cookie to create child sessions for the subsequent requests of a test, but that does not work well, as neither sessions nor transactions are thread-safe in NHibernate (it results in trying to open multiple DataReaders on the same connection).
I also checked whether TransactionScope could be a way, but I could not figure out how to use it from multiple threads/requests.
What could be a good way to make this happen?
I typically do this by operating on different data.
For example, say I have an integration test which checks a basket total for an e-commerce website.
I would create a new user, activate it, add some items to a basket, calculate the total, assert on whatever I need, and then delete all the created data.
So the flow is: create the data you need, operate on it, check it, delete it. This way all the tests can run in parallel without interfering with each other, and the data is always cleaned up.
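A hedged sketch of that flow with NUnit; TestApi and its methods are hypothetical helpers (an in-memory stand-in is included here so the sketch compiles, but a real TestApi would call your application's endpoints or service layer):

```csharp
// Sketch of the create / operate / check / delete flow.
// TestApi and all its methods are hypothetical, not a real library.
using System;
using System.Collections.Generic;
using NUnit.Framework;

// In-memory stand-in so the sketch runs; a real TestApi would drive
// your application's registration, basket and cleanup endpoints.
public class TestApi
{
    private readonly Dictionary<string, decimal> _totals = new Dictionary<string, decimal>();

    public void RegisterAndActivate(string username, string password) => _totals[username] = 0m;

    public void AddToBasket(string username, string item, int quantity, decimal unitPrice)
        => _totals[username] += quantity * unitPrice;

    public decimal GetBasketTotal(string username) => _totals[username];

    public void DeleteUserAndData(string username) => _totals.Remove(username);
}

[TestFixture]
public class BasketTotalTests
{
    [Test]
    public void BasketTotalIncludesAllItems()
    {
        // Create: unique data, so parallel tests never collide.
        var username = "testuser-" + Guid.NewGuid();
        var api = new TestApi();
        api.RegisterAndActivate(username, "p@ssw0rd");
        try
        {
            // Operate.
            api.AddToBasket(username, "foo", quantity: 2, unitPrice: 5m);
            api.AddToBasket(username, "bar", quantity: 1, unitPrice: 3m);

            // Check.
            Assert.AreEqual(13m, api.GetBasketTotal(username));
        }
        finally
        {
            // Delete: clean up even if the assertion fails.
            api.DeleteUserAndData(username);
        }
    }
}
```

Because each test owns its own user, nothing needs a rollback and nothing blocks parallel runs.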

Confusion over Symfony testing practices: functional vs integration testing

The Symfony testing documentation doesn't really mention a distinction between functional tests and integration tests, but my understanding is that they are different.
The Symfony docs describe functional testing like this:
Make a request;
Test the response;
Click on a link or submit a form;
Test the response;
Rinse and repeat.
While the Ruby on Rails docs describe it like this:
was the web request successful?
was the user redirected to the right page?
was the user successfully authenticated?
was the correct object stored in the response template?
was the appropriate message displayed to the user in the view?
The Symfony docs seem to be describing something more akin to integration testing. Clicking links, filling out forms, submitting them, etc. You're testing that all these different components are interacting properly. In their example test case, they basically test all actions of a controller by traversing the web pages.
I am confused why Symfony does not make a distinction between functional and integration testing. Does anyone in the Symfony community isolate tests to specific controller actions? Am I overthinking things?
Unit testing refers to testing the methods of a class, one by one, checking that they make the right calls in the right context. If those methods use dependencies (injected services or even other methods of the same class), we mock them to isolate the test to the current method only.
Integration testing refers to automatically testing a feature of your application: checking that all possible usage scenarios for the given feature work as expected. To do such tests, you basically use the crawler to simulate a website user working through the feature, and you check that the resulting page, or even the resulting database data, is consistent.
Functional testing refers to manually challenging the application's usability in a preproduction environment. You have a Quality Assurance team that rolls out some scenarios to check whether your website works as expected. Manual testing will give you feedback you can't get automatically, such as "this button is ugly", "this feature is too complex to use", or whatever other subjective feedback a human (who will generally think like a customer) can give.
The way I see it, the two lists don't contradict each other. The first list (Symfony) can be seen as a method of providing answers to the second list (Rails).
Both lists sound like functional testing to me. They use the application as a whole to determine whether it satisfies the requirements. The second list (Rails) describes typical questions for deciding whether requirements are met; the first list (Symfony) offers a method for answering those questions.
Integration tests are more focused on how units work together. Say you have a repository unit that depends on a database abstraction layer. A unit test can make sure the repository itself functions correctly by stubbing/mocking out the database abstraction layer. An integration test will use both units to see if they actually work together as they should.
Another example of integration testing is using a real database to check that the correct tables/columns exist and that queries deliver the results you expect, but also that when a logger is called to store a message, the message really ends up in a file.
PS: Functional testing that actually uses a (headless) browser is often called acceptance testing.
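The thread is about Symfony, but that repository example translates to any stack. Here is a hedged C# sketch of the unit-test half, with the database abstraction layer stubbed out; IDatabase, ProductRepository and StubDatabase are made-up names for illustration:

```csharp
// Unit test of a repository with its database abstraction layer stubbed out.
// An integration test would run the same repository against the real database.
using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

public interface IDatabase
{
    IEnumerable<IDictionary<string, object>> Query(string sql, object parameters);
}

public class ProductRepository
{
    private readonly IDatabase _db;

    public ProductRepository(IDatabase db) { _db = db; }

    public int CountCheaperThan(decimal price)
        => _db.Query("SELECT id FROM products WHERE price < @price", new { price }).Count();
}

[TestFixture]
public class ProductRepositoryTests
{
    // Stub that pretends the database returned two matching rows.
    private class StubDatabase : IDatabase
    {
        public IEnumerable<IDictionary<string, object>> Query(string sql, object parameters)
        {
            yield return new Dictionary<string, object> { ["id"] = 1 };
            yield return new Dictionary<string, object> { ["id"] = 2 };
        }
    }

    [Test]
    public void CountsRowsReturnedByTheDatabaseLayer()
    {
        var repository = new ProductRepository(new StubDatabase());
        Assert.AreEqual(2, repository.CountCheaperThan(10m));
    }
}
```

The integration test for the same unit would swap StubDatabase for the real IDatabase implementation and assert against known seed data.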
Both referenced docs describe functional tests. Functional tests are performed from the user's perspective (typically at the GUI layer): they test what the user will see and what happens when the user submits a form or clicks some button. It does not matter whether the process is automated or manual.
Then there are integration and unit tests. These tests sit at a lower level. The basic premise of unit tests is that they are isolated: you test a particular object and its methods, but without external or real dependencies. This is what mocks are for (basically, a mock simulates a real object according to the unit test's needs). Without an understanding of IoC, it is really hard to write isolated tests.
If you write tests using real/external dependencies (no mocks), you are writing integration tests. Integration tests can test the cooperation of two objects or of a whole package/module, including querying a database, sending mails, etc.
Yes, you are overthinking things ;)
I don't know why Symfony or Ruby on Rails state things like that. There are times when what a test is called depends on the eye of the beholder. Bottom line: the name doesn't matter. The only important thing is the confidence that the test gives you in what you are doing.
Apart from that, tests are alive and should evolve with your code. Sometimes I test only for a specific HTTP status code; other times I isolate a module and unit test it. It depends on the time I have to spend, the benefits, etc.
If I have a piece of code that is only used in a controller, I usually go for a functional test. If I'm writing a utility, I usually go for unit testing.

unit testing user login/logout

I am very new to the whole unit testing concept, so I'm sorry if "unit test" is the wrong word for this. I think it might actually be an "integration test"?
At any rate, I am using ASP.NET's membership framework for login, logout, change password, etc. But I do a few extra things, like updating the authentication ticket, adding an entry to another table at registration, refreshing the auth ticket on password change, etc.
What's the best way to go about writing tests for these types of cases?
I see 2 main options:
If you have a wrapper for the auth ticket and an interface for the repository, you can effectively unit test the logic involved (I'm not sure how you are using the ASP.NET membership framework, but in any case you can move this logic into a class you can unit test; see the sketch below).
Do automated web acceptance tests using WatiN or Selenium; these would still be written in your testing framework of choice (MSTest, NUnit, etc.).
Update 1: I would do both: full-blown unit tests for the logic, and the web acceptance tests instead of smoke tests.
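To make option 1 concrete, here is a hedged sketch. IAuthTicketService, IMemberRepository and AccountService are hypothetical names for the wrapper/interface idea, not part of the ASP.NET membership framework:

```csharp
// Sketch: move the "extra things" behind interfaces so they can be unit
// tested without a web server. All type names here are hypothetical.
using Moq;
using NUnit.Framework;

public interface IAuthTicketService
{
    void Refresh(string username);
}

public interface IMemberRepository
{
    void AddRegistrationEntry(string username);
}

public class AccountService
{
    private readonly IAuthTicketService _tickets;
    private readonly IMemberRepository _members;

    public AccountService(IAuthTicketService tickets, IMemberRepository members)
    {
        _tickets = tickets;
        _members = members;
    }

    // Called after the membership framework has changed the password.
    public void OnPasswordChanged(string username) => _tickets.Refresh(username);

    // Called after the membership framework has created the user.
    public void OnRegistered(string username) => _members.AddRegistrationEntry(username);
}

[TestFixture]
public class AccountServiceTests
{
    [Test]
    public void PasswordChangeRefreshesTheAuthTicket()
    {
        var tickets = new Mock<IAuthTicketService>();
        var service = new AccountService(tickets.Object, Mock.Of<IMemberRepository>());

        service.OnPasswordChanged("alice");

        tickets.Verify(t => t.Refresh("alice"), Times.Once());
    }
}
```

Option 2 then only needs a handful of end-to-end passes through the real login/logout pages, since the ticket and registration logic is already covered here.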
