Confusion over Symfony testing practices: functional vs integration testing

The Symfony testing documentation doesn't really mention a distinction between functional tests and integration tests, but my understanding is that they are different.
The Symfony docs describe functional testing like this:
Make a request;
Test the response;
Click on a link or submit a form;
Test the response;
Rinse and repeat.
While the Ruby on Rails docs describe it like this:
was the web request successful?
was the user redirected to the right page?
was the user successfully authenticated?
was the correct object stored in the response template?
was the appropriate message displayed to the user in the view?
The Symfony docs seem to be describing something more akin to integration testing. Clicking links, filling out forms, submitting them, etc. You're testing that all these different components are interacting properly. In their example test case, they basically test all actions of a controller by traversing the web pages.
I am confused why Symfony does not make a distinction between functional and integration testing. Does anyone in the Symfony community isolate tests to specific controller actions? Am I overthinking things?

Unit testing refers to testing the methods of a class, one by one, and checking that they make the right calls in the right context. If those methods use dependencies (injected services or even other methods of the same class), we mock them to isolate the test to the current method only.
Integration testing refers to automatically testing a feature of your application: checking that all possible usage scenarios for the given feature work as expected. To do such tests, you basically use the crawler to simulate a website user working through the feature, and check that the resulting page, or even the resulting database data, is consistent.
Functional testing refers to manually challenging the application's usability in a preproduction environment. A Quality Assurance team rolls out some scenarios to check whether your website works as expected. Manual testing gives you feedback you can't get automatically, such as "this button is ugly", "this feature is too complex to use", or whatever other subjective feedback a human (who will generally think like a customer) can give.
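To make the unit-testing case concrete, here is a minimal sketch in Java with JUnit 5 and Mockito (the OrderService and PriceCalculator names are invented for illustration; the same idea applies to a Symfony service with PHPUnit mocks):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.jupiter.api.Test;

    // The dependency that would normally be injected
    interface PriceCalculator { double priceOf(String item); }

    // The class under test
    class OrderService {
        private final PriceCalculator calculator;
        OrderService(PriceCalculator calculator) { this.calculator = calculator; }
        double total(String item) { return calculator.priceOf(item); }
    }

    class OrderServiceTest {
        @Test
        void totalDelegatesToTheCalculator() {
            // Mock the dependency so only OrderService itself is under test
            PriceCalculator calculator = mock(PriceCalculator.class);
            when(calculator.priceOf("book")).thenReturn(10.0);

            assertEquals(10.0, new OrderService(calculator).total("book"));

            // Check that the method made the right call in the right context
            verify(calculator).priceOf("book");
        }
    }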

The way I see it, the two lists don't contradict each other. The first list (Symfony) can be seen as a method to provide answers for the second list (Rails).
Both lists sound like functional testing to me. They use the application as a whole to determine if the application satisfies the requirements. The second list (Rails) describes typical questions to determine if requirements are met, the first list (Symfony) offers a method on how to answer those questions.
Integration tests are more focused on how units work together. Say you have a repository unit that depends on a database abstraction layer. A unit test can make sure the repository itself functions correctly by stubbing/mocking out the database abstraction layer. An integration test will use both units to see if they actually work together as they should.
Another example of integration testing is using a real database to check that the correct tables/columns exist and that queries deliver the results you expect, or that when a logger is called to store a message, the message really ends up in a file.
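As an illustration of that second kind, here is a sketch in Java using JUnit 5 and an in-memory H2 database (UserRepository and the users table are invented names); there are no mocks, so the test exercises the repository and the database layer together:

    import java.sql.*;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // A repository built directly on JDBC
    class UserRepository {
        private final Connection conn;
        UserRepository(Connection conn) { this.conn = conn; }

        String findName(long id) throws SQLException {
            try (PreparedStatement ps = conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

    class UserRepositoryIntegrationTest {
        @Test
        void repositoryAndDatabaseWorkTogether() throws Exception {
            // The in-memory H2 database stands in for the real one
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:test")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE users(id BIGINT PRIMARY KEY, name VARCHAR(50))");
                    st.execute("INSERT INTO users VALUES (42, 'Alice')");
                }
                // Verifies the table/columns exist and the query delivers what we expect
                assertEquals("Alice", new UserRepository(conn).findName(42L));
            }
        }
    }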
PS: Functional testing that actually uses a (headless) browser is often called acceptance testing.

Both referenced docs describe functional tests. Functional tests are performed from the user's perspective (typically at the GUI layer). They test what the user will see and what will happen when the user submits a form or clicks some button. It does not matter whether the process is automated or manual.
Then there are integration and unit tests. These tests sit at a lower level. The basic premise of unit tests is that they are isolated: you test a particular object and its methods, but without external or real dependencies. This is what mocks are for (basically, a mock simulates a real object according to the unit test's needs). Without an understanding of IoC, it is really hard to write isolated tests.
If you are writing tests that use real/external dependencies (no mocks), you are writing integration tests. Integration tests can test the cooperation of two objects, or a whole package/module, including querying the database, sending mails, etc.

Yes, you are overthinking things ;)
I don't know why Symfony or Ruby on Rails state things like that. Sometimes what a test is called depends on the eye that's looking at it. Bottom line: the name doesn't matter. The only important thing is the confidence that the test gives you in what you are doing.
Apart from that, tests are alive and should evolve with your code. I sometimes test only for a specific HTTP status code; other times I isolate a module and unit test it... it depends on the time I have to spend, the benefits, etc.
If I have a piece of code that is only used in a controller, I usually go for a functional test. If I'm writing a utility, I usually go for unit testing.

Related

Run Pact Provider Test Class in Sequence

I am using pact-jvm provider spring. I have two different pact (.json) files, let's say order.json and irs.json, and I need to run them sequentially (order followed by irs). But the test classes are picked up in alphabetical order, so irs runs first and order runs second. Is there a way to execute a particular test class's provider states, or to define the test class execution order?
Pact is not a tool for end-to-end testing; in fact, one of Pact's stated objectives is to reduce, or in some cases completely remove, the need for E2E testing.
Instead of doing end-to-end testing, we use contract tests. This has a lot of benefits, including the ability to test and release things separately, avoiding the need to manage test environments and data, and reducing coupling/ordering in the tests themselves. Furthermore, the tests should be able to run on your laptop or on a CI build; you don't need to test against a running provider deployed to a real environment.
If you have to run a set of these tests in a particular sequence, you're doing it wrong.
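To make that concrete: with provider states, every interaction declares the state it needs, and the provider test sets that state up independently, so verification order stops mattering. A rough sketch assuming a recent pact-jvm JUnit 5 setup (annotation packages and the provider/state names vary; check your version's docs):

    import au.com.dius.pact.provider.junit5.HttpTestTarget;
    import au.com.dius.pact.provider.junit5.PactVerificationContext;
    import au.com.dius.pact.provider.junit5.PactVerificationInvocationContextProvider;
    import au.com.dius.pact.provider.junitsupport.Provider;
    import au.com.dius.pact.provider.junitsupport.State;
    import au.com.dius.pact.provider.junitsupport.loader.PactFolder;
    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.TestTemplate;
    import org.junit.jupiter.api.extension.ExtendWith;

    @Provider("my-provider")
    @PactFolder("pacts")   // picks up order.json, irs.json, ... in any order
    class MyProviderPactTest {

        @BeforeEach
        void before(PactVerificationContext context) {
            // Point the verifier at the running provider (host/port are examples)
            context.setTarget(new HttpTestTarget("localhost", 8080));
        }

        @TestTemplate
        @ExtendWith(PactVerificationInvocationContextProvider.class)
        void verifyPact(PactVerificationContext context) {
            context.verifyInteraction();
        }

        // Each interaction declares the state it needs; the test sets it up on demand,
        // so no interaction depends on another test class having run first.
        @State("an order exists")
        void anOrderExists() {
            // seed or stub whatever "an order exists" means for this provider
        }
    }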
Here are some links to help you understand what I mean a bit better:
https://docs.pact.io/consumer/contract_tests_not_functional_tests
https://docs.pact.io/faq/#do-i-still-need-end-to-end-tests
https://docs.pact.io/getting_started/what_is_pact_good_for
I would also recommend completing one of our workshops, probably https://github.com/DiUS/pact-workshop-jvm.
It takes about 1 hour, but is well worth your time as all of the core concepts are clearly explained.

How to retry R testthat test on API error?

Some tests rely on some external services (e.g. APIs). Sometimes, these external services will go down. This can cause tests to fail and (worse) continuous integration tools to fail.
Is there a way to instruct testthat tests and regular package examples to re-run more than once, ideally with the second attempt coming 5 minutes after the first?
Ideally you would write your tests in such a way that they don't call the API or the database.
Instead, you mock the API endpoints according to the specification, and also write tests for cases where the API returns unexpected results or errors.
Here is an example of a package that allows you to do this:
https://github.com/nealrichardson/httptest
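httptest is R-specific, but the underlying pattern, standing up a fake of the API endpoint and testing against it, is language-agnostic; here is the same idea sketched in Java with WireMock (the endpoint and payload are invented):

    import com.github.tomakehurst.wiremock.WireMockServer;
    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ApiMockExample {
        public static void main(String[] args) throws Exception {
            // Local fake of the external service: no network flakiness, no downtime
            WireMockServer server = new WireMockServer(8089);
            server.start();
            server.stubFor(get(urlEqualTo("/users/1"))
                    .willReturn(aResponse().withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\":1,\"name\":\"Alice\"}")));
            // Error cases are just as easy to simulate as happy paths
            server.stubFor(get(urlEqualTo("/users/2"))
                    .willReturn(aResponse().withStatus(503)));

            // The code under test talks to the stub instead of the real API
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create("http://localhost:8089/users/1")).build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.body());   // {"id":1,"name":"Alice"}

            server.stop();
        }
    }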
If you are worried that your vendor might change the API, talk to them and extract details on their API change management.
Ask them this:
What is your change management process?
How do you avoid introducing breaking changes to existing endpoints that people are using?
(modified from this post)
If you have to check that the API is still the same, draw the line between API validation and testing of your code.
You will need two separate processes:
Unit/acceptance tests that are executed against mocks of the API endpoints. These run fast and are focused on the logic of your application.
A pipeline for regular validation of the API. If your code is already live, you are likely to find out about any breaking changes in the API anyway, so this is highly redundant. In exceptional cases it can be useful, but only with a very bad vendor.

Axon Framework: Should microservices share events?

We are migrating a monolithic application to a more distributed architecture, and we decided to use Axon Framework.
In Axon, as messages are first-class citizens, you get to model them as POJOs.
Now I wonder: since an event can be dispatched by one service and listened to by any other, how should we handle event distribution?
My first impulse is to package the events in a separate project as a JAR file, but this goes against a rule for microservices: that they should not share implementations.
Any suggestion is welcome.
Having some form of 'common' module is definitely not uncommon, although I'd personally use that 'common' module for that specific application alone.
I'd generally say you should regard your commands/events/queries as the API of your application. As such, it might be beneficial to share the event structure with other projects, but just not the actual POJO itself. You could, for example, think about using ProtoBuf for this use case, where ProtoBuf describes a schema for your events.
Another thing to think about is not exposing your whole 'event-API'. Typically you'll have quite a few fine-grained events that other (micro)services in your environment are not interested in. There are, however, always a couple of 'very important events', put differently, 'milestone events', which others definitely are interested in.
In some scenarios these milestone events aren't a direct POJO following from your domain, but rather an accumulation of several events.
It is thus not too uncommon to have a service which accumulates the fine-grained, internal events and publishes another event to notify other services. Publishing a milestone event in response to them is typically better suited as the event-API within your microservice architecture.
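A rough sketch of such an accumulating component using Axon's event handling (the event types are invented; wiring the handler into your Axon configuration is omitted):

    import org.axonframework.eventhandling.EventHandler;
    import org.axonframework.eventhandling.gateway.EventGateway;
    import java.util.HashMap;
    import java.util.Map;

    // Fine-grained internal events: not exposed outside this service
    record ItemAddedEvent(String orderId) {}
    record PaymentReceivedEvent(String orderId) {}

    // Milestone event: part of the public event-API other services subscribe to
    record OrderCompletedEvent(String orderId) {}

    public class OrderMilestoneEmitter {
        private final EventGateway eventGateway;
        private final Map<String, Integer> itemCounts = new HashMap<>();

        public OrderMilestoneEmitter(EventGateway eventGateway) {
            this.eventGateway = eventGateway;
        }

        @EventHandler
        public void on(ItemAddedEvent event) {
            // Accumulate the fine-grained events...
            itemCounts.merge(event.orderId(), 1, Integer::sum);
        }

        @EventHandler
        public void on(PaymentReceivedEvent event) {
            // ...and publish a single milestone event once they add up to
            // something the rest of the system cares about
            if (itemCounts.getOrDefault(event.orderId(), 0) > 0) {
                eventGateway.publish(new OrderCompletedEvent(event.orderId()));
            }
        }
    }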
So that's a couple of ideas there for you, hope they give you some insights.
I'd like to give a clear-cut solution to your question, but such an answer always hides behind 'it depends'.
You are right, the "official" rule is not to share models. So if you have distributed dev-teams, I would stick to it.
However, I tend not to follow it strictly when I have components that are decoupled but developed by the same team, or by teams with high interaction...

Unit Testing an MVC4 Application Service Layer

I've spent the past two days starting and restarting this learning process because I really don't know how to get started.
I have an MVC4 application with three layers: Web, Services, and Core. Controllers send requests to the service layer, which provides info that the controllers use to hydrate the viewModels for the view.
I have the following methods in my service layer:
public interface ISetServices
{
    List<Set> GetBreadcrumbs(int? parentSetId);
    Set GetSet(int? setId);
    Set CreateSet(string name, string details, int? parentSetId);
    void DeleteSet(int? setId);
    Card GetCard(int? cardId);
    Card CreateCard(List<string> sides, string details, int? parentSetId);
    void DeleteCard(int? cardId);
    Side GetSide(int? sideId);
    List<string> GetSides(Card card);
    Side CreateSide(Card card, string content);
    void DeleteSide(int? sideId);
}
I'm trying to figure out how I can create a Unit Test Class Library to test these functions.
When the tests are run, I would like a TestDatabase to be dropped (if it exists) and recreated, and seeded with data. I have a "protected" seed method in my Core project along with a - can I use this? If so, how?
Everywhere I read says to never use a database in your tests, but I can't quite figure out what the point of a test is then. These services are for accessing and updating the database... don't I need a database to test?
I've created a Project.Services.Tests unit testing project, but don't know how to wire everything up. I'd like to do it with code and not configuration files if possible... any examples or pointers would be MUCH appreciated.
There are many aspects to this problem, let me try to approach some:
Unit testing is about testing a unit of code, the smallest possible testable piece of code; testing a unit of code together with its interaction with the database is an integration-testing problem.
One approach to this problem is the repository pattern: an abstraction layer over your data access layer. Your service interface looks more like a repository pattern implementation; google around for more about it.
Some people do not test the internals of the repository; they just assert calls against its interface. Database tests are considered an integration-testing problem.
Some people hit their database directly by writing SetUp and TearDown steps in their unit tests (see the sketch after this list): usually you insert the appropriate data in SetUp, and TearDown cleans everything back up to the previous state. But be aware: these tests can get pretty slow and make your unit testing a pain.
Another approach is to configure your tests to use a different database, for example SQL CE. For some ORMs, database swapping can be quite easy. This is faster than hitting the 'full' database and seems cleaner, but database implementations have differences that will sooner or later surface and make your unit testing painful...
Currently, with the rise of NoSQL solutions, accessing the database directly can be very easy, because they quite often have in-memory counterparts (like RavenDB).
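Here is the SetUp/TearDown approach from the list above, sketched in Java with JUnit 5 and an in-memory H2 database (the table and data are invented; the idea carries over directly to MSTest/NUnit in C#):

    import java.sql.*;
    import org.junit.jupiter.api.*;
    import static org.junit.jupiter.api.Assertions.*;

    class SetServicesDatabaseTest {
        private Connection conn;

        @BeforeEach
        void setUp() throws Exception {
            // Create the schema and insert the data each test needs...
            conn = DriverManager.getConnection("jdbc:h2:mem:sets;DB_CLOSE_DELAY=-1");
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE sets(id INT PRIMARY KEY, name VARCHAR(50))");
                st.execute("INSERT INTO sets VALUES (1, 'Spanish vocabulary')");
            }
        }

        @AfterEach
        void tearDown() throws Exception {
            // ...and clean it all up to the previous state afterwards
            try (Statement st = conn.createStatement()) {
                st.execute("DROP TABLE sets");
            }
            conn.close();
        }

        @Test
        void getSetReadsTheSeededRow() throws Exception {
            try (PreparedStatement ps = conn.prepareStatement("SELECT name FROM sets WHERE id = 1");
                 ResultSet rs = ps.executeQuery()) {
                assertTrue(rs.next());
                assertEquals("Spanish vocabulary", rs.getString(1));
            }
        }
    }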
I realize it might be a bit overwhelming at the beginning, but again, this problem has many aspects. How about you post your source code to GitHub and share it here?

When writing UI tests, how does one test one thing that lies at the end of a long sequence?

I just got started running UI tests against my ASP.NET MVC application using WatiN. It's a great tool and really intuitive, but I find myself wondering what belongs in an individual test.
I found a number of people suggesting that these tests should be treated like unit tests and as such there should be no expectations on order or side effects.
I run into problems when the user story assumes that the user has completed a series of steps before completing the activity I want to test.
Some examples...
A user must register, log out, and enter the wrong password 3 times to verify that the system won't let them log in again with the right password
A user must register, add some foos, add some bars, submit a form that allows them to select among their foos and bars, and see their submission on another page
With unit tests, I can use mocking to take care of the prerequisite tasks.
What are some good ways of handling this scenario such that I can avoid writing individual tests that go through the same prerequisite steps yet have tests that complete reliably every time?
I would split integration tests and story acceptance tests.
Check out the Page Object pattern: you create a LoginPage class with methods like loginAs(String username, String password) and loginAsExpectingError(String username, String password). You write other classes like that, and together they give you an automation framework for your app. You can use it in the following way:
At the integration level, you check that application components work properly when you provide correct credentials (loginAs) and when you provide wrong credentials (loginAsExpectingError).
At the acceptance level, you use LoginPage.loginAs() as the first step in your acceptance test. The second step could be something like MainPage.addSomeFoos(), then MainPage.addSomeBars(), then MainPage.logOut().
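The question uses WatiN, but the pattern has the same shape with any driver; here is a sketch in Java against Selenium WebDriver (the element ids are invented):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Page object: encapsulates how the login page is driven,
    // so the tests read as user steps
    class LoginPage {
        private final WebDriver driver;
        LoginPage(WebDriver driver) { this.driver = driver; }

        MainPage loginAs(String username, String password) {
            fillForm(username, password);
            return new MainPage(driver);   // success lands on the main page
        }

        LoginPage loginAsExpectingError(String username, String password) {
            fillForm(username, password);
            return this;                   // failure stays on the login page
        }

        private void fillForm(String username, String password) {
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login")).click();
        }
    }

    class MainPage {
        private final WebDriver driver;
        MainPage(WebDriver driver) { this.driver = driver; }

        MainPage addSomeFoos() { /* drive the foo form here */ return this; }
        MainPage addSomeBars() { /* drive the bar form here */ return this; }

        LoginPage logOut() {
            driver.findElement(By.id("logout")).click();
            return new LoginPage(driver);
        }
    }

    // An acceptance test then reads as the user story:
    // new LoginPage(driver).loginAs("bob", "secret").addSomeFoos().addSomeBars().logOut();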
If your unit tests pass, run the integration tests; if those pass, run the acceptance tests.
