Maybe this is not possible to do generically in a test framework, but I would like to be able to deploy the microservice I am testing from within the test itself. I have looked at Citrus, REST Assured, and Karate, and I have listened to countless talks and read countless blogs, but I never see how this first stage is done. There always seems to be an assumption that the microservice is pre-deployed.
Honestly, it depends on how your microservice is deployed and which infrastructure you are targeting. I prefer to integrate the deployment into the Maven build, as Maven provides pre- and post-integration-test phases.
In case you can use Kubernetes or Docker, I would recommend integrating the deployment with the fabric8 Maven plugins (fabric8-maven-plugin, docker-maven-plugin). They automatically create, start, and stop the Docker container deployment within the Maven build.
In case you can use Spring Boot, the official Maven plugin (spring-boot-maven-plugin) can do the same.
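A minimal sketch of that setup, assuming the standard start/stop goals of spring-boot-maven-plugin and the Failsafe plugin for the integration tests (plugin versions omitted; this fragment goes inside the build/plugins section of the pom.xml):

    <!-- start the packaged Spring Boot app before integration tests, stop it afterwards -->
    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <executions>
            <execution>
                <id>start-app</id>
                <phase>pre-integration-test</phase>
                <goals><goal>start</goal></goals>
            </execution>
            <execution>
                <id>stop-app</id>
                <phase>post-integration-test</phase>
                <goals><goal>stop</goal></goals>
            </execution>
        </executions>
    </plugin>
    <!-- Failsafe runs the *IT test classes between those two phases -->
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <executions>
            <execution>
                <goals><goal>integration-test</goal><goal>verify</goal></goals>
            </execution>
        </executions>
    </plugin>

With this in place, mvn verify boots the packaged application before the integration tests run and shuts it down afterwards.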
Another possibility would be to use build pipelines, where the continuous build (with Jenkins, for example) deploys the system under test first and then executes the tests as a pipeline stage.
I personally prefer to always separate deployment and testing tasks. In case you really want to do a deployment within your test, Citrus as a framework is able to start/stop Docker containers and/or Kubernetes pods within the test. Citrus can also integrate these deployment tasks with before/after test-suite phases.
I found a way to do it using docker-compose.
https://github.com/djangofan/karate-test-prime-example
Basically, make a docker-compose.yml that runs your service container and then also runs e2e tests after a call to wait-for-it.sh.
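A rough sketch of that idea (service names, ports, and paths here are made up, not taken from the linked repo):

    version: "3"
    services:
      my-service:                   # the microservice under test (hypothetical name)
        build: .
        expose:
          - "8080"
      e2e-tests:
        build: ./e2e                # hypothetical image containing wait-for-it.sh and the Karate tests
        depends_on:
          - my-service
        command: ["./wait-for-it.sh", "my-service:8080", "--", "mvn", "test"]

Running docker-compose up --exit-code-from e2e-tests then makes the test container's exit code the exit code of the whole run, which is handy in CI.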
2 points:
1. The karate-demo is a Spring Boot example that is booted by the JUnit test-runner; the code that starts the server is part of the demo project (a rough sketch of this pattern follows after these two points).
2. The karate-mock-servlet takes things one step further: you can run HTTP integration tests within your project without booting an app-server. This saves time, and code-coverage reports are easier to produce.
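Here is a rough sketch of the first point's pattern. This is not the actual karate-demo code; the application class, port, and property name below are made up:

    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.springframework.boot.SpringApplication;
    import org.springframework.context.ConfigurableApplicationContext;

    public abstract class ServerStartingTestBase {

        private static ConfigurableApplicationContext context;

        @BeforeClass
        public static void startServer() {
            // DemoApplication is the (hypothetical) Spring Boot application class of the service under test.
            // Boot it once before any Karate feature runs.
            context = SpringApplication.run(DemoApplication.class, "--server.port=8080");
            // tell the Karate features where the server is listening
            System.setProperty("demo.server.port", "8080");
        }

        @AfterClass
        public static void stopServer() {
            context.close();
        }
    }

The Karate JUnit runner classes can then extend this base class so the server is up before any feature file executes.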
If you have any requirements beyond these, I'd be happy to hear them. One of the things we would like to implement is a built-in server-side mocking framework - think embedded WireMock, but with the ease of Karate's DSL. There is no concrete timeline yet, though.
EDIT: Karate has mocking now: https://github.com/intuit/karate/tree/master/karate-netty
I've just started working with Kafka together with C# (.NET Core). I have spun up a Kafka environment with Schema Registry using Docker (Confluent images).
I have a hobby project where I try to implement a microservice architecture. I will use Kafka to handle my IntegrationEvents between services.
Right now I have created my Kafka topics through the Confluent UI, but I would really like to have configuration as code, just as I have database migrations with EF Core Migrations and the cloud environment defined with Terraform.
What would be the best practice for creating topics? Right now I am thinking of creating topics when my application starts up (if a topic already exists, nothing happens). The application responsible for creating a topic would be the one that needs to produce to it.
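For illustration, a rough sketch of that idempotent create-on-startup idea, shown here with the Java AdminClient (Confluent's .NET client exposes an equivalent admin API; the topic name, partition count, and replication factor below are made up):

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;

    public class TopicBootstrap {

        // Called once on startup by the service that produces to the topic.
        public static void ensureTopic(String bootstrapServers, String topic) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic newTopic = new NewTopic(topic, 3, (short) 1); // example partition/replication settings
                try {
                    admin.createTopics(Collections.singletonList(newTopic)).all().get();
                } catch (ExecutionException e) {
                    // topic already exists: nothing to do, startup stays idempotent
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e;
                    }
                }
            }
        }
    }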
Any input or ideas on what I can improve, or have I missed something that could potentially cause me a lot of trouble?
Best regards Martin
I have decided to just create the topics with a command in my Docker Compose file. I still haven't decided what my hosting environment will be, whether to go with Kubernetes all the way, use only Azure services, or use Confluent Cloud.
So I will postpone that decision, and when the time comes I will adjust the code or the Terraform configuration accordingly.
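For reference, a sketch of that approach as an extra Compose service next to the existing broker (the broker service name, port, topic names, and image tag below are made up; the Confluent cp-kafka image ships the kafka-topics CLI):

    kafka-setup:
      image: confluentinc/cp-kafka:7.4.0
      depends_on:
        - broker
      entrypoint: ["/bin/bash", "-c"]
      command:
        - |
          # create the topics idempotently, then exit
          kafka-topics --bootstrap-server broker:9092 --create --if-not-exists --topic order-integration-events --partitions 3 --replication-factor 1
          kafka-topics --bootstrap-server broker:9092 --create --if-not-exists --topic payment-integration-events --partitions 3 --replication-factor 1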
I am in charge of implementing QA processes and test automation for a project using microservices architecture.
The project has one public API that makes some data available, so I will automate API tests. The tests will live in one repository. This part is clear to me; I did this before on other monolith projects, where I had one repo for API tests and possibly another repo for Selenium tests.
But here the whole product consists of many microservices that communicate via RESTful APIs and/or RabbitMQ queues. How would I go about automating tests for each of these individual services? Would tests for each individual service be in a separate repo? Note: the services are written in Java or PHP; I will automate the tests with Python. It seems to me that I will end up with a lot of repos for tests/stubs/mocks.
What suggestions or good resources can the community offer? :)
Keep unit and contract tests with the microservice implementation.
Component tests make sense in the context of composite microservices, so keep them together.
Have the integration and E2E tests in a separate repo, grouped by use cases.
For this kind of testing I like to use Pact. (I know you said Python, but I couldn't find anything similar in that space, so I hope you (or other people searching) will find this excellent Ruby gem useful.)
For testing from outside in, you can just use the proxy component - hope this at least gives you some ideas.
Give each microservice its own code repository, and add one for the cross-service end-to-end tests.
Inside a microservice's repository, keep everything that relates to that service, from code and tests to documentation and the pipeline definition:
root/
    app/
        source-code/
        unit-tests/ (also: integration-tests, component-tests)
    acceptance-tests/
    contract-tests/
Keep everything that your build step uses in one folder (here: app), probably with sub-folders to distinguish source code from unit tests, integration tests, and component tests.
Put tests like acceptance tests and contract tests, which run in later stages of the delivery pipeline, in their own folders. This keeps them visually separate. It also simplifies creating separate build/test steps for them, for example by giving them their own pom.xml files when using Maven.
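With Maven, for example, the root folder could hold an aggregator pom.xml that lists each test type as its own module (a sketch following the folder layout above; module names are assumptions):

    <!-- root/pom.xml (aggregator): each listed module keeps its own pom.xml -->
    <modules>
        <module>app</module>
        <module>acceptance-tests</module>
        <module>contract-tests</module>
    </modules>

A later pipeline stage can then build and run just the acceptance tests with something like mvn verify -pl acceptance-tests -am.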
If a developer changes a feature, he will need to change the tests at the exact same time to ensure the two fit together. Keeping code and tests in the same repository keeps the two in sync in a natural way.
To everyone who took the time to read my question, I want to point out that I'm writing integration tests, NOT unit tests.
Using the definition of integration test provided by the sites listed at the bottom of the question:
"Integration tests do not use mock objects to substitute implementations for service dependencies. Instead, integration tests rely on the application's services and components. The goal of integration tests is to exercise the functionality of the application in its normal run-time environment."
My question is: what is the best practice for writing integration tests for ASP.NET Web API? At the moment I'm using the in-memory host approach from Filip W.'s blog post.
My second question is: how do you ensure that your test data is there and is correct when you're not mocking? (MSDN and other sites clearly say that integration tests do not mock databases.) The internet is filled with examples of how to write extremely simple integration tests, but has zero examples for a more complex API (anything that goes further than returning 1).
Reference Sites:
https://msdn.microsoft.com/en-us/library/ff647876.aspx
https://msdn.microsoft.com/en-us/library/vstudio/hh323698(v=vs.100).aspx
http://www.codeproject.com/Articles/44276/Unit-Testing-and-Integration-Testing-in-Business-A
http://blog.stevensanderson.com/2009/06/11/integration-testing-your-aspnet-mvc-application/
Filip W.'s in-memory hosting post:
http://www.strathweb.com/2012/06/asp-net-web-api-integration-testing-with-in-memory-hosting/
Have you seen my answer over at this other SO question here? I will pad this out with the additional information below.
In our release pipeline (using Visual Studio Release Manager 2013) we provision a nightly integration database from a known test script by creating the database from scratch (all scripted); initially we cloned production, but as the data grew this became too time-consuming as part of the nightly integration build. After the database is provisioned, we do the same with the integration VM web servers and deploy the latest build to that environment. After these come up, we run our unit tests again from the command line as part of the release pipeline, this time including the tests decorated with the custom action filter I described in the linked answer.
I'm thinking of using Flyway for my database migrations. It seems simpler than creating my own SQL and Java migration scripts. However, looking at the documentation, there seem to be several ways to use it.
What should I consider when deciding between migrating with (a) application integration, (b) a Maven task, or (c) the command line?
Currently I deploy to Heroku with a simple git push. This builds my app and starts it as specified in the Procfile.
So in this regard it seems like the application integration (migrating on startup) would be the simplest, but it also seems like overhead I don't need. I suppose if I go with the Maven task, I would need to ensure that Heroku calls Maven correctly to make this happen.
What are the trade-offs? Is anyone currently using Spring + JPA + Flyway together with a Heroku-hosted application?
You are correct, the application integration is the simplest. Code and DB can never get out of sync.
The overhead is absolutely minimal, especially compared to JPA. The few millis it'll cost you on startup are well worth the dev and deployment convenience.
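A minimal sketch of the migrate-on-startup approach, assuming the Flyway 5+ fluent API (the JDBC_DATABASE_* variables are what Heroku's Java buildpack typically exposes for an attached Postgres database; adjust for your own setup). With Spring Boot, simply having flyway-core on the classpath triggers the same migration automatically at startup.

    import org.flywaydb.core.Flyway;

    public class Bootstrap {

        public static void main(String[] args) {
            // run any pending SQL/Java migrations before the app starts serving traffic
            Flyway flyway = Flyway.configure()
                    .dataSource(System.getenv("JDBC_DATABASE_URL"),
                                System.getenv("JDBC_DATABASE_USERNAME"),
                                System.getenv("JDBC_DATABASE_PASSWORD"))
                    .load();
            flyway.migrate();

            // ... then bootstrap the rest of the application (e.g. SpringApplication.run)
        }
    }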
Q. We're looking for a way to automate the build process, run test cases, and store build results.
A problem could arise because the application for which we want to set up this process is an AJAX application -- a single-page application that relies heavily on JavaScript. The QA team is using QTP to automate their testing.
Q. Now that we have moved to Team Foundation Server, we would like to stay in the box instead of using some other tool for functions that can also be done in Team Foundation. Would it be a good choice to use Team Foundation instead of another tool for defining test cases?
Once they adopt it, they will generate test cases for the app.
Q. We would like to attach the test cases to the daily build, and we would also like to have a log/report for monitoring build progress.
I assume this is possible, but you can also suggest a practice that would make the aforementioned process more effective and quick.
Thanks.
1. TFS has a built-in test runner, but it is aimed at MSTest. What test framework are you using? TFS uses MSBuild in the background and has a template build script with hooks that allow you to customize the process. Read up more about it here.
2. There is a TFS Web Test, but I haven't looked into it much. There is nothing stopping you from hooking an open-source framework like Selenium into the build process.
3. TFS keeps a log of all the builds done, much the same as CruiseControl would.
I would recommend "Team Foundation Server 2008 in Action", as it is a very good book that explains a lot about TFS.