Spring Cloud Contract generating stubs and a standalone server with the same stubs

This question feels a bit strange, but here goes:
I know I can use SCC in unit tests, because I can access the stubs it creates.
But the question is: from those same stubs, can I configure a standalone server that could run on some DEV server, let's say for some manual testing or for some Selenium testing of the frontend app that will ultimately use those stubs?

Have you read the docs? You can use the Stub Runner Boot application. You can read about it here https://cloud.spring.io/spring-cloud-static/Finchley.RELEASE/single/spring-cloud.html#_stub_runner_boot_application and about its Docker version here https://cloud.spring.io/spring-cloud-static/Finchley.RELEASE/single/spring-cloud.html#stubrunner-docker
UPDATE:
Updating links for Hoxton.SR1 release train (Spring Cloud Contract 2.2.1.RELEASE):
Stub Runner Boot: https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/2.2.1.RELEASE/reference/html/project-features.html#features-stub-runner-boot
Stub Runner Docker: https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/2.2.1.RELEASE/reference/html/docker-project.html
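If you would rather embed the stub server in your own Spring Boot application than run the prebuilt Stub Runner Boot jar or its Docker image, the same docs describe an @EnableStubRunnerServer setup. A minimal sketch, assuming the spring-cloud-starter-stub-runner dependency is on the classpath (the class name is made up):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.contract.stubrunner.server.EnableStubRunnerServer;

    // A standalone stub server to deploy on a DEV box; manual testers or a
    // Selenium suite for the frontend can point at the stubs it serves.
    @SpringBootApplication
    @EnableStubRunnerServer
    public class StubServerApplication {
        public static void main(String[] args) {
            SpringApplication.run(StubServerApplication.class, args);
        }
    }

Started with properties such as stubrunner.ids=com.example:some-producer:+:stubs:8091 and stubrunner.stubs-mode=REMOTE (plus stubrunner.repository-root pointing at your artifact repository; the coordinates here are illustrative), it downloads the same stub jars your unit tests consume and keeps them served by WireMock on the given port.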

Related

Why does Spring Cloud Contract stub runner have local and remote attributes?

The Spring Cloud Contract docs say:
"Use the REMOTE stubsMode when downloading stubs from an online repository and LOCAL for offline work."
Why does Spring Cloud Contract stub runner need local and remote attributes?
I would expect it to respect the normal Maven lifecycle instead: if I run mvn clean install on the contract module, it should publish locally; if I run mvn clean deploy there, it should publish to my remote repository. The same goes for the test verifier: if there is a copy of the binaries in my local repo, use that; otherwise pull it from the remote.
So I don't see why we have to specify local and remote in the Stub Runner.
This also seems dangerous, because you might accidentally check in code with LOCAL when you meant to change it to REMOTE on the build server.
Why does Spring Cloud Contract stub runner need local and remote attributes?
We've described it in the docs you quote. When you work offline, you want to pick the stubs automatically from your local .m2. Otherwise, you want to pick them from a different location.
I would expect it to respect the normal Maven lifecycle instead: if I run mvn clean install on the contract module, it should publish locally; if I run mvn clean deploy there, it should publish to my remote repository. The same goes for the test verifier: if there is a copy of the binaries in my local repo, use that; otherwise pull it from the remote.
You're mixing up Stub Runner with the verifier. When you're on the producer side, you use the Spring Cloud Contract verifier, and it follows the Maven lifecycle fully. That's because we produce a stub jar and attach it to the standard Maven flow. Stub Runner, however, is completely unrelated to your Maven flow.
This also seems dangerous, because you might accidentally check in code with LOCAL when you meant to change it to REMOTE on the build server.
If you check in code with LOCAL, then indeed you can get a false positive. That's why you should take care with what you're doing. When you're on the consumer side and run ./mvnw clean install/deploy, Stub Runner just follows your test setup. If you've messed up the configuration in your test setup, Stub Runner can't do much about it.
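To make this concrete, here is a hedged sketch of a consumer-side test setup (the coordinates and port are made up): LOCAL resolves the stubs from your local .m2 for offline work, and you switch to REMOTE on the build server so only published stubs are used.

    import org.springframework.boot.test.context.SpringBootTest;
    import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
    import org.springframework.cloud.contract.stubrunner.spring.StubRunnerProperties;

    @SpringBootTest
    // LOCAL: pick the stub jar from the local .m2 (offline work).
    // REMOTE: download it from the repository root (CI / build server).
    @AutoConfigureStubRunner(
            ids = "com.example:producer-service:+:stubs:8090",
            stubsMode = StubRunnerProperties.StubsMode.LOCAL)
    public class ConsumerContractTest {
        // tests call http://localhost:8090, where the stub is served
    }

One common way to reduce the accidental-check-in risk from the question is to leave the mode out of the annotation and drive it from the stubrunner.stubs-mode property, so the build server can set REMOTE without a code change.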

Is it possible to test the consumer side without Stub Runner in spring-cloud-contract

Currently I want to test the error handling of calls to other microservices on the consumer side via Spring Cloud Contract. But there are some troubles blocking me from creating stubs on the provider side, because it's difficult to share build artifacts in our Docker CI build.
I'm wondering if it's possible to just create Groovy or YAML contracts on the consumer side and then use them with a WireMock server?
There are ways to achieve it. One is to clone the producer's code and run ./mvnw clean install -DskipTests or ./gradlew publishToMavenLocal -x test to have the stubs installed without running any tests. Another option is to write your own StubDownloaderBuilder (for Finchley) that fetches the contracts via Aether, as the AetherStubDownloader does, but then also automatically converts the contracts to WireMock stubs.
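For the StubDownloaderBuilder route, the Finchley-era extension point looks roughly like the skeleton below. This is a hedged sketch, not a drop-in implementation: it assumes you can delegate the download to the real AetherStubDownloader and post-process the unpacked directory, and it leaves the actual contract-to-WireMock conversion as a comment.

    import java.io.File;
    import java.util.Map;

    import org.springframework.cloud.contract.stubrunner.AetherStubDownloader;
    import org.springframework.cloud.contract.stubrunner.StubConfiguration;
    import org.springframework.cloud.contract.stubrunner.StubDownloader;
    import org.springframework.cloud.contract.stubrunner.StubDownloaderBuilder;
    import org.springframework.cloud.contract.stubrunner.StubRunnerOptions;

    // Hypothetical builder: fetches the stub/contract jar via Aether, then
    // would convert the Groovy/YAML contracts inside it to WireMock mappings.
    public class ConvertingStubDownloaderBuilder implements StubDownloaderBuilder {

        @Override
        public StubDownloader build(StubRunnerOptions options) {
            StubDownloader delegate = new AetherStubDownloader(options);
            return new StubDownloader() {
                @Override
                public Map.Entry<StubConfiguration, File> downloadAndUnpackStubJar(
                        StubConfiguration config) {
                    Map.Entry<StubConfiguration, File> unpacked =
                            delegate.downloadAndUnpackStubJar(config);
                    // ...walk unpacked.getValue() and convert each contract
                    // file to a WireMock stub before Stub Runner reads it...
                    return unpacked;
                }
            };
        }
    }

How you register the builder depends on your release train, so check the Stub Runner customization section of the docs for your version.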
Of course both approaches are "cheating". You shouldn't use the stubs in your CI system until the producer has actually published the stubs.
Maybe instead of hacking the system, it's better to analyze the statement "it's difficult to share build artifacts in our Docker CI build" and try to fix that. Why is it difficult? What exactly is the problem?

How should we test our application in Docker container?

We have a Java application in a Docker container with a Docker Db2 database 'side-car'. In our DevOps pipeline (Jenkins) we run unit tests and integration tests between components, then run SonarQube and, if all is good, move over to the staging environment. In the Automated Testing step we build the application container from the latest code base and then run automated acceptance testing using the Cucumber framework.
The question is about the use of the database for testing: should we spin up Db2 in a new, isolated container, or use a 'common' Db2 container that the test team uses in that environment for manual testing? Best practices, proven approaches, and recommendations are welcome.
For post-deployment tests (API tests, end-to-end tests), I would try to avoid using the same database as other environments and have a dedicated database set up for those tests.
The reasons are:
For API tests and end-to-end tests, I want to control what data is available in the database. If the database is shared with other environments, the tests can fail for strange reasons (e.g. someone accidentally modifies a record that the test expects to be in some state).
For the same reason, I don't want the API tests and end-to-end tests to affect other people's testing either. It would be quite annoying if someone were in the middle of testing and realised their data had been wiped out by the post-deployment tests.
So normally in our CI pipeline, we have steps to:
clear test db
run migration
seed essential data
deploy server
run post deployment tests
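If you go with the isolated option, one way to get it is to start a throwaway database per test run. A sketch, assuming Testcontainers with its Db2 module is available (the image tag and property names are illustrative):

    import org.junit.jupiter.api.BeforeAll;
    import org.testcontainers.containers.Db2Container;

    public abstract class IsolatedDb2TestBase {

        // One disposable Db2 per test JVM: no shared state with the manual
        // testing environment, and full control over the data.
        static final Db2Container DB2 =
                new Db2Container("ibmcom/db2:11.5.0.0a").acceptLicense();

        @BeforeAll
        static void startDatabase() {
            DB2.start();
            // Run migrations and seed essential data against DB2.getJdbcUrl()
            // here (e.g. with Flyway/Liquibase) before the acceptance tests.
            System.setProperty("spring.datasource.url", DB2.getJdbcUrl());
            System.setProperty("spring.datasource.username", DB2.getUsername());
            System.setProperty("spring.datasource.password", DB2.getPassword());
        }
    }

With this setup the 'clear test db' step mostly disappears: every run starts from a fresh container, and nothing the post-deployment tests do can wipe out a colleague's manual-testing data.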

How to use Sonar+JaCoCo to measure line coverage using integration tests (manual+automated)

I am trying to do line-coverage analysis of a Java-based application. I found many resources on the internet about using the Sonar+JaCoCo plugin to get line-coverage results, and it looks very promising. However, I couldn't get full clarity on how to implement this solution.
More about my project:
There is a service called by a website. The service is Java-based and is built using Maven.
There is also a Selenium-based test suite that is run against the website (which makes calls to the above-mentioned service in several instances). The test suite is built and invoked by Ant.
The code base for the service and the code base for the tests are at different locations on the same host.
I need to generate a coverage report for the service based on the integration test suite.
The resources I went through are:
http://www.sonarsource.org/measure-coverage-by-integration-tests-with-sonar-updated/
http://www.eclemma.org/jacoco/trunk/doc/ant.html
Even after going through all of these, I am not sure where to put jacoco-agent.jar, whether to make JaCoCo part of Maven (the service's build process) or Ant (the tests' build process), how to invoke the JaCoCo agent, or where to specify the source repository (the service's code base) and test repository locations.
I have tried blind permutations of all of the above, but either the Maven build or the Ant build starts failing as soon as I add the JaCoCo tasks.
Can someone please help me out in this? I need to understand the exact steps to follow to get it done.
When you start your server process for the test run, you need to ensure the JaCoCo agent is attached to the JVM. The agent will then listen and record details of the code covered for the lifetime of the JVM.
You then execute your client-side Selenium tests, which invoke the server. The JaCoCo agent records details of the code executed as part of those tests. When the client tests finish, you need to shut down your server process, which should produce a JaCoCo coverage file.
The final step is to generate a JaCoCo HTML report from that coverage file. I might also suggest moving your Ant-based Selenium tests into your Maven POM, since it will then be easier to control the order of test execution.
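To make that concrete: attaching the agent is a JVM option on the server process, along the lines of -javaagent:/path/to/jacocoagent.jar=destfile=jacoco-it.exec (paths illustrative), and the report step can be scripted against JaCoCo's API. The sketch below is a condensed variant of JaCoCo's documented ReportGenerator example; the exec file, classes directory, and sources directory are assumptions to adapt.

    import java.io.File;

    import org.jacoco.core.analysis.Analyzer;
    import org.jacoco.core.analysis.CoverageBuilder;
    import org.jacoco.core.analysis.IBundleCoverage;
    import org.jacoco.core.tools.ExecFileLoader;
    import org.jacoco.report.DirectorySourceFileLocator;
    import org.jacoco.report.FileMultiReportOutput;
    import org.jacoco.report.IReportVisitor;
    import org.jacoco.report.html.HTMLFormatter;

    public class CoverageReport {
        public static void main(String[] args) throws Exception {
            // Load the .exec file written when the server JVM shut down.
            ExecFileLoader loader = new ExecFileLoader();
            loader.load(new File("jacoco-it.exec"));

            // Match the execution data against the service's compiled classes.
            CoverageBuilder coverage = new CoverageBuilder();
            Analyzer analyzer =
                    new Analyzer(loader.getExecutionDataStore(), coverage);
            analyzer.analyzeAll(new File("target/classes"));
            IBundleCoverage bundle = coverage.getBundle("service");

            // Render an HTML report, linking coverage back to the sources.
            IReportVisitor visitor = new HTMLFormatter()
                    .createVisitor(new FileMultiReportOutput(new File("coverage-report")));
            visitor.visitInfo(loader.getSessionInfoStore().getInfos(),
                    loader.getExecutionDataStore().getContents());
            visitor.visitBundle(bundle, new DirectorySourceFileLocator(
                    new File("src/main/java"), "utf-8", 4));
            visitor.visitEnd();
        }
    }

Sonar then only needs to be pointed at the generated .exec file (the sonar.jacoco.itReportPath property, in Sonar versions of that era) to show integration-test line coverage.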

Web-based NUnit test runner for ASP.net?

Is there any web-based test runner to run a website's unit tests from within the website itself?
I know this runs afoul of some people's dogma, but I want to be able to run unit tests from inside the product I'm testing, whether that be from inside a native Win32 executable or from within a .NET executable.
A guy has already written a web-based AJAX test runner for UnitTest++.
In the past I had to rip apart NUnit so I could run tests embedded in the executable without having to ship the NUnit DLLs. This required me to also write my own graphical test runner for Windows/WinForms.
Has anyone already done the work of creating runnable unit-tests for ASP.net?
Note: The usual response from people is: "Unit tests are not supposed to be in the final product, and definitely not accessible by testers, support, or developers when on-site."
Note: You may disagree with my desire to run unit tests, but don't let that affect your answer. If there is no web-based NUnit test runner for ASP.net, then that's the answer. Don't be afraid to answer the question. I won't bite.
I think the reason you want to do this kind of thing is the lack of a continuous integration server; I cannot think of another justification. So instead of trying to patch your design this way, it would be cleaner to evaluate setting up a CI server (which is not so difficult; for instance, you could look at NCastor).
So in my opinion what you need is to set up a continuous integration server in order to run unit tests and integration tests automatically as part of your automated build process.
You would deploy your application to the next 'stage' only when the build process and the tests pass. You can also configure the CI server to:
run tests
run static analysis
run non-static analysis
generate tests-reports
run tests with test coverage
calculate and set application version
create packages of your application
modify config files according to the target stage
minify your scripts
send emails when a build fails
Among others
And as you mentioned, you wouldn't need to deploy the tests to your production servers.
I strongly recommend reading the following article:
http://misko.hevery.com/2008/07/16/top-10-things-i-do-on-every-project/
And here is a list of some CI servers:
Jenkins - Free & Easy configuration
Hudson - Free & Easy configuration
TeamCity - Free for Open Source projects
Cruise Control - Free (however, it has decreased in popularity because configuration is only available through XML files, which is annoying...)
I found NUnitWebRunner on GitHub: https://github.com/tonyx/NUnitWebRunner
