WSO2 automated tests

I need to run some performance tests on a WSO2 platform deployed on on-premises servers. The platform includes a WSO2 5.7.0 cluster, a WSO2 APIM 2.6.0 cluster, and a WSO2 EI 6.4.0 cluster. While searching, I found very useful information about how to do automated tests:
https://medium.com/@kasunbg/introducing-wso2-testgrid-89089fe9efb0
https://github.com/wso2/testgrid
WSO2 TestGrid builds a complete test scenario from scratch, including infrastructure provisioning and WSO2 product installation. In my case, I just need to execute performance tests, but unfortunately I cannot find how to do that in the repository's documentation.
Does anyone have access to extra documentation about the WSO2 TestGrid project?

WSO2 TestGrid was developed to test WSO2 products across different infrastructure combinations. TestGrid is intended to be used in-house, hence there is very little documentation around it for external users to refer to. Here is a slide deck you can refer to in order to understand the problem TestGrid tries to address and the architecture behind it.
Coming back to your question: I don't think TestGrid is what you need. You can look at performance-apim, performance-is, and performance-ei to performance-test the deployments. These are also a bit complex to set up, so what I would suggest is to refer to the above repos and come up with your own set of scripts (you can use JMeter) to performance-test the deployment with scenarios relevant to your business requirements.
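For illustration only, here is a minimal load-generation sketch in plain Java (using java.net.http rather than JMeter) that fires concurrent requests at a single API and reports throughput and latency percentiles. The gateway URL, port, and bearer token are placeholders you would replace with values from your own deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleLoadTest {

    public static void main(String[] args) throws Exception {
        // Hypothetical gateway endpoint; replace with an API exposed by your own cluster.
        URI target = URI.create("https://gateway.example.com:8243/myapi/1.0.0/resource");
        int concurrency = 50;      // simulated parallel users
        int requestsPerUser = 100; // requests each user sends

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        ExecutorService pool = Executors.newFixedThreadPool(concurrency);
        List<Future<List<Long>>> results = new ArrayList<>();

        long started = System.nanoTime();
        for (int u = 0; u < concurrency; u++) {
            results.add(pool.submit(() -> {
                List<Long> latencies = new ArrayList<>();
                for (int i = 0; i < requestsPerUser; i++) {
                    HttpRequest request = HttpRequest.newBuilder(target)
                            .header("Authorization", "Bearer <access-token>") // placeholder token
                            .GET()
                            .build();
                    long t0 = System.nanoTime();
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                    latencies.add((System.nanoTime() - t0) / 1_000_000); // elapsed ms
                }
                return latencies;
            }));
        }

        // Collect per-request latencies from all simulated users
        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> f : results) {
            all.addAll(f.get());
        }
        pool.shutdown();

        long elapsedMs = (System.nanoTime() - started) / 1_000_000;
        all.sort(Long::compareTo);
        System.out.printf("requests=%d, throughput=%.1f req/s, p50=%dms, p95=%dms%n",
                all.size(),
                all.size() * 1000.0 / elapsedMs,
                all.get(all.size() / 2),
                all.get((int) (all.size() * 0.95)));
    }
}

A real JMeter plan would add ramp-up, assertions, and listeners, but the shape of the measurement (concurrent users, latency percentiles, throughput) is the same.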

Related

How to use dotnet-coverage connect utility?

I have a .NET ASP.NET REST API service that needs integration testing with code coverage measured.
I currently have code coverage working for unit tests.
The test harness uses Postman (Newman for the CLI), issuing requests to a running instance of the API service.
I was considering using the utility dotnet-coverage connect to collect the code coverage data during testing. However, I can't find any examples of its usage.
Documentation: dotnet-coverage connect
Does anyone have any experience with this approach? Is there any other technique I should consider?
Thanks in advance.

How to test WSO2 APIM performance

I have deployed wso2am-4.0.0 on OKD, with 2 gateways (gw), 2 control planes (cp), and 2 traffic managers (tm). I'm going to run a performance test for WSO2 API Manager. I got the docs from here, but I don't know how to configure them. Can anyone guide me?
Although these scripts are publicly available, they seem to have been developed specifically for in-house use, so trying to use them directly may be overkill. I would suggest you look at some of the test scenarios available in that repo and build your own test script for your specific use cases.

Deploying microservice to be tested within the test [duplicate]

This question already has an answer here:
Is there a way to run Karate tests as an integration test suite against a pre-booted Spring Boot server? (1 answer)
Closed 1 year ago.
Maybe this is not possible to do generically in a test framework, but I would like to be able to deploy the microservice I am testing within the test itself. I have looked at Citrus, REST Assured, and Karate, and have listened to countless talks and read countless blogs, but I never see how to do this first stage. It always seems to be assumed that the microservice is pre-deployed.
Honestly, it depends on how your microservice is deployed and which infrastructure you are targeting. I prefer to integrate the deployment into the Maven build, as Maven provides pre- and post-integration-test phases.
If you can use Kubernetes or Docker, I would recommend integrating the deployment with the fabric8 Maven plugins (fabric8-maven-plugin, docker-maven-plugin). That would automatically create/start/stop the Docker container deployment within the Maven build.
If you use Spring Boot, the official spring-boot-maven-plugin can do the same.
Another possibility would be to use build pipelines, where a continuous build with Jenkins, for example, deploys the system under test first and then executes the tests.
I personally prefer to always separate deployment and testing tasks. If you really want to do the deployment within your test, the Citrus framework is able to start/stop Docker containers and/or Kubernetes pods within the test. Citrus can also hook these deployment tasks into before/after test-suite phases; a sketch of the in-test container approach is shown below.
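To illustrate the in-test approach, here is a hedged sketch using Testcontainers rather than Citrus (a different library, named plainly). The JUnit 5 test starts the service's Docker image before the test and stops it afterwards; the image name, port, and /health endpoint are hypothetical:

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import static org.junit.jupiter.api.Assertions.assertEquals;

@Testcontainers
class MicroserviceInContainerTest {

    // Hypothetical image built by your CI for the service under test
    @Container
    static final GenericContainer<?> service =
            new GenericContainer<>(DockerImageName.parse("my-registry/orders-service:latest"))
                    .withExposedPorts(8080);

    @Test
    void healthEndpointResponds() throws Exception {
        // Testcontainers maps the container port to a random free host port
        String baseUrl = "http://" + service.getHost() + ":" + service.getMappedPort(8080);

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(baseUrl + "/health")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}

The Maven-phase approach described above achieves the same lifecycle outside the test class, which keeps test code free of deployment concerns; the in-test variant trades that separation for a fully self-contained test.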
I found a way to do it using docker-compose.
https://github.com/djangofan/karate-test-prime-example
Basically, make a docker-compose.yml that runs your service container and then also runs e2e tests after a call to wait-for-it.sh.
Two points:
The karate-demo is a Spring Boot example that is deployed by the JUnit test runner. Here is the code that starts the server; a minimal sketch of this pattern appears at the end of this answer.
The karate-mock-servlet takes things one step further: you can run HTTP integration tests within your project without booting an app server. This saves time, and code-coverage reports are easier to produce.
If you have any requirements beyond these, I'd be happy to hear them. One of the things we would like to implement is a built-in server-side mocking framework: think embedded WireMock, but with the ease of Karate's DSL. There is no concrete timeline yet.
EDIT: Karate has mocking now: https://github.com/intuit/karate/tree/master/karate-netty
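For the first point above, here is a minimal sketch of the pattern using the newer karate-junit5 runner. DemoApplication is a placeholder for your own @SpringBootApplication class, and the karate-demo project itself wires this up slightly differently:

import com.intuit.karate.junit5.Karate;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ConfigurableApplicationContext;

class ApiIntegrationTest {

    static ConfigurableApplicationContext app;

    @BeforeAll
    static void startServer() {
        // DemoApplication is a placeholder for your @SpringBootApplication class
        app = SpringApplication.run(DemoApplication.class, "--server.port=8080");
    }

    @AfterAll
    static void stopServer() {
        app.close();
    }

    @Karate.Test
    Karate apiFeatures() {
        // runs all *.feature files located next to this test class
        return Karate.run().relativeTo(getClass());
    }
}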

Test automation for microservices architecture

I am in charge of implementing QA processes and test automation for a project using a microservices architecture.
The project has one public API that makes some data available, so I will automate API tests. The tests will live in one repository. This part is clear to me; I did this before in other monolith projects, where I had one repo for API tests and possibly another repo for Selenium tests.
But here the whole product consists of many microservices that communicate via RESTful APIs and/or RabbitMQ queues. How would I go about automating tests for each of these individual services? Would the tests for each individual service live in a separate repo? Note: the services are written in Java or PHP; I will automate the tests with Python. It seems to me that I will end up with a lot of repos for tests/stubs/mocks.
What suggestions or good resources can the community offer? :)
Keep unit and contract tests with the microservice implementation.
Component tests make sense in the context of composite microservices, so keep them together.
Have the integration and E2E tests in a separate repo, grouped by use cases.
For this kind of testing I like to use Pact. (I know you said Python, but I couldn't find anything similar in that space, so I hope you (or other people searching) will find this excellent Ruby gem useful.)
For testing from outside in, you can just use the proxy component - hope this at least gives you some ideas.
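Since the services in question are written in Java, a Pact consumer test on the JVM would look roughly like the sketch below (pact-jvm with JUnit 5; the provider/consumer names, endpoint, and payload are hypothetical). The test runs against a Pact mock server and produces a contract file that the provider build later verifies:

import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.RequestResponsePact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
@PactTestFor(providerName = "order-service")
class OrderClientPactTest {

    // The contract: what the consumer ("billing-service") expects from "order-service"
    @Pact(consumer = "billing-service")
    RequestResponsePact orderById(PactDslWithProvider builder) {
        return builder
                .given("order 42 exists")
                .uponReceiving("a request for order 42")
                .path("/orders/42")
                .method("GET")
                .willRespondWith()
                .status(200)
                .headers(Map.of("Content-Type", "application/json"))
                .body("{\"id\": 42, \"status\": \"PAID\"}")
                .toPact();
    }

    @Test
    void fetchesOrderFromMockProvider(MockServer mockServer) throws Exception {
        // The mock server stands in for order-service and verifies the interaction above
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(mockServer.getUrl() + "/orders/42")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
    }
}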
Give each microservice its own code repository, and add one more for the cross-service end-to-end tests.
Inside a microservice's repository, keep everything that relates to that service, from code and tests to documentation and pipeline:
root/
  app/
    source-code/
    unit-tests/ (also: integration-tests, component-tests)
  acceptance-tests/
  contract-tests/
Keep everything that your build step uses in one folder (here: app), probably with sub-folders to distinguish source code from unit tests, integration tests, and component tests.
Put tests that run in later stages of the delivery pipeline, like acceptance tests and contract tests, in their own folders. This keeps them visually separate. It also simplifies creating separate build/test steps for them, for example by giving each its own pom.xml when using Maven.
If a developer changes a feature, he will need to change the tests at the exact same time to ensure the two fit together. Keeping code and tests in the same repository keeps the two in sync in a natural way.

How to write ASP.NET API Integration tests

To everyone who took the time to read my question: I want to point out that I'm writing integration tests, NOT unit tests.
Using the definition of integration test provided by the sites listed at the bottom of the question:
Integration tests do not use mock objects to substitute implementations for service dependencies. Instead, integration tests rely on the application's services and components. The goal of integration tests is to exercise the functionality of the application in its normal run-time environment.
My question is: what is the best practice for writing integration tests for ASP.NET Web API? At the moment I'm using the in-memory hosting approach from Filip W.'s blog post.
My second question is: how do you ensure that your test data is there and is correct when you're not mocking (MSDN and other sites clearly say that integration tests do not mock databases)? The internet is filled with examples of how to write extremely simple integration tests, but has zero examples for a more complex API (anything that goes further than returning 1).
Reference Sites:
https://msdn.microsoft.com/en-us/library/ff647876.aspx
https://msdn.microsoft.com/en-us/library/vstudio/hh323698(v=vs.100).aspx
http://www.codeproject.com/Articles/44276/Unit-Testing-and-Integration-Testing-in-Business-A
http://blog.stevensanderson.com/2009/06/11/integration-testing-your-aspnet-mvc-application/
Filip W.'s in-memory hosting:
http://www.strathweb.com/2012/06/asp-net-web-api-integration-testing-with-in-memory-hosting/
Have you seen my answer over at this other SO question here? I will pad this out with the additional information below.
In our release pipeline (using Visual Studio Release Management 2013) we provision a nightly integration database from a known test script by creating the database from scratch (all scripted); initially we cloned production, but as the data grew this became too time-consuming as part of the nightly integration build. After the database is provisioned, we do the same with the integration VM web servers and deploy the latest build to that environment. After these come up, we run our unit tests again from the command line as part of the release pipeline, this time including the tests decorated with the custom action filter I described in the linked answer.
