We have a Java application in a Docker container with a Db2 database running as a Docker 'sidecar'. In our DevOps pipeline (Jenkins) we run unit tests and integration tests between components, then run SonarQube; if all is good, we move over to the Staging environment. In the Automated Testing step we build the application container from the latest code base and then run automated acceptance tests using the Cucumber framework.
The question is about the database used for testing: should we spin up Db2 in a new, isolated container, or use a 'common' DB2 container that the test team uses in that environment for manual testing? Best practices, proven approaches, and recommendations are welcome.
For post-deployment tests (API tests, end-to-end tests), I would avoid sharing a database with other environments and instead set up a dedicated database for those tests.
The reasons are:
For API and end-to-end tests, I want control over what data is available in the database. If the database is shared with other environments, the tests can fail for strange reasons (e.g. someone accidentally modifies a record that a test expects to be in a particular state).
For the same reason, I don't want the API and end-to-end tests to affect other people's testing either. It is quite annoying to be in the middle of testing and realise the data has been wiped out by the post-deployment tests.
So normally in CI we have steps to (a concrete sketch follows the list):
clear test db
run migration
seed essential data
deploy server
run post deployment tests
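To make the isolated-database option from the question concrete: on a JVM stack, one common way to get a throwaway Db2 per run is Testcontainers. This is only a sketch under assumptions (the org.testcontainers:db2 module, a Docker daemon reachable from the build agent, and the image tag shown), not a prescription:

```java
// Sketch: start a disposable Db2 container for this test run, hand its JDBC
// coordinates to the migration/seed steps, then discard it. Image tag and
// wiring are placeholders to adapt.
import org.testcontainers.containers.Db2Container;
import org.testcontainers.utility.DockerImageName;

public class ThrowawayDb2ForAcceptanceTests {
    public static void main(String[] args) {
        try (Db2Container db2 = new Db2Container(DockerImageName.parse("ibmcom/db2:11.5.0.0a"))
                .acceptLicense()) {
            db2.start();
            String jdbcUrl  = db2.getJdbcUrl();   // feed these into the steps above:
            String user     = db2.getUsername();  // clear/migrate/seed, then point the
            String password = db2.getPassword();  // application container and Cucumber suite here
            // ... run migrations, seed essential data, run the acceptance tests ...
        } // container and all its data are discarded here, so nothing leaks into shared environments
    }
}
```

Because every run starts from a clean, private database, the test data cannot be disturbed by (or disturb) the shared DB2 that the test team uses for manual testing.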
My organization is looking to implement CI/CD in our deployment process, and I've been tasked with setting up the test automation part of the pipeline. I did a lot of reading and have an idea of how the setup should look. Could you go through my idea, advise whether it is indeed the correct setup, and suggest improvements if needed?
Here's how I picture the setup. My organization has three environments: Development, Testing, and Production. The test automation scripts in the pipeline will be executed in the Testing environment.
A developer checks in code in the Development environment.
This triggers a deployment to the Testing environment.
Once the deployment is complete, it triggers the test automation scripts in the repo via their test suite IDs.
Each pipeline points to a specific test suite ID.
The automation scripts perform regression testing to ensure the new changes don't break existing functionality.
At the same time, testers do manual testing in the Testing environment to ensure the new changes work as intended.
If both the automated tests and the manual testing pass, we proceed to the next stage (UAT approval and then PROD deployment).
Would this be the correct process flow?
Many thanks in advance!
I am in the process of converting our legacy custom database deployment process, with its custom-built tools, into a full-fledged SSDT project. So far everything has gone very well. I have composite projects that can deploy a base database as well as projects that deploy sample and test data.
The problem I am having now is finding a solution for running some sort of code that can call a web service to get an activation code and add it to the database as the final step of the process. Can anyone point me to a hook that I might be able to use?
UPDATE: To be clearer I am doing this to make it easier to maintain and deploy our sample and test data to a local machine. We can easily use Jenkins to activate the sites when they are deployed nightly to our official testing environments. I'm just hoping to be able to do this in a single step to replace the homegrown database deploy tool that we use now.
In my deployment scenario I wrapped the database deployment process in PowerShell scripts that handle the necessary prerequisites. For example:
the PowerShell script starts and first stops some services
next it runs sqlpackage.exe or pre-produced SQL deployment scripts
finally, the PowerShell script restarts the services.
You can pass parameters from PowerShell to the SQL scripts or sqlpackage.exe as sqlcmd variables. So you can call the web service first, then pass the activation code as a sqlcmd variable and use that variable in the post-deployment script.
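Purely as an illustration of that sequence outside PowerShell (Java is used here only because it appears elsewhere in this thread; the web-service URL, file names, and variable name are invented), the flow of "call the service, then pass the code to sqlpackage.exe as a sqlcmd variable" could look roughly like this, with the post-deployment script reading it as $(ActivationCode):

```java
// Rough sketch, not the answerer's actual script: fetch an activation code over
// HTTP, then pass it to sqlpackage.exe as a sqlcmd variable. All names, paths,
// and URLs below are placeholders.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeployWithActivationCode {
    public static void main(String[] args) throws Exception {
        // 1. Call the (hypothetical) activation web service.
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://activation.example.internal/new-code")).build();
        String activationCode = http.send(request, HttpResponse.BodyHandlers.ofString()).body().trim();

        // 2. Run the dacpac deployment, handing the code over as a sqlcmd variable;
        //    the post-deployment script can then reference $(ActivationCode).
        Process deploy = new ProcessBuilder(
                "sqlpackage.exe",
                "/Action:Publish",
                "/SourceFile:MyDatabase.dacpac",
                "/TargetServerName:localhost",
                "/TargetDatabaseName:MyDatabase",
                "/v:ActivationCode=" + activationCode)
            .inheritIO()
            .start();
        System.exit(deploy.waitFor());
    }
}
```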
Particularly if it's the final step, I'd be tempted to do this separately, using whatever tool you're using to do the deployment: PowerShell scripts, MSBuild, TFS, Jenkins, whatever. Presumably there's also a front-end of some description that gets provisioned this way?
SSDT isn't a do-everything Swiss Army knife; it's a set of tools for managing database changes.
I suspect if the final step were "provision a Google App Engine Instance and deploy a Python script", for example, it wouldn't appear to be a natural candidate for inclusion in an SSDT post-deploy script, and I reckon this falls into the same category.
We have a complete Lab Management environment running Coded UI tests in nightly builds. What we are trying to achieve is to run our integration tests (regular TestMethod() with SQL connections) just before all the Coded UI tests to verify that our db scripts are executed correctly and that there are no new changes causing any problems.
So far I have found a way to execute tests remotely through .testrunconfig. The problem with that approach is that it's not possible to choose a test controller connected to a team project, so I guess it would only be useful for running tests on physical machines outside of Lab Management?
One option seems to be to create a Test Case for each integration test so that they run together with the UI tests, but it feels like too much maintenance to manage hundreds of test cases just to run the integration tests. Also, it would be better to completely separate the test runs for the different kinds of tests.
Is there any easy way to achieve this that I have totally missed? Or do I have to modify the lab build template to deploy and run the tests?
I guess that would be only useful for running tests on physical machines outside of Lab Management?
If you run your tests remotely through .testrunconfig you have to connect the Test Agent to another Test Controller which is NOT connected to the team project.
Unfortunately, to my knowledge, that is not possible for environments running under Lab Management.
What about this approach:
Create an Ordered Test containing all your integration tests.
Create a new Test Case "Integration Tests" and automate it with the Ordered Test.
That way you do not have to maintain hundreds of Test Cases.
You could also create several Ordered Tests if you want to group the integration tests, and then create a "main" Ordered Test containing them.
This way it will be easier to analyze test results especially if you have a lot of tests.
Let the integration tests run as a part of your existing nightly build.
Create a new Build Definition which does not start a build but uses the last successful nightly build, and let your Coded UI tests run using the Lab Build Template.
This way you will have different test runs for the different kinds of tests.
The only drawback is that you have to "synchronize" these two builds...
You could just schedule the second build later so you can be sure the first build is done.
It's not really perfect, I know... but this way you could easily achieve your goal.
I am not sure if there is an alternative solution, but on the project I am currently working on we have both our Unit and Integration Test Assemblies set under the Process options (Process>Basic>AutomatedTest>TestAssembly) in our Nightly Build. This was achieved through altering the Default Build Process Template (not the Lab Default) a bit, as you suggested (I thought this was standard, but it's been a while).
Is there any web-based test runner that can run a web site's unit tests from the web site itself?
I know this runs afoul of some people's dogma, but I want to be able to run unit tests from inside the product I'm testing, whether that is from inside a native Win32 executable or from within a .NET executable.
A guy has already written a web-based AJAX test runner for UnitTest++.
In the past I had to rip apart NUnit so I could run tests embedded in the executable without having to ship the NUnit DLLs. This also required me to write my own graphical test runner for Windows/WinForms.
Has anyone already done the work of creating runnable unit tests for ASP.NET?
Note: The usual response by people: "Unit tests are not supposed to be in the final product, and definitely not accessible by testers, support, or developers, when on-site."
Note: You may disagree with my desire to run unit tests, but don't let that affect your answer. If there is no web-based NUnit test runner for ASP.NET, then that's the answer. Don't be afraid to answer the question; I won't bite.
I think the reason you want to do this kind of thing is the lack of a Continuous Integration server; I cannot think of another justification. So instead of patching your design by doing this, it would be cleaner to evaluate setting up a CI server (which is not so difficult; for instance, you could look at NCastor).
So, in my opinion, what you need is to set up a continuous integration server to run unit tests and integration tests automatically as part of your automated build process.
You would deploy your application to the next 'stage' only when the build and the tests pass. You can also configure the CI server to:
run tests
run static analysis
run non-static analysis
generate test reports
run tests with test coverage
calculate and set application version
create packages of your application
modify config files according to the target stage
minify your scripts
send emails when a build fails
Among others.
And as you mentioned, you wouldn't need to deploy the tests to your production servers.
I strongly recommend reading the following article:
http://misko.hevery.com/2008/07/16/top-10-things-i-do-on-every-project/
And here is a list of some CI servers:
Jenkins - Free & Easy configuration
Hudson - Free & Easy configuration
TeamCity - Free for Open Source projects
CruiseControl - Free (however, it has decreased in popularity because configuration is only available through XML files, which is annoying...)
Found NUnitWebRunner on GitHub: https://github.com/tonyx/NUnitWebRunner
Our team has hundreds of integration tests that hit a database and verify results. I've got two base classes for all the integration tests, one for retrieve-only tests and one for create/update/delete tests. The retrieve-only base class regenerates the database during the TestFixtureSetup so it only executes once per test class. The CUD base class regenerates the database before each test. Each repository class has its own corresponding test class.
As you can imagine, this whole thing takes quite some time (approaching 7-8 minutes to run and growing quickly). Having this run as part of our CI (CruiseControl.Net) is not a problem, but running locally takes a long time and really prohibits running them before committing code.
My question is: are there any best practices to help speed up the execution of these types of integration tests?
I'm unable to execute them in-memory (à la SQLite) because we use some database-specific functionality (computed columns, etc.) that isn't supported in SQLite.
Also, the whole team has to be able to execute them, so running them on a local instance of SQL Server Express or something similar could be error-prone unless the connection strings are all the same for those instances.
How are you accomplishing this in your shop and what works well?
Thanks!
Keep your fast (unit) and slow (integration) tests separate, so that you can run them separately. Use whatever method for grouping/categorizing the tests is provided by your testing framework. If the testing framework does not support grouping the tests, move the integration tests into a separate module that has only integration tests.
The fast tests should take only a few seconds to run in total and should have high code coverage. These kinds of tests allow the developers to refactor ruthlessly, because they can make a small change, run all the tests, and be very confident that the change did not break anything.
The slow tests can take many minutes to run and they will make sure that the individual components work together right. When the developers do changes that might possibly break something which is tested by the integration tests but not the unit tests, they should run those integration tests before committing. Otherwise, the slow tests are run by the CI server.
In NUnit you can decorate your test classes (or methods) with an attribute, e.g.:
[Category("Integration")]
public class SomeTestFixture{
...
}
[Category("Unit")]
public class SomeOtherTestFixture{
...
}
You can then stipulate in the build process on the server that all categories get run and just require that your developers run a subset of the available test categories. What categories they are required to run would depend on things you will understand better than I will. But the gist is that they are able to test at the unit level and the server handles the integration tests.
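For JUnit-based projects the same split can be expressed with tags. A minimal sketch follows (class and test names are invented); the build server can run everything, while developers filter out the slow group through their build tool's tag filter (e.g. Maven Surefire groups or Gradle's includeTags/excludeTags):

```java
// JUnit 5 equivalent of NUnit categories: tag the slow tests so CI runs everything
// while developers exclude them locally. Names below are illustrative only.
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

@Tag("integration")
class OrderRepositoryIntegrationTest {
    @Test
    void savesAndReloadsAnOrder() {
        // talks to a real database
    }
}

@Tag("unit")
class PriceCalculatorTest {
    @Test
    void addsLineItemTotals() {
        // pure in-memory logic, runs in milliseconds
    }
}
```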
I'm a Java developer but have dealt with a similar problem. I found that running a local database instance works well because of the speed (no data to send over the network) and because you don't have contention on your integration-test database.
The general approach we use to solving this problem is to set up the build scripts to read the database connection strings from a configuration file, and then set up one file per environment. For example, one file for WORKSTATION, another for CI. Then you set up the build scripts to read the config file based on the specified environment. So builds running on a developer workstation run using the WORKSTATION configuration, and builds running in the CI environment use the CI settings.
It also helps tremendously if the entire database schema can be created from a single script, so each developer can quickly set up a local database for testing. You can even extend this concept to the next level and add the database setup script to the build process, so the entire database setup can be scripted to keep up with changes in the database schema.
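A minimal sketch of that idea in Java (the file layout, property key, and TEST_ENV variable are assumptions; the point is that the same build reads WORKSTATION or CI settings depending on where it runs):

```java
// Hypothetical sketch: pick the connection string from an environment-specific
// properties file (e.g. config/WORKSTATION.properties or config/CI.properties),
// selected by a TEST_ENV variable that defaults to the developer workstation.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class TestDatabaseConfig {
    public static String jdbcUrl() throws IOException {
        String env = System.getenv().getOrDefault("TEST_ENV", "WORKSTATION");
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("config/" + env + ".properties")) {
            props.load(in);
        }
        return props.getProperty("db.url"); // e.g. jdbc:sqlserver://localhost;databaseName=AppTest
    }
}
```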
We have an SQL Server Express instance with the same DB definition running for every dev machine as part of the dev environment. With Windows authentication the connection strings are stable - no username/password in the string.
What we would really like to do, but haven't yet, is see if we can get our system to run on SQL Server Compact Edition, which is like SQLite with SQL Server's engine. Then we could run them in-memory, and possibly in parallel as well (with multiple processes).
Have you done any measurements (using timers or similar) to determine where the tests spend most of their time?
If you already know that the database recreation is why they're time-consuming, a different approach would be to regenerate the database once and use transactions to preserve state between tests. Each CUD-type test starts a transaction in setup and performs a rollback in teardown. This can significantly reduce the time spent on database setup for each test, since a transaction rollback is much cheaper than a full database recreation.
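A minimal sketch of that pattern in JUnit terms (the same idea applies with NUnit and a TransactionScope); the connection string and table are placeholders:

```java
// Hypothetical sketch: wrap each test in a transaction and roll it back afterwards,
// so the database only has to be created and seeded once for the whole suite.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class OrderRepositoryRollbackTest {
    // Placeholder connection string; point it at the already-created test database.
    private static final String JDBC_URL =
            "jdbc:sqlserver://localhost;databaseName=AppTest;integratedSecurity=true";

    private Connection connection;

    @BeforeEach
    void beginTransaction() throws SQLException {
        connection = DriverManager.getConnection(JDBC_URL);
        connection.setAutoCommit(false); // everything the test does stays in one open transaction
    }

    @AfterEach
    void rollBack() throws SQLException {
        connection.rollback();           // cheap compared to recreating the database
        connection.close();
    }

    @Test
    void insertsAnOrder() throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            stmt.executeUpdate("INSERT INTO Orders (Id, Total) VALUES (1, 9.99)");
            // assertions made on the same connection see the uncommitted row
        }
    }
}
```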