When unit testing Corda, it runs an H2 database. Given that I run PostgreSQL in production, I would like to run PostgreSQL during unit testing as well, to close this gap rather than leave it to integration testing. There are embedded PostgreSQL libraries that can "mock" PostgreSQL.
How can I override the default H2 database in Corda to run an embedded PostgreSQL instead during unit testing?
Short answer: you will not be able to customize the default database for unit testing.
The reason is that unit tests use the mock network service, and H2 is currently the default database.
PS: But we have thought about tooling for a customized database for testing.
Related
We have a Java application in a Docker container with a Docker DB2 database 'side-car'. In our DevOps pipeline (Jenkins) we run unit tests and integration tests between components, run SonarQube, and if all is good, we move over to the staging environment. In the automated testing step we build the application container using the latest code base, then run automated acceptance testing using the Cucumber framework.
The question is about the use of the database for testing: should we spin up DB2 in a new/isolated container, or use a 'common' DB2 container that the test team uses in that environment for manual testing? Best practices, proven approaches, and recommendations are needed.
For post-deployment tests (API tests, end-to-end tests), I would try to avoid sharing a database with other environments and instead have a dedicated database setup for those tests.
The reasons are:
For API tests and end-to-end tests, I want to have control over what data is available in the database. If the database is shared with other environments, tests can fail for strange reasons (e.g. someone accidentally modifies a record that a test expects to be in a particular state).
For the same reason, I don't want the API and end-to-end tests to affect other people's testing either. It would be quite annoying for someone in the middle of testing to realise the data has been wiped out by the post-deployment tests.
So normally in the CI, we have steps to (a sketch of the database-related steps follows this list):
clear test db
run migration
seed essential data
deploy server
run post deployment tests
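For the database steps above, a rough C# sketch of a helper the CI job could call before the post-deployment tests (the connection string, table names, and seed data are made-up placeholders; the migration step is normally delegated to whatever migration tool you use):

using System.Data.SqlClient;

// Hypothetical helper the CI pipeline runs before post-deployment tests.
// Point it only at the dedicated test database, never at a shared environment.
public static class TestDatabaseReset
{
    public static void ClearAndSeed(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Clear test db (table names are placeholders).
            Execute(connection, "DELETE FROM Orders; DELETE FROM Customers;");

            // Run migration: typically a call out to your migration tool goes here.

            // Seed essential data that the tests rely on.
            Execute(connection, "INSERT INTO Customers (Id, Name) VALUES (1, 'seed-customer');");
        }
    }

    private static void Execute(SqlConnection connection, string sql)
    {
        using (var command = new SqlCommand(sql, connection))
        {
            command.ExecuteNonQuery();
        }
    }
}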
Are there any automated testing mechanisms available for Azure Data Factory pipelines? Does the Azure Data Factory Visual Studio project come with a test suite of its own? Any help highly appreciated.
Thanks
EDIT after comment:
You could use a GitHub repository (gbrueckl/Azure.DataFactory.LocalEnvironment) for running custom pipelines locally, etc. The repository provides some tools which make it easier to work with Azure Data Factory (ADF). It mainly contains two features:
Debug Custom .Net Activities locally (within VS and without deployment to the ADF Service!)
Export existing ADF Visual Studio projects to an Azure Resource Manager (ARM) template for deployment
In addition, the repository also contains various samples to demonstrate how to work with the ADF Local Environment.
https://github.com/gbrueckl/Azure.DataFactory.LocalEnvironment is the link to the repository. You can use it to debug your pipelines in your local environment at least, and it could also help to test them.
Not that I'm aware of, but happy to be told otherwise.
I suggest you post this on Microsoft's User Voice page as a feedback idea. Then people searching will come here, go to that link, and vote to get something developed.
https://feedback.azure.com/forums/270578-data-factory/filters/my_feedback?query=Unit%20Testing%20for%20ADF%20Projects
Hope this helps.
The only related project/sample code I'm aware of is the ability to step into and debug a "DotNetActivity" outside of Azure Batch, using your pipeline variables.
This could effectively be used as a test runner of some description.
If you have your data factory setup to be automatically deployed, you could deploy to an alternative QA environment.
Then you could probably (I haven't dug into the SDK in that area enough to know for sure) use the SDK to run the slice and check whether it ran successfully. It would be more of an integration test / end-to-end smoke test at that point.
ADFCustomActivityRunner on GitHub
I'm new to DevOps, so forgive me if this is trivial, but given the following workflow, what is the purpose of the integration server?
I've been given the following steps as an example of an approach to DevOps at my organisation:
Developers check in changes to source control (TFS).
Build server checks for changes.
Artefacts of the build are deployed to an "integration server" which has a copy of our ERP on it.
A release management application takes the output from this ERP environment and moves it to test, pre-production, and production environment as and when.
Is this approach correct, and if so, is the purpose of an integration server merely to provide a working implementation of the code that isn't used for anything other than moving code onto other servers?
My answer makes some assumptions based on what it sounds like is going on in your environment.
When you check changes into source control with AX, it adds *.xpo text files containing only the code/objects you changed.
It sounds like your "integration server" is a build/staging server. Imagine these two scenarios:
You have a customization with 3 objects, and you add 2 of the objects to source control and forget one. When you build on the integration server, it could have compile errors because of that missing dependent object.
In your development environment, you create test forms and jobs that are basically junk you are experimenting with. You do not add these objects to source control. You wouldn't want this code to be deployed to your other environments, so the integration environment ensures the code is strictly from the repo.
Doing full compiles/syncs against the integration environment will also help identify issues. Then you can deploy that environment in its entirety to your other environments.
The big thing to realize is that your repo is really only your changes to the base (sys/syp) code. So part of the integration/build process is your code & base code combining.
I'm unit testing an ASP.NET MVC 3 web application that our team is building.
The problem is that we have to mock a lot of things, and our unit tests don't cover all of the webserver- and database-related stuff.
Example:
I have a method with the following code:
public List<Useraccount> GetUseraccounts(Company company)
{
    return company.Useraccounts.ToList<Useraccount>();
}
My developer complains that he has to inject a fake company object that he prepares himself. He'd like a real object from the database.
My question:
Is it possible to use a real database (could also be SQLite/SQL Server Express or something) in unit tests? Is this useful?
What are the pros and cons?
Without a real database we need to mock too many objects. We cannot verify that, for example, calls like this are working:
Useraccount useraccount = UnitOfWork.UseraccountRepository.Get(u => u.EnableCode == enableCode && u.IsEnabled == false).Single<Useraccount>();
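For illustration, the kind of mock-heavy test we end up writing looks roughly like this (a sketch only; the entity shapes and the service class are simplified stand-ins, not our real code):

using System.Collections.Generic;
using System.Linq;
using NUnit.Framework;

// Minimal stand-ins for the entities in the question (their real shapes are assumed).
public class Useraccount { public string EnableCode; public bool IsEnabled; }
public class Company { public ICollection<Useraccount> Useraccounts; }

public class UseraccountService
{
    public List<Useraccount> GetUseraccounts(Company company)
    {
        return company.Useraccounts.ToList();
    }
}

[TestFixture]
public class UseraccountServiceTests
{
    [Test]
    public void GetUseraccounts_ReturnsTheCompanysAccounts()
    {
        // The fake object graph is built by hand instead of loaded from the database.
        var company = new Company
        {
            Useraccounts = new List<Useraccount>
            {
                new Useraccount { EnableCode = "abc", IsEnabled = false }
            }
        };

        var result = new UseraccountService().GetUseraccounts(company);

        Assert.AreEqual(1, result.Count);
    }
}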
Testing against a real database is integration testing, not unit testing. You can still run the integration tests in the same manner as unit tests - that is, run them via NUnit or MSTest or whatever, from a command line or some such on your build server - but there are a couple of extra steps:
You need to set up the test data, inject it into the database, run the test, then remove the test data again (see the sketch after these steps). In an ideal world, you'd create a test database at the start of your integration test run, run all the tests, then remove it after all integration tests are finished. This can be impractical, though.
Your integration tests will run much more slowly than unit tests. Be prepared for this by running them in a nightly build server job, for example.
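A sketch of that setup/teardown shape with NUnit (the connection string, table, and column names are placeholders):

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
[Category("Integration")]
public class UseraccountRepositoryIntegrationTests
{
    // Dedicated test database only; never point this at shared data.
    private const string ConnectionString =
        "Server=.;Database=MyAppTests;Integrated Security=true";

    [SetUp]
    public void InsertTestData()
    {
        Execute("INSERT INTO Useraccounts (EnableCode, IsEnabled) VALUES ('abc', 0)");
    }

    [TearDown]
    public void RemoveTestData()
    {
        Execute("DELETE FROM Useraccounts WHERE EnableCode = 'abc'");
    }

    [Test]
    public void Get_FindsDisabledAccountByEnableCode()
    {
        // Exercise the real repository against the real database here.
    }

    private static void Execute(string sql)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}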
In terms of using SQLite or whatever, I'd say not to; use the exact type of database you're using in the real world, otherwise it's not a trustworthy test.
Try the Effort tool. Effort is a powerful tool that enables a convenient way to create automated tests for Entity Framework-based applications.
It is basically an ADO.NET provider that executes all data operations on a lightweight in-process in-memory database instead of a traditional external database. It provides some intuitive helper methods too that make it really easy to use this provider with existing ObjectContext or DbContext classes. A simple addition to existing code might be enough to create data-driven tests that can run without the presence of an external database.
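If I remember the wiring correctly, it looks roughly like this (a sketch; the entity and context are modeled on the question, and your DbContext needs a constructor that accepts a DbConnection):

using System.Data.Common;
using System.Data.Entity;
using System.Linq;
using NUnit.Framework;

public class Useraccount
{
    public int Id { get; set; }
    public string EnableCode { get; set; }
    public bool IsEnabled { get; set; }
}

public class MyContext : DbContext
{
    // Extra constructor lets tests pass in Effort's in-memory connection.
    public MyContext(DbConnection connection) : base(connection, contextOwnsConnection: true) { }

    public DbSet<Useraccount> Useraccounts { get; set; }
}

[TestFixture]
public class EffortBackedTests
{
    [Test]
    public void CanFindDisabledAccountByEnableCode()
    {
        // Each transient connection is an isolated, empty in-memory database.
        DbConnection connection = Effort.DbConnectionFactory.CreateTransient();

        using (var context = new MyContext(connection))
        {
            context.Useraccounts.Add(new Useraccount { EnableCode = "abc", IsEnabled = false });
            context.SaveChanges();

            var account = context.Useraccounts.Single(u => u.EnableCode == "abc" && u.IsEnabled == false);
            Assert.AreEqual("abc", account.EnableCode);
        }
    }
}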
Our team has hundreds of integration tests that hit a database and verify results. I've got two base classes for all the integration tests, one for retrieve-only tests and one for create/update/delete tests. The retrieve-only base class regenerates the database during TestFixtureSetUp, so it only executes once per test class. The CUD base class regenerates the database before each test. Each repository class has its own corresponding test class.
As you can imagine, this whole thing takes quite some time (approaching 7-8 minutes to run and growing quickly). Having this run as part of our CI (CruiseControl.Net) is not a problem, but running locally takes a long time and really prohibits running them before committing code.
My question is are there any best practices to help speed up the execution of these types of integration tests?
I'm unable to execute them in-memory (à la SQLite) because we use some database-specific functionality (computed columns, etc.) that isn't supported in SQLite.
Also, the whole team has to be able to execute them, so running them on a local instance of SQL Server Express or something could be error prone unless the connection strings are all the same for those instances.
How are you accomplishing this in your shop and what works well?
Thanks!
Keep your fast (unit) and slow (integration) tests separate, so that you can run them separately. Use whatever method for grouping/categorizing the tests is provided by your testing framework. If the testing framework does not support grouping the tests, move the integration tests into a separate module that has only integration tests.
The fast tests should take only some seconds to run all of them and should have high code coverage. These kinds of tests allow the developers to refactor ruthlessly, because they can make a small change, run all the tests, and be very confident that the change did not break anything.
The slow tests can take many minutes to run and they will make sure that the individual components work together right. When the developers do changes that might possibly break something which is tested by the integration tests but not the unit tests, they should run those integration tests before committing. Otherwise, the slow tests are run by the CI server.
In NUnit you can decorate your test classes (or methods) with an attribute, e.g.:
[Category("Integration")]
public class SomeTestFixture{
...
}
[Category("Unit")]
public class SomeOtherTestFixture{
...
}
You can then stipulate in the build process on the server that all categories get run, and just require that your developers run a subset of the available test categories. Which categories they are required to run will depend on things you understand better than I do. But the gist is that they are able to test at the unit level and the server handles the integration tests.
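For example, the build server can run everything while developers filter to the fast tests; with the NUnit 3 console runner that filtering looks like this (runner name and selection syntax assume NUnit 3; older versions use /include instead):

nunit3-console MyProject.Tests.dll --where "cat == Unit"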
I'm a Java developer but have dealt with a similar problem. I found that running a local database instance works well because of the speed (no data to send over the network) and because this way you don't have contention on your integration test database.
The general approach we use to solving this problem is to set up the build scripts to read the database connection strings from a configuration file, and then set up one file per environment. For example, one file for WORKSTATION, another for CI. Then you set up the build scripts to read the config file based on the specified environment. So builds running on a developer workstation run using the WORKSTATION configuration, and builds running in the CI environment use the CI settings.
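A minimal sketch of that idea (the environment variable name, file naming, and file format are made up for illustration):

using System;
using System.IO;
using System.Linq;

// Hypothetical helper: pick the connection string for the current environment.
// Expects files like WORKSTATION.config and CI.config next to the test binaries,
// each containing one line of the form ConnectionString=Server=...;Database=...
public static class TestEnvironment
{
    public static string GetConnectionString()
    {
        // The build sets TEST_ENVIRONMENT; default to the developer workstation.
        string environment = Environment.GetEnvironmentVariable("TEST_ENVIRONMENT") ?? "WORKSTATION";

        string line = File.ReadAllLines(environment + ".config")
            .First(l => l.StartsWith("ConnectionString="));

        return line.Substring("ConnectionString=".Length);
    }
}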
It also helps tremendously if the entire database schema can be created from a single script, so each developer can quickly set up a local database for testing. You can even extend this concept to the next level and add the database setup script to the build process, so the entire database setup can be scripted to keep up with changes in the database schema.
We have an SQL Server Express instance with the same DB definition running for every dev machine as part of the dev environment. With Windows authentication the connection strings are stable - no username/password in the string.
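So every machine can use the same literal string, e.g. as a field in a test base class (the database name here is an assumption):

// Identical on every dev machine: local SQL Server Express, no credentials in the string.
const string ConnectionString = @"Server=.\SQLEXPRESS;Database=MyAppTests;Integrated Security=true";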
What we would really like to do, but haven't yet, is see if we can get our system to run on SQL Server Compact Edition, which is like SQLite with SQL Server's engine. Then we could run them in-memory, and possibly in parallel as well (with multiple processes).
Have you done any measurements (using timers or similar) to determine where the tests spend most of their time?
If you already know that database recreation is why they're time-consuming, a different approach would be to regenerate the database once and use transactions to preserve the state between tests. Each CUD-type test starts a transaction in setup and performs a rollback in teardown. This can significantly reduce the time spent on database setup for each test, since a transaction rollback is cheaper than a full database recreation.
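A sketch of that pattern with NUnit and TransactionScope (the base class name is made up; it works as long as the code under test opens its connections inside the ambient transaction):

using System.Transactions;
using NUnit.Framework;

public abstract class CudRepositoryTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Connections opened during the test enlist in this ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls everything back,
        // which is much cheaper than recreating the database for each test.
        _scope.Dispose();
    }
}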