Using a debugOnly package in integration tests (MeteorJS)

I'm moving our test data into a debugOnly package. This way I can ensure that no test data will be available on production installations, and that the methods that clean the DB and create fixtures will be unavailable too. That means that when we run our integration tests with the --production flag, the methods for creating fixtures will be unavailable.
Is it possible to tell Meteor to include a specific package in a production run? Or are there other good ways of testing production builds with fixtures?

Related

Cloud Composer rosbag BashOperator

I have an Airflow DAG running in a VM, but in order to facilitate event-driven triggering I'm trying to set up Cloud Composer in GCP. However, I only see an option in Cloud Composer to install PyPI packages.
I need the rosbag package in order to run my bash script. Is there any way to do that in Cloud Composer, or is it better to run Airflow in a VM or in a container on Kubernetes?
You can add your own requirements in Cloud Composer
https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies
However, knowing rosbag pretty well (I've been a robotics engineer using ROS for quite some time), working out the right set of dependencies might not be easy. Airflow has more than 500 dependencies overall, and it is highly likely that some of them will clash with the particular version of ROS.
Also, ROS has its own specific way of initializing: setting up environment variables and sourcing certain scripts, which you would have to emulate yourself, modifying PYTHONPATH and possibly doing some extra initialization.
I'd say your best bet is to use the DockerOperator and run ROS from a Docker image. This can be done even with GPU support if needed (been there, done that), and it provides the right level of isolation: both Airflow and ROS rely heavily on Python and its dependencies, and this might be the simplest way.
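For illustration, a minimal sketch of that DockerOperator approach could look like the following (the ROS image tag, the rosbag command, the bag path and the DAG settings are placeholders picked for the example, not anything from the question):

# Hypothetical sketch: run a rosbag command inside a ROS Docker image from Airflow.
from datetime import datetime

from airflow import DAG
from airflow.providers.docker.operators.docker import DockerOperator

with DAG(
    dag_id="rosbag_in_docker",        # example DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    inspect_bag = DockerOperator(
        task_id="inspect_bag",
        image="ros:noetic-ros-base",  # any ROS image that ships rosbag
        command=(
            "bash -c 'source /opt/ros/noetic/setup.bash "
            "&& rosbag info /data/example.bag'"
        ),
        auto_remove=True,             # note: newer docker providers expect a string here
    )

This keeps all the ROS environment setup (the setup.bash sourcing) inside the container, so Airflow's own Python environment never sees the ROS dependencies.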

Best practices for developing own custom operators in Airflow 2.0

We are currently in the process of developing custom operators and sensors for our Airflow (>2.0.1) on Cloud Composer. We use the official Docker image for testing/development.
As of Airflow 2.0, the recommended way is not to put them in the plugins directory of Airflow but to build them as a separate Python package. However, this approach seems quite complicated when developing DAGs and testing them in the Dockerized Airflow environment.
To follow Airflow's recommended approach we would use two separate repos for our DAGs and the operators/sensors; we would then mount the custom operators/sensors package into Docker to quickly test it there and edit it on the local machine. For further use on Composer we would need to publish our package to our private PyPI repo and install it on Cloud Composer.
The old approach, however (putting everything in the local plugins folder), is quite straightforward and doesn't have these problems.
Based on your experience, what is your recommended way of developing and testing custom operators/sensors?
You can put the "common" code (custom operators and such) in the dags folder and exclude it from being processed by the scheduler via an .airflowignore file. This allows for rather quick iteration when developing.
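A minimal sketch of that layout, with made-up file and class names purely for illustration:

# dags/.airflowignore contains the single line "common", so the scheduler skips
# the shared code while DAG files can still import it (dags/ is on sys.path).
#
# dags/common/hello_operator.py  (hypothetical shared operator; dags/common/__init__.py
# makes it importable as a package)
from airflow.models.baseoperator import BaseOperator

class HelloOperator(BaseOperator):
    """Toy custom operator kept in the shared 'common' code."""

    def __init__(self, name: str, **kwargs):
        super().__init__(**kwargs)
        self.name = name

    def execute(self, context):
        self.log.info("Hello %s", self.name)
        return self.name

# dags/my_dag.py would then simply do:
#   from common.hello_operator import HelloOperator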
You can still keep the DAGs and the "common" code in separate repositories to make things easier. A "submodule" pattern works well for that: add the "common" repo as a submodule of the DAG repo, so that you can check them out together. You can even keep different DAG directories (for different teams) with different versions of the common packages this way, simply by submodule-linking them to different versions of the packages.
I think the "package" pattern if more of a production deployment thing rather than development. Once you developed the common code locally, it would be great if you package it together in common package and version accordingly (same as any other python package). Then you can release it after testing, version it etc. etc..
In the "development" mode you can checkout the code with "recursive" submodule update and add the "common" subdirectory to PYTHONPATH. In production - even if you use git-sync, you could deploy your custom operators via your ops team using custom image (by installing appropriate, released version of your package) where your DAGS would be git-synced separately WITHOUT the submodule checkout. The submodule would only be used for development.
It would also be worth running CI/CD on the DAGs you push to your DAG repo, to check that they keep working with the "released" common code in the "stable" branch, while running the same CI/CD with the common code synced via submodule in the "development" branch (this way you can check the latest development DAG code against the linked common code).
This is what I'd do. It allows for quick iteration during development while also turning the common code into "freezable" artifacts that provide a stable environment in production; your DAGs can still be developed and evolve quickly, and CI/CD helps keep the "stable" things really stable.

How to start docker images for my integration tests in bazel?

I have been searching for a build system for my Go projects. In the Java world things look much easier: you have Maven, and it's easy to run tests/integration tests and package the project.
I'm trying to find a way to start Redis in Docker, then run the package's integration tests, and finally stop Redis.
I don't have problems with the test rule:
go_test(
    name = "go_default_test",
    srcs = ["person_cache_integration_test.go"],
    embed = [":go_default_library"],
    deps = [
        "//internal/models:go_default_library",
        "@com_github_stretchr_testify//assert:go_default_library",
    ],
)
but how can I start Redis in Docker before this rule runs, and stop Redis afterwards whether the tests pass or fail?
Thanks.
If your use case is really just testing, I recommend Testcontainers.
This way your tests don't depend on the build system and can be executed locally as well as in a CI system.
Here is a Go implementation: https://github.com/testcontainers/testcontainers-go
I am not sure whether that would work with Bazel remote builds, but for a typical project it should be fine.
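The link above is the Go library; purely to illustrate the pattern, here is a rough sketch using the Python port (testcontainers-python), with a made-up test body. The test itself starts a throwaway Redis container and tears it down whether it passes or fails:

import redis
from testcontainers.redis import RedisContainer

def test_person_cache_roundtrip():
    # Entering the context manager pulls (if needed) and starts the container.
    with RedisContainer("redis:7-alpine") as container:
        client = redis.Redis(
            host=container.get_container_host_ip(),
            port=int(container.get_exposed_port(6379)),
        )
        client.set("person:1", "alice")
        assert client.get("person:1") == b"alice"
    # Leaving the "with" block stops and removes the container, pass or fail.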

Is it possible to test consumer side without stub runner in spring-cloud-contract

Currently I want to test, on the consumer side, the error handling around calls to other microservices via Spring Cloud Contract. But there are some troubles blocking me from creating stubs on the provider side, because it's difficult to share build artifacts in our Docker CI build.
I'm wondering whether it is possible to just create Groovy or YAML contracts on the consumer side and then use them with a WireMock server?
There are ways to achieve it. One is to clone the producer's code, run ./mvnw clean install -DskipTests or ./gradlew publishToMavenLocal -x test, and have the stubs installed without running any tests. Another option is to write your own StubDownloaderBuilder (for Finchley) that will fetch the contracts via Aether, as the AetherStubDownloader does, but then will also automatically convert the contracts to WireMock stubs.
Of course both approaches are "cheating". You shouldn't use the stubs in your CI system until the producer has actually published the stubs.
Maybe instead of hacking the system it's better to analyze this part:
"it's difficult to share build artifacts in our Docker CI build"
and try to fix it? Why is it difficult? What exactly is the problem?

How should we test our application in Docker container?

We have a Java application in a Docker container with a Docker Db2 database 'side-car'. In the DevOps pipeline (Jenkins) we run unit tests and integration tests between components, then run SonarQube, and if all is good we move over to the staging environment. In the automated testing step we build the application container from the latest code base and then run automated acceptance tests using the Cucumber framework.
The question is about the use of the database for testing: should we spin up Db2 in a new/isolated container, or use a 'common' Db2 container that the test team uses in that environment for manual testing? Best practices, proven approaches, and recommendations are welcome.
For post-deployment tests (API tests, end-to-end tests), I would try to avoid sharing the database with other environments and would have a dedicated database set up for those tests.
The reasons are:
For API tests and end-to-end tests, I want to have control over what data is available in the database. If the database is shared with other environments, the tests can fail for strange reasons (e.g. someone accidentally modifies a record that the test expects to be in a certain state).
For the same reason, I don't want the API tests and end-to-end tests to affect other people's testing either. It would be quite annoying if someone were in the middle of testing and realised the data had been wiped out by the post-deployment tests.
So normally in CI we have the following steps (a rough sketch of the database-related ones follows the list):
clear test db
run migration
seed essential data
deploy server
run post deployment tests
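For the "clear test db" and "seed essential data" steps, a rough sketch; the table names and the seed row are invented, and the connection is assumed to be any DB-API connection to the dedicated test database (e.g. obtained from the Db2 driver):

def reset_test_database(conn):
    # conn: a DB-API connection to the dedicated post-deployment test database
    # (for Db2, e.g. via ibm_db_dbi); never point this at a shared environment.
    cur = conn.cursor()
    for table in ("ORDERS", "CUSTOMERS"):           # made-up table names
        cur.execute(f"DELETE FROM {table}")          # clear test db
    cur.execute(
        "INSERT INTO CUSTOMERS (ID, NAME) VALUES (1, 'fixture-user')"
    )                                                # seed essential data
    conn.commit()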

Resources