How can I prevent Jest from running async tests concurrently?

I'm writing a series of tests against a database. All tests take the following form:
test("Name", async () => {
  // I do not call done(). I didn't think I had to anymore,
  // and I get type errors if I do.
});
Unfortunately, this is causing concurrency issues. I'm not sure if that's down to background tasks on the DB, or Jest running tests concurrently. The tests work fine when run individually, so I know concurrency of some sort is the problem.
How can I make absolutely sure that Jest runs these async tests one at a time?

In Jest, the tests within a single file run serially, in order of appearance. Tests in different files, however, run in parallel across worker processes. This is a problem when you are running all the tests against a single database.
To disable running the tests in multiple files concurrently, use the CLI option --runInBand in your jest command.
For example:
jest --runInBand
If you use a script like npm run test to run your tests, append --runInBand to the associated command in your package.json file.
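Assuming your test script currently just runs jest, the relevant part of package.json would become something like:

```json
{
  "scripts": {
    "test": "jest --runInBand"
  }
}
```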

Related

How to watch tasks logs via CLI in Airflow?

So I am having a problem: no logs are displayed in the Airflow UI I am currently working with. I don't know the reason, but I've already informed my colleagues and they're looking for a solution.
Meanwhile, I need to watch the logs of certain tasks in my DAG. Is there any way to do that via the Airflow CLI?
I am using the airflow tasks run command, but it only seems to run the task and doesn't print anything to the command line.
By default, Airflow stores logs under $AIRFLOW_HOME/logs/; you may find them there, if they are still being generated.
In the meantime, you can use airflow tasks test <dag_id> <task_id>, which runs a single task instance and prints its logs to your terminal.
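For example (the DAG ID, task ID, and date here are placeholders, and the exact on-disk log layout varies by Airflow version, so check your own logs directory):

```shell
# look for the task's log file on disk
ls "$AIRFLOW_HOME/logs/"

# or run the task standalone and stream its logs to the terminal
airflow tasks test my_dag my_task 2023-01-01
```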

Issue running cloud functions code locally using cloud functions shell

I am trying to test my functions locally using the Cloud Functions shell. I managed to get the shell working for my code, and I understand this doesn't require my code to be deployed to the cloud. But whenever I run a function through the shell, it works fine, yet it uses the deployed code, not the local code (I am checking this with console statements, as shown in the sample code). I am not able to invoke the local code unless I deploy.
Also, in my cloud functions I am using the onCreate trigger on a Realtime Database and writing back to the same Realtime Database. When I test locally using the shell, I pass in data files for the function and it writes back to the Realtime Database. So I am actually trying to write code and run it locally while writing to a Realtime Database in the cloud. Is this achievable using the shell, without deploying the functions?
My sample function looks like this:
export const myCloudFunction = functions.database
  .instance(getDatabaseIdentifier())
  .ref(PATH)
  .onCreate(async (snapshot, context) => {
    console.log('local code invoked');
    // or
    console.log('deployed code invoked');
  });
I figured it out: since I am using TypeScript, I need to transpile my code to JavaScript before running the Cloud Functions shell.
The reason it seemed to invoke the deployed code is that it was actually invoking the locally transpiled code, which had last been generated when I deployed to the cloud. All I needed to do was transpile my code, using the command below in my functions folder, before running the Cloud Functions shell.
// run this command in your functions folder
npm run-script build
This build generates transpiled javascript code in the 'lib' folder. Now we can run the below command to invoke the shell.
firebase functions:shell
Now we can emulate the local, non-deployed cloud functions and test them locally.
Check this medium post for detailed explanation:
https://medium.com/@moki298/test-your-firebase-cloud-functions-locally-using-cloud-functions-shell-32c821f8a5ce
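For reference, in the functions/package.json generated by the Firebase CLI for a TypeScript project, the build script is typically just the TypeScript compiler; this is an assumption about the default template, so check your own file:

```json
{
  "scripts": {
    "build": "tsc"
  }
}
```

That is why npm run-script build refreshes the lib folder that functions:shell actually loads.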

How should we test our application in Docker container?

We have a Java application in a Docker container with a Db2 database 'side-car' container. In our DevOps pipeline (Jenkins) we run unit tests and integration tests between components, run SonarQube, and if all is good, we move over to the Staging environment. In the Automated Testing step we build the application container from the latest code base, then run automated acceptance tests using the Cucumber framework.
The question is about the use of the database for testing: should we spin up Db2 in a new/isolated container, or use a 'common' Db2 container that the test team uses in that environment for manual testing? Best practices, proven approaches and recommendations are welcome.
For post-deployment tests (API tests, end-to-end tests), I would avoid using the same database as other environments and have a dedicated database set up for those tests.
The reasons are:
For API and end-to-end tests, I want control over what data is available in the database. If the database is shared with other environments, the tests can fail for strange reasons (e.g. someone accidentally modifies a record that a test expects to be in a particular state).
For the same reason, I don't want the API and end-to-end tests to affect other people's testing either. It would be quite annoying to be in the middle of testing and realise the data has been wiped out by the post-deployment tests.
So normally in the CI, we have steps to:
clear test db
run migration
seed essential data
deploy server
run post deployment tests
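As a sketch, the steps above might look like this in a pipeline script. The Db2 image name, credentials, port, and the script names are all placeholders; substitute whatever migration, seeding, and test tooling your project actually uses:

```shell
#!/bin/sh
set -e  # abort the pipeline on the first failing step

docker rm -f test-db2 2>/dev/null || true   # clear test db
docker run -d --name test-db2 \
  -e LICENSE=accept -e DB2INST1_PASSWORD=secret \
  -p 50000:50000 ibmcom/db2                 # fresh, isolated Db2 container

./scripts/run-migrations.sh                 # run migration
./scripts/seed-essential-data.sh            # seed essential data
./scripts/deploy-server.sh                  # deploy server
./scripts/run-post-deployment-tests.sh      # run post deployment tests
```

Because the database container is created and destroyed by the pipeline itself, the tests never collide with the team's shared manual-testing database.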

Run Firebase functions through experimental shell in batch?

We are running a larger-scale project on Firebase and have already invested in unit tests. Now we are also using the experimental shell to run integration tests against a testing environment and database. We would very much like to invoke functions via a bash/shell script instead of opening the experimental shell, requiring our test data and invoking each function manually. We tried reverse-engineering the Firebase tools for that matter, but this seems to be overkill. Any idea how we might be able to test-run all our functions in series?
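One possible approach, based on the assumption (verify against your firebase-tools version) that functions:shell is a Node-style REPL and will therefore read invocations piped on stdin:

```shell
# invocations.txt contains one call per line, e.g.:
#   myCloudFunction({ foo: 'bar' })
firebase functions:shell < invocations.txt
```

This runs the listed invocations in order and exits when stdin is exhausted, which makes it scriptable from bash or a CI job.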

Run unit/integration tests with Lab Management

We have a complete Lab Management environment running Coded UI tests in nightly builds. What we are trying to achieve is to run our integration tests (regular TestMethod() with SQL connections) just before all the Coded UI tests to verify that our db scripts are executed correctly and that there are no new changes causing any problems.
So far I have found a way to execute tests remotely through .testrunconfig. The problem with that approach is that it's not possible to choose a test controller connected to a team project, so I guess it would only be useful for running tests on physical machines outside of Lab Management?
One option seems to be to create a Test Case for each integration test, which should then run together with the UI tests, but it feels like too much maintenance to manage hundreds of test cases just to run the integration tests. Also, it would be better to completely separate the test runs for the different kinds of tests.
Is there any easy way to achieve this that I have totally missed? Or do I have to modify the lab build template to deploy and run the tests?
"I guess it would only be useful for running tests on physical machines outside of Lab Management?"
If you run your tests remotely through .testrunconfig, you have to connect the Test Agent to another Test Controller which is NOT connected to the team project. Unfortunately, to my knowledge, that is impossible for environments running under Lab Management.
What about this approach:
Create an Ordered Test containing all your integration tests.
Create a new Test Case "Integration Tests" and automate it with the Ordered Test, so you do not have to maintain hundreds of Test Cases. You could also create several Ordered Tests if you want to group the integration tests, and then create a "main" Ordered Test containing them. This way it will be easier to analyze test results, especially if you have a lot of tests.
Let the integration tests run as part of your existing nightly build.
Create a new Build Definition which does not start a build but uses the last successful nightly build, and let your Coded UI tests run using the Lab Build Template. This way you will have separate test runs for the different kinds of tests.
The only drawback is that you have to "synchronize" these two builds... You could simply schedule the second build later, so you can be sure the first build is done. It's not really perfect, I know, but this way you can easily achieve your goal.
I am not sure if there is an alternative solution, but on the project I am currently working on, we have both our unit and integration test assemblies set under the process options (Process > Basic > AutomatedTest > TestAssembly) in our nightly build. This was achieved by altering the Default build process template (not the Lab Default) a bit, as you suggested (I thought this was standard, but it's been a while).
