Test suite run time in Karate report

I am using Karate 0.9.0, running feature files in parallel and generating the Cucumber report with Karate's parallel runner code. The problem is that the feature overview in the report shows the total execution time as

feature 1 execution time + feature 2 execution time + feature 3 execution time = total execution time

but the actual execution time is lower when the features run in parallel on more than one thread. How can I calculate and show the actual test suite run time?

It is reported on the console. I don't understand why you need to worry about it as long as your tests are running reasonably fast.
Anyway, if you really want to capture this, just use the methods of the Results class that you get from Runner.parallel(). For example, there is a getElapsedTime() method.
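A minimal sketch of reading it (assuming the fluent Runner API of recent Karate versions; the feature path and thread count are placeholders):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class SuiteTimeRunner {
    public static void main(String[] args) {
        // run every feature found under the given classpath on 5 threads
        Results results = Runner.path("classpath:features").parallel(5);
        // wall-clock duration of the whole parallel run,
        // not the sum of the individual feature times
        System.out.println("suite elapsed time: " + results.getElapsedTime());
    }
}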

Related

How to do parallel execution of a single test case that is based on a large set of data

I have written a single test case in Robot Framework that fetches data for more than 1000 locations from an Excel sheet and runs through each location. The whole execution takes more than 12 hours to complete, and I want to reduce that time. Is there any way to execute it in parallel? I have looked at Pabot, but it executes all test cases in parallel, and I have only one test case.
DataDriver works for me:
DataDriver: https://github.com/Snooz82/robotframework-datadriver
Pabot: https://pabot.org/PabotLib.html
How to run:
pabot --testlevelsplit --pabotlib ...
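A minimal sketch of the combination (file names and the ${location} column are placeholders; DataDriver reads a CSV by default and can also read Excel files when installed with its XLS extra):

*** Settings ***
Library           DataDriver    file=locations.csv
Test Template     Check Location

*** Test Cases ***
Check location ${location}    Placeholder

*** Keywords ***
Check Location
    [Arguments]    ${location}
    Log    Checking location ${location}

DataDriver turns every row of the data file into its own test case, so with --testlevelsplit Pabot can distribute the locations across parallel processes.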
No, there is no way for Robot Framework to split a single test case into multiple parallel threads or processes.
If you want multiple keywords to run in parallel, you'll have to rewrite your test suite to contain multiple tests, or create your own keywords that do their work in parallel threads or processes.

How to speed up test run initialization in Firebase Test Lab

When running:
gcloud firebase test android run --type=instrumentation --app=app.apk --test=test_app.apk
the Firebase command line is stuck for many minutes in "Creating individual test executions".
Debugging further, it seems that the command line polls a backend (https://testing.googleapis.com:443) periodically until it gets an OK.
Is there a way to speed this up? This step can take 5 minutes and wastes unnecessary CI time.
Update:
The command line was missing the part --device model=NexusLowRes,version=29 --verbosity=debug.
I analyzed the issue further. It takes about 100 s to upload both the app and the test app, and another 150 s to create the test execution, so I think it is a limitation of the system and nothing can be done here. Maybe the size of the APK is the limiting factor: it is about 200 MB, and scanning it takes a lot of time.
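With the missing flags added, the full invocation from the question looks like this:
gcloud firebase test android run --type=instrumentation --app=app.apk --test=test_app.apk --device model=NexusLowRes,version=29 --verbosity=debug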
Please see my comment on your question asking for additional details that could affect the answer.
One option is to add --async to your command. This will only poll the matrix status until it verifies that the matrix was created successfully, then exit without waiting for the test to actually run.
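For example (everything except --async carried over from the question):
gcloud firebase test android run --type=instrumentation --app=app.apk --test=test_app.apk --async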

Error triggering matrix tests when I run my Espresso tests using Test Lab for Android

When I run my Espresso tests from Android Studio 2.2.1 using Test Lab for Android I get the following error message:
Exception while triggering a matrix execution. 429 Too Many Requests.
I managed to run several tests on remote devices, but then they started failing with this error message. The Firebase console shows nothing.
What might be the issue?
This is the HTTP status returned when a project on the Spark or Flame tier has used too much of its quota.
Projects on the Spark and Flame tiers can run up to 10 virtual device tests and 5 physical device tests per day (with each day beginning at midnight PDT). In addition, there is a maximum of 4 tests per test matrix for projects on these plans.
See https://firebase.google.com/pricing/ for more.

Robot Framework: Getting test case results while execution is in progress

We are using Robot Framework and the RIDE tool for test case execution. We have 100+ test cases, and execution takes more than 6 hours to complete.
The RF result and log HTML files are great for viewing results, but they are only viewable after test execution has completed.
Is there any plugin, tool, or mechanism to view test case status during execution? In RIDE, the "Run" tab only shows pass:<> fail:<> counts, which is not very useful.
We need a real-time test case status report instead of waiting for completion.
You can use the listener interface. With it, you can have Robot Framework call a Python function each time a keyword, test case, or suite starts and finishes. When they finish, the data that is passed in will include the pass or fail status.
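A minimal sketch (assuming listener API version 3; the file name is a placeholder):

# listener.py - the module itself acts as the listener
ROBOT_LISTENER_API_VERSION = 3

def end_test(data, result):
    # called the moment each test finishes; result.status is 'PASS' or 'FAIL'
    print(f"{data.name}: {result.status}")

Attach it when starting the run:
robot --listener listener.py tests/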
Using the listener interface (as Bryan Oakley suggested) is surely the most flexible way to intercept test progression status. If you are looking for tools, Jenkins (with the Robot Framework plugin) gives you the opportunity to follow a test run in real time at test case granularity. Just start a job and switch to the (Jenkins) console to see the output dropping in.

How to set a delay between converge and verify in kitchen test?

I'm running Serverspec integration smoke tests with Test Kitchen on a system built with Vagrant + Chef Solo. When I run kitchen test, the tests start right after a successful converge, and some of my tests fail because the system needs time to fully start up for the first time.
So I'm wondering what would be a good way to insert a delay between converge and verify while otherwise preserving the default behaviour of kitchen test. Currently I have the following ideas:
1. Write a shell script that runs kitchen converge, aborts if the converge was unsuccessful, sleeps xx, runs kitchen verify, and on success runs kitchen destroy. But this would not allow running multiple suites in parallel (I'm testing multiple versions of the system).
2. Create a recipe that just executes sleep xx and append it to the end of the Chef run list. This seems to work, but looks a bit too "hacky" to me.
Does anyone know a better way?
For now I went with idea 2. I also created a feature request: https://github.com/test-kitchen/test-kitchen/issues/598
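For reference, idea 2 can look something like this minimal sketch (the recipe name and delay are placeholders; tune them for your system):

# recipes/wait_for_boot.rb - appended to the end of the run list
execute 'wait for the system to settle' do
  command 'sleep 60' # placeholder delay
end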
