In TOSCA we can distribute test cases using DEX; if there are dependencies between test cases, is it possible to distribute them using DEX?

I have 100 test cases in TOSCA. Using DEX, I have distributed them across 10 users, sharing 10 test cases with each user. However, the second set of 10 test cases depends on the first set of 10. Will they execute, will an error be shown, or will they have to wait until the first 10 test cases have been executed?

Related

How to do parallel execution of a single test case that is based on a large set of data?

I have written a single test case in Robot Framework that fetches data for more than 1000 locations from an Excel sheet and runs through each location. The whole execution takes more than 12 hours to complete, and I want to minimize the execution time. Is there any possibility to execute it in parallel? I have looked at Pabot, but that executes all test cases in parallel, and I have only one test case.
DataDriver works for me: https://github.com/Snooz82/robotframework-datadriver
DataDriver generates one test case per data row, so pabot (https://pabot.org/PabotLib.html) can then split the execution at test level. How to run:
pabot --testlevelsplit --pabotlib ...
No, there is no way for Robot Framework to split a single test case into multiple parallel threads or processes.
If you want multiple keywords to run in parallel, you'll have to rewrite your test suite to contain multiple tests, or create your own keywords that do the work in parallel threads or processes (see the sketch below).
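
As a minimal sketch of that second option, here is what a custom Python keyword library that fans the per-location work out across threads could look like. The library, the keyword name, and the _process_one placeholder are hypothetical, not part of Robot Framework or the original answer:

from concurrent.futures import ThreadPoolExecutor, as_completed

class ParallelLocations:
    """Hypothetical keyword library: process locations concurrently."""

    def process_locations_in_parallel(self, locations, max_workers=8):
        # Submit one unit of work per location and collect any failures.
        failures = []
        with ThreadPoolExecutor(max_workers=int(max_workers)) as pool:
            futures = {pool.submit(self._process_one, loc): loc for loc in locations}
            for future in as_completed(futures):
                try:
                    future.result()
                except Exception as err:
                    failures.append(f"{futures[future]}: {err}")
        if failures:
            # Fail the keyword, and thus the test, listing every failed location.
            raise AssertionError("Failed locations:\n" + "\n".join(failures))

    def _process_one(self, location):
        # Replace with the real per-location logic (HTTP call, DB check, ...).
        pass

Imported with Library, the suite still contains a single test case, but the 1000 locations are processed concurrently; note that anything called from the worker threads must be thread-safe.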

How do I dispatch dbt logs from Airflow to the relevant owner?

I am running dbt (v0.19.0) on Apache Airflow (v2.0.2) using the KubernetesPodOperator and sending alerts to Slack on failures. Nested within dbt there are multiple models from multiple owners, with cross-dependencies between them, and they are all run together with a single run command.
For example:
KubernetesPodOperator(
    dbt_command="run",
    task_id="run_models",
    on_failure_callback=send_slack_alert,
)
I am trying to make sure that each model owner gets the alert that belongs to them in their relevant channel.
To explain my problem better, let's say there are two models within dbt, Model-A and Model-B. Model-A is owned by the A-Team and Model-B is owned by the B-Team. With this approach (because there is one dbt run command), a failure of Model-A will appear in the logs shared by Model-A and Model-B. Let's also assume that both the A-Team and the B-Team have their own alert channels. However, because dbt is run with a single command, all the alerts are sent to a common channel.
Now imagine having plenty of models (Model-A, Model-B.....Model-Z). How can I improve the existing process to make sure that failures in Model-A get sent to the A-team alert channel, failures in Model-B get sent to the B-team alert channel...and so on.
How do I dispatch errors from dbt (running within Airflow) to the relevant owner to make alerts actionable?
I'd suggest you're likely to end up with n models owned by m teams.
Your simplest change would be to tag each dbt model with its owning team, and then to run each team's models in a task that calls back to that team, e.g.
KubernetesPodOperator(
    dbt_command="run -m tag:team1",
    task_id="run_models_team1",
    on_failure_callback=send_slack_alert_team1,
)
You could also consider passing arguments to a single alert callback rather than writing custom callbacks per team (see "Pass other arguments to on_failure_callback"); a sketch of that pattern follows.
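
As a minimal sketch of that pattern, assuming a generic send_slack_alert(context, channel) helper (hypothetical, not from the original answer), functools.partial can bind the team channel into the callback. The operator line mirrors the question's pseudo-code; dbt_command is not a real KubernetesPodOperator argument:

from functools import partial

def send_slack_alert(context, channel):
    # Inspect `context` for the failed task instance and post to `channel`.
    ...

KubernetesPodOperator(
    dbt_command="run -m tag:team1",   # mirrors the question's pseudo-operator
    task_id="run_models_team1",
    on_failure_callback=partial(send_slack_alert, channel="#team1-alerts"),
)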
This will work as long as you are happy to run your models in per-owner groups, but it can run into issues if there are dependencies that cross team boundaries.
In that case you can break your Airflow setup down further and compose a dynamic DAG from your dbt models, running one model per task, e.g. as described at https://www.astronomer.io/blog/airflow-dbt-1.
You could then assign the Slack callback in that same dynamic loop; a sketch follows.
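
As a minimal sketch of that approach, assuming a hand-maintained MODEL_OWNERS mapping, a callback factory, and an image name that are all hypothetical (the blog post above derives the model list and dependencies from dbt's manifest.json instead):

from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

# Hypothetical ownership mapping: dbt model -> team Slack channel.
MODEL_OWNERS = {
    "model_a": "#team-a-alerts",
    "model_b": "#team-b-alerts",
}

def alert_factory(channel):
    def _callback(context):
        # Post the failure details from `context` to the team's channel,
        # e.g. via SlackWebhookHook.
        ...
    return _callback

with DAG("dbt_per_model", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    for model, channel in MODEL_OWNERS.items():
        KubernetesPodOperator(
            task_id=f"run_{model}",
            name=f"run-{model}",
            namespace="default",
            image="my-dbt-image:latest",  # hypothetical image with dbt installed
            cmds=["dbt"],
            arguments=["run", "-m", model],
            on_failure_callback=alert_factory(channel),
        )

Cross-model dependencies can then be wired between these tasks (the linked blog post builds them from manifest.json), and each failure lands in exactly one team's channel.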

Test suite run time in Karate report

I am using Karate 0.9.0, running feature files in parallel, and generating the Cucumber report using Karate's parallel-runner code. The problem is that the feature overview in the report shows the total execution time as
feature 1 execution time + feature 2 execution time + feature 3 execution time = total execution time
but the actual execution time is less when features run in parallel on more than one thread. How can I calculate and show the real test suite run time?
It is reported on the console, and I don't understand why you need to worry about it as long as your tests run reasonably fast.
Anyway, if you really want to capture this, just use the methods of the Results class that you get from Runner.parallel(); for example, it has a getElapsedTime() method.

Robot Framework - way to set statuses of previous test cases

I got robot framework tests in below schema:
Suite Setup
Test Case 1
Test Case 2
Test Case 3
...
Suite Teardown
In the teardown step I have a loop that goes through all test cases and performs some additional checks for each of them (I can only do this after the test cases have executed, because it has to wait for some operations in an external system). If any of these checks fails, the teardown step fails, and that in turn fails every test case. I can make the teardown keyword never fail, but then everything in the test suite will pass.
Is there any option/feature (or workaround) that would let me set the status and error message of a selected test case in the teardown step (something like tc[23].status='FAIL', tc[23].message='something')?
This is not possible, at least not out of the box. In any event, I also think this is not a desirable test approach: every test should be self-contained, and all the logic to assess PASS or FAIL should be in that test. Revisiting the result afterwards is, in my view, an anti-pattern.
It is understandable that, when there is a long pause of inactivity, you would like your tests to keep progressing. However, I think that parallelising your tests is a better and more stable approach. For Robot Framework there is Pabot to help you with that, but creating your own test runner is also possible; a sketch of that follows.
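
As a minimal sketch of such a home-grown runner, assuming each suite file can run independently (the file paths are hypothetical, and pabot does all of this far more robustly):

from concurrent.futures import ProcessPoolExecutor

from robot import rebot, run

SUITES = ["tests/suite_a.robot", "tests/suite_b.robot"]  # hypothetical paths

def run_suite(args):
    index, suite = args
    output = f"output_{index}.xml"
    # Run one suite in this worker process, writing only the XML result.
    run(suite, output=output, log="NONE", report="NONE")
    return output

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        outputs = list(pool.map(run_suite, enumerate(SUITES)))
    # Merge the partial results into a single log and report.
    rebot(*outputs, log="log.html", report="report.html")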

Robot Framework: behavior when teardown fails

I'm trying to understand how Robot behaves when there is a failure in Test teardown.
Conceptually, I would think that if a test case completes execution, it should be considered passed. Teardown is not part of the test, so if there is a failure in teardown, the test case should still be marked as passed. The behavior I observe is that if test teardown fails, the test case fails. Is this what is supposed to happen, and is there any way to change it?
I'm also seeing something weird when Suite teardown fails.
The console output shows the test case as passed, displaying |PASS| next to the case. However, the statistics at the bottom of the output show all cases as failed.
Here's an example:
*** Settings ***
Suite Teardown    Teardown

*** Keywords ***
Setup
    Log To Console    setup

Teardown
    Should Be Equal    1    2

*** Test Cases ***
case1
    [Setup]    Setup
    Log To Console    case
and the output:
==============================================================================
Test
==============================================================================
case1 setup
.case
case1 | PASS |
------------------------------------------------------------------------------
Test | FAIL |
Suite teardown failed:
1 != 2
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================
This is just confusing. The test passes, and is shown as passed, but is marked as failed in the stats. Is this a bug, or is there some way to fix it?
Sometimes a failure in teardown is an important issue; for example, cleanup may not have completed, which causes other test cases to fail. Therefore Robot Framework always reports FAIL if a test case fails in teardown. Use Run Keyword And Ignore Error if the keyword failure is not an issue for your test case:
*** Keywords ***
Teardown
    Run Keyword And Ignore Error    Should Be Equal    1    2
However, you should be careful: if the keyword fails, nothing is reported unless you check the details in the output logs.
The suite teardown runs after all test cases have finished. The first test case passes, so the program prints PASS. After that, the suite teardown runs and fails, so the program prints FAIL. This is the expected result. It is easier to understand when there are more test cases in one suite, for example:
Test suite A
run case 1 ----> print PASS
run case 2 ----> print PASS
run case 3 ----> print PASS
run suite teardown ----> print FAIL (and change case 1, 2, 3 to FAIL)
A teardown failure is treated the same as a test case failure in Robot Framework, so all test cases in the suite are reported as failed in the end. Check the output log.html: you can see that all test cases are FAIL.
I figured out a solution that may or may not be useful. We have a Jenkins integration, and Jenkins reports all of these as test failures even though I want them shown as passed. What I did was to not generate the HTML from robot, just the XML.
I then used etree to create a new test XML tag:
import xml.etree.ElementTree as ET

def create_test(id='sx-tx', name='Test'):
    return ET.Element("test", attrib={'id': id, 'name': name})
I copied the teardown internals into the new test element and used rebot to generate the report from the modified XML. This made the teardown its own test, so it showed up as only a single failure.
I can elaborate if you would like.
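
Putting the pieces together, a minimal sketch of that post-processing could look like the following. It assumes Robot Framework 3.x's output.xml schema, where the suite teardown is a <kw type="teardown"> child of <suite>; the file names and the test id are hypothetical:

import xml.etree.ElementTree as ET

def create_test(id='sx-tx', name='Suite Teardown'):
    return ET.Element("test", attrib={'id': id, 'name': name})

tree = ET.parse("output.xml")
suite = tree.getroot().find("suite")
teardown = suite.find("kw[@type='teardown']")
if teardown is not None:
    test = create_test()
    test.extend(list(teardown))   # copy the teardown internals, status included
    suite.remove(teardown)
    suite.append(test)
tree.write("output_patched.xml")
# Then regenerate the report, e.g.:
#   rebot --output NONE --log log.html --report report.html output_patched.xml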
