How to mark a test as passed when it is re-run in Robot Framework

Right now, when my team deploys to our QA environment, I run one Robot Framework test suite. It has several tests that fail at first because the environment is not "warmed up". So, in the same pipeline, I use the "--rerun" option if some tests have failed. Usually, in the second run they pass just fine. Then I merge the outputs with rebot:
rebot --merge output.xml output2.xml
And even the log.html shows the information correctly (at both the test and the suite level).
Now comes the fun part. Even though the output.xml now contains two runs of the tests (the first with some failures, and the retries all passing), when I upload it to Xray, it creates a test execution with the results of the first run only.
So, my question is: why? The output.xml clearly shows the results of the last run. Failing that, I could understand if Xray created one test execution and put all the results inside it (first and second run), but that is not the case either.
It seems to me that XRay is not importing the data correctly.

First, I have never used rebot in that way.
However, I think what you obtained in Xray is the consequence of the "merged" output.xml you have and how Xray works.
Whenever you upload test automation results to Xray, usually a Test Execution is going to be created, containing Test Runs (one per Test).
Test issues will be auto-provisioned unless they already exist; if they do exist, only Test Runs will be created for those Tests.
A Test Execution cannot contain more than one Test Run for each Test. In other words, a test execution is like a task for running (or that contains the results of) a list of tests; this list cannot have duplicates.
I would advise uploading all the reports individually. Xray will consider the latest results whenever showing coverage status.
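As a hedged sketch of that advice (host, credentials, and project key are examples; the endpoint shown is the Xray server/DC "import execution results" REST path — Xray cloud uses a different URL and authentication, so check the docs for your version):

```shell
# Upload the first run's results; Xray creates a Test Execution for them
curl -u jira_user:jira_password \
  -F "file=@output.xml" \
  "https://jira.example.com/rest/raven/latest/import/execution/robot?projectKey=CALC"

# Upload the rerun's results as a separate Test Execution;
# coverage status will then reflect these latest results
curl -u jira_user:jira_password \
  -F "file=@output2.xml" \
  "https://jira.example.com/rest/raven/latest/import/execution/robot?projectKey=CALC"
```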

Related

How to implement Jira Xray + Robot Framework?

Hello, I'm a new junior software tester and I've been asked to study Xray and Robot Framework and how to integrate the two.
I made a few test cases in Xray, then started to learn Robot Framework, and up to that point all was good.
Now I've been trying to import the results of the test cases I implemented in Robot into a test execution in Xray, but every time I try to import the output.xml from Robot into Xray, instead of synchronizing those tests, Xray creates new test cases with the results from Robot.
Is there anyone around who has done this before and could help me? I've tried adding tags in Robot and even using the same test names (in Xray and Robot), but it didn't work. Thanks in advance.
I recommend using Jenkins with the Xray for Jira plugin to sync the results of automated tests into Xray Test items.
You would use a tag in Robot to link a test case to an Xray Test item; if you don't specify an ID, the plugin will create a new Test item and keep it updated based on the name:
*** Test Cases ***
Add Multiple Records To Timesheet By Multi Add Generator
    [Tags]    PD-61083
Check this link for details on how to configure the integration:
https://docs.getxray.app/display/XRAY/Integration+with+Jenkins
The plugin can keep track of the execution in a specific Test Execution item, or create one per run, but it should keep referring to the same Test items.
When you upload the RF results, Xray will auto-provision Test issues, one per Robot Framework Test Case. This is the typical behavior, which you may override if you want to report results against an existing Test issue. In that case, you would have a Test in Jira and then add a tag to the RF Test Case entry with the issue key of the existing Test issue.
However, taking advantage of auto-provisioning of Tests is easier and is probably the most common usage. Xray will only provision/create Test issues if they don't exist; for this, Xray tries to figure out whether a generic Test already exists with the same definition (i.e. the names of the RF Test Suites plus the Test Case name). If it finds one, it will just report results (i.e. create a Test Run) against the existing Test issue.
If Test issues are always being created each time you submit the test results, that's an unexpected behavior and needs to be analyzed in more detail.
There is another entity to have in mind: Test Execution.
Your results will be part of a Test Execution. Every time you submit test results, a new Test Execution is created... unless you specify otherwise. In the REST API request (or in the Jenkins plugin) you may specify an existing Test Execution by its issue key. If you do so, the results will be overwritten on that Test Execution and no new Test Execution issue will be created. Think of it as reusing a given Test Execution.
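As a sketch (host and issue keys are examples; the exact endpoint shape depends on your Xray version, so verify against the import-execution docs), reusing an existing Test Execution via the REST API might look like:

```shell
# Passing testExecKey reuses Test Execution DEMO-123 instead of creating
# a new one; its results are overwritten with the uploaded ones
curl -u jira_user:jira_password \
  -F "file=@output.xml" \
  "https://jira.example.com/rest/raven/latest/import/execution/robot?projectKey=DEMO&testExecKey=DEMO-123"
```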
How the integration works and the available capabilities are described in some detail within the documentation.
As an additional reference, let me also share this RF tutorial as it may be useful to you.

Cypress test data generation scripts not as part of the test suites

I wrote a couple of Cypress end-to-end tests for our website. These tests run against a staging environment. Since we have several such staging environments for our dev teams, I would like to make sure these tests are stable on all of them.
For some tests I need certain test data, so I wrote a Cypress test that creates it. Normally this test data generation test is not executed on our CI system. It is located in a separate file within the integration directory, so that Cypress is able to find and execute it. It is only executed once per staging environment; the test data then just remains there and does not have to be generated again and again.
When opening the Cypress GUI (cypress open), I would like this test data generation test to be ignored by Cypress so that I can simply run all suites at once. But when I add it to the set of ignoreTestFiles, I can't run the test data generation test at all anymore.
Do you have an idea how I can make the Cypress GUI ignore my test data generation test on the one hand, and on the other hand keep it executable by Cypress when I explicitly want it to run?
You could make the test data generation test depend on an environment variable, and only set that variable when you explicitly want it to execute. Something like:
if (Cypress.env('GENERATE_TEST_DATA')) {
  // Generate your test data here
  // You could even put this `if` around the entire `it` block
  // so the test doesn't execute at all when the environment variable isn't set
}
Then when you want the test data to be generated, set the environment variable like this:
CYPRESS_GENERATE_TEST_DATA=true npx cypress run
Also see the documentation on Cypress.env()

Re-test a block of tests in Robot Framework

I have a list of tests to execute in Robot Framework, which includes a block of tests that should be executed again if a specific test fails, as the following flow explains. I want to know whether this is doable with Robot Framework.
No, it is not doable. At least, not in a single test run, and not without a lot of work. Robot has no ability to re-run a test within a single test run. You would have to exec a second instance of robot where the output is sent to a separate output file, and then you would have to somehow merge the output files of the original test run and the second exec.
However, robot does support being able to give it the output.xml from a previous run so that it will re-run only the failed test cases. You can do that with the --rerunfailed command line option. See Re-executing failed test cases in the robot framework user guide.
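A minimal sketch of that re-run-and-merge flow (file and directory names are examples):

```shell
# First run; the exit code is non-zero if any test fails
robot --output output.xml tests/

# Re-run only the tests that failed in the first run
robot --rerunfailed output.xml --output rerun.xml tests/

# Merge both runs; for re-executed tests, the result from rerun.xml wins
rebot --merge --output final.xml --log log.html --report report.html \
  output.xml rerun.xml
```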

Using Test Configuration when running on-demand automated tests from VSTS Test Hub

My organization has moved from executing automated tests within MTM to executing automated tests within the VSTS Test Hub, via Release Definitions. We're using MSTest to run tests with Selenium / C#.
It is a requirement for us to be able to run any individual test on demand, post-build, and to use the Test Configuration to control the browser. This was working without issue when running tests through MTM.
Previously, when running tests through MTM, my MSTest TestContext's "Properties" were populated with runtime parameters such as the Test Configuration, e.g. TestContext.Properties["__Tfs_TestConfigurationName__"]. The runtime properties could also be seen in the resulting .trx file.
Now, when running tests through the VSTS Test Hub, those values are no longer set in TestContext automatically, nor do I see them in the .trx file, or in my Release deployment logs.
Are there any suggestions as to how the Test Configuration values of each test can be propagated into TestContext (or accessible via code otherwise) to be able to control the browser at test run-time?
I have considered creating a .runsettings file, but am unaware as to how Test Configuration variables would be dynamically populated into that file.
I have considered attempting to query the Test Run / Test Results in my Selenium code to determine the Test Points / Configuration, but am unaware of how to determine the Test Run ID (I see the Test Run ID in the release logs, but don't know how to programmatically access it)
I have seen a VSTS Extension called 'VSTS Test Extensions' which is purported to inject the VSTS variables into a .runsettings file, but there are no details as to how to configure the .runsettings file in order to accomplish this, and the old MTM-style parameter names do not seem to work.
Are Test Configuration variables accessible globally in VSTS somehow?
I.e. could I create a PowerShell Release task to somehow access test run / test point data? Microsoft does not provide any indication about how to use Test Configurations other than using them for manual tests.
Any help is greatly appreciated.
After some more review, it seems that my only option is a giant hack:
I noticed that in the Release logs, the [TEST_RUNID] is output, which points me to the $(test.runid) variable, which I didn't know existed.
I can output the Run ID to an external file via a Powershell Release task.
I can then have my test code read that file, perform an API call to VSTS querying the Test Results for that Run ID, and then use that response to output the Test Configuration Name for each Test Case ID.
I can then match up my Test Case ID within each of my test methods with the Test Case IDs from the service call response, and set the browser according to the Test Configuration Name.
The only issue then is if the same test exists multiple times in the same run with different configurations. That can be hacked around by querying the Test Run every time a test begins and checking which configuration has already been run, based on the State ("Pending", etc.) or some other means I haven't thought of yet.
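Putting the steps above together, a hedged sketch (the account, project, and PAT variable names are hypothetical; the runs/results endpoint is part of the VSTS/Azure DevOps REST API, so verify the api-version against your instance):

```shell
# In a script Release task: write the run id where tests can read it;
# TEST_RUNID would be populated from the $(test.runid) release variable
echo "$TEST_RUNID" > runid.txt

# Query the results for that run with a Personal Access Token;
# each test result in the response should carry its test configuration
curl -s -u ":$VSTS_PAT" \
  "https://myaccount.visualstudio.com/MyProject/_apis/test/runs/$(cat runid.txt)/results?api-version=5.0"
```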

How to automatically run unit tests (MSTest) and log the results on commit to TFS? [ASP.NET]

I'm looking to automate unit tests (using MSTest) when a developer commits code to TFS.
I'm not sure it's possible...
Secondly, I want to log the results of the unit test execution to a file, if you have a solution with VS.
Thanks for all :)
You could create a CI build in TFS to build and run your unit tests, just as mentioned in the comment above. The Continuous Integration (CI) build trigger will queue your build definition whenever someone checks something in.
When a build finishes, all the test results can be found in the TFS web page, and the test result file (.trx) is also available for download.
