Retest a block of tests in Robot Framework - robotframework

I have a list of tests to execute in Robot Framework, where a block of tests should be executed again if a specific test fails, as the following flow explains. I want to know whether this is doable with Robot Framework.

No, it is not doable. At least, not in a single test run, and not without a lot of work. Robot has no ability to re-run a test within a single test run. You would have to exec a second instance of robot with its output sent to a separate output file, and then somehow merge the output files of the original run and the second one.
However, robot does support taking the output.xml from a previous run and re-running only the test cases that failed. You can do that with the --rerunfailed command line option. See Re-executing failed test cases in the Robot Framework user guide.
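To illustrate the second option, the end-to-end flow typically looks something like the commands below; the file names and the tests directory are placeholders, not taken from the question:

robot --output original.xml tests
robot --rerunfailed original.xml --output rerun.xml tests
rebot --merge --output merged.xml original.xml rerun.xml

The merged.xml produced by rebot contains the combined results, with the re-executed tests replacing their original failures.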

Related

How to mark a test as passed when it is retested in robotframework

Right now, when my team deploys to our QA environment, I run one suite of Robot Framework tests. This suite has several tests that fail at first because the environment is not "warmed up". So, in the same pipeline, I have the "--rerun" option if some test has failed. Usually, in the second run they work just fine. Then, I merge the outputs with rebot:
rebot --merge output.xml output2.xml
Even the log.html shows the information correctly (at both the test and the suite level).
Now comes the fun part. Even though the output.xml now has two runs of the tests (the first with some failed, and the retries all passing), when I upload it to Xray, it creates a test execution with the results of the first run only.
So, my question is: why? The output.xml clearly has all the results, including the last run. Otherwise, I would understand it creating one test execution and putting all the results inside (first and second run), which is not the case.
It seems to me that XRay is not importing the data correctly.
First, I have never used rebot in that way.
However, I think what you obtained in Xray is the consequence of the "merged" output.xml you have and how Xray works.
Whenever you upload test automation results to Xray, usually a Test Execution is created, containing Test Runs (one per Test).
Test issues will be auto-provisioned unless they already exist; if they exist, only Test Runs will be created for those Tests.
A Test Execution cannot contain more than one Test Run for each Test. In other words, a Test Execution is like a task for running (or that contains the results of) a list of tests; this list cannot have duplicates.
I would advise uploading all the reports individually. Xray will consider the latest results whenever it shows coverage status.
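As a rough sketch, uploading each run's output.xml separately to Xray could look like the following; the endpoint, authentication and project key shown here are assumptions to check against the Xray REST API documentation for your deployment (Cloud vs. Server/DC):

curl -X POST -H "Content-Type: application/xml" -H "Authorization: Bearer $XRAY_TOKEN" --data-binary @output.xml "https://xray.cloud.getxray.app/api/v2/import/execution/robot?projectKey=PROJ"
curl -X POST -H "Content-Type: application/xml" -H "Authorization: Bearer $XRAY_TOKEN" --data-binary @output2.xml "https://xray.cloud.getxray.app/api/v2/import/execution/robot?projectKey=PROJ"

Each upload then creates its own Test Execution, and Xray uses the most recent result for each Test when showing coverage.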

Creating Reports in Robotframework

I have recently started working on a program that should not be changed much, and there are problems with it closing properly under Robot Framework. The approach that came to my mind was to get the reports before the test run ends.
So I have this Question:
Is there a possibility or keyword in Robotframework that I can use to get reports before the test is done?
No, there is not. Robot creates the reports as an in-memory xml document. It doesn't write the data to disk until the tests are finished. It then runs a post-processing step to convert them to html.
As @Bryan wrote, there is no way to get report.html during the robot run, as it is generated in a post-run step.
However, you may use a listener to get feedback from robot about execution status. For instance, the RED Robot Editor IDE uses one to populate its Execution View (see the screenshot at the bottom of the linked page):
http://nokia.github.io/RED/help/user_guide/launching/ui_elements.html
More about listener API: https://github.com/robotframework/robotframework/blob/master/doc/userguide/src/ExtendingRobotFramework/ListenerInterface.rst
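For instance, a minimal module-based listener (listener API version 2) that records each test's status while the run is still going could look like the sketch below; the file name live_status.txt is just an example, not something Robot Framework requires:

# status_listener.py - sketch of a Robot Framework listener (API v2) that
# appends each finished test's status to a text file during the run.

ROBOT_LISTENER_API_VERSION = 2

STATUS_FILE = 'live_status.txt'


def end_test(name, attributes):
    # attributes['status'] is 'PASS' or 'FAIL'; attributes['message'] holds any failure message.
    with open(STATUS_FILE, 'a') as f:
        f.write('%s | %s | %s\n' % (name, attributes['status'], attributes['message']))

It would be taken into use with robot --listener status_listener.py <your tests>.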

Robot Framework 'Output.xml' copy to different folder

When I try to copy the file 'output.xml' from the parent folder to a target folder, it is not copied properly, i.e. the file size in the target folder is different. I am executing the keyword to copy the file in my 'Suite Teardown'. Is there any solution for this issue?
Code written to copy files:
OperatingSystem.Copy Files    ${sProjectPath}//output.xml    ${sFinalFolder}
The problem is that while the test is running, the output file doesn't exist. It can't exist during suite teardown, because the result of the suite teardown must be part of the log.
If you need the log in another folder, the simplest solution is to tell robot to write it there initially, with the command line option --output or --outputdir.
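For example, assuming the target folder from the question is C:\final (a placeholder):

robot --outputdir C:\final my_tests

All of output.xml, log.html and report.html are then written there directly, with nothing to copy afterwards.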
If you cannot use --output, perhaps the simplest solution is to create a script that runs your tests and then copies the file after the run finishes. This is mentioned in the user guide in the section titled Creating start-up scripts.
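A minimal start-up script along those lines, written in Python here purely as an illustration (the paths and the tests directory are placeholders, not taken from the question), could be:

# run_tests.py - sketch: run the suite, then copy output.xml to another folder.
import shutil
import subprocess

TARGET_DIR = r'C:\final'  # hypothetical destination folder

# Run the tests; output.xml is written to the current directory by default.
result = subprocess.run(['robot', '--output', 'output.xml', 'tests'])

# The output file is complete once the robot process has exited, so it is safe to copy now.
shutil.copy('output.xml', TARGET_DIR)

# Propagate robot's return code (the number of failed tests, or an error code).
raise SystemExit(result.returncode)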
A slightly more complex solution is to use a listener which copies the file in its output_file (or close) method.
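A sketch of such a listener is below; it relies on the listener API's output_file method receiving the path of the finished output.xml, and the destination folder is a placeholder:

# copy_output_listener.py - sketch: copy output.xml elsewhere once robot has finished writing it.
import shutil

ROBOT_LISTENER_API_VERSION = 2

DESTINATION = r'C:\final'


def output_file(path):
    # Called by Robot Framework with the path of the finished output.xml.
    shutil.copy(path, DESTINATION)

It would be enabled with robot --listener copy_output_listener.py <your tests>.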
Robot run reports are created at the end by the Robot Framework tool, hence the complete result/report files don't exist even in the suite teardown.
Best practice would be to provide the --output / --outputdir arguments with the desired output location.
More detail can be found in the Robot Framework User Guide, section 3.5.1.
Note:
The command line option --output (-o) determines the path where the output file is created relative to the output directory. The default name for the output file, when tests are run, is output.xml.
An alternative is to use the rebot command and act on the output file generated by the robot run.
When post-processing outputs with Rebot, new output files are not created unless the --output option is explicitly used.
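For example, to have rebot write a fresh copy of the results into a target folder (the folder name is a placeholder):

rebot --outputdir C:\final --output output.xml output.xml

This reads the existing output.xml and writes a new output.xml, log.html and report.html under C:\final.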
Whether or not it can be done from a Robot Framework script, the question is whether it should be done from a Robot script. These files could be locked and/or changed by the current Robot Framework process. If not today, this could well become the case with a future change in this area.
With this perspective I can only recommend copying these files prior to the start of Robot Framework, and not after. This ensures that the contents of these files are not changed and are fit for the purpose you have in mind.
As these types of actions are typically not part of a Test Case scenario, I consider them to be part of the overarching orchestration. This includes fetching the right version of the test script from version control, starting Robot Framework, communicating results (e.g. by email) and safeguarding the test evidence (log files and other files).
In general these are standard step functionality in a CI tool like Jenkins, Travis CI or Bamboo (to name a few). Even if you have only started on this path, separating this kind of logic from your test script will save you a lot of effort later on.
It looks like the Copy Files keyword does not copy the content of output.xml during teardown, although it creates a file with the same name. As A.Kootstra suggested, this could currently be a limitation.
Alternative:
In a separate robot script you can rename the file 'output.xml' generated by the first script and then run the command below:
Copy Files    ${sProjectPath}//output.xml    ${sFinalFolder}
This copies the whole content.

Using Test Configuration when running on-demand automated tests from VSTS Test Hub

My organization has moved from executing automated tests within MTM to executing automated tests within the VSTS Test Hub, via Release Definitions. We're using MSTest to run tests with Selenium / C#.
It is a requirement for us to be able to run any individual test on-demand, post-build, and to use the Test Configuration to control the browser. This was working without issue when running tests through MTM.
Previously, when running tests through MTM, my MSTest TestContext's "Properties" were populated with runtime parameters such as the Test Configuration, e.g. TestContext.Properties["__Tfs_TestConfigurationName__"]. The runtime properties could also be seen in the resulting .trx file.
Now, when running tests through the VSTS Test Hub, those values are no longer set in TestContext automatically, nor do I see them in the .trx file, or in my Release deployment logs.
Are there any suggestions as to how the Test Configuration values of each test can be propagated into TestContext (or accessible via code otherwise) to be able to control the browser at test run-time?
I have considered creating a .runsettings file, but am unaware as to how Test Configuration variables would be dynamically populated into that file.
I have considered attempting to query the Test Run / Test Results in my Selenium code to determine the Test Points / Configuration, but am unaware of how to determine the Test Run ID (I see the Test Run ID in the release logs, but don't know how to programmatically access it)
I have seen a VSTS Extension called 'VSTS Test Extensions' which is purported to inject the VSTS variables into a .runsettings file, but there are no details as to how to configure the .runsettings file in order to accomplish this, and the old MTM-style parameter names do not seem to work.
Are Test Configuration variables accessible globally in VSTS somehow?
I.e. could I create a PowerShell Release task to somehow access test run / test point data? Microsoft does not provide any indication about how to use Test Configurations other than using them for manual tests.
Any help is greatly appreciated.
After some more review, it seems that my only option is a giant hack:
I noticed that in the Release logs, the [TEST_RUNID] is output, which points me to the $(test.runid) variable, which I didn't know existed.
I can output the Run ID to an external file via a Powershell Release task.
I can then have my test code read that file, perform an API call to VSTS querying the Test Results for that Run ID, and then use that response to output the Test Configuration Name for each Test Case ID.
I can then match up my Test Case ID within each of my test methods with the Test Case IDs from the service call response, and set the browser according to the Test Configuration Name.
The only issue then is if the same test exists multiple times in the same run with different configurations, which can be hacked around by querying the Test Run every time a test begins and checking which configuration has already been run based on the State ("Pending", etc.) or by some other means I haven't thought of yet.
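As a rough illustration of that lookup, the mapping from Run ID to per-test configuration could be built with a REST call like the sketch below; the organization, project, API version and response field names are assumptions to verify against the Azure DevOps Test Results REST API, and authentication uses a personal access token:

# map_configurations.py - sketch: read the run id saved by the release task and
# ask the Test Results API which configuration each test case was run with.
import base64
import requests

ORG = 'myorg'                  # hypothetical organization
PROJECT = 'MyProject'          # hypothetical project
PAT = 'personal-access-token'  # hypothetical personal access token
run_id = open('runid.txt').read().strip()  # file written earlier by the PowerShell task

auth = base64.b64encode((':' + PAT).encode()).decode()
url = 'https://dev.azure.com/%s/%s/_apis/test/Runs/%s/results?api-version=5.1' % (ORG, PROJECT, run_id)
response = requests.get(url, headers={'Authorization': 'Basic ' + auth})
response.raise_for_status()

config_by_test_case = {}
for result in response.json().get('value', []):
    test_case_id = result.get('testCase', {}).get('id')
    configuration = result.get('configuration', {}).get('name')
    config_by_test_case[test_case_id] = configuration

print(config_by_test_case)

Each test method would then look up its own Test Case ID in that mapping and start the corresponding browser.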

Running robotframework tests simultaneously

Is there a way to run all my test cases simultaneously? If so, could you please post an example of how to do it?
Regards,
Meir
There is nothing built-in to robot to run tests in parallel. If your tests are spread across multiple suites you can write a script that runs them all at the same time, and then you can use rebot to combine all of the output.xml files into a single report.
There is an open source program that might do what you want. The project is named "pabot" and can be found here: https://github.com/mkorpela/pabot
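A minimal invocation, with the process count and suite path as placeholders, looks like this:

pip install robotframework-pabot
pabot --processes 4 --outputdir results tests

pabot splits the suites across the worker processes and merges the results into a single report in the given output directory.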
