Through the Katalon Studio UI we are able to run the test suites in a test suite collection in parallel for different browsers.
Problem: the same approach does not work when we try it from the command line. This is the command we use:
katalon -noSplash -runMode=console -projectPath="<projectPath>" -retry=0 -testSuitePath="<testSuitePath>" -executionProfile="default" -browserType="Chrome,IE"
Note: it works fine with a single browser as the parameter.
Please let us know whether the above command is correct for multiple-browser execution.
Expected:
A single report folder containing the parallel execution results of both browsers.
You can do that by using Test Suite Collections.
Put your test case TC1 in a test suite TS1; my test case is called "proba" in this example. Then create a test suite collection TSC1 and add the same test suite to it twice (see screenshot). Finally, change the "Run with" parameter to Chrome and IE, respectively.
If you now generate the command-line argument, you will get something like
katalon -noSplash -runMode=console -consoleLog -projectPath="C:\Katalon Studio\PROJECT NAME\PROJECT NAME.prj" -retry=0 -testSuiteCollectionPath="Test Suites/TEST SUITE COLLECTION 1"
I am writing a tool that automates generating a Visual Studio test playlist from the failed tests in the SpecFlow report. We recently increased our testThreadCount to 4, and when the LivingDocumentation plugin generates the TestExecution.json file it only records a result for 1 in 4 tests. I think this is due to the thread count, so the 4 threads are being treated as a single execution.
My aim is to generate a fully qualified test name for each failed test from the TestExecution file, but that will not work while only 25% of the results are captured. Does anyone know of a workaround for this?
<Execution stopAfterFailures="0" testThreadCount="4" testSchedulingMode="Sequential" retryFor="Failing" retryCount="0" />
These are our current execution settings in the .srprofile.
We made this possible with the latest version of SpecFlow and the SpecFlow+ LivingDoc Plugin.
You can configure the filename for the TestExecution.json via specflow.json.
Here is an example:
{
  "livingDocGenerator": {
    "enabled": true,
    "filePath": "TestExecution_{ProcessId}_{ThreadId}.json"
  }
}
ProcessId and ThreadId will be replaced with actual values, so you get a separate TestExecution.json for every thread.
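For a concrete picture (the process id 4711 is made up), a run with four threads might leave behind:
TestExecution_4711_1.json
TestExecution_4711_2.json
TestExecution_4711_3.json
TestExecution_4711_4.json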
You can then pass a list of TestExecution.json files to the livingdoc CLI tool or to the Azure DevOps task.
Example:
livingdoc test-assembly BookShop.AcceptanceTests.dll -t TestExecution*.json
This generates a single LivingDoc with all the test execution results combined.
Documentation links:
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/LivingDocGenerator/Setup-the-LivingDocPlugin.html
https://docs.specflow.org/projects/specflow-livingdoc/en/latest/Guides/Merging-Multiple-test-results.html
I'd like to have the option to run my tests with different test data depending on the environment I'm in, as the data differs slightly between environments.
My current setup: test suite -> test cases, each with one test data file (an Excel file). I run checks (based on the execution profile) to determine the environment and adjust the domain URL accordingly.
If I add a second data file to a test case, is there a way to add logic that picks a specific test data file at execution time?
If you want to use "excel_file_1" for the "default" execution profile and "excel_file_2" for the other execution profiles, use this:
import com.kms.katalon.core.configuration.RunConfiguration as RC
import com.kms.katalon.core.testdata.TestDataFactory

// declare data outside the if/else so it remains in scope afterwards
def data
if (RC.getExecutionProfile() == 'default') {
    data = TestDataFactory.findTestData("excel_file_1")
} else {
    data = TestDataFactory.findTestData("excel_file_2")
}
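For what it's worth, findTestData returns a Katalon TestData object. A minimal usage sketch, assuming the selected spreadsheet has a column named "username" (the column name is illustrative):

// iterate over the rows of whichever file was selected above
for (int row = 1; row <= data.getRowNumbers(); row++) {
    println(data.getValue("username", row))
}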
Just for clarity, I will explain most of the process to accomplish this.
You could create different profiles for this (profiles are generally used for environment variables).
Katalon Profiles
You may then add entries (which are GlobalVariables) to get or set your data (URLs, locations, etc.).
Remember to add your test case(s) to a test suite.
You may then create separate build commands to test each profile you created by clicking Build CMD and specifying the execution profile.
Specify profile in Build CMD
This way you can use something like TeamCity to run each case or a combination thereof.
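As a minimal sketch of how a profile-driven value is consumed in a test case (assuming a GlobalVariable named baseUrl is defined in each profile; the variable name is illustrative):

import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI
import internal.GlobalVariable as GlobalVariable

// baseUrl resolves to the value from whichever execution profile was selected at run time
WebUI.openBrowser('')
WebUI.navigateToUrl(GlobalVariable.baseUrl)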
I don't think this works, as Katalon has more than one test case in a test suite.
import com.kms.katalon.core.configuration.RunConfiguration as RC
import com.kms.katalon.core.testdata.TestDataFactory

// declare data outside the if/else so it remains in scope afterwards
def data
if (RC.getExecutionProfile() == 'default') {
    data = TestDataFactory.findTestData("excel_file_1")
} else {
    data = TestDataFactory.findTestData("excel_file_2")
}
The above code raises the questions below:
what is the 'data' variable?
what if we have a few more test cases that also have data sheets?
I'm currently developing a couple of test cases with Robot Framework to compare some Excel values with values from our database.
I have to do it inside one specific test case, as it is deployed on Zephyr.
I check each value inside this test case by calling a homemade keyword that does:
Run Keyword    Should Contain    ${valeurExcel1}    ${valeurBDD1}
Run Keyword    Should Contain    ${valeurExcel2}    ${valeurBDD2}
etc...
I need every single one of those "Should Contain" calls to be displayed as a separate row in report.html.
They currently appear as a single row, since it is all one test case.
Is there any way to tell Robot Framework that I want it to treat every "Should Contain" as a unique test case and display it in its own row in report.html?
(Maybe by tagging?)
No, you can't. If you want a row for each "Should Contain", then each of those calls must be made in its own test case.
But I think the problem lies in your "I have to do it inside a specific test case as it is deployed on Zephyr". Whatever you need to do before/after a test case can be done in a suite setup (and suite teardown). So you could have this kind of architecture:
*** Settings ***
Suite Setup       deploy SUT / Zephyr
Suite Teardown    shutdown SUT / Zephyr

*** Test Cases ***
tc1
    Run Keyword    Should Contain    ${valeurExcel1}    ${valeurBDD1}
tc2
    Run Keyword    Should Contain    ${valeurExcel2}    ${valeurBDD2}
I would like to move all my output files to a custom location: a Run directory created based on date and time at run time. The output folder named by datetime is created in the TestSetup.
I have a function "Process_Output_files" which moves the files to the Run folder (Run1, Run2, Run3, ...).
I have tried using the -d argument and calling "Process_Output_files" as the suite teardown to move the output files to the respective Run directory.
But I get the following error: "The process cannot access the file because it is being used by another process". I know this is because Robot Framework (RIDE) is still using the files.
If I don't use the -d argument, the output files end up in temp folders:
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\output.xml
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\log.html
c:\users\<user>\appdata\local\temp\RIDEfmbr9x.d\report.html
My question is: is there a way to move the files to a custom location at run time, from within Robot Framework?
You can use the following syntax in RIDE (Arguments:) to create the output in new folders dynamically:
--outputdir C:/AutomationLogs/%date:~-4,4%%date:~-10,2%%date:~-7,2% --timestampoutputs
The above syntax gives you the output in the folders below:
Output: C:\AutomationLogs\20151125\output-20151125-155017.xml
Log: C:\AutomationLogs\20151125\log-20151125-155017.html
Report: C:\AutomationLogs\20151125\report-20151125-155017.html
Hope this helps :)
I understand the end result you want is to have your output files in custom folders. If so, this can be accomplished at run time, and you won't have to move the files as part of your post-processing. Unfortunately this will not work in RIDE, since the folder structure is created dynamically. I have two options for you.
Option 1: Use a script to kick off your tests
RIDE is awesome, but in my humble opinion one shouldn't be using it to run one's tests, only to build and debug them. Scripts are far more powerful and flexible.
Assuming you have a test, test2.txt, that you wish to run, the script could look something like this:
from time import gmtime, strftime
import os

# strftime returns a string representation of a date-time tuple.
# gmtime returns the date-time tuple for Greenwich Mean Time.
dts = strftime("%Y.%m.%d.%H.%M.%S", gmtime())
cmd = "pybot -d Run%s test2.txt" % (dts,)
os.system(cmd)
As an aside, if you do intend to post-process your files using rebot, be aware that you may not need to create intermediate log and report files. The output.xml file contains everything you need, so if you don't want to create superfluous files, use --log NONE --report NONE.
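For example, to keep only output.xml in the run folder (the folder name Run1 is illustrative):

pybot -d Run1 --log NONE --report NONE test2.txt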
Option 2: Use a listener to do post processing
A listener is a program you write that responds to events (x_start, x_end, etc.). The close() event is akin to a teardown function and is the last thing called. So, assuming you have a function moveFiles(), you simply need to create a listener class (myListener), define the close() method to call your moveFiles() function, and tell your test run to report to the listener with the argument --listener myListener.
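A minimal sketch of such a listener, saved as myListener.py (the Run1 destination and the idea of collecting paths via the *_file events are illustrative assumptions, not part of the original answer):

import os
import shutil

class myListener(object):
    ROBOT_LISTENER_API_VERSION = 2

    def __init__(self):
        self.files = []

    # each *_file method is called when the respective result file is ready
    def output_file(self, path):
        self.files.append(path)

    def log_file(self, path):
        self.files.append(path)

    def report_file(self, path):
        self.files.append(path)

    # close() is the last listener method called, so the files are free to move
    def close(self):
        run_dir = "Run1"  # illustrative destination folder
        if not os.path.isdir(run_dir):
            os.makedirs(run_dir)
        for path in self.files:
            shutil.move(path, run_dir)

You would then run something like: pybot --listener myListener.py test2.txt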
This option should be compatible with RIDE, though I admit I have never tried to use listeners with the IDE.
Alternatively, you can write a custom run script that handles moving the files after the test execution; at that point the files are no longer in use by pybot.
I'm trying to write my first Robot Framework test; I'd like to use RIDE, as advertised in http://developer.plone.org/reference_manuals/external/plone.app.robotframework/happy.html#install-robot-tools
I added
initialization =
    import os
    os.environ['PATH'] = os.environ['PATH'] + os.pathsep + '${buildout:directory}/bin'
to my [robot] section to make it possible to run the tests by clicking "Start" in RIDE.
It works, but the second time I run the tests I still see the content created by the first test run.
How do I tell robot-server to go back to a just-initialized state?
Easily (and you should throw me into a pool for not documenting this yet in plone.app.robotframework's documentation – I thought RIDE was too difficult to get running until it works on wxPython 2.9).
In RIDE
select the Run tab
change Execution Profile to custom script
click Browse to select bin/robot from your buildout as the Script to run the tests
Click Start.
Technically, bin/robot is a shortcut for bin/pybot --listener plone.app.robotframework.RobotListener (I keep repeating bin/ because it's important that plone.app.robotframework is available on sys.path). The Robot Framework listener interface is specified in the Robot Framework User Guide.
Our listener calls bin/robot-server (using XML-RPC) before every test to run the testSetUp methods for the current test layer, and after every test to run the testTearDown methods. This resets the fixture and isolates the functional tests.