I have 50+ Watir test scripts which currently just check a specific URL which is defined inside each of them.
Now we are launching 4 more sites and would like to run these tests on all 5 sites. Maintaining 5 packs of 50+ tests would be a nightmare in the future.
Is there a way I can pass a variable with the URL to visit to all of the individual tests?
For example
url = "http://site1.com"
That way, if we want to test site 2, we just need to change the url variable instead of every single script:
url = "http://site2.com"
Example test:
require "watir-webdriver"
browser = Watir::Browser.new :chrome
browser.goto "http://url.com/"
browser.text_field(:id, "edit-search").set("Accounting")
browser.button(:value,"Search").click
browser.link(:text, "Accounting Manager with a leading US MNC").click
browser.link(:text, "Apply").click
browser.text_field(:id, "edit-firstname").set("hi2")
browser.text_field(:id, "edit-lastname").set("hi")
browser.text_field(:id, "edit-email").set("t#t.com")
browser.text_field(:id, "edit-current-job").set("Test")
browser.radio(:id, "edit-use-stored").click
browser.radio(:id, "edit-existing-cv-319706").click
browser.text_field(:id, "edit-message").set("Testing")
browser.checkbox(:id, "edit-create-alert").click
browser.button(:value,"Apply").click
browser.screenshot.save '..\screenshots\ApplyWithAlertNonRegistered.png'
browser.link(:text, "Home").click
browser.close
While I would recommend moving to an actual test framework, I think the following approaches would work for your situation.
Solution 1 - Pass Value From Batch to Test
In a test script, you can get the parameters passed in from the batch file using the ARGV array.
In your batch file, you could define the URL and then pass it to the test script as a parameter:
SET URL="http://site2.com"
ruby test_example1.rb %URL%
ruby test_example2.rb %URL%
Your tests would get the ARGV[0] value and go to it:
browser.goto ARGV[0]
For each test run, you would need to update the batch file with the correct URL.
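If a script should also keep working when run directly, without the batch file, a default fallback is easy to add (a sketch; the fallback URL is just a placeholder):
# Use the URL passed as the first command-line argument,
# or fall back to a default when none is given.
url = ARGV[0] || "http://site1.com"
browser.goto url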
Solution 2 - Specify URL in a Helper File
An alternative solution is to specify the URL in a helper file that is required by each test. This is probably a better approach, especially if there are multiple variables.
Create a test_helper.rb file that defines the URL as a constant (a plain local variable would not be visible to the scripts that require the helper):
URL = "http://site1.com"
For each of your test scripts, require this test_helper file and use the URL constant:
require "watir-webdriver"
require "test_helper" #(Change path if not in the same folder)
browser = Watir::Browser.new :chrome
browser.goto url
Before each test run, update the test_helper.rb file to point to the correct URL.
I like Justin's answer, but another option is Fig Newton. It allows you to change multiple variables depending on the system that you are running on (localhost, Jenkins, UAT, whatever).
Justin - in the test_helper file, would it be possible to set the browser type (chrome, ie, or ff) and pass that variable into each test script, so that if I wanted to change the browser, I would only have to change it in the test_helper file rather than editing each test script?
If possible, how would this look in the test_helper file and test scripts?
test_helper: type = ff
test script: browser = Watir::Browser.new :type
Thanks!
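A minimal sketch of how that could look, assuming constants in the helper (a plain local variable would not survive the require); Watir::Browser.new accepts the browser name as a symbol:
# test_helper.rb
URL = "http://site1.com"
BROWSER = :firefox # or :chrome, :ie

# in each test script
require "watir-webdriver"
require_relative "test_helper"
browser = Watir::Browser.new BROWSER
browser.goto URL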
Related
I am running a pipeline and was trying to optimize it by declaring the paths in a config file (config.yaml). The config.yaml file contains the path to the scripts to run inside the pipeline, but when I expand the wildcard of the path, the pipeline does not run the script. The script itself runs fine.
To explain my problem:
rule with_script:
    input: someinput
    output: someoutput
    script: expand("{script_path}/scriptfile", script_path = config[scriptpath])
Neither input, output, nor rule all contains the script's path wildcard, so this is the first time I'm declaring it. The config.yaml line that contains the path looks like this:
scriptpath: /path/to/the/script
Is there a way to keep the wildcard and the config-file path (to make it easier for others to make changes if needed) and still have the script work? As it is, Snakemake doesn't even enter the script file. Or is it maybe possible to declare global wildcards outside of rule all?
Thank you for your help!
P.S.: I'm sorry if this question has already been answered, but I couldn't find anything to help me with this.
You cannot call a function like expand() in the script section. Snakemake expects a path to your script.
As the documentation states:
The script path is always relative to the Snakefile containing the directive (in contrast to the input and output file paths, which are relative to the working directory). It is recommended to put all scripts into a subfolder "scripts"
If you need to define different paths to your scripts, you can always do it in Python outside of your rules. Don't forget: all Python code outside of rules is executed before building the DAG. Thus, you can define all the variables you want and use them in your rules.
SCRIPTSPATH = config["scriptpath"]

rule with_script:
    input: someinput
    output: someoutput
    # an f-string turns the path into a plain Python string before Snakemake sees it
    script: f"{SCRIPTSPATH}/scriptfile"
Note:
Do not mix up wildcards and "variables". In an expand function such as
expand("{script_path}/scriptfile", script_path = config[scriptpath])
{script_path} is not a wildcard but just a placeholder for the values given in the second parameter of the function.
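To see why the original directive fails, it helps to look at what expand() actually returns (the value here is hypothetical):
from snakemake.io import expand

# expand() fills the placeholder with each supplied value and
# returns a list of strings, not a single path:
paths = expand("{script_path}/scriptfile", script_path="/path/to/the/script")
print(paths)  # ['/path/to/the/script/scriptfile']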
I implemented test cases for my application and decided to run them every day. The problem is that the result of the previous run gets overwritten by the latest one. I need to keep both, so I came up with a solution: include the test date and time in the report name, for example report-202111181704.html (using the time in 24-hour format).
I searched the internet and have not found a solution yet. Does anybody here know one? Any alternative solution would be fine as well.
It depends on where you execute your tests. From the command line, you can save the date to a variable and then use it in the names of the generated outputs. For example:
date=$(date '+%Y-%m-%d_%H:%M:%S')
robot --output ${date}output.xml --log ${date}log.html --report ${date}report.html test.robot
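Robot Framework can also timestamp the outputs itself with the --timestampoutputs (-T) option, which inserts a YYYYMMDD-hhmmss timestamp into the name of every generated output file:
# produces e.g. output-20211118-170432.xml, log-20211118-170432.html, ...
robot --timestampoutputs test.robot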
I found the solution. Instead of setting the .html file name, I create a folder for each run and put the results there.
To do this, add --outputdir to the pabot command, so it looks like this:
pabot --pabotlibport $PABOT_PORT --pabotlib --resourcefile ./DeviceSet.dat --processes $thread --verbose --outputdir ./result/$OUTPUT_DIR $ENV
where
OUTPUT_DIR=$(date +"%Y%m%d-%H%M")
The output folder will then be something like ./result/20220301-2052.
Basically 2 issues:
1. I plan to execute multiple test cases from an argument file. The structure looks like this:
SOME_PATH/
-test_cases/
-some_keywords/
-argumentfile.txt
How should I define a suite setup and teardown for all the test cases executed from the argument file (-A file)?
From what I know:
a) I could put them in the files with the first and last test cases, but the order of test cases may change, so that is not desirable.
b) I could provide them in an __init__.robot file placed somewhere without test cases, purely to get the setup and teardown. This is because if I execute:
robot -i SOME_TAG -A argumentfile /path/to/init
and the __init__.robot is in the test_cases folder, it will execute the test cases with the specific tag plus those in the folder twice.
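(For reference, a minimal __init__.robot carrying only the setup and teardown might look like this; the keyword names are placeholders:)
*** Settings ***
Suite Setup       Setup Test Environment
Suite Teardown    Teardown Test Environment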
Is there a better way? Could it be provided, for example, in the argument file?
2. How do I provide a path variable in argument files in Robot Framework?
I know it is possible to do:
--variable PATH:some/path/to/files
but isn't that variable only available inside the test suites?
How do I get the variable to be visible in the argument file itself, e.g. ${PATH}/test_case_1.robot?
For your 2nd question, you could create a temporary environment variable that you'd then use. Depending on the OS you're using, the way you'll do this will be different:
Windows:
set TESTS_PATH=some/path/here
robot %TESTS_PATH%/test_case_1.robot
Unix:
export TESTS_PATH="some/path/here"
robot $TESTS_PATH/test_case_1.robot
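Inside the test data itself, the same environment variable can be read with Robot Framework's %{} syntax, for example (the resource path is just an illustration):
*** Settings ***
Resource    %{TESTS_PATH}/resources/common.robot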
PS: you might want to avoid asking multiple, different questions in the same thread
I am new to Robot Framework and wanted to see if I can get some simple example code for a custom report; an answer to my problem would also be fine. I went through all the questions related to reports but could not find a specific answer. Currently my report contains log information, and I want to remove it so the report only shows PASS/FAIL information. I also need to know how I can save the report in a different location. Can anyone give me an example of how to do this? Any example would be helpful. Thank you in advance.
There is a tool called Rebot which is part of Robot Framework.
By default, Robot Framework creates XML reports. The XML reports are automatically converted into HTML reports by Rebot.
You can set the location of the output files in the execution by specifying the parameter --outputdir (and thus set a different base directory for outputs).
From the documentation:
All output files can be set using an absolute path, in which case they are created to the specified place, but in other cases, the path is considered relative to the output directory. The default output directory is the directory where the execution is started from, but it can be altered with the --outputdir (-d) option. The path set with this option is, again, relative to the execution directory, but can naturally be given also as an absolute path. Regardless of how a path to an individual output file is obtained, its parent directory is created automatically, if it does not exist already.
You can call Rebot yourself to control this conversion.
You can also run Rebot after the test was run in order to create new output on a different location.
See documentation in:
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#post-processing-outputs
The following example shows how to store the HTML reports in a different location while including only partial data:
rebot --include smoke --name Smoke_Tests --outputdir c:\version1.0\reports c:\results\output.xml
In the example above, we process the file c:\results\output.xml, create a new report called Smoke_Tests that includes only tests with the tag smoke, and save it to the output folder c:\version1.0\reports.
In addition you can also set the location of the log file (HTML) from the execution.
The command line option --log (-l) determines where log files are created.
The command line option --report (-r) determines where report files are created.
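For example, a run that writes all outputs under a separate folder with custom names might look like this (paths are placeholders):
robot --outputdir ./results --log my_log.html --report my_report.html tests/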
Removing log lines can be done a bit differently. If you run rebot --help you'll get the following options:
--removekeywords all|passed|for|wuks|name: * Remove keyword data
from all generated outputs. Keywords containing
warnings are not removed except in `all` mode.
all: remove data from all keywords
passed: remove data only from keywords in passed
test cases and suites
for: remove passed iterations from for loops
wuks: remove all but the last failing keyword
inside `BuiltIn.Wait Until Keyword Succeeds`
name:: remove data from keywords that match
the given pattern. The pattern is matched
against the full name of the keyword (e.g.
'MyLib.Keyword', 'resource.Second Keyword'),
is case, space, and underscore insensitive,
and may contain `*` and `?` as wildcards.
Examples: --removekeywords name:Lib.HugeKw
--removekeywords name:myresource.*
--flattenkeywords for|foritem|name: * Flattens matching keywords
in all generated outputs. Matching keywords get all
log messages from their child keywords and children
are discarded otherwise.
for: flatten for loops fully
foritem: flatten individual for loop iterations
name:: flatten matched keywords using same
matching rules as with
`--removekeywords name:`
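Putting the pieces together, post-processing an existing output.xml to strip keyword details from passed tests and write the cleaned files elsewhere could look like this (paths are placeholders):
# keeps PASS/FAIL results but drops keyword data from passed tests
rebot --removekeywords passed --outputdir ./clean_reports ./results/output.xml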
In a larger project, I have set up ./tests/Makefile.am to run a number of tests when I call make check. The file global_wrapper.c contains the setup / breakdown code, and it calls test functions implemented in several subdirectories.
TESTS = global_test
check_PROGRAMS = global_test
global_test_SOURCES = global_wrapper.c foo/foo_test.c bar/bar_test.c
Works great. But the tests take a long time, so I would like to be able to optionally execute only tests from a single subdir. This is how I did it at first.
I added the subdirectories:
SUBDIRS = foo bar
In the subdirectories, I added local wrappers and Makefile.am's:
TESTS = foo_test
check_PROGRAMS = foo_test
# the foo_test.c here is of course the same as in the global Makefile.am
foo_test_SOURCES = foo_wrapper.c foo_test.c
This, too, works great - when I call make check in the subdirectory foo, only the foo tests are executed.
However, when I now call make check in ./tests, all tests are executed twice. Once through global_test, and once through the local test programs.
If I omit the SUBDIRS statement in the global Makefile.am, the subdirectory makefiles don't get built. If I omit TESTS from the local Makefile.am's, make check doesn't do anything for the local directories.
I'm not that familiar with automake, but I am pretty sure there is some way to solve this dilemma. Can anybody here give me a hint?
Break your tests up. In your tests/Makefile.am do:
TESTS = foo_test bar_test
and build foo_test and bar_test appropriately with something like:
foo_test_SOURCES = foo/foo_wrapper.c foo/foo_test.c
bar_test_SOURCES = bar/bar_wrapper.c bar/bar_test.c
Now, if you do a raw 'make check', both tests will be run. If you only want to run one test, you can do that with 'make check TESTS=foo_test' or 'make check TESTS=bar_test' and only the appropriate test will run. Typically, the Makefile.am lists all the tests that will be run by default in TESTS and the user selects alternate tests at make-time. Naturally, if you are running the tests a lot, you can 'export TESTS=foo_test' in your shell session and then only type 'make check'.
Can't you remove from "global_test" any test that is already executed in a subdirectory? (Just so they simply don't get executed twice.)
I think you could maybe override the check rule at the top level to define an environment variable:
check:
	DISABLE_SUBTESTS=1 $(MAKE) check-recursive
and then test DISABLE_SUBTESTS in your sub-directories to decide whether to actually run the tests or not.
(Personally, I'd rather arrange to work within the existing make check framework by concealing the output of my tests, rather than overriding the generated rules like this.)
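One concrete way to honor such a variable is inside the test program itself, since Automake's test harness treats exit status 77 as "skipped" (a sketch; the surrounding test code is omitted):
#include <stdlib.h>

int main(void)
{
    /* Exit status 77 makes Automake's test harness report this test as SKIP. */
    if (getenv("DISABLE_SUBTESTS") != NULL)
        return 77;
    /* ... run the actual subdirectory tests here ... */
    return 0;
}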