Is there any other way to solve this question with the first answer? - robotframework

how to execute some test cases multiple times in robotframework
My reputation hasn't reached 50 yet, so I cannot add comments; I'll ask here instead.
The first answer said you can run Robot -t "My test" . . . "My file name" when you want to run the file multiple times, with each '.' representing one run.
I tried it and it did work.
But some of my cases need to be run 100 or even 200 times, and spelling out that many '.' characters is not very efficient.
Is there a way to use a number, or something else more efficient, to specify how many times I want to run the files?
I would also like to know what '.' means in cmd or Robot Framework. Where does this method come from? Is there a rule for '.'?

You can use your operating system's scripting tools to create a loop: a cmd/bat file on Windows, or a shell script using for on POSIX systems.
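For instance, a minimal POSIX shell sketch (the suite name MySuite.robot and the test name are placeholders; on Windows cmd the same idea is for /L %i in (1,1,100) do robot ...):

```shell
# Run the same suite 100 times; the robot call is shown as a comment
# so the loop structure itself is clear.
runs=0
for i in $(seq 1 100); do
    # robot -t "My test" MySuite.robot
    runs=$((runs + 1))
done
echo "completed $runs runs"
```

Each iteration would overwrite output.xml, so in practice you would also pass something like --output "output_${i}.xml" and merge the results with rebot afterwards.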

Related

UNIX commands which use neither standard input nor standard output

I want 3 UNIX commands which use neither standard input nor standard output.
I am still confused about whether a command involving redirection or pipes can be called an example of not using standard input or standard output.
"standard input" is normally what comes from the user's keyboard, or in the case of a pipeline, what comes from the preceding process.
"standard output" is normally what gets sent to the screen, or in the case of a pipeline, what gets sent onto the following process.
So, you seem to be looking for commands that read no input, and produce no output. Here are some that spring to mind:
cd
touch
mkdir
nice
shutdown
true
false
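For instance, true and false are easy to check (a trivial shell demonstration):

```shell
# 'true' reads nothing from stdin and writes nothing to stdout;
# both commands communicate only through their exit status.
out=$(true < /dev/null)
if true;  then status_true=0;  else status_true=1;  fi
if false; then status_false=0; else status_false=1; fi
```

Capturing the (empty) output and checking the exit statuses shows that nothing flows through stdin or stdout at all.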

How To Run Multiple File Feature Automation Testing Using Behave On The Command Terminal

For example, I have created some features for automation testing using Behave: login_account.feature and choose_product.feature. To run a single feature, I use this command in the terminal: behave -f behave_html_formatter:HTMLFormatter -i login_account.feature.
I want to run multiple features, login_account.feature and choose_product.feature, in one command in the terminal. Can anyone give me an example command to run multiple features using Behave?
Thank you in advance.
I think there might be two issues:
I'm not sure if that's the way to run with that formatter. See the behave-html-formatter docs for config and an example. I can't try it myself as I don't want/need this formatter installed. I would suggest you worry about this after you understand how to run multiple features, which is where issue #2 comes in.
-i expects a single argument, which is a regular expression pattern.
Assuming you are in your features folder, just list the two features with space separator after the behave command:
behave login_account.feature choose_product.feature
Better still, you could use tags. This makes way more sense if you want to run more than the two files in your example. Add a tag at the top of each of the feature files you want to execute, e.g. @runme (Gherkin tags start with @).
Then execute only the ones having the preferred tag:
behave -t runme

Running couple of tests in robot framework infinitely

How can I run a couple of tests in Robot Framework infinitely, or at least a large number of times finitely?
Eg:
Test case 1
.
.
.
Test case 2
.
.
.
Test case 3
.
.
.
I want Tests to run in the order 1,2,3,1,2,3... finitely (for a large number) or infinitely.
I know how to do it for a single test, but I want it to come back and do test 1 after test 3, and I want this batch to run in a loop.
It is not possible to create an infinite loop within RF which will run the current file over and over again indefinitely. Instead, you could create a script which points to the RF file and handles the looping for you; when you kill the process, it joins all the output.xml files together, creating the mother of all RF reports. Here is a quick example in Python:
import subprocess
import os
import glob

try:
    while True:
        # Add any robot options you may want
        subprocess.call(["robot", "EnterFileNameHere.robot"])
except KeyboardInterrupt:
    # Merge every output.xml produced by the runs above
    os.chdir("/DirectoryWhich/HasAll/TheXML/Files")
    outputs = sorted(glob.glob("*.xml"))
    # Add any rebot options you may want
    subprocess.call(["rebot"] + outputs)
Change the directories to match where your files are, and this will fire off your robot file of choice indefinitely, constantly creating report and output files. Once you kill it (with CTRL+C), that is caught as a KeyboardInterrupt, which then merges all of the output files for you before exiting.
The only other way to do this within RF itself is by this answer here but this would only generate a report for you once the loop is completed. I do not know how it would handle report generation if you suddenly killed RF. I presume it wouldn't create any reports at all. So personally, I think this is your best bet.
Any questions let me know.

How to force robot framework to pick robot files in sequential order?

I have robot files in a folder (tests) as shown below:
tests
1_robotfile1.robot
2_robotfile2.robot
3_robotfile3.robot
4_robotfile4.robot
5_robotfile5.robot
6_robotfile6.robot
7_robotfile7.robot
8_robotfile8.robot
9_robotfile9.robot
10_robotfile10.robot
11_robotfile11.robot
Now if I execute '/root/users1/power$ pybot root/user1/tests' command, robot files are running in following order:
tests
1_robotfile1.robot
10_robotfile10.robot
11_robotfile11.robot
2_robotfile2.robot
3_robotfile3.robot
4_robotfile4.robot
5_robotfile5.robot
6_robotfile6.robot
7_robotfile7.robot
8_robotfile8.robot
9_robotfile9.robot
I want to force Robot Framework to pick robot files in sequential order, like 1,2,3,4,5...
Do we have any option for this?
If you have the option of renaming your files, you just need to make sure that the prefix is sortable. For numbers, that means they should all have the same number of digits.
I recommend renaming your test cases to have three or four digits for the prefix:
001_robotfile1.robot
002_robotfile2.robot
003_robotfile3.robot
004_robotfile4.robot
005_robotfile5.robot
006_robotfile6.robot
007_robotfile7.robot
008_robotfile8.robot
009_robotfile9.robot
010_robotfile10.robot
011_robotfile11.robot
...
With that, they will sort in the order that you expect.
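The effect is easy to demonstrate with plain sort, which, like the file-system listing, compares names character by character:

```shell
# In lexicographic order "10" sorts before "2"; zero-padding fixes this.
unpadded=$(printf '1\n2\n10\n' | LC_ALL=C sort)
padded=$(printf '001\n002\n010\n' | LC_ALL=C sort)
echo "$unpadded"
echo "$padded"
```

The unpadded list comes out as 1, 10, 2, while the padded one sorts as 001, 002, 010, which is the order you actually want.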
Following Emna's answer, the RF docs ( http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#execution-order ) provide some solutions.
So what you could do:
rename all the files to use zero-padded numeric prefixes (001-test.robot instead of 1-test.robot). This may break internal references to other files (resources), it is hard to add a test in between, and it is error-prone when the execution order needs to change
use tags, as Emna suggested
idea from the RF docs - write a script that creates an argument file which keeps the ordering correct, and pass it to the robot execution. Even for 1000+ files this should not take longer than a few seconds.
try to design tests not to depend on execution order; use suite setup instead.
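The argument-file idea can be sketched like this (file and directory names are illustrative; robot's --argumentfile option reads one argument per line):

```shell
# Create dummy suites, then build an argument file in numeric prefix order.
mkdir -p /tmp/rf_demo
for n in 1 2 10 11; do : > "/tmp/rf_demo/${n}_suite.robot"; done
(cd /tmp/rf_demo && ls *.robot | sort -n > suites.args)
head -1 /tmp/rf_demo/suites.args    # 1_suite.robot comes first
# robot --argumentfile /tmp/rf_demo/suites.args
```

sort -n orders by the leading number, so 2_suite.robot comes before 10_suite.robot even without zero-padding the file names themselves.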
good luck ;)
Tag the tests as foo and bar so you can run each test separately:
pybot -i foo tests
or
pybot -i bar tests
and run them in the order you want:
pybot -i bar tests && pybot -i foo tests
(use ; instead of && if the second run should happen even when the first one has failures)

Compress EACH LINE of a file individually and independently of one another? (or preserve newlines)

I have a very large file (~10 GB) that can be compressed to < 1 GB using gzip. I'm interested in using sort FILE | uniq -c | sort to see how often a single line is repeated, but the 10 GB file is too large to sort and my computer runs out of memory.
Is there a way to compress the file while preserving newlines (or an entirely different method altogether) that would reduce the file to a small enough size to sort, yet still leave it in a sortable condition?
Or any other method of finding out / counting how many times each line is repeated inside a large file (a ~10 GB CSV-like file)?
Thanks for any help!
Are you sure you're running out of memory (RAM?) with your sort?
My experience debugging sort problems leads me to believe that you have probably run out of disk space for sort to create its temporary files. Also recall that the disk space used for sorting is usually in /tmp or /var/tmp.
So check your available disk space with:
df -g
(some systems don't support -g; try -m (megabytes) or -k (kilobytes))
If you have an undersized /tmp partition, do you have another partition with 10-20GB free? If yes, then tell your sort to use that dir with
sort -T /alt/dir
Note that for sort version sort (GNU coreutils) 5.97, the help says:
-T, --temporary-directory=DIR use DIR for temporaries, not $TMPDIR or /tmp;
multiple options specify multiple directories
I'm not sure if this means you can combine a bunch of -T /dir1 -T /dir2 options to reach your 10GB*sortFactor of space or not. My experience was that it only used the last dir in the list, so try to use one dir that is big enough.
Also, note that you can go to whatever dir you are using for sort and watch the activity of the temporary files used for sorting.
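Putting the pieces together for the original line-counting goal (a sketch with a tiny inline sample standing in for the 10 GB file):

```shell
# Count how often each line repeats; -T points sort's temp files at /tmp.
printf 'a\nb\na\na\nb\nc\n' > /tmp/demo.txt
counts=$(sort -T /tmp /tmp/demo.txt | uniq -c | sort -rn)
echo "$counts"
```

With a big enough -T directory, the same pipeline works on the full file: sort spills to disk rather than holding everything in RAM.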
I hope this helps.
There are some possible solutions:
1 - use any text-processing language (perl, awk) to extract each line, save the line number and a hash of that line, and then compare the hashes
2 - Can/want to remove the duplicate lines, leaving just one occurrence per file? You could use a script (command) like:
awk '!x[$0]++' oldfile > newfile
3 - Why not split the file, but with some criteria? Supposing all your lines begin with letters:
- break your original_file into smaller files, one per starting letter: grep "^a" original_file > a_file
- sort each small file: a_file, b_file, and so on
- verify the duplicates, count them, do whatever you want.
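A minimal sketch of option 3, with a tiny inline sample in place of the real 10 GB file:

```shell
# Bucket lines by first character, then sort and count within the bucket.
printf 'apple\nbanana\napple\navocado\n' > /tmp/big_sample.txt
grep '^a' /tmp/big_sample.txt > /tmp/a_file
a_counts=$(sort /tmp/a_file | uniq -c | sort -rn)
echo "$a_counts"
```

Each bucket is far smaller than the original, so it sorts comfortably in memory, and since identical lines always land in the same bucket, the per-bucket counts are the true totals.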
