remove log information from report and save report in desired location - robotframework

I am new to Robot Framework and wanted to see if I can get some simple example code for a custom report. I went through all the questions related to reports but could not find a specific answer to my problem. Currently my report contains log information, and I want to remove it so the report only shows PASS/FAIL information. I also need to know how I can save my report in a different location. Can anyone give me an example of how to do both? Thank you in advance.

There is a tool called Rebot which is part of Robot Framework.
By default, Robot Framework creates an XML output file, which Rebot automatically converts into the HTML log and report.
You can set the location of the output files by specifying the --outputdir parameter at execution time (and thus set a different base directory for all outputs).
From the documentation:
All output files can be set using an absolute path, in which case they are created to the specified place, but in other cases, the path is considered relative to the output directory. The default output directory is the directory where the execution is started from, but it can be altered with the --outputdir (-d) option. The path set with this option is, again, relative to the execution directory, but can naturally be given also as an absolute path. Regardless of how a path to an individual output file is obtained, its parent directory is created automatically, if it does not exist already.
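For example, to send all generated files (output, log, and report) to a specific folder at execution time (the paths here are just illustrative):
robot --outputdir c:\version1.0\reports tests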
You can also call Rebot yourself to control this conversion, or run it after the test execution to create new outputs in a different location.
See documentation in:
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#post-processing-outputs
The following example shows how to store the HTML reports in a different location and include only partial data:
rebot --include smoke --name Smoke_Tests c:\results\output.xml --outputdir c:\version1.0\reports
In the example above, we process the file c:\results\output.xml, create a new report called Smoke_Tests that includes only tests with the tag smoke, and save it to the output folder c:\version1.0\reports.
In addition, you can set the locations of the log and report files (both HTML) from the execution:
The command line option --log (-l) determines where the log file is created.
The command line option --report (-r) determines where the report file is created.
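For example (the file names are illustrative):
robot --outputdir c:\version1.0\reports --log custom_log.html --report custom_report.html tests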
Removing log lines can be done a bit differently. If you run rebot --help you'll get the following options:
    --removekeywords all|passed|for|wuks|name:&lt;pattern&gt; *
                          Remove keyword data from all generated outputs.
                          Keywords containing warnings are not removed except
                          in `all` mode.
                          all:     remove data from all keywords
                          passed:  remove data only from keywords in passed
                                   test cases and suites
                          for:     remove passed iterations from for loops
                          wuks:    remove all but the last failing keyword
                                   inside `BuiltIn.Wait Until Keyword Succeeds`
                          name:&lt;pattern&gt;: remove data from keywords that match
                                   the given pattern. The pattern is matched
                                   against the full name of the keyword (e.g.
                                   'MyLib.Keyword', 'resource.Second Keyword'),
                                   is case, space, and underscore insensitive,
                                   and may contain `*` and `?` as wildcards.
                                   Examples: --removekeywords name:Lib.HugeKw
                                             --removekeywords name:myresource.*
    --flattenkeywords for|foritem|name:&lt;pattern&gt; *
                          Flattens matching keywords in all generated outputs.
                          Matching keywords get all log messages from their
                          child keywords and children are discarded otherwise.
                          for:     flatten for loops fully
                          foritem: flatten individual for loop iterations
                          name:&lt;pattern&gt;: flatten matched keywords using same
                                   matching rules as with
                                   `--removekeywords name:&lt;pattern&gt;`
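Putting it together for the original question - a report with only PASS/FAIL data and no log - something along these lines should work; NONE is a special value that disables creation of the log file entirely:
rebot --removekeywords all --log NONE --outputdir c:\version1.0\reports c:\results\output.xml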

scp_download to download multiple files based on a pattern?

I need to download many files from a server (specifically Tectia), ideally using the ssh package. These files all follow a predictable pattern across multiple subfolders. The filepath is formatted like this:
/directory/subfolder/A001/abcde001.csv
Where A001 counts up alongside the last 3 digits of the filename (/A002/abcde002.csv and so on).
The vignette for scp_download states that the files parameter may contain wildcards, so I have tried something like
scp_download(session, "/directory/subfolder/A.*/abcde.*[.]csv", to=tempdir())
and
scp_download(session, "directory/subfolder/A\\d{3}/abcde\\d{3}[.]csv", to=tempdir())
but no matter which combination of patterns or wildcards I try (and I can't think of many), I only get something like
Warning: SSH warning: scp: /directory/subfolder/A\d{3}/abcde\d{3}[.]csv: No such file or directory
What I'm hoping to do is either find a way to do pattern matching here, or find a way to store the Tectia directory listing as a string to be read by scp_download. I've made sure that my session is connected properly, and the download works when I don't attempt any pattern matching.
I had the same problem. The issue is that when you use * in your pattern, it gets escaped when it is sent to the server. However, when you request a specific file name like /directory/subfolder/A001/abcde001.csv, it works fine.
I ended up changing my code based on these steps:
1. Get the list of files/folders using the ls command via the ssh_exec_wait function and store the output in a variable.
2. Download the files in that variable separately:
session <- ssh_connect("username@ip", passwd = "password")
files <- capture.output(ssh_exec_wait(session, command = 'ls /directory/subfolder/A001/*'))
dnc1 <- scp_download(session, files[1], to = paste0(getwd(), "/data/"))
dnc2 <- scp_download(session, files[2], to = paste0(getwd(), "/data/"))
dnc3 <- scp_download(session, files[3], to = paste0(getwd(), "/data/"))
The last three commands can be done in a loop, since there could be hundreds or thousands of files.
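As a sketch, the per-file downloads can be replaced with a loop over the captured listing (this assumes files holds one path per element, as returned by the ls call above):
# download every file from the listing into ./data/
for (f in files) {
  scp_download(session, f, to = paste0(getwd(), "/data/"))
}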

Snakemake: how to use one integer from list each call as input to script?

I'm trying to practice writing workflows in snakemake.
The contents of my Snakefile:
configfile: "config.yaml"

rule get_col:
    input:
        expand("data/{file}.csv", file=config["datname"])
    output:
        expand("output/{file}_col{param}.csv", file=config["datname"], param=config["cols"])
    params:
        col=config["cols"]
    script:
        "scripts/getCols.R"
The contents of config.yaml:
cols:
  [2, 4]
datname:
  "GSE3790_expression_data"
My R script:
getCols = function(input, output, col) {
  dat = read.csv(input)
  dat = dat[, col]
  write.csv(dat, output, row.names = F)
}

getCols(snakemake@input[[1]], snakemake@output[[1]], snakemake@params[['col']])
It seems like both columns are being passed at once, whereas what I'm trying to accomplish is one column from the list per output file.
Since the second output never gets a chance to be created (both columns are used to create the first output), snakemake throws an error:
Waiting at most 5 seconds for missing files.
MissingOutputException in line 3 of /Users/rebecca/Desktop/snakemake-tutorial/practice/Snakefile:
Job completed successfully, but some output files are missing.
On a slightly unrelated note, I thought I could write the input as:
'"data/{file}.csv"'
But that returns:
WildcardError in line 4 of /Users/rebecca/Desktop/snakemake-tutorial/practice/Snakefile:
Wildcards in input files cannot be determined from output files:
'file'
Any help would be much appreciated!
Looks like you want to run your R script twice per file, once for every value of col. In that case, the rule needs to be called twice as well.
The use of expand is also a bit too much here, in my opinion. expand fills your wildcards with all possible values and returns a list of the resulting files. So the output of this rule would be all possible combinations of files and cols, which the simple script cannot create in one run.
This is also the reason why file cannot be inferred from the output - it gets expanded there.
Instead, write the rule for just one file and column, and expand on the resulting output in a rule which takes that output as input. If this is the final output of your workflow, put it as the input of a rule all to tell the workflow what the ultimate goal is.
rule all:
    input:
        expand("output/{file}_col{param}.csv",
               file=config["datname"], param=config["cols"])

rule get_col:
    input:
        "data/{file}.csv"
    output:
        "output/{file}_col{param}.csv"
    params:
        col=lambda wc: wc.param
    script:
        "scripts/getCols.R"
Snakemake will infer from rule all (or any other rule to further use the output) what needs to be done and will call the rule get_col accordingly.
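With this layout, running e.g. snakemake --cores 1 should resolve rule all, determine the missing output files, and call get_col once per file/column combination.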

zsh: redirect only standard error to /dev/null

I want to use something like pdfs=$(echo *.pdf) and drop the error message that comes up when no files are present. But the docs only have examples where both outputs are redirected together.
Standard error is file descriptor 2, so if you are actually running a command that you expect to produce output on standard error, redirect that descriptor:
pdfs=$(echo *.pdf 2> /dev/null)
However, don't write code like in your example. A flat string cannot usefully store an arbitrary list of file names, because you can't distinguish between filename delimiters and valid filename characters. Instead, use an array, which doesn't require any separate command (and thus no need to redirect standard error):
pdfs=( *.pdf(N) ) # You can drop the (N) if you already have NULL_GLOB enabled
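As a usage sketch, the array can then be tested and iterated safely, with no redirection needed:
# the array is simply empty (not an error) when no files match
if (( ${#pdfs} )); then
  for f in $pdfs; do
    print -r -- $f
  done
fi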

Robot Framework : Configuration Profiles

I have a configuration file that I am reading into my robot test cases. This configuration file contains the following variables:
${DATABASE_IP} 127.0.0.1
${ORACLE_SYSTEM_ID} xe
${ORACLE_DATABASE_URL} jdbc:oracle:thin:@${DATABASE_IP}:1521:${ORACLE_SYSTEM_ID}
${ORACLE_DATABASE_USER} cooluser
${ORACLE_DATABASE_PASSWORD} coolpassword
${ORACLE_DATABASE_DRIVER} oracle.jdbc.driver.OracleDriver
One thing I'd like to be able to do is change some of these properties depending on where the script is executed from - for example, Jenkins.
A simple way to look at this, is say as follows:
I have a test file called database_test.robot.
If I invoke this file on my local machine, I'd like to pass in an argument to ensure ${DATABASE_IP} equates to 127.0.0.1. When Jenkins runs it, I want that value to point somewhere else.
Something like this already exists with maven, where you can specify a profile at runtime. Ex: mvn verify -Plocal-config ; mvn verify -Pjenkins-config
I have looked through the Robot Framework documentation but cannot seem to implement something similar. The only way to swap out properties that I see is to remove the old ones and add the new. Note: I have hundreds of properties that will differ, and several other environments aside from Jenkins and local that would take different values.
Robot gives you at least three ways to solve this: argument files, variable files, and resource files. In each of the cases, you can specify which environment settings to use with a command line argument.
Argument files
Argument files are, as the name implies, files from which robot can read arguments. They are a convenient way to specify a group of command line arguments.
For example, you could create a "environments" folder that contains argument files for each of your environments (production.args, staging.args, local.args) and within the file you would set the values for all of the variables.
For example, you could create a file named local.args with the following contents:
--variable DATABASE_IP:127.0.0.1
--variable ORACLE_SYSTEM_ID:xe
--variable ORACLE_DATABASE_URL:jdbc:oracle:thin:@127.0.0.1:1521:xe
--variable ORACLE_DATABASE_USER:cooluser
--variable ORACLE_DATABASE_PASSWORD:coolpassword
--variable ORACLE_DATABASE_DRIVER:oracle.jdbc.driver.OracleDriver
Then, to run with this configuration you would use the -A or --argumentfile option:
robot --argumentfile environments/local.args ...
The advantage to using argument files is that you can override single values on the command line for times when you need to change just one value:
robot --argumentfile environments/local.args --variable ORACLE_DATABASE_USER:anotheruser
Also, argument files can include any other command line arguments. For example, if you always want to ignore tests on your CI server that are known to be broken, you could include something like --exclude known-broken (where known-broken is a tag you've applied to one or more tests).
One downside to argument files is that you can't define variables based on the value of previous variables (i.e. you can't do --variable FOOBAR:${FOO}bar). I've not found that to be much of a problem.
Variable files
Variable files work in a similar way, but let you define the variables with python. The advantage to variable files is that you can do anything that python lets you do. For example, you could automatically determine the IP of the local database, or selectively turn features on or off based on runtime conditions.
The simplest way to define a variable file is to create python variables, which robot will find by importing your file.
For example, the variable file for your variables might look like this:
DATABASE_IP = "127.0.0.1"
ORACLE_SYSTEM_ID = "xe"
ORACLE_DATABASE_URL = "jdbc:oracle:thin:@%s:1521:%s" % (DATABASE_IP, ORACLE_SYSTEM_ID)
ORACLE_DATABASE_USER = "cooluser"
ORACLE_DATABASE_PASSWORD = "coolpassword"
ORACLE_DATABASE_DRIVER = "oracle.jdbc.driver.OracleDriver"
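You would then point robot at this file with the --variablefile option (the path is illustrative):
robot --variablefile environments/local.py ...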
Resource Files
Much like the other two solutions, you can have separate resource files for each environment. Since robot allows you to use variables in resource file paths within a suite, you can use a variable to define which resource file to use.
For example, you could import a resource file like this:
# some_tests.robot
*** Settings ***
Resource    config/${environment}.robot
You would then create a config file for each environment like you normally would (eg: config/local.robot, config/staging.robot, etc). Then, when you run robot you can tell it which resource file to use:
$ robot --variable environment:local ...
I tried the third option with resource files, but using an equals sign in the command line argument:
$ robot --variable environment=local
didn't work for me. After looking at the robot help, I learned that variable values should be passed with : and not with =.
So I tried:
$ robot --variable environment:local
and it worked for me.
The correct way to specify the Resource path for a subdirectory is:
Resource    ../config/${environment}.robot
if config is a subdirectory.
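For completeness, a config/local.robot resource file holding the values from the question might look like this:
*** Variables ***
${DATABASE_IP}                 127.0.0.1
${ORACLE_SYSTEM_ID}            xe
${ORACLE_DATABASE_URL}         jdbc:oracle:thin:@${DATABASE_IP}:1521:${ORACLE_SYSTEM_ID}
${ORACLE_DATABASE_USER}        cooluser
${ORACLE_DATABASE_PASSWORD}    coolpassword
${ORACLE_DATABASE_DRIVER}      oracle.jdbc.driver.OracleDriver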

Process many EDI files through single MFX

I've created a mapping in MapForce 2013 and exported the MFX file. Now, I need to be able to run the mapping using MapForce Server. The problem is, I need to specify both the input EDI file and the output file. As far as I can tell, the usage pattern is to run the mapping with MapForce server using the input/output configuration in the MFX itself, not passed in on the command line.
I suppose I could change the input/output to some standard file name and then just write the input file to that path before performing the mapping, and then grab the output from the standard output file path when the mapping is complete.
But I'd prefer to be able to do something like:
MapForceServer run -in=MyInputFile.txt -out=MyOutputFile.xml MyMapping.mfx > MyLogFile.txt
Is something like this possible? Perhaps using parameters within the mapping?
There are two options that I've come across in dealing with a similar situation.
Option 1 - If you set the input file to *.xml in the component settings, mapforceserver.exe will automatically process every matching file in the directory (the same approach should work for text sources). Similar to the example below, you can add a cleanup routine that moves the files into another folder after processing.
Note: It looks in the folder where the schema file is located.
Option 2 - Since your output is XML, you can use Altova's RaptorXML (rack up another license charge). You can generate XSLT 2.0 code from the mapping and use a batch file to execute it automatically, something like this:
::@echo off
for %%f IN (*.xml) DO (
    RaptorXML xslt --xslt-version=2 --input="%%f" --output="out/%%f" %* "mymapping.xslt"
    if NOT errorlevel 1 move "%%f" processed
    if errorlevel 1 move "%%f" error
)
sleep 15
mymapping.bat
I tossed in a sleep command and have the batch file call itself, so it rechecks every 15 seconds. Unfortunately this does not work if your output target is a database.
