Allure MS Test Framework: Issue while generating the report

I am getting the below error while generating the report through the command-line report generation tool.
Exception in thread "main" ru.yandex.qatools.allure.data.ReportGenerationException: Could not find any allure results
    at ru.yandex.qatools.allure.data.AllureReportGenerator.generate(AllureReportGenerator.java:58)
    at ru.yandex.qatools.allure.data.AllureReportGenerator.generate(AllureReportGenerator.java:53)
    at ru.yandex.qatools.allure.AllureMain.main(AllureMain.java:48)

You should provide the Allure results directory on the command line to generate the report, for example:
$ allure generate allure-results -o allure-report
The Allure results directory is the directory which contains *-testsuite.xml files. Make sure the directory you pass (C:\TestResults\file.xml in your case) actually contains at least one such file.
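As a sketch, the same check can be scripted before invoking the CLI. The paths below are examples only, and `allure` must be on PATH for the generate step to run:

```python
import glob
import subprocess

results_dir = "allure-results"  # example path; point this at your results directory

# Collect the *-testsuite.xml files that Allure expects to find.
suites = glob.glob(results_dir + "/*-testsuite.xml")

if suites:
    # Only call the CLI when there is something to report on.
    subprocess.run(["allure", "generate", results_dir, "-o", "allure-report"])
else:
    print("no *-testsuite.xml files in %s; report generation would fail" % results_dir)
```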


Visual Studio Team Services dotnet publish

My build completes with no errors, but it creates a randomly named zip file (s.zip) for the release step.
After the release step, I end up with that s.zip in the inetpub/wwwroot/admin-tool/ folder. I'm almost there, but I want it to unzip and dump all of its contents there. This should be automatic, shouldn't it?
My dotnet publish looks like this:
and causes this to run, which is how I get the s.zip:
C:\Program Files\dotnet\dotnet.exe publish C:\agent\_work\3\s\Angular.AdminTool.csproj -c release --output C:\agent\_work\3\a\s
If I try to edit the -o argument and make it -o $(Build.ArtifactStagingDirectory)/admin-tool I will just end up with C:\agent\_work\3\a\admin-tool\s.zip
Is getting the name of my zip to match the name of my web site (admin-tool) the key to getting the zip to extract automatically in the release step?
In case it helps others, I used the simple command-line tools rather than the pre-canned "dotnet core" ones:
and for the Archive files task, be sure to include a hard-coded name for the zip to be used in the build process:
And for the release, in the "Deploy IIS App" task, be sure to include the full path for the zip file:
I also ran into this issue but solved it another way.
In my case I have a single project to build and deploy.
The $(Parameters.RestoreBuildProjects) value is set to a single project
MyProjectFolder/MyProject.csproj.
In the .Net Core Publish task, this value is used in the Path to Project(s) field.
Then I tick both the boxes for
I saved and queued this pipeline.
The zip file created seems to be derived from the name of the folder, so I ended up with a zip file in the artifact staging directory with the name of the project. The Publish Artifact task then placed this zip file into the artifact named in that task.

rebot command not generating output.xml

I have a rebot command which otherwise works fine, as follows:
rebot --reporttitle "Test Report" --outputdir /logs --output output.xml --report report.html /logs/api.xml /logs/ff.xml /logs/chrome.xml
But one of the test suites doesn't have the API tests, so no api.xml gets generated. In that case rebot doesn't generate the output.xml. Is there a way to mark the xml files as optional, so that rebot doesn't require all of them to exist?
No, there is not. You need to put that logic in a script that calls rebot. However, if all of your logs (and only your logs) are written to /logs, you can use /logs/*.xml on the command line and rebot will process all of the .xml files in that directory.
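A minimal sketch of such a wrapper in Python, reusing the file names from the question (it assumes `rebot` is on PATH whenever any input files exist):

```python
import os
import subprocess

# Candidate result files; some of them may not have been produced on a given run.
candidates = ["/logs/api.xml", "/logs/ff.xml", "/logs/chrome.xml"]

# Keep only the files that actually exist on disk.
inputs = [p for p in candidates if os.path.exists(p)]

if inputs:
    subprocess.run(["rebot", "--reporttitle", "Test Report",
                    "--outputdir", "/logs",
                    "--output", "output.xml",
                    "--report", "report.html"] + inputs)
else:
    print("no result files found; skipping rebot")
```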

How to save Robot framework test run logs in some folder with timestamp?

I am using Robot Framework to run 50 test cases. Every time it runs, it creates the following three files, as expected:
c:\users\<user>\appdata\local\output.xml
c:\users\<user>\appdata\local\log.html
c:\users\<user>\appdata\local\report.html
But when I run the same robot file again, these files are removed and new log files are created.
I want to keep the logs of all previous runs for future reference. The log files should be saved in a folder with a timestamp in its name.
NOTE: I am running the robot file from the command prompt (pybot test.robot), NOT from RIDE.
Could anyone guide me on this?
Using the built-in features of robot
The robot framework user guide has a section titled Timestamping output files which describes how to do this.
From the documentation:
All output files listed in this section can be automatically timestamped with the option --timestampoutputs (-T). When this option is used, a timestamp in the format YYYYMMDD-hhmmss is placed between the extension and the base name of each file. The example below would, for example, create such output files as output-20080604-163225.xml and mylog-20080604-163225.html:
robot --timestampoutputs --log mylog.html --report NONE tests.robot
To specify a folder, this too is documented in the user guide, in the section Output Directory, under Different Output Files:
...The default output directory is the directory where the execution is started from, but it can be altered with the --outputdir (-d) option. The path set with this option is, again, relative to the execution directory, but can naturally be given also as an absolute path...
Using a helper script
You can write a script (in python, bash, powershell, etc) that performs two duties:
launches pybot with all the options you want
renames the output files
You then just use this helper script instead of calling pybot directly.
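For example, a minimal helper sketch in Python (the base path and test file name are illustrative; uncomment the last line to actually launch Robot Framework):

```python
import datetime
import os

def timestamped_outputdir(base):
    """Build a per-run output directory like base/20080604-163225."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return os.path.join(base, stamp)

outdir = timestamped_outputdir("results")
os.makedirs(outdir, exist_ok=True)

# Hand the directory to Robot Framework; log/report keep their default names.
# import subprocess
# subprocess.run(["robot", "--outputdir", outdir, "test.robot"])
```

Each run gets its own directory, so the default output.xml, log.html and report.html from earlier runs are never overwritten.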
I'm having trouble working out how to create a timestamped directory at the end of the execution. This is my script; it timestamps the files, but I don't really want that. I just want the default file names inside a timestamped directory after each execution.
CALL "C:\Python27\Scripts\robot.bat" --variable BROWSER:IE --outputdir C:\robot\ --timestampoutputs --name "Robot Execution" Tests\test1.robot
You can have the output files placed in a directory named with the timestamp, as I explain in the RIDE FAQ.
This would be in your case:
-d ./%date:~-4,4%%date:~-10,2%%date:~-7,2%
You can update the default output folder of Robot Framework in the PyCharm IDE by updating the value for the key 'OutputDir' in the settings.py file, found in the folder mentioned below.
..ProjectDirectory\venv\Lib\site-packages\robot\conf\settings.py
Update the 'OutputDir' key value in the _cli_opts dictionary of class _BaseSettings(object) to str(os.getcwd()) + "//Results//Report_" + datetime.datetime.now().strftime("%d%b%Y_%H%M%S"):
_cli_opts = {
    # Update the abspath('.') to the required folder path.
    # 'OutputDir' : ('outputdir', abspath('.')),
    'OutputDir' : ('outputdir', str(os.getcwd()) + "//Results//Report_" + datetime.datetime.now().strftime("%d%b%Y_%H%M%S") + "//"),
    'Report'    : ('report', 'report.html'),

Get a proper report from PHPUnit

Is there a way to get proper logs from a PHPUnit test? In the help manual there are a few options:
--log-junit <file> Log test execution in JUnit XML format to file.
--log-tap <file> Log test execution in TAP format to file.
--log-json <file> Log test execution in JSON format.
--coverage-clover <file> Generate code coverage report in Clover XML format.
--coverage-html <dir> Generate code coverage report in HTML format.
--coverage-php <file> Serialize PHP_CodeCoverage object to file.
--coverage-text=<file> Generate code coverage report in text format.
--testdox-html <file> Write agile documentation in HTML format to file.
--testdox-text <file> Write agile documentation in Text format to file.
Now, I can get --coverage-html /tmp/report to work and it generates a nice HTML report, but it's empty. How do I fill it with data about the tests? Because, if I understand it correctly, coverage-html just creates a template for the information to be shown.

rtags: "etags: no input files specified."

When I do rtags(ofile="TAGS") at the R prompt, the "TAGS" file is written and there is no output to the terminal (exactly as expected).
When I do R CMD rtags -o TAGS at the shell prompt, the "TAGS" file is written too, but I see several sets of messages on the terminal like this:
etags: no input files specified.
Try `etags --help' for a complete list of options.
I see six sets (12 lines) when I move my libPath out of the current directory and two sets (4 lines) when I keep it there; i.e., I see more warnings when rtags processes fewer files.
To reproduce, run in an empty directory:
$ mkdir z
$ cd z
$ R --vanilla CMD rtags
Tagging R/C/Rd files under /home/sds/z; writing to TAGS (overwriting)...
etags: no input files specified.
Try `etags --help' for a complete list of options.
etags: no input files specified.
Try `etags --help' for a complete list of options.
etags: no input files specified.
Try `etags --help' for a complete list of options.
etags: no input files specified.
Try `etags --help' for a complete list of options.
etags: no input files specified.
Try `etags --help' for a complete list of options.
etags: no input files specified.
Try `etags --help' for a complete list of options.
Done
What causes these warnings?
Is there a way to avoid them?
When I run this from a console session, the warnings to the console are not at all like yours; they are mostly meaningless comments about the process of walking through the files in my working directory:
1: In file.remove(ofile) :
cannot remove file 'TAGS', reason 'No such file or directory'
.....
: In readLines(file) : incomplete final line found on './Untitled.R'
I did have a few like this:
6: In grepl("\n", lines, fixed = TRUE) : input string 5 is invalid in this locale
The real information about the location of assigned tokens in the code goes into the TAGS file. Since the warnings in my setup are quite different, I still think the answer to your question about the increase in the number of warnings when you change the .Library variable will depend on the specific code that R is parsing during the operation. A guess: removing code from being loaded may make certain operations impossible that would otherwise have run smoothly. And remember, these are only 'warnings'.
The command R CMD rtags creates a TAGS file for 3 different types of files: R files, C files and Rd files (the documentation for R files). For R files it uses the R function utils::rtags(), and for C and Rd files it calls the etags utility (on Linux; I'm not sure what it does on macOS or Windows).
The error messages that you see are emitted by etags when it is called without any input files. This happens because R CMD rtags uses find to look for files for etags to process and feeds the output of find into etags. If there are no C or Rd files in the directory you are processing, or in any sub-directory of it, etags is called with an empty list of files, which causes it to print an error message.
You see several error messages because there are separate calls to etags for Rd files and 'C files', which actually includes .c, .h, .cpp and several other file types.
In order to suppress these messages you should explicitly tell R CMD rtags not to process files you don't have. For example, R CMD rtags --no-c will not try to look for C files, and R CMD rtags --no-Rd will not try to look for Rd files.
So if, for example, you are interested in tagging only R files, use R CMD rtags --no-c --no-Rd. See R CMD rtags --help for more details.
As a side note, if you have C files in your project and you do want to tag them along with R files, you might still get such error messages: say you have *.c and *.h files but no *.cpp files, and you call R CMD rtags without the --no-c flag. The command will also look for *.cpp files and call etags with an empty list.
Another note worth mentioning here, although not directly related to the question, is that R CMD rtags looks only for *.R files that are directly under a directory named R. So R files in a directory with a different name will not be tagged. If you need to tag R files in such directories, or if you need more flexibility, you can call utils::rtags() from an R session with suitable arguments.
