Uploading test results to HP ALM - Unix

I have created a test result file on a particular Unix server, with the help of Jenkins jobs, as below:
TC1,Pass
TC2,Fail
TC3,Pass
........
and so on.
I have to upload/update the test case results accordingly in the HP ALM Test Lab.
I have two challenges:
1. Transfer this test result file from the Unix server to the Jenkins workspace.
2. Then load the results into HP ALM using the HP ALM plugin.
In step 2, it asks for a testing framework (JUnit, NUnit, TestNG), none of which I have used.
Please suggest...
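The two challenges can be sketched roughly as follows. The host name, file paths, and suite name below are placeholders, and the CSV-to-JUnit conversion is just one way to satisfy the plugin's framework requirement, since the HP ALM plugin can consume JUnit-style XML:

```shell
# 1. Copy the result file from the Unix server into the Jenkins workspace
#    (hypothetical host/path; a "Publish Over SSH" build step would also work):
#      scp user@unix-server:/opt/results/testresult.csv "$WORKSPACE/testresult.csv"
# For illustration, create a sample file in the documented format:
printf 'TC1,Pass\nTC2,Fail\nTC3,Pass\n' > testresult.csv

# 2. Convert the "TC,Status" lines into a minimal JUnit XML report so a
#    JUnit-aware consumer (like the HP ALM plugin) can read it:
awk -F, 'BEGIN { print "<testsuite name=\"regression\">" }
  { printf "  <testcase name=\"%s\">", $1
    if ($2 == "Fail") printf "<failure/>"
    print "</testcase>" }
  END { print "</testsuite>" }' testresult.csv > junit-result.xml
```

Running this leaves junit-result.xml in the workspace; you can then point the ALM upload step at that file and select JUnit as the framework.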

Have a look at Agiletestware Bumblebee (http://www.agiletestware.com/bumblebee.html).
It has a Jenkins plugin which can execute tests in HP ALM and/or upload test execution results in various formats into HP ALM.

Related

Can TestCafe results be tracked in HP ALM?

HP ALM is my current test management tool, and I'm currently planning on using TestCafe for automation. I would like to know if I can have the results from my TestCafe test runs shown in HP ALM.
Is it possible to make an integration, or to automatically generate reports in TestCafe that can then be imported into HP ALM? I have read about reporters for TestCafe, but I'm unsure about the extent of their functionality.
Thanks
You might want to have a look at Agiletestware Bumblebee - it can parse reports from various testing tools.
For TestCafe, you can use the xunit reporter to produce xUnit reports, e.g.:
testcafe chrome test.js -r xunit:result.xml
This will generate a result.xml file, which can then be uploaded to ALM with Bumblebee Server and the CI plugin for Jenkins/TeamCity/Bamboo.
Please refer to the documentation.
Disclaimer: I'm developing Bumblebee Server
I could not find any HP ALM plug-ins for TestCafe at this time.
You can create your own reporter for HP ALM by following the instructions from the documentation: Reporter Plugin.

Are there any agents to spawn a script or exe with a command from the ReportPortal dashboard?

I have a ReportPortal installation running on a Windows box. I am planning to use it as a dashboard to look at unit test and other automated test results. I understand ReportPortal integration with unit test frameworks is done at the logger level, so that the test app itself can send results back to the dashboard.
I have a scenario where the test application is an exe that I want to launch by sending a command from the dashboard to the system under test.
Are there any provisions for doing it?
Do I have to build an agent that talks to reportportal using its api for this?
Thanks
No, there is nothing similar at the moment.
It is a pretty popular request, so we have it in the backlog, but we are still focusing on test report aggregation first; other types of functionality will come later.

Generating Coverage report for Protractor E2E test running on different machine

I want to get a coverage report for my Protractor E2E UI tests against running Node code.
I have tried the following steps:
1. Using Istanbul, I instrumented the code on one of my app servers, managed through Nginx:
istanbul instrument . --complete-copy -o instrumented
2. Stopped the actual Node code and started the instrumented code on the same port (port 3000), without changing the Nginx config, so that any traffic hitting that app server is directed to the instrumented code running on the same server.
3. Ran the Protractor end-to-end tests from another machine: a local machine from which I run the tests, while the instrumented app is on the other server.
4. At the end of the run, I stopped the instrumented code.
Now:
- There is no coverage variable available
- There is no Coverage Folder
- No report generated
I thought the coverage report would be generated if the instrumented code was hit through the Protractor script.
I also googled around and found the plugin "protractor-istanbul-plugin", but I'm not sure if this is what I should use.
My questions:
1. Is it even possible to generate a coverage report if the instrumented code is on a different server and the Protractor script is run from a different machine?
2. If it is possible, is my assumption wrong that a report would be generated when the instrumented code is hit?
3. Should I use the istanbul cover command here, and if yes, how?
My goal is to instrument the code after deploying to the QA environment, and to trigger the Protractor script, which is placed on another machine, pointing at the QA env that has the instrumented code.
Thanks in advance.
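For what it's worth, instrumented server code accumulates coverage in an in-process __coverage__ object and never writes it to disk on its own; that is why no coverage folder or report appears. One common approach, sketched below under the assumption that you add a small HTTP endpoint (e.g. GET /coverage returning JSON.stringify(global.__coverage__)) to the instrumented app yourself; the host name and paths are placeholders:

```shell
# After the Protractor run finishes, pull the in-memory coverage object
# off the app server (assumes a hand-added GET /coverage endpoint):
mkdir -p coverage
curl -s http://qa-server:3000/coverage -o coverage/coverage.json

# Then generate a report on any machine with istanbul installed;
# "istanbul report" picks up coverage*.json files under the given root:
istanbul report --root coverage lcov
```

So it does not matter that Protractor runs on a different machine; what matters is fetching the coverage data from the process that served the traffic before stopping it.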

Protractor E2E test run in Sauce Labs is not running all tests listed in the config

We use the grunt protractor runner and have 49 specs to run.
When I run them in Sauce Labs, there are times it just runs x number of tests but not all. Any idea why? Are there any Sauce settings to be passed apart from user and key in my protractor conf.js?
Using SauceLabs selenium server at http://ondemand.saucelabs.com:80/wd/hub
[launcher] Running 1 instances of WebDriver
Started
.....
Ran 5 of 49 specs
5 specs, 0 failures
This kind of output is usually produced when there are "focused" tests present in the codebase. Check if there are fdescribe or fit calls in your tests.
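A quick way to confirm this is to search the spec sources for focused-test keywords. A minimal sketch (the spec/ directory and file name are made up for illustration):

```shell
# Create a throwaway spec containing a focused suite (illustration only):
mkdir -p spec
printf 'fdescribe("suite", function () {\n  it("runs", function () {});\n});\n' > spec/focused.spec.js

# List every fdescribe/fit occurrence under spec/ with file and line number:
grep -rnE 'fdescribe\(|fit\(' spec/
```

If any lines come back, removing the f prefix from those suites/specs restores the full run.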
As a side note, to avoid focused tests being committed to the repository, we've used static code analysis: eslint with the eslint-plugin-jasmine plugin. Then we've added a "pre-commit" git hook, with the help of the pre-git package, that runs the eslint task before every commit, preventing any code style violations from being committed to the repository.

What alternatives exist for running QTP tests in batch?

We are in the process of implementing automated regression testing for our applications and are looking for a solid batch-testing utility. We have QuickTest Professional 10.0, and it comes bundled with 'Test Batch Runner', which appears to be deprecated. It appears that previous versions had 'Multi-Test Manager', which has been discontinued as well.
What alternatives exist, if any?
The canonical way to do this is via Quality Center; if you don't have QC, you can use QTP's automation object model from a .vbs file. The documentation for this is available under Start -> Programs -> QuickTest Professional -> Documentation -> Automation Object Model Reference.
QTP 10 works excellently with Multi-Test Manager v8.2.4.
We use it for our project (we previously used it with QTP 9.2).
Try googling for an installer (if you don't have one); it should be free, just no longer supported by HP.
From WinRunner times, I very extensively used Test Driver scripts with great success, due to the following benefits:
- non-programming testers can easily create/maintain batches, as they're stored in XML format
- test input files are externally configurable through mapping
- a variety of customization parameters is supported, from login credentials to prefixes and switches
- test dependencies can be established, so that if a critical test case fails, the whole branch of dependent test cases is skipped
Now I continue using Test Drivers and introducing them to clients.
The Test Driver approach was adopted not only by client companies that do not use Quality Center; others followed it because it gives much more flexibility and robustness in automated test plan execution.
Thank you,
Albert Gareev
http://automationbeyond.wordpress.com
I echo Motti... if I get your question right, you can see the link below as well:
Work with Test Batch runner