I have scheduled regression automation that runs every day from TFS. I want to develop functionality to run only the failed test cases against the latest build, rather than running the whole regression suite again.
Is there a way I can call a web API, pass only the failed test case names, and have only those tests run when I trigger a deploy to any environment?
Or is there another way around this?
There is already a "Rerun failed tests" option in v2.* of the VSTest task in the build definition; you can select that option to rerun only the failed tests:
If you want to use the API instead, follow the steps below:
Get the test results for a test run and find the failed test cases:
GET https://{accountName}.visualstudio.com/{project}/_apis/test/Runs/{runId}/results?api-version=5.0-preview.5
Create a new query-based test suite from the failed test case IDs:
POST https://{accountName}.visualstudio.com/{project}/_apis/test/Plans/{planId}/suites/{suiteId}?api-version=5.0-preview.3
Content-Type:application/json
{
"suiteType": "DynamicTestSuite",
"name": "FailedTestCases",
"queryString": "SELECT [System.Id],[System.WorkItemType],[System.Title],[Microsoft.VSTS.Common.Priority],[System.AssignedTo],[System.AreaPath] FROM WorkItems WHERE [System.TeamProject] = @project AND [System.WorkItemType] IN GROUP 'Microsoft.TestCaseCategory' AND ( [System.Id] = xxx OR [System.Id] = xxx )"
}
Select Test Plan and Test suite in your VSTest task:
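The two API steps above can be stitched together with a small helper. This is only a sketch: the `outcome` and `testCase.id` fields follow the documented shape of the Results - List response, and the IDs in the sample payload are placeholders.

```python
import json

def failed_case_ids(results_payload):
    """Collect test case work item IDs whose outcome was 'Failed'
    (assumes the documented shape of the Results - List response)."""
    return [int(r["testCase"]["id"])
            for r in results_payload.get("value", [])
            if r.get("outcome") == "Failed" and r.get("testCase")]

def dynamic_suite_body(case_ids):
    """Build the request body for the query-based suite that selects
    exactly the failed test cases."""
    id_clause = " OR ".join("[System.Id] = %d" % i for i in case_ids)
    query = ("SELECT [System.Id],[System.Title] FROM WorkItems "
             "WHERE [System.TeamProject] = @project "
             "AND [System.WorkItemType] IN GROUP 'Microsoft.TestCaseCategory' "
             "AND ( %s )" % id_clause)
    return {"suiteType": "DynamicTestSuite",
            "name": "FailedTestCases",
            "queryString": query}

# Trimmed-down example of a Results - List response:
payload = {"value": [
    {"outcome": "Passed", "testCase": {"id": "101"}},
    {"outcome": "Failed", "testCase": {"id": "102"}},
    {"outcome": "Failed", "testCase": {"id": "103"}},
]}
print(json.dumps(dynamic_suite_body(failed_case_ids(payload)), indent=2))
```

The resulting JSON is what you would POST to the `.../Plans/{planId}/suites/{suiteId}` endpoint from the second step.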
I have a .NET 6 unit test which is linked to a TFS test case. In the build pipeline I build the .NET DLL containing the test (dotnet publish) and publish it as an artifact (the bin\Release\net6.0\publish path).
In the release pipeline I copy the artifact files to a temp folder, then apply a Visual Studio Test step on this folder, using Test Plan and selecting the test suites containing my test case.
Executing the test release pipeline, I see the VS Test step found no tests (I don't know what 'test' means here - test case or test method):
Number of testcases discovered : 0
After this line I see:
[RunStatistics]This execution slice with id '8405', received '1' testcases to execute out of which '0' is discovered.
2022-09-07T13:15:26.6593846Z ##[error]The slice of type 'Execution' is 'Aborted' because of the error : Microsoft.VisualStudio.TestService.VstestAdapter.TestsNotFoundException: No test assemblies found on the test machine matching the source filter criteria or no tests discovered matching test filter criteria. Verify that test assemblies are present on the machine and test filter criteria is correct.
I use MSTest; the test can be discovered in VS2022 and executed correctly there.
In the target directory (C:\temp\automata-tfs-tests\MyAPP-autotests) my DLL exists (Test.AutoTest.dll), and there is also a testhost.exe (if that is of interest).
Does anyone have any idea what I should do to fix the problem?
Got the following (full) log:
2022-09-07T14:16:02.0649513Z ##[section]Starting: Test run for Test plans
2022-09-07T14:16:02.0989252Z ==============================================================================
2022-09-07T14:16:02.0989349Z Task         : Visual Studio Test
2022-09-07T14:16:02.0989396Z Description  : Run unit and functional tests (Selenium, Appium, Coded UI test, etc.) using the Visual Studio Test (VsTest) runner. Test frameworks that have a Visual Studio test adapter such as MsTest, xUnit, NUnit, Chutzpah (for JavaScript tests using QUnit, Mocha and Jasmine), etc. can be run. Tests can be distributed on multiple agents using this task (version 2).
2022-09-07T14:16:02.0989460Z Version      : 2.153.9
2022-09-07T14:16:02.0989504Z Author       : Microsoft Corporation
2022-09-07T14:16:02.0989565Z Help         : More information
2022-09-07T14:16:02.0989612Z ==============================================================================
2022-09-07T14:16:03.4309686Z SystemVssConnection exists true
2022-09-07T14:16:03.4310108Z SystemVssConnection exists true
2022-09-07T14:16:03.5947964Z SystemVssConnection exists true
2022-09-07T14:16:03.6776898Z In distributed testing flow
2022-09-07T14:16:03.6777031Z ======================================================
2022-09-07T14:16:03.6777567Z Test selector : Test plan
2022-09-07T14:16:03.6777717Z Test plan Id : 5955
2022-09-07T14:16:03.6777832Z Test plan configuration Id : 55
2022-09-07T14:16:03.6777963Z Test suite Id selected: 5956
2022-09-07T14:16:03.6778021Z Test suite Id selected: 5959
2022-09-07T14:16:03.6778071Z Test suite Id selected: 5957
2022-09-07T14:16:03.6778205Z Search folder : C:\temp\automata-tfs-tests\MyAPP-autotests
2022-09-07T14:16:03.6779655Z VisualStudio version selected for test execution : latest
2022-09-07T14:16:03.6780047Z Attempting to find vstest.console from a visual studio installation.
2022-09-07T14:16:03.6959999Z Attempting to find vstest.console from a visual studio build tools installation.
2022-09-07T14:16:03.7387349Z Attempting to find vstest.console from a visual studio installation.
2022-09-07T14:16:03.7808933Z Attempting to find vstest.console from a visual studio build tools installation.
2022-09-07T14:16:03.8285853Z Distributed test execution, number of agents in job : 1
2022-09-07T14:16:03.8286311Z Number of test cases per batch : 100
2022-09-07T14:16:03.8307860Z Run in parallel : false
2022-09-07T14:16:03.8311511Z Run in isolation : false
2022-09-07T14:16:03.8312158Z Path to custom adapters : null
2022-09-07T14:16:03.8322692Z Other console options : /UseVsixExtensions:true /logger:trx
2022-09-07T14:16:03.8400643Z ##[warning]Other console options is not supported for this task configuration. This option will be ignored.
2022-09-07T14:16:03.8412667Z Code coverage enabled : false
2022-09-07T14:16:03.8412905Z Diagnostics enabled : false
2022-09-07T14:16:03.9119940Z ======================================================
2022-09-07T14:16:03.9120805Z Source filter: *test.dll,!**\obj*
2022-09-07T14:16:04.0673691Z SystemVssConnection exists true
2022-09-07T14:16:04.0755317Z [command]C:\BuildAgent_work_tasks\VSTest_ef087383-ee5e-42c7-9a53-ab56c98420f9\2.153.9\Modules\DTAExecutionHost.exe --inputFile C:\BuildAgent_work_temp\input_9b793c20-2eb7-11ed-955e-6334f9e0a11a.json
2022-09-07T14:16:04.1535459Z ##########################################################################
2022-09-07T14:16:04.1535614Z DtaExecutionHost version 17.153.29006.1.
2022-09-07T14:16:05.1309152Z ===========================================
2022-09-07T14:16:05.1309447Z AgentName: BUILDAGENT-MyBu4-BUILDAGENT-MyBu4-24
2022-09-07T14:16:05.1309556Z ServiceUrl: http://tfs:8080/tfs/MyBu/
2022-09-07T14:16:05.1309615Z TestPlatformVersion: 14.0.25420
2022-09-07T14:16:05.1309671Z EnvironmentUri: vstest://env/MyAPP/_apis/release/3/764/1759/1
2022-09-07T14:16:05.1309725Z QueryForTaskIntervalInMilliseconds: 3000
2022-09-07T14:16:05.1309821Z MaxQueryForTaskIntervalInMilliseconds: 10000
2022-09-07T14:16:05.1309901Z QueueNotFoundDelayTimeInMilliseconds: 3000
2022-09-07T14:16:05.1312343Z MaxQueueNotFoundDelayTimeInMilliseconds: 50000
2022-09-07T14:16:05.1312398Z ===========================================
2022-09-07T14:16:05.4078702Z TestExecutionHost.Execute: Registered TestAgent : 4034 : BUILDAGENT-MyBu4-BUILDAGENT-MyBu4-24
2022-09-07T14:16:05.4371941Z IsValidServiceResponse: Received None command..Service Workflow is not active
2022-09-07T14:16:05.4417823Z Updated Run Settings:
2022-09-07T14:16:05.4434862Z
2022-09-07T14:16:05.4435203Z
2022-09-07T14:16:05.4435273Z C:\BuildAgent_work_temp\TR_46cf95a5-fd11-47f6-b503-3fa5ffad22f3
2022-09-07T14:16:05.4435335Z
2022-09-07T14:16:05.4435413Z
2022-09-07T14:16:05.6424857Z Creating run for selected test plan with following parameters
2022-09-07T14:16:05.6426439Z Test plan ID: 5955
2022-09-07T14:16:05.6427075Z Test suite ID: 5956,5959,5957
2022-09-07T14:16:05.6427697Z Test configuration ID: 55
2022-09-07T14:16:05.7792179Z No test cases for test suite 5956
2022-09-07T14:16:06.0575605Z No test cases for test suite 5957
2022-09-07T14:16:06.0577076Z test configuration mapping:
2022-09-07T14:16:06.0617024Z test settings id : 4895
2022-09-07T14:16:06.0617414Z Run title: MyAPP Auto Test Results
2022-09-07T14:16:06.0617774Z Build location: C:\temp\automata-tfs-tests\MyAPP-autotests
2022-09-07T14:16:06.0618880Z Build Id: 15656
2022-09-07T14:16:06.3113960Z Test run with Id 16287 associated
2022-09-07T14:16:16.4930577Z Received the command : Start
2022-09-07T14:16:16.4949323Z TestExecutionHost.ProcessCommand. Start Command handled
2022-09-07T14:16:16.5604339Z Slice with id = 8407, of type = 'Execution' received.
2022-09-07T14:16:16.7663381Z Count of test sources found: 1
2022-09-07T14:16:16.7763833Z =================================================================
2022-09-07T14:16:16.7767368Z Discovering tests from sources
2022-09-07T14:16:23.4267264Z Number of testcases discovered : 0
2022-09-07T14:16:23.4269136Z Discovered tests 0 from sources
2022-09-07T14:16:23.4299463Z =================================================================
2022-09-07T14:16:23.4344966Z [RunStatistics]This execution slice with id '8407', received '1' testcases to execute out of which '0' is discovered.
2022-09-07T14:16:23.4603621Z ##[error]The slice of type 'Execution' is 'Aborted' because of the error : Microsoft.VisualStudio.TestService.VstestAdapter.TestsNotFoundException: No test assemblies found on the test machine matching the source filter criteria or no tests discovered matching test filter criteria. Verify that test assemblies are present on the machine and test filter criteria is correct.
2022-09-07T14:16:23.4604497Z    at Microsoft.VisualStudio.TestService.VstestAdapter.Execution.Run(ExecutionStateContext stateModdelContext, CancellationToken cancellationToken)
2022-09-07T14:16:23.4604635Z    at Microsoft.VisualStudio.TestService.VstestAdapter.ExecutionAndPublish.Run(ExecutionStateContext stateModelContext, CancellationToken cancellationToken)
2022-09-07T14:16:37.5289010Z Received the command : Stop
2022-09-07T14:16:37.5289369Z TestExecutionHost.ProcessCommand. Stop Command handled
2022-09-07T14:16:37.5289442Z SliceFetch Aborted. Moving to the TestHostEnd phase
2022-09-07T14:16:37.6567079Z Please use this link to analyze the test run : http://tfs:8080/tfs/MyBu/MyAPP/_TestManagement/Runs#_a=resultQuery&runId=16287&queryPath=Recent+Run%2FRun16287
2022-09-07T14:16:37.6567848Z Test run '16287' is in 'Aborted' state with 'Total Tests' : 1 and 'Passed Tests' : 0.
2022-09-07T14:16:37.6593787Z ##[error]Test run is aborted. Logging details of the run logs.
2022-09-07T14:16:37.6621205Z ##[error]System.Exception: The test run was aborted, failing the task.
2022-09-07T14:16:37.7157795Z ##########################################################################
2022-09-07T14:16:37.8420313Z ##[section]Finishing: Test run for Test plans
Interestingly (this agent runs automated tests for another project without any problem), I had to add the "Visual Studio Test Platform Installer" task and change the setting "Test platform version" from "Latest" to "Installed by Tools Installer". Now it works.
I am currently running my automation (UI & API) tests in an Azure DevOps release pipeline.
Whenever the test run finishes I get a notification in my Slack:
Right now there is only one way to view the test results after a run: you can click on the Release link, which redirects you to the full release run info, including the test results.
Now my question is: is it possible to somehow customize the release notes?
For example, I'd love to attach the test results to the Slack message. Something along the lines of:
TestResults:
Passed: 13
Failed: 2
Or somehow attach the .trx/.html file that gets generated after the test run, so I could easily view the results without clicking the release link.
Maybe it's possible to extract the test results using the Runs - List API method?
Any kind of help would be greatly appreciated. Thanks!
You can parse the result file (e.g. .trx) with a PowerShell script, get the test run details, and post them to a Slack channel via the REST API or a PostSlackNotification task.
For example, check the trx file in the log:
Add a PowerShell script task to parse the test run details:
#get the path of the trx file from the output folder.
$path = Get-ChildItem -Path $(Agent.TempDirectory)\TestResults -Recurse -ErrorAction SilentlyContinue -Filter *.trx | Where-Object { $_.Extension -eq '.trx' }
$appConfigFile = $path.FullName #path to test result trx file
$appConfig = New-Object XML
$appConfig.Load($appConfigFile)
$testsummary = $appConfig.DocumentElement.ResultSummary.Counters | select total, passed, failed, aborted
echo $testsummary # check testsummary
echo "##vso[task.setvariable variable=testSummary]$($testsummary)" #set the testsummary to environment variable
Get the testrun result as below:
Posted to slack channel:
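If PowerShell is not available on the agent, the same counters can be pulled from the .trx file with a few lines of Python. This is a sketch: the namespace and attribute names below match the standard TRX schema, and posting the resulting summary text to Slack would just be an HTTP POST to your incoming webhook URL.

```python
import xml.etree.ElementTree as ET

# Namespace used by .trx files written by vstest/MSTest.
TRX_NS = "{http://microsoft.com/schemas/VisualStudio/TeamTest/2010}"

def trx_summary(trx_text):
    """Pull the test counters out of a TRX document's ResultSummary element."""
    root = ET.fromstring(trx_text)
    counters = root.find("%sResultSummary/%sCounters" % (TRX_NS, TRX_NS))
    return {name: int(counters.get(name, 0))
            for name in ("total", "passed", "failed")}

# Minimal TRX-shaped sample; real files carry many more attributes.
sample = """<TestRun xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <ResultSummary outcome="Failed">
    <Counters total="15" executed="15" passed="13" failed="2" />
  </ResultSummary>
</TestRun>"""

s = trx_summary(sample)
print("TestResults:\nPassed: %d\nFailed: %d" % (s["passed"], s["failed"]))
# -> Passed: 13 / Failed: 2
```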
There is an npm package that publishes test results to either Microsoft Teams or Slack:
https://www.npmjs.com/package/test-results-reporter
You need to create an incoming webhook and a simple config file to get started.
Then add the command below to your pipeline YAML file:
- script: npx test-results-reporter publish -c config.json
I'm trying to use vstest.console.exe with the TfsPublisher logger in VSTS (cloud).
There's a URL example shown in the article for TFS onsite, but I'm trying to work out what parameters to use for my VSTS build. The example is:
/logger:TfsPublisher;Collection=http://localhost:8080/tfs/DefaultCollection;TeamProject=MyProject;BuildName=DailyBuild_20121130.1
But I just get an error saying the build cannot be found in the project, e.g.
Error: Build "1234" cannot be found under team project "MyProject".
I believe the problem is the BuildName parameter. My project and build definition names contain no spaces. I have tried various values, e.g.:
BuildName=%BUILD_BUILDID% (resolves to number, e.g. 1234)
BuildName=%BUILD_DEFINITIONNAME% (resolves to build definition name OK)
BuildName=%BUILD_BUILDURI% (resolves to url, e.g. vstfs:///Build/Build/1234)
The error message confirms that the environment variables are resolving OK, but I can't determine what I should substitute for "DailyBuild_20121130.1" in my case.
Updated: My vstest.console.exe logger parameter currently looks like
/logger:TfsPublisher;Collection=%SYSTEM_TEAMFOUNDATIONCOLLECTIONURI%;TeamProject=%SYSTEM_TEAMPROJECT%;BuildName=%BUILD_BUILDNUMBER%
I effectively got the result I wanted by using the Trx logger together with one of the "Publish Test Results" build steps:
vstest.console.exe ... /logger:Trx
The build name is generated from the "Build number format" setting under the build definition's "General" tab. You can get it from the BUILD_BUILDNUMBER variable.
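For reference, the "Build number format" field accepts tokens; a format along these lines (the DailyBuild prefix is illustrative) is what produces names like DailyBuild_20121130.1:

```
DailyBuild_$(Date:yyyyMMdd)$(Rev:.r)
```

At run time the formatted name is exposed to scripts as the BUILD_BUILDNUMBER environment variable, which is why that variable is the right value for the BuildName parameter.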
In the Robot Framework log.html, I want to log the output of a command that I am executing from a Python file. As shown in the attached screenshot of log.html, I currently cannot see the command output; it simply prints or logs PASS.
My Robot File:
*** Settings ***
Library test
*** Test cases ***
check
test
Python keyword:
import os
import re

def test():
    cmd = 'net test'
    output = os.popen(cmd).read()
    match1 = re.findall('.* (successfully).*', output)
    mat1 = ['successfully']
    if match1 == mat1:
        print("PASS::")
Can anyone guide me on this, please?
If you want the output of the command to appear in your log, there are three ways to do it: using the print statement, using the logging API, or using the built in log keyword. These methods are all documented in the robot framework users guide.
Of the three, the logging API is arguably the best choice.
Using print statements
You're already using this method. This is documented in the user guide, in a section named Logging information:
... methods can also send messages to log
files simply by writing to the standard output stream (stdout) or to
the standard error stream (stderr) ...
Example:
import os
import re

def test():
    cmd = 'net test'
    output = os.popen(cmd).read()
    match1 = re.findall('.* (successfully).*', output)
    mat1 = ['successfully']
    if match1 == mat1:
        print("output: " + output)
Using the logging API
There is a public API for logging, also documented in the user guide
in a section named Public API for logging:
Robot Framework has a Python based logging API for writing messages to
the log file and to the console. Test libraries can use this API like
logger.info('My message') instead of logging through the standard
output like print 'INFO My message'. In addition to a programmatic
interface being a lot cleaner to use, this API has a benefit that the
log messages have accurate timestamps.
Example:
from robot.api import logger
def test():
...
logger.info("output: " + output)
Using the built-in Log keyword
Finally, you can also use the built-in log keyword. Using the built in keywords is documented in the user guide in a section titled Using BuiltIn Library.
Test libraries implemented with Python can use Robot Framework's
internal modules, for example, to get information about the executed
tests and the settings that are used. This powerful mechanism to
communicate with the framework should be used with care, though,
because all Robot Framework's APIs are not meant to be used by
externally and they might change radically between different framework
versions.
Example:
from robot.libraries.BuiltIn import BuiltIn
...
def test():
...
BuiltIn().log("this is a debug message", "DEBUG")
I'm supposed to run some JBehave (automated) tests in Bamboo. Once the tests run, I'll generate JUnit-compatible XML files so that Bamboo can understand them. All the JBehave tests are run as part of a script, because I need to run them on a separate display (remember, these are automated browser tests). An example script is as follows:
export DISPLAY=:0 && xvfb-run --server-args="-screen 0, 1024x768x24"
mvn clean integration-test -DskipTests -P integration-test -Dtest=*
I also have a JUnit parser task which points to the generated JUnit-compatible XML files. So once the Bamboo build runs, even if all the tests pass, I get a red build with the message "No failed tests found, a possible compilation error occurred."
Can someone please help me in this regard?
Your build script might be producing successful test reports, but one (or both, possibly) of your tasks is failing. That means that the failure is probably* occurring after your tests complete. Check your build logs for errors. You might also try logging in to your Bamboo server (as the bamboo user) and running the commands by hand.
I've seen this message in the past when our test task was crashing halfway through the test run, resulting in one malformed report that Bamboo ignored and a bunch of successful reports.
*Check the build log to make sure that your tests are indeed running. If mvn clean doesn't clean out the test report directory, Bamboo might just be parsing stale test reports.
EDIT: (in response to Kishore's links)
It looks like your task to kill Xvfb is what is causing the build to fail.
18-Jul-2012 09:50:18 Starting task 'Kill Xvfb' of type 'com.atlassian.bamboo.plugins.scripttask:task.builder.script'
18-Jul-2012 09:50:18
Beginning to execute external process for build 'Functional Tests - Application Release Test - Default Job'
... running command line:
/bin/sh
/tmp/FUNC-APPTEST-JOB1-91-ScriptBuildTask-4153769009554485085.sh
... in: /opt/bamboo-home/xml-data/build-dir/FUNC-APPTEST-JOB1
... using extra environment variables:
<..snip (no meaningful output)..>
18-Jul-2012 09:50:18 Failing task since return code was 1 while expected 0
18-Jul-2012 09:50:18 Finished task 'Kill Xvfb'
What does your "Kill Xvfb" script do? Are you trying something like pkill -f "[x]vfb"? pkill -f silently returns non-zero if it can't match the expression to any processes.
My solution was to make a 'script' task:
#!/bin/bash
/usr/local/bin/phpcs --report=checkstyle --report-file=build/logs/checkstyle.xml --standard=PSR2 ./lib | exit 0
Which always exits with status 0.
This is because PHP CodeSniffer returns exit status 1 when even a single coding violation (warning or error) is found, which causes the build to fail.
Turns out to be a simple fix.
General Bamboo behavior is to scan the entire log for any failure codes (exit code 1). For this specific configuration I had some six scripts, one of which killed Xvfb (the frame buffer). For some reason the server was not able to kill Xvfb, and that task was returning a failure code. Because of this, though all the tests passed, Bamboo picked up the error code from the earlier task and the build was failing.
The current fix is to remove the task which kills Xvfb, and the build went green! \o/