Azure Pipeline Error: vstest.console process failed to connect to testhost process - .net-core

I am converting my classic Azure pipeline to a YAML pipeline. When I trigger the build, it fails on the unit test step with the error below:
[error]vstest.console process failed to connect to testhost process after 90 seconds. This may occur due to machine slowness, please set environment variable VSTEST_CONNECTION_TIMEOUT to increase timeout.
I could not find a way to set the VSTEST_CONNECTION_TIMEOUT value anywhere. Could you please help me with this?
Here is the sample .yml I am using:
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    testFiltercriteria: 'TestCategory=Unit'
    runSettingsFile: XYZ.Tests/codecoverage.runsettings
    codeCoverageEnabled: true
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    diagnosticsEnabled: true

I would recommend using the DotNetCoreCLI task instead. It is shorter, clearer and more straightforward (it has the "same" effect as running dotnet test in your console):
- task: DotNetCoreCLI@2
  displayName: 'Run tests'
  inputs:
    command: 'test'
Even the Microsoft documentation uses the DotNetCoreCLI task.
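If you want to keep the VSTest task, note that pipeline variables are exposed to task processes as environment variables, so the timeout named in the error message can be raised with a variables entry. A hedged sketch (the 600-second value is an arbitrary example):

```yaml
variables:
  # vstest.console reads this environment variable; the value is in seconds
  VSTEST_CONNECTION_TIMEOUT: '600'
```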

If the VSTest task runs successfully in your classic pipeline, it should work in the YAML pipeline too. Check the agent pool selection and the task's settings to make sure they are the same in both the YAML and classic pipelines.
1. Your unit tests seem to run on VS2017 in the YAML pipeline. You can try running the pipeline on a windows-latest agent to run the tests on VS2019.
If your pipeline has to run on a specific agent, you can use the VisualStudioTestPlatformInstaller task to download the latest version, then set vsTestVersion: toolsInstaller for the VSTest task. See below:
- task: VisualStudioTestPlatformInstaller@1
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    ...
    ...
    vsTestVersion: toolsInstaller
2. You can also check out the solution in this thread, which mentions deleting the entire solution folder and re-cloning the project. If you are running your pipeline on a self-hosted agent, you can try using checkout in the YAML pipeline to clean the source folder before cloning your repo. See below:
steps:
- checkout: self
  clean: true
You can also try adding the lines below to your codecoverage.runsettings file under the <CodeCoverage> element to exclude the Microsoft assemblies, as mentioned in the thread.
<ModulePath>.*microsoft\.codeanalysis\.csharp\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.csharp\.workspaces\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.workspaces\.dll$</ModulePath>
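For placement, those exclusions belong inside a <ModulePaths><Exclude> list. A hedged sketch of the surrounding codecoverage.runsettings structure (showing one of the exclusions above):

```xml
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <ModulePath>.*microsoft\.codeanalysis\.dll$</ModulePath>
                <!-- the remaining ModulePath exclusions go here -->
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```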
3. You can also try updating Microsoft.NET.Test.Sdk to the latest version in your test projects.


Dotnet 6 Trigger Webjob doesn't run directly after deployment

I have a web app hosted in Azure, and I created a triggered WebJob for it. I can deploy it successfully, but after each deployment the status is 'Ready' and it does not start working until I run the WebJob manually from the App Service in Azure.
There is another small problem that would be nice to fix: the schedule is there, and after I manually run the timer job it runs at the correct interval, but the schedule is not shown in the WebJobs section of the App Service in Azure.
Any ideas?
The goal is that after deployment the triggered timer job begins its work without my having to run it manually from Azure.
Check the steps below to make the triggered timer job run automatically.
Create a Console Application in Visual Studio.
Build and run the application locally.
Zip the .exe file in the application's bin/net6.0 folder.
In the Azure Portal, create an App Service with the .NET 6 runtime stack.
In App Service => WebJobs, add a new WebJob of type Triggered - Scheduled.
With the CRON expression I used, 0 */2 * * * *, the WebJob starts after 2 minutes and then runs every 2 minutes.
Check the MSDoc for CRON expressions.
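WebJob schedules use six-field NCRONTAB expressions ({second} {minute} {hour} {day} {month} {day-of-week}). A few illustrative examples:

```text
0 */2 * * * *    every 2 minutes
0 30 9 * * *     every day at 09:30
0 0 1 * * 1-5    at 01:00 on weekdays only
```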
(Screenshots: the initial WebJob status, and the status after 2 minutes.)
We can check the WebJob status under Logs:
Select the WebJob and click on Logs.
Click on the available job name.
You can see that the WebJob is triggered and runs every 2 minutes.
For testing, I scheduled the job to run within a few minutes.
Check your CRON expression once; try a shorter interval and cross-check.
I found the problem and fixed it.
But what was the problem? I wanted to add a triggered WebJob to my solution and then deploy it with a YAML file to an App Service in Azure.
Under this link I found a way to write a triggered WebJob:
https://thechimpbrain.com/time-triggered-azure-webjob/
In this link the author creates a .NET Core console application; in the program file they create a host builder, that host runs constantly, and a function marked with TimerTrigger is called at the configured times.
But this is not what I wanted, because if you have an App Service, there is already a host running continuously; you only need to define the trigger time.
That means you don't need anything from this link. All you do is create a console app and write your code. You only need to add a settings.job file to your project and define your schedule there like this:
{
"schedule": "0 0 1 * * *"
}
In this example it will run every day at 01:00 (the six fields are second, minute, hour, day, month, day-of-week).
Then you deploy your project to the correct place, which in my case is App_Data/jobs/triggered/{YourJobName}.
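Given that publish path, the zip handed to the deployment task would contain a layout roughly like this (WebjobMontly is the author's job name; the file names are illustrative):

```text
Webjobs.zip
└── App_Data/
    └── jobs/
        └── triggered/
            └── WebjobMontly/
                ├── YourWebJob.exe
                ├── YourWebJob.dll
                └── settings.job
```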
In your YAML you will have two steps, CI and CD.
In your CI:
- task: DotNetCoreCLI@2
  displayName: 'Build Webjob Montly'
  inputs:
    command: 'build'
    projects: 'xxx/xxx/xxx.csproj'
    arguments: '--configuration $(BuildConfiguration)'
- task: DotNetCoreCLI@2
  displayName: 'publish Webjob Montly'
  inputs:
    command: 'publish'
    publishWebProjects: false
    projects: 'xxx/xxx/xxx.csproj'
    arguments: '--output $(Build.BinariesDirectory)/publish_Webjob/App_Data/jobs/triggered/WebjobMontly'
    zipAfterPublish: false
    modifyOutputPath: false
- task: ArchiveFiles@2
  displayName: 'Zip Webjobs'
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)/publish_Webjob'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/Webjobs.zip'
    replaceExistingArchive: true
- task: PublishPipelineArtifact@1
  displayName: 'Publish drop Artifact'
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'
    publishLocation: 'pipeline'
and in your CD:
- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy Webjobs'
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'xxx'
    appType: 'webApp'
    WebAppName: 'xxx'
    deployToSlotOrASE: true
    ResourceGroupName: 'xxx'
    SlotName: 'Test'
    packageForLinux: '$(pipeline.workspace)/drop/Webjobs.zip'
    JSONFiles: '**/appsettings.json'
and that's it.

Should I put my e2e tests before or after my WebAppDeployment in my pipeline?

I hope someone can help me.
I am developing an API in .NET; I am using Azure DevOps and YAML pipelines.
I have already written e2e tests for the API, where I basically make real calls to the API I am developing, in order to test a real user flow through the application.
My question is the following. Should I:
1. Run the e2e test task before the deployment to my web app, which lets me know there is a problem before it is deployed to the resource, but with the drawback that I would not be testing the changes of the current commit (I would be testing the previous one, since I have not yet deployed the resource), or
2. Run the e2e test task after the deployment to my web app, which lets the tests run against the changes of my commit as reflected in the resource, so I know whether what I did caused a problem, but with the drawback that if there was a problem, the deployed resource would already be contaminating my web app?
The YAML I'm working on is:
# ASP.NET
# Build and test ASP.NET projects.
# Add steps that publish symbols, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/apps/aspnet/build-aspnet-4
trigger:
- development
pool:
  vmImage: 'windows-latest'
steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/ChatbotService/*.csproj'
    zipAfterPublish: true
    modifyOutputPath: true
- task: DotNetCoreCLI@2
  displayName: 'dotnet MockUnitTest'
  inputs:
    command: test
    projects: '**/*Tests/MockUnitTest/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
- task: DotNetCoreCLI@2
  displayName: 'dotnet E2ETest'
  inputs:
    command: test
    projects: '**/*Tests/E2ETest/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
- task: AzureRmWebAppDeployment@4
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'bla blab bla'
    appType: 'webApp'
    WebAppName: 'webapp-chatbotservice-dev'
    packageForLinux: '$(System.DefaultWorkingDirectory)/ChatbotService/**/*.zip'
    AppSettings: '-ASPNETCORE_ENVIRONMENT Development'
You should deploy first and then run your e2e tests, but not against production. Against production you should run smoke tests, which are a kind of e2e test that exercises crucial parts of your app without changing its state (and of course it makes sense to run them after deployment).
So at a high level it could look like this:
- build stage
- deploy to test env stage
- run e2e tests
- deploy to prod env stage
- run smoke tests
There is always a risk if you run tests against a non-updated environment, as your tests may verify parts of the app that are not deployed yet.
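That high-level flow could be sketched as YAML stages; this is only a skeleton, and the stage names and job contents are illustrative:

```yaml
stages:
- stage: Build
  jobs: []          # build, test, and publish the package as an artifact
- stage: DeployTest
  dependsOn: Build
  jobs: []          # AzureRmWebAppDeployment to the test web app
- stage: E2ETests
  dependsOn: DeployTest
  jobs: []          # dotnet test against the test environment's URL
- stage: DeployProd
  dependsOn: E2ETests
  jobs: []          # deploy the same package to production
- stage: SmokeTests
  dependsOn: DeployProd
  jobs: []          # read-only checks against production
```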

System BadImageFormatException Format of the executable (.exe) or library (.dll) is invalid

The CI pipeline takes about 50 minutes to complete, and most of that time is consumed by tests. We have a good number of unit tests and data-driven tests, so we decided to run the tests in parallel, based on the approach in this doc:
Run Tests In Parallel In Build Pipelines
The idea is to split the pipeline into 3 jobs:
Build job: builds the binaries and publishes them to artifacts with the name pre-drop.
Test job: downloads the pre-drop artifact, extracts the files, and runs the tests in parallel using the VSTest@2 task.
Publish job: publishes the artifacts to drop (for the release pipeline).
I'm not sure whether I managed to translate my idea into the .yml.
Test Job
- job: 'TestJob'
  pool:
    vmImage: windows-latest
  strategy:
    parallel: 2
  dependsOn: 'BuildJob'
  steps:
  - task: DownloadBuildArtifacts@0
    inputs:
      buildType: 'current'
      downloadType: 'single'
      artifactName: 'predrop'
      downloadPath: '$(System.ArtifactsDirectory)'
  - task: ExtractFiles@1
    inputs:
      archiveFilePatterns: '$(System.ArtifactsDirectory)/predrop/predrop.zip'
      destinationFolder: '$(System.ArtifactsDirectory)/predrop/Extpredrop'
  - task: VSTest@2
    inputs:
      testSelector: 'testAssemblies'
      testAssemblyVer2: |
        **\*tests.dll
        !**\*TestAdapter.dll
        !**\obj\**
      searchFolder: '$(System.ArtifactsDirectory)'
      vstestLocationMethod: 'location'
      vstestLocation: 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\IDE\Extensions\TestPlatform\'
      otherConsoleOptions: '/platform:x64 /Framework:.NETCoreApp,Version=v3.1'
The issue is that the VSTest task recognizes and runs some tests but errors out on others with the following error:
System.BadImageFormatException : Could not load file or assembly 'Microsoft.Extensions.Logging.Abstractions, Version=3.1.3.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
Format of the executable (.exe) or library (.dll) is invalid.
The artifact produced by the first job does include Microsoft.Extensions.Logging.Abstractions.dll.
The documentation for the BadImageFormatException class says this exception is thrown in the scenario below:
A DLL or executable is loaded as a 64-bit assembly, but it contains 32-bit features or resources. For example, it relies on COM interop or calls methods in a 32-bit dynamic link library.
To address this exception, set the project's Platform target property to x86 (instead of x64 or AnyCPU) and recompile.
So you can try configuring the VSBuild task to rebuild the project as x86 or x64. Check out this similar error in this thread.
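A hedged sketch of pinning the platform in the build job; whether x86 or x64 is right depends on which architecture the failing assemblies and the test run target (the VSTest snippet in the question passes /platform:x64):

```yaml
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
    # Match this to the bitness your test run expects, e.g. 'x64' or 'x86'
    platform: 'x64'
    configuration: 'Release'
```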
If changing the platform does not work, you can try the workaround of adding a VSBuild task to build your project in the TestJob job as well. That way there is no need to download and extract the artifacts in TestJob. For example:
- job: 'TestJob'
  pool:
    vmImage: windows-latest
  strategy:
    parallel: 2
  dependsOn: 'BuildJob'
  steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      platform: "any cpu"
      configuration: 'Release'
  - task: VSTest@2
    inputs:
      ...
You can also check out this thread.

Azure Devops + Coverlet + SonarQube shows 0%

Good morning,
Sorry to bother you; I have a problem and no leads.
I have a pipeline on Azure DevOps where I use Coverlet to generate a code coverage report when I run "dotnet test".
The report is indeed generated.
In the "Prepare analysis on SonarQube" step, I set the variable sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/coverage.opencover.xml".
And yet in SonarQube there is 0% code coverage... I don't know what to do, and I have no leads.
Thanks
I cannot reproduce the above issue, and it is hard to troubleshoot since you did not share the configuration of your dotnet test task or your SonarQube prepare task.
I created a test project and the coverage was successfully published to my SonarQube server. You can refer to my steps below.
1. Create a SonarQube server and configure the projectName and projectKey (I use an Azure SonarQube container instance; check here for details).
2. Configure a SonarQube service connection in Azure DevOps.
3. Create a build pipeline. I use a YAML pipeline.
In the Prepare Analysis Configuration task, I choose Use standalone scanner with Mode set to Manually provide configuration, and I set the variable sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/coverage.opencover.xml".
The screenshot below shows the task's settings in the classic UI view.
In my dotnet test task I set the arguments as below, specifically writing the coverage result to the $(Agent.TempDirectory)/ folder.
arguments: '--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutput=$(Agent.TempDirectory)/ /p:CoverletOutputFormat=opencover'
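Note that the /p:CollectCoverage-style switches come from Coverlet's MSBuild integration, so the test project needs the coverlet.msbuild package referenced. A hedged csproj fragment (the version shown is illustrative):

```xml
<ItemGroup>
  <!-- Enables the /p:CollectCoverage, /p:CoverletOutput, /p:CoverletOutputFormat switches -->
  <PackageReference Include="coverlet.msbuild" Version="2.9.0">
    <PrivateAssets>all</PrivateAssets>
  </PackageReference>
</ItemGroup>
```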
Below is the full content of my azure-pipelines.yml file.
trigger: none
jobs:
- job: 'Tests'
  pool:
    vmImage: windows-latest
  variables:
    buildConfiguration: 'Release'
  continueOnError: true
  steps:
  - task: SonarQubePrepare@4
    displayName: 'Prepare analysis on SonarQube'
    inputs:
      SonarQube: sonarlevi
      scannerMode: CLI
      configMode: manual
      cliProjectKey: myproject2
      cliProjectName: myproject2
      extraProperties: |
        sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/coverage.opencover.xml"
  - task: DotNetCoreCLI@2
    inputs:
      command: restore
      projects: '**\*.csproj'
  - task: DotNetCoreCLI@2
    displayName: Install ReportGenerator tool
    inputs:
      command: custom
      custom: tool
      arguments: install --tool-path . dotnet-reportgenerator-globaltool
  - task: DotNetCoreCLI@2
    displayName: Test .NET
    inputs:
      command: test
      projects: '**\*Test*.csproj'
      publishTestResults: false
      arguments: '--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutput=$(Agent.TempDirectory)/ /p:CoverletOutputFormat=opencover'
    condition: succeededOrFailed()
  - task: SonarQubeAnalyze@4
    displayName: 'Run Code Analysis'
  - task: SonarQubePublish@4
    displayName: 'Publish Quality Gate Result'
I did a lot of things to finally get the coverage working, but I think the problem was the ProjectGUID missing in each .csproj of my solution, which made the projects be ignored by the SonarQube scanner.
I also upgraded from SonarQube 6.2 to 8.1 at the same time, which may have solved the problem.
My steps otherwise remained unchanged.

How to make azure devops build fail when R linting issues occur

I am using the lintr library in R to find linting issues in the code. I put them into an XML format like this:
<lintsuites>
  <lintissue filename="/home/.../blah.R" line_number="36" column_number="1" type="style" message="Trailing blank lines are superfluous."/>
  <lintissue filename="/home/.../blahblah.R" line_number="1" column_number="8" type="style" message="Only use double-quotes."/>
</lintsuites>
Now I would like to fail the Azure DevOps build when issues like this occur.
I was able to get my tests into JUnit format like this:
<testsuite name="MB Unit Tests" timestamp="2020-01-22 22:34:07" hostname="0000" tests="29" skipped="0" failures="0" errors="0" time="0.05">
  <testcase time="0.01" classname="1_Unit_Tests" name="1_calculates_correctly"/>
  <testcase time="0.01" classname="1_Unit_Tests" name="2_absorbed_correctly"/>
  ...
</testsuite>
And when I do this step in the Azure pipeline, my build fails if any tests in the test suite fail:
- task: PublishTestResults@2
  displayName: 'Publish Test Results'
  inputs:
    testResultsFiles: '**/*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)/fe'
    mergeTestResults: true
    failTaskOnFailedTests: true
I would like something similar for failing the build when there are linting issues. I would also like users to see what those linting issues are in the build output.
Thanks
It is not possible to achieve a similar result for the lintr XML with PublishTestResults@2.
A workaround is to use a PowerShell task to check the content of your lintr XML file and fail the pipeline from that task when it contains lint issues.
The PowerShell task below parses lintr.xml, writes each lint issue to the task log, and exits 1 to fail the task when any issues are present:
- powershell: |
    [xml]$XmlDocument = Get-Content -Path "$(system.defaultworkingdirectory)/lintr.xml"
    # Collect every <lintissue> element from the lintr report
    $issues = $XmlDocument.SelectNodes('//lintissue')
    if ($issues.Count -gt 0) {
      foreach ($issue in $issues) {
        # Show file, line, and message in the task log
        Write-Host "$($issue.filename):$($issue.line_number) [$($issue.type)] $($issue.message)"
      }
      exit 1
    }
  displayName: lintr result
You can also use the statement below in a PowerShell task to upload the lintr XML file to the build summary page, where you can download it:
echo "##vso[task.uploadsummary]$(system.defaultworkingdirectory)/lintr.xml"
You can check here for more logging commands.
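Among those logging commands, task.logissue can surface each lint issue directly in the pipeline's issues pane. A hedged sketch as a pipeline task (it assumes the same lintr.xml path and <lintissue> attributes shown above):

```yaml
- powershell: |
    [xml]$lint = Get-Content -Path "$(system.defaultworkingdirectory)/lintr.xml"
    foreach ($issue in $lint.SelectNodes('//lintissue')) {
      # task.logissue renders each lint issue as a warning in the pipeline UI;
      # use type=error instead if the issue should also fail the task summary
      Write-Host "##vso[task.logissue type=warning;sourcepath=$($issue.filename);linenumber=$($issue.line_number)]$($issue.message)"
    }
  displayName: 'Report lint issues'
```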
Update:
A workaround to show the lintr results in a nicer way is to create a custom extension that displays HTML results in the Azure DevOps pipeline.
You can try creating a custom extension and producing HTML lint results. Refer to the answer in this thread for an example custom extension that displays HTML.
Other developers have already submitted requests to Microsoft for this feature. Please vote one up here or create a new one.
