.NET 6 triggered WebJob doesn't run directly after deployment - .net-core

I have a web app hosted in Azure, and I created a triggered WebJob for it. I can deploy it successfully, but after each deployment the status is 'Ready' and it does not start working until I run the WebJob manually from the App Service in Azure.
There is another small problem that would be nice to fix: the schedule is there, and after I manually run the timer job it runs at the correct interval, but in the WebJobs section of the App Service in Azure the schedule is not shown.
Any idea?
The goal is that after deployment the triggered timer job begins its work without having to run it manually from Azure.

Check the steps below to make the triggered timer job run automatically.
Create a Console Application in Visual Studio.
Build and run the application locally.
Zip the .exe (together with its dependencies) from the application's bin/net6.0 output folder.
In the Azure Portal, create an App Service with runtime stack .NET 6.
In App Service => WebJobs, add a new WebJob of type Triggered - Scheduled.
With the CRON expression I used, 0 */2 * * * * (the six fields are second, minute, hour, day, month, day-of-week), the WebJob starts after 2 minutes and then runs every 2 minutes.
Check the MSDoc for CRON expressions.
Initial WebJob
After 2 min
We can check the WebJob status under Logs.
Select the WebJob and click on Logs.
Click on the available job name.
You can see that the WebJob is triggered and runs every 2 minutes.
For testing, I scheduled the job to run within a few minutes.
Check your CRON expression once. Try a shorter interval and cross-check.
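Besides configuring the schedule in the portal as described above, the same six-field CRON schedule can ship with the WebJob itself in a settings.job file placed next to the executable; Kudu reads this file and applies the schedule on deployment. A minimal sketch (the two-minute schedule matches the example above):

```json
{
  "schedule": "0 */2 * * * *"
}
```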

I found the problem and fixed it.
But what was the problem? I wanted to add a WebJob of type triggered to my solution and then deploy it with a YAML file into an App Service in Azure.
Under this link I found a way to write a triggered WebJob:
https://thechimpbrain.com/time-triggered-azure-webjob/
In this link the author creates a .NET Core console application and, in the Program file, creates a host builder; this host runs constantly, and a function marked with a TimerTrigger is called at the given interval.
But this is not what I want, because if you have an App Service there is already a host running continuously; you should only define the trigger time.
That means you don't need anything from this link. All you do is create a console app and write your code. You only need to add a settings.job file to your project, where you define your schedule like this:
{
  "schedule": "0 0 1 * * *"
}
In this example it runs every day at 1:00 AM (the CRON fields are second, minute, hour, day, month, day-of-week).
Then you deploy your project to the correct place, which in my case is App_Data/jobs/triggered/{YourJobName}.
In your YAML you will have two steps, CI and CD.
In your CI:
- task: DotNetCoreCLI@2
  displayName: 'Build Webjob Montly'
  inputs:
    command: 'build'
    projects: 'xxx/xxx/xxx.csproj'
    arguments: '--configuration $(BuildConfiguration)'
- task: DotNetCoreCLI@2
  displayName: 'Publish Webjob Montly'
  inputs:
    command: 'publish'
    publishWebProjects: false
    projects: 'xxx/xxx/xxx.csproj'
    arguments: '--output $(Build.BinariesDirectory)/publish_Webjob/App_Data/jobs/triggered/WebjobMontly'
    zipAfterPublish: false
    modifyOutputPath: false
- task: ArchiveFiles@2
  displayName: 'Zip Webjobs'
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)/publish_Webjob'
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/Webjobs.zip'
    replaceExistingArchive: true
- task: PublishPipelineArtifact@1
  displayName: 'Publish drop Artifact'
  inputs:
    targetPath: '$(Build.ArtifactStagingDirectory)'
    artifact: 'drop'
    publishLocation: 'pipeline'
and in your CD:
- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy Webjobs'
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'xxx'
    appType: 'webApp'
    WebAppName: 'xxx'
    deployToSlotOrASE: true
    ResourceGroupName: 'xxx'
    SlotName: 'Test'
    packageForLinux: '$(pipeline.workspace)/drop/Webjobs.zip'
    JSONFiles: '**/appsettings.json'
and that's it.

Related

Devops Pipeline for multiple Azure Functions

I have a solution containing 2 Azure Functions.
In my build pipeline I create 2 different zips, and they "seem" fine to me.
Then I use those zips in my release pipelines. Everything seems fine (no errors), but when I go to the Azure portal, I don't see any functions available.
Meanwhile, App Insights only gives me this: Loading functions metadata; 0 functions loaded.
What could be wrong?
Build pipeline:
name: Azure Pipelines
steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    projects: '**/*.csproj'
- task: DotNetCoreCLI@2
  displayName: 'Func1 Publish Zip'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '$(System.DefaultWorkingDirectory)/SolutionName/Function1/Function1.csproj'
    arguments: '--output publish_output\Function1 --configuration Release'
    modifyOutputPath: false
- task: PublishPipelineArtifact@1
  displayName: 'Func1 Publish Artifact'
  inputs:
    targetPath: 'publish_output\Function1\'
    artifact: Func1Artifact
- task: DotNetCoreCLI@2
  displayName: 'Func2 Publish Zip'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '$(System.DefaultWorkingDirectory)/SolutionName/Function2/Function2.csproj'
    arguments: '--output publish_output\Function2 --configuration Release'
    modifyOutputPath: false
- task: PublishPipelineArtifact@1
  displayName: 'Func2 Publish Artifact'
  inputs:
    targetPath: 'publish_output\Func2\'
    artifact: Func2Artifact
One of the release pipelines:
steps:
- task: AzureFunctionApp@1
  displayName: 'Deploy Function 1'
  inputs:
    azureSubscription: '$(Parameters.AzureSubscription)'
    appType: '$(Parameters.AppType)'
    appName: '$(Parameters.AppName)'
    package: '$(System.DefaultWorkingDirectory)/xxx Build/Func1Artifact'
This is a continuation of this post:
Azure Devops Release pipeline(s) for multiple Azure Functions
EDIT:
Log for the deploy function:
2021-06-29T08:53:49.6084003Z ##[section]Starting: Deploy Func1 Function
2021-06-29T08:53:49.6208602Z ====
2021-06-29T08:53:49.6209159Z Task         : Azure Functions
2021-06-29T08:53:49.6209483Z Description  : Update a function app with .NET, Python, JavaScript, PowerShell, Java based web applications
2021-06-29T08:53:49.6209969Z Version      : 1.187.0
2021-06-29T08:53:49.6210422Z Author       : Microsoft Corporation
2021-06-29T08:53:49.6211044Z Help         : https://aka.ms/azurefunctiontroubleshooting
2021-06-29T08:53:49.6211592Z ====
2021-06-29T08:53:50.6211817Z Got service connection details for Azure App Service:'func1-dev'
2021-06-29T08:53:52.6469625Z Deleting App Service Application settings. Data: ["WEBSITE_RUN_FROM_ZIP","WEBSITE_RUN_FROM_PACKAGE"]
2021-06-29T08:54:17.4575237Z Updated App Service Application settings and Kudu Application settings.
2021-06-29T08:54:17.5343524Z Package deployment using ZIP Deploy initiated.
2021-06-29T08:54:38.8090909Z Deploy logs can be viewed at https://func1-dev.scm.azurewebsites.net/api/deployments/xxxx/log
2021-06-29T08:54:38.8091932Z Successfully deployed web package to App Service.
2021-06-29T08:55:00.5393032Z Successfully added release annotation to the Application Insight : func1-dev
2021-06-29T08:55:10.1885833Z Successfully updated deployment History at https://func1-dev.scm.azurewebsites.net/api/deployments/xxx
2021-06-29T08:55:22.5301509Z App Service Application URL: http://func1-dev.azurewebsites.net
2021-06-29T08:56:06.1898662Z ##[section]Finishing: Deploy Func1 Function

Should I put my e2e tests before or after my WebAppDeployment in my pipeline?

I hope someone can help me.
I am developing an API in .NET, using Azure DevOps and YAML pipelines.
I have already written e2e tests for the API, where I basically make real calls to the API I am developing, in order to test a real user flow through the application.
My question is the following: should I
1. Run the e2e test task before the deployment to my web app? This lets me know there is a problem before it is deployed to the resource, but has the drawback that I would not be testing the changes of the present commit (I would be testing against the previous one, because the resource has not been deployed yet).
or
2. Run the e2e test task after the deployment to my web app? This lets the tests run against the changes of my commit as reflected in the resource, so I would know whether what I did caused a problem, but has the drawback that if there was a problem, the deployed resource would already be contaminating my web app.
The YAML I'm working on is:
# ASP.NET
# Build and test ASP.NET projects.
# Add steps that publish symbols, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/apps/aspnet/build-aspnet-4
trigger:
- development
pool:
  vmImage: 'windows-latest'
steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/ChatbotService/*.csproj'
    zipAfterPublish: true
    modifyOutputPath: true
- task: DotNetCoreCLI@2
  displayName: 'dotnet MockUnitTest'
  inputs:
    command: test
    projects: '**/*Tests/MockUnitTest/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
- task: DotNetCoreCLI@2
  displayName: 'dotnet E2ETest'
  inputs:
    command: test
    projects: '**/*Tests/E2ETest/*.csproj'
    arguments: '--configuration $(buildConfiguration) --collect "Code coverage"'
- task: AzureRmWebAppDeployment@4
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'bla blab bla'
    appType: 'webApp'
    WebAppName: 'webapp-chatbotservice-dev'
    packageForLinux: '$(System.DefaultWorkingDirectory)/ChatbotService/**/*.zip'
    AppSettings: '-ASPNETCORE_ENVIRONMENT Development'
You should deploy first and then run your e2e tests, but not on production. On production you should run smoke tests, which are a kind of e2e test that exercises crucial parts of your app without changing the app's state. (Of course it makes sense to run them after deployment.)
So at a high level it could look like this:
- build stage
- deploy to test env stage
- run e2e tests
- deploy to prod env stage
- run smoke tests
There is always a risk if you run tests against an environment that has not been updated, as your tests may verify parts of the app that have not been deployed yet.
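The high-level flow above can be sketched as a multi-stage Azure Pipelines YAML skeleton. This is a minimal sketch, not a complete pipeline; the stage, job, and environment names are illustrative placeholders, and each empty steps list stands in for the tasks shown elsewhere on this page:

```yaml
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps: []   # build, test, and publish the artifact
- stage: DeployTest
  dependsOn: Build
  jobs:
  - deployment: DeployToTest
    environment: 'test'        # illustrative environment name
    strategy:
      runOnce:
        deploy:
          steps: []            # e.g. AzureRmWebAppDeployment to the test slot
- stage: E2E
  dependsOn: DeployTest
  jobs:
  - job: E2ETests
    steps: []                  # dotnet test against the test environment
- stage: DeployProd
  dependsOn: E2E
  jobs:
  - deployment: DeployToProd
    environment: 'prod'        # illustrative environment name
    strategy:
      runOnce:
        deploy:
          steps: []            # deploy to production
- stage: Smoke
  dependsOn: DeployProd
  jobs:
  - job: SmokeTests
    steps: []                  # non-destructive checks against production
```

Each stage only runs if the previous one succeeds (via dependsOn), so a failing e2e stage blocks the production deployment.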

Azure Pipeline Error: vstest.console process failed to connect to testhost process

I am converting my Azure pipeline to a YAML pipeline. When I trigger the build, it fails on the unit test step with the error below:
[error]vstest.console process failed to connect to testhost process after 90 seconds. This may occur due to machine slowness, please set environment variable VSTEST_CONNECTION_TIMEOUT to increase timeout.
I could not find a way to add the VSTEST_CONNECTION_TIMEOUT value anywhere. Could you please help me with this?
Here is the sample .yml I am using
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    testFiltercriteria: 'TestCategory=Unit'
    runSettingsFile: XYZ.Tests/codecoverage.runsettings
    codeCoverageEnabled: true
    platform: '$(BuildPlatform)'
    configuration: '$(BuildConfiguration)'
    diagnosticsEnabled: true
I would recommend using the DotNetCoreCLI task instead. It is shorter, clearer, and straightforward (it has the "same" effect as executing dotnet test in your console):
- task: DotNetCoreCLI@2
  displayName: 'Run tests'
  inputs:
    command: 'test'
Even the Microsoft documentation page uses the DotNetCoreCLI task.
If the VSTest task runs successfully in your classic pipeline, it should work in the YAML pipeline too. Check the agent pool selection and the task's settings to make sure they are the same in both the YAML and classic pipelines.
1. Your unit tests seem to be running on VS2017 in the YAML pipeline. You can try running the pipeline on a windows-latest agent so the tests run on VS2019.
If your pipeline has to run on a specific agent, you can use the VisualStudioTestPlatformInstaller task to download the latest version, then set vsTestVersion: toolsInstaller for the VSTest task. See below:
- task: VisualStudioTestPlatformInstaller@1
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
    ...
    vsTestVersion: toolsInstaller
2. You can also check out the solution in this thread. As mentioned there, try deleting the entire solution folder and re-cloning the project. If you are running your pipeline on a self-hosted agent, you can use checkout in the YAML pipeline to clean the source folder before cloning your repo. See below:
steps:
- checkout: self
  clean: true
You can also try adding the entries below to your codecoverage.runsettings file under the <CodeCoverage> element, to exclude the Microsoft assemblies as mentioned in the thread.
<ModulePath>.*microsoft\.codeanalysis\.csharp\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.csharp\.workspaces\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.dll$</ModulePath>
<ModulePath>.*microsoft\.codeanalysis\.workspaces\.dll$</ModulePath>
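For context, those ModulePath entries belong inside an Exclude list nested under the code coverage data collector's configuration. A minimal sketch of the surrounding .runsettings structure (element nesting per the VSTest code-coverage configuration schema; only the first exclusion is repeated here):

```xml
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <Exclude>
                <ModulePath>.*microsoft\.codeanalysis\.csharp\.dll$</ModulePath>
                <!-- remaining ModulePath exclusions from above go here -->
              </Exclude>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```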
3. You can also try updating Microsoft.NET.Test.Sdk to the latest version in the test projects.
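As for the error message's own suggestion: Azure DevOps exposes pipeline variables to tasks as environment variables, so VSTEST_CONNECTION_TIMEOUT can be supplied as a pipeline variable. A sketch (the 180-second value is illustrative, not a recommendation):

```yaml
variables:
  VSTEST_CONNECTION_TIMEOUT: '180'   # seconds; raises the default 90s testhost connection timeout

steps:
- task: VSTest@2
  displayName: 'Test'
  inputs:
    testAssemblyVer2: '**\bin\**\Tests.dll'
```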

System BadImageFormatException Format of the executable (.exe) or library (.dll) is invalid

The CI pipeline takes about 50 minutes to complete, and most of that time is consumed by the tests. We have a good number of unit tests and data-driven tests, and have decided to run the tests in parallel, following the approach in this doc:
Run Tests In Parallel In Build Pipelines
The idea is to split the pipeline into 3 jobs:
Build job: builds the binaries and publishes them to artifacts with the name pre-drop.
Test job: downloads the pre-drop artifact, extracts the files, and runs the tests in parallel using the VSTest@2 task.
Publish job: publishes the artifacts to drop (for the release pipeline).
Not sure if I was able to get my idea into the .yml.
Test job:
- job: 'TestJob'
  pool:
    vmImage: windows-latest
  strategy:
    parallel: 2
  dependsOn: 'BuildJob'
  steps:
  - task: DownloadBuildArtifacts@0
    inputs:
      buildType: 'current'
      downloadType: 'single'
      artifactName: 'predrop'
      downloadPath: '$(System.ArtifactsDirectory)'
  - task: ExtractFiles@1
    inputs:
      archiveFilePatterns: '$(System.ArtifactsDirectory)/predrop/predrop.zip'
      destinationFolder: '$(System.ArtifactsDirectory)/predrop/Extpredrop'
  - task: VSTest@2
    inputs:
      testSelector: 'testAssemblies'
      testAssemblyVer2: |
        **\*tests.dll
        !**\*TestAdapter.dll
        !**\obj\**
      searchFolder: '$(System.ArtifactsDirectory)'
      vstestLocationMethod: 'location'
      vstestLocation: 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\IDE\Extensions\TestPlatform\'
      otherConsoleOptions: '/platform:x64 /Framework:.NETCoreApp,Version=v3.1'
The issue is that the VSTest task recognizes and runs some tests but errors out on other tests with the following error:
System.BadImageFormatException : Could not load file or assembly 'Microsoft.Extensions.Logging.Abstractions, Version=3.1.3.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. Format of the executable (.exe) or library (.dll) is invalid.
The binaries from the first job include Microsoft.Extensions.Logging.Abstractions.dll as part of the artifact.
The documentation for the BadImageFormatException class says this exception is thrown in the following scenario:
A DLL or executable is loaded as a 64-bit assembly, but it contains 32-bit features or resources. For example, it relies on COM interop or calls methods in a 32-bit dynamic-link library.
To address this exception, set the project's Platform target property to x86 (instead of x64 or AnyCPU) and recompile.
So you can try configuring the VSBuild task to rebuild the project as x86 or x64. Check out this similar error in this thread.
If changing the platform does not work, you can try a workaround: add a VSBuild task to build your project in the TestJob job as well. That way there is no need to download and extract the artifacts in TestJob. For example:
- job: 'TestJob'
  pool:
    vmImage: windows-latest
  strategy:
    parallel: 2
  dependsOn: 'BuildJob'
  steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      platform: 'any cpu'
      configuration: 'Release'
  - task: VSTest@2
    inputs:
      ...
You can also check out this thread.

Azure Devops + Coverlet + SonarQube shows 0%

Good morning,
Sorry to bother you. I have a problem and I have no leads.
I have a pipeline on Azure DevOps where I use Coverlet to generate a code coverage report when I run dotnet test.
Indeed, the report is generated correctly.
First, in the "Prepare analysis on SonarQube" step, I set the variable sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/coverage.opencover.xml".
And yet in the end my SonarQube shows 0% code coverage... I don't know what to do and have no leads...
Thanks
I cannot reproduce the above issue, and it is hard to troubleshoot since you did not share your configuration for the dotnet test task or the SonarQube prepare task.
I created a test project and the coverage was successfully published to my SonarQube server. You can refer to my steps below.
1. Create a SonarQube server and configure the projectName and projectKey (I use an Azure SonarQube container instance; check here for details).
2. Configure a SonarQube service connection in Azure DevOps.
3. Create a build pipeline. I use a YAML pipeline.
In the Prepare Analysis Configuration task, I chose Use standalone scanner with Mode set to Manually provide configuration, and I set the variable sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/coverage.opencover.xml".
The screenshot below shows the task's settings in the classic UI view.
In my dotnet test task I set the arguments as below, specifically outputting the coverage result to the $(Agent.TempDirectory)/ folder:
arguments: '--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutput=$(Agent.TempDirectory)/ /p:CoverletOutputFormat=opencover'
Below is the full content of my azure-pipelines.yml file.
trigger: none
jobs:
- job: 'Tests'
  pool:
    vmImage: windows-latest
  variables:
    buildConfiguration: 'Release'
  continueOnError: true
  steps:
  - task: SonarQubePrepare@4
    displayName: 'Prepare analysis on SonarQube'
    inputs:
      SonarQube: sonarlevi
      scannerMode: CLI
      configMode: manual
      cliProjectKey: myproject2
      cliProjectName: myproject2
      extraProperties: |
        sonar.cs.opencover.reportsPaths="$(Agent.TempDirectory)/coverage.opencover.xml"
  - task: DotNetCoreCLI@2
    inputs:
      command: restore
      projects: '**\*.csproj'
  - task: DotNetCoreCLI@2
    displayName: Install ReportGenerator tool
    inputs:
      command: custom
      custom: tool
      arguments: install --tool-path . dotnet-reportgenerator-globaltool
  - task: DotNetCoreCLI@2
    displayName: Test .NET
    inputs:
      command: test
      projects: '**\*Test*.csproj'
      publishTestResults: false
      arguments: '--configuration $(buildConfiguration) /p:CollectCoverage=true /p:CoverletOutput=$(Agent.TempDirectory)/ /p:CoverletOutputFormat=opencover'
    condition: succeededOrFailed()
  - task: SonarQubeAnalyze@4
    displayName: 'Run Code Analysis'
  - task: SonarQubePublish@4
    displayName: 'Publish Quality Gate Result'
I did a lot of things to finally get the coverage working, but I think the problem was a missing ProjectGuid in each .csproj of my solution, which caused the projects to be ignored by the SonarQube scanner.
I also upgraded from SonarQube 6.2 to 8.1 at the same time, which may have solved the problem.
My steps otherwise remained unchanged.
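For reference, a ProjectGuid is an MSBuild property that can be added to a .csproj PropertyGroup. A hedged sketch (the GUID below is a placeholder; generate a unique one, e.g. with Visual Studio's Create GUID tool or Guid.NewGuid()):

```xml
<PropertyGroup>
  <!-- Placeholder value: replace with a freshly generated GUID -->
  <ProjectGuid>{12345678-1234-1234-1234-123456789012}</ProjectGuid>
</PropertyGroup>
```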
