We are building a .NET Core application in an Azure DevOps build pipeline.
The pipeline is currently configured to fail if the code coverage drops below a fixed threshold.
We would like to configure the pipeline to fail if the coverage drops below the previous build's coverage, instead of validating against a fixed threshold.
Consider the following scenario:
The current coverage is 80%.
In the next run the coverage drops to 79%; we expect the pipeline to fail at that point.
Is there any configuration available in Azure Pipelines or .NET Core to achieve this?
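As far as I know there is no built-in "fail if below the previous build" setting; the usual workaround is a script step that fetches the previous build's coverage over the REST API and compares it with the current value. Below is a minimal PowerShell sketch, assuming the code coverage REST endpoint and response shape shown in the comments (verify the route, api-version, and field names for your organization; PREVIOUS_BUILD_ID and CurrentCoverage are illustrative inputs, not built-in variables):

```powershell
# A minimal sketch, not a built-in feature. Assumes:
#  - the previous build's id is available (here via an illustrative PREVIOUS_BUILD_ID variable)
#  - the "Code Coverage - Get Build Code Coverage" REST endpoint (_apis/test/codecoverage)
#  - OAuth token access enabled for scripts ($env:SYSTEM_ACCESSTOKEN)
param(
    [double]$CurrentCoverage   # parsed from the current run's coverage summary
)

$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
$baseUrl = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$($env:SYSTEM_TEAMPROJECT)"

$coverage = Invoke-RestMethod -Headers $headers -Uri `
    "$baseUrl/_apis/test/codecoverage?buildId=$($env:PREVIOUS_BUILD_ID)&api-version=5.0"

# Field names below are an assumption; inspect the JSON your organization returns.
$lineStats = $coverage.coverageData.coverageStats | Where-Object { $_.label -eq 'Lines' }
$previousCoverage = 100.0 * $lineStats.covered / $lineStats.total

Write-Host "Previous build: $previousCoverage% - current build: $CurrentCoverage%"
if ($CurrentCoverage -lt $previousCoverage) {
    Write-Error "Coverage dropped below the previous build's coverage."
    exit 1   # a non-zero exit code fails the pipeline step
}
```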
My organization has moved from executing automated tests within MTM to executing them within the VSTS Test Hub, via Release Definitions. We're using MSTest to run tests with Selenium / C#.
It is a requirement for us to be able to run any individual test on demand, post-build, and to use the Test Configuration to control the browser. This was working without issue when running tests through MTM.
Previously, when running tests through MTM, my MSTest TestContext's "Properties" were populated with runtime parameters such as the Test Configuration, e.g. TestContext.Properties["__Tfs_TestConfigurationName__"]. The runtime properties could also be seen in the resulting .trx file.
Now, when running tests through the VSTS Test Hub, those values are no longer set in TestContext automatically, nor do I see them in the .trx file, or in my Release deployment logs.
Are there any suggestions as to how the Test Configuration values of each test can be propagated into TestContext (or otherwise made accessible via code) so that the browser can be controlled at test run-time?
I have considered creating a .runsettings file, but am unsure how Test Configuration variables would be dynamically populated into that file.
I have considered attempting to query the Test Run / Test Results in my Selenium code to determine the Test Points / Configuration, but am unaware of how to determine the Test Run ID (I see the Test Run ID in the release logs, but don't know how to access it programmatically).
I have seen a VSTS extension called 'VSTS Test Extensions' which purports to inject the VSTS variables into a .runsettings file, but there are no details on how to configure the .runsettings file to accomplish this, and the old MTM-style parameter names do not seem to work (a sketch of the basic injection mechanism follows after this question).
Are Test Configuration variables accessible globally in VSTS somehow?
For example, could I create a PowerShell Release task to somehow access test run / test point data? Microsoft does not provide any indication of how to use Test Configurations other than for manual tests.
Any help is greatly appreciated.
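As a sketch of the .runsettings injection mechanism mentioned above (the part an extension like 'VSTS Test Extensions' automates): a PowerShell release task can rewrite a placeholder in the file before the test task runs, and MSTest V2 then surfaces TestRunParameters through TestContext.Properties. The file path, parameter name, and variable below are illustrative, and this alone does not solve the per-test-point configuration problem described in the rest of this question:

```powershell
# Hedged sketch: inject a release variable into a .runsettings template before the tests run.
# Assumes the template contains:
#   <TestRunParameters>
#     <Parameter name="Browser" value="__BROWSER__" />
#   </TestRunParameters>
# "Browser" / "__BROWSER__" are illustrative names, not built-in VSTS variables.
$runSettingsPath = "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\drop\tests.runsettings"

(Get-Content $runSettingsPath -Raw) -replace '__BROWSER__', $env:BROWSER |
    Set-Content $runSettingsPath

# In the MSTest V2 test code the value is then available as
# TestContext.Properties["Browser"].
```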
After some more review, it seems that my only option is a giant hack:
I noticed that the Release logs output [TEST_RUNID], which pointed me to the $(test.runid) variable, which I didn't know existed.
I can output the Run ID to an external file via a Powershell Release task.
I can then have my test code read that file, perform an API call to VSTS querying the Test Results for that Run ID, and then use that response to output the Test Configuration Name for each Test Case ID.
I can then match up my Test Case ID within each of my test methods with the Test Case IDs from the service call response, and set the browser according to the Test Configuration Name.
The only issue then is if the same test exists multiple times in the same run with different configurations, which can be hacked around by querying the Test Run every time a test begins and checking which configuration has already been run based on the State ("Pending", etc.) or by some other means I haven't thought of yet.
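For reference, the query that the test code (or a PowerShell task) would issue for that step looks roughly like the sketch below. The route is the VSTS "Test Results - List" endpoint; the api-version, token handling, field names, and file path should be checked against your own account:

```powershell
# Hedged sketch: map Test Case IDs to Test Configuration names for the current run.
# $runId is the value the PowerShell release task wrote out from $(test.runid).
$account = "https://myaccount.visualstudio.com/DefaultCollection"   # illustrative
$project = "MyProject"                                              # illustrative
$runId   = Get-Content "$env:SYSTEM_DEFAULTWORKINGDIRECTORY\testrunid.txt"
$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }

$results = Invoke-RestMethod -Headers $headers -Uri `
    "$account/$project/_apis/test/runs/$runId/results?api-version=5.0"

# Build a Test Case ID -> Configuration Name lookup.
# Verify the field names (testCase.id, configuration.name) against the JSON
# your VSTS version actually returns.
$configByTestCase = @{}
foreach ($result in $results.value) {
    $configByTestCase[$result.testCase.id] = $result.configuration.name
}

# Caveat from above: if the same test case appears under several configurations
# in one run, a simple map is not enough; you would need the per-test-point
# state check described above.
$configByTestCase
```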
I'm looking to automate unit tests (using MSTest) when a developer commits their code to TFS.
I'm not sure it's possible...
Secondly, I want to log the results of the unit test execution to a file, if you have a solution for that with VS.
Thanks for everything :)
You could create a CI build in TFS to build and run your unit tests, just as mentioned in the comment above. The Continuous Integration (CI) trigger will queue your build definition whenever someone checks something in.
When a build finishes, all the test results can be found on the TFS web page, and the test result file (.trx) is also available for download.
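On the "log the results to a file" part: besides downloading the .trx from the build summary, the VSTest console runner can write one directly when you run the tests yourself, e.g. from a script or a local machine. A minimal sketch (the vstest.console.exe path varies by Visual Studio version and edition, and the test assembly path is illustrative):

```powershell
# Hedged sketch: run MSTest-based tests and log the results to a .trx file.
& "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" `
    ".\bin\Release\MyApp.Tests.dll" `
    /Logger:"trx;LogFileName=TestResults.trx"
# The .trx file is written to a TestResults folder under the current directory.
```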
We have a complete Lab Management environment running Coded UI tests in nightly builds. What we are trying to achieve is to run our integration tests (regular TestMethod() with SQL connections) just before all the Coded UI tests to verify that our db scripts are executed correctly and that there are no new changes causing any problems.
So far I have found a way to execute tests remotely through .testrunconfig. The problem with that approach is that it's not possible to choose a test controller connected to a team project, so I guess that would only be useful for running tests on physical machines outside of Lab Management?
One option seems to be to create a Test Case for each integration test, which would run them together with the UI tests, but it feels like too much maintenance to manage hundreds of test cases just to run the integration tests. It would also be better to completely separate the test runs for the different kinds of tests.
Is there any easy way to achieve this that I have totally missed? Or do I have to modify the lab build template to deploy and run the tests?
"I guess that would only be useful for running tests on physical machines outside of Lab Management?"
If you run your tests remotely through .testrunconfig, you have to connect the Test Agent to another Test Controller which is NOT connected to the team project.
Unfortunately, to my knowledge, that is impossible for environments running under Lab Management.
What about this approach:
Create an Ordered Test containing all your integration tests.
Create a new Test Case "Integration Tests" and automate it with the Ordered Test.
So you do not have to maintain hundreds of Test Cases.
You could also create several Ordered Tests if you want to group the integration tests, and then create a "main" Ordered Test containing them.
This way it will be easier to analyze test results especially if you have a lot of tests.
Let the integration tests run as a part of your existing nightly build.
Create a new Build Definition which does not start a build but uses the last successful nightly build, and let your Coded UI tests run using the Lab Build Template.
This way you will have different test runs for the different kinds of tests.
The only drawback is that you have to "synchronize" these two builds...
You could just schedule the second build later, so you can be sure the first build is done.
It's not really perfect, I know... but this way you could easily achieve your goal.
I am not sure if there is an alternative solution, but on the project I am currently working on we have both our Unit and Integration Test Assemblies set under the Process options (Process>Basic>AutomatedTest>TestAssembly) in our Nightly Build. This was achieved through altering the Default Build Process Template (not the Lab Default) a bit, as you suggested (I thought this was standard, but it's been a while).
I am trying to do line coverage analysis of a Java-based application. I found many resources on the internet on how to use the Sonar + JaCoCo plugin to get line coverage results, and it looks very promising. However, I couldn't get full clarity on how to go about implementing this solution.
More about my project:
There is a service being called by a website. The service is Java-based and is built using Maven.
There is also a Selenium-based test suite that is run against the website (which calls the above-mentioned service in several places). The test suite is built and invoked by Ant.
The code base for the service and the code base for the tests are at different locations on the same host.
I need to generate coverage report for the service based on the integration test suite.
The resources I went through are:
http://www.sonarsource.org/measure-coverage-by-integration-tests-with-sonar-updated/
http://www.eclemma.org/jacoco/trunk/doc/ant.html
Even after going through all of these, I am not sure where to put jacoco-agent.jar, whether to make JaCoCo part of the Maven build (the service's build process) or the Ant build (the tests' build process), how to invoke the JaCoCo agent, or where to specify the source repository (the service's code base) and test repository locations.
I have tried blind permutations of all of the above, but either the Maven build or the Ant build starts failing as soon as I add the JaCoCo tasks to it.
Can someone please help me out in this? I need to understand the exact steps to follow to get it done.
When you start your server process for the test run, you need to ensure that the JaCoCo agent is attached to the JVM (via the -javaagent option rather than the regular classpath). The agent will then listen and record details of the code covered for the lifetime of the JVM.
You then execute your client-side Selenium tests, which will invoke the server. The JaCoCo agent will record details of the code executed as part of your tests. When the client tests finish, you need to shut down your server process, which should produce a JaCoCo coverage file.
The final step is to generate a JaCoCo HTML report from that coverage file. I might suggest you look into moving your Ant-based Selenium tests into your Maven POM, since it will then be easier to control the order of test execution.
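Concretely, the flow above looks roughly like the following sketch (all paths, file names, and Ant targets are illustrative; report generation can also be done with the Ant <jacoco:report> task from the second link instead of the CLI jar shipped with newer JaCoCo releases):

```powershell
# 1. Start the service JVM with the JaCoCo agent attached via -javaagent
#    (not the regular classpath). destfile is where the coverage data is written.
java "-javaagent:/opt/jacoco/lib/jacocoagent.jar=destfile=/tmp/jacoco-it.exec,append=false" `
    -jar service.jar

# 2. In another shell, run the Ant-driven Selenium suite against the website.
ant -f /path/to/tests/build.xml run-selenium-tests

# 3. Stop the service; the agent flushes /tmp/jacoco-it.exec on JVM shutdown.

# 4. Generate an HTML report from the exec file, pointing at the service's
#    compiled classes and sources.
java -jar /opt/jacoco/lib/jacococli.jar report /tmp/jacoco-it.exec `
    --classfiles /path/to/service/target/classes `
    --sourcefiles /path/to/service/src/main/java `
    --html /tmp/coverage-report
```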
Is there any web-based test runner to run a website's unit tests from the website itself?
I know this runs afoul of some people's dogma, but I want to be able to run unit tests from inside the product I'm testing, whether that be from inside a native Win32 executable or from within a .NET executable.
A guy has already written a web-based AJAX test runner for UnitTest++.
In the past I had to rip apart NUnit so I could run tests embedded in the executable without having to ship the NUnit DLLs. This also required me to write my own graphical test runner for Windows/WinForms.
Has anyone already done the work of creating runnable unit tests for ASP.NET?
Note: The usual response by people: "Unit tests are not supposed to be in the final product, and definitely not accessible by testers, support, or developers, when on-site."
Note: You may disagree with my desire to run unit tests, but don't let that affect your answer. If there is no web-based NUnit test runner for ASP.NET, then that's the answer. Don't be afraid to answer the question; I won't bite.
I think the reason you want to do this kind of thing is the lack of a Continuous Integration server; I cannot think of another justification. So instead of trying to patch your design this way, it would be cleaner to evaluate setting up a CI server (which is not so difficult; for instance, you could look at NCastor).
So in my opinion what you need is to set up a continuous integration server in order to run unit tests and integration tests automatically as part of your automated build process.
You would deploy your application to the next 'stage' only when the build process and the tests pass. You can also configure the CI server to:
run tests
run static analysis
run non-static analysis
generate test reports
run tests with test coverage
calculate and set application version
create packages of your application
modify config files according to the target stage
minify your scripts
send emails when a build fails
Among others.
And as you mentioned, you wouldn't need to deploy the tests to your production servers.
I strongly recommend you read the following article:
http://misko.hevery.com/2008/07/16/top-10-things-i-do-on-every-project/
And here is a list of some CI servers:
Jenkins - Free & Easy configuration
Hudson - Free & Easy configuration
TeamCity - Free for Open Source projects
Cruise Control - Free (however, it has decreased in popularity because configuration is only available through XML files, which is annoying...)
Found NUnitWebRunner on GitHub: https://github.com/tonyx/NUnitWebRunner