sonarqube: 5.1.2
sonar-runner: 2.2.1
php plugin: 2.6
phpunit: 4.2.6
We're running PHPUnit over our application, but I'm not able to get the expected coverage percentage in SonarQube.
In my phpunit.xml we have filters defining the only folders we want covered:
<whitelist addUncoveredFilesFromWhitelist="false">
    <directory>./site-library-php/src/main/php/BabelCentral/Model/Content</directory>
</whitelist>
In my sonar properties:
sonar.modules=php-module
php-module.sonar.phpUnit.coverage.analyzeOnly=true
php-module.sonar.phpUnit.analyzeOnly=true
php-module.sonar.php.tests.reportPath=site-main-php/src/test/target/reports/phpunit.xml
php-module.sonar.php.coverage.reportPath=site-main-php/src/test/target/reports/phpunit.coverage.xml
Viewing the log on Jenkins, the tests seem to run fine, and the run ends with:
Tests: 1479, Assertions: 4165, Failures: 58, Incomplete: 14, Skipped: 55.
Generating code coverage report in Clover XML format ... done
Generating code coverage report in HTML format ... done
// further down
11:47:51.973 INFO - Analyzing PHPUnit test report: site-main-php/src/test/target/reports/phpunit.xml with org.sonar.plugins.php.phpunit.PhpUnitResultParser#35e7e715
11:47:52.604 INFO - Analyzing PHPUnit unit test coverage report: site-main-php/src/test/target/reports/phpunit.coverage.xml with PHPUnit Unit Test Coverage Result Parser
All seems to be working well. However, SonarQube reports a different result, covering the entire source folder. These are the numbers shown on the dashboard:
Unit Tests Coverage: 1.8%
Line Coverage: 1.8%
Unit Test Success: 95.9%
Is there any way to have SonarQube respect PHPUnit's configured filter?
Note:
I'm able to achieve the desired coverage numbers if I explicitly set
php-module.sonar.sources
to the directories/files that I want. It's just that a comma-separated property is harder to manage than PHPUnit's XML config.
Defining the set of source files which should be analysed by SonarQube is crucial: it's on this set of files that coding rules are applied and that metrics are computed.
sonar.sources is therefore a mandatory property and is the main way to configure the set of source files.
Other properties may be used to refine the set of source files:
You may configure sonar.tests: test files are automatically excluded from sonar.sources.
You may have to exclude some other PHP files like dependencies or generated code with sonar.exclusions.
You should also be aware that several plugins may contribute to the global coverage of the project: if you installed the JavaScript plugin and if sonar.sources contains JavaScript files, they will also be taken into account when calculating the coverage metrics.
If you're only interested in adjusting coverage, you can exclude files from coverage metrics with sonar.coverage.exclusions.
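For illustration, these properties might be combined roughly as follows in sonar-project.properties (a sketch reusing the paths from the question; the exclusion patterns are placeholders to adapt):
sonar.modules=php-module

# The set of files on which rules are applied and metrics are computed
php-module.sonar.sources=site-library-php/src/main/php
php-module.sonar.tests=site-main-php/src/test

# Keep dependencies or generated code out of the analysis entirely
php-module.sonar.exclusions=**/vendor/**

# Keep files out of the coverage metrics only (placeholder pattern)
php-module.sonar.coverage.exclusions=**/SomeLegacyFolder/**

php-module.sonar.php.tests.reportPath=site-main-php/src/test/target/reports/phpunit.xml
php-module.sonar.php.coverage.reportPath=site-main-php/src/test/target/reports/phpunit.coverage.xml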
Related
I am using the lintr library in R to find linting issues in the code. I put them into an XML format like this:
<lintsuites>
<lintissue filename="/home/.../blah.R" line_number="36" column_number="1" type="style" message="Trailing blank lines are superfluous."/>
<lintissue filename="/home/.../blahblah.R" line_number="1" column_number="8" type="style" message="Only use double-quotes."/>
</lintsuites>
Now, I would like to fail the Azure DevOps build when issues like this occur.
I was able to get my tests in a JUnit format like this:
<testsuite name="MB Unit Tests" timestamp="2020-01-22 22:34:07" hostname="0000" tests="29" skipped="0" failures="0" errors="0" time="0.05">
<testcase time="0.01" classname="1_Unit_Tests" name="1_calculates_correctly"/>
<testcase time="0.01" classname="1_Unit_Tests" name="2_absorbed_correctly"/>
...
</testsuite>
And when I do this step in the Azure pipeline, my build fails if any tests in the test suite fail:
- task: PublishTestResults@2
  displayName: 'Publish Test Results'
  inputs:
    testResultsFiles: '**/*.xml'
    searchFolder: '$(System.DefaultWorkingDirectory)/fe'
    mergeTestResults: true
    failTaskOnFailedTests: true
I would like something similar for failing the build when there are linting issues. I would also like the users to see what those linting issues are in the build output.
Thanks
It is not possible to achieve a similar result for the lintr XML with PublishTestResults@2.
The workaround you can try is to use a PowerShell task to check the content of your lintr XML file. If the content is not empty, fail the pipeline from the PowerShell task.
The PowerShell task below checks the content of lintr.xml (here just <car></car>), echoes that content to the task logs, and exits with 1 to fail the task if the content is null:
- powershell: |
    [xml]$XmlDocument = Get-Content -Path "$(system.defaultworkingdirectory)/lintr.xml"
    if($XmlDocument.OuterXml){
      echo $XmlDocument.OuterXml
    }else{exit 1}
  displayName: lintr result
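To fail the build on actual lint issues instead (as asked), a variant of the same idea could count the <lintissue> elements; this is only a sketch that assumes the lintr.xml layout shown in the question:
- powershell: |
    [xml]$lint = Get-Content -Path "$(System.DefaultWorkingDirectory)/lintr.xml"
    $issues = $lint.SelectNodes("//lintissue")
    foreach($issue in $issues){
      # surface each issue in the build output
      Write-Host "$($issue.filename):$($issue.line_number) [$($issue.type)] $($issue.message)"
    }
    if($issues.Count -gt 0){
      Write-Host "##vso[task.logissue type=error]lintr found $($issues.Count) issue(s)"
      exit 1
    }
  displayName: fail on lintr issues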
You can also use the statement below in a PowerShell task to upload the lintr XML file to the build summary page, where you can download it:
echo "##vso[task.uploadsummary]$(system.defaultworkingdirectory)/lintr.xml"
You can check here for more logging commands.
Update:
A workaround to show the lintr results in a nice way is to create a custom extension that displays HTML results in the Azure DevOps pipeline.
You can try creating a custom extension and producing HTML lint results. Please refer to the answer in this thread for an example custom extension that displays HTML.
Other developers have already submitted requests to Microsoft for implementing this feature. Please vote it up here or create a new one.
I am trying to create custom test cases using the VTS binary test template, but the Android codelab pages do not describe how to incorporate shell-executable tests into the VTS framework using the binary test template. Is this even possible?
I have successfully created custom C/C++ tests using the same binary test template given as an example in the codelab.
I assume you created
an Android.bp with a cc_test type binary called MyVtsTestBinary (sketched below),
a corresponding AndroidTest.xml test configuration,
and an Android.mk test module configuration like so:
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := MyVtsTestName
include test/vts/tools/build/Android.host_config.mk
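For reference, the Android.bp entry for that cc_test binary might look roughly like this (a sketch; the source file name is an assumption):
// Android.bp
cc_test {
    name: "MyVtsTestBinary",
    srcs: ["my_vts_test.cpp"],
}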
vts-tradefed will expect your test binary and all required libraries to be located in $ANDROID_HOST_OUT/vts/android-vts/testcases. Your binaries will be copied there if you add them to the target_native_modules in test/vts/tools/build/tasks/vts_package.mk.
target_native_modules := \
[...]\
MyVtsTestBinary
You can check whether your test is known to VTS with:
vts-tradefed list modules
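If the module is listed, you should then be able to invoke it with something along these lines (hedged; check the VTS codelab for the exact plan name used in your tree):
vts-tradefed run commandAndExit vts -m MyVtsTestName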
I am attempting to make a migration from D7 to D8 using Migrate Manifest. I have all the migrate modules installed (i.e. the 3 core migrate modules, Migrate Upgrade, Migrate Plus, Migrate Tools and Migrate Manifest). I followed the directions at this link: https://www.drupal.org/node/2350651 to prepare my manifest file. This is the relevant part of my resulting yaml file (truncated for brevity):
- block_content_body_field
- block_content_entity_display
- block_content_entity_form_display
- block_content_type
- d7_block
- d7_custom_block
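For context, the drush invocation documented at that link takes roughly this form (the legacy database URL below is only a placeholder):
drush migrate-manifest --legacy-db-url=mysql://d7user:d7pass@localhost/d7db manifest.yml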
When I ran migrate-manifest via Drush, as specified by the directions in the link above, I received a parse error pointing to the first line of the YAML list. This is the exact error: Unable to parse at line 1 (near " - block_content_body_field").
I tried several debugging steps:
I ran the file through a YAML linter to check whether it was properly formatted; it passed.
I moved the block migrations below the ones prefixed with 'd7' to match the example in the link.
I attempted a piecemeal migration of a single item from my list of migrations; it worked.
I tried altering the spacing on the list items.
All these steps failed to resolve the issue, and I am unsure what other debugging steps I should take.
Any ideas would be appreciated.
I'm moving from Jenkins to Concourse CI to run my Sauce Labs e2e tests. Sauce Labs groups together tests that have the same build number string:
name: 'Chrome XS',
browserName: 'chrome',
tunnelIdentifier: process.env.TUNNEL_IDENTIFIER,
build: process.env.JENKINS_BUILD_NUMBER,
platform: 'Windows 10',
shardTestFiles: true,
maxInstances: 20,
How can I pass the build number to my script using an environment variable, as shown above? The Concourse GUI uses name #number. Is there any way to retrieve this? I tried printing all the environment variables in the Docker container, but it's not set by default.
Metadata like the build number/ID are intentionally not provided to tasks. See https://concourse-ci.org/implementing-resources.html#resource-metadata
This sounds like potentially a use case for a Sauce Labs resource?
In Concourse, build metadata is only available for resources, not tasks.
An example of using build metadata with resources is to include it in build result notification emails. The following blog entry contains more information about it:
http://lmpsilva.typepad.com/cilounge/2016/10/how-to-insert-build-metadata-into-user-notifications-in-concourse.html
If you really want to use the build number for versioning, you could try to create your own Concourse resource that returns the version number; however, I would use your code commit number instead. Another alternative is to use the semver resource in Concourse: https://github.com/concourse/semver-resource
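A minimal sketch of the semver-resource approach, with placeholder names throughout (repository, credential, image, and test command are all assumptions to adapt):
resources:
- name: version
  type: semver
  source:
    driver: git
    uri: git@github.com:myorg/version-store.git
    branch: master
    file: version
    private_key: ((version-repo-key))

jobs:
- name: e2e-tests
  plan:
  - get: version
    params: {bump: patch}
  - task: run-e2e
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: node}
      inputs:
      - name: version
      run:
        path: sh
        args:
        - -ec
        - |
          # expose the semver value to the tests in place of JENKINS_BUILD_NUMBER
          export BUILD_NUMBER="$(cat version/version)"
          echo "Using build number $BUILD_NUMBER"
          # npm run e2e   # placeholder for the actual test command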
I just ran the nginx::source recipe on my Vagrant box, and I see very unusual behaviour.
When I include the recipes from the Vagrantfile (as below), everything works like a charm,
chef.add_recipe("project::nginx")
chef.add_recipe("nginx::source")
(The project::nginx recipe is very simple; I use it to override default attributes of the nginx cookbook.)
but if I include the recipe at the very end of project::nginx (mentioned above), everything falls apart:
node.default['nginx']['server_names_hash_bucket_size'] = 128
include_recipe "nginx::source"
Until now I didn't know there was any difference in behaviour between those two invocations. Does anybody here know what the difference is?
Gotcha! It's a Chef 11 feature, and the issue exists solely in chef-solo :)
To give a quick summary, the difference is:
chef.add_recipe() - loads the entire cookbook context (all the files, e.g. recipes, definitions, attributes...).
include_recipe "" - files (attributes, definitions, etc.) of cookbooks that are not in the expanded run list are not loaded.
There are at least 4 ways to solve the issue (i.e. get the files into the run list):
include_attribute - include the desired attribute file explicitly.
metadata.rb -> dependency - if your cookbook uses a recipe from another cookbook, put that cookbook in the depends section of metadata.rb, and all its files will be loaded (see the sketch below this list).
chef.add_recipe() - Load recipe via Vagrantfile. (Mentioned here just for reference)
Berkshelf - you may use this cookbook manager to solve the issue as well. Here's the Stackoverflow thread about this exact problem and some Docs
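As an illustration of the metadata.rb option above, a minimal sketch (cookbook name and version are assumptions):
# project/metadata.rb
name     'project'
version  '0.1.0'
# Declaring the dependency makes the nginx cookbook's attribute and
# definition files part of the expanded load set as well
depends  'nginx'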
For those who are interested in further reading, Chef 11 introduced dependency-based cookbook loading for non-recipe files. The new loading logic means that files belonging to cookbooks which exist in the cookbook_path but are not in the expanded run_list or dependencies of the cookbooks in the expanded run_list will no longer be loaded. REF: Opscode breaking changes documentation, and if you need a signature of the error I got, here's the exactly same one, even for the same cause.