I'm working on a docs-only project (all files are HTML or MD) hosted on GitHub. I'd like each pull request to be automatically checked with a spellchecker and write-good. I was thinking about using Travis CI for that; however, I can't use the default approach where everything gets rebuilt. For docs projects that's not desirable because:
each file in the docs is independent (no need to build the whole project each time something changes),
some spellchecker/write-good suggestions are debatable (or simply wrong) and should be ignored (e.g. because they miss context).
I don't want ALL pull requests to fail and show a long list of ignored suggestions from across the whole repo.
Is there any way for my Travis CI test to know which files/paragraphs actually changed and should be validated?
I managed to restrict Travis CI tests to the files that are actually modified in the pull request.
In .travis.yml you can run a script before the tests. I used it to create a list of modified files:
before_script:
- git diff --name-only master > modified_files
Then, in package.json, I pass that list to the script that does the actual validation:
"scripts": {
"test": "proofreader --file-list modified_files"
}
That reduced the noise, but I'd like to go deeper and validate only paragraphs that were changed.
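One hedged idea for going in that direction (this only covers the diff side; whether proofreader can consume raw text like this is something I haven't verified): extract just the lines added in the PR and check only those.

# Keep only lines added relative to master: -U0 drops context lines,
# grep keeps added lines (skipping the '+++' file header),
# sed strips the leading '+' so only the new text remains.
git diff -U0 master -- '*.md' '*.html' | grep '^+[^+]' | sed 's/^+//' > changed_lines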
Is it possible to have do_package() before do_compile() in a bitbake recipe?
In this case, binaries from the previous bitbake run would be deployed. Would bitbake warn about this situation?
I don't know for sure if it is possible to change the order of some built-in tasks, but given your sentence "in this case, binaries from the previous bitbake run would be deployed", I think it is not the proper way to achieve that.
Yocto/Bitbake supports incremental builds and package dependencies, so if a local artifact is still valid it won't be rebuilt.
In Yocto, you can also set up a shared folder (on a server) to store build artifacts, so that even building "from scratch" from a known state will actually result in downloading the artifacts instead of building them.
You can look at https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#shared-state-cache to get more info on that.
Note: if you want/need to try it anyway, maybe adding a custom task using addtask custompackage before do_compile would be a way to investigate it.
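For illustration only, a minimal sketch of what that could look like in a .bb recipe (the task body is hypothetical, and I have not verified that running it before do_compile behaves sanely):

# do_custompackage is a made-up task name; adjust the body to your needs
do_custompackage() {
    bbwarn "Running custom packaging before compile - binaries may be stale"
}
addtask custompackage before do_compile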
Otherwise, to reuse the built-in names, I would try to deltask those and recreate them with custom ordering, but once again, that looks risky to me.
We have used Jenkins' predefined build environment variables like $WORKSPACE, $BUILD_NUMBER, etc. in a lot of our Jenkins jobs.
I find it hard to understand how Jenkins sets things up so that when we print $WORKSPACE, it prints the current workspace of whichever job is running. How does it map the variable $WORKSPACE to the corresponding Jenkins job?
Jenkins needs to know certain things about your build environment and jobs in order to do its job properly. For instance it needs to know the current build number, the location where your project should be checked out, who started the current build, etc. These things are typically exposed to you through the web interface.
Jenkins also exposes this information to your build scripts through environment variables that are injected by Jenkins when your build is first launched. These environment variables can then be picked up by your script to do whatever is necessary with them.
In the example you gave ($WORKSPACE), Jenkins needs to know the absolute path to this location on your build slave, because if it didn't, it wouldn't be able to check out your source and build it. Since it knows this information, it exposes it to you as well to make writing your scripts easier.
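For example, in a freestyle job's "Execute shell" build step you can read these variables directly (all of the variables below are standard ones Jenkins injects):

# Print a few of the environment variables Jenkins injects into every build
echo "Job:       ${JOB_NAME}"
echo "Build:     #${BUILD_NUMBER}"
echo "Workspace: ${WORKSPACE}"
echo "URL:       ${BUILD_URL}"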
There's a complete list of generally available environment variables provided by Jenkins available here.
My organization has moved from executing automated tests within MTM to executing automated tests within the VSTS Test Hub, via Release Definitions. We're using MSTest to run tests with Selenium / C#.
It is a requirement for us to be able to run any individual test on-demand, post-build, and to use the Test Configuration to control the browser. This was working without issue when running tests through MTM.
Previously, when running tests through MTM, my MSTest TestContext's "Properties" were populated with runtime parameters such as the Test Configuration, e.g. TestContext.Properties["__Tfs_TestConfigurationName__"]. The runtime properties could also be seen in the resulting .trx file.
Now, when running tests through the VSTS Test Hub, those values are no longer set in TestContext automatically, nor do I see them in the .trx file, or in my Release deployment logs.
Are there any suggestions as to how the Test Configuration values of each test can be propagated into TestContext (or accessible via code otherwise) to be able to control the browser at test run-time?
I have considered creating a .runsettings file, but am unsure how Test Configuration variables would be dynamically populated into that file.
I have considered attempting to query the Test Run / Test Results in my Selenium code to determine the Test Points / Configuration, but am unsure how to determine the Test Run ID (I see the Test Run ID in the release logs, but don't know how to access it programmatically).
I have seen a VSTS Extension called 'VSTS Test Extensions' which is purported to inject the VSTS variables into a .runsettings file, but there are no details as to how to configure the .runsettings file in order to accomplish this, and the old MTM-style parameter names do not seem to work.
Are Test Configuration variables accessible globally in VSTS somehow?
I.e. could I create a PowerShell Release task to somehow access test run / test point data? Microsoft does not provide any indication of how to use Test Configurations other than for manual tests.
Any help is greatly appreciated.
After some more review, it seems that my only option is a giant hack:
I noticed that in the Release logs, the [TEST_RUNID] is output, which points me to the $(test.runid) variable, which I didn't know existed.
I can output the Run ID to an external file via a Powershell Release task.
I can then have my test code read that file, perform an API call to VSTS querying the Test Results for that Run ID, and then use that response to output the Test Configuration Name for each Test Case ID.
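A hedged sketch of that query (account and project names are placeholders, authentication here assumes a personal access token, and the exact api-version may differ on your instance):

# Query the test results for the Run ID read from the file; the JSON response
# should include, per result, the test case id and the configuration name
curl -s -u ":${VSTS_PAT}" \
  "https://myaccount.visualstudio.com/DefaultCollection/MyProject/_apis/test/runs/${RUN_ID}/results?api-version=3.0-preview"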
I can then match up my Test Case ID within each of my test methods with the Test Case IDs from the service call response, and set the browser according to the Test Configuration Name.
The only issue then is if the same test exists multiple times in the same run with different configurations, which can be hacked around by querying the Test Run every time a test begins and checking which configuration has already been run based on the State ("Pending", etc.) or by some other means I haven't thought of yet.
I have 3 stages (dev / staging / production). I've successfully set up publishing for each, so that the code will be deployed, using msbuild, to the correct location, with the correct web configs transformed - all within Jenkins.
The problem I'm having is that I don't know how to deploy to staging the code that was built on dev (and to production what was built on staging). I'm currently using SVN as the source control, so I think I would need to save the latest revision number dev has built and somehow tell Jenkins to build/deploy staging based on that number?
Is there a way to do this, or a better alternative?
Any help would be appreciated.
Edit: Decided to use the save-the-revision-number method, which passes a file containing the revision number to the next job. To do this, I followed this answer:
How to promote a specific build number from another job in Jenkins?
It explains how to copy an artifact from one job to another using the promotion plugin. For the artifact itself, I added an "Execute Windows batch command" build step after the main build with:
echo DEV_ENVIRONMENT_CORE_REVISION:%SVN_REVISION%>env.properties
Then in the staging job, following the guide above, I copied that file and used the EnvInject plugin to read it and set an environment variable, which can then be used as a parameter in the SVN Repository URL.
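For illustration (the repository URL and revision are placeholders), the produced env.properties and the parameterized URL would look roughly like this; the Subversion plugin accepts an @REVISION suffix on the Repository URL, though check your plugin version:

DEV_ENVIRONMENT_CORE_REVISION:12345
http://svn.example.com/project/trunk@${DEV_ENVIRONMENT_CORE_REVISION}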
You should be able to identify the changeset number that was built in DEV and manually pass that changeset to the Jenkins build to pull the same changeset from SVN. Obviously that makes your deployment more manual. Maybe you can set up Jenkins to publish the changeset number to a file and then have the later environment's build read that file for the changeset number.
We used to use this model as well and it was always complex. Eventually we moved to a build-once, deploy-many-times model using WebDeploy. This has made the process much simpler. Check it out - http://www.dotnetcatch.com/2016/04/16/msbuild-once-msdeploy-many-times/
In the project I'm working for we're having a continuous deployment setup. The goal is to always install the latest working build to production, unless someone manually overrides this functionality.
In order to make this work, we:
Run static code analysis
Run unit tests
Run integration tests
Run automatic UI tests, to the extent this is feasible
If any of the above steps fail, the build process is halted and the build marked as failed. If the installation package is created, it is then installed in stages to
CI --> staging --> production
At each step we run integration and UI tests for the environment, to make sure we didn't introduce something new that fails on the subsequent environments. If none of the tests fail, and N minutes pass without anyone pressing the panic button, the build gets promoted to the next environment. If the tests fail, we want to delete the package and discard it completely. The installation packages are, however, delivered to other servers, so we need to run a bunch of remote (shell) scripts to make this step happen.
The problem is that there is a big set of failure cases which we cannot reliably test in the normal automated cycle, e.g. page layout, or integrations that fail only in production, and so on.
So the actual question: How shall I demote/delete builds once they've been promoted? Is it possible to either run a remote script when deleting a build, or to use any of the promotion plugins to achieve this functionality? Is there some think-outside-the-box solution for this that I might not have thought of?
Instead of deleting builds manually, you may write a Jenkins job that accepts the build number as a parameter, deletes it, and then does the rest of the housekeeping. You can configure Jenkins access privileges so that people do not delete builds manually by accident.
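A minimal sketch of such a job's shell step, assuming a string parameter BUILD_TO_DELETE and placeholder host, job name and credentials:

# Delete the given build via Jenkins' HTTP API, then do any other housekeeping
curl -X POST -u "${USER}:${API_TOKEN}" \
  "https://jenkins.example.com/job/my-app/${BUILD_TO_DELETE}/doDelete"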
This might be a very particular case, but we decided against creating a separate job for removing the builds, for the very simple reason of keeping all the logging related to a specific build number in one single place. The setup was the following:
Promotion here means making the installation package (RPM) available to the given server, where auto-update handles the actual upgrade of the package.
We have one main build, that builds every time a new commit is available. We had some fine-tuning related to quiet times etc. but basically every new pushed set of commits resulted in a new build. The build contains all the relevant and available testing, which is far from being complete, and probably never will be.
Every hour a separate promotion step handles promotion from staging to production; this build then kicks off a separate promotion which takes the latest accepted build from CI to staging. There is a 30-minute delay before builds are promoted CI --> staging, to prevent accidental promotions of last-second commits; the delays are achieved with some bash find scripting. The promotions run in this order to make sure a build is available in staging for (at least) one hour before going to production.
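As a rough illustration of that delay check (the repository path is a placeholder): list only packages that have been sitting for at least 30 minutes, and promote from that list.

# RPMs older than 30 minutes are eligible for promotion
find /srv/repo/ci -name '*.rpm' -mmin +30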
The actual answer:
The promotion steps were done as separate builds. In order to do a real promotion, rather than a separate build with a separate log, the build kicks off an actual promotion in the main build, using curl to call the remote HTTP API. This leaves the relevant promotion star in the main build log. Using different colors, the promotions are visible at a glance.
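For reference, a hedged sketch of such a curl call against the promoted-builds plugin (host, job, credentials and promotion name are placeholders, and the exact endpoint depends on your plugin version):

# Force the named promotion on build ${BUILD_NUMBER} of the main job
curl -X POST -u "${USER}:${API_TOKEN}" \
  "https://jenkins.example.com/job/main-build/${BUILD_NUMBER}/promotion/forcePromotion?name=promote-to-staging"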
To demote builds I decided to create a separate "demote build" promotion step. This issues a purple star as a sign of the build being defective and thus removed. The demotion is done by accessing the correct build in the UI and pressing the "Remove build" button. No automation has been added to this step, but by creating a separate step for it, it would be fairly easy to automate the demotion as well. We, however, have not gotten quite that far yet.
The benefits of this approach include
A build is deleted by accessing the failed build, not by providing parameters. This makes it much easier to document, and to get right under pressure.
Having a "panic button" like this available for anyone to press, builds trust and ownership for the process not only amongst the developers but also managers and DevOps.
It's dead simple to spot dead builds, as the log is available beside the other promotion logs.
Having all the relevant promotion calls in the same build makes further scripting easier
Acute things we still have to improve include automating the testing in the later stages of the build pipeline, and a suitable way of downgrading builds after demotion. E.g. in production, a defective build and a demotion must always lead to installing the last good build, which has turned out to be fairly hard to achieve. Production data centers rarely allow this level of access from the development DC where the CI system sits. Also, stopping and starting the build pipeline must be automated, as otherwise there is a chance of slipping back to the manual state.
Naturally, in the spirit of continuous improvement, there are always things to improve. The whole setup is something of a bash/perl scripting mess, but since it's scripted and repeatable, there is always the option of improving one small piece at a time. The most important thing is the automation, as it allows for incremental steps, which any manual steps more or less prevent.
For anyone looking for an easy way to delete a build with custom steps:
Create a 'defective' promotion.
Make it manually triggered.
Force it to run on the master.
Add a choice parameter DELETE with choices NO and YES.
Add action Execute Shell.
# PROMOTED_URL is provided by the promoted-builds plugin
if [ "${DELETE}" = "YES" ]; then
    # TODO: my custom steps
    curl -X POST "${PROMOTED_URL}/doDelete"
fi
To delete a build now, just go to promotions, flip the choice to YES and click approve.