do_package() before do_compile() possible? - bitbake

Is it possible to have do_package() before do_compile() in a bitbake recipe?
In this case, binaries from the previous bitbake run would be deployed. Would bitbake warn about this situation?

I don't know for sure whether it is possible to change the order of some built-in tasks, but given your sentence "in this case, binaries from the previous bitbake run would be deployed", I don't think this is the proper way to achieve that.
Yocto/Bitbake supports incremental builds and package dependencies, so if a local artifact is still valid it won't be rebuilt.
In Yocto, you can also set up a shared folder (on a server) to store build artifacts, so that even building "from scratch" from a known state actually results in downloading the artifacts instead of rebuilding them.
You can look at https://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#shared-state-cache to get more info on that.
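For illustration, pointing builds at a shared sstate mirror is a one-line addition to local.conf; the server URL here is a placeholder:
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/share/PATH;downloadfilename=PATH"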
Note: if you want/need to try it anyway, maybe adding a custom task using addtask custompackage before do_compile would be a way to investigate it (see the sketch below).
Otherwise, to reuse the built-in names, I would try to deltask those tasks and recreate them with a custom ordering, but once again, that looks risky to me.
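As a minimal sketch of what such an experiment could look like in a recipe (the task name and its body are hypothetical):
do_custompackage() {
    # Hypothetical step: stage binaries from a previous run for packaging.
    bbnote "custompackage runs before do_compile"
}
addtask custompackage after do_configure before do_compile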

Related

How can I permanently set an environment variable using Autotools?

I'm adapting an existing program to use Autotools for its build, but the resulting process depends on an environment variable. Is there a way to permanently set this environment variable during the build or installation process?
The program is intended to be used by Unix users. I could try to append an export command directly to the .bashrc file and warn the user in case it fails, since most of them will actually just use Ubuntu to run it (it's a relatively simple program that targets students), but I'd like to know if there's a more portable way to do this.
This is what I'd prefer not to do:
echo 'export VAR=/my/totally/not/hardcoded/path' >> $HOME/.bashrc
Sorry to come to this late, but all of the answers to date are shockingly ... incomplete.
Building and installing software are both core use cases for the Autotools, and the installation part can absolutely involve adding or modifying files that affect user environments. If the software is installed by a user with sufficient privilege, then such effects can absolutely be applied to all system users, though the details may vary a bit from system to system (and the Autotools can help with that, too!).
For example, on RedHat-family Linuxes such as RedHat Enterprise, Fedora, Oracle Linux, and various others, you can drop an appropriately named file in /etc/profile.d, and the commands in it will automatically be read and executed by every login shell. Setting environment variables for all users is one of the common uses of this feature. I'm uncertain about Debian-family Linuxes such as Ubuntu, but it is always possible to modify /etc/profile instead to achieve the same effect, and you absolutely can write an Automake install hook to do that.
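As a concrete sketch, the installed file could look like this (the file name, variable, and path are all illustrative; an Automake install-exec-hook could copy it into place):
# /etc/profile.d/myprog.sh -- sourced by every login shell
export MYPROG_DATA=/usr/share/myprog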
Or for an altogether different approach, you can always provide a wrapper script around your program that sets the needed environment variables (supposing that the point is other than to add a directory to the PATH so as to find the program in the first place). In that case, you can even install the main program in a location that is not ordinarily in the path, so that users don't accidentally run it directly. This mechanism has the advantage that the environment variables are scoped to a run of the program, not a whole login session, but the disadvantage that users cannot override them.
I guess, no.
Autotools are about building your program, not about setting up the environment the program runs in. That's what users/admins are supposed to do. (Well, I can imagine doing this, but I really don't want to try to figure it out, because the idea itself seems broken to me.)
If your program REALLY needs some environment variable at run time, then you should patch your sources so the application tests whether the variable exists and sets it to the desired default value if it doesn't. Another idea is to require an obligatory command line switch to pass the value in.
It's not clear what this has to do with autotools (or any other build system). No build system, by itself, can arrange for an environment variable to be present when the program it builds is run at a later time.
One solution is for your program to have a hardcoded default value for the var which is used if the environment var isn't present when the program starts running. Another frequently used solution is to name your binary something like myprog.bin and install a shell script named myprog which sets up the environment before doing exec myprog.bin.
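A minimal sketch of that wrapper, reusing the myprog/myprog.bin names from above (the variable, its value, and the install path are illustrative):
#!/bin/sh
# Wrapper installed as "myprog"; the real binary is installed as myprog.bin.
VAR=/my/default/value            # hypothetical variable and default
export VAR
exec /usr/lib/myprog/myprog.bin "$@"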
I'm adapting an existing program to use Autotools for its build, but the resulting process depends on an environment variable. Is there a way to permanently set this environment variable during the build or installation process?
You've not been very concrete about what the program is (e.g. is the program a daemon? A user program?) or the nature of the environment variable dependency (e.g. is it another program? A mount point? A URL? A DB connection string?). Being more specific might give a better answer for you.
Anyway, autotools is not likely to offer any feature to help: it's a build system. Depending on the nature of your environment variable dependency, you're likely going to need package management (if you package it) or system-administration-level setup.
Since you think your primary user base is on Ubuntu, this help page might give you some ideas.

git build number c#

I'm trying to embed git describe-generated version info into AssemblyInfo.cs, plus a label within an ASP.NET website.
I already tried using git-vs-versionino, but this assumes the Git executable is on PATH. However, the default install of msysgit on Windows does not set this up; it only provides Git Bash. This caused problems.
Now I am looking for a way to use the libgit2sharp library (for zero external dependencies) as a build number generator. However, this library has no describe command...
Thanks!
git-describe is a UI feature that nobody has implemented in the library or its bindings yet (or at least nobody has contributed it), but you can do it yourself fairly easily.
Get a list of the tags and the commits they point to, then walk down from your current commit, counting how many steps it takes to reach a commit that is in the list you built. That already gives you the information you need: if the number of steps is zero, your description is the tag name only; otherwise you append the number of steps and the current commit's abbreviated id to it.
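To make the walk concrete, here is the same algorithm sketched with git plumbing commands; the equivalent LibGit2Sharp code would enumerate the tag collection and drive a revision walk the same way. This is an approximation of git describe, not a faithful reimplementation:
#!/bin/sh
# Walk down from HEAD until we hit a tagged commit, counting steps.
steps=0
for commit in $(git rev-list HEAD); do
    tag=$(git tag --points-at "$commit" | head -n 1)
    if [ -n "$tag" ]; then
        if [ "$steps" -eq 0 ]; then
            echo "$tag"                                        # exactly on a tag
        else
            echo "$tag-$steps-g$(git rev-parse --short HEAD)"  # tag-N-gSHA format
        fi
        exit 0
    fi
    steps=$((steps + 1))
done
echo "no tag reachable from HEAD" >&2
exit 1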
There's a work-in-progress libgit2 pull request that proposes an implementation of the git-describe functionality.
See #1066 for more information.
It's not finished yet. Make sure to subscribe to it in order to be notified of its future progress.
Once it's done, it should be quite easy to bind it and make it available through LibGit2Sharp.

Demote a build? How to delete promoted builds and run specified script on deletion in Jenkins

In the project I'm working on, we have a continuous deployment setup. The goal is to always install the latest working build to production, unless someone manually overrides this functionality.
To make this work, we:
Run static code analysis
Run unit tests
Run integration tests
Run automatic UI tests, to the extent this is feasible
If any of the above steps fails, the build process is halted and the build marked as failed. If an installation package is created, it is then installed step by step to
CI --> staging --> production
At each step we run integration and UI tests for the environment, to make sure we didn't introduce something new that fails in the subsequent environments. If none of the tests fail, and N minutes pass without anyone pressing the panic button, the build gets promoted to the next environment. If the tests fail, we want to delete the package and discard it completely. The installation packages are, however, delivered to other servers, so we need to run a bunch of remote (shell) scripts to make this step happen.
The problem is that there is a big set of failure cases which we cannot reliably test in the normal automated cycle, e.g. page layout, or integrations that fail only in production, and so on.
So the actual question: how shall I demote/delete builds once they've been promoted? Is it possible to either run a remote script when deleting a build, or use any of the promotion plugins to achieve this functionality? Is there some think-outside-the-box solution for this that I might not have thought of?
Instead of deleting builds manually, you may write a Jenkins job that accepts the build number as a parameter, deletes the build, and then does the rest of the housekeeping. You can configure Jenkins access privileges so that people do not delete builds manually by accident.
This might be a very particular case, but we decided against creating a separate job for removing the builds, for the very simple reason of keeping all the logging related to a specific build number in one single place. The setup was the following:
Promotion here means making the installation package (RPM) available to the given server, where auto-update handles the actual upgrade of the package.
We have one main build, which runs every time a new commit is available. We had some fine-tuning related to quiet times etc., but basically every new pushed set of commits resulted in a new build. The build contains all the relevant and available testing, which is far from complete and probably never will be.
Every hour a separate promotion step handles promotion from staging to production. That build in turn kicks off a separate promotion which takes the latest accepted build from CI to staging. There is a 30-minute delay before builds are promoted CI --> staging, to prevent accidental promotions of last-second commits; the delays were implemented with some bash find scripting. The promotions run in this order to make sure a build has been available in staging for at least one hour before going to production.
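For illustration, an age gate of that kind can be a find one-liner; the repository path is a placeholder:
# Pick the lexicographically last CI package that is at least 30 minutes old.
candidate=$(find /repo/ci -name '*.rpm' -mmin +30 | sort | tail -n 1)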
The actual answer:
The promotion steps were done as separate builds. In order to record a real promotion, rather than just a separate build with a separate log, each such build kicks off an actual promotion on the main build, using curl and the remote HTTP API. This leaves the relevant promotion star in the main build's log, and since each promotion has its own color, the promotions are visible at a glance.
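A sketch of such a curl call, assuming the Promoted Builds plugin's HTTP API; the host, job, credentials, and promotion name are placeholders:
# Force the named promotion on the main build, stamping its star there.
curl -u "$USER:$API_TOKEN" -X POST \
  "https://jenkins.example.com/job/main-build/$BUILD_NUMBER/promotion/forcePromotion?name=ci-to-staging"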
To demote builds I decided to create a separate "demote build" promotion step. This issues a purple star as a sign of the build being defective and thus removed. The demotion is done by accessing the correct build in the UI and pressing the "Remove build" button. No automation has been added to this step, but by creating a separate step it would be fairly easy to automate the demotion as well. We, however, have not gotten quite that far yet.
The benefits of this approach include
A build is deleted by accessing the failed build, not by providing parameters. This makes it much easier to document, and to get right under pressure.
Having a "panic button" like this available for anyone to press, builds trust and ownership for the process not only amongst the developers but also managers and DevOps.
It's dead simple to spot dead builds, as the log is available alongside the other promotion logs
Having all the relevant promotion calls in the same build makes further scripting easier
Acute things we still have to improve include automating the testing in the later stages of the build pipeline, and finding a suitable way of downgrading builds after a demotion. E.g. in production, a defective build and a demotion must always lead to installing the last good build, which has turned out to be fairly hard to achieve: production data centers are rarely accessible at this level from the development DC where the CI system sits. Also, stopping and starting the build pipeline must be automated, or else there is the chance of slipping back to the manual state.
Naturally, in the spirit of continuous improvement, there are always things to improve. The whole setup is something of a bash/perl scripting mess, but since it's scripted and repeatable, there is always the option of improving one small piece at a time. The most important thing is the automation, as it allows for incremental steps, which any manual steps more or less prevent.
For anyone looking for an easy way to delete a build with custom steps:
Create a 'defective' promotion.
Make it manually triggered.
Force it to run on the master.
Add a choice parameter DELETE with choices NO and YES.
Add action Execute Shell.
if [ "${DELETE}" = "YES" ]; then
    # TODO: my custom steps
    curl -X POST "${PROMOTED_URL}/doDelete"
fi
To delete a build now, just go to promotions, flip the choice to YES and click approve.

With subversion or TFS, how would you go about setting up automatic builds?

I need some guidance with regards to naming convention and how this would happen automatically.
I am using /branches /trunk /tags folder structure.
I am using a build app (finalbuilder).
Which tag name would I tell it to pull from (or revision number, etc.)? Since it is going to change all the time, how do people perform nightly builds? Do they use the date in the name of the release?
Just use the revision number. Something like CruiseControl.NET should make this pretty easy for you.
Use TeamCity, and set up a separate build for trunk plus every branch. We do this and it's very helpful.
I would set up the build server to monitor the /trunk folder and trigger a build whenever anything is committed there. If wanted, you could have the build script end by creating a tag for the build (even though that might be a bit ambitious, depending on how often things are committed to the trunk). When I have done that, I have usually included the Subversion revision number in the tag name, and also in the version number of the files (to the extent that this is applicable).
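For example, a tagging step at the end of the build script could be as simple as this; the repository URL is a placeholder, and $REVISION is assumed to hold the revision that was just built:
svn copy "https://svn.example.com/repo/trunk" \
    "https://svn.example.com/repo/tags/build-r$REVISION" \
    -m "Automated tag for build at r$REVISION"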
You should be able to pull right from /trunk (and possibly run other nightly builds from branches you think are important). It's not particularly useful to do a nightly build from a tag, since tags are generally static. When the source is checked out, you can identify the checkout by the revision number. That way, if you ever need to find out what has changed since then, you can diff from that revision (or branch, whatever).
We use Hudson, which checks periodically (at an interval set by you) for changes to whatever svn path you give it. It then runs a shell script (we are building for iPhone, so we use xcodebuild, but you could use whatever is used for ASP.NET). We then upload the results to our local server under $REVISION. It would be easy to run automated tests in this step as well.
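A sketch of such a build step; the host and upload path are placeholders, and $REVISION is assumed to be the variable holding the checked-out revision:
# Build, then publish the products under the revision number.
xcodebuild -configuration Release
scp -r build/Release "user@buildhost:/builds/$REVISION/"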
Since you are asking about TFS:
We are using a CommonAssemblyInfo file to increment the version of the DLLs. Nightly builds are typically made from the trunk.
We have a Main folder from which a "Dev" folder is branched for the current release. We make nightly builds from the current Dev branch, and manual, so-called reference builds once we merge Dev stuff back into Main.
Builds are defined via the Build Agent machinery. Custom tasks, like incrementing the version number, come into play via MSBuild.

Improving Your Build Process

Or, actually establishing a build process when there isn't much of one in place to begin with.
Currently, that's pretty much the situation my group faces. We do web-app development primarily (but no desktop development at this time). Software deployments are ugly and unwieldy even with our modest apps, and we've had far too many issues crop up in the two years I have been a part of this team (and company). It's past time to do something about that, and the upshot is that we'll be able to kill two Joel Test birds with one stone (daily builds and one-step builds, neither of which exists in any form whatsoever).
What I'm after here is some general insight on the kinds of things I need to be doing or thinking about, from people who have been in software development for longer than I have and also have bigger brains. I'm confident that will be most of the people currently posting in the beta.
Relevant Tools:
Visual Build
Source Safe 6.0 (I know, but I can't do anything about whether or not we use Source Safe at this time. That might be the next battle I fight.)
Tentatively, I've got a Visual Build project that does this:
Get source and place in a local directory, including the DLLs the project needs.
Get config files and rename as needed (we're storing them in a special subdirectory that isn't part of the actual application, and they are named according to use).
Build using Visual Studio
Precompile using command line, copying into what will be a "build" directory
Copy to destination.
Get any necessary additional resources - mostly things like documents, images, and reports that are associated with the project (and put them into the directory from step 5). There's a lot of this stuff, and I didn't want to include it previously. However, I'm only going to copy changed items, so maybe it's irrelevant. I wasn't sure whether I really wanted to include this stuff in the earlier steps.
I still need to coax some logging out of Visual Build for all of this, but I'm not at a point where I need to do that yet.
Does anyone have any advice or suggestions to make? We're not currently using a Deployment Project, I'll note. I presume it would remove some of the steps necessary in this build (like the web.config swapping).
When taking on a project that has never had an automated build process, it is easier to take it in steps. Do not try to swallow too much at one time, or it can feel overwhelming.
First, get your code compiling in one step using an automated build tool (e.g. NAnt or MSBuild). I am not going to debate which one is better; find one that feels comfortable to you and use it. Have the build scripts live with the project in source control.
Next, figure out how you want your automated build to be triggered, whether that is hooking it up to CruiseControl or running a nightly build task using Scheduled Tasks. CruiseControl or TeamCity is probably the best choice here, because they include a lot of tools that make this step easier. CruiseControl is free, and TeamCity is free up to a point, beyond which you might have to pay depending on how big the project is.
OK, by this point you will be pretty comfortable with the tools. Now you are ready to add more tasks based on what you want to do for testing, deployment, and so on.
Hope this helps.
I have a set of PowerShell scripts that do all of this for me.
Script 1: Build - this one is simple; it is mostly handled by a call to msbuild, and it also creates my database scripts.
Script 2: Package - this one takes various arguments to package a release for various environments, such as test, and subsets of the production environment, which consists of many machines.
Script 3: Deploy - this is run on each individual machine from within the folder created by the Package script (the Deploy script is copied in as part of packaging).
From the deploy script, I do sanity checks on things like the machine name so things don't accidentally get deployed to the wrong place.
For web.config files, I use the
<appSettings file="Local.config">
feature to have overrides that are already on the production machines, and they are read-only so they don't accidentally get written over. The Local.config files are not checked in, and I don't have to do any file switching at build time.
[Edit] The equivalent of appSettings file= for a config section is configSource="Local.config"
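Since the snippet above is only a fragment, here is a minimal sketch of how the two files fit together; the key names and values are illustrative:
<!-- web.config (checked in): defaults, plus a pointer to local overrides -->
<appSettings file="Local.config">
  <add key="DataRoot" value="C:\dev\data" />
</appSettings>
<!-- Local.config (exists only on the target machine, never checked in) -->
<appSettings>
  <add key="DataRoot" value="D:\prod\data" />
</appSettings>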
We switched from using a perl script to MSBuild two years ago and haven't looked back.
Building Visual Studio solutions can be done by just specifying them in the main XML file.
For anything more complicated (getting your source code, executing unit tests, building install packages, deploying web sites), you can just create a new .NET class deriving from Task that overrides the Execute method, and then reference it from your build XML file.
There is a pretty good introduction here: introduction
I've only worked on a couple of .NET projects (I've done mostly Java), but one thing I would recommend is using a tool like NAnt. I have a real problem with coupling my build to the IDE: it ends up making it a real pain to set up build servers down the road, since you have to do a full VS install on any box that you want to build from in the future.
That being said, any automated build is better than no automated build.
Our build process is a bunch of homegrown Perl scripts that have evolved over a decade or so; nothing fancy, but it gets the job done. One script gets the latest source code, another builds it, a third stages it to a network location. We do desktop application development, so our staging process also builds install packages for testing and, eventually, shipping to customers.
I suggest you break it down into individual steps, because there will be times when you want to rebuild but not get the latest source, or maybe just need to re-stage. Our scripts can also handle building from different branches, so consider that too in whatever solution you develop.
Finally, we have a dedicated build machine that rebuilds the trunk and maintenance branches every night and sends out an email reporting any problems, or that it completed successfully.
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out (or gets latest on) the "main" build script and then launches it.
I say this because I see teams just running the latest version of the build script on the server, but either never putting it in source control, or only checking it in sporadically. If you make the build process itself "get" the script from source control, it forces you to keep the latest and greatest build script in there.
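A sketch of that bootstrap script, shown with svn (the same idea applies to any VCS); the repository URL and paths are placeholders:
#!/bin/sh
# Fetch the canonical build script from source control, then hand off to it.
svn checkout "https://svn.example.com/repo/trunk/build" build-scripts
exec sh build-scripts/build.sh "$@"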
Our build system is a makefile (or two). It has been rather fun getting it working, as it needs to run both on Windows (as a build task under VS) and under Linux (as a normal "make bla" task). The really fun thing is that the build gets the actual file list from a .csproj file, builds (another) makefile from that, and runs it. In the process, the makefile actually calls itself.
If that thought doesn't scare the reader, then (either they are crazy or) they can probably get make + "your favorite string mangler" to work for them.
We use UppercuT.
UppercuT uses NAnt to build and it is extremely easy to use.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
