Currently I am trying to improve automation in my test environment which uses Apache Karaf 2.4 and Jenkins.
My goal is, if Jenkins successfully builds the project (and updates a Maven repository), to deploy the new version in Karaf. Karaf and Jenkins run on different machines.
My current solution is to use a feature descriptor, but I am facing a lot of trouble with it: I have found no easy way to update feature bundles in Karaf 2.4. As far as I know, there is no single command that updates all bundles of an existing feature.
My current approach is to grep the output of the list command for a specific pattern, extract all bundle IDs, and then run update for each ID. This approach is error-prone (it may include bundles that are not part of the feature if they happen to match the same naming pattern), so I was wondering if there is a better way to update all my feature bundles automatically in one go?
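To illustrate, the grep approach boils down to something like this (only a rough sketch - the naming pattern and the way the bundle ID is extracted from the osgi:list output are placeholders for my setup):

# list bundles remotely, pick the ones matching the feature's naming pattern,
# cut the bundle ID out of the leading "[ 123]" column and update each one
bin/client -u karaf "osgi:list" \
  | grep "com.example.myfeature" \
  | sed 's/^\[ *\([0-9]*\)\].*/\1/' \
  | while read id; do bin/client -u karaf "osgi:update $id"; done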
Also, there is another problem: when a new feature gets added to or removed from the feature file, I have found no elegant way to update it. My current approach was to first uninstall all associated bundles (with grep again...), then remove the repository from Karaf, and then place the new version of the feature file in the deployment folder. As this is very cumbersome, I was wondering if there is a better way to do this in Karaf 2.4?
I think deployment got better in Karaf 3, but I am unable to use it because of this bug (link). As I am in a testing environment and my software tends to be extremely unstable, I often have to kill the Karaf process and restart it again. And this is a big problem in Karaf 3. Hopefully it will be fixed in Karaf 4 but I do not want to wait until it is released.
Do you have any suggestions for how my deployment process could be improved? Maybe there are better ways than my solution (I really hope so!), but I haven't seen them yet.
This is a nice request, because I've been working on something similar just recently. Basically I've got the same setup as you.
So you just need to install the latest Jolokia OSGi servlet on top of Karaf to make all JMX commands accessible via REST.
For a showcase I created a Maven plugin (I'm going to publish those sources in a couple of weeks - it might even get into the Karaf Maven plugin) that installs the current bundle via a REST request on top of Karaf; it's currently also able to check for existing bundles.
Basically you need to issue the following REST POST:
{
  "type": "EXEC",
  "mbean": "org.apache.karaf:type=bundle,name=root",
  "operation": "install(java.lang.String,boolean)",
  "arguments": ["mvn:${project.groupId}/${project.artifactId}/${project.version}", true]
}
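For example, with curl such a request looks roughly like this (assumptions: Karaf's default HTTP port 8181, the default /jolokia context of the Jolokia OSGi agent, the default karaf/karaf credentials and an example Maven artifact - adjust all of these to your setup):

curl -u karaf:karaf -H "Content-Type: application/json" \
  -X POST http://karaf-host:8181/jolokia \
  -d '{"type":"EXEC","mbean":"org.apache.karaf:type=bundle,name=root","operation":"install(java.lang.String,boolean)","arguments":["mvn:com.example/my-bundle/1.0.0",true]}'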
You'll need to use the latest Jolokia snapshot version to get around the role-based authentication that is used in the latest Karaf versions.
For this you'll also need to create a configuration file in etc/, called org.jolokia.osgi.cfg, which contains the following entries:
org.jolokia.user=karaf
org.jolokia.realm=karaf
org.jolokia.authMode=jaas
For more details also check the issue for it.
Related
We are currently in the process of developing custom operators and sensors for our Airflow (>2.0.1) on Cloud Composer. We use the official Docker image for testing/development.
As of Airflow 2.0, the recommended way is not to put them in the plugins directory of Airflow but to build them as a separate Python package. This approach, however, seems quite complicated when developing DAGs and testing them against the Dockerized Airflow.
To use Airflow's recommended approach we would use two separate repos for our DAGs and the operators/sensors; we would then mount the custom operators/sensors package into Docker to quickly test it there and edit it on the local machine. For further use on Composer we would need to publish our package to our private PyPI repo and install it on Cloud Composer.
The old approach, however, of putting everything in the local plugins folder, is quite straightforward and doesn't have these problems.
Based on your experience, what is your recommended way of developing and testing custom operators/sensors?
You can put the "common" code (custom operators and such) in the dags folder and exclude it from being processed by the scheduler via an .airflowignore file. This allows for rather quick iterations when developing.
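For example (assuming the shared code lives in a common/ subfolder of the dags folder - the name is just a placeholder), a one-line .airflowignore next to your DAGs keeps the scheduler from trying to parse that folder as DAG files:

common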
You can still keep the DAGs and the "common code" in separate repositories to make things easier. You can easily use a "submodule" pattern for that (add the "common" repo as a submodule of the DAG repo) - this way you will be able to check them out together. You can even keep different DAG directories (for different teams) with different versions of the common packages this way (just submodule-link them to different versions of the packages).
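For example (the repository URL and the common directory name are placeholders):

git submodule add https://github.com/your-org/airflow-common.git common
git submodule update --init --recursive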
I think the "package" pattern is more of a production-deployment thing than a development one. Once you have developed the common code locally, it would be great to package it as a common package and version it accordingly (same as any other Python package). Then you can release it after testing, version it, etc.
In the "development" mode you can checkout the code with "recursive" submodule update and add the "common" subdirectory to PYTHONPATH. In production - even if you use git-sync, you could deploy your custom operators via your ops team using custom image (by installing appropriate, released version of your package) where your DAGS would be git-synced separately WITHOUT the submodule checkout. The submodule would only be used for development.
Also, in this case it would be worth running CI/CD on the DAGs you push to your DAG repo, to see whether they keep working with the "released" custom code in the "stable" branch, while running the same CI/CD with the common code synced via submodule in the "development" branch (this way you can check the latest development DAG code against the linked common code).
This is what I'd do. It allows for quick iteration during development while also turning the common code into "freezable" artifacts that provide a stable environment in production; your DAGs can still be developed and evolve quickly, and CI/CD helps keep the "stable" things really stable.
I have a couple of large Symfony projects, and have noticed that after updating everything to Symfony 4 (Flex), when our deployment automation runs its normal process of:
composer install --no-dev
We end up with (for example) this:
Symfony operations: 2 recipes (72fad9713126cf1479bb25a53d64d744)
- Unconfiguring symfony/maker-bundle (>=1.0): From github.com/symfony/recipes:master
- Unconfiguring phpunit/phpunit (>=4.7): From github.com/symfony/recipes:master
Then, as expected, this results in changes to symfony.lock and config/bundles.php, plus whatever else, depending on what was included in require-dev in composer.json.
None of this is breaking, exactly, but it is annoying to have a production deploy that no longer has a clean git status output, and can lead to confusion as to what is actually deployed.
There are various workarounds for this, for example I could just put everything in require rather than require-dev since there is no real harm in deploying that stuff, or I could omit the --no-dev part of the Composer command.
But really, what is the right practice here? It seems odd that there is no way to tell Flex to make no changes to configuration if you are just deploying a locked piece of software. Is this a feature request, or have I missed some bit of configuration here?
This was briefly the case on old Symfony Flex 1 versions.
It was reported here, and fixed here.
If this happens to you, it means you have a very old installation on your hands, and you should do your best to at least update Symfony Flex to a more current version (Symfony Flex 1 does not even work anymore, so switching to Flex 2 if possible would be the only way - or simply remove Flex in production).
If you deploy to prod from your master branch, you can set up a deploy branch instead, and in that branch you can prevent certain files from being merged in. See this post for more detail. This creates a situation in which you have a master branch and a version branch (for example 3.21.2); devs check out master, work on it, then merge their changes into the version branch. From there you pick and choose what gets deployed to prod. (There is a small balancing act here: you'll want to merge all dev changes into master until it matches your version branch, and make sure master matches the version after you've deployed. This adds a bit of work and you have to keep an eye on it, etc.)
Another option is to separate your git repository from your deployment directory. In this example, a git directory is created in /var/repo/site.git, the deploy directory is /var/www/domain.com, and a post-receive git hook is used to automatically update the www directory after a push is received by the repo. You obviously run composer, npm, gulp, what have you, in the www directory, and the git directory is left as is.
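The post-receive hook itself can be as small as this (a sketch using the paths from the example above):

#!/bin/sh
# /var/repo/site.git/hooks/post-receive - check the pushed code out into the deploy directory
git --work-tree=/var/www/domain.com --git-dir=/var/repo/site.git checkout -f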
Without getting into commercial options such as continuous-deployment apps, you can write a deployment script. There are a lot of ways to write a shell script that takes one directory and copies it over, runs composer, runs npm, etc., all in one command - separating your git directory from your deploy directory. Here's a simple one that uses the current datetime to name a directory and then symlinks it to the deployment directory.
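A minimal sketch of such a script (all paths are assumptions, the web server is assumed to serve /var/www/current, and error handling is deliberately left out):

#!/bin/sh
RELEASE="/var/www/releases/$(date +%Y%m%d%H%M%S)"           # new release directory named after the current datetime
git clone --depth 1 /var/repo/site.git "$RELEASE"           # copy the code out of the bare repository
cd "$RELEASE" && composer install --no-dev && npm install   # run the build steps inside the release directory
ln -sfn "$RELEASE" /var/www/current                         # switch the "current" symlink to the new release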
I have 3 stages (dev / staging / production). I've successfully set up publishing for each, so that the code will be deployed, using msbuild, to the correct location, with the correct web configs transformed - all within Jenkins.
The problem I'm having is that I don't know how to deploy to staging the code that was built on dev (and to production what was built on staging). I'm currently using SVN as the source control, so I think I would need to somehow save the latest revision number dev has built and somehow tell Jenkins to build/deploy staging based on that number?
Is there a way to do this, or a better alternative?
Any help would be appreciated.
Edit: I decided to use the save-the-revision-number method, which passes a file containing the revision number to the next job - to do this, I followed this answer:
How to promote a specific build number from another job in Jenkins?
It explains how to copy an artifact from one job to another using the promotion plugin. For the artifact itself, I added an "Execute Windows batch command" build step after the main build with:
echo DEV_ENVIRONMENT_CORE_REVISION:%SVN_REVISION%>env.properties
Then in the staging job, following the guide above, I copied that file and used the EnvInject plugin to read it and set an environment variable, which can then be used as a parameter in the SVN Repository URL.
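The Repository URL in the staging job then ends up looking something like this (the server path is a placeholder; the Subversion plugin understands the @REVISION suffix):

https://svn.example.com/myproject/trunk@${DEV_ENVIRONMENT_CORE_REVISION}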
You should be able to identify the changeset number that was built in DEV and manually pass that changeset to the Jenkins build to pull the same changeset from SVN. Obviously that makes your deployment more manual. Maybe you can set up Jenkins to publish the changeset number to a file and then have the later environment's build read that file for the changeset number.
We used to use this model as well and it was always complex. Eventually we moved to a build-once, deploy-many-times model using WebDeploy. This has made the process much simpler. Check it out - http://www.dotnetcatch.com/2016/04/16/msbuild-once-msdeploy-many-times/
I'd like to make it so that a commit to our Bitbucket repo (or S3 bucket) automatically deploys code (using CodeDeploy) to our EC2 instances. I'm not clear on what to use for the 'source' and 'destination' entries under the 'files' section in the appspec.yml file, and I'm also not clear on what to put in BeforeInstall and AfterInstall under the 'hooks' section. I've found some examples on Google and in the AWS documentation, but I'm confused about what to put in those fields. The more I explore, the more confused I get.
Please consider that I am new to AWS CodeDeploy.
Also, it would be very helpful if someone could point me to a step-by-step guide on how to configure and automate CodeDeploy.
I was wondering if someone could help me out?
Thanks in advance for your help!
Thanks for using CodeDeploy. For new users, I'd like to recommend the following things to do:
Try running the First Run Wizard in the console; it will show you the general process of how a deployment goes. It also provides a default deployment bundle, with an appspec file included.
Once you want to try a deployment yourself, the Getting Started doc is a great place to help you with prerequisite settings like the IAM role.
Then probably try some tutorials for a sample app too, which will give you some idea about deployment groups, deployment configurations, revisions and so on.
The next step should be to create a bundle for your own use case; the AppSpec file doc is a great place to refer to (there's also a minimal sketch after this list). As for your concerns about BeforeInstall and AfterInstall: if your application doesn't need to do anything there, those lifecycle events can be left empty. BeforeInstall can be used for pre-install tasks, such as decrypting files and creating a backup of the current version, while AfterInstall can be used for tasks such as configuring your application or changing file permissions.
Now it comes to the fun part! This blog talks in detail about how to integrate with GitHub (it's similar for Bitbucket). It's a little long, but really useful, and it also covers how to deploy automatically once a new commit is pushed. Currently Jenkins and CodePipeline are really popular for auto-triggered deployments, but there are a lot of other ways to achieve the same purpose, like Lambda and so on.
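To make the appspec part a bit more concrete, a minimal appspec.yml could look like the sketch below (the destination path and the script names are just examples; the hooks section can be left out entirely if there is nothing to run):

version: 0.0
os: linux
files:
  - source: /                      # everything in the revision bundle
    destination: /var/www/myapp    # where it should land on the instance
hooks:
  BeforeInstall:
    - location: scripts/backup_current_version.sh
      timeout: 300
  AfterInstall:
    - location: scripts/set_permissions.sh
      timeout: 300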
Here is what I am trying to do: I have my code sitting on Bitbucket (it is an ASP.NET web application). I want to put together a build machine that has a web-based dashboard. I go to the dashboard and I say: OK, show me all the branches and stuff on my Bitbucket project. Now please get the latest code from this branch for me and build it. Then please deploy it to this location for me, or maybe this other location. I want this dashboard to give me a history of all these builds and deployments too. I am very new to this concept; I have been reading about CC.NET and MSBuild and other stuff, but I cannot find the right answer. Please point me in the right direction.
The point of a build server is that it automatically runs a build each time you commit something to your repository.
In order for the build server to know exactly what to do, you normally put a build script (with MSBuild or NAnt) into your solution which does everything you want - building your solution, maybe creating a setup package, and so on.
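A very small MSBuild script of that kind might look like this (the file name and solution name are just examples):

<!-- build.proj - minimal example of a build script the build server can call -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
  <Target Name="Build">
    <MSBuild Projects="MySolution.sln" Properties="Configuration=Release" />
  </Target>
</Project>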
The build server basically knows where the repository is and where in the repository your build script is.
You need to configure this once in the build server, and then it will always run after you commit (but you can also start a build manually, if you want).
If you want a solution with web-based dashboard, try TeamCity.
It's a commercial product, but it's free for up to 20 users.
You can do everything in the web interface - configuration, running the builds AND browsing the build history.
EDIT:
Houda, concerning your question about deployment:
I don't think that TeamCity has a "deployment mode" in that sense. What you could do is include the deployment stuff in your build script that is run by TeamCity.
So, after the build itself is finished, copy the generated assemblies and files to your web server(s).
If you do it this way, you HAVE to make sure in the build script that the deployment will only happen if the build didn't fail (and, if you have unit tests, that the unit tests didn't fail either).
This is very important for a live application, because if you don't take care of this well enough, your app will immediately go offline every time someone commits "bad" code to your repository (and it will stay offline until the next "good" commit)!!
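One way to express this in the build script (continuing the MSBuild sketch above; the Test target, the item include and the target share are assumptions) is to let the deploy target depend on the build and test targets, so it is never reached when either of them fails:

<Target Name="Deploy" DependsOnTargets="Build;Test">
  <!-- this target only runs if Build and Test completed without errors -->
  <ItemGroup>
    <OutputFiles Include="MyWebApp\bin\Release\**\*.*" />
  </ItemGroup>
  <Copy SourceFiles="@(OutputFiles)" DestinationFolder="\\webserver\wwwroot\myapp\%(RecursiveDir)" />
</Target>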
EDIT 2:
See Lasse V. Karlsen's comment below: it's more comfortable with the new TeamCity version 6.