How to build a high-availability cluster in RedHat Fuse 7.x on Karaf? - apache-karaf

Red Hat Fuse 7.x is offered both on OpenShift and on Karaf. The OpenShift version supports containerization of Red Hat Fuse applications and, I believe, that inherently makes them highly available. But I am wondering whether a load-balanced, highly available cluster can be formed with the Karaf version of Red Hat Fuse. Up to version 6.x, Fuse Fabric was supported for clustering, but the 7.x documentation says Fuse Fabric support has been discontinued. If anyone has deployed Red Hat Fuse on Karaf in a clustered environment, please let me know how it was achieved.

To my understanding, Red Hat Fuse on Karaf is just a Red Hat-supported version of Karaf with a bunch of Red Hat-flavored features installed, such as Camel and Hawtio. OpenShift and Apache Karaf offer two very different levels of containerization: one could say you use OpenShift to run microservices and Apache Karaf to run nanoservices.
While you can install and run multiple applications inside Apache Karaf, they are still running on the same virtual machine and operating system. You can, however, create a Docker image and run Apache Karaf in container(s) on Docker, Kubernetes, OpenShift, etc. like any other application.
This can be useful if you want to group a bunch of applications/services together to save resources, share dependencies, or just to reduce the number of different Docker containers or deployments.
There's an official Docker image available for Apache Karaf that one can use as a reference to create Docker image(s) for Red Hat Fuse on Karaf:
Apache Karaf Docker - DockerHub
Apache Karaf Docker - GitHub
Generally you probably want to create the image from a custom Karaf assembly that includes all the features and bundles you need to run your applications, so that when things get added or updated you can just swap the image.
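As a minimal sketch of that idea (the base image, paths and image name below are illustrative assumptions, not the official Fuse image):

    # Build the custom Karaf/Fuse assembly first (e.g. with the karaf-maven-plugin),
    # so that target/assembly contains bin/, etc/, system/ and deploy/.
    mvn clean install

    # Generate a minimal Dockerfile (inlined here just to keep the sketch self-contained).
    {
      echo 'FROM eclipse-temurin:8-jre'
      echo 'COPY target/assembly /opt/karaf'
      echo 'EXPOSE 8181'
      echo 'CMD ["/opt/karaf/bin/karaf", "run"]'
    } > Dockerfile
    docker build -t mycompany/fuse-karaf:1.0.0 .

    # Run it like any other container.
    docker run -d --name fuse-karaf -p 8181:8181 mycompany/fuse-karaf:1.0.0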
Technically you could also use CI/CD pipelines or something like Karaf Cellar to manage multiple Karaf instances. This would allow you to add, remove and update functionality of a Karaf instance even while it's running. This, however, sounds quite complex to pull off and maintain.
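For reference, a basic Cellar setup is driven from the Karaf console with commands roughly like these (the group and feature names are just placeholders):

    # On each node, install the Cellar feature; the nodes then discover each other via Hazelcast.
    karaf@root()> feature:repo-add cellar
    karaf@root()> feature:install cellar

    # Inspect the cluster and push a feature to every node of a cluster group.
    karaf@root()> cluster:node-list
    karaf@root()> cluster:group-create my-group
    karaf@root()> cluster:feature-install my-group my-feature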
As a disclaimer I would like to add that I've not used Apache Karaf or Red Hat Fuse on Karaf with Kubernetes or OpenShift yet. Most of my experience is with running Karaf via docker/podman compose or on a RHEL VM.

Related

Migrating from Apache Archiva to Nexus 3

I am trying to move all repositories I am using to Nexus 3. I use Apache Archiva as a Maven repository. I read that it's possible to copy repos between Archiva and Nexus 2. Is there any way to do the same with Nexus 3?
I tried a workaround and succeeded: I created an instance of Nexus 2 and migrated Archiva there. After that I used the upgrade agent from Capabilities to migrate from Nexus 2 to 3. It's not a complicated approach and it's fast as well.

Migrating OpenDJ to Directory Services 6.5

I currently use OpenDJ 2.6.4 on SUSE Linux 11 and my goal is to upgrade to Directory Services 6.5.
From what I read, especially in Chapter 9. Before You Upgrade
and Chapter 10. Upgrading a Directory Server, the process seems pretty simple, i.e., after checking the Java version, backing up and disabling things, we just need to execute the upgrade command.
Does this process run smoothly, or is it harder than it looks?
From what I read in several release notes, I don't expect big changes to my current web application; is that right?
That is correct, there should be no change to the applications (since the interface is standard LDAPv3).
If your OpenDJ service is replicated, you can upgrade one server after another, with zero downtime for the overall service.
When upgrading from 2.6, you will probably need to upgrade the Java runtime as well, since DS 6.5 requires Java 8 (and also supports 11).
So: stop a server, back up the whole server, unzip DS 6.5, upgrade Java to 8+, run the upgrade, then start-ds.
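In shell terms that sequence looks roughly like this (the install path, zip name and backup location are illustrative, not a verified procedure):

    # Stop the old OpenDJ 2.6.4 instance and take a full file-system backup first.
    /path/to/opendj/bin/stop-ds
    cp -a /path/to/opendj /backup/opendj-2.6.4-before-upgrade

    # Make sure Java 8+ is the runtime, then unpack the DS 6.5 files over the existing
    # install (as described in the upgrade chapter) and run the in-place upgrade.
    unzip -o DS-6.5.0.zip -d /path/to
    /path/to/opendj/bin/upgrade

    # Start the upgraded server again.
    /path/to/opendj/bin/start-ds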
You might want to test the upgrade process on a dev environment. If you don’t have a dev env yet, you can create one by just copying the whole OpenDJ 2.6.4 directory and databases to a different location or another server.

How to use Codeship's project SSH key in step file

In the CodeShip Pro documentation, the recommendation for doing Continuous Deployment to Digital Ocean involves encrypting an SSH private key, and storing that in your repository. To do this, you need to install jet on your machine. Unfortunately, jet is not available on my platform (Win 10 64-bit).
In every CodeShip project, there's an SSH key generated by CodeShip, and controlled by them. The documentation doesn't describe how to use that SSH key in a CodeShip Pro setup. Is there a way to do so? Or is it only available in CodeShip Basic projects?
I'm trying to get a .NET Core 1.1 project built, copied, and deployed, with external system package dependencies. The commands involved include a big pile of apt-get work for setup, dotnet restore, dotnet build, maybe a dotnet publish, and an scp step for the deploy itself.
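For context, the sequence I have in mind looks roughly like this (package names, paths and the target host are placeholders, not my actual setup):

    # System-level setup in the build container (illustrative package list).
    apt-get update && apt-get install -y libunwind8 openssh-client

    # Restore, build and publish the .NET Core 1.1 project.
    dotnet restore
    dotnet build -c Release
    dotnet publish -c Release -o ./publish

    # Copy the published output to the droplet over SSH (host and path are placeholders).
    scp -r ./publish deploy@my-droplet.example.com:/var/www/myapp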

Packaging and Deploying ASP.net applications for continuous delivery

I come from a background of Java, PHP and NodeJS development. I have successfully built continuous integration/continuous deployment environments using these languages based on a Linux Platform but now I am working in an environment with a mix of .NET web development and Java based web development.
I would like to build a CI/CD environment that shares tools and concepts as much as possible. The workflow that has worked in the past and seems to be pretty standard is:
Check code into Git
Jenkins checks out the code, runs tests
Jenkins builds the code if tests pass
Jenkins builds a package (WAR file, RPM, etc.) and pushes it to an artifact repo (Maven repo, Yum repo, Artifactory, Nexus, etc.)
Jenkins deploys the package to a given environment by simply pulling the correct version of an artifact and pushing it to a given box. I like to use Ansible or Puppet or some nice configuration management tool for this step and let it handle the versioning and environment-specific changes (a rough sketch of the whole flow follows below).
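As a rough sketch of what those steps boil down to on a build node for the Java side (the repository URL, artifact and Ansible inventory are purely illustrative):

    # 2-3. Check out, test and build (Jenkins runs these as build steps).
    git clone https://git.example.com/myorg/myapp.git && cd myapp
    mvn test
    mvn package                         # produces target/myapp.war

    # 4. Push the versioned artifact to the artifact repository
    #    (assumes <distributionManagement> in the POM points at Artifactory/Nexus).
    mvn deploy

    # 5. Deploy a specific version to an environment with configuration management.
    ansible-playbook -i inventories/staging deploy.yml -e "app_version=1.2.3"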
I know Microsoft has built tools that can do similar things but I would like to keep things consistent across the organization and I feel like Jenkins is the gold-standard and has been battle tested for a hundred years while Microsoft’s CI tools are relatively new.
I am able to set up a Windows-based Jenkins slave to compile the code using the MSBuild plugin (this is an excellent tutorial if you are interested: http://blog.couchbase.com/2016/january/continuous-deployment-with-jenkins-and-.net). I am stuck on how to package the code. I had thought NuGet would be a good choice for this, but I can't seem to find any guidance on building and deploying NuGet packages for ASP.NET applications. I prefer NuGet to something like a web application zip file, as the packages are versioned.
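What I have in mind is roughly the following, run from the Jenkins job on the Windows slave (PowerShell-style command lines; the project name, version and feed URL are placeholders I have not verified end to end):

    # Build the solution (this is what the MSBuild plugin step effectively runs).
    msbuild MyWebApp.sln /p:Configuration=Release

    # Pack the web project into a versioned NuGet package and push it to the artifact repository.
    nuget pack MyWebApp/MyWebApp.csproj -Properties Configuration=Release -Version 1.2.3
    nuget push MyWebApp.1.2.3.nupkg -Source https://nexus.example.com/repository/nuget-hosted/ -ApiKey $env:NUGET_API_KEY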
Is NuGet the answer or is there something else out there that could support my needs or should I be altering my thinking for CI/CD in a Microsoft environment?

How can I use buildroot for my development machine in addition to my target?

I am developing for an embedded target using Buildroot, adding our custom applications as new packages.
These packages depend on some non-standard libraries (which we already integrated into Buildroot) that are painful to install natively on the development workstations. Can I use Buildroot out-of-tree builds to compile the applications for my development machines as well, so I can test them there? Assuming all the libraries are in place, they are generic Linux applications that should have no problem running on PCs.
Is there a more convenient way to manage both builds?
The only supported way is to use a "cross-compiler" for your host system.
See buildroot environment with host toolchain
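In practice that means keeping a second Buildroot configuration whose target architecture matches the workstation (x86_64) and building it in its own output directory; a minimal sketch, with the defconfig names and paths being illustrative:

    # Regular target build in its own out-of-tree output directory.
    make O=$PWD/output-board my_board_defconfig
    make O=$PWD/output-board

    # Second configuration: same custom packages, but the "target" is x86_64 to match the workstation.
    make O=$PWD/output-pc my_pc_defconfig
    make O=$PWD/output-pc

    # The applications and libraries end up under output-pc/target/ and can be run on the
    # development machine, e.g. from a chroot or with LD_LIBRARY_PATH pointing at the libraries.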
