What are the best practices for Apache Karaf features?

Are there any known best practices, or projects to look into as examples, for Apache Karaf features?
All I have found is the official documentation: http://karaf.apache.org/manual/latest/#_feature_and_resolver
But it does not cover common usage examples.

Here are the guiding principles I defined for our consulting practice:
Have your features.xml file processed as a filtered resource in Maven so you can do version substitution, etc.
Depend on semantic version ranges, not specific versions, as much as possible.
Specify start levels.
Create a single features repository (features.xml) per business domain -- i.e. Ordering, Billing, Quoting, etc.
Create a separate feature for the API vs. the implementation.
Specify a 'capability' when defining an implementation feature.
Have dependent features (features that depend on other features) depend on the API feature and specify a 'requirement' that is satisfied by any implementation that specifies the matching 'capability'.
This allows you to swap implementations without re-defining the features themselves or the features that depend on them; a sketch of the pattern follows below.
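
To make the capability/requirement pattern concrete, here is a minimal, hedged sketch of a filtered features.xml. The feature names, capability namespace, Maven coordinates, and start levels are all hypothetical; the ${...} placeholders are what Maven resource filtering substitutes:

<features name="ordering-${project.version}" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0">
    <!-- API-only feature: downstream features depend on this, never on an implementation -->
    <feature name="ordering-api" version="${project.version}">
        <bundle start-level="80">mvn:com.example/ordering-api/${project.version}</bundle>
    </feature>
    <!-- Implementation feature: advertises a capability so it can be swapped later -->
    <feature name="ordering-impl" version="${project.version}">
        <feature version="[1,2)">ordering-api</feature>
        <bundle start-level="85">mvn:com.example/ordering-jpa-impl/${project.version}</bundle>
        <capability>example.ordering.provider;provider=jpa</capability>
    </feature>
    <!-- Dependent feature: depends on the API and requires some implementation's capability -->
    <feature name="billing" version="${project.version}">
        <feature version="[1,2)">ordering-api</feature>
        <requirement>example.ordering.provider;filter:="(provider=*)"</requirement>
        <bundle start-level="90">mvn:com.example/billing/${project.version}</bundle>
    </feature>
</features>

With this layout, installing a different implementation feature that provides the same capability satisfies the billing feature's requirement without touching its definition.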

Related

gRPC Services: Central Proto Repository or Distributed?

We plan to keep a central proto repository holding all proto definitions and their generated code. We would keep messages as well as service definitions in a central Git repo, and we plan to drive our API design standards from this central repository.
But any service that wants to use these definitions to expose a server or to generate clients would have to import from this repo (e.g. the generated .pb.go files).
Do you see any issue with this approach? Or do you see keeping each service's proto files in its own service repo as a better alternative?
P.S.: We are just starting our gRPC journey of building microservices, and are still learning the right way to structure and distribute code here.
This question occurs regularly, and I suspect there's no published guidance because the answer depends more on your needs than on the technology.
The specific issue of many vs one is not dissimilar to whether you prefer to use a monorepo and only you can effectively determine that. Perhaps one way to determine this is to understand now (and in the future) how many shared dependencies your services will have? Another may be to determine how many repos you'll have (how complex would it be to manage 10s or 100s of repos?).
In my experience, it's a good practice to keep the protos distinct (i.e. separate repo) from code that uses them. Not only may you want to version protos independently from implementations (across languages) but the implementations themselves are independent; in one use-case I must clone a repo containing an entire system (written mostly in one language) in order to get its protos to generate bindings in another language. In this case, it would be preferable if the repo were limited to just the protos.
You could look to examples for guidance. The gRPC repo keeps a bunch of stuff rooted on the grpc package in addition to math. Although less broad, Google bundles its well-known types under google.protobuf.
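
If you do adopt a central repo, a hedged sketch of a single definition in it (the package name, repo path, and go_package value are all hypothetical) shows how versioned packages and an explicit generated-code import path keep bindings consumable without cloning the whole repo:

// orders/v1/orders.proto -- one file in a hypothetical central proto repo
syntax = "proto3";

package orders.v1;  // versioned package: a future orders.v2 can evolve without breaking v1 users

// Controls the import path of the generated .pb.go code, so Go services can
// depend on published bindings instead of cloning this repo to regenerate them.
option go_package = "github.com/example/protos/gen/go/orders/v1;ordersv1";

message GetOrderRequest {
  string id = 1;
}

message Order {
  string id = 1;
  int64 total_cents = 2;
}

service OrderService {
  rpc GetOrder(GetOrderRequest) returns (Order);
}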

ARM Templates are still the preferred deployment mechanism?

We're a little aghast at how time-consuming it is to develop syntactically correct ARM templates from scratch.
The Portal helps, but it pushes out templates that are not development-ready (it's pretty hard to find a bug when all the templates use 'name' for the resource name, versus something more verbose like 'microsoftStorageAccountResourceName', 'microsoftStorageAccountResourceLocation', 'microsoftStorageAccountResourceTags', etc.; see the illustrative template below).
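
For illustration, a minimal hand-authored template with descriptive parameter names (the resource values and API versions here are only examples) reads much more clearly than a Portal export:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string",
      "metadata": { "description": "Name of the storage account to create." }
    },
    "storageAccountLocation": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": { "description": "Azure region for the storage account." }
    }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[parameters('storageAccountLocation')]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}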
We understand that there are many ways to deploy -- but, if at all possible, we'd like some assurance that ARM is the current preferred way and will continue to be the preferred primary means of scripting deployments via VSTS -- or is it sliding towards a different, maybe more programmatic, approach (e.g. PowerShell, CLI, or other)?
We're asking this because it looks like we will have to invest significant effort to create a resource library for this organisation (to decrease the need for all projects to become proficient at ARM deployment) -- and would prefer to do it using an approach that will be preferred by developers over the coming years, for maintainability objectives.
Thanks for any insight on which approach to recommend as the best investment.
Templates are going to be around for the foreseeable future... it really depends on whether you want to orchestrate the deployment yourself (imperative deployments using the CLI, PowerShell, or an SDK) or you want ARM to orchestrate the deployment (via templates). Happy to chat offline if you want to discuss more - email bmoore at microsoft.
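
To make that distinction concrete, compare the two styles (resource names here are placeholders, using Azure CLI commands of that era):

# Imperative: you orchestrate each resource yourself, step by step
az storage account create --resource-group my-rg --name mystorageacct --sku Standard_LRS

# Declarative: hand ARM a template and let it orchestrate the whole deployment
az group deployment create --resource-group my-rg --template-file azuredeploy.json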
Writing this now, one year after the original post: the answer to 'ARM Templates are still the preferred deployment mechanism?' probably depends on who you ask. "Preferred" by Microsoft according to its product strategy may mean something different from preferred by actual users who, well, feel the pain of vendor strategy decisions.
I had started with an Azure automation book that used PowerShell scripting only; I was then led (maybe misled?) to the ARM template deployment model, mainly by the Microsoft web documentation, but found that those templates need so much rework that writing a PowerShell script, or even writing an ARM template from scratch, seems to be the more efficient way to go. In fact, I am confused at the moment about what the 'best practice' is, i.e. what method other developers actually use. Is there a community-established opinion on this matter, now in August 2019? Or is it all VSTS / 3rd-party IDEs nowadays?

Need suggestions regarding SCORM-compliant learning solutions

We are building an m-learning solution [iOS and Android compatible] at our company. The product needs to be SCORM compliant. I would like to know whether it should be developed in-house by our developers, or whether paid options should be pursued. What are other ways of making our product SCORM compliant? We are not really positive about using SCORM Engine for this, due to the high cost of that solution to our problem here. Any suggestions/help appreciated.
You can include SCORM within content using a number of open source options available on GitHub.
Getting SCORM in the content (free) is step 1.
Packaging, bundling and deploying is really step 2.
This typically has a close relationship to how Curriculum defines a structure of lessons, modules, units, etc. Not knowing exactly how they want to organize this, I can speculate that you may just have a simple "I want to know that the student viewed the content" approach. If you get into a richer dependency, where how the student performs dictates what they see or do next, that requires much more up-front design so you can bridge the design, development, and deployment of your content.
Including SCORM Support in content -
As mentioned, if you search Google for my SCOBot project or Pipwerks you'll hit the ground running.
This requires a JavaScript-friendly developer and some base SCORM knowledge attained through reading. It could be outsourced.
Knowing which version of SCORM you wish to support can help; consult the LMS to find out.
As far as presenting / creating content: if you are doing this from scratch you'd need an HTML/JS developer, and if it's more interactive you're dipping into WebGL, Canvas, or beyond. There are also paid services like iSpring, Captivate, and others that offer content creation with SCORM standards support. They may even take care of the packaging for you (covered below).
Packaging -
This requires a zip (CAM, Content Aggregation Model) package which includes an imsmanifest.xml file describing a one-to-many relationship forming a TOC. Again, a single item is simple; many items let you group tiers and add objectives and other things, increasing complexity, but it is doable.
You can create this package yourself with XML, zip, and specification knowledge. I have a packaging app on my site and a free AppleScript for Mac which can also perform very basic packaging. I am not aware of any other free options.
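
For orientation, a hedged, minimal imsmanifest.xml in the SCORM 1.2 style with a single-item TOC (the identifiers, titles, and file names are made up):

<manifest identifier="com.example.course" version="1.0"
          xmlns="http://www.imsproject.org/xsd/imscp_rootv1p1p2"
          xmlns:adlcp="http://www.adlnet.org/xsd/adlcp_rootv1p2">
  <organizations default="TOC1">
    <organization identifier="TOC1">
      <title>Example Course</title>
      <!-- one <item> per lesson; nesting items is what builds the one-to-many TOC -->
      <item identifier="ITEM1" identifierref="RES1">
        <title>Lesson 1</title>
      </item>
    </organization>
  </organizations>
  <resources>
    <resource identifier="RES1" type="webcontent" adlcp:scormtype="sco" href="lesson1/index.html">
      <file href="lesson1/index.html"/>
    </resource>
  </resources>
</manifest>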
Deployment
Commonly performed through FTP/file share by uploading these CAM (zip) packages; the LMS decompresses the package and reads the manifest. Sometimes you can just copy the raw files up to the LMS through a media/content server, but this greatly depends on the options available.

What is the prescriptive approach to supporting multiple RDBMSs with Flyway?

I have an application that supports multiple RDBMSs. The SQL needed to build the data model differs between the RDBMSs I need to support. The differences aren't small, either; they stem from the fact that some of the supported systems are intended for light use (development, small installations) and others for heavy use. Simply standardizing on a single supported RDBMS is not an option.
As it stands, I need to be able to apply migrations to my application on all of the supported RDBMSs. Where possible I'd like to share migration scripts to reduce duplication, but I imagine that isn't entirely possible.
The only approach I can come up with so far is to keep separate directories in source control for each of the supported environments. Then at runtime, pick the appropriate directory for the RDBMS that the system is connected to.
Is having one directory per supported RDBMS the prescriptive approach or is there a better way?
Right from the Flyway FAQ: "What is the best strategy for handling database-specific SQL?"
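
The FAQ's recommendation amounts to the directory-per-database layout described in the question. A hedged sketch of the runtime selection using Flyway's Java fluent API (Flyway 5.1+; the directory names and vendor strings are assumptions):

import javax.sql.DataSource;
import org.flywaydb.core.Flyway;

public class Migrator {
    // Apply shared migrations plus the scripts for the connected RDBMS.
    // "vendor" would be derived from the JDBC URL or app configuration,
    // e.g. "postgresql" or "h2" (hypothetical directory names).
    // Note: scripts across locations must not reuse version numbers.
    public static void migrate(DataSource dataSource, String vendor) {
        Flyway flyway = Flyway.configure()
                .dataSource(dataSource)
                .locations("classpath:db/migration/common",
                           "classpath:db/migration/" + vendor)
                .load();
        flyway.migrate();
    }
}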

OSGi for non-Java 3PPs

We are building a product that uses the Apache Hadoop & HBase frameworks for handling some of our big-data requirements. We also use Oracle for our reporting requirements. We are keen to go the OSGi way of bundling our software, to take advantage of the remote deployment, service management & loosely coupled packaging features that OSGi containers offer.
We have a few doubts in this area:
1. When it comes to our own Java apps, we now know how to create OSGi bundles out of them and deploy them in OSGi containers. But how do we handle Java-based 3PPs that have a clustered architecture, for example HBase/Hadoop? We saw that Fuse Fabric has created a Hadoop bundle (actually only HDFS, not MapReduce), but in general how do you go about creating bundles for 3PPs?
2. How do we handle non-Java 3PPs, for example Oracle? Should we create an OSGi bundle for it and deploy it over OSGi, or should we install these 3PPs outside of OSGi and write some monitoring scripts, triggered over OSGi, to track their status? What are the best practices in this area?
3. Are all bundles launched in an OSGi container (like Karaf) run within the container's single JVM? Some of our applications and 3PPs are huge, and we may run into heap/GC issues if all of them run inside a single JVM. What are the best practices here?
1. Creating bundles from non-OSGi libraries can be as simple as repackaging them with an appropriate manifest (there are tools for that, see below), but it can also become very difficult. OSGi has a special class-loading model, and many Java EE libraries that do dynamic class-loading don't play well with it.
2. I am not sure what you mean here. Theoretically OSGi supports loading native libraries using the Bundle-NativeCode manifest header, but I have no experience with that.
3. Normally all bundles run in the same virtual machine. However, Karaf supports clustering across multiple instances through Cellar; I don't know about other containers.
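
If you do need multiple JVMs, Cellar is installed as an ordinary Karaf feature (shell commands below; the exact feature version resolved depends on your Karaf release):

# from the Karaf shell: register the Cellar feature repository, then install it
feature:repo-add cellar
feature:install cellar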
Tools for wrapping 3rd-party libraries
In general you can use bnd for this (the tool of choice when it comes to automated generation of OSGi bundle manifests). PAX-URL offers the wrap protocol handler, which is present by default in Karaf. Using it, wrapping a library can be as simple as this (e.g. from the Karaf command line, or in a feature descriptor):
wrap:file:path/to/library
The case of Oracle and most other DB libs is simple: you can use the wrap protocol of PAX-URL. Under the covers it uses bnd with default options. I have a tutorial for using DBs with Apache Karaf.
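
For example, a plain (non-OSGi) JDBC driver could be wrapped directly in a feature descriptor; the Maven coordinates here are hypothetical, and the options after the $ become manifest headers generated by bnd:

<feature name="example-db" version="1.0.0">
    <!-- wrap: hands the plain jar to bnd, which generates the OSGi manifest -->
    <bundle>wrap:mvn:com.example/plain-jdbc-driver/1.0.0$Bundle-SymbolicName=plain-jdbc-driver&amp;Bundle-Version=1.0.0</bundle>
</feature>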
In general, making bundles out of third-party libs can range from easy to quite complicated. It mainly depends on how many dirty classloading tricks the lib uses. Before you try to bundle stuff yourself, check whether ready-made bundles exist. Most libs today either come directly as bundles or are already available as bundles from some source. For example, the Apache ServiceMix project creates a lot of bundles; you can ask on the user list there whether something is available.
