Corda Enterprise has a hot-cold high availability deployment feature: https://docs.corda.net/docs/corda-enterprise/4.2/hot-cold-deployment.html
Is it possible to use that with Corda open source 4.3?
Any advice for improving availability with Corda open source 4.3?
If you run Corda OS 4.3 in Docker on an HA Kubernetes cluster, that should do it.
I am afraid that the HA feature is not available with Corda Open Source.
We are working with a client who is interested in developing an application on Corda. For the initial phase of development up to the first production rollout, the client wants to evaluate Corda's capabilities using its community version. Once the first production rollout has put Corda's capabilities on display with their own clients, they want to go further and turn this into an enterprise solution by procuring a Corda Enterprise license.
I am not getting much help in drawing a clear line between the Community and Enterprise versions of Corda.
- What are the essential features that cannot be built using the Community version?
- Who governs the Community version?
- Is there any support provided for the Community version?
- Can we create a distributed architecture using the Community version (Corda nodes located on different physical servers)?
- Can we create a Corda network using Docker containers with the Community version?
- Is there any detailed document that draws the line between the Community and Enterprise versions?
I have worked with the community version of Corda to develop a PoC, where all the nodes were located on the same server and were not truly distributed.
Corda Open Source and Enterprise are functionally identical; a CorDapp written against the public API runs on either (see the sketch at the end of this answer). What Enterprise offers on top is the non-functional capability required for mission-critical enterprise applications: performance, HA, HSM integration, enterprise database integration, 24x7 support, etc.
The community version is developed primarily by R3, though we also accept and encourage community contributions to the Corda Open Source project.
There is no official R3 production support for Open Source Corda; however, you can ask questions and look for solutions on our public Slack (slack.corda.net) and here on Stack Overflow.
You can operate a network of open source Corda nodes on different servers without any problems.
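To make "functionally identical" concrete, here is a minimal, hypothetical flow skeleton in Java (the class name and its no-op body are purely illustrative). A CorDapp written against the public net.corda.core API like this compiles and runs unchanged on an Open Source or an Enterprise node; the differences lie in the node's non-functional capabilities, not the API.

```java
import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.InitiatingFlow;
import net.corda.core.flows.StartableByRPC;

// Hypothetical no-op flow: only the public flow API is used, which is the
// same on Corda Open Source and Corda Enterprise, so the same CorDapp JAR
// can be deployed to either kind of node.
@InitiatingFlow
@StartableByRPC
public class PingFlow extends FlowLogic<Void> {

    @Suspendable
    @Override
    public Void call() throws FlowException {
        // A real flow would build, sign, and finalise a transaction here.
        return null;
    }
}
```

The choice between the two versions is therefore about operations (HA, HSMs, supported databases, support SLAs), not about what your CorDapp can do.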
Oracle JDK is recommended as per the Corda documentation. Corda does not officially support OpenJDK.
Refer: Which JDK is best suited for R3 Corda framework
However, if we use DockerForm to create a Docker image for the Corda node, it internally uses OpenJDK.
Why is that? I mean, is it just an inconsistency, or a deliberate decision?
The license that comes with the Oracle JDK does not allow redistribution, so they cannot offer a Docker image with the Oracle JDK/JRE on it. You can, however, build one yourself and install Corda on that.
We have 24 Huawei CH242 V3 blade servers and want to set up a private cloud with OpenStack, but we're very new to OpenStack and lack experience with infrastructure. Could somebody kindly give us some useful information about the following questions:
What kind of OS is more suitable for those blade servers? Is Linux like CentOS a good choice?
Is it OK (or encouraged) to directly use blade servers as OpenStack controller/compute/storage nodes? Or do we need to use a hypervisor to create many VMs and install OpenStack services on top of the VMs?
What best practices or suggestions would you give to beginners?
Maybe some questions are very silly, but we're really stuck on the first step. Thanks in advance for any information.
Below are my suggestions; there may be more good answers too.
What kind of OS is more suitable for those blade servers? Is Linux like CentOS a good choice?
You can try any of the Linux flavours (openSUSE/CentOS/Ubuntu) mentioned on the OpenStack official site. I personally used Ubuntu for installing OpenStack.
There are openly available Juju charms that work on Ubuntu for installing OpenStack services, so it will be easy for you to edit the charms and deploy.
Is it OK (or encouraged) to directly use blade servers as OpenStack controller/compute/storage nodes? Or do we need to use a hypervisor to create many VMs and install OpenStack services on top of the VMs?
Of the choices you list, I would prefer a VM-based installation. I personally suggest using containers to deploy your OpenStack services for better performance.
For the compute service, you can go for a bare-metal installation, but that is up to you.
What best practices or suggestions would you give to beginners?
a. Try installing the same topology/setup as described in the OpenStack documentation
b. Use the recommended databases and AMQP brokers
What kind of OS is more suitable for those blade servers? Is Linux like CentOS a good choice?
I use CentOS 7.2, and it's very stable for OpenStack. Ubuntu, which I have also tried, is stable as well.
Is it OK (or encouraged) to directly use blade servers as OpenStack controller/compute/storage nodes? Or do we need to use a hypervisor to create many VMs and install OpenStack services on top of the VMs?
Yes, I do it like this: I use bare-metal machines as controller/compute/storage nodes, and the performance is good for me. I did not use containers like Docker.
What best practices or suggestions would you give to beginners?
Because you are new to OpenStack, I recommend you begin by installing OpenStack and reading the logs carefully as you install it. Reading the official website docs is necessary, but note that there are also some errors in the docs, and the documented configuration is not optimized; it is just for experimenting with a private cloud.
Once you are skilled at installing OpenStack, you can read the source code on GitHub and try to contribute code to it, starting with fixing typos in the docs.
I am using Cloudify 2.7 with OpenStack Icehouse.
I would like to know if Cloudify allows attaching a running application VM to an existing network.
Is there a specific REST API for this?
Thank you.
Cloudify 2.7 allows you to specify a compute template which should connect to an existing network. An example is available here:
https://gist.github.com/tamirko/8040538#file-computenetwork-groovy
There is no additional API to call - the network setup is declarative.
Please note that Cloudify 2.x has reached end of life and is no longer supported. You should try out Cloudify 3.
I have been looking to use Storm, which is available with the Hortonworks 2.1 installation, but to avoid installing Hortonworks in addition to a Cloudera installation (which includes Spark), I tried to find a way to use Storm with Cloudera.
If one can use both Storm and Spark on a single platform, it will save the additional resources required to run both Cloudera and Hortonworks installations on a machine.
You can use Storm with a Cloudera installation. You will have to install it on your own and maintain it yourself. It will not be part of the Cloudera stack, but that should not stop you from using it alongside Hadoop if you need it.
You can use Storm on any vendor's platform. However, Storm cluster management is something you have to consider. Storm is not part of the CDH distribution. Cloudera Manager does not manage the lifecycle of the Storm services and configurations, nor does it monitor the Storm cluster, unless you are willing to write a Cloudera Manager extension yourself. By contrast, if you choose a vendor such as HDP, the Ambari management tool on HDP provides all of the above management features.
If you have a streaming project on CDH, you should strongly consider Apache Spark first, as it provides the same programming model for both batch and streaming processing, so you do not need to learn a new API. However, Spark Streaming is micro-batch, so for use cases that require sub-second, low-latency real-time processing, Storm is more suitable.
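To illustrate the micro-batch point, below is a minimal Spark Streaming word count in Java (a rough sketch assuming Spark 2.x or later; the class name, socket source, host/port, `local[2]` master and the 5-second interval are arbitrary illustrative choices, nothing CDH-specific). Each batch interval produces one small job, so end-to-end latency can never drop below that interval, whereas Storm processes tuples one at a time.

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class StreamingWordCount {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf()
                .setAppName("StreamingWordCount")
                .setMaster("local[2]"); // local test run; drop setMaster when submitting to a cluster

        // The 5-second batch interval is the micro-batch size: records are
        // buffered and processed as one small job per interval, so latency
        // is bounded below by this value.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);
        JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());

        // Same RDD-style transformations as batch Spark -- one API for both workloads.
        words.mapToPair(word -> new Tuple2<>(word, 1))
             .reduceByKey(Integer::sum)
             .print();

        jssc.start();
        jssc.awaitTermination();
    }
}
```

The transformations are the same operations you would write in a batch Spark job, which is the "single API" advantage mentioned above.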
You can use Storm alongside Cloudera.
All of the above is true, but why would you?
Spark includes Spark Streaming, which lets you handle batch data processing and stream/event processing workloads using a single API. Spark Streaming is already included in CDH.
So, why burden yourself with two different APIs?
You can install Apache Storm on the Cloudera VM.
For a basic setup and test run, follow the link below:
https://github.com/vrmorusu/StormOnClouderaVM/wiki/Apache-Storm-on-Cloudera-VM
This should get you started on developing Storm applications on the Cloudera VM.
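For comparison, here is a minimal Storm word-count topology sketch in Java (assuming Apache Storm 2.x with the org.apache.storm packages; the class names, component names and parallelism values are arbitrary). It is packaged as an ordinary JAR and submitted with the Storm client to whatever Storm cluster you stood up yourself, so the Hadoop distribution underneath (CDH or otherwise) does not matter.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordCountTopology {

    // Stand-in data source: emits one random word per call to nextTuple().
    public static class RandomWordSpout extends BaseRichSpout {
        private static final String[] WORDS = {"storm", "spark", "cloudera", "hadoop"};
        private SpoutOutputCollector collector;
        private Random random;

        @Override
        public void open(Map<String, Object> conf, TopologyContext context,
                         SpoutOutputCollector collector) {
            this.collector = collector;
            this.random = new Random();
        }

        @Override
        public void nextTuple() {
            collector.emit(new Values(WORDS[random.nextInt(WORDS.length)]));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word"));
        }
    }

    // Counts words per task. Tuples are processed one at a time as they
    // arrive, which is what gives Storm its sub-second latency.
    public static class WordCountBolt extends BaseBasicBolt {
        private final Map<String, Long> counts = new HashMap<>();

        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String word = tuple.getStringByField("word");
            long count = counts.merge(word, 1L, Long::sum);
            collector.emit(new Values(word, count));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word", "count"));
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new RandomWordSpout(), 1);
        builder.setBolt("count", new WordCountBolt(), 2)
               .fieldsGrouping("words", new Fields("word"));

        Config conf = new Config();
        conf.setNumWorkers(2);
        // Submits the packaged JAR to whichever Storm cluster the client is
        // configured against -- independent of the Hadoop distribution underneath.
        StormSubmitter.submitTopology("word-count", conf, builder.createTopology());
    }
}
```

Keep in mind, as noted above, that on CDH you will be managing and monitoring this Storm cluster yourself rather than through Cloudera Manager.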