I have made a simple project in Corda. My project has 4 nodes, including the notary, and also Spring Boot APIs in the clients folder. I don't know how to deploy my project to a server. I looked at the Corda docs, but that tutorial was for a single node. So my question is: how do I deploy a Corda project with multiple nodes to a server, along with the Spring Boot APIs? Can anyone help me with this?
There are actually some good YouTube videos on this (from me!).
You can find one here: https://www.youtube.com/watch?v=NtVbkUCSt7s
There are other videos there for GCP and Azure as well.
Essentially you need to make sure that your Corda node config's p2pAddress specifies the public IP address of the machine at your cloud provider of choice.
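For example, a node.conf might look something like this (the legal name, IP, and ports below are illustrative placeholders, not values from the question):

```
myLegalName = "O=PartyA,L=London,C=GB"
// p2pAddress must be reachable by the other nodes,
// so use the cloud VM's public IP or DNS name here
p2pAddress = "203.0.113.10:10002"
rpcSettings {
    address = "0.0.0.0:10003"
    adminAddress = "0.0.0.0:10043"
}
```

The RPC addresses can stay on 0.0.0.0 if the Spring Boot clients run on the same machine; only the p2pAddress has to be the externally visible one.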
I have deployed API Manager 4.0.0 All-in-One on 2 VMs. I am using MySQL as the DB, which is on a separate VM, and I am sharing the databases as mentioned in the document. Now I am trying to cluster these 2 nodes as mentioned in this document. There are a few things that are not clear to me from this document.
Which node is the manager and which is the worker, or are they both managers or workers? What is the basic difference between a manager and a worker?
If I use NFS to share the resources between all the nodes, on which node do we set up NFS?
(I set up NFS on a different VM, and both nodes are mounted to the NFS server; is that right?)
What happens under the hood when you publish an API in version 4.0.0? I understand that when an API is published, it gets deployed on the API Gateway and the API lifecycle state is changed to PUBLISHED. What artifacts are persisted in the DB (and where), and what artifacts are persisted to the filesystem? (My understanding is that they are located in the <APIM_HOME>/repository/deployment/server/synapse-configs/default directory as XMLs, but I don't notice anything changed in that directory. Where are they?)
What do step 1 and step 9 mean? Why do we need them?
We had this manager-worker concept in the older versions, but in APIM v4 we don't have that concept. Both nodes accept requests.
In APIM v4 we have an inbuilt artifact synchroniser by default, so you don't need an NFS for API artifacts and rate-limiting policies. But if you are using tenants and userstores, then you do need the NFS. In that case, you can mount both nodes to the NFS server.
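As a rough sketch of that mount setup, assuming the tenants and userstores directories are the ones being shared (the server name and paths below are illustrative; check the WSO2 clustering docs for the exact directories your setup needs), the /etc/fstab entries on each APIM node might look like:

```
# Mount the shared NFS exports on each APIM node
nfs-server:/exports/apim/tenants     /opt/wso2am-4.0.0/repository/tenants                       nfs  defaults  0 0
nfs-server:/exports/apim/userstores  /opt/wso2am-4.0.0/repository/deployment/server/userstores  nfs  defaults  0 0
```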
Before APIM v4, we had a file-system-based artifact approach. In the latest version, artifacts are loaded into memory instead. When you create an API, the node publishes an event to itself and to the other node; then both nodes load the API from the database into memory.
Step 1: You can't use the default shipped keystores in production; you have to change those.
Step 9: This is for distributed caching.
I have developed an Azure Service Fabric service (.NET Core 2.2) which contains a controller and some API methods. I deployed the service on a local cluster and it's working fine; I am able to access the API endpoints.
But now I need to deploy it on local IIS too. I published the service project that contains my controller, and when I try to deploy it on local IIS as we usually do for other apps, I get the error "HTTP Error 500.0 - ANCM In-Process Handler Load Failure". I have been googling this and trying to find a solution, but no success yet.
Is there any particular setting or process that needs to be followed to deploy an Azure Service Fabric service on IIS? I am unable to figure out what I am missing.
Any suggestions or ideas?
When you create a Service Fabric service, the runtime needs to talk to the Service Fabric system services at startup. If you deploy it to IIS, it does not have a cluster to talk to.
If you want to make an API flexible to be hosted either on Service Fabric or IIS, you need to decouple the hosting logic from the API.
In this case, you can either create two different hosts, or:
in the host entry point, check whether you are running inside Service Fabric; if so, start the Service Fabric service, otherwise start a self-hosted or IIS version.
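A minimal sketch of that entry-point check. The original project is .NET Core; Java is used here only to illustrate the branching pattern, and the two host methods are hypothetical stubs. The check relies on the fact that Service Fabric sets Fabric_* environment variables (such as Fabric_ApplicationName) inside service processes running in a cluster:

```java
import java.util.Map;

public class HostSelector {

    // Service Fabric injects Fabric_* environment variables into service
    // processes, so their absence is a cheap signal that we are running
    // outside a cluster (e.g. under IIS or as a plain console app).
    static boolean insideServiceFabric(Map<String, String> env) {
        return env.containsKey("Fabric_ApplicationName");
    }

    public static void main(String[] args) {
        if (insideServiceFabric(System.getenv())) {
            startServiceFabricHost();
        } else {
            startSelfHost();
        }
    }

    static void startServiceFabricHost() {
        // Here you would register the service type with the Service Fabric runtime.
        System.out.println("Starting Service Fabric host...");
    }

    static void startSelfHost() {
        // Here you would start a plain self-hosted web server (the IIS path).
        System.out.println("Starting self-hosted web server...");
    }
}
```

The same API code can then be started by either branch, which is what keeps the hosting concern decoupled from the controllers.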
There are quite a few questions on SO with examples like this; it's worth a search to check which one fits your needs.
I am a student who just started working on a research project comparing Kaa with Eclipse Kura. I didn't have any knowledge of IoT before working on this project, so I am really lost and have no idea how to compare them. I hope someone can give me some advice. Thanks!
I cannot speak to the specifics of Kaa, so you would need to thoroughly review their documentation. From my understanding, Kaa is mainly focused on the Cloud side of the IoT stack. They provide SDKs for various languages that you need to compile and install on whatever device you intend to connect to the Cloud.
Eclipse Kura is a Java/OSGi framework that runs on an IoT gateway. The framework provides built-in services for managing the gateway (networking, cloud connectivity, remote management, etc.) and abstracts away many of the complexities of writing applications for the gateway (GPIO, serial, BLE, etc.). Eclipse Kura doesn't provide a Cloud backend itself, but it has built-in support for connecting to open-source platforms such as Eclipse Kapua and industrial backends such as Eurotech Everyware Cloud, Amazon AWS IoT, Microsoft Azure, etc. In theory you could install the Kaa Java SDK in Eclipse Kura and have Kura connect to Kaa, but I have never tried this.
I hope this helps,
--Dave
I would like to deploy a web app on the cloud (it is built using the Spring MVC framework, JPA & Oracle). Could anyone suggest the best way to deploy it on the cloud?
Vijay
Given your comment that you're happy to move to a MySQL database, then I'd suggest Jelastic, which has the easiest deployment for your stack. It also has a free trial.
Alternatively, AppFog is also great, and free for up to 2GB of RAM.
You'll have to first pick your cloud provider.
Given the technology stack, you may be able to deploy to the Oracle Public Java cloud, or you could also configure machine images to deploy on Amazon EC2. The limiting factor will be technology (and licensing) restrictions. If you had, for example, decided to use MySQL as a database (or any other data store), you would have more options.
I have a web application deployed on Heroku. I just introduced Neo4j as the data store and, of course, I have to integrate it in production on Heroku. I read at this link http://wiki.neo4j.org/content/Neo4j_Heroku_Addon that the Heroku add-on for Neo4j is currently in beta testing. So I have looked for alternative ways and found this link: http://wiki.neo4j.org/content/Neo4j_in_the_Cloud ... Do you know if it's possible to include such an integration on Heroku without the add-on? Thanks
If you are a registered beta tester on Heroku, you can already use the add-on for free.
Of course, if you want to run the Neo4j REST server on your own AWS EC2 instances, you can do that easily (there are also preconfigured AMIs). Please make sure that your EC2 instances run in the AWS us-east region, as this is where Heroku's machines are located too.