According to the documentation, many CorDapps can be deployed to a single node that lives in /opt/corda. Can I deploy additional nodes on the same VM?
Yes, each node gets its own base directory under /opt, and you can run any number of nodes on a single VM. For example:
/opt/PartyA
/opt/PartyB
etc
Each node must have its own systemd entry so that its service can be started.
If all nodes are running the same application, you can put all of their configuration in the same base directory. However, this is more difficult, because each node must still specify its own configuration file (--config-file foo.conf), database, certificates, message queue, and so on.
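As a minimal sketch, each node could be wired into systemd with its own unit file; the unit name, service account, and JVM invocation below are assumptions, so adapt them to your installation:

    # /etc/systemd/system/corda-partya.service (hypothetical unit name)
    [Unit]
    Description=Corda node for PartyA
    After=network.target

    [Service]
    Type=simple
    # "corda" is an assumed dedicated service account
    User=corda
    WorkingDirectory=/opt/PartyA
    ExecStart=/usr/bin/java -jar /opt/PartyA/corda.jar --config-file=/opt/PartyA/node.conf
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

A second unit (for example corda-partyb.service pointing at /opt/PartyB) is created the same way, then each service is enabled and started with systemctl.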
I have a Symfony application that is running on several servers behind a load balancer. So I have separate hosts www1, www2, www3, etc.
At the moment I'm running messenger:consume only on www1, for fear of race conditions and messages potentially being handled twice.
Now I have a scenario where I need to execute a command on each host.
I was thinking of using separate transports for each host and running messenger:consume on each, consuming only messages from its respective queue. However I want the configuration to be dynamic, i.e. I don't want to do another code release with different transports configuration when a new host is added or removed.
Can you suggest a strategy to achieve this?
If you want to use different queues and different consumers... just configure a different DSN for each www host, stored in environment variables (not code). Then each server can have its own queue or transport.
The transport configuration can include the desired queue name within the DSN, and best practice is to store that configuration in an environment variable, not in code, so you wouldn't need "another code release with different transports config when a new host is added or removed". Simply add the appropriate environment variables when each instance is deployed, the same as you do with the rest of the configuration.
    framework:
        messenger:
            transports:
                my_transport:
                    dsn: "%env(MESSENGER_TRANSPORT_DSN)%"
On each "www" host you would have a different value for MESSENGER_TRANSPORT_DSN, which would include a different queue name (or a completely different transport).
You would need to create a separate consumer for each instance, with a matching configuration (or run the consumer off the same instance).
But if all the hosts are actually running the same app, you'd generally use a single consumer, and all the instances should publish to the same queue.
The consumer does not even need to run on the same server as any of the web instances; it simply needs to be configured to consume from the appropriate transport/queue.
I have a Corda network (3 nodes + 1 notary node) running locally on my Windows system.
I am reading through this document: https://docs.corda.net/node-administration.html
Node statistics are exposed through JMX beans to a Jolokia agent started with each node. I see a Jolokia agent starting for each of the nodes on a different port, e.g. Jolokia: Agent started with URL http://127.0.0.1:xxxx/jolokia/
I am using the Hawtio dashboard to view the Corda node JVM statistics exposed through the Jolokia agents. While Hawtio is smart enough to discover the Jolokia agents started on a different port for each Corda node, I am not able to see the required statistics on the dashboard.
I have tried setting jmxMonitoringHttpPort in each node.conf to that node's Jolokia port, but the node does not start, failing with a message that the Jolokia agent is not running at the target port.
I have also downloaded the Jolokia agent binaries and ran the agent on an unused port on the system, then configured each node.conf to point to that port, but I am still not seeing statistics for any of the nodes.
I think you can try the obligation CorDapp project (https://github.com/corda/obligation-cordapp#instructions-for-setting-up) with its runnodes script, because those nodes are started with the Jolokia agent enabled.
Alternatively, you can run a single node with java -Dcapsule.jvm.args="-javaagent:drivers/jolokia-jvm-1.3.7-agent.jar=port=7033" -jar corda.jar and see if this works.
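For the 3-nodes-plus-notary setup from the question, the same JVM argument can be repeated from each node's directory with a different port (the directory names and port numbers below are just examples):

    cd /path/to/PartyA && java -Dcapsule.jvm.args="-javaagent:drivers/jolokia-jvm-1.3.7-agent.jar=port=7033" -jar corda.jar
    cd /path/to/PartyB && java -Dcapsule.jvm.args="-javaagent:drivers/jolokia-jvm-1.3.7-agent.jar=port=7034" -jar corda.jar

Hawtio can then be pointed at http://127.0.0.1:7033/jolokia/ (and so on) for each node.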
How can I deploy and run the Corda nodes of the Spring-webserver-based "Yo!CorDapp" example (https://github.com/joeldudleyr3/spring-observable-stream) on separate machines?
What configuration changes do I need to make for this?
As long as you are running each server on the same machine as the node it talks to, there shouldn't be any configuration required.
Simply start the nodes on their separate machines, then start the webserver on each machine, with the application properties modified or overridden to point to that node's RPC port.
Because the nodes are on separate machines, it's even possible to use the same RPC port for all of them, since the IP addresses will differ.
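As a sketch, on each machine you would override the webserver's RPC connection settings at startup; the property names below are assumptions modelled on a typical Spring application.properties for a Corda RPC client, so check the repository's own resources folder for the exact keys:

    # application.properties on the machine hosting PartyA (illustrative names and values)
    server.port=8080
    config.rpc.host=localhost
    config.rpc.port=10006
    config.rpc.username=user1
    config.rpc.password=test

Spring also allows such properties to be overridden on the command line (for example --config.rpc.port=10006) without editing the file, which makes per-machine deployment straightforward.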
I'm trying to integrate Docker with OpenStack (Icehouse) via the Docker Heat plugin and I'm facing a problem.
OpenStack is configured according to the OpenStack tutorial for Ubuntu. I'm using a controller node and a compute node (just the two nodes) with legacy nova-networking.
Things to keep in mind:
Controller node: 1 network interface - the management interface
Compute node: 2 network interfaces - the management interface and the external interface (VM instances have IPs on the same subnet as that external interface)
With OpenStack everything works perfectly, except for the following (which might be the cause of the problem I'm facing with Docker):
1- You can't reach (ping) the deployed VM instances from the controller node [makes sense, I think there is no problem with that one]
2- You can't reach (ping) the deployed VM instances from the compute node (ping: operation not permitted) [this might be the issue] - but you can ping from a VM instance to the compute node
3- The virtual machines themselves don't see each other [but I think this is unrelated to the issue I'm facing]
For Docker, the plug-in is installed. I assume the installation is fine, since the syntax for the DockerInc::Docker ... resource is accepted, but when I try to run the example posted on the Docker blog - making the required adjustments - the compute instance is created but the Docker container is not. I'm getting this error:
When I try it as a user with the admin role:
MissingSchema: Invalid URL u'192.168.122.26/v1.9/containers/None/json': No schema supplied. Perhaps you meant http://192.168.122.26/v1.9/containers/None/json
When I try it as a user with just the member role:
MissingSchema: Invalid URL u'192.168.122.26/v1.9/containers/create': No schema supplied. Perhaps you meant http://192.168.122.26/v1.9/containers/create
Notes:
192.168.122.26 is the IP of the created VM instance.
I've tried not only with CirrOS but also CoreOS and ubuntu-precise (same error).
Docker itself is installed on both the controller and the compute node.
The Docker plugin and its requirements are installed only on the controller node.
Finally, both the controller and the compute nodes run as virtual machines themselves.
I would be really glad if you had an idea. Thanks for your time,
Kindest Regards,
M. El Sioufy
My guess is that you haven't allowed communication to the VMs from the outside world (which the controller and/or the compute node will be from the VM's point of view). By default, communications from VMs to the outside world are allowed, but not inbound to the VMs. Try adding an "allow all TCP" rule to the default security group of the tenant that the VMs live in. This may fix your HTTP timeout.
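With legacy nova-networking this can be done from the controller with the nova client; a sketch, assuming the instances are in the tenant's default security group (the wide-open ranges are for testing only):

    # allow all inbound TCP and ICMP to instances in the "default" security group
    nova secgroup-add-rule default tcp 1 65535 0.0.0.0/0
    nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0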
The following is a paragraph from the GlassFish 3.1.2.2 Administration Guide:
You can rotate log files manually by using the rotate-log subcommand
in remote mode. The default target of this subcommand is the DAS.
Optionally, you can target a configuration, server, instance, or
cluster.
What is the difference between a configuration, a server, an instance, and a cluster? I understand that a cluster is a collection of instances, but what is the difference between a server, an instance, and a configuration?
From the GlassFish admin guide:
The default target for these two subcommands is the DAS. However, you
can optionally specify one of the following targets:
Configuration: to target all instances or clusters that share a specific configuration name.
Server: to target only a specific server.
Instance: to target only a specific instance.
Cluster: to target only a specific cluster.
A GlassFish Server instance is a single Virtual Machine for the Java
platform (Java Virtual Machine or JVM machine) on a single node in
which GlassFish Server is running. A node defines the host where the
GlassFish Server instance resides.
and
It is usually sufficient to create a single server instance on a
machine, since GlassFish Server and accompanying JVM are both designed
to scale to multiple processors. However, it can be beneficial to
create multiple instances on one machine for application isolation and
rolling upgrades.
This means you can have multiple instances of GlassFish running on a single server, and you can target either a single instance or the whole server.
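As a sketch, the target is passed to the rotate-log subcommand with its --target option (the instance and cluster names below are placeholders):

    asadmin rotate-log                       # rotates the log of the DAS (default target)
    asadmin rotate-log --target instance1    # rotates the log of one specific instance
    asadmin rotate-log --target cluster1     # rotates the logs of all instances in the cluster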