Coherence client - not storing data - oracle-coherence

How do I stop a Coherence client node from storing data? I.e. how can it connect to the cluster, GET data and then disconnect, without the cluster trying to pass data to the node?

Start the client with the following JVM flag:
-Dtangosol.coherence.distributed.localstorage=false
Ideally, though, you should connect to the cluster as a Coherence*Extend client rather than as a regular cluster member.
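
With that flag set, a minimal read-then-disconnect sketch looks like this (the cache name "example-cache" and the key are assumptions for illustration):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class StorageDisabledReader {
    public static void main(String[] args) {
        // Joins the cluster (or connects via Coherence*Extend, depending on the cache
        // configuration); with localstorage=false this member holds no cache partitions.
        NamedCache cache = CacheFactory.getCache("example-cache");
        Object value = cache.get("some-key"); // plain GET, nothing is stored on this node
        System.out.println(value);
        CacheFactory.shutdown(); // leave the cluster / close the connection when done
    }
}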

Related

Connecting to the databases of a MockNetwork in Corda

Often, when writing a MockNetwork test I want to connect to the databases of the nodes and interactively query them. Is there a way of doing this?
See this class and example:
https://gist.github.com/dazraf/01115f0d376647f99e8fc453ba07251c
Essentially, it starts the H2 TCP server and dumps the JDBC connection string for each node.
It also has a method to block the test while you interactively query the databases.
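
If you only need the H2 part, a minimal sketch of that idea (the port and the Enter-to-continue blocking are assumptions for illustration; it requires the H2 driver on the test classpath):

import org.h2.tools.Server;

public class H2DebugServer {
    public static void main(String[] args) throws Exception {
        // Start an H2 TCP server so an external SQL client can attach to the node databases.
        Server server = Server.createTcpServer("-tcpPort", "9092", "-tcpAllowOthers").start();
        System.out.println("H2 server running at " + server.getURL());
        // Block here while you query the databases interactively; press Enter to continue.
        System.in.read();
        server.stop();
    }
}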

Is there a way to know the status of a node using the network map service?

Looking through the documentation I can't find a way to get the current status of the nodes (online/offline) using the network map service.
Is this already implemented?
I can find this information using OS tools, but I would like to know if there is a Corda-native way to do this.
Thanks in advance.
This feature is not implemented as of Corda V3. However, you can implement this functionality yourself. For example, see the Ping Pong sample, which allows you to ping other nodes.
In the future, it is expected that the network map will regularly poll each node on the network. Nodes that did not respond for a certain period of time (as defined by the network operator) would be evicted from the network map. However, this period of time is expected to be long (e.g. a month).
Please also note that:
In Corda, communication between nodes uses message acknowledgments. If a node is offline when you send it a message, no acknowledgment will be received, and the sending node will store the message to disk and retry delivery later. It will continue to retry until the counterparty acknowledges receipt of the message.
Corda is designed with "always-on" nodes in mind. A node being offline will generally correspond to a disaster scenario, and the situation should not be long-lasting
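
In the meantime, one crude liveness check is simply to try opening an RPC connection to the node. A minimal sketch, assuming RPC is exposed on localhost:10006 with user1/test credentials (all of these are illustrative, and this is not the Ping Pong sample itself):

import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.utilities.NetworkHostAndPort;

public class NodeReachabilityCheck {
    public static void main(String[] args) {
        CordaRPCClient client = new CordaRPCClient(new NetworkHostAndPort("localhost", 10006));
        try {
            // If start() succeeds, the node's RPC endpoint is up and answering.
            CordaRPCConnection connection = client.start("user1", "test");
            System.out.println("Node is up: " + connection.getProxy().nodeInfo().getLegalIdentities());
            connection.notifyServerAndClose();
        } catch (Exception e) {
            System.out.println("Node appears to be down: " + e.getMessage());
        }
    }
}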

MariaDB Galera cluster: are replicate-do-db filters applied before or after data sent?

I would like to synchronize only some databases on a cluster, with replicate-do-db.
If I use a Galera cluster, is all the data sent over the network, or are the nodes smart enough to fetch only their specific databases?
With "classic" master/slave MariaDB replication, the filters are applied by the slave, causing pointless network traffic if you don't replicate that database. To avoid this you have to configure a blackhole proxy to filter the binary logs (setup example), but the administration afterwards is not really easy. So it would be perfect if I could do the same thing with a cluster :)
binlog_... filters are applied on the sending (Master) node.
replicate_... filters are applied on the receiving (Slave) node.
Is this filtered server part of the cluster? If so, you are destroying much of the beauty of Galera.
On the other hand, if this is a Slave hanging off one of the Galera nodes and the Slave does not participate in the "cluster", this is a reasonable architecture.
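
To make the distinction concrete, a hedged my.cnf sketch (the database name app_db is an assumption for illustration):

# On the sending (master) node: filtering here means the filtered rows never enter
# the binary log, so they are never sent over the network.
[mysqld]
binlog_do_db = app_db

# On the receiving (slave) node: the full binary log still arrives over the network,
# and the filter is only applied when the events are executed.
[mysqld]
replicate_do_db = app_db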

Should I run carbon-relay or carbon-cache, or both?

I want to ask about the Graphite carbon daemons.
https://graphite.readthedocs.org/en/latest/carbon-daemons.html
I would like to ask: while running carbon-relay.py, should I also run carbon-cache.py, or is the relay alone okay?
Regards
Murtaza
carbon-relay is used when you set up a cluster of Graphite instances. A carbon-cache, however, does not need a cluster.
Regarding carbon-cache: write operations are expensive, so Graphite keeps collected data in an in-memory cache from which the Graphite webapp can read and display the most recent data points, irrespective of whether you run a cluster and irrespective of whether the data has been written to disk yet.
Hope this answers your question.
carbon-relay only resends data to one or more destinations, so it is needed only if you want to fork data to several endpoints. Example schemes can be:
save locally and resend to another node (cache or temporary storage, plus a relay)
resend all data to multiple remote daemons (multiple remote storages)
save all data in multiple local daemons (parallel storage & redundancy)
save different data sets in multiple local daemons (performance)
... other cases ...
So:
if you need to store data locally, you have to use carbon-cache.
if you need to fork the data flow on the node, you have to use carbon-relay in front of (or instead of) carbon-cache; a minimal configuration for the relay-plus-cache case is sketched below.
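
That relay-plus-cache case, as a hedged carbon.conf sketch (the ports are the documented defaults; the single local destination is an assumption):

[relay]
LINE_RECEIVER_PORT = 2013
PICKLE_RECEIVER_PORT = 2014
RELAY_METHOD = consistent-hashing
# Forward everything to the local carbon-cache instance; add more host:port:instance
# entries to fork the data to additional caches or remote nodes.
DESTINATIONS = 127.0.0.1:2004:a

[cache]
LINE_RECEIVER_PORT = 2003
PICKLE_RECEIVER_PORT = 2004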

SAP receive adapter high availability

We have an active-active BizTalk cluster with Windows Server as a software load balancer. The solution includes an SAP receive adapter accepting inbound RFC calls. The goal is to make the SAP adapter highly available.
I read the documentation, and it does say 'You must always cluster the SAP receive adapter to accommodate a two-phase commit scenario.' and 'hosts running the receive handlers for FTP, MSMQ, POP3, SQL, and SAP require a clustering mechanism to provide high availability.'
Currently, on both nodes of the active-active BizTalk deployment, we have a host instance enabled. Referring to the documentation above, does that mean we did it incorrectly? Should we use a clustered host instance instead of the active-active deployment?
Thanks for all the help in advance.
You need to cluster the host that handles the SAP receive. What this means is that you will always have only one instance of the adapter running at any given time, and if one of the servers goes down, the other will pick it up.
Compare this with your scenario, where you simply have two (non-clustered) instances running concurrently: yes, this gives you high availability, but also deadlocks! The two will run independently of each other... With the clustered scenario above, they will run one at a time.
To cluster the SAP receive host: open the admin console, find the host, right-click and choose Cluster.
