Configuring OrientDB class-level clusters on different nodes - OrientDB 2.2

I have a problem configuring an OrientDB distributed cluster. Basically I have three OrientDB server instances on three different machines, and I want to have three clusters for a class, with each cluster present on two different nodes. For that I have the default-distributed-db-config.json below.
{
  "autoDeploy": true,
  "readQuorum": 1,
  "writeQuorum": "majority",
  "executionMode": "undefined",
  "readYourWrites": true,
  "newNodeStrategy": "static",
  "servers": {
    "*": "master"
  },
  "clusters": {
    "internal": {
    },
    "employee_0": { "servers": ["node0","node1"] },
    "employee_1": { "servers": ["node1","node2"] },
    "employee_2": { "servers": ["node2","node3"] },
    "*": { "servers": ["node1","<NEW_NODE>"] }
  }
}
Here I do not have any database created before I start the services on all three nodes. I started all the nodes and everything seemed OK. When I created the database on node1, I got exceptions on the other nodes, as below.
2017-08-03 18:19:04:727 INFO [node1]->[node2] Creating backup of cluster 'global' in directory: /tmp/orientdb/backup_global_user_1.zip... [OSyncClusterTask]
Exception `3F3E8071` in storage `global`
com.orientechnologies.orient.core.exception.OStorageException: Cluster employee_1 does not exist in database 'global'
DB name="global"
So I created the clusters employee_0, employee_1 and employee_2 on the respective nodes (1, 2 and 3) as below.
orientdb {db=global}> create cluster employee_0
Error: com.orientechnologies.orient.server.distributed.task.ODistributedOperationException: Quorum 3 not reached for request (id=0.29 task=command_sql(create cluster employee_0)). Elapsed=15002ms. No server in conflict. Received:
- mum105: 33
- hyd107: waiting-for-response
- ban106: 33
DB name="global"
DB name="global"
I have set writeQuorum to "majority" in the config, but it tries with quorum 3. Where is it taking that 3 from instead of 2? Still, I could see that cluster employee_0 was already created on node0 and node1, so I proceeded to create the class once all the clusters were created.
orientdb {db=global}> create class user cluster 33
Error: com.orientechnologies.orient.server.distributed.task.ODistributedOperationException: Quorum 3 not reached for request (id=0.295 task=command_sql(create class user cluster 33)). Elapsed=15015ms. No server in conflict. Received:
- mum105: 12
- hyd107: waiting-for-response
- ban106: 12
DB name="global"
DB name="global"
Even for this I got the same exception, and again I could see the class was created.
|# |NAME |SUPER-CLASSES|CLUSTERS
|10 |user | |user_0(33)
So I moved on to insert a record into the class, but the insert does not work.
orientdb {db=global}> insert into employee(id,name) values (1,'jhon')
Error: com.orientechnologies.orient.core.exception.ORecordNotFoundException: The record with id '#33:0' was not found
DB name="global"
DB name="global"
Error: java.lang.IllegalArgumentException: Cluster 33 is null
Am I making a mistake? Any help would be appreciated. I'm stuck with these questions in mind:
1. Is there a correct approach to set up the distributed cluster as described above?
2. Why is quorum 3 being used when I set it to "majority"?
3. While inserting the record it says record #33:0 was not found. What is wrong here?

Related

Ingest pipeline is not working over logs obtained from an event hub with filebeat

I am sending logs to an Azure Event Hub with Serilog (using WriteTo.AzureEventHub(eventHubClient)). After that I run a filebeat process with the azure module enabled, so these logs are sent to Elasticsearch and I can explore them with Kibana.
The problem I have is that all the information goes into the field "message"; I need to separate the information of my logs into different fields to be able to do good queries.
The way I found was to create an ingest pipeline in Kibana and, through a grok processor, separate the fields inside "message" and generate multiple fields. In filebeat.yml I set the pipeline name, but nothing happens; it seems the pipeline is not working.
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipeline: "filebeat-otc"
Does anybody know what I am missing? Thanks in advance.
EDIT: I will add an example of my pipeline and my data. In the simulation it works properly:
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "%{TIME:timestamp}\\s%{LOGLEVEL}\\s{[a-zA-Z]*:%{UUID:CorrelationID},[a-zA-Z]*:%{TEXT:OperationTittle},[a-zA-Z]*:%{TEXT:OriginSystemName},[a-zA-Z]*:%{TEXT:TargetSystemName},[a-zA-Z]*:%{TEXT:OperationProcess},[a-zA-Z]*:%{TEXT:LogMessage},[a-zA-Z]*:%{TEXT:ErrorMessage}}"
          ],
          "pattern_definitions": {
            "LOGLEVEL": "\\[[^\\]]*\\]",
            "TEXT": "[a-zA-Z0-9- ]*"
          }
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "15:13:59 [INF] {CorrelationId:83355884-a351-4c8b-af8d-b77c48462f36,OperationTittle:Operation1,OriginSystemName:Fexa,TargetSystemName:Usina,OperationProcess:Testing Log Data,LogMessage:Esto es una buena prueba,ErrorMessage:null}"
      }
    },
    {
      "_source": {
        "message": "20:13:48 [INF] {CorrelationId:8451ee54-efca-40be-91c8-8c8e18e33f58,OperationTittle:null,OriginSystemName:Fexa,TargetSystemName:Donna,OperationProcess:Testing Log Data,LogMessage:null,ErrorMessage:null}"
      }
    }
  ]
}
It seems that when you use a module, filebeat creates and uses its own ingest pipeline in Elasticsearch, and the pipeline option in the output is ignored.
So my solution was to modify index.final_pipeline. For this, in Kibana I went to Stack Management / Index Management, found my index, went to Edit Settings and set "index.final_pipeline": "the-name-of-my-pipeline".
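For reference, the same setting can also be applied through the index settings API instead of the Kibana UI; a minimal sketch, assuming the index pattern filebeat-* and the pipeline name filebeat-otc from the question:
PUT filebeat-*/_settings
{
  "index.final_pipeline": "filebeat-otc"
}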
I hope this helps somebody.
This was thanks to leandrojmp

Restarting nodes in a flow test throws an error

My nodes have a custom configuration file, and the flow of events is as follows:
1. Start my network
2. Run the flow that creates my TokenType
3. Stop the nodes
4. Add the token type identifier to the custom config
5. Start the nodes
6. Now my other flows can read that value from the custom config and do their job
// Custom config map
Map<String, String> customConfig = new LinkedHashMap<>();
// Assign custom config to nodes
network = new MockNetwork(new MockNetworkParameters().withCordappsForAllNodes(ImmutableList.of(
        TestCordapp.findCordapp("xxx").withConfig(customConfig))));
// Run the network and my flow that creates some value to be stored in the config
// Stop the nodes
network.stopNodes();
// Add new value to custom config
customConfig.put("new_value", someNewValue);
// Start the nodes
network.startNodes();
But I get this error when starting the network the second time:
java.lang.IllegalStateException: Unable to determine which flow to use when responding to:
com.r3.corda.lib.tokens.workflows.flows.rpc.ConfidentialRedeemFungibleTokens.
[com.r3.corda.lib.tokens.workflows.flows.rpc.ConfidentialRedeemFungibleTokensHandler,
com.r3.corda.lib.tokens.workflows.flows.rpc.ConfidentialRedeemFungibleTokensHandler] are all registered
with equal weight.
Do you have multiple flows present in your CorDapp? I got a similar error while trying to override an existing flow. After adding flowOverride in the node definition under the deployNodes Gradle task, the issue was gone.
Example:
node {
    name "O=PartyA,L=London,C=GB"
    p2pPort 10004
    rpcSettings {
        address("localhost:10005")
        adminAddress("localhost:10006")
    }
    rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]]
    flowOverride("com.example.flow.ExampleFlow.Initiator",
            "com.example.flow.OverrideAcceptor")
}
More information on this is available in the links below:
https://docs.corda.net/head/flow-overriding.html#configuring-responder-flows
https://lankydan.dev/2019/03/02/extending-and-overriding-flows-from-external-cordapps

Assign profiles to whitelisted users

I have been exploring the profile list feature of KubeSpawner, and am presented with a list of available notebooks when I log in. All good. Now I have the use case of user A logging in and seeing notebooks 1 and 2, with user B seeing notebooks 2 and 3.
Is it possible to assign certain profiles to specific users?
I do not think JupyterHub enables you to do that, based on https://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html
I think a way to achieve this would be to have multiple JupyterHub instances configured with different lists of notebook images. Based on something like an AD group, you redirect each user to the required instance so they get specific image options.
You can dynamically configure the profile_list to provide users with different image profiles.
Here's a quick example:
import json

import requests
from tornado import gen


@gen.coroutine
def get_profile_list(spawner):
    """get_profile_list is a hook function that is called before the spawner is started.

    Args:
        spawner: jupyterhub.spawner.Spawner instance

    Returns:
        list: list of profiles
    """
    # gets the user's auth state (not used below, but available if needed)
    auth_state = yield spawner.user.get_auth_state()
    if spawner.user.name:
        # gets the user's profile list from the API
        api_url = ...  # Make a request to your own API here
        data_json = requests.get(url=api_url, verify=False).json()
        data_json_str = str(data_json)
        data_json_str = data_json_str.replace("'", '"')
        data_json_str = data_json_str.replace("True", "true")
        data_python = json.loads(data_json_str)
        return data_python
    return []  # return default (empty) profile list


c.KubeSpawner.profile_list = get_profile_list
And you can have your API return some kind of configuration similar to this:
[
    {
        "display_name": "Google Cloud in us-central1",
        "description": "Compute paid for by funder A, closest to dataset X",
        "spawner_override": {
            "kubernetes_context": "<kubernetes-context-name-for-this-cluster>",
            "ingress_public_url": "http://<ingress-public-ip-for-this-cluster>"
        }
    },
    {
        "display_name": "AWS on us-east-1",
        "description": "Compute paid for by funder B, closest to dataset Y",
        "spawner_override": {
            "kubernetes_context": "<kubernetes-context-name-for-this-cluster>",
            "ingress_public_url": "http://<ingress-public-ip-for-this-cluster>",
            "patches": {
                "01-memory": """
                    kind: Pod
                    metadata:
                      name: {{key}}
                    spec:
                      containers:
                      - name: notebook
                        resources:
                          requests:
                            memory: 16Mi
                    """,
            }
        }
    },
]
Credit: https://multicluster-kubespawner.readthedocs.io/en/latest/profiles.html
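Going back to the original question (user A sees notebooks 1 and 2, user B sees notebooks 2 and 3), a simpler variant of the same hook can filter a hard-coded list by username instead of calling an external API. This is only a sketch; the profile slugs, images and usernames below are made-up placeholders:
# Sketch only: per-user whitelisting of a static profile list.
# Profile slugs, images and usernames are placeholders, not from the question.
ALL_PROFILES = [
    {"display_name": "notebook-1", "slug": "nb1", "kubespawner_override": {"image": "example/nb1:latest"}},
    {"display_name": "notebook-2", "slug": "nb2", "kubespawner_override": {"image": "example/nb2:latest"}},
    {"display_name": "notebook-3", "slug": "nb3", "kubespawner_override": {"image": "example/nb3:latest"}},
]

# Which profile slugs each user is allowed to see.
USER_PROFILES = {
    "user-a": {"nb1", "nb2"},
    "user-b": {"nb2", "nb3"},
}

def profile_list_for_user(spawner):
    """Return only the profiles whitelisted for the logged-in user."""
    allowed = USER_PROFILES.get(spawner.user.name, set())
    return [p for p in ALL_PROFILES if p["slug"] in allowed]

c.KubeSpawner.profile_list = profile_list_for_user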

Is there any way to create a default account whenever a node is created?

I am using Corda version 4.3 and doing all the transactions at the account level by creating accounts for each node. However, I want a default account to be created whenever I create a node, so that no node is created without an account.
I wonder if I can do that in the RPC settings or in the main build.gradle file where I initialize a node like this:
node {
    name "O=Node1,L=London,C=GB"
    p2pPort 10005
    rpcSettings {
        address("localhost:XXXXX")
        adminAddress("localhost:XXXXX")
    }
    rpcUsers = [[user: "user1", "password": "test", "permissions": ["ALL"]]]
}
Try the following:
Create a class and annotate it with @CordaService, which means the class gets loaded as soon as the node starts (https://docs.corda.net/api/kotlin/corda/net.corda.core.node.services/-corda-service/index.html).
Inside your service class:
Fetch the default account (the AccountService class from the Accounts library has methods to fetch and create accounts; it lives in com.r3.corda.lib.accounts.workflows.services).
If the default account is not found, create it, as in the sketch below.
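A minimal sketch of such a service in Java, assuming the accounts-workflows library is on the node's classpath; the AccountService method names used here (accountInfo, createAccount) come from that library but should be verified against your version:
import com.r3.corda.lib.accounts.workflows.services.AccountService;
import com.r3.corda.lib.accounts.workflows.services.KeyManagementBackedAccountService;
import net.corda.core.node.AppServiceHub;
import net.corda.core.node.services.CordaService;
import net.corda.core.serialization.SingletonSerializeAsToken;

// Sketch only: creates a "default" account the first time the node starts.
@CordaService
public class DefaultAccountCreator extends SingletonSerializeAsToken {

    public DefaultAccountCreator(AppServiceHub serviceHub) {
        AccountService accountService =
                serviceHub.cordaService(KeyManagementBackedAccountService.class);
        // Only create the account if it does not already exist.
        if (accountService.accountInfo("default").isEmpty()) {
            try {
                // createAccount starts a flow; if this runs too early during startup,
                // you may need to defer it (for example, run it on a separate thread).
                accountService.createAccount("default").get();
            } catch (Exception e) {
                throw new IllegalStateException("Could not create default account", e);
            }
        }
    }
}
Because the service is instantiated on every node at startup, each node ends up with its own "default" account.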

Using network-bootstrapper to generate node-info expects certificates with devMode=false

I am trying to use the network-bootstrapper tool to generate node infos (certificates etc.) by passing a node.conf file as input with devMode=false. The following is my node.conf file:
myLegalName="O=Bank,L=Paris,C=FR"
p2pAddress="localhost:10011"
devMode=false
rpcSettings {
address="localhost:10012"
adminAddress="localhost:10052"
}
security {
authService {
dataSource {
type=INMEMORY
users=[
{
password=test
permissions=[
ALL
]
user=user3
}
]
}
}
}
I am passing the path of the node.conf file as an argument to bootstrapper.jar, but it is exiting with error code 1. The following is the log generated:
[INFO ] 2018-07-04T14:19:21,901Z [main] internal.Node.generateAndSaveNodeInfo - Generating nodeInfo ... {}
[ERROR] 2018-07-04T14:19:21,901Z [main] internal.Node.validateKeystore - IO exception while trying to validate keystore {}
java.nio.file.NoSuchFileException: C:\corda\work\keys-gen\Bank\certificates\sslkeystore.jks
......
......
And
[ERROR] 2018-07-04T14:19:21,917Z [main] internal.Node.run - Exception during node startup {}
java.lang.IllegalArgumentException: Identity certificate not found. Please either copy your existing identity key and certificate from another node, or if you don't have one yet, fill out the config file and run corda.jar --initial-registration. Read more at: https://docs.corda.net/permissioning.html
......
......
Can you please let me know how to generate the certificates and place them in the folder {workspace}/{nodeName}/certificates, which does not exist beforehand and is generated by the bootstrapper tool itself? Can you help with certificate generation and usage of the network-bootstrapper.jar tool with devMode turned off?
The bootstrapper tool can't be used outside of devMode. Outside of devMode, proper certificates and a network map server must be used.
This issue is being tracked here: https://r3-cev.atlassian.net/browse/CORDA-1735.
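For reference, outside of devMode each node normally obtains its identity certificate by registering with the network's identity service (doorman) before its first start, as the error message itself suggests; a hedged sketch of that command (exact flags differ between Corda versions):
java -jar corda.jar --initial-registration --network-root-truststore ./certificates/network-root-truststore.jks --network-root-truststore-password <password>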
