I am new to JanusGraph and am using Cassandra as the backend database. I have a query that finds all incoming edges to a node, and for that I need to set the read consistency level to ONE in the JanusGraph configuration. I have tried the following configuration but am not able to get the correct read consistency:
public static JanusGraph create() {
    JanusGraphFactory.Builder config = JanusGraphFactory.build();
    config.set("storage.backend", "cassandrathrift");
    config.set("storage.cassandra.keyspace", "cs_graph");
    config.set("storage.cassandra.read-consistency-level", "ONE");
    config.set("storage.cassandra.write-consistency-level", "ONE");
    config.set("storage.cassandra.frame-size-mb", "128");
    config.set("storage.cassandra.thrift.cpool.max-wait", 360000);
    config.set("storage.hostname", "10.XXX.1.XXX");
    config.set("connectionPool.keepAliveInterval", "360000");
    config.set("storage.cql.only-use-local-consistency-for-system-operations", "true");

    graph = config.open();
    System.out.println("Graph = " + graph);
    traversalSource = graph.traversal();
    System.out.println("traversalSource = " + traversalSource);
    getAllEdges();
    return graph;
}
However, the client still shows the CassandraTransaction at the QUORUM consistency level.
Here are the logs:
16:40:54.799 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#25e2a451[read=QUORUM,write=QUORUM]
16:40:54.800 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#1698ee84[read=QUORUM,write=QUORUM]
All edges = 100000
16:40:55.754 [main] DEBUG o.j.g.database.StandardJanusGraph - Shutting down graph standardjanusgraph[cassandrathrift:[10.70.1.167]] using shutdown hook Thread[Thread-5,5,main]
16:40:55.755 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#3e5499cc[read=QUORUM,write=QUORUM]
16:40:55.755 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#67ab1c47[read=QUORUM,write=QUORUM]
16:40:56.113 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#6821ea29[read=QUORUM,write=QUORUM]
16:40:56.542 [main] DEBUG o.j.d.cassandra.CassandraTransaction - Created CassandraTransaction#338494fa[read=QUORUM,write=QUORUM]
16:40:56.909 [main] INFO o.j.d.c.t.CassandraThriftStoreManager - Closed Thrift connection pooler.
Any suggestions on how to change this to the ONE or LOCAL consistency level?
For one, I would switch to connecting over CQL instead of Thrift. Thrift has been deprecated, so it no longer receives improvements or bug fixes; in other words, if something in it is inherently broken, it won't be fixed. You're much better off using CQL.
config.set("storage.backend", "cql");
config.set("storage.cql.keyspace", "cs_graph");
config.set("storage.cql.read-consistency-level", "ONE");
config.set("storage.cql.write-consistency-level", "ONE");
Secondly, you need to make sure that you're consistently using the config properties for your storage backend. Unfortunately, with JanusGraph and Cassandra these are easy to mix up:
config.set("storage.cassandra.read-consistency-level","ONE");
config.set("storage.cassandra.write-consistency-level","ONE");
....
config.set("storage.cql.only-use-local-consistency-for-system-operations","true");
In the above example, you've set properties on both the storage.cassandra (Thrift) and storage.cql (CQL) configs.
If that still doesn't work, try adding this setting as well:
log.tx.key-consistent=true
Setting the transaction log to be key-consistent overrides its default QUORUM consistency, if that's what is showing up as QUORUM.
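Putting those pieces together, here is a minimal sketch of a CQL-only version of your create() method, assuming the same keyspace and hostname as your original code:

public static JanusGraph create() {
    JanusGraphFactory.Builder config = JanusGraphFactory.build();
    // CQL backend with storage.cql.* properties only -- no storage.cassandra.* (Thrift) settings mixed in.
    config.set("storage.backend", "cql");
    config.set("storage.hostname", "10.XXX.1.XXX");
    config.set("storage.cql.keyspace", "cs_graph");
    config.set("storage.cql.read-consistency-level", "ONE");
    config.set("storage.cql.write-consistency-level", "ONE");
    config.set("storage.cql.only-use-local-consistency-for-system-operations", "true");
    // Key-consistent transaction log, in case the QUORUM you see comes from it.
    config.set("log.tx.key-consistent", true);
    return config.open();
}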
I have set up Kong in DB-less mode on RHEL by following the documentation below:
https://docs.konghq.com/gateway/latest/install-and-run/rhel/
Kong Gateway started successfully. Below are the configurations I added in the kong.conf file, where the database is turned off and the path to the declarative kong.yml is specified:
declarative_config = /temp/kong/kong.yml
database = off
Also, below is the current YAML file, where I created a service following the guide at the link below:
https://docs.konghq.com/gateway/2.8.x/get-started/comprehensive/expose-services/
_format_version: "1.1"
services:
- host: mockbin.org
  name: example_service
  port: 80
  protocol: http
  routes:
  - name: mocking
    paths:
    - /mock
    strip_path: true
I have also installed decK to sync the declarative configuration.
However, when I use the deck sync command to add this service to Kong, I get the error below:
creating service example_service
Summary:
Created: 0
Updated: 0
Deleted: 0
Error: 1 errors occurred:
while processing event: {Create} service example_service failed: HTTP status 405 (message: "cannot create or update 'services' entities when not using a database")
I need ideas on what could be wrong, as I believe we can create a service in DB-less mode, and I also think this declarative format should work. Looking forward to hearing from you. Thanks
You are correct that we can create a service in DB-less mode; however, the approach is different.
If you already have the new config file in YAML format, you can load it into Kong using the /config endpoint.
I also think that decK should be process-agnostic and usable in both DB and DB-less modes, but as it stands, loading the YAML config file via the /config endpoint looks like the best option.
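For illustration, here is a minimal Java sketch of loading the declarative file through /config. The Admin API address (localhost:8001 is the default), the file path, and the JSON body shape are assumptions to adapt to your setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class KongConfigLoader {
    public static void main(String[] args) throws Exception {
        // Read the declarative config; the path is taken from the kong.conf above.
        String yaml = Files.readString(Path.of("/temp/kong/kong.yml"));

        // POST it to the Admin API /config endpoint as a JSON body whose
        // "config" field holds the YAML as a string.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8001/config"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"config\": " + quote(yaml) + "}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }

    // Minimal JSON string escaping for the YAML payload.
    private static String quote(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\"";
    }
}

Any HTTP client works here; posting the file as a config form field against the same endpoint is the shape Kong's DB-less documentation shows.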
I am trying to craft a Salt state file that simply ensures my one-shot service is enabled and re-runs it. I thought it would be nice to re-run it if any of the dependent files changed, but honestly, this is simple enough, and the short-lived service is almost never going to be running when I want to update.
Current attempt:
myown-systemd-service-unit-file:
  ...

myown-systemd-service-executable-file:
  ...

myown-service:
  systemd.force_reload:
    - name: myown
    - enable: True
    - watch:
      - myown-systemd-service-unit-file
      - myown-systemd-service-executable-file
It is failing with this error:
----------
          ID: myown-service
    Function: systemd.force_reload
        Name: myown
      Result: False
     Comment: State 'systemd.force_reload' was not found in SLS 'something.myown'
              Reason: 'systemd.force_reload' is not available.
     Changes:
By enable, I mean having the equivalent of this CLI call applied:
sudo systemctl enable myown.service
Relevant docs: https://docs.saltproject.io/en/latest/ref/modules/all/salt.modules.systemd_service.html#module-salt.modules.systemd_service
The systemd_service module is an execution module, and the syntax for using such modules is slightly different. The state declaration you are using is for state modules. Also, the example in the documentation points to using service.force_reload rather than systemd.force_reload:
salt '*' service.force_reload <service name>
Considering all this, the example below restarts and enables the myown service when the service unit file changes.
myown-service:
  module.run:
    - service.restart:
      - name: myown
    - onchanges:
      - file: myown-systemd-service-unit-file
    - service.enable:
      - name: myown
Note that I've used restart instead of force_reload to bounce the service. Also, I'm using the onchanges requisite against a file state, as you haven't shown how you manage the two files; use the appropriate module and state IDs for your setup.
I'd like to listen to mutations on a remote JanusGraph, but I'm unable to figure out the correct setup to make it work.
JanusGraph stack:
JanusGraph Docker image 0.5.2 (which uses Apache TinkerPop Gremlin 3.4.6) with the cql-es configuration
Cassandra Docker image 3.11.6
Elasticsearch Docker image 7.3.1
The serializers section of gremlin-server-cql-es.yaml is updated with the following lines:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry, org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerIoRegistryV3d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
Java client stack:
Based on pluradj/janusgraph-java-example
Java 8
janusgraph-core 0.5.2
gremlin-driver 3.4.6
remote-objects.yaml looks as follows:
hosts: [127.0.0.1]
port: 8182
serializer: {
  className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0,
  config: {
    ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry, org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerIoRegistryV3d0]
  }
}
Complete code (without ConsoleMutationListener) looks like this:
public static void main(String[] args) {
    MutationListener mutationListener = new ConsoleMutationListener("Test");
    EventStrategy eventStrategy = EventStrategy.build().addListener(mutationListener).create();
    try (GraphTraversalSource g = AnonymousTraversalSource.traversal()
            .withRemote("conf/remote-graph.properties")
            .withStrategies(eventStrategy)) {
        g.addV("person").property("name", "Test").next();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
ConsoleMutationListener is a copy of TinkerPop's sample ConsoleMutationListener with a modified constructor that accepts a graph name instead of a full Graph instance, since toString() was the only method that used it anyway.
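For reference, a hypothetical reconstruction of that modified listener (method signatures as in TinkerPop 3.4's MutationListener interface; only the constructor differs from TinkerPop's sample):

import org.apache.tinkerpop.gremlin.process.traversal.step.util.event.MutationListener;
import org.apache.tinkerpop.gremlin.structure.Edge;
import org.apache.tinkerpop.gremlin.structure.Property;
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.apache.tinkerpop.gremlin.structure.VertexProperty;

// Logs every mutation event to the console, identified by a graph name string
// instead of a Graph instance.
public class ConsoleMutationListener implements MutationListener {
    private final String graphName;

    public ConsoleMutationListener(final String graphName) {
        this.graphName = graphName;
    }

    @Override
    public void vertexAdded(final Vertex vertex) {
        System.out.println("Vertex [" + vertex + "] added to " + graphName);
    }

    @Override
    public void vertexRemoved(final Vertex vertex) {
        System.out.println("Vertex [" + vertex + "] removed from " + graphName);
    }

    @Override
    public void vertexPropertyChanged(final Vertex element, final VertexProperty oldValue,
                                      final Object setValue, final Object... vertexPropertyKeyValues) {
        System.out.println("Vertex [" + element + "] property changed in " + graphName);
    }

    @Override
    public void vertexPropertyRemoved(final VertexProperty vertexProperty) {
        System.out.println("Vertex property [" + vertexProperty + "] removed in " + graphName);
    }

    @Override
    public void edgeAdded(final Edge edge) {
        System.out.println("Edge [" + edge + "] added to " + graphName);
    }

    @Override
    public void edgeRemoved(final Edge edge) {
        System.out.println("Edge [" + edge + "] removed from " + graphName);
    }

    @Override
    public void edgePropertyChanged(final Edge element, final Property oldValue, final Object setValue) {
        System.out.println("Edge [" + element + "] property changed in " + graphName);
    }

    @Override
    public void edgePropertyRemoved(final Edge element, final Property property) {
        System.out.println("Edge [" + element + "] property removed in " + graphName);
    }

    @Override
    public void vertexPropertyPropertyChanged(final VertexProperty element, final Property oldValue, final Object setValue) {
        System.out.println("Vertex property [" + element + "] meta-property changed in " + graphName);
    }

    @Override
    public void vertexPropertyPropertyRemoved(final VertexProperty element, final Property property) {
        System.out.println("Vertex property [" + element + "] meta-property removed in " + graphName);
    }
}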
Stack trace:
io.netty.handler.codec.EncoderException: org.apache.tinkerpop.gremlin.driver.exception.ResponseException: An error occurred during serialization of this request [RequestMessage{, requestId=9436b08c-7e31-4fc0-b480-40904055f491, op='bytecode', processor='traversal', args={gremlin=[[withStrategies(EventStrategy)], [addV(person), property(name, Test)]], aliases={g=g}}}] - it could not be sent to the server - Reason: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.lang.IllegalArgumentException: Class is not registered: org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy
Note: To register this class use: kryo.register(org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy.class);
at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:107)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:716)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
at io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:56)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
at io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
at io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:510)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:518)
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.tinkerpop.gremlin.driver.exception.ResponseException: An error occurred during serialization of this request [RequestMessage{, requestId=9436b08c-7e31-4fc0-b480-40904055f491, op='bytecode', processor='traversal', args={gremlin=[[withStrategies(EventStrategy)], [addV(person), property(name, Test)]], aliases={g=g}}}] - it could not be sent to the server - Reason: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: java.lang.IllegalArgumentException: Class is not registered: org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy
Note: To register this class use: kryo.register(org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy.class);
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinRequestEncoder.encode(WebSocketGremlinRequestEncoder.java:60)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketGremlinRequestEncoder.encode(WebSocketGremlinRequestEncoder.java:38)
at io.netty.handler.codec.MessageToMessageEncoder.write(MessageToMessageEncoder.java:89)
... 12 more
If I remove withStrategies(eventStrategy), the vertex is added to the graph and I'm also able to query the graph normally. However, I'm not able to configure the GraphTraversalSource with EventStrategy.
Q1: My thinking is that either a message with EventStrategy attached cannot be serialized with GryoMessageSerializerV3d0, or the MutationListener/EventStrategy should somehow be registered on the server side, but I can't find any references on how to do that. Are there any examples of such a configuration?
Q2: What am I doing wrong? Is it even possible to use TinkerPop's EventStrategy with JanusGraph?
Q3: Is there any other approach to listen to remote JanusGraph mutations?
Changing the serializer to GraphSONMessageSerializerV3d0 gives:
java.util.concurrent.CompletionException: org.apache.tinkerpop.gremlin.driver.exception.ResponseException: EventStrategy does can only be constructed with instance() or create(Configuration)
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
...
Caused by: org.apache.tinkerpop.gremlin.driver.exception.ResponseException: EventStrategy does can only be constructed with instance() or create(Configuration)
Changing the serializer to GraphBinaryMessageSerializerV1 gives:
java.util.concurrent.CompletionException: io.netty.handler.codec.DecoderException: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: The most significant bit should be set according to the format
at java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:375)
at java.util.concurrent.CompletableFuture.join(CompletableFuture.java:1947)
...
Caused by: io.netty.handler.codec.DecoderException: org.apache.tinkerpop.gremlin.driver.ser.SerializationException: The most significant bit should be set according to the format
Q1: My thinking is that either a message with EventStrategy attached cannot be serialized with GryoMessageSerializerV3d0, or the MutationListener/EventStrategy should somehow be registered on the server side, but I can't find any references on how to do that.
That is correct. EventStrategy does not work across remote connections.
Q2: What am I doing wrong? Is it even possible to use TinkerPop's EventStrategy with JanusGraph?
It is possible to use it with JanusGraph, but only in embedded mode, as the MutationListener implementations do not know how to send events back to the client. The driver would likely need significant changes to introduce a mechanism to support that, so it is a non-trivial change. Even if that were figured out, there would still be serialization issues to sort out for users who supply custom MutationListeners (though perhaps that just wouldn't be allowed).
Q3: Is there any other approach to listen to remote JanusGraph mutations?
The key word there is "remote", and I don't think anything exists currently to allow that. You would need to build your own mechanism of some sort. One way might be to configure "g" with EventStrategy on the server and then add a MutationListener that sends those events to a separate queue that you could consume remotely. You might also consider looking at the JanusGraph Bus and devising a similar scheme.
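For reference, here is a minimal sketch of the embedded-mode approach from Q2, assuming a JanusGraph instance opened in the same JVM via JanusGraphFactory and the modified ConsoleMutationListener from the question:

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.process.traversal.step.util.event.MutationListener;
import org.apache.tinkerpop.gremlin.process.traversal.strategy.decoration.EventStrategy;
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

public class EmbeddedEventExample {
    public static void main(String[] args) {
        // Open the graph in-process, so events never have to cross a remote connection.
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql-es.properties");

        MutationListener listener = new ConsoleMutationListener("Test");
        EventStrategy eventStrategy = EventStrategy.build().addListener(listener).create();

        // Apply the strategy to a traversal source over the local graph.
        GraphTraversalSource g = graph.traversal().withStrategies(eventStrategy);
        g.addV("person").property("name", "Test").next();
        graph.tx().commit();
        graph.close();
    }
}

Since the listener runs in the same JVM, it could just as well forward each event to Kafka or another queue for remote consumers, which is the scheme described above.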
After some trivial tweaks in build.gradle:
- corda_release_version to, say, 4.0-SNAPSHOT-sean
- corda_gradle_plugins_version to 4.0.23
- quasar_version to '0.7.10'
And repositories:
maven { url 'https://jitpack.io' }
maven { url 'https://ci-artifactory.corda.r3cev.com/artifactory/corda-releases' }
maven { url 'https://dl.bintray.com/kotlin/kotlin-eap/' }
And in the deployNodes task:
- adding rpcSettings to the notary
- removing finance from cordapps
The Yo CorDapp can stand up with three nodes: Notary, PartyA and PartyB.
The issue is that after a single /api/yo/yos query, the whole thing freezes. All APIs come back with HTTP ERROR 500, the debugging ports stop working, and the trace logs show some Artemis errors.
My setup: java version "1.8.0_172" on macOS 10.31.1.
It would be helpful to get at least the simplest CorDapp to run against the SNAPSHOT.
Sean
You're not doing anything wrong here. We run continuous integration on the master branch, but (unlike the release-Vx branches) we can't make any guarantees about the master branch working at any point in time.
I replicated your steps, and in my case, I got the following error:
[WARN ] 2018-06-14T14:18:22,744Z [Thread-11 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$5#d15f031)] core.client.fail - AMQ212037: Connection failure has been detected: AMQ119014: Did not receive data from /127.0.0.1:55037 within the 60,000ms connection TTL. The connection will now be closed. [code=CONNECTION_TIMEDOUT] {}
We have deployed a Groovy script to the Artifactory home plugins folder. Using the REST API, we have loaded it successfully; from the logs we can see that the load succeeded:
2017-11-14 10:00:54,815 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.GroovyRunnerImpl:244) - Loading script from 'purgeLibrary.groovy'.
2017-11-14 10:00:55,015 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.e.ExecutePluginImpl:187) - Groovy execution 'purgeLibrary' has been successfully registered.
2017-11-14 10:00:55,023 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.j.JobsPluginImpl:92) - Groovy job 'purgeOutdatedArtifacts' has been successfully scheduled to run.
2017-11-14 10:00:55,024 [http-nio-8081-exec-74] [INFO ] (o.a.a.p.GroovyRunnerImpl:296) - Script 'purgeLibrary' loaded.
And using the REST API again, we have manually executed the purgeLibrary script (again verified through log messages).
The purgeOutdatedArtifacts job and the purgeLibrary execution are both wrappers around the same internal method, but the job uses default params.
However, this 'job' never actually executes; again, we can tell because there is nothing in the logs.
The relevant hook points are below:
executions {
    purgeLibrary() { params ->
        def dryRun = params["dryRun"] ? params["dryRun"][0] as boolean : false
        libraryPurge(dryRun)
    }
}

jobs {
    // Finds CI/CD-published artifacts that have reached max daysToLive and purges them.
    // Executes daily at 1am, server time.
    purgeOutdatedArtifacts(cron: "0 0 1 * * ?") {
        libraryPurge(true) // default dryRun flag to true
    }
}
Now, all of this works on our test server, which runs the same version of Artifactory. So my assumption is that there is a configuration difference on the production server, something missing or not set correctly.
Any idea why the 'job' does not actually execute?
Thanks!
So it turns out the job is actually running, but at a different start time, one that corresponds to the cron value originally in the script.
I am still researching why the start time was not updated when we reloaded the script.
I am not deleting the question, in case this adds value to someone else.