Why does Rexster show "titangraph[cassandra:null]" even though it's connected? - gremlin

When I connect to Titan via the Gremlin console it says... titangraph[cassandra:127.0.0.1]
Rexster however says... titangraph[cassandra:null] even though I can browse the same set of vertices.
Why is this? Rexster makes it look as though it hasn't managed to connect.

This message indicates that Cassandra did not start correctly.
Try starting Titan with the following:
titan.sh -c cassandra-es start
Have a look at /conf for additional configuration files.
Unless you really have to stay on an older version, I strongly suggest installing Titan 0.5.0, which comes with many useful features.
If you're starting with graph databases and Titan, I suggest trying with a single machine cluster or Berkeley DB as a storage backend. You may not need Cassandra yet.
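For illustration, a minimal storage configuration for each option might look like the following (the file paths and values here are just examples; adjust them to your installation):

# BerkeleyDB backend, local directory
storage.backend=berkeleyje
storage.directory=/tmp/titan/berkeley

# Cassandra backend on the local machine
storage.backend=cassandra
storage.hostname=127.0.0.1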
You can also have a look at Titan/Aurelius official mailing list, I know the issue you're experiencing has been discussed there before: https://groups.google.com/forum/#!forum/aureliusgraphs. You can search for resources there (see, for example, https://groups.google.com/d/msg/aureliusgraphs/bviB6E5TZ-A/TJxQv0U7WQEJ).
In the meantime, you can try Titan v0.5.0 from Node.js by connecting via HTTP with https://github.com/gulthor/grex (an HTTP client). The recommended way of connecting to Rexster is via HTTP (TinkerPop 2.x) or WebSocket (in the upcoming TinkerPop 3, which Titan will support in a future version).

Related

Maxscale "Capability mismatch"

I did a fresh install of MaxScale, and I was trying to set up a read-write-split service on a master-slave MariaDB cluster.
When I was trying to connect with DataGrip or DBeaver, I got the following error message:
[HY000][1927] Capability mismatch (bdd-master)
But when I use the mysql command line client, it works well.
Do you have any idea of what could be wrong?
MaxScale sends a Capability mismatch error when it detects that the client application requests a protocol capability that one of the backend databases cannot support. In general, this should not happen, as MaxScale tries to mimic the backend database and calculates the capabilities so that these sorts of mismatches do not occur.
There are some known bugs that can cause this, both in MaxScale as well as in old versions of MariaDB and MySQL. Upgrading to the latest possible version of MaxScale should help solve any problems you might see.
Additionally, you should disable the query cache in the database if you are using MySQL, as there is a bug in MySQL (and in old MariaDB versions as well) that causes these sorts of problems to appear.
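Disabling the query cache is done in the backend database's configuration, for example (a sketch; the query cache variables exist in MySQL 5.x and older MariaDB versions, and were removed entirely in MySQL 8.0):

# my.cnf on the backend database server
[mysqld]
query_cache_type = 0
query_cache_size = 0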
It seems that it is related to the router used (readwritesplit).
DataGrip sends this command when it initiates the connection:
set autocommit=1, session_track_schema=1, sql_mode = concat(@@sql_mode,',STRICT_TRANS_TABLES')
It seems that some of these parameters are not supported by readwritesplit.

GremlinServerError: 499

I am running a Neptune server on AWS and making Gremlin queries to the database via an IPython cell magic in a Jupyter notebook. I've got a number of traversals running and I am getting an error that comes from aiogoblin in its resultset.py file: GremlinServerError: 499: {"requestId":"5bb1e6ea-49ec-4a1d-9364-2b1bf717df9c","code":"InvalidParameterException","detailedMessage":"The [eval] message contains 66 bindings which is more than is allowed by the server 64 configuration"}
How can I make continued queries against the server without this error message popping up?
I believe there was a known issue with the client/magic you are using, and I don't think it has been updated in four years or so. I vaguely remember you could work around it by doing something like %reset in the cell, but I really think you would be better off using a different client that is regularly updated and supported.
You could instead use the Apache TinkerPop Gremlin Python client (pip install gremlinpython) or try the new Amazon Neptune Workbench which offers a %%gremlin cell magic.
If you use the Gremlin Python client in a Jupyter notebook you can still issue queries in much the same way; you would just need to establish a connection to the server in a cell before issuing Python-based queries. There is a blog post that may be of interest located here [1], and stand-alone Python examples you could use to create a cell containing the imports and setup steps can be found here [2] and here [3]. In those samples you would replace localhost with the DNS name of your Neptune endpoint; a minimal connection sketch is also shown after the links below.
If you decide to try the new Neptune Workbench you can create one from the AWS Neptune Console web page.
[1] https://aws.amazon.com/blogs/database/let-me-graph-that-for-you-part-1-air-routes/
[2] https://github.com/krlawrence/graph/blob/master/sample-code/basic-client.py
[3] https://github.com/krlawrence/graph/blob/master/sample-code/glv-client.py
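As a rough sketch of such a setup cell, assuming gremlinpython is installed (pip install gremlinpython) and the placeholder endpoint below is replaced with your Neptune cluster's DNS name:

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Placeholder endpoint; replace with your Neptune endpoint's DNS name.
connection = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = traversal().withRemote(connection)

# A simple traversal to verify the connection works.
print(g.V().limit(5).valueMap().toList())

connection.close()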

What is the recommended way to interface with a CorDapp via an API?

I am designing a CorDapp, which would require user input as well as API integration, and I am considering various approaches to expose flows and vault queries to the outside world.
The default option seems to be to use Corda RPC. Unless I missed something, there are only Java bindings for it, which effectively restricts clients to being JVM-based. This is somewhat inconvenient; ideally I would like something like OpenAPI to make it more open and implementation-agnostic.
Another option is to use some kind of Corda RPC to OpenAPI proxy. I know about Braid, and I'm sure there are others. Braid seems to support deployment as a Corda service packed together with the flows into the CorDapp itself, effectively making it run embedded in the Corda node's JVM.
Braid can be deployed as a standalone proxy too, which I suppose is option three.
Instinctively I find the embedded mode more attractive, as it reduces the number of moving parts compared to a standalone mode. However, I am concerned that such a model may in fact become discouraged at some point, either because the Corda developers come to consider it a misuse of the services facility, or because some organisations will not be keen to deploy such code onto their nodes, especially when they may be running multiple CorDapps. I would imagine anything deployed as part of the Corda JVM would at least require more scrutiny due to its potential impact on everything else running there, which in turn would reduce agility.
I wonder which approach to integrating with a CorDapp is actually recommended?
Edit 1: I know it is technically possible to embed a webserver into the node and expose a REST API from there, at least in the current version of Corda (4.3 at the time of writing). The question is more about whether it is a good idea to do so, or not, and why.
Take a look at the question I asked on Stack Overflow regarding a front end for a CorDapp; it might be of some help.
Here is the link:
"Corda: Can we develop Dapps that will be run by IIS webserver to talk to Corda platform?"
You can use any front-end technology you want.
As of Corda 3, your backend must be JVM-based, for two reasons:
- You need to load various flow, state and other class definitions onto the classpath to pass as arguments to flows, retrieve objects from the vault, etc.
- You need to use the CordaRPCClient library to create an RPC connection to the node.
If you really need to write your back-end in another language, there are a few workarounds:
- Create a thin Java webserver that sits between your main webserver and the node. The Java webserver translates HTTP requests from the main webserver into RPC calls to the node, and RPC responses from the node into HTTP responses to the main webserver. This is the approach taken by libraries such as Braid.
- Use a library such as GraalVM to compile non-JVM languages to JVM bytecode. An example of writing a JVM webserver in JavaScript using GraalVM is available here: https://github.com/nitesh7sid/cordapp-example-nodejs-server-graalvm

RabbitMQ dependency graph

I would like to know if there is any existing tool to generate dependency graph from JSON RabbitMQ broker definition.
I looked into some open-source GitHub projects and didn't find anything that creates a full dependency graph for RabbitMQ (relations between queues, exchanges, routing keys...).
It would be very interesting to have such a tool, to be able to see all RabbitMQ dependencies very quickly, through a graph, in a readable way.
For example:
Which exchange is routing messages to which queues (with routing keys indicated -> bindings)
Is anyone aware of any tool that does that out of the box? Or is the only way to use tools such as neo4j?
I haven't found anything so far, so I decided to do it on my own with neo4j.
I took one RabbitMQ broker definition with shovels, queues, bindings and exchanges:
https://github.com/aaleks/rabbitmq-neo4j-dependency-graph
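As a rough illustration of the approach (this is a sketch, not the linked project's actual code), the definitions export can be walked with the neo4j Python driver and turned into nodes and relationships; the connection URI, credentials and file name below are placeholders:

import json
from neo4j import GraphDatabase

# Placeholders; point these at your own Neo4j instance and definitions export.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with open("definitions.json") as f:  # export from the RabbitMQ management UI
    defs = json.load(f)

with driver.session() as session:
    for ex in defs.get("exchanges", []):
        session.run("MERGE (:Exchange {name: $name, type: $type})",
                    name=ex["name"], type=ex["type"])
    for q in defs.get("queues", []):
        session.run("MERGE (:Queue {name: $name})", name=q["name"])
    # Bindings become relationships carrying the routing key; this sketch
    # assumes queue destinations (destination_type can also be "exchange").
    for b in defs.get("bindings", []):
        session.run(
            "MATCH (e:Exchange {name: $src}), (q:Queue {name: $dest}) "
            "MERGE (e)-[:BINDS_TO {routing_key: $rk}]->(q)",
            src=b["source"], dest=b["destination"], rk=b["routing_key"])

driver.close()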

Is encryption at rest supported on remote protocol in OrientDB?

The OrientDB documentation mentions that encryption at rest is not supported on the remote protocol yet; it can be used only with plocal.
Currently we are using OrientDB version 2.2.22. Database encryption is mandatory for us. We were previously using OrientDB in plocal mode, but now we have a new requirement in which multiple processes from different JVMs need to connect to the same OrientDB database, which is not possible in plocal mode.
Is there any way we can achieve it? Is there any workaround? Is this feature going to be supported in upcoming releases?
If you start your server and provide the key at startup, then from that point on the database is accessible via remote, so it would work. I suggest encrypting the TCP/IP connection too at that point.
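A sketch of what that might look like; storage.encryptionKey is the setting named in the OrientDB 2.2 documentation, and the key value here is only an illustration. If your server script does not forward JVM flags, add the property to the server's JVM options instead:

./server.sh -Dstorage.encryptionKey="T1JJRU5UREJfSVNfQ09PTA=="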
No, it cannot currently be done:
NOTE: Encryption at rest is not supported on remote protocol yet. It can be used only with plocal.
Given your new requirements, it seems like OrientDB is not the right choice for you anymore.
