Querying Titan ElasticSearch backend via Rexster - gremlin

I have Titan 0.3.2 running in embedded mode, and have been able to create and query ElasticSearch indexes via the Gremlin shell (see previous question). I am using the default configuration, which calls the ES index "search".
These searches return the correct nodes without error via the Gremlin shell:
g.query().has('my_label','abc').vertices()
g.query().has('my_label',CONTAINS,'abc').vertices()
However, if I attempt to run these same Gremlin queries via RexPro, Rexster sends back this error for the first query above:
java.util.concurrent.ExecutionException:
javax.script.ScriptException:
javax.script.ScriptException:
java.lang.IllegalArgumentException: Index is unknown or not configured: search
and this for the second:
java.util.concurrent.ExecutionException:
javax.script.ScriptException:
javax.script.ScriptException:
groovy.lang.MissingPropertyException: No such property: CONTAINS for class: Script3
Similarly, if I try to query on the indexed key via the REST API (GET):
http://localhost:8182/graphs/graph/vertices?key=my_key&value=abc
I receive the same error response:
{"message":"Index is unknown or not configured: search","error":"Index is unknown or not configured: search"}
Lastly, if I try to start with a clean database and run the index creation script through RexPro:
g.makeType().name("my_label").dataType(String.class).indexed("search", Vertex.class).unique(Direction.OUT).makePropertyKey();
I see the same unknown index error:
java.util.concurrent.ExecutionException:
javax.script.ScriptException:
javax.script.ScriptException:
java.lang.IllegalArgumentException: Index is unknown or not configured: search
So it seems that Rexster needs some additional information about the indexing backend, possibly in its configuration file (I am using the default one included with the installation). Is anyone familiar with this issue? I'm happy to provide more information.

You might have a mix of things going on, but the main thing is that as of Titan 0.3.2, Rexster does not automatically import Titan classes, so you can't query with CONTAINS. You need to specify the full package name when doing so:
com.thinkaurelius.titan.core.attribute.Text.CONTAINS
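So the second query above, sent over RexPro, becomes:
g.query().has('my_label',com.thinkaurelius.titan.core.attribute.Text.CONTAINS,'abc').vertices()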
I can't say for sure what else is wrong, but it looks like the Elasticsearch index is not configured properly. That doesn't have much to do with the Rexster configuration file for Titan Server; it has more to do with the second argument you pass to titan.sh, which contains the Titan configuration. Make sure that Elasticsearch is configured as it is in the default file shipped with the installation: https://github.com/thinkaurelius/titan/blob/master/config/titan-server-cassandra-es.properties
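For reference, the index-related settings in that file look roughly like this (the directory path is illustrative and will vary by setup):
storage.index.search.backend=elasticsearch
storage.index.search.directory=/tmp/searchindex
storage.index.search.client-only=false
storage.index.search.local-mode=true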

Related

SchemaViolationException while creating a new janus cluster

I am trying to create a JanusGraph cluster with Cassandra as the backend and Elasticsearch for indexing. I am getting a warning saying:
org.janusgraph.core.SchemaViolationException: Adding this property for key [~T$SchemaName] and value [rtusername] violates a uniqueness constraint [SystemIndex#~T$SchemaName]
followed by an error saying:
ERROR org.apache.tinkerpop.gremlin.server.GremlinServer - Gremlin Server Error
java.lang.IllegalStateException: Could not create/configure Authenticator null
Can someone please help me understand what I am missing here? What does rtusername actually mean here?
Things to check for the first error message:
is an old graph with the same value for the keyspace present (beware of the default value "janusgraph")?
was the property key added by a different Gremlin client (check by printing the schema before adding the property key; see the sketch after this list)?
are you sure this happens while adding the property key to the schema (this error can also occur while adding a property to a vertex of the graph)?
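For the second check, a quick way to print the schema from the Gremlin console (a sketch, assuming direct access to the graph instance):
mgmt = graph.openManagement()
println(mgmt.printSchema()) // lists property keys, vertex/edge labels and indexes
mgmt.rollback() // read-only inspection, nothing to commit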
The second error message is related to the communication between your gremlin client and the JanusGraph Server. Things to check:
do the TinkerPop versions of the gremlin client and the JanusGraph Server match (compatibility matrix)?
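One version-sensitive setting worth checking is the authentication entry in the server's gremlin-server.yaml, since a stale key or class name from an older TinkerPop version can surface exactly as "Could not create/configure Authenticator null". A minimal sketch for recent TinkerPop versions (older ones used className instead of authenticator):
authentication: {
  authenticator: org.apache.tinkerpop.gremlin.server.auth.AllowAllAuthenticator
}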

Can we use standalone Spring Cloud Schema Registry with Confluent's KafkaAvroSerializer?

I have a project using Spring Cloud Stream with the Kafka Streams binder. For the output of a stream, I am using Avro, with the Serde provided by Confluent (io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde).
I am able to use it with the Confluent Schema Registry; serialization and deserialization take place correctly.
However, I wanted to see if we can use the Spring Cloud Schema Registry Server instead of the Confluent one. I configured a standalone Schema Registry server and pointed my project at it (changed the schemaRegistryClient.endpoint and schema.registry.url properties).
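The changed entries looked roughly like this (the endpoint is a placeholder, and the exact property prefix depends on the binder configuration):
spring.cloud.schemaRegistryClient.endpoint=http://localhost:8990
spring.cloud.stream.kafka.streams.binder.configuration.schema.registry.url=http://localhost:8990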
When I tried it out, it seems Spring Cloud is able to work with the standalone server. It registers the schema available in the resources folder as a .avsc file. However, when I send a message, it seems the Confluent serializer continues to approach it as a Confluent Schema Registry (which has different REST endpoints from Spring Schema Registry). As a result, it gets a 405 response code.
We get the following exception (partial stack trace):
org.apache.kafka.common.errors.SerializationException: Error registering Avro schema: <my-avro-schema>
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: Unexpected character ('<' (code 60)): expected a valid value (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (sun.net.www.protocol.http.HttpURLConnection$HttpInputStream); line: 1, column: 2]; error code: 50005
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:230)
It seems to me that there are two possibilities:
Spring Schema Registry Server can work only with the content-type provided by Spring (specified as content-type: application/*+avro) and not with the native Serde provided by Confluent, or
There is an issue with the project configuration.
Can someone help me figure out which one it is? If it is the second one, can someone point out what is wrong?
Each schema registry provider requires a proprietary SerDe library. For example, if you would like to integrate the AWS Glue Schema Registry with Kafka, you would need Amazon's SerDe library. Likewise, Confluent's SerDe library expects Confluent's Schema Registry at the address specified in the schema.registry.url property.
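As an illustration, here is roughly how Confluent's Serde is bound to a registry endpoint; MyRecord is a placeholder for a class generated from an .avsc schema:
import io.confluent.kafka.streams.serdes.avro.SpecificAvroSerde;
import java.util.Map;

SpecificAvroSerde<MyRecord> valueSerde = new SpecificAvroSerde<>();
// The Serde speaks Confluent's REST protocol to whatever answers at
// schema.registry.url, so pointing it at a Spring Cloud Schema Registry
// (different endpoints, different payloads) fails with errors like the
// 405 described above.
valueSerde.configure(
        Map.of("schema.registry.url", "http://localhost:8081"),
        false); // false = configure as a value Serde, not a key Serde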

ORA-14102: only one LOGGING or NOLOGGING clause may be specified

While importing an Oracle schema from a dump file, I am getting the error below while creating tables.
ORA-14102: only one LOGGING or NOLOGGING clause may be specified.
I see this error for several tables while they are being created from the dump file.
How can I enable or disable LOGGING/NOLOGGING at the schema level before I start the import?
When performing an Oracle database export with expdp on Oracle 11gR2 (11.2.0.1) and then importing it into the database with impdp, the following error messages appear in the import log file:
ORA-39083: Object type INDEX failed to create with error:
ORA-14102: only one LOGGING or NOLOGGING clause may be specified
This is a known Oracle 11gR2 issue: DBMS_METADATA.GET_DDL returns invalid syntax for a created index, so during index creation both the NOLOGGING and LOGGING keywords appear in the DDL. Download and apply Patch 8795792 from Oracle to resolve this issue.
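If applying the patch is not immediately possible, a commonly suggested workaround is to let Data Pump strip the segment attributes (which include the LOGGING clause) during import; an illustrative invocation, with user, directory and dump file as placeholders:
impdp scott/tiger DIRECTORY=dp_dir DUMPFILE=schema.dmp TRANSFORM=SEGMENT_ATTRIBUTES:N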

How do I index already existing objects in Riak

I created a bucket in Riak and stored some key-value pairs (the value being a JSON object). After this, I ran /usr/sbin/search-cmd install <bucket> to enable Riak Search for the bucket.
Each object has a 'type' attribute, and I am trying to search for objects of a particular type using /usr/sbin/search-cmd search <bucket> "type:xyz", but I get the following error:
RPC to 'riak#127.0.0.1' failed: {'EXIT',
{badarg,
[{ets,lookup,
[schema_table,<<"catalog">>],
[]},
{riak_search_config,get_schema,1,
[{file,"src/riak_search_config.erl"},
{line,69}]},
{riak_search_client,parse_query,3,
[{file,"src/riak_search_client.erl"},
{line,57}]},
{search,search,3,
[{file,"src/search.erl"},{line,55}]},
{riak_search_cmd,search,3,
[{file,"src/riak_search_cmd.erl"},
{line,188}]},
{rpc,'-handle_call_call/6-fun-0-',5,
[{file,"rpc.erl"},{line,203}]}]}}
I read that indexing happens through a pre-commit hook, so I also POSTed all the objects again, but still got no results. Am I missing a step in setting up Riak Search?
It turned out that Riak Search was not enabled in my app.config.

Riak Map/Reduce enableForSearch() error

I'm trying to use the Riak Java Client in an application, but I'm running into some errors. What I need is to perform a Riak Search query as input for a map/reduce job. According to the official tutorial, the search property must be enabled on the bucket. I'm doing so in the following code:
IRiakClient riakClient = RiakFactory.httpClient(HTTP_CLIENT);
Bucket bucket = (Bucket) riakClient.createBucket("test-bucket").enableForSearch().execute();
When I do this, the store operation on the bucket no longer works, and the following error appears:
com.basho.riak.client.RiakRetryFailedException: java.io.IOException: 500 Error:
{precommit_fail,{hook_crashed,{riak_search_kv_hook,precommit,error,badarg}}}
I've already googled the problem, but it wasn't much help!
Do you have search enabled in your app.config? Find this section
%% Riak Search Config
{riak_search, [
%% To enable Search functionality set this 'true'.
{enabled, false}
]},
and set enabled to true.
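Once search is enabled, a search-fed map/reduce with the 1.x Java client might look like this (a sketch, assuming the same HTTP client setup as in the question):
IRiakClient riakClient = RiakFactory.httpClient(HTTP_CLIENT);
// Use a Riak Search query over the bucket as the map/reduce input,
// then map each matching object's JSON value into the result set.
MapReduceResult result = riakClient.mapReduce("test-bucket", "type:xyz")
        .addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true)
        .execute();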
