I'm trying to use the Riak Java Client in an application, but I'm running into an error. What I need is to perform a Riak Search query as input for a Map/Reduce job. According to the official tutorial, the search property must be enabled on the bucket, which I do in the following code:
IRiakClient riakClient = RiakFactory.httpClient(HTTP_CLIENT);
Bucket bucket = (Bucket) riakClient.createBucket("test-bucket").enableForSearch().execute();
When I do this, the store operation on the bucket no longer works, and the following error appears:
com.basho.riak.client.RiakRetryFailedException: java.io.IOException: 500 Error:
{precommit_fail,{hook_crashed,{riak_search_kv_hook,precommit,error,badarg}}}
I've already googled the problem, but without much luck!
Do you have search enabled in your app.config? Find this section:
%% Riak Search Config
{riak_search, [
%% To enable Search functionality set this 'true'.
{enabled, false}
]},
and set enabled to true.
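Once search is enabled there (and the node restarted), the store should go through and the search query can be fed into a Map/Reduce job. A minimal sketch, assuming the legacy 1.x HTTP client; the bucket name, key, JSON value, the name_s:Alice query, and the mapReduce(bucket, query) search-input overload are illustrative assumptions to check against your client version:

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;
import com.basho.riak.client.query.MapReduceResult;
import com.basho.riak.client.query.functions.NamedJSFunction;

public class SearchMapReduceSketch {
    public static void main(String[] args) throws Exception {
        IRiakClient riakClient = RiakFactory.httpClient("http://127.0.0.1:8098/riak");

        // With riak_search enabled, the precommit hook should no longer crash on store
        Bucket bucket = riakClient.createBucket("test-bucket").enableForSearch().execute();
        bucket.store("doc1", "{\"name_s\":\"Alice\"}").execute();

        // Use a Riak Search query as the input to a Map/Reduce job
        MapReduceResult result = riakClient
                .mapReduce("test-bucket", "name_s:Alice")
                .addMapPhase(new NamedJSFunction("Riak.mapValuesJson"), true)
                .execute();
        System.out.println(result.getResultRaw());
    }
}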
When an error occurs, I want to see the data payload uploaded by users.
I wasn't able to find the POST/PUT/PATCH data in the APM report (Kibana).
Is there an option I need to turn on for this?
Most agents have a CaptureBody config option; with that you can capture the request body. It's off by default; you can set it to errors.
I linked the Java agent docs; you should be able to find the same config for (I think) all other agents.
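For the Java agent specifically, here is a sketch of how this is usually set; treat the exact option name and values as something to double-check against the docs for your agent version:

# elasticapm.properties on the application classpath
capture_body=errors

# or as a JVM system property when starting the application
-Delastic.apm.capture_body=errors

# or as an environment variable
ELASTIC_APM_CAPTURE_BODY=errors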
Hello, I am trying to read a module with this code:
(: Entry point - must be a read-only query. :)
xdmp:invoke(
'/path/mydocument.xqy',
(xs:QName('var1'), 'test',
xs:QName('var2'), "response"))
I am new to MarkLogic. I am using Groovy and the API to connect to it, but I also saw that I can invoke the module this way, and indeed I did, but it returns:
your query returned an empty sequence
I want to know if I can query xs:QName('var1'), 'test', replacing test with a wildcard, or how I can get all the information from the file called /path/mydocument.xqy.
I tried to use this:
xdmp:document-get("/path/mydocument.xqy")
but it says the file is not found. If I use invoke I can query it, but I don't know what values I have to pass. I was wondering if there is something like SQL's % wildcard, or some other way to get all the data.
To answer the first question: "I am trying to read a module"
IF the module is in the database, then you must query the Modules database in which the module resides.
If the module is on the filesystem, then you cannot directly access its source as a document, but you can read it by executing xdmp:filesystem-file().
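For example (the filesystem path is a placeholder):

xdmp:filesystem-file("/filesystem/path/to/mydocument.xqy")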
Simplification:
With the default configuration of the server and REST client, user-placed modules are in the "Modules" database and user-placed documents are in the "Documents" database. This means that if you do a GET (read a "document") with no additional parameters, it will return documents from the "Documents" database. Assuming you are using the default configuration for client and server, this explains the behaviour you are seeing: your module code is in the Modules database, so doing a GET for it by name searches the Documents database and correctly does not find it.
You don't mention, and I don't know, the Groovy library being used, but the REST API itself, and every general-purpose MarkLogic REST client library I am familiar with, has an option for overriding the default database with another. If the Groovy library supports that, then specify the "Modules" database for your query and it should return the module document. Note: the content type will be application/text, not text/xml.
You can simplify things for testing by bypassing the libraries: just use a browser and try a URL like this: http://yourserver.com:8000/v1/documents?uri=/your/module.xqy&database=Modules
Ref: https://docs.marklogic.com/REST/GET/v1/documents
Make the appropriate changes to the path and server for your use.
If you are still confused, then you should start with the basic MarkLogic tutorials and work through them one by one. You will most likely succeed faster by doing this than by jumping straight into code you don't understand yet.
DETAIL:
Note: the default behaviour is to EXECUTE documents when doing a GET call, using the Modules database. Thus doing a GET of http://yourserver:8000/your/module.xqy will EXECUTE it, not return its source.
You will notice the REST API has a uri query parameter. This is EXECUTING the REST API code on /v1/documents, which in turn reads the document specified by the uri and database parameters and returns it.
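If you want to do the same check from code rather than a browser, here is a hedged Java sketch; the host, port, credentials, and module URI are placeholders, and it assumes the /v1/documents endpoint with basic or digest authentication as described above:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Authenticator;
import java.net.HttpURLConnection;
import java.net.PasswordAuthentication;
import java.net.URL;

public class ReadModuleSource {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials for a user with read access to the Modules database
        Authenticator.setDefault(new Authenticator() {
            @Override
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication("rest-user", "rest-password".toCharArray());
            }
        });
        // Ask /v1/documents for the module, overriding the default (Documents) database
        URL url = new URL("http://yourserver.com:8000/v1/documents"
                + "?uri=/path/mydocument.xqy&database=Modules");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // prints the XQuery source of the module
            }
        }
    }
}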
I guess I can use:
xdmp:invoke("/pview/get-pview-browse-profiles.xqy",
  cts:and-query((
    cts:element-value-query(
      xs:QName("letter"), "*", "wildcarded"),
    cts:element-value-query(
      xs:QName("collection"), "*", "wildcarded"))))
although it doesn't return anything
I have the following setup: Riak 1.4.12, Riak CS 1.5.3, Stanchion 1.5.0.
I am able to list bucket contents, and authentication works (I get a response when listing or trying to remove a bucket, or when PUTting a file), but I get an AccessDenied error when trying to create a bucket.
I found this thread http://riak-users.197444.n3.nabble.com/RIAK-CS-Unable-to-create-bucket-using-s3cmd-AccessDenied-td4032375.html and tried adding signature_v2 = True to .s3cfg with no success, and I've also tried three versions of s3cmd (1.5.0, 1.5.0alpha, 1.0.1). I also tried creating a bucket using the Python library boto, which also gives an access denied error.
I'm stumped :( Any suggestions on where I should look next would be greatly appreciated! I'm not sure where the logs for individual operations against Riak CS are; I've set the lager log level to debug and wasn't able to see anything in the logs.
Thanks!
Ambert
I posted the same question to the riak-users mailing list and got an answer!
In my case, I had to set the admin.key and admin.secret in /etc/stanchion/stanchion.conf.
After setting them, s3cmd mb succeeded.
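For reference, a sketch of the relevant lines in /etc/stanchion/stanchion.conf; the values are placeholders and must match the admin key and secret configured for Riak CS, and Stanchion needs a restart after changing them:

admin.key = YOUR-RIAK-CS-ADMIN-KEY
admin.secret = YOUR-RIAK-CS-ADMIN-SECRET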
I have Titan 0.3.2 running in embedded mode, and have been able to create and query ElasticSearch indexes via the Gremlin shell (see previous question). I am using the default configuration, which calls the ES index "search".
These searches return the correct nodes without error via the Gremlin shell:
g.query().has('my_label','abc').vertices()
g.query().has('my_label',CONTAINS,'abc').vertices()
However, if I attempt to run these same Gremlin queries via RexPro, Rexster sends back this error for the first query above:
java.util.concurrent.ExecutionException:
javax.script.ScriptException:
javax.script.ScriptException:
java.lang.IllegalArgumentException: Index is unknown or not configured: search
and this for the second:
java.util.concurrent.ExecutionException:
javax.script.ScriptException:
javax.script.ScriptException:
groovy.lang.MissingPropertyException: No such property: CONTAINS for class: Script3
Similarly, if I try to query on the indexed key via the REST API (GET):
http://localhost:8182/graphs/graph/vertices?key=my_key&value=abc
I receive the same error response:
{"message":"Index is unknown or not configured: search","error":"Index is unknown or not configured: search"}
Lastly, if I try to start with a clean database and run the index creation script through RexPro:
g.makeType().name("my_label").dataType(String.class).indexed("search", Vertex.class).unique(Direction.OUT).makePropertyKey();
I see the same unknown index error:
java.util.concurrent.ExecutionException:
javax.script.ScriptException:
javax.script.ScriptException:
java.lang.IllegalArgumentException: Index is unknown or not configured: search
So it seems that Rexster needs some additional information about the indexing backend, possibly in its configuration file (I am using the default one included with the installation). Is anyone familiar with this issue? I'm happy to provide more information.
You might have a mix of things going on, but the main thing is that as of Titan 0.3.2, Rexster does not automatically import Titan classes, so you can't query with CONTAINS. You need to specify the full package name when doing so:
com.thinkaurelius.titan.core.attribute.Text.CONTAINS
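Applied to the second query from the question, the RexPro version of the query would look like this:

g.query().has('my_label', com.thinkaurelius.titan.core.attribute.Text.CONTAINS, 'abc').vertices()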
I can't say for sure what else is wrong, but it looks like the Elasticsearch index is not configured properly. That doesn't have much to do with the Rexster configuration file for Titan Server; it has more to do with the second argument you pass to titan.sh, which contains the Titan configuration. Make sure that Elasticsearch is configured as it is in this file (included by default with the installation): https://github.com/thinkaurelius/titan/blob/master/config/titan-server-cassandra-es.properties
I created a bucket in Riak and stored some key-value pairs (the values being JSON objects). After this I ran /usr/sbin/search-cmd install <bucket> to enable Riak Search for the bucket.
Each object has a 'type' attribute, and I am trying to search for objects of a particular type using /usr/sbin/search-cmd search <bucket> "type:xyz", but I get the following error:
RPC to 'riak#127.0.0.1' failed: {'EXIT',
{badarg,
[{ets,lookup,
[schema_table,<<"catalog">>],
[]},
{riak_search_config,get_schema,1,
[{file,"src/riak_search_config.erl"},
{line,69}]},
{riak_search_client,parse_query,3,
[{file,"src/riak_search_client.erl"},
{line,57}]},
{search,search,3,
[{file,"src/search.erl"},{line,55}]},
{riak_search_cmd,search,3,
[{file,"src/riak_search_cmd.erl"},
{line,188}]},
{rpc,'-handle_call_call/6-fun-0-',5,
[{file,"rpc.erl"},{line,203}]}]}}
I read that indexing happens through a pre-commit hook, so I also POSTed all the objects again, but still no results. Am I missing a step in setting up Riak Search?
It turned out that Riak Search was not enabled in my app.config.
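For anyone hitting the same error: set enabled to true in the riak_search section of app.config and restart the node, then re-store the objects so the pre-commit hook can index them (indexing only happens at write time). A sketch of the changed section:

%% Riak Search Config
{riak_search, [
    {enabled, true}
]},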