I've created a local version of the Wikidata API using the instructions here, and I'd like to specify a custom timeout to override the 60-second timeout in the official API. I haven't found anything in RWStore.properties, but perhaps I'm missing something.
According to the Blazegraph documentation, this should be the queryTimeout parameter in the web.xml file.
Using the pre-built full service package (https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#Standalone_service) with blazegraph-service-0.3.0.war, which has no web.xml or other files to modify, there is also the following way to adjust the query timeout limit:
Open the runBlazegraph.sh file and append the following option:
-Dorg.wikidata.query.rdf.tool.rdf.RdfRepository.timeout=3600
to the Java options.
This increases the timeout to 1 hour (3,600 seconds).
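For illustration, a minimal sketch of what the edit might look like; the variable name here is hypothetical, since runBlazegraph.sh differs between releases, so append the option to whatever holds the JVM options in your copy:

# hypothetical example -- append the property to the script's JVM options
EXTRA_JVM_OPTS="${EXTRA_JVM_OPTS} -Dorg.wikidata.query.rdf.tool.rdf.RdfRepository.timeout=3600"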
When an error occurs, I want to see the data payload uploaded by users.
I wasn't able to find the POST/PUT/PATCH data in the APM report (Kibana).
Is there an option I need to turn on for this?
Most agents have a CaptureBody config option with which you can capture the request body. It's off by default; you can set it to errors.
I linked the Java docs; you should be able to find the same config for (I think) all other agents.
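For example, assuming the Java agent (where the option is called capture_body), you could set it in elasticapm.properties or as a JVM system property; the jar paths and app name below are placeholders, and other agents use different key names, so treat this as a sketch:

# elasticapm.properties
capture_body=errors

# or as a system property when starting the JVM
java -Delastic.apm.capture_body=errors -javaagent:/path/to/elastic-apm-agent.jar -jar app.jar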
Is there any difference in the patch API between the embedded and standard versions of the server?
Is there a need to configure the document store in some way to enable the patch API?
I'm writing a test which uses embedded Raven. The code works correctly on the standard version, but in the test it doesn't: I constantly receive the patch result DocumentDoesNotExists. I've checked with the debugger and the document exists in the store, so it is not a problem with the test.
Here you can find a repro of my issue: https://gist.github.com/pblachut/c2e0e227fa3beb51f4f9403505c292bb
I've reached out to RavenDB support and have an answer to my question.
There should be no difference between the embedded and normal versions of the server. The problem was that I did not explicitly pass which database the batch command should be invoked against, so as a result I was trying to patch a document in the system database.
var result = await documentStore.AsyncDatabaseCommands.ForDatabase("testDb").BatchAsync(new[] {command});
I assumed that the database name would be taken from the session (because I got the documentStore from there), but the database name should always be passed.
var documentStore = session.Advanced.DocumentStore;
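Putting those two lines together, a minimal sketch of the fix (assuming this runs inside an async method, command is the patch command you already built, and testDb is the database the session is opened against):

// get the store from the session, then target the database explicitly
var documentStore = session.Advanced.DocumentStore;
var result = await documentStore.AsyncDatabaseCommands
    .ForDatabase("testDb")
    .BatchAsync(new[] { command });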
How can I get a list of the active workflows/tasks of the current user in Alfresco via the JavaScript API?
I need to create a rule which will write the active tasks to some file and attach this rule to a folder.
Yes, it is possible to get the list of workflows.
You can do that with the following APIs.
GET /alfresco/service/api/task-instances?authority={authority?}&state={state?}&priority={priority?}&pooledTasks={pooledTasks?}&dueBefore={dueBefore?}&dueAfter={dueAfter?}&properties={properties?}&maxItems={maxItems?}&skipCount={skipCount?}&exclude={exclude?}
GET /alfresco/service/api/workflow-instances/{workflow_instance_id}/task-instances?authority={authority?}&state={state?}&priority={priority?}&dueBefore={isoDate?}&dueAfter={isoDate?}&properties={prop1, prop2, prop3...?}&maxItems={maxItems?}&skipCount={skipCount?}&exclude={exclude?}
Note: you can set your own parameters in the request according to your requirements.
See the documentation.
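As a usage sketch, the first web script can be called like this; the host, port, credentials, and the authority/state values are just examples (check the web script reference for the accepted parameter values):

# list the in-progress task instances assigned to a given user
curl -u admin:admin "http://localhost:8080/alfresco/service/api/task-instances?authority=admin&state=in_progress&maxItems=50&skipCount=0"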
I have the following setup: Riak 1.4.12, Riak CS 1.5.3, Stanchion 1.5.0.
I am able to list bucket contents, and the authentication works (I get a response when listing a bucket, trying to remove a bucket, or PUTting a file), but I get an AccessDenied error when trying to create a bucket.
I found this thread http://riak-users.197444.n3.nabble.com/RIAK-CS-Unable-to-create-bucket-using-s3cmd-AccessDenied-td4032375.html and tried adding signature_v2 = True to .s3cfg with no success. I've also tried three versions of s3cmd (1.5.0, 1.5.0 alpha, 1.0.1), and I also tried creating a bucket using the Python library boto, which also gives an access denied error.
I'm stumped :( Any suggestions on where I should look next would be greatly appreciated! I'm not sure where the logs for individual operations against Riak CS are; I've set the lager log level to debug and wasn't able to see anything in the logs.
Thanks!
Ambert
I posted the same question to the riak-users mailing list, and got an answer!
In my case, I had to set the admin.key and admin.secret in /etc/stanchion/stanchion.conf.
After setting them, s3cmd mb succeeded.
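For reference, a minimal sketch of the relevant stanchion.conf entries; the key and secret values are placeholders, and they normally need to match the admin credentials configured for Riak CS itself:

# /etc/stanchion/stanchion.conf
admin.key = YOUR-ADMIN-KEY
admin.secret = YOUR-ADMIN-SECRET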
My original problem was that I want to increase my DynamoDB write throughput before I run the pipeline, and then decrease it when I'm done uploading (doing it at most once a day, so I'm fine with the limitations on decreasing throughput).
The only way I found to do it is through a shell script that will issue the API commands to alter the throughput. How does authentication work with my access_key and secret_key when the instance is a resource that the pipeline creates for me? (I can't log in to set the ~/.aws/config file, and I don't really want to create a custom AMI just for this.)
Should I write the script in bash? Can I use the Ruby/Python AWS SDK packages, for example? (I prefer the latter.)
How do I pass my credentials to the script? Are there runtime variables (like #startedDate) that I can pass as arguments to the activity with my key and secret? Do I have any other way to authenticate with either the command-line tools or the SDK packages?
If there is another way to solve my original problem, please let me know. I've only arrived at the ShellCommandActivity solution because I couldn't find anything else in the documentation/forums.
Thanks!
OK, found it: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-roles.html
The resourceRole in the default object of your pipeline is the one assigned to resources (Ec2Resource) that are created as part of the pipeline activation.
The default one is configured to have all your permissions, and the AWS command-line and SDK packages automatically look for those credentials, so there is no need to update ~/.aws/config or pass credentials manually.
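As an illustration, the script run by the ShellCommandActivity could then call the AWS CLI directly and rely on the role credentials; the table name, capacity values, and region below are placeholders:

#!/bin/bash
# raise the write throughput before the load; no credentials are passed -- the CLI
# picks up the credentials of the resourceRole attached to the Ec2Resource
aws dynamodb update-table \
    --table-name MyTable \
    --region us-east-1 \
    --provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=1000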