Riak Search - Results limited to first 10000

I have a Riak Search node running and am trying out some test queries to get a feel for performance.
I'm running a query via the search shell, and the results are restricted to the first 10000. I want all results to come back, but I can't find where this 10000 limit is coming from.

From the riak_search root directory, edit rel/riaksearch/etc/app.config and, in the riak_search configuration section, set max_search_results:
...
{riak_search, [
    {search_backend, merge_index_backend},
    {java_home, "/usr"},
    {max_search_results, <put your max>}
]},

Related

Java API to query CommonCrawl to populate Digital Object Identifier (DOI) Database

I am attempting to create a database of Digital Object Identifiers (DOIs) found on the internet.
By manually searching the CommonCrawl Index Server I have obtained some promising results.
However, I wish to develop a programmatic solution.
This may mean my process only needs to read the index files and not the underlying WARC data files.
The manual steps I wish to automate are:
1. For each currently available CommonCrawl index collection:
2. I search for a URL in the collection using the "Search a url in this collection" form (wildcards -- Prefix: http://example.com/* Domain: *.example.com), e.g. link.springer.com/*
3. This returns almost 6 MB of JSON data that contains approx. 22K unique DOIs.
How can I browse all available CommonCrawl indexes instead of searching for specific URLs?
From reading the API documentation for CommonCrawl I cannot see how I can browse all the indexes to extract all DOIs for all domains.
UPDATE
I found this example Java code https://github.com/Smerity/cc-warc-examples/blob/master/src/org/commoncrawl/examples/S3ReaderTest.java
that shows how to access a Common Crawl dataset.
However, when I run it I receive this exception:
"main" org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 404, ResponseStatus: Not Found, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>common-crawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00160-ip-10-164-35-72.ec2.internal.warc.gz</Key><RequestId>1FEFC14E80D871DE</RequestId><HostId>yfmhUAwkdNeGpYPWZHakSyb5rdtrlSMjuT5tVW/Pfu440jvufLuuTBPC25vIPDr4Cd5x4ruSCHQ=</HostId></Error>
In fact, every file I try to read results in the same error. Why is that?
What are the correct Common Crawl URIs for their datasets?
The data set location changed more than a year ago; see the announcement. However, many examples and libraries still contain the old pointers. You can access the index files for all crawls back to 2013 at s3://commoncrawl/cc-index/collections/CC-MAIN-YYYY-WW/indexes/cdx-00xxx.gz - replace YYYY-WW with the year and week of the crawl, and expand xxx to 000-299 to get all 300 index parts. New crawl data is announced on the Common Crawl group; see also how to access the data.
To get the example code to work, replace lines 24 and 25 with:
String fn = "crawl-data/CC-MAIN-2013-48/segments/1386163035819/warc/CC-MAIN-20131204131715-00000-ip-10-33-133-15.ec2.internal.warc.gz";
S3Object f = s3s.getObject("commoncrawl", fn, null, null, null, null, null, null);
Also note that the commoncrawl group has an updated example.
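Putting the answer together, here is a minimal sketch that streams one index part and regex-scans it for DOIs. The HTTPS mirror URL and the DOI pattern are my assumptions, not part of an official API; loop xxx over 000-299 and repeat per CC-MAIN-YYYY-WW collection to cover everything:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.zip.GZIPInputStream;

public class CdxDoiScan {
    // Loose DOI pattern -- an assumption, not an official grammar.
    private static final Pattern DOI =
            Pattern.compile("10\\.\\d{4,9}/[-._;()/:a-zA-Z0-9]+");

    public static void main(String[] args) throws Exception {
        // One of the 300 index parts of one crawl (assumed HTTPS mirror of the bucket).
        String url = "https://commoncrawl.s3.amazonaws.com/cc-index/collections/"
                   + "CC-MAIN-2016-26/indexes/cdx-00000.gz";
        Set<String> dois = new HashSet<>();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(new URL(url).openStream()), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Each CDX line contains the crawled URL, where DOIs often appear.
                Matcher m = DOI.matcher(line);
                while (m.find()) {
                    dois.add(m.group());
                }
            }
        }
        System.out.println(dois.size() + " unique DOIs found");
    }
}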

Elasticsearch bulk update followed by search

On my server I update some documents using the bulk API:
{"update":{"_type":"post","_retry_on_conflict":"3","_index":"xxxx","_id":"yyyy"}}
{"doc":{"sentiment":"positive","mood":1,"upgrade":true}}
After I get the response I make a new request for the same document using search:
{"query":{"filtered":{"filter":{"ids":{"values":["yyyy"]}}}}}
But the returned document has no updated value (it still has the old value). If I wait for some time, the updated value appears. I think that occurs because bulk is async? Is there any way to fix this?
You can use the refresh API to force an index update, or even add ?refresh=true at the end of the bulk command, though this is normally not recommended. Also, if there is more than one node, you may need to use a synced flush.
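For illustration, a minimal sketch of the second option using plain HttpURLConnection (host and document values are the placeholders from the question, not a tested setup):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class BulkWithRefresh {
    public static void main(String[] args) throws Exception {
        // ?refresh=true makes the affected shards refresh before the bulk call
        // returns, so an immediately following search can see the update.
        URL url = new URL("http://localhost:9200/_bulk?refresh=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        // The same action and source lines as in the question.
        String body =
            "{\"update\":{\"_type\":\"post\",\"_retry_on_conflict\":\"3\",\"_index\":\"xxxx\",\"_id\":\"yyyy\"}}\n"
          + "{\"doc\":{\"sentiment\":\"positive\",\"mood\":1,\"upgrade\":true}}\n";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}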

Most efficient method of pulling in weather data for multiple locations

I'm working on a Meteor mobile app that displays information about local places of interest, and one of the things I want to show is the weather in each location. I currently have my locations stored with latlng coordinates, and they're searchable by radius. I'd like to use the OpenWeatherMap API to pull in some useful 'current conditions' information so that when a user looks at an entry they can see basic weather data. Ideally I'd like to limit the number of outgoing requests to keep the pages snappy (and API requests down).
I'm wondering if I can create a server collection of weather data that I update regularly, server-side (hourly?), that my clients then query (perhaps using a mongo $near lookup?) - that way all of my data is handled within Meteor, rather than each client going out to grab the latest data from the API. I don't want to iterate through all of the locations in my list and make a separate call out to the API for each, as I have approx. 400 locations(!). I'm afraid I'm new to API requests (and Meteor itself), so apologies if this is a poorly phrased question.
I'm not entirely sure if this is doable, or if it's even the best approach - any advice (and links to any useful code snippets!) would be greatly appreciated!
EDIT / UPDATE!
OK, I haven't managed to get this working yet, but I have some more useful details on the data!
If I make a request to the OpenWeatherMap API, I can get data back for all of their locations (which I would like to add to, or update in, a collection). I could then do a regular lookup against that collection, instead of making a client request straight out to them every time a user looks at a location. The JSON data looks like this:
{
    "message": "accurate",
    "cod": "200",
    "count": 50,
    "list": [
        {
            "id": 2643076,
            "name": "Marazion",
            "coord": {
                "lon": -5.47505,
                "lat": 50.125561
            },
            "main": {
                "temp": 292.15,
                "pressure": 1016,
                "humidity": 68,
                "temp_min": 292.15,
                "temp_max": 292.15
            },
            "dt": 1403707800,
            "wind": {
                "speed": 8.7,
                "deg": 110,
                "gust": 13.9
            },
            "sys": {
                "country": ""
            },
            "clouds": {
                "all": 75
            },
            "weather": [
                {
                    "id": 721,
                    "main": "Haze",
                    "description": "haze",
                    "icon": "50d"
                }
            ]
        }, ...
Ideally I'd like to build my own local 'weather' collection that I can search using mongo's $near (to keep outbound requests down, and to speed things up), but I don't know if this will be possible because of the format the data comes back in - I think I'd need to structure my location data like this in order to use a geo search:
"location": {
    "type": "Point",
    "coordinates": [-5.47505, 50.125561]
}
My questions are:
How can I build that collection? (I've seen this - could I do something similar and update existing entries in the collection on a regular basis?)
Does it just need to live on the server, or on the client too?
Do I need to manipulate the data in order to get a geo search to work?
Is this even the right way to approach it?
EDIT/UPDATE2
Is this question too long/much? It feels like it. Maybe I should split it out.
Yes, this is easily possible. Because your question is so large, I'll give you a high-level explanation of what I think you need to do.
Create a collection where you're going to save the weather data.
Set up a request worker that fetches new data and updates the collection on a set interval. Use something like cron-tick for scheduling the interval.
Requesting data should only happen server-side, and I can recommend the request npm package for that.
Meteor.publish the weather collection and have the client subscribe to that, optionally with a filter for its location.
You should now be getting the weather data on your client and should be able to get freaky with it. A rough sketch of the server side is below.
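To make that concrete, here's a minimal server-side sketch, assuming the OpenWeatherMap 'find' endpoint and Meteor's bundled HTTP package, and using Meteor.setInterval in place of a cron package (the endpoint, parameters, and field names are illustrative, not tested):
// server/weather.js
Weather = new Mongo.Collection('weather');

Meteor.startup(function () {
  // 2dsphere index so clients can be served $near queries.
  Weather._ensureIndex({ location: '2dsphere' });

  var fetchWeather = function () {
    // Hypothetical request -- check the OpenWeatherMap docs for the exact
    // endpoint, parameters, and API key requirements.
    var res = HTTP.get('http://api.openweathermap.org/data/2.5/find', {
      params: { lat: 50.1, lon: -5.4, cnt: 50 }
    });
    res.data.list.forEach(function (w) {
      Weather.upsert({ owmId: w.id }, { $set: {
        owmId: w.id,
        name: w.name,
        // Reshape "coord" into a GeoJSON Point so $near works.
        location: { type: 'Point', coordinates: [w.coord.lon, w.coord.lat] },
        main: w.main,
        weather: w.weather,
        updatedAt: new Date()
      }});
    });
  };

  fetchWeather();
  Meteor.setInterval(fetchWeather, 60 * 60 * 1000); // hourly
});

Meteor.publish('weatherNear', function (lng, lat) {
  return Weather.find({ location: { $near: {
    $geometry: { type: 'Point', coordinates: [lng, lat] },
    $maxDistance: 50000 // metres
  }}});
});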

How to download HTML report from HP ALM Performance Center 11.0 using the REST API

I want to download the default HTML report for a test run from Performance Center storage (using the REST API). Actually I just need the summary.html file.
I was using the following steps in PC 11.5:
Request test scenarios:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/tests?fields=id,last-modified,name,owner&query={subtype-id[=PERFORMANCE-TEST]}&page-size=max
Let the user choose the scenario (id) and request all of its runs:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/runs?page-size=max&fields=id,owner,pc-start-time,duration,status,test-id&query={test-id[=234]}
Let the user choose the run (id) and request the Report (result entity):
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/results?page-size=max&query={run-id[=123];name[=Reports]}&fields=id,name
Request "summary.html" file using file-id taken from previous step response:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/results/{file-id}/storage/report/summary.html
However, it does not work with Performance Center 11.0. It fails at the last step:
qccore.general-error
Not Found
I guess it is because the report path changed.
Can someone tell me the path to summary.html for Performance Center 11.0?
I've had a little bit of success with this. Rather than the request you are using above, I used the following:
http://{server:port}/qcbin/rest/domains/{domain}/projects/{project}/results/{file-id}/logical-storage/
This gave me a zip file with the report inside it.
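If it helps, a rough Java sketch of that download (the authenticate-then-reuse-cookie handshake follows the ALM REST documentation as I understand it; server, domain, project, file-id, and credentials are placeholders, and cookie handling may need to be more robust in practice):
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class DownloadReportZip {
    public static void main(String[] args) throws Exception {
        String base = "http://server:8080/qcbin";

        // 1. Authenticate; ALM returns an LWSSO session cookie on success.
        HttpURLConnection auth = (HttpURLConnection)
                new URL(base + "/authentication-point/authenticate").openConnection();
        auth.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("user:password".getBytes("UTF-8")));
        auth.getResponseCode();
        String cookie = auth.getHeaderField("Set-Cookie");

        // 2. Fetch the result entity's storage as a zip (file-id from the earlier step).
        HttpURLConnection get = (HttpURLConnection) new URL(base
                + "/rest/domains/DEFAULT/projects/MyProject/results/123/logical-storage/")
                .openConnection();
        get.setRequestProperty("Cookie", cookie);
        try (InputStream in = get.getInputStream();
             FileOutputStream out = new FileOutputStream("report.zip")) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) {
                out.write(buf, 0, n);
            }
        }
    }
}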

/alfresco/service/api/people only returns 5000 results in Alfresco EE 4.1.4

I've run into a problem where a call to https://localhost:8080/alfresco/service/api/people is only returning the first 5000 users.
I can't figure out how to get the rest out of the system--that API doesn't appear to support a "skipCount" argument.
I thought that I might be able to get at least a list of the usernames by using the WebDAV URL (https://localhost:8080/alfresco/webdav/User%20Homes/) to get the list, but that also only returns the first 5000.
So, how do I get the list of users from 5001 onwards?
There is a maxResults param you can give.
For example: https://localhost:8080/alfresco/service/api/people?filter=*&maxResults=10000
If you look at this JIRA ticket, you'll see that when you supply a * in the query it will search through SOLR and when you don't it'll search the DB.
If you look at the Java code beneath:
public PagingResults<PersonInfo> getPeople(String pattern, List<QName> filterStringProps, List<Pair<QName, Boolean>> sortProps, PagingRequest pagingRequest)
{
    ParameterCheck.mandatory("pagingRequest", pagingRequest);
    ...
There is a PagingRequest you can supply, so you could request the page of rows/results after the first 5000.
Still, you'd need to make a Java-backed webscript which retrieves the result, along the lines of the sketch below.
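A rough sketch of that webscript's core (untested; whether null is accepted for the pattern, filter, and sort arguments may vary by Alfresco version):
import java.util.List;
import org.alfresco.query.PagingRequest;
import org.alfresco.query.PagingResults;
import org.alfresco.service.cmr.security.PersonService;
import org.alfresco.service.cmr.security.PersonService.PersonInfo;

public class PeoplePager {
    private PersonService personService; // injected via Spring in a real webscript

    public void printPeopleAfter(int skipCount, int pageSize) {
        // Ask for the window of results starting past the first `skipCount` people.
        PagingRequest paging = new PagingRequest(skipCount, pageSize, null);
        PagingResults<PersonInfo> results =
                personService.getPeople(null, null, null, paging);
        List<PersonInfo> page = results.getPage();
        for (PersonInfo person : page) {
            System.out.println(person.getUserName());
        }
    }
}
Calling printPeopleAfter(5000, 5000) would, for instance, fetch users 5001-10000.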
---UPDATE---
In the org.alfresco.repo.jscript.People class there is a default maximum:
private int defaultListMaxResults = 5000;
If you look a bit further, this class is initialized in script-service-context.xml.
So just override the peopleScript bean, set defaultListMaxResults to a higher number, restart Alfresco, and it should work.
