The doc states:
The absence of LastEvaluatedKey is the only way to know that you have
reached the end of the result set
However, if you have exactly 10 items and you Query with a limit of 10, you WILL get a result set with a LastEvaluatedKey even though there are no more items after it.
Is there a reliable method to actually know when reaching the end of the result set?
When you specify a limit (10 as per this question), DynamoDB finds that number of items and does not look beyond them.
Since there are 10 items and the limit is 10, the first request fills the limit exactly and still returns a LastEvaluatedKey.
On the second request it finds no more items in the table and hence returns no LastEvaluatedKey. You will need a loop something like the one below:
List<QueryResult> queryResultList = new ArrayList<>();
// Since Query returns only max 1MB of items at a time,
// this key tells us if more such elements are present in the db.
Map<String, AttributeValue> lastKeyEvaluated = null;
Map<String, AttributeValue> expressionAttributeValue = new HashMap<>();
expressionAttributeValue.put(":primary_key_value", new AttributeValue().withS(primary_key_value));

do {
    QueryRequest queryRequest = new QueryRequest()
            .withTableName(this.getDynamoTable().getTableName())
            .withIndexName(Constants.Table.INDEX_NAME)
            .withKeyConditionExpression("primary_key = :primary_key_value")
            .withExpressionAttributeValues(expressionAttributeValue)
            .withExclusiveStartKey(lastKeyEvaluated);
    QueryResult result = this.getAmazonDynamoDBClient().query(queryRequest);
    queryResultList.add(result);
    lastKeyEvaluated = result.getLastEvaluatedKey();
} while (lastKeyEvaluated != null);
I know this question is 4 years old, but I also encountered this problem and this is how I "solved" it:
In my case I was using the last evaluated key for pagination of the results, and the query size was supplied via a variable (let's call it pageSize).
Using the example of the question, we would be giving to the function a pageSize = 10.
How do we mitigate the lastEvaluatedKey problem when the number of results is exactly pageSize?
We query pageSize + 1 items. If response.length == pageSize + 1, there is at least one extra item, meaning that there are more pages; we then take the lastEvaluatedKey of the pageSize-th item (remember that we retrieved pageSize + 1 items). If response.length <= pageSize, there are no more pages and there is no lastEvaluatedKey to pass on.
Summary example: we want a page of 10 items. Then we query 11 items. Two possibilities:
We get <= 10 items, which is fine: we return the items without a lastEvaluatedKey, because there are no more items.
We get 11 items, meaning that there are more pages, so we return the lastEvaluatedKey of the 10th item. The small downside is that we need to "craft" the lastEvaluatedKey of that item ourselves, as it won't be the lastEvaluatedKey that the DynamoDB response gives us (remember that we are doing a 10 + 1 query, so any lastEvaluatedKey in the response would belong to the 11th item). A sketch of this approach follows.
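Below is a minimal sketch of this pageSize + 1 approach, assuming the AWS SDK for Java v1 used in the earlier answer; the table name ("my_table"), key attribute name ("primary_key"), and the PageResult holder are placeholders for illustration, and the crafted start key must contain every attribute of your key schema (partition key, sort key, and index keys if querying an index):
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.QueryRequest;
import com.amazonaws.services.dynamodbv2.model.QueryResult;

public class PageFetcher {

    // Returns one page of items plus the exclusive start key for the next page (null on the last page).
    public static PageResult fetchPage(AmazonDynamoDB client, String keyValue,
                                       Map<String, AttributeValue> startKey, int pageSize) {
        Map<String, AttributeValue> values = new HashMap<>();
        values.put(":pk", new AttributeValue().withS(keyValue));

        QueryRequest request = new QueryRequest()
                .withTableName("my_table")                       // placeholder table name
                .withKeyConditionExpression("primary_key = :pk") // placeholder key attribute
                .withExpressionAttributeValues(values)
                .withExclusiveStartKey(startKey)
                .withLimit(pageSize + 1);                        // ask for one extra item

        QueryResult result = client.query(request);
        List<Map<String, AttributeValue>> items = result.getItems();

        if (items.size() <= pageSize) {
            // pageSize or fewer items: this is the last page, no cursor to hand back.
            return new PageResult(items, null);
        }

        // More than pageSize items: drop the extra one and craft the cursor
        // from the last item we actually return (the pageSize-th item).
        List<Map<String, AttributeValue>> page = items.subList(0, pageSize);
        Map<String, AttributeValue> lastItem = page.get(pageSize - 1);
        Map<String, AttributeValue> craftedKey = new HashMap<>();
        craftedKey.put("primary_key", lastItem.get("primary_key")); // include every key attribute here
        return new PageResult(page, craftedKey);
    }

    // Simple holder for the page contents and the next-page cursor.
    public static class PageResult {
        public final List<Map<String, AttributeValue>> items;
        public final Map<String, AttributeValue> nextStartKey;

        public PageResult(List<Map<String, AttributeValue>> items,
                          Map<String, AttributeValue> nextStartKey) {
            this.items = items;
            this.nextStartKey = nextStartKey;
        }
    }
}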
When I run the code below, the maximum number of list items is limited to 6. Is there any way to change this?
listLimit = getCarContext().getCarService(ConstraintManager.class).getContentLimit(
        ConstraintManager.CONTENT_LIMIT_TYPE_LIST);
As in the sample code below, if more than 6 items are added, the 7th item is not visible. The sample code shows the 7th item on a new screen by adding a "more" button. Is there a way to show more than 6 list items on one screen?
public Template onGetTemplate() {
    ItemList.Builder listBuilder = new ItemList.Builder();
    Row[] screenArray = new Row[]{
            createRow(getCarContext().getString(R.string.pane_template_demo_title),
                    new PaneTemplateDemoScreen(getCarContext())),
            createRow(getCarContext().getString(R.string.list_template_demo_title),
                    new ListTemplateDemoScreen(getCarContext())),
            createRow(getCarContext().getString(R.string.place_list_template_demo_title),
                    new PlaceListTemplateBrowseDemoScreen(getCarContext())),
            createRow(getCarContext().getString(R.string.search_template_demo_title),
                    new SearchTemplateDemoScreen(getCarContext())),
            createRow(getCarContext().getString(R.string.msg_template_demo_title),
                    new MessageTemplateDemoScreen(getCarContext())),
            createRow(getCarContext().getString(R.string.grid_template_demo_title),
                    new GridTemplateDemoScreen(getCarContext())),
            createRow(getCarContext().getString(R.string.long_msg_template_demo_title),
                    new LongMessageTemplateDemoScreen(getCarContext()))
    };
    ...
    int currentItemStartIndex = mPage * mItemLimit;
    int currentItemEndIndex = Math.min(currentItemStartIndex + mItemLimit,
            screenArray.length);
    for (int i = currentItemStartIndex; i < currentItemEndIndex; i++) {
        listBuilder.addItem(screenArray[i]);
    }
There's no way for you as a developer to change the number of list items that can be shown, as these limits have been chosen to meet safety regulations and can vary by region. Using the ConstraintManager as you're doing is the best practice, since it will automatically give you the appropriate limit (vs. hard-coding this limit).
Since the number of items may vary (with 6 as the minimum), it's recommended to include the most relevant/important items first for dynamic lists.
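For illustration, here is a minimal sketch of that pattern, assuming a screenArray already ordered by importance as in the question; the ConstraintManager call is the one from the question, while the trimming loop is a hypothetical example:
// Ask the host how many list items this vehicle/region allows (6 is only the minimum).
int listLimit = getCarContext()
        .getCarService(ConstraintManager.class)
        .getContentLimit(ConstraintManager.CONTENT_LIMIT_TYPE_LIST);

ItemList.Builder listBuilder = new ItemList.Builder();

// Add the most important rows first and stop at the host-imposed limit;
// anything beyond it has to go on another screen (e.g. behind a "more" row).
int itemsToShow = Math.min(listLimit, screenArray.length);
for (int i = 0; i < itemsToShow; i++) {
    listBuilder.addItem(screenArray[i]);
}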
(server side script)
This is a stripped-down version of my code, but what it should be doing is:
find records where the "uniqueid" field is equal to matchid
return 0 if there are fewer than two such records
print the region of each record if there are two or more records
return the number of records
function copyFile(matchid) {
    var fileName = getProp('projectName') + " " + row[0];
    var query = app.models.Files.newQuery();
    query.filters.uniqueid._equals = matchid;
    records = query.run();
    var len = records.length;
    if (len < 2) return 0;
    console.log(row[2] + " - " + len);
    for (var i = 0; i < len; i++) {
        console.log("Loop " + i);
        var r = records[i];
        console.log(r.region);
    }
    return records.length;
}
Strangely, it can only get at the region (or any of the other data) for the FIRST record (records[0]); for the others it says undefined. This is extremely confusing and frustrating. To reiterate, it passes the len < 2 check, so there are more records in the set returned from the query; they just seem to be undefined if I try to get them from records[i].
Note: uniqueid is not actually a unique field; the name comes from something else, sorry about the confusion.
Question: WHY can't I get at records[1] or records[2]?
This was a ridiculous problem and I don't entirely understand the solution.
Changing "records" to "recs" entirely fixes my problem.
Why does records[0] work while records[1] does not, yet recs[0] and recs[1] both work?
I believe "records" has a special meaning and points at something regardless of assignment in this context.
TL;DR
I have a ListView with items. I want each individual item inserted into my SQLite database as a new entry. Right now, I am only able to insert all items into the database as a single entry.
I am able to populate the list from my database correctly: if I manually input the items in SQLiteStudio, the added items show up as individual items.
Code setting up the list
private ObservableList<String> listchosedescription;

listchosedescription = FXCollections.observableArrayList();
this.descriptionschosen.setItems(listchosedescription);
Code for populating the list
while (result.next()) {
    listchosedescription.add(result.getString("description"));
}
descriptionschosen.setItems(listchosedescription);
Faulty code for adding listview items to the database
Connection conn = dbConnection.getConnection();
PreparedStatement statement2 = conn.prepareStatement(sqlDesInsert);
statement2.setString(1, String.valueOf(descriptionschosen.getItems()));
statement2.setInt(2, Integer.parseInt(labelidnew.getText()));
statement2.execute();
From looking online, I think that I need a for-loop counting over the individual items in the list.
for(int i = listchosedescription.size(); i != 0; i--){
Then I need to add each individual entry to a batch and execute the batch afterwards.
I also understand how to get a single item from the ListView, but I still feel a little stuck, hence I thought I would post for guidance.
for (int i = listchosedescription.size(); i != 0; i--) {
    statement2.setString(1, String.valueOf(listchosedescription.subList(i - 1, i)));
    statement2.setInt(2, Integer.parseInt(labelidnew.getText()));
    statement2.addBatch();
}
statement2.executeBatch();
In this for-loop, I have three statements:
I create an integer (i) which counts the size() of my observableList.
I run the loop as long as the size() is not equal to 0 (should probably be as long as it is larger than zero).
I decrease my integer (i) by 1 each time the loop is run.
Inside the loop, I add my two statements as I normally would, but the values from the observableList are accessed through its subList. I access the location using my integer (i):
i - 1 makes sure I reach the correct fromIndex.
i makes sure I reach the correct toIndex.
Lastly, I add to the batch inside the loop and execute the batch after the loop.
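As a possible refinement (not part of the original answer): String.valueOf(listchosedescription.subList(i - 1, i)) stores each value wrapped in brackets, e.g. "[text]", because it stringifies a one-element list. Here is a sketch of the same batch insert reading the items directly with a for-each loop, assuming the same conn, sqlDesInsert, labelidnew, and listchosedescription as above:
// sqlDesInsert is assumed to be an INSERT with two parameters, e.g.
// "INSERT INTO description_table (description, id) VALUES (?, ?)" (hypothetical table/columns).
try (PreparedStatement statement2 = conn.prepareStatement(sqlDesInsert)) {
    int id = Integer.parseInt(labelidnew.getText());
    for (String description : listchosedescription) {
        statement2.setString(1, description);  // one ListView item per database entry, no brackets
        statement2.setInt(2, id);
        statement2.addBatch();                 // queue this row in the batch
    }
    statement2.executeBatch();                 // insert all rows in one go
}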
I have a weird issue. I can't believe such a common feature could be broken (the error is certainly on my side), but I can't find how to make it work. I want to use a Datastore cursor to get paginated results, but I keep getting all of them whatever I do.
FetchOptions fetchOptions = FetchOptions.Builder.withChunkSize(5).prefetchSize(6);
String datastoreCursor = filter.getDatastoreCursor();
if (datastoreCursor != null) {
    fetchOptions = fetchOptions.startCursor(Cursor.fromWebSafeString(datastoreCursor));
}
QueryResultList<Entity> result = preparedQuery.asQueryResultList(fetchOptions);
ArrayList<Product> productList = new ArrayList<Product>();
// int count = 0;
for (Entity entity : result) {
    // if (++count == PRODUCTS_PER_PAGE)
    //     break;
    Key key = entity.getKey();
    productList.add(populateProduct(key.getId(), true, entity));
}
toReturn.setDatastoreCursor(result.getCursor());
Also, if I don't read all the rows (by uncommenting the lines with the counter) and then get the cursor, the resulting cursor is the same. I thought it might bring me back to the last read element (thinking result.getCursor() reflects the state of the database cursor).
I'm getting a cursor with this value E-ABAOsB8gEQbW9kaWZpY2F0aW9uRGF0ZfoBCQiIjsfAmKm_AuwBggIhagljaGF0YW1vamVyFAsSB1Byb2R1Y3QYgICAgICosgsMFA, which points past all elements (I have 23 elements in my test, and I receive all of them from the first query).
When you use a QueryResultList, the requested cursor will always point to the end of the list. As specified by the javadoc of QueryResultList#getCursor:
Gets a Cursor that points to the result immediately after the last one in this list.
Even though you provide a prefetch size and chunk size, the entire result list will still contain all of your results, since you have not specified a limit. Thus, the returned cursor is the cursor after the final element.
If you only want a specific number of entities per page, you should set a limit on the FetchOptions using the limit method. Then when you call getCursor(), you'll get a cursor at the end of your page, as opposed to the end of your dataset.
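For illustration, a minimal sketch of the limit-based approach, assuming the same preparedQuery and filter as in the question and a hypothetical PRODUCTS_PER_PAGE constant:
// Fetch at most one page of entities; the returned cursor then points just past this page.
FetchOptions fetchOptions = FetchOptions.Builder.withLimit(PRODUCTS_PER_PAGE);
String datastoreCursor = filter.getDatastoreCursor();
if (datastoreCursor != null) {
    // Resume from where the previous page ended.
    fetchOptions.startCursor(Cursor.fromWebSafeString(datastoreCursor));
}

QueryResultList<Entity> result = preparedQuery.asQueryResultList(fetchOptions);

// Hand this web-safe string back to the client and pass it in again to get the next page.
String nextPageCursor = result.getCursor().toWebSafeString();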
Instead, you could also use a QueryResultIterator. Unlike QueryResultList, calling getCursor() on a QueryResultIterator returns a cursor that points just after the last entity retrieved by calling .next() (javadoc).
We'll soon be embarking on the development of a new mobile application. This particular app will be used for heavy searching of text based fields. Any suggestions from the group at large for what sort of database engine is best suited to allowing these types of searches on a mobile platform?
Specifics include Windows Mobile 6, and we'll be using the .NET CF. Also, some of the text-based fields will be anywhere between 35 and 500 characters. The device will operate in two different modes, batch and WiFi. Of course, for WiFi we can just submit requests to a full-blown DB engine and fetch the results back. This question centres around the "batch" mode, which will house a database loaded with information on the device's flash/removable storage card.
At any rate, I know SQLCE has some basic indexing but you don't get into the real fancy "full text" style indexes until you've got the full blown version which of course isn't available on a mobile platform.
An example of what the data would look like:
"apron carpenter adjustable leather container pocket waist hardware belt" etc. etc.
I haven't gotten into the evaluation of any other specific options yet as I figure I'd leverage the experience of this group in order to first point me down some specific avenues.
Any suggestions/tips?
Just recently I had the same issue. Here is what I did:
I created a class to hold just an id and the text for each object (in my case I called it a sku (item number) and a description). This creates a smaller object that uses less memory since it is only used for searching. I'll still grab the full-blown objects from the database after I find matches.
public class SmallItem
{
    private int _sku;
    public int Sku
    {
        get { return _sku; }
        set { _sku = value; }
    }

    // Size of max description size + 1 for null terminator.
    private char[] _description = new char[36];
    public char[] Description
    {
        get { return _description; }
        set { _description = value; }
    }

    public SmallItem()
    {
    }
}
After this class is created, you can then create an array (I actually used a List in my case) of these objects and use it for searching throughout your application. The initialization of this list takes a bit of time, but you only need to worry about this at start up. Basically just run a query on your database and grab the data you need to create this list.
Once you have a list, you can quickly go through it searching for any words you want. Since it's a contains search, it also finds words within words (e.g. drill would return drill, drillbit, drills, etc.). To do this, we wrote a home-grown, unmanaged C# Contains function. It takes in a string array of words (so you can search for more than one word... we use it for "AND" searches... the description must contain all words passed in... "OR" is not currently supported in this example). As it searches through the list of words it builds a list of IDs, which are then passed back to the calling function. Once you have a list of IDs, you can easily run a fast query in your database to return the full-blown objects based on a fast indexed ID number. I should mention that we also limit the maximum number of results returned. This could be taken out. It's just handy if someone types in something like "e" as their search term, which would return a lot of results.
Here's the example of custom Contains function:
public static int[] Contains(string[] descriptionTerms, int maxResults, List<SmallItem> itemList)
{
// Don't allow more than the maximum allowable results constant.
int[] matchingSkus = new int[maxResults];
// Indexes and counters.
int matchNumber = 0;
int currentWord = 0;
int totalWords = descriptionTerms.Count() - 1; // - 1 because it will be used with 0 based array indexes
bool matchedWord;
try
{
/* Character array of character arrays. Each array is a word we want to match.
* We need the + 1 because totalWords had - 1 (We are setting a size/length here,
* so it is not 0 based... we used - 1 on totalWords because it is used for 0
* based index referencing.)
* */
char[][] allWordsToMatch = new char[totalWords + 1][];
// Character array to hold the current word to match.
char[] wordToMatch = new char[36]; // Max allowable word size + null terminator... I just picked 36 to be consistent with max description size.
// Loop through the original string array or words to match and create the character arrays.
for (currentWord = 0; currentWord <= totalWords; currentWord++)
{
char[] desc = new char[descriptionTerms[currentWord].Length + 1];
Array.Copy(descriptionTerms[currentWord].ToUpper().ToCharArray(), desc, descriptionTerms[currentWord].Length);
allWordsToMatch[currentWord] = desc;
}
// Offsets for description and filter(word to match) pointers.
int descriptionOffset = 0, filterOffset = 0;
// Loop through the list of items trying to find matching words.
foreach (SmallItem i in itemList)
{
// If we have reached our maximum allowable matches, we should stop searching and just return the results.
if (matchNumber == maxResults)
break;
// Loop through the "words to match" filter list.
for (currentWord = 0; currentWord <= totalWords; currentWord++)
{
// Reset our match flag and current word to match.
matchedWord = false;
wordToMatch = allWordsToMatch[currentWord];
// Delving into unmanaged code for SCREAMING performance ;)
unsafe
{
// Pointer to the description of the current item on the list (starting at first char).
fixed (char* pdesc = &i.Description[0])
{
// Pointer to the current word we are trying to match (starting at first char).
fixed (char* pfilter = &wordToMatch[0])
{
// Reset the description offset.
descriptionOffset = 0;
// Continue our search on the current word until we hit a null terminator for the char array.
while (*(pdesc + descriptionOffset) != '\0')
{
// We've matched the first character of the word we're trying to match.
if (*(pdesc + descriptionOffset) == *pfilter)
{
// Reset the filter offset.
filterOffset = 0;
/* Keep moving the offsets together while we have consecutive character matches. Once we hit a non-match
* or a null terminator, we need to jump out of this loop.
* */
while (*(pfilter + filterOffset) != '\0' && *(pfilter + filterOffset) == *(pdesc + descriptionOffset))
{
// Increase the offsets together to the next character.
++filterOffset;
++descriptionOffset;
}
// We hit matches all the way to the null terminator. The entire word was a match.
if (*(pfilter + filterOffset) == '\0')
{
// If our current word matched is the last word on the match list, we have matched all words.
if (currentWord == totalWords)
{
// Add the sku as a match.
matchingSkus[matchNumber] = i.Sku;
matchNumber++;
/* Break out of this item description. We have matched all needed words and can move to
* the next item.
* */
break;
}
/* We've matched a word, but still have more words left in our list of words to match.
* Set our match flag to true, which will mean we continue continue to search for the
* next word on the list.
* */
matchedWord = true;
}
}
// No match on the current character. Move to next one.
descriptionOffset++;
}
/* The current word had no match, so no sense in looking for the rest of the words. Break to the
* next item description.
* */
if (!matchedWord)
break;
}
}
}
}
}
// We have our list of matching skus. We'll resize the array and pass it back.
Array.Resize(ref matchingSkus, matchNumber);
return matchingSkus;
}
catch (Exception ex)
{
    // Handle/log the exception, then return an empty result so every code path returns a value.
    return new int[0];
}
}
Once you have the list of matching skus, you can iterate through the array and build a query command that only returns the matching skus.
For an idea of performance, here's what we have found (doing the following steps):
Search ~171,000 items
Create list of all matching items
Query the database, returning only the matching items
Build full-blown items (similar to SmallItem class, but a lot more fields)
Populate a datagrid with the full-blown item objects.
On our mobile units, the entire process takes 2-4 seconds (takes 2 if we hit our match limit before we have searched all items... takes 4 seconds if we have to scan every item).
I've also tried doing this without unmanaged code and using String.IndexOf (and tried String.Contains... had same performance as IndexOf as it should). That way was much slower... about 25 seconds.
I've also tried using a StreamReader and a file containing lines of [Sku Number]|[Description]. The code was similar to the unmanaged code example. This way took about 15 seconds for an entire scan. Not too bad for speed, but not great. The file and StreamReader method has one advantage over the way I showed you though. The file can be created ahead of time. The way I showed you requires the memory and the initial time to load the List when the application starts up. For our 171,000 items, this takes about 2 minutes. If you can afford to wait for that initial load each time the app starts up (which can be done on a separate thread of course), then searching this way is the fastest way (that I've found at least).
Hope that helps.
PS - Thanks to Dolch for helping with some of the unmanaged code.
You could try Lucene.Net. I'm not sure how well it's suited to mobile devices, but it is billed as a "high-performance, full-featured text search engine library".
http://incubator.apache.org/lucene.net/
http://lucene.apache.org/java/docs/