Use RocksDB to support key-key-value (RowKey -> Containers) by splitting the container

I want to support a key/value store where the value is a logical list of strings that I can append to. To avoid the situation where inserting a single string item into the list causes a rewrite of the entire list, I'd use multiple key-value pairs to represent it:
Key -> metadata of the value such as length and subkey format
Key-l1 -> value of item 1 in list
Key-l2 -> value of item 2 in list
Key-ln -> the latest value in the list
I'd override the key comparator in RocksDB so that keys of the form Key-ln are sorted by the Key part first and by ln second (i.e. grouped by Key and, within the same Key, sorted by ln). This way, all the list items, along with the root key and its metadata, are grouped together in the SST files during the initial bulk insert and during later compactions.
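For illustration, the comparator I have in mind would look roughly like this (just a sketch; the exact key encoding and the parsing helper are assumptions):
#include <cstdint>
#include <cstdlib>
#include <string>

#include <rocksdb/comparator.h>
#include <rocksdb/slice.h>

// Sketch of a comparator for keys encoded as "<RootKey>-l<index>"; the plain
// "<RootKey>" metadata row sorts first. The encoding and the parsing helper
// below are illustrative assumptions, not a tested implementation.
class ListKeyComparator : public rocksdb::Comparator {
 public:
  const char* Name() const override { return "ListKeyComparator.v1"; }

  int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
    std::string root_a, root_b;
    uint64_t idx_a, idx_b;
    Split(a, &root_a, &idx_a);
    Split(b, &root_b, &idx_b);
    int c = root_a.compare(root_b);      // group by the root Key first...
    if (c != 0) return c;
    if (idx_a < idx_b) return -1;        // ...then sort by the list index
    if (idx_a > idx_b) return 1;         // (metadata rows get index 0)
    return 0;
  }

  // Optional optimizations; safe to leave as no-ops in a sketch.
  void FindShortestSeparator(std::string*, const rocksdb::Slice&) const override {}
  void FindShortSuccessor(std::string*) const override {}

 private:
  static void Split(const rocksdb::Slice& s, std::string* root, uint64_t* idx) {
    std::string k = s.ToString();
    std::string::size_type pos = k.rfind("-l");
    if (pos == std::string::npos) {      // metadata row: just "<RootKey>"
      *root = k;
      *idx = 0;
    } else {
      *root = k.substr(0, pos);
      *idx = std::strtoull(k.c_str() + pos + 2, nullptr, 10);
    }
  }
};
It would be registered via options.comparator before opening the database, and the comparator object must outlive the DB and never change once data has been written.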
Appending a new list item becomes: (1) read the Key metadata to get the current list size n; (2) insert Key-l(n+1) with the new value. Deleting a list item works as it normally does in RocksDB: delete Key-ln and update the metadata.
To ensure consistency, (1) and (2) will be done inside a RocksDB transaction.
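Concretely, the append would be something like this sketch using the TransactionDB API (the plain-text length metadata is only an illustration, and error handling is trimmed down):
#include <cstdint>
#include <string>

#include <rocksdb/utilities/transaction.h>
#include <rocksdb/utilities/transaction_db.h>

// Append `value` to the logical list stored under `root_key`, keeping the
// length metadata in step. Storing the length as plain text is an assumption
// made for this sketch.
rocksdb::Status AppendListItem(rocksdb::TransactionDB* db,
                               const std::string& root_key,
                               const std::string& value) {
  rocksdb::WriteOptions write_opts;
  rocksdb::ReadOptions read_opts;
  rocksdb::Transaction* txn = db->BeginTransaction(write_opts);

  // (1) Read the metadata under the root key to learn the current size n.
  std::string meta;
  rocksdb::Status s = txn->GetForUpdate(read_opts, root_key, &meta);
  uint64_t n = s.ok() ? std::stoull(meta) : 0;   // missing metadata => empty list

  // (2) Insert Key-l(n+1) with the new value and bump the metadata.
  txn->Put(root_key + "-l" + std::to_string(n + 1), value);
  txn->Put(root_key, std::to_string(n + 1));

  s = txn->Commit();
  delete txn;
  return s;
}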
Does this design seem OK?
Now, if I want to add another feature, a TTL for the entire key-value (list), I'd use the TTL support already in RocksDB. My understanding is that removal of expired items happens during compaction. However, such compaction is not done under a transaction, and RocksDB doesn't know that the Key-metadata and Key-ln entries are related. It is entirely possible that there is a time window where Key->metadata (the root node) has been deleted while the child nodes (Key-ln) have not been deleted yet (or the reverse order). If someone reads or updates the list during this window, it will see an inconsistent view of the Key list. Is there any remedy for this?
Thanks

You should use a Merge Operator; it's designed for exactly this value-append use case. Your design is read-before-write, which carries a performance penalty and should in general be avoided if possible: see "What's read-before-write in NoSQL?".
Options options;
options.merge_operator.reset(new StringAppendOperator(','));
DB::Open(options, kDBPath, &db);
...
db->Merge(WriteOptions(), "key", "value1");
db->Merge(WriteOptions(), "key", "value2");
db->Get(ReadOptions(), "key", &result);  // returns "value1,value2"
The above example uses the predefined StringAppendOperator, which simply appends new values at the end. You can define your own MergeOperator to customize the merge operation.
Under the hood, the merge operation is applied on the read path (and during compaction, to reduce the number of accumulated merge operands); details: Merge Operator Implementation.
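If the built-in operators don't fit, a custom operator is a small subclass of AssociativeMergeOperator; a minimal sketch (the comma delimiter is arbitrary):
#include <string>

#include <rocksdb/merge_operator.h>
#include <rocksdb/slice.h>

// Minimal sketch of a custom append operator.
class MyAppendOperator : public rocksdb::AssociativeMergeOperator {
 public:
  bool Merge(const rocksdb::Slice& /*key*/, const rocksdb::Slice* existing_value,
             const rocksdb::Slice& value, std::string* new_value,
             rocksdb::Logger* /*logger*/) const override {
    new_value->clear();
    if (existing_value) {
      new_value->assign(existing_value->data(), existing_value->size());
      new_value->append(1, ',');          // delimiter between list items
    }
    new_value->append(value.data(), value.size());
    return true;                          // merge succeeded
  }

  const char* Name() const override { return "MyAppendOperator"; }
};
It would be plugged in the same way as above: options.merge_operator.reset(new MyAppendOperator());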

Related

Getting results for a list of primary keys from DynamoDB using Table

I have a DynamoDB table from which I fetch a single row in the following way:
private Table myTable;
myTable = dynamoDB.getTable(tableName);
myTable.getItem(new PrimaryKey(primaryKey, primaryKeyValue));
Is there a way for me to retrieve items for a list of primary keys? I see that I can use batchGetItem, but for that I will need to use the AmazonDynamoDB interface. Is there an alternative way using the Table?
To get all items in your table you need to use the Scan operation:
The Scan operation returns one or more items and item attributes by
accessing every item in a table or a secondary index. To have DynamoDB
return fewer items, you can provide a FilterExpression operation.
If the total number of scanned items exceeds the maximum data set size
limit of 1 MB, the scan stops and results are returned to the user as
a LastEvaluatedKey value to continue the scan in a subsequent
operation. The results also include the number of items exceeding the
limit. A scan can result in no table data meeting the filter criteria.
By default it will return all fields, but you can provide a projection expression to get only some fields (ids in your case):
To read data from a table, you use operations such as GetItem, Query,
or Scan. DynamoDB returns all of the item attributes by default. To
get just some, rather than all of the attributes, use a projection
expression.
A projection expression is a string that identifies the attributes you
want. To retrieve a single attribute, specify its name. For multiple
attributes, the names must be comma-separated.
Keep in mind that scans are expensive, since you pay not for items that DynamoDB returns, but for items that DynamoDB reads in the database:
A Scan operation always scans the entire table or secondary index,
then filters out values to provide the desired result, essentially
adding the extra step of removing data from the result set. Avoid
using a Scan operation on a large table or index with a filter that
removes many results, if possible. Also, as a table or index grows,
the Scan operation slows. The Scan operation examines every item for
the requested values, and can use up the provisioned throughput for a
large table or index in a single operation. For faster response times,
design your tables and indexes so that your applications can use Query
instead of Scan. (For tables, you can also consider using the GetItem
and BatchGetItem APIs.)
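To make the Scan, projection expression, and LastEvaluatedKey pieces concrete, here is a rough sketch; it uses the AWS SDK for C++ rather than the Java document API from the question, and the table and attribute names are made up:
#include <aws/core/Aws.h>
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/AttributeValue.h>
#include <aws/dynamodb/model/ScanRequest.h>
#include <iostream>

// Scan a hypothetical table "MyTable", asking DynamoDB to return only the
// "id" attribute, and follow LastEvaluatedKey until the whole table is read.
int main() {
  Aws::SDKOptions options;
  Aws::InitAPI(options);
  {
    Aws::DynamoDB::DynamoDBClient client;

    Aws::DynamoDB::Model::ScanRequest request;
    request.SetTableName("MyTable");
    request.SetProjectionExpression("id");   // only fetch the key attribute

    do {
      auto outcome = client.Scan(request);
      if (!outcome.IsSuccess()) break;
      const auto& result = outcome.GetResult();
      for (const auto& item : result.GetItems()) {
        std::cout << item.at("id").GetS() << std::endl;
      }
      // A Scan stops at 1 MB; continue from LastEvaluatedKey if present.
      request.SetExclusiveStartKey(result.GetLastEvaluatedKey());
    } while (!request.GetExclusiveStartKey().empty());
  }
  Aws::ShutdownAPI(options);
  return 0;
}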
Reasons for not having batchGetItem on the Table class:
The Table class is thread-safe.
The Table class implements the atomic, single-item operations such as DeleteItemApi, GetItemApi, PutItemApi, QueryApi, ScanApi, and UpdateItemApi.
batchGetItem needs to deal with multiple items.
Most importantly, batchGetItem can get items from multiple tables.
Example code to get items from multiple tables:
The code below gets items from the Movies and post tables.
DynamoDB dynamoDB = new DynamoDB(dynamoDBClient);
TableKeysAndAttributes movieTableKeyAndAttributes = new TableKeysAndAttributes("Movies").withPrimaryKeys(new PrimaryKey("yearkey", 1999, "title", "List test title"));
TableKeysAndAttributes postTableKeyAndAttributes = new TableKeysAndAttributes("post").withPrimaryKeys(new PrimaryKey("postId", "14"));
BatchGetItemSpec batchGetItemSpec = new BatchGetItemSpec().withTableKeyAndAttributes(movieTableKeyAndAttributes, postTableKeyAndAttributes);
BatchGetItemOutcome batchGetItemOutcome = dynamoDB.batchGetItem(batchGetItemSpec);
System.out.println(batchGetItemOutcome.getBatchGetItemResult().getResponses());

EMC Documentum DQL - How to delete repeating attribute

I have a few objects created on my database and I need to delete some of the repeating attributes related to them.
The query I'm trying to run is:
UPDATE gemp1_product objects REMOVE ingredients[1] WHERE (r_object_id = '08015abd8002cd68')
But all I get is the following error message:
Error querying databse.
[DM_QUERY_E_UPDATE_INDEX]error: "UPDATE: Unable to REMOVE tghe attribute ingredients at index 1."
[DM_OBJECT_W_DELETE_ATTR_POSITION_ERROR]warning: "attempt to delete
non-existent attribute 88"
Object 08015abd8002cd68 exists and I can see it on the database. Queries like SELECT and DELETE work fine but I do not want to delete the whole object.
There is no easy way to do this. The reason is that repeating attributes are ordered, to enable multiple repeating attributes to be synchronized for a given object.
Either
set the attribute value to be empty for the given position, and change your code to discard empty attributes, or
use multiple DQL statements to shuffle the order so that the last one becomes empty, or
change your data model, e.g. use a single attribute as a property bag with pre-defined delimiters.
Details (1)
UPDATE gemp1_product OBJECTS SET ingredients[1] = '' WHERE ...
Details (2)
For each index, first find the value at index+1:
SELECT ingredients
FROM gemp1_product
WHERE (i_position*-1)-1 = <index+1>
ENABLE (ROW_BASED)
Use the value in a new query:
UPDATE gemp1_product OBJECTS SET ingredients[1] = '<value_from_above>' WHERE ...
It should also be possible to do this by nesting DQL somehow, but it might not be worth the effort.
Something is either wrong with your query or with your repository. I think you are mistyping your attribute name or using the wrong index in your UPDATE query.
If you google DM_OBJECT_W_DELETE_ATTR_POSITION_ERROR you'll find a slightly more detailed explanation at this link:
CAUSE: Program executed a DeleteAttr operation that specified a non-existent attribute position (either a negative number or a number larger than the number of attributes in the object).
From this you could guess that the type isn't in a consistent state, or that you are trying to remove too large an index from your repeating attribute, etc. Did you check your repository with the Consistency Checker job and other similar jobs?
As for removing a repeating property (attribute) value with a DQL query, this is unachievable with a single query, since you need to specify the index position, which you don't know up front. Writing a simple script, or doing it manually if there isn't a large number of values to delete, is the way to go.

Dealing with PL/SQL Collections

I have the following declaration for a collection:
TYPE T_TABLE1 IS TABLE OF TABLE_1%ROWTYPE INDEX BY BINARY_INTEGER;
tbl1_u T_TABLE1;
tbl1_i T_TABLE1;
This table will keep growing and, at the end, will be used in a FORALL loop to do an insert or update on TABLE_1.
Now there might be cases where I want to delete a certain element, so I am planning to create a procedure which will take the (unique) KEY and delete the matching element if that key is found.
PSEUDOCODE
FOR i IN tbl1_u.FIRST .. tbl1_u.LAST
LOOP
  IF tbl1_u(i).key = key THEN
    tbl1_u.DELETE(i);
  END IF;
END LOOP;
My questions are:
Once I delete a particular element, will the collection adjust automatically, i.e. will that index be taken over by the next element, or will that particular index remain null/invalid and possibly raise an exception if I use the collection in a FORALL INSERT/UPDATE?
I don't think I can pass a TABLE_1%ROWTYPE object to a procedure; do I have to create a record type?
Any other tips on managing collections for bulk delete/update/insert would be appreciated. Remember, I will be dealing with two tables: if I am inserting/updating into TABLE_1, then I am deleting the corresponding row from TABLE_2, and vice versa.
Given that TABLE_1.KEY is unique you might consider using that as the index to your associative arrays. That way you can delete from the collections using the KEY value, which according to the pseudocode is available when doing the deletions. This would also save you having to iterate through the table to find the KEY you want, as the KEY would be the index - so your "deletion" pseudo-code would become:
tbl1_u.delete(key);
To answer your questions:
Since you're using associative arrays, when an element is deleted there is no "empty" space in the collection. The indexes for the elements, however, don't actually change. Therefore you need to use the collection.PRIOR and collection.NEXT methods to loop through the collection. But again, if you use the KEY value as the index you may not need to loop through the collections at all.
You can pass a TABLE_1%ROWTYPE as a parameter to a PL/SQL procedure or function.
You might want to consider using a MERGE statement which could handle doing the inserts and updates in one step. This might allow you to maintain only a single collection. Might be worth looking in to.
Share and enjoy.

Riak inserting a list and querying a list

I was wondering if there is an efficient way of handling arrays/lists in Riak. Right now I'm storing the whole array as a string and searching the string to find out whether an element exists in the array.
ID (key) : int[] (Value)
Also, how do I write a map/reduce query that returns all the keys for which the value array contains a given element?
For example:
1 : 2,3,4
2 : 2,5
How would I write an M/R query that gives me all the keys whose value contains 2? The result is 1,2 in this case.
Any help is appreciated
If you are searching for a specific element in the list and are using the LevelDB backend, you could create a secondary index that will contain the values of the array. Secondary indexes in Riak may contain multiple values and can be searched for equality, which should allow you to search for single elements in the array without having to resort to MapReduce.
If you need to make more complicated queries based on either several elements in the list or other parameters, you could retrieve a subset of records based on the secondary index and then process them further on the client side or perhaps even through a MapReduce job.
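As a concrete illustration of the secondary-index approach (which, as noted above, needs a 2i-capable backend such as LevelDB), here is a rough sketch using Riak's HTTP interface from C++ via libcurl; the bucket, key, and index names are made up:
#include <curl/curl.h>
#include <string>

// Store a value under /buckets/lists/keys/1 and tag it with each array element
// as a secondary-index value, then query the index for element "2".
int main() {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL* curl = curl_easy_init();
  if (!curl) return 1;

  // --- write: attach the array elements as a multi-valued integer 2i ---
  struct curl_slist* headers = nullptr;
  headers = curl_slist_append(headers, "Content-Type: text/plain");
  headers = curl_slist_append(headers, "x-riak-index-elements_int: 2, 3, 4");
  const std::string body = "2,3,4";
  curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8098/buckets/lists/keys/1");
  curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
  curl_easy_perform(curl);

  // --- read: exact-match 2i query returns the keys whose index contains 2 ---
  curl_easy_reset(curl);
  curl_easy_setopt(curl, CURLOPT_URL,
                   "http://127.0.0.1:8098/buckets/lists/index/elements_int/2");
  curl_easy_perform(curl);               // response body: {"keys":["1", ...]}

  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return 0;
}
The write tags the object with one index value per array element, and the index query returns every key whose index contains 2, without needing a MapReduce job.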

SQLite - Get a specific row index for a Sorted/Filtered Query

I'm creating a caching system to take data from an SQLite database table using a sorted/filtered query and display it. The tables I'm pulling from can be potentially very large and, of course, I need to minimize impact on memory by only retaining a maximum number of rows in memory at any given time. This is easily done by using LIMIT and OFFSET to load only the records I need and update the cache as needed. Implementing this is trivial. The problem I'm having is determining where the insertion index is for a new record inserted into a particular query so I can update my UI appropriately. Is there an easy way to do this? So far the ideas I've had are:
Dump the entire cache, re-count the Query results (there's no guarantee the new row will be included), refresh the cache and refresh the entire UI. I hope it's obvious why that's not really desirable.
Use my own algorithm to determine whether the new row is included in the current query, whether it is included in the current cached results, and at what index it should be inserted if it falls within the current cached scope. The biggest downfall of this approach is its complexity and the risk that my own sorting/filtering algorithm won't match SQLite's.
Of course, what I want is to be able to ask SQLite: Given 'Query A' what is the index of 'Row B', without loading the entire query results. However, so far I haven't been able to find a way to do this.
I don't think it matters but this is all occurring on an iOS device, using the objective-c programming language.
More Info
The query and subsequent cache are based on user input. Essentially the user can re-sort and filter (or search) to alter the results they're seeing. My reluctance to simply recreate the cache on insertions (and edits, actually) is to provide a 'smoother' UI experience.
I should point out that I'm leaning toward option "2" at the moment. I played around with creating my own caching/indexing system by loading all the records in a table and performing the sort/filter in memory using my own algorithms. So much of the code needed to determine whether and/or where a particular record is in the cache is already there, so I'm slightly predisposed to use it. The danger lies in having a cache that doesn't match the underlying query. If I include a record in the cache that the query wouldn't return, I'll be in trouble and probably crash.
You don't need record numbers.
Save the values of the ordered field in the first and last records of the LIMITed query result.
Then you can use these to check whether the new record falls into this range.
In other words, assuming that you order by the Name field, and that the original query was this:
SELECT Name, ...
FROM mytab
WHERE some_conditions
ORDER BY Name
LIMIT x OFFSET y
then try to get at the new record with a similar query:
SELECT 1
FROM mytab
WHERE some_conditions
AND PrimaryKey = LastInsertedValue
AND Name BETWEEN CachedMin AND CachedMax
Similarly, to find out before (or after) which record the new record was inserted, start directly after the inserted record and use a limit of one, like this:
SELECT Name
FROM mytab
WHERE some_conditions
AND Name > MyInsertedName
AND Name BETWEEN CachedMin AND CachedMax
ORDER BY Name
LIMIT 1
This doesn't give you a number; you still have to check where the returned Name is in your cache.
Typically you'd expect a cache to be invalidated if there were underlying data changes. I think dropping it and starting over will be your simplest, maintainable solution. I would recommend it unless you have a very good reason.
You could write another query that just returned the row count (example below) to see if your cache should be invalidated. That would save recreating the cache when it did not change.
SELECT name,address FROM people WHERE area_code=970;
SELECT COUNT(rowid) FROM people WHERE area_code=970;
The information you'd need from sqlite to know when your cache was invalidated would require some rather intimate knowledge of how the query and/or index was working. I would say that is fairly high coupling.
Otherwise, you'd want to know where it was inserted with regards to the sorting. You would probably key each page on the sorted field. Delete anything greater than the insert/delete field. Any time you change the sorting you'd drop everything.
Something like the below would be a start if you were using C++. I realize you aren't doing C++, but hopefully it is evident as to what I'm trying to do.
#include <set>
#include <string>
#include <vector>

struct Person {
    std::string name;
    std::string addr;
};

// A cached page, keyed on the sort-field value it starts at.
struct Page {
    std::string key;
    std::vector<Person> persons;

    struct Less {
        bool operator()(const Page &lhs, const Page &rhs) const {
            return lhs.key.compare(rhs.key) < 0;
        }
    };
};

typedef std::set<Page, Page::Less> pages_t;
pages_t pages;

bool sql_insert(const Person &person);   // performs the actual INSERT

void insert(const Person &person) {
    if (sql_insert(person)) {
        // Find the first cached page at or after the inserted value...
        pages_t::iterator drop_cache_start = pages.lower_bound(Page{person.name, {}});
        // ...and drop that page and everything after it.
        pages.erase(drop_cache_start, pages.end());
    }
}
You'd have to do some wrangling to get different datatypes of key to work nicely, but it's possible.
Theoretically you could just leave the pages out of it and only use the objects themselves. The database would no longer "own" the data, though. If you only fill pages from the database, you'll have fewer data-consistency worries.
This may be a bit off topic: you aren't re-implementing views, are you? A view doesn't cache per se, but it isn't clear whether caching is a requirement of your project.
The solution I came up with is not exactly simple, but it's currently working well. I realized that the index of a record in a query's results is also the count of all of its preceding records. What I needed to do was 'convert' all the ORDER BY parameters in the query into a series of WHERE clauses that would match only the preceding records, and take a count of those records. It's trickier than it sounds (or maybe not... it sounds tricky). The biggest issue I had was making sure the query was, in fact, sorted in a way I could predict. This meant I needed an order column among the order parameters that was based on a column with unique values. So, whenever a user sorts on a column, I append another order parameter on a unique column (I used a "Modified Date Stamp") to break ties.
Creating the WHERE portion of the statement requires more than just tacking on a bunch of ANDs. It's easier to demonstrate. Say you have 3 Order columns: "LastName" ASC, "FirstName" DESC, and "Modified Stamp" ASC (the tie breaker). The WHERE statement would have to look something like this ('?' = record value):
WHERE
"LastName" < ? OR
("LastName" = ? AND "FirstName" > ?) OR
("LastName" = ? AND "FirstName" = ? AND "Modified Stamp" < ?)
Each set of WHERE parameters grouped together by parentheses is a tie breaker. If, in fact, the record values of "LastName" are equal, we must then look at "FirstName", and finally "Modified Stamp". Obviously, this statement can get really long if you're sorting by many order parameters.
There's still one problem with the above solution. Comparisons involving NULL values never evaluate to true, and yet SQLite sorts NULL values first. Therefore, to deal with NULL values appropriately you've got to add another layer of complication. First, all equality comparisons (=) must be replaced by IS. Second, every < comparison must be wrapped with an OR ... IS NULL so that NULL values are counted correctly on the < side. This turns the above clause into:
WHERE
("LastName" < ? OR "LastName" IS NULL) OR
("LastName" IS ? AND "FirstName" > ?) OR
("LastName" IS ? AND "FirstName" IS ? AND ("Modified Stamp" < ? OR "Modified Stamp" IS NULL))
I then take a COUNT of the rowid using the above WHERE clause.
It turned out easy enough for me to do mostly because I had already constructed a set of objects to represent various aspects of my SQL Statement which could be assembled to generate the statement. I can't even imagine trying to manipulate a SQL statement like this any other way.
So far, I've tested using this on several iOS devices with up to 10,000 records in a table and I've had no noticeable performance issues. Of course, it's designed for single record edits/insertions so I don't really need it to be super fast/efficient.
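As a rough sketch (not the exact production code), the count can be issued through the sqlite3 C API roughly like this; the table and column names simply follow the example above and are otherwise hypothetical:
#include <sqlite3.h>
#include <string>

// Returns the 0-based index of the row identified by (lastName, firstName,
// modStamp) within the sorted query, or -1 on error.
long long RowIndexInQuery(sqlite3* db, const std::string& lastName,
                          const std::string& firstName,
                          const std::string& modStamp) {
  const char* sql =
      "SELECT COUNT(rowid) FROM people WHERE "
      "  (\"LastName\" < ?1 OR \"LastName\" IS NULL) OR "
      "  (\"LastName\" IS ?1 AND \"FirstName\" > ?2) OR "
      "  (\"LastName\" IS ?1 AND \"FirstName\" IS ?2 AND "
      "   (\"Modified Stamp\" < ?3 OR \"Modified Stamp\" IS NULL));";

  sqlite3_stmt* stmt = nullptr;
  if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) return -1;

  // Bind the inserted record's values for each sort column.
  sqlite3_bind_text(stmt, 1, lastName.c_str(), -1, SQLITE_TRANSIENT);
  sqlite3_bind_text(stmt, 2, firstName.c_str(), -1, SQLITE_TRANSIENT);
  sqlite3_bind_text(stmt, 3, modStamp.c_str(), -1, SQLITE_TRANSIENT);

  long long count = -1;
  if (sqlite3_step(stmt) == SQLITE_ROW) {
    count = sqlite3_column_int64(stmt, 0);   // rows sorted before this record
  }
  sqlite3_finalize(stmt);
  return count;
}
The returned count of preceding rows is exactly the insertion index within the sorted/filtered query.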
