I'm familiar with MySQL and am starting to use Amazon DynamoDB for a new project.
Assume I have a MySQL table like this:
CREATE TABLE foo (
id CHAR(64) NOT NULL,
scheduledDelivery DATETIME NOT NULL,
-- ...other columns...
PRIMARY KEY(id),
INDEX schedIndex (scheduledDelivery)
);
Note the secondary index schedIndex, which is supposed to speed up the following query (which is executed periodically):
SELECT *
FROM foo
WHERE scheduledDelivery <= NOW()
ORDER BY scheduledDelivery ASC
LIMIT 100;
That is: Take the 100 oldest items that are due to be delivered.
With DynamoDB I can use the id column as primary partition key.
However, I don't understand how I can avoid full-table scans in DynamoDB. When adding a secondary index I must always specify a "partition key". However, (in MySQL terms) I see these problems:
the scheduledDelivery column is not unique, so it can't be used as a partition key itself AFAIK
adding id as a unique partition key and using scheduledDelivery as the "sort key" sounds like an (id, scheduledDelivery) secondary index to me, which makes that index practically useless
I understand that MySQL and DynamoDB require different approaches, so what would be an appropriate solution in this case?
It's not possible to avoid a full table scan with this kind of query.
However, you may be able to disguise it as a Query operation, which would allow you to sort the results (not possible with a Scan).
You must first create a Global Secondary Index (GSI). Let's name it scheduled_delivery-index.
We will specify our index's partition key to be an attribute named fixed_val, and our sort key to be scheduled_delivery.
fixed_val will contain any value you want, but it must always be that value, and you must know it from the client side. For the sake of this example, let's say that fixed_val will always be 1.
GSI keys do not have to be unique, so don't worry if there are two duplicated scheduled_delivery values.
You would query the table like this:
var now = Date.now();
//...
{
    TableName: "foo",
    IndexName: "scheduled_delivery-index",
    ExpressionAttributeNames: {
        "#f": "fixed_val",           // the constant partition key of the GSI
        "#d": "scheduled_delivery"   // the sort key of the GSI
    },
    ExpressionAttributeValues: {
        ":f": 1,
        ":d": now
    },
    // every item carries fixed_val = 1, so this selects all items due by now
    KeyConditionExpression: "#f = :f and #d <= :d",
    ScanIndexForward: true           // ascending: oldest due items first
}
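For completeness, here is a minimal sketch of actually running that query with the AWS SDK for JavaScript DocumentClient. The client setup, the callback handling, and the Limit of 100 (mirroring the LIMIT 100 from the MySQL query) are assumptions added for illustration, not part of the original answer:

// Assumed setup, not part of the original answer.
var AWS = require("aws-sdk");
var docClient = new AWS.DynamoDB.DocumentClient();

var params = {
    TableName: "foo",
    IndexName: "scheduled_delivery-index",
    ExpressionAttributeNames: { "#f": "fixed_val", "#d": "scheduled_delivery" },
    ExpressionAttributeValues: { ":f": 1, ":d": Date.now() },
    KeyConditionExpression: "#f = :f and #d <= :d",
    ScanIndexForward: true, // oldest due items first
    Limit: 100              // mirrors LIMIT 100 from the MySQL query
};

docClient.query(params, function (err, data) {
    if (err) console.error(err);
    else console.log(data.Items); // up to 100 items that are due for delivery
});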
CURRENTLY
I have a table in DynamoDB with a single attribute - Primary Key - that contains unique values.
PK
------
#A#B#C#
#B#C#
#C#D#E#
#BC#
ISSUE
I am looking to do 2 searches for #B#C#: (1) an exact match, and (2) a containing match, and therefore only want these results:
(1) Exact Match:
#B#C#
(2) Containing Match:
#A#B#C#
#B#C#
Are these 2 searches possible against the primary key?
If so, what is the most efficient query to run? e.g. QUERY or SCAN
Note:
For (2) I am using the following code, but it is returning all items in DB:
params = {
    TableName: 'myTable',
    FilterExpression: "contains(#key, :v)",
    ExpressionAttributeNames: { "#key": "PK" },
    ExpressionAttributeValues: { ":v": "#B#C#" }
}
dynamodb.scan(params, callback)
DynamoDB supports two main types of searches: Query and Scan. The Query operation finds items based on primary key values. The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index.
If you wanted to find the item with the primary key #B#C#, you would use the query API:
ddbClient.query(
    {
        "TableName": "<YOUR TABLE NAME>",
        "KeyConditionExpression": "#pk = :pk",
        "ExpressionAttributeValues": {
            ":pk": {
                "S": "#B#C#"
            }
        },
        "ExpressionAttributeNames": {
            "#pk": "PK"
        }
    }
)
For your second access pattern, you'll need to use the scan API because you are searching across the entire table/secondary index.
You can use scan to test if a primary key has a substring using contains. I don't see anything wrong with the format of your scan operation.
Be careful when using scan this way. Because scan will read your entire table to fetch results, you will have a fairly inefficient operation at scale. If this operation is run infrequently, or you are running it against a sparse index, it's probably fine. However, if it's one of your primary access patterns, you may want to reconsider using the scan API for this operation.
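To illustrate what a full scan involves in practice, here is a minimal sketch that pages through the results via LastEvaluatedKey; the pagination loop and the client setup are assumptions added for illustration and are not part of the original answer:

// Assumed setup, not part of the original answer.
var AWS = require("aws-sdk");
var docClient = new AWS.DynamoDB.DocumentClient();

function scanForSubstring(startKey, results, done) {
    docClient.scan({
        TableName: "myTable",
        FilterExpression: "contains(#key, :v)",
        ExpressionAttributeNames: { "#key": "PK" },
        ExpressionAttributeValues: { ":v": "#B#C#" },
        ExclusiveStartKey: startKey // undefined on the first page
    }, function (err, data) {
        if (err) return done(err);
        results = results.concat(data.Items);
        if (data.LastEvaluatedKey) {
            // More pages: every item in the table is still being read;
            // the filter only trims what is returned to the client.
            scanForSubstring(data.LastEvaluatedKey, results, done);
        } else {
            done(null, results);
        }
    });
}

scanForSubstring(undefined, [], function (err, items) {
    if (err) console.error(err);
    else console.log(items);
});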
Here is the sample JSON which we are planning to insert into a DynamoDB table. As of now we have organizationID as the primary partition key and __id__ as the sort key. Since we will query based on organizationID, we kept it as the partition key. Is it a good approach to keep __id__ as the sort key?
{
"__class__": "package",
"__updated__": "2015-10-19T14:30:13Z",
"__created__": "2015-10-19T12:32:28Z",
"transactions": [
{
transaction1
},
{
transaction2
}
],
"carrier": "USPS",
"organizationID": "6406fa6fd32393908125d4d81ec358",
"barcode": "9400110891302408",
"queryString": [
"xxxxxxx",
"YYYY",
"delivered",
],
"deliveredTo": null,
"__id__": "3232d1a045476786fg22dfg32b82209155b32"
}
As per best practice, you can have a timestamp as the sort key for the above data model. One advantage of having a timestamp as the sort key is that you can sort the data for a particular partition key and identify the latest updated item. This is a very common use case for a sort key.
It doesn't make much sense to keep both the partition and sort key as randomly generated values, because then you can't use the sort key efficiently (unless I am missing something here).
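For example, with __updated__ (or a similar timestamp) as the sort key, fetching the most recently updated item for an organization becomes a cheap query. Here is a minimal sketch, where the table name packages and the use of the DocumentClient are assumptions for illustration:

// Assumed setup, not part of the original answer.
var AWS = require("aws-sdk");
var docClient = new AWS.DynamoDB.DocumentClient();

docClient.query({
    TableName: "packages",                  // hypothetical table name
    KeyConditionExpression: "#org = :org",
    ExpressionAttributeNames: { "#org": "organizationID" },
    ExpressionAttributeValues: { ":org": "6406fa6fd32393908125d4d81ec358" },
    ScanIndexForward: false,                // newest timestamp first
    Limit: 1                                // only the latest item
}, function (err, data) {
    if (err) console.error(err);
    else console.log(data.Items[0]);
});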
Is there a way to manipulate the query plan generated in SQLite?
I'll try to explain my problem:
I have 3 tables:
CREATE TABLE "index_term" (
"id" INT,
"term" VARCHAR(255) NOT NULL,
PRIMARY KEY("id"),
UNIQUE("term"));
CREATE TABLE "index_posting" (
"doc_id" INT NOT NULL,
"term_id" INT NOT NULL,
PRIMARY KEY("doc_id", "field_id", "term_id"),,
CONSTRAINT "index_posting_doc_id_fkey" FOREIGN KEY ("doc_id")
REFERENCES "document"("doc_id") ON DELETE CASCADE,
CONSTRAINT "index_posting_term_id_fkey" FOREIGN KEY ("term_id")
REFERENCES "index_term"("id") ON DELETE CASCADE);;
CREATE INDEX "index_posting_term_id_idx" ON "index_posting"("term_id");
CREATE TABLE "published_files" (
"doc_id" INTEGER NOT NULL,,
"uri_id" INTEGER,
"user_id" INTEGER NOT NULL,
"status" INTEGER NOT NULL,
"title" VARCHAR(1024),
PRIMARY KEY("uri_id"));
CREATE INDEX "published_files_doc_id_idx" ON "published_files"("doc_id");
There are about 600,000 entries in the index_term table, about 4 million in index_posting, and 300,000 in published_files.
Now when I want to find the number of unique doc_ids in index_posting which reference certain terms, I use the following SQL:
select count(distinct index_posting.doc_id) from index_term, index_posting
where
index_posting.term_id = index_term.id and index_term.term like '%test%'
The result is displayed in reasonable time (0.3 secs). EXPLAIN QUERY PLAN returns:
0|0|0|SCAN TABLE index_term
0|1|1|SEARCH TABLE index_posting USING INDEX index_posting_term_id_idx (term_id=?)
When I want to filter the count so that it only includes doc_ids of index_posting for which a published_files entry exists:
select count(distinct index_posting.doc_id) from index_term, index_posting,
published_files where
index_posting.term_id = index_term.id and index_posting.doc_id = published_files.doc_id and index_term.term like '%test%'
The query takes almost 10 times as long. EXPLAIN QUERY PLAN returns:
0|0|1|SCAN TABLE index_posting
0|1|0|SEARCH TABLE index_term USING INDEX sqlite_autoindex_index_term_1 (id=?)
0|2|2|SEARCH TABLE published_files AS pf USING COVERING INDEX published_files_doc_id_idx (doc_id=?)
So as far as I understand, SQLite changed its query plan here, doing a full table scan of index_posting and a lookup in index_term instead of the other way around.
As a workaround I ran
analyze index_posting;
analyze index_term;
analyze published_files;
and now the plan seems correct:
0|0|0|SCAN TABLE index_term
0|1|1|SEARCH TABLE index_posting USING INDEX index_posting_term_id_idx (term_id=?)
0|2|2|SEARCH TABLE published_files USING COVERING INDEX published_files_doc_id_idx (doc_id=?)
But my question is: is there a way to force SQLite to always use the correct query plan?
TIA
ANALYZE is not a workaround; it's supposed to be used.
You can use CROSS JOIN to enforce a certain order of the nested loops, or use INDEXED BY to force a certain index to be used.
However, you asked for "the correct query plan", which might not be same as the one enforced by these mechanisms.
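As a rough sketch of those two mechanisms applied to the query from the question (shown to illustrate the syntax, not as a recommendation of a particular plan):

-- CROSS JOIN: SQLite keeps the tables in the written order,
-- so index_term stays the outer loop here.
select count(distinct index_posting.doc_id)
from index_term
cross join index_posting
cross join published_files
where index_posting.term_id = index_term.id
and index_posting.doc_id = published_files.doc_id
and index_term.term like '%test%';

-- INDEXED BY: force a specific index to be used for index_posting.
select count(distinct index_posting.doc_id)
from index_term,
index_posting indexed by index_posting_term_id_idx,
published_files
where index_posting.term_id = index_term.id
and index_posting.doc_id = published_files.doc_id
and index_term.term like '%test%';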
There are three types of content in my database: Songs, Albums, and Playlists. Albums and Playlists are just collections of songs, and I want to let users like each of them. I made a table with the columns
LikeId UserId SongId PlaylistId AlbumId
for storing likes. For example, if a user likes a song, I put the song's id into the SongId column and the user's id into the UserId column; the other columns will be null. It's working well, but I don't like this solution because it's not normalized.
So I want to ask if there are better solutions for this.
You should just create 3 tables - one for User paired with each of Playlist, Song, and Album. They'd look something like:
CREATE TABLE PlaylistLikes
(
UserID INT NOT NULL,
PlaylistID INT NOT NULL,
PRIMARY KEY (UserID, PlaylistID),
FOREIGN KEY (UserID) REFERENCES Users (UserID),
FOREIGN KEY (PlaylistID) REFERENCES Playlists (PlaylistID)
);
CREATE TABLE SongLikes
(
UserID INT NOT NULL,
SongID INT NOT NULL,
PRIMARY KEY (UserID, SongID),
FOREIGN KEY (UserID) REFERENCES Users (UserID),
FOREIGN KEY (SongID) REFERENCES Songs (SongID)
);
CREATE TABLE AlbumLikes
(
UserID INT NOT NULL,
AlbumID INT NOT NULL,
PRIMARY KEY (UserID, AlbumID),
FOREIGN KEY (UserID) REFERENCES Users (UserID),
FOREIGN KEY (AlbumID) REFERENCES Albums (AlbumID)
);
Here, having both columns in the primary key prevents the user from liking the song/playlist/album more than once (unless you want that to be available - then remove it or maybe keep track of that in a 'number of likes' column).
You should avoid putting all 3 different types of likes in the same table - different tables should be used to represent different things. You want to avoid "One True Lookup Table" - here's one answer detailing why: OTLT
If you want to query against all 3 tables, you can create a view which is the result of a UNION between the 3 tables.
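Such a view might look roughly like this (the view name and the 'Song'/'Playlist'/'Album' labels are illustrative choices, not part of the original answer):

CREATE VIEW UserLikes AS
SELECT UserID, SongID AS ItemID, 'Song' AS ItemType FROM SongLikes
UNION ALL
SELECT UserID, PlaylistID AS ItemID, 'Playlist' AS ItemType FROM PlaylistLikes
UNION ALL
SELECT UserID, AlbumID AS ItemID, 'Album' AS ItemType FROM AlbumLikes;

UNION ALL is used in this sketch because a given (user, item) pair can only appear in one of the three tables, so the duplicate-elimination step of a plain UNION is unnecessary.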
How about
LikeId UserId LikeType TargetId
where LikeType can be "Song", "Playlist" or "Album"?
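A rough sketch of that table (the CHECK constraint is an illustrative addition; note that with this design TargetId cannot carry a foreign key to the three content tables, since it may point at any of them):

CREATE TABLE Likes
(
LikeId INT NOT NULL PRIMARY KEY,
UserId INT NOT NULL,
LikeType VARCHAR(16) NOT NULL CHECK (LikeType IN ('Song', 'Playlist', 'Album')),
TargetId INT NOT NULL,
FOREIGN KEY (UserId) REFERENCES Users (UserId)
);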
Your solution is fine. It has the nice feature that you can set up explicit foreign key relationships to the other tables. In addition, you can verify that exactly one of the values is set by adding a check constraint:
check ((case when SongId is null then 0 else 1 end) +
(case when AlbumId is null then 0 else 1 end) +
(case when PlayListId is null then 0 else 1 end)
) = 1
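Added to an existing table, that might look roughly like this (the table name UserLikes and the constraint name are assumptions for illustration):

ALTER TABLE UserLikes ADD CONSTRAINT chk_one_target
CHECK ((case when SongId is null then 0 else 1 end) +
       (case when AlbumId is null then 0 else 1 end) +
       (case when PlayListId is null then 0 else 1 end) = 1);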
There is some overhead incurred in storing NULL values for all three columns, but it is fairly minimal for three values.
You can even add a computed column to get which value is stored:
WhichId = (case when SongId is not null then 'Song'
                when AlbumId is not null then 'Album'
                when PlayListId is not null then 'PlayList'
           end);
As a glutton for punishment, I would use three tables: UserLikesSongs, UserLikesPlaylists and UserLikesAlbums. Each contains a UserId and an appropriate reference to one of the other tables: Songs, Albums or Playlists.
This also allows adding additional type-specific information. Perhaps Albums will support a favorite track in the future.
You can always use UNION to combine data from the various entity types.
In SQLite I can run the following query to get a list of columns in a table:
PRAGMA table_info(myTable)
This gives me the columns but no information about what the primary keys may be. Additionally, I can run the following two queries for finding indexes and foreign keys:
PRAGMA index_list(myTable)
PRAGMA foreign_key_list(myTable)
But I cannot seem to figure out how to view the primary keys. Does anyone know how I can go about doing this?
Note: I also know that I can do:
select * from sqlite_master where type = 'table' and name ='myTable';
And it will give the create table statement, which shows the primary keys. But I am looking for a way to do this without parsing the create statement.
The table_info pragma DOES give you a column named pk (the last one), indicating whether a column is part of the primary key (if so, its index within the key) or not (zero).
To clarify, from the documentation:
The "pk" column in the result set is zero for columns that are not
part of the primary key, and is the index of the column in the primary
key for columns that are part of the primary key.
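For example, for a hypothetical myTable whose primary key is the id column, the output might look like this (the column set shown here is illustrative):

PRAGMA table_info(myTable);
-- cid  name   type     notnull  dflt_value  pk
-- 0    id     INTEGER  1        NULL        1
-- 1    name   TEXT     0        NULL        0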
Hopefully this helps someone:
After some research and pain the command that worked for me to find the primary key column name was:
SELECT l.name FROM pragma_table_info("Table_Name") as l WHERE l.pk = 1;
For those trying to retrieve a pk name in Android while using the Room library:
@Oogway101's answer was throwing an error: "no such column [your_table_name]...".
My way of submitting the query was:
String pkSearch = "SELECT l.name FROM pragma_table_info(" + tableName + ") as l WHERE l.pk = 1;";
database.query(new SimpleSQLiteQuery(pkSearch));
I also tried quoting the table name with escaped (") characters and still got an error:
String pkSearch = "SELECT l.name FROM pragma_table_info(\"" + tableName + "\") as l WHERE l.pk = 1;";
So my solution was this:
String pragmaInfo = "PRAGMA table_info(" + tableName + ");";
Cursor c = database.query(new SimpleSQLiteQuery(pragmaInfo));
String id = null;
c.moveToFirst();
do {
if (c.getInt(5) == 1) {
id = c.getString(1);
}
} while (c.moveToNext() && id == null);
Log.println(Log.ASSERT, TAG, "AbstractDao: pk is: " + id);
The explanation is that:
A) PRAGMA table_info returns a cursor with various indices; the response has at least 6 columns (I didn't check for more).
B) index 1 has the column name.
C) index 5 has the "pk" value: 0 if the column is not part of the primary key, otherwise its position within the primary key.
You can define more than one pk column, so this will not give an accurate result if your table has a composite primary key (IMHO more than one is bad design and balloons the complexity of the database beyond human comprehension).
So how will this fit into the @Dao? (you may ask...)
When making the Dao abstract, you have access to a constructor which takes the database:
From the documentation:
An abstract @Dao class can optionally have a constructor that takes a Database as its only parameter.
this is the constructor that will grant you access to the query.
There is a catch though...
You may use the Dao during database creation with the .addCallback() method:
instance = Room.databaseBuilder(context.getApplicationContext(),
        AppDatabase2.class, "database")
        .addCallback(new RoomDatabase.Callback() {
            // You may use the Daos here, e.g. in onCreate() / onOpen().
        })
        .build();
If you run a query in the constructor of the Dao, the database will enter a feedback loop of infinite instantiation.
This means that the query MUST be used LAZILY (just at the moment the user needs something), and because the value will never change, it can be stored and never re-queried.
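A minimal sketch of that lazy, cached lookup inside an abstract @Dao; the class name, the helper method, and the cursor handling are assumptions added for illustration:

import android.database.Cursor;
import androidx.room.Dao;
import androidx.room.RoomDatabase;
import androidx.sqlite.db.SimpleSQLiteQuery;

@Dao
public abstract class AbstractDao {
    private final RoomDatabase database;
    private String cachedPk; // resolved lazily on first use, then reused

    public AbstractDao(RoomDatabase database) {
        this.database = database;
    }

    // Hypothetical helper: looks up the primary key column name on demand.
    protected String primaryKeyName(String tableName) {
        if (cachedPk == null) {
            Cursor c = database.query(
                    new SimpleSQLiteQuery("PRAGMA table_info(" + tableName + ");"));
            try {
                while (c.moveToNext() && cachedPk == null) {
                    if (c.getInt(5) == 1) {        // "pk" column
                        cachedPk = c.getString(1); // column name
                    }
                }
            } finally {
                c.close();
            }
        }
        return cachedPk;
    }
}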