Tips for querying dynamic object fields in Crate - wildcard

I have a table such as the one at the end of this question. I insert into the peers_array field a dynamically keyed array/object such as:
{
  "130": { "to": 5 },
  "175": { "fr": 0 },
  "188": { "fr": 0 },
  "190": { "to": 5 },
  "280": { "fr": 4 }
}
I'm looking for advice on how to run a wildcard query across the dynamic keys, something like:
select * from table where peers_array[*]['to'] > 10
In Elasticsearch I can query like this:
peers_array.*.to: >10
My Table:
CREATE TABLE table (
  "id" long primary key,
  "sourceRouteId" integer,
  "rci" integer,
  peers_array object(dynamic),
  "partition_date" string primary key
) partitioned by (partition_date) with (number_of_replicas = 0, refresh_interval = 5000);

I'm sorry to say that this is currently not possible. We'll put it on our backlog. Thanks for reporting this use case.
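In the meantime, a possible workaround (a sketch only, and it assumes you can reshape the data) is to store the peers as an array of objects with the peer id as an ordinary field. A subscript on an object array returns the array of all values for that key, so an ANY comparison can stand in for the wildcard:

-- hypothetical reshaped table, names for illustration only
CREATE TABLE table2 (
  "id" long primary key,
  peers array(object as (
    peer_id integer,
    "to" integer,
    "fr" integer
  )),
  "partition_date" string primary key
) partitioned by (partition_date);

-- peers['to'] evaluates to the array of all "to" values in the row
select * from table2 where 10 < ANY(peers['to']);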

Related

DynamoDB filter if primary key contains value

CURRENTLY
I have a table in DynamoDB with a single attribute - Primary Key - that contains unique values.
PK
------
#A#B#C#
#B#C#
#C#D#E#
#BC#
ISSUE
I am looking to do 2 searches for #B#C#: (1) an exact match, and (2) a containing match, and therefore only want these results:
(1) Exact Match:
#B#C#
(2) Containing Match:
#A#B#C#
#B#C#
Are these 2 searches possible against the primary key?
If so, what is the most efficient query to run? e.g. QUERY or SCAN
Note:
For (2) I am using the following code, but it is returning all items in DB:
params = {
  TableName: 'myTable',
  FilterExpression: "contains(#key, :v)",
  ExpressionAttributeNames: { "#key": "PK" },
  ExpressionAttributeValues: { ":v": "#B#C#" }
};
dynamodb.scan(params, callback);
DynamoDB supports two main types of searches: query and scan. The Query operation finds items based on primary key values. The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index.
If you wanted to find the item with the primary key #B#C# (your exact-match case), you would use the query API:
ddbClient.query({
  "TableName": "<YOUR TABLE NAME>",
  "KeyConditionExpression": "#pk = :pk",
  "ExpressionAttributeNames": {
    "#pk": "PK"
  },
  "ExpressionAttributeValues": {
    ":pk": { "S": "#B#C#" }
  }
})
For your second access pattern, you'll need to use the scan API because you are searching across the entire table/secondary index.
You can use scan to test if a primary key has a substring using contains. I don't see anything wrong with the format of your scan operation.
Be careful when using scan this way. Because scan will read your entire table to fetch results, you will have a fairly inefficient operation at scale. If this operation is run infrequently, or you are running it against a sparse index, it's probably fine. However, if it's one of your primary access patterns, you may want to reconsider using the scan API for this operation.
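If you do keep the scan, remember that DynamoDB returns at most 1 MB of data per call; here is a minimal sketch of paginating through every page with LastEvaluatedKey (assuming the AWS SDK for JavaScript v2 DocumentClient and the table from the question):

const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

// Collect every item whose PK contains the given substring,
// following LastEvaluatedKey until the scan is exhausted.
async function containingMatches(substring) {
  const items = [];
  let ExclusiveStartKey;
  do {
    const page = await docClient.scan({
      TableName: "myTable",
      FilterExpression: "contains(#key, :v)",
      ExpressionAttributeNames: { "#key": "PK" },
      ExpressionAttributeValues: { ":v": substring },
      ExclusiveStartKey
    }).promise();
    items.push(...page.Items);
    ExclusiveStartKey = page.LastEvaluatedKey;
  } while (ExclusiveStartKey);
  return items;
}

// usage: containingMatches("#B#C#").then(console.log);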

Why is this Knex migration not forcing a column to be unique?

I'm creating a SQLite database with this Knex migration. When I review the DB in SQLiteStudio, it doesn't indicate that the email column is unique. Is there a mistake I'm missing?
exports.up = function (knex) {
  return knex.schema
    .createTable('users', users => {
      users.increments();
      users.string('email', 128).unique().notNullable();
      users.string('password', 256).notNullable();
    });
};
Generated DDL code:
CREATE TABLE users (
  id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
  email VARCHAR (128) NOT NULL,
  password VARCHAR (256) NOT NULL
);
Alternatives I've tried that didn't work:
- Switching the order of unique() and notNullable():
users.string('email', 128).notNullable().unique()
- Creating a separate line to add the unique constraint:
.createTable('users', users => {
  users.increments();
  users.string('email', 128).notNullable();
  users.string('password', 256).notNullable();
  users.unique('email');
})
It is unique; you're just not going to see it in the CREATE TABLE statement. SQLite sets a UNIQUE constraint by creating an index with the UNIQUE qualifier. Take the following Knex migration, for example:
exports.up = knex =>
  knex.schema.debug().createTable("users", t => {
    t.increments("id");
    t.string("name").unique();
  });
Note debug(): it's very handy if you want to see what SQL is being generated. Here's the debug output:
[
  {
    sql: 'create table `users` (`id` integer not null ' +
      'primary key autoincrement, `name` ' +
      'varchar(255))',
    bindings: []
  },
  {
    sql: 'create unique index `users_name_unique` on `users` (`name`)',
    bindings: []
  }
]
As you can see, a second statement is issued to create the UNIQUE constraint. If we now go and look at the database, we'll see something like:
07:48 $ sqlite3 dev.sqlite3
sqlite> .dump users
BEGIN TRANSACTION;
CREATE TABLE `users` (`id` integer not null primary key autoincrement,
`name` varchar(255));
CREATE UNIQUE INDEX `users_name_unique` on `users` (`name`);
COMMIT;
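If you'd rather confirm the constraint from SQL than from SQLiteStudio, the index is also visible in SQLite's catalog; a quick check (which, given the dump above, should print something like the line shown) is:

sqlite> SELECT name, sql FROM sqlite_master
   ...>   WHERE type = 'index' AND tbl_name = 'users';
users_name_unique|CREATE UNIQUE INDEX `users_name_unique` on `users` (`name`)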
As an aside, you may wish to do more research about the possible length of user emails. See this answer as a starting point.

What's the equivalent DynamoDB solution for this MySQL Query?

I'm familiar with MySQL and am starting to use Amazon DynamoDB for a new project.
Assume I have a MySQL table like this:
CREATE TABLE foo (
  id CHAR(64) NOT NULL,
  scheduledDelivery DATETIME NOT NULL,
  -- ...other columns...
  PRIMARY KEY (id),
  INDEX schedIndex (scheduledDelivery)
);
Note the secondary index schedIndex, which is supposed to speed up the following query (executed periodically):
SELECT *
FROM foo
WHERE scheduledDelivery <= NOW()
ORDER BY scheduledDelivery ASC
LIMIT 100;
That is: Take the 100 oldest items that are due to be delivered.
With DynamoDB I can use the id column as primary partition key.
However, I don't understand how I can avoid full-table scans in DynamoDB. When adding a secondary index I must always specify a "partition key". However (in MySQL terms), I see these problems:
- the scheduledDelivery column is not unique, so AFAIK it can't be used as a partition key itself
- adding id as the unique partition key and using scheduledDelivery as the "sort key" sounds like an (id, scheduledDelivery) secondary index to me, which makes that index practically useless
I understand that MySQL and DynamoDB require different approaches, so what would be an appropriate solution in this case?
It's not possible to avoid a full table scan with this kind of query.
However, you may be able to disguise it as a Query operation, which would allow you to sort the results (not possible with a Scan).
You must first create a GSI. Let's name it scheduled_delivery-index.
We will specify our index's partition key to be an attribute named fixed_val, and our sort key to be scheduled_delivery.
fixed_val will contain any value you want, but it must always be that value, and you must know it from the client side. For the sake of this example, let's say that fixed_val will always be 1.
GSI keys do not have to be unique, so don't worry if there are two duplicated scheduled_delivery values.
You would query the table like this:
var now = Date.now();
//...
{
  TableName: "foo",
  IndexName: "scheduled_delivery-index",
  ExpressionAttributeNames: {
    "#f": "fixed_val",
    "#d": "scheduled_delivery"
  },
  ExpressionAttributeValues: {
    ":f": 1,
    ":d": now
  },
  KeyConditionExpression: "#f = :f and #d <= :d",
  ScanIndexForward: true, // ascending, i.e. oldest first
  Limit: 100              // mirrors the LIMIT 100 in the MySQL query
}
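For completeness, here's a sketch of adding that GSI to an existing table (assuming the AWS SDK for JavaScript v2 and a provisioned-capacity table; names and throughput are illustrative):

var AWS = require("aws-sdk");
var dynamodb = new AWS.DynamoDB();

dynamodb.updateTable({
  TableName: "foo",
  AttributeDefinitions: [
    { AttributeName: "fixed_val", AttributeType: "N" },
    { AttributeName: "scheduled_delivery", AttributeType: "N" }
  ],
  GlobalSecondaryIndexUpdates: [{
    Create: {
      IndexName: "scheduled_delivery-index",
      KeySchema: [
        { AttributeName: "fixed_val", KeyType: "HASH" },
        { AttributeName: "scheduled_delivery", KeyType: "RANGE" }
      ],
      Projection: { ProjectionType: "ALL" },
      // required when the table uses provisioned capacity
      ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
    }
  }]
}, console.log);

One design trade-off to keep in mind: because every item shares the same fixed_val, all traffic to this index goes through a single partition key, which is fine at modest scale but worth reconsidering for heavy write volumes.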

Make own like system for various content

There are three types of content in my database: Songs, Albums, and Playlists. Albums and Playlists are just collections of songs. I want to let the user put a like on each of them. For storing likes I made a table with the columns
LikeId UserId SongId PlaylistId AlbumId
For example, if a user likes a song, I put the song's id into the SongId column and the user's id into the UserId column; the other columns stay null. It works, but I don't like this solution because it's not normalized.
So I want to ask if there are better solutions for this.
You should just create 3 tables - one for User paired with each of Playlist, Song, and Album. They'd look something like:
CREATE TABLE PlaylistLikes
(
  UserID INT NOT NULL,
  PlaylistID INT NOT NULL,
  PRIMARY KEY (UserID, PlaylistID),
  FOREIGN KEY (UserID) REFERENCES Users (UserID),
  FOREIGN KEY (PlaylistID) REFERENCES Playlists (PlaylistID)
);
CREATE TABLE SongLikes
(
  UserID INT NOT NULL,
  SongID INT NOT NULL,
  PRIMARY KEY (UserID, SongID),
  FOREIGN KEY (UserID) REFERENCES Users (UserID),
  FOREIGN KEY (SongID) REFERENCES Songs (SongID)
);
CREATE TABLE AlbumLikes
(
  UserID INT NOT NULL,
  AlbumID INT NOT NULL,
  PRIMARY KEY (UserID, AlbumID),
  FOREIGN KEY (UserID) REFERENCES Users (UserID),
  FOREIGN KEY (AlbumID) REFERENCES Albums (AlbumID)
);
Here, having both columns in the primary key prevents the user from liking the song/playlist/album more than once (unless you want that to be available - then remove it or maybe keep track of that in a 'number of likes' column).
You should avoid putting all 3 different types of likes in the same table - different tables should be used to represent different things. You want to avoid "One True Lookup Table" - here's one answer detailing why: OTLT
If you want to query against all 3 tables, you can create a view which is the result of a UNION between the 3 tables.
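For example, a view along these lines (a sketch using the table definitions above; the LikeType labels are just for illustration) lets you query all likes at once:

CREATE VIEW AllLikes AS
SELECT UserID, 'Song' AS LikeType, SongID AS TargetID FROM SongLikes
UNION ALL
SELECT UserID, 'Playlist' AS LikeType, PlaylistID AS TargetID FROM PlaylistLikes
UNION ALL
SELECT UserID, 'Album' AS LikeType, AlbumID AS TargetID FROM AlbumLikes;

UNION ALL is safe here because a given (user, item) pair can only ever appear in one of the three tables.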
How about
LikeId UserId LikeType TargetId
where LikeType can be "Song", "Playlist", or "Album"?
Your solution is fine. It has the nice feature that you can set up explicit foreign key relationships to the other tables. In addition, you can verify that exactly one of the values is set by adding a check constraint:
check ((case when SongId is null then 0 else 1 end) +
       (case when AlbumId is null then 0 else 1 end) +
       (case when PlayListId is null then 0 else 1 end) = 1)
There is some overhead in storing NULL values for the unused columns, but it is fairly minimal for three values.
You can even add a computed column to get which value is stored:
WhichId = (case when SongId is not null then 'Song'
                when AlbumId is not null then 'Album'
                when PlayListId is not null then 'PlayList'
           end);
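Put together, the single-table design might look like this (a sketch in SQL Server syntax, since computed columns are declared this way there; column types and referenced table names are assumed):

CREATE TABLE Likes (
  LikeId INT IDENTITY(1,1) PRIMARY KEY,
  UserId INT NOT NULL REFERENCES Users (UserId),
  SongId INT NULL REFERENCES Songs (SongId),
  PlayListId INT NULL REFERENCES Playlists (PlaylistId),
  AlbumId INT NULL REFERENCES Albums (AlbumId),
  -- exactly one of the three target ids must be set
  CHECK ((case when SongId is null then 0 else 1 end) +
         (case when AlbumId is null then 0 else 1 end) +
         (case when PlayListId is null then 0 else 1 end) = 1),
  -- derived label saying which kind of like this row is
  WhichId AS (case when SongId is not null then 'Song'
                   when AlbumId is not null then 'Album'
                   when PlayListId is not null then 'PlayList'
              end)
);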
As a glutton for punishment, I would use three tables: UserLikesSongs, UserLikesPlaylists and UserLikesAlbums. Each contains a UserId and an appropriate reference to one of the other tables: Songs, Albums or Playlists.
This also allows adding additional type-specific information. Perhaps Albums will support a favorite track in the future.
You can always use UNION to combine data from the various entity types.

How to autogenerate the username with specific string?

I am using ASP.NET 2008 and MySQL.
I want to auto-generate the value for the username field in the format
"SISI001", "SISI002",
etc. in SQL whenever a new record is inserted.
How can I do it?
What would the SQL query be?
Thanks.
Add a column with an auto-increment integer data type. Then get the maximum value of that column in the table using the MAX() function and assign it to an integer variable (let the variable be x). After that:
string userid = "SISI";
x = x + 1;
// pad with zeros so x = 1 becomes "001", giving "SISI001"
string count = new string('0', 3 - x.ToString().Length);
userid = userid + count + x.ToString();
Use userid as your username.
Hope it helps. Good luck.
Plan A>
You need to keep a table (keys) that contains the last numeric ID generated for various entities. In this case the entity is "user", so the table will contain two columns, entity varchar(100) and lastid int.
You can then write a function that receives the entity name and returns the incremented ID. Use this ID concatenated with the string component "SISI" in the INSERT sent to MySQL.
Following is the MySQL Table tblkeys:
CREATE TABLE `tblkeys` (
  `entity` varchar(100) NOT NULL,
  `lastid` int(11) NOT NULL,
  PRIMARY KEY (`entity`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The MySQL Function:
DELIMITER $$
CREATE FUNCTION `getkey`(ps_entity VARCHAR(100)) RETURNS INT(11)
BEGIN
  DECLARE ll_lastid INT;
  UPDATE tblkeys SET lastid = lastid + 1 WHERE tblkeys.entity = ps_entity;
  SELECT tblkeys.lastid INTO ll_lastid FROM tblkeys WHERE tblkeys.entity = ps_entity;
  RETURN ll_lastid;
END$$
DELIMITER ;
The sample function call:
SELECT getkey('user')
Sample INSERT command (note CONCAT rather than +, since + is numeric addition in MySQL, and LPAD for the SISI001-style zero padding):
insert into users(username, password)
values (concat('SISI', lpad(getkey('user'), 3, '0')), '$password');
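One caveat with getkey() as written: between the UPDATE and the SELECT, another connection can bump the counter, so two sessions may read back the same value. A common variant (a sketch using MySQL's LAST_INSERT_ID(expr) trick, which makes the read atomic with the update on a per-connection basis) is:

DELIMITER $$
CREATE FUNCTION `getkey`(ps_entity VARCHAR(100)) RETURNS INT(11)
MODIFIES SQL DATA
BEGIN
  -- LAST_INSERT_ID(expr) remembers expr for this connection only,
  -- so no second query against the table is needed
  UPDATE tblkeys
     SET lastid = LAST_INSERT_ID(lastid + 1)
   WHERE tblkeys.entity = ps_entity;
  RETURN LAST_INSERT_ID();
END$$
DELIMITER ;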
Plan B>
This way the ID will be a bit larger but will not require any extra table. Use the following SQL to get a new unique ID:
SELECT ROUND(NOW() + 0)
You can pass it as part of the insert command and concatenate it with the string component of "SISI".
I am not an ASP.NET developer, but I can help you.
You can do something like this: create a sequence in your database as
CREATE SEQUENCE "Database_name"."SEQUENCE1"
  MINVALUE 1 MAXVALUE 9999999999999999999999999999
  INCREMENT BY 1 START WITH 21 CACHE 20 NOORDER NOCYCLE;
(Note: MySQL itself has no CREATE SEQUENCE; this syntax is for Oracle, or MariaDB 10.3+.)
and then while inserting use this query:
insert into testing (userName) values (concat('SISI', sequence1.nextval));
Hope this helps.
Try this (note: SQL Server syntax, not MySQL):
CREATE TABLE Users (
  IDs int NOT NULL IDENTITY (1, 1),
  -- computed column: derives the username from the IDENTITY value
  USERNAME AS 'SISI' + RIGHT('000000000' + CAST(IDs AS varchar(10)), 3),
  Address varchar(150)
);
(not tested)
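Since the question targets MySQL, the same idea there (a sketch, assuming users has an AUTO_INCREMENT id column and a username column) is to insert first and derive the username from the generated id:

INSERT INTO users (password) VALUES ('$password');

-- LAST_INSERT_ID() is the id MySQL just generated for this connection
UPDATE users
   SET username = CONCAT('SISI', LPAD(id, 3, '0'))
 WHERE id = LAST_INSERT_ID();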
