I'm struggling with DynamoDB schema design for a table storing locations. The table will have [userId, lastUpdatedTime, locationGooglePlaceId, longitude, latitude, hideOnUI(bool)].
One of the main queries is: given the user's current GPS location (x, y), find nearby userIds based on their longitude and latitude.
The problem is how to design an index for this purpose. The table itself can have HASH key UserId and SORT key lastUpdatedTime, but how would the GSI go? I can't seem to identify any partition key for the "equal" operation.
In SQL it'll be something like:
select * from table
where x-c <= longitude and longitude < x+c
AND y-c <= latitude and latitude < y+c
Thanks
First of all, I am not sure DynamoDB is a good fit here; it may be better to use another database, since DynamoDB does not support complicated indexes.
Nonetheless, here is a design that you can try.
First, you can split your map into multiple square blocks; every square block would have an id and a known position and size.
Then if you have a location and you want to find all nearby points, you can do the following.
Every point in your database will be stored in the Points table with the following keys:
BlockId (String, UUID, partition key) - id of the block this point belongs to
Latitude (Number, sort key) - latitude of the point
Longitude (Number) - a plain attribute
Now, if you know which square the user's location is in and which squares are nearby, you can perform the following search in each nearby square:
BlockId = <nearby_block_id>
Latitude BETWEEN (y-c, y+c)
and use a filter expression based on the Longitude attribute:
Longitude BETWEEN (x-c, x+c)
It does not really matter whether you use latitude or longitude as the sort key here.
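The block scheme above can be sketched in Python. The grid layout, id format, and helper names here are assumptions for illustration, not anything DynamoDB prescribes:

```python
def block_id(lat, lon, block_size):
    # Map a coordinate to the id of the fixed-size grid cell containing it.
    return f"{int(lat // block_size)}:{int(lon // block_size)}"

def nearby_block_ids(lat, lon, c, block_size):
    # Ids of the cells that can contain points within distance c of
    # (lat, lon); with c <= block_size, the 3x3 neighborhood suffices.
    ids = set()
    for dy in (-c, 0, c):
        for dx in (-c, 0, c):
            ids.add(block_id(lat + dy, lon + dx, block_size))
    return ids
```

Each id returned by nearby_block_ids would then be queried as the BlockId partition key, with the Latitude range condition and Longitude filter expression described above.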
BETWEEN is a DynamoDB operator that can be used with sort keys or in filter expressions:
BETWEEN: Greater than or equal to the first value, and less than or equal to the second value. AttributeValueList must contain two AttributeValue elements of the same type, either String, Number, or Binary (not a set type). A target attribute matches if the target value is greater than or equal to the first element and less than or equal to the second element. If an item contains an AttributeValue element of a different type than the one provided in the request, the value does not match. For example, {"S":"6"} does not compare to {"N":"6"}. Also, {"N":"6"} does not compare to {"NS":["6", "2", "1"]}.
Now, the downside is that a single partition key can hold no more than 10 GB of data, so the number of points you can put in a single square is limited. You can work around this if your squares are small enough, or if your squares have variable sizes and you use big squares for sparsely populated areas and small squares for crowded ones, but that seems to be a non-trivial project.
I want to publish metrics about my RocksDB instance, including the disk size of each column over time. I've come across a likely API method to calculate disk size for a column:
GetApproximateSizes():
Status s = GetApproximateSizes(options, column_family, ranges.data(), NUM_RANGES, sizes.data());
This is a nice API, but I don't know how to provide a Range that will specify my entire column. Is there a way to do so without finding the min/max key in the column?
For the whole database, you can approximate it by using 0x00 or the empty byte string as the start key and an arbitrarily large key such as 0xFFFFFF as the end key.
Otherwise, if the column's keys share a common prefix, use the following function to compute the end key:
def strinc(key):
    # Strip trailing 0xff bytes, then increment the last remaining byte.
    key = key.rstrip(b"\xff")
    return key[:-1] + bytes([key[-1] + 1])
strinc computes the next byte string that is not a prefix of key; together, key and strinc(key) describe the whole keyspace having key as a prefix.
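As a self-contained Python 3 sanity check of this idea (using bytes([...]) rather than six's int2byte; the b"cf1/" prefix is just a made-up example):

```python
def strinc(key: bytes) -> bytes:
    # Drop trailing 0xff bytes, then bump the last remaining byte by one.
    key = key.rstrip(b"\xff")
    return key[:-1] + bytes([key[-1] + 1])

# All keys with prefix b"cf1/" fall in [b"cf1/", strinc(b"cf1/")),
# i.e. [b"cf1/", b"cf10") -- a range you could hand to GetApproximateSizes.
start = b"cf1/"
end = strinc(start)
```

Note that strinc is undefined for a key consisting only of 0xff bytes; such a prefix has no upper bound short of the end of the keyspace.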
Is there a storage and performance gain from denormalizing a sqlite3 blob column into the primary table, treating it as a foreign key only in some cases? I have two implementations, and both run slower. Is there something in sqlite3's internals that precludes such usage?
I have a ~100GB sqlite file with two tables. The first maps z,x,y coordinates to an ID -- a 32 chars hex string stored as TEXT. The second table maps that ID to a BLOB, usually a few kilobytes. There are unique indexes for both (z,x,y) and ID. There is a VIEW that joins both tables.
For ~30% of coordinates, BLOBs are unique per coordinate combination. The rest reference the same ~100 frequently occurring BLOBs.
I would like to optimize for space and performance: move unique BLOBs into the first table, and keep the second table only as a small 100-row lookup for the few shared BLOBs. The first table's blob could be checked at run time -- if it is exactly the size of the hash key, treat it as a lookup. Otherwise, treat it as value.
My thinking is that this will often avoid a lookup into a large second table, keep small lookup table fully in cache, and avoid storing keys for most of the blobs. My perf testing does not confirm this theory, and I don't understand why.
Original implementation:
CREATE TABLE map (z INTEGER, x INTEGER, y INTEGER, id TEXT);
CREATE TABLE blobs (id TEXT, data BLOB);
CREATE VIEW tiles AS
SELECT z, x, y, data FROM map JOIN blobs ON blobs.id = map.id;
CREATE UNIQUE INDEX map_index ON map (z, x, y);
CREATE UNIQUE INDEX blobs_id ON blobs (id);
The optimized implementation changes the ID column in the map table from id TEXT to mix BLOB.
CREATE TABLE map (z INTEGER, x INTEGER, y INTEGER, mix BLOB);
I tried two VIEW implementations; both run ~10% slower than the INNER JOIN method above. The LEFT JOIN method:
CREATE VIEW tiles AS
SELECT z, x, y,
COALESCE(blobs.data, map.mix) AS data
FROM map LEFT JOIN blobs ON LENGTH(map.mix) = 32 AND map.mix = blobs.id;
And I tried the sub-query approach:
CREATE VIEW tiles AS
SELECT z, x, y,
CASE
WHEN LENGTH(map.mix) = 32 THEN
COALESCE((SELECT blobs.data FROM blobs WHERE blobs.id = map.mix), map.mix)
ELSE map.mix
END AS data
FROM map;
P.S. COALESCE() ensures that if my data happens to be exactly 32 bytes long but is not a foreign key, the query still returns the data as is.
P.P.S. This is an mbtiles file with map tiles; the duplicate tiles represent empty water and land, whereas unique tiles represent places with distinctive features, like city streets.
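The COALESCE fallback in the LEFT JOIN variant can be verified with a minimal in-memory sketch, using hypothetical tile data and the sqlite3 module from the Python standard library:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE map (z INTEGER, x INTEGER, y INTEGER, mix BLOB);
    CREATE TABLE blobs (id TEXT, data BLOB);
    CREATE UNIQUE INDEX blobs_id ON blobs (id);
    CREATE VIEW tiles AS
    SELECT z, x, y, COALESCE(blobs.data, map.mix) AS data
    FROM map LEFT JOIN blobs ON LENGTH(map.mix) = 32 AND map.mix = blobs.id;
""")
shared_id = "ab" * 16                         # 32-char key into the lookup table
con.execute("INSERT INTO blobs VALUES (?, ?)", (shared_id, b"shared tile"))
con.execute("INSERT INTO map VALUES (0, 0, 0, ?)", (shared_id,))        # shared
con.execute("INSERT INTO map VALUES (0, 0, 1, ?)", (b"unique bytes",))  # inline
tiles = {(z, x, y): data
         for z, x, y, data in con.execute("SELECT z, x, y, data FROM tiles")}
```

The shared row resolves through the lookup table; the inline row's blob fails the length test and is returned as-is.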
I want to show a string value as one of the measure values. My fact table has an integer value and a string value, along with some foreign keys to other tables. I can show the integer value as a measure, but not the string value, because the Measure element in the cube schema (written in XML) requires an 'aggregator' attribute (which specifies the aggregate function for measure values). Of course I understand that string values can't be aggregated, but I want to show the string value at the latest level of the hierarchy.
I read the following article. A figure (around the middle of the page) shows a cube containing a string value as a measure. But that is an example of a Property value on a dimension table, so the string value isn't contained in the fact table. I want to show a string value that is contained in the fact table.
A Simple Date Dimension for Mondrian Cubes
Does anyone have an idea how a string value can be shown as a measure value? Or do I have to edit Mondrian's source code?
I have had the same problem and solved it by setting the aggregator attribute in the measure tag to max.
e.g.
<Measure name="Comment" datatype="String" column="comment" caption="Comment" aggregator="max"/>
Why does it need to be a measure?
If no aggregation would naturally be applied to it and you just want the string value, it is a dimension, not a measure. Trying to force it to be a measure is not the best approach.
I think the figure you reference is just showing a drillthrough, and that the only actual
measure is Turnover. The report layout is slightly misleading in terms of dimensions and measures.
You can just use the fact table again in the schema as a dimension table if for some reason you don't want to split this out into a separate physical table.
It sounds like the string may be high-cardinality relative to the integer, possibly 1:1. Depending on the size of your cube, this may or may not be a performance challenge. But don't try to make it a measure.
Good luck!
I have a SQL table:
SuburbId int,
SuburbName varchar,
StateName varchar,
Postcode varchar,
Latitude decimal,
Longitude decimal
In my C# I have created code that builds a bounding box so I can search by distance.
My stored procedure to get the suburbs is:
[dbo].[Lookup] (
    @MinLat decimal(18,15),
    @MaxLat decimal(18,15),
    @MinLon decimal(18,15),
    @MaxLon decimal(18,15)
)
AS
BEGIN
    SELECT SuburbId, SuburbName, StateName, Latitude, Longitude
    FROM SuburbLookup
    WHERE (Latitude >= @MinLat AND Latitude <= @MaxLat AND Longitude >= @MinLon AND Longitude <= @MaxLon)
END
My question is: this produces a Clustered Index Scan... is there a more efficient way of doing this?
This type of query tends to perform quite poorly with a standard B-tree index. For better performance you can use the geography column type and add a spatial index.
Queries such as WHERE geography.STDistance(geography2) < number can use a spatial index.
Here are a couple of links that should help. Of course, depending on the scope of your project, you may already have the best solution.
That said, if you care to, you can create a custom index in SQL Server for your locations.
Custom Indexing
Additionally, you could look into quadtrees and quadtiles. There is a technique where you compute a key via interleaved addressing: the bits of the lat/lon pair are combined into a single integer, which can then be truncated to a coarser level to see how locations relate to each other.
see more here
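A minimal sketch of that interleaving (a Morton / Z-order key; the 16-bit grid resolution and function names are assumptions for illustration):

```python
def morton_key(lat_idx: int, lon_idx: int, bits: int = 16) -> int:
    # Interleave the bits of two grid indices into a single integer key.
    key = 0
    for i in range(bits):
        key |= ((lat_idx >> i) & 1) << (2 * i + 1)
        key |= ((lon_idx >> i) & 1) << (2 * i)
    return key

def truncate(key: int, bits: int, level: int) -> int:
    # Keep only the top `level` of `bits` resolution levels; nearby
    # locations end up sharing the same truncated key.
    return key >> (2 * (bits - level))
```

Truncating a key is equivalent to computing the key of the coarser cell that contains it, which is what makes prefix/range scans over such keys useful for proximity lookups.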
What are they and how do they work?
Where are they used?
When should I (not) use them?
I've heard the word over and over again, yet I don't know its exact meaning.
What I heard is that they implement associative arrays by sending the array key through a hash function that converts it into an int, which is then used as an index into a regular array. Am I right about that?
(Note: this is not my homework; I do go to school, but they only teach us the basics of informatics.)
Wikipedia seems to have a pretty nice answer to what they are.
You should use them when you want to look up values by some index.
As for when you shouldn't use them... when you don't want to look up values by some index (for example, if all you want to ever do is iterate over them.)
You've about got it. They're a very good way of mapping from arbitrary things (keys) to arbitrary things (values). The idea is that you apply a function (a hash function) that translates the key to an index into the array where you store the values; the hash function's speed is typically linear in the size of the key, which is great when key sizes are much smaller than the number of entries (i.e., the typical case).
The tricky bit is that hash functions are usually imperfect. (Perfect hash functions exist, but tend to be very specific to particular applications and particular datasets; they're hardly ever worthwhile.) There are two approaches to dealing with this, and each requires storing the key with the value: one (open addressing) is to probe onward from the hashed location in a pre-determined pattern until a free slot is found; the other (chaining) is to hang a linked list off each entry in the array (so you do a linear lookup over what is hopefully a short list). The production code whose source I've read has all used chaining, with dynamic rebuilding of the hash table when the load factor gets excessive.
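The chaining approach can be sketched as a toy Python class (fixed capacity, no resizing, purely illustrative):

```python
class ChainedHashTable:
    def __init__(self, capacity=8):
        # One bucket (a list of key/value pairs) per slot.
        self.buckets = [[] for _ in range(capacity)]

    def _bucket(self, key):
        # The hash function maps an arbitrary key to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:              # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        # Linear scan over the (hopefully short) chain.
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```

A real implementation would also track the load factor and rebuild with more buckets when chains grow too long, as described above.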
Good hash functions are one-way functions that produce a well-distributed value from any given input, so you will get a fairly unique value for each input. They are also repeatable: the same input always generates the same output.
An example of a good hash function is SHA1 or SHA256.
Let's say that you have a database table of users. The columns are id, last_name, first_name, telephone_number, and address.
While any of these columns could have duplicates, let's assume that no rows are exactly the same.
In this case, id is simply a unique primary key of our making (a surrogate key). The id field doesn't actually contain any user data because we couldn't find a natural key that was unique for users, but we use the id field for building foreign key relationships with other tables.
We could look up the user record like this from our database:
SELECT * FROM users
WHERE last_name = 'Adams'
AND first_name = 'Marcus'
AND address = '1234 Main St'
AND telephone_number = '555-1212';
We have to search through 4 different columns, using 4 different indexes, to find the record.
However, you could create a new "hash" column, and store the hash value of all four columns combined.
String myHash = myHashFunction("Marcus" + "Adams" + "1234 Main St" + "555-1212");
You might get a hash value like AE32ABC31234CAD984EA8.
You store this hash value as a column in the database and index on that. You now only have to search one index.
SELECT * FROM users
WHERE hash_value = 'AE32ABC31234CAD984EA8';
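In Python, the combined hash could look like the following sketch; sha256 is just one reasonable choice, and the separator guards against ambiguous concatenations like ("ab", "c") vs ("a", "bc"):

```python
import hashlib

def row_hash(first_name, last_name, address, phone):
    # Join with a separator so distinct column splits can't collide,
    # then hash the whole payload.
    payload = "|".join([first_name, last_name, address, phone])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

h = row_hash("Marcus", "Adams", "1234 Main St", "555-1212")
```

Because the function is deterministic, the same four column values always produce the same hash, which is what makes the single-index lookup work.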
Once we have the id for the requested user, we can use that value to look up related data in other tables.
The idea is that the hash function offloads work from the database server.
Collisions are not likely. If two users have the same hash, it's most likely that they have duplicate data.