My DynamoDB table has a hash key, a range key, and other data columns that we can insert.
In DynamoDB, my understanding is that when items are inserted into a GSI or base table, they are sorted in ascending order by range key within each partition, while the hash keys themselves are not ordered.
Example :
hashId - rangeKey
1 - 1
1 - 2
1 - 3
3 - 1
3 - 2
3 - 3
2 - 1
2 - 2
2 - 3
Is there any way we can have ordered hash keys as well in DynamoDB?
Like this, even when we save the data in any random order:
hashId - rangeKey
1 - 1
1 - 2
1 - 3
2 - 1
2 - 2
2 - 3
3 - 1
3 - 2
3 - 3
I think this is not possible, because of the way DynamoDB works: it hashes the partition/hash key and stores the item in the corresponding partition, so there is no global ordering across hash keys. You can, however, get sorted data within a partition, ordered by the range key for a given partition key.
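For illustration, querying a single partition does return its items ordered by the range key. A minimal PartiQL sketch (the table name "mytable" is my assumption; a scan across partitions gives no such guarantee):

-- items for one hash key come back sorted by the range key
SELECT * FROM "mytable" WHERE hashId = 1 ORDER BY rangeKey ASC;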
I have a set of data in the following format:
Items Shipped | Month
A               1
B               1
C               1
D               2
E               2
F               3
G               3
H               3
I would like to show the count of items shipped each month using a calculated field in Tableau.
Item_Count | Month
3            1
2            2
3            3
Any Suggestions?
You should probably have a look at Tableau's page for their basic tutorials:
https://www.tableau.com/learn/training
Drag the [Month] pill to Rows (if it's an actual date, change it to discrete month; otherwise leave it as it is).
Drag the [Items Shipped] pill to Columns, click on it, and change the aggregation to COUNT or COUNTD, depending on whether you want the total count or only the distinct elements.
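For reference, the same aggregation expressed in SQL (a sketch; the table name shipments and the column names item and month are my assumptions, not from the question):

-- count of items shipped per month
SELECT month, COUNT(item) AS item_count
FROM shipments
GROUP BY month
ORDER BY month;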
I got a table like this
a b c
-- -- --
1 1 10
2 1 0
3 1 0
4 4 20
5 4 0
6 4 0
The b column 'points' to a, a bit as if a were the parent.
c was computed. Now I need to propagate each parent's c value to its children.
The result would be
a b c
-- -- --
1 1 10
2 1 10
3 1 10
4 4 20
5 4 20
6 4 20
I can't make an UPDATE/SELECT combo that works.
So far I've got a SELECT that produces the c column I'd like to get:
select t1.c from t t1 join t t2 on t1.a=t2.b;
c
----------
10
10
10
20
20
20
But I don't know how to stuff that into c.
Thanks in advance
Cheers, phi
You have to look up the value with a correlated subquery:
UPDATE t
SET c = (SELECT c
         FROM t AS parent
         WHERE parent.a = t.b)
WHERE c = 0;
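For example, a minimal script to try it (SQLite; the data comes straight from the question):

-- build the sample table
CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER);
INSERT INTO t VALUES (1,1,10), (2,1,0), (3,1,0), (4,4,20), (5,4,0), (6,4,0);

-- propagate each parent's c onto its children
UPDATE t
SET c = (SELECT c FROM t AS parent WHERE parent.a = t.b)
WHERE c = 0;

-- each child row now carries its parent's c value
SELECT * FROM t ORDER BY a;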
I finally found a way to copy my initial 'temp' SELECT JOIN back into table t. Something like this:
create temp table u as select t1.c from t t1 join t t2 on t1.a=t2.b;
update t set c=(select * from u where rowid=t.rowid);
I'd like to know how the two solutions compare performance-wise: yours is one UPDATE with a correlated SELECT, while mine is two queries, each with one correlated lookup. Mine seems heavier and less aesthetic, yet regarding performance I wonder.
On the algorithmic side, yours takes care not to copy the parent data, only the child data; mine copies the parent onto itself, which is a no-op, yet it still consumes some cycles :)
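One way to find out (a sketch, not a benchmark): SQLite's EXPLAIN QUERY PLAN shows how each statement would be executed, and an index on t(a) should turn the correlated lookup into an index search rather than a full scan per row:

CREATE INDEX idx_t_a ON t(a);

EXPLAIN QUERY PLAN
UPDATE t
SET c = (SELECT c FROM t AS parent WHERE parent.a = t.b)
WHERE c = 0;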
Cheers, Phi
I have a data frame that is structured identically to a table in my MySQL DB. I want to update the rows of the MySQL table where the primary keys of my data frame and that table match.
For example
DF 1
PK Count Temperature
3 1 111
4 2 100
5 3 190
6 4 200
MySQL Table
PK Count Temperature
1 1 100
2 10 11
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
Notice that I can't simply overwrite the table because I have rows in my DB that do not exist in my data frame.
After the update, what I would like to have is the following table.
PK Count Temperature
1 1 100
2 10 11
3 1 111
4 2 100
5 3 190
6 4 200
7 0 0
8 0 0
Thoughts?
So, I haven't been able to update a row directly. However, what I have done is create a holding table in my DB that I can append to from R. I then created a trigger in my DB to update the desired rows in my target table. From there, I created another trigger to empty my holding table.
This is sort of what Dean was suggesting, but a little different.
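A rough sketch of what that trigger setup could look like in MySQL (the table names holding and target and the column names are my assumptions based on the example above; the "empty the holding table" step is left out here):

-- staging table with the same layout as the target
CREATE TABLE holding LIKE target;

DELIMITER //
CREATE TRIGGER holding_after_insert
AFTER INSERT ON holding
FOR EACH ROW
BEGIN
    -- push the staged values onto the matching target row
    UPDATE target
    SET Count = NEW.Count,
        Temperature = NEW.Temperature
    WHERE PK = NEW.PK;
END//
DELIMITER ;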
Here I am providing an alternative approach to writing the frame to a temp table and performing the update in the main table, and to the holding table / append / trigger method above.
I believe the following approach is easy and effective, as it updates records directly in the target table.
#install.packages("RMySQL")
#install.packages("DBI")
library(DBI)
library(RMySQL)
#Establish the connection
mydb = dbConnect(MySQL(),
user='your user',
password='your password',
dbname='your DB name',
host='Host Name')
#Eusuring the connection working by listing table
dbListTables(mydb)
#Applying update statement directly
rs = dbSendQuery(mydb, "UPDATE DB_NAME.TABLE_1
SET FIELD_1 = 0
WHERE ID = 5")
#Verifying the result
rs = dbSendQuery(mydb, "SELECT * FROM DB_NAME.TABLE_1
WHERE ID = 5")
data = fetch(rs, n=-1)
print(data)
I have tried the above code in RStudio version 1.1.453 with R 3.5.0 (64-bit).
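Another option worth noting (my own suggestion, not from the answers above): if the data frame can be appended to a staging table, MySQL can apply the whole update in a single statement with INSERT ... ON DUPLICATE KEY UPDATE (the table names staging and target are assumptions):

-- upsert every staged row into the target table, matching on the primary key
INSERT INTO target (PK, Count, Temperature)
SELECT PK, Count, Temperature FROM staging
ON DUPLICATE KEY UPDATE
    Count = VALUES(Count),
    Temperature = VALUES(Temperature);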
I am quite new to SQLite and have a dilemma about database design. Suppose we have a number of matrices (of various sizes) that are going to be stored in a table. We can further assume that no matrix is sparse.
Let's say we have:
A = [[1, 4, 5],
[8, 1, 4],
[1, 1, 3]]
B = [['what', 'a', 'good', 'day'],
['for', 'a', 'walk', 'outside']]
C = [['AAA', 'BBB', 'CCC', 'DDD', 'EEE'],
['FFF', 'GGG', 'HHH', 'III', 'JJJ'],
['KKK', 'LLL', 'MMM', 'NNN', 'OOO']]
And D, which is [N×M].
When we create the table, we do not know all the sizes the matrices will have, and I do not think it would be nice to alter the table afterwards. What would be a recommended way to store the matrices so that we can efficiently get them back? I wish to query out a matrix row by row.
I am thinking of flattening each matrix into (row, col, val) entries that end up in a table like this:
CREATE TABLE mat(id INT,
                 row INT,
                 col INT,
                 val TEXT)
How can I get a matrix back row by row with a query in SQLite, so that the output looks like this for matrix A?
[1, 4, 5]
[8, 1, 4]
[1, 1, 3]
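For what it's worth, one possible query shape (a sketch against the mat table above): group_concat() can stitch each row back together. Note that SQLite does not formally guarantee the concatenation order, though ordering the inner subquery is the usual workaround:

-- one output line per matrix row, for the matrix with id = 1
SELECT group_concat(val, ', ') AS row_values
FROM (SELECT row, col, val FROM mat WHERE id = 1 ORDER BY row, col)
GROUP BY row
ORDER BY row;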
Ideas? Or could someone kindly refer me to any similar problems?
---------------------- UPDATE ----------------------
Okay, my question was not clear enough. The above is probably how I intend to arrange the data in my database; I hope you can help me find a way to organize it.
Suppose we have some sets of data:
Compilation User BogoMips
1 Andrew 1.04
1 Klaus 1.78
1 James 1.99
1 David 2.09
. . .
. . .
1 Alex 4.71
Compilation Time Temperature Colour
2 10:20 10 Blue
2 10:28 21 Green
2 10:42 25 Red
. . . .
. . . .
2 18:16 16 Green
Compilation Colour Distance
3 Blue 4
3 Green 9
. . .
. . .
3 Yellow 12
...And there will be many more sets of data with different numbers of columns and new headers. Some header names will recur in other sets. In advance, we have no idea what kinds of sets need to be stored. Every set has a common column, 'compilation', that binds them together.
How would you structure the data in a database?
I find it hard to believe that creating a new table for each set is a good solution. Or is it?
My idea is to have two tables, headers and data.
CREATE TABLE headers (id INT,
                      header TEXT)

CREATE TABLE data (id INT,
                   compilation INT,
                   fk_header_id INT REFERENCES headers,
                   row INT,
                   col INT,
                   value TEXT)
So the populated tables look like this,
SELECT * FROM headers;
id header
------------
1 User
2 BogoMips
3 Time
4 Temperature
5 Colour
6 Distance
SELECT * FROM data;
id compilation fk_header_id row col value
----------------------------------------------------
1 1 1 1 1 Andrew
2 1 2 1 2 1.04
3 1 1 2 1 Klaus
4 1 2 2 2 1.78
. . . . . .
. 2 3 1 1 10:20
. 2 4 1 2 10
. 2 5 1 3 Blue
. 2 3 2 1 10:28
. 2 4 2 2 21
. 2 5 2 3 Green
. . . . . .
. 3 5 1 1 Blue
. 3 6 1 2 4
. 3 5 2 1 Green
. 3 6 2 2 9
. . . . . .
and so on
The problem is that I don't know how to query the data sets back out in SQLite. Does anyone (Tony) have an idea?
You'd need a pivot / crosstab query (or its join equivalent) to get the data out.
e.g.
select c1.value as col1, c2.value as col2, c3.value as col3
from data c1
inner join data c2 on c2.col = 2 and c2.compilation = c1.compilation and c2.row = c1.row
inner join data c3 on c3.col = 3 and c3.compilation = c1.compilation and c3.row = c1.row
where c1.compilation = 1 and c1.col = 1
order by c1.row
As you can see, this is less than funny. In particular, with the above you'd have to know the number of columns in advance. Crosstab or pivot would relieve you of that in terms of the SQL, but you'd still have to mess about to read the data in from the query result.
I haven't seen anything in your question that indicates a need to extract a row or a column from a matrix, never mind a single cell, from the db.
My table would start as simple as
Compilation, Description, Matrix
Matrix would be some sort of serialisation of a matrix object: binary, XML, or even some sort of string, e.g. 1,2,3|4,5,6|7,8,9
If this was all I needed to store, I'd be looking at a NoSQL variant.
I currently have 2 tables as follows within my database:
Table: SampleProducts
SampleProductsId (PK) Name
1 A
2 B
3 C
4 D
5 E
6 F
7 G
Table: SampleProductsBoms
SampleProductsBomId (PK) ParentId (FK) ChildId (FK) Quantity
1 1 2 3
2 2 3 4
3 4 6 2
ParentId and ChildId both reference SampleProductsId
In English so I can ensure that we are all on the same page:
Product A is made up of 3 of B
Product B is made up of 4 of C
Product D is made up of 2 of F
I would like to create a stored procedure / LINQ statement or something that I can use in my MVC 3 C# web application and that will give me the following table structure / object to use...
Example:
Recursive Query to find the components of B
ProductId Name Quantity
3 C 4
6 F 2
This could go quite deep, so I really do need recursion!
A CTE is helpful for the recursion required in your problem statement; check the link:
Common Table Expression
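A sketch of what that recursive CTE could look like in T-SQL (assuming the schema above; each level multiplies quantities down the tree to find every component beneath a product):

WITH Components AS (
    -- direct children of the product we start from
    SELECT bom.ChildId, bom.Quantity
    FROM SampleProductsBoms bom
    INNER JOIN SampleProducts p ON p.SampleProductsId = bom.ParentId
    WHERE p.Name = 'B'
    UNION ALL
    -- children of children, multiplying quantities as we descend
    SELECT bom.ChildId, c.Quantity * bom.Quantity
    FROM SampleProductsBoms bom
    INNER JOIN Components c ON bom.ParentId = c.ChildId
)
SELECT s.SampleProductsId AS ProductId, s.Name, c.Quantity
FROM Components c
INNER JOIN SampleProducts s ON s.SampleProductsId = c.ChildId;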
Or I think the following query may also solve your purpose (for a single level only):
select components.SampleProductsId as ProductId, components.Name as Name, bom.Quantity
from SampleProductsBoms bom
inner join SampleProducts products
    on products.SampleProductsId = bom.ParentId
inner join SampleProducts components
    on components.SampleProductsId = bom.ChildId
where products.Name = 'B'