Let's assume we have a primary table with the following content:
key value
--------------
a andreas
b bernd
c chris
e ernst
f frank
g gerold
and we create a secondary database with a callback that just counts the letters in the primary data, we'll get:
5 b
5 c
5 e
5 f
6 g
7 a
Now, when I delete the entry for "ernst", the secondary entry "5 e" will be deleted too. How is it determined which secondary entry must be deleted? Does BDB execute the callback again, followed by a table scan on the calculated value? "5" can be jumped to directly, but to find "5 e" a cursor would be needed, right?
The association between secondary and primary databases in Berkeley DB is by a unique identifier: in key->value stores, the secondary's data == the primary key.
Since that identifier is unique, there is no ambiguity about which secondary entry belongs to which primary record.
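This mechanism can be modeled in plain Python (a toy sketch with made-up names, not the real Berkeley DB API): on a delete, the callback is re-run on the record being removed to recompute its secondary key, and the exact secondary entry is then located by the (secondary key, primary key) pair, so no scan over all the "5" duplicates is needed.

```python
# Toy model of secondary-index maintenance -- NOT the real BDB API.
primary = {"a": "andreas", "b": "bernd", "c": "chris",
           "e": "ernst", "f": "frank", "g": "gerold"}

def callback(key, value):
    """Secondary-key callback: count the letters of the primary data."""
    return len(value)

# secondary maps secondary key -> set of primary keys (the duplicates)
secondary = {}
for k, v in primary.items():
    secondary.setdefault(callback(k, v), set()).add(k)

def delete(pkey):
    value = primary.pop(pkey)
    skey = callback(pkey, value)   # re-run the callback: len("ernst") == 5
    secondary[skey].discard(pkey)  # jump to "5", then to the "e" duplicate

delete("e")                        # removes "5 e" without a table scan
```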
Let's say we have a loop that enters Associations to a Dictionary in a clear order:
| d |
d := Dictionary new: 10.
1 to: 10 do: [ :i |
d add: i -> (i + 9 printStringBase: 20)
].
d
When I evaluate this code I get a Dictionary in a "twisted" order:
9 -> I
5 -> E
1 -> A
10 -> J
6 -> F
2 -> B
7 -> G
3 -> C
8 -> H
4 -> D
Each time a Dictionary with the same entry data is created it has the same order, so I assume it is a feature, not a bug?
I use Pharo v9.0.21.
A Dictionary is not an ordered collection. Instead it is a keyed collection, keeping key-value pairs.
There is an OrderPreservingDictionary for you to use: https://github.com/pharo-contributions/OrderPreservingDictionary
In addition to this other answer, it is worth explaining the apparent disorder shown in the question.
Firstly observe that Dictionary new: 10 will create a new instance of Dictionary with capacity for a prime number p of associations greater than 10. Say 11, 13, 17, whatever.
Secondly, for every association added, the dictionary will compute the hash value of the key and deduce the location from its remainder modulo p.
Since all keys occurring in the example are instances of SmallInteger, their hashes will be themselves(*). And since these are smaller than p, they will equal the modulo and hence be stored in the slots derived from their values in some implementation-dependent way.
Finally, the printing method is free to enumerate the associations in any order.
(*) While this is true in some dialects, I've checked in Pharo and this is not true, 3 hash is not 3 etc. which explains the "twisting" in the case of Pharo.
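The effect can be imitated in a few lines of Python (a toy open-addressing table, not Pharo's actual implementation; the hash function here is made up purely to scramble small integers):

```python
# Toy hashed dictionary: each key lands in slot hash(key) mod p, and
# enumeration walks the slots, not the insertion history.
p = 13                      # some prime capacity >= the requested size
slots = [None] * p

def toy_hash(k):            # stand-in for a scrambled SmallInteger hash
    return (k * 7) % 31

for i in range(1, 11):
    slot = toy_hash(i) % p
    while slots[slot] is not None:   # linear probing on collision
        slot = (slot + 1) % p
    slots[slot] = i

order = [k for k in slots if k is not None]
print(order)                # enumeration order differs from 1..10
```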
For completeness of the answer: there is such a thing as an ordered dictionary.
In Smalltalk/X, #OrderedDictionary is defined as:
Dictionary subclass:#OrderedDictionary
instanceVariableNames:'order'
classVariableNames:''
poolDictionaries:''
category:'Collections-Sequenceable'
"/ I am a subclass of Dictionary whose elements (associations) are ordered in a
"/ similar fashion to OrderedCollection.
"/ That is, while being filled via #at:put: messages (or similar Dictionary protocol),
"/ the order in which associations are added, is remembered and accessible via the #atIndex:
"/ or #order messages.
"/ Therefore, this combines fast access via hashing with a defined order when enumerating.
"/
"/ [instance variables:]
"/ order <OrderedCollection> Ordered collection of keys reflecting the order of
"/ associations in the dictionary.
"/
"/ [complexity:]
"/ access by index: O(1)
"/ access by key: O(1)
"/ searching: O(n)
"/ insertion: mostly O(1)
"/ removal: mostly O(N) (because order will have O(n) behavior)
"/
"/ [author:]
"/ Ifor Wyn Williams
"/ Changed by: exept
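The same design, hashing for lookup plus a separate order list for enumeration, can be sketched in Python (the names at_put/at_index merely mimic the Smalltalk selectors; this is an illustration, not Smalltalk/X's code):

```python
class OrderedDictSketch:
    """Hash-based lookup plus an 'order' list, like Smalltalk/X's OrderedDictionary."""
    def __init__(self):
        self._data = {}    # O(1) access by key
        self._order = []   # insertion order of keys, like the 'order' ivar

    def at_put(self, key, value):
        if key not in self._data:
            self._order.append(key)   # remember first-insertion order
        self._data[key] = value

    def at(self, key):
        return self._data[key]

    def at_index(self, i):            # 1-based, like Smalltalk #atIndex:
        key = self._order[i - 1]
        return key, self._data[key]

    def associations(self):
        return [(k, self._data[k]) for k in self._order]
```

Removal would also have to delete the key from the order list, which is the O(n) step noted in the class comment above.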
I have 2 tables which have a one-to-many relationship. The child table has 4 columns (say A, B, C, D):
A - Table column
B - Foreign key column
C - Foreign key column
D - Table column
The requirement is to sort the data by A, B, then by C when displaying records.
I tried with a query script, but it is not working (the relation data source doesn't load).
I tried specifying the sort field on the Relations tab (sorting works only for the table columns; the FK columns are not displayed).
Is there a way to do this?
I am struggling to optimise this past Amazon interview question involving a DAG.
This is what I tried (the code is long, so I would rather explain it):
Basically, since the graph is a DAG and the relation is transitive, a simple traversal from every node should be enough.
So for every node I traverse, by transitivity, through all the reachable vertices to get the end vertices, and then compare these end vertices to find the most noisy person.
In this second step I have actually found one such (maybe the only) most noisy person for all the vertices of the traversal. So I memoize all of this in a mapping and mark the vertices of the traversal as visited.
So I am basically maintaining an adjacency list for the graph, A visited/non visited mapping and a mapping for the output (the most noisy person for every vertex).
In this way by the time I get a query I would not have to recompute anything (in case of duplicate queries).
The above code works, but since I cannot test it with test cases it may or may not pass the time limit. Is there a faster solution (maybe using DP)? I feel I am not exploiting the transitivity and anti-symmetry conditions enough.
Obviously I am not checking the cases where a person is less wealthy than the current person. But for instance, if I have pairs like (1,2)(1,3)(1,4), etc., and maybe (2,6)(2,7)(7,8), etc., then to find a more wealthy person than 1 I have to traverse through every neighbor of 1, and then the neighbors of every neighbor too, I guess. This is done only once, as I store the results.
Question Part 1
Question Part 2
Edit (added question text):
Rounaq is graduating this year. And he is going to be rich. Very rich. So rich that he has decided to have
a structured way to measure his richness. Hence he goes around town asking people about their wealth,
and notes down that information.
Rounaq notes down the pair (Xi; Yi) if person Xi has more wealth than person Yi. He also notes down
the degree of quietness, Ki, of each person. Rounaq believes that noisy persons are a nuisance. Hence, for
each of his friends Ai, he wants to determine the most noisy (least quiet) person among those who have
wealth more than Ai.
Note that "has more wealth than" is a transitive and anti-symmetric relation. Hence if a has more wealth
than b, and b has more wealth than c then a has more wealth than c. Moreover, if a has more wealth than
b, then b cannot have more wealth than a.
Your task in this problem is to help Rounaq determine the most noisy person among the people having
more wealth than each of his friends Ai, given the information Rounaq has collected from the town.
Input
First line contains T: The number of test cases
Each Test case has the following format:
N
K1 K2 K3 K4 . . . KN
M
X1 Y1
X2 Y2
. . .
. . .
XM YM
Q
A1
A2
. . .
. . .
AQ
N: The number of people in town
M: Number of pairs for which Rounaq has been able to obtain the wealth
information
Q: Number of Rounaq’s Friends
Ki: Degree of quietness of the person i
Xi; Yi: The pairs Rounaq has noted down (Pair of distinct values)
Ai: Rounaq’s ith friend
For each of Rounaq’s friends print a single integer - the degree of quietness of the most noisy person as required or -1 if there is no wealthier person for that friend.
Perform a topological sort on the pairs X, Y. Then iterate from the most wealthy down to the least wealthy, and store the most noisy person seen so far:
less wealthy -> most wealthy
<- person with lowest K so far <-
Then for each query, binary search the first person with greater wealth than the friend. The value we stored is the most noisy person with greater wealth than the friend.
UPDATE
It seems that we cannot rely on the data allowing for a complete topological sort. In this case, traverse sections of the graph that lead from known greatest to least wealth, storing for each person visited the most noisy person seen so far. The example you provided might look something like:
1 <- 3 <- 5
^         |
|         v
2 ------> 4
Traversals:
1 <- 3 <- 5
1 <- 2
4 <- 2
4 <- 5
(Input)
2 1
2 4
3 1
5 3
5 4
8 2 16 26 16
(Queries and solution)
3 4 3 5 5
16 2 16 -1 -1
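The traversal described above can be written as one topological pass over the whole graph (a sketch; persons are numbered 1..N, and a pair (x, y) means x is wealthier than y):

```python
from collections import deque

def most_noisy(n, quietness, pairs, queries):
    """For each queried friend, return the quietness of the least quiet
    (most noisy) strictly wealthier person, or -1 if there is none.
    quietness[i-1] is K_i; pairs are (x, y) with x wealthier than y."""
    INF = float("inf")
    adj = [[] for _ in range(n + 1)]
    indeg = [0] * (n + 1)
    for x, y in pairs:
        adj[x].append(y)
        indeg[y] += 1

    # best[v] = minimum quietness among all people wealthier than v
    best = [INF] * (n + 1)
    queue = deque(v for v in range(1, n + 1) if indeg[v] == 0)
    while queue:                      # Kahn's topological order
        x = queue.popleft()
        for y in adj[x]:
            # everyone wealthier than x is wealthier than y (transitivity),
            # and x itself is wealthier than y
            best[y] = min(best[y], best[x], quietness[x - 1])
            indeg[y] -= 1
            if indeg[y] == 0:
                queue.append(y)

    return [best[a] if best[a] != INF else -1 for a in queries]

print(most_noisy(5, [8, 2, 16, 26, 16],
                 [(2, 1), (2, 4), (3, 1), (5, 3), (5, 4)],
                 [3, 4, 3, 5, 5]))    # [16, 2, 16, -1, -1]
```

After the O(N + M) pass, every query is answered in O(1), with no memoized per-query traversals needed.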
The table details are:
Likes ( ID1, ID2 )
What I think you're trying to do is this...
A likes B.
B likes C.
Therefore A likes C.
Hard coding "A likes C" into the table is fragile. Consider what happens if B stops liking C. Do you delete "A likes C"? How do you know it was transitive? What if A said they like C? You can get around this with a column that says "A likes C" was transitive.
What you have here is a directed graph. And what you're asking is every time you add or delete an edge to check whether this changes the connection between ALL nodes. That is an extraordinarily expensive thing to do. Consider this.
A -> B -> C
X -> Y -> Z
So it's inferred that A likes C and X likes Z.
What if C likes X? Now...
A likes X
A likes Y
A likes Z
B likes X
B likes Y
B likes Z
C likes Y
C likes Z
Now what if B unlikes C?
A unlikes C
A unlikes X
A unlikes Y
A unlikes Z
B unlikes X
B unlikes Y
B unlikes Z
And that's a simple linear case with just six nodes.
Hard coding every possible relationship in a graph is a bad idea. And SQL databases aren't very good at traversing graphs.
Instead you want to store the graph and walk it at query time. Both SQLite (since 3.8.3) and PostgreSQL support this via recursive WITH queries; a dedicated graph database is another option.
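For example, SQLite's recursive common table expressions can walk the Likes graph at query time instead of materializing every transitive row (toy data; the "who does A transitively like" query is just an illustration):

```python
import sqlite3

# Schema from the question: Likes(ID1, ID2) means ID1 likes ID2.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Likes (ID1 TEXT, ID2 TEXT)")
conn.executemany("INSERT INTO Likes VALUES (?, ?)",
                 [("A", "B"), ("B", "C"), ("C", "X"),
                  ("X", "Y"), ("Y", "Z")])

# Follow edges transitively from 'A'; UNION deduplicates visited nodes.
rows = conn.execute("""
    WITH RECURSIVE reachable(person) AS (
        SELECT ID2 FROM Likes WHERE ID1 = 'A'
        UNION
        SELECT Likes.ID2
        FROM Likes JOIN reachable ON Likes.ID1 = reachable.person
    )
    SELECT person FROM reachable
""").fetchall()
print(sorted(r[0] for r in rows))  # ['B', 'C', 'X', 'Y', 'Z']
```

Deleting the single base edge (B, C) now automatically removes C, X, Y, Z from A's transitive closure on the next query, with no cascade of hard-coded rows to maintain.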
I have this school assignment about Vigenere code.
I've got 2 keys: AB and XYZ. A text is encrypted twice with these keys.
The questions are:
How to make 1 key out of those 2?
How to make 1 key when there are 3 keys?
Choose the length of the combined key as the least common multiple of the key lengths.
Repeat each key until it fills the combined key
Add all the repeated keys.
For example with AB and XYZ assuming A=0:
The lengths are 2 and 3, the common multiple is 6.
AB AB AB and XYZ XYZ
A+X, B+Y, A+Z, B+X, A+Y, B+Z = XZZYYA
This algorithm works with any number of keys.
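A small Python sketch of these three steps (assuming the usual A=0, ..., Z=25 encoding):

```python
from math import lcm  # Python 3.9+: lcm accepts multiple arguments

def combine_keys(*keys):
    """Combine several Vigenere keys into one equivalent key."""
    n = lcm(*(len(k) for k in keys))          # length of the combined key
    combined = []
    for i in range(n):
        # add the i-th letter of each repeated key, modulo 26
        total = sum(ord(k[i % len(k)]) - ord("A") for k in keys)
        combined.append(chr(total % 26 + ord("A")))
    return "".join(combined)

print(combine_keys("AB", "XYZ"))  # XZZYYA
```

Encrypting once with the combined key gives the same ciphertext as encrypting successively with each original key.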
CodesInChaos's answer is great, but let's add some math:
|C| = lcm(|A|, |B|)
# lcm: least common multiple.
Also notice that you can compute the value of the combined key:
Let's define the keys as:
A=(a_0,a_1,…,a_i)
B=(b_0,b_1,…,b_j)
Then the value of the combined key is:
C = { c_i = a_(i%|A|) + b_(i%|B|) | 0 ≤ i < lcm(|A|, |B|) }
And this generalizes to any number of keys:
C = { c_i = a_(i%|A|) + b_(i%|B|) + ... + z_(i%|Z|) | 0 ≤ i < lcm(|A|, |B|, ..., |Z|) }