Cassandra duplicate keys in map - collections

Is Cassandra's map implementation a Map or a MultiMap? In other words, does Cassandra allow duplicate keys in the map type? Based on this example, if I called
UPDATE cycling.cyclist_teams SET teams = teams + {2009 : 'First team'} WHERE id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;
and then
UPDATE cycling.cyclist_teams SET teams = teams + {2009 : 'Second team'} WHERE id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;
then database would look like this: A)
teams[2009]: 'Second team'
or this: B)
teams[2009]: 'First team'
teams[2009]: 'Second team'

The documentation is very clear on this:
A map relates one item to another with a key-value pair. For each key, only one value may exist, and duplicates cannot be stored. Both the key and the value are designated with a data type.
That is, the value will be overwritten, just as with a Java HashMap. The result will be {2009 : 'Second team'}, i.e. option A.
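As a quick check (a minimal sketch reusing the table and row id from the example above), selecting the map after the two updates shows only the latest value for the key:
SELECT teams FROM cycling.cyclist_teams WHERE id = 5b6962dd-3f90-4c93-8f61-eabfa4a803e2;
-- expected result: {2009: 'Second team'}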

How to get all the elements of an association?

I have created a CDS view as follows:
define view YGAC_I_REQUEST_ROLE
with parameters
pm_req_id : grfn_guid,
@Consumption.defaultValue: 'ROL'
pm_item_type : grac_prov_item_type,
@Consumption.defaultValue: 'AP'
pm_approval : grac_approval_status
as select from YGAC_I_REQ_PROVISION_ITEM as provitem
association [1..1] to YGAC_I_ROLE as _Role on _Role.RoleId = provitem.ProvisionItemId
association [1..*] to YGAC_I_ROLE_RS as _Relation on _Relation.RoleId1 = provitem.ProvisionItemId
{
key ReqId,
key ReqIdItem,
Connector,
ProvisionItemId,
ActionType,
ValidFrom,
ValidTo,
_Role.RoleId,
_Role.RoleName,
_Role.RoleType,
_Role,
_Relation
}
where
ReqId = $parameters.pm_req_id
and ProvisionItemType = $parameters.pm_item_type
and ApprovalStatus = $parameters.pm_approval
Then I have consumed it in ABAP:
SELECT
FROM ygac_i_request_role( pm_req_id = @lv_test,
pm_item_type = @lv_item_type,
pm_approval = @lv_approval
)
FIELDS reqid,
connector,
provisionitemid
INTO TABLE @DATA(lt_result).
How can I get the list of _Relation entries according to the selection above?
This is generally not possible in the way it is in plain ABAP SQL queries:
SELECT m~*, kt~*
FROM mara AS m
JOIN makt AS kt
...
This contradicts the whole idea of CDS associations, because they were created to join on demand and to reduce redundant calls to the database. Fetching all fields at once negates the whole idea of the "lazy join".
However, there is another syntax in the FROM clause, enabled by path expressions, which allows querying the underlying associations both fully and by individual elements. Here is how:
SELECT *
FROM ygac_i_request_role( pm_req_id = @lv_test )
\_Role AS role
INTO TABLE @DATA(lt_result).
This fetches all the fields of the _Role association into the internal table.
Note: it is not possible to fetch all the published associations of the current view simultaneously; only one path per query is possible.
A possible workaround is to use a JOIN:
SELECT *
FROM ygac_i_request_role AS main
JOIN ygac_i_request_role
\_Role AS role
ON main~ProvisionItemId = role~RoleId
JOIN ygac_i_request_role
\_Relation AS relation
ON main~ProvisionItemId = relation~RoleId1
INTO TABLE @DATA(lt_table).
This creates a deep-structured type with a dedicated substructure for every joined association.
If you are not comfortable with such a structure for your task, lt_table should be declared statically to lay out all the fields in a flat way:
TYPES BEGIN OF ty_table.
INCLUDE TYPE ygac_i_request_role.
INCLUDE TYPE ygac_i_role.
INCLUDE TYPE ygac_i_role_rs.
TYPES END OF ty_table.
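A sketch of how the flat type might then be filled, assuming the same join as above (view parameters omitted for brevity, as in the snippet above) and that the field names across the three INCLUDEs do not clash; otherwise alias the clashing columns with AS:
DATA lt_table TYPE STANDARD TABLE OF ty_table WITH EMPTY KEY.
SELECT main~reqid, main~connector, main~provisionitemid,
role~rolename, relation~roleid1
FROM ygac_i_request_role AS main
JOIN ygac_i_request_role
\_Role AS role
ON main~ProvisionItemId = role~RoleId
JOIN ygac_i_request_role
\_Relation AS relation
ON main~ProvisionItemId = relation~RoleId1
INTO CORRESPONDING FIELDS OF TABLE @lt_table.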

How to set a table field as primary key at design time in FireDAC?

I want to make a not-null, non-auto-increment integer field my primary key, but I am unable to do so at design time with FireDAC. There is no property of the TIntegerField that allows me to make it primary. There is also no property of the TFDTable where I can choose the primary field out of all the available fields.
I get that it may be possible to do it in code and combine that with my design-time table, but that defeats the whole purpose of doing it all at design time.
Earlier I did have an auto-increment ID in my table, and this was automatically set as the primary key. I have now deleted this field because I need another integer to be the primary key.
Also, I can't find information about the primary key and TFDTable on the official Embarcadero website.
It's best to experiment with this using a new table in your database and a minimal new Delphi project.
Update: See below for database DDL and Form's DFM.
You need to have your ID field marked as a primary key in your database.
After you've added an FDConnection and an FDTable to your project, select the FDTable's TableName from the drop-down list. Then, click in the FDTable's IndexName property and you should find an automatically named index on the table's primary key. Just select it so that the IndexName takes its value. That's all there is to it.
For the table created using the DDL below, the IndexName property of the FDTable appears as sqlite_autoindex_test_1.
If you then double-click the FDTable and use the pop-up Fields editor to set up persistent fields on the FDTable, and then select your ID field, you should find that its ProviderFlags include pfInKey, which is what tells FireDAC to use the field as the table's primary key when generating the SQL for updates, inserts, etc.
You should also find that the ID field's Required property is automatically set to True, btw.
If you want to supply the ID field's value yourself when adding a new record, use the table's OnNewRecord event to generate the ID value and assign it to the field.
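A minimal sketch of such a handler (GetNextID is a hypothetical helper that produces the next free value; wire the handler up to FDTable1's OnNewRecord event in the Object Inspector):
procedure TForm2.FDTable1NewRecord(DataSet: TDataSet);
begin
  // GetNextID is an assumed helper, e.g. a counter or a MAX(id) + 1 lookup
  DataSet.FieldByName('id').AsInteger := GetNextID;
end;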
DDL for the test SQLite database
create table test(
id int not null primary key,
AName nchar(12)
)
Project DFM extract
object Form2: TForm2
object DBGrid1: TDBGrid
DataSource = DataSource1
end
object DBNavigator1: TDBNavigator
DataSource = DataSource1
end
object FDConnection1: TFDConnection
Params.Strings = (
'Database=D:\aaad7\sqlite\MADB1.sqlite'
'DriverID=SQLite')
Connected = True
LoginPrompt = False
end
object DataSource1: TDataSource
DataSet = FDTable1
end
object FDTable1: TFDTable
IndexName = 'sqlite_autoindex_test_1'
Connection = FDConnection1
UpdateOptions.UpdateTableName = 'test'
TableName = 'test'
object FDTable1id: TIntegerField
FieldName = 'id'
Origin = 'id'
ProviderFlags = [pfInUpdate, pfInWhere, pfInKey]
Required = True
end
object FDTable1AName: TWideStringField
FieldName = 'AName'
Origin = 'AName'
FixedChar = True
Size = 12
end
end
end

Dictionaries in ABAP. How?

How to translate this snippet of executable pseudo code into ABAP?
phone_numbers = {
'hans': '++498912345',
'peter': '++492169837',
'alice': '++6720915',
}
# access
print (phone_numbers['hans'])
# add
phone_numbers['bernd']='++3912345'
# update
phone_numbers['bernd']='++123456'
if 'alice' in phone_numbers:
print('Yes, alice is known')
# all entries
for name, number in phone_numbers.items():
print(name, number)
Modern ABAP up to release 7.52 may be used; fewer chars, more upvotes :-)
P.S. BTW, up to now no one has added ABAP to PLEAC (Programming Language Examples Alike Cookbook).
Well, how about the following solution?
REPORT ZZZ.
TYPES: BEGIN OF t_phone_number,
name TYPE char40,
number TYPE char40,
END OF t_phone_number.
DATA: gt_phone_number TYPE HASHED TABLE OF t_phone_number WITH UNIQUE KEY name.
START-OF-SELECTION.
gt_phone_number = VALUE #(
( name = 'hans' number = '++498912345' )
( name = 'peter' number = '++492169837' )
( name = 'alice' number = '++6720915' )
).
* access
WRITE / gt_phone_number[ name = 'hans' ]-number.
* add
gt_phone_number = VALUE #( BASE gt_phone_number ( name = 'bernd' number = '++3912345' ) ).
* update
MODIFY TABLE gt_phone_number FROM VALUE #( name = 'bernd' number = '++123456' ).
IF line_exists( gt_phone_number[ name = 'alice' ] ).
WRITE / 'Yes, Alice is known.'.
ENDIF.
* all entries
LOOP AT gt_phone_number ASSIGNING FIELD-SYMBOL(<g_phone_number>).
WRITE: /, <g_phone_number>-name, <g_phone_number>-number.
ENDLOOP.
@Jagger's answer is great, but @guettli asked for shorter syntax. So just for completeness, there is of course always the possibility to wrap this in a class:
CLASS dictionary DEFINITION.
PUBLIC SECTION.
TYPES:
BEGIN OF row_type,
key TYPE string,
data TYPE string,
END OF row_type.
TYPES hashed_map_type TYPE HASHED TABLE OF row_type WITH UNIQUE KEY key.
METHODS put
IMPORTING
key TYPE string
data TYPE string.
METHODS get
IMPORTING
key TYPE string
RETURNING
VALUE(result) TYPE string.
METHODS get_all
RETURNING
VALUE(result) TYPE hashed_map_type.
METHODS contains
IMPORTING
key TYPE string
RETURNING
VALUE(result) TYPE abap_bool.
PRIVATE SECTION.
DATA map TYPE hashed_map_type.
ENDCLASS.
CLASS dictionary IMPLEMENTATION.
METHOD put.
READ TABLE map REFERENCE INTO DATA(row) WITH TABLE KEY key = key.
IF sy-subrc = 0.
row->*-data = data.
ELSE.
INSERT VALUE #( key = key
data = data )
INTO TABLE map.
ENDIF.
ENDMETHOD.
METHOD get.
result = map[ key = key ]-data.
ENDMETHOD.
METHOD get_all.
INSERT LINES OF map INTO TABLE result.
ENDMETHOD.
METHOD contains.
result = xsdbool( line_exists( map[ key = key ] ) ).
ENDMETHOD.
ENDCLASS.
Leading to:
DATA(phone_numbers) = NEW dictionary( ).
phone_numbers->put( key = 'hans' data = '++498912345' ).
phone_numbers->put( key = 'peter' data = '++492169837' ).
phone_numbers->put( key = 'alice' data = '++6720915' ).
" access
WRITE phone_numbers->get( 'hans' ).
" add
phone_numbers->put( key = 'bernd' data = '++3912345' ).
" update
phone_numbers->put( key = 'bernd' data = '++123456' ).
IF phone_numbers->contains( 'alice' ).
WRITE 'Yes, alice is known'.
ENDIF.
" all entries
LOOP AT phone_numbers->get_all( ) INTO DATA(row).
WRITE: / row-key, row-data.
ENDLOOP.
People rarely do this in ABAP because internal tables are so versatile and powerful. From my personal point of view, I'd like to see people build more custom data structures. Implementation details like HASHED or SORTED (see the discussion in @Jagger's answer) are hidden away in a natural way when doing this.

What's the equivalent DynamoDB solution for this MySQL Query?

I'm familiar with MySQL and am starting to use Amazon DynamoDB for a new project.
Assume I have a MySQL table like this:
CREATE TABLE foo (
id CHAR(64) NOT NULL,
scheduledDelivery DATETIME NOT NULL,
-- ...other columns...
PRIMARY KEY(id),
INDEX schedIndex (scheduledDelivery)
);
Note the secondary index schedIndex, which is supposed to speed up the following query (which is executed periodically):
SELECT *
FROM foo
WHERE scheduledDelivery <= NOW()
ORDER BY scheduledDelivery ASC
LIMIT 100;
That is: Take the 100 oldest items that are due to be delivered.
With DynamoDB I can use the id column as primary partition key.
However, I don't understand how I can avoid full-table scans in DynamoDB. When adding a secondary index I must always specify a "partition key", but (in MySQL terms) I see these problems:
the scheduledDelivery column is not unique, so it can't be used as a partition key itself AFAIK
adding id as unique partition key and using scheduledDelivery as "sort key" sounds like an (id, scheduledDelivery) secondary index to me, which makes that index practically useless
I understand that MySQL and DynamoDB require different approaches, so what would be an appropriate solution in this case?
It's not possible to avoid a full table scan with this kind of query.
However, you may be able to disguise it as a Query operation, which would allow you to sort the results (not possible with a Scan).
You must first create a global secondary index (GSI). Let's name it scheduled_delivery-index.
We will specify our index's partition key to be an attribute named fixed_val, and our sort key to be scheduled_delivery.
fixed_val will contain any value you want, but it must always be that value, and you must know it from the client side. For the sake of this example, let's say that fixed_val will always be 1.
GSI keys do not have to be unique, so don't worry if there are duplicate scheduled_delivery values.
You would query the table like this:
var now = Date.now();
//...
{
TableName: "foo",
IndexName: "scheduled_delivery-index",
ExpressionAttributeNames: {
"#f": "fixed_value",
"#d": "scheduled_delivery"
},
ExpressionAttributeValues: {
":f": 1,
":d": now
},
KeyConditionExpression: "#f = :f and #d <= :d",
ScanIndexForward: true
}
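A sketch of how these parameters might be wired up with the AWS SDK for JavaScript (v2) DocumentClient; the variable names are assumptions. Note that Query also accepts a Limit parameter, which covers the LIMIT 100 part of the original SQL, and ScanIndexForward: true returns items in ascending sort-key order (oldest first):
var AWS = require("aws-sdk");
var docClient = new AWS.DynamoDB.DocumentClient();

var params = {
  TableName: "foo",
  IndexName: "scheduled_delivery-index",
  ExpressionAttributeNames: { "#f": "fixed_val", "#d": "scheduled_delivery" },
  ExpressionAttributeValues: { ":f": 1, ":d": Date.now() },
  KeyConditionExpression: "#f = :f and #d <= :d",
  ScanIndexForward: true, // ascending by scheduled_delivery
  Limit: 100              // take at most 100 due items
};

docClient.query(params, function (err, data) {
  if (err) {
    console.error(err);
  } else {
    console.log(data.Items); // the due items, oldest first
  }
});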

SQLite query to find primary keys

In SQLite I can run the following query to get a list of columns in a table:
PRAGMA table_info(myTable)
This gives me the columns but no information about what the primary keys may be. Additionally, I can run the following two queries for finding indexes and foreign keys:
PRAGMA index_list(myTable)
PRAGMA foreign_key_list(myTable)
But I cannot seem to figure out how to view the primary keys. Does anyone know how I can go about doing this?
Note: I also know that I can do:
select * from sqlite_master where type = 'table' and name ='myTable';
And it will give me the create table statement, which shows the primary keys. But I am looking for a way to do this without parsing the create statement.
PRAGMA table_info DOES give you a column named pk (the last one), indicating whether the column is part of the primary key (if so, its position within the key) or not (zero).
To clarify, from the documentation:
The "pk" column in the result set is zero for columns that are not
part of the primary key, and is the index of the column in the primary
key for columns that are part of the primary key.
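For example (a small illustration with an arbitrarily chosen table), for a composite key the pk values reflect each column's position within the key:
CREATE TABLE t(a INTEGER, b TEXT, c TEXT, PRIMARY KEY(b, a));
PRAGMA table_info(t);
-- a: pk = 2, b: pk = 1, c: pk = 0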
Hopefully this helps someone:
After some research and pain, the command that worked for me to find the primary key column name was:
SELECT l.name FROM pragma_table_info("Table_Name") as l WHERE l.pk = 1;
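If the table has a composite primary key, a slight variation lists all key columns in order (pragma_table_info as a table-valued function requires SQLite 3.16 or later):
SELECT name FROM pragma_table_info('myTable') WHERE pk > 0 ORDER BY pk;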
For those trying to retrieve a primary key name in Android while using the Room library:
@Oogway101's answer was throwing an error: "no such column [your_table_name]" ... etc.
My way of submitting the query was:
String pkSearch = "SELECT l.name FROM pragma_table_info(" + tableName + ") as l WHERE l.pk = 1;";
database.query(new SimpleSQLiteQuery(pkSearch));
I tried using escaped (\") quotation marks and still got an error:
String pkSearch = "SELECT l.name FROM pragma_table_info(\"" + tableName + "\") as l WHERE l.pk = 1;";
So my solution was this:
String pragmaInfo = "PRAGMA table_info(" + tableName + ");";
Cursor c = database.query(new SimpleSQLiteQuery(pragmaInfo));
String id = null;
if (c.moveToFirst()) { // guard against an empty cursor
    do {
        if (c.getInt(5) == 1) { // column 5 is "pk"
            id = c.getString(1); // column 1 is the column name
        }
    } while (c.moveToNext() && id == null);
}
c.close();
Log.println(Log.ASSERT, TAG, "AbstractDao: pk is: " + id);
The explanation is that:
A) PRAGMA table_info returns a cursor with several columns; each row has at least 6 of them (I didn't check for more).
B) index 1 has the column name.
C) index 5 has the "pk" value: 0 if the column is not part of the primary key, otherwise its position within the key (1 for a single-column key).
You can define a primary key over more than one column, so checking for the value 1 alone will not bring an accurate result if your table has a composite key (IMHO more than one is bad design and balloons the complexity of the database beyond human comprehension).
So how will this fit into the @Dao? (you may ask...)
When making the Dao abstract, you have access to a constructor which receives the database.
From the documentation:
An abstract @Dao class can optionally have a constructor that takes a Database as its only parameter.
This is the constructor that will grant you access to run the query.
There is a catch though...
You may use the Dao during database creation with the .addCallback() method:
instance = Room.databaseBuilder(context.getApplicationContext(),
        AppDatabase2.class, "database")
    .addCallback(new RoomDatabase.Callback() {
        @Override
        public void onCreate(@NonNull SupportSQLiteDatabase db) {
            // You may use the Daos here.
        }
    })
    .build();
If you run a query in the constructor of the Dao, the database will enter a feedback loop of infinite instantiation.
This means that the query MUST be used LAZILY (just at the moment the user needs something), and because the value will never change, it can be stored and never re-queried.
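A minimal sketch of that lazy pattern (class, field, and method names are assumptions, not taken from the original answer; the constructor parameter type follows the documentation quote above):
import android.database.Cursor;
import androidx.room.Dao;
import androidx.room.RoomDatabase;
import androidx.sqlite.db.SimpleSQLiteQuery;

@Dao
public abstract class AbstractDao {
    private final RoomDatabase database;
    private String primaryKeyName; // cached after the first lookup

    public AbstractDao(RoomDatabase database) {
        // no queries here: running one now would re-trigger database creation
        this.database = database;
    }

    protected String getPrimaryKeyName(String tableName) {
        if (primaryKeyName == null) { // query lazily, only once
            Cursor c = database.query(
                    new SimpleSQLiteQuery("PRAGMA table_info(" + tableName + ");"));
            if (c.moveToFirst()) {
                do {
                    if (c.getInt(5) == 1) { // "pk" column
                        primaryKeyName = c.getString(1); // column name
                    }
                } while (c.moveToNext() && primaryKeyName == null);
            }
            c.close();
        }
        return primaryKeyName;
    }
}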
