How to create multiple vertices in SAP HANA Graph

I'm trying to create two (multiple) vertex tables in SAP HANA, like this:
Create two tables for the vertices ITEM and DATASET:
CREATE COLUMN TABLE "GREEK_MYTHOLOGY"."ITEM" (
"ITEM_ID" VARCHAR(100) PRIMARY KEY,
"ITEM_NAME" VARCHAR(100)
);
CREATE COLUMN TABLE "GREEK_MYTHOLOGY"."DATASET" (
"DATASET_ID" VARCHAR(100) PRIMARY KEY,
"DATASET_NAME" VARCHAR(100)
);
And creating the edge table REFERENCES:
CREATE COLUMN TABLE "GREEK_MYTHOLOGY"."REFERENCES" (
"REF_ID" INT UNIQUE NOT NULL,
"SOURCE" VARCHAR(100) NOT NULL
REFERENCES "GREEK_MYTHOLOGY"."ITEM" ("ITEM_ID")
ON UPDATE CASCADE ON DELETE CASCADE,
"TARGET" VARCHAR(100) NOT NULL
REFERENCES "GREEK_MYTHOLOGY"."DATASET" ("DATASET_ID")
ON UPDATE CASCADE ON DELETE CASCADE,
"TYPE" VARCHAR(100)
);
Now I would like to connect both vertex tables (ITEM and DATASET) with the edge table REFERENCES, like below:
CREATE GRAPH WORKSPACE "GREEK_MYTHOLOGY"."GRAPH"
EDGE TABLE "GREEK_MYTHOLOGY"."DATASET"
SOURCE COLUMN "SOURCE"
TARGET COLUMN "TARGET"
VERTEX TABLE "GREEK_MYTHOLOGY"."ITEM" KEY COLUMN "ITEM_ID"
VERTEX TABLE "GREEK_MYTHOLOGY"."DATASET"KEY COLUMN "DATASET_ID"
KEY COLUMN "REF_ID";
But it throws this exception at line VERTEX TABLE "GREEK_MYTHOLOGY"."DATASET"KEY COLUMN "DATASET_ID":
sql syntax error: incorrect syntax near "VERTEX": line 6 col 1 (at pos 200)
Is it possible to create multiple vertex tables in an SAP HANA graph? If yes, what is the right way to do this?

There's a misunderstanding here. The REFERENCES clause in the CREATE TABLE statement has nothing to do with the graph structure you want to represent.
Instead, it defines a foreign key constraint between the two tables.
The CREATE GRAPH WORKSPACE command only accepts one EDGE TABLE and one VERTEX TABLE as parameters.
However, you can also pass in synonyms or views here.
That way, you could create a view "ALL_ITEMS" like this:
CREATE VIEW "GREEK_MYTHOLOGY"."ALL_ITEMS" as
SELECT "ITEM_ID" as "ID", "ITEM_NAME" as "NAME" FROM "GREEK_MYTHOLOGY"."ITEM"
UNION
SELECT "DATASET_ID" as "ID", "DATASET_NAME" as "NAME" FROM "GREEK_MYTHOLOGY"."DATASET";
and then reference this view:
CREATE GRAPH WORKSPACE "GREEK_MYTHOLOGY"."GRAPH"
EDGE TABLE "GREEK_MYTHOLOGY"."REFERENCES"
SOURCE COLUMN "SOURCE"
TARGET COLUMN "TARGET"
KEY COLUMN "REF_ID"
VERTEX TABLE "GREEK_MYTHOLOGY"."ALL_ITEMS"
KEY COLUMN "ID";
Using this approach is possible, but you now have to make sure that the "ID" values are unique and not NULL across both tables, because the SOURCE and TARGET values in the edge table must match them.
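One way to guarantee that uniqueness is sketched below; it is an assumption on top of the answer, not part of it. The view prefixes each key with the name of its source table so the keys can never collide (the SOURCE and TARGET values in the edge table would then have to carry the same prefixes):
-- Hypothetical variant of the combined vertex view with collision-proof keys:
CREATE VIEW "GREEK_MYTHOLOGY"."ALL_ITEMS_PREFIXED" AS
SELECT 'ITEM:' || "ITEM_ID" AS "ID", "ITEM_NAME" AS "NAME", 'ITEM' AS "VERTEX_TYPE"
FROM "GREEK_MYTHOLOGY"."ITEM"
UNION ALL
SELECT 'DATASET:' || "DATASET_ID" AS "ID", "DATASET_NAME" AS "NAME", 'DATASET' AS "VERTEX_TYPE"
FROM "GREEK_MYTHOLOGY"."DATASET";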

Related

How to add an existing table column to an existing projection's SEGMENTED BY HASH clause in Vertica DB?

I have created a table and one projection of that table. I need to add an existing table column to that projection's SEGMENTED BY HASH clause in Vertica.
"I have to add SBS_ALERT_ID column in existing projection segmented by hash clause without creating new projection."
CREATE TABLE public.ALERT
(
AS_OF_DATE date,
ALERT_ID int,
LOAN_NUMBER varchar(20),
SERVICER_LOAN_NUMBER varchar(20),
SBS_LOAN_NUMBER varchar(20),
SBS_ALERT_ID int,
ALERT_TYPE_ID varchar(25)
);
CREATE PROJECTION public.ALERTTT_SEG /*+createtype(D)*/
(
AS_OF_DATE ENCODING RLE,
ALERT_ID ENCODING DELTARANGE_COMP,
LOAN_NUMBER ENCODING ZSTD_FAST_COMP,
SERVICER_LOAN_NUMBER,
SBS_LOAN_NUMBER ENCODING RLE,
SBS_ALERT_ID ENCODING DELTARANGE_COMP,
ALERT_TYPE_ID
)
AS
SELECT ALERT.AS_OF_DATE,
ALERT.ALERT_ID,
ALERT.LOAN_NUMBER,
ALERT.SERVICER_LOAN_NUMBER,
ALERT.SBS_LOAN_NUMBER,
ALERT.SBS_ALERT_ID,
ALERT.ALERT_TYPE_ID
FROM public.ALERT
ORDER BY ALERT.LOAN_NUMBER,
ALERT.SBS_LOAN_NUMBER
SEGMENTED BY hash(ALERT.LOAN_NUMBER, ALERT.SBS_LOAN_NUMBER) ALL NODES;
Do you mean something like this?
Initial situation:
CREATE TABLE segby (
fullname varchar(12),
dob date
);
CREATE PROJECTION segby_super /*+basename(segby),createtype(A)*/ (
fullname,
dob
)
AS
SELECT segby.fullname,
segby.dob
FROM segby
ORDER BY segby.fullname,
segby.dob
SEGMENTED BY hash(segby.dob, segby.fullname) ALL NODES OFFSET 0;
Then, you add a column and initialise it with a DEFAULT ...
ALTER TABLE segby ADD id INT NOT NULL DEFAULT HASH(dob,fullname);
And subsequently, you make sure the table is segmented by that new column, by creating such a projection:
CREATE PROJECTION segby_id
AS SELECT
id
, fullname
, dob
FROM segby
ORDER BY id
SEGMENTED BY HASH(id) ALL NODES;
And finish by running a REFRESH on your table ...
SELECT REFRESH('segby');
-- out REFRESH
-- out -
-- out Refresh completed with the following outcomes:
-- out Projection Name: [Anchor Table] [Status] [Refresh Method] [Error Count] [Duration (sec)]
-- out ----------------------------------------------------------------------------------------
-- out "dbadmin"."segby_id": [segby] [refreshed] [scratch] [0] [0]
For your specific question: It looks as if you would like to be able to fire a command like:
ALTER PROJECTION public.ALERTTT_SEG
SEGMENTED BY hash(ALERT.LOAN_NUMBER, ALERT.SBS_LOAN_NUMBER,ALERT.SBS_ALERT_ID) ALL NODES;
This simply does not work.
You will have to create a new projection, segmented the new way, refresh the table, and drop the original projection, as sketched below.
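Here is a minimal sketch of that sequence for the ALERT table; the projection name ALERTTT_SEG_V2 is just an assumption, and the column encodings are omitted for brevity:
-- 1. Create a replacement projection that includes SBS_ALERT_ID in the segmentation:
CREATE PROJECTION public.ALERTTT_SEG_V2
AS SELECT AS_OF_DATE,
ALERT_ID,
LOAN_NUMBER,
SERVICER_LOAN_NUMBER,
SBS_LOAN_NUMBER,
SBS_ALERT_ID,
ALERT_TYPE_ID
FROM public.ALERT
ORDER BY LOAN_NUMBER, SBS_LOAN_NUMBER
SEGMENTED BY hash(LOAN_NUMBER, SBS_LOAN_NUMBER, SBS_ALERT_ID) ALL NODES;
-- 2. Populate the new projection:
SELECT REFRESH('public.ALERT');
-- 3. Drop the original projection (depending on k-safety, buddy projections may have to go too):
DROP PROJECTION public.ALERTTT_SEG;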

Cascade delete not working in SQLite with a 1:1 relationship

I'm trying to set up a 1:1 relationship between two tables Places and People. A person has a home, and when that person is deleted the home should also be deleted. Other tables also use the Places table, so there is no column in the Places table that refers to the People table.
To try to achieve this, I've set the People table up with a cascade delete (ON DELETE CASCADE) on the foreign key pointing at the Places table, so that when a person is deleted the referenced place is deleted as well.
CREATE TABLE IF NOT EXISTS "People" (
"Id" TEXT NOT NULL CONSTRAINT "PK_People" PRIMARY KEY,
"Name" TEXT NOT NULL,
"HomeId" TEXT NOT NULL,
CONSTRAINT "FK_People_Places_HomeId" FOREIGN KEY ("HomeId") REFERENCES "Places" ("Id") ON DELETE CASCADE
);
However, when I actually tried this, the row in the Places table still existed. Is there any way to fix this?
Fully runnable example
PRAGMA foreign_keys = ON;
CREATE TABLE IF NOT EXISTS "Places" (
"Id" TEXT NOT NULL CONSTRAINT "PK_Places" PRIMARY KEY,
"Name" TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS "People" (
"Id" TEXT NOT NULL CONSTRAINT "PK_People" PRIMARY KEY,
"Name" TEXT NOT NULL,
"HomeId" TEXT NOT NULL,
CONSTRAINT "FK_People_Places_HomeId" FOREIGN KEY ("HomeId") REFERENCES "Places" ("Id") ON DELETE CASCADE
);
DELETE FROM Places;
DELETE FROM People;
INSERT INTO "Places" ("Id", "Name") VALUES ("6f81fa78-2820-48e1-a0a7-b0b71aa38262", "Castle");
INSERT INTO "People" ("Id", "HomeId", "Name") VALUES ("ccb079ce-b477-47cf-adba-9fdac6a41718", "6f81fa78-2820-48e1-a0a7-b0b71aa38262", "Fiona");
-- Should delete both the person and the place, but does not
DELETE FROM "People" WHERE "Id" = "ccb079ce-b477-47cf-adba-9fdac6a41718";
SELECT pl.Name "Place Name",
po.Name "Person Name"
FROM Places pl
LEFT JOIN People po USING(Name)
UNION ALL
SELECT pl.Name,
po.Name
FROM People po
LEFT JOIN Places pl USING(Name)
WHERE pl.Name IS NULL;
The "ON DELETE CASCADE" action for the foreign key that you defined in the table People for the column HomeId which references the column Id of the table Places means that:
whenever you delete a row in the table Places (which is the parent
table in this relationship) all rows in the table People that hold a
reference to the deleted row will also be deleted.
In your case you are deleting a row in the table People, and this does not affect the table Places at all.
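If the place really should be removed together with the person, one possible approach, offered here as a sketch rather than as part of the original answer, is an AFTER DELETE trigger on People (the trigger name is made up, and this assumes no other row still references that place):
-- Hypothetical trigger: when a person is deleted, delete the place they referenced.
CREATE TRIGGER IF NOT EXISTS "TRG_People_DeleteHome"
AFTER DELETE ON "People"
BEGIN
    DELETE FROM "Places" WHERE "Id" = OLD."HomeId";
END;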

How to write a delete trigger in Oracle PL/SQL - stuck on identifying "matching rows"

I have a trigger I'm writing whereby, once I delete a row, I want to delete the corresponding row in another table (which is common_cis.security_function).
The source table is party.security_function.
Here are the columns in common_cis.security_function :
URL
SCRTY_FUNC_NAME
SCRTY_FUNC_DESC
IDN
CREATE_TMSTMP
CNCRCY_USER_IDN
Here are the columns in party.security_function :
UPDATE_USER_SRC_SYS_CD
UPDATE_USER_ID
UPDATE_TS
SCRT_FUNC_NM
SCRT_FUNC_DESC
CREAT_USER_SRC_SYS_CD
CREAT_USER_ID
CREAT_TS
What I have so far is:
delete from common_cis.security_function CCSF
where CCSF.SCRTY_FUNC_NAME = :new.SCRT_FUNC_NM;
Is this the right idea? Or do I use some kind of row ID?
thanks
I think you should use integrity constraints for that, namely a foreign key constraint with an ON DELETE CASCADE clause.
Here is an example, but first check whether tables with the names I used already exist in your schema:
-- create tables:
create table master_table(
URL varchar2(1000),
SCRTY_FUNC_NAME varchar2(100),
SCRTY_FUNC_DESC varchar2(1000));
create table detail_table(
SCRT_FUNC_NM varchar2(100),
SCRT_FUNC_DESC varchar2(1000),
UPDATE_USER_ID number,
UPDATE_TS varchar2(100));
-- add primary key and foreign key constraints:
alter table master_table add constraint function_pk primary key (SCRTY_FUNC_NAME);
alter table detail_table add constraint function_fk foreign key (SCRT_FUNC_NM) references master_table (SCRTY_FUNC_NAME) on delete cascade;
-- fill tables with data:
insert into master_table
values ('url number 1', 'sec function #1', 'description of function #1');
insert into detail_table
values('sec function #1', 'description', 1, '123abc');
insert into detail_table
values('sec function #1', 'description', 2, '456xyz');
-- check tables: first contains 1 row and second - 2 rows
select count(*) from master_table;
select count(*) from detail_table;
-- delete rows from first table only:
delete from master_table;
-- check tables once again - both are empty:
select count(*) from master_table;
select count(*) from detail_table;
-- clear test tables:
drop table detail_table;
drop table master_table;
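If a trigger is still required, for example because a foreign key between the two schemas is not an option, here is a minimal sketch; note that a DELETE trigger must read :old rather than :new as in the question, and the trigger name is made up:
-- Hypothetical row-level trigger: remove the matching row from common_cis
-- whenever a row is deleted from party.security_function.
CREATE OR REPLACE TRIGGER party.trg_security_function_ad
AFTER DELETE ON party.security_function
FOR EACH ROW
BEGIN
  DELETE FROM common_cis.security_function ccsf
  WHERE ccsf.scrty_func_name = :old.scrt_func_nm; -- :old holds the deleted row's values
END;
/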

Oracle: searching over a range of values?

CREATE TABLE "DEPARTMENT"
( "DEP_NO" NUMBER(*,0) NOT NULL ENABLE,
"SSN" NUMBER(*,0),
"STREET" CHAR(40) NOT NULL ENABLE,
"CITY" CHAR(25) NOT NULL ENABLE,
"NAME" CHAR(50) NOT NULL ENABLE,
"BUDGET" NUMBER(8,2),
CONSTRAINT "PK_DEPARTMENT" PRIMARY KEY ("DEP_NO") ENABLE
) ;
ALTER TABLE "DEPARTMENT" ADD CONSTRAINT "FK_DEPARTMENT_EMPLOYEE" FOREIGN KEY ("SSN")
REFERENCES "EMPLOYEE" ("SSN") ENABLE;
ALTER TABLE "DEPARTMENT" ADD CONSTRAINT "FK_DEPARTMENT_LOCATION" FOREIGN KEY ("STREET", "CITY")
REFERENCES "LOCATION" ("STREET", "CITY") ENABLE;
What is the correct way of building a database? Is it better to create the tables with their primary keys, insert the data, and then link the tables to one another with foreign keys, or is it better to create all the tables, link them together, and then insert the required data?
There is no correct way. Both approaches can be used.
The simpler approach is to first create all the tables, indexes and constraints and then to insert the data.
For maximum performance, first create just the tables and the primary key indexes, then insert the data and finally create the additional indexes and the constraints.
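As a rough sketch of that second ordering, reusing the DEPARTMENT table from the question (the EMPLOYEE definition here is only an assumption for illustration):
-- 1. Create the tables with their primary keys only:
CREATE TABLE "EMPLOYEE" (
"SSN" NUMBER(*,0) NOT NULL,
"NAME" CHAR(50),
CONSTRAINT "PK_EMPLOYEE" PRIMARY KEY ("SSN") ENABLE
);
-- (create DEPARTMENT and LOCATION the same way, without the foreign keys)
-- 2. Bulk-insert the data here.
-- 3. Add the remaining indexes and foreign key constraints last:
ALTER TABLE "DEPARTMENT" ADD CONSTRAINT "FK_DEPARTMENT_EMPLOYEE" FOREIGN KEY ("SSN")
REFERENCES "EMPLOYEE" ("SSN") ENABLE;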

How do I update or insert an sqlite row with multiple conditions

I have three tables. A, B, and A_to_B. The relationship between A and B is many-to-many. This relationship information is stored in table A_to_B. Its construction is defined as follows:
CREATE TABLE A_to_B
(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
identifier_from_a TEXT NOT NULL,
identifier_from_b TEXT NOT NULL);
Each relationship is unique.
I would like to persist my relationship data with a single statement per relationship. My question is, how can I achieve this without inserting duplicates?
The solution is to use a multiple column UNIQUE definition in the create table statement.
For example:
CREATE TABLE A_to_B
(id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
identifier_from_a TEXT NOT NULL,
identifier_from_b TEXT NOT NULL, UNIQUE (identifier_from_a, identifier_from_b) ON CONFLICT REPLACE);
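For illustration, with that constraint in place a single INSERT per relationship cannot produce a duplicate (the values are made up):
INSERT INTO A_to_B (identifier_from_a, identifier_from_b) VALUES ('a1', 'b1');
-- Re-running the same statement replaces the existing row instead of adding a second one:
INSERT INTO A_to_B (identifier_from_a, identifier_from_b) VALUES ('a1', 'b1');
-- Alternatively, INSERT OR IGNORE keeps the original row (and its id) untouched:
INSERT OR IGNORE INTO A_to_B (identifier_from_a, identifier_from_b) VALUES ('a1', 'b1');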
