Oracle 11g VARRAY of OBJECTS - plsql

I have the following statements in Oracle 11g:
CREATE TYPE person AS OBJECT (
name VARCHAR2(10),
age NUMBER
);
CREATE TYPE person_varray AS VARRAY(5) OF person;
CREATE TABLE people (
somePeople person_varray
)
How can I select the name value for a person, i.e.
SELECT somePeople(person(name)) FROM people
Thanks

I'm pretty sure that:
What you're doing isn't what I'd be doing. It sort of completely violates relational principles, and you're going to end up with an object/type system in Oracle that you might not be able to change once it's been laid down. The best use I've seen for SQL TYPEs (not PL/SQL types) is basically being able to cast a ref cursor back for pipelined functions.
You have to unnest the collection before you can query it relationally, like so:
SELECT NAME FROM
(SELECT SP.* FROM PEOPLE P, TABLE(P.somePeople) SP)
That'll give you all rows, because there's nothing in your specifications (like a PERSON_ID attribute) to restrict the rows.
The Oracle Application Developer's Guide - Object Relational Features discusses all of this in much greater depth, with examples.
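Oracle's TABLE() operator is specific to its object-relational features, but the unnesting idea itself is portable. A minimal sketch using Python's sqlite3, where the VARRAY column is modelled as a JSON array and SQLite's json_each plays the role of TABLE() (this is an analogy, not Oracle syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for the people table: the collection column is a JSON array.
conn.execute("CREATE TABLE people (some_people TEXT)")
conn.execute("""INSERT INTO people VALUES
    ('[{"name": "Ram", "age": 24}, {"name": "Sita", "age": 22}]')""")

# json_each unnests the collection into rows, so the nested names
# can be queried relationally, just like TABLE(P.somePeople) above.
rows = conn.execute("""
    SELECT json_extract(j.value, '$.name') AS name
    FROM people p, json_each(p.some_people) j
""").fetchall()
print(rows)  # [('Ram',), ('Sita',)]
```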

The insert query:-
insert into people values (
person_varray(person('Ram','24'))
);
To select:-
select * from people;
SELECT NAME FROM (SELECT SP.* FROM PEOPLE P, TABLE(P.somePeople) SP)
While inserting a row into the people table, use the constructor of person_varray and then the constructor of person for each element. The above INSERT command creates a single row in the people table.
select somePeople from people;

SOMEPEOPLE(NAME, AGE)
---------------------------------------------------
PERSON_VARRAY(PERSON('Ram', 24))
The update query will be:-
update people
set somePeople =
person_varray
(
person('SaAM','23')
);

Related

Inserting from an object into another object

From table abc I am inserting values into the object abc_type; now I'm trying
to insert from abc_type into the abc_second object on some condition. While doing
this I'm getting an error that this is not a table. Is it even possible to fetch
data from one object and insert it into another?
create table abc(id number,name varchar2(50));
create or replace type abc_obj as object(id number,name varchar2(50) ) ;
create or replace type abc_ref as table of abc_obj;
declare
abc_type abc_ref := abc_ref();
abc_second abc_ref := abc_ref();
begin
select abc_obj(id ,name)
bulk collect into abc_type
from abc;
insert into table(abc_second) select * from abc_type where id=1;
end;
Unfortunately, Oracle uses the term "table" in three or more totally different contexts. When you "create table ..." you build the definition of an object in which to persist data; this is the normal use of the term. However, when you use the form "... table of ..." you define a PL/SQL collection (array) for holding data inside PL/SQL. In this case you have created a "nested table" (a third use of "table"). (Note: some collection types can be declared as column attributes on tables.)
There are also several issues with your object definitions.
You did not explain the intended use of "second_table", but it seems you merely want a
copy of the data from "abc". This can be achieved in multiple ways. If it is basically a one-time process then just
create table second_table as select * from abc;
If this is an ongoing action then
create table second_table as select * from abc where 1=0;
-- then when ever needed
insert into second_table select * from abc;
If neither of these satisfy your intended use please expand your question to explain the intended use.
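The two-step pattern above (create an empty copy once, then insert on an ongoing basis) is plain SQL that works the same way in most engines; a small sketch with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE abc (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO abc VALUES (?, ?)", [(1, "one"), (2, "two")])

# One-time: create second_table with abc's shape but no rows (1=0 is never true).
conn.execute("CREATE TABLE second_table AS SELECT * FROM abc WHERE 1=0")

# Ongoing: copy the current contents whenever needed.
conn.execute("INSERT INTO second_table SELECT * FROM abc")

rows = conn.execute("SELECT * FROM second_table ORDER BY id").fetchall()
print(rows)  # [(1, 'one'), (2, 'two')]
```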

Efficient insertion of row and foreign table row if it does not exist

Similar to this question and this solution for PostgreSQL (in particular "INSERT missing FK rows at the same time"):
Suppose I am making an address book with a "Groups" table and a "Contact" table. When I create a new Contact, I may want to place them into a Group at the same time. So I could do:
INSERT INTO Contact VALUES (
'Bob',
(SELECT group_id FROM Groups WHERE name = 'Friends')
)
But what if the "Friends" Group doesn't exist yet? Can we insert this new Group efficiently?
The obvious thing is to do a SELECT to test if the Group exists already; if not do an INSERT. Then do an INSERT into Contacts with the sub-SELECT above.
Or I can constrain Group.name to be UNIQUE, do an INSERT OR IGNORE, then INSERT into Contacts with the sub-SELECT.
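That two-statement approach (UNIQUE constraint, INSERT OR IGNORE, then INSERT with the sub-SELECT) can be sketched with Python's sqlite3; table and column names here are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE groups (group_id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("CREATE TABLE contact (name TEXT, group_id INTEGER)")

def add_contact(name, group):
    # Step 1: create the group if it is missing; UNIQUE on groups.name
    # makes the OR IGNORE a silent no-op when the group already exists.
    conn.execute("INSERT OR IGNORE INTO groups (name) VALUES (?)", (group,))
    # Step 2: insert the contact, resolving the group id with the sub-select.
    conn.execute(
        "INSERT INTO contact VALUES (?, (SELECT group_id FROM groups WHERE name = ?))",
        (name, group),
    )

add_contact("Bob", "Friends")
add_contact("Ann", "Friends")  # second call finds the existing group
rows = conn.execute("SELECT COUNT(*) FROM groups").fetchone()
print(rows)  # (1,)
```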
I can also keep my own cache of which Groups exist, but that seems like I'm duplicating functionality of the database in the first place.
My guess is that there is no way to do this in one query, since INSERT does not return anything and cannot be used in a subquery. Is that intuition correct? What is the best practice here?
My guess is that there is no way to do this in one query, since INSERT does not return anything and cannot be used in a subquery. Is that intuition correct?
You could use a trigger and a little modification of the tables, and then you could do it with a single query. For example, consider the following.
Purely for convenience of producing the demo:-
DROP TRIGGER IF EXISTS add_group_if_not_exists;
DROP TABLE IF EXISTS contact;
DROP TABLE IF EXISTS groups;
One-time setup SQL :-
CREATE TABLE IF NOT EXISTS groups (id INTEGER PRIMARY KEY, group_name TEXT UNIQUE);
INSERT INTO groups VALUES(-1,'NOTASSIGNED');
CREATE TABLE IF NOT EXISTS contact (id INTEGER PRIMARY KEY, contact TEXT, group_to_use TEXT, group_reference TEXT DEFAULT -1 REFERENCES groups(id));
CREATE TRIGGER IF NOT EXISTS add_group_if_not_exists
AFTER INSERT ON contact
BEGIN
INSERT OR IGNORE INTO groups (group_name) VALUES(new.group_to_use);
UPDATE contact SET group_reference = (SELECT id FROM groups WHERE group_name = new.group_to_use), group_to_use = NULL WHERE id = new.id;
END;
SQL that would be used on an ongoing basis :-
INSERT INTO contact (contact,group_to_use) VALUES
('Fred','Friends'),
('Mary','Family'),
('Ivan','Enemies'),
('Sue','Work colleagues'),
('Arthur','Fellow Rulers'),
('Amy','Work colleagues'),
('Henry','Fellow Rulers'),
('Canute','Fellow Ruler')
;
The number of values and the actual values would vary.
SQL Just for demonstration of the result
SELECT * FROM groups;
SELECT contact,group_name FROM contact JOIN groups ON group_reference = groups.id;
Results
This results in :-
1) The groups (noting that the group "NOTASSIGNED" is intrinsic to the working of the above and hence added initially). You have to be careful regarding mistakes like "Fellow Ruler" instead of "Fellow Rulers"; -1 is used because it would not be a normal, automatically generated value.
2) The contacts with their respective groups.
Efficient insertion
That could likely be debated from here to eternity so I leave it for the fence sitters/destroyers to decide :). However, some considerations:-
It works and appears to do what is wanted.
It's a little wasteful due to the additional wasted column.
It tries to minimise that waste by clearing the column once the trigger has used it (here it is set to NULL, which may be more efficient than an empty string, though for some it can be confusing)
There will obviously be an overhead BUT in comparison to the alternatives probably negligible (perhaps important if you were extracting every Facebook user) but if it's user input driven likely irrelevant.
What is the best practice here?
Fences again. :)
Note: hopefully obvious, but the DROP statements are purely for convenience, and all the other SQL up until the INSERT is run once
to set up the tables and the trigger in preparation for the single INSERT
that adds a group if necessary.
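Since the setup above is SQLite, it can be exercised directly from Python's sqlite3 to confirm the trigger's behaviour (this harness is not part of the original answer; the contact rows are a shortened subset of the demo data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE groups (id INTEGER PRIMARY KEY, group_name TEXT UNIQUE);
INSERT INTO groups VALUES(-1,'NOTASSIGNED');
CREATE TABLE contact (id INTEGER PRIMARY KEY, contact TEXT,
    group_to_use TEXT, group_reference TEXT DEFAULT -1 REFERENCES groups(id));
CREATE TRIGGER add_group_if_not_exists AFTER INSERT ON contact
BEGIN
    -- create the group if it does not exist yet, then point the new
    -- contact at it and clear the scratch column.
    INSERT OR IGNORE INTO groups (group_name) VALUES(new.group_to_use);
    UPDATE contact SET
        group_reference = (SELECT id FROM groups WHERE group_name = new.group_to_use),
        group_to_use = NULL
    WHERE id = new.id;
END;
INSERT INTO contact (contact, group_to_use) VALUES
    ('Fred','Friends'), ('Mary','Family'), ('Amy','Friends');
""")

rows = conn.execute("""SELECT contact, group_name FROM contact
                       JOIN groups ON group_reference = groups.id
                       ORDER BY contact""").fetchall()
print(rows)  # [('Amy', 'Friends'), ('Fred', 'Friends'), ('Mary', 'Family')]
```

Note that 'Friends' is created exactly once even though two contacts use it.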

How to get relationship between tables in Progress db

I want to get the relationships of a table in a Progress DB. For example:
OrderDetail: Date, Product_Id, Order_Id, Quantity
In this case, I want to determine that the Product_Id and Order_Id columns are foreign keys.
The OpenEdge database does not have explicit support for "foreign keys".
Some application schemas have naming conventions that might help you.
You can, as Mike mentioned, loop through the meta-schema tables _file, _field and _index and apply logic that follows such a naming convention, but there is no generic solution that can be applied to all OpenEdge databases.
For instance, if your naming convention is that a field name of tableNameId indicates a potential foreign key for tableName, you might try something like:
find _file no-lock where _file._file-name = "tableName" no-error.
if available( _file ) then
do:
find _field no-lock where _file-recid = recid ( _file ) and _field-name = "tableNameId" no-error.
if available( _field ) then
do:
message "common field exists!".
find first _index-field no-lock where _field-recid = recid( _field ) no-error.
if available( _index-field ) then
do:
message "and there is at least one index on tableNameId!".
find _index no-lock where recid( _index ) = _index-recid no-error.
message _index-name _unique _num-comp. /* you probably want a unique single component index */
end.
end.
end.
While the OpenEdge database and the ABL engine don't know about relationships or external keys, the SQL engine does implement foreign key constraints. See
https://knowledgebase.progress.com/articles/Article/000034195
I don't know if this is useful for you. These constraints would have to be defined first if they don't exist already, which is unlikely if your application is mainly ABL and not SQL. Also, the website would need to access the database through SQL. It is not enough to write SQL statements in your ABL code; the access needs to go through the SQL engine.
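For a general feel of what engine-level foreign-key metadata looks like when an engine does track it, here is an analogy using Python's sqlite3 (SQLite, not OpenEdge; table names are invented from the question's OrderDetail example). The point is that declared constraints can be read back from the catalog instead of being inferred from naming conventions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (product_id INTEGER PRIMARY KEY);
CREATE TABLE orders  (order_id   INTEGER PRIMARY KEY);
CREATE TABLE order_detail (
    date       TEXT,
    product_id INTEGER REFERENCES product(product_id),
    order_id   INTEGER REFERENCES orders(order_id),
    quantity   INTEGER
);
""")

# The engine reports declared foreign keys directly; each row is
# (id, seq, parent table, child column, parent column, ...).
fks = conn.execute("PRAGMA foreign_key_list(order_detail)").fetchall()
for _, _, parent, child_col, parent_col, *_ in fks:
    print(child_col, "->", f"{parent}.{parent_col}")
```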

PL/SQL insert based on conditions

I would appreciate all help I can get. I'm learning PL/SQL and have stumbled on a problem so please help me find an appropriate way of handling this situation :)
I'm running Oracle 11gR2
My schema:
CREATE TABLE "ENTRY"
(
"TYPE" VARCHAR2(5 CHAR) ,
"TRANSACTION" VARCHAR2(5 CHAR),
"OWNER" VARCHAR2(5 CHAR)
);
CREATE TABLE "VIEW"
(
"TYPE" VARCHAR2(5 CHAR) ,
"TRANSACTION" VARCHAR2(5 CHAR),
"OWNER" VARCHAR2(5 CHAR)
);
CREATE TABLE "REJECTED"
(
"TYPE" VARCHAR2(5 CHAR) ,
"TRANSACTION" VARCHAR2(5 CHAR),
"OWNER" VARCHAR2(5 CHAR)
);
My sample data:
insert into entry (type, transaction, owner) values (11111, 11111, 11111);
insert into entry (type, transaction, owner) values (22222, 22222, 22222);
Now for the puzzling part. I've written a procedure that should copy the values from the ENTRY table to the VIEW table if no record exists for a specific (transaction AND owner) combination. If such a combination already exists in the VIEW table, that record should instead go to the REJECTED table. The procedure does that, but on each run I get more and more entries in the REJECTED table, so my question is how to limit inserts into the REJECTED table: if a record already exists in REJECTED, do nothing.
create or replace PROCEDURE COPY AS
v_owner_entry ENTRY.owner%TYPE;
v_transaction_entry ENTRY.transaction%TYPE;
v_owner VIEW.owner%TYPE;
v_transaction VIEW.transaction%TYPE;
begin
begin
select e.owner, e.transaction, v.owner, v.transaction
into v_owner_entry, v_transaction_entry, v_owner, v_transaction
from entry e, view v
where e.owner = v.owner
and e.transaction = v.transaction;
EXCEPTION
when too_many_rows
then
insert into REJECTED
(
TYPE,
TRANSACTION,
OWNER
)
SELECT
s1.TYPE,
s1.TRANSACTION,
s1.OWNER
FROM ENTRY s1;
when no_data_found
THEN
insert into VIEW
(
TYPE,
TRANSACTION,
OWNER
)
SELECT
s.TYPE,
s.TRANSACTION,
s.OWNER
FROM ENTRY s;
end;
end;
Any suggestions guys? :)
Cheers!
UPDATE
Sorry if the original post wasn't clear enough -
The procedure should replicate data (on a daily basis) from DB1 to DB2 and insert into VIEW or REJECTED depending on the conditions. Here is a photo, maybe it would be clearer:
I think Dmitry was trying to suggest using MERGE in the too_many_rows case of your exception handler. So you've already done the SELECT up front and determined that the Entry row appears in your View table and so it raises the exception too_many_rows.
The problem is that you don't know which records have thrown the exception (assuming your Entry table has more than one row each time this procedure is called). So I think your idea of using the exception section to determine that you have too many rows was elegant, but insufficient for your needs.
As a journeyman programmer, instead of trying to come up with something terribly elegant, I'd use more brute force.
Something more like:
BEGIN
FOR entry_cur IN
(select e.owner, e.transaction, SUM(NVL2(v.owner, 1, 0)) rec_count
from entry e, view v
where e.owner = v.owner(+)
and e.transaction = v.transaction(+)
GROUP BY e.owner, e.transaction)
LOOP
CASE WHEN entry_cur.rec_count = 0
THEN INSERT INTO view ...
ELSE MERGE INTO rejected r
ON (r.transaction = entry_cur.transaction
AND r.owner = entry_cur.owner)
WHEN NOT MATCHED THEN INSERT blah blah blah
;
END CASE;
END LOOP;
END;
No exceptions thrown. The loop gives you the correct records you don't want to insert into View. BTW, I couldn't get over the fact that you used keywords for object names: view, transaction, etc. You quoted the table names in the CREATE TABLE "VIEW" statement, which gets around the fact that these are keywords, but you didn't quote them when you referenced them later, so I'm surprised the compiler didn't reject the code. I think that's a recipe for disaster because it makes debugging so much harder. I'm just hoping that you did this for the example here, not in your real PL/SQL.
Personally, I've had trouble using the MERGE statement where it didn't seem to work consistently, but that was an Oracle version long ago and probably my own ignorance in how it should work.
Use MERGE statement:
merge into REJECTED r
using ENTRY e
on (r.type = e.type and
r.transaction = e.transaction and
r.owner = e.owner)
when not matched then insert (type, transaction, owner)
values (e.type, e.transaction, e.owner);
This query will insert into table REJECTED only combinations of (type, transaction, owner) from table ENTRY that are not present there yet.
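Oracle's MERGE isn't available in every engine, but the same "insert only the combinations not already present" effect can be sketched portably with a NOT EXISTS anti-join. A sketch with Python's sqlite3 (the transaction column is renamed txn here, since TRANSACTION is a keyword in several engines); re-running the statement adds nothing the second time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE entry    (type TEXT, txn TEXT, owner TEXT);
CREATE TABLE rejected (type TEXT, txn TEXT, owner TEXT);
INSERT INTO entry VALUES ('11111','11111','11111'), ('22222','22222','22222');
""")

# Insert only rows whose (type, txn, owner) combination is not yet in rejected,
# mirroring MERGE ... WHEN NOT MATCHED THEN INSERT.
sql = """
INSERT INTO rejected (type, txn, owner)
SELECT e.type, e.txn, e.owner
FROM entry e
WHERE NOT EXISTS (SELECT 1 FROM rejected r
                  WHERE r.type = e.type AND r.txn = e.txn AND r.owner = e.owner)
"""
conn.execute(sql)
conn.execute(sql)  # second run is a no-op: everything already matches
count = conn.execute("SELECT COUNT(*) FROM rejected").fetchone()[0]
print(count)  # 2
```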
You're trying to code yourself out of a quandary you've modeled yourself into.
A table should contain your entity. There should not be a table of entities in one state, another table for entities in another state, and yet another table for entities in a different state altogether. You're seeing the kind of problems this can lead to.
The state can be an attribute (field or column) of the one table. Or normalized to a state table but still only one entity table. When an entity changes states, this is accomplished by an update, not a move from one table to another.

Hierarchical Database Select / Insert Statement (SQL Server)

I have recently stumbled upon a problem with selecting relationship details from one table and inserting them into another table; I hope someone can help.
I have a table structure as follows:
ID (PK)   Name        ParentID
1         Myname      0
2         nametwo     1
3         namethree   2
This is the table I need to select from to get all the relationship data, as there could be an unlimited number of sub-links (is there a function I can create to loop through this?).
Then, once I have all the data, I need to insert it into another table, and the IDs will have to change, since the IDs must go in order (e.g. I cannot have ID 2 be a sub of 3). I am hoping I can use the same function for both the selecting and the inserting.
If you are using SQL Server 2005 or above, you may use recursive queries to get your information. Here is an example:
With tree (id, Name, ParentID, [level])
As (
Select id, Name, ParentID, 1
From [myTable]
Where ParentID = 0
Union All
Select child.id
,child.Name
,child.ParentID
,parent.[level] + 1 As [level]
From [myTable] As [child]
Inner Join [tree] As [parent]
On [child].ParentID = [parent].id)
Select * From [tree];
This query will return the row requested by the first portion (Where ParentID = 0) and all sub-rows recursively. Does this help you?
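Essentially the same query runs unchanged on any engine with recursive CTE support; a quick check with Python's sqlite3 and the sample rows from the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER PRIMARY KEY, name TEXT, parentID INTEGER)")
conn.executemany("INSERT INTO myTable VALUES (?, ?, ?)",
                 [(1, "Myname", 0), (2, "nametwo", 1), (3, "namethree", 2)])

# Anchor: the root rows (parentID = 0); recursive step: join children to
# rows already in the tree, incrementing the level.
rows = conn.execute("""
    WITH RECURSIVE tree (id, name, parentID, level) AS (
        SELECT id, name, parentID, 1 FROM myTable WHERE parentID = 0
        UNION ALL
        SELECT child.id, child.name, child.parentID, parent.level + 1
        FROM myTable AS child JOIN tree AS parent ON child.parentID = parent.id
    )
    SELECT id, name, level FROM tree
""").fetchall()
print(rows)  # [(1, 'Myname', 1), (2, 'nametwo', 2), (3, 'namethree', 3)]
```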
I'm not sure I understand what you want to have happen with your insert. Can you provide more information in terms of the expected result when you are done?
Good luck!
For the retrieval part, you can take a look at Common Table Expressions (CTEs). This feature provides recursive operations in SQL.
For the insertion part, you can use the CTE above to regenerate the ID, and insert accordingly.
I hope this helps: Self-Joins in SQL.
This is the problem of finding the transitive closure of a graph in sql. SQL does not support this directly, which leaves you with three common strategies:
use a vendor specific SQL extension
store the Materialized Path from the root to the given node in each row
store the Nested Sets, that is the interval covered by the subtree rooted at a given node when nodes are labeled depth first
The first option is straightforward, and if you don't need database portability is probably the best. The second and third options have the advantage of being plain SQL, but require maintaining some de-normalized state. Updating a table that uses materialized paths is simple, but for fast queries your database must support indexes for prefix queries on string values. Nested sets avoid needing any string indexing features, but can require updating a lot of rows as you insert or remove nodes.
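For the materialized-path option, each row stores the path from the root, and "all descendants of X" becomes a string prefix query. A minimal sketch with Python's sqlite3, reusing the question's sample rows (the path encoding is one possible convention, not the only one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, name TEXT, path TEXT)")
conn.executemany("INSERT INTO node VALUES (?, ?, ?)", [
    (1, "Myname",    "1/"),       # root
    (2, "nametwo",   "1/2/"),     # child of 1
    (3, "namethree", "1/2/3/"),   # child of 2
])
# An index on path is what makes prefix queries fast at scale.
conn.execute("CREATE INDEX node_path ON node(path)")

# All descendants of node 1: every path that starts with node 1's path.
rows = conn.execute(
    "SELECT name FROM node WHERE path LIKE ? AND id <> 1 ORDER BY id", ("1/%",)
).fetchall()
print(rows)  # [('nametwo',), ('namethree',)]
```

The trade-off noted above shows up here: reads are a single indexed prefix scan, but moving a subtree means rewriting the path of every descendant.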
If you're fine with always using MSSQL, I'd use the vendor specific option Adrian mentioned.
