I want to get the relationships of a table in a Progress database. For example:
OrderDetail: Date, Product_Id, Order_Id, Quantity
In this case, I want to determine that the Product_Id and Order_Id columns are foreign keys.
The OpenEdge database does not have explicit support for "foreign keys".
Some application schemas have naming conventions that might help you.
You can, as Mike mentioned, loop through the meta-schema tables _file, _field and _index and apply logic that follows such a naming convention, but there is no generic solution that can be applied to all OpenEdge databases.
For instance, if your naming convention is that a field named tableNameId indicates a potential foreign key to tableName, you might try something like:
find _file no-lock where _file._file-name = "tableName" no-error.
if available( _file ) then
do:
    find _field no-lock
         where _field._file-recid = recid( _file )
           and _field._field-name = "tableNameId"
         no-error.
    if available( _field ) then
    do:
        message "common field exists!".
        find first _index-field no-lock
             where _index-field._field-recid = recid( _field )
             no-error.
        if available( _index-field ) then
        do:
            message "and there is at least one index on tableNameId!".
            find _index no-lock
                 where recid( _index ) = _index-field._index-recid
                 no-error.
            message _index._index-name _index._unique _index._num-comp. /* you probably want a unique single component index */
        end.
    end.
end.
While the OpenEdge database and the ABL engine don't know about relationships or foreign keys, the SQL engine does implement foreign key constraints. See
https://knowledgebase.progress.com/articles/Article/000034195
I don't know if this is useful for you. These constraints would need to be defined first if they don't already exist, and they probably don't if your application is mainly ABL rather than SQL. Also, the website would need to access the database through SQL: it is not enough to write SQL statements in your ABL code, the access needs to go through the SQL engine.
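For illustration only (untested here, and assuming a Product table whose Product_Id column carries a primary key or unique constraint), defining such a constraint through the SQL engine would look roughly like this:
-- Hypothetical example; must be run through the SQL engine (sqlexp, JDBC, ODBC),
-- not from ABL. ABL tables appear under the PUB schema on the SQL side.
ALTER TABLE PUB.OrderDetail
    ADD CONSTRAINT fk_orderdetail_product
    FOREIGN KEY (Product_Id) REFERENCES PUB.Product (Product_Id);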
I am trying to check whether a table exists prior to sending a SELECT query on that table.
The table name ends with a two-letter language code, and when I build the full table name from the user's language I don't know whether that language is actually supported by my database, i.e. whether the table for that language really exists.
SELECT name FROM sqlite_master WHERE name = 'mytable_zz' OR name = 'mytable_en' ORDER BY ( name = 'mytable_zz' ) DESC LIMIT 1;
and then
SELECT * FROM table_name_returned_by_first_query;
I could have a first query to check the existence of the table like the one above, which returns mytable_zz if that table exists or mytable_en if it doesn't, and then make a second query using the result of the first as table name.
But I would rather have it all in one single query that would return the expected results from either the user's language table or the English one in case their language is not supported, without throwing a "table mytable_zz doesn't exist" error.
Does anyone know how I could handle this?
Is there a way to use the result of the first query as a table name in the second?
edit: I don't have control over the database itself, which is generated automatically, and I don't want to get involved in a complex process of manually updating every new database that I get. Also, this query is called multiple times, and having to retrieve the result of a first query before launching a second one takes too long. I use plain text queries that I send through a SQLite wrapper. I guess the simplest approach is to check once, in my program, whether the user's language is supported, store either the user's language code or "en" if it is not, and use that string to compose my table name(s). I am going to pick that solution unless someone has a better idea.
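A minimal sketch of that one-time check (illustrative only, with 'zz' standing in for the user's two-letter code):
-- Run once at startup: does the table for the user's language exist?
SELECT COUNT(*) FROM sqlite_master WHERE type = 'table' AND name = 'mytable_zz';
-- If this returns 0, store 'en' as the language code and build all later
-- table names from that string.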
Here is a simple MRE:
CREATE TABLE IF NOT EXISTS `lng_en` ( key TEXT, value TEXT );
CREATE TABLE IF NOT EXISTS `lng_fr` ( key TEXT, value TEXT );
INSERT INTO `lng_en` ( key , value ) VALUES ( 'question1', 'What is your name ?');
INSERT INTO `lng_fr` ( key , value ) VALUES ( 'question1', 'Quel est votre nom ?');
SELECT `value` FROM lng_%s WHERE `key` = 'question1';
where %s is replaced by the two-letter language code. This example works if the provided code is 'en' or 'fr', but throws an error if the code is 'zh'; in that case I would like the same result returned as with 'en'.
Not in SQL, without executing it dynamically. But if it is your front end that is running this SQL, then it doesn't matter so much. Because your table name came out of the DB, there isn't really any opportunity for SQL injection with it:
var tabName = db.ExecuteScalar("SELECT name FROM sqlite_master WHERE name = 'mytable_zz' OR name = 'mytable_en' ORDER BY ( name = 'mytable_zz' ) DESC LIMIT 1;")
var results = db.ExecuteQuery("SELECT * FROM " + tabName);
Yunnosch's comment is quite pertinent; you're essentially storing, in a table name, information that really should be in a column. You could consider making a single table and then a bunch of views like mytable_zz, defined as SELECT * FROM mytable WHERE lang = 'zz' etc., and adding INSTEAD OF triggers if you want to cater for a legacy app that you cannot change; the legacy app would select from / insert into the views thinking they are tables, but in reality your data is in a single table and easier to manage.
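A rough sketch of that single-table layout, reusing the lng_* names from the MRE above (the view and trigger names are made up):
-- One real table holding every language; the language becomes a column.
CREATE TABLE IF NOT EXISTS lng ( lang TEXT, key TEXT, value TEXT );
-- A per-language view for legacy code that expects lng_fr to exist.
CREATE VIEW IF NOT EXISTS lng_fr AS
    SELECT `key`, `value` FROM lng WHERE lang = 'fr';
-- INSTEAD OF trigger so legacy INSERTs into the "table" still work.
CREATE TRIGGER IF NOT EXISTS lng_fr_insert
    INSTEAD OF INSERT ON lng_fr
BEGIN
    INSERT INTO lng ( lang, `key`, `value` ) VALUES ( 'fr', NEW.`key`, NEW.`value` );
END;
-- The English fallback then becomes an ordinary query, with no dynamic table name:
SELECT COALESCE(
    ( SELECT `value` FROM lng WHERE lang = 'zh' AND `key` = 'question1' ),
    ( SELECT `value` FROM lng WHERE lang = 'en' AND `key` = 'question1' )
);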
I would appreciate all help I can get. I'm learning PL/SQL and have stumbled on a problem so please help me find an appropriate way of handling this situation :)
I'm running Oracle 11gR2
My schema:
CREATE TABLE "ENTRY"
(
"TYPE" VARCHAR2(5 CHAR) ,
"TRANSACTION" VARCHAR2(5 CHAR),
"OWNER" VARCHAR2(5 CHAR)
);
CREATE TABLE "VIEW"
(
"TYPE" VARCHAR2(5 CHAR) ,
"TRANSACTION" VARCHAR2(5 CHAR),
"OWNER" VARCHAR2(5 CHAR)
);
CREATE TABLE "REJECTED"
(
"TYPE" VARCHAR2(5 CHAR) ,
"TRANSACTION" VARCHAR2(5 CHAR),
"OWNER" VARCHAR2(5 CHAR)
);
My sample data:
insert into entry (type, transaction, owner) values (11111, 11111, 11111);
insert into entry (type, transaction, owner) values (22222, 22222, 22222);
Now for the puzzling part. I've written this procedure, which should copy values from the ENTRY table to the VIEW table if no record exists for a specific (transaction AND owner) combination. If such a combination already exists in the VIEW table, the record should go to the REJECTED table instead. The procedure does that, but on multiple runs I get more and more entries in the REJECTED table, so my question is how to limit inserts into the REJECTED table: if a record already exists in the REJECTED table, do nothing.
create or replace PROCEDURE COPY AS
   v_owner_entry       ENTRY.owner%TYPE;
   v_transaction_entry ENTRY.transaction%TYPE;
   v_owner             VIEW.owner%TYPE;
   v_transaction       VIEW.transaction%TYPE;
begin
   begin
      select e.owner, e.transaction, v.owner, v.transaction
        into v_owner_entry, v_transaction_entry, v_owner, v_transaction
        from entry e, view v
       where e.owner = v.owner
         and e.transaction = v.transaction;
   EXCEPTION
      when too_many_rows then
         insert into REJECTED (TYPE, TRANSACTION, OWNER)
         SELECT s1.TYPE, s1.TRANSACTION, s1.OWNER
           FROM ENTRY s1;
      when no_data_found then
         insert into VIEW (TYPE, TRANSACTION, OWNER)
         SELECT s.TYPE, s.TRANSACTION, s.OWNER
           FROM ENTRY s;
   end;
end;
Any suggestions guys? :)
Cheers!
UPDATE
Sorry if the original post wasn't clear enough -
The procedure should replicate data (on a daily basis) from DB1 to DB2 and insert into VIEW or REJECTED depending on the conditions.
I think Dmitry was trying to suggest using MERGE in the too_many_rows case of your exception handler. So you've already done the SELECT up front and determined that the Entry row appears in your View table and so it raises the exception too_many_rows.
The problem is that you don't know which records have thrown the exception (assuming your Entry table has more than one row each time this procedure is called). So I think your idea of using the exception section to determine that you have too many rows was elegant, but insufficient for your needs.
As a journeyman programmer, instead of trying to come up with something terribly elegant, I'd use more brute force.
Something more like:
BEGIN
   FOR entry_cur IN
      (SELECT e.type, e.owner, e.transaction,
              SUM(NVL2(v.owner, 1, 0)) rec_count
         FROM entry e, view v
        WHERE e.owner = v.owner(+)
          AND e.transaction = v.transaction(+)
        GROUP BY e.type, e.owner, e.transaction)
   LOOP
      IF entry_cur.rec_count = 0 THEN
         -- not in VIEW yet: copy it there
         INSERT INTO view (type, transaction, owner)
         VALUES (entry_cur.type, entry_cur.transaction, entry_cur.owner);
      ELSE
         -- already in VIEW: send it to REJECTED, but only once
         MERGE INTO rejected r
         USING (SELECT entry_cur.type        AS type,
                       entry_cur.transaction AS transaction,
                       entry_cur.owner       AS owner
                  FROM dual) src
            ON (r.transaction = src.transaction AND r.owner = src.owner)
          WHEN NOT MATCHED THEN
            INSERT (type, transaction, owner)
            VALUES (src.type, src.transaction, src.owner);
      END IF;
   END LOOP;
END;
No exceptions thrown, and the loop gives you the correct record that you don't want to insert into View. BTW, I couldn't get over the fact that you used keywords for object names (view, transaction, etc.). You quoted the table names in the CREATE TABLE "VIEW" statement, which gets around the fact that these are keywords, but you didn't when you referenced them later, so I'm surprised the compiler didn't reject the code. I think that's a recipe for disaster because it makes debugging so much harder. I'm just hoping that you did this for the example here, not in your real PL/SQL.
Personally, I've had trouble using the MERGE statement where it didn't seem to work consistently, but that was an Oracle version long ago and probably my own ignorance in how it should work.
Use MERGE statement:
merge into REJECTED r
using ENTRY e
on (r.type = e.type and
r.transaction = e.transaction and
r.owner = e.owner)
when not matched then insert (type, transaction, owner)
values (e.type, e.transaction, e.owner)
This query will insert into table REJECTED only combinations of (type, transaction, owner) from table ENTRY that are not present there yet.
You're trying to code yourself out of a quandary you've modeled yourself into.
A table should contain your entity. There should not be a table of entities in one state, another table for entities in another state, and yet another table for entities in a different state altogether. You're seeing the kind of problems this can lead to.
The state can be an attribute (field or column) of the one table. Or normalized to a state table but still only one entity table. When an entity changes states, this is accomplished by an update, not a move from one table to another.
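A hypothetical reshape of the schema along those lines (the table name, status column and status values here are made up):
-- One entity table; the state is just a column.
CREATE TABLE entry_item
(
   type        VARCHAR2(5 CHAR),
   transaction VARCHAR2(5 CHAR),
   owner       VARCHAR2(5 CHAR),
   status      VARCHAR2(10 CHAR)  -- e.g. 'ENTERED', 'VIEWED', 'REJECTED'
);
-- Changing state is then an UPDATE, not a move between tables.
UPDATE entry_item
   SET status = 'REJECTED'
 WHERE transaction = '11111'
   AND owner = '11111';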
I know the benefits of using CRUD, and that there are also some disadvantages, but I'd like to get some more expert feedback and advice on the process below for writing data to a database, particularly regarding best practice and possible pro's and con's.
I've come across two basic methods of creating records in my time as a developer. The first (and usually the least helpful, in most of the work I've seen) is to create a stub record in the database up front and use its populated fields (including the PK) wherever they are needed. This usually leads to a raft of disowned records floating around the database with no real purpose.
The second way is to only hold a stub in memory, giving (what would be) the object's PK field a default value of, for instance, -1 to represent a new record. This keeps database access to a minimum, especially if the record is not needed later.
Personally, I've found the second way a lot more forgiving and straightforward than the first. The question I'd like to pose, though, is whether to rule out CRUD in favour of a stored procedure that carries out both the INSERT and UPDATE aspects of the CRUD process based on the aforementioned default value, something like...
BEGIN
IF @record_id = -1
INSERT ....
ELSE
UPDATE ....
END
Any feedback would be appreciated.
As a rule of thumb, I tend to write Upsert procedures, but I base the "match" on the unique constraint, not the surrogate key.
For example.
dbo.Employee
EmployeeUUID is the PK, Surrogate Key
SSN is a unique constraint.
dbo.uspEmployeeUpsert would look something like this:
Insert into dbo.Employee ( EmployeeUUID, LastName, FirstName, SSN )
Select NEWID(), LastName, FirstName, SSN
from #SomeHolderTable holder
where not exists ( select null from dbo.Employee innerRealTable
                   where innerRealTable.SSN = holder.SSN );

Update e
Set EmployeeUUID = holder.EmployeeUUID
  , LastName     = ISNULL ( holder.LastName , e.LastName ) /* or COALESCE */
  , FirstName    = COALESCE ( holder.FirstName , e.FirstName )
from dbo.Employee e
join #SomeHolderTable holder on e.SSN = holder.SSN;
You can also use the MERGE statement.
You can also replace the SSN with the SurrogateKey (EmployeeUUID in this case)
What is #SomeHolderTable you ask?
I like to pass XML to the stored procedure, shred it into a @table variable or #temp table, then write the logic for CU. D(elete) is possible as well, but I usually isolate it to a separate procedure.
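Here is a rough sketch of that shredding step using the newer nodes()/value() syntax (the column sizes and XML shape are just assumptions for the example):
-- Assumed XML shape: <Employees><Employee><EmployeeUUID>...</EmployeeUUID>...</Employee></Employees>
DECLARE @EmployeeXml XML;  -- in the real procedure this would be the parameter

CREATE TABLE #SomeHolderTable
(
    EmployeeUUID UNIQUEIDENTIFIER,
    LastName     NVARCHAR(100),
    FirstName    NVARCHAR(100),
    SSN          CHAR(11)
);

INSERT INTO #SomeHolderTable ( EmployeeUUID, LastName, FirstName, SSN )
SELECT  emp.value('(EmployeeUUID/text())[1]', 'UNIQUEIDENTIFIER'),
        emp.value('(LastName/text())[1]',     'NVARCHAR(100)'),
        emp.value('(FirstName/text())[1]',    'NVARCHAR(100)'),
        emp.value('(SSN/text())[1]',          'CHAR(11)')
FROM    @EmployeeXml.nodes('/Employees/Employee') AS t(emp);

-- ...and the insert / update statements above then work off #SomeHolderTable.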
Why do I do it this way?
Because I can update 1 or 100 or 1000 or N records with one db hit.
My logic seldom changes, and is isolated to one place.
Now, there is a small performance hit for shredding the Xml.
But I find it acceptable 99% of the time.
Every once in a while, I write a non "set based" Upsert routine. But that is for heavy hitter procedures for heavy hitting usage.
That's my take.
You can see the "set based" part of this approach (with the older OPENXML syntax) at this article:
http://msdn.microsoft.com/en-us/library/ff647768.aspx
Find the phrase : "Perform bulk updates and inserts by using OpenXML"
Here is the "more code" version of what the above URL talks about:
http://support.microsoft.com/kb/315968
EDIT
if exists ( select 1 from #SomeHolderTable holder
            where not exists ( select 1 from dbo.Employee e
                               where e.SSN = holder.SSN ) )
BEGIN
    Insert into dbo.Employee ( EmployeeUUID, LastName, FirstName, SSN )
    Select NEWID(), LastName, FirstName, SSN
    from #SomeHolderTable holder
    where not exists ( select null from dbo.Employee innerRealTable
                       where innerRealTable.SSN = holder.SSN );
END
I wouldn't necessarily do this. But it's an option if you want a "boolean check".
So, with my uniqueidentifier setup, I will pass down an "Empty Guid" (00000000-0000-0000-0000-000000000000) (Guid.Empty in C#) to the procedure, when I know I have a new item. That would be my "-1" check in your scenario.
That's one method you could use for an "if exists" check.
It kinda depends on how many hands you have in the pot.
Also, I didn't mention that when I have a lot of hands in the pot, I'll shred the XML, then do a BEGIN TRAN and COMMIT TRAN around my CU statements (with a ROLLBACK in there as well). That way my CU is atomic, all or nothing.
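Roughly, that wrapper is just (a sketch; TRY/CATCH needs SQL Server 2005 or later):
BEGIN TRY
    BEGIN TRAN;
        -- the INSERT / UPDATE (CU) statements against the shredded table go here
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN;
    -- re-raise so the caller sees the failure (THROW on 2012+, RAISERROR before that)
END CATCH;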
The MERGE statement will do this as well, but the pros and cons of MERGE are a different topic.
In my programming task I've gone down a dark alley and wished I hadn't, but there is no turning back now.
I'm building up a SQL statement where the table name, column name and id value are retrieved from query string parameters i.e. ("SELECT [{0}] FROM [{1}] WHERE [Id] = {2};", c, t, id)
But it isn't as bad as it looks, I'm protected:
Only authenticated users (i.e. signed in users) can execute the Page_Load
I'm checking that both the table and the column exist beforehand (using GetSchema etc.)
I'm checking that the Id is an integer beforehand
All my tables have Id columns
The database connection is reasonably secure
The field value is expected to be of type NVARCHAR(4000) or NVARCHAR(MAX) so I'm avoiding ExecuteScalar and I'm trying out LINQ ExecuteQuery because I like LINQ. But I'm a bit out of my depth again.
I've got this far:
Dim db As New MyDataContext
Dim result = db.ExecuteQuery(Of ITable)("SELECT [{0}] FROM [{1}] WHERE [Id] = {2};", c, t, id)
Is this the right way to go?
How do I get first row and first column value?
Is there a better alternative?
P.S. It's a SQL Server 2005 database
Any help appreciated.
Thanks.
SQL Server requires the tables and columns to be statically known. You can't provide them using command parameters. You can't say
select * from @tableName
because the table name can't be a variable.
You need to build the SQL string with C# ensuring proper escaping of identifiers. Escaping works like this:
var escaped = "[" + rawUntrustedUserInput.Replace("]", "]]") + "]";
This is safe.
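If you'd rather keep the string building on the SQL Server side, an equivalent sketch (with made-up names for the already-validated inputs) uses QUOTENAME for the identifier escaping and a real parameter for the id:
DECLARE @c SYSNAME, @t SYSNAME, @id INT, @sql NVARCHAR(MAX);
SET @c  = N'SomeColumn';  -- column name already validated via GetSchema
SET @t  = N'SomeTable';   -- table name already validated via GetSchema
SET @id = 42;

-- QUOTENAME does the ] -> ]] doubling for you.
SET @sql = N'SELECT ' + QUOTENAME(@c) + N' FROM ' + QUOTENAME(@t)
         + N' WHERE [Id] = @id;';

EXEC sp_executesql @sql, N'@id INT', @id = @id;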
I have the following statements in Oracle 11g:
CREATE TYPE person AS OBJECT (
name VARCHAR2(10),
age NUMBER
);
CREATE TYPE person_varray AS VARRAY(5) OF person;
CREATE TABLE people (
somePeople person_varray
)
How can I select the name value for a person, i.e.
SELECT somePeople(person(name)) FROM people
Thanks
I'm pretty sure that:
What you're doing isn't what I'd be doing. It sort of completely violates relational principles, and you're going to end up with an object/type system in Oracle that you might not be able to change once it's been laid down. The best use I've seen for SQL TYPEs (not PL/SQL types) is basically being able to cast a ref cursor back for pipelined functions.
You have to unnest the collection before you can query it relationally, like so:
SELECT NAME FROM
(SELECT SP.* FROM PEOPLE P, TABLE(P.somePeople) SP)
That'll give you all rows, because there's nothing in your specifications (like a PERSON_ID attribute) to restrict the rows.
The Oracle Application Developer's Guide - Object Relational Features discusses all of this in much greater depth, with examples.
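As a hypothetical variation: if the table also had an identifying column (say PERSON_ID, which is not in your DDL), the unnested query could be restricted to one row's collection:
-- Assumes a modified table: CREATE TABLE people ( person_id NUMBER, somePeople person_varray );
SELECT sp.name
  FROM people p,
       TABLE(p.somePeople) sp
 WHERE p.person_id = 1;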
To insert a row:
insert into people values (
person_varray(person('Ram','24'))
);
To select:
select * from people;
SELECT NAME FROM (SELECT SP.* FROM PEOPLE P, TABLE(P.somePeople) SP)
While inserting a row into the people table, use the constructor of person_varray and then the constructor of the person type for each element. The above INSERT command creates a single row in the people table.
select somePeople from people;

SOMEPEOPLE(NAME, AGE)
---------------------------------------------------
PERSON_VARRAY(PERSON('Ram', 24))
To update:
update people
   set somePeople = person_varray( person('SaAM', '23') );