SQL Subquery and CAST not working - asp.net

I am trying to get data from one table, where the field is stored as varchar, and pass it to another query where the same field is an int.
Table Article with following fields
(ArticleID int, ArticleTitle nvarchar(200), ArticleDesc nvarchar(MAX))
Sample data
1 Title1 Desc1
2 Title2 Desc2
3 Title3 Desc3
I have another table called Banner which holds banners related to Articles
(BannerID int, BannerName nvarchar(200), BannerPath nvarchar(MAX), ArticleID varchar(200))
Sample Data
BannerID BannerName BannerPath ArticleID
100 Banner1 BannerPath1 '1','3','5'
101 Banner2 BannerPath2 '2','3','5'
102 Banner3 BannerPath3 '8','3','5'
103 Banner4 BannerPath4 '10','30','5','2','3','5'
Sample Query
SELECT ArticleTitle
FROM Article
WHERE CAST(ArticleID AS varchar(200)) IN (
SELECT ArticleID FROM Banner WHERE BannerID = 2
)
In my actual project I have multiple fields in the Banner table so that I can assign a banner to an Article, Writer, Category, or Page.
For this reason I decided to store ArticleID or WriterID or CatID as a single field in this format: '10','30','5','2','3','5'.
If I change my structure, I may end up creating hundreds of records for one banner, and one banner can be assigned to any one of these: Article, Writer, Category, Pages.
The query below returns zero rows; maybe my casting is creating the problem. I would appreciate advice on how I can get around this without changing my database structure.
SELECT ArticleTitle FROM Article WHERE CAST(ArticleID AS varchar(200)) IN (SELECT ArticleID FROM Banner WHERE BannerID = 2)
UPDATED:
No offense to anyone, but I have decided to stick with my design, as the question I asked was for the back-end reporting section of the website, which won't be used too often. I may be wrong about not normalizing the tables ...
My actual scenario: suppose a user visits the URL
`abc.com/article/article.aspx?articleID=30&CatID=10&PageID=3&writerID=3`
Based on this URL I can run four queries with UNION to get the required banner. Of course I have to decide on banner precedence, so I will do it like this:
`SELECT BannerName, BannerImage FROM Banner WHERE ArticleID LIKE '%''30''%'`
UNION ALL
`SELECT BannerName, BannerImage FROM Banner WHERE CategoryID LIKE '%''10''%'`
UNION ALL
`Another query .......`
If I do it this way, the query only has to look for banners in a single table with few rows. But if I normalize the tables as JW suggests, which is the good way of doing it, it may result in 30-40 rows for each banner in a different table, which may affect performance, as I have to add new banners for new articles (for new magazine issues).
I know I am breaking every law of normalization, but I am afraid I have to do it for performance, as I may end up having 2,000 rows for every 100 banners, and this will grow with time.
Updated Again
I hope this image will give you an overview of what I am trying to do.
If I do it this way, I only need 1 row per banner; if I further normalize and create more tables, I might end up having several rows for one banner. For example,
taking the sample from the Banner table in the image above, my first banner will have 27 rows,
the second banner 11 rows,
the third banner 14 rows.
In order to avoid this I thought of storing multiple ArticleID, IssueID, PageID .... values in their respective fields. This approach might be dirty, but it is working.
I definitely had some negative feedback, which from their point of view is understandable. Since I have provided further details: is my approach totally unprofessional, or is it fine keeping in mind that the website might have very good traffic and this approach may be faster?

It is a very bad design to save comma-separated values in a column when those values will be used for searching records.
You need to properly normalize and restructure the tables into a three-table design, because there is a many-to-many relationship between Article and Banner.
Suggested Schema design:
Article Table
ArticleID (PK)
ArticleTitle
ArticleDesc
Banner Table
BannerID (PK)
BannerName
BannerPath
Article_Banner Table
ArticleID (FK) (Also a compound PK with BannerID)
BannerID (FK)
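A minimal DDL sketch of that junction table (column types assumed to match the existing tables):
CREATE TABLE Article_Banner (
    ArticleID int NOT NULL REFERENCES Article(ArticleID),
    BannerID  int NOT NULL REFERENCES Banner(BannerID),
    CONSTRAINT PK_Article_Banner PRIMARY KEY (ArticleID, BannerID)
);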
and by this design you can simply query your records like:
SELECT a.*
FROM Article a
INNER JOIN Article_Banner b
ON a.ArticleID = b.ArticleID
WHERE b.BannerID = 2
advantages of the structure:
can easily create query statements
can take advantage of the indexes defined
etc..

In addition to John Woo's excellent answer, I will try to answer the question "Why doesn't the query return any results".
I'm going to leave aside the WHERE b.BannerID = 2 clause, which is obviously not met by any of the sample records.
The main issue with the query is the IN clause. IN will tell you whether an item is found in a set of items. What you are expecting it to do is iterate through a set of sets and tell you whether the item is found.
To illustrate this, here are two simplified queries:
-- this will print 0
if '1' in ('''1'',''3'',''5''')
print 1
else
print 0
-- this will print 1
if '1' in ('1', '3', '5')
print 1
else
print 0
The main point is that IN is a set-based operation, not a string function that will find a substring.
One possible solution to your problem would be to use CHARINDEX to perform the substring detection:
select ArticleTitle
from Article a
join Banner b
on charindex(CAST(a.ArticleID AS varchar(200)), b.ArticleID) > 0
This version is incorrect, because searching for the id '1' will also match values like '11','12'.
To get correct results, you could end up with a query similar to this (which makes sure you only match values enclosed in single quotes):
select ArticleTitle
from Article a
join Banner b
on charindex('''' + CAST(a.ArticleID AS varchar(200)) + '''', b.ArticleID) > 0
SQLFiddle: http://www.sqlfiddle.com/#!3/2ee3c/23
This query, however, has two big disadvantages:
it gets awfully slow for relatively big tables, as it cannot use any indexes and needs to scan the Banner table for each row in Article
the code has become a little more complex, and the more functionality you add to it, the harder it will be to reason about, resulting in maintainability problems.
These two problems are smells that you are doing something wrong. Following JW's solution will get rid of both.

I fully agree that the above example is bad and not the correct way to do what he is doing. However, the root error-message issue still exists and is a problem under some conditions.
My situation involves a table that holds some custom form field element data. Without laying out the entire structure, I'll just lay out what is needed to show and reproduce the issue. I can also confirm the issue revolves around IsNumeric in this case, combined with the subqueries. The sample holds two items: the item name, simulating the custom field element/type, and the field value. Some are names, and some are minutes of labor. It could be weights, temps, distances, whatever; it's customer-definable extra data.
Create Table dboSample (cKey VarChar(20), cData VarChar(50))
Insert Into dboSample (cKey, cData) Values ('name', 'Jim')
Insert Into dboSample (cKey, cData) Values ('name', 'Bob')
Insert Into dboSample (cKey, cData) Values ('labortime', '60')
Insert Into dboSample (cKey, cData) Values ('labortime', '00')
Insert Into dboSample (cKey, cData) Values ('labortime', '15')
Select * From (Select * From dboSample Where IsNumeric(cData) = 1) As dboSampleSub Where Cast(cData As Int) > 0
This results in the error "Conversion failed when converting the varchar value 'Jim' to data type int."
The nested query has a WHERE clause limiting the returned rows to numeric data only. However, the CAST at the higher level is clearly seeing rows not included in the subquery's result; it is in fact seeing and processing the data underneath the nested query. I cannot locate any SELECT OPTION flag to prevent this.
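This happens because the optimizer is free to merge the subquery into the outer query and evaluate the CAST before (or together with) the IsNumeric filter. A common workaround, sketched here rather than prescribed, is to make the conversion itself safe, e.g. with TRY_CAST on SQL Server 2012 and later, or with a CASE expression on older versions:
-- TRY_CAST returns NULL instead of raising an error for non-numeric values (SQL Server 2012+)
Select * From dboSample Where Try_Cast(cData As Int) > 0
-- pre-2012 alternative: guard the conversion inside a CASE expression
Select * From dboSample Where Case When IsNumeric(cData) = 1 Then Cast(cData As Int) End > 0
(Keep in mind IsNumeric accepts values such as '.' or '$' that still fail a CAST to int, so TRY_CAST is the more robust of the two.)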


Efficient insertion of row and foreign table row if it does not exist

Similar to this question and this solution for PostgreSQL (in particular "INSERT missing FK rows at the same time"):
Suppose I am making an address book with a "Groups" table and a "Contact" table. When I create a new Contact, I may want to place them into a Group at the same time. So I could do:
INSERT INTO Contact VALUES (
"Bob",
(SELECT group_id FROM Groups WHERE name = "Friends")
)
But what if the "Friends" Group doesn't exist yet? Can we insert this new Group efficiently?
The obvious thing is to do a SELECT to test if the Group exists already; if not do an INSERT. Then do an INSERT into Contacts with the sub-SELECT above.
Or I can constrain Group.name to be UNIQUE, do an INSERT OR IGNORE, then INSERT into Contacts with the sub-SELECT.
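For illustration, that second option might look roughly like this (a sketch; the Contact column names are assumed from the example above):
INSERT OR IGNORE INTO Groups (name) VALUES ('Friends');
INSERT INTO Contact (name, group_id)
VALUES ('Bob', (SELECT group_id FROM Groups WHERE name = 'Friends'));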
I can also keep my own cache of which Groups exist, but that seems like I'm duplicating functionality of the database in the first place.
My guess is that there is no way to do this in one query, since INSERT does not return anything and cannot be used in a subquery. Is that intuition correct? What is the best practice here?
My guess is that there is no way to do this in one query, since INSERT does not return anything and cannot be used in a subquery. Is that intuition correct?
You could use a Trigger and a little modification of the tables and then you could do it with a single query.
For example, consider the following.
Purely for convenience of producing the demo:-
DROP TRIGGER IF EXISTS add_group_if_not_exists;
DROP TABLE IF EXISTS contact;
DROP TABLE IF EXISTS groups;
One-time setup SQL :-
CREATE TABLE IF NOT EXISTS groups (id INTEGER PRIMARY KEY, group_name TEXT UNIQUE);
INSERT INTO groups VALUES(-1,'NOTASSIGNED');
CREATE TABLE IF NOT EXISTS contact (id INTEGER PRIMARY KEY, contact TEXT, group_to_use TEXT, group_reference TEXT DEFAULT -1 REFERENCES groups(id));
CREATE TRIGGER IF NOT EXISTS add_group_if_not_exists
AFTER INSERT ON contact
BEGIN
INSERT OR IGNORE INTO groups (group_name) VALUES(new.group_to_use);
UPDATE contact SET group_reference = (SELECT id FROM groups WHERE group_name = new.group_to_use), group_to_use = NULL WHERE id = new.id;
END;
SQL that would be used on an ongoing basis :-
INSERT INTO contact (contact,group_to_use) VALUES
('Fred','Friends'),
('Mary','Family'),
('Ivan','Enemies'),
('Sue','Work colleagues'),
('Arthur','Fellow Rulers'),
('Amy','Work colleagues'),
('Henry','Fellow Rulers'),
('Canute','Fellow Ruler')
;
The number of values and the actual values would vary.
SQL just for demonstration of the result :-
SELECT * FROM groups;
SELECT contact,group_name FROM contact JOIN groups ON group_reference = groups.id;
Results
This results in :-
1) The groups (noting that the group "NOTASSIGNED" is intrinsic to the working of the above and hence added initially) :-
You have to be careful regarding mistakes like (Fellow Ruler instead of Fellow Rulers).
-1 is used because it would not be a value that automatic generation would normally produce.
2) The contacts with the respective group :-
Efficient insertion
That could likely be debated from here to eternity so I leave it for the fence sitters/destroyers to decide :). However, some considerations:-
It works and appears to do what is wanted.
It's a little wasteful due to the additional working column.
It tries to minimise the waste by blanking out that column after use (the trigger above sets it to NULL, which may be even more efficient than an empty string, though for some it can be confusing).
There will obviously be an overhead, BUT in comparison to the alternatives it is probably negligible (perhaps important if you were loading every Facebook user, but if it's driven by user input it's likely irrelevant).
What is the best practice here?
Fences again. :)
Note: Hopefully obvious, but the DROP statements are purely for convenience; all the other SQL up until the final INSERT is run once to set up the tables and trigger in preparation for the single INSERT that adds a group if necessary.

How to put a part of a code as a string in table to use it in a procedure?

I'm trying to resolve the issue below:
I need to prepare a table that consists of 3 columns:
user_id,
month,
value.
Each of the over 200 users has different values of the parameters that determine the expected value, which are: LOB, CHANNEL, SUBSIDIARY. So I decided to store them in the table ASYSTENT_GOALS_SET. But I wanted to avoid multiplying rows and thought it would be nice to put all the conditions in as part of the code that I would use in the WHERE clause further down in the procedure.
So, as an example - instead of multiple rows:
I created such entry:
So far I have created a testing table ASYSTENT_TEST (where I collect month and value for a certain user). I wrote a piece of the procedure in which I used BULK COLLECT.
declare
type test_row is record
(
month NUMBER,
value NUMBER
);
type test_tab is table of test_row;
BULK_COLLECTOR test_tab;
p_lob varchar2(10) :='GOSP';
p_sub varchar2(14);
p_ch varchar2(10) :='BR';
begin
select subsidiary into p_sub from ASYSTENT_GOALS_SET where user_id='40001001';
execute immediate 'select mc, sum(ppln_wartosc) plan from prod_nonlife.mis_report_plans
where report_id = (select to_number(value) from prod_nonlife.view_parameters where view_name=''MIS'' and parameter_name=''MAX_REPORT_ID'')
and year=2017
and month between 7 and 9
and ppln_jsta_symbol in (:subsidiary)
and dcs_group in (:lob)
and kanal in (:channel)
group by month order by month' bulk collect into BULK_COLLECTOR
using p_sub,p_lob,p_ch;
forall x in BULK_COLLECTOR.first..BULK_COLLECTOR.last insert into ASYSTENT_TEST values BULK_COLLECTOR(x);
end;
So now, when the SUBSIDIARY column (varchar) in table ASYSTENT_GOALS_SET contains the string 12_00_00 (which is the code of one of the subsidiaries), everything works fine. But the problem is when a user works in two subsidiaries, let's say 12_00_00 and 13_00_00. I have no clue how to write that down. Should the SUBSIDIARY column contain:
'12_00_00','13_00_00'
or
"12_00_00","13_00_00"
or maybe
12_00_00','13_00_00
I have tried a lot of options after digging into topics like "Dealing with single/escaping/double quotes".
Maybe I should change something in the EXECUTE IMMEDIATE as well?
Or maybe my approach to that issue is completely wrong from the very beginning (hopefully not :) ).
I would be grateful for support.
I didn't create the table function described here, but that article inspired me to go back and try the regexp_substr function again.
I changed ppln_jsta_symbol in (:subsidiary) to:
ppln_jsta_symbol in (select regexp_substr((select subsidiary from ASYSTENT_GOALS_SET where user_id=''fake_num''),''[^,]+'', 1, level) from dual
connect by regexp_substr((select subsidiary from ASYSTENT_GOALS_SET where user_id=''fake_num''), ''[^,]+'', 1, level) is not null)
Now it works like a charm! Thank you @Dessma very much for your time and suggestion!
"I wanted to avoid multiplying rows and thought it would be nice to put all conditions as a part of the code that I would use in 'where' clause further in procedure"
This seems a misguided requirement. You shouldn't worry about number of rows: databases are optimized for storing and retrieving rows.
What they are not good at is dealing with "multi-value" columns. As your own solution proves, it is not nice, it is very far from nice, in fact it is a total pain in the neck. From now on, every time anybody needs to work with subsidiary they will have to invoke a function. Adding, changing or removing a user's subsidiary is much harder than it ought to be. Also there is no chance of enforcing data integrity i.e. validating that a subsidiary is valid against a reference table.
Maybe none of this matters to you. But there are very good reasons why Codd mandated "no repeating groups" as a criterion of First Normal Form, the foundation step of building a sound data model.
The correct solution, industry best practice for almost forty years, would be to recognise that SUBSIDIARY exists at a different granularity to CHANNEL and so should be stored in a separate table.
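A minimal sketch of that separate table (the names here are assumptions, following the existing ASYSTENT_GOALS_SET naming):
create table ASYSTENT_GOALS_SUBSIDIARY (
  user_id    varchar2(20) not null,
  subsidiary varchar2(14) not null,
  constraint asystent_goals_sub_pk primary key (user_id, subsidiary)
);
The dynamic IN-list then reduces to a plain subquery, e.g. ppln_jsta_symbol in (select subsidiary from ASYSTENT_GOALS_SUBSIDIARY where user_id = :user_id), with no string splitting required.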

Efficient way to load referenced data in one query

My application uses a database to save its data. I have a table Objects that looks like
localID | title | content
1 Test "1,embed","3,embed","5,append"
and another table Contents that looks like
localID | content
1 Alpha
2 Beta
3 Gamma
4 Delta
5 Epsilon
The main application runs in the main thread, the whole database stuff in a second thread. So when my application loads, I want to pass each record (QSqlRecord) to the main thread, where it gets further processed (loaded into real objects). I pass each record via signals. But my data is split up into 2 tables. I want to return a record containing both, perhaps similar to a join:
localID | title | content
1 Test "Alpha,embed","Gamma,embed","Epsilon,append"
This way, I would have all the needed information at once after only one return value from the thread. Without combining, I would have to query the database for each single referenced content.
I expect the database to contain fewer than 100,000 records, yet some content may be big (files saved as BLOBs, e.g. a book of 300 MB or so).
I have two questions:
(How) Can I join the tables this way inside a query (efficiently)?
Am I too concerned about threading and should make it single threaded?
That way I would not need to bother with multiple read requests.
As a side note, this is my first post on Database Administrators; I was not too sure whether this site or Stack Overflow was the right place to ask this.
For any actual problem, use the way recommended by @Vérace in the comments, i.e. a "linking" table. That is the way.
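For reference, a minimal sketch of such a linking-table design (the table and column names here are assumptions):
CREATE TABLE ObjectContents (
  objectID  int,           -- references Objects.localID
  contentID int,           -- references Contents.localID
  mode      varchar(10),   -- 'embed' or 'append'
  position  int            -- preserves the original ordering
);
-- one joined result row per referenced content:
SELECT o.localID, o.title, c.content, oc.mode
FROM Objects o
JOIN ObjectContents oc ON oc.objectID = o.localID
JOIN Contents c ON c.localID = oc.contentID
ORDER BY o.localID, oc.position;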
However, if you are forced to keep the database structure, or are doing this for fun or for learning (which the migration header suggests), albeit learning dirty tricks instead of good design, have a look at this:
select
localID, title,
(
with recursive cnt(x) as
( select ','||a.content
union all
select replace(x, '"'||b.localID||',', '_"'||b.content||',')
from cnt, toy2 as b
)
select replace('_"'||replace(x, ',_"', ',"'), '_","', '"') from cnt
where not x like '%,"%' LIMIT 1
) as 'content' from toy as a;
using a recursive method to flexibly replace the numbers with the Greek names (with no assumptions about the number of entries in the Alpha/Beta table, or the number of times they are used)
applying a naming scheme with "_" to create an end condition
prepending a "_" to content, to make it be processed and to cooperate with the end condition
cleaning up the end-condition "_"s for the desired output
cleaning up the special case at the start of the output line
selecting the result of the recursion together with the other desired outputs
Note the assumption that your table does not naturally contain '__"' or '_"'. If that happens, choose more "weird" strings there. If you have all kinds of strings in your table, then you are looking at a very meek example of what Vérace describes as "a disaster to happen". Actually, this non-trivial solution is in itself probably a disaster which happened.
Output (.headers on and .mode column):
localid title content
---------- ---------- --------------------------------------------
1 Test "Alpha,embed","Gamma,embed","Epsilon,append"
2 mal "Beta,append","Delta,embed"
Here is my MCVE (.dump), with an additional row "mal" for testing purposes:
BEGIN TRANSACTION;
CREATE TABLE toy (localid int, title varchar(20), content varchar(100));
INSERT INTO toy VALUES(1,'Test','"1,embed","3,embed","5,append"');
INSERT INTO toy VALUES(2,'mal','"2,append","4,embed"');
CREATE TABLE toy2 (localID int, content varchar(10));
INSERT INTO toy2 VALUES(1,'Alpha');
INSERT INTO toy2 VALUES(2,'Beta');
INSERT INTO toy2 VALUES(3,'Gamma');
INSERT INTO toy2 VALUES(4,'Delta');
INSERT INTO toy2 VALUES(5,'Epsilon');
COMMIT;
SQLite 3.18.0 2017-03-28 18:48:43

Cascading List of Values with many to many relationship

I am developing an application which tracks class attendance of students in a school, in Apex.
I want to create a page with three level cascading select lists, so the teacher can first select the Semester, then the Subject and then the specific Class of that Subject, so the application returns the Students who are enrolled in that Class.
My problem is that these three tables have a many-to-many relationship between them, so I use extra tables with their keys.
Every Semester has many Subjects and a Subject can be taught in many Semesters.
Every Subject has many classes in every Semester.
The students must enroll in a subject every semester and then the teacher can assign them to a class.
The tables look something like this:
create table semester(
id number not null,
name varchar2(20) not null,
primary key(id)
);
create table subject(
id number not null,
subject_name varchar2(50) not null,
primary key(id)
);
create table student(
id number not null,
name varchar2(20),
primary key(id)
);
create table semester_subject(
id number not null,
semester_id number not null,
subject_id number not null,
primary key(id),
foreign key(semester_id) references semester(id),
foreign key(subject_id) references subject(id),
constraint sem_sub_uq unique(semester_id, subject_id)
);
create table class(
id number not null,
name number not null,
semester_subject_id number not null,
primary key(id),
foreign key(semester_subject_id) references semester_subject(id)
);
create table class_enrollment(
id number not null,
student_id number not null,
semester_subject_id number not null,
class_id number,
primary key(id),
foreign key(student_id) references student(id),
foreign key(semester_subject_id) references semester_subject(id),
foreign key(class_id) references class(id)
);
The list of values for the Semester select list looks like this:
select name, id
from semester
order by 1;
The Subject select list should include the names of all the Subjects available in the semester selected above, but I can't figure out the query, or even whether it's possible. What I have right now:
select s.name, s.id
from subject s, semester_subject ss
where ss.semester_id = :PX_SEMESTER -- value from the select list above
and ss.subject_id = s.id;
But you can't have two tables in a LoV and the query is probably wrong anyway...
I didn't even begin to think about what the query for the class would look like.
I appreciate any help or if you can point me in the right direction so I can figure it out myself.
Developing an Apex Input Form Using Item-Parametrized Lists of Values (LOVs)
Your initial schema design looks good. One recommendation: once you've developed and tested your solution on a smaller scale, attach to the ID (primary key) columns a trigger that can auto-populate their values from a sequence. You could also skip the trigger and just reference the sequence in your SQL INSERT DML commands. It just makes things simpler. Creating tables in the APEX environment with its built-in wizards offers the opportunity to make an "auto-incrementing" key column.
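A minimal sketch of that sequence-plus-trigger pattern for the SEMESTER table (the object names are assumptions; direct NEXTVAL assignment needs Oracle 11g or later):
CREATE SEQUENCE semester_seq;
CREATE OR REPLACE TRIGGER semester_bi
BEFORE INSERT ON semester
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  :new.id := semester_seq.NEXTVAL;  -- older versions would use SELECT semester_seq.NEXTVAL INTO :new.id FROM dual
END;
/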
There is also an additional column added to the SEMESTER table called SORT_KEY. This helps when you are storing string typed values which have logical sorting sequences that aren't exactly alphanumeric in nature.
Setting Up The Test Data Values
Here is the test data I generated to demonstrate the cascading list of values design that will work with the example.
Making Dynamic List of Value Queries
The next step is to make the first three inter-dependent List of Values definitions. As you have discovered, you can reference page parameters in your LOVs which may come from a variety of sources. In this case, the choice selection from our LOVs will be assigned to Apex Page Items.
I also thought only one table could be referenced in a single LOV query. This is incorrect. The page documentation suggests that it is the SQL query syntax that is the limiting factor. The following LOV queries reference more than one table, and they work:
-- SEMESTER LOV Query
-- name: CHOOSE_SEMESTER
select a.name d, a.id r
from semester a
where a.id in (
select b.semester_id
from semester_subject b
where b.subject_id = nvl(:P5_SUBJECT, b.subject_id))
order by a.sort_id
-- SUBJECT LOV Query
-- name: CHOOSE_SUBJECT
select a.subject_name d, a.id r
from subject a
where a.id in (
select b.subject_id
from semester_subject b
where b.semester_id = nvl(:P5_SEMESTER, b.semester_id))
order by 1
-- CLASS LOV Query
-- name: CHOOSE_CLASS
select a.name d, a.id r
from class a, semester_subject b
where a.semester_subject_id = b.id
and b.subject_id = :P5_SUBJECT
and b.semester_id = :P5_SEMESTER
order by 1
Some design notes to consider:
Don't mind the P5_ITEM notation. The page in my sample app happened to be on "page 5" and so the convention goes.
I chose to assign a name for each LOV query as a hint. Don't just embed the query in an item. Add some breathing room for yourself as a developer by making the LOV a portable object that can be referenced elsewhere if needed.
MAKE a named LOV for each query through the SHARED OBJECTS menu option of your application designer.
The extra operator involving the NVL function, as in nvl(:P5_SUBJECT, b.subject_id) in the CHOOSE_SEMESTER LOV, is an expression mirrored in the CHOOSE_SUBJECT query as well. If the default values of P5_SUBJECT and P5_SEMESTER are null when entering the page, how does that assist with handling the cascading relationships?
The table SEMESTER_SUBJECT represents a key relationship. Why is a LOV for this table not needed?
APEX Application Form Design Using Cascading LOVs
Setting up a page for testing the schema design and LOV queries requires the creation of three page items:
Each page item should be defined as a SELECT LIST; leave all the defaults initially until you understand how the basic design works. Each select list item should be associated with its corresponding LOV, such as:
The key design twist is the Select List made for the CHOOSE_CLASS LOV, which represents a cascading dependency on more than one data source.
We will use the "Cascading Parent" option so that this item will wait until both CHOOSE_SEMESTER and CHOOSE_SUBJECT are selected. It will also refresh if either of the two are changed.
YES! The cascading parent item can consist of multiple page items/elements. They just have to be declared in a comma separated list.
From the online help info, this is a general introduction to how cascading LOVs can be used in APEX designs:
From Oracle Apex Help Docs: A cascading LOV means that the current item's list of values should be refreshed if the value of another item on this page gets changed.
Specify a comma separated list of page items to be used to trigger the refresh. You can then use those page items in the where clause of your "List of Values" SQL statement.
Demonstration of APEX Application Items with Cascading LOVs
These examples are based on the sample data given at the beginning of this solution. The path of the chosen example case is:
SEMESTER: SPRING 2014 + SUBJECT: PHYS ED + Verify Valid Course Options:
Fitness for Life
General Flexibility
Presidential Fitness Challenge
Running for Fun
Volleyball Basics
The choice from above will be assigned to page item P5_CLASS.
Selection Choices for P5_SEMESTER:
Selection Choices for P5_SUBJECT:
Selection Choices for P5_CLASS:
Closing Remarks and Discussion
Some closing thoughts that occurred to me while working with this design project:
About the Primary Keys: The notion of a generic, ID-named column for a primary key was a good design choice. While APEX can handle composite business keys, it gets clumsy and difficult to work around.
One thing that made the schema design challenging to work with was that the notion of "id" transformed in the other tables that referenced it. (For example, the ID column in the SEMESTER table became SEMESTER_ID in the SEMESTER_SUBJECT table.) Just keep an eye on these name changes in larger queries. At times I actually lost track of exactly which ID I was working with.
A Word for Sanity: In the likely event you decide to assign ID values through a database sequence object, the default is usually to begin at one. If you have several different tables in your schema with the same column name, ID, and some associating tables such as CLASS_ENROLLMENT which connect one primary key ID and three additional foreign key IDs, it may get difficult to discern where the data values are coming from.
Consider offsetting your sequences or arbitrarily choosing different increments and starting values. If you're mainly pushing IDs around in your queries and two different ID sets are separated by two or three orders of magnitude, it will be easy to tell whether you've pulled the right data values.
Are There MORE Cascading Relationships? If a "parent" item relationship indicates a dependency that makes a page item LOV wait or change depending on the value of another, could there be another cascading relationship to define? In the case of CHOOSE_SEMESTER and CHOOSE_SUBJECT is it possible? Is it necessary?
I was able to figure out how to make these two items hold an optional cascading dependency, but it required setting up another outside page item reference. (If it isn't optional, you get stuck in a closed loop as soon as one of the two values changes.) Fancy, but not really necessary to solve the problem at hand.
What's Left to Do? I left out some additional tasks for you to continue with, such as managing the DML into the CLASS_ENROLLMENT table after selecting a valid STUDENT.
Overall, you've got a workable schema design. There is a way to represent the data relationships through an APEX application design pattern. Happy coding, it looks like a challenging project!

Hierarchical Database Select / Insert Statement (SQL Server)

I have recently stumbled upon a problem with selecting relationship details from one table and inserting them into another table; I hope someone can help.
I have a table structure as follows:
ID (PK) Name ParentID
1 Myname 0
2 nametwo 1
3 namethree 2
This is the table I need to select from to get all the relationship data, as there could be an unlimited number of sub-links (is there a function I can create for this to create the loop?).
Then, once I have all the data, I need to insert it into another table, and the IDs will now have to change, as the IDs must go in order (e.g. I cannot have ID 2 be a sub of ID 3); I am hoping I can use the same function used for selecting to do the inserting.
If you are using SQL Server 2005 or above, you may use recursive queries to get your information. Here is an example:
With tree (id, Name, ParentID, [level])
As (
Select id, Name, ParentID, 1
From [myTable]
Where ParentID = 0
Union All
Select child.id
,child.Name
,child.ParentID
,parent.[level] + 1 As [level]
From [myTable] As [child]
Inner Join [tree] As [parent]
On [child].ParentID = [parent].id)
Select * From [tree];
This query will return the row requested by the first portion (Where ParentID = 0) and all sub-rows recursively. Does this help you?
I'm not sure I understand what you want to have happen with your insert. Can you provide more information in terms of the expected result when you are done?
Good luck!
For the retrieval part, you can take a look at Common Table Expressions. This feature can provide recursive operations using SQL.
For the insertion part, you can use the CTE above to regenerate the ID, and insert accordingly.
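A rough sketch of what that might look like, assuming a target table named NewTable(ID, Name, ParentID); the renumbering scheme (ROW_NUMBER ordered by level) is one choice among many:
With tree (id, Name, ParentID, [level]) As (
    Select id, Name, ParentID, 1 From [myTable] Where ParentID = 0
    Union All
    Select c.id, c.Name, c.ParentID, p.[level] + 1
    From [myTable] As c Inner Join tree As p On c.ParentID = p.id
), renumbered As (
    -- parents get lower numbers than their children because of the level ordering
    Select *, Row_Number() Over (Order By [level], id) As NewID From tree
)
Insert Into NewTable (ID, Name, ParentID)
Select r.NewID,
       r.Name,
       IsNull(pr.NewID, 0)   -- remap each parent to its new ID; roots keep 0
From renumbered As r
Left Join renumbered As pr On r.ParentID = pr.id;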
I hope this URL helps: Self-Joins in SQL.
This is the problem of finding the transitive closure of a graph in SQL. SQL does not support this directly, which leaves you with three common strategies:
use a vendor specific SQL extension
store the Materialized Path from the root to the given node in each row
store the Nested Sets, that is the interval covered by the subtree rooted at a given node when nodes are labeled depth first
The first option is straightforward, and if you don't need database portability is probably the best. The second and third options have the advantage of being plain SQL, but require maintaining some de-normalized state. Updating a table that uses materialized paths is simple, but for fast queries your database must support indexes for prefix queries on string values. Nested sets avoid needing any string indexing features, but can require updating a lot of rows as you insert or remove nodes.
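For illustration, a minimal sketch of the materialized-path idea against the sample table above (the Path column and the '/' separator are assumptions):
-- ID  Name       ParentID  Path
-- 1   Myname     0         /1/
-- 2   nametwo    1         /1/2/
-- 3   namethree  2         /1/2/3/
-- the whole subtree under node 1 (including the node itself) becomes a simple prefix query:
Select * From [myTable] Where Path Like '/1/%';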
If you're fine with always using MSSQL, I'd use the vendor specific option Adrian mentioned.
