First of all, I am querying directly in a SQLite database management tool, so using a programming language is impossible in my case; my only option is to work with triggers.
My database has a table named Article that I would like to populate with n dummy rows for test purposes, without reaching the recursion limit of triggers (a limit I am unable to change, since that would require recompiling SQLite). From reading the official documentation, I believe this limit is set to 500 by default.
So far I have created a functional trigger, but I am unable to stop its recursion after n insertions:
CREATE TRIGGER 'myTrigger'
AFTER INSERT ON 'Article'
WHEN (insertedRowNumber < 500)
BEGIN
INSERT INTO Article(...) VALUES(...);
END;
The Article table structure doesn't contain any kind of timestamp, and it cannot be changed because the database is already deployed in production.
How would one limit the number of rows inserted with the trigger pattern I provided?
Thank you for your help!
If you can limit entries in Article instead of number of insertions, just:
CREATE TRIGGER myTrigger
AFTER INSERT ON Article
WHEN ((SELECT COUNT() FROM Article) < 500)
BEGIN
INSERT INTO Article(...) VALUES(...);
END;
Another option is using a helper view:
CREATE VIEW hlpArticle(a, ..., z, hlpCnt) AS SELECT a, ..., z, 1 AS hlpCnt FROM Article;
CREATE TRIGGER hlpTrigger INSTEAD OF INSERT ON hlpArticle WHEN (NEW.hlpCnt>0) BEGIN
INSERT INTO Article(a, ..., z) VALUES(NEW.a, ..., NEW.z);
INSERT INTO hlpArticle(a, ..., z, hlpCnt) VALUES(NEW.a, ..., NEW.z, NEW.hlpCnt-1);
END;
So when you do:
INSERT INTO hlpArticle(a, ..., z, hlpCnt) VALUES('val_a', ..., 'val_z', 500);
it will insert 500 records into Article.
In my Android application, I use Cursor c = db.rawQuery(query, null); to query data from a local SQLite database, and one of the query strings looks like the following:
SELECT t1.* FROM table t1
WHERE NOT EXISTS (
SELECT 1 FROM table t2
WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time
)
However, the query gets very slow when the database gets huge. I have been trying to introduce indexing to speed it up, but so far without much success, so some help would be great here; it is also hard to find examples of this for Android applications.
You can create a composite index for the columns start_time and stop_time:
CREATE INDEX idx_name ON table_name(start_time, stop_time);
You can read in The SQLite Query Optimizer Overview:
The ON and USING clauses of an inner join are converted into
additional terms of the WHERE clause prior to WHERE clause analysis
...
and:
If an index is created using a statement like this:
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
Then the index might be used if the initial columns of the index
(columns a, b, and so forth) appear in WHERE clause terms. The initial
columns of the index must be used with the = or IN or IS operators.
The right-most column that is used can employ inequalities.
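Once the index exists, you can check that SQLite actually uses it with EXPLAIN QUERY PLAN (the table and index names below follow the CREATE INDEX example above; on Android you can run this through rawQuery() and print the resulting cursor):
EXPLAIN QUERY PLAN
SELECT t1.* FROM table_name t1
WHERE NOT EXISTS (
    SELECT 1 FROM table_name t2
    WHERE t2.start_time = t1.start_time AND t2.stop_time > t1.stop_time
);
-- the correlated subquery should now be reported as SEARCH ... USING INDEX idx_name
-- (or COVERING INDEX) rather than a full SCAN of the table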
You may have to uninstall the app from the device so that the database is deleted and recreated when you rerun, or increase the database version number so that you can create the index in the onUpgrade() method.
I am trying to build a simple hotel room check-in database as a learning exercise.
CREATE TABLE HotelReservations
(
roomNum INTEGER NOT NULL,
arrival DATE NOT NULL,
departure DATE NOT NULL,
guestName CHAR(30) NOT NULL,
CONSTRAINT timeTraveler CHECK (arrival < departure), /* stops time travelers */
/* CONSTRAINT multipleReservations CHECK (my question is about this) */
PRIMARY KEY (roomNum, arrival)
);
I am having trouble specifying a constraint that doesn't allow inserting a new reservation for a room that has not yet been vacated. For example (below), guest 'B' checks into room 123 before 'A' checks out.
INSERT INTO HotelReservations(roomNum, arrival, departure, guestName)
VALUES
(123, date("2017-02-02"), date("2017-02-06"), 'A'),
(123, date("2017-02-04"), date("2017-02-08"), 'B');
This shouldn't be allowed, but I am unsure how to write this constraint. My first attempt was to write a subquery inside CHECK, but I had trouble figuring out the proper subquery because I don't know how to access the roomNum value of the new row being inserted. I then also found out that most SQL systems don't even allow subqueries inside CHECK.
So how am I supposed to write this constraint? I read a bit about triggers, which seem like they might solve this problem, but is that really the only way to do it? Or am I just dense and missing an obvious way to write the constraint?
The documentation indeed says:
The expression of a CHECK constraint may not contain a subquery.
While it would be possible to create a user-defined function that goes back to the database and queries the table, the only reasonable way to implement this constraint is with a trigger.
There is a special mechanism to access the new row inside the trigger:
Both the WHEN clause and the trigger actions may access elements of the row being inserted, deleted or updated using references of the form "NEW.column-name" and "OLD.column-name", where column-name is the name of a column from the table that the trigger is associated with.
CREATE TRIGGER multiple_reservations_check
BEFORE INSERT ON HotelReservations
BEGIN
SELECT RAISE(FAIL, "reservations overlap")
FROM HotelReservations
WHERE roomNum = NEW.roomNum
AND departure > NEW.arrival
AND arrival < NEW.departure;
END;
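With the trigger in place, the second insert from your example is rejected while the first goes through (using the sample values from the question):
INSERT INTO HotelReservations(roomNum, arrival, departure, guestName)
VALUES (123, date('2017-02-02'), date('2017-02-06'), 'A');  -- succeeds: no existing reservation overlaps

INSERT INTO HotelReservations(roomNum, arrival, departure, guestName)
VALUES (123, date('2017-02-04'), date('2017-02-08'), 'B');  -- fails with "reservations overlap"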
I am using the TableAdapter Query Configuration Wizard in Visual Studio 2013 to get data from my database. For some queries like this:
SELECT *
FROM ItemsTable
ORDER BY date_of_creation desc, time_of_creation desc
OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
it doesn't recognize @PageNumber as a parameter and cannot generate a function that takes these arguments, while it works fine for queries like:
Select Top (@count) * from items_table
Why does the TableAdapter fail to generate a function with the mentioned arguments for the first query, whereas it generates one fine for the second, for example: tableadapter.getDataByCount(?int count)
Am I forced to use a stored procedure? If yes, how, since I don't know anything about them?
Update: The problem occurs in the TableAdapter Configuration Wizard in the DataSet editor (VS 2013): it doesn't generate functions with these parameters, and sometimes it says @RowsPerPage should be declared, even though it should generate a function with these arguments. I found that it happens when we use @parameter_name in a clause other than SELECT and WHERE; for example, in this query we used it in the OFFSET clause.
I can't tell you how to fix it in ASP, but here is a simple stored procedure that should do the same thing:
CREATE PROCEDURE dbo.ReturnPageOfItems
(
@pageNumber INT,
@rowsPerPage INT
)
AS
BEGIN;
SELECT *
FROM dbo.ItemsTable
ORDER BY date_of_creation desc,
time_of_creation desc
OFFSET (@pageNumber - 1) * @rowsPerPage ROWS
FETCH NEXT @rowsPerPage ROWS ONLY;
END;
This will also perform better than simply passing the query, because SQL Server will take advantage of the cached query plan created for the procedure on its first execution. It is best practice not to use SELECT *, as that can cause maintenance trouble for you if there are schema changes to the table(s) involved, so I encourage you to spell out the columns in which you're actually interested. The documentation for the CREATE PROCEDURE command is available here, and it spells out the many various options you have in greater detail. However, the code above should work fine as is.
If you need to grant access to your application user so they can use this proc, that code is
GRANT EXECUTE ON OBJECT::dbo.ReturnPageOfItems TO userName;
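You can then point the TableAdapter at this procedure instead of a raw SQL statement. Called directly, it looks like this (the page number and page size are just example values):
EXEC dbo.ReturnPageOfItems @pageNumber = 1, @rowsPerPage = 20;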
Please help me resolve the issue below: I have to insert data into a table (the table name is generated from a variable value, and the table already exists) within FORALL.
Declare
TYPE dept_data_rec IS RECORD
(
Dept_no number,
Dept_name varchar2(100),
Dept_loc Varchar2(20)
);
TYPE nt_dept_data IS TABLE OF dept_data_rec ;
l_dept_data_nt nt_dept_data;
BEGIN
FORALL j IN 1..l_dept_data_nt.COUNT SAVE EXCEPTIONS
EXECUTE IMMEDIATE 'INSERT INTO '||l_get_dept_rec.dept_seq_no||'_Dept_Data VALUES '||
l_dept_data_nt(j);
COMMIT;
While compiling this code I get the following error:
PLS-00306: wrong number or types of arguments in call to '||'
However, when the code uses the actual table name, it works:
FORALL j IN 1..l_dept_data_nt.COUNT SAVE EXCEPTIONS
INSERT INTO A1_dept_data VALUES
l_dept_data_nt(j);
COMMIT;
Oracle 10g -
In versions of Oracle prior to 11g, you can't use FORALL with EXECUTE IMMEDIATE, only with INSERT, UPDATE, or DELETE.
See http://docs.oracle.com/cd/B13789_01/appdev.101/b10807/13_elems021.htm
It's a special syntax that reads like a FOR loop but isn't; PL/SQL uses it to perform bulk DML operations, and it works only with those exact DML keywords, not with dynamic SQL or any other code.
Oracle 11g +
In 11g, the restriction on using EXECUTE IMMEDIATE was lifted. See http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/forall_statement.htm
However, the only variables allowed in the string are subscripted elements from a single array in the USING clause.
The documentation is unclear on whether you can dynamically "change" the table per row using the FORALL syntax. Remember that FORALL is used by PL/SQL to perform a bulk DML operation, and that operation needs to go to a single table for it to yield any performance benefit.
Best performance solution for the above problem
You should make two levels of arrays, the first defines which table and the second defines the data for that table.
Use an ordinary FOR loop over the table array and inside that loop use the special FORALL syntax to perform all the DML for the one table.
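A sketch of that shape is below. The staging table (dept_staging), the dept_key column, the list of table keys, and the per-column collections are all assumptions of mine, and you should check the FORALL dynamic-SQL binding rules of your exact Oracle version; the point is only to show the FOR-over-tables / FORALL-per-table structure:
DECLARE
  TYPE nt_keys      IS TABLE OF VARCHAR2(30);
  TYPE nt_dept_no   IS TABLE OF NUMBER;
  TYPE nt_dept_name IS TABLE OF VARCHAR2(100);
  TYPE nt_dept_loc  IS TABLE OF VARCHAR2(20);

  l_table_keys nt_keys := nt_keys('A1', 'A2');  -- level 1: one entry per target table (example values)
  l_dept_no    nt_dept_no;                      -- level 2: column data for the current table
  l_dept_name  nt_dept_name;
  l_dept_loc   nt_dept_loc;
BEGIN
  FOR i IN 1 .. l_table_keys.COUNT LOOP
    -- load the rows belonging to this table (dept_staging is a hypothetical source)
    SELECT dept_no, dept_name, dept_loc
      BULK COLLECT INTO l_dept_no, l_dept_name, l_dept_loc
      FROM dept_staging
     WHERE dept_key = l_table_keys(i);

    -- one bulk DML per target table; only collection elements subscripted by j are bound
    FORALL j IN 1 .. l_dept_no.COUNT
      EXECUTE IMMEDIATE
        'INSERT INTO ' || l_table_keys(i) || '_Dept_Data (dept_no, dept_name, dept_loc)
         VALUES (:1, :2, :3)'
        USING l_dept_no(j), l_dept_name(j), l_dept_loc(j);
  END LOOP;

  COMMIT;
END;
/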
I would like to ask how you would improve the performance of the INSERT in this cursor loop.
I need to use dynamic PL/SQL to fetch the data, but I don't know the best way to improve the INSERT. Maybe a bulk insert?
Please let me know with code example if possible.
-- This is how I use cur_HANDLE:
cur_HANDLE integer;
cur_HANDLE := dbms_sql.open_cursor;
DBMS_SQL.PARSE(cur_HANDLE, W_STMT, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS2(cur_HANDLE, W_NO_OF_COLS, W_DESC_TAB);
LOOP
-- Fetch a row
IF DBMS_SQL.FETCH_ROWS(cur_HANDLE) > 0 THEN
DBMS_SQL.column_value(cur_HANDLE, 9, cont_ID);
DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 3, proj_NR);
ELSE
EXIT;
END IF;
Insert into w_Contracts values(counter, cont_ID, proj_NR);
counter := counter + 1;
END LOOP;
You should do database actions in sets whenever possible, rather than row-by-row inserts. You don't tell us what query is behind cur_HANDLE, so I can't really rewrite this, but you should probably do something like:
INSERT INTO w_contracts
SELECT ROWNUM, cont_id, proj_nr
FROM ( ... some table or joined tables or whatever... )
Though if your first value there is a primary key, it would probably be better to assign it from a sequence.
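For example, assuming a sequence named w_contracts_seq exists (a hypothetical name):
INSERT INTO w_contracts
SELECT w_contracts_seq.NEXTVAL, cont_id, proj_nr
FROM ( ... some table or joined tables or whatever... );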
Solution 1) You can populate a PL/SQL collection inside the loop and then, just after the loop, insert the whole collection in one step using:
FORALL i in contracts_tab.first .. contracts_tab.last
INSERT INTO w_contracts VALUES contracts_tab(i);
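Fleshed out against the loop from your question, Solution 1 could look roughly like this (the record and collection types, the NUMBER column types, and the assumption that counter starts at 1 are mine; the describe step from your code is omitted):
DECLARE
  -- assumed record shape: three NUMBER fields matching the columns of w_contracts
  TYPE t_contract_rec IS RECORD (
    line_no NUMBER,
    cont_id NUMBER,
    proj_nr NUMBER
  );
  TYPE t_contract_tab IS TABLE OF t_contract_rec INDEX BY PLS_INTEGER;
  contracts_tab t_contract_tab;

  cur_HANDLE INTEGER;
  cont_ID    NUMBER;
  proj_NR    NUMBER;
  counter    PLS_INTEGER := 1;     -- assumed to start at 1
  W_STMT     VARCHAR2(32767);      -- assembled elsewhere, as in your code
  l_ignore   INTEGER;
BEGIN
  cur_HANDLE := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(cur_HANDLE, W_STMT, DBMS_SQL.NATIVE);
  DBMS_SQL.DEFINE_COLUMN(cur_HANDLE, 9, cont_ID);
  DBMS_SQL.DEFINE_COLUMN(cur_HANDLE, 3, proj_NR);
  l_ignore := DBMS_SQL.EXECUTE(cur_HANDLE);

  LOOP
    EXIT WHEN DBMS_SQL.FETCH_ROWS(cur_HANDLE) = 0;
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 9, cont_ID);
    DBMS_SQL.COLUMN_VALUE(cur_HANDLE, 3, proj_NR);

    -- buffer the row in memory instead of inserting it straight away
    contracts_tab(counter).line_no := counter;
    contracts_tab(counter).cont_id := cont_ID;
    contracts_tab(counter).proj_nr := proj_NR;
    counter := counter + 1;
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(cur_HANDLE);

  -- one bulk insert instead of one INSERT per fetched row
  FORALL i IN 1 .. contracts_tab.COUNT
    INSERT INTO w_contracts VALUES contracts_tab(i);
END;
/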
Solution 2) If v_stmt (the W_STMT from your code) contains a valid SQL statement, you can insert the data directly into the table using:
EXECUTE IMMEDIATE 'INSERT INTO w_contracts (counter, cont_id, proj_nr)
SELECT rownum, 9, 3 FROM ('||v_stmt||')';
"select statement is assembled from a website, ex if user choose to
include more detailed search then the select statement is changed and
the result looks different in the end. The whole application is a web
site build on dinamic plsql code."
This is a dangerous proposition, because it opens your database to SQL injection. This is the scenario in which Bad People subvert your parameters to expand the data they can retrieve or to escalate privileges. At the very least you need to be using DBMS_ASSERT to validate user input. Find out more.
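For the pieces of W_STMT that are identifiers chosen from user input (a table or column name, say), the usual pattern is something like this (the variable and table names are illustrative):
DECLARE
  l_table VARCHAR2(128) := 'W_CONTRACTS';  -- imagine this value arrived from the website
  l_count NUMBER;
BEGIN
  -- SQL_OBJECT_NAME raises ORA-44002 if the value is not an existing object name,
  -- so a forged identifier never reaches the dynamic statement
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(l_table)
    INTO l_count;
END;
/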
Of course, if you are allowing users to pass whole SQL strings (you haven't provided any information regarding the construction of W_STMT) then all bets are off. DBMS_ASSERT won't help you there.
Anyway, as you have failed to give the additional information we actually need, please let me spell it out for you:
will the SELECT statement always have the same column names from the same table name, or can the user change those two?
will you always be interested in the third and ninth columns?
how is the W_STMT string assembled? How much control do you have over its projection?