I have a table X and a table Y.
There is a Java subscriber that inserts data continuously into X.
On X we have an insert trigger that dumps data into Y on each insert.
I want to delete the processed records from Y while simultaneous inserts are happening on Y.
Will the delete on Y encounter a lock?
It shouldn't. Unless otherwise specified in your DML, locks occur at the row level. See this answer here.
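A minimal sketch of what that delete could look like (the processed flag below is hypothetical; any column that marks rows as already handled would do). Because the DELETE only touches the rows it actually removes, row-level locking means concurrent inserts of new rows into Y should not be blocked:

-- hypothetical sketch: remove only rows already marked as processed
-- row locks are taken on the deleted rows only, so new inserts into Y proceed
DELETE FROM Y
WHERE processed = 1;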
I have a 3 GB SQLite database. I want to modify the type of one of the table's columns.
I know that SQLite does not support altering columns and that this can only be done by recreating the table.
That's how I do it:
BEGIN TRANSACTION;
-- rename the original table out of the way
ALTER TABLE tbl RENAME TO tbl_;
-- recreate the table with the new column types
CREATE TABLE tbl (a INTEGER, b TEXT, c TEXT);
-- copy the data across and drop the old copy
INSERT INTO tbl SELECT * FROM tbl_;
DROP TABLE tbl_;
COMMIT;
I thought that, since I use a transaction, the database size would not increase during this process. But it did. There is not enough space on my disk to double the database size. Is it normal that the database size increases within a transaction? Is there any other way of modifying a column type without increasing the database size? This process also takes a lot of time. Unexpectedly, most of the time is taken by the DROP TABLE statement; it is even longer than the INSERT statement. Why does dropping the table take longer than copying the data from one table to another?
Thanks in advance!
Hi, I am trying to use the Data Load table in Oracle Apex for a table that I have access to through a different schema. For example, I need to insert via CSV into table x, which belongs to schema X. However, I only have access to schema Y, which has been granted access by schema X. This means I can access table x by querying, but whenever I try to choose the table for the Data Load under schema Y, table x doesn't show up. Whenever I choose table x through schema X, it shows an error, because I don't have access to schema X. How can I select table x through schema Y? I tried everything, and tried to edit the code looking through the pages, but I can't find anything. Any help would be appreciated.
As you have access to schema Y:
connect to it
create a synonym for user X's table X
use that synonym in Apex
[EDIT, a new approach, fooling Apex]
connect as Y
create table x as select * from x.x where 1 = 2
go to Apex, create the whole loading process which uses table X that belongs to user Y
once it is done & tested, drop table y.x
create synonym x for x.x
Apex will still think that it is a table, but it is a synonym instead.
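Put together, the second approach looks roughly like this, run while connected as Y (a sketch; the object names simply follow the question's X/Y naming):

-- empty copy of x.x in schema Y, just so Apex sees a local table
create table x as select * from x.x where 1 = 2;
-- ... build and test the Data Load process in Apex against this table ...
-- then swap the dummy table for a synonym pointing at the real table
drop table x;
create synonym x for x.x;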
First of all, I am querying directly in a SQLite database management tool. Therefore, any use of a programming language is impossible in my case, and my only option is to work with triggers.
My database has a table named Article that I would like to populate with n dummy rows for test purposes without reaching the recursion limit of triggers (a limit I am unable to change, since I would have to recompile SQLite). From reading the official documentation, I suppose this limit is set to 500 by default.
So far I have created a functional trigger, but I am unable to stop its recursion after n insertions:
CREATE TRIGGER myTrigger
AFTER INSERT ON Article
WHEN (insertedRowNumber < 500)  -- pseudocode: no such counter actually exists
BEGIN
    INSERT INTO Article(...) VALUES(...);
END;
The Article table structure doesn't contain any kind of timestamp and it can not be changed because the database is already deployed for production.
How would one limit the number of rows inserted with the trigger pattern I provided?
Thank you for your help!
If you can limit the number of entries in Article instead of the number of insertions, just:
CREATE TRIGGER myTrigger
AFTER INSERT ON Article
WHEN ((SELECT COUNT(*) FROM Article) < 500)
BEGIN
    INSERT INTO Article(...) VALUES(...);
END;
Another option is using a helper view:
CREATE VIEW hlpArticle(a, ..., z, hlpCnt) AS
    SELECT a, ..., z, 1 AS hlpCnt FROM Article;

CREATE TRIGGER hlpTrigger
INSTEAD OF INSERT ON hlpArticle
WHEN (NEW.hlpCnt > 0)
BEGIN
    INSERT INTO Article(a, ..., z) VALUES(NEW.a, ..., NEW.z);
    INSERT INTO hlpArticle(a, ..., z, hlpCnt) VALUES(NEW.a, ..., NEW.z, NEW.hlpCnt - 1);
END;
So when you do:
INSERT INTO hlpArticle(a, ..., z, hlpCnt) VALUES('val_a', ..., 'val_z', 500);
it will insert 500 records into Article.
I am unsure how to do this 'best practice'-wise.
I have a web application (ASP.NET VB) that connects to an MS SQL Server 2012 database. Currently, when the page loads, the app connects to a DB table, gets the last ID, adds 1 to it, and displays this to the user. When the user submits the form the new ID is saved to the DB.
The problem is that the app may be opened by 2 users at the same time, and they will therefore be assigned the same ref number, which will cause problems when the data is saved.
How can I assign different numbers to different users if the app is opened at the same time without saving unnecessary data?
You have multiple solutions for this; I'll try to outline a few approaches. (I'll assume that you need to insert things into a DB table that I'll call "Orders".)
First of all, you can move the ID generation to the moment when the order is actually inserted, rather than the moment when the user starts to enter the data. That way, you do not generate an ID for a user who never completes the form. This scenario is also easy to accomplish using auto-incrementing (IDENTITY) values in SQL Server. You can, for example, do:
-- create a table with an identity column
create table Orders (
    ID int identity(1,1) not null,
    Description nvarchar(max) not null
);

-- insert values, without specifying the ID column
insert into Orders (Description) values ('My First Order');
-- select the row back
-- returns 1, 'My First Order'
select * from Orders;
Another way to do this is to use SQL Server sequences. These are objects that do nothing except hand out consecutive numbers. They guarantee that the numbers won't be repeated and always keep track of the current value, i.e.:
-- create a sequence
create sequence OrderIdSequence
start with 1
increment by 1;
-- get the next sequence value
select next value for OrderIdSequence
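If you want the sequence to drive the ID column directly, a possible sketch is to use it as a column default (the table below, OrdersWithSequence, is illustrative and not from the question):

-- illustrative: let the sequence supply the ID at insert time
create table OrdersWithSequence (
    ID int not null default (next value for OrderIdSequence),
    Description nvarchar(max) not null
);

insert into OrdersWithSequence (Description) values ('My First Order');

-- the ID column is populated from the sequence automatically
select * from OrdersWithSequence;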
So this is essentially a follow-up question on Finding duplicate records.
We perform data imports from text files every day, and we ended up importing 10163 records spread across 182 files twice. On running the query mentioned above to find duplicates, the total count of records we got is 10174, which is 11 records more than what is contained in the files. I assumed this could be because two records that are exactly the same, but both valid, were also counted by the query. So I thought it would be best to use a timestamp field and simply find all the records inserted today (the run that ended up adding the duplicate rows). I used ORA_ROWSCN in the following query:
select count(*) from my_table
where TRUNC(SCN_TO_TIMESTAMP(ORA_ROWSCN)) = '01-MAR-2012';
However, the count is still higher, i.e. 10168. Now, I am pretty sure that the total number of lines across the files is 10163, from running wc -l *.txt in the folder that contains all the files.
Is it possible to find out which rows were actually inserted twice?
By default, ORA_ROWSCN is stored at the block level, not at the row level. It is only stored at the row level if the table was originally built with ROWDEPENDENCIES enabled. Assuming that you can fit many rows of your table in a single block and that you're not using the APPEND hint to insert the new data above the existing high water mark of the table, you are likely inserting new data into blocks that already have some existing data in them. By default, that is going to change the ORA_ROWSCN of every row in the block causing your query to count more rows than were actually inserted.
Since ORA_ROWSCN is only guaranteed to be an upper-bound on the last time there was DML on a row, it would be much more common to determine how many rows were inserted today by adding a CREATE_DATE column to the table that defaults to SYSDATE or to rely on SQL%ROWCOUNT after your INSERT ran (assuming, of course, that you are using a single INSERT statement to insert all the rows).
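A minimal sketch of the CREATE_DATE approach mentioned above (the column name is illustrative, and my_table follows the question's naming):

-- add a creation timestamp that defaults to the load time for future imports
alter table my_table add (create_date date default sysdate);

-- rows loaded on a given day can then be counted directly
select count(*)
  from my_table
 where trunc(create_date) = trunc(sysdate);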
Generally, using the ORA_ROWSCN and the SCN_TO_TIMESTAMP function is going to be a problematic way to identify when a row was inserted even if the table is built with ROWDEPENDENCIES. ORA_ROWSCN returns an Oracle SCN which is a System Change Number. This is a unique identifier for a particular change (i.e. a transaction). As such, there is no direct link between a SCN and a time-- my database might be generating SCN's a million times more quickly than yours and my SCN 1 may be years different from your SCN 1. The Oracle background process SMON maintains a table that maps SCN values to approximate timestamps but it only maintains that data for a limited period of time-- otherwise, your database would end up with a multi-billion row table that was just storing SCN to timestamp mappings. If the row was inserted more than, say, a week ago (and the exact limit depends on the database and database version), SCN_TO_TIMESTAMP won't be able to convert the SCN to a timestamp and will return an error.