Unable to delete oldest table partition - Oracle 11g

I'm using the 11g interval partitioning feature in one of my tables. I set it up to create one-day partitions on a timestamp field and created a job to delete data more than 3 months old. When I try to delete the oldest partition I get the following error:
ORA-14758: Last partition in the range section cannot be dropped
I would have thought that "Last" refers to the newest partition, not the oldest. How should I interpret this error? Is there something wrong with my partitions, or should I in fact keep the oldest partition there at all times?

Yes, the error message is somewhat misleading, but it refers to the last STATICALLY created partition (i.e. in your original table DDL, before Oracle started creating partitions automatically). I think the only way to avoid this is to create an artificial "MINVAL" partition that you're sure will never be used and then drop the real partitions above it.
[Edit after exchange of comments]
I assume this test case reproduces your problem:
CREATE TABLE test
( t_time DATE
)
PARTITION BY RANGE (t_time)
INTERVAL(NUMTODSINTERVAL(1, 'DAY'))
( PARTITION p0 VALUES LESS THAN (TO_DATE('09-1-2009', 'MM-DD-YYYY')),
PARTITION p1 VALUES LESS THAN (TO_DATE('09-2-2009', 'MM-DD-YYYY')),
PARTITION p2 VALUES LESS THAN (TO_DATE('09-3-2009', 'MM-DD-YYYY')),
PARTITION p3 VALUES LESS THAN (TO_DATE('09-4-2009', 'MM-DD-YYYY'))
);
insert into test values(TO_DATE('08-29-2009', 'MM-DD-YYYY'));
insert into test values(TO_DATE('09-1-2009', 'MM-DD-YYYY'));
insert into test values(TO_DATE('09-3-2009', 'MM-DD-YYYY'));
insert into test values(TO_DATE('09-10-2009', 'MM-DD-YYYY'));
When I do this I can drop partitions p0,p1, and p2 but get your error when attempting to drop p3 even though there is a system-generated partition beyond this.
The only workaround I could find was to temporarily redefine the table partitioning by:
alter table test set interval ();
and then drop partition p3. Then you can redefine the partitioning as per the original specification by:
alter table test set INTERVAL(NUMTODSINTERVAL(1, 'DAY'));
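Put together, the whole workaround looks like this (a sketch using the names from the test case above):
alter table test set interval ();
alter table test drop partition p3;
alter table test set INTERVAL(NUMTODSINTERVAL(1, 'DAY'));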

dpbradley's answer is all correct, but it can be done in a safer way if you're dropping the oldest partition(s).
In fact it is enough just to reset the interval and immediately restore it:
alter table test set interval ();
alter table test set INTERVAL(NUMTODSINTERVAL(1, 'DAY'));
And then drop the oldest partition. Resetting and restoring the interval converts the system-generated partitions into regular range partitions, so the oldest partition is no longer the "last" range partition, and the drop happens while interval partitioning is already back in place.
Otherwise, if you drop while the interval is unset (as in the first workaround), there is a risk that a failed drop leaves the table with no interval, so you need to catch all exceptions and handle this.
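A minimal PL/SQL sketch of that exception handling, using the table and partition names from the test case above:
BEGIN
  EXECUTE IMMEDIATE 'alter table test set interval ()';
  EXECUTE IMMEDIATE 'alter table test drop partition p3';
  EXECUTE IMMEDIATE q'[alter table test set INTERVAL(NUMTODSINTERVAL(1, 'DAY'))]';
EXCEPTION
  WHEN OTHERS THEN
    -- restore interval partitioning even if the drop failed, then re-raise
    EXECUTE IMMEDIATE q'[alter table test set INTERVAL(NUMTODSINTERVAL(1, 'DAY'))]';
    RAISE;
END;
/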

Related

Create partition error - HINT: If the column does not currently contain nulls, advance the AHM

I am trying to partition my existing table by year (there are no existing partitions):
alter table test_table PARTITION BY EXTRACT (year FROM date_c);
But I get the following error:
ROLLBACK 2628: Column "date_c" in PARTITION BY expression is not allowed, since it contains NULL values
HINT: If the column does not currently contain nulls, advance the AHM and purge the nulls from the delete vectors before altering the partitioning
The column does not have any null values, so I followed the hint. I did advance the AHM to now. But how do I purge the nulls from the delete vectors?
After setting the AHM (Ancient History Marker) to the greatest allowable value, you can use PURGE_TABLE() to permanently remove delete data from physical storage.
The MAKE_AHM_NOW() function advances the epoch and performs a moveout operation on all projections. The AHM is then set to LGE (Last Good Epoch). At this point, any historical data (including delete vectors) will be lost and rollbacks are not possible. It does not automatically purge old data.
Looks like I have to purge data after setting AHM to now() (I assumed setting AHM to now() automatically takes care of purging older data).
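Putting the pieces together, a sketch of the full sequence in Vertica (using the table name from the question):
SELECT MAKE_AHM_NOW();              -- advance the AHM; historical data before it is lost
SELECT PURGE_TABLE('test_table');   -- physically remove deleted rows and delete vectors
ALTER TABLE test_table PARTITION BY EXTRACT(year FROM date_c);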

SQLite data retrieve with select taking too long

I have created a table with sqlite for my corona/lua app. It's a hashtable with ~700,000 values. The table has two columns: the hashcode (a string) and the value (another string). During the program I need to get data several times by providing the hashcode.
I'm using something like this code to get the data:
for p in db:nrows([[SELECT * FROM test WHERE id=']]..hashcode..[[';]]) do
print(p)
-- p = returned value --
end
This statement, though, is taking far too long to run.
Thanks,
Edit:
Success!
The mistake was with the primary key. I set the hashcode as the primary key like below, and the retrieval time went back to normal:
CREATE TABLE IF NOT EXISTS test (id STRING PRIMARY KEY , array);
I also prepared the statements in advance as you said:
stmt = db:prepare("SELECT * FROM test WHERE id = ?;")
[...]
stmt:bind(1,s)
for p in stmt:nrows() do
The only problem was that the db file size, which was around 18 MB, went to 29.5 MB.
You should create the table with id as a unique primary key; this will automatically make an index.
create table if not exists test
(
id text primary key,
val text
);
You should not construct statements using string concatenation; this is a security issue so avoid getting in this habit. Also, you should prepare statements in advance, at program initialization, and run the prepared statements.
Something like this... initially:
hashcode_query_stmt = db:prepare("SELECT * FROM test WHERE id = ?;")
then for each use:
hashcode_query_stmt:bind_values(hashcode)
for p in hashcode_query_stmt:urows() do ... end
Ensure that there is an index on the id/hashcode column. Without one, such queries will be slow, slow, slow. This index should probably be unique.
If only selecting the value/hashcode (SELECT value FROM ..), it may be beneficial to have a covering index over (id, value) as that can avoid additional seeking to the row data (see SQLite Query Planning). Try it with and without such a covering index.
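For example, a covering index for such lookups might look like this (a sketch; the index name is arbitrary and the column names follow the table definition above):
CREATE INDEX test_id_val ON test (id, val);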
Also, it may be worthwhile to employ caching if the same hashcodes are queried multiple times.
As already stated, make sure you have an index on id.
If you can't change the table schema now, you can add an index ad hoc:
CREATE INDEX test_id ON test (id);
About hashes: if you are computing hashes in your software to speed up searches, don't!
SQLite will use your supplied hashes as any regular string/blob. Also, RDBMS are optimized for efficient searching, which may be greatly improved with indexes.
Unless you're hashing to save space, you are wasting processor time computing hashes in your application.

sqlite3 autoincrement - am I missing something?

I want to create unique order numbers for each day. So ideally, in PostgreSQL for instance, I could create a sequence and read it back for these unique numbers, because the readback both gets me the new number and is atomic. Then at close of day, I'd reset the sequence.
In sqlite3, however, I only see an autoincrement for the integer field type. So say I set up a table with an autoincrement field and insert a record to get the new number (seems like an awfully inefficient way to do it, but anyway...). When I go to read the max back, who is to say that another task hasn't gone in there and inserted ANOTHER record, thereby causing me to read back a miss, with my number one too far advanced (and a duplicate of what the other task reads back)?
Conceptually, I require:
fast lock with wait for other tasks
increment number
retrieve number
unlock
...I just don't see how to do that with sqlite3. Can anyone enlighten me?
In SQLite, autoincrementing fields are intended to be used as actual primary keys for their records.
You should just use it as the ID for your orders table.
If you really want to have an atomic counter independent of corresponding table records, use a table with a single record.
ACID is ensured with transactions:
BEGIN;
SELECT number FROM MyTable;           -- read the current value
UPDATE MyTable SET number = ? + 1;    -- ? is bound to the value just read
COMMIT;
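A more complete sketch, assuming a single-row counter table named MyTable: with a plain (deferred) BEGIN, two tasks can both read the old value and then collide when they try to write, so it is simpler to take the write lock up front and increment before reading:
CREATE TABLE MyTable (number INTEGER NOT NULL);
INSERT INTO MyTable (number) VALUES (0);
BEGIN IMMEDIATE;                        -- acquire the write lock immediately
UPDATE MyTable SET number = number + 1;
SELECT number FROM MyTable;             -- the freshly incremented value
COMMIT;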
ok, looks like sqlite either doesn't have what I need, or I am missing it. Here's what I came up with:
declare zorder as integer primary key autoincrement, zuid integer in orders table
this means every new row gets an ascending number, starting with 1
generate a random number:
rnd = int(random.random() * 1000000) # unseeded python uses system time
create new order (just the SQL for simplicity):
'INSERT INTO orders (zuid) VALUES ('+str(rnd)+')'
find that exact order number using the random number:
'SELECT zorder FROM orders WHERE zuid = '+str(rnd)
pack away that number as the new order number (newordernum)
clobber the random number to reduce collision risks
'UPDATE orders SET zuid = 0 WHERE zorder = '+str(newordernum)
...and now I have a unique new order, I know what the correct order number is, the risk of a read collision is reduced to negligible, and I can prepare that order without concern that I'm trampling on another newly created order.
Just goes to show you why DB authors implement sequences, lol.

How to use one sequence for all table in sqlite

When I'm creating tables in an SQLite database, separate sequences are automatically created for each table.
However I want to use one sequence for all tables in my SQLite database, and I also need to set the min and max values (e.g. min=10000 and max=999999) of that sequence (min and max meaning the start value of the sequence and the maximum value it can increment to).
I know this can be done in an Oracle database, but don't know how to do it in SQLite.
Is there any way to do this?
Unfortunately, you cannot do this: SQLite automatically creates a separate sequence for each table, tracked in the special sqlite_sequence service table.
And even if you somehow forced it to take a single sequence as the source for all your tables, it would not work the way you expect. For example, in PostgreSQL or Oracle, if you set a sequence to, say, 1 but the table already has rows with values 1,2,..., any attempt to insert new rows using that sequence as a source would fail with a unique constraint violation.
In SQLite, however, if you manually reset sequence using something like:
UPDATE sqlite_sequence SET seq = 1 WHERE name = 'mytable'
and you already have rows 1,2,3,..., new attempts to insert will NOT fail: SQLite automatically bumps the sequence back up to the maximum existing value of the appropriate auto-increment column, so numbering keeps going up from there.
Note that you can use this hack to assign starting value for the sequence, however you cannot make it stop incrementing at some value (unless you watch it using other means).
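For example, to make the next auto-increment value for a table start at 10000 (the question's min), a sketch (the table must already appear in sqlite_sequence, i.e. have had at least one insert):
UPDATE sqlite_sequence SET seq = 9999 WHERE name = 'mytable';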
First of all, this is a bad idea.
The performance of database queries depends on predictability, and by fiddling with the auto-increment sequences you are introducing offsets which will confuse the database engine.
However, to achieve this you could try to determine the lowest sequence number which is higher than or equal to any existing sequence number:
SELECT MAX(seq) FROM sqlite_sequence
This needs to be done after each INSERT query, followed by an update of all sequences for all tables:
UPDATE sqlite_sequence SET seq=determined_max
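Combined into a single statement, a sketch of that maintenance step (run it inside the same transaction as the INSERT):
UPDATE sqlite_sequence
SET seq = (SELECT MAX(seq) FROM sqlite_sequence);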

Understanding the ORA_ROWSCN behavior in Oracle

So this is essentially a follow-up question on Finding duplicate records.
We perform data imports from text files every day, and we ended up importing 10163 records spread across 182 files twice. On running the query mentioned above to find duplicates, the total count of records we got is 10174, which is 11 records more than are contained in the files. I considered the possibility that 2 records that are exactly the same, and both valid, were being counted as well by the query. So I thought it would be best to use a timestamp field and simply find all the records that were inserted today (and hence ended up adding duplicate rows). I used ORA_ROWSCN in the following query:
select count(*) from my_table
where TRUNC(SCN_TO_TIMESTAMP(ORA_ROWSCN)) = '01-MAR-2012'
;
However, the count is still higher, i.e. 10168. Now, I am pretty sure that the total number of lines across the files is 10163, from running wc -l *.txt in the folder that contains all the files.
Is it possible to find out which rows are actually inserted twice?
By default, ORA_ROWSCN is stored at the block level, not at the row level. It is only stored at the row level if the table was originally built with ROWDEPENDENCIES enabled. Assuming that you can fit many rows of your table in a single block and that you're not using the APPEND hint to insert the new data above the existing high water mark of the table, you are likely inserting new data into blocks that already have some existing data in them. By default, that is going to change the ORA_ROWSCN of every row in the block causing your query to count more rows than were actually inserted.
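For reference, ROWDEPENDENCIES must be chosen when the table is created; it cannot be switched on later with ALTER TABLE. A sketch with a hypothetical column list:
CREATE TABLE my_table_rd (
  id          NUMBER,
  create_date DATE
) ROWDEPENDENCIES;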
Since ORA_ROWSCN is only guaranteed to be an upper-bound on the last time there was DML on a row, it would be much more common to determine how many rows were inserted today by adding a CREATE_DATE column to the table that defaults to SYSDATE or to rely on SQL%ROWCOUNT after your INSERT ran (assuming, of course, that you are using a single INSERT statement to insert all the rows).
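A sketch of the CREATE_DATE approach (the column name is assumed; existing rows will have a NULL create_date):
ALTER TABLE my_table ADD (create_date DATE DEFAULT SYSDATE);
-- count rows loaded today:
SELECT COUNT(*) FROM my_table WHERE create_date >= TRUNC(SYSDATE);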
Generally, using the ORA_ROWSCN and the SCN_TO_TIMESTAMP function is going to be a problematic way to identify when a row was inserted even if the table is built with ROWDEPENDENCIES. ORA_ROWSCN returns an Oracle SCN which is a System Change Number. This is a unique identifier for a particular change (i.e. a transaction). As such, there is no direct link between a SCN and a time-- my database might be generating SCN's a million times more quickly than yours and my SCN 1 may be years different from your SCN 1. The Oracle background process SMON maintains a table that maps SCN values to approximate timestamps but it only maintains that data for a limited period of time-- otherwise, your database would end up with a multi-billion row table that was just storing SCN to timestamp mappings. If the row was inserted more than, say, a week ago (and the exact limit depends on the database and database version), SCN_TO_TIMESTAMP won't be able to convert the SCN to a timestamp and will return an error.
