How to roll back all the commits in Oracle DB - oracle11g

Recently I made a mistake: I dropped all the tables in my Oracle database (running DROP commands from SQL Developer) over a sequence of transactions, not realizing that I was connected to the wrong database. Each time I dropped a set of tables, I committed the operation. By the time I realized I had dropped the tables in the wrong DB, it was too late to roll back, since ROLLBACK undoes only the current uncommitted transaction. I searched online as well, and found savepoints suggested as the resolution, but I had no savepoints configured. I had a backup of all the tables taken 2 days before, so I ran the scripts from there and manually redid the changes lost over the past 2 days.
Is there any other way to get my DB state back? I have since made it a practice to create savepoints while performing such operations. But I am still concerned: someone who doesn't know about savepoints will lose all their data.

DDL statements like DROP perform an implicit commit, so you cannot roll back to undrop a table. Nor would a savepoint help you, since that only lets you go back to an earlier point in the current transaction.
If you have merely dropped a table, and assuming you haven't disabled the recycle bin, you can simply do a flashback drop:
flashback table <<table name>> to before drop
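If you are unsure what is still recoverable, you can inspect the recycle bin first. A minimal sketch (the table and new name below are illustrative):
-- see what the recycle bin still holds for your schema
select object_name, original_name, droptime from user_recyclebin;
-- restore a dropped table, optionally under a new name
flashback table mytable to before drop rename to mytable_restored;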
If you have a more complicated script that is doing more than just dropping tables, you may want to set a restore point prior to running the script. You could then flashback the entire database to that restore point if something goes wrong.
create restore point <<your restore point name>>
<<run your script>>
flashback database to restore point <<your restore point name>>
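As a concrete sketch, assuming you have SYSDBA access and either flashback logging enabled or a guaranteed restore point (the restore point name is illustrative, and FLASHBACK DATABASE must be run with the database mounted, e.g. from SQL*Plus or RMAN):
create restore point before_my_script guarantee flashback database;
-- ... run your script ...
shutdown immediate
startup mount
flashback database to restore point before_my_script;
alter database open resetlogs;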

Related

SQL Server Data Archiving

I have a SQL Azure database on which I need to perform some data archiving operation.
Plan is to move all the irrelevant data from the actual tables into Archive_* tables.
I have tables which have up to 8-9 million records.
One option is to write a stored procedure and insert data in to the new Archive_* tables and also delete from the actual tables.
But this operation is really time-consuming, running for more than 3 hours.
I am in a situation where I can't have more than an hour's downtime.
How can I make this archiving faster?
You can use Azure Automation to schedule execution of a stored procedure every day at the same time, during the maintenance window, where this stored procedure archives only the oldest week or month of data each time it runs. The stored procedure should archive only data older than X weeks/months/years. Please read this article to create the runbook. In a few days you will have all the old data archived, and the runbook will continue to do the job from then on.
You can't make it faster, but you can make it seamless. The first option is to have a separate task that moves data in portions from the source to the archive tables. In order to prevent table lock escalations and overall performance degradation, I would suggest limiting the size of a single transaction: e.g. start a transaction, insert N records into the archive table, delete these records from the source table, commit the transaction. Continue for a few days until all the necessary data is transferred. The advantage of this approach is that if there is some kind of failure, you can restart the archival process and it will continue from the point of the failure.
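A minimal T-SQL sketch of that batching pattern (the table, column, and batch-size names are illustrative, not from the question):
DECLARE @batch INT = 5000, @moved INT = 1;

WHILE @moved > 0
BEGIN
    BEGIN TRANSACTION;

    -- move one batch of old rows into the archive table in a single statement
    DELETE TOP (@batch)
    FROM dbo.Orders
    OUTPUT deleted.* INTO dbo.Archive_Orders
    WHERE OrderDate < DATEADD(MONTH, -6, SYSUTCDATETIME());

    SET @moved = @@ROWCOUNT;  -- capture before COMMIT resets it

    COMMIT TRANSACTION;
END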
The second option, which does not exclude the first one, really depends on how critical the performance of the source tables is for you and how many updates they receive. If that is not a problem, you can write triggers that pour every inserted/updated record into an archive table (see the sketch below). Then, when you want to clean up, all you need to do is delete the obsolete records from the source tables; their copies will already be in the archive tables.
In both cases you will not need any downtime.
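A rough sketch of such a trigger, assuming the archive table mirrors the source table's columns (names are illustrative):
CREATE TRIGGER trg_Orders_ToArchive
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- copy every inserted/updated row into the archive table as it happens
    INSERT INTO dbo.Archive_Orders
    SELECT * FROM inserted;
END;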

AX 2012R2: Lookup query takes too long, lookup never opens

I have an AX 2012 R2 CU6 installation (build & client 6.2.1000.1437, kernel 6.2.1000.5268) with the following problem:
On AP > Journals > Invoices > Invoice Journal > Lines (form LedgerJournalTransVendInvoice), when I select Vendor as the account type and then activate the lookup on the Account field, AX freezes for a couple of minutes, and when it recovers, the lookup is closed/never opened. This happens every time with account type Vendor; other account types work just fine.
I debugged this to LedgerJournalEngine.accountNumLookup() --> VendTable.lookupVendor, at the line
formSegmentedEntryControl.performFormLookup(formRun);
This call is where all the time is spent.
Any ideas before I hire an exorcist?
There is a known KB for this for R3; look for it on Lifecycle Services:
KB 3086961 Performance issue of VendorLookup on the volume data,
during the GFM Bugbash 6/11 took over 30 minutes
Even though the fix is for R3, it should be easy to backport, as the changes are described as:
The root cause seemed to be the DirPartyLookupGridView, which had
around 14 joins on views and tables. This view is used in many places
and hence seemed to have grown quite a lot over time.
The changes in the hotfix remove the view and add only the required
datasources - dirpartytable and logisticsaddress to the
VendTableLookup form.
The custtableLookup is not using the view and using custom datasource
joins instead, so no changes there.
Try implementing that change and see what happens.
I'm not sure this will fix your issue: in your execution plan, the only operation that seems really expensive is the sort operator, which needs to spill to tempdb (you might need more memory to solve that). But the changes to the datasource could have the effect of removing the sort operator from the execution plan, since the data may already be sorted by an index.
Probably SQL Server chose the wrong query plan.
First check that you have not disabled any indexes on the involved tables, then do a synchronize on them.
If it is still a problem, run UPDATE STATISTICS on the involved tables (including the tables in the view).
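For example, starting from the two datasources named in the hotfix description above (a sketch; the exact table names vary, so extend it to every table the view touches):
-- refresh optimizer statistics so SQL Server can pick a better plan
UPDATE STATISTICS dbo.DIRPARTYTABLE WITH FULLSCAN;
UPDATE STATISTICS dbo.LOGISTICSPOSTALADDRESS WITH FULLSCAN;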

Any support for UNDO in SQLite Manager

Due to my impatience I corrupted my database with a REPLACE query; now all the rows in that column show 0.
Is there any support in SQLite Manager for Firefox with which I could undo back to the previous state of my DB?
Thanks
There's no such thing as UNDO -- however, if you began a transaction prior to starting your query (or queries), you could simply not commit the transaction. Short of that, you need to restore from a backup (and if you don't have a backup, you'll probably think to make one next time?).
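For next time, a minimal sketch of that safety net (table and column names are illustrative):
BEGIN TRANSACTION;
-- the risky statement
UPDATE mytable SET mycolumn = 0 WHERE id > 100;
-- inspect the result before deciding
SELECT id, mycolumn FROM mytable LIMIT 10;
ROLLBACK;   -- undo it; use COMMIT instead once you are happy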

Optimizing select with transaction under SQLite 3

I read that wrapping a lot of SELECTs in a BEGIN TRANSACTION/COMMIT block is an interesting optimization.
But are these commands really necessary if I use "PRAGMA journal_mode = OFF" beforehand? (Which, if I remember correctly, disables the log and obviously the transaction system too.)
Note that I don't agree with BigMacAttack.
For SQLITE, wrapping SELECTs in a Transaction does do something:
It reduces the number of SHARED locks that are obtained and then dropped.
Reference:
http://www.mail-archive.com/sqlite-users%40sqlite.org/msg79839.html
So I think the transaction would also be beneficial even if you had journal_mode turned off, because there is still the locking overhead to consider.
Maybe read_uncommitted would be something you could consider - I would guess that it would disable the SHARED locking.
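For illustration, the combination under discussion might look like this (note that read_uncommitted only has an effect in shared-cache mode, so treat it as a sketch):
PRAGMA journal_mode = OFF;     -- disables the rollback journal, not transactions
PRAGMA read_uncommitted = 1;   -- only meaningful in shared-cache mode

BEGIN;                         -- one SHARED lock acquired for the whole batch
SELECT count(*) FROM t1;
SELECT count(*) FROM t2;
COMMIT;                        -- lock released once, not per statement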
"Use transactions – even if you’re just reading the data. This may yield a few milliseconds."
I'm not sure where the Katashrophos.net blog is getting this information, but wrapping SELECT statements in transactions does nothing. Transactions are always and only used when making changes to the database, and transactions cannot be disabled. They are a requirement. What many don't understand is that unless you manually BEGIN and COMMIT a transaction, each statement is automatically put in its own unique transaction. Be sure to read the de facto SO question on improving sqlite performance. What the author of the blog might have been trying to say is that if you plan to do an INSERT, then a SELECT, then another INSERT, it would increase performance to manually wrap these statements in a single transaction. Otherwise sqlite will automatically put the two insert statements in separate unique transactions.
According to the "SQL as Understood by SQLite" documentation concerning transactions:
"No changes can be made to the database except within a transaction. Any command that changes the database (basically, any SQL command other than SELECT) will automatically start a transaction if one is not already in effect."
Lastly, disabling journaling via PRAGMA journal_mode = OFF does not disable transactions, only logging. But disabling the log is a good way to increase performance as well. Normally after each transaction, sqlite will document the transaction in the journal. When it doesn't have to do this, you get a performance boost.
UPDATE:
So it has been brought to my attention by "elegant dice" that the SQLite documentation statement I quote above is misleading. SELECT statements do in fact use the transaction system. This is used to acquire and release a SHARED lock on the database. As a result, it is indeed more efficient to wrap multiple SELECT statements in a single transaction. By doing so, the lock is only acquired and released once, rather than for each individual SELECT statement. This ends up being slightly more efficient while also assuring that all SELECT statements will access the same version of the database in case something has been added/deleted by some other program.

Solution for previewing user changes and allowing rollback/commit over a period of time

I have asked a few questions today as I try to think through the solution to a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily dependent upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other, non-database-related reasons for this which I won't go into here). While this user is making their changes, we want to be able to show them the original state of the entities they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to roll back or commit.
We have considered these options:
Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
Holding a copy of all the data in memory (or cached to disk) is an option, but there is a heck of a lot of it, so that seems unreasonable.
Maintaining a set of secondary tables, or attempting to use session state to store changes, is possible, but complex and difficult to maintain.
Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback. I.e. switching on/off, forcing snapshot, reversing direction etc.
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data as well as obtaining a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this gives a very straightforward way to lock/unlock access to specific entities.
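A rough sketch of what such a lock table and the acquire/release steps could look like (all names here are illustrative, not from the original system):
CREATE TABLE EntityLock (
    LockId     UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    EntityType NVARCHAR(64)     NOT NULL,
    EntityId   INT              NOT NULL,
    LockedAt   DATETIME2        NOT NULL DEFAULT SYSUTCDATETIME(),
    CONSTRAINT UQ_EntityLock UNIQUE (EntityType, EntityId)  -- one lock per entity
);

-- acquire: the unique constraint makes a second, concurrent lock attempt fail
INSERT INTO EntityLock (LockId, EntityType, EntityId)
VALUES (NEWID(), N'Customer', 42);

-- release: delete the lock record when the update is committed
DELETE FROM EntityLock WHERE EntityType = N'Customer' AND EntityId = 42;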
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
HTH
Consider this:
Long transactions make the system less scalable. If you run an UPDATE command, the update locks last until commit/rollback, preventing other transactions from proceeding.
Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in them. The only way around that is to lock them => see no. 1.
Serializable transactions in some database engines use versioned data from your tables, so after the first command is executed, the transaction sees exactly the data available at execution time. This might help you show the changes made by the user, but you have no guarantee of being able to save them back into storage.
DataSets contain old/new versions of the data, but that is unfortunately outside your technology stack.
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is theoretically possible, and is implemented in Oracle using flashback queries, SQL Server does not support this natively, since it has no means to query previous versions of records.
You can issue a query like this:
SELECT  *
FROM    mytable
        AS OF TIMESTAMP TO_TIMESTAMP('2010-01-17', 'YYYY-MM-DD')
in Oracle but not in SQL Server.
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
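One rough shape those tables could take (purely illustrative names and columns; the session ID identifies which edit owns the pending version):
-- pending versions of rows, keyed by the editing session that owns them
CREATE TABLE dbo.Orders_Pending (
    SessionId UNIQUEIDENTIFIER NOT NULL,
    OrderId   INT              NOT NULL,
    Amount    DECIMAL(18, 2)   NULL,    -- proposed new value
    PRIMARY KEY (SessionId, OrderId)
);

-- preview: the original value next to the proposed one
DECLARE @session UNIQUEIDENTIFIER = NEWID();  -- in practice, the caller's session
SELECT o.OrderId,
       o.Amount AS OriginalAmount,
       p.Amount AS ProposedAmount
FROM dbo.Orders AS o
JOIN dbo.Orders_Pending AS p ON p.OrderId = o.OrderId
WHERE p.SessionId = @session;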
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by #user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is that you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Available only in SQL Server 2005 and up, Enterprise edition.)
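The snapshot mechanics themselves are short. An illustrative sketch (database and file names are made up; the logical NAME must match the source database's data file):
-- take a point-in-time snapshot of the database
CREATE DATABASE MyDb_Snap ON
    (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Data.ss')
AS SNAPSHOT OF MyDb;

-- revert the whole database to the snapshot state if needed
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snap';

-- throw the snapshot away when done
DROP DATABASE MyDb_Snap;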
So:
A user comes along and initiates one of these meta-transactions.
A flag is marked in the database showing what is going on. A new transaction cannot be started if one is already in process. (Again, check for lost transactions now and then!)
Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
If the user decides to keep the transaction, you drop the snapshot, and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
Yep, this sure smells, and it may not apply too well to your problem. Hopefully the ideas here help you work something out.
