MariaDB table missing but I can't recreate it

Something went wrong during a structure synchronization between two databases.
One of our production databases is now missing a key table, 'customers', which just about every other table has foreign keys to.
I'm trying to recreate the table from last night's backup (I don't want to restore the entire db, just recreate this table, since its data does not change much and I don't want to lose today's transactional data).
The hassle seems to be that all the foreign key data for this table still exists in INFORMATION_SCHEMA.KEY_COLUMN_USAGE, and I am getting errno 121 and errno 150 errors when I try to run the CREATE TABLE query.
I've manually deleted all foreign keys to the missing table and I am still getting errno 150 when trying to recreate it. Any ideas where else there might be lost references to this table that are stopping me from creating it again?
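For what it's worth, the dangling references the question mentions can be listed straight from INFORMATION_SCHEMA.KEY_COLUMN_USAGE; a minimal sketch (the child table and constraint names below are placeholders, to be replaced with whatever the query returns):

SELECT TABLE_SCHEMA, TABLE_NAME, CONSTRAINT_NAME
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE REFERENCED_TABLE_NAME = 'customers';

ALTER TABLE some_child_table DROP FOREIGN KEY some_fk_name;  -- placeholder names; repeat per row returned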

This was eventually resolved by repeated consultation of SHOW ENGINE INNODB STATUS.
The missing table had various indexes; for example, on the customer name there was an index "customer_name_idx". The CREATE TABLE query asked for this index to be created, and SHOW ENGINE INNODB STATUS reported "could not create table because index customer_name_idx already exists."
There was no reference to this index, to any primary key, or to the table itself in any of the metadata tables I checked:
INFORMATION_SCHEMA.INNODB_SYS_INDEXES
INFORMATION_SCHEMA.TABLES
INFORMATION_SCHEMA.STATISTICS
so I could not explain why this error was being thrown.
My guess, after the fact, is that MySQL holds a cached copy of the information_schema metadata in memory and was consulting that, and perhaps it only gets refreshed when MySQL is restarted?
The solution was to give the indexes new names as a short-term fix, and to rename them back during our next scheduled downtime.
Once the renamed indexes were in place, the table was created and the backup data could be reinstated.
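A hedged sketch of that workaround: the column definitions here are invented for illustration, and only the renamed index reflects what the thread describes.

CREATE TABLE customers (
    customer_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- hypothetical columns
    customer_name VARCHAR(255) NOT NULL,
    INDEX customer_name_idx_v2 (customer_name)  -- renamed; customer_name_idx was rejected as "already exists"
) ENGINE=InnoDB;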

Related

MariaDB - Inserting historical data into a system versioned (temporal) table

I have some tables in MariaDB whose changes I have been tracking with a separate "changelog" table that is updated every time a record changes. However, I recently learned about temporal data tables in MariaDB, and I would like to switch to that method, as it is a much more elegant way of tracking changes. I'm wondering, however, if there is a way to transfer my "changelog" table over to the newly system-versioned tables.
So I was hoping I could somehow insert new rows with the specified values for the table, also specifying the row_end and row_start columns, and have that not trigger the table to create another historical row... is this possible? I tried just doing an "insert into (id, row_start, row_end, etc) values (x, y, z)", but that results in an unknown column "row_start" error.
Old question, but starting with 10.11, MariaDB allows direct insertion of historical data using a command-line option or setting.
https://mariadb.com/kb/en/system-versioned-tables/#system_versioning_insert_history
system_versioning_insert_history
Description: Allows direct inserts into ROW_START and ROW_END columns if secure_timestamp allows changing timestamp.
Commandline: --system-versioning-insert-history[={0|1}]
Scope: Global, Session
Dynamic: Yes
Type: Boolean
Default Value: OFF
Introduced: MariaDB 10.11.0
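With the setting enabled, inserting a historical row looks roughly like this. A sketch only: the table t and its id and data columns are hypothetical, and secure_timestamp must permit changing timestamps.

SET @@system_versioning_insert_history = ON;
INSERT INTO t (id, data, row_start, row_end)
VALUES (1, 'value from the old changelog', '2020-01-01 00:00:00', '2021-06-01 00:00:00');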

MariaDB remove foreign key to temporary table

Context:
I'm trying to upgrade a concrete5 installation from version 8.3.2 to 8.4.1. The upgrade process fails during execution of this SQL statement:
ALTER TABLE AreaLayoutsUsingPresets ADD CONSTRAINT FK_7A9049A1385521EA FOREIGN KEY (arLayoutID) REFERENCES AreaLayouts (arLayoutID) ON UPDATE CASCADE ON DELETE CASCADE
With:
SQLSTATE[HY000]: General error: 1005 Can't create table `concrete5`.`#sql-215_264a4` (errno: 121 "Duplicate key on write or update")
Investigating my database revealed that in information_schema in INNODB_SYS_FOREIGN there is the following entry:
ID                             FOR_NAME                  REF_NAME               N_COLS  TYPE
concrete5/FK_7A9049A1385521EA  concrete5/#sql-215_26264  concrete5/AreaLayouts  1       5
Problem:
Now my understanding is that I cannot modify information_schema, as it isn't a real database but just a tabular representation of the system.
I'm wondering how to get rid of that foreign key entry. The table concrete5/#sql-215_26264 does not exist: I can't find it on my server, and neither ALTER TABLE nor DROP TABLE finds it (I've tried with the #mysql50# prefix and without it). So the straightforward route of dropping the foreign key via ALTER TABLE fails because the table can't be found.
I guess I could mess with the upgrade script so that it creates a new foreign key ID, but I'd rather get rid of that zombie in my database. I've already tried disabling foreign key checks, which then resulted in an error telling me that the key cannot be added to the system tables (because it's already in there).
Reinstalling is rarely a cure for anything, but I am glad that it fixed your situation.
Table names such as #sql-... usually come from a crash in the middle of an ALTER or similar DDL. Such files can be removed. information_schema is derived from looking at the files, so I think removing the files will kill the zombie entries.
Either prefix the SQL import with SET FOREIGN_KEY_CHECKS=0;
or append ALTER TABLE ... DISABLE KEYS; to your query.
... and better dump the whole database before messing around.
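Spelled out, the first suggestion wraps the failing statement from the question like this (note the asker reported that disabling foreign key checks still failed in their case):

SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE AreaLayoutsUsingPresets
    ADD CONSTRAINT FK_7A9049A1385521EA FOREIGN KEY (arLayoutID)
    REFERENCES AreaLayouts (arLayoutID)
    ON UPDATE CASCADE ON DELETE CASCADE;
SET FOREIGN_KEY_CHECKS=1;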

MariaDB waits after canceling index creation

We have a MariaDB database running WordPress 4.8 and found a lot of transient-named records in the wp_options table. The table was cleaned up with a plugin and reduced from ~800K records down to ~20K records. We are still getting slow query entries regarding the table:
# User@Host: wmnfdb[wmnfdb] @ localhost []
# Thread_id: 950 Schema: wmnf_www QC_hit: No
# Query_time: 34.284704 Lock_time: 0.000068 Rows_sent: 1010 Rows_examined: 13711
SET timestamp=1510330639;
SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';
Found another post to create an index and did:
ALTER TABLE wp_options ADD INDEX (`autoload`);
That was taking too long and took the website offline. I found a lot of 'Waiting for table metadata lock' entries in the processlist. After canceling the ALTER TABLE, everything got running again, still with high load and, of course, entries in the slow query log. I also tried creating the index with the web server offline and a clean processlist. Should it take this long if I try to create it again tonight?
If you are deleting most of a table, it is better to create a new table, copy the desired rows over, then rename (see the sketch below). The unfortunate aspect is that any rows added or modified during those steps would not be reflected in the copied table. (A plus: you could have had the new index already in place.)
In my link, I give multiple ways to do big deletes.
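A sketch of the copy-and-rename approach for this case; the WHERE predicate deciding which rows to keep is hypothetical:

CREATE TABLE wp_options_new LIKE wp_options;
ALTER TABLE wp_options_new ADD INDEX autoload_idx (autoload);  -- new index already in place
INSERT INTO wp_options_new
    SELECT * FROM wp_options
    WHERE option_name NOT LIKE '\_transient\_%';  -- hypothetical keep-predicate
RENAME TABLE wp_options TO wp_options_old, wp_options_new TO wp_options;
DROP TABLE wp_options_old;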
What is probably hanging your system:
A big DELETE stashes away all the old values in case of a rollback -- which killing the DELETE invoked! It might have been faster to let it finish.
ALTER TABLE .. ADD INDEX -- If you are using MySQL 5.5 or older, that must copy the entire table over. Even if you are using a newer version (that can do ALGORITHM=INPLACE) there is still a metadata lock. How often is wp_options touched? (Sounds like too many times.)
Bottom line: If you recover from your attempts, but the delete is still to be done, pick the best approach in my link. After that, adding an index to only 20K rows should take some time, but not a deadly long time. And consider upgrading to 5.6 or newer.
If you need further discussion, please provide SHOW CREATE TABLE wp_options.
But wait! If autoload is a simple yes/no 'flag', the index might not be used. That is, it may be a waste to add the index! (For low cardinality, it is faster to do a table scan than to bounce back and forth between the index BTree and the data BTree.) Please provide a link to that post; I want to spit at them.

copy sqlite index from one database to another

I have a massive database (~800 GB) with several indexed tables. I need to copy one table (including indexes) to a new database. Copying the table itself is pretty straightforward.
$ sqlite3 newDB
> attach database 'oldDB.db' as oldDB;
> create table newTable as select * from oldDB.oldTable;
But I can't seem to find any information on a way to also copy over an index. Is there any way to do this? Since the tables are so large I'd really like to avoid having to re-index them.
SQLite has no mechanism to copy index contents.
If this particular table would be the majority of the data in the database, the fastest way to copy it would be to copy the database file and then to drop all other tables.
But otherwise, you cannot avoid the reindex operation.
Please note that CREATE TABLE ... AS ... does copy only the contents of the table, but not the complete table definition (such as column types or constraints).
Copying a large table in a single transaction is not a good idea. If you really have to, you should turn off journaling first (on the destination database):
PRAGMA journal_mode=OFF;
As the others have stated, the index contents cannot be broken out and copied. I suspect that the time spent copying the database and then dropping a very large table would be longer than just:
1. creating the new destination database,
2. determining the original CREATE TABLE statement (from the SQLITE_MASTER table of the source database) and recreating the table in the destination database, and
3. ATTACHing the destination database to the source database and running INSERT INTO destinationdb.tablename SELECT * FROM sourcedb.tablename; to get the copy rolling.
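Putting those steps together in the sqlite3 shell, in the style of the question's snippet (the capture-and-re-run steps are done by hand here):

$ sqlite3 oldDB.db
> select sql from sqlite_master where tbl_name = 'oldTable';
(copy the CREATE TABLE and CREATE INDEX statements it prints)
$ sqlite3 newDB
(run the captured CREATE TABLE statement first)
> attach database 'oldDB.db' as oldDB;
> insert into newTable select * from oldDB.oldTable;
(finally, run the captured CREATE INDEX statements; SQLite rebuilds each index from scratch)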

What methods are available to monitor SQL database records?

I would like to monitor 10 tables with 1000 records per table. I need to know when a record, and which record changed.
I have looked into SQL Dependencies; however, it appears that SQL Dependencies would only be able to tell me that the table changed, not which record changed. I would then have to compare all the records in the table to find the modified one. I suspect this would be a problem for me, as the records constantly change.
I have also looked into SQL triggers; however, I am not sure if triggers would work for monitoring which record changed.
Another thought I had, is to create a "Monitoring" table which would have records added to it via the application code whenever a record is modified.
Do you know of any other methods?
EDIT:
I am using SQL Server 2008
I have looked into Change Data Capture, which is available in SQL 2008 and was suggested by Martin Smith. Change Data Capture appears to be a robust, easy-to-implement and very attractive solution. I am going to roll out CDC on my database.
You can add triggers and have them add rows to an audit table. They can audit the primary key of the rows that changed, and even additional information about the changes. For instance, in the case of an UPDATE, they can record the columns that changed.
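A minimal, untested sketch of such an audit trigger, assuming a hypothetical dbo.Orders table with an Id primary key:

CREATE TABLE dbo.OrdersAudit (
    Id        INT,
    Operation CHAR(1),                               -- 'I'nsert, 'U'pdate, 'D'elete
    ChangedAt DATETIME2 DEFAULT SYSUTCDATETIME()
);
GO
CREATE TRIGGER dbo.trg_Orders_Audit
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted/deleted hold the new/old rows; a full outer join covers all three cases
    INSERT INTO dbo.OrdersAudit (Id, Operation)
    SELECT COALESCE(i.Id, d.Id),
           CASE WHEN i.Id IS NOT NULL AND d.Id IS NOT NULL THEN 'U'
                WHEN i.Id IS NOT NULL THEN 'I'
                ELSE 'D' END
    FROM inserted AS i
    FULL OUTER JOIN deleted AS d ON i.Id = d.Id;
END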
Before you write/implement your own, take a look at AutoAudit:
AutoAudit is a SQL Server (2005, 2008) Code-Gen utility that creates Audit Trail Triggers with:
Created, CreatedBy, Modified, ModifiedBy, and RowVersion (incrementing INT) columns to table
Insert event logged to Audit table
Updates old and new values logged to Audit table
Delete logs all final values to the Audit table
view to reconstruct deleted rows
UDF to reconstruct Row History
Schema Audit Trigger to track schema changes
Re-code-gens triggers when Alter Table changes the table
What version and edition of SQL Server? Is Change Data Capture available? – Martin Smith
I am using SQL 2008 which supports Change Data Capture. Change Data Capture is a very robust method for tracking data changes as I would like to. Thanks for the answer.
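For reference, enabling CDC takes two system procedure calls; a sketch with hypothetical database and table names:

USE MyDatabase;  -- hypothetical database name
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',   -- hypothetical table
    @role_name     = NULL;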
Here's an idea. You can have a flag column on each table that is filled with the current datetime every time a record is created or updated. Then, when you notice that a record has changed, set its flag to NULL again. Thus unchanged records have NULL in their flag field, and you can query for non-NULL values to see which records have changed or been created, and when (and then set their flags to NULL again).
