MariaDB - how to drop partition on master but not on slave

My use case is that I want to have a master with the main data organized into partitions - one partition per day (each day a new partition is created). The master should keep only the most recent 10 days of data, and I need the slaves to keep all of it.
How do I run ALTER TABLE ... DROP PARTITION on the master without it being replicated to the slaves?
It seems to me that I somehow have to instruct the master not to write this operation into the binary log, but how do I do that?

Altering a table to make it different between Master and Slave is asking for trouble. Subsequent operations may fail due to the differences. Caveat emptor.
SET sql_log_bin = OFF;
ALTER ...; -- or any other statement
SET sql_log_bin = ON;
That runs the query on the Master, but does not put it into the replication stream, thereby preventing it from being executed on the Slave(s).
https://dev.mysql.com/doc/refman/8.0/en/set-sql-log-bin.html
I feel sure that MariaDB works the same as MySQL.
If you are using some "cluster", such as Galera, then read about TOI and RSU. http://galeracluster.com/documentation-webpages/schemaupgrades.html
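To make that concrete, here is a rough sketch of the daily rotation; the table name, partition names, and RANGE(TO_DAYS(...)) partitioning are only assumptions for illustration:
-- Assumes a table `metrics` partitioned BY RANGE (TO_DAYS(created_at)).
-- Adding tomorrow's partition is done with binary logging on, so it replicates normally:
ALTER TABLE metrics ADD PARTITION
    (PARTITION p20240112 VALUES LESS THAN (TO_DAYS('2024-01-13')));
-- Dropping the oldest partition is hidden from the binlog, so the Slaves keep the data:
SET SESSION sql_log_bin = OFF;
ALTER TABLE metrics DROP PARTITION p20240102;
SET SESSION sql_log_bin = ON;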

Related

Why doesn't Recversion always change on record update?

I found this question on the old Dynamics AX forum and found the answer. I was not able to post it there because Microsoft is currently forcing everyone to move to their new Dynamics 365 version.
"I've noticed that Recversion on some records in the database aren't updating as I'd expect. What I'm trying to do is to rework how my company is moving data into our Data Warehouse for reporting purposes. Up until this point, the tables we use for reports are refreshed each night with data from our production environment. Data Warehouse tables are truncated, then reloaded fresh.
Naturally, as our production database grows, this is becoming less and less ideal. We're using SQL Server 2005 for our database, so we don't have access to the new SQL "Merge" statement, but we were able to achieve a similar effect. We match records between the databases by RecId, and are using Recversion to test whether a Data Warehouse record needs to be updated from production in the case of changes to the record (inserts are handled based on not finding a matching record in the data warehouse by RecId).
So the problem is, if Recversion isn't being updated all the time, this method is useless since we aren't accurately capturing all updates. The table we've noticed a problem with is InventSum. The only thing I can see different from most other tables is the concurrency setting; InventSum is set to use Pessimistic concurrency. Would this affect the behavior of when the Recversion value changes? What else might cause this value not to update regularly?
The test:
I tested this out by recording the Recversion value of an InventSum record on a particular item in a given warehouse. I then created a sales line for 100,000 of the item, which in turn updated the InventSum.ReservPhysical value. Despite the change in the reserved amount on InventSum, the Recversion remained unchanged from its original value. Picking the quantity also did not update the Recversion. Posting a packing slip DID cause the Recversion to change.
So why the difference?"
See the original question here
I've added "modifiedDateTime" to the InventSum table and altered the method InventUpdateOnhand.sqlUpdateInventSumStrSQLServer(). The line:
str sqls_base = 'UPDATE %1 SET %2
FROM (SELECT %3 FROM %4 WHERE %5 GROUP BY %6) AS %7 WHERE %8';
changed to
str sqls_base = 'UPDATE %1 SET %2,
    modifiedDateTime = GETUTCDATE()
    FROM (SELECT %3 FROM %4 WHERE %5 GROUP BY %6) AS %7 WHERE %8';
Then I added an index containing modifiedDateTime, DataAreaId, in that order.
This has helped the data warehouse people a lot.
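For illustration, a hypothetical nightly refresh using the new column might look like the following; the prod/dw schema names, the column list, and the @lastRefresh variable are made up, and only RecId, ReservPhysical and modifiedDateTime come from the discussion above:
-- Illustrative T-SQL only; adjust schemas, columns, and how @lastRefresh is bookmarked.
DECLARE @lastRefresh DATETIME;
SELECT @lastRefresh = MAX(modifiedDateTime) FROM dw.InventSum;

-- Update warehouse rows whose production counterpart changed since the last refresh.
UPDATE d
SET    d.ReservPhysical   = p.ReservPhysical,
       d.modifiedDateTime = p.modifiedDateTime
FROM   dw.InventSum AS d
JOIN   prod.InventSum AS p ON p.RecId = d.RecId
WHERE  p.modifiedDateTime > @lastRefresh;

-- Insert production rows that do not yet exist in the warehouse (matched by RecId).
INSERT INTO dw.InventSum (RecId, ReservPhysical, modifiedDateTime)
SELECT p.RecId, p.ReservPhysical, p.modifiedDateTime
FROM   prod.InventSum AS p
LEFT JOIN dw.InventSum AS d ON d.RecId = p.RecId
WHERE  d.RecId IS NULL;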

MariaDB waits after canceling index creation

We have a MariaDB database running WordPress 4.8 and found a lot of transient named records in the wp_options table. The table was cleaned up with a Plugin and reduced from ~800K records down to ~20K records. Still getting slow query entries regarding the table:
# User@Host: wmnfdb[wmnfdb] @ localhost []
# Thread_id: 950 Schema: wmnf_www QC_hit: No
# Query_time: 34.284704 Lock_time: 0.000068 Rows_sent: 1010 Rows_examined: 13711
SET timestamp=1510330639;
SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes';
Found another post to create an index and did:
ALTER TABLE wp_options ADD INDEX (`autoload`);
That was taking too long and took the website offline. I found a lot of 'Waiting for table metadata lock' entries in the processlist. After canceling the ALTER TABLE, everything was running again, still with high load and, of course, entries in the slow query log. I also tried creating the index with the web server offline and a clean processlist. Should it take this long if I try to create the index again tonight?
If you are deleting most of a table, it is better to create a new table, copy the desired rows over, then rename. The unfortunate aspect is that any added/modified rows during the steps would not get reflected in the copied table. (A plus: You could have had the new index already in place.)
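As a sketch of that approach (the index name and the WHERE clause for the "desired rows" are only examples; WordPress stores transients with a _transient_ prefix):
CREATE TABLE wp_options_new LIKE wp_options;
ALTER TABLE wp_options_new ADD INDEX idx_autoload (autoload);   -- new index already in place
INSERT INTO wp_options_new
    SELECT * FROM wp_options
    WHERE option_name NOT LIKE '\_transient\_%';                -- keep only the desired rows
RENAME TABLE wp_options TO wp_options_old, wp_options_new TO wp_options;
DROP TABLE wp_options_old;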
In this, I give multiple ways to do big deletes.
What is probably hanging your system:
A big DELETE stashes away all the old values in case of a rollback -- which killing the DELETE invoked! It might have been faster to let it finish.
ALTER TABLE .. ADD INDEX -- If you are using MySQL 5.5 or older, that must copy the entire table over. Even if you are using a newer version (that can do ALGORITHM=INPLACE) there is still a metadata lock. How often is wp_options touched? (Sounds like too many times.)
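For reference, on a version that supports online DDL you can spell out the expectation so the server errors out rather than silently copying the table; this is only a sketch (the index name is arbitrary) and it still needs that brief metadata lock:
ALTER TABLE wp_options
    ADD INDEX idx_autoload (autoload),
    ALGORITHM=INPLACE, LOCK=NONE;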
Bottom line: If you recover from your attempts, but the delete is still to be done, pick the best approach in my link. After that, adding an index to only 20K rows should take some time, but not a deadly long time. And consider upgrading to 5.6 or newer.
If you need further discussion, please provide SHOW CREATE TABLE wp_options.
But wait! If autoload is a simple yes/no 'flag', the index might not be used. That is, it may be a waste to add the index! (For low cardinality, it is faster to do a table scan than to bounce back and forth between the index BTree and the data BTree.) Please provide a link to that post; I want to spit at them.

Mariadb SELECT not failing on lock

I’m trying to cause a ‘SELECT’ query to fail if the record it is trying to read is locked.
To simulate this, I have added a trigger on UPDATE that sleeps for 20 seconds; then, in one thread (a Java application), I update a record (oid=53), and in another thread I perform the following query:
SET STATEMENT max_statement_time=1 FOR SELECT * FROM Jobs j WHERE j.oid = 53;
(Note: since my MariaDB server version is 10.2, I cannot use the SELECT ... NOWAIT option and must use SET STATEMENT max_statement_time=1 FOR ... instead.)
I would expect that the SELECT will fail since the record is in a middle of UPDATE and should be read/write locked, but the SELECT succeeds.
The SELECT fails only if I add FOR UPDATE to it. (But that is not a good option for me.)
I checked the INNODB_LOCKS table during this time and it was empty.
In the INNODB_TRX table I saw the transaction with isolation level – REPEATABLE READ, but I don’t know if it is relevant here.
Any thoughts, how can I make the SELECT fail without making it 'for update'?
Normally, consistent (and dirty) reads are non-locking; they just read some sort of snapshot, depending on your transaction isolation level. If you want to make the read wait for a concurrent transaction to finish, you need to set the isolation level to SERIALIZABLE and turn off autocommit in the connection that performs the read. Something like
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET STATEMENT max_statement_time=1 FOR ...
should do it.
Relevant page in MariaDB KB
Side note: my personal preference would be to use innodb_lock_wait_timeout=1 instead of max_statement_time=1. Both will make the statement fail, but innodb_lock_wait_timeout will cause an error code more suitable for the situation.
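A rough end-to-end sketch of that preference, reusing the Jobs table from the question:
-- Session performing the read, while the other session's UPDATE still holds a lock on oid=53:
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET autocommit = 0;
SET SESSION innodb_lock_wait_timeout = 1;
SELECT * FROM Jobs j WHERE j.oid = 53;   -- fails quickly with "Lock wait timeout exceeded"
COMMIT;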

Galera Cluster - Autocommit

I set up a Galera cluster with 2 nodes and disabled autocommit on both servers:
set autocommit=0;
I INSERT data at server1 and COMMIT, but server2 doesn't show the new data;
server2 needs a COMMIT; before the SELECT.
How do I see the new data without issuing a COMMIT, other than setting autocommit=1?
You may be referring to "critical read" issues, not autocommit. See the manual on wsrep_sync_wait, which should be SET to 1 before a SELECT that might be reading data from a node other than the one where the data was written. This makes sure the replication is caught up so that you get the 'right' answer.
My Galera blog discusses that aspect, and more.
If you need something other than a SELECT to wait, then use, say, 15 for the value in the SET.
(I prefer to explicitly use BEGIN instead of using autocommit=0; then I can pair up the BEGINs and COMMITs in code and not leave a transaction 'open' forever.)
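A minimal sketch of that on the node doing the read (the table name t and the id column are just placeholders):
-- On server2, before a read that must see a write just committed on server1:
SET SESSION wsrep_sync_wait = 1;   -- READ statements wait until replication has caught up
SELECT * FROM t WHERE id = 1;
SET SESSION wsrep_sync_wait = 0;   -- back to the default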

Sqlite3: Disabling primary key index while inserting?

I have an SQLite3 database with a table whose primary key consists of two integers, and I'm trying to insert lots of data into it (i.e. around 1 GB or so).
The issue I'm having is that creating the primary key also implicitly creates an index, which in my case bogs inserts down to a crawl after a few commits (and that would be because the database file is on NFS.. sigh).
So, I'd like to somehow temporarily disable that index. My best plan so far involved dropping the primary key's automatic index, but it seems that SQLite doesn't like that and throws an error if I attempt it.
My second-best plan would involve the application making transparent copies of the database on the network drive, making modifications and then merging them back. Note that, as opposed to most SQLite/NFS questions, I don't need access concurrency.
What would be a correct way to do something like that?
UPDATE:
I forgot to specify the flags I'm already using:
PRAGMA synchronous = OFF
PRAGMA journal_mode = OFF
PRAGMA locking_mode = EXCLUSIVE
PRAGMA temp_store = MEMORY
UPDATE 2:
I'm in fact inserting items in batches; however, each successive batch is slower to commit than the previous one (I'm assuming this has to do with the size of the index). I tried doing batches of between 10k and 50k tuples, each one being two integers and a float.
You can't remove the implicit index, since it's the only address of a row.
Merge your 2 integer keys into a single long key = (key1 << 32) + key2, and make that the INTEGER PRIMARY KEY in your schema (in that case you will have only one index).
Set the page size for the new DB to at least 4096.
Remove ANY additional index except the primary one.
Fill in the data in SORTED order so that the primary key is growing.
Reuse prepared statements; don't recreate them from strings each time.
Set the page cache size to as much memory as you have left (remember that cache size is in number of pages, not number of bytes).
Commit every 50,000 items.
If you have additional indexes, create them only AFTER ALL the data is in the table.
If you are able to merge the keys (I think you're using 32-bit values, while SQLite uses 64-bit, so it's possible) and fill the data in sorted order, I bet you will fill your first GB with the same performance as the second, and both will be fast enough.
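A minimal sketch of that layout (the table and column names are invented; the application computes the merged key and supplies rows in ascending id order):
PRAGMA page_size = 4096;            -- must be set before the first table is created
CREATE TABLE samples (
    id    INTEGER PRIMARY KEY,      -- (key1 << 32) + key2, computed by the application
    value REAL
);
BEGIN TRANSACTION;
INSERT INTO samples (id, value) VALUES ((1 << 32) + 7, 0.5);
-- ... more INSERTs, in ascending id order ...
COMMIT;                             -- commit roughly every 50,000 rows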
Are you doing the INSERT of each new row as an individual transaction?
If you use BEGIN TRANSACTION and INSERT rows in batches, then I think the index will only get rebuilt at the end of each transaction.
See faster-bulk-inserts-in-sqlite3.
