With normal MySQL replication we can ignore tables with replicate_ignore_table.
I can't find any information on whether or not it's possible to do this with Galera cluster replication.
I'd like to ignore a table that is not important so that no cluster wide locks have to be acquired when performing an action on the database.
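For context, in classic asynchronous replication that rule would normally sit in the replica's my.cnf; a minimal sketch (the database and table names are placeholders):

[mysqld]
# classic async replication only: skip applying changes to this one table on the replica
replicate-ignore-table=mydb.unimportant_log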
One workaround is to make the table MyISAM or MEMORY.
To ignore a table that is not important, convert it to MyISAM; Galera will then skip it automatically, since by default only InnoDB tables are replicated.
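A minimal sketch of that workaround (the table name is a placeholder, and this assumes wsrep_replicate_myisam is left at its default of OFF):

-- convert the unimportant table so Galera stops replicating its row changes
ALTER TABLE mydb.unimportant_log ENGINE=MyISAM;

-- confirm MyISAM replication is disabled (OFF is the default)
SHOW GLOBAL VARIABLES LIKE 'wsrep_replicate_myisam';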
Related
There are some groups of queries that include creating tables, fields, etc. How can I implement a mechanism so that a group of queries is either executed in full or, if an error occurs anywhere, cancelled in full? That is, the principle of transactionality, but with ALTER TABLE queries, for example (which commit implicitly).
You are talking about COMMIT and ROLLBACK.
However, if you are mixing normal SQL with DDL (CREATE/ALTER/DROP, etc.), the DDL commands have an implied COMMIT as part of their execution, and you cannot avoid it. So this will likely cause you problems if mixed with your normal insert/update/delete type queries.
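To illustrate the implicit commit (the table and column are made up for the example):

START TRANSACTION;
INSERT INTO orders (id) VALUES (1);
ALTER TABLE orders ADD COLUMN note VARCHAR(255);  -- implicit COMMIT happens here
ROLLBACK;  -- too late: the INSERT above was already committed by the ALTER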
Transactional DDL is not possible until MySQL 8.0. In that version, there is a "Data Dictionary" that is stored in InnoDB tables. (I don't want to think about the bootstrap required!) The DD allows, at least in theory, the ROLLBACK of DDL actions such as ALTER and DROP TABLE. Study the 8.0 docs to see if it does enough for your needs.
In the past (pre-8.0), many DDL operations were at least reasonably "crash safe". For example, an ALTER used to copy the table over, then quickly swap in the new table. This provided a reasonably good way to recover from a crash during an ALTER.
There have been a lot of major improvements to ALTER since 5.6. Before then, the only really "instant" alter was adding an option to an ENUM, and that had caveats. There are still things that mandate a complete, time-consuming, rebuild of the table -- such as any change to the PRIMARY KEY.
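As a rough illustration of how the newer ALTER paths are requested (the table is hypothetical; ALGORITHM=INPLACE needs 5.6+, ALGORITHM=INSTANT needs 8.0.12+):

-- ask for an in-place (non-copying) index build; errors out if it cannot be done that way
ALTER TABLE orders ADD INDEX idx_note (note), ALGORITHM=INPLACE, LOCK=NONE;

-- 8.0.12+: metadata-only column addition
ALTER TABLE orders ADD COLUMN created_at DATETIME, ALGORITHM=INSTANT;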
DDL operations should be minimized; they are not designed for frequent use.
I've got a 2-node Galera setup and I'm having this weird issue where creating databases/tables works and replicates across the nodes, meaning that if you create a database or a table under a database on node 1, it will appear on node 2. But when data is inserted into database.table on node 1, it does not appear on node 2.
What could be the issue?
I figured out the problem. I was trying to load MYISAM dumps without converting them to INNODB.
Problem Solved and here is the quick conversion for anybody else impacted:
sed -i.bak 's#MyISAM#innodb#g' MIGRATION_DATA.sql
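After loading, a quick sanity check that nothing was left on MyISAM (standard information_schema query; adjust the schema filter to taste):

SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine = 'MyISAM'
  AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');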
I tried adapting a solution for MySQL, but it turns out information_schema.innodb_table_stats is empty. SHOW INDEX FROM schema_name.table_name doesn't cut it either, since it only shows cardinality.
mysql.innodb_table_stats is not available until MySQL 5.6 and MariaDB 10.0. Before that...
SHOW TABLE STATUS LIKE 'tablename';
will provide Index_length, which is the amount of space taken by all indexes except the PRIMARY KEY (in the case of InnoDB). There was no way to get the sizes of individual secondary indexes. The PK is "clustered" with the data, so it takes very little space in addition to the data itself.
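On 5.6 / MariaDB 10.0 and later, per-index sizes can be approximated from the persistent statistics tables; a sketch (the schema and table names are placeholders, and stat_value for stat_name = 'size' is in pages, hence the multiplication by the page size):

SELECT index_name,
       stat_value * @@innodb_page_size AS approx_bytes
FROM mysql.innodb_index_stats
WHERE database_name = 'schema_name'
  AND table_name = 'table_name'
  AND stat_name = 'size';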
It is generally inadvisable to have a Slave running an older version than the Master, if that is what you are doing.
What are you really looking for?
I experienced a scenario where a select count(*) on a table every minute (yes, this should definitely be avoided) caused a huge increase in Cassandra writes to around 150K writes per second.
Can anyone explain this weird behavior? Why would a Select query significantly increase write count in Cassandra?
Thanks!
If you check
org.apache.cassandra.metrics:type=ReadRepair,name=RepairedBackground
and
org.apache.cassandra.metrics:type=ReadRepair,name=RepairedBlocking
metrics, you can see if it's read repairs sending mutations. Perhaps reading all the data to service the count(*) is causing a lot of read repairs if your data is inconsistent. If that's the case, lowering the read_repair_chance and dclocal_read_repair_chance on the table (ALTER TABLE) could reduce load.
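A sketch of that ALTER (keyspace and table names are placeholders; these options exist in Cassandra 3.x and were removed in 4.0):

ALTER TABLE my_keyspace.my_table
  WITH read_repair_chance = 0.0
  AND dclocal_read_repair_chance = 0.0;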
Other likely possibilities are:
You have tracing enabled (either globally or on the table) as some %.
Or you use DSE and have slow queries enabled.
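If global tracing is the suspect, the probability can be checked and switched off with nodetool (assuming you have nodetool access to the nodes):

nodetool gettraceprobability
nodetool settraceprobability 0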
A possible explanation could be found in the write path of an update:
During a write, Cassandra adds each new row to the database without checking on whether a duplicate record exists. This policy makes it possible that many versions of the same row may exist in the database.
Then
Most Cassandra installations store replicas of each row on two or more nodes. Each node performs compaction independently. This means that even though out-of-date versions of a row have been dropped from one node, they may still exist on another node.
And finally:
This is why Cassandra performs another round of comparisons during a read process. When a client requests data with a particular primary key, Cassandra retrieves many versions of the row from one or more replicas.
I have the task to do consistent mysqldumps across tables, so that one database is always consistent with itself (all tables inside).
Now I read that there are two options: --single-transaction for InnoDB and --lock-tables for all others.
My questions are:
Can I simply check if all tables of one database use InnoDB and, if so, apply --single-transaction to that database?
If any table inside one database is not using the InnoDB engine, can I simply apply --lock-tables?
Would the above two cases guarantee me to always have consistent database backups across tables?
Update:
By consistent dumps I mean, that once the backup process is started, it will dump the current state of one database and no other operation (which might take place at the same time) can interfere with the current state.
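A minimal sketch of the check-then-dump approach described above (the database name and output files are placeholders; both mysqldump flags are standard):

# does the database contain anything other than InnoDB?
mysql -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'mydb' AND engine <> 'InnoDB'"

# all InnoDB: snapshot-based consistent dump without blocking writers
mysqldump --single-transaction mydb > mydb.sql

# mixed engines: lock the tables for the duration of the dump instead
mysqldump --lock-tables mydb > mydb.sql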