InnoDB with full-text search

We have to convert MyISAM tables to InnoDB on a Percona 5.6.13 server. These MyISAM tables use full-text indexing. If I convert these tables to InnoDB, will it impact query performance?
Thanks

If you convert MyISAM tables to InnoDB, memory usage shifts: the MyISAM key buffer can be reduced, while the InnoDB buffer pool needs to grow. Also see
Converting tables from MyISAM to InnoDB
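Note that InnoDB supports FULLTEXT indexes as of MySQL 5.6 (and thus Percona Server 5.6), so the indexes themselves survive the conversion, although ranking and stopword behaviour differ from MyISAM. A minimal sketch, assuming a hypothetical table articles with a FULLTEXT index on (title, body):

ALTER TABLE articles ENGINE=InnoDB;
-- full-text queries keep working against the converted table
SELECT * FROM articles
WHERE MATCH(title, body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

Benchmark your real queries afterwards: InnoDB full-text search is tuned through its own innodb_ft_* variables rather than the MyISAM-era ft_* ones, so performance can change in either direction.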

Related

Querying database size in SQLite

Is there any way to query physical database size per table in SQLite?
In MySQL, there's a meta-table called information_schema.TABLES that lists physical table sizes in bytes. PostgreSQL has a similar meta-table called pg_tables. Is there anything like this in SQLite?
If you know where the database is, it's just a file. For example:
$ wc -c db/development.sqlite3
2338816 db/development.sqlite3
The sqlite3_analyzer tool outputs lots of information about a database, among them the amount of storage used by each table.
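If your SQLite build enables the dbstat virtual table (the same mechanism sqlite3_analyzer is built on), you can also query per-table storage directly; a sketch:

-- requires a build with SQLITE_ENABLE_DBSTAT_VTAB
SELECT name, SUM(pgsize) AS size_bytes
FROM dbstat
GROUP BY name
ORDER BY size_bytes DESC;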

Can I change phpMyAdmin's default queries?

I have several large InnoDB tables (over 500M records). When I click on one to view it, the system takes forever (a couple of minutes) to return the first 30 rows. I went into my shell program and saw that phpMyAdmin was doing a SELECT COUNT(*) on the table. This dies on InnoDB. I do have the table indexed by the primary key, which is an auto-increment id.
Is there any way to change this so that phpMyAdmin is actually useful for large InnoDB tables? It worked fine for MyISAM, as COUNT(*) performs well there. With InnoDB you need to count an indexed column, such as COUNT(id).
Upgrade your phpMyAdmin version if it's outdated; newer releases fall back to the estimated row count for large InnoDB tables instead of running an exact COUNT(*).
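For a quick approximate count without COUNT(*), you can read the optimizer's estimate from information_schema yourself ('mydb' and 'mytable' are placeholders):

SELECT TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';

For InnoDB, TABLE_ROWS is only an estimate, but it returns instantly, which is the same trade-off newer phpMyAdmin versions make.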

Consistent mysqldumps across tables: --lock-tables vs --single-transaction

I have the task of making consistent mysqldumps across tables, so that each database is always consistent with itself (all tables inside).
Now I read that there are two options: --single-transaction for InnoDB and --lock-tables for everything else.
My questions are:
Can I simply check whether all tables of one database use InnoDB and, if so, apply --single-transaction to that database?
If any table inside one database does not use the InnoDB engine, can I simply apply --lock-tables?
Would these two cases guarantee consistent database backups across tables?
Update:
By consistent dumps I mean that once the backup process is started, it dumps the current state of one database, and no other operation (which might take place at the same time) can interfere with that state.
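A sketch of how this could be wired up ('mydb' is a placeholder): first check which engines the database uses, then pick the flag accordingly:

SELECT ENGINE, COUNT(*) AS table_count
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_TYPE = 'BASE TABLE'
GROUP BY ENGINE;

$ mysqldump --single-transaction mydb > mydb.sql   # all tables are InnoDB
$ mysqldump --lock-tables mydb > mydb.sql          # mixed engines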

Galera replication ignore table

With normal MySQL replication we can ignore tables with: replicate_ignore_table
I can't find any information on whether or not it's possible to do this with Galera cluster replication.
I'd like to ignore a table that is not important so that no cluster wide locks have to be acquired when performing an action on the database.
One workaround is to make the table MyISAM or MEMORY.
To ignore a table that is not important, convert it to MyISAM; Galera does not replicate MyISAM tables by default, so the cluster ignores it automatically.
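A sketch, assuming a hypothetical table unimportant_table; the wsrep_replicate_myisam variable controls whether MyISAM writes are replicated at all:

SHOW GLOBAL VARIABLES LIKE 'wsrep_replicate_myisam';  -- OFF by default
ALTER TABLE unimportant_table ENGINE=MyISAM;

Keep the usual MyISAM caveats in mind (no transactions, table-level locks) before converting.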

How can I improve performance while altering a large MySQL table?

I have 600 million records in a table and I am not able to add a column to it, as every time I try, the operation times out.
Suppose your MySQL database has a giant table with 600 million rows. Any schema operation on it, such as adding a unique key, altering a column, or even adding one more column, is a cumbersome process that can take hours and sometimes hits a server timeout. To overcome that, you have to come up with a good migration plan, one of which I am jotting down below.
1) Suppose there is a table Orig_X to which I have to add a new column colNew with a default value of 0.
2) A dummy table Dummy_X is created, which is a replica of Orig_X except for the new column colNew.
3) Data is inserted from Orig_X into Dummy_X with the following settings.
4) Autocommit is set to zero, so that data is not committed after each insert statement, which would hinder performance.
5) Binary logging is set to zero, so that nothing is written to the binary log during the copy.
6) After the data is inserted, both features are set back to one.
SET AUTOCOMMIT = 0;
SET sql_log_bin = 0;
INSERT INTO Dummy_X (col1, col2, col3, colNew)
SELECT col1, col2, col3, 0 FROM Orig_X;
SET sql_log_bin = 1;
SET AUTOCOMMIT = 1;
7) Now the primary key can be created on Dummy_X, with the newly added column included if it is to be part of the key.
8) All the unique keys can now be created.
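For example, steps 7 and 8 might look like this (the key columns here are hypothetical; use whatever Orig_X actually defines):

ALTER TABLE Dummy_X
ADD PRIMARY KEY (col1, colNew),
ADD UNIQUE KEY uk_col2 (col2);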
9) We can check the server's binary log status by issuing the following command:
SHOW MASTER STATUS;
10) It's also helpful to issue FLUSH LOGS so MySQL rotates to fresh log files.
11) To boost performance when running many similar queries, such as the insert statement above, the query cache should be enabled:
SHOW VARIABLES LIKE 'have_query_cache';
query_cache_type = 1 (in my.cnf)
Those were the steps of the migration strategy for a large table. Below are some steps to improve the performance of the database and its queries.
1) Remove any unnecessary indexes on the table, paying particular attention to UNIQUE indexes, as these disable change buffering. Don't use a UNIQUE index if you have no reason for that constraint; prefer a regular INDEX.
2) If bulk loading a fresh table, delay creating any indexes besides the PRIMARY KEY. If you create them after all the data is loaded, InnoDB can apply a pre-sorted bulk load, which is both faster and typically results in more compact indexes.
3) More memory can actually help performance. If SHOW ENGINE INNODB STATUS shows any reads/s under BUFFER POOL AND MEMORY and the number of Free buffers (also under BUFFER POOL AND MEMORY) is zero, you could benefit from more memory (assuming you have sized innodb_buffer_pool_size correctly for your server).
4) Normally each insert is committed (and flushed to disk) individually. That is heavy lifting for your database, but when your inserts are wrapped inside a transaction, the commit overhead is paid only once for the entire batch, saving a lot of work.
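A minimal sketch of such batching, reusing the hypothetical Dummy_X table from above:

START TRANSACTION;
INSERT INTO Dummy_X (col1, col2, col3, colNew) VALUES (1, 'a', 'b', 0);
INSERT INTO Dummy_X (col1, col2, col3, colNew) VALUES (2, 'c', 'd', 0);
-- ... many more inserts ...
COMMIT;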
5) Most MySQL servers have query caching enabled. It's one of the most effective methods of improving performance that is quietly handled by the database engine. When the same query is executed multiple times, the result is fetched from the cache, which is quite fast.
6) Using the EXPLAIN keyword can give you insight into what MySQL is doing to execute your query. This can help you spot bottlenecks and other problems with your query or table structures. The results of an EXPLAIN query will show you which indexes are being used, how the table is being scanned and sorted, and so on.
7) If your application contains many JOIN queries, you need to make sure that the columns you join by are indexed on both tables. This affects how MySQL internally optimizes the join operation.
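For example, with hypothetical tables orders and customers joined on customer_id (customers.id is assumed to already be the primary key):

ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);
EXPLAIN SELECT o.id, c.name
FROM orders o JOIN customers c ON c.id = o.customer_id;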
8) Every table should have an id column that is the PRIMARY KEY, AUTO_INCREMENT, and one of the flavors of INT, preferably UNSIGNED, since the value cannot be negative.
9) Even if you have a users table with a unique username field, do not make that your primary key. VARCHAR fields as primary keys are slower, and you will have a better structure in your code by referring to all users by their ids internally.
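A sketch of points 8 and 9 together, using a hypothetical users table:

CREATE TABLE users (
id INT UNSIGNED NOT NULL AUTO_INCREMENT,
username VARCHAR(64) NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY uk_username (username)
) ENGINE=InnoDB;

The UNIQUE KEY still enforces username uniqueness; the surrogate id just keeps the primary key small and fixed-width.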
10) Normally when you perform a query from a script, it will wait for the execution of that query to finish before it can continue. You can change that by using unbuffered queries. This saves a considerable amount of memory with SQL queries that produce large result sets, and you can start working on the result set immediately after the first row has been retrieved as you don't have to wait until the complete SQL query has been performed.
11) With database engines, disk is perhaps the most significant bottleneck. Keeping things smaller and more compact is usually helpful in terms of performance, to reduce the amount of disk transfer.
12) The two main storage engines in MySQL are MyISAM and InnoDB. Each has its own pros and cons. MyISAM is good for read-heavy applications, but it doesn't scale very well when there are a lot of writes: even if you are updating one field of one row, the whole table gets locked, and no other process can even read from it until that query is finished. MyISAM is very fast at calculating SELECT COUNT(*) types of queries. InnoDB tends to be a more complicated storage engine and can be slower than MyISAM for most small applications, but it supports row-based locking, which scales better. It also supports more advanced features such as transactions.
