I have several large InnoDB tables (over 500M records). When I click on one to view it, the system takes forever (a couple of minutes) to return the first 30 rows. I went into my shell program and saw that phpMyAdmin was doing SELECT COUNT(*) on the table. This DIES in InnoDB. I do have the table indexed by the primary key, which is an auto-increment id.
Is there any way to change this so that phpMyAdmin is actually useful for large InnoDB tables? It worked fine for MyISAM, which stores an exact row count, so COUNT(*) is essentially free there; InnoDB keeps no such count, so even COUNT(id) on the indexed primary key still has to scan the whole index.
Upgrade your phpMyAdmin version if it's outdated.
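Recent phpMyAdmin releases avoid an exact COUNT(*) on large InnoDB tables and fall back to the estimated row count from the table metadata. You can read that same estimate yourself; a minimal sketch, with placeholder schema and table names:
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_db'
  AND TABLE_NAME = 'your_table';
For InnoDB, TABLE_ROWS is only an approximation, but it is returned instantly.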
I tried adapting a solution for MySQL, but it turns out information_schema.innodb_table_stats is empty. SHOW INDEX FROM schema_name.table_name doesn't cut it either, as it only shows cardinality.
mysql.innodb_table_stats is not available until MySQL 5.6 and MariaDB 10.0. Before that...
SHOW TABLE STATUS LIKE 'tablename';
will provide Index_length, which is the amount of space taken by all indexes except the PRIMARY KEY (in the case of InnoDB). Before those versions there was no way to get the sizes of individual secondary indexes. The PK is "clustered" with the data, so it takes very little space in addition to the data itself.
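On MySQL 5.6+ and MariaDB 10.0+, per-index sizes are exposed in mysql.innodb_index_stats; a sketch with placeholder schema and table names (the size is reported in pages, so multiply by the InnoDB page size, 16 KB by default, to get bytes):
SELECT index_name,
       stat_value AS pages,
       stat_value * @@innodb_page_size AS bytes
FROM mysql.innodb_index_stats
WHERE database_name = 'schema_name'
  AND table_name = 'table_name'
  AND stat_name = 'size';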
It is generally inadvisable to have a slave running an older version than the master, if that is what you are doing.
What are you really looking for?
Is there any way to query physical database size per table in SQLite?
In MySQL, there's a meta-table called information_schema.TABLES that lists physical table sizes in bytes. PostgreSQL exposes similar information through pg_class and functions such as pg_total_relation_size(). Is there anything like this in SQLite?
If you know where the database is, it's just a file. For example:
$ wc -c db/development.sqlite3
2338816 db/development.sqlite3
The sqlite3_analyzer tool outputs lots of information about a database, among them the amount of storage used by each table.
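If your SQLite build includes the DBSTAT virtual table (it requires the SQLITE_ENABLE_DBSTAT_VTAB compile-time option), per-table storage can also be queried directly; a sketch:
SELECT name, SUM(pgsize) AS bytes
FROM dbstat
GROUP BY name
ORDER BY bytes DESC;
Each row sums the pages used by one table or index under its name.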
I have the task of making consistent mysqldumps across tables, so that each database is always consistent with itself (all the tables inside it).
Now I read that there are two options: --single-transaction for InnoDB and --lock-tables for all other engines.
My questions are:
Can I simply check whether all tables of one database use InnoDB (see the query sketch below the update) and, if so, apply --single-transaction to that database?
If any table inside the database is not using the InnoDB engine, can I simply apply --lock-tables?
Would the above two cases guarantee me to always have consistent database backups across tables?
Update:
By consistent dumps I mean that once the backup process is started, it dumps the current state of one database, and no other operation (which might take place at the same time) can interfere with that state.
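To check whether every table in a database uses InnoDB (the first question above), the engine of each table can be read from information_schema; a sketch with a placeholder schema name:
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_db'
  AND TABLE_TYPE = 'BASE TABLE'
  AND ENGINE <> 'InnoDB';
If this returns no rows, every table is InnoDB and --single-transaction gives a consistent snapshot of that database; otherwise --lock-tables (or --lock-all-tables for consistency across databases) is the fallback.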
I have 600 million records in a table and I am not able to add a column to it, because every time I try, the operation times out.
Suppose your MySQL database has a giant table with 600 million rows. Any schema operation on it, such as adding a unique key, altering a column, or even adding one more column, is a very cumbersome process that can take hours and sometimes ends in a server timeout. To overcome that, you have to come up with a very good migration plan; one such plan is jotted down below.
1) Suppose there is a table Orig_X to which I have to add a new column colNew with default value 0.
2) A dummy table Dummy_X is created as a replica of Orig_X, except with the new column colNew (a sketch follows just before the copy code below).
3) Data is inserted from Orig_X into Dummy_X with the following settings:
4) Autocommit is set to zero, so that data is not committed after each insert statement, which would hurt performance.
5) Binary logging is turned off for the session, so that the copied rows are not written to the binary log.
6) After the data is inserted, both settings are turned back on.
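A sketch of step 2, with hypothetical column types (the real definition should mirror Orig_X); the keys are deliberately left off here, since steps 7 and 8 add them after the load:
CREATE TABLE Dummy_X (
  col1   INT NOT NULL,              -- hypothetical columns mirroring Orig_X
  col2   VARCHAR(100),
  col3   DATETIME,
  colNew INT NOT NULL DEFAULT 0     -- the new column, defaulting to 0
) ENGINE=InnoDB;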
SET AUTOCOMMIT = 0;
SET sql_log_bin = 0;
INSERT INTO Dummy_X (col1, col2, col3, colNew)
SELECT col1, col2, col3, 0 FROM Orig_X;
COMMIT;
SET sql_log_bin = 1;
SET AUTOCOMMIT = 1;
7) Now the primary key can be created, including the newly inserted column if it is meant to be part of the key.
8) All the unique keys can now be created.
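A sketch of steps 7 and 8, with hypothetical key columns:
ALTER TABLE Dummy_X
  ADD PRIMARY KEY (col1),
  ADD UNIQUE KEY uq_col2 (col2);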
9) We can check the status of the server by issuing the following command
SHOW MASTER STATUS
10) It’s also helpful to issue FLUSH LOGS so MySQL closes and reopens (rotates) its log files.
11) To boost the performance of repeated SELECT queries of the same form (the query cache does not help the insert itself, and it was removed entirely in MySQL 8.0), check that the query cache is enabled:
SHOW VARIABLES LIKE 'have_query_cache';
query_cache_type = 1    # in the [mysqld] section of my.cnf
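Once the copy is complete and verified, the populated dummy table still has to take the place of the original; a sketch, assuming writes to Orig_X were stopped for the duration of the copy (otherwise rows changed in the meantime would be lost):
RENAME TABLE Orig_X TO Orig_X_old,
             Dummy_X TO Orig_X;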
Those were the steps of the migration strategy for the large table; below are some steps to improve the performance of the database and its queries.
1) Remove any unnecessary indexes on the table, and pay particular attention to UNIQUE indexes, as these disable change buffering. Don't use a UNIQUE index if you have no reason for that constraint; prefer a regular INDEX.
2) If bulk loading a fresh table, delay creating any indexes besides the PRIMARY KEY. If you create them after all the data is loaded, InnoDB can apply a pre-sort and bulk-load process, which is both faster and typically results in more compact indexes.
3) More memory can actually help performance. If SHOW ENGINE INNODB STATUS shows any reads/s under BUFFER POOL AND MEMORY and the number of Free buffers (also under BUFFER POOL AND MEMORY) is zero, you could benefit from more memory (assuming you have already sized innodb_buffer_pool_size correctly for your server).
4) Normally each INSERT is committed, and flushed to disk, on its own. That's heavy lifting for your database, but when your inserts are wrapped inside a transaction, the commit and flush happen only once for the entire batch, saving a lot of work.
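A minimal sketch of batching inserts in one transaction (table and values are hypothetical):
START TRANSACTION;
INSERT INTO Orig_X (col1, col2, col3) VALUES (1, 'a', NOW());
INSERT INTO Orig_X (col1, col2, col3) VALUES (2, 'b', NOW());
-- ... many more inserts ...
COMMIT;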
5) Many MySQL servers have query caching available (the feature was removed in MySQL 8.0). It's one of the most effective ways to improve performance, handled quietly by the database engine: when the same query is executed multiple times, the result is fetched from the cache, which is quite fast.
6) Using the EXPLAIN keyword can give you insight into what MySQL is doing to execute your query. This can help you spot bottlenecks and other problems with your query or table structures. The output of EXPLAIN shows which indexes are being used and how the table is being scanned and sorted.
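For example, a hypothetical query against the table above:
EXPLAIN SELECT col1, col2
FROM Orig_X
WHERE col2 = 'some value';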
7) If your application contains many JOIN queries, make sure that the columns you join on are indexed in both tables. This affects how MySQL internally optimizes the join operation.
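A sketch with hypothetical users and orders tables: if queries frequently join orders to users on orders.user_id, then users.id (the PRIMARY KEY) and orders.user_id should both be indexed.
ALTER TABLE orders ADD INDEX idx_user_id (user_id);
SELECT u.id, o.id
FROM users u
JOIN orders o ON o.user_id = u.id;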
8) Give every table an id column that is the PRIMARY KEY, AUTO_INCREMENT, and one of the flavors of INT, preferably UNSIGNED, since the value cannot be negative.
9) Even if you have a users table with a unique username field, do not make that your primary key. VARCHAR fields make for slower primary keys, and your code will have a better structure if it refers to all users by their id internally.
10) Normally, when you perform a query from a script, the script waits for that query to finish before it can continue. You can change that by using unbuffered queries. This saves a considerable amount of memory for queries that produce large result sets, and you can start working on the result set immediately after the first row has been retrieved, since you don't have to wait for the complete result set.
11) With database engines, disk is perhaps the most significant bottleneck. Keeping things smaller and more compact is usually helpful in terms of performance, to reduce the amount of disk transfer.
12) The two main storage engines in MySQL are MyISAM and InnoDB. Each has its own pros and cons. MyISAM is good for read-heavy applications, but it doesn't scale very well when there are a lot of writes: even if you are updating one field of one row, the whole table gets locked, and no other process can even read from it until that query is finished. MyISAM is very fast at calculating
SELECT COUNT(*)
types of queries. InnoDB tends to be a more complicated storage engine and can be slower than MyISAM for most small applications, but it supports row-based locking, which scales better. It also supports more advanced features such as transactions.
Afaik, SQLite stores a single database in a single file. Since this would decrease the performance when working with large databases, is it possible to explicitly tell SQLite not to store the whole DB in a single file and store different tables in different files instead?
I found out that it is possible.
Use:
sqlite3.exe MainDB.db
ATTACH DATABASE 'SomeTableFile.db' AS stf;
Access the table from the other database file:
SELECT * FROM stf.SomeTable;
You can even join over several files:
SELECT *
FROM MainTable mt
JOIN stf.SomeTable st
ON (mt.id = st.mt_id);
https://www.sqlite.org/lang_attach.html
tameera said there is a limit of 62 attached databases but I never hit that limit so I can't confirm that.
The big advantage, besides some special cases, is that you limit fragmentation in the database files, and you can run the VACUUM command separately on each attached file!
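On reasonably recent SQLite versions (3.15 or later), VACUUM accepts a schema name, so each attached file can be compacted on its own; a sketch using the stf alias from above:
VACUUM stf;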
If you don't need a join between these tables, you can manually split the DB and decide which tables live in which DB (= file).
I don't think it's possible to have SQLite split your DB into multiple files automatically, because you connect to a DB by specifying its filename.
SQLite database files can grow quite large without any performance penalties.
The things that might degrade performance are:
file-locking contention
table size (if using indexes and issuing write queries)
Also, by default, SQLite limits the number of attached databases to 10.
Anyway, try partitioning your tables. You'll see that SQLite can grow enormously this way.