I have an old MySQL query to find a free ID that has always worked fine (0.0325 seconds). But now that I have switched to MariaDB it is extremely slow (over 30 seconds):
SELECT START AS codice
FROM (
    SELECT codice + 1 AS START
    FROM `clfoco`
    WHERE SUBSTRING(codice, 9) BETWEEN '0000001' AND '9999999'
      AND codice LIKE '1201__%'
) AS a
LEFT JOIN (
    SELECT codice
    FROM `clfoco`
    WHERE SUBSTRING(codice, 9) BETWEEN '0000001' AND '9999999'
      AND codice LIKE '1201__%'
) AS b ON a.start = b.codice
WHERE b.codice IS NULL
LIMIT 1
"codice" is the primary key.
The old database was MySQL 5.1.73 and the new one is MariaDB 5.5.52.
I have replicated the table (exporting and importing both schema and data) and deleted and recreated the index, but the query on the new database is always slow.
I have read answers saying that computing codice+1 breaks index usage and forces MySQL to scan the whole table, but I don't think that is the case here, since the query is fast on the old MySQL database.
Any suggestions?
EDIT
TABLE
CREATE TABLE IF NOT EXISTS `clfoco` (
`codice` varchar(20) NOT NULL,
`id_anagra` int(9) NOT NULL,
`descri` varchar(100) DEFAULT NULL,
....
....
....
PRIMARY KEY (`codice`),
KEY `id_anagra` (`id_anagra`),
FULLTEXT KEY `descri` (`descri`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
EXPLAIN with MariaDB:
| id|select_type|table |type |possible_keys|key |key_len|ref |rows |Extra
------------------------------------------------------------------------------
| 1 | SIMPLE |clfoco|range |PRIMARY |PRIMARY |62 |NULL|24549 |Using where; Using index
| 1 | SIMPLE |clfoco|index |PRIMARY |PRIMARY |62 |NULL|25182 |Using where; Using index; Using join buffer (flat, BNL join)
EXPLAIN with MySQL 5.1.73:
| id|select_type|table |type |possible_keys|key |key_len|ref |rows |Extra
------------------------------------------------------------------------------
| 1 |PRIMARY |<derived2> |ALL |NULL |NULL |NULL |NULL|24661
| 1 |PRIMARY |<derived3> |ALL |NULL |NULL |NULL |NULL|24661 |Using where; Not exists
| 3 | DERIVED |clfoco |index |PRIMARY |PRIMARY |62 |NULL|25182 |Using where; Using index
| 2 | DERIVED |clfoco |index |PRIMARY |PRIMARY |62 |NULL|25182 |Using where; Using index;
NEW EDIT
After Rick James's suggestion I changed the query:
SELECT a.codice + 1 AS START
FROM `clfoco` a
WHERE a.codice LIKE '1201__%'
  AND SUBSTRING(a.codice, 9) BETWEEN '0000001' AND '9999999'
  AND NOT EXISTS (
        SELECT *
        FROM `clfoco` b
        WHERE b.codice LIKE '1201__%'
          AND SUBSTRING(b.codice, 9) BETWEEN '0000001' AND '9999999'
          AND b.codice = a.codice + 1)
LIMIT 1;
The execution time on MariaDB is 0.0294 seconds
EXPLAIN EXTENDED:
id|select_type |table |type |possible_keys|key |key_len|ref |rows |filtered|Extra
1 |PRIMARY |a |range|PRIMARY |PRIMARY|62 |NULL|24549 |100.00 |Using where; Using index
2 |DEPENDENT SUBQUERY|clfoco|range|PRIMARY |PRIMARY|62 |NULL|24549 |100.00 |Range checked for each record (index map: 0x1)
SELECT a.codice+1 AS START
FROM `clfoco` a
WHERE codice LIKE '1201__%'
AND NOT EXISTS (
SELECT *
FROM `clfoco`
WHERE codice LIKE '1201__%'
AND codice = a.codice+1 )
LIMIT 1;
Notes:
EXISTS may optimize better. (But your versions are quite old.)
SUBSTRING gets in the way of index usage.
The SUBSTRING check seems to be useless.
codice needs to be VARCHAR, else LIKE will be problematic.
Please provide SHOW CREATE TABLE and EXPLAIN SELECT ....
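The NOT EXISTS rewrite can be sanity-checked outside MySQL/MariaDB. Here is a minimal sketch using SQLite from Python: the table name is kept from the question, but the data is fabricated integers rather than the real varchar codes, and an ORDER BY is added so the smallest gap is returned deterministically.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clfoco (codice INTEGER PRIMARY KEY)")
# Fabricated IDs with a gap at 103
con.executemany("INSERT INTO clfoco VALUES (?)", [(100,), (101,), (102,), (104,)])

# First codice whose successor is missing, found via NOT EXISTS
row = con.execute("""
    SELECT a.codice + 1 AS start
    FROM clfoco a
    WHERE NOT EXISTS (
        SELECT 1 FROM clfoco WHERE codice = a.codice + 1)
    ORDER BY a.codice
    LIMIT 1
""").fetchone()
print(row[0])  # -> 103
```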
The subselects seem a little obfuscatory, which is always dangerous: even if they don't confuse the optimizer now, they may in a future version (as seems to have happened here). I would have written this, with no subselects:
select a.codice+1 as codice
from clfoco a
left join clfoco b
on b.codice=a.codice+1
and (substring(b.codice,9) between '0000001' and '9999999')
and b.codice like '1201__%'
where b.codice is null
and (substring(a.codice,9) between '0000001' and '9999999')
and a.codice like '1201__%'
limit 1;
Does this confuse the optimizer less?
From the EXPLAIN you added for the NOT EXISTS version, it looks like it is using LIKE '1201__%' to index the dependent subquery when it shouldn't. Try this:
select a.codice+1 as codice
from clfoco a
left join clfoco b on b.codice=a.codice+1
where b.codice is null
and (substring(a.codice,9) between '0000001' and '9999999')
and a.codice like '1201__%'
and (substring(a.codice+1,9) between '0000001' and '9999999')
and a.codice+1 like '1201__%'
limit 1;
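The LEFT JOIN ... IS NULL anti-join form can be illustrated the same way; a small SQLite sketch with fabricated integer data (the real column is varchar, so the +1 arithmetic behaves differently in MariaDB):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clfoco (codice INTEGER PRIMARY KEY)")
# Fabricated IDs; 202 is the first missing successor
con.executemany("INSERT INTO clfoco VALUES (?)", [(200,), (201,), (203,)])

# Anti-join: keep rows of a whose successor has no match in b
row = con.execute("""
    SELECT a.codice + 1 AS codice
    FROM clfoco a
    LEFT JOIN clfoco b ON b.codice = a.codice + 1
    WHERE b.codice IS NULL
    ORDER BY a.codice
    LIMIT 1
""").fetchone()
print(row[0])  # -> 202
```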
Related
I'm trying to export/import a DB from one system to another, but the import fails with the following error:
ERROR 1062 (23000) at line 8232: Duplicate entry '0-3-30168717-com_liferay_product_navigation_product_menu_web_...' for key 'IX_C7057FF7'
That table is defined as such:
CREATE TABLE `PortletPreferences` (
`portletPreferencesId` bigint(20) NOT NULL,
`ownerId` bigint(20) DEFAULT NULL,
`ownerType` int(11) DEFAULT NULL,
`plid` bigint(20) DEFAULT NULL,
`portletId` varchar(200) DEFAULT NULL,
`preferences` longtext DEFAULT NULL,
`mvccVersion` bigint(20) NOT NULL DEFAULT 0,
`companyId` bigint(20) DEFAULT NULL,
PRIMARY KEY (`portletPreferencesId`),
UNIQUE KEY `IX_C7057FF7` (`ownerId`,`ownerType`,`plid`,`portletId`),
In the mysql dump file, I see these two entries:
(31453178,0,3,30168717,'com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet','<portlet-preferences />',0,10132)
(31524539,0,3,30168717,'com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet','<portlet-preferences />',0,10132)
So, yep, there are two entries with the same unique key. How is that possible?!?
Knowing this, I ran the following select statement against the source DB:
select portletPreferencesId, ownerId, ownerType, plid, portletId from PortletPreferences where ownerId = 0 AND ownerType = 3 AND plid = 30168717 AND portletId like 'com_liferay_product_navigation_product_menu_web%';
And it outputs just ONE LINE!
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| portletPreferencesId | ownerId | ownerType | plid | portletId |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| 31524539 | 0 | 3 | 30168717 | com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
By the portletPreferencesId field, it outputs the second entry in the dump file. So I did one more select for the other row as such:
select portletPreferencesId, ownerId, ownerType, plid, portletId from PortletPreferences where portletPreferencesId = 31453178;
And I get:
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| portletPreferencesId | ownerId | ownerType | plid | portletId |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| 31453178 | 0 | 3 | 30168717 | com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
My question is, what's going on?!? Why is that entry not output by the first select statement and why is it there in the first place if those fields were supposed to be unique???
I have a bad feeling about the state of the source database :-( Oh, and that's just one. I have multiple duplicate keys like that in that table :-(
Thanks
There are occasional bugs like MDEV-15250 that can cause duplicate entries to occur.
Sometimes you may not see these, because parts of the optimizer expect only a single row due to the unique constraint and won't search beyond it. A query across a range that includes both entries would be more likely to show them (which is what mysqldump effectively did).
If you don't think it's related to an ALTER TABLE occurring at the same time as insertions (like MDEV-15250), and you have a good hunch as to the set of operations on the table that may have escaped the unique key enforcement, please create a bug report.
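Before filing the report, it can help to enumerate every group of rows that violates the unique key. A GROUP BY over the key columns with HAVING COUNT(*) > 1 does that; here is a sketch against SQLite with fabricated data (no UNIQUE index is declared, which is what lets the duplicates exist, mimicking the corrupted index):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# No UNIQUE constraint here, so duplicates can exist, mimicking the corrupted index
con.execute("""CREATE TABLE PortletPreferences (
    portletPreferencesId INTEGER PRIMARY KEY,
    ownerId INTEGER, ownerType INTEGER, plid INTEGER, portletId TEXT)""")
con.executemany("INSERT INTO PortletPreferences VALUES (?,?,?,?,?)", [
    (31453178, 0, 3, 30168717, 'ProductMenuPortlet'),
    (31524539, 0, 3, 30168717, 'ProductMenuPortlet'),
    (1,        0, 3, 1,        'OtherPortlet'),
])

# Every key-column combination that appears more than once
dupes = con.execute("""
    SELECT ownerId, ownerType, plid, portletId, COUNT(*) AS n
    FROM PortletPreferences
    GROUP BY ownerId, ownerType, plid, portletId
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # -> [(0, 3, 30168717, 'ProductMenuPortlet', 2)]
```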
After upgrading to MariaDB 10.5.11 I ran into a weird problem with the indexes.
Simple table with two columns, Type (varchar) and Point (point),
an index on Type (Tindex) and a spatial index on Point (Pindex).
Now a query like
SELECT X(Point) as x,Y(Point) as y,hotels.Type FROM hotels WHERE (Type in ("acco")) AND MBRContains( GeomFromText( 'LINESTRING(4.922 52.909,5.625 52.483)' ), hotels.Point)
;
Results in a
Error in query (1207): Update locks cannot be acquired during a READ UNCOMMITTED transaction
While both
SELECT X(Point) as x,Y(Point) as y,hotels.Type FROM hotels USE INDEX (Pindex) WHERE (Type in ("acco")) AND MBRContains( GeomFromText( 'LINESTRING(4.922 52.909,5.625 52.483)' ), hotels.Point)
;
and
SELECT X(Point) as x,Y(Point) as y,hotels.Type FROM hotels USE INDEX (Tindex) WHERE (Type in ("acco")) AND MBRContains( GeomFromText( 'LINESTRING(4.922 52.909,5.625 52.483)' ), hotels.Point)
;
work fine, as MariaDB 10.5.10 did:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | hotels | range|filter | Type,Pindex | Pindex|Type | 34|302 | NULL | 340 (4%) | Using where; Using rowid filter |
The issue is now being tracked as MDEV-26123 (I guess you reported it there). The issue description says that the problem was introduced in MariaDB 10.2.39, 10.3.30, 10.4.20, 10.5.11, 10.6.1.
I ran into the issue after upgrading to MariaDB 10.6.4. I downgraded to 10.6.0, which was possible without having to do any migration of the data. It seems to have fixed the problem for now.
The cause of this appears to be the code fix for MDEV-25594.
I cannot see anything in the commit message or discussion there that indicates that a change to the READ UNCOMMITTED behavior was intentional.
There are no open bug reports on this so I recommend you create a new bug report.
select @@session.autocommit;
set @@session.autocommit=0;
select @@session.autocommit;
#add in my.cnf
autocommit = 0
using mariadb 10.2.40 ( resolved )
https://developpaper.com/transaction-isolation-level-of-mariadb/
Query1
cluster(x).database('$systemdb').Operations
| where Operation == "DatabaseCreate" and Database contains "oci-"| where State =='Completed'
and StartedOn between (datetime(2020-04-07) .. 3d)
| distinct Database , StartedOn
| order by StartedOn desc
Output of my query1 is list of databases , now I have to pass each db value into query2 to get buildnumber
Query2:
set query_take_max_records=5000;
let view=datatable(Property:string,Value:dynamic)[];
let viewFile=datatable(FileName:string)[];
alias database db = cluster(x).database('y');
let latestInfoFile = toscalar((
union isfuzzy=true viewFile,database('db').['TextFileLogs']
| where FileName contains "AzureStackStampInformation"
| distinct FileName
| order by FileName
| take 1));
union isfuzzy=true view,(
database('db').['TextFileLogs']
| where FileName == latestInfoFile
| distinct LineNumber,FileLineContent
| order by LineNumber asc
| summarize StampInfo=(toobject(strcat_array(makelist(FileLineContent,100000), "\r\n")))
| mvexpand bagexpansion=array StampInfo
| project Property=tostring(StampInfo[0]), Value=StampInfo[1]
)|where Property contains "StampVersion" | project BuildNumber = Value;
The database() function is a special scoping function, and it does not support non-constant arguments for security reasons.
As a result, you cannot use a sub-query to fetch the list of databases and then operate on that list as input for the database() function.
This behavior is described at:
https://learn.microsoft.com/en-us/azure/kusto/query/databasefunction?pivots=azuredataexplorer
Syntax
database(stringConstant)
Arguments
stringConstant: Name of the database that is referenced. The database identified can be either DatabaseName or PrettyName. The argument has to be constant prior to query execution, i.e. it cannot come from sub-query evaluation.
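Given that constraint, one possible workaround (my suggestion, not from the documentation) is client-side iteration: run query1 first, collect the database names it returns, and emit one query2 per name with the literal spliced in, so each database() call sees a constant. A sketch of just the splicing step, with a stand-in list instead of a real Kusto client call:

```python
# Stand-in for the database list that query1 would return
db_names = ["oci-prod-01", "oci-prod-02"]

# Simplified query2 body; '{db}' is filled in per database
query2_template = """
union isfuzzy=true view, (
    database('{db}').['TextFileLogs']
    | where FileName contains "AzureStackStampInformation"
)"""

# One concrete query per database, each with a constant database() argument
queries = [query2_template.format(db=name) for name in db_names]
print(queries[0])
```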
I have a table like this:
CREATE TABLE test (
height int(10) CHECK(height>5)
);
When I try to remove check constraint by:
ALTER TABLE test DROP CONSTRAINT height;
I got this error message:
ERROR 1091 (42000): Can't DROP CONSTRAINT `height`; check that it exists
Here is the SHOW CREATE TABLE test; command output:
+-------+-------------------------------------------------------------------------------------------------------------------+
| Table | Create Table
|
+-------+-------------------------------------------------------------------------------------------------------------------+
| test | CREATE TABLE `test` (
`height` int(10) DEFAULT NULL CHECK (`height` > 5)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+-------+-------------------------------------------------------------------------------------------------------------------+
And here is the SELECT * from information_schema.table_constraints where TABLE_NAME = 'test'; output:
+--------------------+-------------------+-----------------+------------------+------------+-----------------+
| CONSTRAINT_CATALOG | CONSTRAINT_SCHEMA | CONSTRAINT_NAME | TABLE_SCHEMA | TABLE_NAME | CONSTRAINT_TYPE |
+--------------------+-------------------+-----------------+------------------+------------+-----------------+
| def | test_db | height | test_db | test | CHECK |
+--------------------+-------------------+-----------------+------------------+------------+-----------------+
CREATE TABLE :: Constraint
Expressions
...
MariaDB 10.2.1 introduced two ways to define a constraint:
CHECK(expression) given as part of a column definition.
CONSTRAINT [constraint_name] CHECK (expression)
...
If you define the constraint using the first form (column constraint), you can remove it using MODIFY COLUMN:
ALTER TABLE `test`
MODIFY COLUMN `height` INT(10);
If you use the second form (table constraint), you can remove it using DROP CONSTRAINT:
ALTER TABLE `test`
DROP CONSTRAINT `height`;
See dbfiddle.
I have a database called av2web, which contains 130 MyISAM tables and 20 InnoDB tables. I want to take a mysqldump of these 20 InnoDB tables and export them to another database as MyISAM tables.
Can you tell me a quicker way to achieve this?
Thanks
Pedro Alvarez Espinoza.
If this were a one-off operation, I'd do:
use DB;
show table status where engine='innodb';
and do a rectangular copy/paste from the Name column:
+-----------+--------+---------+------------+-
| Name | Engine | Version | Row_format |
+-----------+--------+---------+------------+-
| countries | InnoDB | 10 | Compact |
| foo3 | InnoDB | 10 | Compact |
| foo5 | InnoDB | 10 | Compact |
| lol | InnoDB | 10 | Compact |
| people | InnoDB | 10 | Compact |
+-----------+--------+---------+------------+-
to a text editor and convert it to a command
mysqldump -u USER DB countries foo3 foo5 lol people > DUMP.sql
and then import after replacing all instances of ENGINE=InnoDB with ENGINE=MyISAM in DUMP.sql
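That ENGINE swap can be scripted instead of done by hand in an editor. A minimal sketch (the helper name and sample string are illustrative, not from the original post):

```python
def innodb_to_myisam(dump_sql: str) -> str:
    # Rewrite each table definition's engine; nothing else in the dump changes
    return dump_sql.replace("ENGINE=InnoDB", "ENGINE=MyISAM")

sample = "CREATE TABLE t (id INT) ENGINE=InnoDB DEFAULT CHARSET=utf8;"
print(innodb_to_myisam(sample))  # -> CREATE TABLE t (id INT) ENGINE=MyISAM DEFAULT CHARSET=utf8;
```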
If you want to avoid the rectangular copy/paste magic you can do something like:
use information_schema;
select group_concat(table_name separator ' ') from tables
where table_schema='DB' and engine='innodb';
which will return countries foo3 foo5 lol people
I know this is an old question. I just want to share this script that generates the mysqldump command and also shows how to restore it.
The following portion of the script will generate a command to create a MySQL backup/dump.
SET SESSION group_concat_max_len = 100000000; -- very important when you have lots of tables, to make sure they all get included
SET @userName = 'root'; -- the username you will log in with to generate the dump
SET @databaseName = 'my_database_name'; -- the database name to look up the tables from
SET @extraOptions = '--compact --compress'; -- any additional mysqldump options https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
SET @engineName = 'innodb'; -- the engine name to filter the tables by
SET @filename = '"D:/MySQL Backups/my_database_name.sql"'; -- the full path to write the backup to
-- This query will generate the mysqldump command to generate the backup
SELECT
  CASE WHEN tableNames IS NULL
    THEN 'No tables found. Make sure you set the variables correctly.'
    ELSE CONCAT_WS(' ', 'mysqldump -p -u', @userName, @databaseName, tableNames, @extraOptions, '>', @filename)
  END AS command
FROM (
  SELECT GROUP_CONCAT(table_name SEPARATOR ' ') AS tableNames
  FROM INFORMATION_SCHEMA.TABLES
  WHERE table_schema = @databaseName AND ENGINE = @engineName
) AS s;
The following portion of the script will generate a command to restore the MySQL backup/dump into a specific database on the same or a different server.
SET @restoreIntoDatabasename = @databaseName; -- the name of the new database you wish to restore into
SET @restoreFromFile = @filename; -- the full path of the filename you want to restore from
-- This query will generate the command to restore the generated backup into MySQL
SELECT CONCAT_WS(' ', 'mysql -p -u root', @restoreIntoDatabasename, '<', @restoreFromFile);