I'm facing a strange error. I have a 5.5.5-10.1.20-MariaDB install on my local Mac (Homebrew) and a 5.5.52-MariaDB on my prod server (CentOS 7). My local DB content is a copy of my server DB. I've executed this query on local:
## CREATE DIRECT RELATION BETWEEN JOURNAL AND PUBLICATION
INSERT INTO journal_publication (journal_id, `publication_id`) (
select issues.journal_id as journal_id, publications.id as publication_id from issues
join publications on issues.id = publications.`issue_id`
where publications.id Not In (select distinct publication_id from journal_publication)
);
It works fine and takes less than a second to execute.
Now when I try the exact same query on my prod server, it never finishes and consumes all CPUs. Moreover, I've tried to EXPLAIN the query; it works fine on my local:
id select_type table type possible_keys key key_len ref rows Extra
1 PRIMARY issues index PRIMARY issues_journal_id_foreign 5 NULL 70993 Using index; Using temporary
1 PRIMARY publications ref publications_issue_id_foreign publications_issue_id_foreign 5 pubpeer.issues.id 1 Using where; Using index
2 MATERIALIZED journal_publication index NULL PRIMARY 8 NULL 143926 Using index
Whereas the same EXPLAIN on my prod server returns an error:
You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'INSERT INTO journal_publication (journal_id, `publication_id`)
(select issues.j' at line 2
Again, the contents of the two DBs are identical; primary keys and indexes are set up identically. For the record, when I try to execute this query:
select issues.journal_id as journal_id, publications.id as publication_id from issues
join publications on issues.id = publications.`issue_id`
where publications.id Not In (select distinct publication_id from journal_publication);
it takes only a second on either local or prod.
Have you got any clue or process I could follow to help me understand these differences?
Thanks.
Xavier
MariaDB server versions < 10.0 only support EXPLAIN SELECT.
MariaDB server versions >= 10.0 additionally support EXPLAIN UPDATE, EXPLAIN INSERT and EXPLAIN DELETE.
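Given that, a workaround on the 5.5 prod server is to EXPLAIN only the SELECT part of the statement. A sketch, reusing the query from the question:

```sql
-- On MariaDB 5.5, the INSERT wrapper cannot be explained,
-- but the inner SELECT can:
EXPLAIN
SELECT issues.journal_id, publications.id
FROM issues
JOIN publications ON issues.id = publications.issue_id
WHERE publications.id NOT IN (SELECT DISTINCT publication_id FROM journal_publication);
```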
Please note that the version string 5.5.5-10.1.20-MariaDB means MariaDB 10.1.20; the 5.5.5 prefix is required because MySQL replication only supports one-digit major version numbers and would otherwise break.
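For example, comparing the two servers' version strings makes the mismatch visible:

```sql
SELECT VERSION();
-- local:  '5.5.5-10.1.20-MariaDB'  -> actually MariaDB 10.1.20
-- prod:   '5.5.52-MariaDB'         -> a real 5.5 server
```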
See also EXPLAIN UPDATE/INSERT/DELETE in MySQL and MariaDB
I use an app which creates this SQLite DB with this table:
CREATE TABLE expense_report (_id INTEGER PRIMARY KEY, ...)
And for some reason that _id (which is the ROWID) became invalid in that DB.
When I scan the table I see that the last rows got an _id which was already being used long ago:
1,2,3...,1137,1138,...,1147,1149,...,12263,12264,1138,...,1148
The ranges above show where I have the same _id for completely different rows (the rest of the column values do not match at all).
And querying this DB usually gets me inaccurate results due to that. For instance:
SELECT
(SELECT MAX(_ID) FROM expense_report) DirectMax
, (SELECT MAX(_ID) FROM (SELECT _ID FROM expense_report ORDER BY _ID DESC)) RealMax;
| DirectMax | RealMax |
| 1148 | 12264 |
And inserting a new row into this table via DB Browser for SQLite also generates an _id of 1149 (instead of 12265), so the problem becomes worse if I keep using this DB.
Running PRAGMA quick_check or PRAGMA integrity_check shows this error:
*** in database main ***
On page 1598 at right child: Rowid 12268 out of order
And running VACUUM also detects the problem but doesn't seem to be able to fix it:
Execution finished with errors.
Result: UNIQUE constraint failed: expense_report._id
Does anyone know a way to fix these duplicate ROWID values?
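One recovery approach sometimes suggested for this kind of rowid corruption is to rebuild the table so SQLite assigns fresh, unique rowids. A minimal sketch, assuming the non-key data is still readable; the column names other than _id are placeholders for the real schema:

```sql
-- Copy the rows into a fresh table, letting SQLite assign new rowids.
-- 'amount' and 'note' stand in for the real columns of expense_report.
CREATE TABLE expense_report_new (_id INTEGER PRIMARY KEY, amount REAL, note TEXT);
INSERT INTO expense_report_new (amount, note)
SELECT amount, note FROM expense_report;   -- omit _id so new rowids are generated
DROP TABLE expense_report;
ALTER TABLE expense_report_new RENAME TO expense_report;
```

Note that this renumbers every row, so if any other table references _id, those references would need to be remapped as well.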
My Teradata query creates a volatile table that is used to join to existing views. When linking the query to Excel, the following error pops up: "Teradata: [Teradata Database] [3932] Only an ET or null statement is legal after a DDL Statement". Is there a workaround for someone who does not have write permissions in Teradata to create a real view or table? I want to avoid linking to Teradata in SQL and running an open query to pull in the data needed.
This is for Excel 2016 64bit and using Teradata version 15.10.1.12
Normally this error occurs if you are using ANSI mode or have issued a BT (Begin Transaction) in BTET mode.
Here are a few workarounds to try:
1. Issue an ET; statement (commit) after the CREATE VOLATILE TABLE statement. If you are using ANSI mode, use COMMIT; instead of ET;. If you are unsure, try each in turn; only one will be valid, but both do the same thing. Make sure your volatile table includes ON COMMIT PRESERVE ROWS.
2. Try using BTET mode (a.k.a. Teradata mode) when establishing the session. There is a setting for this somewhere in the ODBC configuration.
3. Try using a Global Temporary table. These work similarly to volatile tables, except you define them once and the definition sticks around. That is, you can create it in, say, BTEQ or SQL Assistant. The definition is common to all users and sessions (i.e. your Excel session), but the content is transient and unique to each session (like a volatile table).
4. Move the SELECT part of your INSERT into the query that selects the data from the volatile table. See the simple example below.
5. If you do not have CREATE Global Temporary Table permissions, ask your DBA.
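The first and third workarounds above can be sketched as follows (table and database names are placeholders):

```sql
-- Workaround: commit the DDL explicitly (BTET/Teradata mode shown;
-- in ANSI mode use COMMIT; instead of ET;):
CREATE VOLATILE TABLE tmp (id INTEGER)
ON COMMIT PRESERVE ROWS;
ET;

-- Workaround: a Global Temporary table is defined once and reused;
-- each session sees only its own rows:
CREATE GLOBAL TEMPORARY TABLE mydb.tmp_gtt (id INTEGER)
ON COMMIT PRESERVE ROWS;
```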
Here is a simple example to illustrate point 4.
Current:
create volatile table tmp (id Integer)
ON COMMIT PRESERVE ROWS;
insert into tmp
select customer_number
from customer
where X = Y and yr = 2019
;
select a,b,c
from another_tbl A join TMP T ON
A.id = T.id
;
Becomes:
select a,b,c
from another_tbl A join (
select customer_number
from customer
where X = Y and yr = 2019
) AS T
ON
A.id = T.id
;
Or better yet, just join your tables directly.
Note: the first sequence (CREATE TABLE, INSERT INTO, then SELECT) is a three-statement series and will return three "result sets": the first two are row counts, the last is the actual data. Most programs (including, I think, Excel) cannot process multiple result sets in one response. This is one of the reasons it is difficult to use Teradata macros with client tools like Excel.
The latter solution (a single SELECT) avoids this potential problem.
So, I have a Flyway migration which applied successfully, years ago, to an older version of MariaDB.
A newer release of MariaDB is now more strict and causes an error on that same migration. There's a legitimate issue with that migration that I want to fix, both for fresh runs from the baseline (e.g. building in my CI environment, or on a new developer's laptop) and for all of my existing databases (before I attempt to upgrade them to a newer MariaDB release, which may just fail).
What's the right solution?
Alter the migration, and add a new one that does the same fix (another ALTER TABLE ...) that'll effectively be a no-op for newly created DBs but will fix my existing ones.
Add a new migration versioned out-of-order, just prior to the broken one, which fixes the issue. Hopefully that means new DBs will apply it just before the broken migration, and existing installs will apply it before any of my newer migrations?
To be specific, the issue was that I was migrating a table originally created with ENGINE=MyISAM ROW_FORMAT=FIXED to ENGINE=InnoDB. MariaDB 10.1 accepts that, but newer MariaDB releases seem to fail unless I also add ROW_FORMAT=DEFAULT.
Baseline
CREATE TABLE FOO ( ... )
ENGINE=MyISAM ROW_FORMAT=FIXED;
Later Migration
ALTER TABLE FOO
ENGINE=InnoDB;
That latter statement fails on newer MariaDB releases (and possibly on MySQL too, not sure).
This statement works, though:
ALTER TABLE FOO
ENGINE=InnoDB ROW_FORMAT=DEFAULT;
The issue is that the previous statement internally tries to do something like this, which fails:
CREATE TABLE FOO ( ... )
ENGINE=InnoDB ROW_FORMAT=FIXED;
The best way to handle this is probably to carefully modify the migration and issue a flyway repair to realign the checksums in the database with the new ones on disk.
InnoDB does not have a ROW_FORMAT=FIXED. In older versions, the variable innodb_strict_mode is set to 0, in which case a warning is issued and ROW_FORMAT=COMPACT is used when converting.
ALTER TABLE FOO ENGINE=InnoDB;
Query OK, 0 rows affected, 1 warning (0.07 sec)
Records: 0 Duplicates: 0 Warnings: 1
mysql [localhost] {msandbox} (test) > SHOW WARNINGS;
+---------+------+--------------------------------------+
| Level | Code | Message |
+---------+------+--------------------------------------+
| Warning | 1478 | InnoDB: assuming ROW_FORMAT=COMPACT. |
+---------+------+--------------------------------------+
In newer versions innodb_strict_mode is set to 1, so an error is returned.
ALTER TABLE FOO ENGINE=InnoDB;
ERROR 1005 (HY000): Can't create table `test`.`FOO` (errno: 140 "Wrong create options")
You can set the variable to 0 for the duration of the session in order to replicate the old behaviour.
set innodb_strict_mode=0;
References:
https://mariadb.com/kb/en/library/xtradbinnodb-strict-mode/
https://mariadb.com/kb/en/library/innodb-system-variables/#innodb_strict_mode
https://mariadb.com/kb/en/library/myisam-storage-formats/#fixed-length
https://mariadb.com/kb/en/library/innodb-storage-formats/
Setting innodb_strict_mode=0 let me export the data from MariaDB 10.1 and import it on MariaDB 10.3.
The "Wrong create table options" error message is gone.
set innodb_strict_mode=0;
https://mariadb.com/kb/en/innodb-strict-mode/
I tried to reduce the rights a database user has to the minimum needed. Doing so I noticed the following situation:
I have a database test and a user user with the following privileges:
REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'user'@'%';
GRANT SELECT, INSERT ON test.test TO 'user'@'%';
GRANT UPDATE (y) ON test.test TO 'user'@'%';
The test table (InnoDB) is defined as
create table test
(
x int null,
y int null
);
create unique index test_x_uindex on test (x);
I can run insert and update queries like
INSERT INTO test (x,y) VALUES (1,1), (2,2);
UPDATE test SET y = 3 WHERE x = 1;
But running
INSERT INTO test (x,y) VALUES (2,4) ON DUPLICATE KEY UPDATE y = VALUES(y);
results in
ERROR 1143 (42000): UPDATE command denied to user 'sap'@'localhost' for column 'x' in table 'test'
The same happens even if the statement would not actually update anything but only insert a new row.
This seems a little odd. I couldn't find a directly related bug report, just something older for MySQL (it is closed, but someone said it is not actually fixed; I did not test it). The MySQL 8.0 documentation mentions that UPDATE privileges are only needed for the changed columns. The MariaDB documentation does not mention any privilege requirements.
Am I missing something?
This was all tested on MariaDB 10.2.16.
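If the behaviour is intended, a workaround sketch is to grant UPDATE on every column the statement references, including the key column named in the error message (this assumes granting UPDATE on x is acceptable for this user):

```sql
-- Hedged workaround: also grant column-level UPDATE on x,
-- or fall back to a table-level UPDATE grant.
GRANT UPDATE (x, y) ON test.test TO 'user'@'%';
-- alternatively:
-- GRANT UPDATE ON test.test TO 'user'@'%';
```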
A strange thing, whose cause I don't know, is happening when trying to collect results from a DB2 database.
The query is the following:
SELECT
COUNT(*)
FROM
MYSCHEMA.TABLE1 T1
WHERE
NOT EXISTS (
SELECT
*
FROM
MYSCHEMA.TABLE2 T2
WHERE
T2.PRIMARY_KEY_PART_1 = T1.PRIMARY_KEY_PART_2
AND T2.PRIMARY_KEY_PART_2 = T1.PRIMARY_KEY_PART_2
)
It is a very simple one.
The strange thing is: with this same query, if I change COUNT(*) to * I get 8 results, but using COUNT(*) I get only 2. I repeated the process several more times and the strange result persists.
In this example, TABLE2 is a parent table of TABLE1: the primary key of TABLE1 is (PRIMARY_KEY_PART_1, PRIMARY_KEY_PART_2), and the primary key of TABLE2 is (PRIMARY_KEY_PART_1, PRIMARY_KEY_PART_2, PRIMARY_KEY_PART_3).
There's no foreign key between them (because they are legacy tables) and they contain a huge amount of data.
The DB2 query SELECT VERSIONNUMBER FROM SYSIBM.SYSVERSIONS returns:
7020400
8020400
9010600
And the client used is SquirrelSQL 3.6 (without the rows limit marked).
So, what is the explanation to this strange result?
Without the details (including, at least, the exact Db2 version and the DDL for both tables and their indexes) it could be just about anything, and even with those details only IBM support will really be able to say what the actual reason is.
Generally this looks like damaged data (e.g. differences between index data and table data).
It is worth opening a support case with IBM.
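Before opening a case, the damaged-data theory can be probed from SQL. A hedged sketch: wrapping the question's query in a derived table may force the row data (rather than an index-only scan) to be read, so a disagreement with the plain COUNT(*) points at an index/table mismatch:

```sql
-- If this count disagrees with the original COUNT(*), the index and
-- the table data are likely out of sync; IBM's INSPECT utility can confirm.
SELECT COUNT(*) FROM (
    SELECT T1.* FROM MYSCHEMA.TABLE1 T1
    WHERE NOT EXISTS (
        SELECT 1 FROM MYSCHEMA.TABLE2 T2
        WHERE T2.PRIMARY_KEY_PART_1 = T1.PRIMARY_KEY_PART_2
          AND T2.PRIMARY_KEY_PART_2 = T1.PRIMARY_KEY_PART_2
    )
) AS X;
```

Note this is only a heuristic: the optimizer may still choose an index-only plan for the derived table.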