MariaDB does not finish query for table with foreign key

In a MariaDB database I've set up, I cannot change tables with foreign keys at all. Queries involving alter table and drop table never finish, and I don't even get an error message. Same with repair table.
All I can do is hit ctrl+c. There are no apparent errors indicated in the InnoDB monitor output.
I'm quite new to relational databases, so it's probably a user error. I just can't see what it might be. Any help greatly appreciated!
OS: Windows 10 Enterprise
MariaDB: 10.8
Using both the client and a plugin in Visual Studio Code; it doesn't matter which.
I can alter other tables.
I set foreign_key_checks to off
The table with the foreign key looks like below. participant_id is the foreign key.
Field           Type         Null  Key  Default
trial_id        smallint(6)  NO    PRI  NULL
begin_trial     datetime     NO    UNI  NULL
participant_id  tinyint(4)   NO    MUL  NULL
The referenced table looks like this; id is the column referenced by participant_id.
Field  Type        Null  Key  Default     Extra
id     tinyint(4)  NO    PRI  NULL        auto_increment
code   char(6)     NO    UNI  NULL
day    date        NO         0000-00-00

Yes, user error. I had two connections open, one using the MariaDB client and another one using the VSC plugin. Thus there must have been some kind of lock on write operations, but not on read operations.
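For anyone hitting the same hang, a quick way to confirm that a second connection is the culprit is to look at the process list (a minimal sketch; the session id to kill is whatever your server reports):
SHOW FULL PROCESSLIST;
-- A DDL statement stuck in the state "Waiting for table metadata lock" is blocked by
-- another session; closing that session (or KILL <id>; with its id) lets the ALTER/DROP finish.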
Thanks for being my rubber duck, Stackoverflow.

Related

Why is my MariaDb not adding a column to a large table using the INSTANT algorithm

I have a huge table in a MariaDb (10.4.10-MariaDB-1:10.4.10+maria~bionic) and I am adding a new column using
alter table Appointment add column responsible_organization varchar(256);
The existing table is this:
CREATE TABLE `Appointment` (
`id` VARCHAR(256) NOT NULL,
`version` VARCHAR(24) NOT NULL,
`repetition_ref` VARCHAR(256) NULL DEFAULT NULL,
`type` VARCHAR(256) NULL DEFAULT NULL,
`comment` VARCHAR(2048) NULL DEFAULT NULL,
`description` VARCHAR(2048) NULL DEFAULT NULL,
`end` DATETIME NULL DEFAULT NULL,
`start` DATETIME NULL DEFAULT NULL,
`status` VARCHAR(256) NULL DEFAULT NULL,
`statuschangedate` DATETIME NULL DEFAULT NULL,
`deliverystatus` VARCHAR(256) NULL DEFAULT NULL,
`reasoncancelled` VARCHAR(256) NULL DEFAULT NULL,
`visit_type` VARCHAR(256) NULL DEFAULT NULL,
`modified_db_time` TIMESTAMP(3) NOT NULL DEFAULT current_timestamp(3) ON UPDATE current_timestamp(3),
`markedasdeleted` DATETIME NULL DEFAULT NULL,
PRIMARY KEY (`id`, `version`),
INDEX `FKc4f6e4y3ftaya162pwf7v4uj4` (`deliverystatus`),
INDEX `FKeuhxsh83rlweegn404penommb` (`reasoncancelled`),
INDEX `FK5xmurewn61wf4n3of5yx2nsmg` (`visit_type`),
INDEX `modified_db_time` (`modified_db_time`),
CONSTRAINT `FK5xmurewn61wf4n3of5yx2nsmg` FOREIGN KEY (`visit_type`) REFERENCES `Coding` (`id`),
CONSTRAINT `FKc4f6e4y3ftaya162pwf7v4uj4` FOREIGN KEY (`deliverystatus`) REFERENCES `CodeableConcept` (`id`),
CONSTRAINT `FKeuhxsh83rlweegn404penommb` FOREIGN KEY (`reasoncancelled`) REFERENCES `CodingDt` (`id`)
)
;
As far as I read the MariaDB documentation, it should choose the most efficient algorithm if I don't specify any. I would expect it to use INPLACE at the minimum. But when I run it, I can see in the process list that it is running with the state "Copy to tmp table". So this is the COPY algorithm, right?
I then tried to force it to use INSTANT as suggested by #o-jones. That gave me this output:
MariaDB [mydb]> alter table Appointment add column responsibleorganisation varchar(256), ALGORITHM=INSTANT;
ERROR 1846 (0A000): ALGORITHM=INSTANT is not supported. Reason: Cannot change column type. Try ALGORITHM=COPY
Weird, since I am adding a column.
I am wondering if it has something to do with the table being created on an older version of MariaDb and not having been rebuilt recently. I have found references to this being an issue for tables with old style temporal columns.
The variables old_alter_table and alter_algorithm both have the value DEFAULT
My table has 110+ million rows, so I would have liked to find a way to optimize this.
Any ideas?
MariaDB Server 10.4.10 was released over 2 years ago, in November 2019. Is this repeatable with a more recent version? There have been some fixes to ALTER TABLE since then.
If the problem is repeatable with the latest version in the 10.4 series, I would suggest that you file a bug report at https://jira.mariadb.org with a minimal reproducible test case (CREATE TABLE and ALTER TABLE statements).
With a more recent version, the answer could have been that MDEV-20590 introduced a way to disable instant operations that involve changing the data file format. SET GLOBAL innodb_instant_alter_column_allowed=never; would make ADD COLUMN always rebuild the table, like it did before MariaDB Server 10.3.
I have seen this in cases where a database was created in an earlier version of MariaDB. Sometimes the storage format for some columns has changed, but the upgrade did not rewrite all of the data. Instead it waits until certain schema changes are made and then it modifies that underlying data format. In my cases it was because some of the columns were date/datetime and the temporal storage format had changed in 10.3, but many of my tables were still in the old format. Any schema change I made in 10.4 wanted to update the format for all of those columns, which was a pain for tables with many millions of rows.
If you rebuild the table first using "ALTER TABLE tab_name ENGINE = InnoDB;", then the column can afterwards be added "instantly". The rebuild itself will take about as long as a non-instant column add would, but there are circumstances where doing the two steps separately is advantageous.
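Applied to the table from the question, the two-step variant would look roughly like this (a sketch; the rebuild still copies all 110+ million rows, so plan for that):
ALTER TABLE Appointment ENGINE=InnoDB;    -- rebuilds the table into the current row format
ALTER TABLE Appointment
  ADD COLUMN responsible_organization VARCHAR(256),
  ALGORITHM=INSTANT;                      -- should now succeed without a table copy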
In my case I had to do a manual table rebuild because I could not allow the live table to be locked for as long as the rebuild would take. So I wrote a stored procedure to create a copy of the table gradually and keep the copy up to date. The copy process was throttled to avoid putting too much strain on the db, and once the copy was ready I swapped the old and new tables quickly with renames, so the downtime was measured in seconds instead of hours. Once the new table was in place I was able to do instant DDL operations on it like normal.
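The skeleton of that swap, leaving out the batched copy and the logic that keeps the copy in sync (table names hypothetical):
CREATE TABLE Appointment_new LIKE Appointment;        -- new table, current row format
-- ... copy rows over in throttled batches and keep Appointment_new in sync ...
RENAME TABLE Appointment TO Appointment_old,
             Appointment_new TO Appointment;          -- atomic swap, seconds of downtime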

Create table with auto increment

I have to create a table with an auto-increment column. I am creating a web app with SQLite as the backend and SQLAlchemy as the ORM layer, with Python and Flask as the frontend. I am creating a department list, and the department id should be auto-incremented. When I add a department through the UI I don't provide a department id, because the department id is the primary key and should be auto-incremented. I have added the department name and department jobs through the UI without any error, but when I try to list the departments I get this error:
AttributeError: 'NoneType' object has no attribute 'department_id'
What I tried is this, my SQLite schema:
create table Departments(department_id primarykey integer autoincrement,department_name char,department_jobs char);
When I try creating this schema I get the error 'syntax error near autoincrement'.
I tried using capital letters, auto_increment, and auto increment.
Nothing works.
My SQLAlchemy model looks like this:
class Departments(db.Model):
    "Adding the department"
    department_id = db.Column(db.Integer, primary_key=True)
    department_name = db.Column(db.String(50), nullable=False)
    department_jobs = db.Column(db.String(40), nullable=False)
What I am asking here is how to do the auto increment in SQLite and SQLAlchemy so that I can use it in both the frontend and the backend.
You have coded primarykey instead of PRIMARY KEY, and you should also code INTEGER PRIMARY KEY, as the column type should appear first.
There is no need, from your explanation, to code AUTOINCREMENT.
Not using AUTOINCREMENT will be more efficient and will, as far as you are concerned, do the same thing: if the value for department_id is not supplied, then SQLite will automatically generate a value, which will be 1 for the first row that is inserted and then typically 1 greater for the next row, and so on (SQLite does not guarantee monotonically increasing numbers).
See SQLite Autoincrement, which includes:
The AUTOINCREMENT keyword imposes extra CPU, memory, disk space, and disk I/O overhead and should be avoided if not strictly needed. It is usually not needed.
I'd suggest just using:
create table Departments(department_id INTEGER PRIMARY KEY, department_name char, department_jobs char);
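A quick way to see the automatically generated keys, assuming the table above (the values shown are what SQLite would typically assign):
INSERT INTO Departments (department_name, department_jobs) VALUES ('HR', 'hiring');
INSERT INTO Departments (department_name, department_jobs) VALUES ('IT', 'support');
SELECT department_id, department_name FROM Departments;
-- department_id comes back as 1 and 2 even though it was never supplied.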

How can I implement a reflexive primary/foreign key relationship in Access?

MS Office 365 ProPlus, Access 2007 - 2016
I'm a novice with this.
I have a table called pedigree. It has 3 columns...
Name (text)
ID (auto increment integer, primary key)
Parent_ID (integer)
I want to implement a constraint which will require that the "Parent_ID" value of each record exists as the ID value of some other record in the same table (a reflexive primary/foreign key setup).
In Access, I went to the "Database Tools" tab, then "Relationships", then opened the table up twice and tied the ID column of one to the "Parent_ID" of the other. It didn't complain, and it saved out OK. But it doesn't seem to work: I can put records in the table with Parent_ID values outside of the available ID value pool.
Any clues?
Also, if there's a different/better way to do this, I'm all ears. I read about the "Database Tools" -> "Relationships" approach on the web somewhere but am open to anything that might work.
And the solution for me (the novice) was...
Set the "Enforce Referrential Integrity" of the relationship.
Thanks Gustav for the hint!
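For reference, the same self-referencing constraint can also be created with DDL instead of the Relationships window (a sketch; run it as a query in SQL view, the constraint name is hypothetical):
ALTER TABLE pedigree
  ADD CONSTRAINT fk_pedigree_parent
  FOREIGN KEY (Parent_ID) REFERENCES pedigree (ID);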

PostgreSQL 11 foreign key on partitioning tables

In the PostgreSQL 11 Release Notes I found the following improvements to partitioning functionality:
Add support for PRIMARY KEY, FOREIGN KEY, indexes, and triggers on partitioned tables
I need this feature and tested it.
Create table:
CREATE TABLE public.tbl_test
(
uuid character varying(32) NOT null,
registration_date timestamp without time zone NOT NULL
)
PARTITION BY RANGE (registration_date);
Try to create Primary key:
ALTER TABLE public.tbl_test ADD CONSTRAINT pk_test PRIMARY KEY (uuid);
I get an error, SQL Error [0A000]. If I use a composite PK (uuid, registration_date) then it works, because the PK contains the partitioning column.
Conclusion: creating a PK on partitioned tables works with restrictions (the PK needs to contain the partitioning column), as in the variant below.
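The variant that does work, i.e. the statement described above (a sketch):
ALTER TABLE public.tbl_test ADD CONSTRAINT pk_test PRIMARY KEY (uuid, registration_date);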
Try to create Foreign key:
CREATE TABLE public.tbl_test2
(
uuid character varying(32) NOT null,
test_uuid character varying(32) NOT null
);
ALTER TABLE tbl_test2
ADD CONSTRAINT fk_test FOREIGN KEY (test_uuid)
REFERENCES tbl_test (uuid);
I get an error, SQL Error [42809]. It means a FOREIGN KEY referencing a partitioned table does not work.
Maybe I'm doing something wrong. Maybe somebody has tried this functionality and knows how it works.
Maybe somebody knows a workaround other than implementing the constraint in the application.
PostgreSQL v12.0 will probably support foreign keys that reference partitioned tables. But this is still not guaranteed as v12.0 is still in development.
For v11 and lower versions, you may use triggers as described by depesz in these posts: part1, part2, and part3.
Update: PostgreSQL v12.0 was released on Oct 3, 2019, with this feature included.
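For v11, a very reduced sketch of what the trigger approach boils down to on the referencing side (simplified: it ignores concurrent deletes from the parent and locking; table and column names are taken from the question):
CREATE FUNCTION tbl_test2_check_fk() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  IF NOT EXISTS (SELECT 1 FROM tbl_test WHERE uuid = NEW.test_uuid) THEN
    RAISE EXCEPTION 'test_uuid % not present in tbl_test', NEW.test_uuid;
  END IF;
  RETURN NEW;
END;
$$;
CREATE TRIGGER tbl_test2_fk_check
  BEFORE INSERT OR UPDATE ON tbl_test2
  FOR EACH ROW EXECUTE FUNCTION tbl_test2_check_fk();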
Postgres 11 only supports foreign keys from a partitioned table to a (non-partitioned) table.
Previously not even that was possible, and that's what the release notes are about.
This limitation is documented in the chapter about partitioning in the manual:
While primary keys are supported on partitioned tables, foreign keys referencing partitioned tables are not supported. (Foreign key references from a partitioned table to some other table are supported.)
(emphasis mine)
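Concretely, the direction that is supported in v11 is the one where the partitioned table does the referencing (a sketch; tbl_owner is a hypothetical plain table):
CREATE TABLE public.tbl_owner (uuid character varying(32) PRIMARY KEY);
ALTER TABLE public.tbl_test
  ADD CONSTRAINT fk_test_owner FOREIGN KEY (uuid) REFERENCES public.tbl_owner (uuid);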

MariaDB remove foreign key to temporary table

Context:
I'm trying to upgrade a concrete5 installation from version 8.3.2 to 8.4.1. The upgrade process fails during execution of this SQL statement:
ALTER TABLE AreaLayoutsUsingPresets ADD CONSTRAINT FK_7A9049A1385521EA FOREIGN KEY (arLayoutID) REFERENCES AreaLayouts (arLayoutID) ON UPDATE CASCADE ON DELETE CASCADE
With:
SQLSTATE[HY000]: General error: 1005 Can't create table `concrete5`.`#sql-215_264a4` (errno: 121 "Duplicate key on write or update")
Investigating my database revealed that in information_schema in INNODB_SYS_FOREIGN there is the following entry:
ID                             FOR_NAME                  REF_NAME               N_COLS  TYPE
concrete5/FK_7A9049A1385521EA  concrete5/#sql-215_26264  concrete5/AreaLayouts  1       5
Problem:
Now my understanding is that I cannot modify information_schema, as it isn't a real database but just a tabular representation of the system.
I'm wondering how I get rid of that foreign key entry. The table concrete5/#sql-215_26264 does not exist: I can't find it on my server, nor does alter table or drop table find that table (I've tried with the #mysql50# prefix and without it). So the straightforward way of using alter table to drop the foreign key fails because it can't find the table.
I guess I could mess with the upgrade script so that it creates a new foreign key ID, but I'd rather get rid of that zombie in my database. I've already tried to disable the foreign key checks, which then resulted in an error, telling me that the key cannot be added to the system tables (because it's already in there).
Reinstalling is rarely a cure for anything, but I am glad that it fixed your situation.
Table names such as #sql_... usually come from crashing in the middle of an ALTER or similar DDL. Such files can be removed. information_schema is derived from looking at the files, so I think removing the files will kill the zombie entries.
Either prefix the SQL import with SET FOREIGN_KEY_CHECKS=0;
or append to your query: ALTER TABLE ... DISABLE KEYS;
... and better dump the whole database before messing around.
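What the first suggestion looks like with the statement from the question (a sketch; note that the asker reports the key still could not be added even with the checks disabled):
SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE AreaLayoutsUsingPresets
  ADD CONSTRAINT FK_7A9049A1385521EA FOREIGN KEY (arLayoutID)
  REFERENCES AreaLayouts (arLayoutID) ON UPDATE CASCADE ON DELETE CASCADE;
SET FOREIGN_KEY_CHECKS=1;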
