MariaDB LOAD XML much slower than MySQL

I am testing MariaDB for a possible replacement of a MySQL data warehouse. This data warehouse is rebuilt nightly from a legacy database.
The basic process is to generate XML documents/files from the legacy database and then, for each table, run DROP TABLE, CREATE TABLE from DDL, and LOAD XML LOCAL INFILE 'xml file'. A few of the XML files are 60-100 megabytes (about 300K rows). On MySQL these tables take a couple of minutes to load. On MariaDB they take significantly longer (e.g. an 83 megabyte XML file takes 16 minutes on MariaDB versus less than 1 minute on MySQL), and the times seem to grow exponentially with file size.
I have read and followed the KB topic How to Quickly Insert Data Into MariaDB and have tried the suggestions there with no real change. Since the MariaDB tables are dropped and recreated immediately before the LOAD XML LOCAL INFILE, several performance improvements should be triggered.
I have not tried LOCK TABLE yet.
What can I try to improve performance? I don't want to return to MySQL, but this issue is a deal killer.
Environment is RHEL 8, MariaDB 10.5.16
I used DISABLE KEYS / LOAD ... / ENABLE KEYS with no apparent benefit.
I increased max_allowed_packet with no effect.
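For reference, each table's load currently looks roughly like the sketch below, with the session settings suggested by that KB article applied around it (the table name, DDL, and file path are placeholders, and the settings assume InnoDB tables with no other writers during the rebuild):
SET SESSION unique_checks = 0;      -- skip unique index checks during the bulk load
SET SESSION foreign_key_checks = 0; -- skip foreign key checks during the bulk load
SET SESSION autocommit = 0;         -- commit once at the end rather than per statement
DROP TABLE IF EXISTS fact_table;
CREATE TABLE fact_table (id INT PRIMARY KEY, payload VARCHAR(255)) ENGINE=InnoDB;  -- placeholder DDL
LOAD XML LOCAL INFILE '/tmp/fact_table.xml' INTO TABLE fact_table ROWS IDENTIFIED BY '<row>';
COMMIT;
SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;
SET SESSION autocommit = 1;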

Related

Moving system versioned (temporal) MariaDB database from Windows to Linux

After moving the entire data directory from a Windows installation of MariaDB to Linux, partitioned temporal (system-versioned) tables cannot be seen by MariaDB.
Everything is fine with "traditional" tables; MariaDB can access them on Linux. But there is a problem with partitioned temporal tables:
on Windows the partition data files are named like this: <table name>#p#p_cur.ibd (for the current-data partition)
but on Linux MariaDB expects the file to be named <table name>#P#p_cur.ibd
and so MariaDB cannot use such partitions; it gives the message:
table doesn't exist in engine
Renaming the .ibd file does not help; MariaDB cannot find the file in that case.
Could anyone help, please?
I think that this should affect any partitioned tables. Can you confirm that?
On Windows, for some reason, InnoDB converts the #P into lower case. The table might be accessible on other platforms if you set lower_case_table_names=2. I think that such settings should not exist in the first place.
To rename the partitions to the correct names on InnoDB, it might be possible to do the following:
CREATE TABLE t (a INT) ENGINE=InnoDB; and copy the t.frm file to tablename#p#p_cur.frm
RENAME TABLE `#mysql50#tablename#p#p_cur` TO `#mysql50#tablename#P#p_cur`;
Remove the file tablename#P#p_cur.frm.
Repeat for each partition.
Finally, DROP TABLE t;
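Put together, the sequence for a single partition would look roughly like this (untested, as noted below; tablename and p_cur stand for the real table and partition names, and the file copies happen at the file-system level outside SQL):
CREATE TABLE t (a INT) ENGINE=InnoDB;  -- throwaway table whose t.frm file gets copied to tablename#p#p_cur.frm
RENAME TABLE `#mysql50#tablename#p#p_cur` TO `#mysql50#tablename#P#p_cur`;  -- lets InnoDB rename the partition in its own dictionary
-- remove tablename#P#p_cur.frm, repeat for the remaining partitions, then:
DROP TABLE t;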
The special #mysql50 prefix should pass the rest of the table name to the storage engine while bypassing the filename-safe encoding that was introduced in MySQL 5.1. That should allow direct access to the partitions. Normally the # would be encoded as the sequence @0023, but the partitioning engine uses the raw #P# suffix.
In MySQL 4.1 and 5.0, the table names were encoded directly in UTF-8. In MySQL 4.0 (which was the stable release series when I started working on InnoDB internals), they could have been encoded directly in latin1, or perhaps non-ASCII characters in table names did not work on some file systems or operating systems.
Note: I think that the .frm file stores information about the storage engine. If you just copy the tablename.frm file of the partitioned table, it could be that only ha_partition::rename_table() would be invoked, instead of ha_innobase::rename_table(). We do want the rename operation to be performed by InnoDB, so that the table is renamed in its own data dictionary (the SYS_TABLES table, which is readable via INFORMATION_SCHEMA.INNODB_SYS_TABLES).
Note: I did not test this. Please report back whether this worked.

SQLite Multiple Attached rather than single large database

I'm going to end up with a rather large CubeSQLite database in the cloud, cloned on the local machine. In my current databases I already have 185 tables, and the number is growing. I store them in 6 SQLite databases and begin by attaching them together using the ATTACH DATABASE command. There are views that point to information in other databases and, as a result, Navicat won't open the SQLite tables individually; it finds them to be corrupted, although they are not and are working fine.
My actual question is this:
Considering the potential size of the files, is it better/faster/slower to do it this way or to put them all into one really large SQLite DB?
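For context, the setup described above boils down to something like this (a minimal sketch; the file, table, and column names here are invented for illustration):
-- Open the main database file, then attach the other database files under schema names
ATTACH DATABASE 'warehouse_2.db' AS db2;  -- placeholder file names
ATTACH DATABASE 'warehouse_3.db' AS db3;
-- Queries (and the views built on them) can then join tables across the attached files
SELECT o.order_id, i.stock_level
FROM db2.orders AS o
JOIN db3.items AS i ON i.item_id = o.item_id;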

SQLite does not release disk space after delete

Excuse my English.
I am using SQLite and testing it. I inserted several million rows to test the speed, and deleted the rows after the inserts.
But my database size is 33.0 MB: the database is now empty, yet the size on disk is still 33 MB.
Why?
Can you help me?
The VACUUM command rebuilds the entire database. There are several reasons an application might do this:
Unless SQLite is running in "auto_vacuum=FULL" mode, when a large amount of data is deleted from the database file it leaves behind empty space, or "free" database pages. This means the database file might be larger than strictly necessary. Running VACUUM to rebuild the database reclaims this space and reduces the size of the database file.
https://www.sqlite.org/lang_vacuum.html
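In other words, either run VACUUM by hand after the big delete, or turn on auto-vacuum so the space is returned automatically (a small sketch; changing auto_vacuum on an existing database only takes effect after a VACUUM):
VACUUM;                    -- rebuild the database file and return the free pages to the OS
PRAGMA auto_vacuum = FULL; -- or: reclaim space automatically on future deletes
VACUUM;                    -- needed once for the auto_vacuum change to apply to an existing file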

Is it possible to establish two connections to the same SQLite database simultaneously?

I have an executable jar file that inserts data into a SQLite database. This insertion, however, is taking much longer than I expected. I thought I might be able to create another copy of this jar file to help the first one.
The reason this process is so slow is not that my CPU is running at 100%, but that the process itself is time-consuming.
By the way, no rows will be deleted during this process; it's just INSERT and UPDATE.

Importing data from a text file into SQLite Administrator takes too much time

Whenever I import a text file with about 2 million rows and 2 columns into SQLite Administrator, it takes 3-4 hours to do so. Is it normal, or am I doing something wrong?
The way I do it is to take a tab-delimited text file, change the extension to .csv, and feed it to SQLite Administrator.
My PC specs are 2 GB RAM and a Core 2 Duo at 1.86 GHz. I also have about 10 GB of free disk space when importing the data.
Apparently, SQLite has performance issues in this area.
Check this thread for more information.
You can try to do some performance tuning:
SQLite Docs: Pragma
SQLite Optimization FAQ
SQLite Optimization
SQLite Performance Tuning and Optimization on Embedded Systems
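For example, settings along these lines are commonly suggested for one-off bulk imports (a sketch; synchronous = OFF trades crash safety for speed, so it is only reasonable while the import runs):
PRAGMA synchronous = OFF;      -- do not fsync after every write during the import
PRAGMA journal_mode = MEMORY;  -- keep the rollback journal in RAM instead of on disk
BEGIN TRANSACTION;             -- wrap all the INSERT statements in a single transaction
-- ... bulk INSERT statements go here ...
COMMIT;
PRAGMA synchronous = FULL;     -- restore a durable setting afterwards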
