Moving a system-versioned (temporal) MariaDB database from Windows to Linux

After moving the entire data directory from a Windows installation of MariaDB to Linux, MariaDB cannot see the partitioned temporal (system-versioned) tables.
Everything is fine with "traditional" tables; MariaDB can access them on Linux. But there is a problem with partitioned temporal tables:
on Windows the partition data files are named like this: <table name>#p#p_cur.ibd (for the current-data partition),
but on Linux MariaDB expects the file to be named <table name>#P#p_cur.ibd,
so MariaDB cannot use such partitions and reports:
table does not exist in engine
Renaming the .ibd file does not help; in that case MariaDB cannot find the file at all.
Could anyone help, please?

I think that this should affect any partitioned tables. Can you confirm that?
On Windows, for some reason, InnoDB converts the #P into lower case. The table might be accessible on other platforms if you set lower_case_table_names=2. I think that such settings should not exist in the first place.
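Whether that workaround applies can be checked by looking at the current setting; this is a standard status query, not specific to this problem:
-- 0 = names stored and compared as given (Linux default),
-- 1 = names lowercased (Windows default),
-- 2 = stored as given but compared in lower case
SHOW GLOBAL VARIABLES LIKE 'lower_case_table_names';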
To rename the partitions to the correct names inside InnoDB, something like the following might work (a SQL sketch of the sequence is given after the list):
1. CREATE TABLE t (a INT) ENGINE=InnoDB; and copy the resulting t.frm file to tablename#p#p_cur.frm.
2. RENAME TABLE `#mysql50#tablename#p#p_cur` TO `#mysql50#tablename#P#p_cur`;
3. Remove the file tablename#P#p_cur.frm.
4. Repeat for each partition.
5. Finally, DROP TABLE t;
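As a rough, untested SQL sketch of the same sequence for a single partition p_cur of a table named tablename (the file copies have to be done in the database directory, outside SQL):
-- 1. Create a dummy InnoDB table so a .frm file exists to copy.
CREATE TABLE t (a INT) ENGINE=InnoDB;
-- 2. In the database directory: copy t.frm to tablename#p#p_cur.frm.
-- 3. Let InnoDB rename the partition; #mysql50# bypasses the filename encoding.
RENAME TABLE `#mysql50#tablename#p#p_cur` TO `#mysql50#tablename#P#p_cur`;
-- 4. In the database directory: remove the leftover tablename#P#p_cur.frm.
-- 5. Repeat steps 2-4 for each partition, then drop the dummy table.
DROP TABLE t;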
The special #mysql50 prefix should pass the rest of the table name to the storage engine while bypassing the filename-safe encoding that was introduced in MySQL 5.1. That should allow direct access to the partitions. Normally the # would be encoded as the sequence @0023, but the partitioning engine uses the raw #P# suffix.
In MySQL 4.1 and 5.0, the table names were encoded directly in UTF-8. In MySQL 4.0 (which was the stable release series when I started working on InnoDB internals), they could have been encoded directly in latin1, or perhaps non-ASCII characters in table names did not work on some file systems or operating systems.
Note: I think that the .frm file stores information about the storage engine. If you just copy the tablename.frm file of the partitioned table, it could be that only ha_partition::rename_table() would be invoked instead of ha_innobase::rename_table(). We do want the rename operation to be performed by InnoDB, so that the table is also renamed in its own data dictionary (the SYS_TABLES table, which is readable via INFORMATION_SCHEMA.INNODB_SYS_TABLES).
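To verify what InnoDB has recorded before and after the rename, something like this should show the partition names as the engine sees them (mydb and tablename are placeholders for your database and table):
SELECT NAME FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
WHERE NAME LIKE 'mydb/tablename#%';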
Note: I did not test this. Please report back whether this worked.

Related

SQLite Multiple Attached rather than single large database

I'm going to end up with a rather large CubeSQLite database in the cloud, cloned on the local machine. In my current databases I already have 185 tables and growing. I store them in 6 SQLite databases and begin by attaching them together using the ATTACH DATABASE command. There are views that point to information in other databases and, as a result, Navicat won't open the SQLite tables individually. It finds them to be corrupted, although they are not and are working fine.
My actual question is this:
Considering the potential size of the files, is it better/faster/slower to do it this way or to put them all into one really large SQLite DB?
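For reference, the attach-at-startup approach described in the question looks roughly like this; the file, table, and column names here are made up:
-- Open the first database file, then attach the others under schema names.
ATTACH DATABASE 'customers.db' AS customers;
ATTACH DATABASE 'orders.db' AS orders;
-- Views and queries can then reference tables across files as schema.table:
SELECT c.name, o.total
FROM customers.clients AS c
JOIN orders.invoices AS o ON o.client_id = c.id;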

Clone Oracle Express Edition 11g R2

I have installed Oracle XE 11g R2 on my machine. I ran a few scripts which do the setup by creating the schemas and procedures for our application. Now I want to clone this database so that other people, using the cloned dbf files, can see the base schema on their respective machines and work on their individual requirements on top of that.
Now it has 6 dbf files
CONTROL.DBF
SYSAUX.DBF
SYSTEM.DBF
TEMP.DBF
UNDO.DBF
USER.DBF
Can I just give them the files, or do I need to create a server parameter file (SPFILE) or control file? What about the redo logs?
I have very little knowledge of database administration, so please advise. I understand that this is not Enterprise Edition, so not everything may be supported, but I assume the cloning process is similar for XE.
While it is possible to restore a database using the data files, I strongly suspect that is not what you're really after. If you're not an experienced DBA, the number of possible issues you'll encounter trying to restore a backup on a different machine and then create an appropriate database instance is rather large.
More likely, what you really want to do is generate a full export of your database. The other people that need your application would then install Oracle and import the export that you generated.
The simplest possible approach would be at a command line to
exp / as sysdba full=y file=myDump.dmp
You would then send myDump.dmp to the other users who would import that into their own database
imp / as sysdba full=y file=myDump.dmp
This will only be a logical backup of your database. It will not include things like the parameters that the database has been set to use so other users may be configured to use more (or less) memory or to have a different file layout or even a slightly different version of Oracle. But it does not sound like you need that degree of cloning. If you have a large amount of data, using the DataPump version of the export and import utilities would be more efficient. My guess from the fact that you haven't even created a new tablespace is that you don't have enough data for this to be a concern.
For more information, consult the Oracle documentation on the export and import utilities.

Load sqlite database into Postgres

I have been developing locally for some time and am now pushing everything to production. Of course I was also adding data to the development server without thinking that I hadn't reconfigured it to be Postgres.
Now I have a SQLite DB whose information I need to get into a Postgres DB on a remote VPS.
I have tried dumping to a .sql file but am getting a lot of syntax complaints from Postgres. What's the best way to do this?
For pretty much any conversion between two databases the options are:
1. Do a schema-only dump from the source database. Hand-convert it and load it into the target database. Then do a data-only dump from the source DB in the most compatible form of SQL dump it offers. Try loading that into the target DB. When you hit problems, script transformations to the dump using sed/awk/perl/whatever and try again. Repeat until it loads and the results match.
2. Like (1), hand-convert the schema. Then write a script in your preferred language that connects to both databases, SELECTs from one, and INSERTs into the other, possibly with some transformations of data types and representations.
3. Use an ETL tool like Talend or Pentaho to connect to both databases and convert between them. ETL tools are like a "somebody else already wrote it" version of (2), but they can take some learning.
4. Hope that somebody has already written a conversion tool. Heroku has one called sequel that works for SQLite -> PostgreSQL; whether it is available outside Heroku and can function without the rest of the Heroku infrastructure and code is another question.
After any of those, some post-transfer steps, like using setval() to initialize sequences, are typically required.
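For example, after copying the rows into a table whose id column is backed by a sequence, the sequence can be brought up to date with something like this (table and column names are illustrative):
-- Point the sequence behind mytable.id at the current maximum id.
SELECT setval(pg_get_serial_sequence('mytable', 'id'), COALESCE(MAX(id), 1))
FROM mytable;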
Heroku's database conversion tool is called sequel. Here are the ruby gems you need:
gem install sequel
gem install sqlite3
gem install pg
Then this worked for me for a sqlite database file named 'tweets.db' in the current working directory:
sequel -C sqlite://tweets.db postgres://pgusername:pgpassword@localhost/pgdatabasename
PostgreSQL supports "foreign data wrappers", which let you access other data sources, including SQLite, directly from the database, even to the point of importing the schema automatically. You can then use CREATE TABLE localtbl AS (SELECT * FROM remotetbl) to get your data into actual PostgreSQL storage; a sketch follows the links below.
https://wiki.postgresql.org/wiki/Foreign_data_wrappers
https://github.com/pgspider/sqlite_fdw
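A rough sketch of that approach using the sqlite_fdw extension linked above; the path, schema, and table names are placeholders, and the exact options can differ between FDW versions:
-- Load the wrapper and point a foreign server at the SQLite file.
CREATE EXTENSION sqlite_fdw;
CREATE SERVER sqlite_server FOREIGN DATA WRAPPER sqlite_fdw
  OPTIONS (database '/path/to/tweets.db');
-- Pull the table definitions in automatically, then copy the data locally.
CREATE SCHEMA sqlite_import;
IMPORT FOREIGN SCHEMA public FROM SERVER sqlite_server INTO sqlite_import;
CREATE TABLE tweets AS SELECT * FROM sqlite_import.tweets;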

SQLite database exception "file is encrypted or is not a database" on BlackBerry?

I am working on a firm application in which I need to create a local database on my device.
I create my local database through a CREATE statement (it works well).
Then I take that file and perform the insert operations through the Firefox SQLite plugin; I need to insert approximately 2000 rows at a time, so I cannot do it in code. I just run the inserts manually through the SQLite plugin in Firefox.
After that I use that file in place of my local database.
When I run a SELECT query through my code, it shows an exception: java.lang.Exception: Exception: In create or prepare statement in DB net.rim.device.api.database.DatabaseException: SELECT distinct productline FROM T_Electrical ORDER BY productline: file is encrypted or is not a database
I found the solution to this problem: I was making a silly mistake by creating the file manually via right-click in my res folder, which is not correct. The database, including the file itself, needs to be created entirely from the SQLite plugin; then it works fine.
This is a very rare problem, but I think it might be helpful for someone like me. :)
You should check to see if there is a version problem between the SQLite used by your Firefox installation and that on the BlackBerry. I think I had the same error when I tried to build a database file with SQLite version 2.
You also shouldn't need to create the database file on the device. To create large tables I use a Ubuntu machine and the sqlite3 command line. Create the file, create the tables, insert the data and build indexes. Then I just copy the file onto the device in the proper directory.
For me it was a simple thing: a password had been set on that DB. I just used it and the problem was solved.

sqlite3 virtual tables lifetime

Can someone please tell me the lifetime of virtual tables created in SQLite3? I have an Android application with a search feature, and I want to use the full-text search feature of SQLite.
I do not know how long these tables stay in the system, or whether I need to create the tables each time I access the application.
Any help?
The SQLite FTS module creates several 'internal' tables for every virtual table you define. Those tables are plainly visible in the database schema, so FTS virtual tables as well as their underlying data are completely contained in the database file.
This might be different with other types of virtual table; e.g. the VirtualShape extension allows ESRI shapefiles (.shp) files to be read as tables; those are (naturally) stored separately from the SQLite database file.
In any case, the definition of any virtual table itself is stored in the database file, just like a normal table; so the answer to your question is:
No, there's no need to re-create them every time you open the database.
According to the SQLite3 file format specification, the virtual table definitions are stored in the schema table like any other table. Any indices for a virtual table are also stored in the DB file.
I take all this to mean that a virtual table is stored in the DB file and thus persistent. You should not have to recreate it each time you open a DB connection - it wouldn't make much sense like that, anyway.
A simple test using the sqlite3 CLI tool and an FTS3 table confirms this :-)
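Such a test can be as small as this (table and column names are just an example); after quitting the shell and reopening the same database file, the MATCH query still returns the row:
-- Create an FTS3 virtual table and add a row.
CREATE VIRTUAL TABLE docs USING fts3(title, body);
INSERT INTO docs (title, body) VALUES ('hello', 'full text search demo');
-- The virtual table and its shadow tables are visible in the schema:
SELECT name FROM sqlite_master WHERE name LIKE 'docs%';
-- Close and reopen the database file; the data is still there:
SELECT title FROM docs WHERE docs MATCH 'demo';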
