Import a MySQL database via File-Per-Table Tablespaces on the same server - InnoDB

I need to copy a database within a single server. I chose the "File-Per-Table Tablespaces to Another Server" method because it is the fastest for large databases.
The official documentation states that the database name must be the same on the source server and the destination server.
What if the source server and the destination server are the same server?
Is there any way to quickly copy the database files from one database to another within a single server?
Or is there some way to make the "File-Per-Table Tablespaces to Another Server" method ignore the database name?
Server info:
OS: MS Windows Server 2008
MySQL Server: MySQL 5.5 or MariaDB
Table type: InnoDB (the InnoDB plugin in the case of MariaDB)
Portability Considerations for .ibd Files
When you move or copy .ibd files, the database directory name must be the same on the source and destination systems. The table definition stored in the InnoDB shared tablespace includes the database name. The transaction IDs and log sequence numbers stored in the tablespace files also differ between databases.

EDITED:
I would create the backup files as the method suggests, but would also export the schema as CREATE TABLE statements. After the backup I would use the RENAME TABLE command to move the existing tables to another database. Then I would recreate the schema in my current database using the CREATE TABLE statements, and finally import the tablespaces back as described.
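A sketch of that sequence in SQL, assuming file-per-table mode is on; the database names `mydb` and `backup_db` and the table name `t1` are placeholders, and the file copies happen outside MySQL:

```sql
-- Assumed names: source database mydb, holding database backup_db, table t1.
CREATE DATABASE backup_db;
RENAME TABLE mydb.t1 TO backup_db.t1;      -- moves the existing .ibd aside

-- Recreate mydb.t1 from the saved CREATE TABLE statement, then:
ALTER TABLE mydb.t1 DISCARD TABLESPACE;    -- removes the freshly created .ibd

-- Copy the backed-up t1.ibd into the mydb data directory, then:
ALTER TABLE mydb.t1 IMPORT TABLESPACE;
```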

Related

Delphi FireDAC SQLite default database path in Windows 10

I'm building a small app with a local in memory database using Delphi 10.3 with FireDAC set to SQLite.
What is the default path to the database file that SQLite uses? (i.e. when the Database parameter is left blank)
I want to transfer the database file to another PC. I suppose it has the .db file extension, but I'm unable to locate the file.
From http://docwiki.embarcadero.com/RADStudio/Sydney/en/Using_SQLite_with_FireDAC:
"To create and open an SQLite in-memory database use the following
parameters:
DriverID=SQLite Database=:memory: Or just leave the Database parameter
empty:"
This made me think that there should be a file that eventually stores the data, but it turns out there is none. All data is lost after the database is freed.
By definition "in memory" means no file. If you need a file for your data then add a filename (complete with path) to the Database value in TFDConnection's Params.
You can do this in the Object Inspector at design time, or in code at run time; it looks like:
FDConnection1.Params.Values['Database'] := 'C:\ProgramData\YourCompany\YourApp\YourFile.sqlite3';
Of course, it is better to set the path by querying Windows for the location of "Program Data" (e.g. via TPath.GetPublicPath from System.IOUtils) rather than hard-coding it.

Table synchronize error. Cannot drop the index because it does not exist or you do not have permission

I added a field to this table: STG_INVOICE_SUP_VW
But then I wasn't able to synchronize the table, so I deleted it. Now when I try to synchronize any table, it throws the error below:
Cannot execute a data definition language command on (). The SQL
database has issued an error.
SQL error description: [Microsoft][SQL Server Native Client 11.0][SQL
Server]Cannot drop the index
'STG_INVOICE_SUP_VW._dta_index_STG_INVOICE_SUP_VW_25_692157136__K7_1_2_3_4_5_6_8_9_10_11_12_13_14_15_16_17_1',
because it does not exist or you do not have permission.
SQL statement: DROP INDEX
STG_INVOICE_SUP_VW._dta_index_STG_INVOICE_SUP_VW_25_692157136__K7_1_2_3_4_5_6_8_9_10_11_12_13_14_15_16_17_1
Problems during SQL data dictionary synchronization. The operation
failed.
Synchronize failed on 1 table(s)
Edit:
The entire issue was caused by an additional index created from the SQL side.
If you create an index on AX tables from the SQL side, then either you won't be able to synchronize the table, or your index will be dropped on synchronization (as suggested by some users). You should create indexes from the Application Object Tree instead.
I deleted the index from SSMS and then synchronization worked perfectly.
It also solved one more issue. Incremental CIL was throwing the error below:
Cannot create a record in SysXppAssembly (SysXppAssembly). The record
already exists.
For the Incremental CIL issue I had already done the steps below, but they didn't fix it:
Stop the AOS.
Navigate to the XppIL folder on your AOS server: "C:\Program Files\Microsoft Dynamics AX\60\Server\YourAXInstanceName\bin\XppIL"
Back up the files from the XppIL folder.
Delete the files from the XppIL folder. Note: files only, not sub-folders.
Restart the AOS. The XppIL folder files will be recreated after the AOS restart.
From this link: Community.Dynamics
After fixing the table sync issue, Incremental CIL ran without issue.
Try creating the index STG_INVOICE_SUP_VW._dta_index_STG_INVOICE_SUP_VW_25_692157136__K7_1_2_3_4_5_6_8_9_10_11_12_13_14_15_16_17_1 directly in SQL Server and then re-run the DB sync in AX.
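A T-SQL sketch of that workaround; the column list here is purely hypothetical (the real columns are encoded in the index name), since AX only needs an index with this exact name to exist so that its DROP INDEX statement succeeds during synchronization:

```sql
-- Hypothetical definition: RECID is an assumed key column here; the
-- index just has to carry this exact name for the AX-issued DROP to work.
CREATE NONCLUSTERED INDEX
  [_dta_index_STG_INVOICE_SUP_VW_25_692157136__K7_1_2_3_4_5_6_8_9_10_11_12_13_14_15_16_17_1]
ON dbo.STG_INVOICE_SUP_VW (RECID);
```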

Load a text file into Apache Kudu table?

How do you load a text file to an Apache Kudu table?
Does the source file need to be in HDFS space first?
If it doesn't share the same HDFS space as other Hadoop ecosystem programs (e.g. Hive, Impala), is there an Apache Kudu equivalent of:
hdfs dfs -put /path/to/file
before I try to load the file?
The file does not need to be in HDFS first. It can be taken from an edge node or local machine. Kudu is similar to HBase: it is a real-time store that supports key-indexed record lookup and mutation, but it cannot store a text file directly the way HDFS does. For Kudu to store the contents of a text file, the file needs to be parsed and tokenised. For that, you can use Spark execution or the Java API along with NiFi (or Apache Gobblin) to perform the processing and then store the result in a Kudu table.
Or
You can integrate Kudu with Impala, allowing you to insert, query, update, and delete data in Kudu tablets using Impala's SQL syntax, as an alternative to building a custom Kudu application with the Kudu APIs. The steps are:
Import the file into HDFS.
Create an external Impala table.
Insert the data into the table.
Create a Kudu table using the keywords STORED AS KUDU and AS SELECT to copy the contents from Impala to Kudu.
You can refer to this link for more info: https://kudu.apache.org/docs/quickstart.html
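The steps above can be sketched in Impala SQL; the HDFS path, column names, and table names here are all assumptions for illustration:

```sql
-- After staging the file: hdfs dfs -put /path/to/file.csv /user/me/staging/
-- Assumed schema: two columns, id and val, comma-delimited.
CREATE EXTERNAL TABLE staging_tbl (id BIGINT, val STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/me/staging/';

-- Copy the staged rows into a new Kudu-backed table.
CREATE TABLE kudu_tbl
PRIMARY KEY (id)
PARTITION BY HASH (id) PARTITIONS 4
STORED AS KUDU
AS SELECT id, val FROM staging_tbl;
```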

Copy table from remote sqlite database?

Is there any way to copy data from one remote sqlite database to another? I have file replication done across two servers; however, some changes are recorded in an sqlite database local to each server. To get my file replication to work correctly, I need to copy the contents of one table and enter them into the table on the opposite system. I understand that sqlite databases are not meant for remote access; but is there any way to do what I need? I suppose I could write the contents of the table to a file, copy that file, then add the contents to the other database. This doesn't seem like the best option though, so I'm looking for another solution.
If you have access to the other database file, you can ATTACH it:
ATTACH '/some/where/else/other.db' AS remote;
INSERT INTO MyTable SELECT * FROM remote.MyTable;
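The same ATTACH-based copy can be scripted, for example with Python's standard `sqlite3` module; the file paths and the table name `MyTable` are assumptions, and the table must already exist in both databases:

```python
import sqlite3

def copy_table(dest_path: str, src_path: str, table: str) -> None:
    """Copy all rows of `table` from the database file at src_path into
    the same-named table in the database file at dest_path, via ATTACH."""
    con = sqlite3.connect(dest_path)
    try:
        con.execute("ATTACH DATABASE ? AS remote", (src_path,))
        # The table name is interpolated, so it must come from trusted code.
        con.execute(f"INSERT INTO {table} SELECT * FROM remote.{table}")
        con.commit()
        con.execute("DETACH DATABASE remote")
    finally:
        con.close()
```

Note that ATTACH works on local paths, so for a truly remote server the source file still has to be fetched (or reachable over a network share) first.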

sqlite3 virtual tables lifetime

Can someone please tell me the lifetime of virtual tables created in SQLite3? I have an Android application with a search feature, and I want to use the fast full-text search feature of SQLite.
I do not know how long these tables stay in the system, or whether I need to create them each time I access the application.
Any help?
The SQLite FTS module creates several 'internal' tables for every virtual table you define. Those tables are plainly visible in the database schema, so FTS virtual tables as well as their underlying data are completely contained in the database file.
This might be different with other types of virtual table; e.g. the VirtualShape extension allows ESRI shapefiles (.shp) files to be read as tables; those are (naturally) stored separately from the SQLite database file.
In any case, the definition of any virtual table itself is stored in the database file, just like a normal table; so the answer to your question is:
No, there's no need to re-create them every time you open the database.
According to the SQLite3 file format specification, virtual table definitions are stored in the schema table like any other table. Any indices for a virtual table are also stored in the DB file.
I take all this to mean that a virtual table is stored in the DB file and thus persistent. You should not have to recreate it each time you open a DB connection - it wouldn't make much sense like that, anyway.
A simple test using the sqlite3 CLI tool and an FTS3 table confirms this :-)
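The same check can be done programmatically; this minimal sketch uses Python's `sqlite3` module and assumes the underlying SQLite build includes the FTS4 module, as common builds do:

```python
import sqlite3

def fts_roundtrip(db_path: str):
    """Create an FTS4 virtual table in a database file, close the
    connection, then reopen the file and query the table again."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts4(body)")
    con.execute("INSERT INTO docs(body) VALUES ('full text search test')")
    con.commit()
    con.close()

    # A fresh connection to the same file: the virtual table, its shadow
    # tables, and the indexed data are all still there.
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT body FROM docs WHERE body MATCH 'search'").fetchall()
    con.close()
    return rows
```

If the second query returns the inserted row, the virtual table clearly survived the close/reopen cycle.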
