How to export/import sequences in MariaDB 10.3 - mariadb

From version 10.3, MariaDB introduces the SEQUENCE object (not the storage engine).
XtraBackup backs up a sequence object as a plain table, and once restored it can no longer be used as a sequence.
How can I list all sequences in a database, and export all sequence objects to a SQL file?

From the documentation:
One of the goals with the Sequence implementation is that all old tools, such as mysqldump, should work unchanged, while still keeping the normal usage of sequences standard compatible.
So you can use mysqldump to dump it as a table, and then import it again with e.g. the mysql command-line client, and it will be turned back into a sequence. You can also use SHOW FULL TABLES WHERE table_type='SEQUENCE'; to display a list of sequences.
I would also expect mariabackup, which is MariaDB's fork of Percona's XtraBackup, to back up sequences correctly.
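For example, a minimal sketch of that workflow, where mydb and my_seq are placeholder names:

    -- list all sequences in the current database
    SHOW FULL TABLES WHERE table_type = 'SEQUENCE';

and then, from the shell:

    mysqldump mydb my_seq > my_seq.sql
    mysql mydb < my_seq.sql

If the import does not behave as expected, comparing the result against the original with SHOW CREATE SEQUENCE my_seq is a quick sanity check.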

Related

Create backup of bigquery cluster table

I have a clustered, partitioned table exported from GA 360, and I would like to create an exact replica of it. Using the Web UI it's not possible, and creating a backup table with the bq command-line tool didn't work either.
Also, whenever we check the preview it has a day filter.
Whenever data is appended to the backup table, I don't find this filter there, even though this option was set while creating the table.
If you can give more context about handling this kind of table it would be helpful.
Those are indeed sharded tables. As explained by #N. L, they follow a time-based naming approach, [PREFIX]_YYYYMMDD, and then get grouped together. The procedure explained there for backing them up seems correct. Anyhow, I would recommend using partitioned tables, as they are easier to back up and perform better in general.
This is not a clustered/partitioned table. It is a sharded, non-partitioned table with one common prefix. Once you start creating multiple tables with the same prefix, they are shown grouped under that prefix.
Ex:
ga_session_20190101
ga_session_20190102
both these tables will be grouped together.
To take a backup of these tables you need to create a script that copies each source table to a destination table with the same name, and execute that script with the bq command-line tool under the same project.
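A minimal sketch of such a script, assuming the shard names above and made-up dataset names (my_dataset as the source, backup_dataset as the destination):

    #!/bin/sh
    # copy each shard into the backup dataset, keeping the same table name
    for t in ga_session_20190101 ga_session_20190102; do
        bq cp my_dataset."$t" backup_dataset."$t"
    done

In practice the table list would be generated (for example from bq ls) rather than written out by hand.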

Create table Failed: [100015] Total size of all parcels is greater than the max message size

Could someone explain what the above error message means? How can it be fixed?
Thanks
There appear to be two main causes of this error:
Bugs in the client software
The query is too large
Bugs:
Make sure that you have the latest tools installed.
I have seen this error when incompatible versions of different TTU software components are installed, especially CLI.
Please install (or reinstall) the latest and greatest patches of CLI.
-- SteveF (Steve Fineholtz, Teradata employee)
The other reference is from the comments to the original post:
Could be the driver. I had a similar issue with JDBC drivers, which went away when I simply switched to a different version. – access_granted
Query is too large:
This is the root of the problem, even if it is caused by the above bugs.
Check the actual size of the SQL query sent to the server. Usually ODBC logs or debug files will let you examine the generated SQL.
Some SQL generators add a charset and collation to every field, increasing the query length.
You may want to create your own SQL Query from scratch.
Avoid the following, since they can be added later with separate queries:
Indexes
Default Values
Constraints
Non-ASCII characters in column names.
Also, remove all whitespace except single spaces.
Do not attempt to add data while creating a table, unless the total size of the SQL statement is less than 1 MB.
From the first reference, the maximum query size is 1MB.
On the extreme side, you can name all of your fields with a single letter (or two letters...). You can rename them with ALTER TABLE queries later.
The same goes for types; you can declare every column as CHAR and modify it later (before any data is added to the table).
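A hedged sketch of that advice (made-up table and column names; the exact ALTER TABLE syntax should be checked against the Teradata documentation for your version): keep the generated CREATE TABLE as small as possible and add everything else in separate statements, each comfortably under the 1 MB limit.

    -- minimal CREATE TABLE: short names, no defaults, constraints or indexes
    CREATE TABLE t1 (a INTEGER, b CHAR(10), c DATE);

    -- add the rest afterwards, one small statement at a time
    ALTER TABLE t1 ADD d VARCHAR(100);
    CREATE INDEX (b) ON t1;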

sqlite: online backup is not identical to original

I'm doing an online backup of an (idle) database using the example 2 code from here. The backup file is not identical to the original (the length is the same, but it differs in 3 bytes), although the .dump from both databases is identical. Backup files taken at different times are identical to each other.
This isn't great, as I'd like a simple guarantee that the backup is identical to the original, and I'd like to record checksums on the actual database and the backups to simplify restores. Any idea if I can get around this, or if I can use the backup API to generate files that compare identically?
The online backup can write into an existing database, so this writing is done inside a transaction.
At the end of such a transaction, the file change counter (offsets 24-27) is changed to allow other processes to detect that the database was modified and that any caches in those processes are invalid.
This change counter does not use the value from the original database because it might be identical to the old value of the destination database.
If the destination database is freshly created, the change counter starts at zero.
This is likely to be a change from the original database, but at least it's consistent.
The byte at offset 28 was decreased because the database has some unused pages.
The byte at offset 44 was changed because the database does not actually use new schema features.
You might be able to avoid these changes by doing a VACUUM before the backup, but this wouldn't help for the change counter.
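If the goal is just a checksum that survives restores, one workaround (a sketch, not part of the backup API) is to record checksums of the logical dump rather than of the raw file, since, as noted in the question, the .dump output of both databases is identical:

    sqlite3 original.db .dump | sha256sum
    sqlite3 backup.db .dump | sha256sum

Both commands should produce the same hash even though the raw files differ in those few header bytes.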
I would not have expected them to be identical, just because the backup API ensures that any backups are self-consistent (i.e. transactions in progress are ignored).

Can't insert multiple rows into SQLite database

I'm trying to read a .sql file into SQLite, but I'm getting syntax errors: the file was dumped from MySQL, which can insert multiple rows at once, while the SQLite v3.7.7 I'm using can only insert one row per table at a time with the VALUES command.
My understanding is that I either need to upgrade SQLite, or somehow modify the file so the rows are inserted one at a time. Please note I'm dealing with tens of thousands of entries, so rewriting the inserts with the UNION SELECT trick probably won't be very easy.
You need at least SQLite 3.7.11 to use the VALUES syntax you're interested in. But mysqldump has about 100 command-line options. And one of them, --skip-extended-insert, can disable extended inserts. (So you get one INSERT statement per row.) Read the mysqldump documentation, and run the dump again with options that better fit your target.
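For example, a sketch of re-running the dump so that each row gets its own INSERT statement (mydb is a placeholder database name):

    mysqldump --skip-extended-insert mydb > dump.sql

The resulting file may still contain other MySQL-isms (backticks, ENGINE clauses, and so on) that SQLite will reject, which is where the converter tools mentioned next come in.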
Or better yet, look at the list of SQLite converter tools.

What is the best way to export data from FileMaker Pro 6 to SQL Server?

I'm migrating/consolidating multiple FMP6 databases to a single C# application backed by SQL Server 2008. The problem I have is how to export the data to a real database (SQL Server) so I can work on data quality and normalisation, which will be significant: there are a number of repeating fields that need to be normalised into child tables.
As I see it there are a few different options, most of which involve either connecting to FMP over ODBC and using an intermediary to copy the data across (either custom code or MS Access linked tables), or exporting to a flat-file format (CSV with no header, or XML) and either using Excel to generate insert statements or writing some custom code to load the file.
I'm leaning towards writing some custom code to do the migration (like this article does, but in C# instead of Perl) over ODBC, but I'm concerned about the overhead of writing a migrator that will only be used once (as soon as the new system is up, the existing DBs will be archived)...
A few little joyful caveats: in this version of FMP there's only one table per file, and a single column may have multi-value attributes separated by hex 1D, which is the ASCII group separator, of course!
Does anyone have experience with similar migrations?
I have done this in the past, but using MySQL as the backend. The method I use is to export as CSV or merge format and then use the LOAD DATA INFILE statement.
SQL Server has something similar; its BULK INSERT statement might help.
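A minimal sketch of that approach on the SQL Server side, assuming a headerless CSV export at a made-up path and a staging table that already exists:

    BULK INSERT dbo.FmpStaging
    FROM 'C:\migration\fmp_export.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n'
    );

The multi-value fields separated by hex 1D could then be split out into child tables with follow-up queries once the raw rows are loaded.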
