I'd like to export an Amazon MySQL RDS instance to my own server running MySQL. I successfully dumped the database and recreated the users on the new database, but when I go to import the dumpfile, I get:
ERROR 1071 (42000) at line 25: Specified key was too long; max key length is 767 bytes
Some Googling revealed that InnoDB has a max key size of 767 bytes. It turns out that we were using the following options in RDS:
innodb_large_prefix=on
innodb_file_format=barracuda
innodb_file_per_table=true
log_bin_trust_function_creators=1
I added these options to my.cnf, but I got the same error message. I then read that innodb_large_prefix only works on tables with ROW_FORMAT=DYNAMIC. It turned out that we were using dynamic rows on RDS, but that these rows were not being declared as DYNAMIC in the dumpfile. I then found this StackOverflow post that adds the ROW_FORMAT=DYNAMIC option to the dumpfile: Force row_format on mysqldump
And yet, still I get the same error message. Ideas?
I believe this is an encoding issue.
If latin1 was used on RDS but UTF-8 in your environment, then an indexed VARCHAR(256) is the problem.
In MySQL's utf8, each character can occupy up to 3 bytes, so a VARCHAR(256) index key becomes 768 bytes internally, one byte over the 767-byte limit.
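To make the arithmetic concrete, here is a minimal sketch (C#, runnable as a top-level program; the byte figures come from MySQL's documented sizing rules, not from your dump):

using System;

const int maxKeyBytes = 767;   // InnoDB key limit without innodb_large_prefix
const int bytesPerChar = 3;    // MySQL "utf8" budgets 3 bytes per character
                               // (latin1 budgets 1, utf8mb4 budgets 4)
const int varcharLength = 256; // the indexed column width

int keyBytes = varcharLength * bytesPerChar; // 768 bytes: one over the limit
Console.WriteLine($"Key needs {keyBytes} bytes, limit is {maxKeyBytes}: " +
                  (keyBytes > maxKeyBytes ? "ERROR 1071" : "fits"));

A common workaround if the large-prefix settings won't take: index only a prefix of the column (for example the first 255 characters under utf8), or keep the column's original charset in the dump.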
The goal is to complete an online backup while other processes write to the database.
I connect to the SQLite database via the command line and run:
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it's unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to SQLite and databases in general.
Am I going about this the wrong way?
If the sqlite3 backup writes "Error: database is locked", then you should use
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? about default timeouts (spoiler: the default is zero). With the backup problem solved, you can also switch SQLite to WAL mode, which lets readers and a single writer work concurrently.
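If you drive the backup from code instead of the shell, the same fix applies there. A minimal C# sketch, assuming the Microsoft.Data.Sqlite package (the file names mirror the command above):

using Microsoft.Data.Sqlite;

using var src = new SqliteConnection("Data Source=source.db");
using var dst = new SqliteConnection("Data Source=backup.db");
src.Open();
dst.Open();

using (var cmd = src.CreateCommand())
{
    // Same effect as ".timeout 10000": wait up to 10 seconds for locks to
    // clear instead of failing immediately with "database is locked".
    cmd.CommandText = "PRAGMA busy_timeout = 10000;";
    cmd.ExecuteNonQuery();
}

// BackupDatabase drives SQLite's online backup API under the hood.
src.BackupDatabase(dst);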
//writing this as an answer so it would be easier to google this, thanks guys!
This looks like a limitation of the Microsoft Azure Mobile Client's offline sync service on Android.
In my Xamarin.Forms application I have 40 Azure tables to sync with the remote backend. Whenever a particular request (_abcTable.PullAsync) returns a larger number of records, around 5K, PullAsync throws an exception saying: Error executing SQLite command: 'too many SQL variables'.
The PullAsync URL goes like this: https://abc-xyz.hds.host.com/AppHostMobile/tables/XXXXXXResponse?$filter=(updatedAt ge datetimeoffset'2017-06-20T13:26:17.8200000%2B00:00')&$orderby=updatedAt&$skip=0&$top=5000&ProjectId=2&__includeDeleted=true.
But in Postman I can see the same URL returning the 5K records, and it works fine on an iPhone device as well; it fails only on Android.
If I change the $top parameter value in the above PullAsync request from 5000 to 500, it works fine on Android but takes more time. Do I have any other alternatives that don't limit performance?
Package version:
Microsoft.Azure.Mobile.Client version="3.1.0"
Microsoft.Azure.Mobile.Client.SQLiteStore" version=“3.1.0”
Microsoft.Bcl version="1.1.10"
Microsoft.Bcl.Build version="1.0.21"
SQLite.Net.Core-PCL version="3.1.1"
SQLite.Net-PCL version="3.1.1"
SQLitePCLRaw.bundle_green version="1.1.2"
SQLitePCLRaw.core" version="1.1.2"
SQLitePCLRaw.lib.e_sqlite3.android" version="1.1.2"
SQLitePCLRaw.provider.e_sqlite3.android" version="1.1.2"
Please let me know if I need to provide more information. Thanks.
Error executing SQLite command: 'too many SQL variables'
Per my understanding, your SQLite may be hitting the Maximum Number Of Host Parameters In A Single SQL Statement limit, which the SQLite documentation describes as follows:
A host parameter is a place-holder in an SQL statement that is filled in using one of the sqlite3_bind_XXXX() interfaces. Many SQL programmers are familiar with using a question mark ("?") as a host parameter. SQLite also supports named host parameters prefaced by ":", "$", or "#" and numbered host parameters of the form "?123".
Each host parameter in an SQLite statement is assigned a number. The numbers normally begin with 1 and increase by one with each new parameter. However, when the "?123" form is used, the host parameter number is the number that follows the question mark.
SQLite allocates space to hold all host parameters between 1 and the largest host parameter number used. Hence, an SQL statement that contains a host parameter like ?1000000000 would require gigabytes of storage. This could easily overwhelm the resources of the host machine. To prevent excessive memory allocations, the maximum value of a host parameter number is SQLITE_MAX_VARIABLE_NUMBER, which defaults to 999.
The maximum host parameter number can be lowered at run-time using the sqlite3_limit(db,SQLITE_LIMIT_VARIABLE_NUMBER,size) interface.
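Outside of the Azure client, the limit is easy to reproduce directly. A hedged C# sketch using Microsoft.Data.Sqlite (an assumption; any binding behaves the same way): a single INSERT carrying 1000 parameters fails on a build that keeps the historical SQLITE_MAX_VARIABLE_NUMBER default of 999 (newer SQLite builds raised the default to 32766):

using System;
using System.Linq;
using Microsoft.Data.Sqlite;

using var db = new SqliteConnection("Data Source=:memory:");
db.Open();

using (var create = db.CreateCommand())
{
    create.CommandText = "CREATE TABLE t (id TEXT)";
    create.ExecuteNonQuery();
}

using var insert = db.CreateCommand();
// 1000 placeholders in one statement: one more than the default 999 limit.
insert.CommandText = "INSERT OR IGNORE INTO t (id) VALUES " +
    string.Join(",", Enumerable.Range(0, 1000).Select(i => $"(@p{i})"));
for (var i = 0; i < 1000; i++)
    insert.Parameters.AddWithValue($"@p{i}", i.ToString());

// Throws SqliteException "too many SQL variables" when the limit is 999.
insert.ExecuteNonQuery();

This mirrors the shape of the batched INSERT OR IGNORE that the client store generates, shown in the trace below.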
I referred to Debugging the Offline Cache and initialized my MobileServiceSQLiteStore as follows:
var store = new MobileServiceSQLiteStoreWithLogging("localstore.db");
I logged all the SQL commands that are executed against the SQLite store when invoking PullAsync. I found that after successfully retrieving the response from the mobile backend via the following request:
https://{your-app-name}.azurewebsites.net/tables/TodoItem?$filter=((UserId%20eq%20null)%20and%20(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00'))&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Microsoft.Azure.Mobile.Client.SQLiteStore.dll would execute the following SQL statements to update the related local table:
BEGIN TRANSACTION
INSERT OR IGNORE INTO [TodoItem] ([id]) VALUES (#p0),(#p1),(#p2),(#p3),(#p4),(#p5),(#p6),(#p7),(#p8),(#p9),(#p10),(#p11),(#p12),(#p13),(#p14),(#p15),(#p16),(#p17),(#p18),(#p19),(#p20),(#p21),(#p22),(#p23),(#p24),(#p25),(#p26),(#p27),(#p28),(#p29),(#p30),(#p31),(#p32),(#p33),(#p34),(#p35),(#p36),(#p37),(#p38),(#p39),(#p40),(#p41),(#p42),(#p43),(#p44),(#p45),(#p46),(#p47),(#p48),(#p49)
UPDATE [TodoItem] SET [Text] = #p0,[UserId] = #p1 WHERE [id] = #p2
UPDATE [TodoItem] SET [Text] = #p0,[UserId] = #p1 WHERE [id] = #p2
.
.
COMMIT TRANSACTION
Per my understanding, you could try to set MaxPageSize up to 999. Also, this limitation comes from SQLite, and the update processing is handled automatically by Microsoft.Azure.Mobile.Client.SQLiteStore. For now, I haven't found any approach to override that processing.
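For reference, MaxPageSize is set through the PullOptions parameter of PullAsync. A hedged sketch (C#; assumes the PullOptions overload shipped in the 3.x client, and the queryId is a placeholder). Note the $top=50 in the trace URL above, which is the client's default page size:

using Microsoft.WindowsAzure.MobileServices.Sync;

// _abcTable is the IMobileServiceSyncTable<T> from the question, e.g.
// obtained via client.GetSyncTable<Abc>(). Runs inside an async method.
await _abcTable.PullAsync(
    "allAbcItems",                           // queryId, enables incremental sync
    _abcTable.CreateQuery(),
    new PullOptions { MaxPageSize = 999 });  // stay below the 999-variable cap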
We are retrieving output from a table through a DB link by executing a stored procedure with input parameters. This worked previously, and we got the output in an ASP.NET application. But now we have noticed that outputs coming through the DB link are getting trimmed: if the status is 'TRUE', we get 'TRU', and so on. Why are the output values getting trimmed? The only change we made recently was changing the type of one input parameter from number to varchar on the receiving remote side, but I don't think that is the issue. When we execute the stored procedure directly on the remote side, it gives the proper output; only through the DB link are the outputs trimmed. Does anyone have any idea about this issue?
My Oracle client was having issues; only my system was affected, so I decided to reinstall it. After reinstalling, it worked fine.
I have set up table-level InnoDB database encryption on MariaDB.
I'd like to know if there is any way to confirm that the data is truly encrypted. I've tried searching /var/lib/mysql/ibdata1 for sample data in the tables, but I don't know if that's a reliable test or not.
I posted this question on mariadb.com, and the suggestion there was to perform a grep for some known data.
A DBA at Rackspace suggested using the strings command instead, to better handle the binary data, for example:
strings /var/lib/mysql/sample_table/user.ibd | grep "knownuser"
This approach returns no results on an encrypted table and does return results on an unencrypted table (assuming both have "knownuser" loaded into them).
You can query information_schema.INNODB_TABLESPACES_ENCRYPTION. When an InnoDB tablespace is encrypted, it is present in that table.
SELECT * FROM information_schema.INNODB_TABLESPACES_ENCRYPTION
WHERE NAME LIKE 'db_encrypt%';
source
My advice for testing is to copy the full dataset to another node without the encryption keys in place and try to start MySQL and query the encrypted tables. I'm making a (big) assumption that they will not be readable, since the valid encryption keys are missing.
Parsing the files on disk as they lie may prove difficult unless you have a special tool for it. Something like Jeremy Cole's innodb_ruby might be another litmus test: https://github.com/jeremycole/innodb_ruby
[This probably doesn't work if you change the key which encrypts the log.]
Stop the database server.
BACK UP the keyfile.
Change a key in the keyfile. (Don't delete it - it still has to remain a valid key, otherwise the server can't restart.)
Start MariaDB again.
Try to read the table (e.g. with phpMyAdmin).
If encrypted correctly, there is an answer: "The table is encrypted..." when trying to read the encrypted table.
Stop MariaDB.
Restore the backup.
Restart MariaDB.
The database can be open()ed using the same encryption key, and it works fine. I tried with multiple encrypted databases - all of them can be opened, but not attached.
This works when encrypted and when not encrypted (bytearray is null):
connection.open(file, "create", false, 1024, bytearray);
This only works when not encrypted:
connection.attach("db" + newnum.toString(), file, new Responder(attachEncryptedSuccess, openEncryptedError), bytearray);
Any help is appreciated.
UPDATE:
Just found a strange pattern here:
It seems that if I create an encrypted database, and then create new databases and attach them, everything works fine.
The created files, after unloading, will only open properly via the method they were initially created with. Therefore, the encrypted database I created earlier using open() will only open with the open() method, and all the encrypted databases initially created using attach() can only be opened using attach(). It also doesn't matter which database was open()ed first, i.e. which one is the main database. It can even be unencrypted.
This is something very strange. Is this a bug? Or am I doing something wrong here?
One gotcha I ran into a while ago sounds like it might be impacting you. If you are creating both DBs from AIR, this should work fine; however, if you have created one with any external tool, be aware that most tools default to PRAGMA encoding = "UTF-8". AIR, being Adobe, does things a little differently without straight up telling you: it creates its databases as UTF-16LE.
According to SQLite's rules, databases with differing text encodings cannot be attached to one another. One way to verify is to use sqliteman or some other SQLite editor to check the pragma settings.
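If you want to check from code rather than an editor, PRAGMA encoding reports how each file stores text. A small sketch (C# with Microsoft.Data.Sqlite, purely as an assumption for illustration; the pragma is the same from AIR or any other binding, and the file names are hypothetical):

using System;
using Microsoft.Data.Sqlite;

foreach (var file in new[] { "main.db", "template.db" })
{
    using var conn = new SqliteConnection($"Data Source={file}");
    conn.Open();
    using var cmd = conn.CreateCommand();
    cmd.CommandText = "PRAGMA encoding;";
    // Prints "UTF-8" or "UTF-16le"; ATTACH only succeeds when they match.
    Console.WriteLine($"{file}: {cmd.ExecuteScalar()}");
}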
For me, I ended up having to start from a seeded DB that was initialized from a template database (empty databases - just the header - were overwritten by AIR). If I allowed AIR to create my starting DB, it was set to UTF-16, and I could not attach a UTF-8 template to it.