MySQL InnoDB: cannot calculate statistics because the .ibd file is missing

Environment:
Windows 7 (XAMPP latest)
Apache 2.4.4
PHP 5.5
MySQL 5.6.11
I am trying to backup a database from MySQL 5.1 and import it to MySQL 5.6.
In MySQL 5.1, there are some MyISAM and InnoDB tables. I used mysqldump to dump the SQL file, with the --add-drop-database switch.
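For reference, the dump command was roughly along these lines (database and file names here are placeholders; note that --add-drop-database only takes effect together with --databases or --all-databases):
mysqldump -u root -p --databases mydatabase --add-drop-database > mydatabase.sql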
Now when I go back to my localhost and import the SQL file using MySQL Workbench, an error occurs:
InnoDB: cannot calculate statistics for table "database"."tables" because the .ibd file is missing.
I tried to drop the database using:
drop schema database
This crashes MySQL 5.6, with an error like this:
2013-09-10 17:18:23 fc4 InnoDB: Warning: MySQL is trying to drop database `database`.``
InnoDB: though there are still open handles to table `database`.`table`.
In my.ini I set:
innodb_force_recovery = 4
I tried:
Creating a new database with a different name and running the import again; none of the InnoDB tables could be created.
Copying all *.frm files from a working 5.1 server into the data directory, overwriting the existing database, and restarting MySQL 5.6.11; none of the InnoDB tables could be accessed.
If I run a CREATE TABLE statement with ENGINE=InnoDB, it fails and says the table already exists, even though it does not.
If I run a DROP TABLE statement on an InnoDB table, it says the table does not exist...
Can anyone please give me some advice on this?
Thank you.

It turns out to be a MySQL 5.6.11 problem; I changed MySQL to 5.5.30 and it all works well.

I had a similar problem on Mac OS X, but in a slightly bizarre way: I was running fine on some pre-5.6.19 version and then upgraded to 5.6.19, which started to give me the error message above, except none of my tables use mixed case.
As it turned out, one of my databases is using an uppercase character as its first letter. This has worked fine for a long time, but failed this morning after the minor version upgrade and sent me into a 2-hour search for what went wrong.
The fix is simple: create a symlink with the lower case version, restart mysqld and all is well. However, even though I understand the logic for making tables case-safe, there is no danger of the database name being ambiguous since the OS would always prevent that.
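A minimal sketch of that fix, assuming the default data directory and a made-up database name (adjust both to your install):
$ cd /usr/local/mysql/data
$ ln -s MyDatabase mydatabase
$ mysql.server restart
The symlink simply gives mysqld a lower-case path to the same directory, so no data is moved.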

SQLite file is encrypted or is not a database

I have a huge problem... I am developing a desktop app with SQLite, but during a copy/paste process I lost power and the process was terminated, so the database was lost. However, I found a way to recover it, but the database appears to be encrypted. When I try to open a connection using conn.Open(); I get this error. If I try to open it with DB Browser for SQLite, it asks me for a SQLCipher encryption password, so it seems to me that the data is lost.
Is there any default password?
Why did this happen, and how can I prevent it from happening again?
What can I do?
Thanks in advance.
Also check that the SQLite version you're "connecting" with aligns with the DB file version.
For example, here's a DB file written by SQLite version 3+:
$ file foobar.db
foobar.db: SQLite 3.x database, last written using SQLite version 3027002
And here I also have 2 versions of sqlite:
$ sqlite -version
2.8.17
$ sqlite3 -version
3.27.2 2019-02-25 16:06:06 bd49a8271d650fa89e446b42e513b595a717b9212c91dd384aab871fc1d0alt1
Obviously in hindsight, opening foobar.db with sqlite version 2 will fail, yielding the same error message:
$ sqlite foobar.db
Unable to open database "foobar.db": file is encrypted or is not a database
But all is good with the correct version:
$ sqlite3 foobar.db
SQLite version 3.27.2 2019-02-25 16:06:06
Enter ".help" for usage hints.
sqlite>
sqlite> .databases
main: /tmp/foobar.db
sqlite>
The error message is a catch-all, simply meaning that the file format was not recognized.
OK, I finally found a solution that works, so I'm posting the answer in case anybody runs into the same trouble I did.
First of all, use good recovery software. For repairing the database I found 3 solutions that work without a backup:
Open the corrupted database using DB Browser and export the database to SQL. Name it however you want. Then create a new database and import it from the SQL file.
There is software that repairs corrupted databases. Download one and use it to repair the database.
Download "sqlite3" from sqlite.org and in command line navigate to folder where "sqlite3" is unzipped. Then try to dump the entire database with .dump, and use those commands to create a new database:
sqlite3 corrupt_table_name.sqlite ".dump" | sqlite3 new.sqlite
I had the same error when I was trying to access a db dump on a different system from the one where it was obtained. When I tried to open it on a dev machine, it threw the error reported in this thread:
$ sqlite3 db_dump.sqlite .tables
Error: file is encrypted or is not a database
This turned out to be due to the difference in the sqlite version between those systems. The dev system version was 3.6.20.
The version on the production system was 3.8.9. Once I had sqlite3 upgraded to the same version, I was able to access all its data. Below you can see the tables are displayed as expected:
# sqlite3 -version
3.8.9
# sqlite3 db_dump.sqlite .tables
capture diskio transport
consumer filters processes
This error is rather misleading to begin with, though.
If you've interacted with the database at some point while specifying journal_mode = WAL, and then later try to use the database from a client that does not support WAL (< v3.7.0), this error can also come up.
As noted in the SQLite documentation under Backwards Compatibility, to resolve that without having to recreate the database, explicitly set the journal mode to DELETE:
PRAGMA journal_mode=DELETE;
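For example, with any client that does understand WAL (version 3.7.0 or later), a session along these lines should make the file readable by older clients again (the file name is just an example):
$ sqlite3 mydb.sqlite
sqlite> PRAGMA journal_mode;
wal
sqlite> PRAGMA journal_mode=DELETE;
delete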
Your database did not become encrypted (this is only one of the two options in the error message).
Your data recovery tool did not recover the correct data; what you have in the file is something else.
You have to restore the database file from the backup.
In my case the issue was with a SQLCipher version upgrade. Whenever I updated my pod, it automatically upgraded SQLCipher and the error occurred.
For a quick fix, just add the SDK manually instead of installing it via the pod. For a proper solution, use this link: GitHub Solution

#1293 - Incorrect table definition; there can be only one TIMESTAMP column with CURRENT_TIMESTAMP in DEFAULT or ON UPDATE clause

I have a Magento 2 website and wanted to deploy it to a server. I got the above-mentioned error when I tried to import a local DB copy into the live MySQL server. I found the reason: on the local system I have MySQL 5.6 and phpMyAdmin 4.4, while on the live server MySQL is lower than 5.6 (my hosting does not show which MySQL version) and phpMyAdmin is 3.4.11.
Is there any way to fix the problem? Your comments and solutions are appreciated.
The error looks as shown in the title above.
NOTE:
There were some other tables, like admin_user, where I removed the CURRENT_TIMESTAMP default and the ON UPDATE CURRENT_TIMESTAMP attribute from the 2nd TIMESTAMP column of the table.
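As an illustration (the column names here are made up), a server older than MySQL 5.6.5 allows only one TIMESTAMP column with CURRENT_TIMESTAMP in its DEFAULT or ON UPDATE clause, so a second column defined as
updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
has to be rewritten to something like
updated_at TIMESTAMP NULL DEFAULT NULL
before the dump will import on the older server.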
MySQL 5.6 is the minimum supported version; your server might have a lower one (see https://dev.mysql.com/doc/refman/5.6/en/upgrading-from-previous-series.html). You can check the version by running the following query on your server:
SELECT version();

Why have I lost tables when moving from phpMyAdmin to local MySQL Workbench

I have a huge database in phpMyAdmin. It has 1500 tables because I'm using Drupal. I used the command
mysqldump -u [username] -p[password] [databasename] > [filename.sql]
to create a .sql file on the server. It took some time but was 100% complete.
I then tried to copy the file to my local machine with FileZilla, but it kept crashing and stopping, so I used the command
scp [my_username]@[my_host]:[filename.sql] /some/local/directory.sql
This took a couple of hours but also said it was 100% complete. After this I opened MySQL Workbench and imported from the file on my local machine. When the import was finished I had 1024 tables. I thought this was less than it should be and checked phpMyAdmin with this SQL command
SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = '<MyDatabaseName>';
This command returned the table count on the server.
From that result I'm missing 484 tables locally. Where have they gone? Are they meaningless tables, or will this cause problems if I try to deploy?
My thought was that one of these commands leaves behind some tables that Drupal creates, perhaps because it sees them as pointless or broken.
Any help is really appreciated.
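One way to narrow down which tables went missing is to list the table names on both servers and diff the lists; a sketch, with credentials and the database name as placeholders:
mysql -u [username] -p -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='[databasename]' ORDER BY table_name" > server_tables.txt
Run the same against the local import into local_tables.txt, then compare the two files with diff server_tables.txt local_tables.txt.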

SQL Collation conflict after database restore

I restored a site and database onto another machine, but cannot run the application because I am getting this error:
InnerException message: Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the equal to operation.
Both machines are running MS SQL Server 2012 Standard edition, even down to the same minor version. I saw the other posts on this error, but could not find any tables or columns using Latin1_General_CI_AS. The database properties show that the collation is SQL_Latin1_General_CP1_CI_AS. Any ideas on how to fix this?
I changed the database server's collation and that worked. Apparently, temp tables were being populated by a stored procedure and choking because the database and the server had different collations.
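If changing the server collation is not an option, the conflict can also be resolved inside the stored procedure by forcing a collation at the point of comparison; a sketch with made-up table and column names:
SELECT t.SomeColumn
FROM #TempTable t
JOIN dbo.RealTable r
  ON t.KeyValue COLLATE DATABASE_DEFAULT = r.KeyValue;
Here COLLATE DATABASE_DEFAULT converts the temp table column (which inherits the server/tempdb collation) to the current database's collation before the comparison.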

How to make a database service in Netbeans 6.5 to connect to SQLite databases?

I use NetBeans IDE (6.5) and I have a SQLite 2.x database. I installed a JDBC SQLite driver from zentus.com and added a new driver in the NetBeans Services panel. Then I tried to connect to my database file from Services > Databases using this URL for my database:
jdbc:sqlite:/home/farzad/netbeans/myproject/mydb.sqlite
but it fails to connect. I get this exception:
org.netbeans.modules.db.dataview.meta.DBException: Unable to Connect to database : DatabaseConnection[name='jdbc:sqlite://home/farzad/netbeans/myproject/mydb.sqlite [ on session]']
at org.netbeans.modules.db.dataview.output.SQLExecutionHelper.initialDataLoad(SQLExecutionHelper.java:103)
at org.netbeans.modules.db.dataview.output.DataView.create(DataView.java:101)
at org.netbeans.modules.db.dataview.api.DataView.create(DataView.java:71)
at org.netbeans.modules.db.sql.execute.SQLExecuteHelper.execute(SQLExecuteHelper.java:105)
at org.netbeans.modules.db.sql.loader.SQLEditorSupport$SQLExecutor.run(SQLEditorSupport.java:480)
at org.openide.util.RequestProcessor$Task.run(RequestProcessor.java:572)
[catch] at org.openide.util.RequestProcessor$Processor.run(RequestProcessor.java:997)
What should I do? :(
The current version of Zentus SQLiteJDBC is v053, based on SQLite 3.6.1. It will not open a 2.x SQLite database. Perhaps you can use the SQLite 2.x command line tool to .dump your database and the sqlite3 command line tool to load it, then use Zentus SQLiteJDBC to access the new SQLite 3.x database.
Alternatively, use a JDBC driver that supports SQLite 2 such as this one.
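A sketch of that dump-and-reload migration (file names are examples; this assumes both the 2.x sqlite and the 3.x sqlite3 command line tools are installed):
$ sqlite mydb2.sqlite .dump | sqlite3 mydb3.sqlite
After this, point the JDBC URL at mydb3.sqlite instead of the old 2.x file.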
It's me again...
I had made two mistakes during my first attempt. After setting CLASSPATH as a system variable (hope I didn't break something else :)), putting sqlite_jni.dll into the system32 folder, and correcting the JDBC URL, I got it working :)
I have also downloaded their SQLite ODBC wrapper, installed it, and made connections to my SQLite 2 database via both the ordinary and the UTF8-based ODBC driver. I also used the built-in NetBeans JDBC-ODBC Bridge driver to be able to set up these connections.
All three connections were created, but:
Ordinary ODBC driver: I see text data in the wrong encoding. All other columns are displayed correctly.
UTF8 ODBC driver: I don't see text data at all. All other columns are displayed correctly.
JDBC driver: I don't see any columns at all. "Select * from my_any_table" always returns a single empty column.
I have Russian-language data in my database.
So... currently I have returned to the sqlite command line interface :))
