How to selectively dump all innodb tables in a mysql database? - innodb

I have a database called av2web, which contains 130 MyISAM tables and 20 InnoDB tables. I want to take a mysqldump of these 20 InnoDB tables and export them to another database as MyISAM tables.
Can you tell me a quick way to achieve this?
Thanks
Pedro Alvarez Espinoza.

If this were a one-off operation I'd do:
use DB;
show table status where engine='innodb';
and do a rectangular copy/paste from the Name column:
+-----------+--------+---------+------------+-
| Name      | Engine | Version | Row_format |
+-----------+--------+---------+------------+-
| countries | InnoDB |      10 | Compact    |
| foo3      | InnoDB |      10 | Compact    |
| foo5      | InnoDB |      10 | Compact    |
| lol       | InnoDB |      10 | Compact    |
| people    | InnoDB |      10 | Compact    |
+-----------+--------+---------+------------+-
to a text editor and convert it to a command
mysqldump -u USER DB countries foo3 foo5 lol people > DUMP.sql
and then import after replacing all instances of ENGINE=InnoDB with ENGINE=MyISAM in DUMP.sql
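For the replace-and-import step, a stream edit plus a plain mysql load is usually enough. A minimal sketch, assuming a Unix shell, that USER can log in to both databases, and that the target database (TARGET_DB here is a placeholder) already exists:
# rewrite the storage engine in the dump file
sed -i 's/ENGINE=InnoDB/ENGINE=MyISAM/g' DUMP.sql
# load the converted dump into the target database
mysql -u USER -p TARGET_DB < DUMP.sql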
If you want to avoid the rectangular copy/paste magic you can do something like:
use information_schema;
select group_concat(table_name separator ' ') from tables
where table_schema='DB' and engine='innodb';
which will return countries foo3 foo5 lol people
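If you want to skip the manual step entirely, the same information_schema query can feed mysqldump directly from the shell. A sketch, assuming a Unix shell and that USER and DB are replaced with your own values; -N suppresses the column header so only the table list is printed:
TABLES=$(mysql -N -u USER -p -e "SELECT GROUP_CONCAT(table_name SEPARATOR ' ') FROM information_schema.tables WHERE table_schema='DB' AND engine='innodb'")
mysqldump -u USER -p DB $TABLES > DUMP.sql
Note that each of the two commands will prompt for the password separately.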

I know this is an old question. I just want to share this script that generates the mysqldump command and also shows how to restore it.
The following portion of the script will generate a command to create a MySQL backup/dump:
SET SESSION group_concat_max_len = 100000000; -- important when you have lots of tables, to make sure they all get included
SET @userName = 'root'; -- the username you will log in with to generate the dump
SET @databaseName = 'my_database_name'; -- the database name to look up the tables from
SET @extraOptions = '--compact --compress'; -- any additional mysqldump options https://dev.mysql.com/doc/refman/5.6/en/mysqldump.html
SET @engineName = 'innodb'; -- the engine name to filter the tables by
SET @filename = '"D:/MySQL Backups/my_database_name.sql"'; -- the full path to generate the backup to
-- This query will generate the mysqldump command to generate the backup
SELECT
  CASE WHEN tableNames IS NULL
    THEN 'No tables found. Make sure you set the variables correctly.'
    ELSE CONCAT_WS(' ', 'mysqldump -p -u', @userName, @databaseName, tableNames, @extraOptions, '>', @filename)
  END AS command
FROM (
  SELECT GROUP_CONCAT(table_name SEPARATOR ' ') AS tableNames
  FROM INFORMATION_SCHEMA.TABLES
  WHERE table_schema = @databaseName AND ENGINE = @engineName
) AS s;
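For example, with the variables above and two hypothetical InnoDB tables t1 and t2 in my_database_name, the query prints a command along these lines (illustrative only; the actual table list depends on your schema):
mysqldump -p -u root my_database_name t1 t2 --compact --compress > "D:/MySQL Backups/my_database_name.sql"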
The following portion of the script will generate a command to restore the backup/dump into a specific database on the same or a different server.
SET @restoreIntoDatabasename = @databaseName; -- the name of the database you wish to restore into
SET @restoreFromFile = @filename; -- the full path of the file you want to restore from
-- This query will generate the command to restore the generated backup into MySQL
SELECT CONCAT_WS(' ', 'mysql -p -u root', @restoreIntoDatabasename, '<', @restoreFromFile);
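With the defaults above, the generated restore command comes out roughly as follows (again illustrative; substitute your own database name and path):
mysql -p -u root my_database_name < "D:/MySQL Backups/my_database_name.sql"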

Related

Weird behavior in mariadb table with unique key defined (ie, not so unique)

I'm trying to export/import a DB from one system to another but the import fails with the following error:
ERROR 1062 (23000) at line 8232: Duplicate entry '0-3-30168717-com_liferay_product_navigation_product_menu_web_...' for key 'IX_C7057FF7'
That table is defined as such:
CREATE TABLE `PortletPreferences` (
`portletPreferencesId` bigint(20) NOT NULL,
`ownerId` bigint(20) DEFAULT NULL,
`ownerType` int(11) DEFAULT NULL,
`plid` bigint(20) DEFAULT NULL,
`portletId` varchar(200) DEFAULT NULL,
`preferences` longtext DEFAULT NULL,
`mvccVersion` bigint(20) NOT NULL DEFAULT 0,
`companyId` bigint(20) DEFAULT NULL,
PRIMARY KEY (`portletPreferencesId`),
UNIQUE KEY `IX_C7057FF7` (`ownerId`,`ownerType`,`plid`,`portletId`),
In the mysql dump file, I see these two entries:
(31453178,0,3,30168717,'com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet','<portlet-preferences />',0,10132)
(31524539,0,3,30168717,'com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet','<portlet-preferences />',0,10132)
So, yep, there are two entries with the same unique key. How is that possible?!?
Knowing this, I ran the following select statement against the source DB:
select portletPreferencesId, ownerId, ownerType, plid, portletId from PortletPreferences where ownerId = 0 AND ownerType = 3 AND plid = 30168717 AND portletId like 'com_liferay_product_navigation_product_menu_web%';
And it outputs just ONE LINE!
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| portletPreferencesId | ownerId | ownerType | plid | portletId |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| 31524539 | 0 | 3 | 30168717 | com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
By the portletPreferencesId field, it outputs the second entry in the dump file. So I did one more select for the other row as such:
select portletPreferencesId, ownerId, ownerType, plid, portletId from PortletPreferences where portletPreferencesId = 31453178;
And I get:
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| portletPreferencesId | ownerId | ownerType | plid | portletId |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
| 31453178 | 0 | 3 | 30168717 | com_liferay_product_navigation_product_menu_web_portlet_ProductMenuPortlet |
+----------------------+---------+-----------+----------+----------------------------------------------------------------------------+
My question is, what's going on?!? Why is that entry not output by the first select statement and why is it there in the first place if those fields were supposed to be unique???
I have a bad feeling about the state of the source database :-( Oh, and that's just one. I have multiple duplicate keys like that in that table :-(
Thanks
There are occasional bugs like MDEV-15250 that can cause duplicate entries to occur.
Sometimes you may not see these duplicates because parts of the optimizer expect only a single row due to the unique constraint, and so won't search beyond the first match. A query across a range that includes both entries is more likely to show them (which is what mysqldump effectively does).
If you don't think it's related to ALTER TABLE occurring at the same time as insertions (like MDEV-15250), and you have a good hunch about the set of operations on the table that may have escaped the unique key enforcement, please create a bug report.
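If you want to enumerate the offending rows before rebuilding the index, a grouping query over the unique-key columns reads the whole table and therefore surfaces duplicates that a point lookup on the unique key can hide. A sketch against the table shown above:
-- list every (ownerId, ownerType, plid, portletId) combination that occurs more than once
SELECT ownerId, ownerType, plid, portletId,
       COUNT(*) AS cnt,
       GROUP_CONCAT(portletPreferencesId) AS ids
FROM PortletPreferences
GROUP BY ownerId, ownerType, plid, portletId
HAVING cnt > 1;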

Using Indexes results in Update locks cannot be acquired during a READ UNCOMMITTED transaction

After upgrading to MariaDB 10.5.11 I ran into a weird problem with indexes.
A simple table with two columns, Type (varchar) and Point (point).
An index on Type (Tindex) and a spatial index on Point (Pindex).
Now a query like
SELECT X(Point) as x,Y(Point) as y,hotels.Type FROM hotels WHERE (Type in ("acco")) AND MBRContains( GeomFromText( 'LINESTRING(4.922 52.909,5.625 52.483)' ), hotels.Point)
;
Results in a
Error in query (1207): Update locks cannot be acquired during a READ UNCOMMITTED transaction
While both
SELECT X(Point) as x,Y(Point) as y,hotels.Type FROM hotels USE INDEX (Pindex) WHERE (Type in ("acco")) AND MBRContains( GeomFromText( 'LINESTRING(4.922 52.909,5.625 52.483)' ), hotels.Point)
;
and
SELECT X(Point) as x,Y(Point) as y,hotels.Type FROM hotels USE INDEX (Tindex) WHERE (Type in ("acco")) AND MBRContains( GeomFromText( 'LINESTRING(4.922 52.909,5.625 52.483)' ), hotels.Point)
;
work fine, as MariaDB 10.5.10 did:
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
| 1 | SIMPLE | hotels | range|filter | Type,Pindex | Pindex|Type | 34|302 | NULL | 340 (4%) | Using where; Using rowid filter |
The issue is now being tracked as MDEV-26123 (I guess you reported it there). The issue description says that the problem was introduced in MariaDB 10.2.39, 10.3.30, 10.4.20, 10.5.11, 10.6.1.
I ran into the issue after upgrading to MariaDB 10.6.4. I downgraded to 10.6.0, which was possible without having to do any migration of the data. It seems to have fixed the problem for now.
The cause of this appears to be the code fix for MDEV-25594.
I cannot see anything in the commit message or discussion there that indicates that a change to the READ UNCOMMITTED behavior was intentional.
There are no open bug reports on this so I recommend you create a new bug report.
select @@session.autocommit;
set @@session.autocommit=0;
select @@session.autocommit;
# add in my.cnf
autocommit = 0
using mariadb 10.2.40 ( resolved )
https://developpaper.com/transaction-isolation-level-of-mariadb/
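Since the error complains specifically about READ UNCOMMITTED, it may also be worth checking which isolation level the failing session actually runs at; the link above covers the available levels. A sketch only, and raising the level is an assumption on my part rather than a fix confirmed above:
-- show the isolation level of the current session
SELECT @@session.tx_isolation;
-- try a stricter level for the session and re-run the query
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;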

For-each-row scenario in Kusto

Query1:
cluster(x).database('$systemdb').Operations
| where Operation == "DatabaseCreate" and Database contains "oci-"
| where State == 'Completed' and StartedOn between (datetime(2020-04-07) .. 3d)
| distinct Database, StartedOn
| order by StartedOn desc
The output of query1 is a list of databases; now I have to pass each DB value into query2 to get the build number.
Query2:
set query_take_max_records=5000;
let view=datatable(Property:string,Value:dynamic)[];
let viewFile=datatable(FileName:string)[];
alias database db = cluster(x).database('y');
let latestInfoFile = toscalar((
union isfuzzy=true viewFile,database('db').['TextFileLogs']
| where FileName contains "AzureStackStampInformation"
| distinct FileName
| order by FileName
| take 1));
union isfuzzy=true view,(
database('db').['TextFileLogs']
| where FileName == latestInfoFile
| distinct LineNumber,FileLineContent
| order by LineNumber asc
| summarize StampInfo=(toobject(strcat_array(makelist(FileLineContent,100000), "\r\n")))
| mvexpand bagexpansion=array StampInfo
| project Property=tostring(StampInfo[0]), Value=StampInfo[1]
)|where Property contains "StampVersion" | project BuildNumber = Value;
The database() function is a special scoping function, and it does not support non-constant arguments due to security considerations.
As a result, you cannot use a sub-query to fetch the list of databases and then operate on this list as input for the database() function.
This behavior is described at:
https://learn.microsoft.com/en-us/azure/kusto/query/databasefunction?pivots=azuredataexplorer
Syntax
database(stringConstant)
Arguments
stringConstant: Name of the database that is referenced. The database identified can be either the DatabaseName or the PrettyName. The argument has to be constant prior to query execution, i.e. it cannot come from sub-query evaluation.
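In practice the database names have to appear as constants in the query text, for example by generating the text of query2 outside of Kusto for each database returned by query1, or by unioning explicit database() calls. A sketch with two hypothetical database names dbA and dbB, assuming both contain the TextFileLogs table from the question:
union
    (database('dbA').TextFileLogs | extend SourceDb = 'dbA'),
    (database('dbB').TextFileLogs | extend SourceDb = 'dbB')
| where FileName contains "AzureStackStampInformation"
| distinct SourceDb, FileName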

Why can vsql return all the records, while a program using the ODBC driver can't?

I do a simple test for Vertica:
ha=> insert into test(Name, City) values( 'Nan', 'Nanjing');
OUTPUT
--------
1
(1 row)
ha=> select node_name, wos_row_count, ros_row_count from projection_storage where anchor_table_name = 'test';
node_name | wos_row_count | ros_row_count
---------------+---------------+---------------
v_ha_node0001 | 1 | 3
(1 row)
ha=> select * from test;
ID | Name | City
--------+------+---------
250001 | Nan | Nanjing
250002 | Nan | Nanjing
250003 | Nan | Nanjing
250004 | Nan | Nanjing
(4 rows)
The select operation displays OK (the data in both WOS and ROS display).
Then I write a simple program which uses ODBC:
int row_num = 0;

/* Run the same query that was issued in vsql. */
ret = SQLExecDirect(stmt_handle, (SQLCHAR*)"select * from test", SQL_NTS);
if (!SQL_SUCCEEDED(ret))
{
    printf("Execute statement failed\n");
    goto ERR;
}

/* Count the rows returned through the ODBC driver. */
while ((ret = SQLFetch(stmt_handle)) == SQL_SUCCESS)
{
    row_num++;
}
printf("Row number is %d\n", row_num);
But the result is:
Row number is 3
It doesn't count the data in WOS.
And DbVisualizer also displays 3 rows of data.
Does it need some special option for using ODBC? Thanks very much in advance!
By default, vsql is in transaction mode. As long as you keep your session open, inside vsql, you will see what you expect, as you are inside a transaction.
As soon as you go outside of your session (odbc, dbvis), the transaction is not (yet) visible. To make it visible to other sessions, you need to issue a 'COMMIT;' inside vsql. Then (as confirmed) you can access data from odbc and dbvis.
You can set (vsql only) your transaction to be autocommit with
\set AUTOCOMMIT on
-- disable with
\set AUTOCOMMIT off
To know if autocommit is enabled, you can use show:
show AUTOCOMMIT;
name | setting
------------+---------
autocommit | off
(1 row)
You can even do it on your vsql call with --set AUTOCOMMIT=on. Whether that is a good idea is another question.
ODBC lets you set autocommit in different ways; see the ODBC documentation.
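On the ODBC side, one way is the connection attribute SQL_ATTR_AUTOCOMMIT. A minimal sketch, assuming hdbc is the already-allocated connection handle used by the program above:
/* turn autocommit on so every statement is committed immediately */
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_ON, SQL_IS_UINTEGER);

/* or leave it off and commit explicitly when the work is done */
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, SQL_IS_UINTEGER);
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);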

Retrieve ImageByte from SQL Server Database to DataGridView in ASP.NET

I am using ASP.NET with Visual Basic and I want to retrieve my data from SQL Server 2008. While retrieving the data from the database I noticed that the image/picture did not load in the DataGridView. It only retrieves the text data; the image data is not present. This is my sample SQL script:
sql = "SELECT em_Picture, em_EmployeeID, em_LastName + ', ' + em_FirstName " & _
"From [T_EmployeeMaster] Where em_EmploymentStatus <> 'RS' "
I want to display the data on this kind of format:
| Picture | ID | Name |
| Image | 1 | Basilio, Rowell |
But when I execute the program it returns the following:
| ID | Name |
| 1 | Basilio, Rowell |
The em_Picture column data type is "image" and it was stored as byte() in the database. I have already searched for a solution on Google but nothing helped.
