Qt QSqlDatabase __debugbreak() issue - qt

I want to read size information from a database table and store it in a map. I am using Visual Studio 2022 with the Qt tools add-on and Qt 6.4.
The m_sizes map is a member variable of a ConfigurationHandler class; the handler object lives as long as the program runs.
If I set the sizes manually (version 1), it works fine.
If I read the sizes from a database table (version 2), I get an error message when leaving the routine.
SetSize
void ConfigurationHandler::setSize(Sizes key, qint16 size)
{
    if (!m_sizes.contains(key)) m_sizes.insert(key, size);
    else m_sizes[key] = size;
}
Version 1 - without database usage
void ConfigurationHandler::initializeSizes()
{
    setSize(Sizes::DisplayHeight, 100);
    setSize(Sizes::DisplayWidth, 50);
    setSize(Sizes::DisplayEncoderWidth, 70);
    setSize(Sizes::DisplayHeaderHeight, 20);
}
Version 2 - with database usage
void ConfigurationHandler::initializeSizes() {
    QMap<QString, QString> rows;
    QSqlDatabase db;
    db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName("c:/users/arne/workspace/trm80eplus/trm80eplus.db");
    db.open();
    QSqlQuery q = db.exec("select * from configuration_sizes");
    while (q.next() == true)
        rows.insert(q.value("key").toString(), q.value("value").toString());
    setSize(Sizes::DisplayHeight, rows["display_height"].toInt());
    setSize(Sizes::DisplayWidth, rows["display_width"].toInt());
    setSize(Sizes::DisplayEncoderWidth, rows["display_encoderWidth"].toInt());
    db.close();
}
The error is:
Breakpoint instruction executed
A breakpoint statement (__debugbreak() statement or similar call) was executed in TRM80EPlus.exe.
Is it because the values from rows are no longer available when leaving the routine? But I actually copy the values from the database into m_sizes, not references, don't I?
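For what it's worth, QMap stores its keys and values by copy, so the qint16 values in m_sizes should not depend on rows, q, or db still existing after initializeSizes() returns; a dangling reference into rows is therefore an unlikely explanation. A minimal standalone sketch of that value-copy behaviour (no database involved; the enum and member names just mirror the question):

#include <QMap>
#include <QString>
#include <QDebug>

enum class Sizes { DisplayHeight, DisplayWidth };

static QMap<Sizes, qint16> m_sizes;   // stands in for the member map

static void fillFromTemporaries()
{
    // rows only lives inside this function, just like in initializeSizes()
    QMap<QString, QString> rows;
    rows.insert("display_height", "100");
    rows.insert("display_width", "50");

    // toInt() yields a plain value and QMap stores its own copy of it
    m_sizes[Sizes::DisplayHeight] = rows["display_height"].toInt();
    m_sizes[Sizes::DisplayWidth]  = rows["display_width"].toInt();
}   // rows is destroyed here

int main()
{
    fillFromTemporaries();
    // Still prints 100 and 50: the copies in m_sizes are independent of rows.
    qDebug() << m_sizes[Sizes::DisplayHeight] << m_sizes[Sizes::DisplayWidth];
    return 0;
}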

Related

SQLite on Embedded System

I am trying to configure SQLite to run on an embedded system (ARM® Cortex®-M7). I have downloaded the amalgamation from the SQLite website, imported it into the project, and added the following symbols: SQLITE_THREADSAFE=0, SQLITE_OS_OTHER=1, SQLITE_OMIT_WAL=1 to allow it to compile.
I then downloaded test_onefile.c (available here: http://www.sqlite.org/vfs.html) which is supposed to allow SQLite to operate directly on embedded media without using an intermediate filesystem and imported it into the project (I was also sure to provide an sqlite3_os_init() function to register the VFS).
SQLITE_API int sqlite3_os_init(void)
{
    extern int fs_register(void);
    return fs_register();
}
In a separate file fs_register() looks like this:
/*
** This procedure registers the fs vfs with SQLite. If the argument is
** true, the fs vfs becomes the new default vfs. It is the only publicly
** available function in this file.
*/
int fs_register(void)
{
    if (fs_vfs.pParent) return SQLITE_OK;
    fs_vfs.pParent = sqlite3_vfs_find(0);
    fs_vfs.base.mxPathname = fs_vfs.pParent->mxPathname;
    fs_vfs.base.szOsFile = MAX(sizeof(tmp_file), sizeof(fs_file));
    return sqlite3_vfs_register(&fs_vfs.base, 0);
}
I can successfully register a filesystem, open a database, and prepare SQL statements using sqlite3_vfs_register(), sqlite3_open(), and sqlite3_prepare().
When opening a database I am sure to use the ":memory:" string to create the database in memory rather than as a file.
static void TestSQLiteOpenDB(void)
{
    /******** setup ********************************/
    sqlite3 *db;
    int rc;
    /******** run element/component under test *****/
    rc = sqlite3_open(":memory:", &db);
    sqlite3_close(db);
    /******** assertion test ***********************/
    TEST_ASSERT_EQUAL_INT(SQLITE_OK, rc);
}
My issue is when trying to run sqlite3_exec(). The program crashes when the following piece of code from test_onefile.c is called:
/*
** Populate the buffer pointed to by zBufOut with nByte bytes of
** random data.
*/
static int fsRandomness(sqlite3_vfs *pVfs, int nByte, char *zBufOut)
{
    sqlite3_vfs *pParent = ((fs_vfs_t *)pVfs)->pParent;
    return pParent->xRandomness(pParent, nByte, zBufOut);
}
If I change this function to simply return 0, it appears to work. I can then create tables, insert data into them, and so on.
My question is this: does SQLite actually need this buffer to be populated with random data, or is this workaround OK? I do not want to create further headaches for myself, but it was a nightmare to track this down as the point of failure, and I cannot quite wrap my head around what is happening.
SQLite uses this randomness for temporary files, to force changes in journal/WAL files, to generate unique column names, and when autoincremented IDs overflow.
If the returned value is constant, some of these might go into an infinite loop, so you should attempt to get actual randomness. (It does not need to be cryptographically secure.)
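If delegating to the parent VFS's xRandomness is what crashes on this target, one option suggested by the above is to supply your own source that at least changes between calls. A minimal sketch of a replacement fsRandomness, using a non-cryptographic xorshift PRNG seeded from a hypothetical board_read_cycle_counter() (a stand-in for whatever free-running timer or RNG peripheral the board provides):

#include <stdint.h>

/* Hypothetical HAL call: any free-running timer or hardware RNG will do. */
extern uint32_t board_read_cycle_counter(void);

/* Non-cryptographic xorshift32 state, seeded lazily from the cycle counter. */
static uint32_t prngState = 0;

static int fsRandomness(sqlite3_vfs *pVfs, int nByte, char *zBufOut)
{
    (void)pVfs;
    if (prngState == 0) {
        prngState = board_read_cycle_counter() | 1u;  /* avoid the all-zero state */
    }
    for (int i = 0; i < nByte; i++) {
        prngState ^= prngState << 13;
        prngState ^= prngState >> 17;
        prngState ^= prngState << 5;
        zBufOut[i] = (char)(prngState & 0xFF);
    }
    return nByte;  /* xRandomness returns the number of bytes actually produced */
}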

SQLite DB closed after prepare but stmt still works?

Also, the sqlite_master table for DB and the registered functions still seem to be available. Is this just a case of the stmt accessing memory that has not been overwritten yet, or does the prepare write details into the stmt so that it does not subsequently require the sqlite3* structure?
#include "sqlite3.h"
//---------------------------------------------------------------------------
void Odd(sqlite3_context *ctx, int nargs, sqlite3_value **values)
{
    sqlite3_result_int(ctx, sqlite3_value_int(values[0]) % 2);
}
//---------------------------------------------------------------------------
int _tmain(int argc, _TCHAR* argv[])
{
    sqlite3 *DB;
    if (sqlite3_open_v2("c:/SQLiteData/MyDB.db", &DB, SQLITE_OPEN_READWRITE, NULL) != SQLITE_OK)
        return 1;
    sqlite3_create_function_v2(DB, "Odd", -1, SQLITE_UTF16 | SQLITE_DETERMINISTIC, NULL,
                               &Odd, NULL, NULL, NULL);
    sqlite3_stmt *stmt;
    if (sqlite3_prepare16_v2(DB, L"select * from sqlite_master where Odd(rowid)",
                             -1, &stmt, NULL) != SQLITE_OK) return 2;
    if (sqlite3_close_v2(DB) != SQLITE_OK) return 3;
    int Count = 0;
    while (sqlite3_step(stmt) == SQLITE_ROW) Count++;
    return 0;
}
The documentation says:
If the database connection is associated with unfinalized prepared statements … then sqlite3_close() will leave the database connection open and return SQLITE_BUSY. If sqlite3_close_v2() is called with unfinalized prepared statements …, then the database connection becomes an unusable "zombie" which will automatically be deallocated when the last prepared statement is finalized or the last sqlite3_backup is finished. The sqlite3_close_v2() interface is intended for use with host languages that are garbage collected, and where the order in which destructors are called is arbitrary.
But you are not using such a language.
You should not try to access the zombie; your application
should finalize all prepared statements … associated with the sqlite3 object prior to attempting to close the object.
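A sketch of the relevant part of _tmain reordered along those lines (same file name, headers, and Odd UDF as in the question), finalizing the statement before closing the connection:

sqlite3 *DB;
if (sqlite3_open_v2("c:/SQLiteData/MyDB.db", &DB, SQLITE_OPEN_READWRITE, NULL) != SQLITE_OK)
    return 1;
sqlite3_create_function_v2(DB, "Odd", -1, SQLITE_UTF16 | SQLITE_DETERMINISTIC, NULL,
                           &Odd, NULL, NULL, NULL);

sqlite3_stmt *stmt;
if (sqlite3_prepare16_v2(DB, L"select * from sqlite_master where Odd(rowid)",
                         -1, &stmt, NULL) != SQLITE_OK)
    return 2;

int Count = 0;
while (sqlite3_step(stmt) == SQLITE_ROW) Count++;

sqlite3_finalize(stmt);              // finalize every prepared statement first...
if (sqlite3_close(DB) != SQLITE_OK)  // ...then even the plain sqlite3_close() succeeds
    return 3;
return 0;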

Xcode 4.5.1 - C programming - executable not producing output file

I'm using Xcode 4.5.1 (4G1004) on OS X 10.8.2. I've written a simple C program that reads scores from a .txt file and stores some of them in a new .txt file. What's happening is this: if I build the program in Xcode (Command-B), then go directly to the location of the executable it created and run it in a terminal, the program runs but does not produce the new .txt file. However, if I instead run it through Xcode by clicking the arrow in the upper left of the toolbar, the new .txt file IS created.
Why does this occur? I tried building the program with Sublime Text 2 and then manually running the executable, and that method correctly creates the output .txt file as well.
*edit - forgot to attach the code!
#include <stdio.h>

//func decs
void getScores();
void printScores(int);

int main()
{
    getScores();
    getchar();
    return 0;
}//main

void getScores()
{
    FILE* spscoresIn;
    FILE* spscoresOut;
    spscoresIn = fopen("scores_in.txt", "r");
    spscoresOut = fopen("scores_out.txt", "w");
    int score;
    int count = 0;
    while ((fscanf(spscoresIn, "%d", &score)) == 1) //writes only scores higher than 90 to the new file
        if (score > 90)
        {
            fprintf(spscoresOut, "%d\n", score);
            count++;
        }
    printScores(count);
    return;
}//getScores

void printScores(int count)
{
    printf("%d scores were higher than 90:\t", count);
    printf("Press Enter to exit");
    return;
}//printScores
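Since neither fopen() call in getScores() is checked, one way to narrow this down is to print the working directory and verify the files actually open from wherever the program is started; relative names like "scores_in.txt" are resolved against the current working directory, which is generally different when launching from a terminal than when launching through the Xcode run scheme. A small diagnostic sketch along those lines:

#include <stdio.h>
#include <unistd.h>   /* getcwd */

int main(void)
{
    char cwd[1024];
    if (getcwd(cwd, sizeof cwd) != NULL)
        printf("working directory: %s\n", cwd);

    FILE *in  = fopen("scores_in.txt", "r");
    FILE *out = fopen("scores_out.txt", "w");
    if (in == NULL)  { perror("scores_in.txt");  return 1; }
    if (out == NULL) { perror("scores_out.txt"); return 1; }

    /* ... read and filter the scores as in the question ... */

    fclose(in);
    fclose(out);
    return 0;
}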

Fast batch executions in PostgreSQL

I have lots of data and I want to insert it into the DB in the least time possible. I ran some tests. I created a table in PostgreSQL using the script below:
CREATE TABLE test_table
(
    id serial NOT NULL,
    item integer NOT NULL,
    count integer NOT NULL,
    CONSTRAINT test_table_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE test_table OWNER TO postgres;
I wrote test code that creates 1000 random values and inserts them into test_table in two different ways. First, using QSqlQuery::exec():
int insert() {
    QSqlDatabase db = QSqlDatabase::addDatabase("QPSQL");
    db.setHostName("127.0.0.1");
    db.setDatabaseName("TestDB");
    db.setUserName("postgres");
    db.setPassword("1234");
    if (!db.open()) {
        qDebug() << "can not open DB";
        return -1;
    }
    QString queryString = QString("INSERT INTO test_table (item, count)"
                                  " VALUES (:item, :count)");
    QSqlQuery query;
    query.prepare(queryString);
    QDateTime start = QDateTime::currentDateTime();
    for (int i = 0; i < 1000; i++) {
        query.bindValue(":item", qrand());
        query.bindValue(":count", qrand());
        if (!query.exec()) {
            qDebug() << query.lastQuery();
            qDebug() << query.lastError();
        }
    } //end of for i
    QDateTime end = QDateTime::currentDateTime();
    int diff = start.msecsTo(end);
    return diff;
}
Second, using QSqlQuery::execBatch():
int batchInsert() {
    QSqlDatabase db = QSqlDatabase::addDatabase("QPSQL");
    db.setHostName("127.0.0.1");
    db.setDatabaseName("TestDB");
    db.setUserName("postgres");
    db.setPassword("1234");
    if (!db.open()) {
        qDebug() << "can not open DB";
        return -1;
    }
    QString queryString = QString("INSERT INTO test_table (item, count)"
                                  " VALUES (:item, :count)");
    QSqlQuery query;
    query.prepare(queryString);
    QVariantList itemList;
    QVariantList CountList;
    QDateTime start = QDateTime::currentDateTime();
    for (int i = 0; i < 1000; i++) {
        itemList.append(qrand());
        CountList.append(qrand());
    } //end of for i
    query.addBindValue(itemList);
    query.addBindValue(CountList);
    if (!query.execBatch())
        qDebug() << query.lastError();
    QDateTime end = QDateTime::currentDateTime();
    int diff = start.msecsTo(end);
    return diff;
}
I found that there is no difference between them:
int main() {
    qDebug() << insert() << batchInsert();
    return 1;
}
Result:
14270 14663 (milliseconds)
How can I improve it?
The Qt documentation for QSqlQuery::execBatch (http://doc.qt.io/qt-5/qsqlquery.html#execBatch) states:
If the database doesn't support batch executions, the driver will
simulate it using conventional exec() calls.
I'm not sure whether my DBMS supports batch executions or not.
How can I test that?
I'm not sure what the Qt driver does, but PostgreSQL can run multiple statements in one transaction. Just do it manually instead of relying on the built-in feature of the driver.
Try changing your SQL to start with:
BEGIN TRANSACTION;
Then, for every iteration of the loop, run an insert statement:
INSERT HERE;
Once the loop has finished for all 1000 records, issue this on the same connection:
COMMIT TRANSACTION;
Also, 1000 rows is not much to test with; you might want to try 100,000 or more to make sure the Qt batch really wasn't helping.
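In Qt the same pattern can be expressed without hand-written BEGIN/COMMIT statements, via QSqlDatabase::transaction() and QSqlDatabase::commit(). A sketch based on the insert() function from the question, assuming the default connection has already been added and opened as shown there:

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QSqlError>
#include <QDateTime>
#include <QDebug>

int insertInOneTransaction() {
    QSqlDatabase db = QSqlDatabase::database();   // the default connection opened in insert()
    QSqlQuery query;
    query.prepare("INSERT INTO test_table (item, count) VALUES (:item, :count)");

    QDateTime start = QDateTime::currentDateTime();
    db.transaction();                             // BEGIN
    for (int i = 0; i < 1000; i++) {
        query.bindValue(":item", qrand());
        query.bindValue(":count", qrand());
        if (!query.exec())
            qDebug() << query.lastError();
    }
    db.commit();                                  // COMMIT: one commit instead of 1000 implicit ones
    QDateTime end = QDateTime::currentDateTime();
    return start.msecsTo(end);
}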
By issuing 1000 insert statements, you have 1000 round trips to the database. This takes quite some time (network and scheduling latency). So try to reduce the number of insert statements!
Let's say you want to:
insert into test_table(item, count) values (1000, 10);
insert into test_table(item, count) values (1001, 20);
insert into test_table(item, count) values (1002, 30);
Transform it into a single query and the query will need less than half of the time:
insert into test_table(item, count) values (1000, 10), (1001, 20), (1002, 30);
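A sketch of how such a multi-row VALUES statement could be assembled with Qt (insertManyRows is a hypothetical helper; the values are bound positionally rather than concatenated into the SQL text):

#include <QSqlQuery>
#include <QSqlError>
#include <QStringList>
#include <QVariantList>
#include <QDebug>

// Hypothetical helper: inserts all (item, count) pairs with a single statement.
bool insertManyRows(const QVariantList &items, const QVariantList &counts)
{
    if (items.isEmpty() || items.size() != counts.size())
        return false;

    QStringList rowPlaceholders;
    for (int i = 0; i < items.size(); ++i)
        rowPlaceholders << "(?, ?)";

    QSqlQuery query;
    query.prepare("INSERT INTO test_table (item, count) VALUES " + rowPlaceholders.join(", "));
    for (int i = 0; i < items.size(); ++i) {
        query.addBindValue(items.at(i));
        query.addBindValue(counts.at(i));
    }
    if (!query.exec()) {
        qDebug() << query.lastError();
        return false;
    }
    return true;
}

Fed with the two QVariantLists built in batchInsert() from the question, this sends one statement instead of 1000 (keep in mind that PostgreSQL limits the number of bind parameters per statement, so very large lists still need to be chunked).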
In PostgreSQL, there is another way to write it:
insert into test_table(item, count) values (
    unnest(array[1000, 1001, 1002]),
    unnest(array[10, 20, 30]));
My reason for presenting the second way is that you can pass the whole content of a big array in a single parameter (tested in C# with the database driver "Npgsql"):
insert into test_table(item, count) values (unnest(:items), unnest(:counts));
items is a query parameter with the value int[]{1000, 1001, 1002}
counts is a query parameter with the value int[]{10, 20, 30}
Today, I cut down the running time of 10,000 inserts in C# from 80 s to 550 ms with this technique. It's easy. Furthermore, there is no hassle with transactions, as a single statement is never split into multiple transactions.
I hope this works with the Qt PostgreSQL driver, too. On the server side you need PostgreSQL >= 8.4, as older versions do not provide unnest (but there may be workarounds).
You can use QSqlDriver::hasFeature with argument QSqlDriver::BatchOperations
In the Qt 4.8 sources, I found that only the OCI (Oracle) driver supports BatchOperations. I don't know why the PostgreSQL driver doesn't use the COPY statement for this.
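A quick runtime check along those lines (sketch, using the default connection's driver):

#include <QSqlDatabase>
#include <QSqlDriver>
#include <QDebug>

void checkBatchSupport()
{
    QSqlDatabase db = QSqlDatabase::database();   // the default connection
    if (db.driver()->hasFeature(QSqlDriver::BatchOperations))
        qDebug() << "driver supports native batch operations";
    else
        qDebug() << "execBatch() will be emulated with individual exec() calls";
}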

multiple sql statements in QSqlQuery using the sqlite3 driver

I have a file containing several SQL statements that I'd like to use to initialize a new sqlite3 database file. Apparently, sqlite3 only handles multiple statements in one query via the sqlite3_exec() function, and not through the prepare/step/finalize functions. That's all fine, but I'd like to use the QtSQL api rather than the c api directly. Loading in the same initializer file via QSqlQuery only executes the first statement, just like directly using the prepare/step/finalize functions from the sqlite3 api. Is there a way to get QSqlQuery to run multiple queries without having to have separate calls to query.exec() for each statement?
As clearly stated in Qt Documentation for QSqlQuery::prepare() and QSqlQuery::exec(),
For SQLite, the query string can contain only one statement at a time.
If more than one statement is given, the function returns false.
As you have already guessed, the only known workaround for this limitation is to separate the SQL statements with some delimiter string, split on it, and execute each statement in a loop.
See the following example code (which uses ";" as the separator and assumes that the same character is not used inside the queries; this lacks generality, as the character may appear in string literals in WHERE/INSERT/UPDATE statements):
QSqlDatabase database;
QSqlQuery query(database);
QFile scriptFile("/path/to/your/script.sql");
if (scriptFile.open(QIODevice::ReadOnly))
{
    // The SQLite driver executes only a single (the first) query in the QSqlQuery;
    // if the script contains more queries, it needs to be split.
    QStringList scriptQueries = QTextStream(&scriptFile).readAll().split(';');
    foreach (QString queryTxt, scriptQueries)
    {
        if (queryTxt.trimmed().isEmpty()) {
            continue;
        }
        if (!query.exec(queryTxt))
        {
            qFatal("One of the queries failed to execute. Error detail: %s",
                   qPrintable(query.lastError().text()));
        }
        query.finish();
    }
}
I wrote a simple function to read SQL from a file and execute it one statement at a time.
/**
 * @brief executeQueriesFromFile Read each line from a .sql QFile
 * (assumed to not have been opened before this function), and when ';' is reached, execute
 * the SQL gathered until then on the query object. Then do this until a COMMIT SQL
 * statement is found. In other words, this function assumes each file is a single
 * SQL transaction, ending with a COMMIT line.
 */
void executeQueriesFromFile(QFile *file, QSqlQuery *query)
{
    while (!file->atEnd()) {
        QByteArray readLine = "";
        QString cleanedLine;
        QString line = "";
        bool finished = false;
        while (!finished) {
            readLine = file->readLine();
            cleanedLine = readLine.trimmed();
            // remove comments at end of line
            QStringList strings = cleanedLine.split("--");
            cleanedLine = strings.at(0);
            // skip lines that contain only a comment, and DROP lines
            if (!cleanedLine.startsWith("--")
                    && !cleanedLine.startsWith("DROP")
                    && !cleanedLine.isEmpty()) {
                line += cleanedLine;
            }
            if (cleanedLine.endsWith(";")) {
                break;
            }
            if (cleanedLine.startsWith("COMMIT")) {
                finished = true;
            }
        }
        if (!line.isEmpty()) {
            query->exec(line);
        }
        if (!query->isActive()) {
            qDebug() << QSqlDatabase::drivers();
            qDebug() << query->lastError();
            qDebug() << "test executed query:" << query->executedQuery();
            qDebug() << "test last query:" << query->lastQuery();
        }
    }
}
http://www.fluxitek.fi/2013/10/reading-sql-text-file-sqlite-database-qt/
https://gist.github.com/savolai/6852986
