QTableView is extremely slow (even for only 3000 rows) - qt

I have a table with 3000 rows and 8 columns, which I display using a QTableView.
To insert items I do:
QStandardItem* vSItem = new QStandardItem();
vSItem->setText("Blabla");
mModel->setItem(row, column, vSItem);
where mModel is a QStandardItemModel.
Everything is fine as long as there are not too many rows, but when I try to visualize big
data (about 3000 rows), it becomes extremely slow (20 seconds on Windows 7 64-bit, on an 8-core machine with 8 GB of RAM!).
Is there anything I can do to improve performance?
Thanks in advance.

Good call on the autoresize on contents for your columns or rows.
I have a function that added a column to the table each time a client connected to my server application. As the number of columns in the table got large, the insertion time seemed to take longer and longer.
I was doing ui->messageLog->resizeRowsToContents(); each time. I changed this to auto-resize only the row that was being added, ui->messageLog->resizeRowToContents(0);, and the slowness went away.
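For illustration, the change amounts to replacing the full-table resize with a single-row resize (ui->messageLog is the table view from this answer):

// Before: recalculates the height of every row on each update
// ui->messageLog->resizeRowsToContents();
// After: recalculate only the newly added row (row 0 in this answer)
ui->messageLog->resizeRowToContents(0);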

Do you have auto-resize-to-contents enabled for your columns or rows? It can be a performance killer sometimes!
Have a look here:
QHeaderView::ResizeToContents
Hope it helps!
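If you are not sure whether that is enabled, this is roughly what the costly setting looks like (a sketch using the Qt 5 header API; tableView is assumed to be the QTableView from the question):

// Every setItem()/setData() call forces a size recalculation with this mode:
tableView->horizontalHeader()->setSectionResizeMode(QHeaderView::ResizeToContents);
tableView->verticalHeader()->setSectionResizeMode(QHeaderView::ResizeToContents);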

I found a solution: the problem was that I assigned the model to the table view
already in the constructor. So every time I inserted an item into the model,
the table view was informed and probably updated. Now I assign the model to the
table view only after I have filled the model with data.
This is not an elegant solution, but it works. Is there maybe a way to temporarily
disconnect the model from the table view, or something that tells the table view
not to care about changes in the model?
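One way to do roughly that, sketched under the assumption that mModel and tableView are the model and view from the question (fillModelWithData() is a hypothetical helper that performs all the setItem() calls): detach the model while populating and reattach it afterwards, so the view only refreshes once.

tableView->setModel(nullptr);   // view stops reacting to model changes
fillModelWithData(mModel);      // hypothetical: all the setItem()/setText() calls happen here
tableView->setModel(mModel);    // reattach; the view does a single full update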

Watch out for setSectionResizeMode(). This had enormous performance implications for me: it causes row and column size recalculations with every modification (i.e. every setData()/setText() call). This wasn't noticeable until I reached 1000+ rows. Consider using resizeSections() instead, which appears to be a one-time adjustment.
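A minimal sketch of that suggestion (assuming the view and model from the question): keep the header in a mode that does not auto-recalculate, fill the model, then do a single resize pass.

tableView->horizontalHeader()->setSectionResizeMode(QHeaderView::Interactive); // no per-change recalculation
// ... populate mModel with all the rows ...
tableView->horizontalHeader()->resizeSections(QHeaderView::ResizeToContents);  // one-time adjustment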

For this quantity of data, you'd be better off with a custom model: then you'd have control over when you inform the view of updates, for example. The 'standard' items scale to hundreds, and probably thousands, because modern hardware is fast, but they're explicitly documented as not being intended for data sets of this size.
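A minimal sketch of such a custom model (the class name, container, and member names here are assumptions for illustration, not code from the question): the data lives in a plain container, and the view is only notified when you decide to reset the model.

#include <QAbstractTableModel>
#include <QString>
#include <QVariant>
#include <QVector>

class BigTableModel : public QAbstractTableModel
{
public:
    explicit BigTableModel(QObject* parent = nullptr) : QAbstractTableModel(parent) {}

    int rowCount(const QModelIndex& parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : mRows.size();
    }

    int columnCount(const QModelIndex& parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : 8;
    }

    QVariant data(const QModelIndex& index, int role = Qt::DisplayRole) const override
    {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return mRows.at(index.row()).at(index.column());
    }

    // Replace the whole data set with a single notification to the view.
    void setRows(const QVector<QVector<QString>>& rows)
    {
        beginResetModel();
        mRows = rows;
        endResetModel();
    }

private:
    QVector<QVector<QString>> mRows;   // e.g. 3000 rows x 8 columns of display text
};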

Also, if all your rows have the same height, setting http://doc.qt.io/qt-5/qtreeview.html#uniformRowHeights-prop to true can boost performance. In my case, a model containing about 50,000 rows was almost unusable with uniformRowHeights set to false (the default). After changing it to true, it worked like a charm.
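For reference, that is a single property on the view; note that uniformRowHeights belongs to QTreeView, so it only applies if you are using a tree view rather than a QTableView:

treeView->setUniformRowHeights(true);   // skip per-row height calculations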

I am using 80000 rows and had a similar problem adding huge amounts of items to a table.
My solution was to let it allocate the memory in advance by telling it how many rows it will need.
I was using a QTableView and a model (PyQt), so:
self.model.setRowCount(80000)
I'm sure you can match this up with your code
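That snippet is PyQt; a rough C++ equivalent for the question's setup (the row and column counts are taken from the question, the rest is an assumption) would be to size the QStandardItemModel before the insertion loop:

mModel->setRowCount(3000);   // reserve all rows up front
mModel->setColumnCount(8);
// ...then fill the cells with setItem()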

Try this:

// Register the SQLite driver once, e.g. at application start-up.
QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");

void SELECT_TO_TBLWID(QTableWidget* TBL, QString DbPath, QString SQL)
{
    QSqlDatabase db2 = QSqlDatabase::database();
    db2.setDatabaseName(DbPath);
    if (!db2.open())
    {
        qDebug() << db2.lastError();
        qFatal("Failed to connect.");
    }

    QSqlQuery qry;
    qry.prepare(SQL);
    if (!qry.exec())
    {
        qDebug() << qry.lastError();
    }
    else
    {
        QSqlRecord rec = qry.record();
        TBL->setColumnCount(rec.count());

        // Count the rows first so the widget can be sized in one go...
        int RW = 0;
        while (qry.next())
            RW++;
        TBL->setRowCount(RW);

        // ...then rewind to before the first record and fill the cells.
        while (qry.previous())
        {
            // intentionally empty: just stepping back to the start
        }
        for (int r = 0; qry.next(); r++)
        {
            for (int c = 0; c < rec.count(); c++)
            {
                if (r == 0)
                    TBL->setHorizontalHeaderItem(c, new QTableWidgetItem(rec.fieldName(c)));
                TBL->setItem(r, c, new QTableWidgetItem(qry.value(c).toString()));
            }
        }
    }
    db2.close();
}
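A hypothetical call for illustration (the widget name, database path, and query below are made up):

SELECT_TO_TBLWID(ui->tableWidget, "C:/data/example.sqlite", "SELECT * FROM items");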

I had the exact same issue; even with 100 rows the performance was horrible.
Upon inspection, the issue is not really related to options set on the table view itself (like resizing and such), but rather to the fact that the model informs and updates the view on every insertion (beginInsertRows()/endInsertRows()).
With that being said, you have two options for maximum performance:
1. Set the model on the view only after you have populated it with data.
2. Set the model at any time, but populate a new list of data and then assign that list to the model.
Whichever option you go with, performance is dramatically better. I went with the second option because I set my model (and proxy model) in the constructor of the widget.
Later on, when I want to add data:
// Create a new list of data (temporary)
QList<MyObjects*> NewList;
for (auto it = Result.cbegin(); it != Result.cend(); ++it)
{
    MyObjects* my = new MyObjects();
    // my - set data
    NewList.append(my);
}
// Now simply replace the current data list
Model->setList(NewList);
This assumes you created your own setList() function inside your custom model:
void Model::setList(QList<MyObjects*> clist)
{
    beginResetModel();
    list = clist;
    endResetModel();
}
And voilà... you load thousands of records with high performance. Notice that beginResetModel() and endResetModel() are the functions that notify the table view.
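For completeness, a rough sketch of how such a custom model might serve the replaced list back to the view (the role handling and the displayText() accessor on MyObjects are assumptions for illustration, not the answerer's actual code):

int Model::rowCount(const QModelIndex& parent) const
{
    return parent.isValid() ? 0 : list.count();
}

QVariant Model::data(const QModelIndex& index, int role) const
{
    if (!index.isValid() || role != Qt::DisplayRole)
        return QVariant();
    MyObjects* obj = list.at(index.row());
    return obj->displayText(index.column());   // hypothetical accessor returning the cell text
}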
Enjoy.

Related

Get cell types when reading and parsing Excel files

I am trying to read and parse an Excel file, and some unclear things come into play, as usual for me.
Here is what I have:
while (true)
{
    comVariantCell1 = cells.item(row, 1).value().variantType();
    comVariantCell2 = cells.item(row, 2).value().variantType();

    // If an empty cell is found, processing stops and the user gets an error message
    // so the inconsistency can be resolved.
    if (comVariantCell1 != COMVariantType::VT_EMPTY && comVariantCell2 != COMVariantType::VT_EMPTY)
    {
        // Both cells have values, check their types.
        importedLine = conNull();
        progress1.setText(strFmt("Importing row %1", row));

        if (cells.item(row, 1).value().variantType() == COMVariantType::VT_BSTR)
        {
            importedLine += cells.item(row, 1).value().bStr();
        }
        else
        {
            importedLine += cells.item(row, 1).value().double();
        }

        importedLine += cells.item(row, 2).value().double();
        importedLinesCollection += [importedLine]; // conIns(importedLinesCollection, row - 1, (importedLine));
        row++;
    }
    else
    {
        info(strFmt("Empty cell found at line %1 - import will not continue and no records were saved.", row));
        break;
    }
}
Excel format:
Item number    Transfer Qty
a100           50.5
a101           10
a102           25
This worked well to check if the cell type is a string: COMVariantType::VT_BSTR.
But what should I use to check for a real or an integer value?
I am pretty sure that in this case the quantity will not contain real values, but it could be useful in the future to distinguish between these two types.
I have to mention that even if I have an int value and I use cells.item(row, 1).value().int(), it won't work. I can't see why.
Why do I want to make the difference? Because if it's forbidden to have real values in the quantity column (at least in my case), I want to check that, give the user the opportunity to put a correct value in that place, and maybe investigate further why that value ended up there.
Take a look at how it is done in \Classes\SysDataExcelCOM\readRow.
It basically uses a switch to test the type. This is really boring!
Also take a look at ExcelIO, a class I made some years ago. It reads Excel and returns each row as a container. This is a more high-level approach.
As a last resort you could save the Excel file as a tab-separated file, then use TextIO to read the content. This will be at least 10 times faster than using Excel!

Form Running Totals, Ax 2009

Is there an example anywhere of a form that performs running totals in a column located within a grid, where the user's ordering and filtering of the grid affect the running-totals column?
I could easily do the above if the grid were ordered only by transaction date, but to include the user's ordering and filtering I presume we would have to use the datasource range() and rangeCount() functions (see SysQuery::mergeRanges() for an example), then iterate over these to apply the filtering, and then include the dynalinks. The same goes for the ordering, albeit that is more complicated.
Any suggestions appreciated. Any appreciations suggested (as in: vote the question up!).
You could implement it as a form datasource display method using this strategy:
1. Copy the form's datasource query (no need for SysQuery::mergeRanges):
QueryRun qr = new QueryRun(ledgerTrans_qr.query());
2. Iterate and sum over your records using qr, stopping after the current record:
while (qr.next())
{
    lt = qr.getNo(1);
    total += lt.AmountMST;
    if (lt.RecId == _lt.RecId)
        break;
}
This could be made more performant if the sorting order were fixed (using sum(AmountMST) and adding a where constraint).
3. Return the total.
This is of course very inefficient (quadratic time, O(n^2)). Caching the results (in a map) may make it usable if there are not too many records.
Update: a working example.
Any observations or criticisms of the code below are most welcome. Jan's observation about the method being slow is still valid. As you can see, it's a modification of his original answer.
//BP Deviation Documented
display AmountMST XXX_runningBalanceMST(LedgerTrans _trans)
{
    LedgerTrans localLedgerTrans;
    AmountMST   amountMST;
    ;
    localLedgerTrans = this.getFirst();
    while (localLedgerTrans)
    {
        amountMST += localLedgerTrans.AmountMST;
        if (localLedgerTrans.RecId == _trans.RecId)
        {
            break;
        }
        localLedgerTrans = this.getNext();
    }
    return amountMST;
}

AX 2009: Adjusting User Group Length

We're looking into refining our User Groups in Dynamics AX 2009 into more precise and fine-tuned groupings, due to the wide range of variability between specific people within the same department. With this plan, it wouldn't be uncommon for the majority of our users to fall under 5+ user groups.
Part of this would involve expanding the default length of the User Group ID from 10 to 40 characters (as per Best Practice naming conventions), since 10 characters don't give us enough room to adequately name each group as we would like (again, based on Best Practice naming conventions).
We have found that the main information seems to come from the UserGroupInfo table, but that table isn't present under the Data Dictionary (it's under the System Documentation, so by my understanding it cannot be changed that way). We've also found the UserGroupName EDT, but that is already set at 40 characters. The form itself doesn't seem to restrict the length of the field either. We've discussed changing the field in SQL directly, but again, my understanding is that a full synchronization would overwrite this change.
Where can we go to change this particular setting, or is it even possible to change?
The size of the user group ID is defined by a system extended data type (here \System Documentation\Types\userGroupId), and you cannot change any of its properties, including the string size of 10.
You should live with that; don't try to fake the system with direct SQL changes. Even if you did, AX would still believe the length is 10.
You could change the SysUserInfo form to show the group name only. The group ID might as well be assigned by a number sequence in your context.
I wrote a job to change the string size via X++, and it works for EDTs, but it can't seem to find "userGroupId". From the general feel I get of AX, I'd be willing to guess that they just have it in a different location, but maybe not. I wonder if this could be tweaked to work:
static void Job9(Args _args)
{
    #AOT
    TreeNode treeNode;
    Struct propertiesExt;
    Map mapNewPropertyValues;

    void setTreeNodePropertyExt(
        Struct _propertiesExt,
        Map _newProperties
        )
    {
        Counter propertiesCount;
        Array propertyInfoArray;
        Struct propertyInfo;
        str propertyValue;
        int i;
        ;
        _newProperties.insert('IsDefault', '0');
        propertiesCount = _propertiesExt.value('Entries');
        propertyInfoArray = _propertiesExt.value('PropertyInfo');
        for (i = 1; i <= propertiesCount; i++)
        {
            propertyInfo = propertyInfoArray.value(i);
            if (_newProperties.exists(propertyInfo.value('Name')))
            {
                propertyValue = _newProperties.lookup(propertyInfo.value('Name'));
                propertyInfo.value('Value', propertyValue);
            }
        }
    }
    ;
    treeNode = TreeNode::findNode(#ExtendedDataTypesPath);
    // This doesn't seem to be able to find the system type
    //treeNode = treeNode.AOTfindChild('userGroupId');
    treeNode = treeNode.AOTfindChild('AccountCategory');
    propertiesExt = treeNode.AOTgetPropertiesExt();
    mapNewPropertyValues = new Map(Types::String, Types::String);
    mapNewPropertyValues.insert('StringSize', '30');
    setTreeNodePropertyExt(propertiesExt, mapNewPropertyValues);
    treeNode.AOTsetPropertiesExt(propertiesExt);
    treeNode.AOTsave();
    info("Done");
}

Determining HTML5 database memory usage

I'm adding sqlite support to my Google Chrome extension, to store historical data.
When creating the database, it is required to set the maximum size (I used 5 MB, as suggested in many examples).
I'd like to know how much storage I'm really using (for example, after adding 1000 records), to get an idea of when the 5 MB limit will be reached, and act accordingly.
The Chrome console doesn't reveal such figures.
Thanks.
You can calculate those figures if you want to. Basically, the default limit for localStorage and web storage is 5 MB, and names and values are stored as UTF-16, so in terms of stored characters it is really half of that, about 2.5 MB. For web storage, you can increase that by adding "unlimited_storage" to the manifest.
The same thing applies to web storage, but you have to go through all the tables and figure out how many characters there are per row.
For localStorage, you can test this with a population script:
var row = 0;
localStorage.clear();
var populator = function () {
    localStorage[row] = '';
    var x = '';
    for (var i = 0; i < (1024 * 100); i++) {
        x += 'A';
    }
    localStorage[row] = x;
    row++;
    console.log('Populating row: ' + row);
    populator();
};
populator();
The above should crash at around row 25 due to lack of space, which makes it about 2.5 MB. You can do the inverse and count how many characters there are per row, which tells you how much space you have used.
Another way to do this is to always add a "payload" and check for an exception; if one is thrown, you know you're out of space.
try {
    localStorage['foo'] = 'SOME_DATA';
} catch (e) {
    console.log('LIMIT REACHED! Do something else');
}
Internet Explorer did something called "remainingSpace", but that doesn't work in Chrome/Safari:
http://msdn.microsoft.com/en-us/library/cc197016(v=VS.85).aspx
I'd like to add a suggestion.
If it is a Chrome extension, why not make use of Web SQL storage or IndexedDB?
http://html5doctor.com/introducing-web-sql-databases/
http://hacks.mozilla.org/2010/06/comparing-indexeddb-and-webdatabase/
Source: http://caniuse.com/

Flex - sorting a datagrid column by the row's label

I'm creating a table that displays information from a MySQL database; I'm using foreign keys all over the place to cross-reference data.
Basically I have a datagrid with a column named 'system.' The system is an int that represents the id of an object in another table. I've used labelFunction to cross-reference the two and rename the column. But now sorting doesn't work; I understand that you have to create a custom sort function. I have tried cross-referencing the two tables again, but that takes ~30 seconds to sort 1200 rows. Now I'm just clueless as to what I should try next.
Is there any way to access the column's field label inside the sort function?
public function order(a:Object, b:Object):int
{
    var v1:String = a.sys;
    var v2:String = b.sys;
    if (v1 < v2) {
        trace(-1);
        return -1;
    } else if (v1 > v2) {
        trace(1);
        return 1;
    } else {
        trace(0);
        return 0;
    }
}
One way to handle this is to go through the objects you received and add the label as a property on each of them, based on the cross-referenced id. Then you can specify that label property as the display field in your data grid column instead of using a label function. That way you would get sorting as you'd expect, rather than having to create your own sort function.
The way that DataGrids and other list-based classes work is by using itemRenderers. Renderers are only created for the data that is shown on screen; in most cases there is a lot more data in your dataProvider than what is seen on screen.
Trying to sort your data based on something displayed by the dataGrid will most likely not give you the results you want.
But there is no reason you can't call the same label function on your data objects in the sortFunction.
One way is to use the itemToLabel function of the DataGrid:
var v1:String = dataGrid.itemToLabel(a);
var v2:String = dataGrid.itemToLabel(b);
A second way is to just call the labelFunction explicitly:
var v1:String = labelFunction(a);
var v2:String = labelFunction(b);
In my experience I have found sorting to be extremely quick; however, your recordset is slightly larger than what I usually load in memory at a single time.
