I'm adding SQLite support to my Google Chrome extension, to store historical data.
When creating the database, you are required to set the maximum size (I used 5MB, as suggested in many examples).
I'd like to know how much memory I'm really using (for example after adding 1000 records), to have an idea of when the 5MB limit will be reached, and act accordingly.
The Chrome console doesn't reveal such figures.
Thanks.
You can calculate those figures if you want to. The default limit for localStorage and Web SQL storage is 5MB, and names and values are stored as UTF-16, which takes two bytes per character, so in practice you get about 2.5MB worth of stored characters. For the SQL storage in an extension, you can lift that limit by adding the "unlimitedStorage" permission to the manifest.
The same accounting applies to the SQL database, but there you have to go through all tables and figure out how many characters there are per row.
For localStorage you can test it with a population script:
var row = 0;
localStorage.clear();
var populator = function () {
    localStorage[row] = '';
    // Build a 100K-character string.
    var x = '';
    for (var i = 0; i < (1024 * 100); i++) {
        x += 'A';
    }
    localStorage[row] = x;
    row++;
    console.log('Populating row: ' + row);
    populator(); // recurse until the quota exception stops us
}
populator();
The above should crash at around row 25 with a quota error, which works out to roughly 2.5 million stored characters (25 rows × ~100K characters), i.e. the 5MB limit at two bytes per character. You can do the inverse and count how many characters there are per row, which tells you how much space you have left.
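For example, a rough usage estimator along these lines (a sketch; it counts UTF-16 code units across all keys and values, then doubles them to approximate bytes):

function localStorageUsedBytes() {
    var chars = 0;
    for (var i = 0; i < localStorage.length; i++) {
        var key = localStorage.key(i);
        chars += key.length + localStorage.getItem(key).length;
    }
    return chars * 2; // two bytes per UTF-16 code unit
}
console.log((localStorageUsedBytes() / (1024 * 1024)).toFixed(2) + ' MB used');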
Another way to do this is to always add a "payload" and check for the exception; if one is thrown, you know you're out of space.
try {
    localStorage['foo'] = 'SOME_DATA';
} catch (e) {
    console.log('LIMIT REACHED! Do something else');
}
Internet Explorer did something called "remainingSpace", but that doesn't work in Chrome/Safari:
http://msdn.microsoft.com/en-us/library/cc197016(v=VS.85).aspx
I'd like to add a suggestion.
If it is a Chrome extension, why not make use of Web SQL storage or IndexedDB?
http://html5doctor.com/introducing-web-sql-databases/
http://hacks.mozilla.org/2010/06/comparing-indexeddb-and-webdatabase/
Source: http://caniuse.com/
I am trying to read and parse an Excel file, and some unclear things come into play as usual for me.
Here is what I have:
while (true)
{
    comVariantCell1 = cells.item(row, 1).value().variantType();
    comVariantCell2 = cells.item(row, 2).value().variantType();

    // If an empty cell is found, processing will stop and the user will get
    // an error message in order to solve the inconsistency.
    if (comVariantCell1 != COMVariantType::VT_EMPTY && comVariantCell2 != COMVariantType::VT_EMPTY)
    {
        // Both cells have values; check their types.
        importedLine = conNull();
        progress1.setText(strFmt("Importing row %1", row));

        if (cells.item(row, 1).value().variantType() == COMVariantType::VT_BSTR)
        {
            importedLine += cells.item(row, 1).value().bStr();
        }
        else
        {
            importedLine += cells.item(row, 1).value().double();
        }

        importedLine += cells.item(row, 2).value().double();
        importedLinesCollection += [importedLine]; // conIns(importedLinesCollection, row - 1, importedLine);
        row++;
    }
    else
    {
        info(strFmt("Empty cell found at line %1 - import will not continue and no records were saved.", row));
        break;
    }
}
Excel format:
Item number Transfer Qty
a100 50.5
a101 10
a102 25
This worked well to check if the cell type is string: COMVariantType::VT_BSTR
but what should I use to check for a real or integer value?
I am pretty sure that in this case the quantity will not contain real values, but it could still be useful in the future to be able to tell these two types apart.
I have to mention that even if I have an int value and I use cells.item(row, 1).value().int(), it won't work. I can't see why.
Why do I want to tell the difference? Because if it's forbidden to have real values in the quantity column (at least in my case), I want to check for that and give the user the opportunity to put a correct value in that place, and maybe investigate further why it ended up there.
Take a look at how it is done in \Classes\SysDataExcelCOM\readRow.
It basically uses a switch to test the type. This is really boring!
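For example, a minimal sketch of such a switch (untested; note that Excel typically reports numeric cells as VT_R8, a double, even for whole numbers, which would explain why value().int() doesn't work):

COMVariant value = cells.item(row, 2).value();
;
switch (value.variantType())
{
    case COMVariantType::VT_BSTR:
        info(strFmt("String: %1", value.bStr()));
        break;
    case COMVariantType::VT_R4, COMVariantType::VT_R8:
        info(strFmt("Real: %1", value.double()));
        break;
    case COMVariantType::VT_I2, COMVariantType::VT_I4:
        info(strFmt("Integer: %1", value.int()));
        break;
    case COMVariantType::VT_EMPTY:
        info("Empty cell");
        break;
    default:
        info("Unhandled variant type");
}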
Also take a look at ExcelIO, a class I made some years ago. It reads Excel and returns each row as a container. This is a more high-level approach.
As a last resort you could save the Excel sheet as a tab-separated file, then use TextIo to read the content. This will be at least 10 times faster than going through Excel!
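A rough sketch of that route (the file path and the two-column layout are assumptions):

TextIo io = new TextIo('C:\\temp\\items.txt', 'r');
container line;
;
io.inFieldDelimiter('\t');
while (io.status() == IO_Status::Ok)
{
    line = io.read();
    if (conLen(line) >= 2)
    {
        info(strFmt("Item: %1, qty: %2", conPeek(line, 1), conPeek(line, 2)));
    }
}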
Overall context: I have a db of cross-references among pages in a wiki space, and want an incrementally-growing visualization of links.
I have working code that shows clusters of labels as you mouseover. But when you move away, rather than hiding all the labels, I want to keep certain key labels (e.g. the centers of clusters).
I forked an existing example and got it roughly working.
Info is at http://webseitz.fluxent.com/wiki/WikiGraphBrowser
Near the bottom of that or any other page in that space, in the block that starts with "BackLinks:", you'll find "Click here for WikiGraphBrowser" at the end, which will launch a window with the interface.
An equivalent static-subset example is visible at http://www.wikigraph.net/static/d3/cgmartin/WikiGraphBrowser/;
the code for that example is at https://github.com/BillSeitz/WikiGraphBrowser/blob/master/js/wiki_graph.js
Code that works at removing all labels:
i = j = 0;
if (!bo) { // bo == false - from mouseout
    //labels.select('text.label').remove();
    labels.filter(function(o) {
        return !(o.name in clicked_names);
    })
    .text(function(o) { return ""; });
    j++;
}
Code attempting to leave behind some labels, which does not work:
labels.forEach(function(o) {
    if (!(d.name in clicked_names)) {
        d.text.label.remove();
    }
});
I know I'm just not grokking the d3 model at all....
thx
The problem comes down to your use of in to search for a name in an array. The JavaScript in keyword searches object keys, not object values. For an array, the keys are the index values, so testing (d.name in clicked_names) will always return false.
Try
i = j = 0;
if (!bo) { // bo == false - from mouseout
    //labels.select('text.label').remove();
    labels.filter(function(o) {
        return (clicked_names.indexOf(o.name) < 0);
    })
    .text(function(o) { return ""; });
    j++;
}
The array .indexOf(object) method returns -1 if none of the elements in the array are equal (by triple-equals standards) to the parameter. Alternatively, if you are trying to support IE8 (I'm assuming not, since you're using SVG), you could use a .some(function) test.
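For reference, the same filter written with a .some() test (assuming clicked_names is an array of name strings):

labels.filter(function(o) {
    // keep only labels whose name is not among the clicked ones
    return !clicked_names.some(function(name) { return name === o.name; });
})
.text(function(o) { return ""; });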
By the way, there's a difference between removing a label and just setting its text content to the empty string. Which one to use will depend on whether you want to show the text again later. Either way, just be sure you don't end up with a proliferation of empty labels clogging up your browser.
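In selection terms the two options look like this (a sketch; notClicked stands for the filter function above):

labels.filter(notClicked).remove();  // elements are gone and can't be shown again later
labels.filter(notClicked).text('');  // elements stay, ready to get their text back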
We're looking into refining our user groups in Dynamics AX 2009 into more precise and finely tuned groupings, due to the wide range of variability between specific people within the same department. With this plan, it wouldn't be uncommon for the majority of our users to fall under 5+ user groups.
Part of this would involve expanding the default length of the User Group ID from 10 to 40 (as per Best Practice naming conventions), since 10 characters don't give us enough room to adequately name each group as we would like (again, based on Best Practice naming conventions).
We have found that the main information seems to come from the UserGroupInfo table, but that table isn't present under the Data Dictionary (it's under the System Documentation, so it's unavailable to be changed that way, by my understanding). We've also found the UserGroupName EDT, but that is already set at 40 characters. The form itself doesn't seem to be restricting the length of the field either. We've discussed changing the field directly in SQL, but again, my understanding is that a full synchronization would overwrite that change.
Where can we go to change this particular setting, or is it possible to change?
The size of the user group ID is defined by a system extended data type (here \System Documentation\Types\userGroupId), and you cannot change any of its properties, including the 10-character length.
You should live with that; don't try to fake the system using direct SQL changes. Even if you did, AX would still believe the length is 10.
You could change the SysUserInfo form to show the group name only. The group ID might as well be assigned by a number sequence in your context.
I wrote a job to change the string size via X++, and it works for EDTs, but it can't seem to find "userGroupId". From the general feel I get of AX, I'd be willing to guess that system types just live in a different location, but maybe not. I wonder if this could be tweaked to work:
static void Job9(Args _args)
{
    #AOT
    TreeNode treeNode;
    Struct propertiesExt;
    Map mapNewPropertyValues;

    void setTreeNodePropertyExt(
        Struct _propertiesExt,
        Map _newProperties)
    {
        Counter propertiesCount;
        Array propertyInfoArray;
        Struct propertyInfo;
        str propertyValue;
        int i;
        ;
        _newProperties.insert('IsDefault', '0');
        propertiesCount = _propertiesExt.value('Entries');
        propertyInfoArray = _propertiesExt.value('PropertyInfo');

        for (i = 1; i <= propertiesCount; i++)
        {
            propertyInfo = propertyInfoArray.value(i);

            if (_newProperties.exists(propertyInfo.value('Name')))
            {
                propertyValue = _newProperties.lookup(propertyInfo.value('Name'));
                propertyInfo.value('Value', propertyValue);
            }
        }
    }
    ;
    treeNode = TreeNode::findNode(#ExtendedDataTypesPath);

    // This doesn't seem to be able to find the system type
    //treeNode = treeNode.AOTfindChild('userGroupId');
    treeNode = treeNode.AOTfindChild('AccountCategory');

    propertiesExt = treeNode.AOTgetPropertiesExt();
    mapNewPropertyValues = new Map(Types::String, Types::String);
    mapNewPropertyValues.insert('StringSize', '30');
    setTreeNodePropertyExt(propertiesExt, mapNewPropertyValues);
    treeNode.AOTsetPropertiesExt(propertiesExt);
    treeNode.AOTsave();

    info("Done");
}
I have a table with 3000 rows and 8 columns. I use a QTableView.
To insert items I do:
QStandardItem* vSItem = new QStandardItem();
vSItem->setText("Blabla");
mModel->setItem(row, column, vSItem);
where mModel is a QStandardItemModel.
Everything is fine if I don't have too many rows, but when I try to visualize big data (about 3000 rows), it is extremely slow (20 seconds on Win 7 64-bit, an 8-core machine with 8 GB of RAM!!!).
Is there anything I can do to improve performance?
Thanks in advance.
Good call on the autoresize on contents for your columns or rows.
I have a function that added a column to the table each time a client connected to my server application. As the number of columns in the table got large, the insertion seemed to take longer and longer.
I was doing ui->messageLog->resizeRowsToContents(); each time. I changed this to auto-resize only the row that was being added, ui->messageLog->resizeRowToContents(0);, and the slowness went away.
Do you have an autoresize on contents for your columns or rows? It can be a performance killer sometimes!
Have a look here:
QHeaderView::ResizeToContents
Hope it helps!
I found a solution: the problem was that I assigned the model to the table view already in the constructor, so every time I inserted an item into the model, the table view was informed and probably updated. Now I assign the model to the table view only after I have filled the model with data.
This is not an elegant solution, but it works. Is there maybe a way to temporarily disconnect the model from the table view, or something that tells the table view not to care about changes in the model?
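One way to get exactly that effect (a sketch; fillModel() stands in for your own population code) is to detach the model while filling it and reattach it afterwards:

tableView->setModel(nullptr);  // the view stops receiving the model's change signals
fillModel(mModel);             // bulk inserts no longer trigger view updates
tableView->setModel(mModel);   // a single relayout once everything is in place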
Watch out for setSectionResizeMode(). This had enormous performance implications for me: it causes row and column size recalculations with every modification (i.e. every setData()/setText() call). This wasn't noticeable until I reached 1000+ rows. Consider using resizeSections() instead, which seems to be a one-time adjustment.
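To illustrate the difference (a sketch):

// Recalculates section sizes on every setData()/setText() call:
tableView->horizontalHeader()->setSectionResizeMode(QHeaderView::ResizeToContents);

// One-shot alternative: resize once, after the model has been filled:
tableView->horizontalHeader()->resizeSections(QHeaderView::ResizeToContents);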
For this quantity of data, you'd be better off with a custom model - then you'd have control over when you inform the view of updates, for example. The 'standard' items scale to hundreds, and probably thousands, of rows because modern hardware is fast, but they're explicitly documented as not being intended for datasets of this size.
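A minimal sketch of such a model (the class name and the fixed column count are illustrative); the view is notified exactly once per bulk load:

#include <QAbstractTableModel>
#include <QVector>
#include <QStringList>

class BigTableModel : public QAbstractTableModel
{
public:
    // Replace all rows at once; the view relayouts a single time.
    void setRows(const QVector<QStringList> &rows)
    {
        beginResetModel();
        m_rows = rows;
        endResetModel();
    }

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_rows.size();
    }

    int columnCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : 8; // illustrative fixed column count
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
    {
        if (role != Qt::DisplayRole || !index.isValid())
            return QVariant();
        return m_rows.at(index.row()).value(index.column());
    }

private:
    QVector<QStringList> m_rows;
};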
Also, if all your rows have the same height, setting http://doc.qt.io/qt-5/qtreeview.html#uniformRowHeights-prop to true can boost performance. In my case, a model containing about 50,000 rows was almost unusable with uniformRowHeights set to false (the default). After changing it to true, it worked like a charm.
I am using 80,000 rows and had a similar problem adding huge amounts of items to a table.
My solution was to let it allocate the memory in advance, by telling it how many rows it will need.
I was using a QTableView and model, so:
self.model.setRowCount(80000)
I'm sure you can match this up with your code.
Try this:
QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");

void SELECT_TO_TBLWID(QTableWidget *TBL, QString DbPath, QString SQL)
{
    QSqlDatabase db2 = QSqlDatabase::database();
    db2.setDatabaseName(DbPath);
    if (!db2.open())
    {
        qDebug() << db2.lastError();
        qFatal("Failed to connect.");
    }

    QSqlQuery qry;
    qry.prepare(SQL);
    if (!qry.exec())
        qDebug() << qry.lastError();
    else
    {
        QSqlRecord rec = qry.record();
        TBL->setColumnCount(rec.count());

        // Count the rows first so the widget can allocate them all in one go.
        int RW = 0;
        while (qry.next())
            RW++;
        TBL->setRowCount(RW);

        // Rewind to before the first record, then fill the widget.
        while (qry.previous())
        {
            // do nothing
        }
        for (int r = 0; qry.next(); r++)
        {
            for (int c = 0; c < rec.count(); c++)
            {
                if (r == 0)
                    TBL->setHorizontalHeaderItem(c, new QTableWidgetItem(rec.fieldName(c)));
                TBL->setItem(r, c, new QTableWidgetItem(qry.value(c).toString()));
            }
        }
    }
    db2.close();
}
I had the exact same issue; even with only 100 rows the performance was horrible.
Upon inspection, this issue is not really related to options set on the table view itself (like resizing and such), but rather to the fact that the model informs and updates the view on each insertion (beginInsertRows/endInsertRows).
With that being said, you have two options for maximum performance:
1. Set the model on the view only after you have populated it with data.
2. Set the model at any time, but populate a new list of data and then assign that list to the model.
Whichever option you go with, the performance gain is huge. I went with the second option because I set my model (and proxy model) in the constructor of the widget.
Later on when I want to add data:
// Create a new list of data (temporary)
QList<MyObjects*> NewList;
for (auto it = Result.cbegin(); it != Result.cend(); ++it)
{
    MyObjects* my = new MyObjects();
    // my - set data
    NewList.append(my);
}

// Now simply replace the current data list
Model->setList(NewList);
This assumes you created your own setList() function inside your custom model:
void Model::setList(QList<MyObjects*> clist)
{
    beginResetModel();
    list = clist;
    endResetModel();
}
And voila... you load thousands of records with high performance. Notice beginResetModel() and endResetModel() are the functions that notify the table view.
Enjoy.
We'll soon be embarking on the development of a new mobile application. This particular app will be used for heavy searching of text-based fields. Any suggestions from the group at large for what sort of database engine is best suited to allowing these types of searches on a mobile platform?
Specifics include Windows Mobile 6 and we'll be using the .NET CF. Also, some of the text-based fields will be anywhere between 35 and 500 characters. The device will operate in two different modes, batch and WiFi. For WiFi we can of course just submit requests to a full-blown DB engine and fetch results back. This question centres around the "batch" version, which will house a database loaded with information on the device's flash/removable storage card.
At any rate, I know SQL CE has some basic indexing, but you don't get into the really fancy "full text"-style indexes until the full-blown version, which of course isn't available on a mobile platform.
An example of what the data would look like:
"apron carpenter adjustable leather container pocket waist hardware belt" etc. etc.
I haven't gotten into evaluating any other specific options yet, as I figured I'd first leverage the experience of this group to point me down some specific avenues.
Any suggestions/tips?
Just recently I had the same issue. Here is what I did:
I created a class to hold just an id and the text for each object (in my case I called it a sku (item number) and a description). This creates a smaller object that uses less memory since it is only used for searching. I'll still grab the full-blown objects from the database after I find matches.
public class SmallItem
{
    private int _sku;
    public int Sku
    {
        get { return _sku; }
        set { _sku = value; }
    }

    // Max description size + 1 for the null terminator.
    private char[] _description = new char[36];
    public char[] Description
    {
        get { return _description; }
        set { _description = value; }
    }

    public SmallItem()
    {
    }
}
After this class is created, you can then create an array (I actually used a List in my case) of these objects and use it for searching throughout your application. The initialization of this list takes a bit of time, but you only need to worry about it at start-up. Basically, just run a query on your database and grab the data you need to create this list.
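For illustration, here's a loading sketch using SQL CE (the connection string, table, and column names are made up; note the ToUpper(), since the Contains function below upper-cases the search terms before comparing):

// Requires a reference to System.Data.SqlServerCe.
List<SmallItem> itemList = new List<SmallItem>();
using (SqlCeConnection conn = new SqlCeConnection(@"Data Source=\Storage Card\items.sdf"))
using (SqlCeCommand cmd = new SqlCeCommand("SELECT Sku, Description FROM Items", conn))
{
    conn.Open();
    using (SqlCeDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            SmallItem item = new SmallItem();
            item.Sku = reader.GetInt32(0);

            // Copy into the fixed-size buffer, leaving room for the null terminator.
            string desc = reader.GetString(1).ToUpper();
            desc.CopyTo(0, item.Description, 0, Math.Min(desc.Length, 35));

            itemList.Add(item);
        }
    }
}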
Once you have the list, you can quickly go through it searching for any words you want. Since it's a contains match, it also finds words within words (e.g. drill would return drill, drillbit, drills, etc.). To do this, we wrote a home-grown Contains function in unsafe C#. It takes in a string array of words, so you can search for more than one word; we use it for "AND" searches, where the description must contain all the words passed in ("OR" is not currently supported in this example). As it searches through the list, it builds a list of IDs, which are then passed back to the calling function. Once you have a list of IDs, you can easily run a fast query in your database to return the full-blown objects based on a fast indexed ID number. I should mention that we also limit the maximum number of results returned. This could be taken out; it's just handy if someone types in something like "e" as their search term, which would return a lot of results.
Here's the example of custom Contains function:
public static int[] Contains(string[] descriptionTerms, int maxResults, List<SmallItem> itemList)
{
    // Don't allow more than the maximum allowable results constant.
    int[] matchingSkus = new int[maxResults];

    // Indexes and counters.
    int matchNumber = 0;
    int currentWord = 0;
    int totalWords = descriptionTerms.Length - 1; // - 1 because it will be used with 0 based array indexes
    bool matchedWord;

    try
    {
        /* Character array of character arrays. Each array is a word we want to match.
         * We need the + 1 because totalWords had - 1 (We are setting a size/length here,
         * so it is not 0 based... we used - 1 on totalWords because it is used for 0
         * based index referencing.)
         * */
        char[][] allWordsToMatch = new char[totalWords + 1][];

        // Character array to hold the current word to match.
        char[] wordToMatch = new char[36]; // Max allowable word size + null terminator... 36 to be consistent with max description size.

        // Loop through the original string array of words to match and create the character arrays.
        for (currentWord = 0; currentWord <= totalWords; currentWord++)
        {
            char[] desc = new char[descriptionTerms[currentWord].Length + 1];
            Array.Copy(descriptionTerms[currentWord].ToUpper().ToCharArray(), desc, descriptionTerms[currentWord].Length);
            allWordsToMatch[currentWord] = desc;
        }

        // Offsets for description and filter (word to match) pointers.
        int descriptionOffset = 0, filterOffset = 0;

        // Loop through the list of items trying to find matching words.
        foreach (SmallItem i in itemList)
        {
            // If we have reached our maximum allowable matches, stop searching and just return the results.
            if (matchNumber == maxResults)
                break;

            // Loop through the "words to match" filter list.
            for (currentWord = 0; currentWord <= totalWords; currentWord++)
            {
                // Reset our match flag and current word to match.
                matchedWord = false;
                wordToMatch = allWordsToMatch[currentWord];

                // Delving into unsafe code for SCREAMING performance ;)
                unsafe
                {
                    // Pointer to the description of the current item on the list (starting at first char).
                    fixed (char* pdesc = &i.Description[0])
                    {
                        // Pointer to the current word we are trying to match (starting at first char).
                        fixed (char* pfilter = &wordToMatch[0])
                        {
                            // Reset the description offset.
                            descriptionOffset = 0;

                            // Continue our search on the current word until we hit a null terminator for the char array.
                            while (*(pdesc + descriptionOffset) != '\0')
                            {
                                // We've matched the first character of the word we're trying to match.
                                if (*(pdesc + descriptionOffset) == *pfilter)
                                {
                                    // Reset the filter offset.
                                    filterOffset = 0;

                                    /* Keep moving the offsets together while we have consecutive character matches. Once we hit a non-match
                                     * or a null terminator, we need to jump out of this loop.
                                     * */
                                    while (*(pfilter + filterOffset) != '\0' && *(pfilter + filterOffset) == *(pdesc + descriptionOffset))
                                    {
                                        // Increase the offsets together to the next character.
                                        ++filterOffset;
                                        ++descriptionOffset;
                                    }

                                    // We hit matches all the way to the null terminator. The entire word was a match.
                                    if (*(pfilter + filterOffset) == '\0')
                                    {
                                        // If our current word matched is the last word on the match list, we have matched all words.
                                        if (currentWord == totalWords)
                                        {
                                            // Add the sku as a match.
                                            matchingSkus[matchNumber] = i.Sku;
                                            matchNumber++;

                                            /* Break out of this item description. We have matched all needed words and can move to
                                             * the next item.
                                             * */
                                            break;
                                        }

                                        /* We've matched a word, but still have more words left in our list of words to match.
                                         * Set our match flag to true, which means we continue to search for the
                                         * next word on the list.
                                         * */
                                        matchedWord = true;
                                    }
                                }

                                // No match on the current character. Move to the next one.
                                descriptionOffset++;
                            }

                            /* The current word had no match, so no sense in looking for the rest of the words. Break to the
                             * next item description.
                             * */
                            if (!matchedWord)
                                break;
                        }
                    }
                }
            }
        }

        // We have our list of matching skus. Resize the array and pass it back.
        Array.Resize(ref matchingSkus, matchNumber);
        return matchingSkus;
    }
    catch (Exception ex)
    {
        // Handle the exception, then return an empty result set.
        return new int[0];
    }
}
Once you have the list of matching skus, you can iterate through the array and build a query command that only returns the matching skus.
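For instance, something along these lines (a sketch; the table and column names are made up, and searchTerms stands for the user's word list; building the SQL string is safe here because the skus are ints we produced ourselves):

int[] skus = Contains(searchTerms, 100, itemList);

StringBuilder sql = new StringBuilder("SELECT * FROM Items WHERE Sku IN (");
for (int i = 0; i < skus.Length; i++)
{
    if (i > 0)
        sql.Append(",");
    sql.Append(skus[i]);
}
sql.Append(")");
// Execute sql.ToString() against the database to get the full-blown items back.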
For an idea of performance, here's what we have found (doing the following steps):
Search ~171,000 items
Create list of all matching items
Query the database, returning only the matching items
Build full-blown items (similar to SmallItem class, but a lot more fields)
Populate a datagrid with the full-blown item objects.
On our mobile units, the entire process takes 2-4 seconds (takes 2 if we hit our match limit before we have searched all items... takes 4 seconds if we have to scan every item).
I've also tried doing this without unsafe code, using String.IndexOf (and I tried String.Contains, which had the same performance as IndexOf, as it should). That way was much slower: about 25 seconds.
I've also tried using a StreamReader and a file containing lines of [Sku Number]|[Description]. The code was similar to the unmanaged code example. This way took about 15 seconds for an entire scan. Not too bad for speed, but not great. The file and StreamReader method has one advantage over the way I showed you though. The file can be created ahead of time. The way I showed you requires the memory and the initial time to load the List when the application starts up. For our 171,000 items, this takes about 2 minutes. If you can afford to wait for that initial load each time the app starts up (which can be done on a separate thread of course), then searching this way is the fastest way (that I've found at least).
Hope that helps.
PS - Thanks to Dolch for helping with some of the unmanaged code.
You could try Lucene.Net. I'm not sure how well it's suited to mobile devices, but it is billed as a "high-performance, full-featured text search engine library".
http://incubator.apache.org/lucene.net/
http://lucene.apache.org/java/docs/