Vaadin 14 Lazy Loading Fetch iterations - Not what we expect - grid

We are attempting to use the CallbackDataProvider class to perform lazy loading with a Grid component.
Our data source is a JPA implementation with pagination.
With the page size set to 20 and a query that returns 200 rows, the callback seems to perform only two fetches: the first for 20 rows, the second for the remaining 180 rows.
This is not what we expected; for 200 rows we expected 10 fetches of 20 rows each.
Is our expectation incorrect here?
Under this paradigm, if there are 1000 or 2000 rows in the result set, I don't see how lazy loading is of any benefit, since fetching 980 rows on the second fetch defeats the purpose of lazy loading.
Has anyone had a similar experience, or is there something we are missing?

The actual buffer size of the loaded data is determined by the component's web client part; pageSize is only the initial parameter. The default page size is 50, which under normal circumstances leads the Grid to load 100 items at a time. If the web client determines that the page size is too small for its visual size, it will request a larger buffer. In practice, page sizes as small as 20 do not work well.
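Vaadin's data provider API is Java, but the round-trip arithmetic is easy to illustrate. Here is a minimal, framework-agnostic Python sketch (not Vaadin's actual API) of a callback provider answering offset/limit requests; the point is that the client decides the range of each request, so pageSize only seeds the first one:

DATA = [f'row {i}' for i in range(200)]  # stand-in for a 200-row JPA result set

def fetch(offset, limit):
    # Stand-in for a paginated JPA query: OFFSET :offset LIMIT :limit.
    return DATA[offset:offset + limit]

# Round trip 1: the client asks for the initial page, seeded by pageSize = 20.
first = fetch(0, 20)

# Round trip 2: the client has measured its viewport, decided the buffer is
# too small, and requests a much larger range - here the remaining 180 rows.
second = fetch(20, 180)

print(len(first), len(second))  # 20 180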

Related

Implement a recurring function in MapForce

I'm using Altova MapForce to auto-generate XSLT for transforming messages. It's a friendly tool with many built-in functions, but I have a problem I don't know how to solve. In a typical programming language it would be easy, but I don't know how to implement it using MapForce's functions. The problem is that in the initial message a Datagroup may occur 0 to 99999 times, while in the final message it may occur only 0 to 99 times. When the count of Datagroups in the initial message is greater than 99, the remainder must be mapped to a second Datagroup, and so on, so that all occurrences of the Datagroup in the first message are mapped into the second message in groups of 99. So we must break the iterations of the Datagroup of the first message into groups of 99. The first-items function gives me the first 99 items, but how can I then check how many groups of 99 remain without chaining the skip-first-items function over a thousand times (99999/99 ≈ 1010)?
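MapForce specifics aside, the underlying computation is plain chunking; here is a minimal Python sketch of splitting the occurrences into groups of at most 99 (the function and variable names are illustrative):

GROUP_SIZE = 99

def chunk(occurrences, size=GROUP_SIZE):
    # Split the Datagroup occurrences into groups of at most `size` items.
    return [occurrences[i:i + size] for i in range(0, len(occurrences), size)]

groups = chunk(list(range(250)))              # 250 occurrences as an example
print(len(groups), [len(g) for g in groups])  # 3 [99, 99, 52]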

Is there any limit to the number of items in a QListWidget?

I am writing a program using PyQt5.
Is there any limit to the number of items that can be added to a QListWidget?
If the view is populated on demand (with only a relatively small number of items being shown at once), there's no effective limit (as demonstrated by the Fetch More Example in the Qt docs). However, the number of items that can be displayed at the same time does have a fixed limit.
This limit is determined by the size of a signed int (which is in turn determined by the compiler used to build the Qt libraries). For most systems, the size will be four bytes (though it could be two bytes on some embedded systems), which equates to 2,147,483,647 items.
The size cannot be larger than this, because all models (including the internal one used by QListWidget) must be subclasses of QAbstractItemModel, which is required to return integer values from methods such as rowCount. If larger values are returned, an OverflowError will occur:
import ctypes
from PyQt5 import QtCore, QtWidgets

# Largest value a signed C int can hold on this platform.
MAXINT = 2 ** (8 * ctypes.sizeof(ctypes.c_int())) // 2 - 1

class Model(QtCore.QAbstractListModel):
    def data(self, index, role=QtCore.Qt.DisplayRole):
        if index.isValid() and role == QtCore.Qt.DisplayRole:
            return f'Item ({index.row()})'

    def rowCount(self, parent=None):
        # Deliberately one more than a signed int can hold.
        return MAXINT + 1

app = QtWidgets.QApplication(['Test'])
view = QtWidgets.QListView()
model = Model()
print('row-count:', model.rowCount())
view.setModel(model)
view.show()
app.exec()
Output:
row-count: 2147483648
OverflowError: invalid result from Model.rowCount(), value must be in the range -2147483648 to 2147483647
However, even if you restrict the number of items to less than this fixed limit, there's no guarantee that the view will be able to display them. During testing, I found that 10 thousand items would load almost instantly, 100 thousand took about 1-2 seconds, and 1 million about 20 seconds. But displaying 10 million items at once took over three minutes to load, and attempting to load more than about 500 million items resulted in an immediate segmentation fault.
For a QListWidget, these timings would be much worse, since creating each QListWidgetItem adds significant overhead (and the same goes for QStandardItemModel). So long before you hit the fixed limit, performance considerations will demand some kind of Fetch More approach.
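For reference, here is a minimal sketch of that fetch-more pattern with a plain QAbstractListModel (the batch size and total are illustrative). The view calls canFetchMore/fetchMore as the user scrolls, so rows are exposed in small batches instead of all at once:

from PyQt5 import QtCore, QtWidgets

TOTAL = 10_000_000  # size of the underlying data set (illustrative)
BATCH = 100         # rows exposed per fetch

class LazyModel(QtCore.QAbstractListModel):
    def __init__(self, parent=None):
        super().__init__(parent)
        self._loaded = 0

    def rowCount(self, parent=QtCore.QModelIndex()):
        return self._loaded

    def data(self, index, role=QtCore.Qt.DisplayRole):
        if index.isValid() and role == QtCore.Qt.DisplayRole:
            return f'Item ({index.row()})'

    def canFetchMore(self, parent):
        return self._loaded < TOTAL

    def fetchMore(self, parent):
        # Expose the next batch of rows to the view.
        batch = min(BATCH, TOTAL - self._loaded)
        self.beginInsertRows(QtCore.QModelIndex(), self._loaded,
                             self._loaded + batch - 1)
        self._loaded += batch
        self.endInsertRows()

app = QtWidgets.QApplication(['Test'])
view = QtWidgets.QListView()
view.setModel(LazyModel())
view.show()
app.exec()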

What exactly is Limit in DynamoDB?

From AWS Docs:
A single Query operation can retrieve a maximum of 1 MB of data. This limit applies before any FilterExpression or ProjectionExpression is applied to the results. If LastEvaluatedKey is present in the response and is non-null, you must paginate the result set.
I have been working with DynamoDB for some time now; when I increase the limit of a query, it always gives me more records. So what's the closest meaning of Limit = 2? Returning 2 items (or at most 1 MB, which we know for a fact), right? So would Limit = 1000 return 1000 items, or 1000 MB of data? Or 1000 records with no effect on data size? Or something else?
The limit parameter only affects the number of items that are returned.
Limit = 2 means at most 2 items will be returned. The upper limit for the limit parameter is 1000: one API call can't return more than 1000 items.
Depending on the item size, you may not get all the records you specify with the limit parameter, because at most 1 MB of data is read from the table.
That means if all items in your table are 400 KB each (the maximum item size) and you set the limit parameter to 5, you will always get at most 2 items from the table, because of the 1 MB limit.
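To see the interplay concretely, here is a hedged boto3 sketch (table and key names are illustrative) that pages through a query with Limit and LastEvaluatedKey; each call evaluates at most 100 items and never reads more than 1 MB:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('orders')  # illustrative table name

items, start_key = [], None
while True:
    kwargs = {
        'KeyConditionExpression': Key('customer_id').eq('c-42'),  # illustrative keys
        'Limit': 100,  # at most 100 items evaluated per request
    }
    if start_key:
        kwargs['ExclusiveStartKey'] = start_key  # resume where the last page stopped
    page = table.query(**kwargs)
    items.extend(page['Items'])
    start_key = page.get('LastEvaluatedKey')
    if not start_key:  # no token means the result set is exhausted
        break

print(len(items))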
In a query, the Limit is the number of items that will be returned by that call (i.e. available in your SDK response).
So if you make a query that would normally return 15 items but set a limit of 2, you get the first 2 based on their sort key (and if there is no sort key, the first two that come back; I believe these are the oldest items, but don't quote me on that).
The 1 MB limit is a hard cap on the total size of the data read from the table by a single query call. So if you have 100 items and together they exceed 1 MB, only the first 1 MB worth of (whole) entries will be returned, along with a pagination token (LastEvaluatedKey) that can be passed to the next query to resume where the previous one ended.
It's very important to realise how Limit interacts with this pagination: Limit caps the number of items evaluated per request, and the pagination token marks where that limited read stopped.
So if your query would match 15 items and you set a limit of 5, the first call returns the first five items, and resuming with the returned token gives you items 6-10, and so on.
In general, there is little reason to use Limit; instead, your partition key/sort key combination should be set up along your access patterns so you only retrieve the items you actually need on any given call. Sort key conditions such as >, <, =, between and begins_with are a better way to limit the number of results than Limit. The only major use case I usually find for Limit is needing just the latest item out of potentially many after a specific date. But even then, it's often simpler to run the whole query and take the first item yourself in your code (index 0), so you don't accidentally lose items from the limit/query combination.
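For that latest-item case, the usual approach is a descending query with Limit = 1; a minimal boto3 sketch (table and attribute names are illustrative):

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('events')  # illustrative table name

# Newest event for one device after a given date: sort descending, stop at one.
resp = table.query(
    KeyConditionExpression=Key('device_id').eq('d-7') & Key('ts').gt('2024-01-01'),
    ScanIndexForward=False,  # newest first
    Limit=1,
)
latest = resp['Items'][0] if resp['Items'] else None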

Datasource query limit is different from page size

In AppMaker I have a calculated datasource and I've set its page size to 10.
In the function I call to return the records (queryRecords), the limit parameter is set to 11 (I don't change it on the front end).
Why?
That's a good catch. App Maker sets the limit to page size + 1 on all queries because it needs to look ahead to see whether there are more pages (this lets it fill in the "lastPage" property: if the look-ahead finds a record, there is another page). But for calculated datasources this is pretty confusing; I'll file a bug to look into it. At the very least it needs some clear documentation.
I think if you do return 11 records, the client should only show 10 and fill in the last-page property appropriately.
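The page size + 1 trick is a common look-ahead pattern; here is a minimal Python sketch of the idea (not App Maker's actual internals):

PAGE_SIZE = 10

def fetch_page(records, offset, page_size=PAGE_SIZE):
    # Ask the backend for one extra record to learn whether another page exists.
    window = records[offset:offset + page_size + 1]
    has_more = len(window) > page_size
    return window[:page_size], has_more  # show page_size records, keep the hint

page, has_more = fetch_page(list(range(25)), 0)
print(len(page), has_more)  # 10 True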

Paginating chronologically prioritized Firebase children

tl;dr Performing basic pagination in Firebase via startAt, endAt and limit is terribly complicated; there must be an easier way.
I'm constructing an administration interface for a large number of user submissions. My initial (and current) idea is to simply fetch everything and perform pagination on the client. There is, however, a noticeable delay when fetching 2000+ records (each containing 5-6 small number/string fields), which I attribute to a data payload of over 1.5 MB.
Currently all entries are added via push, but I'm a bit lost as to how to paginate through the huge list.
To fetch the first page of data I'm using endAt with a limit of 5:
ref.endAt().limit(5).on('child_added', function(snapshot) {
  console.log(snapshot.name(), snapshot.val().name)
})
which results in the following:
-IlNo79stfiYZ61fFkx3 #46 John
-IlNo7AmMk0iXp98oKh5 #47 Robert
-IlNo7BeDXbEe7rB6IQ3 #48 Andrew
-IlNo7CX-WzM0caCS0Xp #49 Frank
-IlNo7DNA0SzEe8Sua16 #50 Jimmmy
Firstly, to figure out how many pages there are, I keep a separate counter that is updated whenever someone adds or removes a record.
Secondly, since I'm using push, I have no way of navigating to a specific page: I don't know the name of the last record of a given page, so a numbered-page interface is not currently possible.
To keep it simpler I decided on just having next/previous buttons. This, however, presents a new problem: if I use the name of the first record in the previous result set, I can paginate to the next page using the following:
ref.endAt(null, '-IlNo79stfiYZ61fFkx3').limit(5).on('child_added', function(snapshot) {
  console.log(snapshot.name(), snapshot.val().name)
})
The result of this operation is as follows:
-IlNo76KDsN53rB1xb-K #42 William
-IlNo77CtgQvjuonF2nH #43 Christian
-IlNo7857XWfMipCa8bv #44 Jim
-IlNo78z11Bkj-XJjbg_ #45 Richard
-IlNo79stfiYZ61fFkx3 #46 John
Now I have the next page, except it's shifted one position, meaning I have to adjust my limit and ignore the last record.
To move back one page, I'd have to keep a separate list on the client of every record received so far and figure out what name to pass to startAt.
Is there an easier way of doing this, or should I just go back to fetching everything?
We're working on adding an "offset()" query to allow for easy pagination. We'll also be adding a special endpoint to allow you to read the number of children at a location without actually loading them from the server.
Both of these are going to take a bit, though. In the meantime, the method you describe (or doing it all on the client) is probably your best bet.
If you have a data structure that is append-only, you could potentially also do pagination when you write the data. For example: put the first 50 in /page1, put the second 50 in /page2, etc.
Another way to accomplish this is with two trees:
Tree of ids:
{
  id1: id1,
  id2: id2,
  id3: id3
}
Tree of data:
{
  id1: ...,
  id2: ...,
  id3: ...
}
You can then load the entire tree of ids (or a big chunk of it) to the client, and do fancy pagination with that tree of ids.
Here's a hack for paginating in each direction.
// get 1-5
ref.startAt().limit(5)
// get 6-10 from 5
ref.startAt(null, '5th-firebase-id' + 1).limit(5)
// get 11-15 from 10
ref.startAt(null, '10th-firebase-id' + 1).limit(5)
Basically it's a hack for a startAtExclusive(): you can append anything to the end of the id, and the result sorts just after the original.
I also figured out an endAtExclusive() for going backwards, by trimming the last character:
// get 6-10 from 11
ref.endAt(null, '11th-firebase-id'.slice(0, -1)).limit(5)...
// get 1-5 from 6
ref.endAt(null, '6th-firebase-id'.slice(0, -1)).limit(5)...
I'll play with this some more, but it seems to work with push ids. Replace limit with limitToFirst or limitToLast if you're using the newer Firebase queries.
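For comparison, with the newer SDKs the same keyset pagination works without string hacks, because start_at is inclusive and you can simply fetch one extra record and drop the anchor. A minimal sketch with the firebase_admin Python SDK (credentials path, database URL and node path are illustrative):

import firebase_admin
from firebase_admin import credentials, db

# Illustrative credentials and database URL.
firebase_admin.initialize_app(
    credentials.Certificate('service-account.json'),
    {'databaseURL': 'https://example-project.firebaseio.com'},
)

PAGE = 5

def next_page(last_key=None):
    # Keyset pagination: order by key and, after the first page, start at the
    # last key seen, fetching one extra record so the anchor can be dropped.
    query = db.reference('submissions').order_by_key()
    if last_key is None:
        return query.limit_to_first(PAGE).get()
    items = query.start_at(last_key).limit_to_first(PAGE + 1).get()
    items.pop(last_key, None)  # start_at is inclusive; drop the anchor record
    return items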
