I am loading a grid from a raw SQL query and am trying to generate the pagination correctly. I am manually appending the LIMIT and OFFSET to the end of the SQL, and this works fine. What I need now is to return the correct total from a query; I am just not sure how to access this from the grid.
If I use
total: data => data.total
It uses the number of rows returned for the limit I have set, so the grid ends up with exactly one page.
What I would like is for the total field to be something I have defined in a query somewhere, returned with the data from the server. Is this possible?
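For illustration, here is roughly the shape I have in mind (a sketch, assuming a Kendo UI-style dataSource, since that is where a function-valued total comes from; the endpoint and field names are placeholders):

// client side: map the grid's total to a field the server computes itself
var dataSource = new kendo.data.DataSource({
  transport: {
    read: { url: "/api/rows", dataType: "json" } // hypothetical endpoint
  },
  serverPaging: true, // the server receives page/pageSize and adds LIMIT/OFFSET
  pageSize: 20,
  schema: {
    data: function (response) { return response.rows; },  // the page of rows from the SQL query
    total: function (response) { return response.total; } // e.g. a separate SELECT COUNT(*) result
  }
});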
I am trying to get a full list of every group in Azure AD. I am currently able to get 999 records with the following uri:
https://graph.microsoft.com/v1.0/groups?$top=999
According to the documentation from Microsoft, there are only a couple of OData query parameters available, none of which appear to allow navigating to the next page. It also states that the maximum page size is 999. I have tried using the $skip parameter to skip a certain number of records, but it is not supported:
{"error":{"code":"Request_BadRequest","message":"'$skip' is not supported by the service.",...
Is there any way to get a full list of all AAD groups? We have several thousand that I would need to get.
Some queries against Microsoft Graph return multiple pages of data, either due to server-side paging or due to the use of the $top query parameter to explicitly limit the page size of a request. When more than one request is required to retrieve all the results, Microsoft Graph returns an @odata.nextLink property in the response that contains a URL to the next page of results.
For example, the following URL requests all the groups in an organization with a page size of 5, specified with the $top query parameter:
https://graph.microsoft.com/v1.0/groups?$top=5
If there are more results, Microsoft Graph returns an @odata.nextLink property along with the first page of results. You can retrieve the next page by sending a GET request to the URL value of @odata.nextLink.
ref doc - https://learn.microsoft.com/en-us/graph/paging
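A minimal sketch of following that link from JavaScript (assuming you already have an access token; fetch and the token variable are placeholders for your HTTP stack and auth):

// collect all groups by following @odata.nextLink until it disappears
async function getAllGroups(accessToken) {
  const groups = [];
  let url = "https://graph.microsoft.com/v1.0/groups?$top=999";
  while (url) {
    const response = await fetch(url, {
      headers: { Authorization: "Bearer " + accessToken }
    });
    const page = await response.json();
    groups.push(...page.value);     // each page's results live in the "value" array
    url = page["@odata.nextLink"];  // undefined on the last page, which ends the loop
  }
  return groups;
}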
With $top, you can customize the result size in the range of 1 to 1000. Given your question, I guess that 1000 is exclusive, so the range effectively goes from 1 to 999 (inclusive). Read more about the query parameter $top here; I got the range information from List messages.
The response you get from List groups does not seem to contain the @odata.nextLink you would normally expect in such a case, so GET https://graph.microsoft.com/v1.0/groups does not support pagination. That would also explain why you get an error when you try to use $skip. You can read more about $skip here.
In order to get the full list of all groups, I would stop using the query parameter $top.
I have created a datasource which contains only the rows that fulfil one condition. I want to create some filters on this table... but it is not working.
This is the datasource:
For example, I have a text area which filters by the field "Title". Only row 5 should appear, but row 6 is still there...
This is the event handler code:
Important: in the beginning I used these filters and they worked properly. They stopped working when I created the filter in the datasource (the one in the first image).
The filter that you are setting via the binding is getting lost when you perform your query script. Essentially, you are creating a query via the binding, then your script is creating a new query that doesn't have the filters you set previously.
Server Script - queryRecords(query: Query)
You'll notice that your query script has access to a parameter query that you can use instead of calling newQuery(). This will have the filter you set via your binding. Additionally, query.run() returns a list of records, so there's no need to iterate over them. Here is all the code you need in your query script:
query.filters.Status._in = ["Published"];
return query.run();
I have an infinite scroll page where I'm not using Meteor templates to draw the items. The reason for that belongs in a whole other thread. I'm trying to figure out how to paginate the data without fetching all the items at once. I have an idea about using a limit on the cursor, but can't find any real samples online of the proper way to do this.
Should the server call return the cursor itself or just the find with limited data set? If the server doesn't return the cursor itself, won't I lose position when I try to fetch the next set of results?
Also, I want to make sure to retrieve data from the same cursor. Like if there are currently 100 items and I fetch 20, I expect the next 4 fetches to get 20-40, 40-60, 60-80, and 80-100. If in the interim some items got inserted or deleted, I don't want it to mess up the fetches. I am handling reactivity separately and letting users decide when to update the items (which should reset the cursor).
Help/advice appreciated!
What you would usually do is this:
var cursor = collection.find({}, { limit: 100 + 20 * page });
The first {} is obviously the selector!
Docs:
http://docs.meteor.com/#/basic/Mongo-Collection-find
You don't have to worry about returning only the values 100-120 and then 120-140 etc., since Meteor's DDP does that for you!
If you are using Meteor's Blaze, or you just want the reactivity, you should probably store the page variable in the Session or create a dependency:
https://manual.meteor.com/#deps-asimpleexample
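A sketch of how the pieces fit together (assuming an Items collection and a publication named "items"; the names are placeholders):

// server: publish a growing window of items, 20 more per page
Meteor.publish("items", function (page) {
  return Items.find({}, { limit: 100 + 20 * page });
});

// client: resubscribing with a bigger page makes DDP send only the new documents
Session.setDefault("page", 0);
Tracker.autorun(function () {
  Meteor.subscribe("items", Session.get("page"));
});

// call this when the user scrolls near the bottom
function loadMore() {
  Session.set("page", Session.get("page") + 1);
}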
Hi, I have a SQL database server running on my desktop. I want to create an ASP.NET application that detects when new data has been inserted into the database. Is there a command in Visual Studio to detect right away when there's new data?
Add a timestamp (rowversion) column to the table. Its value stays identical until a change is made to any column in that row. If you combine this with the row count, you can be certain whether anything has changed in your database. You would need to cache the current timestamps and row count and compare them with the results of a query; you can then find out if there was a change.
So, in answer to:

"Is there a command in Visual Studio to detect when there's new data right away?"

Yes there is, although it's not a command but the timestamp datatype (not to be confused with anything to do with time).
Perhaps you need to provide more details about your scenario, since constantly querying the database might not be the best way forward.
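To make the idea concrete, a minimal sketch in JavaScript (the table name, the RowVer column, and db.queryScalar are placeholders for your own schema and data access layer):

// cache the last seen version and row count, and compare on each poll
let lastMaxVersion = null;
let lastRowCount = null;

async function hasTableChanged(db) {
  // the rowversion column changes whenever any column in its row changes;
  // the row count additionally catches plain inserts and deletes
  const maxVersion = await db.queryScalar("SELECT MAX(RowVer) FROM MyTable");
  const rowCount = await db.queryScalar("SELECT COUNT(*) FROM MyTable");
  const changed = maxVersion !== lastMaxVersion || rowCount !== lastRowCount;
  lastMaxVersion = maxVersion;
  lastRowCount = rowCount;
  return changed;
}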
You can get a row count of your dataset. In VB:

Dim i As Integer
i = dataset.Tables("table").Rows.Count

On the SQL side, return a count of the table, and have the ASP.NET site fetch that count and alert you when it changes.
It may be heavier duty than you are looking for, but SQL Notification Services will do what you want. Essentially you execute a query and tell notification services you want to be notified whenever re-running that query would produce different results.
If you are using caching, you can make it dependent on SQL.

Or you can send an email from a SQL trigger, so whenever the trigger fires you receive an email.

Otherwise you will have to check your DB again and again for changes.

If you can provide more details about the exact situation, we can suggest a more specific solution.
You can create a webservice and call it using JavaScript.

Here you can find a sample of how to call a webservice using JavaScript:
function CallWebservice() {
  myWebService.isPrimeNumberWebService.callService(isPrimeNumberResult, "IsPrime", testValue.value);
  setTimeout(CallWebservice, 100); // set the interval according to your requirement
}
For a timer in JavaScript:
http://dotnetacademy.blogspot.com/2010/09/timer-in-javascript.html
For a webservice in JavaScript:
http://www.webreference.com/js/tips/020715.html
How to call a webservice in JavaScript for Firefox 3.0
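For what it's worth, the same polling idea as a small modern sketch (assuming you expose an HTTP endpoint from the ASP.NET app; /api/hasNewData and its response shape are hypothetical):

// poll the server every 5 seconds and react when new data is reported
function pollForChanges() {
  fetch("/api/hasNewData") // hypothetical endpoint returning { changed: true }
    .then(function (response) { return response.json(); })
    .then(function (result) {
      if (result.changed) {
        console.log("New data detected");
      }
    })
    .finally(function () { setTimeout(pollForChanges, 5000); });
}

pollForChanges();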
I have a very large (millions of rows) SQL table which represents name-value pairs (one column for the name of a property, the other for its value). On my ASP.NET web application I have to populate a control with the distinct values available in the name column. This set of values is usually not bigger than 100; most likely around 20. Running the query
SELECT DISTINCT name FROM nameValueTable
can take a significant time on this large table (even with the proper indexing etc.). I especially don't want to pay this penalty every time I load this web control.
So caching this set of names should be the right answer. My question is, how do I promptly update the cache when a new name appears in the table? I looked into the SQL Server 2005 Query Notifications feature, but the table gets updated frequently, and only very seldom with an actually new distinct name. The notifications would flow in all the time, and the web server would probably waste more time handling them than it saves.
I would like to find a way to balance the time used to query the data, with the delay until the name set is updated.
Any ideas on how to efficiently manage this cache?
A little normalization might help. Break the property names out into a new table, and FK back to the original table using an int ID. You can query the new table to get the complete list, which will be really fast.
Figuring out your pattern of usage will help you come up with the right balance.
How often are new values added? Are newly added values always unique? Is the table mostly updates? Do deletes occur?
One approach may be to have a SQL Server insert trigger that checks the cache table to see if the new key is there and, if it is not, adds it.
Add a unique, increasing sequence column MySeq to your table. You may want to try clustering on MySeq instead of your current primary key, so that the DB can build a small result set and then sort it.
SELECT DISTINCT name FROM nameValueTable WHERE MySeq >= ?;
Set ? to the highest MySeq value your cache had seen at its last update.
You will always have a lag between your cache and the DB, so if this is a problem you need to rethink the flow of the application. You could try making all requests flow through your cache/application if you manage the data:
requests --> cache --> db
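A sketch of the incremental refresh step (db.query, the table, and the column names are placeholders for your own data access and schema):

// keep a cached set of names plus the highest sequence value seen so far
const nameCache = new Set();
let lastSeq = 0;

async function refreshNameCache(db) {
  // only scan rows added since the previous refresh
  const rows = await db.query(
    "SELECT name, MySeq FROM nameValueTable WHERE MySeq >= ?", [lastSeq]
  );
  for (const row of rows) {
    nameCache.add(row.name);                    // Set ignores names already cached
    lastSeq = Math.max(lastSeq, row.MySeq + 1); // next refresh starts past this row
  }
}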
If you're not allowed to change the actual structure of this huge table (for example, due to huge numbers of reports relying on it), you could create a holding table of these 20 values and query against that. Then, on the huge table, have a trigger that fires on an INSERT or UPDATE, checks to see if the new NAME value is in the holding table, and if not, adds it.
I don't know the specifics of .NET, but I would pass all the update requests through the cache. Are all the update requests done by your ASP.NET web application? Then you could make a Proxy object for your database and have all the requests directed to it. Taking into consideration that your database only has key-value pairs, it is easy to use a Map as a cache in the Proxy.
Specifically, in pseudocode, all the requests would be as following:
// the client invokes cache.get(key)
if (cacheMap.has(key)) {
    return cacheMap.get(key);
} else {
    cacheMap.put(key, database.retrieve(key));
    return cacheMap.get(key);
}

// the client invokes cache.put(key, value)
cacheMap.put(key, value);
if (writeThrough) {
    database.put(key, value);
}
Also, in the background you could have an evictor thread which ensures that the cache does not grow too big. In your scenario, where you have a set of values that are frequently accessed, I would use an eviction strategy based on time-to-idle: if an item is idle for more than a set amount of time, it is evicted. This ensures that frequently accessed values remain in the cache. Also, if your cache is not write-through, the evictor needs to write to the database on eviction.
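Putting the pieces together, a minimal sketch of such a proxy in JavaScript (database stands in for whatever data access layer you already have; the numbers are arbitrary):

// a write-through proxy cache with time-to-idle eviction
class CachingProxy {
  constructor(database, options) {
    options = options || {};
    this.database = database;
    this.writeThrough = options.writeThrough !== false; // write-through by default
    this.maxIdleMs = options.maxIdleMs || 60000;
    this.cacheMap = new Map(); // key -> { value, lastAccess }
  }

  get(key) {
    let entry = this.cacheMap.get(key);
    if (!entry) {
      // cache miss: load from the database and remember it
      entry = { value: this.database.retrieve(key) };
      this.cacheMap.set(key, entry);
    }
    entry.lastAccess = Date.now();
    return entry.value;
  }

  put(key, value) {
    this.cacheMap.set(key, { value: value, lastAccess: Date.now() });
    if (this.writeThrough) {
      this.database.put(key, value); // keep the database in sync immediately
    }
  }

  // run this periodically (the "evictor") to drop entries idle for too long
  evictIdle() {
    const now = Date.now();
    for (const [key, entry] of this.cacheMap) {
      if (now - entry.lastAccess > this.maxIdleMs) {
        if (!this.writeThrough) {
          this.database.put(key, entry.value); // flush before dropping
        }
        this.cacheMap.delete(key);
      }
    }
  }
}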
Hope it helps :)
-- Flaviu Cipcigan