I have posted a question here regarding binding of my GridView - Bind GridView with many records with number of visible records at a moment. The load time has come down dramatically from 20 seconds to 2 seconds by implementing the answer, but I know it can still be brought down further.
For example: when I fetch, say, 50 records (Customers) at a time from the DB, one column fetches the number of orders for that customer from the Orders table (just the count, not the complete records). So it fires 50 separate queries against SQL Server to fetch data for 50 customers. Is there a way I can fetch the complete data with a single query?
NOTE: If anyone needs to look at the code, do let me know. Hopefully the link I have provided contains enough of the codebase.
Database indexing was what I was looking for.
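As for the per-customer order count, those 50 separate lookups can usually be collapsed into a single grouped join, which is what the question was asking about. A minimal sketch, assuming hypothetical Customers and Orders tables and column names:

    -- Hypothetical schema: Customers(CustomerId, Name), Orders(OrderId, CustomerId).
    -- One round trip returns each customer together with their order count.
    SELECT    c.CustomerId,
              c.Name,
              COUNT(o.OrderId) AS OrderCount   -- 0 for customers with no orders
    FROM      dbo.Customers AS c
    LEFT JOIN dbo.Orders    AS o ON o.CustomerId = c.CustomerId
    GROUP BY  c.CustomerId, c.Name
    ORDER BY  c.Name;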
I am setting up a Serverless application for a system and I am wondering the following:
Say that my table handles Companies. Each Company can have Invoices; each company has roughly 6,000-8,000 of them. Say that I have 14 Companies; that results in roughly 112,000 items in my table.
Is it "okay" to handle it this way? I will only pay for each Get request I do, and I can query a lot of items into the same get request.
I will not fetch every single item each time I write or get items.
So, is there a recommendation for the maximum number of items I should have in a table? I could bake some items together, but I mainly want a general recommendation.
There is no practical limit to the number of items you can have in a table. How many items each invoice should be split into depends on your application's access patterns. You need to ask: what data does your app need, when does it need that data, how large is the data, and how often is each item updated? For example, if all the data in one item fits under the 1 KB WCU and 4 KB RCU sizes, you do not write to it often, and when you read it you need all of the data in the item, then perhaps shove it all into one item. If the data is larger, or part of it gets written more often, then perhaps split it up.
An example might be a package tracking app. You have the initial information about the package, size, weight, source address, destination address, etc. That could be a lot of data. When that package enters a sorting facility it is checked in. Do you want to update that entire item you already wrote? Or do you just write an item that has the same PK (item collection), but a different SK and then the info that it made it to the sorting facility? When it leaves the sorting facility, you want to write to the DB that it left, which truck it was on, etc. Same questions.
Now when you need to present the shipping information by tracking ID number (the PK), you can do a Query against DynamoDB and get the entire item collection for that tracking ID number. That way you get every item with that ID, since your app presents much of that information on the tracking website for the customer.
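To make that concrete, the query for one item collection could be expressed in DynamoDB's PartiQL syntax roughly as follows (the table name, key attribute, and tracking-number format are assumptions for illustration; the native Query API with a key condition on the partition key is equivalent):

    SELECT * FROM "PackageTracking" WHERE PK = 'TRACKING#1Z999AA10123456784'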
So again, it really depends on the app and your access patterns, but you want to TRY to read and write only the items your app needs, when you need them, how you need them, and no more...within reason (there is such a thing as over-slicing your data). That, in my opinion, is how you make a NoSQL database like DynamoDB the most performant and the most cost effective.
DynamoDB won't even notice 100K items...
As mentioned by LifeOfPi, each individual item must stay under the 400 KB size limit.
The question indicates a distinct lack of understanding of what/why/how to use DDB. I suggest you do some more learning; the AWS re:Invent videos around DDB are quite useful.
In a standard RDBMS, you need to know the structure from the beginning. Accessing that data is then very flexible.
DDB is the opposite: you need to understand how you'll need to access your data; the structure is not as important.
For 100K items and for most applications, you may find Aurora Serverless an easier fit for your needs, especially if you have complicated searching and/or sorting needs.
I have 5,000 records. I calculate the salary of one user at a time and update that user's data in the database, so it is taking quite long to update all 5,000 records. I want to calculate all users' salaries first and then update the records in the DB.
Is there any other way we can update the DB in a single click?
It really depends on how you are managing your data access layer and what data you need for the calculation. Do you have all the data you need in just one table, or do you need to fetch data from other tables for each record?
One way is to retrieve each record, do the calculation in a transaction, and then store the result in the database. This way you can also use an AJAX UI to inform the user about the progress of the calculation. Use a SqlDataReader for fetching the data, as it is very optimized and has less overhead than DataSet and DataTable, and you can avoid several type casts. You can also optimize further by taking advantage of the TPL, or make it configurable to fetch/update N records at a time. This approach works if you have the IDs of the records. You also need a field on each record to track your calculations, so that after a disconnection, crash, or iisreset you can resume the calculation instead of rerunning it from scratch.
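If the calculation itself can be expressed in SQL, another option is a set-based update run in batches; a rough sketch, assuming a hypothetical Employees table and a staging table the application fills with the computed salaries:

    -- Hypothetical schema: dbo.Employees(EmployeeId, Salary) and a staging table
    -- dbo.SalaryCalculations(EmployeeId, CalculatedSalary) filled beforehand.
    -- Updating in batches of 500 keeps locks and the transaction log small.
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (500) e
        SET    e.Salary = c.CalculatedSalary
        FROM   dbo.Employees AS e
        JOIN   dbo.SalaryCalculations AS c ON c.EmployeeId = e.EmployeeId
        WHERE  e.Salary IS NULL OR e.Salary <> c.CalculatedSalary;

        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to update
    END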
In my database I have 10,000,000 records. In the GridView I am showing the first 10 records. In order to see the next records, the user needs to click the page numbers (1, 2, 3, ... 10000). But as I'm only retrieving 10 records the first time, GridView paging is not showing.
Is there any way to show paging in the ASP.NET GridView statically?
With so many records, I wouldn't recommend paging. You can show the top 20 most recently added records and provide options for filtering: the user enters keywords, and you re-query and re-bind the GridView with the new result set.
You might also consider using PetaPoco, a micro-ORM, which will help you fetch paged results.
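A sketch of the "top 20 plus keyword filter" query suggested above, with hypothetical table and column names (@Keyword would be the search box value, passed as a parameter):

    -- Hypothetical table dbo.Records(Id, Title, CreatedOn).
    -- Returns the 20 most recently added rows matching the keyword,
    -- or simply the 20 newest rows when no keyword is supplied.
    SELECT TOP (20) Id, Title, CreatedOn
    FROM   dbo.Records
    WHERE  @Keyword IS NULL OR Title LIKE '%' + @Keyword + '%'
    ORDER BY CreatedOn DESC;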
With so many records, you really need to take into account the exact queries being run to pull back the data.
There are numerous ways of doing data paging ( http://beyondrelational.com/modules/2/blogs/28/posts/10434/sql-server-server-side-paging-with-rownumber-function.aspx ). However, the "best" way depends on the exact version of SQL Server you are running.
Essentially, the solutions boil down to passing a page number and a number of records per page into some type of query - usually a stored procedure, as the query can get quite messy.
From there you have a choice: either send the total record count back as an OUT parameter and the result set back normally, or send the total record count back as an extra column. There are efficiency trade-offs with both options: one requires the query to be run twice, and the other adds an extra column of data to every row, which increases network traffic.
Once you have that solved then you can figure out exactly how you want the UI to work.
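As an illustration of the "count as an extra column" variant, a ROW_NUMBER-based paging procedure might look roughly like this on SQL Server 2005 or later (table and column names are assumptions):

    -- Hypothetical table dbo.Orders(OrderId, CustomerName, OrderDate).
    CREATE PROCEDURE dbo.GetOrdersPage
        @PageNumber  INT,
        @RowsPerPage INT
    AS
    BEGIN
        SET NOCOUNT ON;

        WITH Numbered AS
        (
            SELECT  o.*,
                    ROW_NUMBER() OVER (ORDER BY o.OrderDate DESC) AS RowNum,
                    COUNT(*) OVER () AS TotalRows   -- total count as an extra column
            FROM    dbo.Orders AS o
        )
        SELECT  *
        FROM    Numbered
        WHERE   RowNum BETWEEN (@PageNumber - 1) * @RowsPerPage + 1
                           AND  @PageNumber * @RowsPerPage
        ORDER BY RowNum;
    END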
I have implemented paging in a GridView, and in order to avoid frequently reconnecting to the database I used Session to store the data, so that the data can be retrieved from the session when the page index of the GridView changes.
But my problem is: when should I clear this session, given that it is only useful for this one page? And if I use ViewState instead, it will not cope well as the amount of data grows.
Looking forward to your valuable suggestions.
Thanks in advance
Supriya
You should not be saving any data to Session. If the data control requires data per page, it is fine to select only the rows you need from the database on each page change.
So if you have, say, 100 rows and 10 rows per page, then you should retrieve 10 rows per PageChange of the data control. This is perfectly acceptable, especially when combined with caching.
If you are using SQL 2005, see this post:
http://weblogs.asp.net/scottgu/archive/2006/01/07/434787.aspx
I don't think you should worry about the database connections; connection pooling will take care of that. You just have to open the connection and close it as soon as you get the paged records.
If you store your records in ViewState or the cache, this will unnecessarily use resources and might get out of sync with the database. I consider it a bad approach.
You should make a call to the DB each time the page changes and retrieve that page's records.
Hope it helps.
I'm trying to figure out how to develop a time-based trigger on my database. For example, there are some records in the database:
(screenshot of sample records: http://img109.imageshack.us/img109/2962/datax.png)
So, if the current DateTime on the server is 2010-01-09 12:12:12, record no. 1 must be deleted.
OK, but what if there are, say, 1,000,000 records in the database? Does the server have to search the database every second to check which rows must be deleted? That is not efficient at all.
I'm totally new to Microsoft SQL Server, so I'd be grateful for any kind of help.
There isn't a time-based trigger in SQL Server, so you are going to have to implement this as a job or through some other scheduled mechanism.
Most likely you will want an index on the StartDate (end date?) column so that your deletion query doesn't have to perform a full table scan to find the data it needs to delete.
Usually you don't actually perform deletes every second. Instead, the app should be smart enough to query the table in a way that excludes those records from its result set. Then you can perform lazy deletes at some other interval to do the cleanup, such as once an hour or once a day.
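A rough sketch of that approach, with a hypothetical dbo.Events table; the cleanup batch would be run from a SQL Server Agent job or another scheduler rather than every second:

    -- An index on the expiry column keeps both the filter and the cleanup cheap.
    CREATE INDEX IX_Events_EndDate ON dbo.Events (EndDate);

    -- 1) The application simply ignores expired rows when reading:
    SELECT Id, Name, StartDate, EndDate
    FROM   dbo.Events
    WHERE  EndDate > GETDATE();

    -- 2) A scheduled job (e.g. hourly) lazily deletes expired rows in small batches:
    WHILE 1 = 1
    BEGIN
        DELETE TOP (1000)
        FROM   dbo.Events
        WHERE  EndDate <= GETDATE();

        IF @@ROWCOUNT = 0 BREAK;   -- nothing left to clean up
    END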