ASP.NET MySQL Data Access Layer

I have an ASP.NET application that accesses a MySQL database. For that I made a class with all the queries I need to retrieve data from the database.
To bring back from the database only the information I need, I have a lot of queries:
Example:
One query that gets the NAME and DATE from the table NEWS.
Another query that gets the NAME, DATE, and TEXT from the table NEWS.
I do this because on some pages I just need the name and date, and on others I also need the text.
What do you think would be better for performance: to have just one query and get all the information even if I don't use some of the fields on some pages, or to have a query for each case?
This is a very simple example; in some cases I have many more fields...
Thanks.

It really depends on how often you create a connection to the database. For example, if your page loads and some parts of the page use the first query while other parts use the second, there is a benefit in executing only the second query and distributing the data as needed. You save on unnecessary connections, and that does result in a performance gain. However, if you have different pages calling different methods and you cannot reduce the number of calls, you can keep both methods and call the one that selects only what you need.
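As a rough sketch of the first case (one query shared by both parts of the page), assuming the MySQL Connector/NET provider and a hypothetical NEWS table, the page could fetch everything once per request and let each section bind only the columns it needs:
using System.Data;
using MySql.Data.MySqlClient;

public static class NewsData
{
    public static DataTable GetNews(string connectionString)
    {
        // One connection, one round trip; both page sections read from this table.
        using (var conn = new MySqlConnection(connectionString))
        using (var adapter = new MySqlDataAdapter("SELECT `NAME`, `DATE`, `TEXT` FROM NEWS", conn))
        {
            var news = new DataTable();
            adapter.Fill(news);
            return news;
        }
    }
}
The section that only needs the name and date simply ignores the TEXT column of the returned table.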

Related

SQL server inserting lots of data from ASP.NET?

I have this application where there is a parent-child table structure, and customers can order products. The whole structure is too complex to post here, but suffice it to say there is one Order table and one OrderDetails table for storing the orders. Currently what we do is INSERT one record into the Order table, and then, for each item the customer added, insert that item into the OrderDetails table in a loop. The solution is not scalable for obvious reasons. It works fine for 100 or so items, but if the user goes over 1000 items, or a quantity of 1000 of an item, you start to notice the unresponsiveness of the application.
There are a couple of solutions that come to mind, but I am not sure which one would scale well. One is to use BulkInsert from my ASP.NET application to insert into the OrderDetails table. The second is to generate XML and pass it to a SQL proc, extracting and inserting the data into the OrderDetails table from that XML, but that has the associated overhead of the memory consumed by the generated XML. I know I could benchmark and see for myself what suits my application best, but I would like to know the most common strategy and which would scale better compared to the others. Also, if there is another technique I could use instead of these two that would be better performance-wise (I know performance is a subjective word, but let me narrow it down to speed), I could use that. Which is generally used the most? What do you use in your application?
You could consider exploring the option of using a table valued parameter in the database. You will have to create a table type object, whose structure will mimic that of the OrderDetails table. The stored proc for inserting the data will accept an input parameter of this type (such parameters are always READONLY).
In your server-side code, you can construct a DataTable object containing all the OrderDetails data, which will be mapped to the input parameter of the stored proc. Ensure that the order of columns in the DataTable object exactly matches the order in the table-valued parameter. Upon executing the query, all the data will be inserted in one shot. This saves you from looping over each row of data and also avoids the overhead of XML parsing. This approach, though, involves passing an entire object over the network.
You can read more about it here: MSDN Table-Valued Parameters
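A minimal sketch of that approach, assuming a hypothetical table type dbo.OrderDetailsType and a stored procedure dbo.InsertOrderDetails that accepts it as a READONLY parameter already exist in the database:
using System.Data;
using System.Data.SqlClient;

public static class OrderRepository
{
    public static void InsertOrderDetails(string connectionString, DataTable details)
    {
        // 'details' must have the same columns, in the same order, as dbo.OrderDetailsType,
        // e.g. OrderId (int), ProductId (int), Qty (int), Price (money).
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.InsertOrderDetails", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            var parameter = cmd.Parameters.AddWithValue("@Details", details);
            parameter.SqlDbType = SqlDbType.Structured;
            parameter.TypeName = "dbo.OrderDetailsType";
            conn.Open();
            cmd.ExecuteNonQuery(); // all rows go to the server in a single call
        }
    }
}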
1000 items for an order does seem quite excessive!
Would it be feasible to introduce a limit of 100 items per order into the business logic of the application?

Is it possible to create multiple versions of the same table, and if so, how?

I'm currently in the process of creating a website/system and was wondering how I can create multiple versions of a table to be used by many users. The reason for this is primarily the amount of information needed by each user. Another reason is that each user's information has been laid out to display the product code and other key information.
This is because a main table sets out a list of data, for example the prices of a product, from which each user can then set and store data in their own table to be used at a later date and referenced accordingly. Rather than creating multiple columns in the thousands, I feel it would be better to simply create different versions of the table.
Instead of creating multiple tables, why not create separate views in the database based on your user and display that information in the table?
Although it's recommended to use the same single table and add fields that restrict some data to a particular user, there are a few cases where several versions of the table may be required, like the ones you mention.
I have worked with a web app that served several companies with the same tables, fields, and schemas, on the same web server and database server, and yet the customers wanted their data separated from the other users'.
Usually, you create several tables with the same schema but a different name or id. Your web app must have a way to select which table or database you are going to use.
Be very careful with this approach; it is very difficult to maintain. It's better to use some programming techniques to control this scenario, like:
http://en.wikipedia.org/wiki/Multitier_architecture
That allows a lot of control over a database application and lets you switch between tables or databases that share the same schema.
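As an illustration of "your web app must have a way to select which table to use", here is a rough sketch with made-up table names; the important point is that the per-company table name comes from a whitelist, never from raw user input:
using System;
using System.Data;
using System.Data.SqlClient;

public static class PriceData
{
    // Map each company to its own copy of the prices table (hypothetical names).
    private static string GetPricesTable(string companyCode)
    {
        switch (companyCode)
        {
            case "ACME":    return "Prices_ACME";
            case "INITECH": return "Prices_INITECH";
            default: throw new ArgumentException("Unknown company", nameof(companyCode));
        }
    }

    public static DataTable GetPrices(string connectionString, string companyCode)
    {
        string sql = "SELECT ProductCode, Price FROM " + GetPricesTable(companyCode);
        using (var conn = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter(sql, conn))
        {
            var prices = new DataTable();
            adapter.Fill(prices);
            return prices;
        }
    }
}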

How to setup data model for customizable application

I have an ASP.NET data entry application that is used by multiple clients. The application consists of multiple data entry modules that are common to all clients.
I now have multiple clients that want their own custom module added which will typically consist of a dozen or so data points. Some values will be text, others numeric, some will be dropdown selections, etc.
I'm in need of suggestions for handling the data model for this. I have two thoughts on how to handle it. The first would be to create a new table for each new module for each client. This is pretty clean, but I don't particularly like it. My other thought is to have one table with columns for each custom data point for each client. This table would end up with a lot of columns and a lot of NULL values. I don't really like either solution and suspect there's a better way to do this, so any feedback you have will be appreciated.
I'm using SQL Server 2008.
As always with these questions, "it depends".
The dreaded key-value table.
This approach relies on a table which lists the fields and their values as individual records.
CustomFields(clientId int, fieldName sysname, fieldValue varbinary)
Benefits:
Infinitely flexible
Easy to implement
Easy to index
Non-existent values take no space
Disadvantage:
Showing a list of all records with complete field list is a very dirty query
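To illustrate that last point, a sketch of what the flattening query might look like, using made-up field names against the CustomFields table above; every extra field needs its own LEFT JOIN, which is what makes the complete-field-list query so dirty:
using System.Data;
using System.Data.SqlClient;

public static class CustomFieldData
{
    public static DataTable GetFlattenedCustomFields(string connectionString)
    {
        const string sql = @"
            SELECT c.clientId,
                   colour.fieldValue AS FavouriteColour,
                   shoe.fieldValue   AS ShoeSize
            FROM (SELECT DISTINCT clientId FROM CustomFields) c
            LEFT JOIN CustomFields colour ON colour.clientId = c.clientId AND colour.fieldName = 'FavouriteColour'
            LEFT JOIN CustomFields shoe   ON shoe.clientId   = c.clientId AND shoe.fieldName   = 'ShoeSize';";

        using (var conn = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter(sql, conn))
        {
            var result = new DataTable();
            adapter.Fill(result);
            return result;
        }
    }
}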
The Microsoft way
The Microsoft way of handling this kind of problem is "sparse columns" (introduced in SQL Server 2008)
Benefits:
Blessed by the people who design SQL Server
records can be queried without having to apply fancy pivots
Fields without data don't take space on disk
Disadvantage:
Many technical restrictions
A new field requires DDL (an ALTER TABLE to add the column)
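For example (hypothetical table and column names), adding a field under this option means running DDL like the following, which the application could execute through ADO.NET if it has to happen at runtime:
using System.Data.SqlClient;

public static class SparseFieldAdmin
{
    public static void AddSparseField(string connectionString)
    {
        // Each new custom data point is a new sparse column; rows without a value cost no disk space.
        const string ddl = @"ALTER TABLE ClientModuleData
                             ADD ClientA_FavouriteColour nvarchar(50) SPARSE NULL;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(ddl, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}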
The xml tax
You can add an xml field to the table which will be used to store all the "extra" fields.
Benefits:
unlimited flexibility
can be indexed
storage efficient (when it fits in a page)
With some xpath gymnastics the fields can be included in a flat recordset.
schema can be enforced with schema collections
Disadvantages:
Not clearly visible what's in the field
XQuery support in SQL Server has gaps, which makes getting your data a real nightmare sometimes
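As a taste of those "xpath gymnastics" (table, column, and field names are made up), pulling one extra field out of the xml column into a flat recordset might look like this:
using System.Data;
using System.Data.SqlClient;

public static class XmlFieldData
{
    public static DataTable GetRecordsWithColour(string connectionString)
    {
        // ExtraFields is assumed to be an xml column shaped like
        // <fields><field name="FavouriteColour">blue</field>...</fields>.
        const string sql = @"
            SELECT Id,
                   ExtraFields.value('(/fields/field[@name=""FavouriteColour""])[1]', 'nvarchar(100)') AS FavouriteColour
            FROM ClientRecords;";

        using (var conn = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter(sql, conn))
        {
            var result = new DataTable();
            adapter.Fill(result);
            return result;
        }
    }
}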
There are maybe more solutions, but to me these are the main contenders. Which one to choose:
key-value seems appropriate when the number of extra fields is limited (say, no more than 10-20 or so)
Sparse columns are more suitable for data with many properties that are filled out infrequently. They sound more appropriate when you can have many extra fields
The xml column is very flexible, but a pain to query. It is appropriate for solutions that write rarely and query rarely, i.e. don't run aggregates etc. on the data stored in this field.
I'd suggest you go with the first option you described. I wouldn't over think it. The second option you outlined would be a bad idea in my opinion.
If there are fields common to all the modules you're adding to the system you should consider keeping those in a single table then have other tables with the fields specific to a particular module related back to the primary key in the common table. This is basically table inheritance (http://www.sqlteam.com/article/implementing-table-inheritance-in-sql-server) and will centralize the common module data and make it easier to query across modules.
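A rough sketch of reading data laid out that way, with hypothetical names: the shared fields live in a common table (here ModuleEntry) and each client's custom module gets its own table (here AcmeCustomModule) keyed back to the common primary key.
using System.Data;
using System.Data.SqlClient;

public static class ModuleData
{
    public static DataTable GetAcmeModuleEntries(string connectionString)
    {
        // Common fields come from ModuleEntry; the client's dozen-or-so custom
        // data points come from their module table, joined on the shared key.
        const string sql = @"
            SELECT e.EntryId, e.CreatedBy, e.CreatedDate,
                   m.FieldOne, m.FieldTwo, m.FieldThree
            FROM ModuleEntry e
            JOIN AcmeCustomModule m ON m.EntryId = e.EntryId;";

        using (var conn = new SqlConnection(connectionString))
        using (var adapter = new SqlDataAdapter(sql, conn))
        {
            var result = new DataTable();
            adapter.Fill(result);
            return result;
        }
    }
}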

About the GridView control in asp.net

I'm working on a web application. On one page I insert records into the database, and I want to display that data in a GridView, but on a different page. How can I do this?
I know how to display records in a GridView, but in this case there are two web pages: one page provides the facility to insert the records, and I want to display the records in the GridView on the second page.
While it is possible to retain the data being inserted without retrieving it from the database, I think it is better to save the data on the first page and retrieve it from the database on the second page.
You can do this by writing inline SQL or a stored procedure. One simple approach would be to pass the resultset into a DataTable and bind a GridView to that.
That does involve more work -- more code and more trips to the database. However, I think it is very useful when performing INSERTs that the web page is updated to display what actually got into the database. Sometimes, this is different from what the user thinks they entered, and they can see the problem immediately.
One question would be how to identify the data that has just been inserted. I can think of several ways to do that. One is to query for all records entered today by the person logged in (which is recorded in the CreatedBy and CreatedDate columns of the database tables). Sort the resultset in descending order of CreatedDate, so that the most recent entries appear at the top of the GridView. Another would be by assigning a batch number to the data entry and retrieving only the data in that batch.
If you really want to hang on to the data entry, you could put it into Session on the first page, and then retrieve it from Session for display on the second page.
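A minimal sketch of the save-then-requery approach described above, for the second page's code-behind; the table, columns, connection string name, and GridView1 (declared in the .aspx markup) are all hypothetical:
using System;
using System.Data;
using System.Data.SqlClient;

public partial class RecentRecordsPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack) return;

        // "Default" is an assumed connection string name in web.config.
        string connectionString =
            System.Configuration.ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

        // Show only today's entries by the logged-in user, newest first.
        const string sql = @"
            SELECT Id, Name, CreatedDate
            FROM Records
            WHERE CreatedBy = @user AND CreatedDate >= CAST(GETDATE() AS date)
            ORDER BY CreatedDate DESC;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@user", User.Identity.Name);
            var recent = new DataTable();
            new SqlDataAdapter(cmd).Fill(recent);
            GridView1.DataSource = recent;
            GridView1.DataBind();
        }
    }
}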
Following along the lines of what DOK said, it's also a lot easier to validate data entered by your users in your business logic before you submit it to the database.
Secondly, users can change their minds about data on a web page frequently. The data on the page could be in a partially finished state or could have typos or errors in it. If someone else saw this data and believed it needed to be completed, you could end up with duplicate entries in the database that would then require reconciliation.
Honestly, your best bet is to use the Session object to hold temporary user data. The MSDN entry for the GridView RowEditing event contains some great source code for this approach. Whenever I have to use GridViews to handle data from the database, I mimic this.
In addition to handling problems with temporary data storage, you can compare the Session object to your database results to determine whether or not new rows have been inserted. This is somewhat costly as it involves overloading the Equals method (and GetHashCode as well, if you follow what Microsoft recommends) and using Equals to iterate over the two collections, comparing the properties of both objects, and determining which records are new based on records that don't exist in your Session object, but do exist in your database object.
It's also worth noting that this approach assumes you don't delete data from your database, but instead set the status of a record to "Deleted", whether that's a boolean field or a sequence of codes you use to describe the state of rows in a table.
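As a small sketch of the Session hand-off between the two pages (the key, variable, and control names are made up, and each snippet lives in the respective page's code-behind):
// First page, after the user submits their entries (enteredRows is e.g. a DataTable built from the form):
Session["PendingEntries"] = enteredRows;
Response.Redirect("ViewEntries.aspx");

// Second page, in Page_Load: pull the entries back out and bind the grid.
var pending = Session["PendingEntries"] as System.Data.DataTable;
if (pending != null)
{
    GridView1.DataSource = pending;
    GridView1.DataBind();
}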

Bulk Collection Manipulation through a REST (RESTful) API

I'd like some advice on designing a REST API which will allow clients to add/remove large numbers of objects to a collection efficiently.
Via the API, clients need to be able to add items to the collection and remove items from it, as well as manipulating existing items. In many cases the client will want to make bulk updates to the collection, e.g. adding 1000 items and deleting 500 different items. It feels like the client should be able to do this in a single transaction with the server, rather than requiring 1000 separate POST requests and 500 DELETEs.
Does anyone have any info on the best practices or conventions for achieving this?
My current thinking is that one should be able to PUT an object representing the change to the collection URI, but this seems at odds with the HTTP 1.1 RFC, which seems to suggest that the data sent in a PUT request should be interpreted independently from the data already present at the URI. This implies that the client would have to send a complete description of the new state of the collection in one go, which may well be very much larger than the change, or even be more than the client would know when they make the request.
Obviously, I'd be happy to deviate from the RFC if necessary but would prefer to do this in a conventional way if such a convention exists.
You might want to think of the change task as a resource in itself. So you're really PUT-ing a single object, which is a Bulk Data Update object. Maybe it's got a name, owner, and big blob of CSV, XML, etc. that needs to be parsed and executed. In the case of CSV you might want to also identify what type of objects are represented in the CSV data.
List jobs, add a job, view the status of a job, update a job (probably in order to start/stop it), delete a job (stopping it if it's running) etc. Those operations map easily onto a REST API design.
Once you have this in place, you can easily add different data types that your bulk data updater can handle, maybe even mixed together in the same task. There's no need to have this same API duplicated all over your app for each type of thing you want to import, in other words.
This also lends itself very easily to a background-task implementation. In that case you probably want to add fields to the individual task objects that allow the API client to specify how they want to be notified (a URL they want you to GET when it's done, or send them an e-mail, etc.).
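For example, the job resource might be exposed along these lines (a modern ASP.NET Core sketch with hypothetical names, just to show the shape of the API, not a prescribed implementation):
using Microsoft.AspNetCore.Mvc;

public class BulkUpdateJob
{
    public string Id { get; set; }
    public string Status { get; set; }       // e.g. Queued, Running, Completed
    public string DataType { get; set; }     // what kind of objects the payload contains
    public string Payload { get; set; }      // the CSV/XML blob describing the changes
    public string CallbackUrl { get; set; }  // optional: where to notify when done
}

[ApiController]
[Route("jobs")]
public class JobsController : ControllerBase
{
    [HttpPost]                 // submit a new bulk-change job
    public ActionResult<BulkUpdateJob> Create(BulkUpdateJob job)
    {
        // ...validate, persist, and queue the job for background processing...
        return CreatedAtAction(nameof(Get), new { id = job.Id }, job);
    }

    [HttpGet("{id}")]          // poll the job's status
    public ActionResult<BulkUpdateJob> Get(string id)
    {
        // ...look the job up...
        return Ok(new BulkUpdateJob { Id = id, Status = "Running" });
    }

    [HttpDelete("{id}")]       // cancel the job, stopping it if it is running
    public IActionResult Cancel(string id)
    {
        // ...stop and remove the job...
        return NoContent();
    }
}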
Yes, PUT creates/overwrites, but does not partially update.
If you need partial update semantics, use PATCH. See http://greenbytes.de/tech/webdav/draft-dusseault-http-patch-14.html.
You should use AtomPub. It is specifically designed for managing collections via HTTP. There might even be an implementation for your language of choice.
For the POSTs, at least, it seems like you should be able to POST to a list URL and have the body of the request contain a list of new resources instead of a single new resource.
As far as I understand it, REST means REpresentational State Transfer, so you should transfer the state from client to server.
If that means too much data going back and forth, perhaps you need to change your representation. A collectionChange structure would work, with a series of deletions (by id) and additions (with embedded full xml Representations), POSTed to a handling interface URL. The interface implementation can choose its own method for deletions and additions server-side.
The purest version would probably be to define the items by URL, and the collection contain a series of URLs. The new collection can be PUT after changes by the client, followed by a series of PUTs of the items being added, and perhaps a series of deletions if you want to actually remove the items from the server rather than just remove them from that list.
You could introduce a meta-representation of existing collection elements that don't need their entire state transferred, so in some abstract code your update could look like this:
{existing elements 1-100}
{new element foo with values "bar", "baz"}
{existing element 105}
{new element foobar with values "bar", "foo"}
{existing elements 110-200}
Adding (and modifying) elements is done by defining their values, deleting elements is done by not mentioning them in the new collection, and reordering elements is done by specifying the new order (if order is stored at all).
This way you can easily represent the entire new collection without having to re-transmit the entire content. Using an If-Unmodified-Since header makes sure that your idea of the content matches the server's (so that you don't accidentally remove elements that you simply didn't know about when the request was submitted).
The best way is:
1. Pass only an array of the ids of the deletable objects from the front-end application to the Web API.
2. Then you have two options:
2.1 Web API way: find all the collections/entities using the id array and delete them in the API, but you need to take care of dependent entities, like foreign-key-related table data, too.
2.2 Database way: pass the ids to the database side, find all records in the foreign-key tables and primary-key tables, and delete them in that order, i.e. F-key table records first, then P-key table records.
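A rough sketch of option 2.1 with hypothetical table names: the API deletes the dependent (foreign-key) rows before the parent (primary-key) rows inside a single transaction.
using System.Data.SqlClient;

public static class OrderCleanup
{
    public static void DeleteOrders(string connectionString, int[] ids)
    {
        if (ids == null || ids.Length == 0) return;

        // The ids are integers, so joining them into the statement cannot inject SQL.
        string idList = string.Join(",", ids);

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                // Child rows first, then the parent rows.
                using (var cmd = new SqlCommand("DELETE FROM OrderDetails WHERE OrderId IN (" + idList + ")", conn, tx))
                    cmd.ExecuteNonQuery();
                using (var cmd = new SqlCommand("DELETE FROM Orders WHERE Id IN (" + idList + ")", conn, tx))
                    cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }
    }
}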
