I have a data entry form like this:
Data Entry Form http://img192.imageshack.us/img192/2478/inputform.jpg
There are some empty rows, and some of them have values. The user can update existing values and can also fill in values in the empty rows.
I need to map these values to my DB table; some of them will be inserted as new rows into the database, and existing records will be updated.
I need your suggestions: what is the best approach to accomplish this?
Thanks
For each row, I would have a primary key (hidden), a dirty flag, and a new flag. In the grid, you would set the "dirty" flag to true when changes are made. When adding new rows in the UI, you would set the new flag as well as generate a primary key (this would be easiest if you used GUIDs for the key). Then, when you post this all back to the server, you would do inserts when the new flag is set and updates for those with the dirty flag.
Once the commit of the data has completed, you would simply clear the dirty and new flags.
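A minimal sketch of that row model and commit pass, assuming a C# back end (the names here are illustrative, not taken from your form):

    using System;
    using System.Collections.Generic;

    // Hypothetical grid row; the key is generated client-side so new rows
    // can be identified before they ever reach the database.
    public class GridRow
    {
        public Guid Id { get; set; }        // hidden primary key (GUID)
        public bool IsNew { get; set; }     // set when the row is added in the UI
        public bool IsDirty { get; set; }   // set when an existing row is edited
        public string Value { get; set; }   // the editable cell(s)
    }

    public class GridCommitter
    {
        public GridRow AddRow(IList<GridRow> rows, string value)
        {
            var row = new GridRow { Id = Guid.NewGuid(), IsNew = true, Value = value };
            rows.Add(row);
            return row;
        }

        public void Commit(IEnumerable<GridRow> rows)
        {
            foreach (var row in rows)
            {
                if (row.IsNew)
                    Insert(row);            // INSERT rows created in the UI
                else if (row.IsDirty)
                    Update(row);            // UPDATE rows that were edited

                row.IsNew = false;          // clear the flags once committed
                row.IsDirty = false;
            }
        }

        private void Insert(GridRow row) { /* your data access goes here */ }
        private void Update(GridRow row) { /* your data access goes here */ }
    }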
Of course, if the data is shared by multiple contributors and can be edited concurrently, there's a bit more involved if you don't want someone overwriting another's edits.
I would look into using ADO.net DataSets and DataTables as a backing store in memory for your custom data grid. ADO.net allows you to bulk load a data set out of the database and track inserts, updates, and deletes against that data in memory. Once you are done, you can then bulk process the stored transactions back into the database.
The big benefit of using ADO.net is that all the prickly change tracking code is written for you already, and the library is deployed to every .net capable machine.
While it isn't in vogue right now, you can also send ADO.net data sets across the wire using XML serialization for altering and then send it back to be processed into the database.
Google around. There are literally thousands of books, tutorials, and blog posts on how to use ADO.net.
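To make it concrete, here is a minimal sketch of that round trip with a SqlDataAdapter (the table and column names are assumptions):

    using System.Data;
    using System.Data.SqlClient;

    string connectionString = "...";  // your connection string

    // The SELECT must include the primary key so the command builder
    // can generate the INSERT/UPDATE/DELETE commands automatically.
    var adapter = new SqlDataAdapter("SELECT Id, Value FROM Entries", connectionString);
    var builder = new SqlCommandBuilder(adapter);

    var table = new DataTable("Entries");
    adapter.Fill(table);              // bulk load; rows start as Unchanged

    // ... bind 'table' to the grid; edits flip each row's RowState
    //     to Added, Modified, or Deleted ...

    if (table.GetChanges() != null)
        adapter.Update(table);        // replays the tracked changes and
                                      // resets the row states on success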
I thought Datastore's key was ordered by insertion date, but apparently I was wrong. I need to periodically look for new entities in the Datastore, fetch them and process them.
Until now, I simply stored the last fetched key and (wrongly) queried for anything greater than it.
Is there a way of doing so?
Thanks in advance.
Datastore's automatically generated keys are distributed uniformly in order to make lookups more performant. You will not be able to tell which entities were added last by looking at the keys.
Instead, you can try a couple of different approaches.
Use Pub/Sub and architect your app so that a separate background task consumes the newly added entities. Whenever an entity is added to the DB, you publish an event containing its key ID to Pub/Sub; your event listener (a separate routine) will receive it.
Use key names and generate your own custom names. But since you want sequentially increasing names, this will cause a performance hit even on fairly small ranges of data. You can find more about this in the Google Datastore best practices:
https://cloud.google.com/datastore/docs/best-practices#keys
Or you can add an additional creation-time property and keep using automatic key generation.
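A rough sketch of that last approach with the Google.Cloud.Datastore.V1 C# client (the kind and property names are assumptions, and the exact query syntax may differ between client versions; check the client docs):

    using System;
    using Google.Cloud.Datastore.V1;

    DatastoreDb db = DatastoreDb.Create("my-project-id");

    // Store a new entity with an auto-generated key plus a creation time.
    KeyFactory keyFactory = db.CreateKeyFactory("Item");
    Entity entity = new Entity
    {
        Key = keyFactory.CreateIncompleteKey(),
        ["created"] = DateTime.UtcNow
    };
    db.Insert(entity);

    // Periodically fetch anything created after the last processed time
    // (a fixed window here; in practice you would persist this checkpoint).
    DateTime lastProcessed = DateTime.UtcNow.AddMinutes(-5);
    Query query = new Query("Item")
    {
        Filter = Filter.GreaterThan("created", lastProcessed),
        Order = { { "created", PropertyOrder.Types.Direction.Ascending } }
    };
    foreach (Entity e in db.RunQuery(query).Entities)
        Console.WriteLine(e.Key);     // process the new entity here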
I am working with a large web application where there can be up to 100,000 objects, populated from a DB, in cache.
There is a table in the database which, given the object ID, will give you a last_updated value which is updated whenever any aspect of that object changes in the DB.
I have read about creating one SqlCacheDependency per object (one row in a table per object), but with such a high number of objects that is a no-go.
I am looking for alternative solutions. One such possible solution I thought of is to cache the "last_updated" table as a datastructure and create a cache dependency to the table it is based on. Then whenever one of the 100,000 objects is requested, I check the cached "last_updated" table and if it is out of date, I fetch the object again from the database and re-cache it. If it is not out of date, I give the cached version. Does this seem like a reasonable solution?
But how can you do it for a single row of the table? In ASP.NET you can create a SQL Server dependency, which uses the Service Broker: the data is put into the cache, and whenever the table is updated the cached entry is invalidated, so fresh data is taken from the DB and put back into the cache.
I hope this gives you some ideas!
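A rough sketch of that idea applied to the "last_updated" table from the question, assuming Service Broker is enabled and query notifications are permitted (table and column names are made up):

    using System.Data;
    using System.Data.SqlClient;
    using System.Web;
    using System.Web.Caching;

    public static class LastUpdatedCache
    {
        static readonly string ConnectionString = "...";  // your connection string

        // Call once at application startup (e.g. in Application_Start).
        public static void Start()
        {
            SqlDependency.Start(ConnectionString);
        }

        public static DataTable Load()
        {
            using (var conn = new SqlConnection(ConnectionString))
            // Query notifications require two-part table names and an
            // explicit column list (no SELECT *).
            using (var cmd = new SqlCommand(
                "SELECT object_id, last_updated FROM dbo.last_updated", conn))
            {
                // Must be attached to the command *before* it executes.
                var dependency = new SqlCacheDependency(cmd);
                conn.Open();
                var table = new DataTable();
                table.Load(cmd.ExecuteReader());

                // One cache entry guards the whole table; any change to the
                // table invalidates it and the next request reloads it.
                HttpRuntime.Cache.Insert("last_updated", table, dependency);
                return table;
            }
        }
    }

When one of the 100,000 objects is requested, you look its ID up in this cached table, compare the stamp with the one stored alongside the cached object, and refetch from the database only when it is stale.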
I have a question in the field of optimization and application design.
I am building a web application using ASP.NET and SQL Server.
On one of my screens I must perform an action that generates a random set of user IDs. I present the viewer with some statistics about the selected users. If the viewer likes the statistics, I want to save them.
So basically I need to save the temporary random data, and if user likes it keep it.
Should I store the generated ids in the database or should I store them in the session?
Well, since you are generating random ids, you are using some kind of pseudorandom generator. Have you considered the possibility of just storing the seed for that generator? I've recently had a similar issue. In fact, very similar. Have a look:
My Post about Random(int seed)
EDIT: In the comments you suggest you want to do much of this on the SQL server. Have a look at the following post. In addition, you may want to consider the special case of a new user being added while your admin (or whoever) ponders whether they "like" the selection enough to save it. In that case you'd also need to store the number of users at the time the request was made, and adjust your random selection function accordingly. In the even more special case of a removed user, this approach is, admittedly, useless.
Seeding SQL
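A minimal sketch of the idea (note that a seeded Random sequence is only reproducible on the same runtime version, so persist the selection itself before relying on it long-term):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Persist only (seed, count) plus the user count at request time;
    // re-running the selection is then deterministic.
    static List<int> SelectUserIds(int seed, int count, IList<int> allUserIds)
    {
        var rng = new Random(seed);              // same seed => same sequence
        return Enumerable.Range(0, count)
                         .Select(i => allUserIds[rng.Next(allUserIds.Count)])
                         .ToList();
    }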
It depends on your case and requirements. If you store the data in the session, that is fine, but you will lose the data once the session is abandoned or ends. If that is not important, meaning you don't need to keep the data forever, then storing it in the session (temporarily) is better. But you will need to look into performance, especially if multiple users do the same thing at once, which will degrade it.
If you choose to store it in a database, that will also work, but you will need to decide whether to have ViewState enabled or disabled, again for the sake of performance.
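For the session route, a minimal sketch (the helper methods are hypothetical placeholders for your own generation and persistence code):

    // On the page that generates the statistics: keep only the IDs in Session.
    List<int> generatedIds = GenerateRandomUserIds();   // hypothetical helper
    Session["PendingUserIds"] = generatedIds;

    // Later, if the viewer clicks "Save": persist them in one batch.
    var pending = Session["PendingUserIds"] as List<int>;
    if (pending != null)
    {
        SaveUserIds(pending);                 // hypothetical: batch INSERT
        Session.Remove("PendingUserIds");     // nothing lingers afterwards
    }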
If I had to do this, I would store the data in a temp table in SQL that is dropped and recreated on every page load event, and show the data in a grid from which selected user IDs can be deleted or saved.
I have asked a few questions today as I try to think through to the solution of a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily reliant/dependant upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other reasons for this which I won't go into here that are not database related). While this user is making their changes we want to be able to show them the original state of entities which they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to rollback/commit.
We have considered these options:
Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
Holding a copy of all the data in memory (or cached to disk) is an option but there is heck of a lot of it, so seems unreasonable.
Maintaining a set of secondary tables, or attempting to use session state to store changes, but this is complex and difficult to maintain.
Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback. I.e. switching on/off, forcing snapshot, reversing direction etc.
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data as well as obtaining a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this allows a very straightforward way to lock/unlock access to specific entities.
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
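A minimal sketch of the IEditableObject pattern described above ('Customer' is an illustrative entity, not the poster's actual service type):

    using System.ComponentModel;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }

        public Customer Clone()
        {
            return new Customer { Id = Id, Name = Name };
        }
    }

    public class CustomerModel : IEditableObject
    {
        private Customer _current;    // what the UI binds to
        private Customer _original;   // snapshot taken when editing begins
        private bool _editing;

        public CustomerModel(Customer fromService)
        {
            _current = fromService;
        }

        public Customer Current  { get { return _current; } }
        public Customer Original { get { return _original; } }  // enables the "preview" diff

        public void BeginEdit()
        {
            if (_editing) return;
            _original = _current.Clone();   // keep the pre-edit state around
            _editing = true;
        }

        public void CancelEdit()            // rollback: restore the snapshot
        {
            if (!_editing) return;
            _current = _original;
            _editing = false;
        }

        public void EndEdit()               // commit: drop the snapshot
        {
            if (!_editing) return;
            _original = null;
            _editing = false;
        }
    }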
HTH
Consider this:
1. Long transactions make the system less scalable. If you issue an UPDATE, the update locks last until commit/rollback, preventing other transactions from proceeding.
2. Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in them. The only way around that is to lock them => see point 1.
3. Serializable transactions in some database engines use versions of the data in your tables. After the first command is executed, the transaction sees exactly the data that was available at that command's execution time. This might help you show the changes made by the user, but you have no guarantee of being able to save them back into storage.
4. DataSets contain old/new versions of data, but that is unfortunately outside your technology aim.
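As a concrete example of point 3, here is a sketch using SQL Server's SNAPSHOT isolation (the database must first be configured to allow it):

    using System;
    using System.Transactions;

    // Requires: ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };
    using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
    {
        // Reads here see the database as of the moment the transaction
        // started, regardless of concurrent writers. Writes, however, can
        // still fail with an update-conflict error at commit time, which
        // is the "no guarantee to save them back" caveat above.
        scope.Complete();
    }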
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is possible theoretically and is implemented in Oracle using flashbacks, SQL Server does not support it natively, since it has no means to query previous versions of the records.
You can issue a query like this:
    SELECT *
    FROM mytable
    AS OF TIMESTAMP
    TO_TIMESTAMP('2010-01-17')
in Oracle but not in SQL Server.
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
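A rough sketch of what that roll-your-own versioning might look like (the shadow table and all names are made up):

    using System.Data.SqlClient;

    // Hypothetical shadow table:
    //   CREATE TABLE mytable_versions (
    //       id INT, version_no INT, payload NVARCHAR(MAX),
    //       valid_from DATETIME2 NOT NULL DEFAULT SYSDATETIME());
    static void SaveNewVersion(SqlConnection conn, int id, string payload)
    {
        const string sql = @"
            INSERT INTO mytable_versions (id, version_no, payload)
            SELECT @id,
                   ISNULL(MAX(version_no), 0) + 1,  -- next version for this row
                   @payload
            FROM mytable_versions
            WHERE id = @id;";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@payload", payload);
            cmd.ExecuteNonQuery();
        }
    }

The editing connection reads the highest version_no for its preview while everyone else keeps reading mytable; on commit you copy the latest version back into mytable, and on rollback you simply delete the pending version rows.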
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by #user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Available in SQL 2005 and up, Enterprise edition only.)
So:
A user comes along and initiates one of these meta-transactions.
A flag is marked in the database showing what is going on. A new transaction cannot be started if one is already in process. (Again, check for lost transactions now and then!)
Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
If the user decides to keep the transaction, you drop the snapshot, and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
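For reference, the raw snapshot operations the steps above would build on, issued through ADO.NET here (database name and file path are made up; a snapshot needs one sparse file per data file of the source database):

    using System;
    using System.Data.SqlClient;

    var master = new SqlConnection("Server=.;Database=master;Integrated Security=true");
    master.Open();

    Action<string> exec = sql =>
    {
        using (var cmd = new SqlCommand(sql, master)) cmd.ExecuteNonQuery();
    };

    // Take a point-in-time snapshot of the database.
    exec(@"CREATE DATABASE MyDb_Snap
           ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb.ss')
           AS SNAPSHOT OF MyDb;");

    // Revert the whole database to the snapshot's state (needs exclusive
    // access, and it must be the only snapshot of that database at the time).
    exec("RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snap';");

    // Discard the snapshot when it is no longer needed.
    exec("DROP DATABASE MyDb_Snap;");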
Yep, this sure smells, and it may not apply too well to your problem. Hopefully the ideas here help you work something out.
I'm working on a web application. On one page I am inserting records into the database, and I want to display the data in a GridView, but on a different page. How can I do this?
I know how to display records in a GridView, but here there are two web pages: one page provides the facility to insert the records, and I want to display the records in the GridView on the second page.
While it is possible to retain the data being inserted without retrieving it from the database, I think it is better to save the data on the first page and retrieve it from the database on the second page.
You can do this by writing inline SQL or a stored procedure. One simple approach would be to pass the resultset into a DataTable and bind a GridView to that.
That does involve more work -- more code and more trips to the database. However, I think it is very useful when performing INSERTs that the web page is updated to display what actually got into the database. Sometimes, this is different from what the user thinks they entered, and they can see the problem immediately.
One question would be how to identify the data that has just been inserted. I can think of several ways to do that. One is to query for all records entered today by the person logged in (which is recorded in the CreatedBy and CreatedDate columns of the database tables). Sort the resultset in descending order of CreatedDate, so that the most recent entries appear at the top of the GridView. Another would be by assigning a batch number to the data entry and retrieving only the data in that batch.
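A sketch of the retrieval side on the second page (the table, column names, and the GridView1 control are assumptions):

    using System.Data;
    using System.Data.SqlClient;

    string connectionString = "...";
    const string sql = @"
        SELECT *
        FROM Records
        WHERE CreatedBy = @user
          AND CreatedDate >= CAST(GETDATE() AS DATE)  -- entered today
        ORDER BY CreatedDate DESC;";                  // newest entries on top

    var table = new DataTable();
    using (var adapter = new SqlDataAdapter(sql, connectionString))
    {
        adapter.SelectCommand.Parameters.AddWithValue("@user", User.Identity.Name);
        adapter.Fill(table);
    }

    GridView1.DataSource = table;   // GridView1 declared in the page markup
    GridView1.DataBind();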
If you really want to hang on to the data entry, you could put it into Session on the first page, and then retrieve it from Session for display on the second page.
Following along the lines of what DOK said, it's also a lot easier to validate data entered by your users in your business logic before you submit it to the database.
Secondly, users can change their minds about data on a webpage frequently. The data on the web could be in a partially-finished state or could have typos or errors in it. If someone else saw this data and believed that it needed to be completed, you could end up with duplicated entries in the database that would then require reconciliation.
Honestly, your best bet is to use the Session object to hold temporary user data. The MSDN entry for the GridView RowEditing event contains some great source code for this approach. Whenever I have to use GridViews to handle data from the database, I mimic this.
In addition to handling problems with temporary data storage, you can compare the Session object to your database results to determine whether or not new rows have been inserted. This is somewhat costly, as it involves overriding the Equals method (and GetHashCode as well, if you follow what Microsoft recommends) and using Equals to iterate over the two collections, comparing the properties of both objects, and determining which records are new based on records that don't exist in your Session object but do exist in your database object.
It's also worth noting that this approach assumes that you don't delete data from your database, but instead set the status of a record to "Deleted", whether that's a boolean field or a sequence of codes you use to describe the state of rows in a table.
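A sketch of that comparison ('Record' is an illustrative shape, not your actual type):

    using System.Collections.Generic;
    using System.Linq;

    public class Record
    {
        public int Id { get; set; }
        public string Status { get; set; }   // e.g. "Active" / "Deleted"

        public override bool Equals(object obj)
        {
            var other = obj as Record;
            return other != null && other.Id == Id;
        }

        public override int GetHashCode()
        {
            return Id.GetHashCode();
        }
    }

    public static class RecordDiff
    {
        // Rows present in the database results but absent from the Session
        // copy are the ones inserted since the page was loaded.
        public static List<Record> FindNewRows(
            IEnumerable<Record> fromDatabase, IEnumerable<Record> fromSession)
        {
            return fromDatabase.Except(fromSession).ToList();
        }
    }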