Automatic row deletion in SQL Server 2008 - asp.net

I am developing a library management system in ASP.NET, using SQL Server 2008 as the database. I want to provide a book reservation option so that a student can reserve a book for 15 minutes.
I am storing the details of reserved books in the reserve table, but I want to automatically delete any row whose reservation time is more than 15 minutes old.
Please help me.

If you have the option of changing the table structure, I would. Without seeing the overall design, I would suggest adding a "Status" column to the reserve table. The status column could contain one of the known statuses: Reserved, Picked up, Returned, Never used. Create a SQL Agent job that queries the table for "Reserved" records and, if the create date is older than 15 minutes, changes the status to "Never used". If you really want to design it properly, you will want to add a Statuses table and create a foreign key between the two tables. If you want to go down the path of the two tables and are not sure how to do that, let me know and I can post a SQLFiddle example.
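A minimal sketch of what that job step's T-SQL could look like, assuming the reserve table has Status and CreateDate columns (both column names are assumptions here):

-- Scheduled as a SQL Agent job step, e.g. every minute.
-- Marks reservations older than 15 minutes as never used.
UPDATE reserve
SET Status = 'Never Used'
WHERE Status = 'Reserved'
  AND CreateDate < DATEADD(MINUTE, -15, GETDATE());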

Related

What is the exact meaning of master table, staging table, configuration table, transaction table, temp table?

Can anyone please let me know the exact meaning of master table, staging table, configuration table, transaction table, and temp table?
I previously worked in Java; now I am new to PL/SQL and the PeopleSoft application.
So please elaborate if you know the answer.
I used Google for 5 minutes and I've found:
In a simple way:
A master table holds your basic data, which you want to use in other places, like an address book, an employee list, or a product list.
What is a staging table?
A staging table is just a regular SQL Server table. For example, if you have a process that imports some data from, say, .CSV files, then you put this data in a staging table. You may then decide to apply some data cleaning or business rules to the data and move it to a different staging table, and so on.
A transaction table records transactions such as payment gateway activity or account credits and debits. Each transaction typically references one or more master tables.
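To make the staging idea concrete, a minimal sketch (every table and column name here is invented for illustration):

-- Raw .CSV import lands here first, typed loosely so the load never fails.
CREATE TABLE StagingCustomer (
    CustomerName varchar(200),
    Email        varchar(200)
);

-- After cleaning and validation, move the good rows into the master table.
INSERT INTO Customer (CustomerName, Email)
SELECT LTRIM(RTRIM(CustomerName)), LOWER(Email)
FROM StagingCustomer
WHERE Email LIKE '%@%';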
You should read the How to Ask page, and you'll understand the reason for the negative votes. And I can guarantee you, it's not because they don't know the answer.

Move Records from "Active Tables" to "Archive Tables" in SQL Server 2012

This one is tricky to me in the sense of SQL; I think I'm having more of a logic issue than anything else. I have 4 tables. The first is called Vehicle; within my application, any record changed in this Vehicle table gets inserted into a VLog table. The primary key for Vehicle is VehicleID, and it is related to VehicleID in VLog. The other two are archive tables, VehicleArchive and VLogArchive, and I'm not positive how to archive the live tables into them in SQL.

Within my application, I'm able to "archive" certain records based on a certain parameter. This was easy because I was using a gridview and could verify the VehicleID and insert any record with that VehicleID into VLogArchive and VehicleArchive. However, I'm now going to be dealing with live records and was wondering if there is a solution in SQL. There are multiple records for each VehicleID in VLog, as VLog keeps track of all the changes made in Vehicle.

Within my application itself, on the "Update" button click, I was able to insert the records from VLog into VLogArchive by comparing them to the VehicleID within the gridview, then remove those records from VLog, with the Location changed to "File Room." It then goes on to insert the records from Vehicle into VehicleArchive, again by comparing to the VehicleID within the gridview, as long as the Location is "File Room." It seems backwards, but I had to do it this way: if I tried to remove a record from Vehicle before removing the related records from VLog, it wouldn't delete, since they are related. I do not know how to take this approach in SQL and was wondering if someone knew how to move all the records at the same time when the Location is "File Room."
I have found this and this but I'm not positive these are the approaches that I need. Thanks for the help!
I'm going to disagree with @Richard. I'm always going to lean towards keeping your tables 'lean and mean' if at all possible. If the data is old / unused, get it out of your production tables! Keeping unnecessary records in your transactional tables presents many performance and maintenance risks. Since you're going to include a check on that flag in EVERY select query you run (using the view that @Richard suggests), you're going to have to add that flag to every nonclustered index to avoid tipping-point problems. You're also going to run into problems with unique constraint enforcement (if your database is properly designed), as well as making queries more error-prone (what if someone writes a report and doesn't use the view?). The list of problems with metadata flags like that goes on and on and on. Just don't do it.
There are a lot of different ways to peel the onion on archiving data. I prefer more frequent and smaller transactions myself. If you're archiving to the same server, use T-SQL as in your first link. If you're archiving to a different server, use SSIS.
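If you go the T-SQL route, the whole move can be a single transaction: copy the child rows, copy the parents, then delete in the reverse order (children first) so the foreign key doesn't block you. A sketch using the table names from the question (the SELECT * column lists are an assumption; adjust them to your schema):

BEGIN TRANSACTION;

-- Copy the child rows (change logs) for vehicles headed to the file room.
INSERT INTO VLogArchive
SELECT l.*
FROM VLog l
INNER JOIN Vehicle v ON v.VehicleID = l.VehicleID
WHERE v.Location = 'File Room';

-- Copy the parent rows.
INSERT INTO VehicleArchive
SELECT *
FROM Vehicle
WHERE Location = 'File Room';

-- Delete children first so the foreign key allows the parent delete.
DELETE l
FROM VLog l
INNER JOIN Vehicle v ON v.VehicleID = l.VehicleID
WHERE v.Location = 'File Room';

DELETE FROM Vehicle
WHERE Location = 'File Room';

COMMIT TRANSACTION;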
Don't do it like that; add an Archived bit field to the tables and use that to filter the views.
then use this as the source for the gridview
SELECT * FROM Vehicle WHERE Archived=0
to make a record archived you can then do
UPDATE Vehicle SET Archived=1 WHERE ID=1
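If you would rather consumers never see the flag at all, wrap the filter in a view (a sketch; the view name is made up):

-- Point the gridview and any reports at this instead of the base table,
-- so archived rows never show up in normal use.
CREATE VIEW ActiveVehicle AS
SELECT * FROM Vehicle WHERE Archived = 0;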
Moving records like this on a production system is never going to be easy and will almost certainly cause more problems than you can imagine.
If you really do want to do this, then you're going to have to do it record by record, by copying the data; and of course this is going to break referential integrity, which you'll have to fix up manually if any other tables reference Vehicle.
You can do this either in code as a background procedure or (better) using SQL / SSIS on the DB server.

Is there a way to find the SQL that updated a particular field at a particular time?

Let's assume that I know when a particular database record was updated. I know that somewhere there exists a history of all SQL that's executed, perhaps accessible only by a DBA. If I could access this history, I could SELECT from it where the query text is LIKE '%fieldname%'. While this would pretty much pull up any transactional query containing the field name I am looking for, it's a great start, especially if I can filter the result set down to a particular date/time range.
I've discovered the dbc.DBQLogTbl view, but it doesn't appear to work as I expect. Is there another view that contains the information I am looking for?
It depends on the level of database query logging (DBQL) that has been enabled by the DBA.
Some DBAs may elect not to log detailed information for tactical queries, so it is best to consult with your DBA team to understand what is being captured. You can also query DBC.DBQLRules to determine what level of logging has been enabled.
The following data dictionary objects will be of particular interest to your question:
DBC.QryLog contains the details about the query with respect to the user, session, application, type of statement, CPU, IO, and other fields associated with a particular query.
DBC.QryLogSQL contains the SQL statements. If a SQL statement exceeds a certain length, it is split across multiple rows, which is denoted by a column in this table. If you join this to the main query log table, care must be taken if you are aggregating any metrics in the query log table, although more often than not, if you are joining the query log table to the SQL table, you are not doing any aggregation.
DBC.QryLogObjects contains the objects used by a particular query and how they were used. This includes tables, columns, and indexes referenced by a particular query.
These tables can be joined together in DBC via QueryID and ProcID. There are a few other tables that capture information about the queries but are beyond the scope of this particular question. You can find out about those in the Teradata Manuals.
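For example, a sketch of that join (these are the standard DBQL column names, but verify them against your Teradata release; the search string and time range are placeholders):

-- Find queries in a time window whose SQL text mentions a given field.
-- QueryID and ProcID tie the SQL text rows back to the main query log.
SELECT ql.UserName,
       ql.StartTime,
       qs.SqlTextInfo
FROM DBC.QryLog ql
INNER JOIN DBC.QryLogSQL qs
    ON  ql.QueryID = qs.QueryID
    AND ql.ProcID  = qs.ProcID
WHERE qs.SqlTextInfo LIKE '%fieldname%'
  AND ql.StartTime BETWEEN TIMESTAMP '2013-01-01 00:00:00'
                       AND TIMESTAMP '2013-01-02 00:00:00'
ORDER BY ql.StartTime;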
Check with your DBA team to determine the level of logging being done and where the historical DBQL data is retained. Often DBQL data is moved nightly to a historical database, and there is often a ten-minute delay before data is flushed from cache to the DBC tables. Your DBA team can tell you where to find historical DBQL data.

Many-to-many query runs slow in Windows Phone 7 emulator

My application is using SQLite for a database. In the database, I have a many-to-many relationship. When I use the SQLite add-on/tool for Firefox, the SQL query joining the tables in the many-to-many runs pretty fast. However, when I run the same query on the emulator, it takes a very long time (5 minutes or more). I haven't even tried it on a real device yet.
Can someone tell me what is going on?
For example, I have 3 tables:
1. create table person (id integer, name text);
2. create table course (id integer, name text);
3. create table registration(personId integer, courseId integer);
The SQL statements that I have tried are as follows:
select *
from person, course, registration
where registration.personId = person.id and registration.courseId = course.id
And also as follows:
select *
from person inner join registration on person.id=registration.personId
inner join course on course.id=registration.courseId
I am using the SQLite client from http://wp7sqlite.codeplex.com. I have 4,800 records in the registration table, 4,000 records in the person table, and 1,000 records in the course table.
Is it my queries? Is it just the SQLite client? Is it the record size? If this problem cannot be fixed in the app, I'm afraid I'll have to host the database remotely (which means my app will have to use the internet).
Yep, it's your queries. You're not going to get away with doing what you are trying to do on a mobile device. You have to remember you aren't running on a PC, so you have to think differently about how you approach things (both code and UI). You have low memory, slow disk access, a slow-ish processor, no virtual memory, etc. You're going to have to make compromises.
I'm sure whatever you are doing is perfectly possible to do on the phone without needing an offsite server, but you need to be smart about it. For example, is it really necessary to load all 4800+ records into memory at once? Almost certainly not; a user can't possibly look at all 4800 at the same time. Forgetting the database speed, just showing this number of items in a ListBox is going to kill your app performance-wise.
And even if performance were perfect, is displaying 4800 items really a good user experience? Surely allowing the user to enter a search term would be better and would allow you to filter the list to a more manageable size. Could you implement paging, so you only display the first 10 records and have the user click next for the next 10?
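A rough sketch of a paged query in SQLite (the page size and offset would come from your UI state):

-- Page 1: rows 1-10. For page N, set OFFSET to (N - 1) * 10.
SELECT person.name, course.name
FROM registration
INNER JOIN person ON person.id = registration.personId
INNER JOIN course ON course.id = registration.courseId
ORDER BY person.name
LIMIT 10 OFFSET 0;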
You might also want to consider de-normalizing your database so that you just have one table rather than 3. It will improve performance considerably. Yes, it goes against everything you were taught about databases in school, but like I said: phone = compromises. And remember, this isn't a big OLTP mission-critical database; it's a phone app. No one cares whether your database is in 3rd normal form or not. Also remember that the more work you give the phone (chugging through data, building up joins), the more battery power your app will consume.
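As a sketch, the flattening could be a one-off step like this (the registration_flat name is made up; rebuild the table whenever the source data changes):

-- Pre-join everything once so the app never joins at runtime.
CREATE TABLE registration_flat AS
SELECT person.id   AS personId,
       person.name AS personName,
       course.id   AS courseId,
       course.name AS courseName
FROM registration
INNER JOIN person ON person.id = registration.personId
INNER JOIN course ON course.id = registration.courseId;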
Finally, if you absolutely think you must give the user a list of 4800 records to scroll through, you should look at some kind of data virtualization technique, which gives the user the illusion of scrolling through a long list even though only a few items are actually loaded at any given time.
But the short answer is: yes, doing queries like that will be problematic; you need to consider changing them.
By the time you start doing those joins, that's an awfully large number of records you could end up with. What is memory like during this operation?
Assuming you have tuned indexes appropriately, rather than doing this with joins, I'd try three separate queries (and check the indexes first; see the sketch at the end of this answer).
Either that or consider restructuring your data so it only contains what you need in the app.
You should also look to only return the fields you need.
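And the index sketch promised above (the index names are made up; since person.id and course.id aren't declared as primary keys in the schema shown, they get explicit indexes too):

-- Without these, every join probe is a scan of the inner table.
CREATE INDEX idx_registration_personId ON registration(personId);
CREATE INDEX idx_registration_courseId ON registration(courseId);
CREATE INDEX idx_person_id ON person(id);
CREATE INDEX idx_course_id ON course(id);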

How to improve asp.net AJAX autocomplete performance

My web site has a city, state, and zip code autocomplete feature.
If the user types in 3 characters of a city in the textbox, the top 20 cities starting with those characters are shown.
As of now, the Autocomplete method in our application queries a SQL 2005 database which has around 900,000 records related to city, state, and zip.
But the response time to show the cities list is very slow.
Hence, for performance optimization, is it a good idea to store the location data in a Lucene index, or maybe in Active Directory, and then pull the data from there?
Which one will be faster, Lucene or Active Directory? What are the pros and cons of each? Any suggestions, please?
Thanks a bunch!
Taking a nuclear option (like changing backing data stores) probably shouldn't be the first option. Rather, you need to look at why the query is performing so slowly. I'd start by looking at the query performance in SQL Profiler and the execution plan in SQL Server Management Studio and see if I am missing anything stupid like an index. After you cover that angle, check the web layer and ensure that you are not sending inordinate amounts of data or otherwise tossing a spanner in the works. Once you have established that you aren't killing yourself in the db or on the wire, then it is time to think about re-engineering.
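One cheap way to see what the database side is doing, as a sketch (the Locations table and its columns are placeholders for whatever your autocomplete actually queries):

-- Prints logical reads and elapsed time for the statement below
-- in the Messages tab of Management Studio.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT TOP 20 city, state, zip
FROM Locations
WHERE city LIKE 'chi%'
ORDER BY city;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;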
On a side note, my money would be on SQL Server handling the data end of this task better than either of those options. Lucene is better suited to full-text searches, and AD is a poor database at best.
I would cache the data into a separate table. Depending on how fresh you need that data to be, you can rebuild it as often as necessary.
--Create the table
SELECT DISTINCT city, state, zip INTO myCacheTable FROM theRealTable
--Rebuild the table anytime
TRUNCATE TABLE myCacheTable
INSERT INTO myCacheTable (city, state, zip) SELECT DISTINCT city, state, zip FROM theRealTable
Your AJAX calls can access myCacheTable instead, which will have far fewer rows than 900k.
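From there the lookup itself is a cheap prefix search, especially with a supporting index; a sketch (the index name and the 'chi' prefix are placeholders):

-- A prefix LIKE can seek on this index instead of scanning the table.
CREATE INDEX IX_myCacheTable_city ON myCacheTable (city);

DECLARE @prefix varchar(50);
SET @prefix = 'chi';

-- Top 20 cities starting with the characters the user typed.
SELECT TOP 20 city, state, zip
FROM myCacheTable
WHERE city LIKE @prefix + '%'
ORDER BY city;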
Adding to what Wyatt said, you first need to figure out which part is slow. Is the SQL query slow, or is the network connection between the browser and the server slow? Or is it something else?
And I completely agree with Wyatt that SQL Server is much more suitable for this task than Lucene or Active Directory.
