Progress Bar for calls to SQL from .Net - asp.net

I'm wondering if there is a tried-and-true method for reporting the progress of a DB call from a .Net application to its user. Is it even possible to indicate a percentage of completion, or is the best/only approach to simply display a "loading" animation to indicate that something is happening?
Also, does SQL2008 address this to anyone's knowledge?

You have to load things deterministically. For example, if you know that you'll be fetching a lot of data, you might do something like:
-get a count of all of the records
-get 500 of them
-report status as 500/total %
-get 500 more
-report status as 1000/total %
-... continue until you've gotten them all or the user has canceled
This would be incredibly wasteful for something that takes no time at all, since the mere act of going to the database is a large part of the overhead.
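A rough sketch of that batching approach in C#; the table, key column, batch size of 500 and the reportProgress callback are all made up for illustration, and the real paging scheme would depend on your schema:
using System;
using System.Data.SqlClient;

static void LoadWithProgress(SqlConnection conn, Action<double> reportProgress)
{
    // 1. Get a count of all of the records so a percentage can be computed.
    int total;
    using (var countCmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn))
        total = (int)countCmd.ExecuteScalar();
    if (total == 0) { reportProgress(100); return; }

    int fetched = 0;
    int lastId = 0;
    while (fetched < total)
    {
        // 2. Get the next 500 records after the last key we saw (keyset paging).
        using (var cmd = new SqlCommand(
            "SELECT TOP (500) Id /*, other columns */ FROM dbo.Orders WHERE Id > @lastId ORDER BY Id",
            conn))
        {
            cmd.Parameters.AddWithValue("@lastId", lastId);
            int batch = 0;
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { lastId = reader.GetInt32(0); batch++; }
            }
            if (batch == 0) break;      // rows disappeared since the COUNT(*); stop
            fetched += batch;
        }

        // 3. Report status as 500/total %, 1000/total %, and so on.
        reportProgress(Math.Min(100.0, 100.0 * fetched / total));
    }
}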

As far as I know there is no way to do this. My suggestion is to use one of the circular progress indicators that just spin forever. Microsoft uses these for database operations in SQL Server and Project.
Here is an article on CodeProject that has a variety of these:
http://www.codeproject.com/KB/vb/sql2005circularprogress.aspx?fid=324397&df=90&mpp=25&noise=3&sort=Position&view=Quick&select=2205952
Adam Berent

You could use RAISERROR with the NOWAIT option to send a message back to the remote caller without ending the SQL transaction. Of course this will break the usual try/catch conventions, so caution is advised.
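For example, if the proc emits low-severity messages such as RAISERROR('50 percent complete', 0, 1) WITH NOWAIT as it goes, the .Net side can listen for them via SqlConnection.InfoMessage. A sketch, where connectionString and the proc name are assumptions:
using System;
using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
{
    // Severity 0-9 messages raised WITH NOWAIT should surface here while the
    // proc is still running, rather than only when it finishes.
    conn.InfoMessage += (sender, e) => Console.WriteLine("progress: " + e.Message);
    conn.Open();

    using (var cmd = new SqlCommand("dbo.LongRunningProc", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.CommandTimeout = 0;     // don't time out a deliberately long call
        cmd.ExecuteNonQuery();
    }
}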
On the other hand, "a db call" could also refer to a restore or backup operation, in which case you could simply use the progress event in the SMO (server management objects) library.
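If it is a backup, for instance, the SMO Backup class raises a PercentComplete event. A minimal sketch; the server, database and file names are placeholders:
using System;
using Microsoft.SqlServer.Management.Smo;

var server = new Server("localhost");
var backup = new Backup
{
    Database = "MyDatabase",
    Action = BackupActionType.Database,
    PercentCompleteNotification = 10        // raise the event every 10%
};
backup.Devices.AddDevice(@"C:\Backups\MyDatabase.bak", DeviceType.File);
backup.PercentComplete += (sender, e) => Console.WriteLine(e.Percent + "% complete");
backup.SqlBackup(server);                   // blocks; progress arrives via the event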
Additionally, an ORM such as LINQ to SQL could be used to handle "DB calls" such as batch update operations. If that is the case, then this answer could be useful:
How can I get a percentage of LINQ to SQL submitchanges?

Related

Can't see SQLite database changes on a database open by multiple processes

I have a process that opens a database using sqlite3_open and sets journal mode to WAL.
Another process uses sqlite3_open to open that same database. Everything seems to work, but the problem is that the second process does not seem to see changes made by the first process. I am trying to fetch a count, or rowids, and they stay the same.
I am sure that the database is being updated, because refreshing it in SQLiteDatabaseBrowser shows the changes.
I tried multiple ways of opening databases, and multiple ways of querying, but no luck so far. What am I missing? Thanks!
Transactions are used to isolate connections from each other, especially to make changes visible only after a transaction has completed.
So for changes to be visible, the writing connection must end its transaction, and the reading connection must not have started its own transaction before that. (When using automatic transactions, ensure that statements are reset or finalized.)
I figured out what the problem was, and as usual in cases where things make no sense, the mistake was on my side. The problem is subtle, though, so it's worth mentioning.
I was calling sqlite3_reset on cached prepared statements lazily, that is, right before reusing a prepared statement rather than immediately after executing it. The problem is that this pattern means there is always a prepared statement pending reset. Apparently, the reset is necessary to be able to see updates to the database (probably some mutex is being held).
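The same pitfall in .Net terms, shown with Microsoft.Data.Sqlite purely as an assumed analogue of the C API pattern above (connection is an open SqliteConnection): an un-reset statement, i.e. an undisposed reader, keeps an implicit read transaction open, so the connection keeps seeing its old WAL snapshot.
using (var cmd = connection.CreateCommand())
{
    cmd.CommandText = "SELECT COUNT(*) FROM items";
    using (var reader = cmd.ExecuteReader())
    {
        reader.Read();
        Console.WriteLine(reader.GetInt64(0));
    }   // disposing the reader resets the underlying statement, ending the read snapshot
}
// A later query on this connection can now see rows committed by the other process.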
Thanks for your help.
EDIT: after sleeping on it, this behavior actually makes sense. Updates should not become visible in the middle of a prepared statement's execution, otherwise its results might never be complete or accurate.

Is it possible to subscribe to changes in Azure DocumentDb?

Is there some way to subscribe to changes in an Azure DocumentDb? For example, something similar to SQL Server's SqlDependency. If there is nothing "built-in", is there a recommended approach to solving this problem?
Update: As of May 2017 Change Feed is available. See more here:
https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed
There is no way to subscribe to changes, but it's a frequently requested feature (see the votes for it here, which also show it as "under review"). I heard a while back that the Azure Functions team also wants this for their DocumentDB connection, so maybe that will help get it from "under review" to "in progress". Go vote it up to help.
Until then, most people poll the collection using either the _ts field or their own sequential time-series field. However, there is no guarantee that a document with an earlier _ts won't show up later under the default eventual (or even session) consistency, so you usually have to work around that by going back in time and then checking for duplicates (idempotency).
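A minimal sketch of that polling approach using the current .NET SDK (Microsoft.Azure.Cosmos); the container, poll interval and document shape are assumptions, not part of the original answer:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Property names deliberately match the JSON fields "id" and "_ts" (epoch seconds).
class ChangedDoc { public string id { get; set; } public long _ts { get; set; } }

static async Task PollForChangesAsync(Container container, CancellationToken ct)
{
    long lastTs = 0;
    var seen = new HashSet<string>();   // guards against re-processing around the watermark

    while (!ct.IsCancellationRequested)
    {
        var query = new QueryDefinition("SELECT c.id, c._ts FROM c WHERE c._ts >= @since")
            .WithParameter("@since", lastTs);

        var iterator = container.GetItemQueryIterator<ChangedDoc>(query);
        while (iterator.HasMoreResults)
        {
            foreach (var doc in await iterator.ReadNextAsync())
            {
                if (seen.Add(doc.id + ":" + doc._ts))   // idempotency check
                    Console.WriteLine("changed: " + doc.id);
                if (doc._ts > lastTs) lastTs = doc._ts;
            }
        }

        // Real code would trim 'seen' and back the watermark up a little ("go back
        // in time") to cope with documents that surface late under eventual/session consistency.
        await Task.Delay(TimeSpan.FromSeconds(5), ct);
    }
}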

Strategy for sending updates to a view while performing a large operation

This might be a very easy one for many of you, but I'm stuck trying to figure out a strategy for rendering updates to the View while the server is performing a time-consuming operation.
This is the case: I have a view with a button that says "Approve". The approve needs to call some Action or background process of some kind to perform a heavy operation that might take 20-30 seconds.
During that time I want to update the View with some kind of processing gif animation and append messages like "performing operation A", "performing operation B" and so on.
What is the best strategy for achieving this?
Here's an answer you might not like: don't even bother trying to get "progress updates" from the server.
Take a look at this task from the commercial point of view. The purpose of providing some feedback is to give the user some warm and fuzzy feeling that they have not been forgotten and the task they have asked their computer to do has not been abandoned. How much cost are you willing to incur delivering this feature?
The simplest such device is the humble progress bar. Even though most experienced users would not trust it to tell them when a task will finish, they do still trust that if it's moving, something is happening.
My solution would be to post an async operation to the server to kick the operation off, then show a progress bar that is entirely managed by JavaScript. It starts off rapidly but slows down as it progresses, such that it never actually completes but does appear to be making some progress. When the async operation completes, briefly show the progress bar reaching completion, then remove it.
The cost of the other solutions is much, much greater, but their benefit over this approach is almost negligible, if not actually negative; after all, they are complex to implement and more likely to go wrong.
I must admit that I haven't tried it, but I am willing to.
I think SignalR could be worth a try.
While this sounds good in theory, I would make this a background operation that then sends a notification to the user via email or SMS when the task is done.
Otherwise you need to (a rough server-side sketch follows this list):
set up a cache (not the ASP.NET cache) on the server to store the current state of the long-running process
set up a JS timer to poll the server
update the UI with the current state stored in the cache
Not impossible, but a lot of moving parts.
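A rough sketch of that server side in ASP.NET MVC; the controller, action and operation names are invented, and a static dictionary stands in for the cache:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

public class ApprovalController : Controller
{
    // "Cache" of job id -> current status text, polled by the client.
    private static readonly ConcurrentDictionary<Guid, string> Progress =
        new ConcurrentDictionary<Guid, string>();

    [HttpPost]
    public ActionResult Approve(int id)
    {
        var jobId = Guid.NewGuid();
        Progress[jobId] = "Queued";

        // Note: work running in-process like this dies on an app-pool recycle.
        Task.Factory.StartNew(() =>
        {
            Progress[jobId] = "Performing operation A";
            DoOperationA(id);
            Progress[jobId] = "Performing operation B";
            DoOperationB(id);
            Progress[jobId] = "Done";
        });

        return Json(new { jobId });
    }

    // Hit by a JS timer every couple of seconds to update the gif/status text.
    [HttpGet]
    public ActionResult Status(Guid jobId)
    {
        string state;
        Progress.TryGetValue(jobId, out state);
        return Json(state ?? "Unknown", JsonRequestBehavior.AllowGet);
    }

    private static void DoOperationA(int id) { /* the heavy work */ }
    private static void DoOperationB(int id) { /* more heavy work */ }
}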

Solution for previewing user changes and allowing rollback/commit over a period of time

I have asked a few questions today as I try to think through to the solution of a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily reliant/dependent upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other reasons for this which I won't go into here that are not database related). While this user is making their changes we want to be able to show them the original state of entities which they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to rollback/commit.
We have considered these options:
Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
Holding a copy of all the data in memory (or cached to disk) is an option but there is heck of a lot of it, so seems unreasonable.
Maintaining a set of secondary tables, or attempting to use session state to store changes, but this is complex and difficult to maintain.
Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback. I.e. switching on/off, forcing snapshot, reversing direction etc.
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data and obtains a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this gives us a very straightforward way to lock/unlock access to specific entities.
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
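An illustrative shape for such a UI model class; the entity type, its Clone helper and the service/lock calls are placeholders, not the actual implementation described above:
using System.ComponentModel;

public class Customer
{
    public string Name { get; set; }
    public Customer Clone() { return (Customer)MemberwiseClone(); }
}

public class CustomerModel : IEditableObject
{
    private readonly Customer _original;   // instance returned by the GetWithLock call
    private Customer _working;             // copy the user edits

    public CustomerModel(Customer fromService)
    {
        _original = fromService;
        _working = fromService.Clone();
    }

    public Customer Original { get { return _original; } }  // "before" values for the preview
    public Customer Current  { get { return _working; } }   // "after" values for the preview

    public void BeginEdit()  { /* the lock row was already taken by GetWithLock */ }
    public void CancelEdit() { _working = _original.Clone(); }            // rollback
    public void EndEdit()    { /* call the update proc, which also deletes the lock row */ }
}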
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
HTH
Consider this:
Long transactions make the system less scalable. If you issue an UPDATE command, the update locks last until commit/rollback, preventing other transactions from proceeding.
Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in them. The only way around that is to lock them => see no. 1.
Serializable transactions in some database engines use versions of the data in your tables, so after the first command is executed the transaction sees exactly the data that was available at execution time. This might help you show the changes made by the user, but there is no guarantee you can save them back into storage.
DataSets contain old/new versions of the data, but that is unfortunately outside the technology you are aiming at.
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is theoretically possible, and is implemented in Oracle using flashback queries, SQL Server does not support it natively, since it has no means to query previous versions of records.
You can issue a query like this:
SELECT *
FROM mytable
AS OF TIMESTAMP
TO_TIMESTAMP('2010-01-17')
in Oracle but not in SQL Server.
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by #user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is that you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Available in SQL 2005 and up, Enterprise edition only.)
So:
A user comes along and initiates one of these meta-transactions.
A flag is marked in the database showing what is going on. A new transaction cannot be started if one is already in progress. (Again, check for lost transactions now and then!)
Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
If the user decides to keep the transaction, you drop the snapshot, and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
Yep, this sure smells, and it may not apply too well to your problem. Hopefully the ideas here help you work something out.
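For reference, a minimal sketch of the raw snapshot commands involved, executed from .Net; the database, logical file and path names are placeholders:
using System.Data.SqlClient;

using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
{
    conn.Open();

    // Take the point-in-time snapshot when the meta-transaction starts.
    new SqlCommand(@"CREATE DATABASE MyDb_Snapshot
                     ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot.ss')
                     AS SNAPSHOT OF MyDb;", conn).ExecuteNonQuery();

    // ... the user works, every change is logged ...

    // To throw everything away, the database can be reverted from the snapshot:
    //   RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot';

    // Either way, drop the snapshot once the meta-transaction ends.
    new SqlCommand("DROP DATABASE MyDb_Snapshot;", conn).ExecuteNonQuery();
}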

Any SQL Server multiple-recordset stored procedure gotchas?

Context
My current project is a large-ish public site (2 million pageviews per day) running a mixture of classic ASP and ASP.NET with a SQL Server 2005 back-end. We're heavy on reads, with occasional writes and virtually no updates/deletes. Our pages typically concern a single 'master' object with a stack of dependent (detail) objects.
I like the idea of returning all the data required for a page in a single proc (and absolutely no unnecessary data). True, this requires a dedicated proc for such pages, but some pages receive double-digit percentages of our overall site traffic so it's worth the time/maintenance hit. We typically only consume multiple recordsets from our .net code, using System.Data.SqlClient.SqlDataReader and its NextResult method. Oh, yeah, I'm not doing any updates/inserts in these procs either (except to table variables).
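For concreteness, that consumption pattern is roughly this; the proc name and columns are invented, and conn/productId are assumed to exist:
using System;
using System.Data;
using System.Data.SqlClient;

using (var cmd = new SqlCommand("dbo.GetProductPageData", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@ProductId", productId);

    using (var reader = cmd.ExecuteReader())
    {
        // Recordset 1: the single 'master' object
        while (reader.Read())
            Console.WriteLine("master: {0}", reader.GetInt32(0));

        // Recordset 2: the first stack of dependent (detail) objects
        reader.NextResult();
        while (reader.Read())
            Console.WriteLine("detail: {0}", reader.GetString(0));

        // Recordsets 3, 4, ...: one NextResult() call per additional result set
    }
}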
The question
SQL Server (2005) procs which return multiple recordsets are working well (in prod) for us so far, but I am a little worried that multi-recordset procs are my new favourite hammer that I'm hitting every problem (nail) with. Are there any multi-recordset SQL Server proc gotchas I should know about? Anything that's going to make me wish I hadn't used them? Specifically, anything about them affecting connection pooling, memory utilization, etc.
Here are a few gotchas for multiple-recordset stored procs:
They make it more difficult to reuse code. If you're doing several queries, odds are you'd be able to reuse one of those queries on another page.
They make it more difficult to unit test. Every time you make a change to one of the queries, you have to test all of the results. If something changed, you have to dig through to see which query failed the unit test.
They make it more difficult to tune performance later. If another DBA comes in behind you to help performance improve, they have to do more slicing and dicing to figure out where the problems are coming from. Then, combine this with the code reuse problem - if they optimize one query, that query might be used in several different stored procs, and then they have to go fix all of them - which makes for more unit testing again.
They make error handling much more difficult. Four of the queries in the stored proc might succeed, and the fifth fails. You have to plan for that.
They can increase locking problems and incur load in TempDB. If your stored procs are designed in a way that needs repeatable reads, then the more queries you stuff into a stored proc, the longer it's going to take to run, and the longer it's going to take to return those results back to your app server. That increased time means higher contention for locks, and the more SQL Server has to store in TempDB for row versioning. You mentioned that you're heavy on reads, so this particular issue shouldn't be too bad for you, but you want to be aware of it before you reuse this hammer on a write-intensive app.
I think multi-recordset stored procedures are great in some cases, and it sounds like yours may be one of them.
The bigger (more traffic) your site gets, the more that 'extra' bit of performance is going to matter. If you can combine 2-3-4 calls (and possibly new connections) to the database into one, you could be cutting your database hits by 4-6-8 million per day, which is substantial.
I use them sparingly, but when I have, I have never had a problem.
I would recommend having one stored procedure that invokes several inner stored procedures, each returning a single result set:
create proc foo
as
execute foobar --returns one result
execute barfoo --returns one result
execute bar --returns one result
That way, when requirements change and you only need the 3rd and 5th result sets, you have an easy way to invoke them without adding new stored procedures and regenerating your data access layer. My current app returns all reference tables (e.g. a US states table) whether I want them or not. Worst is when you need a reference table and the only access is via a stored procedure that also runs an expensive query as one of its six resultsets.
