I have an ASP.NET 4.5 application. On a maintenance page there is a text box which allows administrative users to write SQL that is executed directly against the SQL Server 2008 database.
Occasionally one of the administrative users writes some inefficient SQL, and the SQL Server process starts using up all the memory and CPU cycles on the server. We then have to stop and restart the service to get the server responsive again.
Is there any way I can stop these queries from consuming all the resources? The queries won't return fast enough for the user to see the results anyway, so it's fine to cancel them.
Edit
I realise it would be better to prevent users from writing SQL queries, but unfortunately I cannot remove this functionality from the application. The admin users don't want to be educated.
You can set the query governor cost limit at the server level, but I'm not sure about a per-user or per-connection/application limit.
http://technet.microsoft.com/en-us/magazine/dd421653.aspx
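For what it's worth, a minimal sketch of both variants (the value 300 is an arbitrary placeholder, the limit is expressed in estimated-cost "seconds", and the server-wide setting requires sysadmin rights):

-- Server-wide: reject any query whose estimated cost exceeds the limit
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'query governor cost limit', 300;
RECONFIGURE;

-- Per connection: the same limit, but only for the current session,
-- so the admin page could issue this before running the user's SQL
SET QUERY_GOVERNOR_COST_LIMIT 300;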
That said, it is likely a poor / dangerous practice to allow users to directly enter SQL queries.
First off I would make sure these queries are not locking any tables (either use NOLOCK or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED for each query run).
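As a rough illustration (the table and column names below are made up), either form keeps the ad hoc query from taking shared locks that block writers; both allow dirty reads, which is usually acceptable for this kind of reporting:

-- Option 1: set the isolation level for the batch before the user's SQL runs
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT CustomerId, OrderTotal
FROM dbo.Orders
WHERE OrderDate >= '20130101';

-- Option 2: hint the individual tables instead
SELECT CustomerId, OrderTotal
FROM dbo.Orders WITH (NOLOCK)
WHERE OrderDate >= '20130101';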
As far as resources go, I would look at something like the Resource Governor. We use this for any ad hoc reports on our production OLTP system.
Pinal Dave has a great blog entry on it. You can also look at Technet or other MS sites for information on what it is and how to set it up.
http://blog.sqlauthority.com/2012/06/04/sql-server-simple-example-to-configure-resource-governor-introduction-to-resource-governor/
Create a user which has no more privileges than necessary.
Create login credentials for that user; do not share them with the admins who will write the queries.
Create a panel/page/application where you let them write their queries. Here you can add additional constraints that are unavailable at the RDBMS level.
Let your users access this panel/page/application and run their queries through it. If they do anything undesirable that you hadn't anticipated, you simply adjust the privileges and the panel/page/application.
Leaving aside the pros/cons of allowing users to construct and run their own queries, one solution could be to use the Resource Governor that comes with SQL 2008/2012. I think it's only available in Enterprise editions which might mean you can't use it (you haven't specified an edition in your question).
Technet has an article on how to manage workloads, but basically you can limit a session (based on login details or any other discernible information) to only use a certain % of CPU and/or memory.
Best way to prevent SQL from using memory/CPU: don't let users run queries.
SQL Server is supposed to use all the memory it can get its hands on, so memory usage (or CPU usage, for that matter) wouldn't be a reason to restart SQL Server.
By default the .NET SqlCommand timeout is 30 seconds. When it times out, the query is canceled. Queries can take a while to cancel, especially if you were making a data modification and a rollback has to occur. It's possible that while the query is running or being canceled, other users' queries are blocked. This might make it seem like SQL Server is unresponsive, but eventually it will finish up and your CPU usage will go back to normal. The memory, however, will continue to be held by the SQL Server process.
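If you do need to get the box responsive again without restarting the service, a rough alternative (assuming you have permission to run KILL) is to find the offending session and kill just that one; its rollback still has to finish, but the rest of the instance stays up:

-- Find long-running requests, what they are running, and who they block
SELECT r.session_id, r.status, r.cpu_time, r.total_elapsed_time,
       r.blocking_session_id, t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;

-- Kill the offending session (replace 53 with the session_id found above)
KILL 53;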
AFAIK, Memcached does not support synchronization with the database (at least not with SQL Server or Oracle). We are planning to use Memcached (it is free) with our OLTP database.
In some business processes we do some heavy validations which require a lot of data from the database. We cannot keep a static copy of this data because we don't know whether it has been modified, so we fetch it every time, which slows the process down.
One possible solution could be
Write triggers on the database to create/update prefixed-postfixed (table-PK1-PK2-PK3-column) files when records change
Monitor these file changes using FileSystemWatcher and expire the key (table-PK1-PK2-PK3-column) so that updated data is fetched
Problem: There would be around 100,000 users using any combination of data for 10 hours. So we will end up having a lot of files e.g. categ1-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-78-data250, categ2-subcateg5-subcateg-78-data100, categ1-subcateg5-subcateg-33-data100, etc.
I am expecting at least 5 million files. Now it looks like a pathetic solution :(
Other possibilities are
call a web service asynchronously from the trigger, passing the key to be expired
call an exe from the trigger without waiting for it to finish; this exe would then expire the key. (I have had some success with this approach on SQL Server using xp_cmdshell to call an exe; calling an exe from an Oracle trigger looks a bit more difficult)
Still sounds pathetic, doesn't it?
Any intelligent suggestions, please?
It's not clear (to me) whether the use of Memcached is mandatory or not. I would personally avoid it and instead use SqlDependency and OracleDependency. Both allow you to pass a DB command and get notified when the data that the command would return changes.
If Memcached is mandatory, you can still use these two classes to trigger the invalidation.
MS SQL Server has a "Change Tracking" feature that may be of use to you. You enable the database for change tracking and configure which tables you wish to track. SQL Server then creates change records on every update, insert and delete on a table, and lets you query for the changes that have been made since the last time you checked. This is very useful for syncing changes and is more efficient than using triggers. It's also easier to manage than making your own tracking tables. This has been a feature since SQL Server 2008.
How to: Use SQL Server Change Tracking
Change tracking only captures the primary keys of the tracked tables and lets you query which columns might have been modified. You then query the tables, joining on those keys, to get the current data. If you want it to capture the data as well, you can use Change Data Capture, but that requires more overhead and at least SQL Server 2008 Enterprise Edition.
Change Data Capture
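A minimal sketch of enabling change tracking and then polling for changes (the database, table, retention period and column names are placeholders):

-- Enable change tracking on the database and on the table you care about
ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Products
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Later: ask for everything that changed since the version you last saw
DECLARE @last_sync_version bigint;
SET @last_sync_version = 0;   -- persist this value between polls

SELECT ct.ProductId, ct.SYS_CHANGE_OPERATION, p.Name, p.Price
FROM CHANGETABLE(CHANGES dbo.Products, @last_sync_version) AS ct
LEFT JOIN dbo.Products AS p ON p.ProductId = ct.ProductId;

-- Remember the current version to use as @last_sync_version next time
SELECT CHANGE_TRACKING_CURRENT_VERSION();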
I have no experience with Oracle, but I believe it may also have some tracking functionality. This article might get you started:
20 Using Oracle Streams to Record Table Changes
On the upload.aspx page I have
conn1.ConnectionString = "Data Source=.\ip-of-remote-database-server;AttachDbFilename=signup.mdf;Integrated Security=True;User Instance=True";
and all the queries are also on the same page; only the database is on another machine.
So is this the correct way of implementing it? Or do I have to create all the queries on the other machine and call them from the application?
Any given query might originate from the client code (such as ASP.NET), or it might be stored a priori in the DBMS itself as a VIEW or a stored procedure (or even a trigger).
But no matter where it originated, the query is always executed by the DBMS server. This way, the DBMS can guarantee the integrity of the data and "defend" itself from bugs in the client code.
The logical separation of client and server is why this model is called client/server, but that doesn't mean they must be separate physical machines - you'll decide that based on the expected workload [1] and usage patterns [2].
[1] Distributing the processing to multiple machines might increase performance.
[2] E.g. you might need several "fat" clients around the LAN (communicating with the same database server) to reach all your users. This is less relevant for the Web, where there are additional layers of indirection between users and the database.
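For instance, the kind of statement shown in the question could instead live in the database as a stored procedure (a made-up sketch - the procedure and table names are hypothetical), and the ASP.NET code would only call it by name:

-- Lives in the database; the client never sends the SELECT text itself
CREATE PROCEDURE dbo.GetSignups
    @FromDate datetime
AS
BEGIN
    SELECT UserName, SignupDate
    FROM dbo.Signups
    WHERE SignupDate >= @FromDate;
END;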
It depends on your infrastructure. If you have SQL Server locally you can use it. I assume this is a school project, so it doesn't matter much. In real life it is usually a good idea to separate the web server and the database server.
To start off - I have 2 separate websites and a database (IIS 7.5, ASP.NET and SQL Server 2008, using Linq-To-SQL for database access).
I have a separate administrative website that sometimes, during usage, needs to trigger long-running operations (more than 10 seconds) on the database. The problem is that those operations push the SQL Server process to 100% CPU, and then the main customer website can't access the database promptly - there are noticeable delays.
I am OK with those administrative operations lasting 2x or 4x or nx times longer since they are lower priority.
I've tried using the CPU Limit setting on the AppPool in IIS, but that doesn't help, as the w3wp.exe process never uses much CPU... it's sqlservr.exe that does. Thanks in advance for your suggestions!
If your admin queries are consuming all the CPU on the box, there are almost certainly some tuning opportunities there - likely some indexing optimizations.
In lieu of the time to invest in those, and until you get your Resource Governor configuration settled, you can simply reduce their impact to a single CPU, which may provide short-term symptom relief, by adding the MAXDOP hint to your admin query:
OPTION (MAXDOP 1);
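For instance (a hypothetical admin query - the table names are made up), the hint goes at the very end of the statement:

-- The whole statement is limited to a single scheduler instead of all CPUs
SELECT c.Region, SUM(o.OrderTotal) AS Total
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
GROUP BY c.Region
OPTION (MAXDOP 1);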
Yes, it might make you feel a little dirty, but the rest of your CPUs will be freed up to work on your more important queries.
The real answer is to tune your admin queries. Just because it's OK for them to run long does not mean it's good for your server or for your users' experience. You'll never be able to completely isolate them from the effects of other queries going on on the box, especially if you are experiencing high CPU that is compensating for slow I/O. I/O does not have any knobs in Resource Governor - you can only control CPU and memory, and not even 100% of those.
Sounds like you want to look into Resource Governor, which is built into SQL Server as of SQL 2008. The BOL link below should get you started.
http://msdn.microsoft.com/en-us/library/bb933866(v=SQL.100).aspx
Essentially, you can throttle CPU and memory usage for the resource pools and workload groups you define. This throttling will only kick in when the server is under load. Be aware that you cannot control disk I/O utilization. If the process in your admin database is I/O-bound and your other DBs share drives, you will inevitably still see performance issues, and moving databases to separate spindles or query tuning will be necessary.
Example of a classifier function that ensures the login you define is throttled by the desired resource pool via its workload group:
/* Classifier function (must be created in the master database) */
CREATE FUNCTION dbo.rgov_classifier_db ()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @rgWorkloadGrp sysname;

    -- Route the admin website's login to the throttled workload group
    IF SUSER_SNAME() = 'adminWebsiteDB'
        SET @rgWorkloadGrp = 'workloadGroupName';
    ELSE
        SET @rgWorkloadGrp = 'defaultWorkloadGroupName';

    RETURN @rgWorkloadGrp;
END;
GO

/* Register the function with Resource Governor and then start Resource Governor. */
ALTER RESOURCE GOVERNOR
WITH (CLASSIFIER_FUNCTION = dbo.rgov_classifier_db);
GO

ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
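The classifier only maps a login to a workload group name; the pool and group themselves still have to exist. A rough sketch (the names and percentages are placeholders, not recommendations):

/* Create a throttled pool and the workload group the classifier refers to */
CREATE RESOURCE POOL adminPool
WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
GO

CREATE WORKLOAD GROUP workloadGroupName
USING adminPool;
GO

ALTER RESOURCE GOVERNOR RECONFIGURE;
GO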
We had a huge performance problem when deploying our ASP.NET app at a customer site where the DB sat at a remote location.
We found that it was due to the pages making a ridiculous number of individual SQL queries to the DB. It was never a problem we noticed because usually the web and DB servers are on the same local network (low latency). But in this (suddenly) high-latency configuration, it was very, very slow.
(Notice that each SQL request by itself was fast; it is the number and serial nature of the sequence that is the problem.)
I asked the engineering team to report and maintain a "wall of shame" (or stats) telling us, for each page, the number of SQL requests, so we can use it as a reference. They claim it is expensive.
Can anyone tell me how to maintain or get such a report cheaply and easily?
We are using SQL Server 2005
We have a mix of our own DB access layer and SubSonic
I know and use the Profiler, but that is a bit manual. I'm asking here whether there is a tip on how to automate this, or maybe I am just crazy?
If you are on SQL Server, read up on Profiler.
http://msdn.microsoft.com/en-us/library/ms187929.aspx
Running profiler from the UI is expensive, but you can run traces without the UI and that will give you what you want.
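A rough sketch of a file-based, server-side trace (the event and column IDs are SQL Trace's standard ones; the file path, size and database name are placeholders you would change):

DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 50;   -- MB per trace file
SET @on = 1;

-- Create the trace, writing to a file on the server (the path must not already exist)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\app_queries', @maxfilesize, NULL;

-- Capture statement text and duration for completed batches and RPC calls
-- (event 12 = SQL:BatchCompleted, event 10 = RPC:Completed; column 1 = TextData, 13 = Duration)
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;

-- Only trace our database (column 35 = DatabaseName, comparison 6 = LIKE)
EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'MyAppDatabase';

-- Start it; stop later with sp_trace_setstatus @TraceID, 0 and read the file
-- with SELECT * FROM fn_trace_gettable(N'C:\traces\app_queries.trc', DEFAULT)
EXEC sp_trace_setstatus @TraceID, 1;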
First, check out SubSonic's BatchQuery functionality--it might help alleviate a lot of the stress in the first cut without getting into material modification of your code.
You can schedule trace jobs/dumps from the SQL server's end of things. You can also run perfmon counters to see how many database requests the app is serving.
All that said, I'd try and encourage the customer to move the database (or a mirrored copy of the database) closer to your app. It is probably the cheapest solution in the long term, depending on how thick the app is.
I have had good success using this tool in the past; not sure if the price is right for you, but it will uncover any issues you may have:
Spotlight on SQL Server
The MiniProfiler (formerly known as the MVC Mini Profiler; it works for both MVC and WebForms) is a must in such a case, IMO. If the code creating the database connections is well architected, it's a piece of cake to get it running for almost any ASP.NET application.
It generates a report on each rendered page with profiling stats, including each SQL query sent to the database for the request. You can see it in action on the Stack Exchange Data Explorer pages (top left corner).
I was wondering what the easiest way is to see the total number of database queries from my ASP.NET (.NET 2.0) application.
My application makes heavy use of a SQL Server 2005 database, because all data is dynamic and everything goes through one connection string in web.config. Connection pooling is enabled there.
So I am wondering how many SELECT statements are executed for a particular page I load in my browser.
I don't care whether I see that information from the .NET side or from the DB side, as long as I see only connections to MY database - not all connections to that DB server, because I use a shared DB server and there are a lot of other databases on it.
The best way to do this is to set up a profiler on your database and then make a single request to your ASP.NET application. The profiler will aggregate any data you wish and you will be able to use that data to determine what queries were sent to SQL Server from your application.
SQL Server Profiler will list all actions performed on your DB. If you use a dedicated DB login name for your project (probably a really good idea if you are not doing so already), you can filter so it only shows actions from your login (see Events Selection, Column Filters, then LoginName).
Use SQL Profiler. You can configure it to filter by the database you want and to just show select statements.
If you have some sort of database layer in your code, you could modify it to write out a log message every time you run a select statement. Then just load the page once and count the number of log statements. This may or may not work, depending on how your code is structured, but it's an option.
Edit: I misread the question. I thought you had multiple clients connecting to the same database, not the same database server. In that case, a profiler probably is the best choice.
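If you'd rather not run Profiler at all, a rough DMV-based alternative is to diff cached-plan execution counts before and after loading the page (a sketch - 'MyDatabase' is a placeholder, and the counts are cumulative since the plans were cached):

-- Execution counts per cached statement, limited to one database
SELECT st.text AS query_text, qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.dbid = DB_ID('MyDatabase')
ORDER BY qs.execution_count DESC;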
Do you have access to SQL Server Profiler? You can set up traces to monitor this sort of thing by loading a page and looking at the effects in the profiler.
Just be aware that Profiler can affect performance, so it is best to do this on dev.