I use ASP.NET MVC and SQL Server. The query below lives in my repository class. Sometimes it executes in 10 seconds, sometimes in 3 minutes!! Why? I used SQL Server Profiler, but I really don't understand what the cause could be or how to find it.
Query:
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[FirstAddressId] AS [FirstAddressId],
[Extent1].[SecondAddressId] AS [SecondAddressId],
[Extent1].[Distance] AS [Distance],
[Extent1].[JsonRoute] AS [JsonRoute]
FROM [dbo].[AddressXAddressDistances] AS [Extent1]
Check your query plan. Just run your SELECT statement in SQL Server Management Studio to obtain the actual query plan. More info is here: Query plan.
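As a minimal sketch of that, using the table and columns from the question above:
-- In SSMS, enable "Include Actual Execution Plan" (Ctrl+M), then run:
SET STATISTICS IO ON;   -- report logical/physical reads per statement
SET STATISTICS TIME ON; -- report CPU and elapsed time per statement
SELECT [Id], [FirstAddressId], [SecondAddressId], [Distance], [JsonRoute]
FROM [dbo].[AddressXAddressDistances];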
If the plans are the same, but response time differs significantly between the calls, then the issue is probably with locks at the database level (or heavy concurrent workloads). I mean, for instance, an incorrect transaction isolation level, or some reports running in the meantime that consume too many resources (or generate locks "because of something", to ensure some data consistency enforced by some developer).
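If blocking by writers turns out to be the cause, one option worth evaluating is row versioning, which lets readers see the last committed version instead of waiting. A sketch, where MyDb is a placeholder for your database name (note the trade-offs, e.g. extra tempdb usage):
-- MyDb is a placeholder; this needs exclusive access, hence ROLLBACK IMMEDIATE
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;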
Many factors influence performance (including the memory available at the moment of query execution).
You can also run a few queries to analyze the quality of your statistics (or just update all of them using EXEC sp_updatestats), and analyze the fragmentation of your indexes. It's a guess, but in my experience locks, outdated statistics, or fragmented indexes can force SQL Server to choose a very inefficient query plan.
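A minimal sketch of both checks; the table name is the one from the question, adjust as needed:
-- Refresh all statistics in the current database
EXEC sp_updatestats;

-- Check fragmentation of the table's indexes
SELECT i.name, ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.AddressXAddressDistances'),
         NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id;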
Some info on active locks: Active locks on table
Additional info 1:
If you are the only user of this db and it's on your local machine (you use SQL Server Express), the issue with locks is rather less likely than other problems. Try to open the Event Log of SQL Server. It's available in SQL Server Management Studio on the left side (tree) under your engine instance here: Management/SQL Server Logs/Current. Do you see any unusual info there? Try to review the system log also (using the Event Viewer app). In case of hardware problems you should also see some info there. Btw: how many rows do you have in the table? Try to review also the behavior of your disks in Process Explorer or Performance Monitor. If the disk queue length is too big, it can be the main source of the problem (in such a case, look at which apps stress the disk)...
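You can also read that log straight from T-SQL; sp_readerrorlog is undocumented but widely used (first parameter: log number, 0 = current; second: 1 = SQL Server log; third: an optional filter string):
EXEC sp_readerrorlog 0, 1, N'error';  -- current SQL Server log, filtered on 'error'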
More info on locks:
SELECT
    [spid] = session_id
    , ecid
    , [blockedBy] = blocking_session_id
    , [database] = DB_NAME(sp.dbid)
    , [user] = nt_username
    , [status] = er.status
    , [wait] = wait_type
    , [current stmt] =
        SUBSTRING (
            qt.text,
            er.statement_start_offset/2,
            (CASE
                WHEN er.statement_end_offset = -1 THEN DATALENGTH(qt.text)
                ELSE er.statement_end_offset
            END - er.statement_start_offset)/2)
    , [current batch] = qt.text
    , reads
    , logical_reads
    , cpu
    , [time elapsed (min)] = DATEDIFF(mi, start_time, GETDATE())
    , program = program_name
    , hostname
    --, nt_domain
    , start_time
    , qt.objectid
FROM sys.dm_exec_requests er
INNER JOIN sys.sysprocesses sp ON er.session_id = sp.spid
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
WHERE session_id > 50            -- Ignore system spids.
  AND session_id NOT IN (@@SPID) -- Ignore this current statement.
ORDER BY 1, 2
GO
Before you waste any more time on this, you should realize that the time a query takes in development is essentially meaningless. In development, you're running a single-threaded web server in IIS Express, which means that you've also got VS running, sitting on roughly 2-4 GB of RAM. Alongside that, you're running a SQL Server instance that's fighting the system for both RAM and hard drive time. You haven't given any specs of your system, but if you also happen to be sporting a consumer-class 5400 or 7200 RPM platter-style drive rather than an SSD, that will severely impact performance as well. Then, we haven't even got into what else might be running on this system. Photoshop? Outlook? Your favorite playlists of MP3s decoding in the background? What's Windows doing? It might be downloading/applying updates, indexing your drive for search, etc.
None of that applies any more when you move into production (or at least it shouldn't). In production, you should have a dedicated server with 4-8 GB of RAM and an SSD or enterprise-class 15,000+ RPM platter drive devoted just to SQL Server, so it can spit out query results at lightning speed.
Long and short, if you want to gauge website/query performance of your application, you need to deploy it to a facsimile of what you'll be running in production. There, you can pound the hell out of it and get some real data you can actually do something with. Trying to profile your app in development is just a total waste of time.
Why would the TrackedMessages_Copy_BizTalkMsgBoxDb SQL Agent job start failing with "Query processor could not produce a query plan"?
Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. [SQLSTATE 42000] (Error 8622).
Our SQL guys are talking about amending the stored proc, but we've told them to treat the BizTalk DBs as a black box.
It should go without saying, but before anything, make sure to backup your databases. In fact, if your regular backup jobs are running, you may be able to restore a backup and compare things to when it was working on this server. That said -
Check the SQL Agent Job to make sure no additional steps have been added/no plan has been forced/no hints are being used; it should just have one step called 'Purge' that calls the procedure below with the DB server and DTA database name as parameters.
Check the procedure (BizTalkMsgBoxDb.dbo.bts_CopyTrackedMessagesToDTA) to make sure it hasn't been altered.
If this is a production or otherwise sensitive box, back up the DBs and restore them to a local dev environment before proceeding!
If this is not production, see if you can run the procedure (perhaps in a transaction that you rollback) directly in SSMS. See if you get any better errors. Add print statements to see if you can find out exactly where it's getting conflicting hints.
If the procedure won't run, consider freeing the procedure cache (DBCC FREEPROCCACHE; see the sketch after this list) and seeing if the procedure will run.
If it runs in your dev environment from a backup, you may have to start looking at server/database settings. I can't think of which ones off the top of my head that would cause this error though.
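For the procedure-cache step above, on a busy server you may prefer to evict just the suspect plan instead of flushing the entire cache. A sketch using the procedure name from this question, run in the BizTalkMsgBoxDb context:
USE BizTalkMsgBoxDb;
-- Find the cached plan for the suspect procedure
SELECT cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.objectid = OBJECT_ID(N'dbo.bts_CopyTrackedMessagesToDTA')
  AND st.dbid = DB_ID();

-- DBCC FREEPROCCACHE (0x06...);  -- paste a plan_handle from above to evict only that plan
DBCC FREEPROCCACHE;               -- or clear the whole cache (a much heavier hammer)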
For what it's worth, well-intentioned DBAs break BizTalk frequently. They decide that an index is missing or not properly covering, or that security could be improved, or that the database should be treated like the other databases they administer. I've seen DBAs do really silly things to the BizTalk databases that get very hard to diagnose.
Did you try updating the statistics on the database table referenced by the stored procedure (which is run by the SQL Server Agent job)? The query planner uses those statistics to decide how best to execute your SQL.
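A minimal sketch of that, assuming the default name of the BizTalk tracking database (BizTalkDTADb); substitute yours if it differs:
USE BizTalkDTADb;
EXEC sp_updatestats;  -- refresh statistics so the optimizer can pick a sensible plan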
I have an old database - a users membership/role database that was set up automatically by an ASP.Net 2 application years ago:
The SQL Server version currently running is: SQL Server 10.5.1617
The users database log file is huge (the ldf file is approx 400 times the size of the mdf file).
The recovery model is currently set to "Full". I understand what that is - and I don't need point-in-time restoration.
If I simply changed the recovery model to "Simple" from within Sql Server Management Studio:
...and clicked OK to save the changes - would I be risking my current database in any way? Or is SQL Server fine with making changes like this to live databases? And would the log file automatically shrink itself?
Thanks for your advice,
Mark
You should be fine; the transactions have been committed. The log file is waiting to be backed up and therefore released. Changing to Simple Recovery means that you cannot do log backups (and therefore no point-in-time restores), but data will be committed to the db in the same way as before; log records are simply discarded once SQL Server has finished writing the transaction.
To answer both of your questions:
Changing the recovery model on a live database is safe. You shouldn't incur any downtime, blocking, etc.
The log file won't shrink itself. You may find that once you've set the recovery model to simple that it may not be shrinkable right away. If you find that you're unable to shrink it, take a look at dbcc loginfo, specifically the 'status' column. Each row in the output of that command represents one virtual log file (vlf). The shrink command will only be able to clear a contiguous block of inactive (i.e. status = 0) vlfs at the end of the file. TL;DR - If you've got rows with status = 2 at the bottom, wait until you don't and then shrink.
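Putting both answers together, a sketch assuming the database is named aspnetdb and its logical log file is aspnetdb_log (both placeholders; check the real names first):
ALTER DATABASE aspnetdb SET RECOVERY SIMPLE;
USE aspnetdb;
SELECT name, type_desc FROM sys.database_files;  -- confirm the logical log file name
DBCC SHRINKFILE (aspnetdb_log, 100);             -- target size in MB
DBCC LOGINFO;                                    -- status = 2 rows at the end will block the shrink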
I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the Event Viewer, I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager database is 25GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not reckon the Transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".EDA_ITEMS_UPDATEBINARYCONTENT
After taking a look at this procedure, it looks like the following statement could be the root cause:
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID set to -1, and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the id will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
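Before touching the physical key design, the cheaper explanations may be worth ruling out first; a sketch using the table and procedure names from the error above:
UPDATE STATISTICS dbo.BINARIES;                          -- refresh the table's statistics
EXEC sp_recompile N'dbo.EDA_ITEMS_UPDATEBINARYCONTENT';  -- force a fresh plan on the next run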
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big, you will want to make sure you have a proper database setup, with data files separated from your log files (on different physical partitions/disks), and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily/hourly. Things like backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server, a transaction log of more than 1GB slows the database down drastically; you should always try to keep it below that size through timely backup/truncation). Updating statistics and rebuilding indexes are also things you should not forget to do on a regular basis.
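As a sketch of the hourly log backup, assuming the content manager database is named Tridion_cm and the backup path exists (both are placeholders):
BACKUP LOG Tridion_cm TO DISK = N'D:\Backups\Tridion_cm_log.trn';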
Currently I'm working on a project with OpenDS. I have to upload more than 200k entries into OpenDS, but unfortunately it fails at random times once the upload exceeds 10k-15k entries.
When I google that particular error (alert ID 9896233: JE Database Environment corresponding to backend id userRoot is corrupt. Restart the Directory Server to reopen the Environment), it seems like the OpenDS backend DB [Berkeley DB] is not that reliable when adding a massive number of entries. How can I plug a new, reliable commercial or open-source relational DB [Oracle/H2] into OpenDS? Is there any configuration for this, or do I have to change the OpenDS code?
First you should be aware that Oracle has pulled the plug on the OpenDS project and it is now completely stalled. Development continues as open source as the OpenDJ project: http://opendj.forgerock.org.
This said, I believe that there is a problem with your environment. When I was still working on OpenDS, our basic stress test was importing and running very high load against 10 million users. 200K entries is not a massive number. My daily OpenDJ tests on my laptop are done with 100K to 1M entries. We have customers running OpenDJ in production with more than 20M entries, growing 40% every 6 months!
Berkeley DB has proven to be very scalable and reliable.
Things you might want to check: what is the maximum number of files that can be opened by a single process on your machine? Linux defaults to 1024, and that limit is easy to hit with OpenDS or OpenDJ. Are you using a local filesystem? Berkeley DB is not supported on networked filesystems such as NFS or other NAS.
Finally, check the logs/errors file and your systems log. Chances are that one of them will have a message containing the root cause of the problem (most likely logs/errors).
Kind regards,
Ludovic Poitou
ForgeRock - Product Manager for OpenDJ
I have a SQLite database that is used by two processes. I am wondering, with the most recent version of SQLite, while one process (connection) starts a transaction to write to the database will the other process be able to read from the database simultaneously?
I collected information from various sources, mostly from sqlite.org, and put them together:
First, by default, multiple processes can have the same SQLite database open at the same time, and several read accesses can be satisfied in parallel.
When writing, a single write to the database locks the database for a short time; during that window nothing, not even a read, can access the database file at all.
Beginning with version 3.7.0, a new “Write Ahead Logging” (WAL) option is available, in which reading and writing can proceed concurrently.
By default, WAL is not enabled. To turn WAL on, refer to the SQLite documentation.
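A minimal example; journal_mode is a persistent property of the database file, so issuing this once from any connection switches the file over:
PRAGMA journal_mode=WAL;  -- returns 'wal' on success
PRAGMA journal_mode;      -- verify the current mode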
SQLite3 explicitly allows multiple connections:
(5) Can multiple applications or multiple instances of the same
application access a single database file at the same time?
Multiple processes can have the same database open at the same time.
Multiple processes can be doing a SELECT at the same time. But only
one process can be making changes to the database at any moment in
time, however.
For sharing connections, use SQLite3 shared cache:
Starting with version 3.3.0, SQLite includes a special "shared-cache"
mode (disabled by default)
In version 3.5.0, shared-cache mode was modified so that the same
cache can be shared across an entire process rather than just within a
single thread.
5.0 Enabling Shared-Cache Mode
Shared-cache mode is enabled on a per-process basis. Using the C
interface, the following API can be used to globally enable or disable
shared-cache mode:
int sqlite3_enable_shared_cache(int);
Each call sqlite3_enable_shared_cache() effects subsequent database
connections created using sqlite3_open(), sqlite3_open16(), or
sqlite3_open_v2(). Database connections that already exist are
unaffected. Each call to sqlite3_enable_shared_cache() overrides all
previous calls within the same process.
I had a similar code architecture to yours: a single SQLite database which process A read from while process B wrote to it concurrently, based on events (in Python 3.10.2, using the most up-to-date sqlite3 version). Process B was continually updating the database, while process A was reading from it to check data. My issue was that it worked in debug mode, but not in "release" mode.
In order to solve my particular problem I used Write Ahead Logging, which is referenced in previous answers. After creating my database in Process B (write mode) I added the line:
cur.execute('PRAGMA journal_mode=wal'), where cur is the cursor object created from the established connection.
This sets the journal to WAL mode, which allows concurrent access for multiple readers (but still only one writer). In Process A, where I was reading the data, before connecting to the same database I included:
time.sleep(0.5)
Setting a sleep timer before a connection was made to the same database fixed my issue with it not working in "release" mode.
In my case I did not have to manually set any checkpoints, locks, or transactions. Your use case might be different from mine, however, so research is most likely required. Nevertheless, I hope this post helps and saves everyone some time!
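As an alternative to a fixed sleep, SQLite's busy_timeout pragma makes a connection retry for a while instead of failing immediately when the file is briefly locked; a sketch (the 5000 ms value is arbitrary):
PRAGMA busy_timeout = 5000;  -- wait up to 5 seconds for a lock before returning SQLITE_BUSY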