I am running a BizTalk 2006 server instance on a SQL Server 2000 SP4 database. I have a 10 GB Tracking DB (9 GB used / 1 GB free). I am running the DTADB Archive & Purge job every hour, purging messages at 10 days soft / 14 days hard, and it runs without error. When I take the purging down to 5 days soft / 9 days hard, the Tracking database's size only decreases by less than 5%.
Does anybody have any thoughts or experience on what may be causing this issue?
I think it could be because you are using SQL Server 2000.
The documentation for configuring purging of the database specifically mentions only SQL Server 2005 and 2008.
http://msdn.microsoft.com/en-us/library/aa558715(BTS.10).aspx
There are also people who have had problems running purge scripts on SQL Server 2000.
http://www.biztalkgurus.com/forums/p/9443/18513.aspx
Hope this helps
By default, the tracking database** won't reduce in size - I suspect that if you look at the data and log file usage, you will find a large percentage in the unallocated (data file) and unused (log file) states.
To reduce the overall database size you will need to shrink the database or its individual files using the DBCC SHRINKFILE command, as discussed in Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE.
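As a minimal sketch (assuming the default BizTalkDTADb logical file names, which you should verify with sp_helpfile first), the shrink could look something like this:

USE BizTalkDTADb;
GO
-- List the logical file names and current usage before shrinking.
EXEC sp_helpfile;
GO
-- Shrink the data file down towards a 2 GB target and the log towards 500 MB (adjust to taste).
-- 'BizTalkDTADb' and 'BizTalkDTADb_log' are the usual logical names, but check them first.
DBCC SHRINKFILE (BizTalkDTADb, 2000);
DBCC SHRINKFILE (BizTalkDTADb_log, 500);
GO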
Hope this helps.
** or any database for that matter, unless the AUTO SHRINK option is enabled, however this isn't recommended: SQL Server Storage Engine Blog - Turn AUTO_SHRINK off!!
In the end, the only solution was to manually purge the tracking DB...
http://msdn.microsoft.com/en-us/library/dd800104(BTS.10).aspx
Not sure why it happens.
The DTA Archive and Purge SQL Server Agent job reduces the need to manually purge data from the BizTalk Tracking (BizTalkDTADb) database due to continuous purging of the database and compaction of stored tracking data. You might need to manually purge data if your BizTalk Tracking (BizTalkDTADb) database has grown so much that sustained performance degradation is occurring and the DTA Archive and Purge job is unable to keep up with the database growth.
Seems to imply this may be part of routine housekeeping.
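For reference, the manual purge that article describes boils down to calling dtasp_PurgeTrackingDatabase against the tracking database. A hedged sketch (the retention values here are just examples matching the 5-day soft / 9-day hard windows above):

USE BizTalkDTADb;
GO
-- Parameters are: live hours, live days (soft retention), hard-delete days, last backup time.
DECLARE @dtLastBackup datetime;
SET @dtLastBackup = GETUTCDATE();  -- treat "now" as the last backup so completed data is eligible
EXEC dtasp_PurgeTrackingDatabase 0, 5, 9, @dtLastBackup;
GO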
Suddenly, we are receiving disk space alerts from the production BizTalk 2010 database server. Alerts are set to fire when 90% of the disk is full. I have not noticed any slowness in BizTalk data processing so far. Below are the points I have noticed:
BizTalkDTADb size is ~65 GB (data file ~55 GB + log file ~10 GB). All other databases are < 2 GB.
The SQL Agent job for purging and archiving the DTA DB is not configured.
BizTalk has been running for more than 3 years now.
Global tracking has been on from day 1.
I can see Track Events checked for orchestration tracking, and I cannot find any port-level tracking enabled.
Below are the action items I have planned so far based on my internet searches:
Take a full backup of the BizTalk databases.
Take BizTalk offline.
Purge BizTalkDTADb (as we have no use for the tracking data) using the Terminator tool.
Take BizTalk online again.
I have the following questions:
I will be doing this for the first time. Could you please validate whether I am going in the right direction?
What is the difference between running the stored procedure from the SQL Agent job (dtasp_BackupAndPurgeTrackingDatabase) and running the Terminator tool to purge the DTA DB? I read online that running the SP (for a full clean-up) might take days to execute because of the current size. How much time should the Terminator tool take?
I just installed the latest BizTalk Terminator tool, v2.5.6.9, available on the internet, but I am unable to find the "Purge Everything in DTA" option as explained in https://blogs.msdn.microsoft.com/amantaras/2014/04/29/purging-trackingdta-db-using-terminator-tool/ .
Which option should I go for to clean up the DTA DB?
Please let me know if you need more information to answer.
Regards,
Goutamendu
I would rather do the following:
Ask for more disk space to be added immediately, to stop the alerts and to allow your production environment to keep running smoothly without interruption.
Turn off global tracking from the BizTalk Administration Console and restart the host instances.
Configure the purge job and let it clean up. You can repeatedly reconfigure it to reduce retention by a few days at a time until you come down to where you want to be (see the sketch after this list).
You may still need to have the DBAs shrink the data files to reduce the physical file size.
With this approach your environment will keep running and you will be able to reduce the DTA DB size in the background. Use the Terminator tool only as a last resort.
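For context, the "DTA Purge and Archive (BizTalkDTADb)" job step is essentially a call to dtasp_BackupAndPurgeTrackingDatabase. A hedged sketch of what the configured step typically looks like (the parameter order and the backup folder here are assumptions; verify them against your own job step):

-- Job step of 'DTA Purge and Archive (BizTalkDTADb)', roughly:
EXEC dtasp_BackupAndPurgeTrackingDatabase
    0,                       -- live hours: soft-purge window, hours part
    7,                       -- live days: soft-purge window, days part
    14,                      -- hard-delete days: must exceed the soft window
    N'\\server\dta_archive\', -- folder for tracking archives (hypothetical path)
    NULL,                    -- validating server
    0;                       -- force backup flag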
Do not use the Terminator tool.
It would work, but it's for more extreme circumstances. Since all you're seeing is a warning after years of operation, you can probably take your time.
Assuming all other Agent jobs are running without error, including Backup BizTalk Server:
Double check with everyone, including the developers, that the tracking data is not needed for anything. If it is not...
During a regular downtime window, manually back up all the databases by running sp_ForceFullBackup (Management DB), then running the Backup BizTalk Server job (see the T-SQL sketch after these steps).
Run dtasp_PurgeAllCompletedTrackingData (DTA DB).
Configure DTA Purge and Archive and enable.
Depending on the size of the tracking database, the purge might take some time.
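A minimal T-SQL sketch of the backup and purge steps, assuming the default database names BizTalkMgmtDb and BizTalkDTADb:

-- Force a full backup set on the next run of the Backup BizTalk Server job.
USE BizTalkMgmtDb;
GO
EXEC sp_ForceFullBackup;
GO
-- ...now run the 'Backup BizTalk Server (BizTalkMgmtDb)' SQL Agent job and wait for it to finish...

-- Then purge all completed tracking data from the tracking database.
USE BizTalkDTADb;
GO
EXEC dtasp_PurgeAllCompletedTrackingData;
GO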
I have an old database - a users membership/roles database that was set up automatically by an ASP.NET 2 application years ago.
The SQL Server version currently running is SQL Server 10.5.1617.
The users database log file is huge (the .ldf file is roughly 400 times the size of the .mdf file).
The recovery model is currently set to "Full". I understand what that is, and I don't need point-in-time restoration.
If I simply changed the recovery model to "Simple" from within Sql Server Management Studio:
...and clicked ok to save the changes - would I be risking my current database in any way? Or is Sql Server fine with making changes like this to live databases? And would the log file automatically shrink itself?
Thanks for your advice,
Mark
You should be fine; the transactions have been committed. The log file is waiting to be backed up and therefore released. Changing to the simple recovery model means that you cannot take log backups for point-in-time restores, but data will be committed to the database in the same way as before; the log space is simply reused once SQL Server has finished writing the transaction.
To answer both of your questions:
Changing the recovery model on a live database is safe. You shouldn't incur any downtime, blocking, etc.
The log file won't shrink itself. You may find that once you've set the recovery model to simple that it may not be shrinkable right away. If you find that you're unable to shrink it, take a look at dbcc loginfo, specifically the 'status' column. Each row in the output of that command represents one virtual log file (vlf). The shrink command will only be able to clear a contiguous block of inactive (i.e. status = 0) vlfs at the end of the file. TL;DR - If you've got rows with status = 2 at the bottom, wait until you don't and then shrink.
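A minimal sketch of that sequence, assuming a hypothetical database named UsersDb with a logical log file name of UsersDb_log (check sys.database_files for your real names):

-- Switch to the simple recovery model (no more log backups / point-in-time restore).
ALTER DATABASE UsersDb SET RECOVERY SIMPLE;
GO
USE UsersDb;
GO
-- Inspect the virtual log files; rows with status = 2 are still active.
DBCC LOGINFO;
GO
-- Once the tail of the file is inactive, shrink the log down towards 100 MB.
DBCC SHRINKFILE (UsersDb_log, 100);
GO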
I have an ASP.NET MVC 5 website with an Entity Framework code-first approach on a shared hosting plan. The host uses the open source WebsitePanel for its control panel, and its SQL Server panel is somewhat limited. Today, when I wanted to edit the database, I encountered this error:
The transaction log for database 'db_name' is full due to 'LOG_BACKUP'
I searched around and found a lot of related answers like this and this or this, but the problem is they suggest running a query on the database. I tried running
db.Database.ExecuteSqlCommand("ALTER DATABASE db_name SET RECOVERY SIMPLE;");
from Visual Studio (in the HomeController), but I get the following error:
System.Data.SqlClient.SqlException: ALTER DATABASE statement not allowed within multi-statement transaction.
How can I solve my problem? Should I contact the support team (which is a little poor for my host) or can I solve this myself?
In addition to Ben's answer, you can try the queries below as per your need:
USE {database-name};
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE {database-name}
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE ({database-file-name}, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE {database-name}
SET RECOVERY FULL;
GO
Update credit: @cema-sp
To find the database file names, use the query below:
select * from sys.database_files;
Call your hosting company and either have them set up regular log backups or set the recovery model to simple. I'm sure you know what informs the choice, but I'll be explicit anyway. Set the recovery model to full if you need the ability to restore to an arbitrary point in time. Either way the database is misconfigured as is.
Occasionally when a disk runs out of space, the message "transaction log for database XXXXXXXXXX is full due to 'LOG_BACKUP'" will be returned when an update SQL statement fails.
Check your disk space :)
This error occurs because the transaction log has become full due to LOG_BACKUP. When that happens you can't perform any action on the database, and the SQL Server Database Engine raises error 9002.
To solve this issue you should do the following (a sketch of these steps follows below):
Take a full database backup.
Take a log backup (LOG_BACKUP) so the log can be truncated.
Shrink the log file to reduce the physical file size.
Create a maintenance plan to take log backups frequently.
I wrote an article with all details regarding this error and how to solve it at The transaction log for database ‘SharePoint_Config’ is full due to LOG_BACKUP
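A hedged sketch of those backup and shrink steps, assuming a hypothetical database name MyDb and a backup path you would replace with your own:

-- Full database backup first.
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak';
GO
-- Back up (and thereby truncate) the transaction log.
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';
GO
-- Now the shrink can actually reclaim the space.
DBCC SHRINKFILE (MyDb_log, 100);  -- logical log file name; check sys.database_files
GO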
This can also happen when the log file is restricted in size.
Right click database in Object Explorer
Select Properties
Select Files
On the log line, click the ellipsis in the Autogrowth / Maxsize column
Change/verify Maximum File Size is Unlimited.
After changing it to unlimited, the database came back to life.
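The same change can be made in T-SQL; a minimal sketch assuming a hypothetical database MyDb with a logical log file name MyDb_log:

-- Find the logical file names and current limits first.
SELECT name, type_desc, max_size, growth FROM MyDb.sys.database_files;
GO
-- Remove the size cap on the log file and give it a sensible growth increment.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, MAXSIZE = UNLIMITED, FILEGROWTH = 50MB);
GO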
I got the same error, but from a back-end job (an SSIS job). On checking the database's log file growth setting, I found the log file was limited to 1 GB of growth. So when the job ran and asked SQL Server to allocate more log space, the request was declined because the growth limit had been reached, and the job failed. I changed the log file to grow by 50 MB with unrestricted maximum size and the error went away.
I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the event viewer, I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager database is 25 GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not reckon the Transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".EDA_ITEMS_UPDATEBINARYCONTENT
After taking a look at this procedure it looks like the following statement could be the root cause
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID as -1 and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the ID will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
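If you hit the same thing, a more conventional fix for stale statistics and a bad cached plan would be something along these lines (a hedged sketch, not SDL's official guidance):

-- Refresh the optimizer's statistics on the BINARIES table...
UPDATE STATISTICS BINARIES WITH FULLSCAN;
GO
-- ...and mark the stored procedure for recompilation so it picks up a fresh plan.
EXEC sp_recompile 'EDA_ITEMS_UPDATEBINARYCONTENT';
GO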
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big you will want to make sure you have a proper database setup, with data files separated from your log files (on different physical partitions/disks) and possibly even multiple data files on multiple physical partitions to take advantage of the performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily/hourly. Things like backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server a transaction log of more than 1 GB can slow the database down drastically; you should always try to keep it below that size through timely backups/truncation). Updating statistics and rebuilding indexes are also things you should not forget to do on a regular basis.
Currently I'm working on a project with OpenDS. I have to upload more than 200k entries into OpenDS, but unfortunately it fails at random times once the import exceeds roughly 10k-15k entries.
When I google that particular error (alert ID 9896233: JE Database Environment corresponding to backend id userRoot is corrupt. Restart the Directory Server to reopen the Environment), it seems like the OpenDS backend DB [Berkeley DB] is not that reliable when adding a massive number of entries. How can I plug a commercial or open source relational DB [Oracle/H2] into OpenDS as a more reliable backend? Is there any configuration for this, or do I have to change the OpenDS code?
First you should be aware that Oracle has pulled the plug on the OpenDS project and it is now completely stalled. Development continues in open source as the OpenDJ project: http://opendj.forgerock.org.
This said, I believe that there is a problem with your environment. When I was still working on OpenDS, our basic stress test was importing and running a very high load against 10 million users. 200K entries is not a massive number. My daily OpenDJ tests on my laptop are done with 100K to 1M entries. We have customers running OpenDJ in production with more than 20M entries, growing 40% every 6 months!
Berkeley DB has been proved to be very scalable and reliable.
Things you might want to check: what is the maximum number of files that can be opened by a single process on your machine? Linux defaults to 1024, and that limit can be easy to hit with OpenDS or OpenDJ. Are you using a local filesystem? Berkeley DB is not supported on networked filesystems such as NFS or other NAS.
Finally, check the logs/errors file and your systems log. Chances are that one of them will have a message containing the root cause of the problem (most likely logs/errors).
Kind regards,
Ludovic Poitou
ForgeRock - Product Manager for OpenDJ