If I set journal_size_limit = 67110000 (roughly 64 MiB), will I be able to:
work with / commit transactions larger than that value (somewhat unlikely)
successfully perform a VACUUM (even if the database is 3 GiB or larger)?
The VACUUM command works by copying the contents of the database into
a temporary database file and then overwriting the original with the
contents of the temporary file. When overwriting the original, a
rollback journal or write-ahead log (WAL) file is used just as it would
be for any other database transaction. This means that when
VACUUMing a database, as much as twice the size of the original
database file is required in free disk space.
This isn't entirely clear in the documentation, and I would appreciate it if someone could tell me for sure.
The journal_size_limit is not an upper limit on the transaction journal; it is an upper limit for an inactive transaction journal.
After a transaction has finished, the journal is no longer needed, but keeping the file around can make things faster because the file system does not have to free its space and then reallocate it for the next transaction.
The purpose of this setting is to limit the size of unused journal data.
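A minimal sketch of what the setting does and does not do (the journal mode, table, and column names here are assumed for illustration; only the limit value comes from the question):
PRAGMA journal_mode = TRUNCATE;          -- keep the journal file between transactions
PRAGMA journal_size_limit = 67110000;    -- applies only to the *inactive* journal
BEGIN;
UPDATE t SET payload = upper(payload);   -- the active journal may grow well past the limit
COMMIT;
-- after COMMIT the leftover journal is truncated back to at most 67110000 bytes
The same logic applies to VACUUM: the journal grows as large as the operation needs while it runs, and is only trimmed back once it becomes inactive.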
I have a very large SQLite database with a single table of two text columns (about 2.3 billion rows, 98 GB) that I'm trying to create an index on using the sqlite3 CLI tool on Ubuntu 20.04.
The command I'm trying to run is:
CREATE INDEX col1_col2_x ON tablename(col1 COLLATE NOCASE, col2);
The goal is to also create the opposite index to be able to do very fast case-insensitive searches on either column.
Every time I try, it runs for about an hour and then the process exits with just the message "Killed" and exit code 137, which I don't see listed in the sqlite3 documentation.
My first thought was that it was running out of memory, so I tried setting the temp_store_directory pragma as well as the TEMP_DIR environment variable to the same directory as the database file, which has about 8 TB of free space, so I'm not sure what's going wrong.
Is SQLite not meant for databases of this size? Creating the index before inserting isn't a viable option either, as that looks like it would take months. I should also note that I was able to create the exact same indexes successfully on a 36 GB table with the same schema, so I'm wondering if I'm running into an undocumented limitation.
I'm also open to other database solutions if sqlite isn't the right solution, although preliminary tests of postgres didn't seem to be any better.
Have you considered setting any of the various PRAGMA statements for the database?
Even with a 2 GB database of only 5 million rows, the following were helpful.
PRAGMA page_size = 4096;
PRAGMA cache_size = 10000;
PRAGMA synchronous = OFF;
PRAGMA auto_vacuum = FULL;
PRAGMA automatic_index = FALSE;
PRAGMA journal_mode = OFF;
page_size
Query or set the page size of the database. The page size must be a power of two between 512 and 65536 inclusive.
cache_size
Query or change the suggested maximum number of database disk pages that SQLite will hold in memory at once per open database file.
synchronous
With synchronous OFF (0), SQLite continues without syncing as soon as it has handed data off to the operating system.
auto_vacuum
When the auto-vacuum mode is 1 or "full", the freelist pages are moved to the end of the database file and the database file is truncated to remove the freelist pages at every transaction commit.
automatic_index
Query, set, or clear the automatic indexing capability.
journal_mode
The OFF journaling mode disables the rollback journal completely. No rollback journal is ever created and hence there is never a rollback journal to delete. The OFF journaling mode disables the atomic commit and rollback capabilities of SQLite.
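Applied to the huge CREATE INDEX above, a hedged sketch (the table and column names come from the question; the second index name and the cache size are assumptions, and synchronous = OFF with journal_mode = OFF deliberately trades crash safety for speed):
PRAGMA synchronous = OFF;       -- skip fsyncs; a crash mid-build can corrupt the file
PRAGMA journal_mode = OFF;      -- no rollback journal; same crash-safety caveat
PRAGMA cache_size = -2000000;   -- a negative value means KiB, so roughly 2 GiB of page cache
CREATE INDEX col1_col2_x ON tablename(col1 COLLATE NOCASE, col2);
CREATE INDEX col2_col1_x ON tablename(col2 COLLATE NOCASE, col1);
Note that none of this addresses the "Killed" / exit code 137 symptom directly, but a bigger explicit page cache and less journaling overhead reduce both memory pressure and I/O during the external-merge sort that CREATE INDEX performs.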
The goal is to complete an online backup while other processes write to the database.
I connect to the sqlite database via the command line, and run
.backup mydatabase.db
During the backup, another process writes to the database and I immediately receive the message
Error: database is locked
and the backup disappears (reverts to a size of 0).
During the backup process there is a journal file, although it never gets very large. I checked that the journal_size_limit pragma is set to -1, which I believe means it's unlimited. My understanding is that writes to the database should go to the journal during the backup process, but maybe I'm wrong. I'm new to SQLite and databases in general.
Am I going about this the wrong way?
If the sqlite3 backup writes "Error: database is locked", then you should use
sqlite3 source.db ".timeout 10000" ".backup backup.db"
See also Increase the lock timeout with sqlite, and what is the default values? about default timeouts (spoiler: the default is zero). With backups solved, you can also switch SQLite to WAL mode, where readers no longer block the writer.
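The same settings expressed as SQL, a minimal sketch using the timeout value from the command above:
PRAGMA busy_timeout = 10000;   -- wait up to 10 s for locks instead of failing immediately
PRAGMA journal_mode = WAL;     -- readers and the single writer stop blocking each other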
//writing this as an answer so it would be easier to google this, thanks guys!
I just want to know the role of the Oracle downstream mining database machine in OGG downstream integrated capture mode. To be specific, I want to know whether the mining DB also stores data, or whether it only processes the archive logs received from the source and forwards the processed data to the target without storing it?
For example, if I have 1000 tables totalling 15 TB in the source system and I just want to replicate one 1 MB table to the target, do all 1000 tables (15 TB) need to exist in the downstream mining DB, do none of them need to exist there, or does only the 1 MB table of interest need to exist there?
Thanks
I lack the points to add comments, so answering here.
There is no need for the source table(s) or data files on the log mining server. Only redo or archive logs are shipped/transported from the source DB to the log mining server.
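For illustration, only redo transport is configured between the two systems; on the source it typically looks something like this (the downstream service/DB name dbmscap is an assumption, mirroring Oracle's documentation examples):
-- on the source database: ship redo to the downstream mining database
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=dbmscap ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=dbmscap';
ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;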
I have an old database - a users membership/role that was setup automatically by an ASP.Net 2 application years ago:
The SQL Server version currently running is 10.5.1617 (SQL Server 2008 R2).
The users database log file is huge (the ldf file is approx 400 times the size of the mdf file).
The recovery model is currently set to "Full". I understand what that is - and I don't need point in time restoration.
If I simply changed the recovery model to "Simple" from within Sql Server Management Studio:
...and clicked OK to save the changes - would I be risking my current database in any way? Or is SQL Server fine with making changes like this to live databases? And would the log file automatically shrink itself?
Thanks for your advice,
Mark
You should be fine; the transactions have been committed. The log file is waiting to be backed up and thereby released. Changing to simple recovery means that you cannot take log backups (and so lose point-in-time restores), but data is committed to the DB in the same way as before; log space is simply reused once SQL Server has finished writing the transaction.
To answer both of your questions:
Changing the recovery model on a live database is safe. You shouldn't incur any downtime, blocking, etc.
The log file won't shrink itself. You may find that once you've set the recovery model to simple that it may not be shrinkable right away. If you find that you're unable to shrink it, take a look at dbcc loginfo, specifically the 'status' column. Each row in the output of that command represents one virtual log file (vlf). The shrink command will only be able to clear a contiguous block of inactive (i.e. status = 0) vlfs at the end of the file. TL;DR - If you've got rows with status = 2 at the bottom, wait until you don't and then shrink.
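A hedged sketch of the whole sequence (the database name aspnetdb and the logical log file name aspnetdb_log are assumptions; check sys.database_files for yours):
ALTER DATABASE aspnetdb SET RECOVERY SIMPLE;   -- safe on a live database
DBCC LOGINFO;                                  -- status = 2 marks VLFs that are still active
DBCC SHRINKFILE (aspnetdb_log, 100);           -- shrink the log file down to ~100 MB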
I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the Event Viewer, I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager database is 25 GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes, but he does not believe the transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".EDA_ITEMS_UPDATEBINARYCONTENT
After taking a look at this procedure it looks like the following statement could be the root cause
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID set to -1, and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the ID will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me)
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
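Since SDL traced this to stale statistics and a poor query plan, a hedged longer-term fix (rather than relying on the SELECT trick) would be to refresh the statistics on the affected table:
UPDATE STATISTICS BINARIES WITH FULLSCAN;   -- refresh statistics so the optimizer picks a better plan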
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big, you will want to make sure you have a proper database setup, with data files separated from your log files (on different physical partitions/disks), and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily/hourly. Backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server, a transaction log of more than 1 GB slows the database down drastically; you should always try to keep it below that size through timely backups and truncation). Updating statistics and rebuilding indexes are also things you should not forget to do on a regular basis.
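For example, a scheduled log backup along these lines keeps the log in check (the database name Tridion_cm and the backup path are assumptions):
BACKUP LOG Tridion_cm TO DISK = N'E:\Backups\Tridion_cm_log.trn';   -- frees the inactive log portion for reuse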