Failure in sqlite_session_driver active_tick: Error from SQLite database "lasso_session": 19 constraint failed

I am having a recurring problem with Lasso 9.2.6 where the instance slows to a crawl and throws these errors to the log:
Failure in sqlite_session_driver active_tick: Error from SQLite
database "lasso_session": 19 constraint failed
Restarting the instance solves the performance problem temporarily, but errors continue to appear.
Any recommendations for cleaning this up or resetting the session database to clear out invalid data?

Depending on traffic or logging volume, you may be overloading the SQLite tables. It's hard to say exactly what the cause is without checking, but I'd start with the session settings. Consider setting sessions to use either the memory driver or the MySQL driver (I recommend a MEMORY table if using MySQL).
Have a look at the size of the tables and check whether any are excessively large. You can just run ls -l /var/lasso/instances/default/SQLiteDBs/ or use an SQLite tool. The logbook and email tables are also likely suspects.
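If the session database has accumulated stale rows, a cleanup along these lines may help. This is only a sketch: the table and column names below are assumptions, so confirm the real schema with .schema first, and stop the instance before touching the file.
sqlite3 /var/lasso/instances/default/SQLiteDBs/lasso_session
-- confirm the actual table and column names before running anything
.schema
-- hypothetical cleanup: drop expired session rows
DELETE FROM sessions WHERE expires < strftime('%s','now');
-- reclaim the freed space and verify the file is healthy
VACUUM;
PRAGMA integrity_check;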

Related

MariaDB TDE - how to encrypt the error log file?

(Note: I haven't used MariaDB. I'm only doing research at this point.)
I read about MariaDB's TDE / data-at-rest encryption support. Its Limitations section states, "The MariaDB error log is not encrypted. The error log can contain query text and data in some cases..."
I'm assuming that statement implies query errors that might contain a date of birth (DOB) will be written in plain text to the error log. If so, what are the workarounds to keep the error logs secure? If the solution is handling encryption in syslog, would you please explain the process?
In addition, are there any other points listed in the Limitations that a new user like myself should be aware of? I am not familiar with MariaDB's intricacies.
Thanks.
The slow query log, general log, and error log are not encrypted.
While the slow query log and general log (see MDEV-9639) are usually not used in a production environment, the error log can in certain cases (such as a server crash) contain SQL statements which might include confidential data.
A solution would be to redirect the error log to syslog and enable rsyslog encryption. More information can be found here:
Writing the Error Log to Syslog on Unix
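As a rough sketch of that setup (the remote host, port, and certificate paths are assumptions, and the rsyslog filter may need adjusting for your distribution): send the error log to syslog via mysqld_safe, then have rsyslog forward it over TLS.
# /etc/my.cnf -- write the error log to syslog instead of a plain file
[mysqld_safe]
syslog

# /etc/rsyslog.d/50-mariadb-forward.conf -- forward mysqld messages over TLS
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/ca.pem
$ActionSendStreamDriverMode 1                   # require TLS for this action
$ActionSendStreamDriverAuthMode x509/name
:programname, startswith, "mysqld" @@logs.example.com:6514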

Does MonetDBLite support auto-commit mode?

I am trying to optimize data upload in an R package using MonetDBLite. As per the MonetDB website, using LOCKED mode can speed up upload:
LOCKED mode
In many bulk-loading situations, the original file can be saved as a backup or recreated for disaster handling. This relieves the database system from having to prepare for recovery as well, and saves significant storage space. The LOCKED qualifier can be used in this situation (and in single-user mode!) to skip the logging operation normally performed.
However, when I try to run my COPY INTO statement with LOCKED mode I get the error:
Server says 'ParseException:SQLparser:COPY INTO .. LOCKED: only allowed in auto commit mode'.
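For reference, the statement being run has roughly this shape (the table name, file path, and delimiters here are hypothetical):
-- MonetDB bulk load; the LOCKED qualifier skips the usual logging
COPY INTO my_table
FROM '/tmp/data.csv'
USING DELIMITERS ',', '\n', '"'
LOCKED;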
Reading the CRAN MonetDBLite documentation would have me believe that the standard mode is auto-commit, e.g. the documentation for dbTransaction():
dbTransaction is used to switch the data from the normal auto-committing mode into transactional mode. Here, changes to the database will not be permanent until dbCommit is called. If the changes are not to be kept around, you can use dbRollback to undo all the changes since dbTransaction was called.
but perhaps this isn't true since I'm getting the above error.
Does anyone have any insight?

Why would the TrackedMessages_Copy_BizTalkMsgBoxDb start failing with "Query processor could not produce a query plan"?

Why would the TrackedMessages_Copy_BizTalkMsgBoxDb SQL Agent job start failing with "Query processor could not produce a query plan"?
Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN. [SQLSTATE 42000] (Error 8622).
Our SQL guys are talking about amending the stored proc, but we've told them to treat the BizTalk DBs as a black box.
It should go without saying, but before anything, make sure to backup your databases. In fact, if your regular backup jobs are running, you may be able to restore a backup and compare things to when it was working on this server. That said -
Check the SQL Agent Job to make sure no additional steps have been added/no plan has been forced/no hints are being used; it should just have one step called 'Purge' that calls the procedure below with the DB server and DTA database name as parameters.
Check the procedure (BizTalkMsgBoxDb.dbo.bts_CopyTrackedMessagesToDTA) to make sure it hasn't been altered.
If this is a production or otherwise sensitive box, back up the DBs and restore them to a local dev environment before proceeding!
If this is not production, see if you can run the procedure directly in SSMS, perhaps in a transaction that you roll back (see the sketch after this list). See if you get any better errors. Add print statements to see if you can find out exactly where it's getting conflicting hints.
If the procedure won't run, consider freeing the procedure cache (DBCC FREEPROCCACHE) and seeing if the procedure will run.
If it runs in your dev environment from a backup, you may have to start looking at server/database settings. I can't think of which ones off the top of my head that would cause this error though.
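A minimal sketch of that test run follows; the parameter names are hypothetical, so check the procedure's actual signature (e.g. with sp_help) before running it:
-- run the purge procedure inside a transaction that is rolled back,
-- so the data is left untouched while you capture the real error
BEGIN TRANSACTION;
EXEC BizTalkMsgBoxDb.dbo.bts_CopyTrackedMessagesToDTA
    @DestinationServer   = N'SQLSERVER01',    -- hypothetical parameter/value
    @DestinationDatabase = N'BizTalkDTADb';   -- hypothetical parameter/value
ROLLBACK TRANSACTION;

-- if the cached plan itself is suspect, clear it and retry
DBCC FREEPROCCACHE;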
For what it's worth, well-intentioned DBAs break BizTalk frequently. They decide that an index is missing or not properly covering, or that security could be improved, or that the database should be treated like the other databases they administer. I've seen DBAs do really silly things to the BizTalk databases that get very hard to diagnose.
Did you try updating the statistics on the database tables referenced by the stored procedure (which is run by the SQL Server Agent job)? The query planner uses those to decide how best to execute your SQL.
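For example (the targeted table name is hypothetical; point it at whatever the procedure actually reads):
-- refresh optimizer statistics across the whole database (sampled)
USE BizTalkMsgBoxDb;
EXEC sp_updatestats;

-- or, more targeted, with a full scan (table name hypothetical)
UPDATE STATISTICS dbo.TrackedMessages WITH FULLSCAN;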

"The transaction log for database is full due to 'LOG_BACKUP'" in a shared host

I have an ASP.NET MVC 5 website using the Entity Framework code-first approach on a shared hosting plan. The host uses the open source WebsitePanel for its control panel, and its SQL Server panel is somewhat limited. Today when I wanted to edit the database, I encountered this error:
The transaction log for database 'db_name' is full due to 'LOG_BACKUP'
I searched around and found a lot of related answers like this and this or this, but the problem is that they suggest running a query on the database. I tried running
db.Database.ExecuteSqlCommand("ALTER DATABASE db_name SET RECOVERY SIMPLE;");
with the visual studio (on the HomeController) but I get the following error:
System.Data.SqlClient.SqlException: ALTER DATABASE statement not allowed within multi-statement transaction.
How can I solve my problem? Should I contact the support team (which is a little poor for my host) or can I solve this myself?
In addition to Ben's answer, you can try the queries below as needed:
USE {database-name};
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE {database-name}
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE ({database-file-name}, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE {database-name}
SET RECOVERY FULL;
GO
Update credit: @cema-sp.
To find the database file names, use the query below:
select * from sys.database_files;
Call your hosting company and either have them set up regular log backups or set the recovery model to simple. I'm sure you know what informs the choice, but I'll be explicit anyway: set the recovery model to full if you need the ability to restore to an arbitrary point in time. Either way, the database is misconfigured as it stands.
Occasionally, when a disk runs out of space, the message "transaction log for database XXXXXXXXXX is full due to 'LOG_BACKUP'" will be returned when an update SQL statement fails.
Check your disk space :)
This error occurs because the transaction log became full due to LOG_BACKUP. You therefore can't perform any action on the database, and the SQL Server Database Engine raises error 9002.
To solve this issue, you should do the following (a T-SQL sketch follows the list):
Take a Full database backup.
Shrink the log file to reduce the physical file size.
Create a LOG_BACKUP.
Create a LOG_BACKUP Maintenance Plan to take backup logs frequently.
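A sketch of those steps (database, logical file, and path names are placeholders; note that the log backup has to happen before the shrink can actually release space):
-- 1. full database backup
BACKUP DATABASE [db_name] TO DISK = N'D:\Backups\db_name.bak';
-- 2. back up (and thereby truncate) the transaction log
BACKUP LOG [db_name] TO DISK = N'D:\Backups\db_name_log.trn';
-- 3. shrink the physical log file (logical name, target size in MB)
USE [db_name];
DBCC SHRINKFILE (db_name_log, 1);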
I wrote an article with all details regarding this error and how to solve it at The transaction log for database ‘SharePoint_Config’ is full due to LOG_BACKUP
This can also happen when the log file is restricted in size.
Right click database in Object Explorer
Select Properties
Select Files
On the log line, click the ellipsis in the Autogrowth / Maxsize column
Change/verify Maximum File Size is Unlimited.
After changing it to unlimited, the database came back to life.
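The same change can be made in T-SQL if you don't have Object Explorer access (the logical log file name is a placeholder; look it up in sys.database_files):
-- remove the size cap on the transaction log file
ALTER DATABASE [db_name]
MODIFY FILE (NAME = db_name_log, MAXSIZE = UNLIMITED);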
I got the same error, but from a backend (SSIS) job. Checking the database's log file growth settings, I found the log file was limited to 1 GB of growth. So when the job ran and asked SQL Server to allocate more log space, the growth limit caused the job to fail. I set the log file to grow by 50 MB with unlimited growth, and the error went away.

Timeout when uploading images

I am currently testing Tridion 2011 and am having problems creating multimedia components with uploaded content (as opposed to external).
I fill out the title, schema, and multimedia type, select a file from my system, then click save. I get a "Saving item..." information message, then approximately 30 seconds later I receive a "The wait operation timed out" message.
There don't appear to be any error messages in the C:\Program Files (x86)\Tridion\log directory. Looking at the event viewer, I see the following information relating to the save action:
Unable to save Component (tcm:4-738361).
The wait operation timed out
Error Code:
0x8004033F (-2147220673)
Call stack:
System.Data.SqlClient.SqlConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException,Boolean,Action`1)
System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject,Boolean,Boolean)
System.Data.SqlClient.TdsParser.TryRun(RunBehavior,SqlCommand,SqlDataReader,BulkCopySimpleResultSet,TdsParserStateObject,Boolean&)
System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader,RunBehavior,String)
System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior,RunBehavior,Boolean,Boolean,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior,RunBehavior,Boolean,String,TaskCompletionSource`1,Int32,Task&,Boolean)
System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1,String,Boolean,Int32,Boolean)
System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Tridion.ContentManager.Data.AdoNet.Sql.SqlDatabaseUtilities.SetBinaryContent(Int32,Stream)
Tridion.ContentManager.Data.AdoNet.ContentManagement.ItemDataMapper.Tridion.ContentManager.Data.ContentManagement.IItemDataMapper.SetBinaryContent(Stream,TcmUri)
Tridion.ContentManager.ContentManagement.RepositoryLocalObject.SetBinaryContent(BinaryContent)
Tridion.ContentManager.ContentManagement.Component.OnSaved(SaveEventArgs)
Tridion.ContentManager.IdentifiableObject.Save(SaveEventArgs)
Tridion.ContentManager.ContentManagement.VersionedItem.Save(Boolean)
Tridion.ContentManager.ContentManagement.VersionedItem.Save()
Tridion.ContentManager.BLFacade.ContentManagement.VersionedItemFacade.UpdateAndCheckIn(UserContext,String,Boolean,Boolean)
XMLState.Save
Component.Save
I already have my timeout settings in the Content Manager Snap-In set to high values (more than 10 minutes) due to another issue.
The BINARIES table in the Content Manager database is 25GB, if that helps.
Any ideas? Thanks.
Edit 1
Following suggestions from Bart Koopman, my DBA has rebuilt the indexes but does not believe the transaction log has any impact on performance. The problem persists.
Edit 2
I have just found more details of the error
Unable to save Component (tcm:0-0-0).
Timeout expired.
The timeout period elapsed prior to completion of the operation or the server is not responding.
A database error occurred while executing Stored Procedure "EDA_ITEMS_UPDATEBINARYCONTENT".EDA_ITEMS_UPDATEBINARYCONTENT
After taking a look at this procedure it looks like the following statement could be the root cause
SELECT 1 FROM BINARIES WHERE ID = @iBINARY_ID AND CONTENT IS NULL
I executed it manually with @iBINARY_ID as -1, and after 2 minutes it still hadn't completed. I assume that when I insert a new multimedia component the query will be something similar (i.e. the id will not exist in the table).
The BINARIES table currently has a NON-CLUSTERED Primary Key. Maybe the solution would be to change this to a CLUSTERED Primary Key? However, I assume it is NON-CLUSTERED for a reason.
Just had a response from SDL customer support. Apparently this is a known issue related to statistics and the chosen query plan.
Running the following statement manually from SQL Server Management Studio fixes the problem (it didn't even need to complete for me):
SELECT 1 FROM BINARIES WHERE ID = -1 AND CONTENT IS NULL
Hope this helps someone else out!
Timeouts on database operations are usually an indication of a misconfiguration or a lack of maintenance. By increasing the timeout you are just working around the problem rather than solving it.
With a binaries table that big, you will want to make sure you have a proper database setup, with data files separated from your log files (on different physical partitions/disks), and possibly even multiple data files on multiple physical partitions to take advantage of performance gains.
Next to that, you will want to ensure that standard database maintenance is performed daily/hourly. Things like backing up and truncating the transaction log every hour will greatly improve your database performance (on MS SQL Server, a transaction log of more than 1GB slows the database down drastically; you should always try to keep it below that size through timely backup/truncation). Updating statistics and rebuilding indexes is also something you should not forget to do on a regular basis.
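A sketch of that maintenance routine (database, file, and path names are placeholders; BINARIES is the table from the question):
-- hourly: back up and truncate the transaction log
BACKUP LOG [Tridion_cm] TO DISK = N'E:\Backups\Tridion_cm_log.trn';

-- daily: refresh statistics and rebuild the heavily used indexes
EXEC sp_updatestats;
ALTER INDEX ALL ON dbo.BINARIES REBUILD;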
