Does MonetDBLite support auto-commit mode?

I am trying to optimize data upload in an R package using MonetDBLite. According to the MonetDB website, using LOCKED mode can speed up the upload:
LOCKED mode
In many bulk loading situations, the original file can be saved as a
backup or recreated for disaster handling. This relieves the database
system from having to prepare for recovery and saves significant
storage space. The LOCKED qualifier can be used in this situation
(and in single-user mode!) to skip the logging operation normally
performed.
However, when I try to run my COPY INTO statement with LOCKED mode I get the error:
Server says 'ParseException:SQLparser:COPY INTO .. LOCKED: only allowed in auto commit mode'.
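For reference, the statement being run looks something like this (the table name, file path, and delimiter settings here are hypothetical):
COPY INTO my_table FROM '/tmp/data.csv' USING DELIMITERS ',', '\n', '"' LOCKED;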
Reading the CRAN MonetDBLite documentation would have me believe that the standard mode is auto-commit, e.g. the documentation for dbTransaction():
dbTransaction is used to switch the data from the normal
auto-committing mode into transactional mode. Here, changes to the
database will not be permanent until dbCommit is called. If the
changes are not to be kept around, you can use dbRollback to undo all
the changes since dbTransaction was called.
but perhaps this isn't true since I'm getting the above error.
Does anyone have any insight?

Related

Prevent Access from Blocking ODBC Query Under Certain Conditions

I'm using Access VBA to call an R script that builds some charts. This R script pulls some data from the Access database via an ODBC query. I'm using library(RODBC) to make the connection from R.
If I restart Access, or run Compact/Repair, the query will always run. However, if I make other changes in the database, I'll sometimes get the following warning:
Warning messages:
1: In odbcDriverConnect(sprintf("Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=%s", :
[RODBC] ERROR: state HY000, code -3810, message [Microsoft][ODBC Microsoft Access Driver] The database has been placed in a state by an unknown user that prevents it from being opened or locked.'
And the script fails to run, because the connection couldn't be made.
What's the best way to manage/set the state of the database so the query will always run? The issue isn't directly tied to whether a table is open: I can open and close a table without a problem, and sometimes the query even runs with a table open.
Edit: The error is caused by making any sort of change in a VBA module (this is unrelated to the actual VBA call of the script; I can run the same Rscript call from the command line and reproduce the error). Now that I understand the cause, I don't think it's a big issue. Saving the VBA module sometimes seems to clear the error, although not 100% of the time.
This is by design.
Making any design change to a VBA module, form or report sets an exclusive lock on an accdb file, which remains until the Access application that has made the change closes.
Just close and re-open the file after making any design change to a form, report or VBA module.
This is one of the reasons people recommend you split the database, since then you can change the design without locking people out of the data.

Failure in sqlite_session_driver active_tick: Error from SQLite database "lasso_session": 19 constraint failed

I am having a recurring problem with Lasso 9.2.6 where the instance slows to a crawl performance-wise and throws these errors to the log:
Failure in sqlite_session_driver active_tick: Error from SQLite
database "lasso_session": 19 constraint failed
Restarting the instance solves the performance problem temporarily, but errors continue to appear.
Any recommendations for cleaning this up or resetting the session database to clear out invalid data?
Depending on traffic or logging volume, you may be overloading the SQLite tables. It's hard to say exactly what the cause is without checking, but I'd start by looking at the session settings. Consider setting the sessions to use either the memory driver or the MySQL driver (I recommend a Memory table if using MySQL).
Have a look at the size of the tables and check whether any are excessively large. You can just run ls -l /var/lasso/instances/default/SQLiteDBs/ or use a SQLite tool. The logbook and email tables are also likely suspects.
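If you do open the session database with a SQLite tool, a standard catalog query will list its tables, and counting rows in the largest ones narrows things down (the table name in the second query is hypothetical):
SELECT name FROM sqlite_master WHERE type = 'table';
SELECT COUNT(*) FROM some_large_table;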

Is it OK to change from full recovery to simple recovery in SQL Server?

I have an old database - a users membership/roles database that was set up automatically by an ASP.NET 2 application years ago.
The SQL Server version currently running is SQL Server 10.5.1617.
The users database log file is huge (the .ldf file is roughly 400 times the size of the .mdf file).
The recovery model is currently set to "Full". I understand what that means, and I don't need point-in-time restoration.
If I simply changed the recovery model to "Simple" from within SQL Server Management Studio and clicked OK to save the change, would I be risking my current database in any way? Or is SQL Server fine with making changes like this to live databases? And would the log file automatically shrink itself?
Thanks for your advice,
Mark
You should be fine; the transactions have been committed. The log file is just waiting to be backed up and thereby released. Changing to simple recovery means you can no longer take transaction log backups (and therefore can't restore to a point in time), but data is committed to the database the same way as before; log records are simply discarded once SQL Server has finished writing the transaction.
To answer both of your questions:
Changing the recovery model on a live database is safe. You shouldn't incur any downtime, blocking, etc.
The log file won't shrink itself. You may also find that, once you've set the recovery model to simple, it isn't shrinkable right away. If you're unable to shrink it, take a look at DBCC LOGINFO, specifically the 'status' column. Each row in the output of that command represents one virtual log file (VLF). The shrink command can only clear a contiguous block of inactive (i.e. status = 0) VLFs at the end of the file. TL;DR: if you've got rows with status = 2 at the bottom, wait until you don't, and then shrink. A sketch of the sequence follows.
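A minimal T-SQL sketch of that sequence (the database and logical log file names are hypothetical):
ALTER DATABASE MyUsersDb SET RECOVERY SIMPLE;
GO
USE MyUsersDb;
GO
-- Each row of output is one virtual log file (VLF); status = 0 means inactive.
DBCC LOGINFO;
GO
-- Shrink once the trailing rows show status = 0; the target size is in MB.
DBCC SHRINKFILE (MyUsersDb_log, 1);
GO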

"The transaction log for database is full due to 'LOG_BACKUP'" in a shared host

I have an ASP.NET MVC 5 website using the Entity Framework code-first approach, on a shared hosting plan. The host uses the open-source WebsitePanel as its control panel, and its SQL Server panel is somewhat limited. Today when I wanted to edit the database, I encountered this error:
The transaction log for database 'db_name' is full due to 'LOG_BACKUP'
I searched around and found a lot of related answers, but the problem is they all suggest running a query on the database. I tried running
db.Database.ExecuteSqlCommand("ALTER DATABASE db_name SET RECOVERY SIMPLE;");
from Visual Studio (in the HomeController), but I get the following error:
System.Data.SqlClient.SqlException: ALTER DATABASE statement not allowed within multi-statement transaction.
How can I solve this problem? Should I contact the support team (which is a bit lacking with this host), or can I solve it myself?
In addition to Ben's answer, you can try the queries below, as suited to your need:
USE {database-name};
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE {database-name}
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE ({database-file-name}, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE {database-name}
SET RECOVERY FULL;
GO
Update credit: @cema-sp.
To find the database file names, use the query below:
SELECT * FROM sys.database_files;
Call your hosting company and either have them set up regular log backups or set the recovery model to simple. I'm sure you know what informs the choice, but I'll be explicit anyway. Set the recovery model to full if you need the ability to restore to an arbitrary point in time. Either way the database is misconfigured as is.
Occasionally, when a disk runs out of space, the message "transaction log for database XXXXXXXXXX is full due to 'LOG_BACKUP'" will be returned when an UPDATE SQL statement fails.
Check your disk space :)
This error occurs because the transaction log has become full due to LOG_BACKUP, so you can't perform any action on the database; in this case the SQL Server Database Engine raises error 9002.
To solve this issue, you should do the following (a T-SQL sketch follows the list):
Take a full database backup.
Shrink the log file to reduce the physical file size.
Create a LOG_BACKUP.
Create a LOG_BACKUP maintenance plan to take log backups frequently.
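A minimal T-SQL sketch of those steps (the database name, logical file name, and backup paths are all hypothetical; note that in full recovery it is the log backup that frees space inside the log, so it needs to happen before the shrink can reclaim much):
-- 1. Take a full database backup.
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak';
-- 2. Back up the log, which marks its inactive portion as reusable.
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';
-- 3. Shrink the physical log file (target size in MB).
DBCC SHRINKFILE (MyDb_log, 1);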
I wrote an article with all the details on this error and how to solve it: The transaction log for database ‘SharePoint_Config’ is full due to LOG_BACKUP.
This can also happen when the log file is restricted in size:
Right-click the database in Object Explorer
Select Properties
Select Files
On the log row, click the ellipsis in the Autogrowth / Maxsize column
Change/verify that Maximum File Size is Unlimited
After changing it to unlimited, the database came back to life.
I got the same error, but from a back-end job (an SSIS job). On checking the database's log file growth settings, I found the log file was limited to 1 GB of growth. So when the job ran and asked SQL Server to allocate more log space, the growth limit caused the allocation to be declined and the job to fail. I changed the log file to grow by 50 MB with unrestricted growth, and the error went away.
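The same change can be scripted instead of using the Properties dialog; a minimal sketch, with hypothetical database and logical log file names:
-- Let the log grow in 50 MB increments with no upper size limit.
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, FILEGROWTH = 50MB, MAXSIZE = UNLIMITED);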

Can I read and write to a SQLite database concurrently from multiple connections?

I have a SQLite database that is used by two processes. I am wondering: with the most recent version of SQLite, while one process (connection) has a write transaction open on the database, will the other process be able to read from the database simultaneously?
I collected information from various sources, mostly from sqlite.org, and put it together:
First, by default, multiple processes can have the same SQLite database open at the same time, and several read accesses can be satisfied in parallel.
In the case of writing, a single write locks the database for a short time, during which nothing else, not even a read, can access the database file at all.
Beginning with version 3.7.0, a new "Write-Ahead Logging" (WAL) option is available, in which reading and writing can proceed concurrently.
By default, WAL is not enabled. To turn WAL on, refer to the SQLite documentation; a minimal example follows.
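For example, WAL can be turned on with a single pragma; once set, it persists in the database file, so later connections get WAL automatically:
PRAGMA journal_mode=WAL;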
SQLite3 explicitly allows multiple connections:
(5) Can multiple applications or multiple instances of the same
application access a single database file at the same time?
Multiple processes can have the same database open at the same time.
Multiple processes can be doing a SELECT at the same time. But only
one process can be making changes to the database at any moment in
time, however.
For sharing connections, use SQLite3 shared cache:
Starting with version 3.3.0, SQLite includes a special "shared-cache"
mode (disabled by default)
In version 3.5.0, shared-cache mode was modified so that the same
cache can be shared across an entire process rather than just within a
single thread.
5.0 Enabling Shared-Cache Mode
Shared-cache mode is enabled on a per-process basis. Using the C
interface, the following API can be used to globally enable or disable
shared-cache mode:
int sqlite3_enable_shared_cache(int);
Each call to sqlite3_enable_shared_cache() affects subsequent database
connections created using sqlite3_open(), sqlite3_open16(), or
sqlite3_open_v2(). Database connections that already exist are
unaffected. Each call to sqlite3_enable_shared_cache() overrides all
previous calls within the same process.
I had a similar code architecture to yours: a single SQLite database that process A read from while process B wrote to it concurrently, based on events (in Python 3.10.2, using the most up-to-date sqlite3 version). Process B was continually updating the database while process A was reading from it to check the data. My issue was that it worked in debug mode, but not in "release" mode.
In order to solve my particular problem I used Write-Ahead Logging, which is referenced in previous answers. After creating my database in process B (write mode) I added the line:
cur.execute('PRAGMA journal_mode=wal')
where cur is the cursor object created from establishing the connection.
This set the journal to WAL mode, which allows concurrent access for multiple readers (but still only one writer). In process A, where I was reading the data, before connecting to the same database I included:
time.sleep(0.5)
Setting a sleep timer before the connection was made to the same database fixed my issue with it not working in "release" mode.
In my case, I did not have to manually set any checkpoints, locks, or transactions. Your use case might be different from mine, however, so research is most likely required. Nevertheless, I hope this post helps and saves everyone some time!