I am looking to understand the implications of using ATTACH with databases with different read-write permissions.
I have a scenario where I need to access a large database (approx. 512MB) that resides in a read-only filesystem. There is also a small read-write database with the same schema that resides in a read-write filesystem. The read-only database provides the base data used in my scenario, with infrequent data updates stored in the read-write database.
Currently I open these two databases in separate connections and the code that maintains the connections is responsible for presenting a unified view of the data to its clients. For example, this means that the code has to merge query results from the read-only and read-write databases, etc. I realize that this setup is inelegant (and likely suboptimal) and have been looking to use the ATTACH command to create a unified view of the data in SQL rather than C++.
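For concreteness, here is roughly the kind of unified view I have in mind (table and file names here are illustrative, not my real schema):
sqlite> attach database 'readwrite.db' as rw;
sqlite> create temp view unified_foo as
   ...>     select * from main.foo
   ...>     union all
   ...>     select * from rw.foo;
Clients would then query unified_foo instead of having result sets merged in C++.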
I am wondering then whether there are any particular gotchas related to attaching read-only and read-write databases that I should be aware of. I am looking at one of the following ATTACH scenarios:
Open the read-only database as main and ATTACH the read-write database. This is my preferred solution.
Open the read-write database as main and ATTACH the read-only database.
A third alternative?
A few Google searches pointed to messages suggesting problems with scenario (1). Because I did not find a definitive answer, and because my own testing with SQLite 3.6.13 did not reveal any problems, I am posting this question.
Thank you for any insights.
The documentation doesn't seem to mention any caveats with attaching read-write databases to read-only ones.
So my assumption is that whatever you would expect to happen if the databases were opened separately should also happen when you open one and attach the other.
I put your scenario 1 to the test and it seems to work fine. Here is what I tried:
[someone#somewhere tmp]$ echo .dump | sqlite3 big_readonly_db
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE foo (a INT);
INSERT INTO "foo" VALUES(1);
INSERT INTO "foo" VALUES(2);
INSERT INTO "foo" VALUES(3);
INSERT INTO "foo" VALUES(4);
INSERT INTO "foo" VALUES(5);
COMMIT;
[someone#somewhere tmp]$ echo .dump | sqlite3 small_readwrite_db
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE bar (a INT);
COMMIT;
[someone#somewhere tmp]$ chmod -w big_readonly_db
[someone#somewhere tmp]$ ls -l big_readonly_db
-r--r--r-- 1 someone someone 2048 Apr 12 21:41 big_readonly_db
[someone#somewhere tmp]$ sqlite3 big_readonly_db
SQLite version 3.7.7.1 2011-06-28 17:39:05
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> attach database small_readwrite_db as rw;
sqlite> insert into bar select * from foo;
sqlite> select * from bar;
1
2
3
4
5
I know that I am quite late, but I ran into a similar issue and stumbled on this thread, so here's what I currently know:
@Yakov Galka is correct: SQLite does not allow writing to attached databases if the main database is opened in read-only mode. This can be verified with @user610650's test code, but opening the database in the last shell command with sqlite3 -readonly big_readonly_db. I haven't found that particular behavior described in the official documentation, though.
But @Bill Zissimopoulos' situation is slightly different: the read-only database cannot be written to due to filesystem restrictions. In this case, the main database can simply be opened normally (i.e., in read/write mode), so that the attached database can be written to as well. When a write command is issued against the read-only main database, SQLite still blocks it (or rather: isn't allowed to write) with the error: attempt to write a readonly database. So the preferred solution (scenario 1) can be used.
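To illustrate both points with user610650's files from above (this transcript is a sketch, not verbatim output):
[someone#somewhere tmp]$ sqlite3 big_readonly_db
sqlite> attach database small_readwrite_db as rw;
sqlite> insert into bar values(6);   -- attached db: works
sqlite> insert into foo values(6);   -- read-only main: fails
Error: attempt to write a readonly database
sqlite> .quit
[someone#somewhere tmp]$ sqlite3 -readonly big_readonly_db
sqlite> attach database small_readwrite_db as rw;
sqlite> insert into bar values(7);   -- -readonly connection: also fails
Error: attempt to write a readonly database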
I'm on a Mac, running 10.15.7. My SQLite version is 3.32.3.
I have a large SQLite database (16GB) against which the query behavior is kind of mystifying. I have a SELECT query which takes a very long time (between 20 and 30 seconds). If I start this query in one SQLite shell, and attempt to do an UPDATE in another SQLite shell, I can get a write lock, but the commit yields "database is locked" (which I'm pretty sure corresponds to SQLITE_BUSY):
sqlite> begin immediate transaction;
sqlite> update edges set suppressed = 1 where id = 1;
sqlite> end transaction;
Error: database is locked
As I understand SQLite, it supports parallel reads but exclusive writes, and I'm only doing a write in the shell shown here; the other one is just running an expensive SELECT. The documentation does say this:
An attempt to execute COMMIT might also result in an SQLITE_BUSY return code if another thread or process has an open read connection. When COMMIT fails in this way, the transaction remains active and the COMMIT can be retried later after the reader has had a chance to clear.
But I don't understand why, or under what circumstances this COMMIT behavior arises; it says "might", but it doesn't elaborate. Nor do I understand how this statement is consistent with the idea that SQLite is exclusive only with respect to writes.
Thanks to all in advance for an explanation.
Commenter Shawn is correct; the answer is that SQLite's default (rollback-journal) mode blocks the write lock as long as either a write or a read is underway, so a long-running SELECT in another connection keeps the COMMIT from succeeding. This is made clear here, although I couldn't find it spelled out in the core SQLite documentation.
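If it suits your durability and deployment constraints, the usual way around this is WAL journal mode, in which readers do not block the (single) writer. A minimal sketch; the setting is persistent in the database file, so it only needs to be run once:
sqlite> PRAGMA journal_mode = WAL;
wal
With the database in WAL mode, the long SELECT in the other shell no longer causes the COMMIT to return SQLITE_BUSY.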
I have created the checkpoint table ggate.checkpoint for replicat REP1, but I am still getting the following error:
2014-09-04 23:38:21 ERROR OGG-00446 Oracle GoldenGate Delivery for Oracle, REP1.prm: Checkpoint table ggate.checkpoint does not exist. Please create the table or recreate the REP1 group using the correct table.
2014-09-04 23:38:21 ERROR OGG-01668 Oracle GoldenGate Delivery for Oracle, REP1.prm: PROCESS ABENDING.
Can anyone tell me how to resolve it?
In this kind of situation, you should:
Check whether you have actually run ADD CHECKPOINTTABLE; if not, run it (see the GGSCI sketch after this list)
Check whether the checkpoint table actually exists in the database; if it has been created, try to drop it (DELETE CHECKPOINTTABLE) and recreate it (ADD CHECKPOINTTABLE)
Check that the CHECKPOINTTABLE parameter is set correctly in the GLOBALS config file
Restart the MGR and Extract/Replicat processes
Verify that the database user has access to the checkpoint table (insert, update, delete rights)
If nothing works, enable the 10046 trace event on the target database, check what the GoldenGate Replicat process is executing and where it actually fails, and then try running the same commands yourself
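A minimal GGSCI session for the first two items might look like this (user, schema, and table names follow the question; the password is a placeholder):
GGSCI> DBLOGIN USERID ggate, PASSWORD <password>
GGSCI> DELETE CHECKPOINTTABLE ggate.checkpoint
GGSCI> ADD CHECKPOINTTABLE ggate.checkpoint
GGSCI> INFO CHECKPOINTTABLE ggate.checkpoint
And the corresponding GLOBALS entry (restart MGR afterwards so it is re-read):
CHECKPOINTTABLE ggate.checkpoint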
Here is a simple troubleshooting checklist:
Are you using a traditional non-CDB database or a PDB?
Are you using Classic Architecture or Microservices Architecture? - Different approaches when adding a checkpoint table.
How are you running ADD CHECKPOINTTABLE? From GGSCI/AdminClient or from HTML5 page?
In Classic Architecture, do you have CHECKPOINTTABLE parameter set in GLOBALS? (CHECKPOINTTABLE [container.] owner.table)
Who are you logged into the database as when using DBLOGIN USERIDALIAS?
What replicat are you using? - Classic, Coordinated, Integrated, Parallel?
Check the schema where the table is supposed to be. If it is not there, you can query the DBA_TABLES view for the name of the checkpoint table and see who owns it (see the query below).
A lot of the time, when the checkpoint table cannot be created, it is because the GLOBALS file was not updated and/or the connection was not made as the correct database user.
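For the DBA_TABLES check, a query along these lines should do (the table name is taken from the error message in the question):
SELECT owner, table_name
FROM dba_tables
WHERE table_name = 'CHECKPOINT';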
I have a Python script which creates a SQLite database out of some external data. This works fine, but every time I execute a GROUP BY query on this database, I get an "Error: unable to open database file". Normal SELECT queries work.
This is an issue both for Python's sqlite3 library and for the sqlite3 CLI binary:
sqlite> SELECT count(*) FROM REC;
count(*)
----------
528489
sqlite> SELECT count(*) FROM REC GROUP BY VERSION;
Error: unable to open database file
sqlite>
I know that these errors are typically permission errors (I have read all the questions I could find on this topic on Stack Overflow), but I'm quite sure that is not the issue in my case:
I'm talking about an already-created database and read requests
I checked the permissions: Both the file and its containing folders have write permissions set
I can even write to the database: Creating a new table is no problem.
The device is not full; it has plenty of space.
Ensure that your process has access to the TEMP directory.
From SQLite's Use Of Temporary Disk Files documentation:
SQLite may make use of transient indices to implement SQL language features such as:
An ORDER BY or GROUP BY clause
The DISTINCT keyword in an aggregate query
Compound SELECT statements joined by UNION, EXCEPT, or INTERSECT
Each transient index is stored in its own temporary file. The temporary file for a transient index is automatically deleted at the end of the statement that uses it.
You can probably verify whether temporary storage is the problem by setting the temp_store pragma to MEMORY:
PRAGMA temp_store = MEMORY;
to tell SQLite to keep the transient index for the GROUP BY clause in memory.
Alternatively, create an explicit index on the column you are grouping by to prevent the transient index from being created.
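Both workarounds in sqlite3 shell form (the index name is illustrative; REC and VERSION come from the question):
sqlite> PRAGMA temp_store = MEMORY;
sqlite> SELECT count(*) FROM REC GROUP BY VERSION;   -- transient index now built in RAM
sqlite> CREATE INDEX idx_rec_version ON REC(VERSION);
sqlite> SELECT count(*) FROM REC GROUP BY VERSION;   -- can use the persistent index instead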
I'm trying to back up my SQLite database from a cron job that runs at 5-minute intervals. The database is "live", so there are queries running at the time I want to perform the backup.
I want to be sure that the database is in good shape when I back it up, so that I can rely on the backup.
My current strategy (sketched as a shell function):
backup()
{
    # Try to acquire the lock for 2 seconds, then check the database integrity.
    # If the database stays locked, the output will not be "ok".
    result=$(sqlite3 mydb.sqlite '.timeout 2000' 'PRAGMA integrity_check;')
    if [ "$result" = "ok" ]
    then
        # Perform the backup to backup.sqlite
        if sqlite3 mydb.sqlite '.timeout 2000' '.backup backup.sqlite'
        then
            # Check the consistency of the backup database
            result=$(sqlite3 backup.sqlite 'PRAGMA integrity_check;')
            if [ "$result" = "ok" ]
            then
                return 0
            fi
        fi
    fi
    return 1
}
Now, there are some problems with my strategy:
If the live database is locked, I run into problems because I cannot perform the backup then. Maybe a transaction could help there?
If something goes wrong between the PRAGMA integrity_check; and the backup, I'm f*cked.
Any ideas? And by the way, what is the difference between sqlite3's .backup and a good old cp mydb.sqlite mybackup.sqlite?
[edit] I'm running Node.js on an embedded system, so if someone suggests the SQLite online backup API with some Ruby wrapper - no chance ;(
If you want to back up while queries are running, you need to use the backup API. The documentation has a worked-out example of an online backup of a running database (example 2). I don't understand the Ruby reference; you can integrate the API into your own program or run it as a small C program alongside the real application - I've done both.
An explicit integrity_check on the backup is overkill. The backup API guarantees that the destination database is consistent and up to date. (The flip side of that coin is that if you update the DB too often while a backup is running, the backup might never finish.)
It is possible to use cp to make a backup, but not of a running database. You need to hold an exclusive lock for the entire duration of the copy, so it's not really "live". You also need to be careful to copy all of SQLite's temporary files (in particular the journal) as well as the main database file.
I'd expect the sqlite3 ".backup" command to use the backup API.
If you cannot use the backup API, you must use another mechanism to prevent the database file from being modified while you're copying it.
Start a transaction with BEGIN IMMEDIATE:
After a BEGIN IMMEDIATE, no other database connection will be able to write to the database or do a BEGIN IMMEDIATE or BEGIN EXCLUSIVE. Other processes can continue to read from the database, however.
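One way to script that with nothing but the sqlite3 shell (a sketch: .shell runs a host command while the connection, and therefore the lock, stays open; file names are from the question):
sqlite3 mydb.sqlite <<'EOF'
.timeout 2000
BEGIN IMMEDIATE;
.shell cp mydb.sqlite backup.sqlite
ROLLBACK;
EOF
Because BEGIN IMMEDIATE only takes a reserved lock, readers can keep going during the cp; it is writers that are held off.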
As per this how-to, I've successfully configured ASP.NET under IIS on my XP SP3 dev box to save session state information in SQL Server 2008 Express. I'm only using SQL Server because otherwise I was losing the session state on every recompile, which was obnoxious (having to re-login). But I'm facing an annoying issue: every time I restart SQL Server there's this error, and sometimes one or two other very similar friends:
The SELECT permission was denied on the object 'ASPStateTempSessions', database 'tempdb', schema 'dbo'.
To fix the error, I just open Management Studio and edit the User Mapping for the login/dbo I'm using on the ASPState db, and re-add tempdb to that user with all but deny permissions. Apparently, once the right permissions are there, ASP.NET is able to automatically create the tables it uses. It just can't run that CreateTempTables sproc until the right security is there.
THE QUESTION...
Is there a way to not have to re-do this on every restart of the SQL Server?
I don't really care right now about keeping the temp data across restarts, but I would like to not have to go through this manual step just to get my web app working on localhost, which uses session state variables throughout. I suppose one could resort to some kind of stored procedure within SQL Server to accomplish the task for this machine when the service starts, to not have to do it manually. I'd accept such an answer as a quick fix. But, I'm also assuming there's a better recommended configuration or something. Not seeing an answer to this on the how-to guide or elsewhere here on StackOverflow.
Both answers seem valid, but as with most things Microsoft, it's all in the setup...
First uninstall the ASPState database by using the command:
aspnet_regsql -ssremove -E -S .
Note:
-E indicates that you want to use an integrated security connection.
-S specifies which SQL Server and instance to use; the "." (dot) specifies the default local instance.
Then re-install using the command:
aspnet_regsql -ssadd -sstype p -E -S .
Note:
-sstype takes one of three options, t | p | c. The first, "t", tells the installer to host all stored procedures in the ASPState database and all data in tempdb. The second option, "p", tells the installer to persist data to the ASPState database. The last option, "c", allows you to specify a different "custom" database in which to persist the session state data.
If you reinstall using -sstype p, you then only need to grant datareader/datawriter on the ASPState database to the user making the connection (in most cases, the application pool's identity in IIS), as in the sketch below.
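For example (the app-pool login name here is just a placeholder; use whatever identity your site actually connects as):
use ASPState
go
create user [IIS APPPOOL\MyAppPool] for login [IIS APPPOOL\MyAppPool]
go
exec sp_addrolemember 'db_datareader', 'IIS APPPOOL\MyAppPool'
exec sp_addrolemember 'db_datawriter', 'IIS APPPOOL\MyAppPool'
go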
The added benefit of persisting the data is that session state is retained even after a restart of the service. The only drawback is that you need to ensure the agent cleanup job is pruning old sessions regularly (it does this by default, every minute).
Important:
If you are running a cluster, you must persist session data; your only option is sstype 'p' or 'c'.
Hope this sheds light on the issue!
For the record, I did find a way to do this.
The issue is that tempdb is recreated from the model database each time the service restarts. The gist of the solution is to create a stored procedure that does the job, and then have that procedure run at startup.
Source code (credit to the link above) is as follows:
use master
go
-- remove an old version
drop proc AddAppTempDBOwner
go
-- the sp
create proc AddAppTempDBOwner as
declare @sql varchar(200)
-- 'app' is the login your application connects as; substitute your own
select @sql = 'use tempdb' + char(13) + 'exec sp_addrolemember ''db_owner'', ''app'''
exec (@sql)
go
-- add it to the startup
exec sp_procoption 'AddAppTempDBOwner', 'startup', 'true'
go
Well done for finding the strangest way possible to do this.
The correct answer is as follows:
use master
go
EXEC sp_configure 'Cross DB Ownership Chaining', '1'
go
RECONFIGURE
go
EXEC sp_dboption 'ASPState', 'db chaining', 'true'
go
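A side note for anyone landing here on a newer SQL Server: sp_dboption was removed in SQL Server 2012, so on those versions the equivalent of the last statement is:
use master
go
ALTER DATABASE ASPState SET DB_CHAINING ON
go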