InterSystems Caché PCK file usage

I'm working with a customer using the InterSystems Caché database environment, and they are asking what's the best way to deal with the .pck file that's part of the database structure. When I tried to research this so I could understand their question, nearly everywhere I looked I came up short on what this file is, where it's located, and how or why it's problematic.
Any advice would be helpful.
Tom

Now that we've established that you're actually talking about the LCK file, things are clearer.
The .lck file is a kind of lock file for databases in Caché. It sits next to the mounted database (the CACHE.DAT file) the whole time the system is running and prevents another Caché instance from writing to that database. When Caché shuts down cleanly, it removes the lck files it created.
Since you touched on backups of Caché databases: if lck files end up in a backup, something is going wrong in the backup process. Depending on the backup method you choose, lock files may not even exist while the backup runs; in any case, you should not back them up. You mentioned a freeze in a comment; in that case lock files can still exist, and copying only the CACHE.DAT files should be safe. Without a freeze, however, copying databases from a running server is quite dangerous, and nobody can guarantee that the copy will be free of integrity errors.
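As a rough sketch of the freeze-then-copy approach on a Unix install (the instance name and paths here are examples only; check the InterSystems documentation for the exact invocation on your platform and version):
# suspend writes (instance name "CACHE" and paths are hypothetical)
csession CACHE -U %SYS "##Class(Backup.General).ExternalFreeze()"
# copy the database file while writes are frozen
cp /intersystems/cache/mgr/mydb/CACHE.DAT /backup/mydb/CACHE.DAT
# resume writes
csession CACHE -U %SYS "##Class(Backup.General).ExternalThaw()"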

Related

Drupal to Drupal Migration Across Servers

I am in the process of migrating a D7 site from one server to another. I have successfully exported and uploaded the settings to the new site using Features, but I need to get the content over to the site as well. I've been looking at several modules to try and solve this problem, but I have not found anything suitable for this task. Please let me know if I am overlooking a really simple solution.
Thanks!
Mark
The easiest solution is to export a database dump and import it into your new server. You can do it with phpMyAdmin, but I recommend using Drush.
This way you can simply do a database dump via:
drush sql-dump > ~/sql-dump-file-name.sql
and later import via:
drush sql-cli < ~/sql-dump-file-name.sql
Also copy your files directory, which is located in sites/default/files, from the old server to the new server.
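For example, with rsync (the paths and hostname are placeholders for your own setup):
# copy the Drupal files directory to the new server; adjust paths and host to your environment
rsync -avz /var/www/drupal/sites/default/files/ user@newserver:/var/www/drupal/sites/default/files/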
I've successfully used the Backup and Migrate module for these tasks. True, creating a dump and then loading it into the other database works, but that typically also copies all the cache tables.
The backup_migrate module lets you save backups on your local server, but also to your own hard disk, from where you can upload them again to the other site.
A neat thing here is that you can exclude tables, such as cache tables, which makes the transfer much faster.
Obviously you need a core installation on the other end, and the backup_migrate module already installed for this to work, but I assume that since you only ask about the db, you must have mirrored the file structure already (excluding the settings files).
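If Drush is available on both ends, Backup and Migrate also ships Drush integration; from memory the backup command looks roughly like the following, so check drush help for the exact command names in your version:
# create a backup using Backup and Migrate's drush integration (command name from memory; verify with drush help)
drush bam-backup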

Write-Ahead Logging and Read-Only mode compatible in SQLite3?

Open read-only
I have a sqlite3 file on a filesystem that belongs to a different user than is running the reading process. I want the reading process to be able to read the file in read-only mode, so I'm passing SQLITE_OPEN_READONLY. I would expect that to work. Surely the idea is that read-only mode works on files that we don't want to write to?
When I prepare my first statement I get
unable to open database file
Similarly, if I run the sqlite3 command-line tool I get the same result unless I sudo, which seems to confirm that the issue is writability rather than anything else.
Journal files
The answer to this question seems to suggest that if there are journal files around then read-only access isn't possible.
Why are there journal files? Because another process is writing to the file while my user process is trying to open it read-only. To allow this, I am using Write-Ahead Logging, which produces two journal files, -shm and -wal. True enough, if I stop the writing process and remove the journal files, my user process can open the database in read-only mode.
Incompatibility?
So I have two situations:
If the file belongs to both the writing process and the read-only process, write-ahead logging lets process A write while process B reads.
If the file belongs to the writing process but not to the read-only process, the read-only process is blocked from opening it read-only.
How do I achieve both of these? To spell it out, I want:
Writing process owns database
Read-only process does not own database
Read-only process cannot write to database
Write-ahead logging is enabled on database
Seems like a simple set of requirements, but I can't see an obvious solution.
EDIT: Going by this documentation, it looks like this isn't possible. Can you suggest any alternative ways to achieve the above?
Yes, WAL-journaled databases cannot be opened read-only, explicitly or otherwise (i.e. in the case where the database file is read-only to the process).
If you require that the read-only process absolutely not be allowed to modify the database file, then the only thing that comes to mind is to have the writing process maintain an additional, non-WAL-journaled copy of the database.
Bottom line: to the best of my knowledge, WAL and read-only can't be done.
I think what the documentation is saying is that the WAL database itself may not be present on a readonly media, which does not necessarily mean you cannot use SQLITE_OPEN_READONLY. In fact, I have successfully opened two connections, a read-write as well as one with SQLITE_OPEN_READONLY, both on a WAL sqlite database. These work just fine. I tested an INSERT query using the read-only connection and the statement correctly returned an error that the database is read-only.
Just make sure that the database is stored on media with write access, as a -shm file needs to be created and maintained; even a 'read-only' connection may actually write something to disk physically, which doesn't mean that it can modify data using SQL.
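As a quick sanity check from the command line (the path and table name are placeholders, and the second form assumes your sqlite3 build accepts URI filenames):
# open read-only; the connection still needs to be able to create/attach the -shm and -wal files
sqlite3 -readonly /path/to/app.db 'SELECT count(*) FROM some_table;'
# immutable=1 skips locking and shared memory entirely, but is only safe when nothing is writing to the database
sqlite3 'file:/path/to/app.db?immutable=1' 'SELECT count(*) FROM some_table;'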

Can you use rsync to replicate block changes in a Berkeley DB file?

I have a Berkeley DB file that is quite large (~1GB) and I'd like to replicate small changes that occur (weekly) to an alternate location without having the entire file be re-written at the target location.
Does rsync handle Berkeley DBs properly with its block-level algorithm?
Does anyone have an alternative that writes only the changes to the Berkeley DB files at the replication target?
Thanks!
Rsync handles files perfectly, at the block level. The problem with databases can come into play in a number of ways.
Caching
File locking
Synchronization/transaction logs
If you can ensure that no applications have the Berkeley DB open for the duration of the rsync, then rsync should work fine and offer a significant advantage over copying the entire file. However, depending on the configuration and version of BDB, there are transaction logs. You probably want to investigate the same mechanisms used for backups and hot backups. They also have a "snapshot" feature that might better facilitate a working solution.
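A rough sketch of that approach (paths and the replica host are placeholders; db_hotbackup ships with most BDB distributions, but check the utilities included with your version):
# take a consistent snapshot of the environment into a staging directory
db_hotbackup -h /path/to/bdb/env -b /path/to/staging
# let rsync transfer only the changed blocks to the replica host
rsync -av /path/to/staging/ user@replica:/path/to/replica/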
You should probably read this carefully: http://www.cs.sunysb.edu/documentation/BerkeleyDB/ref/transapp/archival.html
I'd also recommend you consider using replication as an alternative solution that is blessed by BDB https://idlebox.net/2010/apidocs/db-5.1.19.zip/programmer_reference/rep.html
They now call this High Availability -> http://www.oracle.com/technetwork/database/berkeleydb/overview/high-availability-099050.html

Why do you copy the SQLite DB before using it?

From everything I have read so far, it seems as though you copy the DB from assets to a "working directory" before it is used. If I have an existing SQLite DB, I put it in assets. Then I have to copy it before it is used.
Does anyone know why this is the case?
I can see a possible use for that, where one doesn't want to accidentally corrupt the database during a write. But in that case, one would have to move the database back when done working on it; otherwise, the next time the program is run it will start from the "default" database state.
That might be another use case: you might always want to start program execution with a known data state. The previous state might have been set by an external application.
Thanks everyone for your ideas.
I think what I might have figured out is that the install cannot put a DB directly into the /data directory.
In Eclipse there is no /data directory, which is where most of the discussions I have read say to put it.
This is one of the several I found:
http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-4/#comment-37008
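For what it's worth, on a debuggable build you can confirm where the copied database ends up with adb (the package name here is just a placeholder):
# list the app's databases directory on the device; requires a debuggable build
adb shell run-as com.example.yourapp ls /data/data/com.example.yourapp/databases/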

Moving Raw MySQL Data Files to a Different Directory

I have a directory-only backup of a previous server that hosted multiple sites. I had access to a few .sql backups for our databases, but there were some that had not been backed up in that fashion. I located the .MYD, .frm, and .MYI files for the tables in my db in the /var/lib/mysql/db_name directory.
I would like to know if there is a way to get the data from these files and move them into the new, existing MySQL installation. I tried copying the files from the db folder to a db folder with the exact same name, but I still get a "Can't find db_name/table_name.frm" error when trying to access any of the tables. They do show up in the phpMyAdmin table list; the error comes up when trying to access the tables.
Is this possible? If so, how do I go about taking these table files and turning them into use-able data?
I am sorry if my question or explanation doesn't make any sense. This is part of an ongoing 11+ hour emergency server recovery project that got sprung on me today, so my brain is fried. I will answer any questions necessary.
Have you checked that the files in /var/lib/mysql/db_name/ are owned by mysql and not root? Usually copying the files in should 'just work' (certainly it has and does for me). I assume you're using the same or very similar version of MySQL?
You really need to restore everything relating to that database, including the matching 'mysql' system database from your previous installation.
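Since the files you list are .MYD/.MYI/.frm, those are MyISAM tables, which can generally be restored by copying the files into place. A hedged sketch, with the backup path and database name as placeholders:
# copy the table files into the running server's data directory
cp -a /backup/var/lib/mysql/db_name /var/lib/mysql/
# make sure the server owns them, not root
chown -R mysql:mysql /var/lib/mysql/db_name
# then verify the tables
mysqlcheck -u root -p db_name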

Resources