I'm trying to back up my db using a BASH script while having an application use it at the same time.
The application is not a heavy write application.
I've seen different solutions on SO, but want to confirm the correct way. I want users to be able to read at any time during the backup; writing is not a concern since I do all the writing (it's a blog app).
Are there any dangers of corruption using:
sqlite3 /var/www/ghost/content/data/ghost.db <<EOF
.timeout 20000
.backup tmp.db
EOF
The .backup command uses SQLite's backup API, which is designed for online backups.
As long as you do not have broken hardware or software (which has nothing to do with backups and would affect any writes), this will work fine.
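If you want to run this from a script, a minimal sketch could look like the following; the backup destination and file naming are assumptions, so adjust them to your setup.
#!/usr/bin/env bash
# Online backup of the live database using SQLite's backup API, exactly as in
# the snippet above; the destination path below is an assumption.
set -euo pipefail

DB=/var/www/ghost/content/data/ghost.db
DEST=/var/backups/ghost/ghost-$(date +%F).db

mkdir -p "$(dirname "$DEST")"

sqlite3 "$DB" <<EOF
.timeout 20000
.backup $DEST
EOF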
Hello, I wanted to ask: to import the .sql updates (after a git pull), do I have to assemble and merge them with the bash script (app/db_assembler), or is it OK to just launch worldserver.exe and let it do it?
Thanks
Short answer
No, the worldserver process will NOT update your database.
You need to use the DB-assembler bash script, as the instructions say.
More details
This is different from TrinityCore, where updating the database is a feature of the worldserver process.
In AzerothCore this task is the responsibility of an external script, written in bash: the DB-assembler.
The advantages of having an external script do this task instead of the worldserver are:
You don't need to compile and run the worldserver if you only need to create the database (useful when using or developing tools that only need the DBs)
The DB assembler is able to generate a single merged SQL update file for each DB (by merging all the individual SQL update files), which can be useful for debugging or development purposes
In general, it is better to delegate different tasks to different software components, instead of having a monolith do everything
You can also make your own merge script and apply the result manually (a rough sketch follows), or just merge with db_assembler.sh and then apply manually.
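For example, something along these lines; the update directory and the MySQL user/database names below are assumptions about an AzerothCore checkout, not exact paths.
#!/usr/bin/env bash
# Merge the pending world-DB updates into one file and apply it by hand.
# UPDATE_DIR and the MySQL credentials/database are illustrative assumptions.
set -euo pipefail

UPDATE_DIR=data/sql/updates/db_world      # assumed location of the .sql updates
MERGED=/tmp/world_updates_merged.sql

# Concatenate the updates in lexical (roughly chronological) order.
cat "$UPDATE_DIR"/*.sql > "$MERGED"

# Apply the merged file manually with the MySQL client.
mysql -u acore -p acore_world < "$MERGED"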
Otherwise, refer to Francesco's answer.
I have an application that updates some files on a Unix server. Since I cannot modify this application, is there any way I can make sure that these files are copied before each update, so I can have a history of the changes?
Is there a way/tool in Unix so I can do that?
If you are on Linux (specifically), you could use the inotify(7) facilities (perhaps via incrontab ...)
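For example, a small copy-on-change loop with inotifywait from the inotify-tools package; the watched file and backup directory below are placeholders.
#!/usr/bin/env bash
# Copy the watched file to a timestamped backup every time it is written.
# Requires inotify-tools; the paths are illustrative placeholders.
set -euo pipefail

WATCHED=/path/to/updated/file
BACKUPS=/path/to/backups
mkdir -p "$BACKUPS"

inotifywait -m -e close_write "$WATCHED" | while read -r _; do
    cp -p "$WATCHED" "$BACKUPS/$(basename "$WATCHED").$(date +%Y%m%d%H%M%S)"
done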
Alternatively, you might periodically run (through some crontab(5) entry) a script doing some make with your particular Makefile (since GNU make is designed to care about timestamps), managing e.g. backups. Or you could periodically run some rsync command.
However, it smells like you need some revision control (also known as a version control system). I strongly recommend git; you could use it before and after running your application, e.g. by writing some wrapping shell script that does that (a sketch follows).
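A wrapper could look roughly like this; the repository location and the application command are assumptions.
#!/usr/bin/env bash
# Commit the tracked files before and after the application runs, so every
# run leaves a diffable history. Paths and the application command are assumptions.
set -euo pipefail

cd /path/to/data/directory        # assumed to already be a git repository

git add -A
git commit --allow-empty -m "state before run $(date -Is)"

/path/to/your/application "$@"    # the program that modifies the files

git add -A
git commit --allow-empty -m "state after run $(date -Is)"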
But there is probably no universal solution (e.g. what if the monitored application keeps a file descriptor open for a long time and writes the file little by little...). You should explain much more about what is happening and what you want ...
I'd like to know how the .dump command affects other applications connected to the same database. I'd like to know this for the following journal modes:
DELETE (the default mode)
WAL (write-ahead-logging)
From reading other posts on this forum, .backup uses the online backup API of SQLite. It would be great to have this confirmed as well.
Thanks in advance!
The .dump command reads the contents of the database normally, just as if you would do a bunch of SELECT queries inside a transaction.
This means that when not using WAL, other connections cannot write as long as the dump is running.
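For reference, a dump is typically taken like this; the paths are placeholders.
# Write the whole database out as SQL text. In rollback-journal (DELETE) mode this
# read transaction blocks writers for its duration; in WAL mode writers can proceed.
sqlite3 /path/to/app.db ".dump" > /path/to/app_dump.sql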
I have been developing locally for some time and am now pushing everything to production. Of course I was also adding data to the development server without thinking that I hadn't reconfigured it to be Postgres.
Now I have a SQLite DB whose information I need to be in a Postgres DB on a remote VPS.
I have tried dumping to a .sql file but am getting a lot of syntax complaints from Postgres. What's the best way to do this?
For pretty much any conversion between two databases the options are:
Do a schema-only dump from the source database. Hand-convert it and load it into the target database. Then do a data-only dump from the source DB in the most compatible form of SQL dump it offers. Try loading that into the target DB. When you hit problems, script transformations to the dump using sed/awk/perl/whatever and try again. Repeat until it loads and the results match. (A rough sketch of this workflow follows after this list.)
Like (1), hand-convert the schema. Then write a script in your preferred language that connects to both databases, SELECTs from one, and INSERTs into the other, possibly with some transformations of data types and representations.
Use an ETL tool like Talend or Pentaho to connect to both databases and convert between them. ETL tools are like a "somebody else already wrote it" version of (2), but they can take some learning.
Hope that you can find a pre-written conversion tool. Heroku has one called sequel that will work for SQLite -> PostgreSQL; the question is whether it is available without Heroku and able to function without all the other Heroku infrastructure and code.
After any of those, some post-transfer steps, like using setval() to initialize sequences, are typically required.
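As a compressed sketch of the dump-and-massage route from option (1): the database names are placeholders, and the sed rules are only examples of the kind of fix-ups that are usually needed, not a complete conversion.
# Dump the SQLite database (schema and data together) and strip a couple of
# SQLite-specific lines before trying to load it into PostgreSQL.
sqlite3 app.db ".dump" > dump.sql
sed -e '/^PRAGMA/d' \
    -e '/sqlite_sequence/d' \
    dump.sql > dump.pg.sql
psql -d targetdb -f dump.pg.sql   # fix whatever fails, re-run until it loads cleanly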
Heroku's database conversion tool is called sequel. Here are the ruby gems you need:
gem install sequel
gem install sqlite3
gem install pg
Then this worked for me for a sqlite database file named 'tweets.db' in the current working directory:
sequel -C sqlite://tweets.db postgres://pgusername:pgpassword@localhost/pgdatabasename
PostgreSQL supports "foreign data wrappers", which allow you to directly access any data source through the DB, including sqlite. Even up to automatically importing the schema. You can then use create table localtbl as (select * from remotetbl) to get your data into the actual PG storage.
https://wiki.postgresql.org/wiki/Foreign_data_wrappers
https://github.com/pgspider/sqlite_fdw
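A rough sketch of that route with sqlite_fdw; the server, file and table names are placeholders, and the exact options (including IMPORT FOREIGN SCHEMA support) depend on the FDW version you install.
psql -d targetdb <<'EOF'
-- Assumes the sqlite_fdw extension is installed on the PostgreSQL server.
CREATE EXTENSION sqlite_fdw;
CREATE SERVER sqlite_server FOREIGN DATA WRAPPER sqlite_fdw
    OPTIONS (database '/path/to/app.db');
-- Pull the SQLite tables in as foreign tables, then copy one into PG storage.
IMPORT FOREIGN SCHEMA public FROM SERVER sqlite_server INTO public;
CREATE TABLE localtbl AS SELECT * FROM remotetbl;
EOF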
I have a Berkeley DB file that is quite large (~1GB) and I'd like to replicate small changes that occur (weekly) to an alternate location without having the entire file be re-written at the target location.
Does rsync properly handle Berkeley DBs with its block-level algorithm?
Does anyone have an alternative to only have changes be written to the Berkeley DBs files that are targets of replication?
Thanks!
Rsync handles files perfectly, at the block level. Problems with databases can come into play in a number of ways:
Caching
File locking
Synchronization/transaction logs
If you can ensure that no applications have the Berkeley DB open during the rsync, then rsync should work fine and offer a significant advantage over copying the entire file. However, depending on the configuration and version of BDB, there are transaction logs. You probably want to investigate the same mechanisms used for backups and hot backups. They also have a "snapshot" feature that might better facilitate a working solution.
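If you can quiesce the database for the transfer, a sketch along these lines may be enough; the paths and remote host are placeholders.
# Sketch only: make sure no application has the environment open (or take a
# hot backup first, e.g. with BDB's db_hotbackup utility) before syncing.
rsync -av --partial /var/lib/mydb/ backup@remote:/var/lib/mydb-replica/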
You should probably read this carefully: http://www.cs.sunysb.edu/documentation/BerkeleyDB/ref/transapp/archival.html
I'd also recommend you consider using replication as an alternative solution that is blessed by BDB https://idlebox.net/2010/apidocs/db-5.1.19.zip/programmer_reference/rep.html
They now call this High Availability -> http://www.oracle.com/technetwork/database/berkeleydb/overview/high-availability-099050.html