Versioning my copy of WordPress

Get WordPress and a host ready, install it, add plugins, and customize it at will. That leaves us with many files and a database.
We are already keeping every file in a version control system (actually git-svn).
So, what's the best way to keep that "backup" fully and easily recoverable?
I believe the "best way" would be a simple and/or automated way (unlike this) to back up and recover the database with just one click.

To back up, use the tar and mysqldump commands. These are standard tools, available everywhere and well proven.
Back up the files with the tar command:
$ tar -cvzf /path/to/storage/backup.tar.gz /path/to/wordpress/installation
To restore files, simply untar it. An example:
$ tar -C /path/to/wordpress/installation -xvzf /path/to/storage/backup.tar.gz
Back up the database with the mysqldump command:
$ mysqldump --opt -u [uname] -p[password] [dbname] > [backupfile.sql]
To restore the database, simply feed the SQL dump file to the mysql command. An example:
$ mysql -u [uname] -p[password] [db_to_restore] < [backupfile.sql]
Make sure there's no space between -p and the password.
It will work no matter how large your database is (phpMyAdmin can't be used to back up and restore large databases). mysqldump is somewhat slower than raw-copy methods, but it's reliable and effective.
To automate this, run these commands as cron jobs.
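For example, here is a sketch of a nightly backup script; every path, the database name, and the schedule are placeholders to adapt:
#!/usr/bin/env bash
# backup-wordpress.sh -- nightly backup sketch; all paths and names are placeholders.
set -euo pipefail
STAMP=$(date +%F)
WP_DIR=/path/to/wordpress/installation
OUT_DIR=/path/to/storage
# Archive the files.
tar -czf "$OUT_DIR/files-$STAMP.tar.gz" "$WP_DIR"
# Dump the database; credentials can live in ~/.my.cnf so no -p is needed here.
mysqldump --opt wordpress_db > "$OUT_DIR/db-$STAMP.sql"
And a crontab entry to run it every night at 02:00:
0 2 * * * /path/to/backup-wordpress.sh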

Try the WordPress plugin XCloner Backup and Restore. It might help you beyond SVN. Set up a cron job for automation...

Do not re-create repositories after updating

We manage systems and thus manage repositories. We remove the repositories we do not use, present in /etc/yum.repos.d/<file>.
Our problem: after an update/upgrade of the system, CentOS automatically re-creates the repositories that were removed, which is an issue for us.
Question: is there a command or method to ensure repositories are not re-created after an upgrade on CentOS 7 systems?
Those repositories are created by something; the OS doesn't recreate them on its own.
Either they are restored by an update of an RPM package such as centos-release, or by an automation script you set up/run (Ansible?).
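You can check which package owns a given repo file, and therefore restores it on update, with rpm (the exact output varies with your release):
$ rpm -qf /etc/yum.repos.d/CentOS-Base.repo
centos-release-7-6.1810.2.el7.centos.x86_64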
I'm not aware of an automatic method to delete a repo; I see a couple of solutions:
Exclude centos-release from the upgradable packages by adding
exclude=centos-release
to /etc/yum.conf (a space-separated list), but this could break some updates;
Disable them with:
# yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
(this can be easily scripted and put in a cron or in your ansible playbook)
Write a small script and place it in /etc/cron.hourly/, e.g. /etc/cron.hourly/wipe_repos, containing:
#!/usr/bin/env bash
rm -f /etc/yum.repos.d/CentOS-Base.repo
or, better:
#!/usr/bin/env bash
yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
I would suggest using solution 2, since the repo files aren't overwritten by updates; the new versions are placed alongside the old ones in .rpmnew files.
This is guaranteed by the %config(noreplace) flag in the source RPM of centos-release, applied to all files in /etc/yum.repos.d/.
You can check this by downloading the .src.rpm and opening the centos-release.spec file.
$ mkdir test && cd test
$ yumdownloader --source centos-release
$ rpm2cpio centos-release*.rpm | cpio -idmv
$ cat centos-release.spec
(or search for the package online and download the src.rpm)
Then scroll down to the %files section and you'll notice:
%config(noreplace) /etc/yum.repos.d/*
%config(noreplace) means that those files are not replaced with new files during an update; instead, the files from the new RPM are saved with the extension .rpmnew, so you'll have:
$ ls /etc/yum.repos.d/
CentOS-Base.repo <-- here you set them as disabled
CentOS-Base.repo.rpmnew <-- this comes from the update, but yum will ignore it
For reference, see http://people.ds.cam.ac.uk/jw35/docs/rpm_config.html or https://serverfault.com/a/48819/.
As I already said in the comments below the question, the reason these repositories keep reappearing after an update is quite simple: the files defining the system repositories are owned by the package centos-release, and whenever this package gets updated or reinstalled, the repositories reappear.
The package centos-release is a very basic package: it provides the capabilities redhat-release and system-release, and a number of other basic packages depend on it.
[local ~]$ rpm -q --provides centos-release
centos-release = 7-6.1810.2.el7.centos
centos-release(upstream) = 7.6
centos-release(x86-64) = 7-6.1810.2.el7.centos
config(centos-release) = 7-6.1810.2.el7.centos
redhat-release = 7.6-1
system-release = 7.6-1
system-release(releasever) = 7
[local ~]$ rpm -q --whatrequires system-release
setup-2.8.71-10.el7.noarch
grubby-8.28-25.el7.x86_64
[local ~]$ rpm -q --whatrequires redhat-release
initscripts-9.49.46-1.el7.x86_64
systemd-219-62.el7_6.5.x86_64
There is no easy way out of this.
But one possible solution might be to create a customized RPM package to replace centos-release. It should contain pointers to your own repositories and, of course, needs to provide the capabilities redhat-release and system-release.
Please be aware that I have no idea whether this is actually going to work; it's just something that came to mind while thinking about the problem. It might save you the work of creating a full custom distribution derived from CentOS, which is the only other way I can think of to achieve what you seem to want.
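For what it's worth, a minimal, untested spec sketch of that idea (every name here is a placeholder):
# mycompany-release.spec -- untested sketch; all names are placeholders
Name:           mycompany-release
Version:        7.6
Release:        1
Summary:        Custom release package pointing at internal repositories
License:        GPLv2
# Provide the capabilities the other base packages depend on.
Provides:       redhat-release
Provides:       system-release
Provides:       system-release(releasever) = 7
Obsoletes:      centos-release
%description
Replaces centos-release with pointers to internal repositories.
%files
%config(noreplace) /etc/yum.repos.d/*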
My solution doesn't exactly solve the problem you pose ("how do I delete the default repository config files forever?"), but it does stabilize your config changes. If you zero out the files instead of deleting them, then system updates will leave your 'edited' versions unchanged.
I do feel that this is a hack, leaving named ghost files around, but it's one I can live with. There's no need to disable or customize redhat-release or system-release.
My problem was slightly different from yours: I maintained different configs for the same repositories for different situations, indicated by filename. On updates the original files would return, leaving me with redundant and incorrect definitions. Now they don't.
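A sketch of the zero-out approach (the file names are illustrative):
for f in /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-CR.repo; do
    : > "$f"    # truncate to zero bytes; updates then leave the 'edited' file alone
done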

Can't open Sqlite on Git Bash

I'm trying to access my SQLite database in my current directory, /c/wamp/www/laravel5, my local project folder, with Windows as my OS. I added the sqlite3 executable to that directory.
The database doesn't seem to open using Git Bash, although the same command works seamlessly in the Windows command prompt: sqlite3.exe storage/database.sqlite
Tried on Git Bash:
$ ./sqlite3.exe
and
$ ./sqlite3.exe storage/database.sqlite
These didn't work.
The error message is:
bash: sqlite3.exe: command not found
I'd like to see the database tables and schema using Git Bash since it has cooler font colors compared with the Windows cmd.
Any help would be greatly appreciated.
If you have the same problem I had, then see my question here.
In short, using "winpty" to start sqlite3 worked:
$ winpty sqlite3
Building on the previous answer by @user172431, add the following alias to your .bashrc:
alias sqlite3="winpty sqlite3.exe"
This will make the workflow a tad quicker, and you'll feel like a ninja rock star in the process. I use this shortcut via Git Bash.
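With the alias in place, inspecting the tables and schema from Git Bash looks like this (users is a hypothetical table name):
$ sqlite3 storage/database.sqlite
sqlite> .tables
sqlite> .schema users
sqlite> .exit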

Varying Vagrant Vagrants switching wordpress-trunk to git

I am trying to convert Varying Vagrant Vagrants' wordpress-trunk (or development) site to be provisioned via Git instead of SVN.
There seems to be a script (I presume it is a script even though it has no file extension) as part of the VVV project that will switch after the machine has been provisioned:
https://github.com/Varying-Vagrant-Vagrants/VVV/blob/master/config/homebin/develop_git
And the author told me that running the following from command line should do it:
vagrant ssh -c "develop_git"
but when I run that I get the following error:
Unknown cipher type 'develop_git'
There appears to be some code in the provision script that mentions git, but I have no idea what I am looking at.
So, does anyone know how to run/implement that script? Or otherwise convert the www/wordpress-trunk folder to git? Are there options somewhere to direct VVV to provision the trunk folder from git in the first place?
Contrary to the Vagrant documentation for vagrant ssh, the -c option is delivered to the ssh command and is therefore interpreted as the cipher specification.
I would suggest trying vagrant ssh -- "develop_git", since everything after "-- (two hyphens) [is] passed directly into the ssh executable".

svn cleanup: sqlite: database disk image is malformed

I was trying to do an svn cleanup because I couldn't commit the changes in my working copy, and I got the following error:
sqlite: database disk image is malformed
What can I do right now?
First, open a command prompt/terminal at the working copy root (the folder which has .svn as a child folder):
cd /path/to/repository
Download sqlite3 and put the sqlite3 executable at the root of that folder.
Then do an integrity check on the SQLite database that keeps track of the working copy (/path/to/repository/.svn/wc.db):
sqlite3 .svn/wc.db "pragma integrity_check"
That should report some errors.
Then you might be able to clean them up by doing:
sqlite3 .svn/wc.db "reindex nodes"
sqlite3 .svn/wc.db "reindex pristine"
If there are still errors after that, you still have the option to check out a fresh copy of the repository into a temporary folder and copy the .svn folder from the fresh copy over the old one. Then the old copy should work again and you can delete the temporary folder.
Alternatively
You may be able to dump the contents of the database to a backup file, then slurp it back into a new database file:
sqlite3 .svn/wc.db
sqlite> .mode insert
sqlite> .output dump_all.sql
sqlite> .dump
sqlite> .exit
mv .svn/wc.db .svn/wc-corrupt.db
sqlite3 .svn/wc.db
sqlite> .read dump_all.sql
sqlite> .exit
The svn cleanup didn't work; the .svn folder on my local system had become corrupted. So I just deleted the folder, checked it out again from SVN, and that solved the problem!
After a power blackout, I ran into the database disk image is malformed error, and the suggested reindex nodes command did not fix all issues due to violated constraints. The procedure described in http://mail-archives.apache.org/mod_mbox/subversion-users/201111.mbox/%3C874nybhpxi.fsf@stat.home.lan%3E did not resolve the problem either.
Solution in my case:
Check out the SVN repository again into a temporary folder
Copy, i.e. replace, the file .svn/wc.db from the new checkout to the corrupt one
This may be useful if your original SVN checkout contains many modified or unversioned files and you don't want to switch to a fresh checkout.
I copied over the .svn folder from a coworker's directory and that fixed the issue.
Do not waste your time on checking integrity or deleting data from the work queue table; these are temporary fixes and the problem will hit you again after a while.
Just do another checkout and replace the existing .svn folder with the new one, as sketched below. Do an update afterwards and then it should go smoothly.
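A sketch of that procedure (the URL and paths are placeholders):
$ svn checkout http://svn.example.com/repo /tmp/fresh
$ mv /path/to/broken-wc/.svn /path/to/broken-wc/.svn.corrupt    # keep the old one, just in case
$ cp -a /tmp/fresh/.svn /path/to/broken-wc/.svn
$ svn update /path/to/broken-wc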
check out this SVN working copy at another place
show the hidden .svn folder
replace the wc.db file
This worked for me!
Maybe this could be a solution:
right mouse click on the project
Team -> Disconnect
Select: Also delete ...
Now, re-connect again:
right mouse click on the project
Team -> Share Project
select your repository type: mine was SVN (in other cases: Git, etc.)
select your repository folder
Note:
In my case, I made a backup of my files first. (Stay safe :P)
Edit:
I am talking about the SVN plugin in Eclipse :)
Have you seen this post on the Subversion site? You could also potentially try validating and "fixing" the database directly, as described here. (Note that I'm no expert; I just did a quick Google search, so this may not be related to your issues at all.)
Personally, I'd try checking out the repo again and reapplying your changes. Not sure if this is possible in your case, though?
In my research, I've found two viable solutions.
If you're using any kind of remote connection (SSH, Samba, a mounted share), disconnect/unmount and reconnect/remount, then try again; this often resolved the problem for me. Afterwards you can run svn cleanup or just keep working normally (depending on when the problem appeared). Rebooting my computer also fixed the problem once... yes, I know that's dumb!
Sometimes all there is to do is rm -rf your files (or, if you're not familiar with the term, just delete your SVN folder) and check out your SVN repository once again. Please note that this does not always solve the problem, and you might also have changes you don't want to lose, which is why I use it as the second option.
Hope this helps you guys!
I solved my problem of VisualSVN Server rep-cache.db corruption.
There are two solutions.
Stop the VisualSVN Server service.
Download the sqlite3.exe shell from the SQLite website and copy it into the repo's db folder.
Type the following commands at a command prompt in the repo's db folder.
-- First solution --
sqlite3 rep-cache.db
.clone rep-cache-new.db
Press Ctrl+C to exit sqlite3.
ren rep-cache.db rep-cache-old.db
ren rep-cache-new.db rep-cache.db
-- Second solution --
Delete the rep-cache.db:
del rep-cache.db
It will be re-created automatically.
If you have TortoiseSVN installed, go to Task Manager and stop its process first.
Then try to delete the folder; it will work.
I fixed an instance of this happening to me by deleting the hidden .svn folder and then performing a checkout on the folder against the same URL.
This did not overwrite any of my modified files and simply versioned all of the existing files instead of grabbing fresh copies from the server.
The marked answer might be the correct one for svn cleanup, but the error is definitely a generic one, which is what led me to this question page.
Our project has a dependency on System.Data.SQLite, and the error message was the same:
database disk image is malformed
In my case, I executed the following check script via SQLiteStudio 3.1.1:
pragma integrity_check
(I don't know whether these statistics will help, but I'm going to share them anyway...)
The database file had been in everyday use for 1.5 years, with the connection journal mode set to Memory, and had grown to about 750 MB. There were approximately 140K records per table, and 6 tables were this large.
After 30 minutes of execution time, the integrity check returned 11 rows:
wrong # of entries in index sqlite_autoindex_MyTableName_1
wrong # of entries in index MyOtherTableAndOrIndexName_1
wrong # of entries in index sqlite_autoindex_MyOtherTableAndOrIndexName_2
etc...
All the results were about indexes.
After rebuilding each index, my problem was resolved:
reindex sqlite_autoindex_MyTableName_1;
reindex MyOtherTableAndOrIndexName_1;
reindex sqlite_autoindex_MyOtherTableAndOrIndexName_2;
After re-indexing, the integrity check returned "ok".
I had gotten this error the year before as well; back then I restored the DB from a backup and re-committed all the changes, which was a real nightmare...
Check the free disk space on the local machine where you are trying to check out the data. In my case my C: drive didn't have enough space for a complete checkout, which is where the error came from :)
No need to worry about a directory lock, guys.
All you need to do is this.
If sqlite3 is not installed, type the following command:
$ sudo apt-get install sqlite3
Open the SVN database by typing this command:
$ sqlite3 .svn/wc.db
Now all that's left is to remove the lock entries from the SVN DB:
sqlite> select * from wc_lock;
1|-1
sqlite> delete from wc_lock;
sqlite> select * from wc_lock;
sqlite> .q
Process completed. You can work on your SVN repository again and do commit, update, add, and remove operations without issue.
:-)
During app development I found that the messages came from frequent and massive INSERT and UPDATE operations.
Make sure to INSERT and UPDATE multiple rows or pieces of data in one single operation, as below.
// Build one batched statement instead of executing an UPDATE per row.
var updateStatementString = ""
for item in cardids {
    let newstring = "UPDATE \(TABLE_NAME) SET pendingImages = '\(pendingImage)' WHERE cardId = '\(item)';"
    updateStatementString.append(newstring)
}
print(updateStatementString)
// Execute all the UPDATEs in a single call.
let results = dbManager.sharedInstance.update(updateStatementString: updateStatementString)
return Int64(results)
cd to the folder containing .svn
rm -rf .svn
svn co http://mon.svn/mondepot/ . --force

How do you share zsh history between multiple machines?

I have a setup I'm pretty happy with for sharing my configuration files between machines, but I find that I often want to search back in zsh (Ctrl + R) and I can't remember which machine I typed the command on. Ideally I'd like this to search a canonical de-duped list of previous commands from any of my machines. Given that I sometimes work on these machines concurrently and without a network connection, what's a sensible way of merging history files and keeping everything in sync?
I wrote this plugin for oh-my-zsh:
An Oh My Zsh plugin for GPG encrypted, Internet synchronized Zsh history using Git
https://github.com/wulfgarpro/history-sync
Hmm, I guess you could identify your accounts somehow... then you could do the following:
modify your .zshrc to concatenate the history of the OTHER accounts from Dropbox or an SCM with the current history IF this is the first zsh launched on the current computer
then sort the entries with sort -n (by timestamp)
I guess zsh will remove the duplicates if you have setopt HIST_SAVE_NO_DUPS
Then you need to trap shell exit and copy the existing history to the Dropbox/SCM/whatever shared place (sketched below).
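A rough sketch of that idea, assuming EXTENDED_HISTORY timestamps (lines of the form ": <start>:<elapsed>;<command>") and a shared file in Dropbox; multi-line history entries would need more careful handling:
SHARED=~/Dropbox/zsh_history.shared
# On startup: merge the shared copy into the local history, ordered by timestamp.
if [[ -f $SHARED ]]; then
    cat $SHARED $HISTFILE | sort -t ':' -k 2 -n | awk '!seen[$0]++' > $HISTFILE.merged
    mv $HISTFILE.merged $HISTFILE
fi
# On exit: publish the merged history back to the shared location.
zshexit() { cp $HISTFILE $SHARED }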
You could have an NFS mount point shared by the involved machines with the .zsh_history in it. Each machine would need the HISTFILE env var set to that file path on the NFS mount.
Setting HISTFILE is compulsory, since zsh does not accept a symlink: the symlink would be replaced by a regular file holding the latest version of the symlink's target.
I have not tested what is said above, since my setup is between VMs using OS X and Parallels shared folders. With NFS, the integrity (ordering?) of the file could be something to consider, also versus the Dropbox solution.
I'm not sure, but something like https://gist.github.com/elim/f77e3e9f06b6f8a5e788 could be used to fix the versions to be merged...
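For the NFS variant, the per-machine setup could be as small as this sketch (the mount path is illustrative):
export HISTFILE=/mnt/nfs/shared/.zsh_history
setopt INC_APPEND_HISTORY SHARE_HISTORY    # write and re-read entries as you go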
I use the command below:
ssh username@example.com cat ~/.zsh/.zhistory | cat ~/.zsh/.zhistory - | sort | uniq | tee ~/.zsh/.zhistory | ssh username@example.com cat \> ~/.zsh/.zhistory
