svn cleanup: sqlite: database disk image is malformed - sqlite

I was trying to do a svn cleanup because I can't commit the changes in my working copy, and I got the following error:
sqlite: database disk image is malformed
What can I do right now?

First, open a command prompt/terminal at the working copy root (the folder that has .svn as a child folder):
cd /path/to/repository
Download sqlite3 and put the sqlite3 executable at the root of that folder.
Then run an integrity check on the SQLite database that keeps track of the working copy (/path/to/repository/.svn/wc.db):
sqlite3 .svn/wc.db "pragma integrity_check"
That should report some errors.
Then you might be able to clean them up by doing:
sqlite3 .svn/wc.db "reindex nodes"
sqlite3 .svn/wc.db "reindex pristine"
If there are still errors after that, you still have the option of checking out a fresh copy of the repository into a temporary folder and copying the .svn folder from the fresh copy over the old one. Then the old copy should work again and you can delete the temporary folder, as sketched below.
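A minimal sketch of that fallback, assuming the corrupt working copy is the current directory and http://server/repo is a placeholder for your repository URL:
svn checkout http://server/repo /tmp/fresh-checkout
mv .svn .svn-corrupt
cp -R /tmp/fresh-checkout/.svn .svn
rm -rf /tmp/fresh-checkout .svn-corrupt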

Integrity check
sqlite3 .svn/wc.db "pragma integrity_check"
Clean up
sqlite3 .svn/wc.db "reindex nodes"
sqlite3 .svn/wc.db "reindex pristine"
Alternatively
You may be able to dump the readable contents of the database to a backup file, then slurp it back into a new database file:
sqlite3 .svn/wc.db
sqlite> .mode insert
sqlite> .output dump_all.sql
sqlite> .dump
sqlite> .exit
mv .svn/wc.db .svn/wc-corrupt.db
sqlite3 .svn/wc.db
sqlite> .read dump_all.sql
sqlite> .exit
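The same dump-and-reload can be done non-interactively, which is handy for scripting; a sketch of an equivalent command sequence (.dump already emits SQL INSERT statements):
sqlite3 .svn/wc.db ".dump" > dump_all.sql
mv .svn/wc.db .svn/wc-corrupt.db
sqlite3 .svn/wc.db < dump_all.sql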

The SVN cleanup didn't work; the .svn folder on my local system had become corrupted. So I just deleted the folder, recreated it, and updated from SVN. That solved the problem!

After a power blackout, I ran into the database disk image is malformed error, and the suggested reindex nodes command did not fix all issues due to violated constraints. The procedure described in http://mail-archives.apache.org/mod_mbox/subversion-users/201111.mbox/%3C874nybhpxi.fsf#stat.home.lan%3E did not resolve the problem either.
Solution in my case:
Checkout the svn repository again into a temporary folder
Copy, i.e. replace, the file ".svn/wc.db" from the new checkout into the corrupt one
This may be useful if your original svn checkout contains many modified or unversioned files and you don't want to switch to a fresh svn checkout.
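In commands, that is roughly (the URL and paths are placeholders):
svn checkout http://server/repo /tmp/fresh-checkout
cp /tmp/fresh-checkout/.svn/wc.db /path/to/corrupt-working-copy/.svn/wc.db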

I copied over the .svn folder from a co-worker's directory and that fixed the issue.

Do not waste your time checking integrity or deleting data from the work queue table; those are temporary fixes and the problem will hit you again after a while.
Just do another checkout and replace the existing .svn folder with the new one. Then do an update and everything should go smoothly.

check out the repository again at another location
show the hidden .svn folder
replace the wc.db file
This worked for me!

This could be a solution:
right mouse click over the project
Team -> Disconnect
select: Also delete ...
Now, re-connect again:
right mouse click over the project
Team -> Share Project
select your repository type: mine was SVN (in other cases: Git, etc.)
select your repository folder
Note:
In my case, I made a backup of my files first. (Watch your back :P)
Edit:
I am talking about the SVN plugin in Eclipse :)

Have you seen this post on the Subversion site? You could also potentially try validating and "fixing" the database directly, as described here. (Note that I'm no expert; I just did a quick Google search, so this may not be related to your issue at all.)
Personally, I'd try checking out the repo again and reapplying your changes. Not sure if that's possible in your case, though?

Throughout my research, I've found two viable solutions.
If you're using any kind of remote connection (SSH, Samba, a mounted share), disconnect/unmount, then reconnect/remount and try again; this often resolved the problem for me. After that, you can do svn cleanup or just keep working normally (depending on when the problem appeared). Rebooting my computer also fixed the problem once... yes, it's dumb, I know!
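For the unmount/remount case, a minimal sketch, assuming the share is defined in /etc/fstab and mounted at /mnt/share (a placeholder path):
sudo umount /mnt/share
sudo mount /mnt/share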
Sometimes all there is to do is rm -rf your files (or, if you're not familiar with the term, just delete your svn folder) and check out your svn repository once again. Please note that this does not always solve the problem, and you might have changes you don't want to lose, which is why I use it as the second option.
Hope this helps you guys!

I solved my problem of VisualSVN Server rep-cache.db corruption.
There are two solutions.
Stop the VisualSVN Server service.
Download the sqlite3.exe shell from the SQLite website and copy it into the repo's db folder.
Type the following commands at a command prompt in the repo's db folder.
-- First Solution --
sqlite3 rep-cache.db
.clone rep-cache-new.db
Press Ctrl+C to exit sqlite3.
ren rep-cache.db rep-cache-old.db
ren rep-cache-new.db rep-cache.db
-- Second Solution --
Delete the rep-cache.db:
del rep-cache.db
It will be recreated automatically.
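Afterwards, start the service again from an elevated command prompt; this assumes the default service name, which may differ on your installation:
net start VisualSVNServer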

If you have TortoiseSVN installed, go to Task Manager and stop its process.
Then try to delete the folder; it will work.

I fixed an instance of this happening to me by deleting the hidden .svn folder and then performing a checkout on the folder against the same URL.
This did not overwrite any of my modified files; it just versioned all of the existing files instead of grabbing fresh copies from the server.

The marked answer might be the correct one for subversion cleanup, but the error is definitely a generic one, which is what led me here, to this question page.
Our project has a dependency on System.Data.SQLite, and the error message was the same:
database disk image is malformed
In my case, I executed the following check script, and then the fixes below, via SQLiteStudio 3.1.1.
pragma integrity_check
(I don't know whether these statistics will help, but I'm going to share them anyway...)
The database file had been in everyday use for 1.5 years, with the connection journal mode set to Memory, and was about 750 MB large. There were approximately 140K records per table, and six tables were this large.
After executing the integrity check script, 11 rows were returned after 30 minutes of execution time.
wrong # of entries in index sqlite_autoindex_MyTableName_1
wrong # of entries in index MyOtherTableAndOrIndexName_1
wrong # of entries in index sqlite_autoindex_MyOtherTableAndOrIndexName_2
etc...
All the results were about the indexes.
Rebuilding each of those indexes resolved my problem:
reindex sqlite_autoindex_MyTableName_1;
reindex MyOtherTableAndOrIndexName_1;
reindex sqlite_autoindex_MyOtherTableAndOrIndexName_2;
After re-indexing, the integrity check returned "ok".
I had gotten this error last year as well; back then I restored the DB from a backup and re-committed all the changes, which was a real nightmare...
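If you would rather not list every index by hand, a bare REINDEX rebuilds all indices; a sketch from the sqlite3 command line, with a placeholder database name:
sqlite3 MyDatabase.db "reindex"
sqlite3 MyDatabase.db "pragma integrity_check"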

Check the free space on the local machine where you are trying to check out the data. In my case my C: drive didn't have enough space for a complete checkout, so that's where the error was coming from :)
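On a Unix-like system, a quick way to check free space in the checkout folder (on Windows, Explorer shows it):
df -h .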

No need to worry about a directory lock, guys.
All you need to do is:
If sqlite3 is not installed, type the command below:
>sudo apt-get install sqlite3
Open the SVN database by typing this command:
>sqlite3 .svn/wc.db
Now remove the lock entries from the SVN DB:
sqlite> select * from wc_lock;
1|-1
sqlite> delete from wc_lock;
sqlite> select * from wc_lock;
sqlite> .q
Process completed. You can now work on your SVN repository and do commit, update, add, and remove operations without issue.
:-)
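The same cleanup can be done non-interactively, one quoted statement per invocation:
sqlite3 .svn/wc.db "select * from wc_lock"
sqlite3 .svn/wc.db "delete from wc_lock"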

During app development, I found that the message came from frequent and massive INSERT and UPDATE operations. Make sure to INSERT and UPDATE multiple rows or pieces of data in one single operation, as in the snippet below.
var updateStatementString: String = ""
for item in cardids {
    // Append one UPDATE per card so the whole batch is sent as a single operation.
    let newstring = "UPDATE " + TABLE_NAME + " SET pendingImages = '\(pendingImage)' WHERE cardId = '\(item)';"
    updateStatementString.append(newstring)
}
print(updateStatementString)
let results = dbManager.sharedInstance.update(updateStatementString: updateStatementString)
return Int64(results)
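The same idea in the sqlite3 shell, with hypothetical table and column names: wrapping the batch in a single transaction turns many disk writes into one.
sqlite3 app.db <<'SQL'
BEGIN;
UPDATE cards SET pendingImages = 'img1' WHERE cardId = 'a1';
UPDATE cards SET pendingImages = 'img1' WHERE cardId = 'b2';
COMMIT;
SQL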

cd to the folder containing .svn
rm -rf .svn
svn co http://mon.svn/mondepot/ . --force

Related

rsync with --fake-super not preserving owner after restore - Monterey/Synology DS920+/rsync 3

Working through a backup script, debugging backup/restore on:
Mac Studio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've downloaded Homebrew rsync 3.2.4.
On the Synology, I'm running what it shipped with: rsync 3.1.2.
For debugging, I used /Volumes/Recovery, which has files with owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID@$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/"
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
User directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
It all seems to work without error. Files are restored as expected, except that the files have their owner set to my ID.
For example, ls -laR shows (abridged):
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man page (more than once) and I see words like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should have set the options to?
So it looks like I'm not getting any responses. I guess I'll wrap this up with my observations.
In testing I've done on a user directory (with test data files), the rsync is working to save and restore files with extended attributes (I verified they got set and that they matched on restore). So I think the overall switches on the rsync commands are correct.
The problems I'm seeing on backing up and restoring the "Recovery" volume are these:
(1) All regular files have the wrong owner; the groups look correct.
(2) The one linked directory has the wrong owner and the wrong group.
I believe problem (1) is caused by my needing to use sudo rsync on the restore. I'm guessing that the files that are backed up have the correct owner/group in metadata, but the restore doesn't have the authority to set the owner to root. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to set up the /etc/sudoers file with some information. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
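If that guess is right, the restore would need root privileges locally so rsync can chown the restored files back to root; an untested sketch using the same variables as above:
sudo rsync -ahX --delete -M--fake-super "$dest" "$restore"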
Overall, my backup script is working, but I'm now starting to question whether I know enough about what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and it seems some of this may change over time when new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though there are at least some things that need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed that there is a /System tree that doesn't show up with ls /* that the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to be dependent on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a backup of just my user directories as a secondary backup, which would require reinstalling any third-party software in the event that I can't recover with Time Machine.

Preserve files/directories for rpm upgrade in .spec file(rpmbuild)

I wrote a .spec file on RHEL and I am building RPM using rpmbuild. I need ideas on how to handle the situation below.
My RPM creates an empty log directory within the installation folder when it installs for the first time, like below:
/opt/MyInstallation-1.0.0-1/ carries some executables
/opt/MyInstallation-1.0.0-1/lib/ carries shared objects (.so files)
/opt/MyInstallation-1.0.0-1/config/ carries some XML and custom configuration files (.xml, etc.)
/opt/MyInstallation-1.0.0-1/log ---> this is where the application writes logs
When my RPM upgrades MyInstallation-1.0.0-1 to MyInstallation-1.0.0-2, for example, I get everything right, as I wanted.
But my question is: how do I preserve the log files written in MyInstallation-1.0.0-1, or, more precisely, copy the log directory to MyInstallation-1.0.0-2?
I believe that if you tag the directory as %config, it is expected that the user will have files in there, so RPM will leave it alone.
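As a sketch using this package's layout: marking the config files %config(noreplace) in the %files section makes RPM keep a user-edited copy on upgrade and write the packaged version alongside it as .rpmnew.
%files
%config(noreplace) %{_prefix}/%{name}/config/*.xml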
I found a solution, or workaround, to this by trial and error :)
I am using rpmbuild version 4.8.0 on RHEL 6.3 x86_64. I believe it will work on other distros as well.
Install under one name only, like "MyInstallation", rather than "MyInstallation-<version number>-<RPM build number>", and create the log directory as a standard directory (no additional flags on it) [see the original question for the scenario]. Whenever you upgrade, you normally don't touch the log directory, so RPM will leave its contents as they are. All you have to do is ensure that you keep the line below in the install section.
%install
install --directory $RPM_BUILD_ROOT%{_prefix}/%{name}/log
Here, prefix and name are macros; they have nothing to do with the underlying concept.
Regarding config files, the link below is a very precise table that will help you guard your config files. Again, that rule can't be applied to the logs our applications create.
http://www-uxsup.csx.cam.ac.uk/~jw35/docs/rpm_config.html
Thanks & Regards.

sqlite3 - having trouble with the SQLite3 shell

I am new to SQLite3.
I am trying to create a DB in the shell. When I run the shell, it already shows:
SQLite version 3...
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
I read somewhere that I should be able to type "$" and commands like "$ sqlite3 mynotes.db".
Now, I need to be able to name my DB (like "mynotes.db") and be able to decide in which folder I want it to be saved.
Can someone help me? Cheers!
The $ somewhere was supposed to be the prompt of some shell (command processor), where you start sqlite3 and specify the database name (which is created if it does not exist).
If you're running sqlite3 without a shell (by clicking on sqlite3.exe?), it's time to try another way. CMD.EXE on Windows is as good for this job as a typical Unix shell.
As for the folder where the database will be: either cd to that folder before you run sqlite3 mynotes.db, or specify the full pathname to the database: sqlite3 "C:\Users\Me\Documents and Settings\mynotes.db" (double quotes are needed when the argument contains spaces).
When sqlite3 is started with no parameters, the database will be in memory. sqlite3 supports "attaching" other databases (see the ATTACH DATABASE description). This way, you can create and open on-disk databases even if starting sqlite3 with parameters is impossible for some reason (or too hard).
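For example, from a Windows command prompt (the folder is a placeholder):
cd "C:\Users\Me\Documents"
sqlite3 mynotes.db
sqlite> create table notes (id integer primary key, body text);
sqlite> .quit
This creates mynotes.db in the current folder.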

Run SQLite3 in Windows7 - Not Working (unless I Run as Admin)

I downloaded SQLite3 and added sqlite3.dll, sqlite3.def and sqlite3.exe to Windows/System32. System32 is in the Windows path. When I run sqlite3 test.db as per the Quick Start documentation from SQLite, sqlite3 is not recognized.
I also tried registering the DLL, but that did not work. I have looked at numerous posts here and elsewhere, but I cannot figure it out.
If I run sqlite3 at the cmd prompt, in System32, it is recognized. But obviously, unless I am missing something, I do not want to create databases in the System32 folder. (Update) When I run a command prompt as admin, sqlite3 is recognized. Is that normal?
I guess I am viewing this as the same as Java, in the sense that once Java was added to the path I could run the java command from anywhere.
Conversely, running it on Linux has been smooth.
Thank you,
diek
Aha! I had the same problem today. sqlite3.exe wouldn't be recognized if it was in C:\Windows\System32 (though interestingly it would work in C:\Windows), even though that location was in the path.
I solved this by "unblocking" the file, as it was a downloaded file that Windows doesn't trust: Properties > General > "Unblock".
This would possibly explain why running as admin worked.
It is also possibly related to this issue of 32-bit vs. 64-bit Windows, though I think it is the blocking problem above.
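If you prefer the command line, PowerShell 3 or later can do the same unblocking; run it from an elevated prompt since the file sits in System32:
Unblock-File -Path C:\Windows\System32\sqlite3.exe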

Versioning my copy of WordPress

Get WordPress and a host ready. Install it, add plugins, and customize it at will. That leaves us with many files and a database.
We are already keeping every file in a Version Control System (actually GIT SVN).
So, what's the best way to keep that "backup" fully and easily recoverable?
I believe the "best way" would be a simple and/or automated way (unlike this) to back up and recover the database with just one click.
To back up, use the tar & mysqldump commands. These are open standards, accepted everywhere, and well proven.
Backup files with tar command:
$ tar -cvzf /path/to/storage/backup.tar /path/to/wordpress/installation
To restore files, simply untar it. An example:
$ tar -C /path/to/wordpress/installation -xvzf /path/to/storage/backup.tar
Back up the database with the mysqldump command:
$ mysqldump --opt -u [uname] -p[password] [dbname] > [backupfile.sql]
To restore the database, simply execute the SQL dump file with the mysql command. An example:
$ mysql -u [uname] -p[password] [db_to_restore] < [backupfile.sql]
Make sure there's no space between -p and the password.
It will work no matter how large your database is (phpMyAdmin can't be used to back up & restore large databases). mysqldump is somewhat slower than other raw methods, but it's reliable and effective.
To automate this, use these commands as the commands of cron jobs, as in the example below.
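For example, a crontab entry that runs both backups nightly at 02:30 (paths and credentials are placeholders):
30 2 * * * tar -czf /path/to/storage/backup.tar /path/to/wordpress/installation && mysqldump --opt -u uname -ppassword dbname > /path/to/storage/backup.sql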
Alternatively, try the WordPress plugin XCloner Backup and Restore. It might help you beyond SVN. Set up a cron job for automation...
