How can I diff 2 SQLite files?

Using SQLite-manager (in its XUL form) on a Mac.
How can I diff a SQLite file from one submitted by someone else on the team, and incorporate his changes?
Thanks.

I believe you could use the following, in combination:
$ diff sqlite-file-1.sql sqlite-file-2.sql > sqlite-patch.diff
$ patch -p0 sqlite-file-1.sql sqlite-patch.diff
I hope that works for you. Otherwise, I highly suggest consulting the man-pages:
$ man diff
$ man patch
EDIT: Alright, here's the whole walk-through.
First, dump the databases:
$ sqlite3 test1.sql .dump > test1.sql.txt
$ sqlite3 test2.sql .dump > test2.sql.txt
Next, generate a diff file:
$ diff -u test1.sql.txt test2.sql.txt > patch-0.1.diff
And, finally, to apply the patch:
$ patch -p0 test1.sql.txt patch-0.1.diff
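Note that this patches the text dump, not the binary database itself. To get a working database back, feed the patched dump into a fresh database; a minimal sketch (merged.db is a placeholder name):
$ sqlite3 merged.db < test1.sql.txt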

We can use the sqldiff utility program:
https://www.sqlite.org/sqldiff.html
It compares the source database to the destination database and generates SQL commands to make the source equivalent to the destination:
Any differences in the content of paired rows are output as UPDATEs. Rows in the source database that could not be paired are output as DELETEs. Rows in the destination database that could not be paired are output as INSERTs.
You have to download the SQLite sources and compile it from the tool folder.
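Once built, usage is a single command; a sketch with placeholder file names:
$ sqldiff test1.db test2.db > changes.sql
$ sqlite3 test1.db < changes.sql
The first command writes the SQL needed to turn test1.db into test2.db; the second applies it, after which the two databases should hold equivalent content.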

Maybe using this tool: http://download.cnet.com/SQLite-Diff/3000-10254_4-10894771.html ?
But you can use the solution provided by @indienick, provided you first dump the binary SQLite database with something like: sqlite3 x.db .dump > output.sql
Hope this helps,
Moszi

There is a free web tool to compare SQLite databases, both schema and data -
https://ksdbmerge.tools/for-sqlite-online
Works on wasm-compatible browsers including Safari on Mac.
I am the author of that tool; it is an Uno Platform port of my desktop tool for SQLite.
Unlike the few other online tools I could find, it does not upload your data to the server to generate the data diff. It does, however, upload the schema to handle the rest of the logic.
As of June 2022 it is a diff-only tool; the web version does not produce any synchronization scripts.

Related

Fuseki configuration

As outlined in http://wiki.bitplan.com/index.php/Apache_Jena#Script_to_start_Fuseki_server
I have been avoiding the complexity of Fuseki configuration files and starting the server from a script for my use cases, in which I only need one dataset/endpoint. For multiple datasets/endpoints I simply used multiple servers.
Descriptions like:
https://jena.apache.org/documentation/fuseki2/fuseki-config-endpoint.html
and questions like:
fuseki Multiple services found exception
have been intimidating me, since there seem to be so many options and no straightforward way to simply say: please use these datasets from the following directories, as the command-line version can do for one dataset.
Just look at:
https://users.jena.apache.narkive.com/MNZHLT25/multiple-datasets-on-fuseki
where the user expectation:
java -jar fuseki-0.1.0-server.jar --update --loc=data /dataset --loc=data2 /dataset2
is unfortunately not fulfilled. Instead:
http://jena.apache.org/documentation/serving_data/index.html#fuseki-configuration-file
was the answer at the time, but that link is now outdated.
So obviously there are people out there getting Fuseki to work with multiple datasets. But how do they do it?
I know how to load a TDB store from a triple file via the command line. I know that I could use the web GUI to set up datasets and load data, but that won't work for my multi-million (and partly multi-billion) triple files.
What is a (hopefully simple) example of loading multiple triple files and making the results available as different datasets on the same Fuseki server, with the SPARQL endpoints running (partly read-only)?
https://jena.apache.org/documentation/fuseki2/fuseki-layout.html gives a hint on the layout of files.
Using the script to start Fuseki, I inspected the run directory, which in my case was found at:
apache-jena-fuseki-3.16.0/run
There are two subdirectories which are initially empty and stay so if you run things from the command line:
configuration
databases
By adding a dataset via the web GUI at http://localhost:3030, a directory named after the dataset, in this case:
databases/cr
and a configuration file
configuration/cr.ttl
are created.
For smaller datasets, data can now be added via the web GUI. For bigger datasets, a copy of or symlink to the originally loaded TDB data is necessary in the databases directory.
Example symlinks:
zeus:databases wf$ ls -l
total 48
drwxr-xr-x 4 wf admin 136 Sep 14 07:43 cr
lrwxr-xr-x 1 wf admin 27 Sep 15 11:53 dblp -> /Volumes/Torterra/dblp/data
lrwxr-xr-x 1 wf admin 26 Sep 14 08:10 gnd -> /Volumes/Torterra/gnd/data
lrwxr-xr-x 1 wf admin 42 Sep 14 07:55 wikidata -> /Volumes/Torterra/wikidata2020-08-15/data/
By restarting the server without a --loc option:
nohup java -jar fuseki-server.jar &
the configurations are automatically picked up.
The good news is that you do not have to bother with the details of the config files this way as long as you do not have any special needs.
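A condensed sketch of the whole procedure for one additional dataset (the dataset name gnd, the paths, and the dump file name are placeholders; tdbloader ships with the plain apache-jena distribution):
$ apache-jena-3.16.0/bin/tdbloader --loc=/Volumes/Torterra/gnd/data gnd-dump.nt
$ ln -s /Volumes/Torterra/gnd/data apache-jena-fuseki-3.16.0/run/databases/gnd
$ nohup java -jar fuseki-server.jar &
This assumes the gnd dataset was first created once via the web GUI (so that configuration/gnd.ttl exists) and that the empty databases/gnd directory the GUI made was removed before creating the symlink.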

Upgrading MariaDB 10.1.32 version to 10.3.7

Is it possible to upgrade from 10.1.x to 10.3.x directly in one step, or do I have to upgrade first to 10.2.x and then to 10.3.x?
This is an important question regarding upgrading our production MariaDB servers, and I couldn't find any answer or notes regarding an upgrade from the 10.1 series to the 10.3 series.
So do I have to do it as follows:
10.1.32 --> 10.2.16
10.2.16 --> 10.3.7
or
once 10.1.32 --> 10.3.7
In general, for any upgrade of a critical production environment:
The best approach is to use or create a test environment that is as close as possible to your production environment and test the upgrade there.
Make backups and prepare a rollback so you are ready to undo your changes.
For MariaDB specifically: to quote from other related questions on their support pages:
The main concern with skipping versions is that, while upgrading one major version is usually well-tested, skipping versions is not, so you may bump into an incompatibility.
Even if you find anecdotal indications that it worked for others, a database engine like MariaDB has possible complexities with different storage engines and the like that might make it more tricky in certain setups than in others.
1 : Shut down or quit your XAMPP server from the XAMPP control panel.
2 : Download the ZIP version of MariaDB.
3 : Rename the xampp/mysql folder to mysql_old.
4 : Unzip or extract the contents of the MariaDB ZIP file into your XAMPP folder.
5 : Rename the MariaDB folder, called something like mariadb-5.5.37-win32, to mysql.
6 : Rename xampp/mysql/data to data_old.
7 : Copy the xampp/mysql_old/data folder to xampp/mysql/.
8 : Copy the xampp/mysql_old/backup folder to xampp/mysql/.
9 : Copy the xampp/mysql_old/scripts folder to xampp/mysql/.
10 : Copy mysql_uninstallservice.bat and mysql_installservice.bat from xampp/mysql_old/ into xampp/mysql/.
11 : Copy xampp/mysql_old/bin/my.ini into xampp/mysql/bin.
12 : Edit xampp/mysql/bin/my.ini using a text editor like Notepad. Find skip-federated and add a # in front (to the left) of it to comment out the line if it exists. Save and exit the editor.
13 : Start up XAMPP.
Note: if you can't get MySQL to start from the XAMPP control panel, add a skip-grant-tables line to the xampp/mysql/bin/my.ini file.
14 : Run xampp/mysql/bin/mysql_upgrade.exe.
15 : Shut down and restart MariaDB (MySQL).
If MySQL still does not start, follow the note below (important!).
Note: if the MySQL error log contains a line like c:\xampp\mysql\bin\mysqld.exe: unknown variable 'innodb_additional_mem_pool_size=2M', remove or comment out that statement in the xampp/mysql/bin/my.ini file.
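For reference, a sketch of what the relevant section of my.ini might look like after these edits (which lines appear depends on your original file; skip-grant-tables only if the server refused to start):
[mysqld]
# skip-federated
# innodb_additional_mem_pool_size=2M
skip-grant-tables
Remember to remove skip-grant-tables again once the upgrade has succeeded, since it disables password checks entirely.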
Help from this link.

No such function: sqlcipher_export()

I start sqlite3 in the terminal:
-macbook:sqlTest user1$ sqlite3 sqlTest.sqlite
SQLite version 3.7.13 2012-07-17 17:46:21
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> ATTACH DATABASE 'encrypted.sqlite' AS encrypted KEY 'testkey';
sqlite> SELECT sqlcipher_export('encrypted');
Error: no such function: sqlcipher_export
sqlite>
What causes "Error: no such function: sqlcipher_export"?
As answered on the mailing list:
The first step is to build the sqlcipher command line tool, as described here:
http://sqlcipher.net/introduction/
Once you have done this, you should run the command like this:
$ ./sqlcipher sqlTest.sqlite
or
$ /full/path/to/sqlcipher/sqlcipher sqlTest.sqlite
On Unix systems, if you don't provide an explicit path for a command, the system will look for the program in $PATH. On OS X, the system ships with a sqlite3 command, so you've probably been using that instead of the version compiled with SQLCipher. Please let us know if that resolves the problem. Thanks!
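For reference, a sketch of the full export session once you are running the SQLCipher build of the shell (database names and key are the placeholders from the question, and the sequence follows the SQLCipher documentation):
$ ./sqlcipher sqlTest.sqlite
sqlite> ATTACH DATABASE 'encrypted.sqlite' AS encrypted KEY 'testkey';
sqlite> SELECT sqlcipher_export('encrypted');
sqlite> DETACH DATABASE encrypted;
The same ATTACH/SELECT sequence from the question should now succeed, because the sqlcipher binary has sqlcipher_export() compiled in.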

How to find which process generates the most read/write disk operations

My cloud server has started generating heavy disk read/write operations. I want to find a script that generates a list of the top processes by I/O (process name | TOTAL | READ | WRITE).
You can use iotop to see the reads and writes of each process using a top like interface.
Another way is to look at the /proc/[PID]/io files.
Example:
$ cat /proc/1944/io
read_bytes: 17961091072
write_bytes: 8192000
cancelled_write_bytes: 32768
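A minimal sketch of such a script, summing the per-process counters from /proc (assumes Linux with per-process I/O accounting and root privileges; the output columns are READ WRITE PID NAME rather than the exact layout asked for):
#!/bin/sh
# list the 20 processes that have read the most bytes, with bytes written alongside
for pid in /proc/[0-9]*; do
  [ -r "$pid/io" ] || continue
  name=$(cat "$pid/comm" 2>/dev/null)
  rb=$(awk '/^read_bytes/ {print $2}' "$pid/io")
  wb=$(awk '/^write_bytes/ {print $2}' "$pid/io")
  echo "$rb $wb ${pid#/proc/} $name"
done | sort -rn | head -20
Note that these counters are cumulative since each process started, not a current rate; run the script twice and compare if you need a rate.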
There's a monitor much like top available: Iotop.
Since you're using Debian Linux, you can simply retrieve it via APT:
apt-get install iotop
Done.

How can I use the DBLinq DBMetal tool with SQLite?

Please provide samples of the DBMetal command-line tool being used to generate a code file from an SQLite database.
Surfing the internet I found the following command:
DBMetal.exe /namespace:Namespace /provider:SQLite "/conn:Data Source=database.db" /code:CodeFile.cs
However, the DBMetal version that I downloaded gave an error (Unable to resolve databaseConnectionType: System.Data.SQLite.SQLiteConnection).
I fixed it by downloading the code from the trunk (http://dblinq2007.googlecode.com/svn/trunk), compiling it, and using the generated DBMetal.exe with the above command.
