Putting Navicat files in git? - mariadb

I'm looking at switching from MySQL Workbench to Navicat because we're using MariaDB and the incompatibilities are starting to annoy me.
I'm working through the issues of getting Navicat to run on CentOS under WINE, but assume I will succeed. (edit: this failed. The "linux" version requires WINE; Navicat will sort of run with a bit of hacking, but critical features rely on MS-Windows/WINE.)
How do I get Navicat to work with git (or any other source control)? Workbench is sufficiently primitive that file changes either get picked up automatically or are ignored completely (almost always via a dialog: "file on disk has changed, reload?").
Specific problems:
when adding new query files, Navicat only seems to rescan the folder when I add a new query from within it. Is there a smart way to make it pick up files added externally? (edit: no. You can manually refresh one file at a time by right-clicking.)
model and query files are buried deep in the WINE tree. Can I relocate them, or do symlinks work? I'd rather keep all the DB-related code in one repo than have a special Navicat repo. (edit: yes, but the explanation of how to do so is lengthy.)
is there a way to merge a model file if more than one person has changed it? Workbench can't do this, but I'd really like the feature. (edit: no, never. Merge the schema SQL files instead.)
Also, bonus question: can we make bulk edits in Navicat other than by repeated use of the GUI? If I want to change (say) a bunch of columns from VARCHAR(255) to CHAR(20), I'd normally script that in SQL, but Navicat models don't do reverse engineering, only "delete the table from the model then re-import it", so there doesn't seem to be a non-tedious way to do it. (edit: no, but they might look at it in the future; a SQL-level workaround is sketched below.)
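For reference, the scripted route works fine against the server itself; the model just has to be re-imported afterwards. A minimal sketch, assuming the mysql command-line client and a database named mydb (a placeholder), which generates one ALTER per matching column and feeds the statements back to the server:

# 'mydb' is a placeholder database name; note that values longer than
# 20 characters will be truncated or rejected depending on sql_mode
mysql -N -B -e "
  SELECT CONCAT('ALTER TABLE ', TABLE_NAME,
                ' MODIFY ', COLUMN_NAME, ' CHAR(20);')
  FROM information_schema.COLUMNS
  WHERE TABLE_SCHEMA = 'mydb'
    AND COLUMN_TYPE = 'varchar(255)'
" | mysql mydb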
Final edit: I used the Navicat forums and the team were very helpful, but fundamentally Navicat is Windows software, and the 64-bit purists behind CentOS will never support WINE. For most Linux users this is not a problem, but I work with CentOS enthusiasts and have long since lost the argument about which distro to use.

To the 1st question: you can sync in different ways with a remote database/folder. When you are managing the database with Navicat, just right-click your current connection and press "Refresh", and you will be updated with the server changes. You can also do it with a scheduled task.
Another matter is: why would you want to run Navicat under WINE when it has a native Linux version? (I hope that answers the 2nd question.)
For the 3rd question, note that Navicat has an internal utility to sync data between servers, so you don't need git at all; or, at most, you can automate exporting the structure and then sync it with a git repository (in the form of a .sql file), along the lines of the sketch below.
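A rough sketch of that automation, assuming the MariaDB client tools, a repo checkout at db-repo, and a database named mydb (all placeholders):

# dump structure only (no rows), without the timestamp line that
# would dirty every commit, then snapshot it in git
mysqldump --no-data --routines --skip-dump-date mydb > db-repo/schema.sql
cd db-repo
git add schema.sql
git commit -m "schema snapshot $(date +%F)"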
IMHO you need to review your assumptions about MariaDB and Navicat. Both are quite flexible and offer several ways to do the things you propose, like syncing the data, and they also allow git to be inserted into the workflow. Review your strategy and try to apply a new perspective with the available features.

Related

Reading Locked SQLite DB Into Memory

I'm working on a project that has some pretty specific requirements, and am running into a problem with one of them. We have a locked SQLite database. We can't unlock this database, but we need to read it (not write to it), and we cannot create any new files on the filesystem that contain the data from this database. What was suggested is to read the file into RAM and then access it from there. I've been trying to find a way to do this, but this project is on Windows, so it's not going as smoothly as it might otherwise.
What I've been trying to do is read the file into a bash variable and then pass that variable to sqlite as the database. This hasn't been working particularly well.
I installed win-bash, but when I do "sqlite3.exe <(cat <<<"$database")" I get a "syntax error near unexpected token `<('". I checked, and win-bash looks like it's based on an older version of bash. I tried zsh, but it says "doesn't look like your system supports FIFOs.". I installed Cygwin, which wouldn't really be a good solution anyway (once I figure out how to do this, I need to pass it off to our Qt developers so that they can roll it into a Qt application), but I was just trying to do a proof of concept. That didn't work either: sqlite opened just fine, but when I ran ".tables", it said "Error: unable to open database "/dev/fd/63": unable to open database file". So it looks like I'm barking up the wrong tree and need to think of some other way to do this.
I guess my questions are: first, is it possible to read an SQLite database from a variable as I was attempting, or am I going down an entirely incorrect path there? Second, if it can't be done that way, is there some way I'm overlooking that might make this possible?
Thanks!
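(For what it's worth, recent builds of the sqlite3 shell can do the read-into-RAM step themselves: .open --deserialize reads the file with plain file I/O into an in-memory database, so no FIFO or extra file is needed. This needs deserialize support, compiled in by default since SQLite 3.36, and whether it sidesteps your particular lock depends on how the file is locked. A sketch, where locked.db is a placeholder for the real file:

# 'locked.db' stands in for your actual database file
sqlite3 :memory: <<'EOF'
.open --deserialize locked.db
.tables
EOF

The equivalent C API, sqlite3_deserialize(), is what the Qt developers would call after reading the file into a buffer.)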

Integrating Flyway into an existing database

We have not used Flyway from the beginning of our project. We are at an advanced stage of development, and an expert review has suggested using Flyway in our project.
The problem is that we have also moved part of our services (microservices) into another testing environment.
What is the best way to properly implement Flyway? The requirements are:
In the Development environment, there is no need to alter the schema that already exists, but all new scripts should be done using Flyway.
In the Testing environment, there is likewise no need to alter the schema that already exists, but whatever is not available in Testing should be created automatically by Flyway when we migrate the project from Dev to Test.
When we migrate to a totally new environment (UAT, Production, etc.), the entire schema should be created automatically using Flyway.
From the documentation, what I understood is:
Take a backup of the development schema (both DDL and DML) as SQL script files, with a file name like V1_0_1__initial.sql.
Clean the development database using "flyway clean".
Baseline the development database: "flyway baseline -baselineVersion=1.0.0"
Now, execute "flyway migrate" which will apply the SQL script file V1_0_1__initial.sql.
Any new scripts should be written with higher version numbers (like V2_0_1__account_table.sql)
Is this the correct way or is there any better way to do this?
The problem is that I have a test database with a different set of data (data in Dev and Test are different, and I would like to keep the data as it is in both environments). If so, is it good to separate the DDL and DML into different script files when we take them from the Dev environment, and apply them separately in each environment? The DML can be added manually as required, but I'm a bit confused about whether I am doing the right thing.
Thanks in advance.
So, there are actually two questions here: data management and Flyway management.
In terms of data management, yes, that should be a separate thing. Data grows and grows. Trying to manage data, beyond simple lookup tables, from source control quickly becomes very problematic. Not to mention that you want different data in different environments. This also makes automating deployments much more difficult (branching would be your friend if you insist on going this route: one branch for each data set, then deploy appropriately).
You can implement Flyway on an existing project, yes. The key is establishing the baseline. You don't have to do all the steps you outlined above. Say you have an existing database: you have to get the script that defines that database. That single script should include all the appropriate DDL (and, if you want, DML). Name it following the Flyway standard, something like V1.0__Baseline.sql.
With that in place, all you must do is run:
flyway baseline
That will establish your existing code base as the starting point. From there, you just have to create scripts following the naming standard: V1.1xxx, V2.0xxx, V53000.1xxx. Then run
flyway migrate
to deploy the appropriate changes.
The only caveat to this is that, as the documentation states, you must ensure that all your databases match this V1.0 that you're creating and marking as the baseline. Any deviation will cause errors as you introduce new changes and migrate them into place. As long as you've got matching baseline points, you should be able to proceed with different data in different environments with no issues.
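Concretely, the whole flow is only a few commands. A sketch, assuming the Flyway CLI with its connection settings already in flyway.conf and a migrations directory sql/ (all names here are placeholders):

# export the current schema by hand (mysqldump, pg_dump, SSMS, ...) and
# save it as sql/V1.0__Baseline.sql; then, on each existing database:
flyway baseline -baselineVersion=1.0
# from then on, in every environment:
flyway migrate
# on baselined databases V1.0 is skipped and only newer scripts run;
# on a brand-new, empty database (skip the baseline step there),
# migrate creates everything from V1.0 upwards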
Here is my how-to on integrating Flyway with an existing production DB: https://delicious-snipe-938.notion.site/How-to-integrate-Flyway-with-existing-MySQL-DB-in-Prod-PostgreSQL-is-similar-1eabafa8a0e844e88205c2f32513bbbe

Is there any value in including SQLite in a VCS?

Having an argument with my team. We are developing an application using SQLite, and some want to add the database file to the repo (git) and some don't. Previously, with RDBMS systems, there was no perceived benefit to using a VCS on the DB. However, SQLite is a self-contained file with no external dependencies, so I assume that, even though it is binary, a commit of the project code plus the SQLite file will give an accurate snapshot of the state of play at that point.
I assume that branching and merging would work as well.
Has anyone actually done this and if so does it work?
You'd get more benefit from git's versioning facilities if you stored a dump of the SQLite database (i.e. the commands required to create it) rather than the database file itself. That way you could look at the history of the dump file and see tables or data being added, etc.
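For example (file names are placeholders), a text dump that git can diff and merge is one command each way:

sqlite3 app.db .dump > app.sql     # full schema + data as SQL text
sqlite3 fresh.db < app.sql         # rebuild a database from the dump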
Generally speaking, it's preferable to include the full set of dependencies in a VCS repository. This makes your life a whole lot simpler.
If you're after versioning DB schema, check out Wizardby.

How do I always use the 'remote' version of a file in mercurial?

Mercurial newbie here. I'm working on a Django project that uses SQLite as the database. I develop templates and UI stuff while my colleague works on the back-end code. We both push changes to Bitbucket.
He is the only one who actually modifies the models and, correspondingly, the SQLite file; however, just by virtue of my testing the app, the file changes as well. I always drop my changes by doing an 'hg revert database.sqlite' after I've finished testing and before I push.
Is there a simple way for me to always stick with his version of the SQLite file so that we don't have merge issues every time we try to sync? Sort of like an exception that says 'if there is a conflict, always use the remote version of the file'. I did see something like this in a tip somewhere, but I can't for the life of me find it again.
I agree with the comment by Matthew that the best solution is to not track this file.
However, your idea of asking Mercurial to always use the remote version is actually not that far out... :-) You do this by configuring a merge tool for this file where you tell Mercurial to use the other (remote) version in all merges:
[merge-patterns]
database.sqlite = internal:other
That should make sure that you will always discard your changes to database.sqlite when you merge. This lets you do
$ hg pull
$ hg merge
I just got another idea -- use a pre-merge hook to revert the file:
[hooks]
pre-merge = hg revert database.sqlite
That is pretty much equivalent to using the internal:other merge tool from above, but you may find it conceptually simpler since it models what you are already doing.
As long as you work in a shell, there are numerous ways of having hg revert database.sqlite executed before you commit.
with bash:
alias hgcommit='hg revert database.sqlite && hg commit'
(it's a bit cheap, I know, but that's why I like working with a shell)

Backing up and restoring SQL Server data to a changed database structure

The scenario is this: I have an online SQL Server database that I am using to demo an application. During development, I have locally added extra fields, modified field types, changed keys, and added some new tables.
What's the best way for me to update the online database with the new structure and not lose the data? The database is a SQL Server 2005 one.
Download a trial of Red Gate SQL Compare, compare your two servers, and you are done. If you do this often, it is well worth the $400, or get one of their bundles for better bang for the buck.
And I do not work for Red Gate, just a happy customer!
Write update scripts to modify your live database structure to the new structure, as well as to insert any data which is required.
You may find it necessary to use temporary tables to do this.
It's probably best if you test this process on a test environment, before running the scripts on the live environment.
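A minimal sketch of such an update script, with made-up object names, applied through sqlcmd (all of these statements are valid on SQL Server 2005):

# table, column, server, and database names below are all placeholders
cat > upgrade_001.sql <<'SQL'
-- add a new column as NULLable so existing rows remain valid
ALTER TABLE dbo.Customers ADD MiddleName VARCHAR(50) NULL;
-- widen an existing column in place
ALTER TABLE dbo.Orders ALTER COLUMN Notes VARCHAR(MAX);
SQL
sqlcmd -S prodserver -d MyAppDb -i upgrade_001.sql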
Depending on what exactly you've done, you may be able to get away with ALTER statements, though from the sounds of it (removing keys and whatnot) you're doing some heavy lifting that may make that a less-than-ideal solution. You should probably look into creating a maintenance plan or, better yet, a SQL Server Integration Services project in Visual Studio. You should be able to migrate the data in the existing database to a new one using those tools.
This probably isn't of huge help retrospectively, but I always script all structural DB changes to my development database, and then, using a version number to determine the current version of the DB, I can run the required scripts on the live DB, bringing it back in line at the same time as the new code is uploaded.
This also works for any content changes: for instance, if the change in the underlying structure has an effect on the content stored, you can also write scripts to migrate the data accordingly.
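A sketch of that version-number pattern, with entirely hypothetical names: a one-row table records the schema version, each migration script ends by bumping it, and a wrapper applies whatever is newer:

# server, database, table, and path names are all placeholders
current=$(sqlcmd -S prodserver -d MyAppDb -h -1 -W \
  -Q "SET NOCOUNT ON; SELECT Version FROM dbo.SchemaVersion")
# apply every numbered script above the current version, in order
for script in migrations/*.sql; do
  ver=$(basename "$script" | cut -d_ -f1)   # 002_add_orders.sql -> 002
  if [ "$ver" -gt "$current" ]; then
    sqlcmd -S prodserver -d MyAppDb -i "$script"
  fi
done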
1. Make a copy of the existing database to copy from.
2. Make another copy and alter it to your new schema. Save the DDL for reuse.
3. Write queries that copy data from #1 to #2 (see the sketch after this list). Save the queries for reuse.
4. Check the results.
5. Repeat until done.
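For step 3, the copy queries are plain INSERT ... SELECT statements between the two copies. A sketch with made-up database, table, and column names, adapting values to the changed types along the way:

sqlcmd -S localhost -Q "
  INSERT INTO NewDb.dbo.Customers (Id, Name, Phone)
  SELECT Id, Name, LEFT(Phone, 20)   -- adapt values to any changed types
  FROM OldDb.dbo.Customers;"
# note: if Id is an IDENTITY column, wrap the INSERT in
# SET IDENTITY_INSERT NewDb.dbo.Customers ON/OFF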
