My question is about cyclic dependencies in the db. If the db has a table t1 which uses a custom data type d1, then d1 has to exist before t1 can be restored. Similarly, if a view v1 depends on tables t1 and t2, then both tables have to exist before the view can be restored. This creates problems when dumping a complex db and restoring it on another server.
Is there a way (a switch) that allows restoring a dump, but doesn't do any integrity checking until the entire kaboodle is restored?
The pg_dump utility should take care of this automatically, and generally does; however, a few bugs in dependency tracking have recently been found (and fixed).
The first thing to do would be to make sure that you are on a supported major release and on the latest minor (bug-fix) version of whatever major release you are running.
If you find that you still have the problem, post specifics, so that we can figure out whether you have found a new problem which is not yet fixed, or whether you have lingering dependency mapping problems from before the bug was fixed. Be sure to show the output of select version(); as well as the exact error message.
This question is like "Which came first, the chicken or the egg?".
Let's imagine we have some source code written using Symfony or Yii. It includes db migration code that handles some database changes.
Now we have some commits that update our code (for example, new classes) and make some db changes (altering old columns or adding new tables).
When we are developing on localhost or updating our dev servers, it's fine to stop services and other activity while we update the server. But if we try to do that on a production server we will crash everything for a while, and this is not an option.
Why this happens: when we pull (git/mercurial) our code is updated, but NOT the database, so when the code is executed it throws database exceptions. To fix this we have to run the framework's built-in migrations, so in the end the server stays broken until the migrations have been run.
Code and migrations should be updated "at the same time".
What is the best practice to handle it?
ADDED:
Solution like "run pull then run migrations in one call" - not an option in highload project. Because on highload even in second some entries\calls can be borken.
Stop server we cannot too.
Pulling off a zero downtime deployment can be a bit tricky and there are many ways to achieve this.
As for the database, it is recommended to make changes in a backwards-compatible fashion. For example, adding a nullable column or a new table will not affect your existing code base and can be done safely. So if you want to add a new non-nullable column you would do it in 3 steps (see the SQL sketch after the list):
Add new column as nullable
Populate with data to make sure there are no null-values
Make the column NOT NULL
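A minimal sketch of those steps in SQL (MySQL/MariaDB-flavored; the table and column names here are made up for illustration):
-- Step 1: add the column as nullable, which the running code can safely ignore
ALTER TABLE users ADD COLUMN country_code CHAR(2) NULL;
-- Step 2: backfill so no NULL values remain (in batches on large tables)
UPDATE users SET country_code = 'US' WHERE country_code IS NULL;
-- Step 3: once the deployed code always writes the column, tighten the constraint
ALTER TABLE users MODIFY country_code CHAR(2) NOT NULL;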
You will need a new deployment for steps 1 & 3 at the very least. Modifying a column works pretty much the same way: you create a new column, transfer the data over, release the code that uses the new column (optionally with the old column as a fallback) and then remove the old column (plus the fallback code) in a third deployment.
This way you make sure that your database changes will not cause a downtime in your existing application. This takes great care and obviously requires you to have a good deployment pipeline allowing for fast releases. If it takes hours to get a release out this method will not be fun.
You could copy the database (or even the whole system), do a migration and then switch to that instance, but in most applications this is not feasible because it will make it a pain to keep both instances in sync between deployments. I cannot recommend investing too much time in that, but I might be biased from my experience.
When it comes to switching the current version of your code to a newer one you have multiple options. Fancy cloud-based solutions like Kubernetes make this kind of easy: you create a second cluster with your new version and then slowly route traffic from the old cluster to the new one. If you have a single server, it is quite common to deploy a new release to a separate folder, do all the management tasks like warming caches, and then, when the release is ready to be used, switch a symlink to the newest release. Both methods require meticulous planning and tweaking if you really want them to be zero downtime.

There are all kinds of things that can cause issues, from a shared cache being accidentally cleared to sessions not being transferred over correctly to the new release. Whenever something that's stored in a session changes, you have to take a similar approach as with the database: slowly move the state over to the new format while the code runs, or keep a fallback that can still handle the old data; otherwise you might get errors when reading the session, causing 500 pages for your customers.
The key to deploying with as few outages and glitches as possible is good monitoring of the systems and the application, so you can see where things go wrong during a deployment and make it more stable over time.
You can create a backup server with content that mirrors your current server. Then do some error detection.
If an error is detected on your primary server, update your DNS record to divert your traffic to your secondary server.
Once the primary is back up and running, traffic moves back to the primary, and you then sync the changes to your secondary.
These are called failover servers.
I am trying Flyway for the first time, evaluating how it will fit into our project.
Trying to understand how a failed migration scenario will work.
Naturally, what I did next was modify the SQL script and retry the run, but I got a checksum error.
I have three questions here:
So I guess the only way out is ... I need to make a 1.2 with the correct format, or manually modify the 'schema_version' table. Right, or am I missing something?
Wondering how such a scenario will work if this script is called from continuous integration tools (Jenkins or Bamboo). Manual intervention will be needed.
Not sure if some other tool like Liquibase would behave in a different (better) manner.
In that situation I think you should use "flyway repair" rather than "flyway migrate":
https://flywaydb.org/documentation/command/repair
One thing that is not clear from your post is whether the script you ran was a single DDL statement or a number of statements, of which one or more failed. The reason for asking is that Flyway records the result of a migration, but does not itself clean up after 'script errors'. Depending on the database you are using, this could be handled by running the DDL statements within a transaction.
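For instance, on a database with transactional DDL such as PostgreSQL, a multi-statement migration can be wrapped so that a failure leaves nothing half-applied (a sketch with made-up table names):
BEGIN;
ALTER TABLE orders ADD COLUMN shipped_at timestamp;
CREATE INDEX idx_orders_shipped_at ON orders (shipped_at);
-- if any statement fails, the transaction rolls back and the schema is untouched
COMMIT;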
Liquibase operates with a much tighter connection to the database, as it works directly with the DDL, which can be expressed in a range of different formats. As such it has much tighter control over the management of DDL deployment.
Upstream insists on manually rolling back a failed migration and re-applying it. There is no "skip" command.
But you can manually fix and complete the failed migration and then manually change "schema_version"."success" to 1.
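A sketch of that manual fix, assuming the default schema_version metadata table of older Flyway releases (newer releases name it flyway_schema_history) and a made-up version number:
-- run this only after manually completing whatever the failed script was meant to do
UPDATE schema_version SET success = 1 WHERE version = '1.1';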
I am developing a small registration application for a friend's Zumba class, using Flask, SQLAlchemy and Flask-Migrate (Alembic) to deal with db updates. I settled on SQLite because the application has to be self-contained and run locally on a laptop without internet access, and SQLite requires no installation of a service or anything else, which is a must too.
Dealing with SQLite's lack of support for ALTER TABLE wasn't a problem during the initial development, as I simply destroyed and recreated the DB whenever that problem arose. But now that my friend is actually using the application, I am facing a problem.
Following a feature request, a table has to be modified, and once again I get the dreaded "No support for ALTER of constraints in SQLite dialect". I foresee that this problem will probably arise again in the future.
How can I deal with this problem? I am pretty much a newbie when it comes to dealing with databases. I read that one way to deal with it is to create a new table with the new constraint, copy the data over and rename the table, but I have no idea how to implement that in the Alembic script.
You can set a variable (render_as_batch=True) in the env.py file created with the initial migration:
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    render_as_batch=True,
)
It requires alembic > 0.7.0
This enables generation of batch operation migrations, i.e. creates a new table with the constraint, copies the existing data over, and removes the old table. See http://alembic.zzzcomputing.com/en/latest/batch.html#batch-mode-with-autogenerate
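Under the hood this is the classic SQLite "move and copy" workaround; the generated SQL looks roughly like this (table and column names invented for illustration):
-- build a replacement table that already carries the new constraint
CREATE TABLE _attendee_new (
    id INTEGER PRIMARY KEY,
    email TEXT NOT NULL,
    class_id INTEGER REFERENCES class (id)
);
-- copy the existing rows over, then swap the tables
INSERT INTO _attendee_new (id, email, class_id) SELECT id, email, class_id FROM attendee;
DROP TABLE attendee;
ALTER TABLE _attendee_new RENAME TO attendee;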
If you still encounter issues, be advised that there is still some nuance to SQLite, e.g. http://alembic.zzzcomputing.com/en/latest/batch.html#dropping-unnamed-or-named-foreign-key-constraints
I'm looking at switching from MySQL Workbench to Navicat because we're using MariaDB and the incompatibilities are starting to annoy me.
I'm working through the issues of getting Navicat to run on CentOS under WINE but assume I will succeed (edit: this failed. The "linux" version requires WINE. Navicat will sort of run with a bit of hacking, but critical features rely on MS-Windows/WINE).
How do I get Navicat to work with git (or any other source code control)? Workbench is sufficiently primitive that file changes either get picked up automatically or completely ignored (almost always a dialog "file on disk has changed, reload?")
Specific problems:
when adding new query files Navicat only seems to rescan the folder when I add a new query. Is there a smart way to do that? (edit: no. You can manually refresh one file at a time by right clicking)
model and query files are buried deep in the WINE tree. Can I relocate them, or will symlinks work? I'd rather keep all the DB-related code in one repo, rather than having a special Navicat repo. (edit: yes, but the explanation of how to do so is lengthy)
is there a way to merge a model file if more than one person has changed it? Workbench can't do this but I'd really like the feature. (edit: no, never. Merge the schema SQL files instead)
Also, bonus question: can we make multiple edits using Navicat other than repeated use of the GUI? If I want to change (say) a bunch of columns from VARCHAR(255) to CHAR(20) I'd normally script that in SQL but Navicat models don't do reverse engineering, only "delete the table from the model then re-import it" so there doesn't seem to be a non-tedious way to do that. (edit: no, but they might look at it in the future)
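For what it's worth, the bulk change itself is easy enough to script directly against MariaDB by generating the ALTER statements from information_schema (a sketch with a made-up schema name; review the generated statements before running them):
SELECT CONCAT('ALTER TABLE `', TABLE_NAME, '` MODIFY `', COLUMN_NAME, '` CHAR(20);')
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
  AND DATA_TYPE = 'varchar'
  AND CHARACTER_MAXIMUM_LENGTH = 255;
The tedious part is that the Navicat model won't reflect those changes without re-importing the tables.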
Final edit: I used the Navicat forums and the team were very helpful, but fundamentally Navicat is Windows software and the 64-bit purists behind CentOS will never support WINE. For most Linux users this is not a problem, but I work with CentOS enthusiasts and have long since lost the argument about which distro to use.
To the 1st question: you can sync it in different ways with a remote database/folder. When you are managing the database with Navicat, just right-click your current connection and press "refresh", and you will be updated with the server changes. You can also do it with a scheduled task.
Another matter is: why would you want to run Navicat under WINE when it has a native Linux version? (I hope that answers the 2nd question.)
For the 3rd question, note that Navicat has an internal utility to sync data between servers, so you don't need git at all; or at most, you can automate the structure export and then sync it with a git repository (in the form of a .sql file).
IMHO you need to revisit your assumptions about MariaDB and Navicat. Both are quite flexible and offer several ways to do the things you propose, like syncing the data, and they also let you bring git into the workflow. Just review your strategy and try to apply a new perspective using the available features.
I am using Visual Studio 2013 to manage a .sqlproj file containing our database schema. The schema has been deployed successfully dozens of times.
When attempting to publish to one specific target database, the "Creating publish preview" step appears to fail, but no error is given. The output from the preview includes some expected warnings:
The column {...} is being dropped, data loss could occur
If this deployment is executed, changes to {...} might introduce run-time errors in {...}
This deployment may encounter errors during execution because changes to {...} are blocked by {...}'s dependency in the target database
I have unchecked "Block incremental deployment if data loss might occur".
The Preview just stops, and no script is generated.
This happens when there exists a stored procedure (or view or constraint or other object) in the target database, that isn't included in your sqlproj, that references a table that would be altered by deploying your sqlproj. SSDT apparently can't determine whether the change is safe unless the referring thing is included in your sqlproj, and then it errs on the safe side by blocking the deployment.
Disabling the "Block incremental deployment if data loss might occur" option only relaxes the data-loss checks. There isn't a "Block incremental deployment if run-time errors might occur" option.
You have three options:
add whatever stored procedures, views, or whatever from the target database to your sqlproj
uncheck the "Verify Deployment" option in the ssdt publish options (this is dangerous unless you're aware of the other referring sprocs and know that they aren't going to break)
if you're certain that everything that should exist in the target database is contained in your sqlproj, you can enable the "Drop objects in target but not in source" option
The issue may also be caused by qualifying a database object with the wrong schema, for instance a table referenced within a stored procedure's SQL statement being prefixed with an incorrect schema name.
Additionally, we had some permissions for a specific security group that, once removed, allowed the solution to build again. To troubleshoot the error, perform a schema compare of the project code and the target database, and remove differences from the database until the publish functionality works. The last item you removed from the database is your culprit.
The last warning pattern appears to be more than a warning:
This deployment may encounter errors during execution because changes to {...} are blocked by {...}'s dependency in the target database
This appears to have been the culprit that stopped the rest of the preview and the generation of the script.
Interestingly, the schema change being introduced would not have broken the triggers referenced in the preview output.
Removing SCHEMABINDING from the view allows the publish to succeed with only warnings.
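A sketch of that change, with invented object names; the view is simply redefined without the WITH SCHEMABINDING clause so the deployment no longer treats it as a hard dependency:
-- the original definition included WITH SCHEMABINDING; redefining it without the clause removes the binding
ALTER VIEW dbo.vw_ActiveMembers
AS
    SELECT MemberId, FullName
    FROM dbo.Members
    WHERE IsActive = 1;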