I've just created a new BizTalk group (BizTalk Server 2006 R2 SP1). When I run the DTA purge job, it fails with "Invalid object name 'EdiMessageContent'". That table does not exist in the DTA database.
So I compared my setup with another group I have where the purge job does work. Sure enough, that one has the table, so the two groups differ by just this one table.
Strange. How come this table didn't get created? Is that because I'm running SP1? I do see a script in the schema folder named BtsEdiMessageContentTables.sql and I was able to run it successfully on a blank database.
Any thoughts?
Thanks,
Krip
I'll take my own stab at this: it's a problem with SP1. I've just run the SQL script I found that creates and indexes the new table, and now the purge job runs fine. That table didn't have any rows in my other environment anyway, since I don't use EDI, so I think I'm good to go.
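To check whether a group is affected, a query like this against the DTA database should do it (BizTalkDTADb is the default database name; adjust if yours differs):

    -- Returns a row only if the table exists.
    SELECT name
    FROM BizTalkDTADb.sys.tables
    WHERE name = 'EdiMessageContent';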
Anyone else faced this or worked around it?
-Krip
So a friend of mine asked me to help him configure automatic replication of a table in his MariaDB database to another table that's supposed to be an exact copy of the source/primary table.
The databases are on the same server (MariaDB version 10.2.44), a cPanel-managed web server run by a webhost. We are accessing the databases using HeidiSQL, which is what I'm hoping I can use to configure everything.
After lots of googling, this is the article I suspect makes the most sense for what we want to do, but it doesn't look like that approach is automatic to any extent: https://mariadb.com/kb/en/setting-up-replication/
Is this the best way to do what we're trying to do? Is there a better way? Any suggestions?
Thanks!
Like @ysth said, triggers can be used in this case.
When creating a trigger that works across different databases, you need to specify the database in the trigger name. For example:
CREATE TRIGGER database_name.trigger_name
Otherwise you'll get an "Out of schema" error.
The database you need to specify is the one where the "listener" is located: basically, the database holding the table the trigger watches, which is where the condition for the trigger is checked.
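A minimal sketch, with hypothetical names (src_db.orders as the primary table, copy_db.orders as the mirror with the same structure):

    -- The trigger lives in the source database, i.e. where the watched table is.
    CREATE TRIGGER src_db.orders_after_insert
    AFTER INSERT ON src_db.orders
    FOR EACH ROW
        INSERT INTO copy_db.orders (id, customer, total)
        VALUES (NEW.id, NEW.customer, NEW.total);

To keep the copy exact, you would need matching AFTER UPDATE and AFTER DELETE triggers as well.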
I am developing a project in ASP.NET MVC and C#, using a SQL Server database managed in SSMS. I renamed two of my columns in Visual Studio, added the migration, and updated the database in VS. This deleted my two original columns and all of their data, instead of just renaming the columns like I expected.
Unfortunately, I didn't make a backup of my database first, as I didn't consider this outcome. I even tried running a targeted migration back to a previous state of the database, but that just changed the column names back without the data that was originally there.
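For reference, what I expected was an in-place rename that preserves the data, the way a direct rename in SQL Server does (table/column names hypothetical):

    -- Renames the column without touching its data.
    EXEC sp_rename 'dbo.People.OldName', 'NewName', 'COLUMN';

Instead, the generated migration dropped the old columns and added new ones.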
Please help with a solution to restore the data without a backup file. Any possible solutions are appreciated.
I am trying Flyway for the first time, evaluating how it will fit into our project.
Trying to understand how a failed migration scenario will work. One of my migration scripts had an incorrect format, so the migration failed. Naturally, what I did next was modify the SQL script and retry, but I got a checksum error.
I have three questions here:
1. So I guess the only way out is to create a V1.2 with the correct format, or to manually modify the 'schema_version' table. Right, or am I missing something?
2. I am wondering how such a scenario will work when the script is run from continuous integration tools (Jenkins or Bamboo); it seems manual intervention will be needed.
3. I am not sure if some other tool like Liquibase would behave in a different (better) manner.
In that situation I think you should use "flyway repair" rather than "flyway migrate".
https://flywaydb.org/documentation/command/repair
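Roughly, the workflow would look like this (exact behaviour depends on the Flyway version; older versions only remove the failed metadata entry):

    # Fix the broken statement in the SQL script first, then:
    flyway repair    # clean up the failed entry in the metadata table
    flyway migrate   # re-apply the corrected migration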
One thing that is not clear from your post: was the script you ran a single DDL statement, or a number of statements of which one or more failed? The reason for asking is that Flyway records the result of a migration but does not itself clean up after script errors. Depending on the database you are using, this could be handled by running the DDL statements within a transaction.
Liquibase operates with a tighter connection to the database, as it directly interacts with the DDL, which can be expressed in a range of different formats. As such, it has more control over the management of DDL deployment.
Upstream insists on manually rolling back a failed migration and re-applying it. There is no "skip" command.
But you can manually fix and complete the failed migration, then change "schema_version"."success" to 1 by hand.
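A sketch of that manual fix-up, assuming the default metadata table name used by older Flyway versions (newer versions call it flyway_schema_history) and a hypothetical failed version 1.1:

    -- Mark the manually-completed migration as successful.
    UPDATE schema_version
    SET success = 1
    WHERE version = '1.1';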
I looked at the Flyway samples and documentation and tried to work out whether it would be useful in my environment.
The following conceptual detail is unclear to me: how does Flyway manage the changes between database versions? It obviously does NOT compare live database instances (see the answer here: Can Flyway find out and generate migration files from datamodel?)
In detail my setup looks like this:
I create SQL create and insert scripts when coding (automatically and manually). This means every version of my database is represented by a number of insert/create statements.
In my world I execute these scripts through a database tool (sqlplus from Oracle). Each run sets up the database _from scratch_ (!).
Can I put these very same scripts 1:1 inside Flyway's "migration" path? What happens if the target database is much older than the last "migration step" I did (or if Flyway did not yet exist when the database was installed)?
Update:
I got some input from another Flyway user:
It seems like each "migration" (version of the database) has to be hand-written SQL/Java code and contains only the "updates" from the previous "migration" of the database.
If this is true, I wonder how this can be used with traditional coding techniques: in my world, SQL statements are generated automatically and contain all database init/create statements, not just "updates" to some previous version. If my SQL code generator could do that, I wouldn't even need a tool like Flyway :-).
Regarding your question "how do I handle a DB that has a longer history than there are migration scripts?": you need to create a V1_ migration SQL script that matches/recreates your latest DB schema, something that can take a blank DB to what you have today. Create/generate that SQL script using your existing DB tools, then put it in Flyway's migration directory. (And test V1 by running Flyway against a clean DB and checking that you get what you expect.) http://flywaydb.org/documentation/existing.html
From that point on, all later versions must be added as you work. When you decide you need a new table, write a new V*_.sql in your dev environment that modifies your schema to the way you need it.
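For illustration, the migration directory might end up looking like this (file names hypothetical, following Flyway's V<version>__<description>.sql naming convention):

    sql/
        V1__baseline_schema.sql      -- full create script matching today's DB
        V2__add_orders_table.sql     -- later incremental change
        V3__add_orders_index.sql     -- another incremental change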
This blog goes over this situation for a Spring/SQL application. https://blog.synyx.de/2012/10/database-migration-using-flyway-and-spring-and-existing-data/
The scenario is this: I have a SQL Server database online that I am using to demo an application. During development, I have added extra fields, modified field types, changed keys, and added some new tables locally.
What's the best way for me to update the online database with the new structure and not lose the data? The database is a SQL Server 2005 one.
Download a trial of Red Gate SQL Compare, compare your two servers and you are done. If you do this often, it is well worth the $400, or get one of their bundles for a better bang for the buck.
And I do not work for Red Gate, just a happy customer!
Write update scripts to modify your live database structure to the new structure, and to insert any data that is required.
You may find it necessary to use temporary tables to do this.
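For instance, a sketch of such a script with hypothetical table and column names, using a temporary table as a safety net while widening a column type:

    -- 1. Preserve the existing data in a temporary table.
    SELECT OrderId, Total INTO #OrdersBackup FROM dbo.Orders;

    -- 2. Apply the structural change in place.
    ALTER TABLE dbo.Orders ALTER COLUMN Total DECIMAL(18, 2);

    -- 3. Sanity-check that no rows were lost before dropping the backup.
    IF (SELECT COUNT(*) FROM dbo.Orders)
       <> (SELECT COUNT(*) FROM #OrdersBackup)
        RAISERROR('Row count mismatch after ALTER', 16, 1);

    DROP TABLE #OrdersBackup;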
It's probably best to test this process in a test environment before running the scripts on the live environment.
Depending on what exactly you've done, you may be able to get away with ALTER statements, though from the sounds of it (removing keys and whatnot) you're doing some heavy lifting that may make that a less-than-ideal solution. You should probably look into creating a maintenance plan or, better yet, a SQL Server Integration Services project in Visual Studio. You should be able to migrate the data from the existing database to a new one using those tools.
This probably isn't of huge help retrospectively, but I always script all structural DB changes to my development database, and then, using a version number to determine the current version of the DB, I run the required scripts on the live DB, bringing it back in line at the same time as the new code is uploaded.
This also works for any content changes: for instance, if the change in the underlying structure affects the content stored, you can write scripts to migrate the data accordingly.
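A sketch of that pattern, with a hypothetical one-row SchemaVersion table gating each script:

    -- Apply the change only if the DB is at the expected version.
    IF (SELECT Version FROM dbo.SchemaVersion) = 4
    BEGIN
        ALTER TABLE dbo.Customers ADD Email NVARCHAR(256) NULL;
        UPDATE dbo.SchemaVersion SET Version = 5;
    END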
1. Make a copy of the existing database to copy from.
2. Make another copy and alter it to your new schema. Save the DDL for reuse.
3. Write queries that copy data from #1 to #2. Save the queries for reuse.
4. Check the results.
5. Repeat until done.