SSDT Publish errors on Creating Publish Preview

I am using Visual Studio 2013 to manage a .sqlproj file containing our database schema. The schema has been deployed successfully dozens of times.
When attempting to publish to one specific target database, the "Creating publish preview" step appears to fail, but no error is given. The output from the preview includes some expected warnings:
The column {...} is being dropped, data loss could occur
If this deployment is executed, changes to {...} might introduce run-time errors in {...}
This deployment may encounter errors during execution because changes to {...} are blocked by {...}'s dependency in the target database
I have unchecked "Block incremental deployment if data loss might occur".
The Preview just stops, and no script is generated.

This happens when the target database contains a stored procedure (or view, constraint, or other object) that isn't included in your sqlproj and that references a table that would be altered by deploying your sqlproj. SSDT apparently can't determine whether the change is safe unless the referencing object is included in your sqlproj, so it errs on the safe side by blocking the deployment.
Disabling the "Block incremental deployment if data loss might occur" option only relaxes the data-loss checks. There isn't a "Block incremental deployment if run-time errors might occur" option.
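As a concrete illustration, a schemabound view like the following in the target database, absent from the sqlproj, is enough to block a publish that alters the table it references (all names here are made up):
-- Exists only in the target database, not in the sqlproj;
-- WITH SCHEMABINDING ties it hard to the columns it selects
CREATE VIEW dbo.vOrders WITH SCHEMABINDING
AS
SELECT OrderId, Total
FROM dbo.Orders;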
You have three options:
add the referenced stored procedures, views, and other objects from the target database to your sqlproj
uncheck the "Verify Deployment" option in the SSDT publish options (this is dangerous unless you're aware of the other referencing sprocs and know that they aren't going to break)
if you're certain that everything that should exist in the target database is contained in your sqlproj, you can enable the "Drop objects in target but not in source" option (see the sqlpackage sketch after this list)
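For reference, each of those checkboxes corresponds to a sqlpackage publish property: the data-loss checkbox maps to BlockOnPossibleDataLoss, "Verify Deployment" to VerifyDeployment, and "Drop objects in target but not in source" to DropObjectsNotInSource. A minimal command-line sketch with placeholder server, database, and dacpac names:
sqlpackage.exe /Action:Publish /SourceFile:MySchema.dacpac ^
  /TargetServerName:MYSERVER /TargetDatabaseName:MyDb ^
  /p:BlockOnPossibleDataLoss=False ^
  /p:VerifyDeployment=False ^
  /p:DropObjectsNotInSource=True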

The issue may also be caused by prefixing a database object with the wrong schema, for instance a table referenced within a stored procedure's SQL statement being prefixed with an incorrect schema name.
Additionally, we had some permissions for a specific security group that, once removed, allowed the solution to build again. To troubleshoot the error, perform a schema compare of the project code and the target database, then remove differences from the database until the publish works. The last item you removed from the database is your culprit.

The last warning pattern appears to be more than a warning:
This deployment may encounter errors during execution because changes
to {...} are blocked by {...}'s dependency in the target database
This turned out to be the culprit behind stopping the rest of the preview and the generation of the script.
Interestingly, the schema change being introduced would not have broken the triggers referenced in the preview output.

Removing SCHEMABINDING from the view allows the publish to succeed with only warnings.
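In T-SQL terms, that means redefining the view without the WITH SCHEMABINDING clause; a sketch using the same made-up names as above:
-- Dropping WITH SCHEMABINDING releases the hard dependency on dbo.Orders,
-- so the publish preview downgrades the blocker to a plain warning
ALTER VIEW dbo.vOrders
AS
SELECT OrderId, Total
FROM dbo.Orders;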

Related

Execute Flyway callback and report it in history table

I use Flyway 8.5.0 and I want my beforeMigrate or my afterMigrate SQL to be reported in the history table. Is this feasible? Or is there any config to set this up?
Then another question: my repeatables only run when they change (checksum), but to my understanding the repeatable SQL should run every time. Is that not so?
The beforeMigrate and afterMigrate SQL won't appear in your history table. If you look at the tutorial example for callbacks, you can see that beforeMigrate can be called before the schema history table is created, which would cause issues if it was trying to add itself to it. Additionally, I'm assuming these will be mostly static executions and would not really be part of the version history.
https://flywaydb.org/documentation/tutorials/callbacks
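For illustration, a callback is nothing more than a SQL file named after the event, placed in one of the configured locations; the table it touches here is made up:
-- beforeMigrate.sql: picked up by file name, runs before each migrate,
-- and is never recorded in the flyway_schema_history table
INSERT INTO deployment_audit (note) VALUES ('migrate starting');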
For repeatable migrations: no, they are only applied when the checksum has changed.
Repeatable migrations are very useful for managing database objects whose definition can then simply be maintained in a single file in version control. Instead of being run just once, they are (re-)applied every time their checksum changes.
https://flywaydb.org/documentation/tutorials/repeatable
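For comparison, a repeatable migration is just a file with the R__ prefix; Flyway reapplies it only when the file's checksum changes (object names and the OR REPLACE syntax depend on your database):
-- R__active_users_view.sql: reapplied whenever this file's checksum changes
CREATE OR REPLACE VIEW active_users AS
SELECT * FROM users WHERE active = 1;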

How can I deploy only a select set of stored procedures in a DACPAC deployment?

I have a Visual Studio solution which contains a database project. I create an executable which performs a software update, and part of that update is to update the database. Some of the stored procedures depend on a linked server existing, which also gets created as part of the executable. The problem is that this functionality is optional, and the linked server won't connect on some client machines, so the DACPAC deployment fails because the linked server can't connect. I am using sqlpackage.exe to deploy the .dacpac file.
Is there some way that I can deploy either all or only some of the stored procedures? Or maybe I can set a flag to ignore linked server errors? Or maybe there is an alternative method to using sqlpackage/dacpac?
One option I thought of is to convert the stored procedures that contain the linked server references to dynamic SQL (sketched below).
Having the database in visual studio and therefore source control is important.
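For reference, the dynamic SQL idea mentioned above would look something like this; all names are placeholders. Because the four-part linked server name lives inside a string, deployment no longer needs the linked server to be reachable:
CREATE PROCEDURE dbo.GetRemoteOrders
AS
BEGIN
    -- Deferred name resolution: the linked server is only touched at run time
    EXEC sp_executesql
        N'SELECT * FROM [OPTIONAL_LINKED_SRV].[RemoteDb].[dbo].[Orders]';
END;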
Yes!
This is fairly easy to do. You can see your database project in Visual Studio. I would recommend removing the stored procs that are problematic and merging those back into master. Then take out a feature branch, point again at the DB you have the stored procs on, and use the schema compare to get those back as well (even the ones that don't work well, so that you don't lose them). Push the commit up to the feature branch repo. Now that you have the problematic stored procs in source control plus the shippable version in master, you can go ahead and publish the selected objects through the database project in Visual Studio into the DBs you want.
If you haven't checked anything in to master, you can do the schema compare, select all objects except those that are problematic, update your database project, and merge that to master. If this doesn't make sense, please comment on this answer and I'm happy to give more detail.
Well, I came across this. I'm still working on implementing it to solve my problem. It might help your cause too.
Download the filter from https://agilesqlclub.codeplex.com/releases/view/610727, put the dll into the same folder as sqlpackage.exe, and add these command line parameters to your deployment:
/p:AdditionalDeploymentContributors=AgileSqlClub.DeploymentFilterContributor
/p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSchema(BLAH)"
This will not deploy, drop, or alter anything in the BLAH schema.
More details on
https://the.agilesql.club/2015/01/howto-filter-dacpac-deployments/
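For context, here is how those parameters fit into a full publish command; a sketch only, with placeholder server, database, and dacpac names:
sqlpackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac ^
  /TargetServerName:MYSERVER /TargetDatabaseName:MyDatabase ^
  /p:AdditionalDeploymentContributors=AgileSqlClub.DeploymentFilterContributor ^
  /p:AdditionalDeploymentContributorArguments="SqlPackageFilter=IgnoreSchema(BLAH)"
With the linked-server procs isolated in their own schema, the rest of the dacpac deploys normally while that schema is left untouched.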

Schema not updating when publish web app from Visual Studio

I am building an ASP.NET MVC EF app with code-first migrations and hosting it in Azure with an Azure SQL DB. The first time I published this, it went fine. But since then my models changed, and my schema in the Azure DB is not getting updated to match.
When I deploy, I do have "Execute Code First Migrations" checked. When that wouldn't work, I deleted my DB and then recreated it in the Azure portal, figuring that would trigger it getting updated. When that didn't work either, I set AutomaticMigrationsEnabled = True in the migration Configuration. It is STILL not working, so currently my DB in Azure has none of my tables.
How can I force the DB in Azure to update to match my models so the published site will work? I did try looking for a way to script the local VS DB to a CREATE query and execute that in SQL Server Management Studio, but couldn't find how to do that.
If you have made sure that you have selected update database in the publish settings, and the connection string is correct, and it's still not updating, maybe the following will help you:
I sometimes get an issue like this; it is quite frustrating. My publish file is correct and my settings are set to allow SQL updates to occur during publishing, but sometimes the database hasn't been updated and I get a nice "backing context has changed" error. Sometimes the culprit is the migration table that hasn't been updated. Unfortunately, the only sure way to get your databases in sync is to check what migration history they are both at, by comparing [dbo].[__MigrationHistory]
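For example, running this on both the local and the published database shows the most recent migration each one knows about (the table name is EF's default):
SELECT TOP 1 MigrationId, ProductVersion
FROM [dbo].[__MigrationHistory]
ORDER BY MigrationId DESC;
Because MigrationId starts with a timestamp, the highest value is the latest applied migration.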
If your published server is missing the latest migration history, then you can generate an SQL script of that by typing into the package manager console:
Update-Database -Script -SourceMigration [migration name]
'migration name' should be the name of the last migration that your published server has; Visual Studio will generate a SQL script that can be used to bring the database from that migration up to the latest one.
Sometimes (though very rarely; it's only happened once or twice for me) the above doesn't work for whatever reason (usually because migration files have been deleted). If that is the case, then it's a good idea to script the whole database and cherry-pick the SQL you need from that.
Update-Database -Script -SourceMigration:0
This will generate a script for every migration, and you can then cherry-pick based on the changes you've made. The 'latest' changes will be closer to the bottom of the file. Every migration change will start with an IF check:
IF @CurrentMigration < '201710160826338_mymigration'
BEGIN
You can use this to pick the bits that you need. If you do pick the SQL, be sure to include the update to the migration history. It will be at the end of the IF block and look something like this:
INSERT [dbo].[__MigrationHistory]([MigrationId], [ContextKey], [Model], [ProductVersion])
VALUES (N'201710101645265_test', N'API.Core.Configuration', 'Some long checksum')
Including the migration history will ensure that Visual Studio doesn't have the problem again.
Hope this helps.

How can I troubleshoot an Informix -255 "not in transaction" error?

Working with an Informix 11.70 database in non-ANSI, unbuffered logging mode.
I am accessing this database through a GlassFish 3.1.2.2 server with a connection pool set up to use javax.sql.ConnectionPoolDataSource objects implemented by the com.informix.jdbcx.IfxConnectionPoolDataSource class.
All transactions are under the control of a JPA provider (Hibernate in this case), so there are no explicit BEGIN WORK, COMMIT WORK or ROLLBACK WORK statements that I have any control over.
In one particular deployment of this configuration, we are getting -255 errors, which signify either:
the database is in non-logging mode (this is not true in our case)
the database is in some kind of logging mode, but there was a COMMIT WORK issued by someone without a preceding BEGIN WORK statement
How do I go about troubleshooting this problem? What environmental factors would cause this error on one deployment and not on another?
The answer thankfully has nothing innately to do with Informix or Hibernate's support for Informix. It has everything to do with obscure automatic data source creation in GlassFish that happens depending on how you deploy. (This may have something to do with the default properties that the Informix data source supplies to the GlassFish web console.)
Specifically, we had a case where our deployer attempted to deploy our application on GlassFish without first creating the required JDBC resource. GlassFish reported that some odd JDBC resources were missing: jdbc/foobar__pm and jdbc/foobar__nontx.
Our deployer, not figuring anything was wrong, created these resources by hand. (GlassFish apparently ordinarily creates these automatically when you deploy with its web console.)
As a result, our deployer had inadvertently specified a non-transactional data source for use by our application and that was the root cause here.
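For reference, a sketch of creating the required JDBC resource up front with asadmin, so that GlassFish derives the jdbc/foobar__pm and jdbc/foobar__nontx variants itself instead of a deployer creating them by hand; pool and resource names are illustrative:
asadmin create-jdbc-connection-pool \
  --datasourceclassname com.informix.jdbcx.IfxConnectionPoolDataSource \
  --restype javax.sql.ConnectionPoolDataSource foobarPool
asadmin create-jdbc-resource --connectionpoolid foobarPool jdbc/foobar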

BizTalk Business Activity Monitor

I recently started with the BAM from BizTalk.
I created a simple orchestration.
I configured BAM for BizTalk, of course.
I used Excel to create a simple schema with only text fields.
I deployed this XML schema to the BAM primary import using: bm deploy-all -DefinitionFile:myxml.xml.
Opened the TPE and opened the deployed schema.
Opened the orchestration, and there opened the schema it uses and linked the schema fields to the BAM schema fields.
After this I applied the tracking profile.
I then put a file through BizTalk which uses the orchestration. The file was output.
If I now check the primary import database, I can see the file in the active messages, but the Completed field is set to false and it doesn't change. Also, no data is filled in apart from ActivityID and LastModified; none of the columns I specified myself are filled, and RecordID = null.
What am I doing wrong?
I thought I did all the necessary steps, I know it's all still pretty basic but I need to get this to work if I want to do more, right?
Getting BAM to work can be tricky sometimes. First, did you restart your BizTalk hosts after deploying everything? Not doing so can cause issues.
Almost the first thing I do when I run into any issues with BAM is to turn on BAM tracing and either redirect it to a file or use DbgView to check for any errors BAM might be running into.
One of the crappy things about BAM is that it will sometimes fail silently, with the only information about the error being dumped in the BAM tracing output, so getting familiar with it is important.
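As a starting point for that kind of digging, you can also query the activity tables in the primary import database directly; the bam_<ActivityName>_Active / bam_<ActivityName>_Completed naming is BAM's convention, and the activity name below is a placeholder:
-- In-flight instances; rows should move to bam_MyActivity_Completed
-- once the tracking profile marks the activity as finished
SELECT * FROM BAMPrimaryImport.dbo.bam_MyActivity_Active;
If your custom columns stay NULL here while ActivityID and LastModified are populated, the activity is being created but the tracking profile mappings never fire, which points back at the TPE mappings or a host that wasn't restarted.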
