VSTS database schema comparison not saving comments - database-tools

We use VSTS 2008 with SP1 and GDR R2 installed.
We found the following problem. We have two identical databases (for example, Database1 and Database2): Database1 is a working database and Database2 is the production one. We add a new stored procedure to Database1 with comments (description, author, etc.) before the CREATE PROCEDURE statement, then compare the schemas of Database1 (source) and Database2 (target). The new stored procedure is successfully added to Database2, but without the comment above the CREATE PROCEDURE statement (comments in the procedure body are fully preserved).
The part of the stored procedure in Database1:
-- =============================================
-- Author: [author here]
-- Create date: [creation date here]
-- Description: [description here]
-- =============================================
CREATE PROCEDURE [schema here].[procedure name here]
@param1 uniqueidentifier,
@param2 nvarchar(64),
@param3 bit,
@param4 int = 1,
@param5 int = 25,
@param6 int = 0 output,
@param7 int = 0 output
AS
The result in Database2 after schema comparison:
CREATE PROCEDURE [schema here].[procedure name here]
@param1 UNIQUEIDENTIFIER, @param2 NVARCHAR (64), @param3 BIT, @param4 INT=1, @param5 INT=25, @param6 INT=0 OUTPUT, @param7 INT=0 OUTPUT
AS
It murdered the comments. The 'Ignore Comments' option under Schema Compare Options is unchecked.
Is there any way to make this work?

I remembered reading something about this a while ago, and after a little digging I found this blog post by Gert Drapers, a.k.a. the Data Dude, which describes your exact problem. He says that it is a known problem and that the team is working on a fix. The post predates the GDR R2 release, so I guess they haven't fixed it yet, since you still have this problem.
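Until a fix ships, a workaround that follows from your own observation (comments inside the procedure body are preserved) is to move the header block below the CREATE PROCEDURE line. A minimal sketch with a placeholder procedure name and body:

-- header moved inside the procedure body so the comparison keeps it
CREATE PROCEDURE [dbo].[usp_Example]
@param1 uniqueidentifier,
@param2 nvarchar(64)
AS
-- =============================================
-- Author: [author here]
-- Create date: [creation date here]
-- Description: [description here]
-- =============================================
BEGIN
SELECT @param1 AS Id, @param2 AS Name; -- placeholder body
END

Whether the comparison engine keeps it in your environment is worth a quick test, but it matches the behaviour you describe for body comments.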

Related

MariaDB does not finish query for table with foreign key

In a MariaDB database I've set up, I cannot change tables with foreign keys at all. Queries involving ALTER TABLE and DROP TABLE never finish, and I don't even get an error message; the same happens with REPAIR TABLE.
All I can do is hit Ctrl+C. There are no apparent errors indicated in the InnoDB monitor output.
I'm quite new to relational databases, so it's probably a user error; I just can't see what it might be. Any help greatly appreciated!
OS: Windows 10 Enterprise
MariaDB: 10.8
It happens with both the client and a plugin in Visual Studio Code; it doesn't matter which I use.
I can alter other tables.
I set foreign_key_checks to off
The table with the foreign key looks like this; participant_id is the foreign key.

Field            Type          Null   Key   Default   Extra
trial_id         smallint(6)   NO     PRI   NULL
begin_trial      datetime      NO     UNI   NULL
participant_id   tinyint(4)    NO     MUL   NULL
The referenced table looks like this; id is the column that participant_id references.

Field   Type         Null   Key   Default      Extra
id      tinyint(4)   NO     PRI   NULL         auto_increment
code    char(6)      NO     UNI   NULL
day     date         NO           0000-00-00
Yes, user error. I had two connections open, one using the MariaDB client and another using the VS Code plugin, so there must have been some kind of lock on write operations but not on read operations.
Thanks for being my rubber duck, Stack Overflow.
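For anyone who hits the same symptom: an ALTER TABLE or DROP TABLE that hangs silently is usually waiting for a metadata lock held by another open connection, as above. A quick way to confirm it (standard MariaDB/MySQL commands, nothing specific to this setup):

-- run from a separate connection while the ALTER/DROP is hanging
SHOW FULL PROCESSLIST;
-- the stuck statement typically shows the state 'Waiting for table metadata lock';
-- closing (or committing) the other connection releases it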

Stored Procedure works fine from SQL Mgt Studio but throws Invalid Object name #AllActiveOrders from MVC app

I can run the 'guts' of my stored procedure as a giant query just fine from SQL Management Studio. Furthermore, I can even right-click and 'Execute' the stored procedure (y'know, run it as a stored procedure) from SQL Management Studio.
When my ASP.NET MVC app goes to run this stored procedure, I get issues:
System.Data.SqlClient.SqlException: Invalid object name '#AllActiveOrders'.
Does the impersonation account that ASP.NET runs under need special permissions? That can't be it; even when I run it locally from my Visual Studio (under my login account) I also get the temp table error message.
EDIT: Furthermore, it seems to work fine when called from one ASP.NET app (which is using a WCF service / ADO.NET to call the stored procedure) but does not work from a different ASP.NET app (which calls the stored proc directly using ADO.NET)
FURTHERMORE: The MVC app that doesn't crash passes some parameters in to the stored procedure, while the crashing app runs the stored proc with default parameters (it doesn't pass any in). FWIW, when I run the stored procedure in SQL Mgt. Studio, it's with default parameters (and it doesn't crash).
If it's of any worth, I did have to fix a 'String or binary data would be truncated' issue just prior to this situation. I went into this massive query and fixed the temp table definition (a different one) that I knew to be the problem (since I had just edited it a day or so ago). I was able to see and resolve the 'String/Binary truncation' issue in SQL Mgt. Studio, but I'm really stumped as to why I cannot see the 'Invalid object name' issue in SQL Mgt. Studio.
Stored procedures and temp tables generally don't mix well with strongly typed implementations of database objects (ADO, DataSets, and I'm sure there are others).
If you change your #temp table to a @table variable, that should fix your issue.
(Apparently) this works in some cases:
IF 1=0 BEGIN
SET FMTONLY OFF
END
Although according to http://msdn.microsoft.com/en-us/library/ms173839.aspx, the functionality is considered deprecated.
An example of how to change from a temp table to a table variable:
create table #tempTable (id int, someVal varchar(50))
to:
declare @tempTable table (id int, someval varchar(50))
There are a few differences between temp and var tables you should consider:
What's the difference between a temp table and table variable in SQL Server?
When should I use a table variable vs temporary table in sql server?
OK, figured it out with the help of my colleague, who did some better Google-fu than I had done prior.
First, we CAN indeed make SQL Management Studio puke on my stored procedure by adding the FMTONLY option:
SET FMTONLY ON;
EXEC [dbo].[My_MassiveStackOfSubQueriesToProduceADigestDataSet]
GO
Now, on to my two competing ASP.NET applications... why one of them worked and one of them didn't? Under the covers, both essentially used an ADO.NET System.Data.SqlClient.SqlDataAdapter to go get the data and each performed a .Fill(DataSet1)
However, the one that was crashing was trying to get the schema in advance of the data, instead of just deriving the schema after the fact; it was this line of code that was killing it:
da.FillSchema(DataSet1, SchemaType.Mapped)
If you're struggling with this same issue that I've had, you may have come across forum threads like this one from MSDN, which are all over the internet and explain the details of what's going on quite adequately. It had just never occurred to me that when I called FillSchema I was essentially tripping over this same issue.
Now I know!!!
Following on from bkwdesign's answer, which found that the problem was due to ADO.NET DataAdapter.FillSchema using SET FMTONLY ON, I had a similar problem. This is how I dealt with it:
I found the simplest solution was to short-circuit the stored proc, returning a dummy recordset that FillSchema could use. So at the top of the stored proc I added something like:
IF 1 = 0
BEGIN;
SELECT CAST(0 as INT) AS ID,
CAST(NULL AS VARCHAR(10)) AS SomeTextCol,
...;
RETURN 0;
END;
The columns of the select statement are identical in name, data type and order to the schema of the recordset that will be returned from the stored proc when it executes normally.
The RETURN ensures that FillSchema doesn't look at the rest of the stored proc, and so avoids problems with temp tables.

python's sqlite3.executemany: Alter a Table in a single transaction? Locking Error

I have an application where I use sqlite3's autocommit feature everywhere, which usually works fine. The app includes a database schema updater.
Basically it's just a set of SQL commands for each version upgrade, which should be run in a single transaction. I implemented this using the .executemany() call, which worked until now. Now is the first time I want to alter a table definition using this method (I made a short example, as the original table is fairly big):
Think of
-- the Table in the current Version:
table: foo (foo_id INTEGER PRIMARY KEY, quantity INTEGER,
single_price REAL, all_price REAL)
-- where I want to end:
table: foo (foo_id INTEGER PRIMARY KEY, quantity INTEGER,
single_price REAL, total_price REAL)
(so renaming the 4th column)
I'm not talking about indexes here; the problem is the same without them. :)
What I try to run in a single executemany() call is:
ALTER TABLE foo RENAME TO foo_PREv2;
CREATE TABLE foo (foo_id INTEGER PRIMARY KEY, quantity INTEGER,
single_price REAL, total_price REAL);
INSERT INTO foo (foo_id, quantity,
single_price, total_price)
SELECT foo_id, quantity,
single_price, all_price
FROM foo_PREv2;
DROP TABLE foo_PREv2; -- <<<--- here it fails with a database locked error
Even if I move the DROP to a second executemany() call it doesn't work; I have to restart my app before I can DROP.
As I understand it, executemany() handles the BEGIN TRANSACTION ... COMMIT stuff for me.
What am I missing?
Thanks in advance,
kind regards,
Florian.

Access 2010 Subform: The data was added to the database but the data won't be displayed

I have a strange one here that I just can't seem to figure out.
My Access front-end project runs on an SQL 2005 express backend.
I have been using subforms for donkey's years, and they're the only reason why I haven't migrated the application to a VB/VS front end.
However, since upgrading to Access 2010 I cannot get subforms to work. Instead, when I try to add a row, I get the following error: "The data was added to the database but the data won't be displayed in the form because it doesn't satisfy the criteria in the underlying record source."
The master and child forms are linked on poid and PONo.
I have created forms from scratch with all defaults, but still the issue remains.
My SQL tables are
PURCHASE:
poid, int, PK, Identity, seed 1, inc 1
supplierID, int
orderdate, DateTime
deliverydate, datetime
ordersent, bit
ordercomplete, bit
initials, nvarchar
supplierinvoiceno, nvarchar
branchid, int
bookedin, bit
deliverycharge, money
[STOCK - Detail]:
stockid, int, PK, Identity, Seed 1, inc 1
CodeID, int
service, bit
costprice, money
PONo, int
Instock, bit
SerialNo, char
StockTake, bit
Branch, Char
ProductID, int
Any help would be very much appreciated.
Many thanks,
Abe
Solved! Access 2010 does not support multiple tables with identical column names unless they are wrapped in a stored procedure or query on the SQL Server.
I've been trying to move away from stored procs and queries, but Access 2010 will not, under any combination, work with hard-coded SQL as the record source.
Once I created a query and selected it as the record source, the subforms worked perfectly, as expected.
I also had to alias any fields that have the same name in both tables, EVEN if they are not selected in the query. And yes, the alias only worked in the query too!
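As an illustration only (table and column names are taken from the definitions above; which names actually collide depends on the rest of the schema, so the StockBranch alias is hypothetical), the saved query used as the subform's record source ends up looking something like:

SELECT sd.stockid, sd.CodeID, sd.service, sd.costprice, sd.PONo,
       sd.Instock, sd.SerialNo, sd.StockTake,
       sd.Branch AS StockBranch,
       sd.ProductID
FROM [STOCK - Detail] AS sd;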
I love Microsoft! ;-)

Stored procedure slow when called from web, fast from Management Studio

I have a stored procedure that insanely times out every single time it's called from the web application.
I fired up SQL Profiler and traced the calls that time out, and finally found out these things:
When I execute the statements from within MS SQL Management Studio, with the same arguments (in fact, I copied the procedure call from the SQL Profiler trace and ran it), it finishes in 5-6 seconds on average.
But when called from the web application, it takes in excess of 30 seconds (in the trace), so my web page actually times out by then.
Apart from the fact that my web application has its own user, everything is the same (same database, connection, server, etc.).
I also tried running the query directly in the Studio with the web application's user, and it doesn't take more than 6 seconds.
How do I find out what is happening?
I am assuming it has nothing to do with the fact that we use BLL > DAL layers or table adapters, as the trace clearly shows the delay is in the actual procedure. That is all I can think of.
EDIT: I found out in this link that ADO.NET sets ARITHABORT to true, which is good most of the time, but sometimes this happens, and the suggested work-around is to add the WITH RECOMPILE option to the stored proc. In my case it's not working, but I suspect it's something very similar to this. Does anyone know what else ADO.NET does, or where I can find the spec?
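For reference, the recompile work-around mentioned in that edit would look roughly like this; the procedure, parameter and table names below are placeholders rather than the actual objects involved:

ALTER PROCEDURE dbo.MySlowProc
@SomeParam int
WITH RECOMPILE  -- build a fresh plan on every execution instead of reusing a cached one
AS
BEGIN
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @SomeParam;
END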
I've had a similar issue arise in the past, so I'm eager to see a resolution to this question. Aaron Bertrand's comment on the OP led to Query times out when executed from web, but super-fast when executed from SSMS, and while the question is not a duplicate, the answer may very well apply to your situation.
In essence, it sounds like SQL Server may have a corrupt cached execution plan. You're hitting the bad plan with your web server, but SSMS lands on a different plan since there is a different setting on the ARITHABORT flag (which would otherwise have no impact on your particular query/stored proc).
See ADO.NET calling T-SQL Stored Procedure causes a SqlTimeoutException for another example, with a more complete explanation and resolution.
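If the ARITHABORT mismatch is indeed the cause, one quick way to test the theory is to run the procedure in SSMS with the setting the web application's connection would use, so that you hit the same cache entry. The procedure name and parameter here are placeholders:

SET ARITHABORT OFF;  -- SSMS defaults this to ON; ADO.NET connections typically run with it OFF
EXEC dbo.MySlowProc @SomeParam = 123;
SET ARITHABORT ON;   -- restore the SSMS default afterwards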
I also experienced queries running slowly from the web and fast in SSMS, and I eventually found out that the problem was something called parameter sniffing.
The fix for me was to change all the parameters that are used in the sproc to local variables.
eg. change:
ALTER PROCEDURE [dbo].[sproc]
@param1 int
AS
SELECT * FROM Table WHERE ID = @param1
to:
ALTER PROCEDURE [dbo].[sproc]
@param1 int
AS
DECLARE @param1a int
SET @param1a = @param1
SELECT * FROM Table WHERE ID = @param1a
Seems strange, but it fixed my problem.
Not to spam, but as a hopefully helpful solution for others: our system saw a high rate of timeouts.
I tried setting the stored procedure to be recompiled using sp_recompile, and this resolved the issue for the one SP.
Ultimately there were a larger number of SPs that were timing out, many of which had never done so before. By using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE, the incidence of timeouts has plummeted significantly; there are still isolated occurrences, some where I suspect the plan regeneration is taking a while, and some where the SPs are genuinely under-performant and need re-evaluation.
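For completeness, the commands referred to above, with a placeholder procedure name. sp_recompile is the narrow fix; the two DBCC commands affect the whole instance and should be used with care on a production server:

EXEC sp_recompile N'dbo.MySlowProc';  -- invalidate just this procedure's cached plan
DBCC FREEPROCCACHE;                   -- drop all cached execution plans
DBCC DROPCLEANBUFFERS;                -- flush clean pages from the buffer pool (subsequent reads come from disk)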
Could it be that some other DB call made before the web application calls the SP is keeping a transaction open? That could be a reason for this SP to wait when called by the web application. I suggest isolating the call in the web application (put it on a new page) to make sure that some prior action in the web application isn't causing this issue.
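A couple of built-in checks that can confirm or rule out an open transaction left behind by an earlier call:

DBCC OPENTRAN;  -- reports the oldest active transaction in the current database, if any
EXEC sp_who2;   -- the BlkBy column shows which session, if any, is blocking the SP's session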
You can target specific cached execution plans via:
SELECT cp.plan_handle, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE [text] LIKE N'%your troublesome SP or function name etc%'
And then remove only the execution plans causing issues via, for example:
DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
I've now got a job running every 5 minutes that looks for slow running procedures or functions and automatically clears down those execution plans if it finds any:
if exists (
SELECT cpu_time, *
FROM sys.dm_exec_requests req
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS sqltext
--order by req.total_elapsed_time desc
WHERE ([text] LIKE N'%your troublesome SP or function name etc%')
and cpu_time > 8000
)
begin
SELECT cp.plan_handle, st.[text]
into #results
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
WHERE [text] LIKE N'%your troublesome SP or function name etc%'
delete #results where text like 'SELECT cp.plan_handle%'
--select * from #results
declare @handle varbinary(max)
declare @handleconverted varchar(max)
declare @sql varchar(1000)
DECLARE db_cursor CURSOR FOR
select plan_handle from #results
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @handle
WHILE @@FETCH_STATUS = 0
BEGIN
--e.g. DBCC FREEPROCCACHE (0x050006003FCA862F40A19A93010000000000000000000000)
print @handle
set @handleconverted = '0x' + CAST('' AS XML).value('xs:hexBinary(sql:variable("@handle"))', 'VARCHAR(MAX)')
print @handleconverted
set @sql = 'DBCC FREEPROCCACHE (' + @handleconverted + ')'
print 'DELETING: ' + @sql
EXEC(@sql)
FETCH NEXT FROM db_cursor INTO @handle
END
CLOSE db_cursor
DEALLOCATE db_cursor
drop table #results
end
Simply recompiling the stored procedure (a table function, in my case) worked for me.
Like @Zane said, it could be due to parameter sniffing. I experienced the same behaviour, so I took a look at the execution plan of the procedure and of all the statements of the SP run in a row (I copied all the statements from the procedure, declared the parameters as variables and assigned the variables the same values the parameters had). The execution plans looked completely different: the SP execution took 3-4 seconds, while the statements run in a row with the exact same values returned instantly.
After some googling I found an interesting read about that behaviour: Slow in the Application, Fast in SSMS?
When compiling the procedure, SQL Server does not know that the value of @fromdate changes, but compiles the procedure under the assumption that @fromdate has the value NULL. Since all comparisons with NULL yield UNKNOWN, the query cannot return any rows at all, if @fromdate still has this value at run-time. If SQL Server would take the input value as the final truth, it could construct a plan with only a Constant Scan that does not access the table at all (run the query SELECT * FROM Orders WHERE OrderDate > NULL to see an example of this). But SQL Server must generate a plan which returns the correct result no matter what value @fromdate has at run-time. On the other hand, there is no obligation to build a plan which is the best for all values. Thus, since the assumption is that no rows will be returned, SQL Server settles for the Index Seek.
The problem was that I had parameters which could be left null, and if they were passed as null they would be initialised with a default value.
create procedure dbo.procedure
@dateTo datetime = null
as
begin
if (@dateTo is null)
begin
select @dateTo = GETUTCDATE()
end
select foo
from dbo.table
where createdDate < @dateTo
end
After I changed it to
create procedure dbo.procedure
@dateTo datetime = null
as
begin
declare @to datetime = coalesce(@dateTo, getutcdate())
select foo
from dbo.table
where createdDate < @to
end
it worked like a charm again.
--BEFORE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
@ToUserId bigint=null
)
AS
BEGIN
SELECT * FROM tbl_Logins WHERE LoginId = @ToUserId
END
--AFTER CHANGING IT, IT WORKS FINE
CREATE PROCEDURE [dbo].[SP_DEMO]
(
@ToUserId bigint=null
)
AS
BEGIN
DECLARE @Toid bigint=null
SET @Toid=@ToUserId
SELECT * FROM tbl_Logins WHERE LoginId = @Toid
END
