SqlDataSource inserts id 1004 instead of 14; How to fix? [duplicate] - asp.net

This question already has answers here: Identity increment is jumping in SQL Server database (6 answers). Closed 7 years ago.
I have a strange scenario in which the auto identity int column in my SQL Server 2012 database is not incrementing properly.
Say I have a table which uses an int auto identity as a primary key; it is sporadically skipping increments, for example:
1,
2,
3,
4,
5,
1004,
1005
This is happening on a random number of tables at very random times; I cannot replicate it to find any trend.
How is this happening?
Is there a way to make it stop?

This is all perfectly normal. Microsoft added sequences in SQL Server 2012 (finally, I might add) and changed the way identity values are generated. Have a look here for some explanation.
If you want to have the old behaviour, you can:
use trace flag 272 - this will cause a log record to be generated for each generated identity value. The performance of identity generation may be impacted by turning on this trace flag.
use a sequence generator with the NO CACHE setting (http://msdn.microsoft.com/en-us/library/ff878091.aspx)
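If you go the sequence route, here is a minimal sketch of a NO CACHE sequence used as a key default (the table and sequence names are illustrative, not from the question):
CREATE SEQUENCE dbo.seqMyTableId
    AS INT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;
GO
-- The sequence feeds the key column through a default, so the inserts themselves do not change.
CREATE TABLE dbo.MyTable
(
    MyTableId INT NOT NULL
        CONSTRAINT DF_MyTable_MyTableId DEFAULT (NEXT VALUE FOR dbo.seqMyTableId),
    Payload   NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (MyTableId)
);
NO CACHE avoids losing a cached block of values on an unexpected restart (rolled-back transactions can still leave gaps), which addresses the jump described in the question at the cost of some insert throughput.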

Got the same problem and found the following bug report for SQL Server 2012. If it is still relevant, see the conditions that cause the issue; there are some workarounds there as well (I didn't try them, though):
Failover or Restart Results in Reseed of Identity

While trace flag 272 may work for many, it definitely won't work for hosted SQL Server Express installations. So, I created an identity table and use it through an INSTEAD OF trigger. I'm hoping this helps someone else, and/or gives others an opportunity to improve my solution. The last line allows returning the last identity value added. Since I typically use this to add a single row, this works to return the identity of a single inserted row.
The identity table:
CREATE TABLE [dbo].[tblsysIdentities](
    [intTableId] [int] NOT NULL,
    [intIdentityLast] [int] NOT NULL,
    [strTable] [varchar](100) NOT NULL,
    [tsConcurrency] [timestamp] NULL,
    CONSTRAINT [PK_tblsysIdentities] PRIMARY KEY CLUSTERED
    (
        [intTableId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
and the insert trigger:
-- INSERT --
IF OBJECT_ID ('dbo.trgtblsysTrackerMessagesIdentity', 'TR') IS NOT NULL
    DROP TRIGGER dbo.trgtblsysTrackerMessagesIdentity;
GO
CREATE TRIGGER trgtblsysTrackerMessagesIdentity
ON dbo.tblsysTrackerMessages
INSTEAD OF INSERT AS
BEGIN
    DECLARE @intTrackerMessageId INT;
    DECLARE @intRowCount INT;

    -- Number of rows in this insert and the last identity value handed out so far.
    SET @intRowCount = (SELECT COUNT(*) FROM INSERTED);
    SET @intTrackerMessageId = (SELECT intIdentityLast FROM tblsysIdentities WHERE intTableId = 1);

    -- Reserve a block of @intRowCount ids for this insert.
    UPDATE tblsysIdentities
    SET intIdentityLast = @intTrackerMessageId + @intRowCount
    WHERE intTableId = 1;

    -- Write the rows with explicit, sequential ids.
    INSERT INTO tblsysTrackerMessages(
        [intTrackerMessageId],
        [intTrackerId],
        [strMessage],
        [intTrackerMessageTypeId],
        [datCreated],
        [strCreatedBy])
    SELECT @intTrackerMessageId + ROW_NUMBER() OVER (ORDER BY [datCreated]) AS [intTrackerMessageId],
        [intTrackerId],
        [strMessage],
        [intTrackerMessageTypeId],
        [datCreated],
        [strCreatedBy]
    FROM INSERTED;

    -- Return the last id assigned; used for single-row inserts as described above.
    SELECT TOP 1 @intTrackerMessageId + @intRowCount FROM INSERTED;
END
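One possible refinement of the trigger above (a sketch, not part of the original answer): the separate SELECT and UPDATE on tblsysIdentities can race under concurrent inserts. T-SQL's compound assignment inside UPDATE reads and increments the counter in a single atomic statement; the fragment below would replace that SELECT/UPDATE pair inside the trigger body:
    DECLARE @intRowCount INT = (SELECT COUNT(*) FROM INSERTED);
    DECLARE @intTrackerMessageId INT;

    -- Read and increment in one statement; the variable receives the post-update value.
    UPDATE tblsysIdentities
    SET @intTrackerMessageId = intIdentityLast = intIdentityLast + @intRowCount
    WHERE intTableId = 1;

    -- Ids for this batch are (@intTrackerMessageId - @intRowCount + 1) .. @intTrackerMessageId,
    -- so the ROW_NUMBER offset in the INSERT becomes (@intTrackerMessageId - @intRowCount).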

Related

Insert VALUES not already in table

I want to create a table and then initialize it with some values, in as concise a manner as possible.
However, this script gets executed every time my app starts, so the insert should happen only on items that were not already added previously.
I do not want to use the IGNORE directive in 'INSERT IGNORE INTO', because I do not want to ignore unexpected errors.
For some reason, INSERT INTO fails with "SQL error (1136): Column count doesn't match value count at row 1", even though the select that follows gives the values that need to be added.
Here's the failing code:
START TRANSACTION;
CREATE TABLE IF NOT EXISTS `privileges` (
`id` TINYINT NOT NULL AUTO_INCREMENT,
`label` VARCHAR(25) UNIQUE,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `privileges` (`label`)
SELECT `label` FROM (
SELECT NULL AS `label`
UNION VALUES
('item1'),
('item2')
) X
WHERE `label` IS NOT NULL
AND `label` NOT IN (SELECT `label` FROM `privileges`);
COMMIT;
Currently I am solving this by first inserting the values into a temporary table, and then performing a select on that. But why isn't the above working and is there a more concise way to do what I'm trying to do?
Edit: I'm using MariaDB 10.3.9; I have added the missing UNIQUE constraint.
Edit 2: Thanks to LukStorms for figuring out that the error was related to AUTO_INCREMENT; it seems passing NULL for the AUTO_INCREMENT column solves the problem, like so:
INSERT INTO `privileges` (id, label)
WITH ITEMS(label) AS (VALUES
('users:read'),('users:create'),
('clients:read'),('clients:write'),
('catalog:read'),('catalog:write'),
('cart:read'),('cart:write'),
('orders:read'),('orders:write'), ('test1')
) SELECT NULL, label FROM ITEMS i
WHERE label NOT IN (SELECT label FROM `privileges`);
In MariaDB 10.3+, using a CTE with the VALUES expression lets you assign a column name to it.
with ITEMS(label) as
(VALUES
('item1')
,('item2'))
select i.label
from ITEMS i
where not exists (select 1 from privileges p where p.label = i.label)
But somehow it gives an error when inserting into a table that has a field with an AUTO_INCREMENT. Seems like a bug to me.
However, when you insert a NULL into an AUTO_INCREMENT field, the NULL gets ignored. But you discovered that behaviour yourself.
So this works:
INSERT INTO privileges (id, label)
WITH ITEMS(label) as (
VALUES ('item1'), ('item2')
)
SELECT null, i.label
FROM ITEMS i
WHERE NOT EXISTS (SELECT 1 FROM privileges p WHERE p.label = i.label);
Test on db<>fiddle here
Using unioned selects also works though.
INSERT INTO privileges (label)
SELECT label
FROM (
SELECT 'item1' as label UNION ALL
SELECT 'item2'
) i
WHERE NOT EXISTS (SELECT 1 FROM privileges p WHERE p.label = i.label);
db<>fiddle here
Maybe another way is to use a temporary table (that will vanish when the session expires)
CREATE TEMPORARY TABLE tmp_items (label VARCHAR(25) NOT NULL PRIMARY KEY);
INSERT INTO tmp_items (label) VALUES
('item1')
,('item2');
INSERT INTO privileges (label)
SELECT label
FROM tmp_items i
WHERE label NOT IN (SELECT DISTINCT label FROM privileges);
Test on db<>fiddle here
First, your application is trying to double-insert values. It probably shouldn't be doing that (though I can think of a few valid use cases). Consider making it so that it does not try to add data that it's already added before. If you don't have easy access to inter-instance state, pull the current list out of the database on startup before deciding what to insert.
Second, if you want labels to be unique, why is there not a unique key on the label field? At the moment, INSERT IGNORE wouldn't even work because there is nothing in your schema preventing duplicate label values. I would ask yourself why you need an auto-incrementing ID: why not just have the label, and make it the primary key?
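As an illustration of that last point, a sketch of the simpler schema with label as the primary key (illustrative only, not a drop-in replacement for the original table):
CREATE TABLE IF NOT EXISTS `privileges` (
    `label` VARCHAR(25) NOT NULL,
    PRIMARY KEY (`label`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;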
Then, if you still need to do this duplicate-elision at the SQL layer, you may use ON DUPLICATE KEY to suck up redundant inserts of an existing primary key:
INSERT INTO `privileges` (`label`)
VALUES
    ('item1'),
    ('item2')
ON DUPLICATE KEY UPDATE `label` = `label`;
This solution is difficult to implement with your auto-increment ID key, because your application probably doesn't know what the ID is going to be. Another reason to consider dropping it.
Unfortunately, there's no ON DUPLICATE KEY IGNORE.
If you want to keep the ID key, and you don't want your application to do a read step on startup (perhaps for scalability reasons), then INSERT IGNORE to be quite honest is your best bet, though you're still going to need at least a unique key on label to make that work.
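For completeness, a minimal sketch of that last option, relying on the UNIQUE constraint on label from the edited question:
-- Duplicate labels are silently skipped (note the question's caveat: IGNORE also
-- downgrades other errors on these rows to warnings).
INSERT IGNORE INTO `privileges` (`label`)
VALUES ('item1'), ('item2');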

Web service throws "The value 'null' cannot be parsed as the type 'Guid'." error

I have a system which stores data from an online SQL Server database in local storage. Data records are uploaded and downloaded using a web service. I am using an ADO.Net Entity Data Model in my code.
On some upload requests for one table, the routine fails when I call it, giving the error message "The value 'null' cannot be parsed as the type 'Guid'." This only happens occasionally, and I have not worked out how to reproduce the problem. I have logged it 80 times in the last month; in that time the routine has been called successfully 1200 times.
I have five fields in the database record for this table that are defined as uniqueidentifiers. Two of these are 'NOT NULL' and the other three are 'NULL'. Here is the 'CREATE TABLE' query showing the guid fields in this table:
CREATE TABLE [dbo].[Circuit](
[CircuitID] [uniqueidentifier] NOT NULL,
[BoardID] [uniqueidentifier] NOT NULL,
[RCDID] [uniqueidentifier] NULL,
[CircuitMasterID] [uniqueidentifier] NULL,
[DeviceID] [uniqueidentifier] NULL,
CONSTRAINT [PK_CircuitGuid] PRIMARY KEY NONCLUSTERED
(
[CircuitID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
GO
ALTER TABLE [dbo].[Circuit] WITH CHECK ADD CONSTRAINT [FK_Circuit_RCD] FOREIGN KEY([RCDID])
REFERENCES [dbo].[RCD] ([RCDID])
GO
ALTER TABLE [dbo].[Circuit] CHECK CONSTRAINT [FK_Circuit_RCD]
GO
ALTER TABLE [dbo].[Circuit] WITH CHECK ADD CONSTRAINT [FK_CircuitGuid_Board] FOREIGN KEY([BoardID])
REFERENCES [dbo].[Board] ([BoardID])
GO
ALTER TABLE [dbo].[Circuit] CHECK CONSTRAINT [FK_CircuitGuid_Board]
GO
The data uploaded for the guid fields in this table looks like this:
{"__type":"Circuit:#WaspWA","BoardID":"edb5f774-5e5d-490c-860b-73c3419628cf","CircuitID":"e95bbfa3-2af6-49a5-94dd-c98924ec9a62","CircuitMasterID":null,"DeviceID":"daf12fce-675c-46d9-94c4-ed28c63cdf30","RCDID":null}
This record was created on one machine uploaded to the online SQL Server database and then downloaded to another machine.
I have other similar tables in the database which never give any problems. It is just this table which I am getting error messages from. The two fields which are defined as 'NOT NULL' (BoardID and CircuitID) always have data in them and are never null.
Is there something obvious that I have missed here?
The problem was that the value 'null', a string, was being written into my local copy of CircuitMasterID rather than null. So when I tried to write this to SQL it didn't like it. The SQL error message shows null in quotes but I was not sure whether this was because it was a string or because the error message put the value in quotes to delineate it.
The value 'null' had found its way into the CircuitMasterID field because I had written the value null out into some HTML and when this was saved back to the field it became 'null'. I am storing data in local storage and this does not give very good type control. Note to self 'must add better type control'.

using ssdt, how can I create a filtered index on the latest 7 days?

We use SSDT to deploy our database changes. We have a script that recreates the index every week. Our script looks like this:
declare @cmd varchar(max)
set @cmd = '
CREATE NONCLUSTERED INDEX [iAudit-ModifiedDateTime] ON [dbo].[Audit]
(
    [ModifiedDateTime] ASC
)
WHERE ModifiedDateTime > ''###''
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = ON, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 75) ON [PRIMARY]
'
set @cmd = replace(@cmd, '###', convert(varchar(8), dateadd(day, -3, getdate()), 112))
exec (@cmd)
Unfortunately when we run SSDT to update the database it changes the index to the definition in the project, or drops it when it is not included. Is there some way I can get around this?
The reason we need the filtered index is to add the latest records from an Audit table with 100's of millions of rows, into a data warehouse.
There are some options, in order of complexity:
Don't include the index definition in the project and disable the "Drop indexes not in source" option. In Visual Studio this is found in the Advanced options dialog of the Publish dialog. When using SqlPackage.exe to publish, you can use the parameter /p:DropIndexesNotInSource=false
Don't include the index definition in the project and put the index creation script into a post-deployment script. This will ensure that the index is always recreated after schema updates are deployed (see the sketch after this list).
Use a community-authored deployment contributor to filter out modifications to this index. See https://the.agilesql.club/Blogs/Ed-Elliott/HOWTO-Filter-Dacpac-Deployments
Author a deployment contributor to filter out modifications to this index. See https://github.com/Microsoft/DACExtensions/
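For option 2, a sketch of what such a post-deployment script could look like, based on the original script above (the 3-day offset is kept as-is; DROP_EXISTING is replaced by an explicit drop because the index may not exist on a fresh deployment):
-- Post-deployment script: rebuild the filtered index with a fresh cut-off on every publish.
IF EXISTS (SELECT 1 FROM sys.indexes
           WHERE name = 'iAudit-ModifiedDateTime'
             AND object_id = OBJECT_ID('dbo.Audit'))
    DROP INDEX [iAudit-ModifiedDateTime] ON [dbo].[Audit];

DECLARE @cmd varchar(max) = '
CREATE NONCLUSTERED INDEX [iAudit-ModifiedDateTime] ON [dbo].[Audit]
(
    [ModifiedDateTime] ASC
)
WHERE ModifiedDateTime > ''###''
WITH (PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 75) ON [PRIMARY]
';
SET @cmd = REPLACE(@cmd, '###', CONVERT(varchar(8), DATEADD(day, -3, GETDATE()), 112));
EXEC (@cmd);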

Summarize Historical Uptime Data

I'm finishing my first asp.net web app and I've encountered a difficult problem. The web app is designed to test network devices at various locations across the country and record the response time. A Windows service checks these devices regularly, typically every 1-10 minutes. The results of each check are then recorded in a SQL Server table with this design. (ResponseTime is NULL when the device is down.)
CREATE TABLE [dbo].[DeviceStatuses] (
[DeviceStatusID] INT IDENTITY (1, 1) NOT NULL,
[DeviceID] INT NOT NULL,
[StatusTime] DATETIME NULL,
[ResponseTime] INT NULL,
CONSTRAINT [PK_DeviceStatuses] PRIMARY KEY CLUSTERED ([DeviceStatusID] ASC),
CONSTRAINT [FK_DeviceStatuses_Devices] FOREIGN KEY ([DeviceID]) REFERENCES [dbo].[Devices] ([DeviceID])
);
The service has been running for a couple months, with a minimal number of devices and the table has about 500,000 rows. The client would like to have access to a 3-month rolling downtime summary for each device. Something along the lines of:
Down Times:
12/11/2012 3:20 PM - 3:42 PM
12/20/2012 1:00 AM - 9:00 AM
To the best of my understanding I need to get the StatusTime for the beginning and end of each block of NULL ResponseTimes, for a particular DeviceID of course. I've done several searches on Google and StackOverflow, but haven't found anything that resembles what I'm trying to do. (Maybe I'm not using the right search terms.) My brother, a much more experienced programmer, suggested that I might be able to use a CURSOR in SQL Server, though he acknowledged that CURSOR performance is terrible and it would need to be a scheduled task. Any recommendations?
declare @DeviceStatuses table(
    [DeviceStatusID] INT IDENTITY (1, 1) NOT NULL,
    [DeviceID] INT NOT NULL,
    [StatusTime] DATETIME NULL,
    [ResponseTime] INT NULL)

Insert into @DeviceStatuses([DeviceID],[StatusTime],[ResponseTime])
Values
(1,'20120101 10:10',2),(1,'20120101 10:12',NULL),(1,'20120101 10:14',2),
(1,'20120102 10:10',2),(1,'20120102 10:12',NULL),(1,'20120102 10:14',2),
(2,'20120101 10:10',2),(2,'20120101 10:12',NULL),(2,'20120101 10:14',2),
(2,'20120101 10:19',2),(2,'20120101 10:20',NULL),(2,'20120101 10:21',NULL),(2,'20120101 10:22',2),
(2,'20120102 10:10',2),(2,'20120102 10:12',NULL),(2,'20120102 10:14',2);

-- For each NULL row, gr is the StatusTime of the last successful check before it;
-- rows in the same outage share the same gr, so grouping by it yields one row per outage.
Select [DeviceID],MIN([StatusTime]) as StartDown,MAX([StatusTime]) as EndDown
from
(
    Select [DeviceID],[StatusTime]
        ,(Select MAX([StatusTime]) from @DeviceStatuses s2 where s2.DeviceID=s1.DeviceID and s2.StatusTime<s1.StatusTime and s2.ResponseTime is not null) as gr
    from @DeviceStatuses s1
    where s1.ResponseTime is null
)a
Group by [DeviceID],gr
order by [DeviceID],gr
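If the correlated subquery becomes slow as the table grows, the same islands can be found with window functions. A sketch against the dbo.DeviceStatuses table from the question (ROW_NUMBER is available in SQL Server 2005 and later):
-- Consecutive NULL rows per device get the same grp value, so one group = one outage.
;WITH numbered AS (
    SELECT DeviceID, StatusTime, ResponseTime,
           ROW_NUMBER() OVER (PARTITION BY DeviceID ORDER BY StatusTime)
         - ROW_NUMBER() OVER (PARTITION BY DeviceID,
                                           CASE WHEN ResponseTime IS NULL THEN 1 ELSE 0 END
                              ORDER BY StatusTime) AS grp
    FROM dbo.DeviceStatuses
)
SELECT DeviceID,
       MIN(StatusTime) AS StartDown,
       MAX(StatusTime) AS EndDown
FROM numbered
WHERE ResponseTime IS NULL
GROUP BY DeviceID, grp
ORDER BY DeviceID, StartDown;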

Optimizing a table with a huge text-field

I have a project which generates snapshots of a database, converts them to XML and then stores the XML inside a separate database. Unfortunately, these snapshots are becoming huge files and are now about 10 megabytes each. Fortunately, I only have to store them for about a month before they can be discarded, but still, a month of snapshots turns out to be really bad for performance... I think there is a way to improve performance a lot. No, not by storing the XML in a separate folder somewhere, because I don't have write access to any location on that server. The XML must stay within the database. But somehow, the field [Content] might be optimized so things will speed up... I won't need any full-text search options on this field; I will never do any searching based on it. So perhaps by disabling this field for search instructions or whatever?
The table has no references to other tables, but the structure is fixed. I cannot rename things or change the field types, so I wonder if optimization is still possible. Well, is it?
The structure, as generated by SQL Server:
CREATE TABLE [dbo].[Snapshots](
[Identity] [int] IDENTITY(1,1) NOT NULL,
[Header] [varchar](64) NOT NULL,
[Machine] [varchar](64) NOT NULL,
[User] [varchar](64) NOT NULL,
[Timestamp] [datetime] NOT NULL,
[Comment] [text] NOT NULL,
[Content] [text] NOT NULL,
CONSTRAINT [PK_SnapshotLog]
PRIMARY KEY CLUSTERED ([Identity] ASC)
WITH (PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON,
FILLFACTOR = 90) ON [PRIMARY],
CONSTRAINT [IX_SnapshotLog_Header]
UNIQUE NONCLUSTERED ([Header] ASC)
WITH (PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON,
FILLFACTOR = 90)
ON [PRIMARY],
CONSTRAINT [IX_SnapshotLog_Timestamp]
UNIQUE NONCLUSTERED ([Timestamp] ASC)
WITH (PAD_INDEX = OFF,
STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON,
FILLFACTOR = 90)
ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
Performance isn't just slow when selecting data from this table, but also when selecting or inserting data in one of the other tables in this database! When I delete all records from this table, the whole system is fast. When I start adding snapshots, performance starts to decrease. After about 30 snapshots, performance becomes bad and the risk of connection timeouts increases.
Maybe the problem isn't in the database itself, although it's still slow when used through the management tool. (It is fast when Snapshots is empty.) I mainly use ASP.NET 3.5 and the Entity Framework to connect to this database and then read the multiple tables. Maybe some performance can be gained here, although that wouldn't explain why the database is also slow from the management tools and when used through other applications with a direct connection...
The table is in the PRIMARY filegroup. Could you move this table to a different filegroup, or is even that constrained? If you can, you should move it to a different filegroup with its own physical file. That should help a lot. Check out how to create a new filegroup and move the object to it.
Given your constraints you could try zipping the XML before inserting into the DB as binary. This should significantly reduce the storage cost of this data.
You mention this is bad for performance; how often are you reading from this snapshot table? If it is just stored, it should only affect performance when writing. If you are often reading it, are you sure the performance issue is with the data storage and not with the parsing of 10 MB of XML?
The whole system became a lot faster when I replaced the TEXT datatype with the NVARCHAR(MAX) datatype. HLGEM pointed out to me that the TEXT datatype is outdated and thus troublesome. It's still a question whether the datatype of these columns can be replaced this easily with the more modern datatype, though. (Translated: I need to test whether the code will work with the altered datatype...)
So, if I were to alter the datatype from TEXT to NVARCHAR(MAX), is there anything that would break because of this? Problems that I can expect?
Right now, this seems to solve the problem but I need to do some lobbying before I'm allowed to make this change. So I need to be real sure it won't cause any (unexpected) problems.
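For reference, a sketch of the change that was eventually made, under the assumption that the columns can simply be altered in place (test against your code first, as noted above). The sp_tableoption call additionally pushes large values off-row so scans of the other columns touch far fewer pages:
ALTER TABLE [dbo].[Snapshots] ALTER COLUMN [Comment] NVARCHAR(MAX) NOT NULL;
ALTER TABLE [dbo].[Snapshots] ALTER COLUMN [Content] NVARCHAR(MAX) NOT NULL;

-- Optional: keep large values off the data pages.
EXEC sp_tableoption 'dbo.Snapshots', 'large value types out of row', 1;

-- Existing rows keep their current storage until the value is rewritten; an UPDATE
-- such as SET [Content] = [Content] forces the move for rows already in the table.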
