Incorrect default value passed to the SQL Server database - asp.net

I have set my column to int NOT NULL DEFAULT 1, but whenever I save my record, the value stored for that column is 0.
I am not setting it anywhere. I don't know where I am making a mistake.
I have debugged my code, and when I pass a new entity object, the NOT NULL column is set to 0. Maybe it is something with LINQ, but I don't know how to handle it. I don't want to assign the value explicitly.
Thanks!

For SQL Server, you can use SQL Server Profiler to capture all the statements that are run against the DB.
That may show you some details about where the 0 is coming from.

Try running this query, replacing the 'myTable' and 'myColumn' values with your actual TABLE and COLUMN names, and see what's returned:
SELECT
OBJECT_NAME(C.object_id) AS [Table Name]
,C.Name AS [Column Name]
,DC.Name AS [Constraint Name]
,DC.Type_Desc AS [Constraint Type]
,DC.Definition AS [Default Value]
FROM sys.default_constraints DC
INNER JOIN sys.Columns C
ON DC.parent_column_id = C.column_id
AND DC.parent_object_id = C.object_id
WHERE OBJECT_NAME(DC.parent_object_id) = 'myTable'
AND COL_NAME(DC.parent_object_id,DC.parent_column_id) = 'myColumn'
;
Should return something like this:
[Table Name]  [Column Name]  [Constraint Name]     [Constraint Type]    [Default Value]
---------------------------------------------------------------------------------------
myTable       myColumn       DF_myTable_myColumn   DEFAULT_CONSTRAINT   ('0')
If the [Default Value] returned is indeed (1), then it means that you have set the constraint properly and something else is at play here. It might be a trigger, or some other automated DML that you've forgotten/didn't know about, or something else entirely.
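If you suspect a trigger, one quick way to check is to list the triggers defined on the table (again substituting your actual table name for 'myTable'):
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('myTable');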
I am not the world's biggest fan of using a TRIGGER, but in a case like this, it could be handy. I find that one of the best uses for a TRIGGER is debugging little stuff like this - because it lets you see what values are being passed into a table without having to scroll through mountains of profiler data. You could try something like this (again, switching out the myTable and myColumn values with your actual table and column names):
CREATE TABLE Default_Check
(
Action_Time DATETIME NOT NULL DEFAULT GETDATE()
,Inserted_Value INT
);
CREATE TRIGGER Checking_Default ON myTable
AFTER INSERT, UPDATE
AS
BEGIN
INSERT INTO Default_Check (Inserted_Value)
SELECT I.myColumn
FROM Inserted I
;
END
;
This trigger would simply list the date/time of an update/insert done against your table, as well as the inserted value. After creating this, you could run a single INSERT statement, then check:
SELECT * FROM Default_Check;
If you see one row, only one action (insert/update) was done against the table. If you see two, something you don't expect is happening - you can check to see what. You will also see here when the 0 was inserted/updated.
When you're done, just make sure you DROP the trigger:
DROP TRIGGER Checking_Default;
You'll want to DROP the table, too, once it's become irrelevant:
DROP TABLE Default_Check;
If all of this still didn't help you, let me know.

In VB use:
Property VariableName As Integer? = Nothing
and in C# use:
int? value = 0;
if (value == 0)
{
value = null;
}

Please check my example:
create table emp ( ids int null, [DOJ] datetime NOT null)
ALTER TABLE [dbo].[Emp] ADD CONSTRAINT DF_Emp_DOJ DEFAULT (GETDATE()) FOR [DOJ]
-- 1: NOT using the default value (an explicit value, an empty string, is supplied for DOJ)
insert into emp
select '1',''
-- 2: using the default value (DOJ is omitted from the insert)
insert into emp(ids) Values(13)
select * From emp
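For clarity: in case 1 the empty string is an explicitly supplied value (SQL Server converts '' to 1900-01-01 for datetime), so the default constraint is never consulted. If you want to keep DOJ in the VALUES list and still get the default, T-SQL also accepts the DEFAULT keyword there, for example (using an arbitrary ids value):
insert into emp(ids, DOJ) values (14, DEFAULT)
select * From emp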

Related

Converting a field to lower case and merging data in an sqlite database

I need to merge some randomly uppercased data that has been collected in an SQLite table key_val, such that key is always lowercase and no vals are lost. There is a unique compound index on key,val.
The initial data looks like this:
key|val
abc|1
abc|5
aBc|1
aBc|5
aBc|3
aBc|2
AbC|1
abC|3
The result after the merge would be
key|val
abc|1
abc|2
abc|3
abc|5
In my programmer brain, I would
for each `key` with upper case letters;
if a lower cased `key` is found with the same value
then delete `key`
else update `key` to lower case
The loop implementation runs a subquery for each row found with uppercase letters, to check whether the val already exists under the lowercase key.
If it does, I can delete the row with the upper-cased key.
From there I can UPDATE key = lower(key) as the "duplicates" have been removed.
The first cut of the programming method of finding the dupes is:
SELECT * FROM key_val as parent
WHERE parent.key != lower(parent.key)
AND 0 < (
SELECT count(s.val) FROM key_val as s
WHERE s.key = lower(parent.key) AND s.val = parent.val
)
ORDER BY parent.key DESC;
I'm assuming there's a better way to do this in SQLite? The ON CONFLICT functionality seems to me like it should be able to handle the dupe deletion on UPDATE but I'm not seeing it.
First delete all the duplicates:
DELETE FROM key_val AS k1
WHERE EXISTS (
SELECT 1
FROM key_val AS k2
WHERE LOWER(k2.key) = LOWER(k1.key) AND k2.val = k1.val AND k2.rowid < k1.rowid
);
by keeping only one combination of key and val, the one with the minimum rowid.
It is not important whether the kept row has the all-lowercase key or not, because the second step is to update the table:
UPDATE key_val
SET key = LOWER(key);
See the demo.
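On the ON CONFLICT point from the question: since there is already a unique compound index on (key, val), the two steps can most likely be collapsed into a single statement using SQLite's UPDATE OR REPLACE conflict clause, which deletes any pre-existing conflicting row before applying the update (a sketch, relying on that unique index):
UPDATE OR REPLACE key_val
SET key = LOWER(key)
WHERE key <> LOWER(key);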
Honestly it might just be easier to create a new table and then insert into it. As it seems you really just want a distinct select here, use:
INSERT INTO key_val_new ("key", val)
SELECT DISTINCT LOWER("key"), val
FROM key_val;
Once you have populated the new table, you may drop the old one, and then rename the new one to the previous name:
DROP TABLE key_val;
ALTER TABLE key_val_new RENAME TO key_val;
I agree with @Tim that it would be easier to re-create the table using a simple SELECT DISTINCT LOWER(...) statement, but that's not always practical if the table has dependent objects (indexes, triggers, views). In that case it can be done as a sequence of two steps:
insert lowered keys which are not there yet:
insert into t
select distinct lower(tr.key) as key, tr.val
from t as tr
left join t as ts on ts.key = lower(tr.key) and ts.val = tr.val
where ts.key is null;
now that we have all the lowered keys, remove the other keys:
delete from t where key <> lower(key);
See fiddle: http://sqlfiddle.com/#!5/84db50/11
However, this method assumes that key is always populated (otherwise it would be a strange key).
If vals can be NULL, then "ts.val = tr.val" should be replaced with something like ifnull(ts.val, -1) = ifnull(tr.val, -1), where -1 is some value that never occurs in the data. If no such unused value can be assumed, a more elaborate check for the NULL / NOT NULL cases is needed.
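A simpler route for NULL vals is SQLite's IS operator, which treats two NULLs as equal; the insert step could likely be written as (same assumptions as above):
insert into t
select distinct lower(tr.key) as key, tr.val
from t as tr
left join t as ts on ts.key = lower(tr.key) and ts.val is tr.val
where ts.key is null;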

Is there a way to INSERT Null value as a parameter using FireDAC?

I want to leave some fields empty (i.e. NULL) when I insert values into a table. I don't see why I would want a DB full of empty strings.
I use Delphi 10, FireDAC and local SQLite DB.
Edit: The provided code is just an example. In my application values come from user input and functions, and many of them are optional. If a value is empty, I would like to keep it at NULL or the default value. Creating multiple variants of ExecSQL and nesting if statements isn't an option either - there are too many optional fields (18, to be exact).
Test table:
CREATE TABLE "Clients" (
"Name" TEXT,
"Notes" TEXT
);
This is how I tried it:
var someName,someNote: string;
begin
{...}
someName:='Vasya';
someNote:='';
FDConnection1.ExecSQL('INSERT OR REPLACE INTO Clients(Name,Notes) VALUES (:nameval,:notesval)',
[someName, IfThen(someNote.isEmpty, Null, somenote)]);
This raises an exception:
could not convert variant of type (Null) into type (OleStr)
I've tried to overload it and specify [ftString,ftString] and it didn't help.
Currently I have to do it like this and I hate this messy code:
FDConnection1.ExecSQL('INSERT OR REPLACE INTO Clients(Name,Notes) VALUES ('+
IfThen(someName.isEmpty,'NULL','"'+Sanitize(someName)+'"')+','+
IfThen(someNote.isEmpty,'NULL','"'+Sanitize(someNote)+'"')+');');
Any recommendations?
Edit 2: Currently I see an option of creating the row with "INSERT OR REPLACE" and then running a separate UPDATE for each non-empty value. But this looks dreadfully inefficient. Like this:
FDConnection1.ExecSQL('INSERT OR REPLACE INTO Clients(Name) VALUES (:nameval)',[SomeName]);
id := FDConnection1.ExecSQLScalar('SELECT id FROM Clients WHERE Name=:nameval',[SomeName]);
if not SomeNote.isEmpty then
FDConnection1.ExecSQL('UPDATE Clients SET Notes=:noteval WHERE id=:idval',[SomeNote,id]);
According to the Embarcadero documentation:
To set the parameter value to Null, specify the parameter data type,
then call the Clear method:
with FDQuery1.ParamByName('name') do begin
DataType := ftString;
Clear;
end;
FDQuery1.ExecSQL;
So, you have to use FDQuery to insert Null values, I suppose. Something like this:
//Assign FDConnection1 to FDQuery1's Connection property
FDQuery1.SQL.Text := 'INSERT OR REPLACE INTO Clients(Name,Notes) VALUES (:nameval,:notesval)';
with FDQuery1.ParamByName('nameval') do
begin
DataType := ftString;
Value := someName;
end;
with FDQuery1.ParamByName('notesval') do
begin
DataType := ftString;
if someNote.IsEmpty then
Clear
else
Value := someNote;
end;
if not FDConnection1.Connected then
FDConnection1.Open;
FDQuery1.ExecSql;
It's not a very good idea to execute a query as a plain string without parameters, because such code is vulnerable to SQL injection.
Some sources say that this is not enough and that you should do something like this:
with FDQuery1.ParamByName('name') do begin
DataType := ftString;
AsString := '';
Clear;
end;
FDQuery1.ExecSQL;
but I can't confirm that. You can try it if the main example doesn't work.
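Another option, not covered by the documentation quote above, is to push the empty-string-to-NULL conversion into the SQL itself with SQLite's NULLIF function, so the Delphi side keeps binding plain strings (a sketch reusing the question's parameter names):
INSERT OR REPLACE INTO Clients(Name, Notes)
VALUES (:nameval, NULLIF(:notesval, ''));
NULLIF returns NULL when its two arguments are equal, so an empty someNote ends up stored as NULL.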

Modify a column to NULL - Oracle

I have a table named CUSTOMER with a few columns. One of them is Customer_ID.
Initially the Customer_ID column did NOT accept NULL values.
I've made some changes at the code level so that the Customer_ID column now accepts NULL values by default.
Now my requirement is to make this column accept NULL values again.
For this I execute the query below:
ALTER TABLE Customer MODIFY Customer_ID nvarchar2(20) NULL
I'm getting the following error:
ORA-01451 error, the column already allows null entries so
therefore cannot be modified
This is because I've already made the Customer_ID column accept NULL values.
Is there a way to check whether the column already accepts NULL values before executing the above query?
You can use the column NULLABLE in USER_TAB_COLUMNS. This tells you whether the column allows nulls using a binary Y/N flag.
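For a quick manual check using the names from the question:
SELECT nullable
FROM user_tab_columns
WHERE table_name = 'CUSTOMER'
AND column_name = 'CUSTOMER_ID';
It returns 'Y' if the column already allows NULLs and 'N' if it does not.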
If you wanted to put this in a script you could do something like:
declare
l_null user_tab_columns.nullable%type;
begin
select nullable into l_null
from user_tab_columns
where table_name = 'CUSTOMER'
and column_name = 'CUSTOMER_ID';
if l_null = 'N' then
execute immediate 'ALTER TABLE Customer
MODIFY (Customer_ID nvarchar2(20) NULL)';
end if;
end;
It's best not to use dynamic SQL in order to alter tables. Do it manually and be sure to double check everything first.
Or you can just ignore the error:
declare
already_null exception;
pragma exception_init (already_null , -01451);
begin
execute immediate 'alter table <TABLE> modify(<COLUMN> null)';
exception when already_null then null;
end;
/
You might encounter this error when you have previously provided a DEFAULT ON NULL value for the NOT NULL column.
If this is the case, to make the column nullable, you must also reset its default value to NULL when you modify its nullability constraint.
eg:
DEFINE table_name = your_table_name_here
DEFINE column_name = your_column_name_here;
ALTER TABLE &table_name
MODIFY (
&column_name
DEFAULT NULL
NULL
);
I did something like this and it worked fine.
Try to execute the query and catch the SQLException if an error occurs.
try {
stmt.execute("ALTER TABLE Customer MODIFY Customer_ID nvarchar2(20) NULL");
} catch (SQLException sqe) {
Logger("Column to be modified to NULL is already NULL : " + sqe);
}
Is this the correct way of doing it?
To modify the constraints of an existing table (for example, to add a NOT NULL constraint to a column), follow these steps:
1) Select the table you want to modify.
2) Click on Actions -> Column -> Add.
3) Now give the column name, datatype, size, etc. and click OK.
4) You will see that the column is added to the table.
5) Now click on the Edit button to the left of the Actions button.
6) You will get various table modification options.
7) Select the column from the list.
8) Select the particular column you want to change.
9) Select "Cannot be null" from the column properties.
10) That's it.

How to generate a column with id which increments on every insert

This is my table, where I want PNRNo to be generated as 'PNRRES001' for the first entry, 'PNRRES002' for the next, and so on.
So while creating the table I bound that column to a function which generates the PNR number. The user only has to enter the CustomerNo from the front end, and the row with the PNR and customer number should be saved to the PNRDetails table.
CREATE TABLE PNRDetails(PNRNo AS (DBO.FuncIncPNR()), customerNo INT)
--FUNCTION TO GENERATE THE PNR NUMBER
ALTER FUNCTION dbo.FuncIncPNR()
RETURNS VARCHAR(20)
AS
BEGIN
DECLARE @RR VARCHAR(20) SET @RR='PNRRESA001'
--here I have checked if no value is there then return the first value as 'PNRRESA001'
IF((SELECT COUNT(*) FROM PNRDetails)=0)
BEGIN
RETURN @RR
END
ELSE
-- if any value is there then take the last value and add 1 to it and update to the table
BEGIN
DECLARE @pnr VARCHAR(20),@S1 VARCHAR(20),@S2 INT
DECLARE PNRCursor CURSOR STATIC
FOR SELECT PNRNo FROM PNRDetails
OPEN PNRCursor
FETCH LAST FROM PNRCursor INTO @pnr
SET @S1=SUBSTRING(@pnr,1,7)
SET @S2=RIGHT(@pnr,3)
SET @S2=@S2+1;
SET @pnr=@S1+@S2;
END
RETURN @pnr
END
--Here I am inserting only customerNo as 5, and the PNR should be generated by my function
INSERT INTO PNRDetails VALUES(5)
--it shows 1 row updated :)
SELECT * FROM PNRDetails
-- but when I run the select command it shows
--Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32). :(
You can run this. Please do help if you find anything that could help me. Any help will be appreciated!
Waiting for your kind response...
You could try to use a computed column and an identity column instead.
create table PNRDetails
(
ID int identity,
PNRNo as 'PNRRES'+right(1000+ID, 3),
customerNo int
)
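To see the generated values (assuming a freshly created table, so the IDENTITY starts at 1):
insert into PNRDetails (customerNo) values (5)
select PNRNo, customerNo from PNRDetails
-- first row comes back as: PNRRES001, 5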
I would suggest just using an IDENTITY instead as your id, letting SQL Server handle the assignment of each id number with all its built-in guards for concurrency, and leaving the formatting up to the UI... or create a computed column that defines the formatted version of the ID if you really do need it in the DB.
The risk you run with your intended approach is:
poor performance
concurrency issues - if lots of ids are being generated around the same time
If you are happy to change the table structure, the following will do the job.
CREATE TABLE [dbo].[PNRDetails](
[autoId] [int] IDENTITY(1,1) NOT NULL,
[prnNo] AS ('PNRRES'+right('000'+CONVERT([varchar](3),[dbo].[GetRowCount]([autoId]),(0)),(3))),
[customerNo] [int] NOT NULL,
CONSTRAINT [PK_Table1] PRIMARY KEY CLUSTERED
(
[autoId] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
EDIT: to address the identity gap issue for your requirement, please create the following function and pass [autoId] to it in the computed column, as shown above (edited).
CREATE FUNCTION dbo.GetRowCount
(
@autoId INT
)
RETURNS INT
AS
BEGIN
DECLARE @RESULTS AS INT
SELECT @RESULTS = COUNT(autoId) FROM PNRDetails WHERE PNRDetails.autoId<@autoId
RETURN @RESULTS + 1
END
GO
--INSERT
INSERT INTO PNRDetails (customerNo) VALUES(5)
1) You can use an identity column in your database (INTEGER)
PROS: easy / no gaps in between generated ids
CONS: you have to select the inserted id and return it via a procedure/query
if you want to show it to the end user
2) Define a database sequence (see the sketch after this list)
PROS: easy to implement / can be stored and shown to the user before the form is
even saved
CONS: gaps in between if an id is generated and then not used
3) Select max(id) + 1 from the table
PROS: useful where only a single user inserts into the table
CONS: disastrous in an environment where multiple users
insert into the same table (mismatched max ids)
4) Use a database trigger to autoincrement the column
PROS: automated
CONS: hard to debug (you have to make sure it doesn't break for some
reason, otherwise the insert fails)
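A minimal sketch of option 2, assuming SQL Server 2012 or later (sequences are not available in 2008); the names PNR_Seq and PNRDetails2 are only illustrative:
CREATE SEQUENCE PNR_Seq AS int START WITH 1 INCREMENT BY 1;
CREATE TABLE PNRDetails2
(
PNRId int NOT NULL DEFAULT (NEXT VALUE FOR PNR_Seq),
PNRNo AS ('PNRRES' + RIGHT('000' + CONVERT(varchar(3), PNRId), 3)),
customerNo int
);
The application can also call NEXT VALUE FOR PNR_Seq up front to show the number to the user before the row is saved.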
Change the way your function works. Something like this:
CREATE FUNCTION dbo.fn_FuncIncPNR(@ID int)
RETURNS varchar(20)
AS
BEGIN
Declare @Retval varchar(20),
@No varchar(4)
Select @No = convert(varchar(4), @ID)
while Len(@No) < 4
Select @No = '0' + @No
Select @Retval = 'PNRRESA' + @No
RETURN @Retval
END
You will notice the function takes a parameter.
Change your table create to this:
CREATE TABLE PNRDetails(PNRNo AS (dbo.fn_FuncIncPNR(wID)), wID int IDENTITY(1,1) NOT NULL, customerNo INT)
That should solve your problem

Ordering SQL Server results by IN clause

I have a stored procedure which uses the IN clause. In my ASP.NET application, I have a multiline textbox that supplies values to the stored procedure. I want to be able to order the results by the values as they were entered in the textbox. I found out how to do this easily in MySQL (using the FIELD function), but I haven't found a SQL Server equivalent.
So my query looks like:
Select * from myTable where item in @item
So I would be passing in values from my application like '113113','112112','114114' (in an arbitrary order). I want to order the results by that list.
Would a CASE statement be feasible? I wouldn't know how many items are coming in the textbox data.
How are you parameterising the IN clause?
As you are on SQL Server 2008 I would pass in a Table Valued Parameter with two columns item and sort_order and join on that instead. Then you can just add an ORDER BY sort_order onto the end.
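A minimal sketch of that approach; the type and procedure names, and myTable, are placeholders:
CREATE TYPE dbo.ItemList AS TABLE (item varchar(20), sort_order int);
GO
CREATE PROCEDURE dbo.GetItemsInOrder
@items dbo.ItemList READONLY
AS
SELECT t.*
FROM myTable t
INNER JOIN @items i ON i.item = t.item
ORDER BY i.sort_order;
GO
From the ASP.NET side, load the textbox values into a DataTable in the order they were entered, number sort_order 1..n, and pass it as a SqlDbType.Structured parameter.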
From KM's comment above...
I know you didn't state it is comma separated, but if it was a CSV, or even if you have it space separated, you could do the following.
DECLARE @SomeTest varchar(100) --used to hold your values
SET @SomeTest = (SELECT '68,72,103') --just some test data
SELECT
LoginID --change to your column names
FROM
Login --change to your source table name
INNER JOIN
( SELECT
*
FROM fn_IntegerInList(@SomeTest)
) n
ON
n.InListID = Login.LoginID
ORDER BY
n.SortOrder
And then create fn_IntegerInList():
CREATE FUNCTION [dbo].[fn_IntegerInList] (@InListString ntext)
RETURNS @tblINList TABLE (InListID int, SortOrder int)
AS
BEGIN
declare @length int
declare @startpos int
declare @ctr int
declare @val nvarchar(50)
declare @subs nvarchar(50)
declare @sort int
set @sort=1
set @startpos = 1
set @ctr = 1
select @length = datalength(@InListString)
while (@ctr <= @length)
begin
select @val = substring(@InListString,@ctr,1)
if @val = N','
begin
select @subs = substring(@InListString,@startpos,@ctr-@startpos)
insert into @tblINList values (@subs, @sort)
set @startpos = @ctr+1
end
if @ctr = @length
begin
select @subs = substring(@InListString,@startpos,@ctr-@startpos)
insert into @tblINList values (@subs, @sort)
end
set @ctr = @ctr +1
set @sort = @sort + 1
end
RETURN
END
This way your function creates a table that holds a sort order (SortOrder) and the ID or number you are passing in. You can of course modify this so that it looks for a space rather than a comma. Otherwise Martin has the right idea in his answer. Please note that in my example I am using one of my tables, so you will need to change the name Login to whatever you are dealing with.
The same way you concatenate ('113113','112112','114114') to build the WHERE clause of the SQL statement, you can concatenate
order by
case item
when '113113' then 1
when '112112' then 2
when '114114' then 3
end
to build your ORDER BY clause.
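Put together, the dynamically built statement would look something like this:
SELECT *
FROM myTable
WHERE item IN ('113113','112112','114114')
ORDER BY CASE item
WHEN '113113' THEN 1
WHEN '112112' THEN 2
WHEN '114114' THEN 3
END;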
