How to insert NULL value in cl-dbi - common-lisp

I have the following table allowing for NULL values
CREATE TABLE test (
test int,
test2 int);
A regular query allows inserting NULL values:
INSERT INTO test (test, test2) VALUES (NULL, NULL)
However, using cl-dbi it does not work
(cl-dbi:execute
(cl-dbi:prepare connection
"INSERT INTO test (test, test2)
VALUES (?,?)")
nil
nil)
results in
DB Error: invalid input syntax for type timestamp: "false" (Code: 22007)

You have to use the :null value instead of nil:
(cl-dbi:execute
(cl-dbi:prepare connection
"INSERT INTO test (test, test2)
VALUES (?,?)")
:null
:null)

Related

In SQLite, how to SELECT a column only if it exists in the table

I am trying to write a query where the table is generated dynamically for each job, and the columns will either exist or not based on the input. In SQLite, I need to fetch the value of a column only if it exists, otherwise NULL.
I tried IF and CASE statements using pragma_table_info, but it does not work for the negative scenario.
select
case when (select name from pragma_table_info('table_name') where name = col_name) is null
then error_message
else col_name
end
from table_name
This query runs if the mentioned col_name exists, but if it does not exist it throws a syntax error in the ELSE part.
It should be done only in a SELECT query.
Your code should work if the table's name, the column's name and the error message are properly quoted:
SELECT CASE
WHEN (SELECT name FROM pragma_table_info('table_name') WHERE name = 'col_name')
IS NULL THEN 'error_message'
ELSE 'col_name'
END
But you can do the same more simply with aggregation and COALESCE():
SELECT COALESCE(MAX(name), 'error_message')
FROM pragma_table_info('table_name')
WHERE name = 'col_name'
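A minimal, self-contained demonstration (the table and column names here are made up; pragma_table_info() as a table-valued function requires SQLite 3.16 or later):
CREATE TABLE demo (existing_col TEXT);
-- Column exists: the pragma returns a row, so its name comes back
SELECT COALESCE(MAX(name), 'error_message')
FROM pragma_table_info('demo')
WHERE name = 'existing_col';   -- returns: existing_col
-- Column does not exist: MAX() over zero rows is NULL, so the fallback comes back
SELECT COALESCE(MAX(name), 'error_message')
FROM pragma_table_info('demo')
WHERE name = 'missing_col';    -- returns: error_message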

NULL statement doesn't fill the empty spaces in Sqlite

I'm having trouble with the NULL keyword in SQLite. I added NULL for the cases where there is no info to be filled in, but once I run the code the IDE throws an error.
CREATE TABLE tenants (
Apartment_Number INT(4),
Family_Name VARCHAR(8) NULL,
Sur_Name VARCHAR(14) NULL,
Home_Number INT(4),
Mobile_Number int(10),
PRIMARY KEY (Apartment_Number )
);
INSERT INTO tenants
VALUES
(101,,,201,0544431263),
(102,,,202,0544431263),
(103,'Shklobin','marta',203,0544431263),
(104,'arman','charles',204,0544431263);
SELECT * FROM tenants;
The empty spaces are where I hoped the IDE would fill in NULL values.
The error I receive:
Error: near line 12: near ",": syntax error.
If I remove the NULL statement, the IDE runs the code with no errors.
Official documentation indicates that
The default value of each column is NULL.
The default behavior is also to allow NULL in each column; it only changes if NOT NULL and/or DEFAULT ... constraints are specified. You should get the same error whether or not you include the lone NULL keyword shown in the question's column definitions. My testing shows that the following change does not suppress the error, contrary to what the question implies--in other words, it results in the same error:
Family_Name VARCHAR(8),
Sur_Name VARCHAR(14),
The following alternative INSERT statements will work:
INSERT INTO tenants
(Apartment_Number, Home_Number, Mobile_Number)
VALUES
(101,201,0544431263),
(102,202,0544431263);
INSERT INTO tenants
VALUES
(103,'Shklobin','marta',203,0544431263),
(104,'arman','charles',204,0544431263);
or
INSERT INTO tenants
VALUES
(101, NULL, NULL,201,0544431263),
(102, NULL, NULL,202,0544431263),
(103,'Shklobin','marta',203,0544431263),
(104,'arman','charles',204,0544431263);
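To confirm that the omitted columns really were stored as NULL, a quick check against the tenants table defined above:
-- Rows 101 and 102 should come back, since their name columns were never supplied
SELECT Apartment_Number, Family_Name, Sur_Name
FROM tenants
WHERE Family_Name IS NULL AND Sur_Name IS NULL;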

tSQLt SpyProcedure with Output Parameters

I am attempting to spy a procedure with an output parameter. This procedure has two parameters, one input parameter and one output parameter.
The input parameter has a default value of NULL.
CREATE PROCEDURE spExampleProcedure
@INPUTPARAM DATETIME = NULL,
@OUTPUTPARAM INT = NULL OUTPUT
AS
....
I'm attempting to test a procedure that calls spExampleProcedure. spExampleProcedure is called multiple times with a different @INPUTPARAM. I want to check that parameter and return a different value based on the input (a more advanced sort of mock).
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
This is not working. I would really like to be able to fake/spy a procedure the way I do a function, because it would really help when a stored procedure is called multiple times.
One option I've considered is converting spExampleProcedure to a function, but that would only sidestep the issue. Looking at SpyProcedure, I see no reason why my setup should not work, other than perhaps the fake procedure it creates not having a default value of NULL.
The posted example should work, as pointed out by Sebastian Meine. I wanted to elaborate on why my test was not working, in case that helps somebody else.
My issue was related to my test data setup.
Consider:
CREATE PROCEDURE spExampleProcedure
@INPUTPARAM DATETIME = NULL,
@OUTPUTPARAM INT = NULL OUTPUT
AS
....
And
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
Procedure Under Test:
CREATE PROCEDURE spExampleProcedureUnderTest
@ID INT
AS
BEGIN
DECLARE @EXAMPLEVAR DATETIME, @OUTPUT INT
SELECT @EXAMPLEVAR = VAR FROM ExampleTable WHERE ID = @ID
EXEC spExampleProcedure @OUTPUTPARAM = @OUTPUT OUTPUT
EXEC spExampleProcedure @EXAMPLEVAR, @OUTPUT OUTPUT
...
My Test Procedure was faking ExampleTable but not putting a value in for VAR.
EXEC tSQLt.FakeTable 'dbo.ExampleTable'
INSERT INTO ExampleTable (ID) VALUES (1)
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
EXEC spExampleProcedureUnderTest 1
Instead of
EXEC tSQLt.FakeTable 'dbo.ExampleTable'
INSERT INTO ExampleTable (ID, VAR) VALUES (1, '2018-06-01')
EXEC tSQLt.SpyProcedure 'dbo.spExampleProcedure',
'SET @OUTPUTPARAM = CASE WHEN @INPUTPARAM IS NULL THEN 1 ELSE 2 END'
EXEC spExampleProcedureUnderTest 1
Note the second line of each snippet: in the working version I added a value for VAR in my insert.
Effectively, my spied procedure was being called both times with NULL. BE CAREFUL with data coming from faked tables: faking a table removes its constraints, so it's easy to put a NULL into a table that would otherwise not allow it.
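As a side note, SpyProcedure also records every call it receives, which makes it possible to assert how the spy was invoked. A minimal sketch, assuming tSQLt's standard log-table naming for the procedure above:
-- SpyProcedure logs one row per call, with a column per parameter,
-- into a table named <procedure>_SpyProcedureLog
SELECT * FROM dbo.spExampleProcedure_SpyProcedureLog;
-- For example, assert that the spy was called twice by the procedure under test
DECLARE @callCount INT = (SELECT COUNT(*) FROM dbo.spExampleProcedure_SpyProcedureLog);
EXEC tSQLt.AssertEquals 2, @callCount;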

tSQLt - Test that a column is output by a stored procedure

I'm very new to tSQLt and am having some difficulty with what should really be a very simple test.
I have added a column to the SELECT statement executed within a stored procedure.
How do I test in a tSQLt test that the column is included in the resultset from that stored procedure?
Generally, when adding a column to the output of a stored procedure, you will want to test that the column both exists and is populated with the correct data. Since checking that the column is populated with the correct data also proves that it exists, we can design a test that does exactly that:
CREATE PROCEDURE MyTests.[test stored procedure values MyNewColumn correctly]
AS
BEGIN
-- Create Actual and Expected table to hold the actual results of MyProcedure
-- and the results that I expect
CREATE TABLE MyTests.Actual (FirstColumn INT, MyNewColumn INT);
CREATE TABLE MyTests.Expected (FirstColumn INT, MyNewColumn INT);
-- Capture the results of MyProcedure into the Actual table
INSERT INTO MyTests.Actual
EXEC MySchema.MyProcedure;
-- Create the expected output
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (7, 12);
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (25, 99);
-- Check that Expected and Actual tables contain the same results
EXEC tSQLt.AssertEqualsTable 'MyTests.Expected', 'MyTests.Actual';
END;
Generally, the stored procedure you are testing relies on other tables or other stored procedures. Therefore, you should become familiar with FakeTable and SpyProcedure as well: http://tsqlt.org/user-guide/isolating-dependencies/
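For instance, if MyProcedure read its rows from a table (the table name below is made up for illustration), the test above would typically start by faking that dependency and seeding it with the rows that produce the expected output:
-- Replace the real table with an empty, constraint-free copy for the duration of the test
EXEC tSQLt.FakeTable 'MySchema.MySourceTable';
-- Seed only the data this test needs
INSERT INTO MySchema.MySourceTable (FirstColumn, MyNewColumn)
VALUES (7, 12), (25, 99);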
Another option if you are just interested in the structure of the output and not the content (and you are running on SQL2012 or greater) would be to make use of sys.dm_exec_describe_first_result_set_for_object in your test.
This dmo (dynamic management object) returns a variety of information about the first result set returned for a given object.
In my example below, I have only used a few of the columns returned by this dmo but others are available if, for example, your output includes decimal data types.
In this test, I populate a temporary table (#expected) with information about how I expect each column to be returned - such as name, datatype and nullability.
I then select the equivalent columns from the dmo into another temporary table (#actual).
Finally I use tSQLt.AssertEqualsTable to compare the contents of the two tables.
Having said all that, whilst I frequently write tests to validate the structure of views or tables (using tSQLt.AssertResultSetsHaveSameMetaData), I have never found the need to test just the result-set contract for procedures. Dennis is correct: you would typically be interested in asserting that the various columns in your result set are populated with the correct values, and by the time you've covered that functionality you will have covered every column anyway.
if object_id('dbo.myTable') is not null drop table dbo.myTable;
go
if object_id('dbo.myTable') is null
begin
create table dbo.myTable
(
Id int not null primary key
, ColumnA varchar(32) not null
, ColumnB varchar(64) null
)
end
go
if object_id('dbo.myProcedure') is not null drop procedure dbo.myProcedure;
go
create procedure dbo.myProcedure
as
begin
select Id, ColumnA, ColumnB from dbo.myTable;
end
go
exec tSQLt.NewTestClass #ClassName = 'myTests';
if object_id('[myTests].[test result set on SQL2012+]') is not null drop procedure [myTests].[test result set on SQL2012+];
go
create procedure [myTests].[test result set on SQL2012+]
as
begin
; with expectedCte (name, column_ordinal, system_type_name, is_nullable)
as
(
-- The first row sets up the data types for the #expected but is excluded from the expected results
select cast('' as nvarchar(200)), cast(0 as int), cast('' as nvarchar(200)), cast(0 as bit)
-- This is the result we are expecting to see
union all select 'Id', 1, 'int', 0
union all select 'ColumnA', 2, 'varchar(32)', 0
union all select 'ColumnB', 3, 'varchar(64)', 1
)
select * into #expected from expectedCte where column_ordinal > 0;
--! Act
select
name
, column_ordinal
, system_type_name
, is_nullable
into
#actual
from
sys.dm_exec_describe_first_result_set_for_object(object_id('dbo.myProcedure'), 0);
--! Assert
exec tSQLt.AssertEqualsTable '#expected', '#actual';
end
go
exec tSQLt.Run '[myTests].[test result set on SQL2012+]'
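If all you want to pin down is the column contract, tSQLt.AssertResultSetsHaveSameMetaData is a shorter route: it compares the metadata of the result sets produced by two commands. A minimal sketch against the objects above (whether an EXEC of the procedure is accepted as the actual command, and how strictly the metadata is compared, may depend on your SQL Server and tSQLt versions):
create procedure [myTests].[test result set metadata matches the table]
as
begin
-- Expected contract expressed as a SELECT over the source table,
-- compared against the metadata of the procedure's first result set
exec tSQLt.AssertResultSetsHaveSameMetaData
'select Id, ColumnA, ColumnB from dbo.myTable',
'exec dbo.myProcedure';
end
go
exec tSQLt.Run '[myTests].[test result set metadata matches the table]'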

Modify a column to NULL - Oracle

I have a table named CUSTOMER with a few columns, one of which is Customer_ID.
Initially the Customer_ID column did NOT accept NULL values.
I've made some changes at the code level so that the Customer_ID column accepts NULL values by default.
My requirement is that I need to make this column accept NULL values again.
For this I'm executing the query below:
ALTER TABLE Customer MODIFY Customer_ID nvarchar2(20) NULL
I'm getting the following error:
ORA-01451 error, the column already allows null entries so
therefore cannot be modified
This is because I've already made the Customer_ID column accept NULL values.
Is there a way to check whether the column accepts NULL values before executing the above query?
You can use the column NULLABLE in USER_TAB_COLUMNS. This tells you whether the column allows nulls using a binary Y/N flag.
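For a quick manual check before running the ALTER:
SELECT nullable
FROM user_tab_columns
WHERE table_name = 'CUSTOMER'
AND column_name = 'CUSTOMER_ID';
-- 'Y' means the column already allows NULLs, 'N' means it does not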
If you wanted to put this in a script you could do something like:
declare
l_null user_tab_columns.nullable%type;
begin
select nullable into l_null
from user_tab_columns
where table_name = 'CUSTOMER'
and column_name = 'CUSTOMER_ID';
if l_null = 'N' then
execute immediate 'ALTER TABLE Customer
MODIFY (Customer_ID nvarchar2(20) NULL)';
end if;
end;
That said, it's best not to use dynamic SQL to alter tables. Do it manually and be sure to double-check everything first.
Or you can just ignore the error:
declare
already_null exception;
pragma exception_init (already_null , -01451);
begin
execute immediate 'alter table <TABLE> modify(<COLUMN> null)';
exception when already_null then null;
end;
/
You might encounter this error when you have previously provided a DEFAULT ON NULL value for the NOT NULL column.
If this is the case, to make the column nullable, you must also reset its default value to NULL when you modify its nullability constraint.
eg:
DEFINE table_name = your_table_name_here
DEFINE column_name = your_column_name_here
ALTER TABLE &table_name
MODIFY (
&column_name
DEFAULT NULL
NULL
);
I did something like this and it worked fine.
Try to execute the query and, if an error occurs, catch the SQLException.
try {
stmt.execute("ALTER TABLE Customer MODIFY Customer_ID nvarchar2(20) NULL");
} catch (SQLException sqe) {
Logger("Column to be modified to NULL is already NULL : " + sqe);
}
Is this the correct way of doing it?
To modify the constraints of an existing table, for example to add a NOT NULL constraint to a column, follow these steps:
1) Select the table you want to modify.
2) Click on Actions... ---> Column ---> Add.
3) Now give the column name, datatype, size, etc. and click OK.
4) You will see that the column has been added to the table.
5) Now click on the Edit button to the left of the Actions button.
6) You will then get various table-modification options.
7) Select Column from the list.
8) Select the particular column you want to make NOT NULL.
9) Select "Cannot be null" from the column properties.
10) That's it.
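For reference, the same change can be scripted rather than clicked through; a sketch with placeholder names:
-- Add the new column (name and type are placeholders)
ALTER TABLE Customer ADD (New_Column NVARCHAR2(20));
-- Backfill existing rows first; MODIFY ... NOT NULL fails if any rows still contain NULL
UPDATE Customer SET New_Column = 'TBD' WHERE New_Column IS NULL;
ALTER TABLE Customer MODIFY (New_Column NOT NULL);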
