SHA returns different results in MariaDB - mariadb

I have a table that is filled with some values; to set a value I use a stored procedure that also calculates a hash and saves it in the database.
When a value is updated, the hash should be recalculated. For recalculating the hash I use the following procedure:
DELIMITER $$
CREATE PROCEDURE `sp_UpdateHash`(IN rkey INT)
BEGIN
DECLARE AuthCode VARCHAR(10);
SET @input = CONCAT('SELECT r_ac INTO @AuthCode
FROM table_rec
WHERE r_key=', rkey);
PREPARE squery FROM @input;
EXECUTE squery;
SET @hashed = SHA2(@AuthCode, 256);
SELECT @hashed;
DEALLOCATE PREPARE squery;
END$$
DELIMITER ;
and a procedure just for calculating the hash:
CREATE PROCEDURE `sp_GetHash`(IN AuthCode VarChar(10))
BEGIN
DECLARE hashed VarChar(64);
SET hashed = SHA2(AuthCode,256);
select hashed as 'Hash';
END
The AuthCode is identical, but the hash is different: when I try to process the value after the SELECT command I get a wrong code. If I compare the two hashes with other results, for example from an online generator, the result matches the second procedure, sp_GetHash.
Do you have any idea why?

The problem was one field that has a different character set from the rest of the table, and when I use it in the query it has a different byte length.
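For anyone who runs into the same thing: SHA2() hashes the encoded bytes of its argument, so the same text in two different character sets gives two different digests. A minimal sketch (the literal value is made up) showing the effect, and how CONVERT can normalize the input to one known charset before hashing:
SELECT SHA2(CONVERT('abc123' USING utf8mb4), 256) AS hash_utf8mb4,
SHA2(CONVERT('abc123' USING ucs2), 256) AS hash_ucs2,  -- ucs2 uses 2 bytes per character, so the digest differs
SHA2(CONVERT('abc123' USING utf8mb4), 256) =
SHA2(CONVERT('abc123' USING ucs2), 256) AS hashes_match;  -- returns 0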

Related

MariaDB stored procedure: store parameters for update

I am trying to write a MariaDB stored procedure.
Because of SQL_SAFE_UPDATES, the ID column must be used in the WHERE clause for updates. Given that, what is the normal approach to also select a value from one of the other columns? I do not want multiple SELECT statements, as that seems inefficient and error-prone because they could return values from different rows.
I would like to store the result of my first SELECT statement
SELECT id, sequence FROM RECORDSEQUENCE WHERE SEQTABLE = SeqTable;
in the two variables @id and @seq, taken from two separate columns of that query, and use them in the UPDATE statement as well as the IF statement.
CREATE DEFINER=`sd`@`%` PROCEDURE `SD_GenerateNextRecordSequence`(IN SeqTable int)
BEGIN
SELECT id, sequence FROM RECORDSEQUENCE WHERE SEQTABLE = SeqTable;
IF (@seq IS NOT NULL) THEN
SET @NEXTSEQ := @seq+1;
UPDATE RECORDSEQUENCE SET RECORDSEQUENCE = @NEXTSEQ WHERE id = @id;
ELSE
SET @NEXTSEQ := 100;
INSERT INTO RECORDSEQUENCE (RECORDSEQUENCE,SEQTABLE) VALUES (@NEXTSEQ,SeqTable);
END IF;
SELECT @NEXTSEQ as SEQUENCE;
END
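(For reference: MariaDB supports SELECT ... INTO for exactly this kind of single-row lookup. A minimal sketch of that pattern, reusing the table and column names from the post; resetting the variables first means a missing row leaves them NULL for the IF check:)
SET @id = NULL, @seq = NULL;
SELECT id, sequence INTO @id, @seq
FROM RECORDSEQUENCE
WHERE SEQTABLE = SeqTable;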

Is there a way to INSERT a Null value as a parameter using FireDAC?

I want to leave some fields empty (i.e. Null) when I insert values into a table. I don't see why I would want a DB full of empty strings in its fields.
I use Delphi 10, FireDAC and local SQLite DB.
Edit: The provided code is just an example. In my application values are provided by user input and functions, and many of them are optional. If a value is empty, I would like to keep it at Null or a default value. Creating multiple variants of ExecSQL and nesting If statements isn't an option either - there are too many optional fields (18, to be exact).
Test table:
CREATE TABLE "Clients" (
"Name" TEXT,
"Notes" TEXT
);
This is how I tried it:
var someName,someNote: string;
begin
{...}
someName:='Vasya';
someNote:='';
FDConnection1.ExecSQL('INSERT OR REPLACE INTO Clients(Name,Notes) VALUES (:nameval,:notesval)',
[someName, IfThen(someNote.isEmpty, Null, somenote)]);
This raises an exception:
could not convert variant of type (Null) into type (OleStr)
I've tried to overload it and specify [ftString,ftString] and it didn't help.
Currently I have to do it like this and I hate this messy code:
FDConnection1.ExecSQL('INSERT OR REPLACE INTO Clients(Name,Notes) VALUES ('+
IfThen(someName.isEmpty,'NULL','"'+Sanitize(someName)+'"')+','+
IfThen(someNote.isEmpty,'NULL','"'+Sanitize(someNote)+'"')+');');
Any recommendations?
Edit2: Currently I see the option of creating a new row with "INSERT OR REPLACE" and then using multiple UPDATEs in a row, one for each non-empty value. But this looks dreadfully inefficient. Like this:
FDConnection1.ExecSQL('INSERT OR REPLACE INTO Clients(Name) VALUES (:nameval)',[SomeName]);
id := FDConnection1.ExecSQLScalar('SELECT id FROM Clients WHERE Name=:nameval',[SomeName]);
if not SomeNote.isEmpty then
FDConnection1.ExecSQL('UPDATE Clients SET Notes=:noteval WHERE id=:idval',[SomeNote,id]);
According to the Embarcadero documentation (here):
To set the parameter value to Null, specify the parameter data type,
then call the Clear method:
with FDQuery1.ParamByName('name') do begin
DataType := ftString;
Clear;
end;
FDQuery1.ExecSQL;
So, you have to use FDQuery to insert Null values, I suppose. Something like this:
//Assign FDConnection1 to FDQuery1's Connection property
FDQuery1.SQL.Text := 'INSERT OR REPLACE INTO Clients(Name,Notes) VALUES (:nameval,:notesval)';
with FDQuery1.ParamByName('nameval') do
begin
DataType := ftString;
Value := someName;
end;
with FDQuery1.ParamByName('notesval') do
begin
DataType := ftString;
if someNote.IsEmpty then
Clear
else
Value := someNote;
end;
if not FDConnection1.Connected then
FDConnection1.Open;
FDQuery1.ExecSQL;
It's not a good idea to execute the query as a plain string without parameters, because such code is vulnerable to SQL injection.
Some sources say that this is not enough and that you should do something like this:
with FDQuery1.ParamByName('name') do begin
DataType := ftString;
AsString := '';
Clear;
end;
FDQuery1.ExecSQL;
but I can't confirm that. You can try it if the main example doesn't work.

tSQLt - Test that a column is output by a stored procedure

I'm very new to tSQLt and am having some difficulty with what should really be a very simple test.
I have added a column to the SELECT statement executed within a stored procedure.
How do I test in a tSQLt test that the column is included in the resultset from that stored procedure?
Generally, when adding a column to the output of a stored procedure, you will want to test that the column both exists and is populated with the correct data. Since we need to check that the column holds the expected data anyway, we can design a test that does exactly that:
CREATE PROCEDURE MyTests.[test stored procedure values MyNewColumn correctly]
AS
BEGIN
-- Create Actual and Expected table to hold the actual results of MyProcedure
-- and the results that I expect
CREATE TABLE MyTests.Actual (FirstColumn INT, MyNewColumn INT);
CREATE TABLE MyTests.Expected (FirstColumn INT, MyNewColumn INT);
-- Capture the results of MyProcedure into the Actual table
INSERT INTO MyTests.Actual
EXEC MySchema.MyProcedure;
-- Create the expected output
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (7, 12);
INSERT INTO MyTests.Expected (FirstColumn, MyNewColumn)
VALUES (25, 99);
-- Check that Expected and Actual tables contain the same results
EXEC tSQLt.AssertEqualsTable 'MyTests.Expected', 'MyTests.Actual';
END;
Generally, the stored procedure you are testing relies on other tables or other stored procedures. Therefore, you should become familiar with FakeTable and SpyProcedure as well: http://tsqlt.org/user-guide/isolating-dependencies/
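For example, here is a hedged sketch of how those two helpers typically appear in a test (the table, dependency procedure and column names below are made up):
CREATE PROCEDURE MyTests.[test MyProcedure with faked dependencies]
AS
BEGIN
-- Replace the real table with an empty, constraint-free copy, then seed only the rows this test needs
EXEC tSQLt.FakeTable @TableName = 'MySchema.SourceTable';
INSERT INTO MySchema.SourceTable (FirstColumn, MyNewColumn) VALUES (7, 12);
-- Replace a procedure that MyProcedure calls, so the test does not depend on its behaviour
EXEC tSQLt.SpyProcedure @ProcedureName = 'MySchema.SomeDependencyProc';
EXEC MySchema.MyProcedure;
END;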
Another option if you are just interested in the structure of the output and not the content (and you are running on SQL2012 or greater) would be to make use of sys.dm_exec_describe_first_result_set_for_object in your test.
This dmo (dynamic management object) returns a variety of information about the first result set returned for a given object.
In my example below, I have only used a few of the columns returned by this dmo but others are available if, for example, your output includes decimal data types.
In this test, I populate a temporary table (#expected) with information about how I expect each column to be returned - such as name, datatype and nullability.
I then select the equivalent columns from the dmo into another temporary table (#actual).
Finally I use tSQLt.AssertEqualsTable to compare the contents of the two tables.
Having said all that, whilst I frequently write tests to validate the structure of views or tables (using tSQLt.AssertResultSetsHaveSameMetaData), I have never found the need to test just the result-set contract for procedures. Dennis is correct: you would typically be interested in asserting that the various columns in your result set are populated with the correct values, and by the time you've covered that functionality you will have covered every column anyway.
if object_id('dbo.myTable') is not null drop table dbo.myTable;
go
if object_id('dbo.myTable') is null
begin
create table dbo.myTable
(
Id int not null primary key
, ColumnA varchar(32) not null
, ColumnB varchar(64) null
)
end
go
if object_id('dbo.myProcedure') is not null drop procedure dbo.myProcedure;
go
create procedure dbo.myProcedure
as
begin
select Id, ColumnA, ColumnB from dbo.myTable;
end
go
exec tSQLt.NewTestClass #ClassName = 'myTests';
if object_id('[myTests].[test result set on SQL2012+]') is not null drop procedure [myTests].[test result set on SQL2012+];
go
create procedure [myTests].[test result set on SQL2012+]
as
begin
; with expectedCte (name, column_ordinal, system_type_name, is_nullable)
as
(
-- The first row sets up the data types for the #expected but is excluded from the expected results
select cast('' as nvarchar(200)), cast(0 as int), cast('' as nvarchar(200)), cast(0 as bit)
-- This is the result we are expecting to see
union all select 'Id', 1, 'int', 0
union all select 'ColumnA', 2, 'varchar(32)', 0
union all select 'ColumnB', 3, 'varchar(64)', 1
)
select * into #expected from expectedCte where column_ordinal > 0;
--! Act
select
name
, column_ordinal
, system_type_name
, is_nullable
into
#actual
from
sys.dm_exec_describe_first_result_set_for_object(object_id('dbo.myProcedure'), 0);
--! Assert
exec tSQLt.AssertEqualsTable '#expected', '#actual';
end
go
exec tSQLt.Run '[myTests].[test result set on SQL2012+]'

Entity Framework shows an error when calling a stored procedure

In my project EF calls the stored procedure shown below. It returns either 1 or SCOPE_IDENTITY().
In the EF function imports, the stored procedure is listed with a return type of decimal.
When the stored procedure returns SCOPE_IDENTITY(), everything is OK.
But when the IF condition of the stored procedure is satisfied, EF throws this error:
The data reader returned by the store data provider does not have enough columns for the query requested.
Please help.
This is my stored procedure:
@VendorId int,
@ueeareaCode varchar(3),
@TuPrfxNo varchar(3),
@jeeSfxNo varchar(4),
@Tjode varchar(3),
@uxNo varchar(3),
@TyufxNo varchar(4),
@Iyuy bit
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements
SET NOCOUNT ON;
IF EXISTS (Select dfen_id
from dbo.efe_phfedwn_eflwn
where
[yu] = @Tyuode and
[uy] = @TuyxNo and
[yuno] = @Tuo)
return 1
ELSE
Begin
INSERT INTO dbo.yu
....................
Select Scope_Identity()
End
END
The error occurs because EF is expecting a result set, and when we use RETURN we don't get one. On top of that, the function import expects a decimal while the stored procedure returns an integer, so we CAST the selected values to decimal.
So modify the SQL so that we SELECT instead of RETURN, like so (not forgetting the CAST):
IF EXISTS (Select cntct_ctr_phn_ln_id
from dbo.cntct_ctr_phn_ln
where
[toll_free_phn_area_cd] = @TollfreeareaCode and
[toll_free_phn_prfx_no] = @TollfreePrfxNo and
[toll_free_phn_sfx_no] = @TollfreeSfxNo)
SELECT CAST(1 AS decimal)
Then also CAST the result of SCOPE_IDENTITY() to a decimal:
SELECT CAST(SCOPE_IDENTITY() AS decimal)

Best way to insert values multiple times from data layer to stored procedure?

Hi,
I have a DAL layer from which I invoke a stored procedure to insert values into the table.
E.g.:
CREATE PROCEDURE [dbo].[DataInsert]
@DataName nvarchar(64)
AS
BEGIN
INSERT INTO
table01 (dataname)
VALUES
(@DataName)
END
Now the requirement has changed: per the client's request I have to add values 5 times. So what is the best practice?
Do I call this stored procedure 5 times from my DAL?
or
Pass all the values (maybe comma separated) to the stored procedure in one go and then let the stored procedure add them 5 times?
BTW, it's not always 5 times. It is changeable.
You could create a user-defined table type;
CREATE TYPE [dbo].[SomeInfo] AS TABLE(
[Id] [int] NOT NULL,
[SomeValue] [int] NOT NULL )
Define your stored proc as such;
CREATE PROCEDURE [dbo].[AddSomeStuff]
@theStuff [SomeInfo] READONLY
AS
BEGIN
INSERT INTO SOMETABLE ([...columns...])
SELECT [...columns...] from @theStuff
END
Then you'll need to create a DataTable (called table below) that matches the schema, and call the stored proc like so:
var cmd = new SqlCommand("AddSomeStuff", sqlConn) {CommandType = CommandType.StoredProcedure};
var param = new SqlParameter("@theStuff", SqlDbType.Structured) {Value = table};
cmd.Parameters.Add(param);
cmd.ExecuteNonQuery();
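If you want to try the procedure directly from T-SQL first (without the C# layer), a minimal sketch of declaring a variable of the table type and passing it in could look like this:
DECLARE @stuff dbo.SomeInfo;  -- variable of the user-defined table type
INSERT INTO @stuff (Id, SomeValue) VALUES (1, 10), (2, 20), (3, 30);
EXEC dbo.AddSomeStuff @theStuff = @stuff;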
BTW, this proc works - I've just written and tested it; see the results below!
CREATE PROCEDURE [dbo].[DataInsert]
@DataName nvarchar(max) AS
BEGIN
DECLARE @pos SMALLINT, @str VARCHAR(max)
WHILE @DataName <> ''
BEGIN
SET @pos = CHARINDEX(',', @DataName)
IF @pos>0
BEGIN
SET @str = LEFT(@DataName, @pos-1)
SET @DataName = RIGHT(@DataName, LEN(@DataName)-@pos)
END
ELSE
BEGIN
SET @str = @DataName
SET @DataName = ''
END
INSERT INTO table01 VALUES(CONVERT(VARCHAR(100),@str))
END
END
GO
Then run it:
EXEC @return_value = [dbo].[DataInsert]
@DataName = N'five, bits, of, your, data'
Rows from table01:
five
bits
of
your
data
(5 row(s) affected)
I'd either call your proc repeatedly (that would be my choice), or else you could use XML to pass in a list of values as a single parameter.
http://support.microsoft.com/kb/555266
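A hedged sketch of that XML variant (the procedure name and XML element names are made up; the xml type's nodes()/value() methods do the splitting):
CREATE PROCEDURE [dbo].[DataInsertXml]
@DataNames xml
AS
BEGIN
INSERT INTO table01 (dataname)
SELECT n.value('.', 'nvarchar(64)')
FROM @DataNames.nodes('/names/name') AS t(n)
END
GO
EXEC dbo.DataInsertXml @DataNames = N'<names><name>five</name><name>bits</name><name>of</name><name>your</name><name>data</name></names>';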
Instead of fancy SQL code that is difficult to maintain and does not scale, I would simply go with invoking your stored procedure multiple times.
If performance or transactional behavior is an issue, you can consider sending the commands in a single batch.
You talked about 5 inserts. If the number of records to insert is much greater, you could consider a bulk insert as well.
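For instance, a minimal sketch of sending several calls as one batch inside a single transaction (the values are placeholders):
BEGIN TRANSACTION;
EXEC [dbo].[DataInsert] @DataName = N'first value';
EXEC [dbo].[DataInsert] @DataName = N'second value';
EXEC [dbo].[DataInsert] @DataName = N'third value';
COMMIT TRANSACTION;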
