SQLite Trigger to SQL Server Express 2008 - sqlite

I need help with SQL Server syntax to create some triggers that I've used successfully on SQLite, but I'm having trouble because NEW.Table is not part of SQL Server.
This is my SQLite trigger code:
UPDATE EngineeringItems set ElevacaoFIT = CAST(round((select "Position Z" - MatchingPipeOD / 2 from EngineeringItems where PnPId = new.PNPID), 0) as INT) where PNPID = new.PNPID;
PNPID is the table pk!
EDIT 2:
Thanks again, buddy!
I've tried your new code with a few modifications and SQL Server accepted it; the trigger was created successfully. So the code is:
CREATE TRIGGER [ElevacaoFIT_Insert]
ON [dbo].[EngineeringItems]
AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
UPDATE EngineeringItems
SET
ElevacaoFIT = CAST(ROUND((select EngineeringItems."Position Z" - (EngineeringItems.MatchingPipeOD / 2)
FROM
EngineeringItems INNER JOIN inserted ON EngineeringItems.PnPId = inserted.PnPId), 0) AS INT);
END
But now I get this message: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression." Any idea?
Sorry to bother you... Thanks again!!

Can you post what you have tried on SQL Server?
Something like the following may do it:
CREATE TRIGGER [dbo].[EngineeringItems_Trigger]
ON [dbo].[EngineeringItems]
AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
UPDATE EngineeringItems
SET
ElevacaoFIT = CAST(round(("Position Z" - MatchingPipeOD / 2), 0) as INT)
FROM
EngineeringItems INNER JOIN inserted ON EngineeringItems.PnPId = inserted.PNPID;
END
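Regarding the "Subquery returned more than 1 value" error from EDIT 2: the subquery there is not correlated with the row being updated, so a multi-row INSERT makes it return one value per inserted row. If you want to keep the explicit ROUND/CAST, compute the expression inline against the joined rows instead. A sketch along the same lines as the code above (the bracketed [Position Z] name is assumed from your column):
UPDATE ei
SET ElevacaoFIT = CAST(ROUND(ei.[Position Z] - (ei.MatchingPipeOD / 2), 0) AS INT)
FROM EngineeringItems AS ei
INNER JOIN inserted ON ei.PnPId = inserted.PnPId;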

Related

Incorrect default value passed to the SQL Server database

I have set my column to int not null default 1... but whenever I save my record, it sets the default value for that record to 0.
I am not setting it anywhere. I don't know where I am making a mistake.
I have debugged my code, and when I pass the new entity object it sets the default value for the not-null column to 0. Maybe it is something with LINQ, but I don't know how to handle it. I don't want to explicitly assign the value.
Thanks!
For SQL Server, you can use SQL Server Profiler to capture all the scripts run against the DB.
This may show you some details
Try running this query, replacing the 'myTable' and 'myColumn' values with your actual TABLE and COLUMN names, and see what's returned:
SELECT
OBJECT_NAME(C.object_id) AS [Table Name]
,C.Name AS [Column Name]
,DC.Name AS [Constraint Name]
,DC.Type_Desc AS [Constraint Type]
,DC.Definition AS [Default Value]
FROM sys.default_constraints DC
INNER JOIN sys.Columns C
ON DC.parent_column_id = C.column_id
AND DC.parent_object_id = C.object_id
WHERE OBJECT_NAME(DC.parent_object_id) = 'myTable'
AND COL_NAME(DC.parent_object_id,DC.parent_column_id) = 'myColumn'
;
Should return something like this:
[Table Name] [Column Name] [Constraint Name] [Constraint Type] [Default Value]
-------------------------------------------------------------------------------------------
myTable myColumn DF_myTable_myColumn DEFAULT_CONSTRAINT ('0')
If the [Default Value] returned is indeed (1), then it means that you have set the constraint properly and something else is at play here. It might be a trigger, or some other automated DML that you've forgotten/didn't know about, or something else entirely.
I am not the world's biggest fan of using a TRIGGER, but in a case like this, it could be handy. I find that one of the best uses for a TRIGGER is debugging little stuff like this - because it lets you see what values are being passed into a table without having to scroll through mountains of profiler data. You could try something like this (again, switching out the myTable and myColumn values with your actual table and column names):
CREATE TABLE Default_Check
(
Action_Time DATETIME NOT NULL DEFAULT GETDATE()
,Inserted_Value INT
);
CREATE TRIGGER Checking_Default ON myTable
AFTER INSERT, UPDATE
AS
BEGIN
INSERT INTO Default_Check (Inserted_Value)
SELECT I.myColumn
FROM Inserted I
;
END
;
This trigger would simply list the date/time of an update/insert done against your table, as well as the inserted value. After creating this, you could run a single INSERT statement, then check:
SELECT * FROM Default_Check;
If you see one row, only one action (insert/update) was done against the table. If you see two, something you don't expect is happening - you can check to see what. You will also see here when the 0 was inserted/updated.
When you're done, just make sure you DROP the trigger:
DROP TRIGGER Checking_Default;
You'll want to DROP the table, too, once it's become irrelevant:
DROP TABLE Default_Check;
If all of this still didn't help you, let me know.
In VB use
Property VariableName As Integer? = Nothing
And
In C# use
int? value = 0;
if (value == 0)
{
value = null;
}
Please check my example:
create table emp ( ids int null, [DOJ] datetime NOT null)
ALTER TABLE [dbo].[Emp] ADD CONSTRAINT DF_Emp_DOJ DEFAULT (GETDATE()) FOR [DOJ]
-- 1: not working for default values
insert into emp
select '1',''
-- 2: working for default values
insert into emp(ids) Values(13)
select * From emp
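For completeness, two more inserts that do pick up the default: omit the DOJ column entirely, or pass the DEFAULT keyword explicitly (the ids 14 and 15 are just example values):
-- 3: also working for default values
insert into emp(ids) values (14)
insert into emp(ids, DOJ) values (15, DEFAULT)
select * From emp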

PL/SQL - Inserting data using Exception

I have the following code which is not executing correctly. I have data stored in date_tmp (varchar) that includes dates and non-dates. I want to move the dates in that column to date_run (date), and data that is not a date will be moved to a comments (varchar) column. When I run the following code, the entire set of data gets moved to comments. It runs fine when I edit out the insert statement and just run the dbms_output.put_line line. What might I be doing incorrectly?
DECLARE
CURSOR getrow IS
SELECT a.id, a.date_tmp
FROM mycolumn a
WHERE a.id < 1300;
v_date date;
BEGIN
FOR i in getrow LOOP
BEGIN
v_date := to_date(i.date_tmp, 'mm/dd/yy');
INSERT INTO mycolumn a(a.date_run)
VALUES(i.date_tmp);
EXCEPTION
WHEN OTHERS THEN
--dbms_output.put_line(i.date_tmp);
update mycolumn a
SET a.comments = i.date_tmp
where a.id = i.id;
END;
END LOOP;
END;
You are trying to insert the varchar i.date_tmp into a date field. Insert v_date instead.
...
INSERT INTO mycolumn a (a.date_run)
VALUES(v_date);
...
But your requirement is actually a move, which calls for an update. So I think what you really want to do is:
...
update mycolumn a
SET a.date_run = v_date
where a.id = i.id
...
And actually you could have a function that checks if you have a valid date or not and then you might be able to handle the whole task using a simple update statement.
create or replace function is_a_date(i_date varchar2, i_pattern varchar2)
return date
is
begin
return to_date(i_date, i_pattern);
exception
when others then return null;
end is_a_date;
With that function you could write two update statements
update mycolumn
set date_run = to_date(date_tmp,'dd/mm/yy')
where is_a_date(date_tmp, 'dd/mm/yy') is not null;
update mycolumn
set comments = date_tmp
where is_a_date(date_tmp, 'dd/mm/yy') is null;
I designed the function in a way that you could use it in various ways as it returns you a date or null but no exception if the varchar does not conform to the date pattern.
You have an insert where it looks like you need an update, like you have in the exception handler. So just change it to:
v_date := to_date(i.date_tmp, 'mm/dd/yy');
update mycolumn
set date_run = v_date
where id = i.id;
or you could shorten it to:
update mycolumn
set date_run = to_date(i.date_tmp, 'mm/dd/yy')
where id = i.id;
@hol's solution is the best approach for me.
Always avoid loops and procedures when you can do the job with simple SQL statements; your code will be faster.
Also, if your data always has a fixed format, you can get rid of the is_a_date PL/SQL function and do everything with SQL... but the code gets a little uglier, with something like this:
update mycolumn
set date_run = to_date(date_tmp,'dd/mm/yy')
where substr(date_tmp,1,2) between '1' and '31'
and substr(date_tmp,4,2) between '1' and '12'
and substr(date_tmp,7,2) between '00' and '99';
If you need more speed in your query, or you have a huge amount of data in date_tmp, then since is_a_date is deterministic (it always returns the same value for the same input), you can create a function-based index on it:
create index mycol_idx on mycolumn(is_a_date(date_tmp));
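One caveat: the single-argument calls to is_a_date (and a function-based index in general) require i_pattern to have a default value, and Oracle will only index a PL/SQL function that is declared DETERMINISTIC. A sketch of the adjusted declaration (the 'dd/mm/yy' default just mirrors the pattern used above):
create or replace function is_a_date(i_date varchar2, i_pattern varchar2 default 'dd/mm/yy')
return date
deterministic
is
begin
return to_date(i_date, i_pattern);
exception
when others then return null;
end is_a_date;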
And when you use the function, Oracle will use your index, like in those selects:
SELECT a.id, a.date_tmp
FROM mycolumn a
WHERE a.id < 1300
and is_a_date(a.date_tmp) is not null;
SELECT a.id, a.date_tmp
FROM mycolumn a
WHERE a.id < 1300
and (is_a_date(a.date_tmp) is not null and is_a_date(a.date_tmp)>sysdate-5);

PL/SQL variable scope in nested blocks

I need to run some PL/SQL blocks to test them; is there an online app where I can paste the code and see what output it produces?
Thanks a lot!
More specific question below:
<<block1>>
DECLARE
var NUMBER;
BEGIN
var := 3;
DBMS_OUTPUT.PUT_LINE(var);
<<block2>>
DECLARE
var NUMBER;
BEGIN
var := 200;
DBMS_OUTPUT.PUT_LINE(block1.var);
END block2;
DBMS_OUTPUT.PUT_LINE(var);
END block1;
Is the output:
3
3
200
or is it:
3
3
3
I read that a variable's value is the value assigned in the most recent block, so is the second answer the right one? I'd love to test these online somewhere if possible.
Also, is <<block2>> really the correct way to name a block?
Later edit:
I tried this with SQL Fiddle, but I get a "Please build schema" error message:
Thank you very much, Dave! Any idea why this happens?
create table log_table
( message varchar2(200)
)
<<block1>>
DECLARE
var NUMBER;
BEGIN
var := 3;
insert into log_table(message) values (var)
select * from log_table
<<block2>>
DECLARE
var NUMBER;
BEGIN
var := 200;
insert into log_table(message) values (block1.var || ' 2nd')
select * from log_table
END block2;
insert into log_table(message) values (var || ' 3rd')
select * from log_table
END block1;
In answer to your three questions.
You can use SQL Fiddle with Oracle 11g R2: http://www.sqlfiddle.com/#!4. However, this does not allow you to use dbms_output. You will have to insert into / select from tables to see the results of your PL/SQL scripts.
The answer is 3 3 3. The inner block's var shadows the outer one, and once the inner block is END-ed its variables no longer exist; block1.var was never changed, so the final line still prints 3.
The block naming is correct, however, you aren't required to name blocks, they can be completely anonymous.
EDIT:
So after playing with SQL Fiddle a bit, it seems like it doesn't actually support named blocks (although I have an actual Oracle database to confirm what I said earlier).
You can, however, basically demonstrate the way variable scope works using stored procedures and inner procedures (which are incidentally two very important PL/SQL features).
Before I get to that, I noticed three issues with your code:
You need to terminate the insert statements with a semicolon.
You need to commit the transaction after the third insert.
In PL/SQL you can't simply do a select statement and get a result; you need to select into some variable. This would be a simple change, but because we can't use dbms_output to view the variable it doesn't help us. Instead, do the inserts, then commit, and afterwards select from the table.
In the left hand pane of SQL Fiddle set the query terminator to '//' then paste in the below and 'build schema':
create table log_table
( message varchar2(200)
)
//
create or replace procedure proc1 as
var NUMBER;
procedure proc2 as
var number;
begin
var := 200;
insert into log_table(message) values (proc1.var || ' 2nd');
end;
begin
var := 3;
insert into log_table(message) values (var || ' 1st');
proc2;
insert into log_table(message) values (var || ' 3rd');
commit;
end;
//
begin
proc1;
end;
//
Then in the right hand panel run this SQL:
select * from log_table
You can see that proc2.var has no scope outside of proc2. Furthermore, if you were to explicitly reference proc2.var outside of proc2, you would get a compilation error because it is out of scope.
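For reference, after running the three blocks above, select * from log_table should come back with something like:
MESSAGE
--------
3 1st
3 2nd
3 3rd
The '3 2nd' row is the key point: inside proc2 the qualified name proc1.var still resolves to 3, while proc2's own var (200) never escapes proc2.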

SQL Server triggers aren't working with Linq to SQL on ASP.NET

My work colleague is building an ASP.NET Web Forms application that collects data, and I'm administering its SQL Server database. Based on the database, he creates objects for the Web Forms using LINQ to SQL. He wanted records in Osoby to set dataDodania to the date the record was created and dataModyfikacji to the date of the last update. Having experience in PL/SQL, I made simple triggers for this. The problem is that the triggers work nicely when I run SQL statements in SQL Server Management Studio 2008, but when used from the application they are skipped and don't make the needed changes. Here is the triggers' SQL code:
CREATE TRIGGER [dbo].[DodanieOsoby]
ON [dbo].[Osoby]
INSTEAD OF INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
INSERT INTO Osoby(dataDodania, dataModyfikacji, loginId, rola, imie, imieDrugie, nazwisko, plec, wiek,pESEL,wyksztalcenie,opieka,ulica, nrDom, nrLokal, miejscowosc, obszar, kodPoczty, telefonKontakt, telefonStacjo, email, zatrudnienie, stanowisko, przedsiebiorstwo)
SELECT GETDATE(), GETDATE(), loginId, rola, imie, imieDrugie, nazwisko, plec, wiek, pESEL, wyksztalcenie,opieka,ulica, nrDom, nrLokal, miejscowosc, obszar, kodPoczty, telefonKontakt, telefonStacjo, email, zatrudnienie, stanowisko, przedsiebiorstwo
FROM inserted
END
And for UPDATE of Osoby...
CREATE TRIGGER [dbo].[AktualizacjaOsoby]
ON [dbo].[Osoby]
AFTER UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
UPDATE Osoby
SET dataModyfikacji = GETDATE()
WHERE id in
(SELECT DISTINCT id from Inserted)
END
This may be helpful for you (if dbo.Osoby is a view):
ALTER TRIGGER dbo.trg_IOIU_vw_WorkOut
ON dbo.vw_WorkOut
INSTEAD OF INSERT, UPDATE
AS BEGIN
SET NOCOUNT ON
SET XACT_ABORT ON
DECLARE
@WorkOutID BIGINT
, @DateOut DATETIME
, @EmployeeID INT
DECLARE workout CURSOR LOCAL READ_ONLY FAST_FORWARD FOR
SELECT
WorkOutID
, DateOut
, EmployeeID
FROM INSERTED
OPEN workout
FETCH NEXT FROM workout INTO
@WorkOutID
, @DateOut
, @EmployeeID
WHILE @@FETCH_STATUS = 0 BEGIN
IF NOT EXISTS(
SELECT 1
FROM dbo.WorkOut
WHERE WorkOutID = @WorkOutID
)
BEGIN
INSERT INTO dbo.WorkOut
(
EmployeeID
, DateOut
)
SELECT
@EmployeeID
, @DateOut
SELECT SCOPE_IDENTITY() -- if you use LINQ need return new ID to client
END
ELSE BEGIN
UPDATE dbo.WorkOut
SET
EmployeeID = @EmployeeID
, DateOut = @DateOut
WHERE WorkOutID = @WorkOutID
END
FETCH NEXT FROM workout INTO
@WorkOutID
, @DateOut
, @EmployeeID
END
CLOSE workout
DEALLOCATE workout
END
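If dbo.Osoby is a regular table rather than a view, the same point from the comment above applies: with an INSTEAD OF INSERT trigger, LINQ to SQL does not get the generated identity back unless the trigger returns it. A sketch of that idea applied to the question's trigger, assuming Osoby.id is an IDENTITY column (an assumption; check your schema). Whether LINQ to SQL actually consumes the extra result set depends on how the insert is mapped, so treat this as a debugging aid rather than a guaranteed fix:
ALTER TRIGGER [dbo].[DodanieOsoby]
ON [dbo].[Osoby]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO Osoby(dataDodania, dataModyfikacji, loginId, rola, imie, imieDrugie, nazwisko, plec, wiek, pESEL, wyksztalcenie, opieka, ulica, nrDom, nrLokal, miejscowosc, obszar, kodPoczty, telefonKontakt, telefonStacjo, email, zatrudnienie, stanowisko, przedsiebiorstwo)
SELECT GETDATE(), GETDATE(), loginId, rola, imie, imieDrugie, nazwisko, plec, wiek, pESEL, wyksztalcenie, opieka, ulica, nrDom, nrLokal, miejscowosc, obszar, kodPoczty, telefonKontakt, telefonStacjo, email, zatrudnienie, stanowisko, przedsiebiorstwo
FROM inserted
-- return the new id so the client can pick it up (assumes id is IDENTITY; only the last id for multi-row inserts)
SELECT CAST(SCOPE_IDENTITY() AS INT) AS id
END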

Preferred Method to Catch Specific OleDB Error

Ok - I have a situation in which I must execute a dynamically built stored procedure against tables that may or may not be in the database. The data retrieved is then shunted to a VB.Net-backed ASP report page. By design, if the tables are not present in the database, the relevant data is automatically hidden on the report page. Currently, I'm doing this by checking for the inevitable error and hiding the div in the catch block. A bit kludgy, but it worked.
I can't include the VB code-behind, but the relevant stored procedure is included below.
However, a problem with this method was recently brought to my attention when, for no apparent reason, the div was being hidden even though the proper data was available. As it turned out, the user trying to select the table in the dynamic SQL call didn't have the proper select permissions, an easy enough fix once I could track it down.
So, a two-fold question. First and foremost - is there a better way to check for a missing table than catching the error in the VB.Net code-behind? All things considered, I'd rather save the error handling for an actual error. Secondly, is there a preferred method to pull a particular OLE DB error out of the general exception object caught by the try/catch block, other than just checking the actual stack trace string?
SQL Query - The main gist of the code is that, due to the design of the database, I have to determine the name of the actual table being targeted manually. The database records jobs in a single table, but each job also gets its own table for processing data on the items processed in that job, and it's data from those tables I have to retrieve. Absolutely nothing I can do about this setup, unfortunately.
DECLARE @sql NVarChar(Max),
@params NVarChar(Max),
@where NVarChar(Max)
-- Retained for live testing of stored procedure.
-- DECLARE @Table NvarChar(255) SET @Table = N'tblMSGExportMessage_10000'
-- DECLARE @AcctID Integer SET @AcctID = 10000
-- DECLARE @Type Integer SET @Type = 0 -- 0 = Errors only, 1 = All Messages
-- DECLARE @Count Integer
-- Sets our parameters for our two dynamic SQL calls.
SELECT @params = N'@MsgExportAccount INT, @cnt INT OUTPUT'
-- Sets our where clause dependent upon whether we want all results or just errors.
IF @Type = 0
BEGIN
SELECT @where =
N' AND ( mem.[MSGExportStatus_OPT_CD] IN ( 11100, 11102 ) ' +
N' OR mem.[IngestionStatus_OPT_CD] IN ( 11800, 11802, 11803 ) ' +
N' OR mem.[ShortcutStatus_OPT_CD] IN ( 11500, 11502 ) ) '
END
ELSE
BEGIN
SELECT @where = N' '
END
-- Retrieves a count of messages.
SELECT @sql =
N'SELECT @cnt = Count( * ) FROM dbo.' + QuoteName( @Table ) + N' AS mem ' +
N'WHERE mem.[MSGExportAccount_ID] = @MsgExportAccount ' + @where
EXEC sp_executesql @sql, @params, @AcctID, @cnt = @Count OUTPUT
To avoid an error you could query the sysobjects table to find out if the table exists. Here's the SQL (replace YourTableNameHere). If it returns > 0, then the table exists. Create a stored procedure on the server that runs this query.
select count(*)
from sysobjects a with(nolock)
where a.xtype = 'U'
and a.name = 'YourTableNameHere'
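A sketch of how that check could be folded into the dynamic stored procedure from the question, so the report page gets an explicit signal instead of an exception (variable and column names are taken from the question; treating a NULL count as "table missing" is just one possible convention):
IF EXISTS (
SELECT 1
FROM sysobjects a WITH(NOLOCK)
WHERE a.xtype = 'U'
AND a.name = @Table
)
BEGIN
SELECT @sql =
N'SELECT @cnt = Count( * ) FROM dbo.' + QUOTENAME( @Table ) + N' AS mem ' +
N'WHERE mem.[MSGExportAccount_ID] = @MsgExportAccount ' + @where
EXEC sp_executesql @sql, @params, @AcctID, @cnt = @Count OUTPUT
END
ELSE
BEGIN
SET @Count = NULL -- no such table: let the caller hide the div without catching an error
END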
