I'm trying to insert a parent and child at the same time.
My idea is to insert the parent, get the id using SELECT last_insert_rowid() AS [Id], and use this id to insert the child.
I can get each part of this working independently but not as a whole. This is what I currently have:
INSERT INTO ParentTable (Col1)
VALUES( 'test')
SELECT last_insert_rowid() AS [Id]
The above works - so far so good. Now I want to use the result of this in the child insert. This is what I have:
INSERT INTO ChildTable (col1, col2, ParentId)
VALUES( 1, 2, SELECT Id FROM (
INSERT INTO ParentTable (Col1)
VALUES( 'test')
SELECT last_insert_rowid() AS [Id]
);
I get this error:
near "SELECT": syntax error:
Can anyone point me in the right direction?
You can't use an INSERT inside a SELECT statement. You should insert first and then use the last inserted id:
INSERT INTO ParentTable (Col1) VALUES( 'test');
INSERT INTO ChildTable (col1, col2, ParentId)
VALUES(1,2, (SELECT last_insert_rowid()));
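Note that the subquery wrapper isn't strictly required; SQLite also accepts the function call directly in the VALUES list:
INSERT INTO ChildTable (col1, col2, ParentId)
VALUES (1, 2, last_insert_rowid());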
Since you want to insert many records with the same parent ID, here is a workaround:
BEGIN TRANSACTION;
CREATE TEMPORARY TABLE IF NOT EXISTS temp(id integer);
DELETE FROM temp;
INSERT INTO ParentTable (Col1) VALUES( 'test');
INSERT INTO temp SELECT last_insert_rowid();
INSERT INTO ChildTable (col1, col2, ParentId)
VALUES(1,2, (SELECT id FROM temp LIMIT 1));
-- ... more inserts ...
COMMIT;
DROP TABLE temp;
Or you can create a permanent table for this purpose.
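For example, a minimal sketch of that permanent-table variant (the table name last_parent_id is made up for illustration):
CREATE TABLE IF NOT EXISTS last_parent_id (id INTEGER);
BEGIN TRANSACTION;
DELETE FROM last_parent_id;
INSERT INTO ParentTable (Col1) VALUES ('test');
INSERT INTO last_parent_id SELECT last_insert_rowid();
INSERT INTO ChildTable (col1, col2, ParentId)
VALUES (1, 2, (SELECT id FROM last_parent_id LIMIT 1));
COMMIT;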
The SQLite.Net PCL driver assumes that you use its ORM: inserting an object will automatically read back and assign the autoincremented ID value.
If you're using raw SQL, you have to manage the last_insert_rowid() calls yourself.
Your idea is correct, but you have to do everything in separate SQL statements:
BEGIN; -- better use RunInTransaction()
INSERT INTO Parent ...;
SELECT last_insert_rowid(); --> store in a variable in your program
INSERT INTO Child ...;
...
END;
(SQLite is an embedded database and has no client/server communication overhead; there is no reason to try to squeeze everything into a single statement.)
I have created a table person(id, name, samenamecount). The samenamecount attribute can be null, but for each row it should store the count of rows with the same name. I am trying to achieve this by calling a stored procedure inside an AFTER INSERT trigger. Below is my code.
create or replace procedure automatic(s in person.name%type)
AS
BEGIN
update person set samenamecount=(select count(*) from person where name=s) where name=s;
END;
create or replace trigger inserttrigger
after insert
on person
for each row
declare
begin
automatic(:new.name);
end;
On inserting a row it gives an error like:
table ABCD.PERSON is mutating, trigger/function may not see it.
Can somebody help me figure this out?
If you have the table:
CREATE TABLE person (
id NUMBER
GENERATED ALWAYS AS IDENTITY
CONSTRAINT person__id__pk PRIMARY KEY,
name VARCHAR2(20)
NOT NULL
);
Then, rather than creating a trigger, you could use a view:
CREATE VIEW person_view (
id,
name,
samenamecount
) AS
SELECT id,
name,
COUNT(*) OVER (PARTITION BY name)
FROM person;
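For example, with some hypothetical data, the view reports the count per name without storing anything:
INSERT INTO person (name) VALUES ('Alice');
INSERT INTO person (name) VALUES ('Alice');
INSERT INTO person (name) VALUES ('Bob');
-- Expected: both 'Alice' rows show samenamecount = 2, the 'Bob' row shows 1.
SELECT id, name, samenamecount FROM person_view ORDER BY id;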
Alternatively, you can use this trigger:
CREATE TRIGGER inserttrigger
AFTER INSERT ON person
BEGIN
MERGE INTO person dst
USING (
SELECT ROWID AS rid,
COUNT(*) OVER (PARTITION BY name) AS cnt
FROM person
) src
ON (src.rid = dst.ROWID)
WHEN MATCHED THEN
UPDATE SET samenamecount = src.cnt;
END;
/
If you want to make it more efficient, you could use a compound trigger: collect the names being inserted and only update the matching rows.
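A minimal sketch of that compound-trigger idea, assuming the person table above (the trigger name and collection handling are illustrative; duplicate names in one statement just cause a redundant update):
CREATE OR REPLACE TRIGGER person_samenamecount_cti
FOR INSERT ON person
COMPOUND TRIGGER
  TYPE name_tab IS TABLE OF person.name%TYPE;
  g_names name_tab := name_tab();

  AFTER EACH ROW IS
  BEGIN
    -- Collect the names inserted by this statement.
    g_names.EXTEND;
    g_names(g_names.COUNT) := :NEW.name;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- At statement level the table is no longer mutating,
    -- so update only the rows whose names were just inserted.
    FOR i IN 1 .. g_names.COUNT LOOP
      UPDATE person
      SET    samenamecount = (SELECT COUNT(*) FROM person p WHERE p.name = g_names(i))
      WHERE  name = g_names(i);
    END LOOP;
  END AFTER STATEMENT;
END person_samenamecount_cti;
/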
I have 2 tables with identical structure. I want to update one table using data from the other, matching on primary key. SQLite has a WITH (CTE) statement, but the following doesn't work (sqlite3 v. 3.29.0):
sqlite> select * from main;
1|A
2|B
4|D
5|E
6|F
sqlite> select * from temp;
1|aa
2|bb
3|cc
4|dd
5|ee
sqlite> with mapping as (select main.ID, temp.Desc from main join temp on temp.ID=main.ID) update main set Desc=mapping.Desc where main.ID=mapping.ID;
Error: no such column: mapping.Desc
I've tried using "select main.ID as ID, temp.Desc as Desc", but get the same error message.
To update your main table from your CTE, use a subquery, since your SQLite version doesn't support UPDATE ... FROM:
with mapping as
(select main.ID, temp.Desc
from main
join temp on temp.ID=main.ID)
update main set Desc=
(select Desc from mapping where ID = main.ID limit 1);
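For what it's worth, SQLite 3.33.0 and later support UPDATE ... FROM directly (the version in the question, 3.29.0, does not), so on newer versions the join can be written as:
-- Requires SQLite 3.33.0 or newer.
UPDATE main
SET Desc = temp.Desc
FROM temp
WHERE temp.ID = main.ID;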
I need to identify tables that were created today by an interface, which I was able to do using the following query:
Note: the interface changes the table names on a daily basis.
SELECT [name] AS [TableName]
FROM sys.tables
WHERE NAME LIKE '_XYZExport_%'
AND CAST(create_date AS DATE) = CAST(GETDATE() AS DATE)
ORDER BY NAME
What I need:
Once the table names are pulled, I need to dump their data into another table. How can this be done easily?
Example:
The following tables are returned from my query:
_XYZExport_B02
_XYZExport_B12
_XYZExport_B22
I want to take these returned tables and insert their data into an existing Archive table using Union All.
Any help would be great!
You are on the right track with your "cursor" tag. I would recommend building an insert statement and executing it on each pass through the cursor loop.
DECLARE @TableName sysname
DECLARE @SQLInsert NVARCHAR(MAX)
DECLARE TableNamesCursor CURSOR FAST_FORWARD READ_ONLY FOR
SELECT [name] AS [TableName]
FROM sys.tables
WHERE NAME LIKE '_XYZExport_%'
AND CAST(create_date AS DATE) = CAST(GETDATE() AS DATE)
ORDER BY NAME
OPEN TableNamesCursor
FETCH NEXT FROM TableNamesCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQLInsert = N'INSERT INTO ArchiveTable SELECT * FROM ' + QUOTENAME(@TableName)
EXEC sp_executesql @SQLInsert
FETCH NEXT FROM TableNamesCursor INTO @TableName
END
CLOSE TableNamesCursor
DEALLOCATE TableNamesCursor
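If you prefer the UNION ALL shape you described, a hedged alternative is to build a single INSERT ... SELECT statement dynamically; this sketch assumes SQL Server 2017+ (for STRING_AGG) and that every export table has the same column layout as ArchiveTable:
DECLARE @UnionInsert NVARCHAR(MAX)
SELECT @UnionInsert =
    N'INSERT INTO ArchiveTable ' +
    STRING_AGG(CAST(N'SELECT * FROM ' + QUOTENAME([name]) AS NVARCHAR(MAX)), N' UNION ALL ')
FROM sys.tables
WHERE NAME LIKE '_XYZExport_%'
AND CAST(create_date AS DATE) = CAST(GETDATE() AS DATE)
EXEC sp_executesql @UnionInsert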
Hope that gets you going.
Noel
Original Table1 and Table2. Both tables have data.
CREATE TABLE [dbo].[Table1]
(
[Id] int NOT NULL PRIMARY KEY
)
CREATE TABLE [dbo].[Table2]
(
[Id] INT NOT NULL PRIMARY KEY,
[Table1Id] Int NULL,
Constraint [FK_Table1_Table2] foreign key ([Table1Id]) references [Table1] (Id)
)
I'd like to change Table1.Id to UNIQUEIDENTIFIER.
The obvious approach is to just change the type from INT to UNIQUEIDENTIFIER for Table1.Id and Table2.Table1Id, then Publish. Here is the generated code:
CREATE TABLE [dbo].[tmp_ms_xx_Table1] (
[Id] UNIQUEIDENTIFIER NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
IF EXISTS (SELECT TOP 1 1
FROM [dbo].[Table1])
BEGIN
INSERT INTO [dbo].[tmp_ms_xx_Table1] ([Id])
SELECT [Id]
FROM [dbo].[Table1]
ORDER BY [Id] ASC;
END
This code will fail because the original Table1.Id is INT while the temp table's Id is UNIQUEIDENTIFIER.
Then I tried with a Pre-Deployment script, where ideally all the changes would be done manually:
--drop fk constraint
alter table [Table2] drop constraint [FK_Table1_Table2];
--rename table1.id
exec sp_rename 'Table1.Id', 'Id2', 'COLUMN';
alter table [Table1] add Id uniqueidentifier not null
default newid();
--rename table2.table1id
exec sp_rename 'Table2.Table1Id', 'Table1Id2', 'COLUMN';
alter table [Table2] add Table1Id uniqueidentifier null;
update t2 set t2.Table1ID = t1.Id
from Table2 t2 left join Table1 t1 on t2.Table1Id2 = t1.Id2;
alter table [Table2] add constraint [FK_Table1_Table2] foreign key (Table1Id) references Table1 (Id);
However it fails again, as SSDT still compares its data structure against the target database.
Any idea please?
You're right. The problem is that you can't include schema changes in the pre-deployment script, because SSDT generates its deployment script before the pre-deployment script runs. It is therefore only useful for data-only changes.
The solution is to do this outside of the SSDT process altogether. Yes, it's a pre-pre-deployment script! Essentially you have to apply your change by yourself before you even get to the SSDT bit.
(There's probably a way to do this via a custom deployment contributor. After all, everything is possible in code...)
Can I convince you to take a look at a migrations-based solution, as it appears you need an element of fine-grained script customisation? DBUp is a popular open-source solution. ReadyRoll is a more integrated commercial solution that shares a lot with SSDT.
The problem is the old data. Step 2 below will be fine if the tables no longer contain any data.
1. Pre-deployment script: copy/process the old data into temp tables, then delete it from the original tables:
create table #table1 (
id int null,
id2 uniqueidentifier null
);
insert into #table1 (id,id2)
select id,newid() from Table1;
create table #table2 (
id int null,
table1id int null,
table1id2 uniqueidentifier null
);
insert into #table2 (id, table1id)
select id,table1id from Table2;
update t2 set t2.table1id2=t1.id2
from #table2 t2 left join #table1 t1 on t2.table1id = t1.id;
delete from table2;
delete from table1;
2. The dacpac will auto-generate the schema changes. This works because no data exists any more.
3. Post-deployment script: insert the data back from the temp tables created in the pre-script:
insert into table1 (id)
select id2 from #table1;
insert into table2 (id,table1id)
select id, table1id2 from #table2;
I have the following sqlite db:
BEGIN TRANSACTION;
CREATE TABLE `table1` (
`Field1` TEXT
);
INSERT INTO `table1` VALUES ('testing');
INSERT INTO `table1` VALUES ('123');
INSERT INTO `table1` VALUES ('87654');
COMMIT;
This select returns the correct result:
select * from table1 where Field1 like '%e%';
However, this one returns nothing:
select * from table1 where Field1 like '%2%';
Even stranger, in DB Browser for SQLite:
select * from table1 where CAST(Field1 AS Text) LIKE '%2%'
Returns:
1 Rows returned from: select * from table1 where CAST(Field1 AS Text) LIKE '2%' (took %3ms)
Maybe a bug? It drops the first %.
It was a bug in DB Browser. I raised an issue and it's now been fixed in the nightly builds.