Inserting data into a table adds already-inserted data - asp.net

I have 3 tables: tblpermission, tblgroup, and tblassigngrouppermission. On the form there are two comboboxes for selecting a group and a permission. After selecting, I add the pair to a ListView, and when I save, the row goes into the table tblassigngrouppermission.
That table has columns such as assignid (auto-increment), groupid, and permissionid. All of this is added to the table correctly. The problem: if I then select the same group, pick a permission that is already assigned, and click save, the row is inserted again. I need to prevent an already-assigned permission from being added to the table a second time.
How can I do this?

When you save the data back to tblassigngrouppermission, you have to check whether that combination of group_id and permission_id is already present in the table.
If it is present, update the existing row in tblassigngrouppermission; otherwise, insert a new one.

If you are using a stored procedure, you could do this:
IF NOT EXISTS (SELECT permissionId FROM tblassigngrouppermission
               WHERE groupId = @GroupId AND permissionId = @PermissionId)
BEGIN
    INSERT INTO tblassigngrouppermission (groupId, permissionId)
    VALUES (@GroupId, @PermissionId);
END
You can also check from your code:
==> Write a function that tests whether the permission already exists:
bool GroupPermissionExists(int groupId, int permissionId)
{
    // SELECT from tblassigngrouppermission WHERE groupId = groupId AND permissionId = permissionId
    // and return true if a row comes back
}
if (!GroupPermissionExists(10, 123))
{
    AddPermissionToGroup(10, 123);
}
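A minimal sketch of the query such a helper could run, assuming the table and column names from the question (the parameter names are placeholders):
-- Returns a count; the helper would treat anything greater than zero as "already assigned".
SELECT COUNT(*)
FROM tblassigngrouppermission
WHERE groupId = @groupId
  AND permissionId = @permissionId;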

Related

Do SQLite FTS tables need to be manually populated?

The documentation for SQLite FTS implies that FTS tables should be populated and updated using INSERT, UPDATE, DELETE, etc.
That's what I was doing - adding rows, deleting them, etc., but recently I've noticed that as soon as I create the FTS table, it is automatically populated using the data from the source. I create it this way:
CREATE VIRTUAL TABLE notes_fts USING fts4(content="notes", notindexed="id", id, title, body)
If I add a row to the "notes" table, it is also automatically added to notes_fts. I guess that's just how virtual tables work.
But then, why is there a chapter about populating FTS tables? What would even be the point, since, for example, if I delete a row it will come back as long as it is still in the source table?
Any idea about this? Do FTS tables actually need to be populated?
After further reading I found that the FTS table does indeed need to be kept in sync with the content table manually. The CREATE VIRTUAL TABLE call populates the FTS table automatically, but after that, deletions, insertions and updates have to be propagated by hand.
In my case I've done it using the following triggers:
CREATE VIRTUAL TABLE notes_fts USING fts4(content="notes", notindexed="id", id, title, body);
CREATE TRIGGER notes_fts_before_update BEFORE UPDATE ON notes BEGIN
    DELETE FROM notes_fts WHERE docid = old.rowid;
END;
CREATE TRIGGER notes_fts_before_delete BEFORE DELETE ON notes BEGIN
    DELETE FROM notes_fts WHERE docid = old.rowid;
END;
CREATE TRIGGER notes_after_update AFTER UPDATE ON notes BEGIN
    INSERT INTO notes_fts(docid, id, title, body)
    SELECT rowid, id, title, body FROM notes
    WHERE is_conflict = 0 AND encryption_applied = 0 AND new.rowid = notes.rowid;
END;
CREATE TRIGGER notes_after_insert AFTER INSERT ON notes BEGIN
    INSERT INTO notes_fts(docid, id, title, body)
    SELECT rowid, id, title, body FROM notes
    WHERE is_conflict = 0 AND encryption_applied = 0 AND new.rowid = notes.rowid;
END;
According to the SQLite documentation, to delete an entry, either:
-- Insert a row with rowid=14 into the fts5 table.
INSERT INTO ft(rowid, a, b, c) VALUES(14, $a, $b, $c);
-- Remove the same row from the fts5 table.
INSERT INTO ft(ft, rowid, a, b, c) VALUES('delete', 14, $a, $b, $c);
or
CREATE TRIGGER tbl_ad AFTER DELETE ON tbl BEGIN
INSERT INTO fts_idx(fts_idx, rowid, b, c) VALUES('delete', old.a, old.b, old.c);
END;
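For completeness, the same documentation keeps the index in sync on inserts with a companion trigger along these lines (a sketch using the documentation's example tbl/fts_idx names, which are not from the question):
CREATE TRIGGER tbl_ai AFTER INSERT ON tbl BEGIN
    INSERT INTO fts_idx(rowid, b, c) VALUES (new.a, new.b, new.c);
END;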
To rebuild the index from the content table after such modifications:
INSERT INTO ft(ft) VALUES('rebuild');

MariaDB, Delete with condition not working

I'm building an App and using MariaDB as my database.
I have a table "kick_votes". Its primary key consists of three fields:
user_id
group_id
vote_id
I need to delete rows where user_id AND group_id match my conditions, or where just the vote_id does.
I have enabled the option that requires a key column in the WHERE clause (safe update mode) for security reasons.
These work correctly:
DELETE FROM kick_votes WHERE (user_id=86 AND group_id=10);
DELETE FROM kick_votes WHERE vote_id=2;
But I don't want to use two statements, and the following doesn't work:
DELETE FROM kick_votes WHERE (user_id=86 AND group_id=10) OR vote_id=2;
I get the error:
You are using safe update mode and you tried to update a table without a
WHERE that uses a KEY column.
Why isn't it working?
It seems that the presence of a key column doesn't actually affect the error.
From the MariaDB sources, in mysql_delete:
const_cond= (!conds || conds->const_item());
safe_update= MY_TEST(thd->variables.option_bits & OPTION_SAFE_UPDATES);
if (safe_update && const_cond)
{
    my_message(ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE,
               ER_THD(thd, ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE), MYF(0));
    DBUG_RETURN(TRUE);
}
So the only option is to turn off safe mode:
SET @@sql_safe_updates = 0;
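If you only want to relax it around this one statement, a sketch (assuming a session-level toggle is acceptable):
SET SESSION sql_safe_updates = 0;
DELETE FROM kick_votes WHERE (user_id = 86 AND group_id = 10) OR vote_id = 2;
SET SESSION sql_safe_updates = 1;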
Try this kludge:
DELETE FROM kick_votes
WHERE id > 0   -- assuming this is your PRIMARY KEY
  AND ( (user_id = 86 AND group_id = 10)
        OR vote_id = 2 );
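Since kick_votes actually has the composite key (user_id, group_id, vote_id) rather than a single id column, the equivalent sketch would lean on one of those key columns instead (untested, assuming vote_id values are positive):
DELETE FROM kick_votes
WHERE vote_id > 0   -- vote_id is part of the PRIMARY KEY
  AND ( (user_id = 86 AND group_id = 10)
        OR vote_id = 2 );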

Insert/update trigger updating column value of all rows

I am running into a logical problem. My trigger is:
create trigger Points1
on Posts
after insert, update
as
    declare @value int
    declare @postedby int
    select @value = Count(Message) from Posts
    select @postedby = PostedBy from Posts
    update AspNetUsers set User_points = @value * 3
    where (AspNetUsers.Id = @postedby)
I don't know whether I am doing it right or not.
There are two tables: AspNetUsers, with a User_points column and Id as the primary key, and Posts, with PostId as the primary key and PostedBy as a foreign key referencing AspNetUsers.
I want to compare PostedBy with the Id column, and if they match, add 3 to that user's User_points for every single message he posted.
The problem is:
1. It is inserting the same number of points in every row. It should look only at the currently inserted row, take the PostedBy value of that row, compare it with the Id column of the other table, and update the points of only that Id.
But whatever I try, the result is the same.
Please tell me how to do it.
Thanks in advance.
Change
select @postedby = PostedBy from Posts
to
select @postedby = PostedBy from INSERTED
INSERTED is a "magic" table that holds the inserted/updated data within the trigger's scope.
Likewise, the DELETED table holds the previous data of rows affected by an update.
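Note that a trigger fires once per statement, not once per row, so a statement that inserts or updates several posts would leave all but one of them unhandled by the single-variable approach. A set-based sketch using the table and column names from the question (untested; it recounts all posts for each affected user):
CREATE TRIGGER Points1
ON Posts
AFTER INSERT, UPDATE
AS
BEGIN
    -- Recompute points for every user referenced by the inserted/updated rows.
    UPDATE u
    SET User_points = p.PostCount * 3
    FROM AspNetUsers AS u
    JOIN (
        SELECT PostedBy, COUNT(*) AS PostCount
        FROM Posts
        GROUP BY PostedBy
    ) AS p ON p.PostedBy = u.Id
    WHERE u.Id IN (SELECT PostedBy FROM INSERTED);
END;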

How to skip a transaction in the Replicat (target system) in GoldenGate (Oracle)

Requirement:
The source table contains 5 columns. We are replicating 3 of the 5 columns to the target.
SEQ_ID is an additional column on the target.
Currently, when an update is performed on columns which are not in the target table, SEQ_ID is increased.
SEQ_ID should increase only when the update is performed on columns which are present on the target.
We enabled unconditional supplemental table-level logging on the selected columns (ID, AGE, COL1) to be replicated.
Source:
Table name: Test1 (ID, AGE, COL1, COL2, COL3)
Target:
Table name: Test1 (ID, AGE, COL1, SEQ_ID)
We created a sequence to increase SEQ_ID when an insert or update happens.
Scenario:
If an insert or update happens on the source table on these columns (ID, AGE, COL1), SEQ_ID is increased;
but if an update happens on the other columns (COL2, COL3), SEQ_ID also gets incremented.
Our requirement is that when an update happens on the other columns (COL2, COL3), SEQ_ID should not get incremented.
I want to skip the transactions for updates that only touch those columns (COL2, COL3).
Source:
Primary extract test_e1
EXTRACT TEST_e1
USERID DBTUAT_GG,PASSWORD dbt_1234
EXTTRAIL /DB_TRACK_GG/GGS/dirdat/dd
GETUPDATEBEFORES
--IGNOREUPDATES
--IGNOREDELETES
NOCOMPRESSUPDATES
TABLE HARI.TEST1,COLS(ID,AGE,COL1),FILTER (ON UPDATE,IGNORE UPDATE, #STREQ(before.AGE, AGE) = 0);
Datapump test_p1:
EXTRACT TEST_P1
USERID DBTUAT_GG,PASSWORD dbt_1234
RMTHOST 10.24.187.235, MGRPORT 7809,
RMTTRAIL /Trail_files/tt
--PASSTHRU
TABLE DBTUAT_GG.TEST1;
Target:
Target Replicat file:
Edit param test_r
REPLICAT TEST_R
USERID GGPROD,PASSWORD GGPROD_123
SOURCEDEFS ./dirsql/def32.sql
HANDLECOLLISIONS
IGNOREDELETES
INSERTMISSINGUPDATES
MAP HARI.TEST1, TARGET HARI.TEST1, &
SQLEXEC (ID test_num,QUERY "select GGPROD.test_seq.NEXTVAL test_val from dual", NOPARAMS), &
COLMAP(USEDEFAULTS,SEQ_ID=test_num.test_val);
Kindly suggest any possible solutions.
First note: you don't need USERID and PASSWORD in the pump PRM file; the pump process does not connect to any database.
Actually, you have already achieved most of your goal: because of the FILTER clause in the Extract PRM file, you only replicate the data when AGE is modified. The ID column should not change since it is the PK. The only remaining problem is that you are increasing SEQ_ID when a delete gets replicated.
Something like this should work:
ALLOWDUPTARGETMAP
IGNOREDELETES
-- just inserts and updates that change COL1
MAP HARI.TEST1, TARGET HARI.TEST1, &
SQLEXEC (ID test_num,QUERY "select GGPROD.test_seq.NEXTVAL test_val from dual", NOPARAMS), &
COLMAP(USEDEFAULTS, SEQ_ID = test_num.test_val);
-- just replicate the delete operations
GETDELETES
IGNOREINSERTS
IGNOREUPDATES
MAP HARI.TEST1, TARGET HARI.TEST1;
GETINSERTS
GETUPDATES

SQLite: reuse a freed unique value in a size-limited table

I want to create a table with a field that is unique and limited to a certain value. Let's say the limit is 100 and the table is full: I remove a random row, and when I create a new row it should get the value that was freed before.
It doesn't need to be the fastest thing in the world (the limit is quite small); I just want to implement it in the DB.
Any ideas?
Add one more column to the main table, say deleted (integer, 0 or 1). When you need to delete the row with a certain id, do not really delete it, but simply update deleted to 1:
UPDATE mytable SET deleted=1 WHERE id = <id_to_delete>
When you need to insert, first look for an id that can be reused:
SELECT id FROM mytable WHERE deleted LIMIT 1
If this query returns an empty result, use INSERT to create a new id. Otherwise, simply update the existing row:
UPDATE mytable SET deleted=0, name='blah', ... WHERE id=<id_to_reuse>
All queries reading from the main table should then have a NOT deleted condition in their WHERE clause:
SELECT * FROM mytable WHERE NOT deleted
If you add an index on deleted, this method should be fast even for a large number of rows.
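A minimal sketch of the schema this answer assumes (the name column is illustrative):
CREATE TABLE mytable (
    id INTEGER PRIMARY KEY,
    name TEXT,
    deleted INTEGER NOT NULL DEFAULT 0   -- 0 = live row, 1 = freed slot
);
CREATE INDEX mytable_deleted_idx ON mytable(deleted);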
This solution does everything in a trigger, so you can just use a normal INSERT.
For the table itself, we use an autoincrementing ID column:
CREATE TABLE MyTable(ID INTEGER PRIMARY KEY, Name);
We need another table to store an ID temporarily:
CREATE TABLE moriturus(ID INTEGER PRIMARY KEY);
And the trigger:
CREATE TRIGGER MyTable_DeleteAndReorder
AFTER INSERT ON MyTable
FOR EACH ROW
WHEN (SELECT COUNT(*) FROM MyTable) > 100
BEGIN
    -- first, select a random record to be deleted, and save its ID
    DELETE FROM moriturus;
    INSERT INTO moriturus
    SELECT ID FROM MyTable
    WHERE ID <> NEW.ID
    ORDER BY random()
    LIMIT 1;
    -- then actually delete it
    DELETE FROM MyTable
    WHERE ID = (SELECT ID FROM moriturus);
    -- then change the just-inserted record to have that ID
    UPDATE MyTable
    SET ID = (SELECT ID FROM moriturus)
    WHERE ID = NEW.ID;
END;
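With that trigger in place, usage is just a plain insert; once the count would exceed 100, the new row ends up with a randomly freed ID instead of a fresh one. A usage sketch (the Name value is illustrative):
INSERT INTO MyTable(Name) VALUES ('example');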
