I'm building an app and using MariaDB as my database.
I have a table "kick_votes". Its primary key consists of three fields:
user_id
group_id
vote_id
I need to delete rows where user_id AND group_id match my conditions, or where just the vote_id matches.
I have enabled the option that requires a key column in the WHERE clause, for safety reasons.
These statements work correctly:
DELETE FROM kick_votes WHERE (user_id=86 AND group_id=10);
DELETE FROM kick_votes WHERE vote_id=2;
However, I don't want to use two statements, and the following doesn't work:
DELETE FROM kick_votes WHERE (user_id=86 AND group_id=10) OR vote_id=2;
I get the error:
You are using safe update mode and you tried to update a table without a
WHERE that uses a KEY column.
Why isn't it working?
It seems that the presence of key columns doesn't actually affect the error.
From the MariaDB sources, in mysql_delete():
const_cond= (!conds || conds->const_item());
safe_update= MY_TEST(thd->variables.option_bits & OPTION_SAFE_UPDATES);
if (safe_update && const_cond)
{
my_message(ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE,
ER_THD(thd, ER_UPDATE_WITHOUT_KEY_IN_SAFE_MODE), MYF(0));
DBUG_RETURN(TRUE);
}
So the only remaining option is to turn off safe mode:
SET @@sql_safe_updates = 0;
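If you only want to lift the restriction temporarily, a common pattern (sketched here with the exact statement from the question) is to disable safe mode for your session, run the combined DELETE, and switch it back on:
SET @@sql_safe_updates = 0;
DELETE FROM kick_votes WHERE (user_id=86 AND group_id=10) OR vote_id=2;
SET @@sql_safe_updates = 1;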
Try this kludge:
DELETE FROM kick_votes
WHERE vote_id > 0 -- vote_id is part of your PRIMARY KEY (assuming its values are positive)
AND ( (user_id=86 AND group_id=10)
OR vote_id=2
);
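Before choosing either workaround, you can do a quick sanity check to see whether safe-update mode is actually enabled for your current session:
SELECT @@sql_safe_updates;
A result of 1 means the mode is on and the original combined DELETE will be rejected.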
Related
I have set my column to INT NOT NULL DEFAULT 1, but whenever I save my record, the value stored for that column is 0.
I am not setting it anywhere, and I don't know where I am making a mistake.
I have debugged my code, and when I pass a new entity object, the NOT NULL column is set to 0. Maybe it is something with LINQ, but I don't know how to handle it. I don't want to assign the value explicitly.
Thanks!
For SQL Server, you can use SQL Server Profiler to capture all the scripts that run against the DB.
This may show you some details.
Try running this query, replacing the 'myTable' and 'myColumn' values with your actual TABLE and COLUMN names, and see what's returned:
SELECT
OBJECT_NAME(C.object_id) AS [Table Name]
,C.Name AS [Column Name]
,DC.Name AS [Constraint Name]
,DC.Type_Desc AS [Constraint Type]
,DC.Definition AS [Default Value]
FROM sys.default_constraints DC
INNER JOIN sys.Columns C
ON DC.parent_column_id = C.column_id
AND DC.parent_object_id = C.object_id
WHERE OBJECT_NAME(DC.parent_object_id) = 'myTable'
AND COL_NAME(DC.parent_object_id,DC.parent_column_id) = 'myColumn'
;
Should return something like this:
[Table Name] [Column Name] [Constraint Name] [Constraint Type] [Default Value]
-------------------------------------------------------------------------------------------
myTable myColumn DF_myTable_myColumn DEFAULT_CONSTRAINT ('0')
If the [Default Value] returned is indeed (1), then it means that you have set the constraint properly and something else is at play here. It might be a trigger, or some other automated DML that you've forgotten/didn't know about, or something else entirely.
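Since a trigger is one of the usual suspects here, a quick way to list any triggers defined on the table (again substituting your real table name for myTable) is a query along these lines:
SELECT
     T.Name AS [Trigger Name]
    ,M.Definition AS [Trigger Definition]
FROM sys.triggers T
INNER JOIN sys.sql_modules M
    ON T.object_id = M.object_id
WHERE T.parent_id = OBJECT_ID('myTable')
;
If that returns nothing, you can rule triggers out and look at the application side instead.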
I am not the world's biggest fan of using a TRIGGER, but in a case like this, it could be handy. I find that one of the best uses for a TRIGGER is debugging little stuff like this - because it lets you see what values are being passed into a table without having to scroll through mountains of profiler data. You could try something like this (again, switching out the myTable and myColumn values with your actual table and column names):
CREATE TABLE Default_Check
(
Action_Time DATETIME NOT NULL DEFAULT GETDATE()
,Inserted_Value INT
);
CREATE TRIGGER Checking_Default ON myTable
AFTER INSERT, UPDATE
AS
BEGIN
INSERT INTO Default_Check (Inserted_Value)
SELECT I.myColumn
FROM Inserted I
;
END
;
This trigger would simply list the date/time of an update/insert done against your table, as well as the inserted value. After creating this, you could run a single INSERT statement, then check:
SELECT * FROM Default_Check;
If you see one row, only one action (insert/update) was done against the table. If you see two, something you don't expect is happening - you can check to see what. You will also see here when the 0 was inserted/updated.
When you're done, just make sure you DROP the trigger:
DROP TRIGGER Checking_Default;
You'll want to DROP the table, too, once it's become irrelevant:
DROP TABLE Default_Check;
If all of this still didn't help you, let me know.
In VB use
Property VariableName As Integer? = Nothing
And
In C# use
int? value = 0;
if (value == 0)
{
value = null;
}
Please check my example:
create table emp ( ids int null, [DOJ] datetime NOT null)
ALTER TABLE [dbo].[Emp] ADD CONSTRAINT DF_Emp_DOJ DEFAULT (GETDATE()) FOR [DOJ]
-- 1: NOT working for the default value (an empty string is supplied for DOJ)
insert into emp
select '1',''
-- 2: working for the default value (DOJ is omitted, so the default applies)
insert into emp(ids) Values(13)
select * From emp
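For completeness, you can also list the column and still get the default by passing the DEFAULT keyword explicitly (14 below is just an arbitrary sample value):
-- 3: also working for the default value (DEFAULT keyword passed explicitly)
insert into emp(ids, DOJ) Values(14, DEFAULT)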
I'm familiar with MySQL and am starting to use Amazon DynamoDB for a new project.
Assume I have a MySQL table like this:
CREATE TABLE foo (
id CHAR(64) NOT NULL,
scheduledDelivery DATETIME NOT NULL,
-- ...other columns...
PRIMARY KEY(id),
INDEX schedIndex (scheduledDelivery)
);
Note the secondary index schedIndex, which is supposed to speed up the following query (which is executed periodically):
SELECT *
FROM foo
WHERE scheduledDelivery <= NOW()
ORDER BY scheduledDelivery ASC
LIMIT 100;
That is: Take the 100 oldest items that are due to be delivered.
With DynamoDB I can use the id column as primary partition key.
However, I don't understand how I can avoid full-table scans in DynamoDB. When adding a secondary index, I must always specify a "partition key". In MySQL terms, I see these problems:
the scheduledDelivery column is not unique, so it can't be used as a partition key itself AFAIK
adding id as a unique partition key and using scheduledDelivery as the "sort key" sounds like an (id, scheduledDelivery) secondary index to me, which makes that index practically useless
I understand that MySQL and DynamoDB require different approaches, so what would be an appropriate solution in this case?
It's not possible to avoid a full table scan with this kind of query.
However, you may be able to disguise it as a Query operation, which would allow you to sort the results (not possible with a Scan).
You must first create a GSI. Let's name it scheduled_delivery-index.
We will specify our index's partition key to be an attribute named fixed_val, and our sort key to be scheduled_delivery.
fixed_val will contain any value you want, but it must always be that value, and you must know it from the client side. For the sake of this example, let's say that fixed_val will always be 1.
GSI keys do not have to be unique, so don't worry if two items have the same scheduled_delivery value.
You would query the table like this:
var now = Date.now();
//...
{
TableName: "foo",
IndexName: "scheduled_delivery-index",
ExpressionAttributeNames: {
"#f": "fixed_value",
"#d": "scheduled_delivery"
},
ExpressionAttributeValues: {
":f": 1,
":d": now
},
KeyConditionExpression: "#f = :f and #d <= :d",
ScanIndexForward: true
}
Requirement:
The source table contains 5 columns. We are replicating 3 of the 5 columns to the target.
SEQ_ID is an additional column on the target.
Currently, when an update is performed on columns which are not in the target table, SEQ_ID is increased.
SEQ_ID should increase only when an update is performed on columns which are present on the target.
Unconditional supplemental table-level logging is enabled on the selected columns (ID, AGE, COL1) to be replicated:
Source:
Table name: Test1(ID,AGE,COL1,COL2,COL3)
Target:
Table name: Test1(ID,AGE,COL1,SEQ_ID)
We created a sequence to increase the SEQ_ID when an insert or update happens.
Scenario:
If an insert or update happens on the source table on these columns (ID, AGE, COL1), SEQ_ID is increased,
but if an update happens on the other columns (COL2, COL3), SEQ_ID is also incremented.
Our requirement is that when an update happens on the other columns (COL2, COL3), SEQ_ID should not get incremented.
I want to skip the transactions for updates happening on columns (COL2, COL3).
Source:
Primary extract test_e1
EXTRACT TEST_e1
USERID DBTUAT_GG,PASSWORD dbt_1234
EXTTRAIL /DB_TRACK_GG/GGS/dirdat/dd
GETUPDATEBEFORES
--IGNOREUPDATES
--IGNOREDELETES
NOCOMPRESSUPDATES
TABLE HARI.TEST1, COLS (ID, AGE, COL1), FILTER (ON UPDATE, IGNORE UPDATE, @STREQ(before.AGE, AGE) = 0);
Datapump test_p1:
EXTRACT TEST_P1
USERID DBTUAT_GG,PASSWORD dbt_1234
RMTHOST 10.24.187.235, MGRPORT 7809,
RMTTRAIL /Trail_files/tt
--PASSTHRU
TABLE DBTUAT_GG.TEST1;
Target:
Target Replicat file:
Edit param test_r
REPLICAT TEST_R
USERID GGPROD,PASSWORD GGPROD_123
SOURCEDEFS ./dirsql/def32.sql
HANDLECOLLISIONS
IGNOREDELETES
INSERTMISSINGUPDATES
MAP HARI.TEST1, TARGET HARI.TEST1, &
SQLEXEC (ID test_num,QUERY "select GGPROD.test_seq.NEXTVAL test_val from dual", NOPARAMS), &
COLMAP(USEDEFAULTS,SEQ_ID=test_num.test_val);
Kindly suggest any possible solutions.
First note: you don't need USERID and PASSWORD in the Pump PRM. The Pump process does not connect to any database.
Actually, you have already achieved your goal: you are only replicating the data when AGE is modified, thanks to the FILTER clause in the Extract PRM file. The ID column should not change, since it is the PK. The only problem is that you are increasing the SEQ_ID when a delete gets replicated.
Something like that should work:
ALLOWDUPTARGETMAP
IGNOREDELETES
-- just inserts and updates that change COL1
MAP HARI.TEST1, TARGET HARI.TEST1, &
SQLEXEC (ID test_num,QUERY "select GGPROD.test_seq.NEXTVAL test_val from dual", NOPARAMS), &
COLMAP(USEDEFAULTS, SEQ_ID = test_num.test_val);
-- just replicate the delete operations
GETDELETES
IGNOREINSERTS
IGNOREUPDATES
MAP HARI.TEST1, TARGET HARI.TEST1;
GETINSERTS
GETUPDATES
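For reference, the GGPROD.test_seq sequence referenced by the SQLEXEC clause would be created on the target database with something like the following (the exact options are your choice; this is just an illustrative sketch):
CREATE SEQUENCE GGPROD.test_seq START WITH 1 INCREMENT BY 1;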
I've got a function that I'd like to use to delete a row in my database. The statement below is the only way I've used DELETE to remove a row before, but I want the 1 to be replaced by a variable called recID, so that the row whose ID equals the value of recID is deleted. So if recID = 6, I want the function to delete the row with ID = 6. I hope that makes sense.
'DELETE FROM MyRecords WHERE ID=1';
The notation I've been using is the following, if it helps or makes any difference.
db.transaction(function(transaction) {
transaction.executeSql( /* DELETE STATEMENT HERE */ );
});
executeSql supports arguments (check definition).
Use it like:
db.transaction(function(transaction) {
transaction.executeSql("DELETE FROM MyRecords WHERE ID=?", [recId]);
});
If you're certain that your variable, recID, will only ever contain numbers, you can just use:
transaction.executeSql("DELETE FROM MyRecords WHERE ID=" + recID);
If recID comes from outside your application (user input), however, it either needs to be sanitized, or you should use a prepared statement and let the database API set the parameter after the statement has been prepared. Otherwise you open yourself up to SQL injection attacks.
I don't know the details of your SQLite wrapper, or what version of SQLite it wraps, but creating a prepared statement using the SQLite3 C API would go something like this:
// sqlite3* db = ...
sqlite3_stmt* stmt;
sqlite3_prepare_v2(db, "DELETE FROM MyRecords WHERE ID=?", -1, &stmt, 0);
sqlite3_bind_int(stmt, 1, recID);
sqlite3_step(stmt);
// ...
sqlite3_finalize(stmt);
This simple example excludes all the error checking you'd want to do in a real application, but since you're using a wrapper with different syntax anyway, you'd have to figure out how it exposes these functions.
I have 3 tables: tblpermission, tblgroup, and tblassigngrouppermission. I have a form with two comboboxes for selecting a group and a permission. After selecting, I add the pair to a listview. Then I save it, and at that point it goes into the table tblassigngrouppermission.
That table has columns such as assign id (auto-increment), group id, and permission id. All are correctly added to the table. But if, after saving, I select the same group, pick an already assigned permission, and click save, it is added to the table again. I need to prevent an already assigned permission from being added to the table.
How can I do this?
When you are saving the data back to tblassigngrouppermission, you will have to check for the presence of the group id and permission id in the table.
If they are present, you will have to update tblassigngrouppermission; otherwise you will have to insert into tblassigngrouppermission.
If you are using a stored procedure, you could do this:
IF NOT EXISTS(SELECT permissionId FROM tblassigngrouppermission
              WHERE groupId = @groupId AND permissionId = @permissionId)
BEGIN
    INSERT INTO tblassigngrouppermission(groupId, permissionId) VALUES(@groupId, @permissionId)
END
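Independently of the application-side check, you could also add a unique constraint on the pair so the database itself rejects duplicate assignments (the constraint name below is just a suggestion):
ALTER TABLE tblassigngrouppermission
ADD CONSTRAINT UQ_tblassigngrouppermission_group_permission UNIQUE (groupId, permissionId)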
You can also check from your code: write a function that tests whether the permission already exists.
bool GroupPermissionExists(int groupId, int permissionId)
{
//Select Where GroupId=groupId AND PermissionID=permissionId
}
if(!GroupPermissionExists(10, 123))
{
AddPermissionToGroup(10, 123);
}