Deleting the same records from the same table causes deadlock - MariaDB

We have a table which stores the login information for a company (a company will have multiple users), and all records of that company are deleted when ANY user of that company logs in.
We are getting deadlocks in the above case, and we cannot change anything from the application side.
There is no gap lock involved, and both deletes are using the proper index. Since the statement does an index scan, the rows should be read in sequential order; if the order is sequential, the second statement should simply wait to acquire the lock, yet we get a deadlock instead.
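(For reference, index usage can be confirmed with EXPLAIN on the delete statement itself; MariaDB 10.2 supports EXPLAIN for DELETE. A representative statement, using one of the IDs from the logs below:)
EXPLAIN DELETE FROM TableA WHERE current_timestamp >= lastActivityDtTm AND ID = 3;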
MariaDB Version : 10.2.14
Isolation : READ-COMMITTED
Below is the table DDL:
CREATE TABLE `TableA` (
`TableAKY` INT(11) NOT NULL AUTO_INCREMENT,
`PERSONKY` INT(11) NOT NULL,
`ID` INT(11) NULL DEFAULT NULL,
`PackageName` VARCHAR(150) NOT NULL COLLATE 'utf8_bin',
`LOCKEDKY` BIGINT(20) NULL DEFAULT NULL,
`LASTACTIVITYDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEUSER` VARCHAR(32) NOT NULL DEFAULT 'SYSTEM' COLLATE 'utf8_bin',
`VERSIONSTAMP` SMALLINT(6) NOT NULL DEFAULT '1',
`USERKY` INT(11) NULL DEFAULT '0',
PRIMARY KEY (`TableAKY`),
INDEX `TableA_GI1` (`LASTACTIVITYDTTM`),
INDEX `TableA_GI2` (`PERSONKY`),
INDEX `TableA_GI4` (`LOCKEDKY`),
INDEX `TableA_GI3` (`ID`, `LASTACTIVITYDTTM`),
CONSTRAINT `TableA_FK1` FOREIGN KEY (`PERSONKY`) REFERENCES `corperson` (`PERSONKY`)
)
COLLATE='utf8_bin'
ENGINE=InnoDB;
The SHOW CREATE TABLE output is below:
CREATE TABLE `TableA` (
`TableAKY` INT(11) NOT NULL AUTO_INCREMENT,
`PERSONKY` INT(11) NOT NULL,
`ID` INT(11) NULL DEFAULT NULL,
`PackageName` VARCHAR(150) NOT NULL COLLATE 'utf8_bin',
`LOCKEDKY` BIGINT(20) NULL DEFAULT NULL,
`LASTACTIVITYDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEUSER` VARCHAR(32) NOT NULL DEFAULT 'SYSTEM' COLLATE 'utf8_bin',
`VERSIONSTAMP` SMALLINT(6) NOT NULL DEFAULT '1',
`USERKY` INT(11) NULL DEFAULT '0',
PRIMARY KEY (`TableAKY`),
INDEX `TableA_GI1` (`LASTACTIVITYDTTM`),
INDEX `TableA_GI2` (`PERSONKY`),
INDEX `TableA_GI4` (`LOCKEDKY`),
INDEX `TableA_GI3` (`ID`, `LASTACTIVITYDTTM`),
CONSTRAINT `TableA_FK1` FOREIGN KEY (`PERSONKY`) REFERENCES `corperson` (`PERSONKY`),
CONSTRAINT `TableA_CK9` CHECK (`VERSIONSTAMP` >= 0)
) ENGINE=InnoDB AUTO_INCREMENT=495189 DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC;
Scenario 1 ----------- Deleting different records from the same table in different sessions causes a deadlock
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: Transactions deadlock detected, dumping detailed information.
2019-09-05 11:02:06 140065601476352 [Note] InnoDB:
*** (1) TRANSACTION:
TRANSACTION 3665046721, ACTIVE 1 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 5 lock struct(s), heap size 1136, 4 row lock(s), undo log entries 1
MySQL thread id 7139751, OS thread handle 140065636890368, query id 16445911731 <IP of Application 1> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=3
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 4 n bits 72 index TableA_GI1 of table `DatabaseA`.`TableA` trx id 3665046721 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 2; compact format; info bits 32
0: len 4; hex 5d70818e; asc ]p ;;
1: len 4; hex 80125384; asc S ;;
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** (2) TRANSACTION:
TRANSACTION 3665046667, ACTIVE 1 sec starting index read
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 7126671, OS thread handle 140065601476352, query id 16445907427 <IP of Application 2> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=229753
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 26908 page no 4 n bits 72 index TableA_GI1 of table `DatabaseA`.`TableA` trx id 3665046667 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 2; compact format; info bits 32
0: len 4; hex 5d70818e; asc ]p ;;
1: len 4; hex 80125384; asc S ;;
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 4 n bits 72 index TableA_GI1 of table `DatabaseA`.`TableA` trx id 3665046667 lock_mode X locks rec but not gap waiting
Record lock, heap no 3 PHYSICAL RECORD: n_fields 2; compact format; info bits 32
0: len 4; hex 5d7081a9; asc ]p ;;
1: len 4; hex 80125385; asc S ;;
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** WE ROLL BACK TRANSACTION (2)
Scenario 2 ----------- Deleting the same records from the same table in different sessions causes a deadlock
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: Transactions deadlock detected, dumping detailed information.
2019-09-06 13:00:11 140064167008000 [Note] InnoDB:
*** (1) TRANSACTION:
TRANSACTION 3671781064, ACTIVE 1 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 5 lock struct(s), heap size 1136, 4 row lock(s), undo log entries 1
MySQL thread id 7298913, OS thread handle 140064152139520, query id 16923847059 <IP of Application 2> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=1155003
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 6 n bits 96 index TableA_GI3 of table `sparkdb`.`TableA` trx id 3671781064 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 4; hex 80119fbb; asc ;;
1: len 4; hex 5d71eed4; asc ]q ;;
2: len 4; hex 8012590a; asc Y ;;
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (2) TRANSACTION:
TRANSACTION 3671780157, ACTIVE 15 sec starting index read
mysql tables in use 1, locked 1
549 lock struct(s), heap size 73936, 66 row lock(s), undo log entries 48
MySQL thread id 7277612, OS thread handle 140064167008000, query id 16923851347 <IP of Application 2> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=1155003
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 26908 page no 6 n bits 96 index TableA_GI3 of table `sparkdb`.`TableA` trx id 3671780157 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 4; hex 80119fbb; asc ;;
1: len 4; hex 5d71eed4; asc ]q ;;
2: len 4; hex 8012590a; asc Y ;;
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 6 n bits 96 index TableA_GI3 of table `sparkdb`.`TableA` trx id 3671780157 lock_mode X locks rec but not gap waiting
Record lock, heap no 21 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 4; hex 80119fbb; asc ;;
1: len 4; hex 5d71ed15; asc ]q ;;
2: len 4; hex 801258ff; asc X ;;
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** WE ROLL BACK TRANSACTION (1)
Am I missing something to consider in resolving this deadlock? Please help. Thanks in advance.
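(A sketch of one commonly suggested mitigation, offered as an assumption rather than a verified fix: make both sessions remove rows in a consistent order, e.g. by primary key, so the two deletes cannot acquire their row locks in conflicting orders. Deadlock victims may still need to be retried.)
DELETE FROM TableA
WHERE current_timestamp >= lastActivityDtTm AND ID = 3
ORDER BY TableAKY;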

Related

How can you add multiple rows in SQLcl?

I am trying to add multiple rows to my table. I tried to follow some of the online solutions but I keep getting ORA-00933: SQL command not properly ended.
How do I add multiple rows at once?
insert into driver_detail values(1003,'sajuman','77f8s0990',1),
(1004,'babu ram coi','2g64s8877',8);
INSERT ALL is one way to go.
SQL> create table driver_detail (id integer, text1 varchar2(20), text2 varchar2(20), some_num integer);
Table DRIVER_DETAIL created.
SQL> insert all
2 into driver_detail (id, text1, text2, some_num) values (1003, 'sajuman', '77f8s0090', 1)
3 into driver_detail (id, text1, text2, some_num) values (1004, 'babu ram coi', '2g64s887', 8)
4* select * from dual;
2 rows inserted.
SQL> commit;
Commit complete.
SQL> select * from driver_detail;
ID TEXT1 TEXT2 SOME_NUM
_______ _______________ ____________ ___________
1003 sajuman 77f8s0090 1
1004 babu ram coi 2g64s887 8
But SQLcl is a modern CLI for the Oracle Database; surely there might be a better way?
Yes.
Put your rows into a CSV.
Use the LOAD command.
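The CSV itself is not shown in the session below; given the header-row requirement and the rows that end up in the table, load_example.csv would have looked roughly like this (a reconstruction, not the original file):
ID,TEXT1,TEXT2,SOME_NUM
1003,'sajuman','77f8s0990',1
1004,'babu ram coi','2g64s8877',8
1,'hello','there',2
2,'nice to','meet you',3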
SQL> delete from driver_detail;
0 rows deleted.
SQL> help load
LOAD
-----
Loads a comma separated value (csv) file into a table.
The first row of the file must be a header row. The columns in the header row must match the columns defined on the table.
The columns must be delimited by a comma and may optionally be enclosed in double quotes.
Lines can be terminated with standard line terminators for windows, unix or mac.
File must be encoded UTF8.
The load is processed with 50 rows per batch.
If AUTOCOMMIT is set in SQLCL, a commit is done every 10 batches.
The load is terminated if more than 50 errors are found.
LOAD [schema.]table_name[#db_link] file_name
SQL> load hr.driver_detail /Users/thatjeffsmith/load_example.csv
--Number of rows processed: 4
--Number of rows in error: 0
0 - SUCCESS: Load processed without errors
SQL> select * from driver_detail;
ID TEXT1 TEXT2 SOME_NUM
_______ _________________ ______________ ___________
1003 'sajuman' '77f8s0990' 1
1004 'babu ram coi' '2g64s8877' 8
1 'hello' 'there' 2
2 'nice to' 'meet you' 3
SQL>

MariaDB 10.2.10 Slave stops on duplicate key error

I've been setting up two Debian Stretch based MariaDB servers in master-master replication.
The slave's replication config section is:
server-id = 2226
auto_increment_increment = 1
auto_increment_offset = 1
log_bin = /var/tmp/mysql_binlog/mysql-bin.log
log_bin_index = /var/tmp/mysql_binlog/mysql-bin.log.index
expire_logs_days = 3
max_binlog_size = 100M
relay_log = /var/tmp/mysql_binlog/slave-relay.log
relay_log_index = /var/tmp/mysql_binlog/slave-relay.log.index
log_slave_updates = 1
replicate_annotate_row_events = 0
log_bin_trust_function_creators = 1
I am experiencing the following error:
show slave status\G;
Relay_Log_File: slave-relay.032025
Relay_Log_Pos: 14887746
Relay_Master_Log_File: mysql-bin.001119
Slave_IO_Running: Yes
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '71899-single' for key 'PRIMARY'' on query. Default database: 'mydb'. Query: 'INSERT INTO document_reference
(document_reference_document_id, document_reference_type, document_reference_value)
VALUES (71899, "single", 0)'
but:
MariaDB [(none)]> select * from mydb.document_reference WHERE document_reference_document_id=71899;
Empty set (0.00 sec)
I've looked through the relay log file - there is only one such insert statement.
Does anyone have an idea what causes the duplicate entry error on the slave?
Added information:
Master settings:
auto_increment_increment | 1
auto_increment_offset | 1
binlog_format | MIXED
Table definition:
CREATE TABLE "document_reference" (
"document_reference_document_id" int(10) unsigned NOT NULL,
"document_reference_type" enum('single,'multi') COLLATE utf8_unicode_ci
NOT NULL DEFAULT 'single',
"document_reference_value" int(11) NOT NULL,
PRIMARY KEY ("document_reference_document_id","document_reference_type"))
When running a "dual Master" setup, set
auto_increment_increment = 2 -- on both
auto_increment_offset = 1 on one Master, = 2 on the other
Also, server_id must be different on the two Masters (and on any Slaves). If they are not different, replication goes round and round between the Masters.
Otherwise, inserts on separate Masters can generate the same AUTO_INCREMENT values.
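A minimal sketch of the two server sections under that recommendation (the first server-id is an assumed example; only 2226 appears in the question):
# Master 1
[mysqld]
server-id                = 2225
auto_increment_increment = 2
auto_increment_offset    = 1
# Master 2
[mysqld]
server-id                = 2226
auto_increment_increment = 2
auto_increment_offset    = 2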
However, that does not answer the problem (there is no auto-increment column involved). The only possible explanation (I think) is that two of these rows, (71899, "single", ...), were inserted. The absence of any evidence of that in the SELECT may have to do with DELETEs, deadlocks, etc. (So I do not have a definite answer.)

Oracle SQL Developer can't export CLOB data?

I have a table which contains a CLOB field holding some data. When I export the table, I don't get the data of the CLOB field.
CREATE TABLE "ADMIN"."TABLE"
( "ID" NUMBER(10,0),
"DATAS" CLOB
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM"
LOB ("DATAS") STORE AS BASICFILE (
TABLESPACE "SYSTEM" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
NOCACHE LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;
REM INSERTING into ADMIN.TABLE
SET DEFINE OFF;
Insert into ADMIN.TABLE (ID) values (1);
This is the exported SQL script. In its last line, 'Insert into ADMIN.TABLE (ID) values (1);', there is no 'DATAS' field. It's a CLOB field.
You'll have to do this.
SELECT /*insert*/* FROM ADMIN.TABLE;
Click run script, not run statement. This will produce the insert statements you are looking for.
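For reference, the generated script should then also carry the DATAS column, along these lines (a hand-written illustration with placeholder contents, not the tool's exact output, which varies with the SQL Developer version and the CLOB length):
Insert into ADMIN.TABLE (ID,DATAS) values (1,'...clob contents here...');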

SQLite: Summary data of a query result

I have the following query that provides me with the 10 most recent records in the database:
SELECT
dpDate AS Date,
dpOpen AS Open,
dpHigh AS High,
dpLow AS Low,
dpClose AS Close
FROM DailyPrices
WHERE dpTicker = 'DL.AS'
ORDER BY dpDate DESC
LIMIT 10;
The result of this query is as follows:
bash-3.2$ sqlite3 myData < Queries/dailyprice.sql
Date Open High Low Close
---------- ---------- ---------- ---------- ----------
2016-06-13 4.0 4.009 3.885 3.933
2016-06-10 4.23 4.236 4.05 4.08
2016-06-09 4.375 4.43 4.221 4.231
2016-06-08 4.406 4.474 4.322 4.35
2016-06-07 4.377 4.466 4.369 4.384
2016-06-06 4.327 4.437 4.321 4.353
2016-06-03 4.34 4.428 4.316 4.335
2016-06-02 4.434 4.51 4.403 4.446
2016-06-01 4.51 4.512 4.317 4.399
2016-05-31 4.613 4.67 4.502 4.526
bash-3.2$
Whilst I need to plot the extracted data, I also need to obtain the following summary data of the dataset:
Minimum date ==> 2016-05-31
Maximum date ==> 2016-06-13
Open value at minimum date ==> 4.613
Close value at maximum date ==> 3.933
Maximum of High column ==> 4.67
Minimum of Low column ==> 3.885
How can I, as a newbie, approach this issue? Can this be done in one query?
Thanks for pointing me in the right direction.
Best regards,
GAM
The desired output can be achieved with:
aggregate functions on a convenient common table expression, which uses the OP's query verbatim, and
the OP's ordering, with LIMIT 1 applied to the common table expression, for getting the Open and Close values at the minimum and maximum dates among the ten days.
Query:
WITH Ten(Date,Open,High,Low,Close) AS
(SELECT dpDate AS Date,
dpOpen AS Open,
dpHigh AS High,
dpLow AS Low,
dpClose AS Close
FROM DailyPrices
WHERE dpTicker = 'DL.AS'
ORDER BY dpDate DESC LIMIT 10)
SELECT min(Date) AS mindate,
max(Date) AS maxdate,
(SELECT Open FROM Ten ORDER BY Date ASC LIMIT 1) AS Open,
max(High) AS High,
min(Low) AS Low,
(SELECT Close FROM Ten ORDER BY Date DESC LIMIT 1) AS Close
FROM Ten;
Output (.headers on and .mode column):
mindate maxdate Open High Low Close
---------- ---------- ---------- ---------- ---------- ----------
2016-05-31 2016-06-13 4.613 4.67 3.885 3.933
Note:
I think the order of values in the OP's last comment does not match the order of columns in the preceding comment by the OP.
I chose the order from the preceding comment.
The order in the last comment seems to me to be "mindate, maxdate, Open, Close, High, Low".
Adapting my proposed query to that order would be simple, as sketched below.
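Only the SELECT list of the final query changes; the CTE stays the same:
SELECT min(Date) AS mindate,
max(Date) AS maxdate,
(SELECT Open FROM Ten ORDER BY Date ASC LIMIT 1) AS Open,
(SELECT Close FROM Ten ORDER BY Date DESC LIMIT 1) AS Close,
max(High) AS High,
min(Low) AS Low
FROM Ten;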
Using SQLite 3.18.0 2017-03-28 18:48:43
Here is the .dump of my toy database, i.e. my MCVE, in case something is unclear. (I did not enter the many decimal places, it is probably a float rounding thing.)
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE dailyPrices (dpDate date, dpOpen float, dpHigh float, dpLow float, dpClose float, dpTicker varchar(10));
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-13',4.0,4.009000000000000341,3.8849999999999997868,3.9329999999999998294,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-10',4.2300000000000004263,4.2359999999999997655,4.0499999999999998223,4.080000000000000071,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-09',4.375,4.4299999999999997157,4.2210000000000000852,4.2309999999999998721,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-08',4.4059999999999996944,4.4740000000000001989,4.3220000000000000639,4.3499999999999996447,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-07',4.3769999999999997797,4.4660000000000001918,4.3689999999999997726,4.384000000000000341,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-06',4.3269999999999999573,4.4370000000000002771,4.3209999999999997299,4.3529999999999997584,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-03',4.3399999999999998578,4.4370000000000002771,4.3209999999999997299,4.3529999999999997584,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-02',4.4340000000000001634,4.5099999999999997868,4.4029999999999995807,4.4459999999999997299,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-06-01',4.5099999999999997868,4.5119999999999995665,4.3170000000000001705,4.3990000000000000213,'DL.AS');
INSERT INTO dailyPrices(dpDate,dpOpen,dpHigh,dpLow,dpClose,dpTicker) VALUES('2016-05-31',4.6130000000000004334,4.6699999999999999289,4.5019999999999997797,4.525999999999999801,'DL.AS');
COMMIT;

Creating a stored procedure is giving an error

I have created a stored procedure
create or replace
PROCEDURE "USP_USER_ADD" (
USERNAME IN VARCHAR2 ,
P_PASSWORD IN VARCHAR2 ,
SALT IN BLOB ,
EMAIL IN VARCHAR2 ,
FIRST_NAME IN VARCHAR2 ,
LAST_NAME IN VARCHAR2 ,
ip_address IN VARCHAR2 ,
EMAIL_VERIFIED IN NUMBER ,
ACTIVE IN NUMBER ,
CREATEDBY IN VARCHAR2 ,
CREATED IN DATE ,
MODIFIED IN DATE ,
MODIFIEDBY IN VARCHAR2 ,
USER_GROUP_ID IN NUMBER ,
LAST_PASSWORD_CHANGE_DATE IN DATE ,
P_failed_login_attempts IN NUMBER )
AS
BEGIN
declare
user_id_tmp number(20);
INSERT INTO users( "username" ,
"password" ,
"salt" ,
"email" ,
"first_name" ,
"last_name" ,
"email_verified" ,
"active" ,
"ip_address" ,
"created" ,
"createdby" ,
"modified" ,
"modifiedby" ,
"user_group_id" ,
"last_password_change_date" ,
"FAILED_LOGIN_ATTEMPTS"
)
VALUES
(
username ,
p_password ,
salt ,
email ,
first_name ,
last_name ,
email_verified ,
active ,
ip_address ,
created ,
createdby ,
modified ,
modifiedby ,
user_group_id ,
last_password_change_date ,
p_failed_login_attempts
);
SELECT MAX(id) INTO user_id_tmp FROM users ;
INSERT INTO user_passwords
(
"user_id" ,
"password" ,
"created"
)
VALUES
(
user_id_tmp,
p_password,
created
);
END USP_USER_ADD;
It's giving me two errors
1: Error(26,5): PLS-00103: Encountered the symbol "INSERT" when expecting one of the following: begin function package pragma procedure subtype type use form current cursor The symbol "begin" was substituted for "INSERT" to continue.
2: Error(78,19): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: begin case declare end exception exit for goto if loop mod null pragma raise return select update while with << close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe
These are my tables
--------------------------------------------------------
-- DDL for Table USER_PASSWORDS
--------------------------------------------------------
CREATE TABLE "NEWS1.0"."USER_PASSWORDS"
( "ID" NUMBER(11,0),
"USER_ID" NUMBER(11,0),
"PASSWORD" VARCHAR2(255 BYTE),
"SALT" VARCHAR2(255 BYTE),
"IP" VARCHAR2(15 BYTE),
"CREATEDBY" VARCHAR2(255 BYTE),
"CREATED" DATE,
"MODIFIED" DATE,
"MODIFIEDBY" VARCHAR2(255 BYTE)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- DDL for Index USERS_PK
--------------------------------------------------------
CREATE UNIQUE INDEX "NEWS1.0"."USERS_PK" ON "NEWS1.0"."USER_PASSWORDS"
("ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- Constraints for Table USER_PASSWORDS
--------------------------------------------------------
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" MODIFY ("ID" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" MODIFY ("USER_ID" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" MODIFY ("PASSWORD" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" ADD CONSTRAINT "USERS_PK" PRIMARY
KEY ("ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE;
--------------------------------------------------------
-- DDL for Trigger BI_USER_PASSWORDS_ID
--------------------------------------------------------
CREATE OR REPLACE TRIGGER "NEWS1.0"."BI_USER_PASSWORDS_ID"
before insert on "USER_PASSWORDS"
for each row
begin
if inserting then
if :NEW."ID" is null then
select USER_PASSWORDS_SEQ.nextval into :NEW."ID" from dual;
end if;
end if;
end;
/
ALTER TRIGGER "NEWS1.0"."BI_USER_PASSWORDS_ID" ENABLE;
--------------------------------------------------------
-- DDL for Table USERS
--------------------------------------------------------
CREATE TABLE "NEWS1.0"."USERS"
( "ID" NUMBER(*,0),
"USERNAME" VARCHAR2(100 BYTE),
"PASSWORD" VARCHAR2(255 BYTE),
"SALT" BLOB,
"EMAIL" VARCHAR2(100 BYTE),
"FIRST_NAME" VARCHAR2(100 BYTE),
"LAST_NAME" VARCHAR2(100 BYTE),
"EMAIL_VERIFIED" NUMBER(*,0) DEFAULT 1,
"ACTIVE" NUMBER(*,0) DEFAULT 1,
"IP_ADDRESS" VARCHAR2(50 BYTE),
"USER_GROUP_ID" NUMBER(*,0),
"LAST_PASSWORD_CHANGE_DATE" DATE,
"FAILED_LOGIN_ATTEMPTS" NUMBER(*,0),
"CREATED" DATE,
"CREATEDBY" VARCHAR2(255 BYTE),
"MODIFIED" DATE,
"MODIFIEDBY" VARCHAR2(255 BYTE)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"
LOB ("SALT") STORE AS (
TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
NOCACHE LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)) ;
--------------------------------------------------------
-- DDL for Index USERS_UK2
--------------------------------------------------------
CREATE UNIQUE INDEX "NEWS1.0"."USERS_UK2" ON "NEWS1.0"."USERS" ("EMAIL")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- DDL for Index USERS_UK1
--------------------------------------------------------
CREATE UNIQUE INDEX "NEWS1.0"."USERS_UK1" ON "NEWS1.0"."USERS" ("USERNAME")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- Constraints for Table USERS
--------------------------------------------------------
ALTER TABLE "NEWS1.0"."USERS" MODIFY ("ID" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USERS" MODIFY ("USERNAME" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USERS" MODIFY ("PASSWORD" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USERS" ADD CONSTRAINT "USERS_UK1" UNIQUE
("USERNAME")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE;
ALTER TABLE "NEWS1.0"."USERS" ADD CONSTRAINT "USERS_UK2" UNIQUE ("EMAIL")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE;
The problem is in this section of code:
P_failed_login_attempts IN NUMBER )
AS
BEGIN
declare
user_id_tmp number(20);
INSERT INTO users( "username" ,
-- rest omitted
Remove the declare and move the declaration of user_id_tmp between the AS and the BEGIN:
P_failed_login_attempts IN NUMBER )
AS
user_id_tmp number(20);
BEGIN
INSERT INTO users( "username" ,
-- rest omitted
In Oracle PL/SQL, a block has the form
DECLARE
-- some variable declarations
BEGIN
-- some code
EXCEPTION
-- some exception-handling
END;
The variable declarations and exception-handling sections are optional. If there are no variable declarations, you can remove the keyword DECLARE (it's not an error if you leave it there). However, if the block has no exception handling, the EXCEPTION keyword must be removed.
When declaring a procedure or function, the CREATE OR REPLACE PROCEDURE ... AS part takes the place of a DECLARE.
The -- some code section of a PL/SQL block can contain further blocks inside it. In fact, this is what the PL/SQL compiler thought you wanted when it saw your declare keyword. It thought you were doing something like the following:
CREATE OR REPLACE PROCEDURE USP_USER_ADD (
-- parameters omitted
)
AS
BEGIN
DECLARE
-- some variable declarations
BEGIN
-- some code
END;
END USP_USER_ADD;
However, you had an INSERT after the declare. The compiler wasn't expecting that, and that's why you got an error. You also got an error about end-of-file, and that was because the PL/SQL compiler was expecting two ENDs but got to the end of your stored procedure before it found the second one.
I think that you should remove DECLARE.
Oracle 11g Create Procedure Documentation
Previous answers have cleared up the compile errors, so I won't address those. But you have a potential bug in place: the id inserted into the "user_passwords" table may not be the same as the one in the "users" table. In a multi-user environment it is possible for another session to insert into "users" and commit after you insert into it but before you do the SELECT MAX. That would hopefully raise DUP_VAL_ON_INDEX, but it would be extremely difficult to track down.
You can remove this possibility, and have slightly less/cleaner code (while still getting the id assigned by the trigger), by using the RETURNING option on the insert itself:
insert into users ( ... ) values ( ... ) returning id into user_id_tmp;
and delete the select max(id) ... statement.
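Putting the two suggestions together, here is a minimal sketch of the procedure (a sketch, not the definitive version: parameters are prefixed with p_ to avoid clashes with column names, unquoted column names are used because the posted DDL defines ordinary upper-case columns, and it assumes USERS.ID is populated by a trigger or sequence just as USER_PASSWORDS.ID is):
create or replace PROCEDURE usp_user_add (
  p_username    IN VARCHAR2,
  p_password    IN VARCHAR2,
  p_salt        IN BLOB,
  p_email       IN VARCHAR2,
  p_first_name  IN VARCHAR2,
  p_last_name   IN VARCHAR2,
  p_ip_address  IN VARCHAR2,
  p_email_verified IN NUMBER,
  p_active      IN NUMBER,
  p_createdby   IN VARCHAR2,
  p_created     IN DATE,
  p_modified    IN DATE,
  p_modifiedby  IN VARCHAR2,
  p_user_group_id IN NUMBER,
  p_last_password_change_date IN DATE,
  p_failed_login_attempts IN NUMBER )
AS
  user_id_tmp NUMBER(20);  -- declared between AS and BEGIN, no DECLARE keyword
BEGIN
  INSERT INTO users
    ( username, password, salt, email, first_name, last_name,
      email_verified, active, ip_address, created, createdby,
      modified, modifiedby, user_group_id, last_password_change_date,
      failed_login_attempts )
  VALUES
    ( p_username, p_password, p_salt, p_email, p_first_name, p_last_name,
      p_email_verified, p_active, p_ip_address, p_created, p_createdby,
      p_modified, p_modifiedby, p_user_group_id, p_last_password_change_date,
      p_failed_login_attempts )
  RETURNING id INTO user_id_tmp;  -- replaces the separate SELECT MAX(id)
  INSERT INTO user_passwords ( user_id, password, created )
  VALUES ( user_id_tmp, p_password, p_created );
END usp_user_add;
/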
