Oracle SQL Developer can't export CLOB data? - oracle11g

I have a table that contains a CLOB field with some data. When I export it, I don't get the data from the CLOB field.
CREATE TABLE "ADMIN"."TABLE"
( "ID" NUMBER(10,0),
"DATAS" CLOB
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM"
LOB ("DATAS") STORE AS BASICFILE (
TABLESPACE "SYSTEM" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION
NOCACHE LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ;
REM INSERTING into ADMIN.TABLE
SET DEFINE OFF;
Insert into ADMIN.TABLE (ID) values (1);
This is the exported SQL. Note the last line, 'Insert into ADMIN.TABLE (ID) values (1);': there is no 'DATAS' field here, and it is the CLOB column.

You'll have to do this:
SELECT /*insert*/* FROM ADMIN.TABLE;
Click Run Script, not Run Statement. This will produce the INSERT statements you are looking for.
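Run as a script, the output should now include the CLOB column as well, roughly along these lines (the value shown is purely illustrative, and SQL Developer may split long CLOBs into concatenated chunks):
REM INSERTING into ADMIN.TABLE
SET DEFINE OFF;
Insert into ADMIN.TABLE (ID,DATAS) values (1,TO_CLOB('...contents of DATAS...'));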

Related

Deleting same records from same table causes deadlock

We have a table that stores the login information of a company (a company will have multiple users), and we delete all records of that company whenever ANY user of the company logs in.
We are getting deadlocks in this case, and we cannot make changes on the application side.
There is no gap lock involved, and both DELETEs use the proper index. Since they do an index scan, the rows should be read in a consistent order; if that is the case, the second statement should simply wait to acquire the lock, yet it deadlocks instead.
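(For reference, the index choice for each DELETE can be confirmed with EXPLAIN, which works on DELETE statements in MariaDB 10.x; the ID value below is one of those from the deadlock logs:)
EXPLAIN DELETE FROM TableA WHERE current_timestamp >= lastActivityDtTm AND ID = 3;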
MariaDB Version : 10.2.14
Isolation : READ-COMMITTED
Below the table DDL
CREATE TABLE `TableA` (
`TableAKY` INT(11) NOT NULL AUTO_INCREMENT,
`PERSONKY` INT(11) NOT NULL,
`ID` INT(11) NULL DEFAULT NULL,
`PackageName` VARCHAR(150) NOT NULL COLLATE 'utf8_bin',
`LOCKEDKY` BIGINT(20) NULL DEFAULT NULL,
`LASTACTIVITYDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEUSER` VARCHAR(32) NOT NULL DEFAULT 'SYSTEM' COLLATE 'utf8_bin',
`VERSIONSTAMP` SMALLINT(6) NOT NULL DEFAULT '1',
`USERKY` INT(11) NULL DEFAULT '0',
PRIMARY KEY (`TableAKY`),
INDEX `TableA_GI1` (`LASTACTIVITYDTTM`),
INDEX `TableA_GI2` (`PERSONKY`),
INDEX `TableA_GI4` (`LOCKEDKY`),
INDEX `TableA_GI3` (`ID`, `LASTACTIVITYDTTM`),
CONSTRAINT `TableA_FK1` FOREIGN KEY (`PERSONKY`) REFERENCES `corperson` (`PERSONKY`)
)
COLLATE='utf8_bin'
ENGINE=InnoDB;
The SHOW CREATE TABLE output is below:
CREATE TABLE `TableA` (
`TableAKY` INT(11) NOT NULL AUTO_INCREMENT,
`PERSONKY` INT(11) NOT NULL,
`ID` INT(11) NULL DEFAULT NULL,
`PackageName` VARCHAR(150) NOT NULL COLLATE 'utf8_bin',
`LOCKEDKY` BIGINT(20) NULL DEFAULT NULL,
`LASTACTIVITYDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEDTTM` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
`UPDATEUSER` VARCHAR(32) NOT NULL DEFAULT 'SYSTEM' COLLATE 'utf8_bin',
`VERSIONSTAMP` SMALLINT(6) NOT NULL DEFAULT '1',
`USERKY` INT(11) NULL DEFAULT '0',
PRIMARY KEY (`TableAKY`),
INDEX `TableA_GI1` (`LASTACTIVITYDTTM`),
INDEX `TableA_GI2` (`PERSONKY`),
INDEX `TableA_GI4` (`LOCKEDKY`),
INDEX `TableA_GI3` (`ID`, `LASTACTIVITYDTTM`),
CONSTRAINT `TableA_FK1` FOREIGN KEY (`PERSONKY`) REFERENCES `corperson` (`PERSONKY`),
CONSTRAINT `TableA_CK9` CHECK (`VERSIONSTAMP` >= 0)
) ENGINE=InnoDB AUTO_INCREMENT=495189 DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC;
Scenario 1: deleting different records from the same table, in different sessions, causes a deadlock
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: Transactions deadlock detected, dumping detailed information.
2019-09-05 11:02:06 140065601476352 [Note] InnoDB:
*** (1) TRANSACTION:
TRANSACTION 3665046721, ACTIVE 1 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 5 lock struct(s), heap size 1136, 4 row lock(s), undo log entries 1
MySQL thread id 7139751, OS thread handle 140065636890368, query id 16445911731 <IP of Application 1> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=3
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 4 n bits 72 index TableA_GI1 of table `DatabaseA`.`TableA` trx id 3665046721 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 2; compact format; info bits 32
0: len 4; hex 5d70818e; asc ]p ;;
1: len 4; hex 80125384; asc S ;;
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** (2) TRANSACTION:
TRANSACTION 3665046667, ACTIVE 1 sec starting index read
mysql tables in use 1, locked 1
3 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 7126671, OS thread handle 140065601476352, query id 16445907427 <IP of Application 2> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=229753
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 26908 page no 4 n bits 72 index TableA_GI1 of table `DatabaseA`.`TableA` trx id 3665046667 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 2; compact format; info bits 32
0: len 4; hex 5d70818e; asc ]p ;;
1: len 4; hex 80125384; asc S ;;
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 4 n bits 72 index TableA_GI1 of table `DatabaseA`.`TableA` trx id 3665046667 lock_mode X locks rec but not gap waiting
Record lock, heap no 3 PHYSICAL RECORD: n_fields 2; compact format; info bits 32
0: len 4; hex 5d7081a9; asc ]p ;;
1: len 4; hex 80125385; asc S ;;
2019-09-05 11:02:06 140065601476352 [Note] InnoDB: *** WE ROLL BACK TRANSACTION (2)
Scenario 2: deleting the same record from the same table, in different sessions, causes a deadlock
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: Transactions deadlock detected, dumping detailed information.
2019-09-06 13:00:11 140064167008000 [Note] InnoDB:
*** (1) TRANSACTION:
TRANSACTION 3671781064, ACTIVE 1 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 5 lock struct(s), heap size 1136, 4 row lock(s), undo log entries 1
MySQL thread id 7298913, OS thread handle 140064152139520, query id 16923847059 <IP of Application 2> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=1155003
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 6 n bits 96 index TableA_GI3 of table `sparkdb`.`TableA` trx id 3671781064 lock_mode X locks rec but not gap waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 4; hex 80119fbb; asc ;;
1: len 4; hex 5d71eed4; asc ]q ;;
2: len 4; hex 8012590a; asc Y ;;
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (2) TRANSACTION:
TRANSACTION 3671780157, ACTIVE 15 sec starting index read
mysql tables in use 1, locked 1
549 lock struct(s), heap size 73936, 66 row lock(s), undo log entries 48
MySQL thread id 7277612, OS thread handle 140064167008000, query id 16923851347 <IP of Application 2> appid Updating
delete from TableA where current_timestamp>=lastActivityDtTm and ID=1155003
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 26908 page no 6 n bits 96 index TableA_GI3 of table `sparkdb`.`TableA` trx id 3671780157 lock_mode X locks rec but not gap
Record lock, heap no 2 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 4; hex 80119fbb; asc ;;
1: len 4; hex 5d71eed4; asc ]q ;;
2: len 4; hex 8012590a; asc Y ;;
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 26908 page no 6 n bits 96 index TableA_GI3 of table `sparkdb`.`TableA` trx id 3671780157 lock_mode X locks rec but not gap waiting
Record lock, heap no 21 PHYSICAL RECORD: n_fields 3; compact format; info bits 32
0: len 4; hex 80119fbb; asc ;;
1: len 4; hex 5d71ed15; asc ]q ;;
2: len 4; hex 801258ff; asc X ;;
2019-09-06 13:00:11 140064167008000 [Note] InnoDB: *** WE ROLL BACK TRANSACTION (1)
Am I missing something in trying to resolve this deadlock? Please help. Thanks in advance.

MariaDB limit value of column

I want to limit the value of the column limited_column so that 0 <= limited_column <= 100 on the SQL side, on MariaDB.
I've tried creating triggers on INSERT and UPDATE like this:
DROP TABLE IF EXISTS `users`;
CREATE TABLE `users` (
`username` varchar(25) NOT NULL,
`user_id` int(100) NOT NULL,
`limited_column` bigint(20) unsigned NOT NULL DEFAULT '0',
[...]
PRIMARY KEY (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
DELIMITER $$
CREATE TRIGGER `limited_column_check_on_insert_trigger` BEFORE INSERT ON `users` FOR EACH ROW
BEGIN
DECLARE dummy,baddataflag INT;
SET baddataflag = 0;
IF NEW.limited_column > 100 THEN
SET baddataflag = 1;
END IF;
IF NEW.limited_column < 0 THEN
SET baddataflag = 1;
END IF;
IF baddataflag = 1 THEN
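-- Selecting a multi-row result into a scalar variable deliberately raises an error, aborting the INSERT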
SELECT CONCAT('Cannot INSERT new value because limited_column is > 100, value was ',NEW.limited_column)
INTO dummy FROM information_schema.tables;
END IF;
END; $$
CREATE TRIGGER `limited_column_check_on_update_trigger` BEFORE UPDATE ON `users` FOR EACH ROW
BEGIN
DECLARE dummy,baddataflag INT;
SET baddataflag = 0;
IF NEW.limited_column > 100 THEN
SET baddataflag = 1;
END IF;
IF NEW.limited_column < 0 THEN
SET baddataflag = 1;
END IF;
IF baddataflag = 1 THEN
SELECT CONCAT('Cannot UPDATE new value because limited_column is > 100, value was ',NEW.limited_column)
INTO dummy FROM information_schema.tables;
END IF;
END; $$
DELIMITER ;
This is what I get if I try inserting a new user with limited_column > 100 (inserting with limited_column <= 100 works):
MariaDB [NameOfADatabase]> INSERT INTO users (username,user_id,limited_column,[...]) VALUES ('testestes',1,1000,[...]);
ERROR 1172 (42000): Result consisted of more than one row
MariaDB [NameOfADatabase]> INSERT INTO users (username,user_id,limited_column,[...]) VALUES ('testestes',2,100,[...]);
Query OK, 1 row affected (0.02 sec)
Any ideas on what I can do to make this more graceful?
This is running on 10.1.38-MariaDB-0ubuntu0.18.04.2 on Ubuntu 18.04.
Upgrading to 10.3.15 was the best solution for this, as I can use the CHECK option. Thanks to #RickJames for the info about the update.
Here's the Schema I'm using that works:
DROP TABLE IF EXISTS `users`;
CREATE TABLE `users` (
`username` varchar(25) NOT NULL,
`user_id` int(100) NOT NULL,
`limited_column` bigint(20) unsigned NOT NULL DEFAULT '0',
[...]
PRIMARY KEY (`user_id`),
CHECK (limited_column<=100)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
Here's the output I get:
MariaDB [NameOfADatabase]> INSERT INTO users (username,user_id,limited_column,[...]) VALUES ('test1',1,100,[...]);
Query OK, 1 row affected (0.016 sec)
MariaDB [NameOfADatabase]> INSERT INTO users (username,user_id,limited_column,[...]) VALUES ('test2',2,101,[...]);
ERROR 4025 (23000): CONSTRAINT `CONSTRAINT_1` failed for `NameOfADatabase`.`users`
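For completeness, on the original 10.1 server a trigger can also raise a clean, user-defined error with SIGNAL instead of the multi-row SELECT trick; a minimal sketch for the insert case (the update trigger would be analogous):
DELIMITER $$
CREATE TRIGGER `limited_column_check_on_insert_trigger` BEFORE INSERT ON `users` FOR EACH ROW
BEGIN
IF NEW.limited_column > 100 THEN
-- raises ERROR 1644 with a custom message and aborts the INSERT
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = 'limited_column must be between 0 and 100';
END IF;
END $$
DELIMITER ;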

MariaDB 10.2.10 Slave stops on duplicate key error

I've been setting up two Debian Stretch based MariaDB servers in master-master replication.
The slave's replication config section is:
server-id = 2226
auto_increment_increment = 1
auto_increment_offset = 1
log_bin = /var/tmp/mysql_binlog/mysql-bin.log
log_bin_index = /var/tmp/mysql_binlog/mysql-bin.log.index
expire_logs_days = 3
max_binlog_size = 100M
relay_log = /var/tmp/mysql_binlog/slave-relay.log
relay_log_index = /var/tmp/mysql_binlog/slave-relay.log.index
log_slave_updates = 1
replicate_annotate_row_events = 0
log_bin_trust_function_creators = 1
I am experiencing the following error:
show slave status\G;
Relay_Log_File: slave-relay.032025
Relay_Log_Pos: 14887746
Relay_Master_Log_File: mysql-bin.001119
Slave_IO_Running: Yes
Last_Errno: 1062
Last_Error: Error 'Duplicate entry '71899-single' for key 'PRIMARY'' on query. Default database: 'mydb'. Query: 'INSERT INTO document_reference
(document_reference_document_id, document_reference_type, document_reference_value)
VALUES (71899, "single", 0)'
but:
MariaDB [(none)]> select * from mydb.document_reference WHERE document_reference_document_id=71899;
Empty set (0.00 sec)
I've looked at the relay log file; there is only one insert statement.
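(For reference, the relay log can also be inspected from SQL rather than from the file on disk, e.g.:)
SHOW RELAYLOG EVENTS IN 'slave-relay.032025' LIMIT 20;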
Does anyone have an idea what causes the duplicate entry error on the slave?
Added information:
Master settings:
auto_increment_increment | 1
auto_increment_offset | 1
binlog_format | MIXED
Table definition:
CREATE TABLE "document_reference" (
"document_reference_document_id" int(10) unsigned NOT NULL,
"document_reference_type" enum('single,'multi') COLLATE utf8_unicode_ci
NOT NULL DEFAULT 'single',
"document_reference_value" int(11) NOT NULL,
PRIMARY KEY ("document_reference_document_id","document_reference_type"))
When having "dual Master", set
auto_increment_increment = 2 -- on both
auto_increment_offset = 1 on one Master, = 2 on the other
Also, server_id must be different on the two Masters (and on any Slaves). If they are not different, replication goes round and round between the Masters.
Otherwise, inserts on separate Masters can generate the same AUTO_INCREMENT values.
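A minimal sketch of those settings applied from SQL (the same values should also go into each server's config so they survive a restart):
-- on Master 1
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;
-- on Master 2
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;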
However, that does not answer the problem (there is no AUTO_INCREMENT involved here). The only possible answer (I think) is that two of these rows, (71899, "single", ...), were inserted. The absence of evidence of such in the SELECT may have to do with DELETEs, deadlocks, etc. (So, I do not have a definite answer.)

Can't insert specific timestamp to table

I cannot insert the timestamp value '2015-01-07 00:00:00' into my database.
The issue occurs for any time between 2015-01-07 00:00:00 and 2015-01-07 01:00:00.
It happens ONLY on the 7th of January 2015.
SHOW VARIABLES LIKE "%version%";
protocol_version 10
version 5.1.50-community
version_comment MySQL Community Server (GPL)
version_compile_machine ia32
version_compile_os Win32
==
CREATE TABLE `eventtest3` (
`event_dt` TIMESTAMP
) ENGINE=INNODB DEFAULT CHARSET=utf8
1 queries executed, 1 success, 0 errors, 0 warnings
0 row(s) affected
==
And my insert query is:
1 queries executed, 0 success, 1 errors, 0 warnings
Query: INSERT INTO eventtest3 (event_dt) VALUES('2015-01-07 00:00:00')
Error Code: 1292
Incorrect datetime value: '2015-01-07 00:00:00' for column 'event_dt' at row 1
==
Working queries:
Query: INSERT INTO eventtest3 (event_dt) VALUES('2017-01-07 00:00:00')
1 queries executed, 1 success, 0 errors, 0 warnings
1 row(s) affected
Query: INSERT INTO eventtest3 (event_dt) VALUES('2014-01-07 00:00:00')
1 queries executed, 1 success, 0 errors, 0 warnings
1 row(s) affected
Query: INSERT INTO eventtest3 (event_dt) VALUES('2016-01-07 00:00:00')
1 queries executed, 1 success, 0 errors, 0 warnings
1 row(s) affected
Query: INSERT INTO eventtest3 (event_dt) VALUES('2015-01-31 05:07:09')
1 queries executed, 1 success, 0 errors, 0 warnings
1 row(s) affected
There is no clock shift on or around this date, so I don't think this issue is caused by time zone settings.
The solution is to check whether the mysql.time_zone* tables are empty, download the time zone information from http://dev.mysql.com/downloads/timezones.html, and replace the tables while the MySQL service is stopped.
I don't really know HOW this helps with the 7th of January 2015, but it really does.
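A quick way to check whether those tables are populated (empty counts mean named time zones have never been loaded):
SELECT COUNT(*) FROM mysql.time_zone;
SELECT COUNT(*) FROM mysql.time_zone_name;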

Creating stored procedure giving error

I have created a stored procedure
create or replace
PROCEDURE "USP_USER_ADD" (
USERNAME IN VARCHAR2 ,
P_PASSWORD IN VARCHAR2 ,
SALT IN BLOB ,
EMAIL IN VARCHAR2 ,
FIRST_NAME IN VARCHAR2 ,
LAST_NAME IN VARCHAR2 ,
ip_address IN VARCHAR2 ,
EMAIL_VERIFIED IN NUMBER ,
ACTIVE IN NUMBER ,
CREATEDBY IN VARCHAR2 ,
CREATED IN DATE ,
MODIFIED IN DATE ,
MODIFIEDBY IN VARCHAR2 ,
USER_GROUP_ID IN NUMBER ,
LAST_PASSWORD_CHANGE_DATE IN DATE ,
P_failed_login_attempts IN NUMBER )
AS
BEGIN
declare
user_id_tmp number(20);
INSERT INTO users( "username" ,
"password" ,
"salt" ,
"email" ,
"first_name" ,
"last_name" ,
"email_verified" ,
"active" ,
"ip_address" ,
"created" ,
"createdby" ,
"modified" ,
"modifiedby" ,
"user_group_id" ,
"last_password_change_date" ,
"FAILED_LOGIN_ATTEMPTS"
)
VALUES
(
username ,
p_password ,
salt ,
email ,
first_name ,
last_name ,
email_verified ,
active ,
ip_address ,
created ,
createdby ,
modified ,
modifiedby ,
user_group_id ,
last_password_change_date ,
p_failed_login_attempts
);
SELECT MAX(id) INTO user_id_tmp FROM users ;
INSERT INTO user_passwords
(
"user_id" ,
"password" ,
"created"
)
VALUES
(
user_id_tmp,
p_password,
created
);
END USP_USER_ADD;
It's giving me two errors
1: Error(26,5): PLS-00103: Encountered the symbol "INSERT" when expecting one of the following: begin function package pragma procedure subtype type use form current cursor The symbol "begin" was substituted for "INSERT" to continue.
2: Error(78,19): PLS-00103: Encountered the symbol "end-of-file" when expecting one of the following: begin case declare end exception exit for goto if loop mod null pragma raise return select update while with << close current delete fetch lock insert open rollback savepoint set sql execute commit forall merge pipe
These are my tables
--------------------------------------------------------
-- DDL for Table USER_PASSWORDS
--------------------------------------------------------
CREATE TABLE "NEWS1.0"."USER_PASSWORDS"
( "ID" NUMBER(11,0),
"USER_ID" NUMBER(11,0),
"PASSWORD" VARCHAR2(255 BYTE),
"SALT" VARCHAR2(255 BYTE),
"IP" VARCHAR2(15 BYTE),
"CREATEDBY" VARCHAR2(255 BYTE),
"CREATED" DATE,
"MODIFIED" DATE,
"MODIFIEDBY" VARCHAR2(255 BYTE)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- DDL for Index USERS_PK
--------------------------------------------------------
CREATE UNIQUE INDEX "NEWS1.0"."USERS_PK" ON "NEWS1.0"."USER_PASSWORDS"
("ID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- Constraints for Table USER_PASSWORDS
--------------------------------------------------------
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" MODIFY ("ID" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" MODIFY ("USER_ID" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" MODIFY ("PASSWORD" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USER_PASSWORDS" ADD CONSTRAINT "USERS_PK" PRIMARY
KEY ("ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE;
--------------------------------------------------------
-- DDL for Trigger BI_USER_PASSWORDS_ID
--------------------------------------------------------
CREATE OR REPLACE TRIGGER "NEWS1.0"."BI_USER_PASSWORDS_ID"
before insert on "USER_PASSWORDS"
for each row
begin
if inserting then
if :NEW."ID" is null then
select USER_PASSWORDS_SEQ.nextval into :NEW."ID" from dual;
end if;
end if;
end;
/
ALTER TRIGGER "NEWS1.0"."BI_USER_PASSWORDS_ID" ENABLE;
--------------------------------------------------------
-- DDL for Table USERS
--------------------------------------------------------
CREATE TABLE "NEWS1.0"."USERS"
( "ID" NUMBER(*,0),
"USERNAME" VARCHAR2(100 BYTE),
"PASSWORD" VARCHAR2(255 BYTE),
"SALT" BLOB,
"EMAIL" VARCHAR2(100 BYTE),
"FIRST_NAME" VARCHAR2(100 BYTE),
"LAST_NAME" VARCHAR2(100 BYTE),
"EMAIL_VERIFIED" NUMBER(*,0) DEFAULT 1,
"ACTIVE" NUMBER(*,0) DEFAULT 1,
"IP_ADDRESS" VARCHAR2(50 BYTE),
"USER_GROUP_ID" NUMBER(*,0),
"LAST_PASSWORD_CHANGE_DATE" DATE,
"FAILED_LOGIN_ATTEMPTS" NUMBER(*,0),
"CREATED" DATE,
"CREATEDBY" VARCHAR2(255 BYTE),
"MODIFIED" DATE,
"MODIFIEDBY" VARCHAR2(255 BYTE)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"
LOB ("SALT") STORE AS (
TABLESPACE "USERS" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
NOCACHE LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)) ;
--------------------------------------------------------
-- DDL for Index USERS_UK2
--------------------------------------------------------
CREATE UNIQUE INDEX "NEWS1.0"."USERS_UK2" ON "NEWS1.0"."USERS" ("EMAIL")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- DDL for Index USERS_UK1
--------------------------------------------------------
CREATE UNIQUE INDEX "NEWS1.0"."USERS_UK1" ON "NEWS1.0"."USERS" ("USERNAME")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ;
--------------------------------------------------------
-- Constraints for Table USERS
--------------------------------------------------------
ALTER TABLE "NEWS1.0"."USERS" MODIFY ("ID" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USERS" MODIFY ("USERNAME" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USERS" MODIFY ("PASSWORD" NOT NULL ENABLE);
ALTER TABLE "NEWS1.0"."USERS" ADD CONSTRAINT "USERS_UK1" UNIQUE
("USERNAME")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE;
ALTER TABLE "NEWS1.0"."USERS" ADD CONSTRAINT "USERS_UK2" UNIQUE ("EMAIL")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE;
The problem is in this section of code:
P_failed_login_attempts IN NUMBER )
AS
BEGIN
declare
user_id_tmp number(20);
INSERT INTO users( "username" ,
-- rest omitted
Remove the DECLARE and move the declaration of user_id_tmp between the AS and the BEGIN:
P_failed_login_attempts IN NUMBER )
AS
user_id_tmp number(20);
BEGIN
INSERT INTO users( "username" ,
-- rest omitted
In Oracle PL/SQL, a block has the form
DECLARE
-- some variable declarations
BEGIN
-- some code
EXCEPTION
-- some exception-handling
END;
The variable declarations and exception-handling sections are optional. If there are no variable declarations, you can remove the keyword DECLARE (it's not an error if you leave it there). However, if the block has no exception handling, the EXCEPTION keyword must be removed.
When declaring a procedure or function, the CREATE OR REPLACE PROCEDURE ... AS part takes the place of a DECLARE.
The -- some code section of a PL/SQL block can contain further blocks inside it. In fact, this is what the PL/SQL compiler thought you wanted when it saw your declare keyword. It thought you were doing something like the following:
CREATE OR REPLACE PROCEDURE USP_USER_ADD (
-- parameters omitted
)
AS
BEGIN
DECLARE
-- some variable declarations
BEGIN
-- some code
END;
END USP_USER_ADD;
However, you had an INSERT after the declare. The compiler wasn't expecting that, and that's why you got an error. You also got an error about end-of-file, and that was because the PL/SQL compiler was expecting two ENDs but got to the end of your stored procedure before it found the second one.
I think that you should remove DECLARE.
Oracle 11g Create Procedure Documentation
Previous answers have cleared up the compile errors, so I won't address those. But you have a potential bug in place: the id inserted into the user_passwords table may not be the same as the one in the users table. In a multi-user environment it is possible that another session inserts into users and commits after your insert but before you do the SELECT MAX. That would hopefully raise DUP_VAL_ON_INDEX, but it would be extremely difficult to find.
You can remove this possibility, and have slightly less/cleaner code (the id still being assigned by a trigger?), by using the RETURNING option on the insert itself:
insert into users ( ... ) values ( ... ) returning id into user_id_tmp;
and delete the select max(id) ... statement.
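Putting that together, a sketch of the two INSERTs in the procedure body (column lists abbreviated; identifiers kept as in the original procedure):
INSERT INTO users ("username", "password", /* ... remaining columns ... */ "FAILED_LOGIN_ATTEMPTS")
VALUES (username, p_password, /* ... */ p_failed_login_attempts)
RETURNING id INTO user_id_tmp; -- no SELECT MAX(id) needed afterwards
INSERT INTO user_passwords ("user_id", "password", "created")
VALUES (user_id_tmp, p_password, created);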
