ORA-01465: invalid hex number, when inserting data to oracle table - oracle11g

I'm working with Oracle, and when I run a simple INSERT query I get the error below:
ORA-01465: invalid hex number
Note that I don't have any hex-related column in my table.
I have tried a lot of solutions from the web, but without any success.
INSERT INTO LIMITE_AGENC (OBJECTID_1, OBJECTID, NOM_AGENCE, SURFACE, BASSIN,
NOM, SHAPE_LENG, ID_REG, ORDRE_ABH)
VALUES
(30, 25, 'Agence', 13591, 'Hydraulique',
'Loukkomoti', 8.12883522, 12, 3);
And here is my table script; I have updated it with more details in case you need to check the index part:
DROP TABLE LIMITE_AGENC CASCADE CONSTRAINTS;
CREATE TABLE LIMITE_AGENC
(
OBJECTID_1 INTEGER,
OBJECTID INTEGER,
NOM_AGENCE NVARCHAR2(80),
SURFACE INTEGER,
BASSIN NVARCHAR2(100),
NOM NVARCHAR2(50),
SHAPE_LENG NUMBER(38,8),
SHAPE SDE.ST_GEOMETRY,
ID_REG INTEGER,
ORDRE_ABH INTEGER
)
LOB ("SHAPE"."POINTS") STORE AS BASICFILE (
TABLESPACE USERS
ENABLE STORAGE IN ROW
CHUNK 8192
PCTVERSION 10
NOCACHE
LOGGING
STORAGE (
INITIAL 64K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
))
TABLESPACE USERS
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
MONITORING;
CREATE UNIQUE INDEX UNIQ_CONST_AGENCE2 ON LIMITE_AGENC
(OBJECTID)
LOGGING
TABLESPACE USERS
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MAXSIZE UNLIMITED
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
);
ALTER TABLE LIMITE_AGENC ADD (
CONSTRAINT UNIQ_CONST_AGENCE2
UNIQUE (OBJECTID)
USING INDEX UNIQ_CONST_AGENCE2
ENABLE VALIDATE);
GRANT SELECT ON LIMITE_AGENC TO SDE;

Hi, I have found the issue. The problem is that I have to include the geometry column SHAPE in my statement with an empty value, as below (in Oracle the empty string '' is treated as NULL):
Insert into LIMITE_AGENC (OBJECTID_1, OBJECTID, NOM_AGENCE, SURFACE, BASSIN,
NOM, SHAPE_LENG,SHAPE, ID_REG, ORDRE_ABH)
Values
(30, 25, 'Agence', 13591, 'Hydraulique',
'Loukkomoti', 8.12883522, '', 12, 3);
I hope this helps someone in the future.


InnoDB: Why does transaction wait for IX lock when it already has a X lock?

I'm getting a deadlock in my code. Txn 1 is waiting for a lock that is currently held by Txn 2. Txn 2 already holds an X lock, but is still requesting an IX lock.
Both transactions run an INSERT query using ActiveRecord import.
The deadlock section in SHOW ENGINE INNODB STATUS gives me:
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 7431 page no 178 n bits 80 index PRIMARY of table `company_ebdb`.`user_metrics` trx id 51241147861 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 7431 page no 178 n bits 80 index PRIMARY of table `company_ebdb`.`user_metrics` trx id 51241147863 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 7431 page no 178 n bits 80 index PRIMARY of table `company_ebdb`.`user_metrics` trx id 51241147863 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
My question is why does Txn 2 need an IX lock if it already holds an X lock?
Update 1:
Here is the complete LATEST DETECTED DEADLOCK section:
------------------------
LATEST DETECTED DEADLOCK
------------------------
2022-09-02 16:11:22 0x149b2c2ef700
*** (1) TRANSACTION:
TRANSACTION 51241147861, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 8 lock struct(s), heap size 1136, 5 row lock(s), undo log entries 1
MySQL thread id 8423321, OS thread handle 22689954051840, query id 19045173171 172.31.15.180 mitdb4dm1n update
INSERT INTO `user_metrics` (`id`,`current`,`total`,`my_type`,`owner_id`,`owner_type`,`created_at`,`updated_at`) VALUES (NULL,175,175,0,108840,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,100,151,0,108841,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,169,169,0,112780,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,202,217,0,112781,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,26,62,0,112782,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,169,169,0,112794,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,177,217,0,112795,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,28,62,0,112796,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,140,140,0,114162,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,64,64,0,114163,'OwnerName','2022-09-0
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 7431 page no 178 n bits 80 index PRIMARY of table `company_ebdb`.`user_metrics` trx id 51241147861 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) TRANSACTION:
TRANSACTION 51241147863, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
7 lock struct(s), heap size 1136, 5 row lock(s), undo log entries 1
MySQL thread id 8424694, OS thread handle 22656693761792, query id 19045173179 172.31.4.27 mitdb4dm1n update
INSERT INTO `user_metrics` (`id`,`current`,`total`,`my_type`,`owner_id`,`owner_type`,`created_at`,`updated_at`) VALUES (NULL,1,89,0,137623,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,0,3,0,137624,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,178,182,0,137635,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,77,129,0,137645,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,5,14,0,137646,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,0,87,0,137656,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,0,11,0,137657,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,71,71,0,146601,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,71,71,0,146616,'OwnerName','2022-09-02 16:11:22','2022-09-02 16:11:22'),(NULL,39,64,0,146631,'OwnerName','2022-09-02 16:11:22','2022-0
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 7431 page no 178 n bits 80 index PRIMARY of table `company_ebdb`.`user_metrics` trx id 51241147863 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 7431 page no 178 n bits 80 index PRIMARY of table `company_ebdb`.`user_metrics` trx id 51241147863 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
*** WE ROLL BACK TRANSACTION (2)
Update 2:
Create table output for the table involved:
CREATE TABLE `user_metrics` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`current` decimal(10,0) DEFAULT NULL,
`total` decimal(10,0) DEFAULT NULL,
`my_type` int(11) DEFAULT NULL,
`owner_id` int(11) DEFAULT NULL,
`owner_type` varchar(255) DEFAULT NULL,
`created_at` datetime NOT NULL,
`updated_at` datetime NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `index_user_metrics_on_owner_type_and_owner_id_and_my_type` (`owner_type`,`owner_id`,`my_type`)
) ENGINE=InnoDB AUTO_INCREMENT=1167 DEFAULT CHARSET=utf8mb4
Plan A:
Live with it. But do catch the error and replay the INSERT.
Plan B:
Try this. (I have no confidence that it will help.)
You have two UNIQUE keys (the PK is one). Let's switch them around, to the following. (This assumes you can change all three columns to NOT NULL.)
PRIMARY KEY(`owner_type`,`owner_id`,`my_type`),
INDEX(id)
Rationale:
Two UNIQUE keys lead to two things being locked, and more than twice the likelihood of a conflict.
Having the data clustered in the order that benefits the query speeds it up, making it more likely to finish before conflicting with another connection.
I doubt that either of these will be sufficient to prevent deadlocks, but they may decrease their frequency. Hence, plan on doing Plan A, too.
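Plan A (catch the deadlock error and replay the INSERT) can be sketched as a small retry wrapper. This is a minimal, library-agnostic Python sketch; the helper names, the exception-matching predicate, and the backoff delay are all assumptions for illustration:

```python
import time

def run_with_retry(do_insert, is_deadlock, max_attempts=3, delay=0.05):
    """Run do_insert(); if is_deadlock(exc) says the failure was a
    deadlock, wait briefly and replay the whole INSERT."""
    for attempt in range(1, max_attempts + 1):
        try:
            return do_insert()
        except Exception as exc:
            if attempt == max_attempts or not is_deadlock(exc):
                raise
            time.sleep(delay)  # back off, then replay the INSERT

# Demo: a fake insert that "deadlocks" once, then succeeds.
calls = {"n": 0}

def fake_insert():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("Deadlock found when trying to get lock")
    return "ok"

result = run_with_retry(fake_insert, lambda e: "Deadlock" in str(e))
print(result)  # ok (after one replay)
```

The key point is that the victim transaction is rolled back entirely, so the whole INSERT (all rows of the batch) must be replayed, not just the failing row.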

Teradata auto increment id

I've been working with Teradata for a couple of months, but I cannot understand how to correctly create an auto-incrementing ID.
Surfing the internet, the clearest and best-working way I found to create an id column as auto-generated is:
CREATE TABLE tbl_emp (
id INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1
INCREMENT BY 1
MINVALUE 0
MAXVALUE 1000000
NO CYCLE),
Name VARCHAR(20),
Phone VARCHAR(10));
But it seems there is another way to do the same thing, by adding primary index (id):
CREATE TABLE tbl_emp (
id INTEGER GENERATED ALWAYS AS IDENTITY (START WITH 1
INCREMENT BY 1
MINVALUE 0
MAXVALUE 1000000
NO CYCLE),
Name VARCHAR(20),
Phone VARCHAR(10)) primary index (id);
I am a bit confused. What is the actual difference between these two queries? What will the result be?

oracle 11g application express interface virtual column that calculates total amount of sales

I'm currently working on my interface project, editing a page item of a form in Oracle 11g Application Express. One of the columns is derived from another column in the table, and it does not work properly when I try to register new data through the interface: the column cannot calculate the derived value the way it does in Oracle SQL Developer. I set the column as follows.
I'm clueless whether I should display the column as hidden, or whether it has something to do with the source settings needing a PL/SQL expression to calculate the TOTAL_AMOUNT value automatically from the QUANTITY and COST_PERUNIT columns. I have searched the web, but can't find a solution or a related issue.
This command was uploaded in .txt format into Application Express:
CREATE TABLE PAYMENT(
PAY_ID NUMBER(25)NOT NULL,
PAY_DATE DATE,
PAY_METHOD VARCHAR2(50 char),
SPARE_TYPE VARCHAR2(50 char),
QUANTITY NUMBER(12),
COST_PERUNIT NUMBER(6,2),
TOTAL_AMOUNT as (quantity*cost_perunit),
CONSTRAINT PAY_PK PRIMARY KEY(PAY_ID)
);
CREATE SEQUENCE pay_id_seq START WITH 400;
I'm referring to the TOTAL_AMOUNT as (quantity*cost_perunit) column, which returns an error when I try to create new data through the form. What should I change in the page edit settings so it works as it is supposed to?
You have two choices: create the table without the virtual column and calculate the total amount at run time,
e.g. select a.*, (quantity*cost_perunit) total_amount from payment a;
or create the table with a virtual column, using the keyword VIRTUAL:
CREATE TABLE PAYMENT(
PAY_ID NUMBER(25)NOT NULL,
PAY_DATE DATE,
PAY_METHOD VARCHAR2(50 char),
SPARE_TYPE VARCHAR2(50 char),
QUANTITY NUMBER(12),
COST_PERUNIT NUMBER(6,2),
TOTAL_AMOUNT as (quantity*cost_perunit) VIRTUAL,
CONSTRAINT PAY_PK PRIMARY KEY(PAY_ID));
#ddl payment
DBMS_METADATA.GET_DDL(OBJECT_TYPE,OBJECT_NAME,OWNER)
----------------------------------------------------
CREATE TABLE "SCOTT"."PAYMENT"
( "PAY_ID" NUMBER(25,0) NOT NULL ENABLE,
"PAY_DATE" DATE,
"PAY_METHOD" VARCHAR2(50 CHAR),
"SPARE_TYPE" VARCHAR2(50 CHAR),
"QUANTITY" NUMBER(12,0),
"COST_PERUNIT" NUMBER(6,2),
"TOTAL_AMOUNT" NUMBER GENERATED ALWAYS AS ("QUANTITY"*"COST_PERUNIT") VIRTUAL ,
CONSTRAINT "PAY_PK" PRIMARY KEY ("PAY_ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
TABLESPACE "EXAMPLE" ENABLE
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "EXAMPLE" ;
SQL> insert into payment (pay_id,pay_date,pay_method,spare_type,quantity,cost_perunit)
2 values
3 (&pid,&pdt,&pmethod,&spare,&qty,&cpu);
Enter value for pid: 101
Enter value for pdt: sysdate
Enter value for pmethod: 'CASH'
Enter value for spare: 'CREDIT CARD'
Enter value for qty: 12
Enter value for cpu: 1.9
1 row created.
Elapsed: 00:00:00.10
SQL> /
Enter value for pid: 102
Enter value for pdt: sysdate-1
Enter value for pmethod: 'DEBIT CARD'
Enter value for spare: 'CREDIT CARD'
Enter value for qty: 21
Enter value for cpu: 2.99
1 row created.
Elapsed: 00:00:00.00
SQL> commit;
Commit complete.
SQL> select * from payment;
PAY_ID PAY_DATE PAY_METHOD SPARE_TYPE QUANTITY COST_PERUNIT TOTAL_AMOUNT
---------- ------------------- -------------------------------------------------- -------------------------------------------------- ---------- ------------ ------------
101 2020-06-25 14:06:46 CASH CREDIT CARD 12 1.9 22.8
102 2020-06-24 14:07:29 DEBIT CARD CREDIT CARD 21 2.99 62.79
SQL>
Edit: I have no idea about APEX, but if you decide to create the table without the virtual column, you can add a label or a non-editable text box item and use a formula to display the calculation in APEX forms at run time.

Sqlite delete slow in 4GB

I have been using the FAT32 file system. I have a database of 4 GB, consisting of one parent table and one child table. The parent table has 10 rows and the child table has 4000 rows; one child row is about 1 MB.
When I delete a row in the parent table, the deletion cascades to the 1 MB child records (pragma foreign_keys is on). When I try to delete 100 MB of data by cascade (1 parent record, 100 child records) it takes a very long time (almost 1-10 minutes) to complete, and the duration increases with the size of the data (100 MB: 1-10 minutes, 300 MB: 3-30 minutes, etc.).
I tried some pragma commands (synchronous, temp_store, journal_mode) suggested by other posts, and I also tried adding an index on the foreign key, but none of these solved my problem. (Actually, after adding the index on the foreign key, deleting 1 MB of data became faster, but the duration of a 100 MB deletion did not change.) Can you please give me any suggestions to improve deletion performance?
CREATE TABLE "ANHXT" (
"id" integer primary key autoincrement,
"ANH_AD" text,
"ANH_DBGMHWID" text,
"ANH_TYPE" integer,
"ANH_INDEXNO" int64_t
)
CREATE TABLE "PRCXT" (
"id" integer primary key autoincrement,
"ANP_SEGMENTNO" integer not null,
"ANP_VALUE" blob,
"ANH_PRC_id" bigint,
constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references "ANHXT" ("id") on update cascade on delete cascade deferrable initially deferred
)
CREATE UNIQUE INDEX UQC_ANH_TYPE on ANHXT( ANH_TYPE)
CREATE UNIQUE INDEX UQC_ANP_SEGMENTNO_ANAHTARID on PRCXT( ANP_SEGMENTNO,ANH_PRC_id)
CREATE INDEX findex on PRCXT( ANH_PRC_id)
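For reference, the cascade path described above can be reproduced in miniature with Python's built-in sqlite3 module (schema trimmed to the relevant columns, blob sizes scaled down; this demonstrates only the mechanics, not the 4 GB timings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # required for ON DELETE CASCADE

con.execute("""CREATE TABLE ANHXT (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    ANH_AD TEXT)""")
con.execute("""CREATE TABLE PRCXT (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    ANP_VALUE BLOB,
    ANH_PRC_id BIGINT,
    FOREIGN KEY (ANH_PRC_id) REFERENCES ANHXT(id) ON DELETE CASCADE)""")
# Without this index, SQLite scans all of PRCXT for every parent delete.
con.execute("CREATE INDEX findex ON PRCXT(ANH_PRC_id)")

con.execute("INSERT INTO ANHXT (id, ANH_AD) VALUES (1, 'parent')")
con.executemany("INSERT INTO PRCXT (ANP_VALUE, ANH_PRC_id) VALUES (?, 1)",
                [(b"x" * 1024,)] * 100)  # 100 small child blobs

con.execute("DELETE FROM ANHXT WHERE id = 1")  # cascades to the children
remaining = con.execute("SELECT COUNT(*) FROM PRCXT").fetchone()[0]
print(remaining)  # 0
```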

Automatic random primary key within a specific range in sqlite3?

I would like to automatically insert a primary key every time I add a new record to an SQLite3 table, much like a PRIMARY KEY AUTOINCREMENT except that the value should be randomly chosen from some range (say 0000 through 9999) rather than being assigned sequentially.
For demonstration purposes, let's restrict the range to 1 through 6 instead and try to populate the following table:
CREATE TABLE dice (rolled INTEGER PRIMARY KEY NOT NULL);
Now every time I insert a new record into that table, I want a new random primary key to be created.
The following works and does exactly what I want
INSERT INTO dice VALUES(
(
WITH RECURSIVE roll(x) AS (
VALUES(ABS(RANDOM()) % 6 + 1)
UNION ALL
SELECT x % 6 + 1 FROM roll
)
SELECT x FROM roll WHERE (
SELECT COUNT(*) FROM dice where rolled = x
) = 0 LIMIT 1
)
);
except that I have to invoke it manually/explicitly.
Is there any way to embed the above (or an equivalent) calculation for the random primary key into a DEFAULT clause for the "rolled" column or into some sort of trigger, so that a new key will be calculated automatically every time I insert a record?
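The statement above can be exercised from, say, Python's built-in sqlite3 module to confirm that repeated inserts fill the range without collisions (the demo harness is an assumption; the SQL is unchanged from above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dice (rolled INTEGER PRIMARY KEY NOT NULL)")

# The recursive CTE starts at a random value 1..6 and cycles through the
# range until it finds a value not yet present in the table.
INSERT = """INSERT INTO dice VALUES((
    WITH RECURSIVE roll(x) AS (
        VALUES(ABS(RANDOM()) % 6 + 1)
        UNION ALL
        SELECT x % 6 + 1 FROM roll
    )
    SELECT x FROM roll WHERE (
        SELECT COUNT(*) FROM dice WHERE rolled = x
    ) = 0 LIMIT 1
))"""

for _ in range(6):  # after six inserts every value 1..6 must be used
    con.execute(INSERT)

rows = sorted(r[0] for r in con.execute("SELECT rolled FROM dice"))
print(rows)  # [1, 2, 3, 4, 5, 6]
```

Note the LIMIT 1 is what keeps the recursion finite: SQLite evaluates the recursive CTE lazily and stops as soon as the first free value is found.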