I am writing a procedure with a cursor that takes a table name and a column name from a lookup table we have in the DB. This table has a mapping of table names and columns.
I collect these two pieces of information in variables and run a select query to see whether the data in that column is a number or not, for example:
select TO_NUMBER(REGEXP_REPLACE(v_landing_column,'',''))
from v_landing_table
Here v_landing_column is a column name and v_landing_table is a table name. One of the values in v_landing_column is 12,300, which is a number, but because of the comma the loop flow goes into the exception handler, where I dump the error record into a separate table.
I tried using REPLACE as well with the above syntax, but the flow still dumps this record with value 12,300 into the error table. How do I remove the comma from 12,300 inside a PL/SQL procedure using EXECUTE IMMEDIATE?
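For what it's worth, here is a minimal sketch of one way to do it; the table and column values below are placeholders (in the real procedure they come from the lookup-table cursor), and the regular expression simply strips everything that is not part of a number before TO_NUMBER is applied:
DECLARE
  v_landing_table  VARCHAR2(128) := 'MY_LANDING_TABLE';   -- placeholder table name
  v_landing_column VARCHAR2(128) := 'MY_LANDING_COLUMN';  -- placeholder column name
  v_sql            VARCHAR2(4000);
  v_count          NUMBER;
BEGIN
  -- remove every character that is not a digit, sign or decimal point,
  -- so '12,300' becomes '12300' before TO_NUMBER sees it
  v_sql := 'SELECT COUNT(TO_NUMBER(REGEXP_REPLACE(' || v_landing_column
        || ', ''[^0-9.+-]'', ''''))) FROM ' || v_landing_table;
  EXECUTE IMMEDIATE v_sql INTO v_count;
  DBMS_OUTPUT.PUT_LINE('Numeric rows counted: ' || v_count);
END;
/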
I have two databases with the same structure. The first is the main one, while the second gets updated periodically (in reality I have multiple "secondary" databases that I want to merge, one by one, into the main one).
The structure of the main and the secondary databases is identical.
I want to periodically dump all new values from the secondary database into the main one. However, the second time I do it, I want to exclude rows that were already copied the first time (and so on).
The tables in all these databases have:
an ID column set as PRIMARY KEY going from 1 to N for each database (I suspect this was a mistake, but at the moment I can't change this)
a DATE column, representing a posix timestamp (float)
some other columns
My code looks like this:
ATTACH DATABASE secondary.db AS temp_db
DROP TABLE IF EXISTS my_table_temp
CREATE TABLE my_table_temp AS SELECT * FROM my_table
INSERT INTO main.my_table_temp SELECT * FROM temp_db.my_table
DELETE FROM my_table
INSERT INTO main.my_table SELECT DISTINCT * FROM main.my_table_temp ORDER BY date
DROP TABLE my_table_temp
The problem is that, I suspect due to the repeated ID column, the last INSERT with the DISTINCT clause gives me:
UNIQUE constraint failed: my_table.id
However, I don't care at all about the ID field, which could also be dropped or reset.
NOTES:
the secondary databases are constantly updated by code that, at the moment, I can't change
I initialize the "main" database by copying one of the secondary databases, to avoid regenerating the whole structure from scratch. Maybe there is a better way of doing this
Apologies if this is a naive question, but I'm very new to SQLite.
Thanks
Following the advice from #forpas, I solved this with the following code:
Assuming the columns to be id, date, col1 and col2:
ATTACH DATABASE secondary.db AS temp_db
DROP TABLE IF EXISTS my_table_temp
CREATE TABLE my_table_temp AS SELECT date,col1,col2 FROM my_table
INSERT INTO main.my_table_temp SELECT date,col1,col2 FROM temp_db.my_table
DROP TABLE my_table /* I need to recreate my_table as I've removed a column*/
CREATE TABLE main.my_table AS SELECT DISTINCT date,col1,col2 FROM main.my_table_temp ORDER BY date
DROP TABLE my_table_temp
Also, I automated the extraction of the column names with
SELECT name FROM PRAGMA_TABLE_INFO('my_table');
This is then passed to the Python code running the script, and the column id is removed from the list. Note that the second (and following) times I run this code, the column id won't be present in my_table to start with. However, this approach allows the code to be the same in both cases, whether the column id is there or not.
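For reference, the same list can also be built entirely in SQLite; a small sketch, assuming id is the only column to exclude:
-- returns e.g. 'date,col1,col2' for the table above
SELECT group_concat(name, ',')
FROM pragma_table_info('my_table')
WHERE name <> 'id';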
This procedure is then iterated over each table name to fully merge the two databases.
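As a side note, a possible alternative for the later runs (not the approach above, just a sketch assuming the id column has already been removed) is to insert only the rows that are not yet in the main table, so nothing has to be dropped or recreated:
ATTACH DATABASE 'secondary.db' AS temp_db;
-- EXCEPT keeps only the secondary rows that are not already present in main
INSERT INTO main.my_table (date, col1, col2)
SELECT date, col1, col2 FROM temp_db.my_table
EXCEPT
SELECT date, col1, col2 FROM main.my_table;
DETACH DATABASE temp_db;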
I need help getting a PL/SQL procedure to insert/delete the rows of a table, because when I used the update functionality I got duplicates for that particular sequence ID field.
So for a particular sequence ID row, whenever I insert the data, it should be the latest in that table.
The last sentence you wrote suggests that you have to
delete row(s) whose ID equals that particular "sequence ID" value
then insert a new row
If you expected some code to be written, you should have posted some more info (CREATE TABLE and INSERT INTO sample data, as well as the way you manipulate it by inserting a new row, showing what you expect to happen with the old one(s)). It is difficult to write code based on an unknown data model.
A guess...
INSERT INTO schema_name.table_name(
primary_key_column
, other_column
)
VALUES(
(SELECT max(primary_key_column) + 1 FROM schema_name.table_name)
, 'other_value'
);
COMMIT;
This is the procedure I am using:
https://drive.google.com/file/d/1eGbxSppjexpICKh6pzuW0ZzckVxA6BB0/view?usp=sharing
My requirement is that when we need to insert the new data, the previous data should be deleted for the corresponding ID.
In the above procedure I am updating the data.
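Without seeing the full procedure, only a rough sketch of the delete-then-insert idea can be offered; the table and column names below are assumptions:
CREATE OR REPLACE PROCEDURE refresh_by_seq_id (
  p_seq_id IN NUMBER,
  p_value  IN VARCHAR2
) AS
BEGIN
  -- remove any previous row(s) for this sequence ID ...
  DELETE FROM my_table
   WHERE seq_id = p_seq_id;

  -- ... then insert the new data, so it is always the latest for that ID
  INSERT INTO my_table (seq_id, some_value)
  VALUES (p_seq_id, p_value);

  COMMIT;
END refresh_by_seq_id;
/
If keeping the existing row and refreshing its values is acceptable, a MERGE statement (update when matched, insert when not) would reach the same end state without deleting.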
Hoping you all can help. I have created a table with a UPI (incremental index), and when I run the macro to insert into it, I continuously get the error "the positional assignment list has too few values". I have verified that the two tables match except for the UPI ID. How do you account for that field in the insert macro so that the table and the macro have the same number of assignments?
The list of values specified by the INSERT statement is shorter than the list of columns in the table. This error occurs on the INSERT statement.
Check that the columns of the source table match those of the destination table.
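One common fix, sketched here with made-up table and column names, is to list the target columns explicitly in the macro's INSERT and leave the identity/UPI column out, so the number of values matches the number of named columns and the ID is generated by Teradata:
-- target_db.target_table and the column names are assumptions
INSERT INTO target_db.target_table (col1, col2, col3)
SELECT col1, col2, col3
FROM   source_db.source_table;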
I am trying to automate some performance checks on queries in Teradata.
As part of that, I want to check whether the columns used in a join condition are the primary index of the respective table, and similarly whether the columns used in a WHERE condition are partitioning columns of the respective table. Is there any direct Teradata query that can give this without parsing the whole query?
Yes, there are two DBC views you can query:
dbc.columnsv
dbc.indicesv
Primary index information is stored in the second view; just search with your table name and database name.
Partitioning information is stored in columnsv; there is a column (PartitioningColumn) with the flag value 'Y' for partitioning columns.
Example:
SELECT DatabaseName, TableName, ColumnName FROM DBC.ColumnsV WHERE PartitioningColumn = 'Y' AND TableName = <> AND DatabaseName = <>;
SELECT * FROM DBC.IndicesV WHERE TableName = <> AND DatabaseName = <>;
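For example, to check whether a particular join column belongs to a table's primary index, something along these lines should work; the IndexType codes 'P' (primary index) and 'Q' (partitioned primary index) are quoted from memory, so verify them against the DBC documentation for your release:
SELECT DatabaseName, TableName, ColumnName, IndexType, ColumnPosition
FROM   DBC.IndicesV
WHERE  DatabaseName = <>
AND    TableName    = <>
AND    ColumnName   = <>           -- the column used in the join condition
AND    IndexType IN ('P','Q');     -- P = primary index, Q = partitioned primary index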
I face the following exception when I try to get data from a table with the following structure:
ERROR:-528 MEssage: [Informix .NET provider][Informix]Maximum output
rowsize (32767) exceeded.
CREATE TABLE dr66req
(
    req_ser  SERIAL PRIMARY KEY,
    req_desc LVarChar(32739)
);
Ref:
The total number of bytes that this statement selects exceeds the
maximum that can be passed between the database server and the program.
Try the following:
1) Make sure that the columns selected are the ones that you intended.
2) Check that you have not named some very wide character column by mistake, neglected to specify a substring, or specified too long a substring. If the selection is what you require, rewrite this SELECT statement into two or more statements, each of which selects only some of the fields.
3) If it is a join of several tables, you might best select all desired data INTO TEMP, then select individual columns of the temporary table.
4) If this is a fetch via a cursor in a program, you might revise the program as follows (a sketch follows below). First, change the cursor to select only the ROWID of the desired row. Second, augment the FETCH statement with a series of SELECT statements, each of which selects one or a few columns WHERE ROWID = the saved row ID.
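For point 4, a sketch against the dr66req table above; the 2000-byte chunk size and the literal rowid are only illustrative, and note that ROWID is not available by default on fragmented tables:
-- first fetch only the rowid of the desired row through the cursor
SELECT rowid
FROM   dr66req
WHERE  req_ser = 1;

-- then pull the wide column for that row in pieces, staying under the 32767-byte row limit
SELECT req_ser,
       SUBSTR(req_desc, 1, 2000) AS req_desc_part
FROM   dr66req
WHERE  rowid = 257;   -- the rowid saved from the first query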