pyodbc cursor's rollback functionality - odbc

Using pyodbc I am updating three database tables. While updating the last table I hit an error and call rollback(). The first and third tables are not updated, as expected, but the second table retains its updated values even after rollback(). Why is it behaving this way?
dbCursor.execute("insert into table1(item11, item12, item13, item14, item15, item16) values(?,?,?,?,?,?)", value1, value2, value3, value4, value5, value6)
dbCursor.execute("update into table2(item21, item22, item23, item24, item25, item26) values(?,?,?,?,?,?)", value1, value2, value3, value4, value5, value6)
---> two rows in the table get updated
dbCursor.execute("select item from table2 where unit_no = ?", unit_number)
tempVal = dbCursor.fetchone()
dbCursor.execute("update table3 set item31=?, item32=? where unit=?", val1, val2, tempVal)
---> Nothing gets updated as there is no tempVal found
if dbCursor.rowcount is 0:
dbCursor.rollback()
I expected rollback() to roll back the changes to all three tables, but table2 retains the updated values.
connection.getinfo(pyodbc.SQL_DRIVER_NAME) --> iclit09b.dll
connection.getinfo(pyodbc.SQL_DRIVER_VER) --> 4.10.FC4DE

A quick test with 'autocommit = False' shows the rollback works as it should:
Create the test tables with:
informix#irk:/data/informix/IBM/OpenInformix/pyodbc# cat p.sql
drop table table1;
drop table table2;
drop table table3;
create table table1(item11 integer, item12 integer, item13 integer, item14 integer, item15 integer, item16 integer) ;
create table table2(item11 integer, item12 integer, item13 integer, item14 integer, item15 integer, item16 integer) ;
create table table3(item31 integer, item32 integer, item13 integer, unit integer, item15 integer, item16 integer) ;
insert into table2 values (1,1,1,1,1,1);
insert into table2 values (1,1,1,1,1,1);
informix#irk:/data/informix/IBM/OpenInformix/pyodbc# dbaccess stores7 p
Database selected.
Table dropped.
Table dropped.
Table dropped.
Table created.
Table created.
Table created.
1 row(s) inserted.
1 row(s) inserted.
Database closed.
informix#irk:/data/informix/IBM/OpenInformix/pyodbc#
'table2' will have "1,1,1,1,1,1" for both rows
The Python code below will output the data in 'table2', switch off autocommit and try the update.
informix#irk:/data/informix/IBM/OpenInformix/pyodbc# cat p.py
import pyodbc
cnxn = pyodbc.connect('DSN=irk1210')
dbCursor = cnxn.cursor()
value1 = "2"
value2 = "2"
unit_number = "1"
val1 = "2"
val2 = "2"
tempVal = "2"
print("before update")
dbCursor.execute("select item11,item12 from table2")
row = dbCursor.fetchone()
if row:
    print(row)
#Set AutoCommit off
cnxn.autocommit = False
dbCursor.execute("insert into table1(item11, item12) values (?,?)", value1, value2)
dbCursor.execute("update table2 set item11=?, item12=?", value1, value2)
print("after update")
dbCursor.execute("select item11,item12 from table2")
row = dbCursor.fetchone()
if row:
    print(row)
dbCursor.execute("select item11 from table2 where item12 = ?", unit_number)
tempVal = dbCursor.fetchone()
dbCursor.execute("update table3 set item31=?, item32=? where unit=?", val1, val2, tempVal)
if dbCursor.rowcount == 0:
    dbCursor.rollback()
print("after rollback")
dbCursor.execute("select item11,item12 from table2")
row = dbCursor.fetchone()
if row:
    print(row)
informix#irk:/data/informix/IBM/OpenInformix/pyodbc#
Output:
informix#irk:/data/informix/IBM/OpenInformix/pyodbc# python3 p.py
before update
(1, 1)
after update
(2, 2)
after rollback
(1, 1)
informix#irk:/data/informix/IBM/OpenInformix/pyodbc#
The last select shows that the update has been rolled back as expected: the values in the two rows are the same as before the update (1,1).
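For reference, here is a minimal sketch (not part of the original post) of keeping all three statements in one transaction with pyodbc, using the same DSN and test tables as above; the try/except structure and the literal values are assumptions for illustration:

import pyodbc

# Sketch only: opening the connection with autocommit disabled means all
# statements join a single transaction until commit() or rollback() is called.
cnxn = pyodbc.connect('DSN=irk1210', autocommit=False)
cur = cnxn.cursor()
try:
    cur.execute("insert into table1(item11, item12) values (?,?)", 2, 2)
    cur.execute("update table2 set item11=?, item12=?", 2, 2)
    cur.execute("update table3 set item31=?, item32=? where unit=?", 2, 2, 1)
    if cur.rowcount == 0:
        # the last update matched no rows, treat that as a failure
        raise ValueError("update of table3 matched no rows")
    cnxn.commit()      # make all three changes permanent together
except Exception:
    cnxn.rollback()    # undo everything since the last commit
    raise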

Related

Error while deleting row from table color: sub-select returns 3 columns - expected 1

I am trying to write triggers so that whenever a change occurs in any table in the db, the change is recorded in a changes table with the columns (table_name, action, timestamp, old_data, new_data).
I am trying to run this script:
import sqlite3

def log_changes(db_path):
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    cursor.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name NOT IN ('changes', 'sqlite_sequence');")
    tables = cursor.fetchall()
    for table in tables:
        table_name = table[0]
        column_names = ', '.join([f"{col[1]} as '{col[0]}'" for col in cursor.execute(
            f"PRAGMA table_info({table_name})").fetchall()])
        # print(column_names)
        cursor.execute(f"DROP TRIGGER IF EXISTS log_{table_name}_insert")
        cursor.execute(f"DROP TRIGGER IF EXISTS log_{table_name}_update")
        cursor.execute(f"DROP TRIGGER IF EXISTS log_{table_name}_delete")
        # create trigger so that when a row is inserted, the row is logged in the changes table with the action 'insert', the current timestamp, and the inserted row (NEW) as the new data
        insert_trigger = f"CREATE TRIGGER IF NOT EXISTS log_{table_name}_insert AFTER INSERT ON {table_name} BEGIN INSERT INTO changes (table_name, action, timestamp, old_data, new_data) VALUES ('{table_name}', 'insert', datetime('now','localtime'), null, (SELECT {column_names} FROM {table_name} WHERE rowid = NEW.rowid)); END;"
        # print(insert_trigger)
        delete_trigger = f"CREATE TRIGGER IF NOT EXISTS log_{table_name}_delete AFTER DELETE ON {table_name} BEGIN INSERT INTO changes (table_name, action, timestamp, old_data, new_data) VALUES ('{table_name}', 'delete', datetime('now','localtime'), (SELECT {column_names} FROM {table_name} WHERE rowid = OLD.rowid), null); END;"
        # print(delete_trigger)
        update_trigger = f"CREATE TRIGGER IF NOT EXISTS log_{table_name}_update AFTER UPDATE ON {table_name} BEGIN INSERT INTO changes (table_name, action, timestamp, old_data, new_data) VALUES ('{table_name}', 'update', datetime('now','localtime'), (SELECT {column_names} FROM {table_name} WHERE rowid = OLD.rowid), (SELECT {column_names} FROM {table_name} WHERE rowid = NEW.rowid)); END;"
        # print(update_trigger)
        cursor.execute(insert_trigger)
        cursor.execute(delete_trigger)
        cursor.execute(update_trigger)
    conn.commit()
    conn.close()
The script runs, but the triggers do not work properly.
I want to insert the row data that has been modified into the new_data column, which is why I am using rowid. Are the queries correct?
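For reference only (this is not the asker's code): one way to make each sub-select return a single column is to pack the whole row into one value with SQLite's json_object() function (available when SQLite is built with the JSON functions, which is the default in current builds). A minimal sketch with a hypothetical items table:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER);
CREATE TABLE changes (table_name TEXT, action TEXT, timestamp TEXT,
                      old_data TEXT, new_data TEXT);
-- json_object() turns the inserted row into a single TEXT value,
-- so each VALUES slot receives exactly one column.
CREATE TRIGGER log_items_insert AFTER INSERT ON items BEGIN
    INSERT INTO changes (table_name, action, timestamp, old_data, new_data)
    VALUES ('items', 'insert', datetime('now','localtime'), NULL,
            json_object('id', NEW.id, 'name', NEW.name, 'qty', NEW.qty));
END;
""")
conn.execute("INSERT INTO items (name, qty) VALUES ('widget', 3)")
print(conn.execute("SELECT new_data FROM changes").fetchone()[0])
# e.g. {"id":1,"name":"widget","qty":3}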

How to insert data from R into Oracle table with identity column?

Assume I have a simple table in Oracle db
CREATE TABLE schema.d_test
(
id_record integer GENERATED AS IDENTITY START WITH 95000 NOT NULL,
DT DATE NOT NULL,
var varchar(50),
num float,
PRIMARY KEY (ID_RECORD)
)
And I have a dataframe in R
dt = c('2022-01-01', '2005-04-01', '2011-10-02')
var = c('sgdsg', 'hjhgjg', 'rurtur')
num = c(165, 1658.5, 8978.12354)
data = data.frame(dt, var, num)%>%
mutate(dt = as.Date(dt))
I'm trying to insert data into Oracle d_test table using the code
data %>%
dbWriteTable(
oracle_con,
value = .,
date = T,
'D_TEST',
append = T,
row.names=F,
overwrite = F
)
But the following error is returned:
Error in .oci.WriteTable(conn, name, value, row.names = row.names, overwrite = overwrite, :
Error in .oci.GetQuery(con, stmt, data = value) :
ORA-00947: not enough values
What's the problem?
How can I fix it?
Thank you.
This is pure Oracle (I don't know R).
Sample table:
SQL> create table test_so (id number generated always as identity not null, name varchar2(20));
Table created.
SQL> insert into test_so(name) values ('Name 1');
1 row created.
My initial idea was to suggest inserting any value into the ID column, hoping that Oracle would discard it and generate its own value. However, that won't work:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
insert into test_so (id, name) values (-100, 'Name 2')
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
But, if you can afford to recreate the table so that it doesn't generate the ID column's value automatically and instead uses a "workaround" we used anyway before identity columns arrived in Oracle, namely a sequence and a trigger, you might be able to "fix" it.
SQL> drop table test_so;
Table dropped.
SQL> create table test_so (id number not null, name varchar2(20));
Table created.
SQL> create sequence seq_so;
Sequence created.
SQL> create or replace trigger trg_bi_so
2 before insert on test_so
3 for each row
4 begin
5 :new.id := seq_so.nextval;
6 end;
7 /
Trigger created.
Inserting only name (Oracle will use a trigger to populate ID):
SQL> insert into test_so(name) values ('Name 1');
1 row created.
This is what you'll do in your code: provide a dummy ID value, just to avoid the
ORA-00947: not enough values
error you have now. The trigger will discard it and use the sequence anyway:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
1 row created.
SQL> select * from test_so;
ID NAME
---------- --------------------
1 Name 1
2 Name 2 --> this is a row which was supposed to have ID = -100
SQL>
The way you can handle this problem is to create the table with GENERATED BY DEFAULT ON NULL AS IDENTITY, like this:
CREATE TABLE CM_RISK.d_test
(
id_record integer GENERATED BY DEFAULT ON NULL AS IDENTITY START WITH 5000 NOT NULL ,
DT date NOT NULL,
var varchar(50),
num float,
PRIMARY KEY (ID_RECORD)
)

Updating single specified values from another table in SQLite

I have two SQLite tables A and B defined as:
CREATE TABLE A (orig_cat INTEGER, type INTEGER,gv_ID INTEGER);
INSERT INTO A (orig_cat,type) VALUES (1,1);
INSERT INTO A (orig_cat,type) VALUES (2,2);
INSERT INTO A (orig_cat,type) VALUES (3,2);
INSERT INTO A (orig_cat,type) VALUES (4,2);
INSERT INTO A (orig_cat,type) VALUES (1,3);
INSERT INTO A (orig_cat,type) VALUES (2,3);
INSERT INTO A (orig_cat,type) VALUES (3,3);
UPDATE A SET gv_ID=rowid+99;
and
CREATE TABLE B (col_t INTEGER, orig_cat INTEGER, part INTEGER);
INSERT INTO B VALUES (1,1,1);
INSERT INTO B VALUES (3,1,2);
INSERT INTO B VALUES (2,2,1);
INSERT INTO B VALUES (1,2,2);
INSERT INTO B VALUES (3,3,1);
INSERT INTO B VALUES (4,3,2);
I'd like to update/replace the values in column col_t of table B, where part=2, with selected values from column gv_ID of table A. I can get the selected values with a SELECT command:
SELECT gv_ID
FROM (SELECT * FROM B where part=2) AS B_sub
JOIN (SELECT * FROM A WHERE type=3) AS A_sub
ON B_sub.orig_cat=A_sub.orig_cat;
How can I use that so that the values of col_t in rows 2, 3 and 5 (=1,2,3) get replaced with the values 104, 105, 106 (which is what the selection returns)?
You can use a correlated subquery:
UPDATE B
SET col_t = (SELECT gv_ID FROM A WHERE A.orig_cat = B.orig_cat AND A.type = 3)
WHERE B."part" = 2;
SqlFiddleDemo
I've assumed that the pair (A.orig_cat, A.type) is UNIQUE.
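A quick way to verify the behaviour is to run the question's setup and the correlated-subquery UPDATE with Python's sqlite3 module (a sketch, not part of the original answer):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (orig_cat INTEGER, type INTEGER, gv_ID INTEGER);
INSERT INTO A (orig_cat, type) VALUES (1,1),(2,2),(3,2),(4,2),(1,3),(2,3),(3,3);
UPDATE A SET gv_ID = rowid + 99;
CREATE TABLE B (col_t INTEGER, orig_cat INTEGER, part INTEGER);
INSERT INTO B VALUES (1,1,1),(3,1,2),(2,2,1),(1,2,2),(3,3,1),(4,3,2);
-- the correlated subquery picks the matching type=3 row of A for each B row
UPDATE B
SET col_t = (SELECT gv_ID FROM A WHERE A.orig_cat = B.orig_cat AND A.type = 3)
WHERE B.part = 2;
""")
print(conn.execute("SELECT col_t FROM B WHERE part = 2").fetchall())
# -> [(104,), (105,), (106,)]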

SQLite 3.6.21: "Squeezing" out NULL values with a unique column value

Using SQLite 3.6.21, I would like to update a column in a table.
The goal is to "squeeze out" NULLs from a column if there is only 1 unique real value in that column. If LastName contained "Smith", "Johnson", and a NULL, then do nothing.
For example:
create table foo (FirstName char(20), LastName char(20));
insert into foo values ('Joe', 'Smith');
insert into foo values ('Susan', NULL);
insert into foo values ('Shirley', 'Smith');
insert into foo values ('Kevin', NULL);
Since there is only one last name, I want to replace the NULLs with Smith. I have tried this without success. It ends up replacing the whole column with NULLs.
UPDATE foo
SET LastName =
( CASE
WHEN ((select count(distinct LastName) from foo) = 1) THEN (SELECT distinct LastName from foo)
ELSE LastName
END
);
EDIT:
I'm executing this in Python using the following code:
import sqlite3 as lite
con = lite.connect('test.db')
names = (
('Joe', 'Smith'),
('Susan', None),
('Shirley', 'Smith'),
('Kevin', None),
)
squeezecmd = "UPDATE foo SET LastName = (CASE WHEN ((select count(distinct LastName) from foo) = 1) THEN (SELECT distinct LastName from foo) ELSE LastName END)"
with con:
    cur = con.cursor()
    cur.execute("CREATE TABLE foo(FirstName TEXT, LastName TEXT)")
    cur.executemany("INSERT INTO foo VALUES(?, ?)", names)
    cur.execute(squeezecmd)
    cur.execute("SELECT * FROM foo")
    rows = cur.fetchall()
    for row in rows:
        print row
Python orders the "SELECT distinct LastName from foo" result so that the NULL is the first value, whereas the SQL console provides "Smith" as the first value. To ignore the NULL I changed that line to
...THEN (SELECT distinct LastName from foo where LastName is NOT NULL)
EDIT:
Copy from SQL console:
sqlite>
sqlite> SELECT distinct LastName from foo;
Smith
sqlite>
Copy from Python:
with con:
    cur = con.cursor()
    cur.execute("SELECT distinct LastName from foo")
    answer = cur.fetchall()
    print answer
Results in
[(None,), (u'Smith',)]
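Putting the fix together, here is a short sqlite3 sketch (assumed, not from the original post) of the corrected statement applied to the sample data:

import sqlite3

# Sketch: the squeeze UPDATE with the "IS NOT NULL" fix from the edit above;
# the NULL last names are replaced with 'Smith' because it is the only real value.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (FirstName TEXT, LastName TEXT)")
con.executemany("INSERT INTO foo VALUES (?, ?)",
                [('Joe', 'Smith'), ('Susan', None),
                 ('Shirley', 'Smith'), ('Kevin', None)])
con.execute("""
UPDATE foo SET LastName = (
    CASE WHEN ((SELECT count(DISTINCT LastName) FROM foo) = 1)
         THEN (SELECT DISTINCT LastName FROM foo WHERE LastName IS NOT NULL)
         ELSE LastName
    END)
""")
print(con.execute("SELECT * FROM foo").fetchall())
# -> [('Joe', 'Smith'), ('Susan', 'Smith'), ('Shirley', 'Smith'), ('Kevin', 'Smith')]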

SQLite Insert and Replace with condition

I cannot figure out how to write this SQLite query.
What I need:
1) Replace the record (matched by primary key) if a condition on the new and old field values holds.
2) Insert the record if no entry with that primary key exists in the database.
Importantly, it has to be very fast!
I cannot come up with an efficient query.
Edit:
MyInsertRequest stands for the desired statement.
Script:
CREATE TABLE testtable (a INT PRIMARY KEY, b INT, c INT)
INSERT INTO testtable VALUES (1, 2, 3)
select * from testtable
1|2|3
-- Adds an entry, because the primary key is not in the table yet
++ MyInsertRequest VALUES (2, 2, 3) {if c>4 then replace}
select * from testtable
1|2|3
2|2|3
-- Adds
++ MyInsertRequest VALUES (3, 8, 3) {if c>4 then replace}
select * from testtable
1|2|3
2|2|3
3|8|3
-- Does nothing, because such a record (by primary key field 'a')
-- is already in the database and the condition c>4 does not hold
++ MyInsertRequest VALUES (1, 2, 3) {if c>4 then replace}
select * from testtable
1|2|3
2|2|3
3|8|3
-- Does nothing
++ MyInsertRequest VALUES (3, 34, 3) {if c>4 then replace}
select * from testtable
1|2|3
2|2|3
3|8|3
-- Replaces, because such a record (by primary key field 'a')
-- is in the database and c>2 holds
++ MyInsertRequest VALUES (3, 34, 1) {if c>2 then replace}
select * from testtable
1|2|3
2|2|3
3|34|1
Isn't INSERT OR REPLACE what you need? E.g.:
INSERT OR REPLACE INTO table (cola, colb) values (valuea, valueb)
When a UNIQUE constraint violation occurs, the REPLACE algorithm
deletes pre-existing rows that are causing the constraint violation
prior to inserting or updating the current row and the command
continues executing normally.
You have to put the condition in a unique constraint on the table. It will automatically create an index to make the check efficient.
e.g.
-- here the condition is on columnA, columnB
CREATE TABLE sometable (columnPK INT PRIMARY KEY,
columnA INT,
columnB INT,
columnC INT,
CONSTRAINT constname UNIQUE (columnA, columnB)
)
INSERT INTO sometable VALUES (1, 1, 1, 0);
INSERT INTO sometable VALUES (2, 1, 2, 0);
select * from sometable
1|1|1|0
2|1|2|0
-- insert a line with a new PK, but with existing values for (columnA, columnB)
-- the line with PK 2 will be replaced
INSERT OR REPLACE INTO sometable VALUES (12, 1, 2, 6)
select * from sometable
1|1|1|0
12|1|2|6
Assuming your requirements are:
Insert a new row when a doesn't exist;
Replace the row when a exists and the existing c is greater than the new c;
Do nothing when a exists and the existing c is less than or equal to the new c;
INSERT OR REPLACE fits the first two requirements.
For the last requirement, the only way I know to make an INSERT ineffective is to supply an empty rowset.
A SQLite command like the following would do the job:
INSERT OR REPLACE INTO sometable SELECT newdata.* FROM
(SELECT 3 AS a, 2 AS b, 1 AS c) AS newdata
LEFT JOIN sometable ON newdata.a=sometable.a
WHERE newdata.c<sometable.c OR sometable.a IS NULL;
The new data (3,2,1 in this example) is LEFT JOINed with the current table data.
The WHERE clause then "de-selects" the row when the new c is not less than the existing c, and keeps it when the row is new, i.e. when sometable.a IS NULL.
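As a sanity check (not part of the original answer), here is the same LEFT JOIN upsert run with Python's sqlite3 module against the question's testtable; the stored row (3, 8, 3) is replaced by (3, 34, 1) because the new c is smaller:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE testtable (a INT PRIMARY KEY, b INT, c INT);
INSERT INTO testtable VALUES (1,2,3),(2,2,3),(3,8,3);
""")
# The sub-select provides the candidate row; the LEFT JOIN / WHERE keeps it
# only when the key is new or the stored c is larger than the new c.
conn.execute("""
INSERT OR REPLACE INTO testtable
SELECT newdata.* FROM (SELECT ? AS a, ? AS b, ? AS c) AS newdata
LEFT JOIN testtable ON newdata.a = testtable.a
WHERE newdata.c < testtable.c OR testtable.a IS NULL
""", (3, 34, 1))
print(conn.execute("SELECT * FROM testtable ORDER BY a").fetchall())
# -> [(1, 2, 3), (2, 2, 3), (3, 34, 1)]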
I tried the other answers because I was also looking for a solution to this problem.
This should work, but I am unsure about the performance implications. I believe you may need the first column to be unique (a primary key), or else it will simply insert a new record each time.
INSERT OR REPLACE INTO sometable
SELECT columnA, columnB, columnC FROM (
SELECT columnA, columnB, columnC, 1 AS tmp FROM sometable
WHERE sometable.columnA = 1 AND
sometable.columnB > 9
UNION
SELECT 1 AS columnA, 1 As columnB, 404 as columnC, 0 AS tmp)
ORDER BY tmp DESC
LIMIT 1
In this case one dummy query is executed and UNIONed onto a second query, which has a performance impact depending on how it is written and how the table is indexed. The next potential performance problem is that the results are ordered and limited. However, I expect the second query to return only one record, so it should not be too much of a performance hit.
You can also omit the ORDER BY tmp DESC LIMIT 1 and it works with my version of SQLite, but it may impact performance since it can end up writing the record twice (the original value, then the new value, if applicable).
The other problem is that you end up with a write to the table even if the condition states that it should not be updated.
