I am working with web2py and an SQLite DB on Ubuntu. In web2py, a user input posts an item such as 'Hello World' into the SQLite DB. In the default controller, the item is posted into ThisDb as follows:
import sqlite3

consult = db.consult(id) or redirect(URL('index'))
form1 = [consult.body]
form5 = form1  # .split()
name3 = ' '.join(form5)
conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
c.execute("INSERT INTO INPUT (NAME) VALUES (?);", (name3,))
conn.commit()
Another piece of code reads (or should read) the item from ThisDb, in this case 'Hello World', as follows:
location = ""
conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
c.execute('select * from input')
c.execute("select MAX(rowid) from [input];")
for rowid in c:break
for elem in rowid:
m = elem
c.execute("SELECT * FROM input WHERE rowid = ?", (m,))
for row in c:break
location = row[1]
name = location.lower().split()
My DB schema for the table 'input', from which 'Hello World' should be read, is this:
CREATE TABLE `INPUT` (
`NAME` TEXT
);
This code previously worked well on Windows 7 and 10, but I am having this problem on Ubuntu 16.04, and I keep getting this error:
File "applications/britamintell/modules/xxxxxx/define/yyyy0.py", line 20, in xxxdefinition
location = row[1]
IndexError: tuple index out of range
row[0] is the value in the first column; row[1] is the value in the second column.
Apparently, your previous database had more than one column. The INPUT table you show has only the NAME column, so each fetched row is a 1-tuple and the only valid index is row[0].
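In other words, with the one-column table the read boils down to this (a minimal sketch; the MAX(rowid) subquery replaces the two loops in your code):

```python
import sqlite3

conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
# Read the most recently inserted row in a single statement
c.execute("SELECT NAME FROM input WHERE rowid = (SELECT MAX(rowid) FROM input)")
row = c.fetchone()
location = row[0]  # index 0: a one-column table yields 1-tuples
name = location.lower().split()
```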
I'm running a tdload command using a job variables file with these values:
SelectStmt = 'select * from database.tablename where column1 > 100',
SourceTdpid = 'hostid',
SourceUserName = 'username',
SourceUserPassword = 'password',
SourceTable = 'database.tablename',
FileWriterFileSizeMax = '10M',
TargetTextDelimiter = '|',
TargetFilename = 'file.csv',
FileWriterQuotedData = 'Y'
The filter clause in the select statement should return only 39 rows,
but I'm getting all of the rows from the table in the extracted file.
How to resolve this?
I had to use ExportSelectStmt instead of SelectStmt.
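The fix is only a key rename in the job variables file above, i.e.:

ExportSelectStmt = 'select * from database.tablename where column1 > 100',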
Assume I have a simple table in an Oracle DB:
CREATE TABLE schema.d_test
(
id_record integer GENERATED AS IDENTITY START WITH 95000 NOT NULL,
DT DATE NOT NULL,
var varchar(50),
num float,
PRIMARY KEY (ID_RECORD)
)
And I have a dataframe in R:

library(dplyr)  # for %>% and mutate()

dt = c('2022-01-01', '2005-04-01', '2011-10-02')
var = c('sgdsg', 'hjhgjg', 'rurtur')
num = c(165, 1658.5, 8978.12354)
data = data.frame(dt, var, num) %>%
  mutate(dt = as.Date(dt))
I'm trying to insert data into Oracle d_test table using the code
data %>%
dbWriteTable(
oracle_con,
value = .,
date = T,
'D_TEST',
append = T,
row.names=F,
overwrite = F
)
But the following error was returned:
Error in .oci.WriteTable(conn, name, value, row.names = row.names, overwrite = overwrite, :
Error in .oci.GetQuery(con, stmt, data = value) :
ORA-00947: not enough values
What's the problem?
How can I fix it?
Thank you.
This is pure Oracle (I don't know R).
Sample table:
SQL> create table test_so (id number generated always as identity not null, name varchar2(20));
Table created.
SQL> insert into test_so(name) values ('Name 1');
1 row created.
My initial idea was to suggest that you insert any value into the ID column, hoping that Oracle would discard it and generate its own value. However, that won't work:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
insert into test_so (id, name) values (-100, 'Name 2')
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column
But if you can afford to recreate the table so that it doesn't generate the ID column's value automatically, you might be able to "fix" it with the "workaround" we used before identity columns (which are relatively new in Oracle) existed: a sequence and a trigger.
SQL> drop table test_so;
Table dropped.
SQL> create table test_so (id number not null, name varchar2(20));
Table created.
SQL> create sequence seq_so;
Sequence created.
SQL> create or replace trigger trg_bi_so
2 before insert on test_so
3 for each row
4 begin
5 :new.id := seq_so.nextval;
6 end;
7 /
Trigger created.
Inserting only the name (Oracle will use the trigger to populate ID):
SQL> insert into test_so(name) values ('Name 1');
1 row created.
This is what you'll do in your code: provide a dummy ID value, just to avoid the
ORA-00947: not enough values
error you have now. The trigger will discard it and use the sequence anyway:
SQL> insert into test_so (id, name) values (-100, 'Name 2');
1 row created.
SQL> select * from test_so;
        ID NAME
---------- --------------------
         1 Name 1
         2 Name 2              --> this row was supposed to have ID = -100
SQL>
The way you can handle this problem is to create the table with GENERATED BY DEFAULT ON NULL AS IDENTITY, like this:
CREATE TABLE CM_RISK.d_test
(
id_record integer GENERATED BY DEFAULT ON NULL AS IDENTITY START WITH 5000 NOT NULL ,
DT date NOT NULL,
var varchar(50),
num float,
PRIMARY KEY (ID_RECORD)
)
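With the identity column defined as BY DEFAULT ON NULL, Oracle generates id_record whenever an insert supplies NULL for it, so the dataframe can presumably carry a dummy fourth id_record column of NA values (the same dummy-value idea as above, but without a sequence and trigger) and the INSERT no longer comes up short on values.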
I am trying to load data into Snowflake using the following code, but I am getting an error.
con <- DBI::dbConnect(
drv = odbc::odbc(),
driver = "SnowflakeDSIIDriver",
server = "<>",
authenticator = 'externalbrowser',
warehouse = "<>",
database = "<>",
UID = "<>",
role = "<>"
)
DBI::dbAppendTable(con, name = DBI::Id(schema = "<>", table = "<>"), value = tmp[1:2,])
tmp was downloaded from Snowflake, from the same table, using RStudio:
```{sql connection=con, output.var = 'tmp'}
select top 10 *
FROM <>
```
The error seems to be stemming from a VARIANT column where I store a JSON string.
Error in new_result(connection@ptr, statement, immediate) :
nanodbc/nanodbc.cpp:1374: 22000: SQL compilation error:
Expression type does not match column data type, expecting VARIANT but got VARCHAR(2) for column FEATURES
I had this once and it was invalid JSON (missing brackets somewhere). Perhaps this helps.
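One way to locate such a row before appending is to run every FEATURES string through a JSON parser. A minimal Python sketch, assuming the strings have been pulled out of the dataframe into a list (json.loads raises on any malformed document):

```python
import json

# Hypothetical sample of FEATURES strings; the second one is missing
# its closing bracket
features = ['{"a": 1}', '{"a": 1']

for i, s in enumerate(features):
    try:
        json.loads(s)  # raises ValueError (JSONDecodeError) on invalid JSON
    except ValueError as exc:
        print(f"row {i}: invalid JSON: {exc}")
```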
I'm using pyodbc to query a SQL DB multiple times based on the values of a pandas dataframe column (seen below as a list of values, since I used the tolist() function to turn the column into a list).
import pyodbc

# the connection string
cnxn = pyodbc.connect(driver='{SQL Server}', server='NameOfTheServer',
                      autocommit=True, uid='name', pwd='password')
cursor = cnxn.cursor()
params = ['3122145', '523532236']
sql = ("""
SELECT SO.column
FROM table AS SO
WHERE SO.column = ?
""")
cursor.executemany(sql, params)
row = cursor.fetchone()
The executemany function throws an error even though I'm using a list:
TypeError: ('Params must be in a list, tuple, or Row', 'HY000')
.executemany is not intended to be used with SELECT statements. If we try, we only get the last row back:
import pyodbc

cnxn = pyodbc.connect(connection_string, autocommit=True)  # connection_string defined elsewhere
crsr = cnxn.cursor()
crsr.execute("CREATE TABLE #tmp (id int primary key, txt varchar(10))")
crsr.execute(
    "INSERT INTO #tmp (id,txt) "
    "VALUES (1,'one'),(2,'two'),(3,'three'),(4,'four'),(5,'five')"
)
print(crsr.execute("SELECT * FROM #tmp").fetchall())
"""console output:
[(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four'), (5, 'five')]
"""
sql = "SELECT * FROM #tmp WHERE id = ?"
list_of_tuples = [(1,), (3,), (5,)]
crsr.executemany(sql, list_of_tuples)
print(crsr.fetchall())
"""console output:
[(5, 'five')]
"""
try:
    crsr.nextset()
    print(crsr.fetchall())
except pyodbc.ProgrammingError as pe:
    print(pe)
"""console output:
No results. Previous SQL was not a query.
"""
Instead, we need to build a string of parameter placeholders and use it in an IN clause, like this:
tuple_of_scalars = tuple(x[0] for x in list_of_tuples)
sql = f"SELECT * FROM #tmp WHERE id IN ({','.join('?' * len(tuple_of_scalars))})"
print(sql)
"""console output:
SELECT * FROM #tmp WHERE id IN (?,?,?)
"""
crsr.execute(sql, tuple_of_scalars)
print(crsr.fetchall())
"""console output:
[(1, 'one'), (3, 'three'), (5, 'five')]
"""
I would like to repeat this command as long as there is still sometext in the field note (several rows of the table itemNotes could have one or more occurrences of sometext in the field note):
UPDATE itemNotes
SET
    note = SUBSTR(note, 0, INSTR(LOWER(note), 'sometext')) || 'abc' || SUBSTR(note, INSTR(LOWER(note), 'sometext') + sometext_len)
WHERE
    INSTR(LOWER(note), 'sometext') > 0;

(Here sometext_len stands for the length of 'sometext'; INSTR returns 0 when the substring is not found, hence the > 0 test.)
So a proto-code would be:

While (SELECT COUNT(*) FROM itemNotes WHERE note LIKE '%sometext%') > 0
    UPDATE itemNotes
    SET
        note = SUBSTR(note, 0, INSTR(LOWER(note), 'sometext')) || 'abc' || SUBSTR(note, INSTR(LOWER(note), 'sometext') + sometext_len)
    WHERE
        INSTR(LOWER(note), 'sometext') > 0;
END
But apparently SQLite doesn't support WHILE or FOR loops. They can be emulated with a recursive CTE like the one below, but I have difficulties integrating what I want into this query:
WITH b(x, y) AS
(
    SELECT 1, 2
    UNION ALL
    SELECT x + 1, y + 1
    FROM b
    WHERE x < 20
) SELECT * FROM b;
Any idea how to do this?
PS: I don't use replace() because I want to replace all the case combinations of sometext (e.g. sometext, SOMEtext, SOmeText...); cf. this question.
Current input and desired output:
For a single row, a note field could look like this (and many rows in the table itemNotes could look like this one):
There is SOmetext and also somETExt and more SOMETEXT and even more sometext
The query should output:
There is abc and also abc and more abc and even more abc
I am doing this on zotero.sqlite, which is created by this file (line 85). The table is created by this query:
CREATE TABLE itemNotes (
    itemID INTEGER PRIMARY KEY,
    parentItemID INT,
    note TEXT,
    title TEXT,
    FOREIGN KEY (itemID) REFERENCES items(itemID) ON DELETE CASCADE,
    FOREIGN KEY (parentItemID) REFERENCES items(itemID) ON DELETE CASCADE
);
You just have your answer in your query:
UPDATE itemNotes
SET
    note = SUBSTR(note, 0, INSTR(LOWER(note), 'sometext')) || 'abc' || SUBSTR(note, INSTR(LOWER(note), 'sometext') + sometext_len)
WHERE
    note LIKE '%sometext%';
It will update all rows that contain sometext in the note field. Note that INSTR locates only the first occurrence, so each execution replaces one occurrence per row; the statement still has to be re-run until no row matches.
UPDATE
If you want to update a field that has multiple occurrences in different cases and keep the rest of the text, the simplest solution, in my opinion, is a regex, and for that you need an extension:
UPDATE itemNotes
SET
    note = regex_replace('\bsometext\b', note, 'abc')
WHERE
    note LIKE '%sometext%';
As recommended by Stephan in his last comment, I used Python to do this. Here is my code:
import re
import sqlite3

keyword = "sometext"
replacement = "abc"

db = sqlite3.connect(path_to_sqlite)  # path_to_sqlite: path to zotero.sqlite
cursor = db.cursor()

# Case-insensitive regex that matches the keyword literally
row_regex = re.compile(re.escape(keyword), re.IGNORECASE)

# LIKE is case-insensitive for ASCII in SQLite, so this finds every candidate row
cursor.execute("SELECT itemID, note FROM itemNotes WHERE note LIKE ?",
               (f"%{keyword}%",))
for rowID, note in cursor.fetchall():
    # UPDATE rather than REPLACE INTO: REPLACE would delete the row and
    # re-insert it, wiping the columns not listed (parentItemID, title)
    cursor.execute("UPDATE itemNotes SET note = ? WHERE itemID = ?",
                   (row_regex.sub(replacement, note), rowID))
db.commit()