pyodbc.ProgrammingError: ('HY000', 'The SQL contains 0 parameter markers, ...') - pyodbc

Here is a connection to Oracle (this part works fine):
import pypyodbc as pyodbc
cnxn = pyodbc.connect("DSN=ORCL;PWD=user1")
cursor = cnxn.cursor()
cursor.execute("select * from t_v01")
s = 'sss'
f = 'fff'
cursor.execute("select * from t_v01 where t_id = ? ", [s,])
# works fine (above)
cursor.execute("insert into t_v01 (t_id,t_type) values (?,?)", [(s,f,),])
The last line gives the mentioned error: pyodbc.ProgrammingError: ('HY000', 'The SQL contains 0 parameter markers,..
But in the previous step Python saw the parameter marker "?". Why?
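A likely explanation, for what it's worth: in the DB-API style that pypyodbc follows, execute() expects a flat sequence with one value per ? marker, while a list of tuples is the shape executemany() expects. Wrapping (s, f) in a list makes the driver see a single parameter, so the count no longer matches the statement. A minimal sketch of both calls:
# execute(): one row, flat sequence, one value per "?" marker
cursor.execute("insert into t_v01 (t_id,t_type) values (?,?)", (s, f))
# executemany(): many rows, a sequence of parameter tuples
cursor.executemany("insert into t_v01 (t_id,t_type) values (?,?)", [(s, f)])
cnxn.commit()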

convert variable to text or string in R

I read data from an Excel file to use in PostgreSQL. I can connect to the database and need to query it using variables from the Excel file; this part is working fine.
I declare a variable from the Excel file, say "school", which I need to use dynamically to query the database. Below is my query:
sid <- '500'
my_school <- RPostgreSQL::dbGetQuery(conn = con, statement = "SELECT * from school where school_id = sid ")
It works if I use "500" instead of sid, but I need to use a dynamic variable.
The error I get is:
Error: Failed to prepare query: ERROR: column "sid" does not exist
LINE 1: SELECT * from school where school_id = sid
^
HINT: Perhaps you meant to reference the column "school.db_sid".
Could anyone look at this? Thanks.
Use sprintf:
sprintf("SELECT * from school where school_id = %s", sid)
#[1] "SELECT * from school where school_id = 500"
Add quotation marks if appropriate:
sprintf("SELECT * from school where school_id = '%s'", sid)
#[1] "SELECT * from school where school_id = '500'"

Write to Snowflake VARIANT column from R

I am trying to load data into Snowflake using the following code, but I am getting an error.
con <- DBI::dbConnect(
  drv = odbc::odbc(),
  driver = "SnowflakeDSIIDriver",
  server = "<>",
  authenticator = 'externalbrowser',
  warehouse = "<>",
  database = "<>",
  UID = "<>",
  role = "<>"
)
DBI::dbAppendTable(con, name = DBI::Id(schema = "<>", table = "<>"), value = tmp[1:2, ])
tmp was downloaded earlier from Snowflake, from the same table, using an RStudio SQL chunk:
```{sql connection=con, output.var = 'tmp'}
select top 10 *
FROM <>
```
The error seems to be stemming from a VARIANT column where I store a JSON string.
Error in new_result(connection@ptr, statement, immediate) :
nanodbc/nanodbc.cpp:1374: 22000: SQL compilation error:
Expression type does not match column data type, expecting VARIANT but got VARCHAR(2) for column FEATURES
I had this once, and it turned out to be invalid JSON (missing brackets somewhere). Perhaps that helps.
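If the JSON is valid and the binding itself is the problem, a common workaround is to append to a staging table whose column is plain VARCHAR and convert server-side with Snowflake's PARSE_JSON; the staging table name below is hypothetical. A sketch:
# Append the rows to a staging table whose FEATURES column is VARCHAR
DBI::dbAppendTable(con, name = DBI::Id(schema = "<>", table = "FEATURES_STG"),
                   value = tmp[1:2, ])
# Convert to VARIANT on the server; PARSE_JSON works in INSERT ... SELECT
DBI::dbExecute(con, "INSERT INTO <> (FEATURES) SELECT PARSE_JSON(FEATURES) FROM FEATURES_STG")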

RPostgreSQL and DBI: "operator does not exist: uuid = text"

When using dbReadTable to read in database tables that use UUID as the primary key, I get the following warning message:
1: In postgresqlExecStatement(conn, statement, ...) :
RS-DBI driver warning: (unrecognized PostgreSQL field type uuid (id:2950) in column 0)
When I modify the table I loaded and try to update the database, I get the following error message:
Error in postgresqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not Retrieve the result : ERROR: operator does not exist: uuid = text
I get that the UUID type is not available in R, but is there a way to make the database treat the character vector "unique_id" as UUID instead of text?
Code:
library(RPostgreSQL)
library(postGIStools)
pgdrv <- dbDriver(drvName = "PostgreSQL")
# === open connection
db <- DBI::dbConnect(pgdrv,
                     dbname = "database",
                     host = "localhost", port = 5432,
                     user = 'postgres')
# === get tables
users <- dbReadTable(db, "app_users")
# === interaction with tables
users$employee_has_quit[1:5] <- TRUE
# === update tables
postgis_update(conn = db,
               df = users,
               tbl = "app_users",
               id_cols = "unique_id",
               update_cols = "employee_has_quit")
# === close connection
DBI::dbDisconnect(db)
The problem is a bug in postGIStools. You can see the code they're using to generate this error here:
query_text <- paste(query_text, ") AS", tbl_tmp, "(",
paste(quote_id(colnames(df)), collapse = ", "), ")",
"WHERE", paste(paste0(tbl_q, ".", id_q), "=",
paste0(tbl_tmp, ".", id_q),
collapse = " AND "))
Simply put, that won't work. They should be using placeholders. It assumes that the input type can be the result of make_str_quote (by proxy of df_q and quote_str). That's a faulty assumption, as seen here:
CREATE TABLE foo ( a uuid );
INSERT INTO foo VALUES ( quote_literal(gen_random_uuid()) ) ;
ERROR: column "a" is of type uuid but expression is of type text
LINE 1: INSERT INTO foo VALUES ( quote_literal(gen_random_uuid()) ) ...
^
HINT: You will need to rewrite or cast the expression.
My suggestion is that you follow the docs:
Note: This package is deprecated. For new projects, we recommend using the sf package to interface with geodatabases.
You may be able to work around this by doing the following:
CREATE CAST (varchar AS uuid)
WITH INOUT
AS IMPLICIT;
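Alternatively, if you'd rather not install an implicit cast database-wide, you can write the update by hand and cast the parameter in the SQL itself. A sketch, assuming the RPostgres driver in place of RPostgreSQL (for $1 placeholder support) and the same app_users table:
library(DBI)
con <- dbConnect(RPostgres::Postgres(), dbname = "database",
                 host = "localhost", port = 5432, user = "postgres")
# The explicit ::uuid cast avoids "operator does not exist: uuid = text"
dbExecute(con,
          "UPDATE app_users SET employee_has_quit = $1 WHERE unique_id = $2::uuid",
          params = list(TRUE, users$unique_id[1]))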

How do I solve an SQLite DB IndexError

I'm working with web2py and an SQLite DB on Ubuntu. In web2py, a user input posts an item such as 'Hello World' into the SQLite DB.
In the default controller, the item is posted into ThisDb as follows:
consult = db.consult(id) or redirect(URL('index'))
form1 = [consult.body]
form5 = form1#.split()
name3 = ' '.join(form5)
conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
conn.execute("INSERT INTO INPUT (NAME) VALUES (?);", (name3,))
conn.commit()
Another piece of code reads (or should read) the item from ThisDb, in this case 'Hello World':
location = ""
conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
c.execute('select * from input')
c.execute("select MAX(rowid) from [input];")
for rowid in c:break
for elem in rowid:
m = elem
c.execute("SELECT * FROM input WHERE rowid = ?", (m,))
for row in c:break
location = row[1]
name = location.lower().split()
My DB configuration for the table 'input', where 'Hello World' should be read from, is this:
CREATE TABLE `INPUT` (
`NAME` TEXT
);
This code previously worked well on Windows 7 and 10, but I'm having this problem on Ubuntu 16.04. I keep getting this error:
File "applications/britamintell/modules/xxxxxx/define/yyyy0.py", line 20, in xxxdefinition
location = row[1]
IndexError: tuple index out of range
row[0] is the value in the first column.
row[1] is the value in the second column.
Apparently, your previous database had more than one column.
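Since the CREATE TABLE above defines only the single NAME column, the value lands at index 0. A sketch of the read step with that fix, folding the MAX(rowid) lookup into one query:
import sqlite3
conn = sqlite3.connect("ThisDb.db")
c = conn.cursor()
# Fetch the most recently inserted row; the table has one column, NAME
c.execute("SELECT NAME FROM input WHERE rowid = (SELECT MAX(rowid) FROM input)")
row = c.fetchone()
location = row[0]  # row[1] raises IndexError: there is no second column
name = location.lower().split()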

Parameters and NULL

I'm having trouble passing NULL as an INSERT query parameter using RPostgres and RPostgreSQL:
In PostgreSQL:
create table foo (ival int, tval text, bval bytea);
In R:
This works:
res <- dbSendQuery(con, "INSERT INTO foo VALUES($1, $2, $3)",
                   params = list(ival = 1,
                                 tval = 'not quite null',
                                 bval = charToRaw('asdf')))
But this throws an error:
res <- dbSendQuery(con, "INSERT INTO foo VALUES($1, $2, $3)",
                   params = list(ival = NULL,
                                 tval = 'not quite null',
                                 bval = charToRaw('asdf')))
Using RPostgres, the error message is:
Error: expecting a string
Under RPostgreSQL, the error is:
Error in postgresqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not Retrieve the result : ERROR: invalid input
syntax for integer: "NULL"
)
Substituting NA would be fine with me, but it isn't a work-around - a literal 'NA' gets written to the database.
Using e.g. integer(0) gives the same "expecting a string" message.
You can use NULLIF directly in your insert statement:
res <- dbSendQuery(con, "INSERT INTO foo VALUES(NULLIF($1, 'NULL')::integer, $2, $3)",
                   params = list(ival = NULL,
                                 tval = 'not quite null',
                                 bval = charToRaw('asdf')))
It works with NA as well.
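For what it's worth, newer versions of RPostgres translate a typed NA to SQL NULL directly, so the NULLIF workaround may be unnecessary there. A sketch, with the driver version as the assumption (note the bytea parameter is passed as a list of raw vectors):
res <- dbSendQuery(con, "INSERT INTO foo VALUES($1, $2, $3)",
                   params = list(NA_integer_,               # typed NA -> SQL NULL
                                 'not quite null',
                                 list(charToRaw('asdf'))))  # bytea as list of raws
dbClearResult(res)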
One option here, to work around not knowing how to express a NULL value in R that the PostgreSQL package can translate successfully, is simply not to specify the column whose value you want to be NULL.
So in your example you could use this:
res <- dbSendQuery(con, "INSERT INTO foo (tval, bval) VALUES($1, $2)",
                   params = list(tval = 'not quite null',
                                 bval = charToRaw('asdf')))
when you want ival to have a NULL value. This of course assumes that ival in your table is nullable, which may not be the case.
Thanks, all, for the help. Tim's answer is a good one, and I used it to catch the integer values. I went a different route for the rest, writing a function in PostgreSQL to handle most of it. It looks roughly like this:
CREATE OR REPLACE FUNCTION add_stuff(ii integer, tt text, bb bytea)
RETURNS integer
AS
$$
DECLARE
    bb_comp bytea;
    rows integer;
BEGIN
    bb_comp = convert_to('NA', 'UTF8');  -- my database is in UTF8.
    -- front-end catches ii is NA; RPostgres blows up
    -- trying to convert 'NA' to integer.
    tt = nullif(tt, 'NA');
    bb = nullif(bb, bb_comp);
    INSERT INTO foo VALUES (ii, tt, bb);
    GET DIAGNOSTICS rows = ROW_COUNT;
    RETURN rows;
END;
$$
LANGUAGE plpgsql VOLATILE;
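For completeness, a sketch of calling this from R, with 'NA' standing in for the missing values (the function converts them to NULL server-side; the list-of-raw form for bytea is an assumption about the driver):
rows <- DBI::dbGetQuery(con, "SELECT add_stuff($1, $2, $3) AS n",
                        params = list(1L, 'NA', list(charToRaw('NA'))))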
Now to have a look at the RPostgres source and see if there's an easy enough way to make it handle NULL / NA a bit more gracefully. Hoping it's missing because nobody thought of it, not because it's super-tricky. :)
This will give the "wrong" answer if someone is trying to put a literal 'NA' into the database and means something other than NULL / NA (e.g. NA = "North America"); given our use case, that seems very unlikely. We'll see in six months' time.
