Error binding parameter 0 - probably unsupported type - sqlite

I am creating an SQLite database and trying to iterate over an Excel file and put all the data into an SQL table, but I keep getting an annoying error. I have looked at the data types and still can't get my head around it. Please let me know if anyone spots what the problem is. My code is:
import sqlite3
from openpyxl import load_workbook
#wb = load_workbook(r"LeaguePlayers.xlsx")

#read workbook to get data
wb = load_workbook(filename = r"LeaguePlayers.xlsx", use_iterators = True)
ws = wb.get_sheet_by_name(name = 'Sheet1')
#ws = wb.worksheets

conn = sqlite3.connect("players.db") # or use :memory: to put it in RAM
cursor = conn.cursor()

# create a table
cursor.execute("""CREATE TABLE players
                  (player TEXT,
                   team TEXT,
                   points INTEGER,
                   cost REAL,
                   position TEXT)
               """)

#Iterate through worksheet and print cell contents
for row in ws.iter_rows():
    for cell in row:
        cursor.execute("INSERT INTO players VALUES (?,?,?,?,?)", row)

conn.commit()

#----------------------------------------
# display SQL data
#----------------------------------------
c.execute('SELECT * FROM players')
for row in c:
    print (row)
The error I get says:
cursor.execute("INSERT INTO players VALUES (?,?,?,?,?)", row)
sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.

I really think you need to do some kind of introduction to Python.
You are making two elementary mistakes: you loop over the cells in a row but pass the whole row to the query, and you pass complex Cell objects instead of native Python types such as integers or strings.
Something like the following is what you want:
player = [cell.value for cell in row]
cursor.execute(query, player)
Note that execute() takes a sequence (a tuple or list) as its second argument.
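Putting it together, the insert loop would look something like this (a minimal sketch assuming each worksheet row holds exactly the five columns of the players table):

query = "INSERT INTO players VALUES (?,?,?,?,?)"
for row in ws.iter_rows():
    player = [cell.value for cell in row]  # plain values, not Cell objects
    cursor.execute(query, player)          # one insert per row, not per cell
conn.commit()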

Related

Is it good to use LOOKUP in FOR EACH WHERE clause? - Progress 4GL

Is it good if we use the LOOKUP function in a FOR EACH WHERE clause? Will it cause a performance issue? Please help me understand, and provide an example of how to avoid it.
define variable cGroupID as character no-undo.

for each <table> no-lock where lookup(cGroupID, <table.fieldname>) <> 0:
    /* do something... */
end.
Note: the table field can contain multiple comma-separated groups.
A LOOKUP function cannot use an index, so yes, you can introduce sub-par performance that can be avoided. See the following example using the sports database: it reads all records when using LOOKUP, but limits the set to what meets the criteria when the parts are broken out into individual query terms:
def var ii as int no-undo.
def var iread as int64 no-undo.

function reads returns int64:
    find _file where _file._file-name = 'customer' no-lock.
    find _tablestat where _tablestat._tablestat-id = _file._file-number no-lock.
    return _tablestat._tablestat-read.
end function.

iread = reads().
for each customer
    where lookup( customer.salesrep, 'dkp,sls' ) > 0
    no-lock:
    ii = ii + 1.
end.
message 'lookup - records:' ii 'read:' reads() - iread.

ii = 0.
iread = reads().
for each customer
    where customer.salesrep = 'dkp'
       or customer.salesrep = 'sls'
    no-lock:
    ii = ii + 1.
end.
message 'or - records:' ii 'read:' reads() - iread.
https://abldojo.services.progress.com/?shareId=6272e1223fb02369b2545bf4
Your example, however, seems to be performing a reverse lookup, i.e. the database field contains a comma-separated list of values, which does not obey the basic rules of database normalization.
If you want to keep the list in a single field, adding a word index on the comma-separated field may help. You can then use CONTAINS.

Passing multiple parameters to an SQLite query in Racket

I'm trying to pass more than one variable to an sqlite query in Racket.
(define select-test
  (prepare dbconn "SELECT count(*) FROM All_data WHERE Season = ? AND Division = '?'"))

(define season 2019)
(define league "E0")

;(query-value dbconn select-test '(season league))
;(query-value dbconn select-test season league)
There are examples online showing both a list and separate variables as input to a query, but neither of these is working for me.
I either get this message for the list:
query-value: cannot convert given value to SQL type
  given: '(season league)
  type: parameter
  dialect: SQLite
or 'wrong number of parameters for query' if done separately.
Can someone please help with the correct syntax?
The problem is in your SQL. This statement:
SELECT count(*) FROM All_data WHERE Season = ? AND Division = '?'
has one parameter (for the Season comparison). The '?' is just a literal string containing a question mark. Just write ? instead, like this:
SELECT count(*) FROM All_data WHERE Season = ? AND Division = ?
Then call it with two arguments, like this:
(query-value dbconn select-test season league)
By the way, you can double-check the number of parameters the prepared statement has with
(length (prepared-statement-parameter-types select-test))
With SQLite, there isn't any information in the types themselves, but the length tells you the number of parameters.
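The same rule applies in any SQLite binding; for comparison, a minimal Python sqlite3 sketch (the table and data here are made up for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE All_data (Season INTEGER, Division TEXT)")
conn.execute("INSERT INTO All_data VALUES (2019, 'E0')")

# With Division = '?' the quoted '?' would be a literal string, leaving only one
# parameter slot, and binding two values raises sqlite3.ProgrammingError.
row = conn.execute(
    "SELECT count(*) FROM All_data WHERE Season = ? AND Division = ?",
    (2019, "E0"),
).fetchone()
print(row[0])  # prints 1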

Unable to write dataframe in R as Update-Statement to PostGIS/PostgreSQL

I have the following dataframe:
library(rpostgis)
library(RPostgreSQL)
library(glue)

df <- data.frame(elevation = c(450, 900),
                 id = c(1, 2))
Now I try to upload this to a table in my PostgreSQL/PostGIS database. My connection (dbConnect) works properly for SELECT statements. However, I tried two ways of updating a database table with this dataframe, and both failed.
First:
pgInsert(postgis, name = "fields", data.obj = df, overwrite = FALSE,
         partial.match = TRUE, row.names = FALSE, upsert.using = TRUE,
         df.geom = NULL)
2 out of 2 columns of the data frame match database table columns and will be formatted for database insert.
Error: x must be character or SQL
I do not know what the error is trying to tell me, as the values in both the dataframe and the table are set to integer.
Second:
sql <- glue_sql("UPDATE fields SET elevation = {df$elevation} WHERE id = {df$id};",
                .con = postgis)
> sql
<SQL> UPDATE fields SET elevation = 450 WHERE id = 1;
<SQL> UPDATE fields SET elevation = 900 WHERE id = 2;
dbSendStatement(postgis, sql)
<PostgreSQLResult>
In both cases no data is transferred to the database, and I do not see any error logs within the database.
Any hint on how to solve this problem?
It was a mistake on my side; I got glue_sql wrong. glue_sql returns one SQL statement per dataframe row, so to correctly update the database with every query it created you have to loop through the resulting object, as in the following example:
for (i in seq_along(sql)) {
    dbSendStatement(postgis, sql[i])
}
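For comparison, the same row-by-row update can be done with bound parameters instead of interpolated strings; a minimal Python sketch with psycopg2 (the connection details are placeholders, the fields table is from the question):

import psycopg2

# Placeholder connection details.
conn = psycopg2.connect(host="localhost", dbname="mydb", user="user", password="secret")
rows = [(450, 1), (900, 2)]  # (elevation, id) pairs, as in the dataframe

with conn, conn.cursor() as cur:
    # executemany runs the parameterized UPDATE once per tuple and the
    # connection context manager commits the transaction on success.
    cur.executemany("UPDATE fields SET elevation = %s WHERE id = %s", rows)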

Use R's mongolite to correctly (insert? update?) add data to existing collection

I have the following function written in R that (I think) is doing a poor job of updating my mongo database's collections.
library(mongolite)
con <- mongolite::mongo(collection = "mongo_collection_1", db = 'mydb', url = 'myurl')
myRdataframe1 <- con$find(query = '{}', fields = '{}')
rm(con)
con <- mongolite::mongo(collection = "mongo_collection_2", db = 'mydb', url = 'myurl')
myRdataframe2 <- con$find(query = '{}', fields = '{}')
rm(con)
... code to update my dataframes (rbind additional rows onto each of them) ...
# write dataframes to database
write.dfs.to.mongodb.collections <- function() {
    collections <- c("mongo_collection_1", "mongo_collection_2")
    my.dataframes <- c("myRdataframe1", "myRdataframe2")
    # loop dataframes, write collections
    for (i in 1:length(collections)) {
        # connect and add data to this collection
        con <- mongo(collection = collections[i], db = 'mydb', url = 'myurl')
        con$remove('{}')
        con$insert(get(my.dataframes[i]))
        con$count()
        rm(con)
    }
}
write.dfs.to.mongodb.collections()
My dataframes myRdataframe1 and myRdataframe2 are very large dataframes, currently ~100K rows and ~50 columns. Each time my script runs, it:
uses con$find('{}') to pull the mongodb collection into R, saved as a dataframe myRdataframe1
scrapes new data from a data provider that gets appended as new rows to myRdataframe1
uses con$remove() and con$insert to fully remove the data in the mongodb collection, and then re-insert the entire myRdataframe1
This last bullet point is iffy, because I run this R script daily in a cronjob, and I don't like that each time I am entirely wiping the mongo collection and re-inserting the R dataframe into it.
If I remove the con$remove() line, I receive an error that states I have duplicate _id keys. It appears I cannot simply append using con$insert().
Any thoughts on this are greatly appreciated!
When you attempt to insert documents into MongoDB that already exist in the database as per their primary key, you will get the duplicate key exception. To work around that, you can simply unset the _id column before the con$insert, using something like this (note that my.dataframes holds names, so the dataframe has to be fetched with get):
df.to.insert <- get(my.dataframes[i])
df.to.insert$`_id` <- NULL
This way, the newly inserted documents will automatically get a new _id assigned.
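For comparison, a minimal Python sketch of the same idea with pymongo (the URL, database, and collection names are placeholders):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URL
coll = client["mydb"]["mongo_collection_1"]

doc = {"_id": 1, "player": "example", "points": 10}  # illustrative document
doc.pop("_id", None)   # drop the old _id so MongoDB assigns a fresh one
coll.insert_one(doc)   # no duplicate-key error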
You can use an upsert, which matches a document on the given condition: if found, it updates it; if not, it inserts a new one.
First you need to separate the _id from the data (R identifiers cannot start with an underscore, hence the backticks):
ids <- get(my.dataframes[i])$`_id`
updateData <- get(my.dataframes[i])
updateData$`_id` <- NULL
Then use an upsert for each row (there might be an easier way to build the JSON strings in R):
for (j in seq_along(ids)) {
    con$update(paste0('{"_id":"', ids[j], '"}'),
               paste0('{"$set":', jsonlite::toJSON(as.list(updateData[j, ]), auto_unbox = TRUE), '}'),
               upsert = TRUE)
}
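For comparison, a minimal upsert sketch in Python with pymongo (names and data are placeholders):

from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")  # placeholder URL
coll = client["mydb"]["mongo_collection_1"]

docs = [{"_id": 1, "points": 12}, {"_id": 2, "points": 7}]  # illustrative rows
ops = [
    UpdateOne({"_id": d["_id"]},                                 # match on _id
              {"$set": {k: v for k, v in d.items() if k != "_id"}},
              upsert=True)                                       # insert if no match
    for d in docs
]
coll.bulk_write(ops)  # updates existing documents, inserts missing ones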

Using EntityFramework with a database foreign key

Actually I spent the whole day on Entity Framework foreign keys.
Assume we have two tables:
Process(app_id,process_id)
LookupProcessId(process_id, process_description)
You can understand the two tables from their names: the first table uses process_id to identify every application, and the description is in the second table.
Actually I tried many times and figured out how to do the query; it was like:
Dim result = (From x In db.Processes
              Where x.LookupProcess Is (From m In db.LookupProcessIds
                                        Where descr = "example"
                                        Select m).FirstOrDefault()
              Select x).FirstOrDefault()
First I want to ask: is there an easier way to do it?
Second, I want to ask about the insert:
Dim p As New AmpApplication.CUEngData.Process
p.app_id = 100
p.LookupProcess = (From m In db.LookupProcessIds
                   Where descr = "example"
                   Select m).FirstOrDefault()
db.AddToProcesses(p)
db.SaveChanges()
From its appearance it looks fine, but it gives me an error that says:
Entities in 'AmpCUEngEntities.Processes' participate in the 'FK_Process_LookupProcess' relationship. 0 related 'LookupProcess' were found. 1 'LookupProcess' is expected.
Can I ask: is that insert wrong? And is my query correct?
For your first question:
Dim result = (From x In db.Processes
              Where x.LookupProcess.descr = "example"
              Select x).FirstOrDefault()
Actually, you missed some concepts of the Entity Data Model and its framework. To manipulate data, you have to work with objects through the context, which lets you tell the ObjectStateManager the state of a data object. In your case, if you have data depending on a foreign key, you will have to add/update any linked data from leaf to root.
This example demonstrates simple (no dependencies) data manipulation: a select if existing, and an insert or update.
If you want more info about ObjectStateManager manipulation, go to http://msdn.microsoft.com/en-us/library/bb156104.aspx
Dim context As New Processing_context 'design your context (this one is linked to a DB)
Dim pro = (From r In context.PROCESS
           Where r.LOOKUPPROCESS.descr = LookupProcess.descr
           Select r).FirstOrDefault()

If pro Is Nothing Then 'add a new one
    pro = New PROCESS With {.AP_ID = "id", .PROCESS_ID = "p_id"}
    context.PROCESS.Attach(pro)
    context.ObjectStateManager.ChangeObjectState(pro, System.Data.EntityState.Added)
Else
    'update data attributes
    pro.AP_ID = "id"
    pro.PROCESS_ID = "p_id"
    context.ObjectStateManager.ChangeObjectState(pro, System.Data.EntityState.Modified)
    'context.PROCESS.Attach(pro)
End If

context.SaveChanges()
I hope this will help. Have a nice day!
For your first question, to expand on what @jeroenh suggested:
Dim result = (From x In db.Processes.Include("LookupProcess")
              Where x.LookupProcess.descr = "example"
              Select x).FirstOrDefault()
The addition of the Include call will hydrate the LookupProcess entities so that you can query them. Without the Include, x.LookupProcess will be null, which would likely explain why you got the error you did.
If using the literal string as an argument to Include is not ideal, see Returning from a DbSet 3 tables without the error "cannot be inferred from the query" for an example of doing this using nested entities.
For your second question, this line
p.LookupProcess = (From m In db.LookupProcessIds
                   Where descr = "example"
                   Select m).FirstOrDefault()
could cause you problems later on, because if there is no LookupProcessId with a process_description of "example", you are going to get null. From MSDN:
The default value for reference and nullable types is null.
Because of this, if p.LookupProcess is null when you insert the entity, you will get the exception:
Entities in 'AmpCUEngEntities.Processes' participate in the 'FK_Process_LookupProcess' relationship. 0 related 'LookupProcess' were found. 1 'LookupProcess' is expected.
To avoid this kind of problem, you will need to check that p.LookupProcess is not null before it goes in the database.
If Not p.LookupProcess Is Nothing Then
    db.AddToProcesses(p)
    db.SaveChanges()
End If
