How to access columns with a blank space in the column name with pyodbc

I have a database (Navision) with tons of tables and columns containing blank spaces.
How can I access them the pyodbc way (like row.column)?
cursor.execute("select [album id], photo_id from [my photos] where user_id=1")
row = cursor.fetchone()
This works:
print(row.photo_id)
print(row[0])
These, of course, don't work:
print(row.[album id])
print(row."album id")

You can access values by column name via the __getattribute__ method of the Row object:
row = crsr.execute("SELECT 'foo' AS [my column]").fetchone()
print(row.__getattribute__('my column'))  # foo
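Calling a dunder method directly is unusual in Python; the built-in getattr performs the same dynamic attribute lookup and reads more naturally:
row = crsr.execute("SELECT 'foo' AS [my column]").fetchone()
print(getattr(row, 'my column'))  # foo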

Related

Query SQLite using uuid returns nothing

I'm querying a SQLite db file on the id column, which is of type UNIQUEIDENTIFIER.
The two queries are:
SELECT * FROM scanned_images WHERE id = cast('5D878B98-71B2-4DEE-BA43-61D11C8EA497' as uniqueidentifier)
or
SELECT * FROM scanned_images WHERE id = '5D878B98-71B2-4DEE-BA43-61D11C8EA497'
However, both of the above queries return nothing.
I also tried:
SELECT * FROM scanned_images WHERE rowid = 1
and this returns the correct data.
There is no uniqueidentifier data type in SQLite.
According to the rules of type affinity described here in 3.1. Determination Of Column Affinity, the column's affinity is NUMERIC.
All that this expression:
cast('5D878B98-71B2-4DEE-BA43-61D11C8EA497' as uniqueidentifier)
does is return 5.
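This is easy to verify with Python's built-in sqlite3 module (a minimal, self-contained demonstration; no table needed):
import sqlite3

conn = sqlite3.connect(":memory:")
# "uniqueidentifier" matches none of SQLite's affinity keywords, so the
# cast applies NUMERIC affinity and keeps only the leading digits.
print(conn.execute(
    "SELECT CAST('5D878B98-71B2-4DEE-BA43-61D11C8EA497' AS uniqueidentifier)"
).fetchone()[0])  # 5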
You should have declared the column's data type as TEXT, because that is what it actually stores; treat it as TEXT and write the condition:
WHERE id = '5D878B98-71B2-4DEE-BA43-61D11C8EA497'
or:
WHERE id = '(5D878B98-71B2-4DEE-BA43-61D11C8EA497)'
if, as shown in the image, the stored value includes surrounding parentheses.

How to insert a dataframe with auto-increment in SQLAlchemy/SQLite?

I have this code to insert a dataframe into a DB (SQLite):
from sqlalchemy import schema, Table
from sqlalchemy.orm import sessionmaker

l1 = df.to_dict(orient='records')              # one dict per row
meta = schema.MetaData(bind=db, reflect=True)  # db is an existing engine
t = Table(t1, meta, autoload=True)             # t1 holds the table name
Session = sessionmaker(bind=db)
session = Session()
db.execute(t.insert(), l1)
session.commit()
session.close()
It fails because the unique id of the table 'is not unique' (the df comes from the db, and fields have been modified since).
In the df there is one unique id column, called id, which needs to be incremented.
How do I specify the unique id and make it auto-increment through SQLAlchemy?
Solution found (if it helps anybody in the same situation):
Drop the id before inserting, so the database assigns a fresh one. Since l1 is a list of dicts (one per row), the key has to be removed from each dict, not from the list itself:
for row in l1:
    del row['id']
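Equivalently, the column can be dropped from the dataframe before converting, which avoids the per-row loop (a minimal sketch, assuming the stale id column is still present in df):
l1 = df.drop(columns=['id']).to_dict(orient='records')
db.execute(t.insert(), l1)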

Error binding parameter 0 - probably unsupported type

I am creating an SQLite db and trying to iterate over an Excel file and put all the data into an SQL table, but I keep getting an annoying error. I have looked at the data types and still can't get my head around it. Please let me know if anyone spots what the problem is. My code is:
import sqlite3
from openpyxl import load_workbook

#wb = load_workbook(r"LeaguePlayers.xlsx")
# read workbook to get data
wb = load_workbook(filename = r"LeaguePlayers.xlsx", use_iterators = True)
ws = wb.get_sheet_by_name(name = 'Sheet1')
#ws = wb.worksheets

conn = sqlite3.connect("players.db")  # or use :memory: to put it in RAM
cursor = conn.cursor()

# create a table
cursor.execute("""CREATE TABLE players
                  (player TEXT,
                   team TEXT,
                   points INTEGER,
                   cost REAL,
                   position TEXT)
               """)

# Iterate through worksheet and print cell contents
for row in ws.iter_rows():
    for cell in row:
        cursor.execute("INSERT INTO players VALUES (?,?,?,?,?)", row)
conn.commit()

#----------------------------------------
# display SQL data
#----------------------------------------
cursor.execute('SELECT * FROM players')
for row in cursor:
    print (row)
The error I get says:
cursor.execute("INSERT INTO players VALUES (?,?,?,?,?)", row)
sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.
I really think you need to work through some kind of introduction to Python.
You are making two elementary mistakes: looping over the cells in a row but then passing the whole row to the query, and passing a complex object rather than a native Python type such as an integer or string.
Something like the following is what you want:
player = [cell.value for cell in row]
cursor.execute(query, player)
Note that execute takes a sequence (a tuple or list) as its second argument.
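Putting it together, a minimal sketch of the corrected loop: collect the cell values for each row, then execute one INSERT per row:
query = "INSERT INTO players VALUES (?,?,?,?,?)"
for row in ws.iter_rows():
    player = [cell.value for cell in row]  # native values, not Cell objects
    cursor.execute(query, player)
conn.commit()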

SQLite: select max index for each group

What I'm trying to do:
SELECT MAX(index), scr FROM history WHERE state = "TQA" GROUP BY scr
So, for every 'scr' in the table, I want a row showing the maximum index where the 'state' of that row = 'TQA'.
For some reason this gives me a syntax error near "index".
INDEX is a reserved keyword in SQLite. If you want to use it as a field name, you'll need to quote it:
SELECT MAX("index"), scr FROM history WHERE state = "TQA" GROUP BY scr

Unable to insert values into SQLite table

I need to create a table whose name contains some special characters. I am using the RSQLite package. The table name I need is port.3.1. It is not possible to create a table with this name directly, so I changed it to [port.3.1] based on What are valid table names in SQLite?.
Now I can create the table, but I can't insert a dataframe into it.
The code I used is as follows:
createTable <- function(tableName){
  c <- c(portDate='varchar(20) not null',
         ticker='varchar(20)',
         quantities='double(20,10)')
  lite <- dbDriver("SQLite", max.con = 25)
  db <- dbConnect(lite, dbname="sql.db")
  # check whether the table name contains special characters; if so, bracket-quote it
  if( length(which(strsplit(toString(tableName),'')[[1]]=='.')) != 0 ){
    tableName = paste("[", tableName, "]", sep="")
  }
  sql <- dbBuildTableDefinition(db, tableName, NULL, field.types = c, row.names = FALSE)
  print(sql)
  dbGetQuery(db, sql)
}
datedPf <- data.frame(date=c("2001-01-01","2001-01-01"),
                      ticker=c("a","b"), quantity=c(12,13))
for(port in c("port1","port2","port.3.1")){
  createTable(port)
  lite <- dbDriver("SQLite", max.con = 25)
  db <- dbConnect(lite, dbname="sql.db")
  # same special-character check: bracket-quote names containing '.'
  if( length(which(strsplit(toString(port),'')[[1]]=='.')) != 0 ){
    port = paste("[", port, "]", sep="")
  }
  dbWriteTable(db, port, datedPf, append=TRUE, row.names=FALSE)
}
In this example, the data frame is inserted into tables port1 and port2, but not into [port.3.1]. What is the reason behind this? How can I solve this problem?
Have a look at the sqliteWriteTable implementation, simply by entering that name at the R prompt and pressing Enter. You will notice two things:
[…]
foundTable <- dbExistsTable(con, name)
new.table <- !foundTable
createTable <- (new.table || foundTable && overwrite)
[…]
if (createTable) {
[…]
And looking at the showMethods("dbExistsTable", includeDefs=T) output, you'll see that it uses dbListTables(conn), which returns the unquoted version of your table name. So if you pass the quoted table name to sqliteWriteTable, it will incorrectly assume that your table does not exist, try to create it, and then hit an error. If you pass the unquoted table name, the creation statement will be wrong.
I'd consider this a bug in RSQLite. In my opinion, SQL statements the user passes have to be correctly quoted, but wherever a table name is passed as a separate argument to a function, that name should be unquoted by default and get quoted in the SQL statements generated from it. It would be even better if the name were accepted in either quoted or unquoted form, but that's mainly to maximize portability. If you feel like it, you can try contacting the developers to report this issue.
You can work around the problem:
setMethod(dbExistsTable, signature(conn="SQLiteConnection", name="character"),
  function(conn, name, ...) {
    lst <- dbListTables(conn)
    # accept both the bare name and its bracket-quoted form
    lst <- c(lst, paste("[", lst, "]", sep=""))
    match(tolower(name), tolower(lst), nomatch = 0) > 0
  }
)
This will overwrite the default implementation of dbExistsTable for SQLite connections with a version which checks for both quoted and unquoted table names. After this change, passing "[port.3.1]" as the table name will cause foundTable to be true, so RSQLite won't attempt to create that table for you.
