I am trying to extract values and place them in a .db file for later use; however, my code has broken and I can no longer even load SQLite

Using:
- IronPython
- Autodesk Revit (pyRevit)
- Revit API
- SQLite3
My code is as follows:
import sqlite3

try:
    conn = sqlite3.connect('SQLite_Python.db')
    c = conn.cursor()
    print("connected")
    Insert_Volume = """INSERT INTO Column_Coordinates
                       (x, y, z)
                       VALUES
                       (1, 2, 3)"""
    count = c.execute(Insert_Volume)
    conn.commit()
    print("Volume values inserted", c.rowcount)
    c.close()
except sqlite3.Error as error:
    print("Failed to insert data into sqlite table", error)
finally:
    if (conn):
        conn.close()
        print("The SQLite connection is closed")
This code used to work within PyRevit, but now does not, with the following error:
Exception : System.IO.IOException: Could not add reference to assembly IronPython.SQLite
Please advise; this is one of the early steps of a large project, and the problem is therefore delaying my work quite a bit.
I look forward to your reply.
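(Editorial aside, unrelated to the assembly-load error: sqlite3 also accepts ? placeholders, which avoids hard-coding values into the SQL string. A minimal sketch, assuming the Column_Coordinates table already exists:)

import sqlite3

conn = sqlite3.connect('SQLite_Python.db')
c = conn.cursor()
# The values are bound by the driver instead of being pasted into the SQL text.
c.execute("INSERT INTO Column_Coordinates (x, y, z) VALUES (?, ?, ?)", (1, 2, 3))
conn.commit()
conn.close()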

Related

kubeflow pipeline SDK use dsl.ParallelFor to build a loop

@dsl.pipeline(name='classfier')
def classifiertest():
    make_classification_com_res = make_classification_com()
    rng_res = np_random_random_state()
    uniform_res = rng_uniform(make_classification_com_res.output, rng_res.output)
    all_datas_res = get_all_datas(x_input=uniform_res.output,
                                  y_input=make_classification_com_res.output)
    forlist = list([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    with dsl.ParallelFor(forlist) as item_index:
        for_outter_func(item_index, ds_input=all_datas_res.output)
When I run this pipeline, the following error occurs after I click the start button of the run:
{"error":"Failed to create a new run.: InternalServerError: Failed to store run classfier-9xbrk to table: Error 1366: Incorrect string value: '\xE8\xBF\x99\xE4\xB8\x80...' for column 'WorkflowRuntimeManifest' at row 1","code":13,"message":"Failed to create a new run.: InternalServerError: Failed to store run classfier-9xbrk to table: Error 1366: Incorrect string value: '\xE8\xBF\x99\xE4\xB8\x80...' for column 'WorkflowRuntimeManifest' at row 1","details":[{"#type":"type.googleapis.com/api.Error","error_message":"Internal Server Error","error_details":"Failed to create a new run.: InternalServerError: Failed to store run classfier-9xbrk to table: Error 1366: Incorrect string value: '\xE8\xBF\x99\xE4\xB8\x80...' for column 'WorkflowRuntimeManifest' at row 1"}]}
When I delete these two lines of code, the pipeline commits and runs successfully:
with dsl.ParallelFor(forlist) as item_index:
    for_outter_func(item_index, ds_input=all_datas_res.output)
Delete the inline comments. The following writing style is what causes the problem in kfp:
from kfp.components import create_component_from_func

# This comment is OK.
def inputdata_outputtable(input_arr, output_path):
    import numpy as np
    # This comment is also OK.
    _input_arr = np.array(input_arr)  # This comment causes an error.
    np.save(output_path, _input_arr)
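For reference, here is a sketch of the same component with the trailing inline comment removed (an editorial addition, assuming the rest of the pipeline is unchanged):

from kfp.components import create_component_from_func

# Full-line comments like this one are fine.
def inputdata_outputtable(input_arr, output_path):
    import numpy as np
    _input_arr = np.array(input_arr)
    np.save(output_path, _input_arr)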

How do I fix the warning message "Closing open result set, cancelling previous query" when querying a PostgreSQL database in R?

Below is a snippet of the code I use in R to extract IDs from a PostgreSQL database. When I run the function I get the following warning message from R:
In result_create(conn@ptr, statement) :
  Closing open result set, cancelling previous query
How do I stop this warning from appearing without putting options(warn=-1) at the beginning of my code, which would merely suppress the warning instead of addressing its cause?
con <- dbConnect(RPostgres::Postgres(),
                 user = "postgres",
                 dbname = "DataBaseName",
                 password = "123456",
                 port = 5431)

get_id <- function(connection, table){
  query <- toString(paste("SELECT id FROM ", table, sep = ""))
  data_extract_query <- dbSendQuery(connection, query)
  data_extract <- dbFetch(data_extract_query)
  return(data_extract)
}

get_id(con, "users")
I found a method for solving the problem.
There is a thread on GitHub for RSQLite at https://github.com/r-dbi/RSQLite/issues/143. In that thread, they explicitly set n = -1 in the dbFetch() function.
This solved my problem, and the warning message no longer shows up after editing the code as follows:
data_extract <- dbFetch(data_extract_query, n = -1)
Here n is the number of rows the query should return; setting it to -1 retrieves all rows. It already defaults to n = -1, but for some reason, in this build of R (3.6.3), the warning is still shown unless n is passed explicitly.
Calling ?dbFetch in R shows more information; here is a snippet from the help page:
Usage
dbFetch(res, n = -1, ...)
fetch(res, n = -1, ...)
Arguments
res  An object inheriting from DBIResult, created by dbSendQuery().
n    Maximum number of records to retrieve per fetch. Use n = -1 or n = Inf to retrieve all pending records. Some implementations may recognize other special values.
...  Other arguments passed on to methods.
This issue also comes up with other database implementations if results are not cleared before a new query is submitted. From the docs of DBI::dbSendQuery:
Usage
dbSendQuery(conn, statement, ...)
...
Value
dbSendQuery() returns an S4 object that inherits from DBIResult. The result set can be used with dbFetch() to extract records. Once you have finished using a result, make sure to clear it with dbClearResult(). An error is raised when issuing a query over a closed or invalid connection, or if the query is not a non-NA string. An error is also raised if the syntax of the query is invalid and all query parameters are given (by passing the params argument) or the immediate argument is set to TRUE.
To get rid of the warning, the get_id() function must be modified as follows:
get_id <- function(connection, table){
  query <- toString(paste("SELECT id FROM ", table, sep = ""))
  data_extract_query <- dbSendQuery(connection, query)
  data_extract <- dbFetch(data_extract_query)
  # Here we clear whatever remains on the server
  dbClearResult(data_extract_query)
  return(data_extract)
}
See the Examples section of the help page for more.

RODBC channel error in global environment

I've set up a connection between R and SQL using the RODBC package, and managed to connect to and query the database from R.
I've created a small R function whose objective is to delete some lines (passed as a parameter) from a specific table.
Here it is (nameDB is the name of my database, and values_conversion is another function I wrote to convert some data from R format to SQL format):
delete_SQL = function(data, table){
  ch = odbcConnect(nameDB, "postgres")
  names = sqlColumns(channel=ch, table, schema="public", catalog=nameDB)$COLUMN_NAME
  for(i in 1:nrow(data)){
    sqlQuery(channel=ch, query=paste0("DELETE FROM public.\"", table, "\" WHERE ",
             paste0(names, " = ", values_conversion(data[i,]), collapse=" and "), ";"),
             errors=TRUE)
  }
  odbcCloseAll()
}
Example query: "DELETE FROM public.\"lieu_protection\" WHERE lieu_id = 3 and protection_id = 1430;"
The code inside this function works fine when I execute everything directly, but when I call the function it throws:
Error in sqlQuery(channel = ch, query = paste0("DELETE FROM public.\"", :
  first argument is not an open RODBC channel
I have a similar function that gets and returns data from SQL and works fine, so I guess it has something to do with the delete, but the error is about the channel, so I'm quite confused.
Thanks to anyone who can help!

db operation exception handling in R

I am using the following function to select values from a table. The table name is given as a parameter to the function. If the table does not exist, an error is generated and execution stops. I want to do something when the table is not found. Is that possible in R? Something like a try-catch exception?
library('RSQLite')

getData <- function(portfolio){
  lite <- dbDriver("SQLite", max.con = 25)
  db <- dbConnect(lite, dbname = "portfolioInfo.db")
  res <- dbSendQuery(db, paste("SELECT * from ", portfolio, " ", sep=""))
  data <- fetch(res)
  return(data)
}
getData("table1")
One way to do this would be to check the class of the data that is returned.
I am not familiar with RSQLite, but I guess it returns a data frame when the table is found and a text error when it is not.
So one possibility would be to check whether or not an error was raised:
checkQueryResult <- function(data){
  if(class(data) == 'data.frame') cat('SQL execution worked!')
  else cat('Something went wrong, table does not exist?')
}
checkQueryResult(getData("table1"))
But maybe the RSQLite package already has error handling stuff built in?

incremental select using id from SQLite in Twisted

I am trying to select data from a table in SQLite, ONLY one row at a time per call to the function, with the row incrementing on each call (self.count is initialized elsewhere, and 'line' is irrelevant here). I am using an adbapi connection pool in Twisted to connect to the DB. Here is the code I have tried:
def queryBTData4(self, line):
    self.count = self.count + 1
    uuId = self.count
    query = "SELECT co2_data, patient_Id FROM btdata4 WHERE uid=:uid", {"uid": uuId}
    d = self.dbpool.runQuery(query)
    return d
This code works if I just set uid=1 or any other number in the DB (I used autoincrement for uid when I created the DB), but when I try to assign a value to uid (i.e. self.count via uuId) it reports that the operator has to be string or unicode. (I have tried both, but it does not seem to help.) However, I know the query statement above works fine in a previous program where I use a cursor and the execute command, so I cannot see why it does not work here. I have tried all sorts of combinations and searched for a solution but have not found anything that works. (I have also tried the statement with brackets and in other forms.)
Thanks for any help or advice.
Here is the entire code:
from twisted.internet import protocol, reactor
from twisted.protocols import basic
from twisted.enterprise import adbapi
import sqlite3, time

class ServerProtocol(basic.LineReceiver):
    def __init__(self):
        self.conn = sqlite3.connect('biomed2.db', check_same_thread=False)
        self.dbpool = adbapi.ConnectionPool("sqlite3", 'biomed2.db', check_same_thread=False)

    def connectionMade(self):
        self.sendLine("conn made")
        factory = protocol.ClientFactory()
        factory.protocol = ClientProtocol
        factory.originator = self
        reactor.connectTCP('localhost', 1234, factory)

    def lineReceived(self, line):
        self._received = line
        self.insertBTData4(self._received)
        self.sendLine("line recvd")

    def forwardLine(self, recipient):
        recipient.sendLine(self._received)

    def insertBTData4(self, data):
        print "data in insert is", data
        chx = data
        PID = 2
        device_ID = 5
        query = "INSERT INTO btdata4(co2_data, patient_Id, sensor_Id) VALUES ('%s','%s','%s')" % (chx, PID, device_ID)
        dF = self.dbpool.runQuery(query)
        return dF

class ClientProtocol(basic.LineReceiver):
    def __init__(self):
        self.conn = sqlite3.connect('biomed2.db', check_same_thread=False)
        self.dbpool = adbapi.ConnectionPool("sqlite3", 'biomed2.db', check_same_thread=False)
        self.count = 0

    def connectionMade(self):
        print "server-client made connection with client"
        self.factory.originator.forwardLine(self)
        #self.transport.loseConnection()

    def lineReceived(self, line):
        d = self.queryBTData4(self)
        d.addCallbacks(self.sendData, self.printError)

    def queryBTData4(self, line):
        self.count = self.count + 1
        query = ("SELECT co2_data, patient_Id FROM btdata4 WHERE uid=:uid", {"uid": uuId})
        dF = self.dbpool.runQuery(query)
        return dF

    def sendData(self, line):
        data = str(line)
        self.sendLine(data)

    def printError(self, error):
        print "Got Error: %r" % error
        error.printTraceback()

def main():
    factory = protocol.ServerFactory()
    factory.protocol = ServerProtocol
    reactor.listenTCP(4321, factory)
    reactor.run()

if __name__ == '__main__':
    main()
The DB is created in another program, thus:
import sqlite3, time, string

conn = sqlite3.connect('biomed2.db')
c = conn.cursor()
c.execute('''CREATE TABLE btdata4
             (uid INTEGER PRIMARY KEY, co2_data integer, patient_Id integer, sensor_Id integer)''')
The main program takes data in on the server socket and inserts it into the DB. On the client socket side, data is removed from the DB one line at a time and sent to an external server. The program can also send data from the server side to the client side if required, but I am not doing so here at the moment.
In queryBTData4(), every time the function is called the count increments, and I assign that value to uuId, which I then pass to the query. I have had this query statement working in a program where I do not use adbapi, but it does not seem to work here. I hope this is clear enough, but if not please let me know and I will try again.
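(Editorial aside: building the INSERT in insertBTData4 with % string formatting invites quoting problems and SQL injection; adbapi passes extra arguments straight through to sqlite3 as bindings. A sketch under that assumption, reusing the pool from the listing above:)

def insertBTData4(self, data):
    # Values are passed as bindings, so they need no manual quoting.
    return self.dbpool.runQuery(
        "INSERT INTO btdata4(co2_data, patient_Id, sensor_Id) VALUES (?, ?, ?)",
        (data, 2, 5))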
EDIT:
I have modified the program to take one row from the DB at a time (see queryBTData4() below) but have come across another problem.
def queryBTData4(self, line):
    self.count = self.count + 1
    xuId = self.count
    #xuId = 10
    return self.dbpool.runQuery("SELECT co2_data FROM btdata4 WHERE uid = ?", xuId)
    #return self.dbpool.runQuery("SELECT co2_data FROM btdata4 WHERE uid = 10")
When the count gets to 10 I get an error (posted below) which states: "Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied."
I have tried setting xuId to 10 (see the commented-out line xuId = 10) but I still get the same error. However, if I switch the return statements (to the commented-out return) I do indeed get the correct row with no error. I have also tried converting xuId to unicode, but that makes no difference; I still get the same error. Basically, if I set uid in the return statement to 10 or more (commented-out return) it works, but if I set uid to xuId (i.e. uid = ?, xuId) in the first return, it only works while xuId is below 10. As far as I can tell, the API documentation gives no clue as to why this occurs. (I have also disabled the insert into the DB to rule it out, and checked the SQLite variable limit, which is 999.)
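(Editorial aside, one plausible explanation assuming xuId reaches the driver as a string: sqlite3 treats any sequence as the list of bindings, and a string is a sequence of characters, so "9" supplies one binding while "10" supplies two. A standalone sketch outside Twisted:)

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE btdata4 (uid INTEGER PRIMARY KEY, co2_data INTEGER)")
conn.execute("SELECT co2_data FROM btdata4 WHERE uid = ?", "9")      # one character, one binding: fine
conn.execute("SELECT co2_data FROM btdata4 WHERE uid = ?", ("10",))  # one-element tuple, one binding: fine
conn.execute("SELECT co2_data FROM btdata4 WHERE uid = ?", "10")     # two characters, two bindings: ProgrammingError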
Here are the errors I am getting when using the first return statement.
Got Error: <twisted.python.failure.Failure <class 'sqlite3.ProgrammingError'>>
Traceback (most recent call last):
  File "c:\python26\lib\threading.py", line 504, in __bootstrap
    self.__bootstrap_inner()
  File "c:\python26\lib\threading.py", line 532, in __bootstrap_inner
    self.run()
  File "c:\python26\lib\threading.py", line 484, in run
    self.__target(*self.__args, **self.__kwargs)
--- <exception caught here> ---
  File "c:\python26\lib\site-packages\twisted\python\threadpool.py", line 207, in _worker
    result = context.call(ctx, function, *args, **kwargs)
  File "c:\python26\lib\site-packages\twisted\python\context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "c:\python26\lib\site-packages\twisted\python\context.py", line 81, in callWithContext
    return func(*args, **kw)
  File "c:\python26\lib\site-packages\twisted\enterprise\adbapi.py", line 448, in _runInteraction
    result = interaction(trans, *args, **kw)
  File "c:\python26\lib\site-packages\twisted\enterprise\adbapi.py", line 462, in _runQuery
    trans.execute(*args, **kw)
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 2 supplied.
Thanks.
Consider the API documentation for runQuery. Next, consider the difference between these three function calls:
c = a, b
f(a, b)
f((a, b))
f(c)
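Concretely (an editorial sketch, not part of the original answer): runQuery expects the SQL and its bindings as separate arguments, whereas the code above packs them into one tuple and passes that tuple as the statement. Assuming the pool and counter from the question:

def queryBTData4(self, line):
    self.count = self.count + 1
    uuId = self.count
    # SQL and bindings are separate arguments, not one packed tuple.
    return self.dbpool.runQuery(
        "SELECT co2_data, patient_Id FROM btdata4 WHERE uid=:uid",
        {"uid": uuId})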
Finally, don't paraphrase error messages. Always quote them verbatim. Copy/paste whenever possible; make a note when you've manually transcribed them.
