What happens if I overwrite the pyodbc connection?

I have some functions that execute queries against my database, and in those functions I call pyodbc.connect().
What happens if I don't close this connection and then call another function that also calls pyodbc.connect()? Will it ignore the call because a connection is already open?
If it works the way I'm thinking, I want to use that to avoid opening and closing the connection every time a function is called.
PS: I know this is not the best way to do it; I only want to know whether it works the way I'm thinking.

Will it ignore the call because a connection is already open?
No. Each call to pyodbc.connect() opens a new, independent connection, as the following demonstrates:
import pyodbc

conn_str = (
    r'DRIVER=ODBC Driver 17 for SQL Server;'
    r'SERVER=(local)\SQLEXPRESS;'
    r'DATABASE=myDb;'
    "Trusted_Connection=yes;"
)

def get_connections(cnxn):
    crsr = cnxn.cursor()
    sql = (
        "SELECT session_id, login_name, status "
        "FROM sys.dm_exec_sessions "
        "WHERE program_name = 'Python' "
    )
    connections = crsr.execute(sql).fetchall()
    return connections

def subroutine():
    cnxn = pyodbc.connect(conn_str)
    return get_connections(cnxn)

cnxn = pyodbc.connect(conn_str)
print(get_connections(cnxn))
# [(56, 'GORD-HP\\Gord', 'running')]
print(subroutine())
# [(56, 'GORD-HP\\Gord', 'sleeping'), (57, 'GORD-HP\\Gord', 'running')]
print(get_connections(cnxn))
# [(56, 'GORD-HP\\Gord', 'running')]
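As the output shows, subroutine() opened a second session (57) rather than reusing the existing one. If the goal is simply to avoid the cost of opening and closing on every call, one common pattern is to cache a single connection at module level and hand it to each function. A minimal sketch, assuming the conn_str defined above; the helper name get_cnxn is illustrative, not a pyodbc API:

import pyodbc

_cnxn = None  # module-level cache; one connection shared by all functions

def get_cnxn():
    # Open the connection on first use, then return the same object thereafter.
    global _cnxn
    if _cnxn is None:
        _cnxn = pyodbc.connect(conn_str)
    return _cnxn

# Each query function then does e.g.:
# crsr = get_cnxn().cursor()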

Related

pyhive: is it possible to stop a query job on Ctrl+C

I use PyHive in Jupyter to connect to Hive/Presto for some ad hoc analysis. Something annoying is that if I cancel a submitted query job via Ctrl+C, it only stops Jupyter but won't stop the query job remotely. Is there a way to also send a signal that stops the remote job on Ctrl+C?
import time

import pandas as pd
from pyhive import presto

def get_data_from_presto(sql):
    conn = presto.connect(...)  # connection details elided
    cursor = conn.cursor()
    cursor.execute(sql)
    while cursor.poll() is not None:
        response_json = cursor.poll()
        time.sleep(2)
    col_names = [i[0] for i in cursor.description]
    df = pd.DataFrame(cursor.fetchall(), columns=col_names)
    return df
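One approach, assuming the installed PyHive version exposes Cursor.cancel() on its Presto cursor, is to catch the KeyboardInterrupt raised by Ctrl+C and cancel the server-side query before re-raising. A hedged sketch along those lines; the function name is made up for the example:

import time

import pandas as pd
from pyhive import presto

def get_data_from_presto_cancellable(sql):
    conn = presto.connect(...)  # connection details elided, as in the question
    cursor = conn.cursor()
    try:
        cursor.execute(sql)
        while cursor.poll() is not None:
            time.sleep(2)
    except KeyboardInterrupt:
        # Ctrl+C lands here; ask the server to kill the query, then re-raise.
        cursor.cancel()
        raise
    col_names = [i[0] for i in cursor.description]
    return pd.DataFrame(cursor.fetchall(), columns=col_names)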

DM SerialControl Communication

Running this script in DM results in an unexpected error during the first execution. Subsequent executions fail on SPOpen(1,9600,1,0,8), which I think implies the serial port is already open at that point, but the first execution says it is not.
What is the unexpected error that is preventing communication with the serial port?
SPOpen(1,9600,1,0,8)
SPOpen( "COM1" )
SPSendString(1, "*IDN?" )
string message
number test
message = SPReceiveString(1,8,test)
Result("Acquisition "+message+" "+test+"\n")
SPClose(1)
I can't test the serial commands myself at the moment, and the exact script code of course depends on what is on the other end of the serial connection, i.e. what is expected and what is sent back, and also what timeouts/delays need to be expected and handled.
However, I can see two immediate issues with your script:
The 'SPOpen()' command returns an ID value. You need to pass this ID to the subsequent commands, not the port number.
Whenever the script fails (i.e. throws an error), the command to close the port is never executed, so the port remains open (and hence blocked). To safeguard against this, you can use a 'Try{}Catch{}' construct.
I would expect your script to look something more akin to the following:
number port = 666      // replace with your port number
number baud = 9600
number stop = 1
number parity = 0
number data = 8
number portID

try
{
    portID = SPOpen( port, baud, stop, parity, data )
    Result( "\n Port ("+port+") opened, Handle ID: " + portID )

    string message = "*IDN?"
    Result( "\n Sending message:" + message )
    SPSendString( portID, message )
    Result( "\n Message sent." )

    // Wait for response
    Result( "\n Waiting for response." )
    sleep( 0.3 )

    number pendingBytes = SPGetPendingBytes( portID )
    Result( "\n Pending bytes:" + pendingBytes )

    number maxLength = 50
    number bytes_back
    string reply
    while( pendingBytes > 0 )
    {
        reply += SPReceiveString( portID, maxLength, bytes_back )
        pendingBytes = SPGetPendingBytes( portID )
    }
    Result( "\n Reply:" + reply )
}
catch
{
    // Any thrown error ends up here.
    // Ensures the port will not remain open.
    Result( "ERROR OCCURRED.\n" )
    break
}
SPClose( portID )
Result( "\n Port ("+port+") closed, using Handle ID: " + portID )
The above is untested code and will surely require some adaptation, but it should get you started. You might need some "delays" when waiting for a result and you might want to wait for specific results in a while-loop.

ScalikeJDBC + SQLite: Cannot change read-only flag after establishing a connection

Trying to get ScalikeJDBC and SQLite working. I have simple code based on the provided examples:
import scalikejdbc._, SQLInterpolation._

object Test extends App {
  Class.forName("org.sqlite.JDBC")
  ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)
  implicit val session = AutoSession
  println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply())
}
It fails with exception:
Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and SQLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...
I've tried both scalikejdbc 1.7 and 2.0; the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".
What can I do to fix the error?
The following will create two separate connections, one for read-only operations and the other for writes.
ConnectionPool.add("mydb", s"jdbc:sqlite:${db.getAbsolutePath}", "", "")
ConnectionPool.add(
"mydb_ro", {
val conf = new SQLiteConfig()
conf.setReadOnly(true)
val source = new SQLiteDataSource(conf)
source.setUrl(s"jdbc:sqlite:${db.getAbsolutePath}")
new DataSourceConnectionPool(source)
}
)
I found that the reason is that you're using "org.xerial" % "sqlite-jdbc" % "3.7.15-M1". This version still looks unstable.
Use "3.7.2", the same as #kawty.
Building on #Synesso's answer, I expanded it slightly so that config values can be read from config files and connection settings can be set:
import javax.sql.DataSource
import org.sqlite.{SQLiteConfig, SQLiteDataSource}
import scalikejdbc._
import scalikejdbc.config.TypesafeConfigReader

case class SqlLiteDataSourceConnectionPool(source: DataSource,
                                           override val settings: ConnectionPoolSettings)
  extends DataSourceConnectionPool(source)

// read settings for 'default' database
val cpSettings = TypesafeConfigReader.readConnectionPoolSettings()
val JDBCSettings(url, user, password, driver) = TypesafeConfigReader.readJDBCSettings()

// use those to create two connection pools
ConnectionPool.add("db", url, user, password, cpSettings)
ConnectionPool.add(
  "db_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(url)
    SqlLiteDataSourceConnectionPool(source, cpSettings)
  }
)

// example using 'NamedDB'
val name: Option[String] = NamedDB("db_ro") readOnly { implicit session =>
  sql"select name from users where id = $id".map(rs => rs.string("name")).single.apply()
}
This worked for me with org.xerial/sqlite-jdbc 3.28.0:
String path = ...
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
return DriverManager.getConnection("jdbc:sqlite:" + path, config.toProperties());
Interestingly, I wrote a different solution on the issue on the xerial repo:
PoolProperties props = new PoolProperties();
props.setDriverClassName("org.sqlite.JDBC");
props.setUrl("jdbc:sqlite:...");
Properties extraProps = new Properties();
extraProps.setProperty("open_mode", SQLiteOpenMode.READONLY.flag + "");
props.setDbProperties(extraProps);
// This line can be left in or removed; it no longer causes a problem
// as long as the open_mode code is present.
props.setDefaultReadOnly(true);
return new DataSource(props);
I don't recall why I needed the second, and was then able to simplify it back to the first one. But if the first doesn't work, you might try the second. It uses a SQLite-specific open_mode flag that then makes it safe (but unnecessary) to use the setDefaultReadOnly call.

connecting to two different sqlite databases in lua using luasql

Goal
I'm trying to connect to two different databases, one after the other.
I know the first connection is working because I attempt to create a new record, and it works. When I try to connect to the second database and query a table, the logic fails with an error saying that the table I'm querying doesn't exist. But I know it does.
Here's the test code that creates the connection objects:
local database1con
local database2con
local database1env
local database2env

local firstdatabase_connect = function()
    if not database1con then
        database1env = assert(luasql.sqlite3())
        database1con = assert(database1env:connect("database1.sqlite"))
        return true
    else
        return false
    end
end

local seconddatabase_connect = function()
    if not database2con then
        database2env = assert(luasql.sqlite3())
        database2con = assert(database2env:connect("database2.sqlite"))
        return true
    else
        return false
    end
end

local firstdatabase_disconnect = function()
    if database1env then
        database1env:close()
        database1env = nil
    end
    if database1con then
        database1con:close()
        database1con = nil
    end
end

local seconddatabase_disconnect = function()
    if database2env then
        database2env:close()
        database2env = nil
    end
    if database2con then
        database2con:close()
        database2con = nil
    end
end
And here's the logic that tries to actually connect to the databases:
local connected = firstdatabase_connect()
-- run some select & insert commands
firstdatabase_disconnect()

-- now connect to second database
sql = "INSERT INTO users VALUES("..user_id..", "..username.value..", 'test',"..os.date("%Y%m%d%H%M%S")..", Null,Null)"
local db2connected = seconddatabase_connect()
if db2connected then
    local res, err = database2con:execute(sql)
    if not res and err then
        success = false
    end
    seconddatabase_disconnect()
end
Problem
The insert fails with the following message: LuaSQL: no such table: users
The users table doesn't exist in database1, but does exist in database2.
What I've tested so far
I thought that perhaps, even though I'm disconnecting from the first database, it was somehow checking the wrong db. So after the call to firstdatabase_disconnect(), I added another select statement that attempted to select from the first database.
The system failed with a message that the connection object for database1 is nil, which is good.
I'm not sure what else to test.
If you have any suggestions, I'd appreciate it.
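Not from the original thread, but one way to narrow this down: list the tables the second connection actually sees right after connecting. SQLite resolves a relative path like "database2.sqlite" against the current working directory and will happily create a new, empty database file if none exists there, which would produce exactly this "no such table" error. A minimal diagnostic sketch with luasql:

local luasql = require "luasql.sqlite3"

local env = assert(luasql.sqlite3())
local con = assert(env:connect("database2.sqlite"))

-- List every table this connection can actually see.
local cur = assert(con:execute("SELECT name FROM sqlite_master WHERE type = 'table'"))
local row = cur:fetch({}, "a")
while row do
    print("found table: " .. row.name)
    row = cur:fetch(row, "a")
end

cur:close()
con:close()
env:close()

If users is missing from the output, the connect call opened (or created) a different file than you intended, and an absolute path would be worth trying.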

C# - Insert Multiple Records at once to AS400

I have a problem like this:
1. I retrieve data from MySQL using C# ASP.NET. -- done --
2. All data from no. 1 will be inserted into a table on the AS400. -- I got an error on this step --
The error message says ERROR [42000] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0104 - Token ; was not valid. Valid tokens: <END-OF-STATEMENT>. It's true that I used semicolons to separate the queries from each other, but apparently that's not allowed. I've Googled but I can't find the solution.
My question is: what does the <END-OF-STATEMENT> in that error message mean?
Here is my source code.
private static void doInsertDOCADM(MySqlConnection conn)
{
    // Get temporary table
    String query = "SELECT * FROM TB_T_DOC_TEMPORARY_ADM";
    DataTable dt = CSTDDBUtil.ExecuteQuery(query);

    OdbcConnection as400Con = null;
    as400Con = CSTDDBUtil.GetAS400Connection();
    as400Con.Open();

    if (dt != null && dt.Rows.Count > 0)
    {
        int counter = 1, maxInsertLoop = 50;
        using (OdbcCommand cmd = new OdbcCommand())
        {
            cmd.Connection = as400Con;
            foreach (DataRow dr in dt.Rows)
            {
                cmd.CommandText += "INSERT INTO DCDLIB.WDFDOCQ VALUES " + "(?,?,?,?);";
                cmd.Parameters.Add("1", OdbcType.VarChar).Value = dr["PROD_MONTH"].ToString();
                cmd.Parameters.Add("2", OdbcType.VarChar).Value = dr["NEW_MAIN_DEALER_CD"].ToString();
                cmd.Parameters.Add("3", OdbcType.VarChar).Value = dr["MODEL_SERIES"].ToString();
                cmd.Parameters.Add("4", OdbcType.VarChar).Value = dr["MODEL_CD"].ToString();

                if (counter < maxInsertLoop)
                {
                    counter++;
                }
                else
                {
                    counter = 1;
                    cmd.ExecuteNonQuery();
                    cmd.CommandText = "";
                    cmd.Parameters.Clear();
                }
            }
            if (counter > 1) cmd.ExecuteNonQuery();
        }
    }
}
Note: I used this approach (collect several queries first, then execute them at once) to improve the performance of my application.
As Clockwork-Muse pointed out, the problem is that you can only run a single SQL statement in a command. The iSeries server does not handle multiple statements at once.
If your iSeries server is running V6R1 or later, you can use block inserts to insert multiple rows. I'm not sure if you can do so through the ODBC driver, but since you have Client Access, you should be able to install the iSeries ADO.NET driver. There are not many differences between the ADO.NET iSeries driver and the ODBC one, but with ADO.NET you get access to iSeries specific functions.
With the ADO.NET driver, multiple inserts become a simple matter of:
using (iDB2Connection connection = new iDB2Connection(".... connection string ..."))
{
    // Create a new SQL command
    iDB2Command command =
        new iDB2Command("INSERT INTO MYLIB.MYTABLE VALUES(#COL_1, #COL_2)", connection);

    // Initialize the parameters collection
    command.DeriveParameters();

    // Insert 20 rows of data at once
    for (int i = 0; i < 20; i++)
    {
        // Here, you set your parameters for a single row
        command.Parameters["#COL_1"].Value = i;
        command.Parameters["#COL_2"].Value = i + 1;

        // AddBatch() tells the command you're done preparing a row
        command.AddBatch();
    }

    // The query gets executed
    command.ExecuteNonQuery();
}
There is also some reference code provided by IBM to do block inserts using VB6 and ODBC, but I'm not sure it can be easily ported to .NET : http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp?topic=%2Frzaik%2Frzaikextfetch.htm
Hope that helps.
When it says <END-OF-STATEMENT> it means just about what it says: it wants the statement to end there. I don't recall if the AS/400 allows multiple statements per execution unit at all, but clearly it's not working here, and the driver isn't dealing with it either.
Actually, you have a larger, more fundamental problem: you're INSERTing a row at a time (usually known as row-by-agonizing-row). DB2 allows a comma-separated list of rows in a VALUES clause (so, INSERT INTO <table_name> VALUES (<row_1_columns>), (<row_2_columns>)), as sketched below. Does the driver you're using allow you to provide arrays (either of the entire row, or per column)? Otherwise, look into extract/load utilities for jobs like this; I can guarantee that this will speed up the process.
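For illustration, a multi-row VALUES insert through the plain ODBC driver might look like the following sketch. It is untested, assumes the four VarChar columns from the question, and the helper name InsertBatch is made up for the example. Because everything lands in one statement, no semicolons are involved:

using System.Collections.Generic;
using System.Data.Odbc;
using System.Text;

// Sketch: one INSERT statement carrying many rows via a comma-separated VALUES list.
static void InsertBatch(OdbcConnection con, List<string[]> rows)
{
    var sql = new StringBuilder("INSERT INTO DCDLIB.WDFDOCQ VALUES ");
    using (var cmd = new OdbcCommand())
    {
        cmd.Connection = con;
        for (int i = 0; i < rows.Count; i++)
        {
            // One (?,?,?,?) group per row; ODBC parameters are positional.
            sql.Append(i == 0 ? "(?,?,?,?)" : ", (?,?,?,?)");
            foreach (string col in rows[i])
                cmd.Parameters.Add("p" + cmd.Parameters.Count, OdbcType.VarChar).Value = col;
        }
        cmd.CommandText = sql.ToString();
        cmd.ExecuteNonQuery();  // single statement, single round trip
    }
}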
