erlang-sqlite3 sqlite3:table_info error

I tried to extract table info from an SQLite database using the sqlite3:table_info/2 function and got an error message.
{ok, Pid} = sqlite3:open(db3).
Sql = <<"CREATE TABLE test (
    id INTEGER PRIMARY KEY,
    ts TEXT default (datetime('now')),
    key TEXT,
    val TEXT
);">>.
sqlite3:sql_exec(db3, Sql).
Check the table list:
sqlite3:list_tables(db3).
[test]
Try to get table info:
sqlite3:table_info(db3,test).
And now the error message:
=ERROR REPORT==== 1-Mar-2015::19:37:46 ===
** Generic server db3 terminating
** Last message in was {table_info,test}
** When Server state == {state,#Port,
[{file,"../tmp/db3.sqlite"}],
{dict,0,16,16,8,80,48,
{[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]},
{{[],[],[],[],[],[],[],[],[],[],[],[],[],
[],[],[]}}}}
** Reason for termination ==
** {function_clause,[{sqlite3,build_constraints,
[["INTEGER","PRIMARY","KEY"]],
[{file,"src/sqlite3.erl"},{line,1169}]},
{sqlite3,build_table_info,2,
[{file,"src/sqlite3.erl"},{line,1166}]},
{sqlite3,handle_call,3,
[{file,"src/sqlite3.erl"},{line,833}]},
{gen_server,try_handle_call,4,
[{file,"gen_server.erl"},{line,607}]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,639}]},
{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,237}]}]}
** exception exit: function_clause
in function sqlite3:build_constraints/1
called as sqlite3:build_constraints(["INTEGER","PRIMARY","KEY"])
in call from sqlite3:build_table_info/2 (src/sqlite3.erl, line 1166)
in call from sqlite3:handle_call/3 (src/sqlite3.erl, line 833)
in call from gen_server:try_handle_call/4 (gen_server.erl, line 607)
in call from gen_server:handle_msg/5 (gen_server.erl, line 639)
in call from proc_lib:init_p_do_apply/3 (proc_lib.erl, line 237)
Any ideas?

I've fixed the problem with INTEGER PRIMARY KEY. The DEFAULT clause is harder to support, but I've added a fallback so it at least doesn't crash. As @CL mentions, this kind of parsing is unreliable anyway, since SQLite unfortunately doesn't expose any way to use its own parser.
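Until a fixed version ships, one workaround is to bypass the library's CREATE TABLE parsing and ask SQLite itself via PRAGMA table_info. A minimal sketch using Python's built-in sqlite3 module; the file name db3.sqlite and table test are taken from the question, so adjust the path for your setup:

import sqlite3

# Let SQLite describe the table itself instead of re-parsing the CREATE
# TABLE text; each row is (cid, name, type, notnull, dflt_value, pk).
conn = sqlite3.connect("db3.sqlite")
for cid, name, col_type, notnull, dflt_value, pk in conn.execute(
        "PRAGMA table_info(test)"):
    print(name, col_type, "PRIMARY KEY" if pk else "", dflt_value or "")
conn.close()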

Related

How to write unittest cases for checking database connection with SQL server in Python?

I have just started using unittest in Python for writing test cases. I have a function that makes the connection to SQL Server.
sql_connection.py
import pyodbc

# appConfig is assumed to be defined or imported elsewhere in this module.
def getConnection():
    connection = pyodbc.connect("Driver={ODBC Driver 13 for SQL Server};"
                                "Server=" + appConfig['sql_server']['server'] + ";"
                                "Database=" + appConfig['sql_server']['database'] + ";"
                                "UID=" + appConfig['sql_server']['uid'] + ";"
                                "PWD=" + appConfig['sql_server']['password'] + ";"
                                "Trusted_Connection=no;")
    return connection
I have tried the test case below to check whether the database connects or not.
test_connection.py
import unittest
import pyodbc
from sql_connection import getConnection

getConnection1 = getConnection()

class TestDatabseConnection(unittest.TestCase):
    def test_getConnection(self):
        try:
            db_connection = getConnection1.connection
        except pyodbc.Error as ex:
            sqlstate = ex.args[1]
            print(sqlstate)
            self.fail(
                "getConnection() raised pyodbc.OperationalError. " +
                "Connection to database failed. Detailed error message: " + sqlstate)
        self.assertIsNone(db_connection)
But I am still not able to succeed:
======================================================================
ERROR: test_getConnection (__main__.TestDatabseConnection)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_connection.py", line 23, in test_getConnection
db_connection = getConnection1.connection
AttributeError: 'pyodbc.Connection' object has no attribute 'connection'
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Please help me out with this.
A unit test for your getConnection could look like the one below. I would suggest using patch and Mock from unittest.mock. With a unit test you are only interested in testing the functionality of getConnection, so you should mock all other function calls within the function. If you want to test the full behaviour of pyodbc.connect, I would suggest a functional test that actually connects to the database, which would no longer be a unit test. For more information on patch and Mock, check out the unittest.mock docs. These are very powerful and make unit testing fun and easy!
import unittest
from unittest.mock import patch, Mock
import pyodbc

def getConnection():
    appConfig = {'sql_server': {'server': '', 'database': '', 'uid': '', 'password': ''}}
    connection = pyodbc.connect("Driver={ODBC Driver 13 for SQL Server};"
                                "Server=" + appConfig['sql_server']['server'] + ";"
                                "Database=" + appConfig['sql_server']['database'] + ";"
                                "UID=" + appConfig['sql_server']['uid'] + ";"
                                "PWD=" + appConfig['sql_server']['password'] + ";"
                                "Trusted_Connection=no;")
    return connection

# The patch target assumes getConnection lives in a module named pyodbc_example.
@patch('pyodbc_example.pyodbc')
class TestDatabseConnection(unittest.TestCase):
    def test_getConnection(self, pyodbc_mock):
        pyodbc_mock.connect.return_value = Mock()
        connection = getConnection()
        self.assertEqual(connection, pyodbc_mock.connect.return_value)
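As a small addition not in the original answer, a standard entry point lets the test file be run directly; inside the test you could also tighten the check with pyodbc_mock.connect.assert_called_once():

# Hypothetical tail for the test module above:
if __name__ == '__main__':
    unittest.main()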

Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR when using openstacksdk create_server

When I create the OpenStack server, I get the exception below:
Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR
My code is below:
server_args = {
    "name": server_name,
    "image_id": image_id,
    "flavor_id": flavor_id,
    "networks": [{"uuid": network.id}],
    "admin_password": admin_password,
}
try:
    server = user_conn.conn.compute.create_server(**server_args)
    server = user_conn.conn.compute.wait_for_server(server)
except Exception as e:  # here I catch the exception
    raise e
When calling create_server, my server_args data is below:
{'flavor_id': 'd4424892-4165-494e-bedc-71dc97a73202', 'networks': [{'uuid': 'da4e3433-2b21-42bb-befa-6e1e26808a99'}], 'admin_password': '123456', 'name': '133456', 'image_id': '60f4005e-5daf-4aef-a018-4c6b2ff06b40'}
My openstacksdk version is 0.9.18.
In the end, I found that the flavor was too big for the OpenStack compute node. After I changed it to a smaller flavor, the server was created successfully.
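If you want to pick a smaller flavor programmatically rather than by trial and error, the SDK can list what the cloud offers. A minimal sketch, assuming a Connection object like user_conn.conn from the question; the selection heuristic is illustrative:

# Print each flavor's resource footprint so one that fits the compute
# node can be chosen.
for flavor in user_conn.conn.compute.flavors():
    print(flavor.name, "ram:", flavor.ram, "MB,",
          "vcpus:", flavor.vcpus, ", disk:", flavor.disk, "GB")

# Example: choose the flavor with the least RAM and retry the create.
smallest = min(user_conn.conn.compute.flavors(), key=lambda f: f.ram)
server_args["flavor_id"] = smallest.id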

I can't restore the dump file

I am using Oracle 11g Express and I used this command to restore a dump file:
impdp SCHEMAS=datamining DIRECTORY=data_pump_dir DUMPFILE=dm.dmp remap_tablespace=system:users
Then it gave me the error below:
.
.
.
x number;
y varchar2(200);
l_input utl_file.file_type;
begin
dbms_output.enable;
dbms_output.put_line(year_);
dbms_output.put_line(cycle);
select count(0) into x f
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39083: Object type ALTER_FUNCTION failed to create with error:
ORA-31625: Schema DATAMINING is needed to import this object, but is unaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 6720
ORA-01435: user does not exist
Failing sql is:
ALTER FUNCTION "DATAMINING"."AUTO_CORR" COMPILE PLSQL_OPTIMIZE_LEVEL= 2 PLSQL_CODE_TYPE= INTERPRETED PLSQL_DEBUG= FALSE PLSCOPE_SETTINGS= 'IDENTIFIERS:NONE' REUSE SETTINGS TIMESTAMP '2015-10-02 10:04:21'
ORA-39083: Object type ALTER_FUNCTION failed to create with error:
ORA-31625: Schema DATAMINING is needed to import this object, but is unaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 6720
ORA-01435: user does not exist
Failing sql is:
.
.
.
.
CREATE FORCE VIEW "DATAMINING"."B_CLEANING_1" ("FILE_", "CUSTOMER_ID", "PRE_DEB
T", "PRICE", "DATE_", "CYCLE_", "YEAR_") AS select b."FILE_",b."CUSTOMER_ID",b."
PRE_DEBT",b."PRICE",b."DATE_",b."CYCLE_",b."YEAR_"
from bills b
where b.c
ORA-39083: Object type VIEW failed to create with error:
ORA-31625: Schema DATAMINING is needed to import this object, but is unaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 6720
ORA-01435: user does not exist
Failing sql is:
CREATE FORCE VIEW "DATAMINING"."MT" ("FILE_", "CUSTOMER_ID", "SPECIAL_TYPE", "U
SAGE1", "USAGE2", "USAGE3", "CYCLE_", "YEAR_", "END_", "LENGTH_DAY", "CONTRACT_P
OWER", "PRE_DEBT", "PRICE", "C_CYCLE", "B_DATE", "B_CYCLLE") AS select m.file
Processing object type SCHEMA_EXPORT/TYPE/TYPE_BODY
ORA-39083: Object type TYPE_BODY failed to create with error:
ORA-01435: user does not exist
Failing sql is:
CREATE TYPE BODY "DATAMINING"."AUTO_CORRIMPL" IS
STATIC FUNCTION ODCIAggregateInitialize(sctx IN OUT auto_corrImpl)
RETURN NUMBER IS
-- initialize the variables
BEGIN
sctx := auto_corrImpl(0, sys.odcinumberlist());
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateIterate(self IN OUT auto_corrImpl, VALUE IN NUMBER) RETURN NU
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_01" completed with 69 error(s) at 06:29:50
Comment 1: we did remap_tablespace because we had wrongly put the schema on the system tablespace.
Comment 2: we ran this command on one computer and it worked, but it is not working on another computer.
Please guide me...
It seems the datamining user does not exist. Please create it and retry the import.
Thanks
Sabiha
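For reference, creating the missing schema owner before re-running impdp could look like the sketch below, using the cx_Oracle driver. The DBA credentials, the password, and the grants are illustrative assumptions, so adapt them to your environment:

import cx_Oracle

# Connect as a privileged user on the XE instance (illustrative DSN).
conn = cx_Oracle.connect("system", "manager", "localhost/XE")
cur = conn.cursor()

# Create the schema owner the import expects, with basic privileges and
# quota on the users tablespace targeted by the remap.
cur.execute("CREATE USER datamining IDENTIFIED BY change_me")
cur.execute("GRANT CONNECT, RESOURCE TO datamining")
cur.execute("ALTER USER datamining QUOTA UNLIMITED ON users")

conn.close()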

SQLITE_ERROR: Connection is closed when connecting from Spark via JDBC to SQLite database

I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a data frame from a table of the database works fine, but when I do some operations on the created object, I get the error below, which says "SQL error or missing database (Connection is closed)". The funny thing is that I get the result of the operation nevertheless. Any idea what I can do to solve the problem, i.e., avoid the error?
Start command for spark-shell:
../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar
Reading from the database:
val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load()
Simple count (fails):
emails.count
Error:
15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
at org.apache.spark.scheduler.Task.run(Task.scala:90)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
res1: Long = 7945
I got the same error today, and the important line is just before the exception:
15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection
15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
So Spark succeeded in closing the JDBC connection, and then it failed to close the JDBC statement.
Looking at the source, close() is called twice:
Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1)
context.addTaskCompletionListener{ context => close() }
Line 469
override def hasNext: Boolean = {
  if (!finished) {
    if (!gotNext) {
      nextValue = getNext()
      if (finished) {
        close()
      }
      gotNext = true
    }
  }
  !finished
}
If you look at the close() method (line 443)
def close() {
  if (closed) return
you can see that it checks the variable closed, but that value is never set to true.
If I see it correctly, this bug is still in master. I have filed a bug report.
Source: JDBCRDD.scala (line numbers differ slightly)
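The missing piece is simply recording that close() already ran, so a second call becomes a no-op. A minimal sketch of that idempotent-close pattern in Python (illustrative, not the actual Spark patch):

class GuardedResource:
    """Resource whose close() is safe to call more than once."""

    def __init__(self):
        self.closed = False

    def close(self):
        if self.closed:          # second call: nothing to do
            return
        try:
            pass                 # release the underlying statement/connection here
        finally:
            self.closed = True   # the assignment missing in JDBCRDD.close()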

Abort statement

I'm trying to abort a task in an Ada program, but I get this error during compilation:
expect task name or task interface class-wide object for "abort"
The code looks like this:
task type Sending_Message;
type Send_Message is access Sending_Message;

declare
   send : Send_Message;
begin
   send := new Sending_Message;
   ...
   abort send; -- this line throws the error
end;
And when I try a line like this:
abort Sending_Message;
I get error:
invalid use of subtype mark in expression or call
Any idea what is wrong?
You have to explicitly dereference the access type:
abort send.all;
The abort statement expects a task object, but send is an access value (a pointer to the task), so .all yields the task it designates.
