How do I insert rows containing Timestamp values, using clojure.java.jdbc? - sqlite

I'm trying to use clojure.java.jdbc to insert rows into a database. (The database in question is sqlite).
I can create a table like this:
(def db {:classname "org.sqlite.JDBC"
         :subprotocol "sqlite"
         :subname "/path/to/my/database"})

(with-connection db
  (create-table :foo
                [:bar :int]
                [:baz :int]
                [:timestamp :datetime]))
And this works. But if I try to insert a row into the database, this fails:
(with-connection db
  (insert-rows :foo
               [1 2 (java.sql.Timestamp. (.getTime (java.util.Date.)))]))
Giving an exception: assertion failure: param count (3) != value count (6).
But if I omit the timestamp field from the table definition and insert-rows operation, there isn't a problem. So what am I doing wrong with the timestamp?

(def sqllite-settings
  {:classname "org.sqlite.JDBC"
   :subprotocol "sqlite"
   :subname "test.db"})

(with-connection sqllite-settings
  (create-table :foo
                [:bar :int]
                [:baz :int]
                [:timestamp :datetime]))

(with-connection sqllite-settings
  (insert-rows :foo
               [1 2 (java.sql.Timestamp. (.getTime (java.util.Date.)))]))

(with-connection sqllite-settings
  (with-query-results rs ["select * from foo"]
    (doall rs)))
returned the expected:
({:bar 1, :baz 2, :timestamp 1311565709390})
I am using clojure.contrib.sql:
(use 'clojure.contrib.sql)
and the SQLite driver from here: http://www.zentus.com/sqlitejdbc/
Can you check whether clojure.contrib.sql works for you?
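For comparison, here is a minimal sketch of the same round trip using Python's sqlite3 module (the schema mirrors the Clojure example; the code is illustrative, not from the original post). SQLite has no native DATETIME type, so the millisecond epoch value is stored as a plain integer, which matches the 1311565709390 value in the query result above.

```python
import sqlite3
import time

# In-memory database; the "datetime" column has NUMERIC affinity,
# so an integer epoch value is stored as-is.
conn = sqlite3.connect(":memory:")
conn.execute("create table foo (bar int, baz int, timestamp datetime)")

# Analogous to java.sql.Timestamp's epoch milliseconds.
millis = int(time.time() * 1000)
conn.execute("insert into foo values (?, ?, ?)", (1, 2, millis))

row = conn.execute("select * from foo").fetchone()
print(row)  # e.g. (1, 2, 1311565709390)
```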

Related

How to detect errors while using the `.import` command in the sqlite3 CLI

It seems .import will only fail if the last record fails. Below I use .bail on, and the script continues after the first two imports even though both cause errors.
.bail on
PRAGMA foreign_keys = ON;
create table x (a primary key);
create table y (c references x(a));
.import --csv "|echo 2" x
.import --csv "|seq 2" y
.print 'Did not bail despite failing to insert 1'
create table z (a, b NOT NULL);
.import --csv "|printf 'a\nb,c\n'" z
.print "Did not bail despite failing to insert `a`"
.import --csv "|printf '1,2\n3\n'" z
.print 'This will not print because in this case, the bad record was at the end'
This outputs:
<pipe>:1: INSERT failed: FOREIGN KEY constraint failed
Did not bail despite failing to insert 1
<pipe>:1: expected 2 columns but found 1 - filling the rest with NULL
<pipe>:1: INSERT failed: NOT NULL constraint failed: z.b
Did not bail despite failing to insert `a`
<pipe>:2: expected 2 columns but found 1 - filling the rest with NULL
<pipe>:2: INSERT failed: NOT NULL constraint failed: z.b
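If per-row error detection matters more than raw speed, one alternative (sketched here in Python rather than the sqlite3 CLI) is to insert the CSV rows yourself and catch constraint failures individually, padding short rows with NULL the way .import does:

```python
import csv
import io
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table z (a, b NOT NULL)")

# Same data as the last .import above: the second record is short a column.
data = io.StringIO("1,2\n3\n")
failures = []
for lineno, row in enumerate(csv.reader(data), start=1):
    # Pad short rows with NULL, mirroring .import's behaviour.
    row = row + [None] * (2 - len(row))
    try:
        conn.execute("insert into z values (?, ?)", row)
    except sqlite3.IntegrityError as exc:
        failures.append((lineno, str(exc)))

print(failures)  # the short record on line 2 violates NOT NULL on z.b
```

Unlike .import, every failed record is reported with its line number, regardless of where it sits in the file.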

Ionic Sqlite SELECT IN statement doesn't return correct results

I have an Ionic mobile application where I use the SQLite plugin. I try to run the following query:
this.db.executeSql('SELECT * FROM foo_table WHERE id IN (?)', [[1, 3]])
  .then(data => {
    console.log(data.rows.item(0));
    console.log(data.rows.item(1));
    // Do something here
  });
I have deliberately omitted the database initialization code; it works properly with the other methods in the same file, so it is not relevant here. In the database I have two entities in table_foo, each with an id and some specific data.
When I run the above statement it doesn't return the two entities whose ids are 1 and 3. Instead it returns undefined. I ran the exact same statement in the sqlite console, i.e. SELECT * FROM table_a WHERE id IN (1,3);, and it works: it correctly shows the two entities. My question is why the SELECT IN query above doesn't work properly, and how I should pass multiple values in params (where the array of values 1 and 3 is located). Am I using the SELECT IN query wrong?
When I run above query with params as:
[1] -> works
[[1]] -> works
[[1, 3]] -> doesn't work
[1, 3] -> error, which is quite obvious
I found two workarounds for this:
Concatenating the array values into the SQL string (not a good solution):
const ids = [1, 3];
this.db.executeSql('SELECT * FROM foo_table WHERE id IN (' + ids.toString() + ')', [])
  .then(data => {
    console.log(data.rows.item(0));
    console.log(data.rows.item(1));
    // Do something here
  });
Using a SQL SELECT inside the SELECT IN query (a good solution if you can fetch those values from a database table):
this.db.executeSql('SELECT * FROM foo_table_a WHERE id IN (SELECT id FROM foo_table_b)', [])
  .then(data => {
    console.log(data.rows.item(0));
    console.log(data.rows.item(1));
    // Do something here
  });
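The underlying issue is that a single ? binds a single value; an array bound to one placeholder does not expand into a value list. A third workaround, sketched here with Python's sqlite3 module for illustration (the same placeholder-building idea applies with the Ionic plugin's executeSql), is to generate one ? per value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table foo_table (id integer primary key, name text)")
conn.executemany("insert into foo_table values (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

ids = [1, 3]
placeholders = ",".join("?" * len(ids))  # "?,?" -- one marker per value
rows = conn.execute(
    f"SELECT * FROM foo_table WHERE id IN ({placeholders})", ids
).fetchall()
print(rows)  # [(1, 'a'), (3, 'c')]
```

Only the placeholder string is built dynamically; the values themselves are still bound as parameters, so this avoids the injection risk of the string-concatenation workaround.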

metaData.getPrimaryKeys() returns a single row when the key is composite

I have an issue with a compound primary key in JDBC using SQLite driver.
The getPrimaryKeys() method from a DatabaseMetaData object returns a single row when I have verified the key is actually a compound key consisting of two columns.
Does anyone have suggestions or alternatives for how the true list of primary key columns can be retrieved?
Here is the code:
DatabaseMetaData meta = con.getMetaData();
ResultSet pks = meta.getPrimaryKeys(null, null, "work_on");
ResultSetMetaData rsmd = pks.getMetaData();
while (pks.next()) {
    // JDBC column indexes are 1-based, so use <= to include the last column.
    for (int i = 1; i <= rsmd.getColumnCount(); i++) {
        System.out.print(pks.getString(i) + " ");
    }
    System.out.println();
}
It seems you have run into this issue:
https://bitbucket.org/xerial/sqlite-jdbc/issues/107/databasemetadatagetprimarykeys-does-not
Workaround for the current JDBC bug
The bug in the JDBC driver is a bad regular expression matching your SQL string. The regular expression expects at least one whitespace between the KEY keyword and the opening parenthesis. If you write this:
create table work_on (
  s_id varchar(4),
  p_id varchar(4),
  x varchar(4),
  primary key(s_id, p_id)
)
The primary key won't be reported correctly, because there's another bug in the fallback logic when the regular expression fails to match anything, which results in only the last PK column being reported. So, to work around this problem, you could carefully design your tables to always include this whitespace:
create table work_on (
  s_id varchar(4),
  p_id varchar(4),
  x varchar(4),
  primary key (s_id, p_id)
  --         ^ whitespace here!
)
Workaround by not using the JDBC API
You can always run this query here yourself (which is the JDBC driver's fallback query):
pragma table_info('work_on');
Then collect all the rows whose pk flag is non-zero (the value gives the column's position within the key). For instance, the following table
create table work_on (
  s_id varchar(4),
  p_id varchar(4),
  x varchar(4),
  primary key(s_id, p_id)
)
... produces this output:
+----+----+----------+-------+----------+----+
| cid|name|type |notnull|dflt_value| pk|
+----+----+----------+-------+----------+----+
| 0|s_id|varchar(4)| 0|{null} | 1|
| 1|p_id|varchar(4)| 0|{null} | 2|
| 2|x |varchar(4)| 0|{null} | 0|
+----+----+----------+-------+----------+----+
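The pragma-based workaround translates directly to any SQLite client. Here is a sketch using Python's sqlite3 module for brevity, on the same work_on table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    create table work_on (
      s_id varchar(4),
      p_id varchar(4),
      x varchar(4),
      primary key(s_id, p_id)
    )
""")

# table_info columns: cid, name, type, notnull, dflt_value, pk.
# pk > 0 marks a primary-key column; its value is the column's
# 1-based position within the composite key.
info = conn.execute("pragma table_info('work_on')").fetchall()
pk_cols = [row[1] for row in sorted(info, key=lambda r: r[5]) if row[5] > 0]
print(pk_cols)  # ['s_id', 'p_id']
```

Sorting by the pk value preserves the declared key order, which getPrimaryKeys() is also supposed to report (via KEY_SEQ).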

Yesod Persistent: SQLite table with id row only

My model looks like the following:
TestGroup
TestPerson
    firstName Text
    lastName Text
    testGroupId TestGroupId
TestObject
    objectName Text
    testGroupId TestGroupId
In this case the only thing in the TestGroup table is testGroupId. Multiple TestPersons can be in one group (one to many), and one group can have multiple test objects (also one to many).
The following code compiles and runs but produces an SQLite error:
postAddTestPersonR :: Handler Value
postAddTestPersonR = do
    newTestPerson <- parseJsonBody :: Handler (Result TestPerson)
    case newTestPerson of
        Success s -> runDB $ do
            newTestGroup <- insert $ TestGroup
            _ <- insert $ TestPerson (firstName s) (lastName s) newTestGroup
            return $ object ["message" .= "it worked"]
        Error e -> return $ object ["message" .= e]
The error:
"INSERT INTO \\\"test_group\\\"() VALUES()\": near \")\": syntax error)"
If I open the database and manually add it this way it works and I get a new ID number:
INSERT INTO test_group VALUES (null);
Should I just do this in raw SQL, or is there a way around this with Persistent? A simple solution is to add a dummy Maybe field to TestGroup and do insert $ TestGroup Nothing, but that is a bit hackish, and I would like to know if there is a way around it.
It was an internal issue with Yesod. It has been resolved: https://github.com/yesodweb/persistent/issues/222
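The failing statement illustrates a SQLite quirk worth knowing on its own: INSERT INTO t() VALUES() is a syntax error, but both INSERT INTO t DEFAULT VALUES and the manual VALUES (null) form work for an id-only table. A quick sketch with Python's sqlite3 module (the test_group schema here is an assumed stand-in for what Persistent generates):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table test_group (id integer primary key)")

# INSERT INTO test_group() VALUES() would be a syntax error in SQLite;
# both of these forms work and auto-assign a fresh rowid-backed id:
conn.execute("insert into test_group default values")
conn.execute("insert into test_group values (null)")

ids = [r[0] for r in conn.execute("select id from test_group order by id")]
print(ids)  # [1, 2]
```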

A generic procedure that can execute any procedure/function

Input:
package name (IN)
procedure name (or function name) (IN)
a table indexed by integer; it will contain the values used to execute the procedure (IN/OUT)
E.g., let's assume that we want to execute the procedure below:
utils.get_emp_num(emp_name      IN  VARCHAR,
                  emp_last_name IN  VARCHAR,
                  emp_num       OUT NUMBER,
                  result        OUT VARCHAR);
The procedure that we will create will have as inputs:
package_name = utils
procedure_name = get_emp_num
table = T[1] -> name
        T[2] -> lastname
        T[3] -> 0 (any value)
        T[4] -> N (any value)

run_procedure(package_name,
              procedure_name,
              table)
The main procedure should return the same table that was passed in, but with the values produced by executing the procedure:
table = T[1] -> name
        T[2] -> lastname
        T[3] -> 78734 (new value)
        T[4] -> F (new value)
Any thoughts?
You can achieve it with EXECUTE IMMEDIATE. Basically, you build a SQL statement of the following form:
sql := 'BEGIN utils.get_emp_num(:1, :2, :3, :4); END;';
Then you execute it:
EXECUTE IMMEDIATE sql USING t(1), t(2), OUT t(3), OUT t(4);
Now here comes the tricky part: For each number of parameters and IN/OUT combinations you need a separate EXECUTE IMMEDIATE statement. And to figure out the number of parameters and their direction, you need to query the ALL_ARGUMENTS table first.
You might be able to simplify it by passing the whole table as a bind argument instead of a separate bind argument for each table element. But I haven't quite figured out how you would do that.
And the next thing you should consider: the elements of the table T you're using must all have a single type: VARCHAR, NUMBER, etc. So the current mixture, where you have both numbers and strings, won't work.
BTW: Why do you want such a dynamic call mechanism anyway?
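To make the overall shape of run_procedure concrete, here is a rough Python analogue (purely illustrative; PL/SQL specifics such as bind direction and ALL_ARGUMENTS lookup do not carry over): resolve the callable by package and name, pass the IN slots, and write the results back into the OUT slots of the same table:

```python
# Hypothetical "package": a plain namespace holding the target procedure.
class utils:
    @staticmethod
    def get_emp_num(emp_name, emp_last_name):
        # Stand-in lookup; a real implementation would query a table.
        return 78734, "F"

def run_procedure(package, procedure_name, table):
    """Call package.procedure_name with the IN slots (T[1], T[2]) and
    write the OUT results back into T[3], T[4], mimicking the
    EXECUTE IMMEDIATE ... USING t(1), t(2), OUT t(3), OUT t(4) pattern."""
    proc = getattr(package, procedure_name)
    table[3], table[4] = proc(table[1], table[2])
    return table

t = {1: "name", 2: "lastname", 3: 0, 4: "N"}
run_procedure(utils, "get_emp_num", t)
print(t)  # {1: 'name', 2: 'lastname', 3: 78734, 4: 'F'}
```

In PL/SQL, the IN/OUT split must instead be discovered at run time from ALL_ARGUMENTS, which is exactly why a separate EXECUTE IMMEDIATE is needed per parameter-count/direction combination.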
Get from the ALL_ARGUMENTS view the argument_name, data_type, in_out, and position of each parameter, then build the PL/SQL block:

DECLARE
  -- loop over argument_name and create the declare section:
  -- each variable is "argument_name data_type"; if in_out <> 'OUT',
  -- initialize it with := <the input value>, otherwise leave it NULL
BEGIN
  -- in the case of a function, declare an additional variable for the result
  function_var := package_name.procedure_name( /* loop over argument_name */ );
  -- use a table of ANYDATA, declared as a global in the package
  IF function THEN
    package_name.ad_table.EXTEND;
    package_name.ad_table(package_name.ad_table.LAST) := function_var;
  END IF;
  -- loop over argument_name where in_out <> 'IN':
  package_name.ad_table.EXTEND;
  package_name.ad_table(package_name.ad_table.LAST) :=
    -- if data_type = 'VARCHAR2' then ANYDATA.ConvertVarchar2(argument_name)
    -- else if 'NUMBER' then ANYDATA.ConvertNumber(argument_name)
    -- else if 'DATE' then ANYDATA.ConvertDate(argument_name)
    ...
END;

The results are stored in that ANYDATA table. To read the values back out, use the corresponding Access* functions (e.g. ANYDATA.AccessVarchar2).
