ScalikeJDBC + SQLite: Cannot change read-only flag after establishing a connection

Trying to get ScalikeJDBC and SQLite working. I have some simple code based on the provided examples:
import scalikejdbc._, SQLInterpolation._

object Test extends App {
  Class.forName("org.sqlite.JDBC")
  ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)
  implicit val session = AutoSession

  println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply())
}
It fails with an exception:
Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and QLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...
I've tried both scalikejdbc 1.7 and 2.0, and the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".
What can I do to fix the error?

The following will create two separate connection pools, one for read-only operations and the other for writes.
import org.sqlite.{ SQLiteConfig, SQLiteDataSource }
import scalikejdbc._

ConnectionPool.add("mydb", s"jdbc:sqlite:${db.getAbsolutePath}", "", "")
ConnectionPool.add(
  "mydb_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(s"jdbc:sqlite:${db.getAbsolutePath}")
    new DataSourceConnectionPool(source)
  }
)
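With both pools registered, reads and writes can then be routed explicitly by name. A minimal usage sketch, assuming the kv table from the question (the SQL and column names are illustrative):

// writes go through the read-write pool
NamedDB("mydb") localTx { implicit session =>
  sql"insert into kv (key, value) values ('seq', '1')".update.apply()
}

// reads go through the read-only pool
val seq: Option[String] = NamedDB("mydb_ro") readOnly { implicit session =>
  sql"select value from kv where key = 'seq'".map(_.string("value")).single.apply()
}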

I found that the reason is that you're using "org.xerial" % "sqlite-jdbc" % "3.7.15-M1". This version still looks unstable.
Use "3.7.2", the same as @kawty.

Building on @Synesso's answer, I expanded slightly to be able to get config values from config files and to set connection settings:
import javax.sql.DataSource

import org.sqlite.{ SQLiteConfig, SQLiteDataSource }
import scalikejdbc._
import scalikejdbc.config.TypesafeConfigReader

case class SqlLiteDataSourceConnectionPool(source: DataSource,
                                           override val settings: ConnectionPoolSettings)
  extends DataSourceConnectionPool(source)

// read settings for the 'default' database
val cpSettings = TypesafeConfigReader.readConnectionPoolSettings()
val JDBCSettings(url, user, password, driver) = TypesafeConfigReader.readJDBCSettings()

// use those to create two connection pools
ConnectionPool.add("db", url, user, password, cpSettings)
ConnectionPool.add(
  "db_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(url)
    SqlLiteDataSourceConnectionPool(source, cpSettings)
  }
)

// example using 'NamedDB'
val name: Option[String] = NamedDB("db_ro") readOnly { implicit session =>
  sql"select name from users where id = $id".map(rs => rs.string("name")).single.apply()
}

This worked for me with org.xerial/sqlite-jdbc 3.28.0:
String path = ...
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
return DriverManager.getConnection("jdbc:sqlite:" + path, config.toProperties());
Interestingly, I wrote a different solution on the issue on the xerial repo:
PoolProperties props = new PoolProperties();
props.setDriverClassName("org.sqlite.JDBC");
props.setUrl("jdbc:sqlite:...");
Properties extraProps = new Properties();
extraProps.setProperty("open_mode", SQLiteOpenMode.READONLY.flag + "");
props.setDbProperties(extraProps);
// This line can be left in or removed; it no longer causes a problem
// as long as the open_mode code is present.
props.setDefaultReadOnly(true);
return new DataSource(props);
I don't recall why I needed the second, and was then able to simplify it back to the first one. But if the first doesn't work, you might try the second. It uses a SQLite-specific open_mode flag that then makes it safe (but unnecessary) to use the setDefaultReadOnly call.

Related

How can I see the SQL generated by SQLite.NET PCL in Xamarin Studio?

I researched this and all I can find is a suggestion to turn on .Trace = true like this:
db1 = DependencyService.Get<ISQLite>().GetConnection();
db1.Trace = true;
I also tried this:
db2.Trace = true;
var categories = db2.Query<Category>("SELECT * FROM Category ORDER BY Name").ToList();
Debug.WriteLine("xxxx");
Well, I did this and then restarted the application. When I view the Application Output I just see information on threads started and the xxxx, but I don't see any SQL trace information.
Can anyone give me advice on this? Thanks.
You need to set Trace and Tracer (action) properties on your SQLiteConnection to print queries to output:
db.Tracer = new Action<string>(q => Debug.WriteLine(q));
db.Trace = true;
Look in the Application Output window for lines that begin with "Executing".
Example Output after setting Trace to true:
Executing: create table if not exists "Valuation"(
"Id" integer primary key autoincrement not null ,
"StockId" integer ,
"Time" datetime ,
"Price" float )
Executing Query: pragma table_info("Valuation")
Executing: create index if not exists "Valuation_StockId" on "Valuation"("StockId")
Executing: insert into "Stock"("Symbol") values (?)
Executing Query: select * from "Stock" where ("Symbol" like (? || '%'))
0: A
Ref: https://github.com/praeclarum/sqlite-net/blob/38a5ae07c886d6f62cecd8fdeb8910d9b5a77546/src/SQLite.cs
The SQLite PCL uses Debug.WriteLine, which means that the logs are only included in Debug builds of the PCL.
Remove your NuGet reference to the sqlite.net PCL (leave the native reference), add SQLite.cs as a class to your project instead, and execute a debug build with the Trace flag set; you'll then see the tracing.
I didn't have to do anything special other than include the SQLite.cs file in my Xamarin iOS project for this to work:
using (var conn = new SQLite.SQLiteConnection("mydb.sqlite") { Trace = true }) {
    var rows = conn.Table<PodcastMetadata>().Where(row => row.DurationMinutes < 10).Select(row => new { row.Title });
    foreach (var row in rows) {
        Debug.WriteLine(row);
    }
}
Output:
Executing Query: select * from "PodcastMetadata" where ("DurationMinutes" < ?)
0: 10

In Kotlin, how do I read the entire contents of an InputStream into a String?

I recently saw code for reading entire contents of an InputStream into a String in Kotlin, such as:
// input is of type InputStream
val baos = ByteArrayOutputStream()
input.use { it.copyTo(baos) }
val inputAsString = baos.toString()
And also:
val reader = BufferedReader(InputStreamReader(input))
try {
    val results = StringBuilder()
    while (true) {
        val line = reader.readLine()
        if (line == null) break
        results.append(line)
    }
    val inputAsString = results.toString()
} finally {
    reader.close()
}
And even this, which looks smoother since it auto-closes the InputStream:
val inputString = BufferedReader(InputStreamReader(input)).useLines { lines ->
    val results = StringBuilder()
    lines.forEach { results.append(it) }
    results.toString()
}
Or a slight variation on that one:
val results = StringBuilder()
BufferedReader(InputStreamReader(input)).forEachLine { results.append(it) }
val resultsAsString = results.toString()
Then this functional fold thingy:
val inputString = input.bufferedReader().useLines { lines ->
    lines.fold(StringBuilder()) { buff, line -> buff.append(line) }.toString()
}
Or a bad variation which doesn't close the InputStream:
val inputString = BufferedReader(InputStreamReader(input))
    .lineSequence()
    .fold(StringBuilder()) { buff, line -> buff.append(line) }
    .toString()
But they are all clunky and I keep finding newer and different versions of the same... and some of them never even close the InputStream. What is a non-clunky (idiomatic) way to read the InputStream?
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that the idiomatic answers to commonly asked Kotlin topics are present in SO.
Kotlin has a specific extension just for this purpose.
The simplest:
val inputAsString = input.bufferedReader().use { it.readText() } // defaults to UTF-8
And in this example, you could decide between bufferedReader() or just reader(). The call to the function Closeable.use() will automatically close the input at the end of the lambda's execution.
If you do this type of thing a lot, you could write this as an extension function:
fun InputStream.readTextAndClose(charset: Charset = Charsets.UTF_8): String {
    return this.bufferedReader(charset).use { it.readText() }
}
Which you could then call easily as:
val inputAsString = input.readTextAndClose() // defaults to UTF-8
On a side note, all Kotlin extension functions that require knowing the charset already default to UTF-8, so if you require a different encoding you need to adjust the calls above to pass it explicitly, e.g. reader(charset) or bufferedReader(charset).
Warning: You might see examples that are shorter:
val inputAsString = input.reader().readText()
But these do not close the stream. Make sure you check the API documentation for all of the IO functions you use to be sure which ones close and which do not. Usually, if they include the word use (such as useLines() or use()) they close the stream after. An exception is that File.readText() differs from Reader.readText() in that the former does not leave anything open and the latter does indeed require an explicit close.
See also: Kotlin IO related extension functions
【Method 1 | Manually Close Stream】
private fun getFileText(uri: Uri): String {
    val inputStream = contentResolver.openInputStream(uri)!!
    val bytes = inputStream.readBytes() // see below
    val text = String(bytes, StandardCharsets.UTF_8) // specify charset
    inputStream.close()
    return text
}
inputStream.readBytes() requires manually closing the stream: https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.io/java.io.-input-stream/read-bytes.html
【Method 2 | Automatically Close Stream】
private fun getFileText(uri: Uri): String {
    return contentResolver.openInputStream(uri)!!.bufferedReader().use { it.readText() }
}
You can specify the charset inside bufferedReader(), default is UTF-8:
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.io/java.io.-input-stream/buffered-reader.html
bufferedReader() is an upgraded version of reader(); it is more versatile:
How exactly does bufferedReader() work in Kotlin?
use() can automatically close the stream when the block is done:
https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.io/use.html
An example that reads the contents of an InputStream into a String:
import java.io.File
import java.io.InputStream
import java.nio.charset.Charset

fun main(args: Array<String>) {
    val file = File("input" + File.separator + "contents.txt")
    val ins: InputStream = file.inputStream()
    val content = ins.readBytes().toString(Charset.defaultCharset())
    println(content)
}
For Reference - Kotlin Read File
A quick solution that works well when converting an InputStream to a String (note that InputStream.readAllBytes() requires Java 9+):
val convertedInputStream = String(inputStream.readAllBytes(), StandardCharsets.UTF_8)

Cannot remove index in Titan over DynamoDB

I created a mixed index on Titan 1.0 with the DynamoDB backend and Elasticsearch, and I'm trying to remove it using the following code:
public static void removeIndex(TitanGraph graph, String indexStr) throws ExecutionException, InterruptedException {
    TitanManagement m = graph.openManagement();
    TitanGraphIndex nameIndex = m.getGraphIndex(indexStr);
    Preconditions.checkState(nameIndex != null, "index " + indexStr + " doesn't exist");
    TitanManagement.IndexJobFuture futureDisable = m.updateIndex(nameIndex, SchemaAction.DISABLE_INDEX);
    m.commit();
    graph.tx().commit();
    futureDisable.get();

    // Block until the SchemaStatus transitions to DISABLED
    ManagementSystem.awaitGraphIndexStatus(graph, indexStr)
            .status(SchemaStatus.DISABLED).call();

    // Delete the index using TitanManagement
    m = graph.openManagement();
    nameIndex = m.getGraphIndex(indexStr);
    TitanManagement.IndexJobFuture futureRemove =
            m.updateIndex(nameIndex, SchemaAction.REMOVE_INDEX);
    m.commit();
    graph.tx().commit();
    Preconditions.checkState(futureRemove != null,
            "Couldn't remove index/es because seems like indexes were not disabled."); // fails here.
    futureRemove.get();

    m = graph.openManagement();
    nameIndex = m.getGraphIndex(indexStr);
    Preconditions.checkArgument(nameIndex == null);
}
The key doesn't get removed. I get this warning indicating that the index never gets disabled.
---
INFO com.thinkaurelius.titan.graphdb.database.management.GraphIndexStatusWatcher - Some key(s) on index verticesIndex do not currently have status DISABLED: position=INSTALLED
INFO com.thinkaurelius.titan.graphdb.database.management.GraphIndexStatusWatcher - Timed out (PT1M) while waiting for index verticesIndex to converge on status DISABLED
[WARNING]
The code fails on the precondition test.
What am I doing wrong?
It turns out I was using 'position' to define a property key, which is a reserved keyword in DynamoDB.
The only way to fix it was to:
delete all of Titan's DynamoDB tables from the web console,
change 'position' to 'fieldPosition',
and start my code to create the tables and indexes from scratch.

Akka Multi Node Testing available in Java?

I have read the Akka Java documentation about Multi Node Testing, however all the code is in Scala. Is there any reason for that? A Google search was unsuccessful as well.
EDIT:
To reduce the tumbleweedness of this question, I did try :). A simple translation to Java of the existing Scala code might look like this:
public class ClusterTest {
    protected RoleName first;

    @Test
    public void SimpleClusterListenerClusterJoinTest() throws Exception {
        new MultiNodeSpec(new MultiNodeConfig() {{
            first = this.role("first");
            second = this.role("second");
            third = this.role("third");
            this.commonConfig(ConfigFactory.parseString(
                "akka.crdt.convergent.leveldb.destroy-on-shutdown = on\n" +
                "akka.actor.provider = akka.cluster.ClusterActorRefProvider\n" +
                "akka.cluster.auto-join = off\n" +
                "akka.cluster.auto-down = on\n" +
                "akka.loggers = [\"akka.testkit.TestEventListener\"]\n" +
                "akka.loglevel = INFO\n" +
                "akka.remote.log-remote-lifecycle-events = off"));
        }}) {
            {
                Address firstAddress = node(first).address();
                @SuppressWarnings("serial")
                ArrayList<RoleName> firstnode = new ArrayList<RoleName>() {{
                    add(first);
                }};
                Seq<RoleName> fisrtnodeseq = (Seq<RoleName>) JavaConversions.asScalaBuffer(firstnode).toList();
                runOn(fisrtnodeseq, null);
                Cluster cluster = new Cluster((ExtendedActorSystem) system());
                cluster.join(firstAddress);
                // verify that single node becomes member
                cluster.subscribe(testActor(), MemberEvent.class);
                expectMsg(MemberUp.class);
            }

            @Override
            public int initialParticipants() {
                return roles().size();
            }
        };
    }
}
HOWEVER, during the run with the arguments
-Dmultinode.max-nodes=4 -Dmultinode.host=127.0.0.1 etc., according to Multi Node Testing (if I list all of the arguments here, the editor heavily complains :[ ), I get the following error:
java.lang.IllegalArgumentException: invalid ActorSystem name [ClusterTest_2], must contain only word characters (i.e. [a-zA-Z0-9] plus non-leading '-')
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:497)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
at akka.remote.testkit.MultiNodeSpec.<init>(MultiNodeSpec.scala:252)
at com.akkamint.demo.ClusterTest$2.<init>(ClusterTest.java:51)
Is the internally generated ActorSystem name wrong?
Besides this I have two questions:
How can I access the gossips from Java as in the Scala code
awaitCond(Cluster(system).latestGossip.members.exists(m ⇒ m.address == firstAddress && m.status == Up))
? I have not found any way to do the same in Java. My workaround is to subscribe to member events (see above), but I do not know whether this is effectively the same or not.
What is the thunk function (the second argument of the runOn method), and how can I use it?

Dynamic properties in Scala

Does Scala support something like dynamic properties? Example:
val dog = new Dynamic // Dynamic does not define 'name' nor 'speak'.
dog.name = "Rex" // New property.
dog.speak = { "woof" } // New method.
val cat = new Dynamic
cat.name = "Fluffy"
cat.speak = { "meow" }
val rock = new Dynamic
rock.name = "Topaz"
// rock doesn't speak.
def test(animal: Any) = {
animal.name + " is telling " + animal.speak()
}
test(dog) // "Rex is telling woof"
test(cat) // "Fluffy is telling meow"
test(rock) // "Topaz is telling null"
What is the closest thing from it we can get in Scala? If there's something like "addProperty" which allows using the added property like an ordinary field, it would be sufficient.
I'm not interested in structural type declarations ("type safe duck typing"). What I really need is to add new properties and methods at runtime, so that the object can be used by a method/code that expects the added elements to exist.
Scala 2.9 will have a specially handled Dynamic trait that may be what you are looking for.
This blog has a bit about it: http://squirrelsewer.blogspot.com/2011/02/scalas-upcoming-dynamic-capabilities.html
I would guess that in the invokeDynamic method you will need to check for "name_=", "speak_=", "name" and "speak", and you could store values in a private map.
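For illustration, here is a minimal sketch of that idea using the Dynamic trait as it eventually shipped (Scala 2.10+), where reads go through selectDynamic and assignments through updateDynamic, backed by a private map. The class and member names are just for this example, not part of the original proposal:

import scala.language.dynamics
import scala.collection.mutable

class DynamicBag extends Dynamic {
  private val props = mutable.Map.empty[String, Any]

  // dog.name = "Rex" is rewritten to dog.updateDynamic("name")("Rex")
  def updateDynamic(name: String)(value: Any): Unit = props(name) = value

  // dog.name is rewritten to dog.selectDynamic("name")
  def selectDynamic(name: String): Any = props.getOrElse(name, null)
}

val dog = new DynamicBag
dog.name = "Rex"
dog.speak = "woof"
println(s"${dog.name} is telling ${dog.speak}") // Rex is telling woof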
I can not think of a reason to really need to add/create methods/properties dynamically at run-time unless dynamic identifiers are also allowed -and/or- a magical binding to an external dynamic source (JRuby or JSON are two good examples).
Otherwise the example posted can be implemented entirely using the existing static typing in Scala via "anonymous" types and structural typing. Anyway, not saying that "dynamic" wouldn't be convenient (and as 0__ pointed out, is coming -- feel free to "go edge" ;-).
Consider:
val dog = new {
  val name = "Rex"
  def speak = { "woof" }
}

val cat = new {
  val name = "Fluffy"
  def speak = { "meow" }
}

// Rock not shown here -- because it doesn't speak it won't compile
// with the following unless it stubs in. In both cases it's an error:
// the issue is when/where the error occurs.
def test(animal: { val name: String; def speak: String }) = {
  animal.name + " is telling " + animal.speak
}

// However, we can take in the more general type { val name: String } and try to
// invoke the possibly non-existent property, albeit in a hackish sort of way.
// Unfortunately pattern matching does not work with structural types AFAIK :(
val rock = new {
  val name = "Topaz"
}

def test2(animal: { val name: String }) = {
  animal.name + " is telling " + (try {
    animal.asInstanceOf[{ def speak: String }].speak
  } catch { case _ => "{very silently}" })
}

test(dog)
test(cat)
// test(rock) -- no! will not compile (a good thing)
test2(dog)
test2(cat)
test2(rock)
However, this method can quickly get cumbersome (to "add" a new attribute one would need to create a new type and copy over the current data into it) and is partially exploiting the simplicity of the example code. That is, it's not practically possible to create true "open" objects this way; in the case for "open" data a Map of sorts is likely a better/feasible approach in the current Scala (2.8) implementation.
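As a rough sketch of that Map-based alternative (the names and helper below are illustrative, not from the original answer):

// store "open" attributes in a plain map instead of a structural type
val dog  = Map[String, Any]("name" -> "Rex", "speak" -> (() => "woof"))
val rock = Map[String, Any]("name" -> "Topaz")

def describe(animal: Map[String, Any]): String = {
  val name = animal.getOrElse("name", "<unnamed>")
  val speak = animal.get("speak") match {
    case Some(f: Function0[_]) => f().toString
    case _                     => "{very silently}"
  }
  s"$name is telling $speak"
}

describe(dog)  // Rex is telling woof
describe(rock) // Topaz is telling {very silently}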
Happy coding.
First off, as @pst pointed out, your example can be entirely implemented using static typing, so it doesn't require dynamic typing.
Secondly, if you want to program in a dynamically typed language, program in a dynamically typed language.
That being said, you can actually do something like that in Scala. Here is a simplistic example:
class Dict[V](args: (String, V)*) extends Dynamic {
  import scala.collection.mutable.Map

  private val backingStore = Map[String, V](args: _*)

  def typed[T] = throw new UnsupportedOperationException()

  def applyDynamic(name: String)(args: Any*) = {
    val k = if (name.endsWith("_=")) name.dropRight(2) else name
    if (name.endsWith("_=")) backingStore(k) = args.first.asInstanceOf[V]
    backingStore.get(k)
  }

  override def toString() = "Dict(" + backingStore.mkString(", ") + ")"
}

object Dict {
  def apply[V](args: (String, V)*) = new Dict(args: _*)
}

val t1 = Dict[Any]()
t1.bar_=("quux")

val t2 = new Dict("foo" -> "bar", "baz" -> "quux")
val t3 = Dict("foo" -> "bar", "baz" -> "quux")

t1.bar // => Some(quux)
t2.baz // => Some(quux)
t3.baz // => Some(quux)
As you can see, you were pretty close, actually. Your main mistake was that Dynamic is a trait, not a class, so you can't instantiate it, you have to mix it in. And you obviously have to actually define what you want it to do, i.e. implement typed and applyDynamic.
If you want your example to work, there are a couple of complications. In particular, you need something like a type-safe heterogeneous map as a backing store. Also, there are some syntactic considerations. For example, foo.bar = baz is only translated into foo.bar_=(baz) if foo.bar_= exists, which it doesn't, because foo is a Dynamic object.
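For the curious, a very rough sketch of what such a type-safe heterogeneous map could look like (all names here are illustrative, and this is only one of several possible encodings):

// each key carries the type of the value stored under it
class Key[A](val name: String)

class HMap private (underlying: Map[Key[_], Any]) {
  def updated[A](key: Key[A], value: A): HMap = new HMap(underlying.updated(key, value))
  def get[A](key: Key[A]): Option[A] = underlying.get(key).map(_.asInstanceOf[A])
}

object HMap {
  val empty = new HMap(Map.empty)
}

val name  = new Key[String]("name")
val speak = new Key[() => String]("speak")

val dog = HMap.empty.updated(name, "Rex").updated(speak, () => "woof")
dog.get(name)                // Some(Rex)
dog.get(speak).map(f => f()) // Some(woof)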
