Akka Multi Node Testing available in Java? - integration-testing

I have read the Akka Java documentation about Multi Node Testing; however, all of the code samples are in Scala. Is there a reason for that? A Google search was unsuccessful as well.
EDIT:
To reduce the tumbleweedness of this question, I did try :). A simple translation to Java of the existing Scala code might look like this:
import java.util.ArrayList;

import org.junit.Test;

import com.typesafe.config.ConfigFactory;

import akka.actor.Address;
import akka.actor.ExtendedActorSystem;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent.MemberEvent;
import akka.cluster.ClusterEvent.MemberUp;
import akka.remote.testconductor.RoleName;
import akka.remote.testkit.MultiNodeConfig;
import akka.remote.testkit.MultiNodeSpec;
import scala.collection.JavaConversions;
import scala.collection.Seq;

public class ClusterTest {
    protected RoleName first;
    protected RoleName second;
    protected RoleName third;

    @Test
    public void SimpleClusterListenerClusterJoinTest() throws Exception {
        new MultiNodeSpec(new MultiNodeConfig() {{
            first = this.role("first");
            second = this.role("second");
            third = this.role("third");
            this.commonConfig(ConfigFactory.parseString(
                "akka.crdt.convergent.leveldb.destroy-on-shutdown = on\n" +
                "akka.actor.provider = akka.cluster.ClusterActorRefProvider\n" +
                "akka.cluster.auto-join = off\n" +
                "akka.cluster.auto-down = on\n" +
                "akka.loggers = [\"akka.testkit.TestEventListener\"]\n" +
                "akka.loglevel = INFO\n" +
                "akka.remote.log-remote-lifecycle-events = off"));
        }}) {
            {
                Address firstAddress = node(first).address();
                @SuppressWarnings("serial")
                ArrayList<RoleName> firstnode = new ArrayList<RoleName>() {{
                    add(first);
                }};
                Seq<RoleName> fisrtnodeseq = (Seq<RoleName>) JavaConversions.asScalaBuffer(firstnode).toList();
                runOn(fisrtnodeseq, null);
                Cluster cluster = new Cluster((ExtendedActorSystem) system());
                cluster.join(firstAddress);
                // verify that single node becomes member
                cluster.subscribe(testActor(), MemberEvent.class);
                expectMsg(MemberUp.class);
            }

            @Override
            public int initialParticipants() {
                return roles().size();
            }
        };
    }
}
HOWEVER, during the run with the arguments:
-Dmultinode.max-nodes=4 -Dmultinode.host=127.0.0.1 etc., according to Multi Node Testing (if I list all of the arguments here the editor heavily complains :[ ), I get the following error:
java.lang.IllegalArgumentException: invalid ActorSystem name [ClusterTest_2], must contain only word characters (i.e. [a-zA-Z0-9] plus non-leading '-')
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:497)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
at akka.remote.testkit.MultiNodeSpec.<init>(MultiNodeSpec.scala:252)
at com.akkamint.demo.ClusterTest$2.<init>(ClusterTest.java:51)
Is the internally generated ActorSystem name wrong?
Besides this, I have two questions:
How can I access the gossip from Java, as in this Scala code:
awaitCond(Cluster(system).latestGossip.members.exists(m ⇒ m.address == firstAddress && m.status == Up))
I have not found any way to implement the same in Java. My workaround is to subscribe to member events (see above), but I do not know whether this is effectively the same or not.
What is the thunk function (the second argument of the runOn method), and how can I use it?
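My current guess is that this thunk corresponds to the second (by-name) parameter list of the Scala signature runOn(nodes: RoleName*)(thunk: => Unit), which the compiler exposes to Java as a scala.Function0. If that is right, the code meant to run only on the selected nodes could be passed like this (untested sketch, reusing the names from the code above):
runOn(fisrtnodeseq, new scala.runtime.AbstractFunction0<scala.runtime.BoxedUnit>() {
    @Override
    public scala.runtime.BoxedUnit apply() {
        // runs only on the "first" role; firstAddress would have to be final to be captured here
        Cluster cluster = new Cluster((ExtendedActorSystem) system());
        cluster.join(firstAddress);
        return scala.runtime.BoxedUnit.UNIT;
    }
});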

Related

Re-trigger scheduled instances with SAP BO API?

I want to re-trigger all the failed schedules using a Java jar file on the CMS.
Just for testing, I wrote the program below, which I suppose should re-trigger a certain schedule that completed successfully.
Please help me find where I went wrong, since it shows success when I run this program on the CMS, but the schedule doesn't get triggered.
public class Schedule_CRNA implements IProgramBase {
    public void run(IEnterpriseSession enterprisesession, IInfoStore infostore, String str[]) throws SDKException {
        //System.out.println("Connected to " + enterprisesession.getCMSName() + "CMS");
        //System.out.println("Using the credentials of " + enterprisesession.getUserInfo().getUserName() );
        IInfoObjects oInfoObjects = infostore.query("SELECT * from CI_INFOOBJECTS WHERE si_instance=1 and si_schedule_status=1 and SI_ID=9411899");
        for (int x = 0; x < oInfoObjects.size(); x++) {
            IInfoObject oI = (IInfoObject) oInfoObjects.get(x);
            IInfoObjects oScheds = infostore.query("select * from ci_infoobjects,ci_appobjects where si_id = " + oI.getID());
            IInfoObject oSched = (IInfoObject) oScheds.get(0);
            Integer iOwner = (Integer) oI.properties().getProperty("SI_OWNERID").getValue();
            oSched.getSchedulingInfo().setScheduleOnBehalfOf(iOwner);
            oSched.getSchedulingInfo().setRightNow(true);
            oSched.getSchedulingInfo().setType(CeScheduleType.ONCE);
            infostore.schedule(oScheds);
            oI.deleteNow();
        }
    }
}
It seems you missed putting your retrieved scheduled object into a collection.
The last part of your snippet should be:
oSched.getSchedulingInfo().setScheduleOnBehalfOf(iOwner);
oSched.getSchedulingInfo().setRightNow(true);
oSched.getSchedulingInfo().setType(CeScheduleType.ONCE);
IInfoObjects objectsToSchedule = infostore.newInfoObjectCollection();
objectsToSchedule.add(oI);
infostore.schedule(objectsToSchedule);
oI.deleteNow();
You cannot schedule a report directly, only through a collection. Full sample is here.
Also, your code deletes the object from the repository and reschedules it again with each iteration, which seems odd, but that is up to your requirements.
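For clarity, this is how your whole run method would read with that change folded in; just a sketch stitching together your original loop and the fragment above, with nothing else changed:
public class Schedule_CRNA implements IProgramBase {
    public void run(IEnterpriseSession enterprisesession, IInfoStore infostore, String str[]) throws SDKException {
        IInfoObjects oInfoObjects = infostore.query("SELECT * from CI_INFOOBJECTS WHERE si_instance=1 and si_schedule_status=1 and SI_ID=9411899");
        for (int x = 0; x < oInfoObjects.size(); x++) {
            IInfoObject oI = (IInfoObject) oInfoObjects.get(x);
            IInfoObjects oScheds = infostore.query("select * from ci_infoobjects,ci_appobjects where si_id = " + oI.getID());
            IInfoObject oSched = (IInfoObject) oScheds.get(0);
            Integer iOwner = (Integer) oI.properties().getProperty("SI_OWNERID").getValue();
            oSched.getSchedulingInfo().setScheduleOnBehalfOf(iOwner);
            oSched.getSchedulingInfo().setRightNow(true);
            oSched.getSchedulingInfo().setType(CeScheduleType.ONCE);
            // schedule through a freshly created collection instead of the query result
            IInfoObjects objectsToSchedule = infostore.newInfoObjectCollection();
            objectsToSchedule.add(oI);
            infostore.schedule(objectsToSchedule);
            oI.deleteNow();
        }
    }
}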

Create nested map from key in groovy

I'm relatively new to groovy and am using it in the context of a gradle build. So please don't be harsh if there is an easy out-of-the-box solution for this.
Basically I'm trying to accomplish the reverse of Return Nested Key in Groovy. That is, I have some keys read from the System.properties map, for example user.home, and corresponding values like C:\Users\dpr. Now I want to create a map that reflects this structure to use it in a groovy.text.SimpleTemplateEngine as bindings:
[user : [home : 'C:\Users\dpr']]
The keys may define an arbitrarily deep hierarchy. For example, java.vm.specification.vendor=Oracle Corporation should become:
[java : [vm : [specification : [vendor : 'Oracle Corporation']]]]
Additionally there are properties with the same parents such as user.name=dpr and user.country=US:
[
user: [
name: 'dpr',
country: 'US'
]
]
Edit: While ConfigSlurper is really nice, it is somewhat too defensive with creating the nested maps as it stops nesting at the minimum depth of a certain key.
I currently ended up using this
def bindings = [:]
System.properties.sort().each {
    def map = bindings
    def split = it.key.split("\\.")
    for (int i = 0; i < split.length; i++) {
        def part = split[i];
        // There is already a property value with the same parent
        if (!(map instanceof Map)) {
            println "Skipping property ${it.key}"
            break;
        }
        if (!map.containsKey(part)) {
            map[part] = [:]
        }
        if (i == split.length - 1) {
            map[part] = it.value
        } else {
            map = map[part]
        }
    }
    map = it.value
}
With this solution, the properties file.encoding.pkg, java.vendor.url and java.vendor.url.bug are discarded, which is not nice but something I can cope with.
However, the above code is not very Groovy-ish.
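For comparison, the same splitting-and-nesting idea written out in plain Java would be roughly the following (purely an illustrative sketch with a hypothetical nest helper, not something I actually use in the build):
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;

public class NestedProperties {

    // Builds a nested map from dotted property keys, e.g. user.home -> [user: [home: ...]].
    // Keys whose parent path already holds a plain value (e.g. java.vendor.url.bug after
    // java.vendor.url) are skipped, mirroring the Groovy attempt above.
    @SuppressWarnings("unchecked")
    static Map<String, Object> nest(Properties props) {
        Map<String, Object> root = new LinkedHashMap<>();
        for (String key : props.stringPropertyNames()) {
            String[] parts = key.split("\\.");
            Map<String, Object> current = root;
            boolean skip = false;
            for (int i = 0; i < parts.length - 1; i++) {
                Object child = current.computeIfAbsent(parts[i], k -> new LinkedHashMap<String, Object>());
                if (!(child instanceof Map)) {
                    skip = true; // parent is already a leaf value
                    break;
                }
                current = (Map<String, Object>) child;
            }
            if (!skip) {
                current.put(parts[parts.length - 1], props.getProperty(key));
            }
        }
        return root;
    }

    public static void main(String[] args) {
        System.out.println(nest(System.getProperties()).get("user"));
    }
}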
You can use a ConfigSlurper:
def conf = new ConfigSlurper().parse(System.properties)
println conf.java.specification.version

ScalikeJDBC + SQLite: Cannot change read-only flag after establishing a connection

Trying to get ScalikeJDBC and SQLite working together. I have a simple piece of code based on the provided examples:
import scalikejdbc._, SQLInterpolation._

object Test extends App {
  Class.forName("org.sqlite.JDBC")
  ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)
  implicit val session = AutoSession
  println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply())
}
It fails with an exception:
Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and SQLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...
I've tried both scalikejdbc 1.7 and 2.0; the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".
What can I do to fix the error?
The following will create two separate connection pools, one for read-only operations and the other for writes.
ConnectionPool.add("mydb", s"jdbc:sqlite:${db.getAbsolutePath}", "", "")
ConnectionPool.add(
"mydb_ro", {
val conf = new SQLiteConfig()
conf.setReadOnly(true)
val source = new SQLiteDataSource(conf)
source.setUrl(s"jdbc:sqlite:${db.getAbsolutePath}")
new DataSourceConnectionPool(source)
}
)
I found that the reason is that you're using "org.xerial" % "sqlite-jdbc" % "3.7.15-M1". This version looks still unstable.
Use "3.7.2" as same as #kawty.
Building on #Synesso's answer, I expanded it slightly to be able to get config values from config files and to set connection settings:
import javax.sql.DataSource

import org.sqlite.{ SQLiteConfig, SQLiteDataSource }

import scalikejdbc._
import scalikejdbc.config.TypesafeConfigReader

case class SqlLiteDataSourceConnectionPool(source: DataSource,
                                           override val settings: ConnectionPoolSettings)
  extends DataSourceConnectionPool(source)

// read settings for 'default' database
val cpSettings = TypesafeConfigReader.readConnectionPoolSettings()
val JDBCSettings(url, user, password, driver) = TypesafeConfigReader.readJDBCSettings()

// use those to create two connection pools
ConnectionPool.add("db", url, user, password, cpSettings)
ConnectionPool.add(
  "db_ro", {
    val conf = new SQLiteConfig()
    conf.setReadOnly(true)
    val source = new SQLiteDataSource(conf)
    source.setUrl(url)
    SqlLiteDataSourceConnectionPool(source, cpSettings)
  }
)

// example using 'NamedDB'
val name: Option[String] = NamedDB("db_ro") readOnly { implicit session =>
  sql"select name from users where id = $id".map(rs => rs.string("name")).single.apply()
}
This worked for me with org.xerial/sqlite-jdbc 3.28.0:
String path = ...
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
return DriverManager.getConnection("jdbc:sqlite:" + path, config.toProperties());
Interestingly, I wrote a different solution on the issue on the xerial repo:
PoolProperties props = new PoolProperties();
props.setDriverClassName("org.sqlite.JDBC");
props.setUrl("jdbc:sqlite:...");
Properties extraProps = new Properties();
extraProps.setProperty("open_mode", SQLiteOpenMode.READONLY.flag + "");
props.setDbProperties(extraProps);
// This line can be left in or removed; it no longer causes a problem
// as long as the open_mode code is present.
props.setDefaultReadOnly(true);
return new DataSource(props);
I don't recall why I needed the second, and was then able to simplify it back to the first one. But if the first doesn't work, you might try the second. It uses a SQLite-specific open_mode flag that then makes it safe (but unnecessary) to use the setDefaultReadOnly call.
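Either way, once you have a DataSource or a Connection, the rest is plain JDBC. A minimal read sketch (the kv table and its value column are only assumptions borrowed from the question):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

class ReadOnlyExample {
    // Plain JDBC against the read-only pool; the query is the one from the question,
    // and the "value" column name is an assumption.
    static String readSeq(DataSource ds) throws SQLException {
        try (Connection conn = ds.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM kv WHERE key == 'seq' LIMIT 1")) {
            return rs.next() ? rs.getString("value") : null;
        }
    }
}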

remove duplicates from collection

I want to get a list of Locales that each have a different language, where the ISO3 code is used to identify the language of a Locale. I thought the following should work:
class ISO3LangComparator implements Comparator<Locale> {
    int compare(Locale locale1, Locale locale2) {
        locale1.ISO3Language <=> locale2.ISO3Language
    }
}

def allLocales = Locale.getAvailableLocales().toList()
def uniqueLocales = allLocales.unique {new ISO3LangComparator()}

// Test how many locales there are with iso3 code 'ara'
def arabicLocaleCount = uniqueLocales.findAll {it.ISO3Language == 'ara'}.size()

// This assertion fails
assert arabicLocaleCount <= 1
You are using the wrong syntax: you are using Collection.unique(Closure closure):
allLocales.unique {new ISO3LangComparator()}
You should use Collection.unique(Comparator comparator)
allLocales.unique (new ISO3LangComparator())
So simply use () instead of {}, and your problem is solved.
What Adam said.
Or...
allLocales.unique { it.ISO3Language }
and you can forget about the comparator.

Dynamic properties in Scala

Does Scala support something like dynamic properties? Example:
val dog = new Dynamic // Dynamic does not define 'name' nor 'speak'.
dog.name = "Rex" // New property.
dog.speak = { "woof" } // New method.
val cat = new Dynamic
cat.name = "Fluffy"
cat.speak = { "meow" }
val rock = new Dynamic
rock.name = "Topaz"
// rock doesn't speak.
def test(animal: Any) = {
  animal.name + " is telling " + animal.speak()
}
test(dog) // "Rex is telling woof"
test(cat) // "Fluffy is telling meow"
test(rock) // "Topaz is telling null"
What is the closest thing from it we can get in Scala? If there's something like "addProperty" which allows using the added property like an ordinary field, it would be sufficient.
I'm not interested in structural type declarations ("type safe duck typing"). What I really need is to add new properties and methods at runtime, so that the object can be used by a method/code that expects the added elements to exist.
Scala 2.9 will have a specially handled Dynamic trait that may be what you are looking for.
This blog has a bit about it: http://squirrelsewer.blogspot.com/2011/02/scalas-upcoming-dynamic-capabilities.html
I would guess that in the invokeDynamic method you will need to check for "name_=", "speak_=", "name" and "speak", and you could store values in a private map.
I cannot think of a reason to really need to add/create methods/properties dynamically at run-time unless dynamic identifiers are also allowed -and/or- there is a magical binding to an external dynamic source (JRuby or JSON are two good examples).
Otherwise, the example posted can be implemented entirely with the existing static typing in Scala via "anonymous" types and structural typing. Anyway, I'm not saying that "dynamic" wouldn't be convenient (and as 0__ pointed out, it is coming -- feel free to "go edge" ;-).
Consider:
val dog = new {
  val name = "Rex"
  def speak = { "woof" }
}

val cat = new {
  val name = "Fluffy"
  def speak = { "meow" }
}

// Rock not shown here -- because it doesn't speak it won't compile
// with the following unless it stubs in. In both cases it's an error:
// the issue is when/where the error occurs.
def test(animal: { val name: String; def speak: String }) = {
  animal.name + " is telling " + animal.speak
}

// However, we can take in the more general type { val name: String } and try to
// invoke the possibly non-existent property, albeit in a hackish sort of way.
// Unfortunately pattern matching does not work with structural types AFAIK :(
val rock = new {
  val name = "Topaz"
}

def test2(animal: { val name: String }) = {
  animal.name + " is telling " + (try {
    animal.asInstanceOf[{ def speak: String }].speak
  } catch { case _ => "{very silently}" })
}

test(dog)
test(cat)
// test(rock) -- no! will not compile (a good thing)

test2(dog)
test2(cat)
test2(rock)
However, this method can quickly get cumbersome (to "add" a new attribute one would need to create a new type and copy over the current data into it) and is partially exploiting the simplicity of the example code. That is, it's not practically possible to create true "open" objects this way; in the case for "open" data a Map of sorts is likely a better/feasible approach in the current Scala (2.8) implementation.
Happy coding.
First off, as #pst pointed out, your example can be implemented entirely using static typing; it doesn't require dynamic typing.
Secondly, if you want to program in a dynamically typed language, program in a dynamically typed language.
That being said, you can actually do something like that in Scala. Here is a simplistic example:
class Dict[V](args: (String, V)*) extends Dynamic {
  import scala.collection.mutable.Map

  private val backingStore = Map[String, V](args: _*)

  def typed[T] = throw new UnsupportedOperationException()

  def applyDynamic(name: String)(args: Any*) = {
    val k = if (name.endsWith("_=")) name.dropRight(2) else name
    if (name.endsWith("_=")) backingStore(k) = args.head.asInstanceOf[V]
    backingStore.get(k)
  }

  override def toString() = "Dict(" + backingStore.mkString(", ") + ")"
}

object Dict {
  def apply[V](args: (String, V)*) = new Dict(args: _*)
}
val t1 = Dict[Any]()
t1.bar_=("quux")
val t2 = new Dict("foo" -> "bar", "baz" -> "quux")
val t3 = Dict("foo" -> "bar", "baz" -> "quux")
t1.bar // => Some(quux)
t2.baz // => Some(quux)
t3.baz // => Some(quux)
As you can see, you were pretty close, actually. Your main mistake was that Dynamic is a trait, not a class, so you can't instantiate it; you have to mix it in. And you obviously have to actually define what you want it to do, i.e. implement typed and applyDynamic.
If you want your example to work, there are a couple of complications. In particular, you need something like a type-safe heterogeneous map as a backing store. Also, there are some syntactic considerations. For example, foo.bar = baz is only translated into foo.bar_=(baz) if foo.bar_= exists, which it doesn't, because foo is a Dynamic object.
