ResultSetJdbcGpsDevice & @SearchableComponent - compass-lucene

I'm having an issue with setting up @SearchableComponent objects using org.compass.gps.device.jdbc.ResultSetJdbcGpsDevice and mapping.TableToResourceMapping.
Trying to test this, I set up two tables that map to objects (ignore the id thing unless that's actually causing this issue):
a
id (pk) | name | create_time | update_time
------------------------------------------
b
id (pk|fk) | property1 | property2 | property3
----------------------------------------------
A and B are defined as such:
@Searchable(root=true)
public class A {
@SearchableId
private int id;
@SearchableProperty
private String name;
@SearchableProperty
private long create_time;
@SearchableProperty
private long update_time;
@SearchableComponent
private B b;
// getters, setters
}
@Searchable(root=false)
public class B {
@SearchableId
private int id;
@SearchableProperty
private String property1;
@SearchableProperty
private long property2;
@SearchableProperty
private String property3;
// getters, setters
}
Here's how I'm setting everything up:
conf = new CompassConfiguration()
.setSetting(CompassEnvironment.CONNECTION, "target/index")
.addClass(A.class)
.addClass(B.class);
compass = conf.buildCompass();
ResultSetJdbcGpsDevice jdbc = new ResultSetJdbcGpsDevice();
jdbc.setDataSource(ds);
jdbc.setName("sql-device");
jdbc.setMirrorDataChanges(true);
TableToResourceMapping mappingA = new TableToResourceMapping("a", "A");
mappingA.addIdMapping(new IdColumnToPropertyMapping("id", "id"));
mappingA.addDataMapping(new DataColumnToPropertyMapping("name", "name"));
mappingA.addDataMapping(new DataColumnToPropertyMapping("create_time", "create_time"));
mappingA.addDataMapping(new DataColumnToPropertyMapping("update_time", "update_time"));
TableToResourceMapping mappingB = new TableToResourceMapping("b", "B");
mappingB.addIdMapping(new IdColumnToPropertyMapping("id", "id"));
mappingB.addDataMapping(new DataColumnToPropertyMapping("property1", "property1"));
mappingB.addDataMapping(new DataColumnToPropertyMapping("property2", "property2"));
mappingB.addDataMapping(new DataColumnToPropertyMapping("property3", "property3"));
jdbc.addMapping(mappingA);
jdbc.addMapping(mappingB);
gps = new SingleCompassGps(compass);
gps.addGpsDevice(jdbc);
gps.start();
gps.index();
With this, I get the following error from Compass:
java.lang.IllegalStateException: {sql-device}: No resource mapping defined in gps mirror compass for alias [B]. Did you defined a jdbc mapping builder?
at org.compass.gps.device.jdbc.ResultSetJdbcGpsDevice.doStart(ResultSetJdbcGpsDevice.java:127)
at org.compass.gps.device.AbstractGpsDevice.start(AbstractGpsDevice.java:124)
at org.compass.gps.impl.AbstractCompassGps.start(AbstractCompassGps.java:166)
...
If I set B as root=true it works fine, and I can search for text in fields for both A and B and get stuff back. However, when A is built and returned by the search, its B instance is empty.
If I search for property2:0 then I retrieve instances of B that map to rows where that column is 0, but I ALSO retrieve every instance of A -- because all of their B components are default instances.
How am I supposed to properly set this up? I feel like I might be missing something in the Compass documentation.
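For reference, the searches above are run roughly like this (a minimal sketch with local transaction handling; the alias check is just to illustrate what comes back):
CompassSession session = compass.openSession();
CompassTransaction tx = session.beginTransaction();
CompassHits hits = session.find("property2:0");
for (int i = 0; i < hits.length(); i++) {
    // hits come back under both the "A" and "B" aliases
    System.out.println(hits.resource(i).getAlias());
}
tx.commit();
session.close();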

After trying to figure this out I worked out the following solution:
TableToResourceMapping mapping = new TableToResourceMapping("a", "A");
mapping.setQuery("select * from a join b using (id)");
mapping.addIdMapping(new IdColumnToPropertyMapping("id", "id"));
mapping.addDataMapping(new DataColumnToPropertyMapping("name", "name"));
mapping.addDataMapping(new DataColumnToPropertyMapping("create_time", "create_time"));
mapping.addDataMapping(new DataColumnToPropertyMapping("update_time", "update_time"));
mapping.addDataMapping(new DataColumnToPropertyMapping("property1", "property1"));
mapping.addDataMapping(new DataColumnToPropertyMapping("property2", "property2"));
mapping.addDataMapping(new DataColumnToPropertyMapping("property3", "property3"));
jdbc.addMapping(mapping);
This may not be the right way to do it, but it works. Searching presents me with an instance of A that has the correct data in its fields, including an instance of B that also has the correct data.
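For what it's worth, here is roughly how I read the results back (a minimal sketch; the getter names like getB() and getProperty1() are assumed from the class definitions above):
CompassSession session = compass.openSession();
CompassTransaction tx = session.beginTransaction();
CompassHits hits = session.find("property2:0");
if (hits.length() > 0) {
    A a = (A) hits.data(0); // populated from the joined row
    System.out.println(a.getName() + " -> " + a.getB().getProperty1());
}
tx.commit();
session.close();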

Related

XStream OutOfMemoryError when generating xml

I have a class defined to store configuration data for my application. I want to save instances of this class out to XML, and I am using XStream for this. But I keep getting OutOfMemoryErrors when I try to write an instance.
Here is my class definition:
public class Eol_Target_Variable {
String name;
String alias;
long value;
long default_val;
int size;
int scaling;
int div;
Boolean read_access;
Boolean write_access;
public Eol_Target_Variable(String arg_name, String arg_alias, int arg_value, int arg_size, int arg_scaling,int arg_div)
{
name = arg_name;
alias = arg_alias;
value = arg_value;
default_val = 0;
scaling = arg_scaling;
div = arg_div;
size = arg_size;
read_access = true;
write_access = true;
}
/**
* @return the name
*/
public String getName() {
return name;
}
/**
* @param name the name to set
*/
public void setName(String name) {
this.name = name;
}
...etc for all standard getters and setters
Here is my handler for exporting a single object to xml
public void importConfiguration() {
XStream xstream = new XStream(new DomDriver());
Eol_Target_Variable myvar = new Eol_Target_Variable("jamie", "xtracold", 1977, 16, 1, 1);
String myxml = xstream.toXML(myvar);
System.out.print(myxml);
}
Every time I get "Exception in thread "JavaFX Application Thread" java.lang.OutOfMemoryError: Java heap space" thrown. I cannot see why such a simple class would throw the out of memory error. I have managed to output simple String objects using XStream so the library is working, it is just this custom class that seems to cause problems.
I have also tried to increase the heap allocated at startup with the VM arguments -Xms512m -Xmx1024m but that makes no difference.
Thanks
Jamie
Here is the new class declaration
#XStreamAlias("targetVar")
public class Eol_Target_Variable {
String name;
String alias;
long value;
#XStreamAlias("default")
long default_val;
int size;
int scaling;
int div;
#XStreamOmitField
Node node;
#XStreamOmitField
Boolean read_access;
#XStreamOmitField
Boolean write_access;
public Eol_Target_Variable(String arg_name, String arg_alias, int arg_value, int arg_size, int arg_scaling,int arg_div, Node arg_node)
{
name = arg_name;
alias = arg_alias;
value = arg_value;
default_val = 0;
scaling = arg_scaling;
div = arg_div;
size = arg_size;
node = arg_node;
read_access = true;
write_access = true;
}
I also used a different parser, as the basic DOM driver never handled the massive amount of data. When I changed to the StaxDriver I was at least able to see the streams of text in the debug output as XStream traversed the whole scene graph.
XStream xstream = new XStream(new StaxDriver());
xstream.processAnnotations(Eol_Target_Variable.class);
I don't pretend to fully understand why declaring the class inline causes problems, but I can reason as to why asking XStream to parse a complete Node might cause issues.
If anyone has any experience with XStream and complex data structures that are declared inside a JavaFX app, I would welcome their input.
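For completeness, a hedged sketch of what the export looks like with the annotated class; someNode stands in for whatever JavaFX node the variable wraps and is skipped thanks to @XStreamOmitField:
XStream xstream = new XStream(new StaxDriver());
xstream.processAnnotations(Eol_Target_Variable.class);
Eol_Target_Variable myvar = new Eol_Target_Variable("jamie", "xtracold", 1977, 16, 1, 1, someNode);
String myxml = xstream.toXML(myvar); // <targetVar> element with node/read_access/write_access omitted
System.out.print(myxml);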

JPA why is MapJoin value() path needed

JPA: Map join value() path example:
@Entity
public class Book {
@OneToMany
@MapKey
Map<Long, Chapter> chapters;
}
@Entity
public class Chapter {
@Id Long id;
String name;
}
CriteriaQuery criteriaQuery = criteriaBuilder.createCriteriaQuery(Book.class);
Root root = criteriaQuery.from(Book.class);
MapJoin<Book, Long, Chapter> chapters = (MapJoin)root.join("chapters");
Path chapterName = chapters.get("name")
// or
Path chapterName = chapters.value().get("name");
What is the difference between the last two lines? MapJoin is the join to the map value type, so why is value() needed?
I think your code does not compile.
Anyway - because the join to chapters gives back a MapJoin.
And a Map always consists of a Key and a Value.
So if you want to access the value (i.e. the Chapter) you have to call value().
CriteriaQuery criteriaQuery = criteriaBuilder.createQuery(Book.class);
Root root = criteriaQuery.from(Book.class);
MapJoin join = root.join(Book_.chapters);
Path path = join.value().get(Chapter_.name);
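To make the contrast concrete, here is a hedged sketch using the entities above (em is an assumed EntityManager and "Intro" is a made-up chapter name): key() navigates to the map key (the Long), while value() navigates into the Chapter entity.
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Book> cq = cb.createQuery(Book.class);
Root<Book> book = cq.from(Book.class);
MapJoin<Book, Long, Chapter> chapters = book.joinMap("chapters");
cq.select(book).where(
    cb.equal(chapters.value().get("name"), "Intro"), // the Chapter value
    cb.gt(chapters.key(), 0L)); // the Long map key
List<Book> books = em.createQuery(cq).getResultList();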

why is realm.classToTable count 0 when I get instance of realm object in Android

I have created my own realm db and I set the configuration to new RealmConfiguration.Builder(this).name("mydb.realm").build();
realm = Realm.getInstance(realmConfiguration);
The classToTable count is 0 and the count of rows in the table is also 0. Can someone let me know what I am missing? Thanks.
Did you create your models as subclasses of RealmObject?
Here is a sample that creates some records in the db:
Realm realm = Realm.getInstance(context);
// Define model
public class Dog extends RealmObject {
private String name;
private int age;
// ... Generated getters and setters ...
}
// Create object
Dog dog = new Dog();
dog.setName("Rex");
dog.setAge("1");
// Persist
realm.beginTransaction();
realm.copyToRealm(dog);
realm.commitTransaction();
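As a hedged sanity check (context is assumed to be a valid Android Context), you can read the data back right after the commit to confirm the schema and the row were picked up:
RealmConfiguration config = new RealmConfiguration.Builder(context).name("mydb.realm").build();
Realm realm = Realm.getInstance(config);
long dogCount = realm.where(Dog.class).count(); // should be 1 after the transaction above
RealmResults<Dog> dogs = realm.where(Dog.class).findAll();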

dynamo db query

I have my dynamo db table as follows:
HashKey (xyz), RangeKey (timestamp)
Now for each hash key I have a set of range keys.
Now I want to query based on a set of hash keys, and I want only the most recent value corresponding to each one to be fetched. I don't want to do in-memory sorting and then pick the most recent version.
Can I do this in any way?
The use case is that I will do a bulk get and pass a set of hash keys (say 100), and I want to get one record back for each hash key.
You (currently) can't set constraints on a batch get. See http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/API_BatchGetItems.html
However, for single hash keys, you can set the direction using ScanIndexForward. See http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/API_Query.html for information.
Sample java code:
new QueryRequest().withTableName("table-name")
.withScanIndexForward(false)
.withLimit(1)
.withHashKeyValue(new AttributeValue().withS("hash-key"));
It will not be very efficient though, as you will need to make this call 100 times.
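A hedged sketch of what those repeated calls could look like with the low-level client (credentials and hashKeys are placeholders, and withHashKeyValue matches the older SDK style used above; newer SDKs use key condition expressions instead):
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);
Map<String, Map<String, AttributeValue>> newestByKey = new HashMap<String, Map<String, AttributeValue>>();
for (String hashKey : hashKeys) {
    QueryRequest request = new QueryRequest().withTableName("table-name")
        .withScanIndexForward(false)
        .withLimit(1)
        .withHashKeyValue(new AttributeValue().withS(hashKey));
    QueryResult result = client.query(request);
    if (!result.getItems().isEmpty()) {
        newestByKey.put(hashKey, result.getItems().get(0)); // most recent item for this hash key
    }
}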
Use ScanIndexForward (true for ascending, false for descending), and you can also limit the result using the limit value of the query expression.
Please find below the code where queryPage is used to find the single record.
public EventLogEntitySave fetchLatestEvents(String id) {
EventLogEntitySave entity = new EventLogEntitySave();
entity.setId(id);
DynamoDBQueryExpression<EventLogEntitySave> queryExpression = new DynamoDBQueryExpression<EventLogEntitySave>().withHashKeyValues(entity);
queryExpression.setScanIndexForward(false);
queryExpression.setLimit(1);
List<EventLogEntitySave> result = dynamoDBMapper.queryPage(EventLogEntitySave.class, queryExpression).getResults();
System.out.println("size of records = " + result.size());
return result.get(0);
}
@DynamoDBTable(tableName = "PROD_EA_Test")
public class EventLogEntitySave {
@DynamoDBHashKey
private String id;
private String reconciliationProcessId;
private String vin;
private String source;
}
public class DynamoDBConfig {
@Bean
public AmazonDynamoDB amazonDynamoDB() {
String accesskey = "";
String secretkey = "";
// creating dynamo client
BasicAWSCredentials credentials = new BasicAWSCredentials(accesskey, secretkey);
AmazonDynamoDB dynamo = new AmazonDynamoDBClient(credentials);
dynamo.setRegion(Region.getRegion(Regions.US_WEST_2));
return dynamo;
}
}
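A hedged sketch of wiring the mapper used by fetchLatestEvents above and calling it once per hash key (ids is an assumed collection of the ~100 hash keys):
AmazonDynamoDB client = new DynamoDBConfig().amazonDynamoDB();
DynamoDBMapper dynamoDBMapper = new DynamoDBMapper(client);
List<EventLogEntitySave> latest = new ArrayList<EventLogEntitySave>();
for (String id : ids) {
    latest.add(fetchLatestEvents(id)); // newest record for this id (ScanIndexForward=false, limit 1)
}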

Using HQL to query on a Map's Values

Let's say I have a map (call it myClass.mapElem<Object, Object>) like so:
Key | Val
A   | X
B   | Y
C   | X
I want to write HQL that can query the values such that I can get back all instances of myClass where mapElem's value is 'X' (where 'X' is a fully populated object -- I just don't want to go through each element and say x.e1 = mapElem.e1 and x.e2 = ... etc). I know I can do this for the keys by using where ? in index(myClass.mapElem); I just need the corresponding statement for querying the values!
Thanks in advance...
ETA: Not sure if the syntax makes a difference, but the way I am actually querying this is like so:
select myClass.something from myClass mc join myClass.mapElem me where...
You should use elements(). I tried simulating your example with the following class
@Entity
@Table(name="Dummy")
public class TestClass {
private Integer id;
private Map<String, String> myMap;
@Id
@Column(name="DummyId")
@GeneratedValue(generator="native")
@GenericGenerator(name="native", strategy = "native")
public Integer getId() {
return id;
}
@ElementCollection
public Map<String, String> getMyMap() {
return myMap;
}
public void setId(Integer id) {
this.id = id;
}
public void setMyMap(Map<String, String> myMap) {
this.myMap = myMap;
}
}
And persisted a few instances, which contain maps of a similar structure to what you have in your example (using <String, String> since the values in your example are strings).
The following query gives me all instances of TestClass, where the map contains a specific value
SELECT DISTINCT myClass
FROM TestClass myClass JOIN myClass.myMap myMap
WHERE ? in elements(myMap)
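A minimal sketch of running that query from a Hibernate Session (session and the searched value "X" are assumptions):
List results = session.createQuery(
    "SELECT DISTINCT myClass FROM TestClass myClass JOIN myClass.myMap myMap WHERE :val in elements(myMap)")
    .setParameter("val", "X")
    .list(); // every TestClass whose map contains the value "X"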
In my particular case, I ended up having to use an SQL query. Not ideal, but it worked.
