Collections in QueryDSL projections

I am trying to use a projection to pull in data from an entity and some relations it has. However, the constructor on the projection takes three arguments: a set, an integer, and a string. This all works fine if I don't have the set in there as an argument, but as soon as I add the set, I start getting SQL syntax errors from the generated query.
Here is an example of what I'm working with...
@Entity
public class Resource {
    private Long id;
    private String name;
    private String path;

    @ManyToOne
    @JoinColumn(name = "FK_RENDITION_ID")
    private Rendition rendition;
}
@Entity
public class Document {
    private Long id;
    private Integer pageCount;
    private String code;
}
@Entity
public class Rendition {
    Long id;

    @ManyToOne
    @JoinColumn(name = "FK_DOCUMENT_ID")
    Document doc;

    @OneToMany(mappedBy = "rendition")
    Set<Resource> resources;
}
public class Projection {
    @QueryProjection
    public Projection(Set<Resource> resources, Integer pageCount, String code) {
    }
}
Here is a query like the one I am using (simplified from what I'm actually dealing with):
QRendition rendition = QRendition.rendition;
Projection projection = from(rendition)
    .where(rendition.document().id.eq(documentId)
        .and(rendition.resources.isNotEmpty()))
    .limit(1)
    .singleResult(
        new QProjection(rendition.resources,
            rendition.document().pageCount,
            rendition.document().code));
This query works fine as long as my projection class does not have rendition.resources in it. If I try to add it, I start getting malformed SQL errors (it changes the output SQL so that it starts with this):
select . as col_0_0_
So, I guess my main question here is: how do I include a Set as an object in a projection? Is it possible, or am I just doing something wrong here?

Using collections in projections is unreliable in JPA. It is safer to join the collection and aggregate the results instead.
Querydsl can also be used for result aggregation: http://www.querydsl.com/static/querydsl/3.2.0/reference/html/ch03s02.html#d0e1799
In your case, something like this (document and resource are the generated query types, and groupBy and set are static imports from com.mysema.query.group.GroupBy):
QRendition rendition = QRendition.rendition;
QDocument document = QDocument.document;
QResource resource = QResource.resource;

// transform(groupBy(...).as(...)) returns a map keyed by the group expression
Map<Long, Projection> result = from(rendition)
    .innerJoin(rendition.document, document)
    .innerJoin(rendition.resources, resource)
    .where(document.id.eq(documentId))
    .limit(1)
    .transform(
        groupBy(document.id).as(
            new QProjection(set(resource),
                document.pageCount,
                document.code)));

Related

java DynamoDBMapper - partially mapped entities - amount of Read Capacity Units

Does the Java DynamoDBMapper load whole items when the @DynamoDBTable annotated class maps only a subset of their attributes?
Example: a "Product" table holding items with these attributes: id, name, description. I would like to get the names of several products without loading the description (which would be a huge amount of data).
Does this code load description from DynamoDB?
@DynamoDBTable(tableName = "Product")
public class ProductName {
    private UUID id;
    private String name;

    @DynamoDBHashKey
    @DynamoDBTyped(DynamoDBAttributeType.S)
    public UUID getId() { return id; }
    public void setId(UUID id) { this.id = id; }

    @DynamoDBAttribute
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
...
DynamoDBMapper dynamoDBMapper = ...
dynamoDBMapper.batchLoad(products); // TODO is description loaded? what is the amount of Consumed Read Capacity Units?
As their docs say:
DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that is returned to an application. For this reason, the number of capacity units consumed will be the same whether you request all of the attributes (the default behavior) or just some of them (using a projection expression). The number will also be the same whether or not you use a filter expression.
As you can see, using a projection does not reduce the number of capacity units consumed.
BTW, in your case the description field will be returned anyway, because you do not need to annotate every field with a DynamoDB annotation; only those that are keys, are named differently, or need custom converters. All non-annotated fields will be populated from the corresponding DB attributes automatically.
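If your goal is only to shrink the payload transferred over the wire (not the billed capacity), a projection expression does that. A minimal sketch, assuming the AWS SDK v1 DynamoDBMapper; the table and attribute names come from the question:
// A scan that returns only id and name; "name" is a DynamoDB reserved
// word, so it must go through an expression attribute name.
DynamoDBScanExpression scanExpression = new DynamoDBScanExpression()
        .withProjectionExpression("id, #n")
        .withExpressionAttributeNames(Collections.singletonMap("#n", "name"));

// Consumed read capacity is identical to a full scan; only the payload shrinks.
List<ProductName> names = dynamoDBMapper.scan(ProductName.class, scanExpression);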

How to use transactional DatastoreIO

I’m using DatastoreIO from my streaming Dataflow pipeline and getting an error when writing an entity with the same key.
2016-12-10T22:51:04.385Z: Error: (af00222cfd901860): Exception: com.google.datastore.v1.client.DatastoreException: A non-transactional commit may not contain multiple mutations affecting the same entity., code=INVALID_ARGUMENT
If I use a random number in the key then things work, but I need to update the same key, so is there a transactional way to do this using DatastoreIO?
static class CreateEntityFn extends DoFn<KV<String, Tile>, Entity> {
    private static final long serialVersionUID = 0;
    private final String namespace;
    private final String kind;

    CreateEntityFn(String namespace, String kind) {
        this.namespace = namespace;
        this.kind = kind;
    }

    public Entity makeEntity(String key, Tile tile) {
        Entity.Builder entityBuilder = Entity.newBuilder();
        Key.Builder keyBuilder = makeKey(kind, key);
        if (namespace != null) {
            keyBuilder.getPartitionIdBuilder().setNamespaceId(namespace);
        }
        entityBuilder.setKey(keyBuilder.build());
        entityBuilder.getMutableProperties().put("tile", makeValue(tile.toString()).build());
        return entityBuilder.build();
    }

    @Override
    public void processElement(ProcessContext c) {
        String key = c.element().getKey();
        // this works: key = key.concat(":" + UUID.randomUUID().toString());
        c.output(makeEntity(key, c.element().getValue()));
    }
}
...
...
inputData = pipeline
    .apply(PubsubIO.Read.topic(pubsubTopic));
windowedDataStreaming = inputData
    .apply(Window.<String>into(
        SlidingWindows.of(Duration.standardMinutes(15))
            .every(Duration.standardSeconds(31))));
...
// Create a Datastore entity
PCollection<Entity> siteTileEntities = tileSiteKeyed
    .apply(ParDo.named("CreateSiteEntities")
        .of(new CreateEntityFn(options.getNamespace(), options.getKind())));
// write site tiles to datastore
siteTileEntities
    .apply(DatastoreIO.v1().write().withProjectId(options.getDataset()));
// Run the pipeline
pipeline.run();
Your code snippet doesn't explain how tileSiteKeyed is created. Presumably it's a PCollection<KV<String, Tile>>, but if it might have duplicate String keys, that would explain the issue.
Generally a PCollection<KV<K, V>> may contain multiple KV pairs with the same key. If you'd like to ensure unique keys per window, you can use a GroupByKey to do that. That will give you a PCollection<KV<K, Iterable<V>>> with unique keys per window. Then augment CreateEntityFn to take an Iterable<Tile> and create a single mutation with the changes you need to make.
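A minimal sketch of that approach, assuming the Dataflow 1.x SDK used in the question; CreateEntitiesFn is the question's CreateEntityFn reworked for grouped input, and combineTiles is a hypothetical merge helper:
// Group per key and window so each key maps to exactly one output element.
PCollection<KV<String, Iterable<Tile>>> tilesByKey = tileSiteKeyed
    .apply(GroupByKey.<String, Tile>create());

static class CreateEntitiesFn extends DoFn<KV<String, Iterable<Tile>>, Entity> {
    // constructor and makeEntity as in the original CreateEntityFn

    @Override
    public void processElement(ProcessContext c) {
        // One mutation per key: merge the tiles first, then build the entity.
        Tile merged = combineTiles(c.element().getValue());
        c.output(makeEntity(c.element().getKey(), merged));
    }
}

PCollection<Entity> siteTileEntities = tilesByKey
    .apply(ParDo.named("CreateSiteEntities")
        .of(new CreateEntitiesFn(options.getNamespace(), options.getKind())));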
This error indicates that Cloud Datastore received a Commit request with two mutations for the same key (i.e. it tries to insert the same entity twice or modify the same entity twice).
You can avoid the error by only including one mutation per key per Commit request.

How to optimize realm.allobjects(class) in Android

I want to use Realm to replace SQLite in Android to store a list of classes; my code is very simple, as below.
public class MyRealmObject extends RealmObject {
    private String field;

    public String getField() {
        return field;
    }
    public void setField(String field) {
        this.field = field;
    }
    ...
}

List<MyObject> myObjects = new ArrayList<>();
Realm realm = Realm.getInstance(this);
for (MyRealmObject realmObject : realm.allObjects(MyRealmObject.class)) {
    myObjects.add(new MyObject(realmObject));
}
realm.close();
return myObjects;
However, its performance is actually slower than a simple SQLite table on my test device. Am I using it the wrong way? Are there any optimization tricks?
Why do you want to wrap all your RealmObjects in the MyObject class? Copying the entire result set means you lose the benefit of using Realm, namely that it doesn't copy data unless needed to.
RealmResults implements the List interface, so you should be able to use the two interchangeably.
Realm realm = Realm.getInstance(this);
List<MyRealmObject> myObjects = realm.allObjects(MyRealmObject.class);
return myObjects;
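If you genuinely need detached, unmanaged copies (for example, to hand the list to another thread), a sketch assuming a Realm version that provides copyFromRealm:
Realm realm = Realm.getInstance(this);
// copyFromRealm materializes plain in-memory objects, so you pay the
// copying cost up front; only do this when detachment is required.
List<MyRealmObject> detached = realm.copyFromRealm(realm.allObjects(MyRealmObject.class));
realm.close();
return detached;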

key prefixes with documents in spring-data-couchbase repositories

I commonly use a prefix pattern when storing documents in couchbase. For example a user document might have a key like "user::myusername" and an order document might have a key of "order::1".
I'm new to spring-data and don't see a way to easily make it work with these prefixes. I can specify a field in my object like:
public class UserLogin {
    public static final String dbPrefix = "user_login::";

    @Id
    private String id;
    private String username;
    ...
    public void setUsername(String username) {
        this.username = username;
        this.id = dbPrefix + this.username;
    }
}
and have a Crud repository
public interface UserLoginRepo extends CrudRepository<UserLogin, String> {
}
This is an OK workaround, because I can do:
...
userLoginRepo.save(login)
UserLogin login = userLoginRepo.findOne(UserLogin.dbPrefix + "someuser");
...
It would be really nice if there were some way to have the repository automatically use the prefix behind the scenes.
Not sure if you are asking a question here. Spring Data doesn't have any built-in mechanism to do what you want. If you are using Spring Cache with Couchbase, this is possible, since Spring Cache lets you annotate cached methods with key-naming patterns; for example, on the persistence method:
@Cacheable(value = "myCache", key = "'user_login::'+#id")
public UserLogin getUserLogin()
Obviously, though, this is Spring Cache and not Spring Data; both are, however, supported by the same library.
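On the repository side, a common workaround is to hide the prefix behind a thin service layer. A hypothetical sketch (UserLoginService and its method names are illustrative, not a Spring Data feature):
@Service
public class UserLoginService {
    private final UserLoginRepo repo;

    public UserLoginService(UserLoginRepo repo) {
        this.repo = repo;
    }

    // Callers deal in plain usernames; the prefix stays an implementation detail.
    public UserLogin findByUsername(String username) {
        return repo.findOne(UserLogin.dbPrefix + username);
    }

    public UserLogin save(UserLogin login) {
        return repo.save(login);
    }
}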

How does versioning work with Flex remote objects and AMF?

Suppose I use the [RemoteClass] tag to endow a custom Flex class with serialization intelligence.
What happens when I need to change my object (add a new field, remove a field, rename a field, etc)?
Is there a design pattern for handling this in an elegant way?
Your best bet is to do code generation against your backend classes to generate ActionScript counterparts for them. If you generate a base class with all of your object properties and then create a subclass of it that is never modified, you can still add custom code while regenerating only the parts of your class that change. Example:
java:
public class User {
    public Long id;
    public String firstName;
    public String lastName;
}
as3:
public class UserBase {
    public var id : Number;
    public var firstName : String;
    public var lastName : String;
}

[Bindable] [RemoteClass(...)]
public class User extends UserBase {
    public function getFullName() : String {
        return firstName + " " + lastName;
    }
}
Check out the Granite Data Services project for Java -> AS3 code generation.
http://www.graniteds.org
Adding or removing fields generally works.
You'll get runtime warnings in your trace about properties either being missing or not found, but any data that is transferred and has a place to go will still get there. You need to keep this in mind while developing, as not all your fields might have valid data.
Changing types doesn't work so well and will often result in runtime exceptions.
I like to use explicit data transfer objects rather than persisting the actual data model that's used throughout the app. Your DTO->Model translation can then take version differences into account.
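A hypothetical Java-side illustration of that pattern (the class and field names are illustrative): the wire-format DTO evolves independently, and the translation layer supplies defaults for fields that older clients don't send:
// Domain model used throughout the app (illustrative).
public class User {
    public Long id;
    public String firstName;
    public String lastName;
    public String displayName;
}

// Wire-format DTO: versioned independently of the model.
public class UserDTO {
    public Long id;
    public String firstName;
    public String lastName;
    public String displayName; // added in v2; null when a v1 client sends the object
}

public class UserTranslator {
    public User toModel(UserDTO dto) {
        User user = new User();
        user.id = dto.id;
        user.firstName = dto.firstName;
        user.lastName = dto.lastName;
        // Version tolerance lives here, not in the model:
        user.displayName = dto.displayName != null
                ? dto.displayName
                : dto.firstName + " " + dto.lastName;
        return user;
    }
}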
