Realm model migration strategy

I have run into a problem working with Realm migration blocks and the strategy for migrating realms.
Given an object MyObject with a number of properties:
In version 1 we have the property myProperty
In version 2 we change the property to myPropertyMk2
In version 3 we change the property to myPropertyMk3
Given the following migration block:
private class func getMigrationBlock(realmPath: String) -> RLMMigrationBlock {
    return { migration, oldSchemaVersion in
        if (oldSchemaVersion == RLMNotVersioned) {
            NSLog("No database found when migrating.")
            return
        } else {
            NSLog("Migrating \(realmPath) from version \(oldSchemaVersion) to \(RealmMigrationHelper.CURRENT_DATABASE_VERSION)")
        }
        NSLog("Upgrading MyObject from version %d to %d", oldSchemaVersion, CURRENT_DATABASE_VERSION)
        if (oldSchemaVersion < 2) {
            migration.enumerateObjects(MyObject.className(), block: {
                oldObject, newObject in
                newObject["myPropertyMk2"] = oldObject["myProperty"]
            })
        }
        if (oldSchemaVersion < 3) {
            migration.enumerateObjects(MyObject.className(), block: {
                oldObject, newObject in
                newObject["myPropertyMk3"] = oldObject["myPropertyMk2"]
            })
        }
        NSLog("Migration complete.")
    }
}
When I was at version 2 of the DB this worked just fine (obviously without the oldSchemaVersion < 3 block), but when I introduced version 3 I started getting problems because it does not recognise newObject["myPropertyMk2"] in the oldSchemaVersion < 2 block. If I change it to newObject["myPropertyMk3"] it works just fine.
From reading the RLMMigration code this makes perfectly good sense, as we work with the old schema and the new schema, but based on the documentation on realm.io I do not think it makes sense. I would have expected it to be schema-less.
I have an idea about making a schema-less migration within the block by simply using a dictionary and then finally applying this dictionary to the newObject.
Are there any thoughts on the migration strategy of Realm that deals with this? It is mentioned on Realm's website, but only very briefly.
Thank you :)

Thanks for your question and the report of your issue.
From reading the RLMMigration code this makes perfectly good sense, as we work with the old schema and the new schema, but based on the documentation on realm.io I do not think it makes sense. I would have expected it to be schema-less.
As you correctly recognized from the code in RLMMigration, migrations are not schema-free. The migration closure you provide should handle migrations from any version in the past to the current version. If your user didn't update your app in between and so skipped a version, there is no way Realm could be aware of your intermediate schema versions, as the schema is reflected at runtime. You're generally free to deliberately break backwards compatibility with existing old versions, but you would then need to take care to reset the configuration to a defined state.
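To make this concrete, here is a rough sketch (reusing your object and property names) of how the version-dependent part of the block could look. Since newObject always reflects the current (version 3) schema while oldObject only contains whatever property the old file actually has, each branch reads from the old name and writes to the newest one:
if (oldSchemaVersion < 2) {
    // the old file still has the v1 property; newObject only knows the v3 schema
    migration.enumerateObjects(MyObject.className(), block: {
        oldObject, newObject in
        newObject["myPropertyMk3"] = oldObject["myProperty"]
    })
} else if (oldSchemaVersion < 3) {
    // the old file already has the v2 property
    migration.enumerateObjects(MyObject.className(), block: {
        oldObject, newObject in
        newObject["myPropertyMk3"] = oldObject["myPropertyMk2"]
    })
}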
You're certainly right that this could be better documented. I have created an internal ticket about that.
I have an idea about making a schema-less migration within the block by simply using a dictionary and then finally applying this dictionary to the newObject.
Are there any thoughts on the migration strategy of Realm that deals with this? It is mentioned on Realm's website, but only very briefly.
Depending on your schema and the amount of data you have, you can reorganize it object-wise in memory via a dictionary and then apply it to newObject, as you describe. The current API makes relatively few assumptions and allows an approach like this. But it wouldn't work equally well for everyone, e.g. if you have large lists of related objects.
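As a rough sketch of that dictionary idea, inside the same migration block and under the same assumptions as above (it only really pays off once the per-object transformation is more involved than a single assignment):
migration.enumerateObjects(MyObject.className(), block: {
    oldObject, newObject in
    // collect the values the old schema actually contains
    var values = [String: AnyObject]()
    if (oldSchemaVersion < 2) {
        values["myPropertyMk3"] = oldObject["myProperty"]
    } else if (oldSchemaVersion < 3) {
        values["myPropertyMk3"] = oldObject["myPropertyMk2"]
    }
    // ... reorganize or transform the dictionary as needed ...
    // finally apply the dictionary to the new object
    for (key, value) in values {
        newObject[key] = value
    }
})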

Related

How to know where exactly some data has changed on firebase? (Android Studio)

I am making a project that uses Firebase, with the structure explained in the image. The nodes are like this: userUID -> MAC_addr -> Data. On the Android side I have the code shown below. As you can see, when some data changes it iterates a lot; this is because I don't have the "MAC" string to use, so when something changes I have to fetch all the information. For example, if I had 2 MAC addresses and some data changed on one of them, I would have to fetch the information of the other branch too. How can I improve this?
reference.child(Firebase.auth.uid.toString()).addValueEventListener(object : ValueEventListener {
    override fun onDataChange(snapshot: DataSnapshot) {
        for (i in snapshot.children) {
            // iterates over the MAC addresses
            for (data in i.children) {
                // iterates over the real data
                when (data.key) {
                    "AlarmActivated" -> {}
                    "AlarmOff" -> {}
                    // ...
                }
            }
        }
    }

    // required by ValueEventListener
    override fun onCancelled(error: DatabaseError) {}
})
I have tried other ways, but all of them need to know the entire path. I was thinking of saving the data on the phone (with preferences), but I don't like that approach; I hope there is a better way.
When you listen on a certain path, you can only get information about the direct child node of that path that was changed. There is no specific information about what lower-level properties were changed.
The typical solution for your use-case is to store the previous snapshot that you got, and then compare that property-for-property to any updates in your application code. You can either use local storage for storing the data yourself, or you can use the built-in caching mechanism of the SDK to handle the storage.

Axon Framework Event Package Refactoring

I have a set of events that have been refactored to another package. This works as is until I execute the event replay. Digging deeper I noticed a payloadtype in the domainevententry table and figured changing this would be sufficient, but alas it seems the XML root element of the event needs to be changed as well. I am hoping there is a simple way to do this.
I cannot find any examples on upcasting to different packages or using XStream aliasing, so any help would be greatly appreciated.
Thanks
As you noticed, the default payload type stored in events is the fully qualified class name. This ensures that out of the box serialization and deserialization work as intended. However, moving classes around thus means the payload type can no longer be found, requiring some adjustment to be made.
You could use the EventTypeUpcaster, as referred to in the Reference Guide. The EventTypeUpcaster is dedicated to adjusting the payload type, and thus can also be used to deal with changing package names.
When using (the default) XStreamSerializer, aliasing the tags would indeed also work. How to set aliases can be seen here, for example. As noted in that sample, the alias is added to the XStream instance. The XStreamSerializer uses an XStream instance to support de-/serialization from/to XML. To adjust the XStream instance, you can simply use the builder paradigm on the XStreamSerializer. The JavaDoc of the builder should be specific enough to help you figure out how to use it.
Went the long way round with this, but it seems to work. As always, back up your database before executing large-volume changes. I also restarted the service using the database on completion. Needless to say, I will make sure the events are in logical packages before deploying next time :)
Database Engine: Postgres 10
Table: domainevententry
update domainevententry
set
    payloadtype = '<new.package.Classname>',
    payload = lo_from_bytea(0, decode(REPLACE(
        subquery.output,
        '<old.package.Classname>',
        '<new.package.Classname>'
    ), 'escape'))
from (
    SELECT eventidentifier, payloadtype, encode(lo_get(payload::oid), 'escape') as output
    FROM domainevententry
    WHERE eventidentifier in (
        '<event guid 1>',
        '<event guid 2>'
    )
    AND payloadtype = '<old.package.Classname>'
) as subquery
where domainevententry.eventidentifier = subquery.eventidentifier;
Once that is completed I needed to update the OWNER of the large object:
ALTER LARGE OBJECT <LargeObjectId> OWNER TO database_role;
Probably not the most elegant solution, but given the time constraints I had, it did the job. There are probably encoding issues with this solution for the large object, but it all worked out for me in the end. Feel free to share any optimizations that would make the above more suitable.
Firing off the Axon Framework replays rebuilt the projections and everything lined up.

Where did LoaderService go?

Upgrading AngleSharp from 0.9.6 to 0.9.9 I have this line of code no longer compiling:
return configuration.With(LoaderService(new[] { requester }));
It complains that LoaderService does not exist in the current context. So what happened to LoaderService? Is there a replacement? Does it still exist but just somewhere else?
Good question. Sorry for being late to the party, but even though you may have solved your problem already, someone else may still have a hard time figuring it out.
LoaderService was essentially just a helper to create a loader. But having a service for every little thing that needs creating would be overkill and would not scale well. Also, AngleSharp.Core would need to define all of these. So instead, a generic mechanism was introduced, which allows registering such "creator services" via Func<IBrowsingContext, TService>.
However, to solve your piece of code I guess the following line would do the trick:
return configuration.WithDefaultLoader(requesters: requester);
This registers the default loader creator services (one for documents, one for resources inside documents) with the default options (options involve some middleware etc.).
Under the hood (besides some other things) the following is happening:
// just one example, config.Filter is created based on the passed in options
return configuration.With<IDocumentLoader>(ctx => new DocumentLoader(ctx, config.Filter));

Where should I put logic for querying extra data in the CQRS command flow

I'm trying to implement a simple DDD/CQRS architecture without event-sourcing for now.
Currently I need to write some code for adding a notification to a document entity (a document can have multiple notifications).
I've already created a command NotificationAddCommand, ICommandService and IRepository.
Before inserting a new notification through IRepository I have to query the current user_id from the db using the NotificationAddCommand.User_name property.
I'm not sure how to do it right, because I could either:
Use IQuery from read-flow.
Pass user_name to domain entity and resolve user_id in the repository.
Code:
public class DocumentsCommandService : ICommandService<NotificationAddCommand>
{
    private readonly IRepository<Notification, long> _notificationsRepository;

    public DocumentsCommandService(
        IRepository<Notification, long> notifsRepo)
    {
        _notificationsRepository = notifsRepo;
    }

    public void Handle(NotificationAddCommand command)
    {
        // command.user_id = Resolve(command.user_name) ??
        // command.source_secret_id = Resolve(command.source_id, command.source_type) ??
        foreach (var receiverId in command.Receivers)
        {
            var notificationEntity = _notificationsRepository.Get(0);
            notificationEntity.TargetId = receiverId;
            notificationEntity.Body = command.Text;
            _notificationsRepository.Add(notificationEntity);
        }
    }
}
What if I need more complex logic before inserting? Is it ok to use IQuery or should I create additional services?
The idea of reusing your IQuery somewhat defeats the purpose of CQRS in the sense that your read-side is supposed to be optimized for pulling data for display/query purposes - meaning that it can be denormalized, distributed etc. in any way you deem necessary without being restricted by - or having implications for - the command side (a key example being that it might not be immediately consistent, while your command side obviously needs to be for integrity/validity purposes).
With that in mind, you should look to implement a contract for your write side that will resolve the necessary information for you. Driving from the consumer, that might look like this:
public DocumentsCommandService(IRepository<Notification, long> notifsRepo,
    IUserIdResolver userIdResolver)

public interface IUserIdResolver
{
    string ByName(string username);
}
With IUserIdResolver implemented as appropriate.
Of course, if both this and the query-side use the same low-level data access implementation (e.g. an immediately-consistent repository) that's fine - what's important is that your architecture is such that if you need to swap out where your read side gets its data for the purposes of, e.g. facilitating a slow offline process, your read and write sides are sufficiently separated that you can swap out where you're reading from without having to untangle reads from the writes.
Ultimately the most important thing is to know why you are making the architectural decisions you're making in your scenario - then you will find it much easier to make these sorts of decisions one way or another.
In a project I'm working on I have similar issues. I see 3 options to solve this problem:
1) What I did was make a UserCommandRepository that has a query option. Then you would inject that repository into your service.
Since the few queries I did need were so simplistic (just returning single values) it seemed like a fine tradeoff in my case.
2) Another way of handling it is by forcing the user to just raise a command with the user_id. Then you can let him do the querying.
3) A third option is to ask yourself why you need a user_id. If it's to make some relations when querying the data, you could also handle this when querying the data (or when propagating your write DB to your read DB).

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. Now that's fine using the this.added function in the publish function.
My problem is that I want to treat the bogus doc as if it were a real document, specifically this gets troublesome when I want to update it. For the real docs I run a RealDocs.update but when doing that on the bogus doc it fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement and I'm not too fond of modifying the core code.
Right now I'm stuck with creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the Evented Mind site) it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection. MyCollection.update() would thus become MyCollection._collection.update(). So, using a simple wrapper, one can pass in the usual arguments to an update call to update the collection as usual (which will try to call the server, which in turn will trigger your allow/deny rules), or add 'local' as the last argument to only perform the update in the client collection. Something like this should do it:
DocsUpdateWrapper = function() {
  var args = Array.prototype.slice.call(arguments);
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // only update the client-side (minimongo) collection; no server call
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // regular update, goes to the server and through your allow/deny rules
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too.) (I didn't try this function yet, but it should serve well as an example.)
The biggest benefit of this is, imo, that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are only local or live on the server too. By adding a simple boolean to the doc we can keep track of which documents are only local and which are not (an improved DocsWrapper could check for that bool so we could even omit passing the 'local' argument), so we know how to update them.
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update rather than a general one.
You can put the code from this.added into the transform process.
You can also set up a local minimongo collection and insert on a callback:
#FoundAgents = new Meteor.Collection(null, Agent.transformData)
FoundAgents.remove({})
Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify err, null, 2
  else
    _.each data, (item) ->
      FoundAgents.insert item
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().
