How to read documents from Change Feed in Azure Cosmos DB since last checkpoint after restart? - azure-cosmosdb

I am using the Change Feed processor library to read the Change Feed on a partitioned collection, and below is how I have configured it. I am using mostly the default options.
ChangeFeedProcessorOptions feedProcessorOptions = new ChangeFeedProcessorOptions
{
    LeaseRenewInterval = TimeSpan.FromSeconds(15),
};
var docObserverFactory = DocumentFeedObserverFactory.Create(this.destinationCollectionInfo, this.dbRepository);
this.builder
    .WithHostName(hostName)
    .WithFeedCollection(this.monitoredCollectionInfo)
    .WithLeaseCollection(this.leaseCollectionInfo)
    .WithProcessorOptions(feedProcessorOptions)
    .WithObserverFactory(docObserverFactory);
This runs fine as long as the Change Feed application is running: documents inserted or updated in the collection are picked up as expected.
The problem happens when I stop the Change Feed app for some time, insert/update a few documents in the collection, and then start the app again: it doesn't pick up the changes from where it left off. The changes made while the app was stopped are lost. But when I set the flag StartFromBeginning to true, it picks up everything from the start, including the changes made while the app was stopped.
My understanding of reading from the current point (StartFromBeginning set to false) is that the Change Feed resumes from where it last left off. But that doesn't seem to happen. Please help.

There are two ways to continue from exactly where you left off.
The first, and more accurate, one is to store the continuation token of the last thing you read. That way you can specify it when you start again, and it wins over both the StartTime and StartFromBeginning settings.
The second is to provide the StartTime property, which will try to find the continuation token of a given time automatically. It has an approximate 5-second precision, so there is a chance that you might miss some documents.
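A minimal sketch of both options on ChangeFeedProcessorOptions, assuming the v2 Change Feed Processor library used in the question; LoadLastContinuationToken() is a hypothetical helper that reads back a token you persisted yourself:

ChangeFeedProcessorOptions feedProcessorOptions = new ChangeFeedProcessorOptions
{
    LeaseRenewInterval = TimeSpan.FromSeconds(15),
    // Option 1: resume from a persisted continuation token.
    // Wins over both StartTime and StartFromBeginning.
    StartContinuation = LoadLastContinuationToken(),
    // Option 2: resume from a wall-clock time (about 5-second precision).
    // StartTime = DateTime.UtcNow.AddHours(-1),
};

Note that these start options only take effect for leases that do not yet have a checkpoint in the lease collection; once the processor has checkpointed, the lease's own continuation wins.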

Related

Function getUnreadCount is not updated at runtime - Mesibo iOS SDK

In the chat history (summary) page of my app, I'm using the function getUnreadCount() on MesiboProfile to get the number of currently unread messages, so that I can show an indicator near the message.
The problem is that the count is only correct the first time I read the summary from the read session. If a new message arrives after I have already read the summary, the count is not updated.
I saw that the counter gets fixed if I read the summary again, but is this the recommended way to update the counter?
I'm using the iOS SDK v1.9.55.
In 1.x, the unread count can be updated manually: set the unread count to zero once you read it, or increment it every time you receive a new message. This avoids database access. Here is the 1.x code which does the same:
https://github.com/mesibo/ui-modules-ios/blob/master/Messaging/Messaging/UserListViewController.m#L474
Update: you can also use getUnreadMessageCount() on the user or group read session (not the summary session) to get it from the database.
In 2.x, we have moved this into the API with additional logic.
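A minimal sketch of the 1.x manual bookkeeping described above, in Swift. All names here are hypothetical placeholders rather than Mesibo API; wire the increment into your message-received callback and the reset into wherever the summary row is displayed:

// Hypothetical helper, not part of the Mesibo SDK.
class UnreadCounter {
    // Per-conversation counts; the key is a peer or group identifier.
    private var counts: [String: Int] = [:]

    // Call from your message-received callback.
    func didReceiveMessage(in conversation: String) {
        counts[conversation, default: 0] += 1
    }

    // Call when the summary row is read: returns the count and zeroes it.
    func consumeUnreadCount(for conversation: String) -> Int {
        let n = counts[conversation] ?? 0
        counts[conversation] = 0
        return n
    }
}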

Can't Edit/Update Certain Items In Database (Table)

I have a database that has multiple orders entered into it. Everything seems to be working fine except for a few old entries, which will not accept updates/changes to their fields.
Note: the majority of the fields are Strings with Possible Values entered via a dropdown box.
So if I open Order A, I can make adjustments just fine, and those changes persist even after closing the page and coming back or refreshing.
But if I open Order B, I can make changes via the dropdowns and it looks like they have been applied; however, if I leave the page or refresh, all the changes revert.
One piece of info that may be helpful: each of these orders has at least one field containing an entry that is no longer a Possible Value (the original entries were removed/changed at the client's request).
Maybe they are "locked" because of this? Is there a way to look at an error log for a published app?
I can delete the "corrupt" entries and recreate them (since there are currently only a few), but I would prefer to find a better solution in case this happens again in the future.
Any help would be greatly appreciated.
It's a bug; such field-level value updates should go through.
As a workaround, you can replace the prohibited (no longer possible) values with allowed ones in the model's onSave event, like:
switch (record.Field) {
  case "old_value_1":
    record.Field = "new_value_1";
    break;
  case "old_value_2":
    record.Field = "new_value_2";
    break;
  ...
}
Sorry for the inconvenience.
Each deployment has its own log. Have you tried "App Settings > DEPLOYMENTS > (click on the deployment) > VIEW LOGS"?

Elasticsearch bulk update followed by search

On my server, I update some documents using the bulk API:
{"update":{"_type":"post","_retry_on_conflict":"3","_index":"xxxx","_id":"yyyy"}}
{"doc":{"sentiment":"positive","mood":1,"upgrade":true}}
After I get the response I make a new request for the same document using search:
{"query":{"filtered":{"filter":{"ids":{"values":["yyyy"]}}}}}
But the returned document doesn't have the updated value (it still has the old one). If I wait for some time, the updated value appears. I think that occurs because bulk is async? Is there any way to fix this?
You can use the refresh API to force an index refresh, or even add ?refresh=true to the end of the bulk command, though this is normally not recommended. Also, if there is more than one node, you may need to use a synced flush.
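The delay you are seeing is most likely Elasticsearch's near-real-time refresh interval (1 second by default) rather than the bulk call itself being asynchronous. For illustration, here is the same bulk request with the flag appended, followed by the explicit per-index refresh, in console syntax:

POST /_bulk?refresh=true
{"update":{"_type":"post","_retry_on_conflict":"3","_index":"xxxx","_id":"yyyy"}}
{"doc":{"sentiment":"positive","mood":1,"upgrade":true}}

POST /xxxx/_refresh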

Why does the url property key in Firebase snapshot keep changing?

I have not seen any discussion or awareness so far that Firebase does in fact make available a unique identifier (in fact the full URL) for each specific data record via the "snapshot" it returns, i.e. the wrapper around the data record (accessed via snapshot.val()). By doing a basic property examination of the snapshot, I discovered that the unique URL is available (see examples below). However, it seems that, for some reason, Firebase keeps changing the name of the key every few days, causing my application to break. I have to go in, re-discover the new URL property key, and change my code so that it works again.
Here are three examples of how I have seen the key change so far. The value is the same each time, but the key keeps changing (i.e. "Wb", "Xb", "bc"):
getMemberBySnapshot - snapshot has prop Wb with value https://prototype1.firebaseio.com/users/-IwohKfw1l5F3gFqyJJ5
getMemberBySnapshot - snapshot has prop Xb with value https://prototype1.firebaseio.com/users/-IwohKfw1l5F3gFqyJJ5
getMemberBySnapshot - snapshot has prop bc with value https://prototype1.firebaseio.com/users/-IwohKfw1l5F3gFqyJJ5
I have read Firebase's suggestions that developers should use an email address if they want a unique key (what if my model does not use an email field? What if a user wants to change their email?), or, alternatively, to retrieve all existing records and then search through them on the client. Neither of these solutions is satisfying. But I'm seeing that they do provide the unique URL to each data record in the snapshot. Why do they not provide a stable key that a developer can rely on consistently?
Firebase.js is a compiled script. The names of internal variables will change every time we compile it and release a new version, so you should definitely not be relying on any properties that are not documented on our website.
For your specific case, you should be using:
snapshot.ref().toString()
in order to get the URL.
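For example, in a child_added handler (a sketch using the legacy JavaScript API from the question; usersRef is a hypothetical reference to the /users node shown in the examples):

var usersRef = new Firebase('https://prototype1.firebaseio.com/users');
usersRef.on('child_added', function (snapshot) {
  // Documented, stable API: survives new Firebase.js releases.
  var url = snapshot.ref().toString();
  // e.g. https://prototype1.firebaseio.com/users/-IwohKfw1l5F3gFqyJJ5
});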

Limit reading children from a location through client side

var commentsRef = new Firebase('https://test.firebaseio.com/comments');
var last10Comments = commentsRef.limit(10);
//Rendering last 10 comments
last10Comments.on('child_added', function (snapshot) {
});
From the client side, a user can change the limit number and render all the comments from the comments reference.
Is there any way to restrict the read limit for a location to some number at all times?
No, there currently isn't a way to write Firebase security rules around that type of limiting of data. Another approach that would work is to have another section of the tree that contains a denormalized portion of the data, holding just the last 10 comments and nothing more.
Thanks for bringing this up. I've added this to our internal tracker to keep it in mind when we design V2 of our security API.
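A sketch of that denormalization using the legacy API from the question. The /last10Comments path and recentRef are made up for illustration; the mirroring should run in a trusted process, with clients granted read access only to the mirrored node:

var commentsRef = new Firebase('https://test.firebaseio.com/comments');
var recentRef = new Firebase('https://test.firebaseio.com/last10Comments');

var window10 = commentsRef.limit(10);
// Mirror each comment that enters the newest-10 window...
window10.on('child_added', function (snapshot) {
  recentRef.child(snapshot.name()).set(snapshot.val());
});
// ...and remove the ones the window pushes out.
window10.on('child_removed', function (snapshot) {
  recentRef.child(snapshot.name()).remove();
});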
