Is there a way to set a document-level TTL for every object that we store in Riak?
For example, if I want to store a "value" for a "key" in Riak, can I set a TTL of 30 seconds on that key so that the element expires on the 31st second?
You can't store a different TTL for each object, but if you're using the Bitcask or in-memory backend you can set a "global" TTL which is applied to all objects stored.
See: FAQ: How can I automatically expire a key from Riak?
In the app config you'd have:
{bitcask, [
    {data_root, "data/bitcask"},
    {expiry_secs, 30} %% Expire after 30 secs
]},
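If you're on a newer Riak release that is configured via riak.conf rather than app.config, the equivalent Bitcask setting should be (double-check the name against your version's docs):

bitcask.expiry = 30s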
I want to track changes to documents in a collection in Firestore. I filter results on a “lastModified” property so that, each time the app starts, the initial snapshot in the listener does not return all the documents in the collection.
// The app is only interested in changes occurring after a certain date.
let date: Date = readDateFromDatabase()

// When the app starts, begin listening for city updates.
listener = db.collection("cities")
    .whereField("lastModified", isGreaterThanOrEqualTo: date)
    .addSnapshotListener { (snapshot, error) in
        // Process added, modified, and removed documents.
        // Keep a record of the last modified date of the updates.
        // Store an updated last modified date in the database using
        // the oldest last modified date of the documents in the
        // snapshot.
        writeDateToDatabase()
    }
Each time documents are processed in the closure, a new “lastModified” value is stored in the database. The next time the app starts, the snapshot listener is created with a query using this new “lastModified” value.
When a new city is created, or one is updated, its “lastModified” property is updated to “now”. Since “now” should be greater than or equal to the filter date, all updates will be sent to the client.
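For reference, a write that stamps “lastModified” with the server's notion of “now” looks roughly like this (a sketch; cityId and the name field are placeholders):

db.collection("cities").document(cityId).setData([
    "name": "Tokyo", // example payload field
    "lastModified": FieldValue.serverTimestamp() // resolved to "now" on the server
], merge: true)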
However, if a really old city is deleted, then its “lastModified” property may be older than the filter date of a client that has received recent updates. The problem is that the deleted city’s “lastModified” property cannot be updated to “now” when it is being deleted.
Example
Client 1 listens for updates ≥ d_1.
Client 2 creates two cities at d_2, where d_1 < d_2.
Client 1 receives both updates because d_1 < d_2.
Client 1 stores d_2 as a future filter.
Client 2 updates city 1 at d_3, where d_2 < d_3.
Client 1 receives this update because d_1 < d_3.
Client 1 stores d_3 as a future filter.
...Some time has passed.
Client 1 app starts and listens for updates ≥ d_3.
Client 2 deletes city 2 (created at d_2).
Client 1 won’t receive this update because d_2 < d_3.
My best solution
Don’t delete cities; instead, add an isDeleted property. Then, when a city is marked as deleted, its “lastModified” property is updated to “now”. This update should be sent to all clients because the query filter date will always be before “now”. The main business logic of the app ignores cities where isDeleted is true.
I feel like I don’t fully understand this problem. Is there a better way to solve my problem?
The solution you've created is quite common and is known as a tombstone.
Since you no longer need the actual data of the document, you can delete its fields. But the document itself will need to remain to indicate that it's been deleted.
There may be other approaches, but they'll all end up looking similar. Since you have to somehow signal to each client (no matter when it connects/queries) that the document is gone, keeping the document around as a tombstone seems like a simple and good approach to me.
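A minimal sketch of that tombstone write with the Firestore iOS SDK (isDeleted and lastModified are the fields from your question; cityId and the deleted payload fields are placeholders):

db.collection("cities").document(cityId).updateData([
    "isDeleted": true,
    "lastModified": FieldValue.serverTimestamp(), // "now", so every client's filter date precedes it
    "name": FieldValue.delete(),      // drop payload fields you no longer need;
    "population": FieldValue.delete() // only the tombstone markers remain
])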
I am using a multi-region write (and read) Cosmos DB account. I have multiple change feed observers on the same collection, each updating a different search index (each with its own lease prefix). The default consistency level is set to Session.
Using SDK v2 (and change feed processor library v2):
var processor = await new ChangeFeedProcessorBuilder()
    .WithHostName(hostName)
    .WithProcessorOptions(hostOptions)
    .WithFeedCollection(collectionInfo)
    .WithLeaseCollection(leaseInfo)
    .WithObserverFactory(observerFactory)
    .BuildAsync();
My logs show a situation where 2 out of 3 of those observers received an older version of the updated document:
time t1: document1 created
time t2 (days after t1): document1 updated
time t3:
    observer1 received document1 (version at t2)
    observer2 received document1 (version at t1)
    observer3 received document1 (version at t1)
Question: Does the change feed processor instance have an affinity to a particular region? In other words, is it possible that it reads the LSN from one region and pulls the documents from another? I was not able to find clear documentation on change feed observers and multi-region accounts. Is it incorrect to assume that, once the processor instance acquires the lease, it will consistently observe changes from the same region?
The region contacted is the default region (in the case of multi-master, the Hub region, the first one in the Portal list), unless you specify a PreferredLocation in the collectionInfo you are using in WithFeedCollection.
DocumentCollectionInfo has a ConnectionPolicy property you can use to define your preference through the PreferredLocations (just like you can do with the normal SDK client). Reference: https://learn.microsoft.com/dotnet/api/microsoft.azure.documents.changefeedprocessor.documentcollectioninfo?view=azure-dotnet
All changes are pulled from that region; the LSN returned and the documents both come from that region (they arrive in the same Change Feed response).
Once an observer acquires a lease, it will read changes for the partition that lease is for, from the region defined in the configuration (default is Hub or whatever you define in PreferredLocations).
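For example, a sketch of pinning the feed reads to a specific region (the account URI, key, names, and regions below are placeholders):

var collectionInfo = new DocumentCollectionInfo
{
    Uri = new Uri("https://youraccount.documents.azure.com:443/"),
    MasterKey = "<account-key>",
    DatabaseName = "yourDatabase",
    CollectionName = "yourCollection",
    ConnectionPolicy = new ConnectionPolicy
    {
        // Both the LSNs and the documents will come from the first available region listed here.
        PreferredLocations = { "West US 2", "East US" }
    }
};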
EDIT: Are you doing a ReadDocument in your observer after getting the changes? If so, with Session consistency you will need the SessionToken from the IChangeFeedObserverContext.FeedResponse (reference https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.changefeedprocessor.feedprocessing.ichangefeedobservercontext.feedresponse?view=azure-dotnet#Microsoft_Azure_Documents_ChangeFeedProcessor_FeedProcessing_IChangeFeedObserverContext_FeedResponse)
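A sketch of what that looks like inside the observer (the DocumentClient, database/collection names, and the partition key path are assumptions):

public async Task ProcessChangesAsync(
    IChangeFeedObserverContext context, IReadOnlyList<Document> docs, CancellationToken cancellationToken)
{
    foreach (var doc in docs)
    {
        // Reuse the feed batch's session token so the point read is at least as fresh
        // as the change that was just delivered.
        var response = await client.ReadDocumentAsync(
            UriFactory.CreateDocumentUri("yourDatabase", "yourCollection", doc.Id),
            new RequestOptions
            {
                SessionToken = context.FeedResponse.SessionToken,
                PartitionKey = new PartitionKey(doc.GetPropertyValue<string>("pk")) // assumed partition key path
            });
    }
}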
I am trying to create a security rule in Firestore where I have a collection (named Transactions) of documents arranged by the timestamp at which they are created.
I want the security rule to read the latest timestamp in my Transactions collection and only allow creating a document in Transactions if it is within 30 seconds after that latest timestamp. If a new document is added within the 30-second mark, the 30-second timer resets because there is a new latest timestamp.
I tried putting the value of the latest timestamp in the parent document of this collection, but I expect around 100 writes/second to Transactions, and I think it will be hard to update that value on every change because of the 1 write/second limit on a Firestore document.
My Transactions document has this structure:
alias: 'red_insect'
amount: 1
timestamp: October 21, 2019 at 2:35:15 PM UTC+8
userid: *unique user id*
Any solutions/suggestions will be deeply appreciated.
I'm using Azure Cosmos DB to store documents, with TTL enabled for documents.
If I upsert or replace an item, does the TTL reset and start counting from the moment of the update, or does it continue counting from the first creation of the document?
Thank you!
There is a _ts property in your document, which is the last-modified timestamp. See: Set time to live on an item. TTL expiry is evaluated against _ts, so if you update or replace an item, the TTL count resets and starts counting from the moment you modified it.
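For illustration, with SDK v2 (assuming the collection's DefaultTimeToLive is enabled, e.g. set to -1, so a per-item ttl takes effect; names below are placeholders):

// Every upsert/replace refreshes _ts, so this item expires 30 seconds after its latest write.
await client.UpsertDocumentAsync(
    UriFactory.CreateDocumentCollectionUri("yourDatabase", "yourCollection"),
    new { id = "item-1", ttl = 30 });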
I want to write an update query to expire transients. I will update their expiration time to 1 in the WordPress options table.
I have transients whose names start with re_compare; the rest of the name varies by parameter.
My update query is:
$wpdb->update(
    'options',
    array(
        'option_value' => '1', // string
    ),
    array( 'option_name' => '%re_compare%' )
);
It's not working. Basically, I want to remove/expire already-existing transients.
But if I delete the transients from the options table, they still show up in the Transients Manager plugin, so I thought I would set their expiration time to 1 second.
Deleting or modifying transients in the options table via plain SQL isn't recommended. Why? Because the database is actually a default fallback location where transients are stored, not the primary one. If any object cache is available, transients are stored there, not in the database. So it may very well be the case here: you're deleting them from the options table, but they are actually being read from the object cache.
In general, you don't have to worry about expiring transients. WordPress has a garbage collector that purges them automatically.
If the data in a transient becomes stale and you need to refresh it before it expires, use the API function for this:
delete_transient( 'your_transient_name' );
Please also note that the expiration time is the maximum period of time a transient can live. After that period, it will never return the stored value. However, the value may become unavailable long before the expiration time, due to object cache eviction, database upgrades, etc.
So, in short:
the expiration time is the latest point in the future at which the transient call will stop returning the value
it may be lost long before the expiration time comes, for other reasons
The rules of thumb for working with transients are:
Set your transients with API function
Set expiration time to when you absolutely don't want it to be valid anymore
Delete it with API function if the data changed on your side (and, most likely, regenerate again)
Or just wait for it to expire naturally
It will be garbage-collected by WP later
Do not expect them to always be available to you until the expiration time comes. They are not guaranteed to persist.
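Putting that together, a minimal sketch of the API-based flow (the transient name suffix and the regeneration callback are placeholders):

// Read the cached value; regenerate and re-cache it when missing.
$data = get_transient( 're_compare_' . $product_id );
if ( false === $data ) {
    $data = recompute_comparison( $product_id ); // placeholder regeneration callback
    set_transient( 're_compare_' . $product_id, $data, 30 ); // valid for at most 30 seconds
}

// When the underlying data changes on your side, invalidate explicitly:
delete_transient( 're_compare_' . $product_id );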