How to expire WordPress transients - wordpress

I want to write an update query to expire transients. I will update their expiration time to 1 in the WordPress options table.
I have transients whose names start with re_compare; the rest of the name varies with the parameters.
My update query is:
$wpdb->update(
    'options',
    array(
        'option_value' => '1', // string
    ),
    array( 'option_name' => '%re_compare%' )
);
It's not working. Basically I want to remove/expire already existing transients.
But if I delete transients from the options table, they still show up in the transient manager plugin. So I thought I would set their expiration time to 1 second.

Deleting or modifying transients in the options table via plain SQL isn't recommended. Why? Because the database is actually a default fallback location where transients are stored, not the primary one. If an object cache is available, transients are stored there, not in the database. So, in your case, that may very well be what's happening: you're deleting them from the options table, but they are actually being read from the object cache.
In general, you don't have to worry about expiring transients. WordPress has a garbage collector that purges them automatically.
If the data in a transient becomes stale and you need to update it before it expires, use the API function for this:
delete_transient( 'your_transient_name' );
Please also note that the expiration time is the maximum period of time that a transient can live. After that period, it will never return the stored value. However, it may become unavailable long before the expiration time, due to object cache eviction, database upgrades, etc.
So, in short:
the expiration time is the maximum point in the future at which the transient call will stop returning the value
it may be lost long before the expiration time comes, for other reasons
The rules of thumb for working with transients are (a short sketch follows the list):
Set your transients with the API function
Set the expiration time to when you absolutely don't want the value to be valid anymore
Delete a transient with the API function if the data changes on your side (and, most likely, regenerate it)
Or just wait for it to expire naturally
It will be garbage-collected by WP later
Do not expect transients to always be available until the expiration time comes. They are not guaranteed to persist.
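For example, a minimal sketch of that workflow in PHP; the transient name (re_compare_report) and the expensive_comparison() helper are illustrative, not taken from the question:

$report = get_transient( 're_compare_report' );

if ( false === $report ) {
    // Cache miss: the transient expired, was evicted, or was never set.
    $report = expensive_comparison(); // hypothetical function that rebuilds the data
    set_transient( 're_compare_report', $report, HOUR_IN_SECONDS );
}

// Later, when the data changes on your side, invalidate explicitly
// (and, typically, regenerate on the next request as above):
delete_transient( 're_compare_report' );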

Related

Preventing multiple users from editing the same post/blog at the same time (WordPress has this functionality)

I am looking for a way to prevent multiple users from opening and editing the same post at the same time. For example, if I open it in one tab and start editing, and someone else opens it in another tab, it shouldn't be possible for them to edit it. Only one person at a time.
With WordPress, two users cannot edit the same post at the same time. Does anyone know how WordPress does it? I don't know myself.
I am looking for a solution; that's why I don't know what to try exactly.
If you want to know whether a post (page, product, any custom post type) is currently being edited, use wp_check_post_lock( $post_id ). If a user is currently editing the post, it returns the user's ID. Otherwise it returns false, and you can proceed to edit it.
If you want to mark a post as being edited, use wp_set_post_lock( $post_id ). Calling this will silently override any existing lock, so check first. You should call this function every two minutes, or more often, while editing is in progress, because locks expire after 150 seconds.
This is all implemented via a wp_postmeta entry with meta_key '_edit_lock' and meta_value 'timestamp:userid'. The timestamp is the time the lock was set. For example, '1667470754:123' means userid 123 locked the post at time Thu Nov 03 2022 10:19:14Z. But avoid hitting the wp_postmeta table directly for this. The value may be cached.
You can use the wp_check_post_lock_window filter to alter the lock expiration time if need be.
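For instance, a one-line sketch; the 300-second window is an arbitrary example:

add_filter( 'wp_check_post_lock_window', function ( $seconds ) {
    return 300; // lock lifetime in seconds (core default is 150)
} );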
The _admin_notice_post_locked() function puts up a notice about a post being locked. But this function is designed for use within WordPress core admin pages, so it may not work for you.
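Putting the two lock functions together, here is a minimal sketch for a custom editing flow; the post ID is hypothetical, and since the functions live in wp-admin/includes/post.php you need to load that file outside of wp-admin:

require_once ABSPATH . 'wp-admin/includes/post.php';

$post_id = 123; // hypothetical post ID

$locking_user = wp_check_post_lock( $post_id );

if ( $locking_user ) {
    // Someone else holds the lock: bail out or show a "post is locked" notice.
    wp_die( sprintf( 'This post is currently being edited by user #%d.', $locking_user ) );
}

// Take (or refresh) the lock for the current user. Repeat this call at least
// every two minutes while editing, since locks expire after 150 seconds.
wp_set_post_lock( $post_id );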

Firebase cloud function timestamp

In my current application, I store a value in Firestore for each user something along the lines of this:
User1Doc - hasUsedFeatureToday = true
User2Doc - hasUsedFeatureToday = false
...
At the end of the day, I run a scheduled function that resets all of these to false. This was fine while my application was relatively small, but does not scale very effectively as I'm sure you can imagine.
Each user can only use this aspect of my app once per day, so the only time this field is read is when they try to use it.
I would like to change this system to store a timestamp in the user's document when they use the feature and then check if this timestamp is the same day (Europe/London time) if someone tries to use it again.
Does Firebase offer a way to get a "timezoned" timestamp like this and store/check it in Firestore?
You can just store a timestamp (UTC). Whenever a user logs in to your app, check the timestamp and update it. You can always use a library like Luxon to get the local time from the UTC time.
If you want to allow the user to update this timestamp only once, you can use security rules to restrict that. However, a user may try to prevent the timestamp from being written in the first place.
You can instead use a Cloud Function to serve the data only when the user requests it. This is better than updating the documents of all users, even though many of them won't use the feature every day.
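For illustration, a sketch of that check on a trusted backend (e.g. a Cloud Function) using the Firebase Admin SDK and Luxon; the users collection, the lastUsedFeatureAt field, and both helper function names are assumptions:

const admin = require('firebase-admin');
const { DateTime } = require('luxon');

admin.initializeApp();
const db = admin.firestore();

// True if the user has not used the feature yet today (Europe/London).
async function canUseFeature(userId) {
  const snap = await db.collection('users').doc(userId).get();
  const lastUsed = snap.get('lastUsedFeatureAt'); // Firestore Timestamp or undefined
  if (!lastUsed) return true;

  const lastDay = DateTime.fromJSDate(lastUsed.toDate(), { zone: 'Europe/London' }).toISODate();
  const today = DateTime.now().setZone('Europe/London').toISODate();
  return lastDay !== today;
}

// Record the usage as a server-side UTC timestamp.
async function markFeatureUsed(userId) {
  await db.collection('users').doc(userId).update({
    lastUsedFeatureAt: admin.firestore.FieldValue.serverTimestamp(),
  });
}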

Preventing timestamp overlaps in Firestore collection

This is a follow-up/elaboration to a previous question of mine.
In the case of a collection of documents containing a time range represented by two timestamp fields (start and end), how does one go about guaranteeing that two documents don't get added with overlapping time ranges?
Say I had the following JavaScript on form submit:
var bookingsRef = db.collection('bookings')
  .where('start', '<', booking.end)
  .where('end', '>', booking.start);

bookingsRef.get().then(snapshot => {
  // if a booking is found (hence there is an overlap), display an error
  // if no booking is found (hence there is no overlap), create the booking
});
Now, if two people were to submit overlapping bookings at the same time, could transactions be used (either on the client or the server) to guarantee that between the get and add calls no other documents are created that would invalidate the original query's where clauses?
Or would my option be using some sort of security create rule that checks for other document time overlaps prior to allowing a new write (if this is at all possible)? One approach to guarantee document uniqueness via security rules seems to be exposing field values in the document ID, but I'm not entirely sure how exposing the start and end timestamp values in the ID would allow a rule to check for overlapping time ranges.
I think a transaction is the proper approach. According to the documentation:
..., if a transaction reads documents and another client
modifies any of those documents, Cloud Firestore retries the
transaction. This feature ensures that the transaction runs on
up-to-date and consistent data.
This seems to be the answer to your problem. All reads will be retried if anything changes in the meantime. I think the transaction mechanism exists for exactly this reason.
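As a sketch of what that could look like with the Firebase Admin SDK on a trusted backend (the collection and field names follow the question; createBookingIfFree is a hypothetical helper, and client-side web SDK transactions cannot run queries, so this belongs in a Cloud Function or server):

const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

async function createBookingIfFree(booking) {
  return db.runTransaction(async (tx) => {
    // Run the overlap query inside the transaction; if the matched documents
    // change before commit, Firestore retries the whole transaction.
    const overlapping = await tx.get(
      db.collection('bookings')
        .where('start', '<', booking.end)
        .where('end', '>', booking.start)
        .limit(1)
    );

    if (!overlapping.empty) {
      throw new Error('Requested time range overlaps an existing booking.');
    }

    // No overlap found: create the booking atomically with the check.
    const ref = db.collection('bookings').doc();
    tx.set(ref, booking);
    return ref.id;
  });
}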

How can I query for all new and updated documents since last query?

I need to query a collection and return all documents that are new or updated since the last query. The collection is partitioned by userId. I am looking for a value that I can use (or create and use) that would help facilitate this query. I considered using _ts:
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value]
The problem with _ts is that it is not granular enough and the query could miss updates made in the same second by another client.
In SQL Server I could accomplish this using an IDENTITY column in another table. Let's call the table version. In a transaction, I would create a new row in the version table and do the updates to the other table (including updating its version column with the new value). To query for new and updated rows I would use a query like this:
SELECT * FROM table WHERE userId=[some-user-id] and version > [some-value]
How could I do something like this in Cosmos DB? The Change Feed seems like the right option, but without the ability to query the Change Feed, I'm not sure how I would go about this.
In case it matters, the (web/mobile) clients connect to data in Cosmos DB via a web api. I have control of the entire stack - from client to back-end.
As stated in this link:
Today, you see all operations in the change feed. The functionality where you can control the change feed for specific operations, such as updates only and not inserts, is not yet available. You can add a "soft marker" on the item for updates and filter based on that when processing items in the change feed. Currently the change feed doesn't log deletes. Similar to the previous example, you can add a soft marker on the items that are being deleted; for example, you can add an attribute in the item called "deleted", set it to "true", and set a TTL on the item so that it can be automatically deleted. You can read the change feed for historic items, for example items that were added five years ago. If the item is not deleted, you can read the change feed as far back as the origin of your container.
So the change feed alone is not enough for your requirements.
My idea:
Use the Azure Functions Cosmos DB trigger to collect all the operations in your specific Cosmos collection. Follow this document to configure the Azure Function's input as Cosmos DB, then follow this document to configure the output as Azure Queue Storage.
Get the ids of the changed items and send them to queue storage as messages. When you want to query the changed items, just read the messages from the queue, consume them at a specific point in time, and then clear the queue. No items will be missed.
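For illustration, a sketch of such a function in JavaScript; the binding names, the userId partition key, and the queue name are assumptions, not from the question. It relies on a function.json that wires a cosmosDBTrigger input named documents and a queue output named outputQueue:

// Runs for every batch of created/updated documents in the monitored collection.
module.exports = async function (context, documents) {
  // Forward just enough to find each changed item later (id + partition key).
  context.bindings.outputQueue = documents.map(doc =>
    JSON.stringify({ id: doc.id, userId: doc.userId })
  );
};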
With your approach, you can fetch added/updated documents and save a reference value (the _ts and id fields) somewhere (like a blob):
SELECT * FROM collection WHERE userId=[some-user-id] AND _ts > [some-value] and id !='guid' order by _ts desc
This is similar to the approach we use to read data from Event Hubs and store checkpointing information (epoch number, sequence number, and offset value) in a blob. At any time, only one function can take a lease on that blob.
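A sketch of that query with the @azure/cosmos Node SDK; the database/container names and the fetchChangesSince helper are illustrative, and where you persist the checkpoint (the last _ts plus id) is up to you:

const { CosmosClient } = require('@azure/cosmos');

const container = new CosmosClient(process.env.COSMOS_CONNECTION)
  .database('mydb')
  .container('items');

async function fetchChangesSince(userId, lastTs, lastId) {
  const { resources } = await container.items
    .query({
      query: 'SELECT * FROM c WHERE c.userId = @userId AND c._ts > @lastTs AND c.id != @lastId ORDER BY c._ts DESC',
      parameters: [
        { name: '@userId', value: userId },
        { name: '@lastTs', value: lastTs },
        { name: '@lastId', value: lastId },
      ],
    })
    .fetchAll();

  // Persist the new checkpoint (e.g. to blob storage) before acting on the results.
  return resources;
}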
If you go with the change feed, you can create a listener (a Function or a job) that listens for all add/update operations on the collection and stores those values in another collection; while saving the data, you can add an identity/version field to every document. This approach may increase your Cosmos DB bill.
This is what the transaction consistency levels are for: https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels
Choose strong consistency and your queries will always return the latest write.
Strong: Strong consistency offers a linearizability guarantee. The
reads are guaranteed to return the most recent committed version of an
item. A client never sees an uncommitted or partial write. Users are
always guaranteed to read the latest committed write.

Efficiently maintaining a cache of distinct items in a huge DB table

I have a very large (millions of rows) SQL table which represents name-value pairs (one column for the name of a property, the other for its value). In my ASP.NET web application I have to populate a control with the distinct values available in the name column. This set of values is usually not bigger than 100, most likely around 20. Running the query
SELECT DISTINCT name FROM nameValueTable
can take a significant amount of time on this large table (even with proper indexing, etc.). I especially don't want to pay this penalty every time I load this web control.
So caching this set of names should be the right answer. My question is how to promptly update the cache when a new name appears in the table. I looked into the SQL Server 2005 Query Notification feature, but the table gets updated frequently, and very seldom with an actual new distinct name. The notifications would flow in all the time, and the web server would probably waste more time handling them than it saves.
I would like to find a way to balance the time used to query the data with the delay until the name set is updated.
Any ideas on how to efficiently manage this cache?
A little normalization might help. Break the property names out into a new table and FK back to the original table using an int ID. You can then query the new table to get the complete list, which will be really fast.
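For example, a sketch of that structure; the table and column names are illustrative:

-- Small lookup table holding each distinct property name exactly once.
CREATE TABLE propertyName (
    id   INT IDENTITY(1,1) PRIMARY KEY,
    name NVARCHAR(200) NOT NULL UNIQUE
);

-- The big table references the name by id instead of repeating the string.
CREATE TABLE nameValue (
    propertyNameId INT NOT NULL REFERENCES propertyName(id),
    value          NVARCHAR(MAX) NOT NULL
);

-- The distinct-name query becomes a trivial scan of a ~20-row table:
SELECT name FROM propertyName;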
Figuring out your pattern of usage will help you come up with the right balance.
How often are new values added? Are new values always unique? Is the table mostly updates? Do deletes occur?
One approach may be a SQL Server insert trigger that checks the cache table to see whether the new key is already there and, if it isn't, adds it.
Add a unique increasing sequence MySeq to your table. You may want to try and cluster on MySeq instead of your current primary key so that the DB can build a small set then sort it.
SELECT DISTINCT name FROM nameValueTable Where MySeq >= ?;
Set ? to the highest MySeq value your cache has already seen.
You will always have a lag between your cache and the DB, so if this is a problem you need to rethink the flow of the application. You could try making all requests flow through your cache/application if you manage the data:
requests --> cache --> db
If you're not allowed to change the actual structure of this huge table (for example, due to the huge number of reports relying on it), you could create a holding table of these 20 values and query against that. Then, on the huge table, add a trigger that fires on INSERT or UPDATE, checks whether the new NAME value is in the holding table, and if not, adds it.
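A sketch of that trigger; the holding table and trigger names are illustrative:

CREATE TABLE distinctNames (name NVARCHAR(200) PRIMARY KEY);

CREATE TRIGGER trg_nameValueTable_names
ON nameValueTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Add any name from the inserted/updated rows that isn't in the holding table yet.
    INSERT INTO distinctNames (name)
    SELECT DISTINCT i.name
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM distinctNames AS d WHERE d.name = i.name);
END;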
I don't know the specifics of .NET, but I would pass all the update requests through the cache. Are all the update requests done by your ASP.NET web application? Then you could make a Proxy object for your database and have all the requests directed to it. Since your database only holds key-value pairs, it is easy to use a Map as a cache in the Proxy.
Specifically, in pseudocode, the requests would look as follows:
// the client invokes cache.get(key)
if (cacheMap.has(key)) {
    return cacheMap.get(key);
} else {
    value = database.retrieve(key); // fetch from the database on a cache miss
    cacheMap.put(key, value);       // remember it for next time
    return value;
}

// the client invokes cache.put(key, value)
cacheMap.put(key, value);
if (writeThrough) {
    database.put(key, value);
}
Also, in the background you could have an evictor thread which ensures that the cache does not grow too big. In your scenario, where you have a set of values that is frequently accessed, I would use an eviction strategy based on time to idle: if an item is idle for more than a set amount of time, it is evicted. This ensures that frequently accessed values remain in the cache. Also, if your cache is not write-through, the evictor needs to write to the database on eviction.
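In .NET specifically, a time-to-idle policy can be sketched with the built-in MemoryCache and a sliding expiration; the 20-minute window is an arbitrary example:

using System;
using System.Runtime.Caching;

static class NameCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static void Put(string key, string value)
    {
        // SlidingExpiration evicts an entry once it has been idle this long,
        // so frequently read names stay cached.
        Cache.Set(key, value, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(20)
        });
    }

    public static string Get(string key) => Cache.Get(key) as string;
}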
Hope it helps :)
-- Flaviu Cipcigan
