My data is stock news, so the data on my page changes once every day. To keep the page up to date when the data changes, I used revalidate: 1 in getStaticProps. It's working fine, but is this better than using getServerSideProps?
Note: my data updates only once a day (every morning).
You have to consider whether the data being fetched is personalised per user. If it is personalised for the currently logged-in user, getServerSideProps is the better choice: it fetches the data on every request and pre-renders the page each time, without any caching involved.
ISR, on the other hand, is better suited to pages whose content is common to all users. Background regeneration ensures traffic is served uninterruptedly, always from static storage, and the newly built page is swapped in only after it has finished generating. Even during revalidation, the visitor who triggers the regeneration still receives the cached version; the updated page is served on subsequent requests. This caching strategy is commonly known as "stale-while-revalidate".
A revalidate value of 1 second is low. Since you get the update only once a day, it would be better to revalidate every 60 or 120 seconds (see the sketch below).
Server-side rendering would also be slower on each render and will generally perform worse than ISR.
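For reference, a minimal sketch of what that might look like in the pages router; the page name, data shape, and API URL are placeholders:

```typescript
// pages/stock-news.tsx — the page name, data shape, and API URL are placeholders.
import type { GetStaticProps } from 'next';

interface Article {
  id: string;
  headline: string;
}

export const getStaticProps: GetStaticProps<{ articles: Article[] }> = async () => {
  // Fetch the daily stock news from your data source.
  const res = await fetch('https://api.example.com/stock-news');
  const articles: Article[] = await res.json();

  return {
    props: { articles },
    // Regenerate at most once every 60 seconds (only when a request comes in),
    // which is more than enough for data that changes once a day.
    revalidate: 60,
  };
};

export default function StockNews({ articles }: { articles: Article[] }) {
  return (
    <ul>
      {articles.map((a) => (
        <li key={a.id}>{a.headline}</li>
      ))}
    </ul>
  );
}
```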
I have an e-commerce website and I am trying to figure out how best to deal with the individual product pages. I would like as much static generation as possible for the product, and I know that I can use static generation for most of the page, but the part that concerns me is the user having up to date knowledge of whether the product is in stock.
I know that I can use revalidate to keep the product information up to date, and I would likely set that to around 24 hours since these details rarely change. I wouldn't want to set it to just one minute when the stock information is the only thing I really need to be that fresh.
I feel like the best way to deal with this would be by using a combination of static generation and client side fetching. I would serve all the product info using static generation except for the stock which I could fetch client side. I could also use revalidate on a 24 hour basis to make sure the rest of the product data is up to date. But the in stock would be checked fresh every time the page is accessed.
I used this resource to better understand what to do, but it says that on an individual product page I should revalidate every minute. I think that would be too often, because we don't update that frequently, nor do we have so many customers looking at a product every minute that we would get any benefit.
Has anyone played around with this before or know what the best practices might be?
I think it really depends on the business requirements. Based on the information you provided (24-hour revalidation, stock information that rarely changes), I can think of a couple of approaches that make sense:
Use a static page with a client-side fetch
You can use a statically generated page for the product details with a 24-hour revalidation time, and fetch the stock information on the client side. If you have some sort of cache on the backend for the stock information, the operation should be pretty cheap.
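A minimal sketch of that client-side fetch, assuming a hypothetical /api/stock/[id] route that returns the stock flag:

```typescript
// components/StockStatus.tsx — hypothetical component; /api/stock/[id] is a placeholder route.
import { useEffect, useState } from 'react';

export default function StockStatus({ productId }: { productId: string }) {
  const [inStock, setInStock] = useState<boolean | null>(null);

  useEffect(() => {
    // Only the stock flag is fetched on every page view; the rest of the
    // product page remains statically generated with a 24h revalidate.
    fetch(`/api/stock/${productId}`)
      .then((res) => res.json())
      .then((data) => setInStock(Boolean(data.inStock)))
      .catch(() => setInStock(null));
  }, [productId]);

  if (inStock === null) return <span>Checking stock…</span>;
  return <span>{inStock ? 'In stock' : 'Out of stock'}</span>;
}
```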
Use a static page without a client-side fetch
Depending on the number of products you carry, it might make sense to shorten the revalidation time, say to 10-30 minutes.
If you would like to optimize further, you can use your analytics data to determine which products are frequently visited and only generate those pages at build time. For the other pages, you can use the fallback option. This approach should allow you to use fewer server resources while providing almost up-to-date stock information, with no additional client-side fetching to complicate the source code.
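A rough sketch of that approach, where getTopProductSlugs, getProductBySlug, and the Product type are hypothetical helpers standing in for your analytics and product data sources:

```typescript
// pages/products/[slug].tsx — getTopProductSlugs and getProductBySlug are
// hypothetical helpers standing in for your analytics and product data sources.
import type { GetStaticPaths, GetStaticProps } from 'next';
import { getTopProductSlugs, getProductBySlug, Product } from '../../lib/products';

export const getStaticPaths: GetStaticPaths = async () => {
  // Pre-build only the most frequently visited products; everything else is
  // generated on first request via the fallback option.
  const slugs = await getTopProductSlugs(100);
  return {
    paths: slugs.map((slug) => ({ params: { slug } })),
    fallback: 'blocking',
  };
};

export const getStaticProps: GetStaticProps<{ product: Product }> = async ({ params }) => {
  const product = await getProductBySlug(params?.slug as string);
  if (!product) return { notFound: true };
  return {
    props: { product },
    // Shorter revalidation window so stock stays roughly current
    // without any client-side fetching.
    revalidate: 60 * 15, // 15 minutes
  };
};

export default function ProductPage({ product }: { product: Product }) {
  return <h1>{product.name}</h1>;
}
```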
I am stuck on a case where Google Analytics is recording multiple eCommerce transactions. We have added code on the server side to execute the GA eCommerce posting code only once, yet the issue is still reproducible for some transactions. The duplicate eCommerce transactions have the same transaction ID but different dates.
On researching, I found that this happens on small devices (mobile, tablet). The browser on these devices caches the whole web page and, when reopened, reloads the page from cache. So each time the user opens the browser the page loads from cache, which causes this issue.
Can anyone help me on this?
Thanks
"Ignore double transaction ids" would be quite a useful setting and we should try and make this a feature request. However at the moment it does not exist.
The only way I can think of would be to use an API script that selects the transaction ids for the last "n" days and then inserts a heap of filters via the Management API to exclude hits with those transaction ids. After some time (when the caches have presumably expired) you could throw out the old filters. This would only be feasible if you have a small number of transactions (I think there is an upper limit to the number of filters a view can have).
Or, if your transaction ids are somehow sequential (e.g. if they contain the date), you might be able to construct a regex that matches earlier parts of the sequence (e.g. previous dates) and only lets a transaction pass if it is higher in the sequence than the last recorded transaction id (or rejects it if the date in the transaction id is earlier than the current date; remember to update your filter at midnight).
Caveat: I have not actually tried something like this, but it sounds like it should work.
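To illustrate the date-based idea, here is a rough sketch of generating such an exclude pattern, assuming (purely for illustration) that the transaction ids embed the date like "YYYYMMDD-12345":

```typescript
// Rough sketch: build an exclude-filter pattern that matches transaction ids
// starting with any of the previous `days` dates. The id format
// ("YYYYMMDD-12345") is purely an assumption for illustration.
function buildStaleIdPattern(days: number, today: Date = new Date()): string {
  const prefixes: string[] = [];
  for (let i = 1; i <= days; i++) {
    const d = new Date(today);
    d.setDate(d.getDate() - i);
    const yyyy = d.getFullYear();
    const mm = String(d.getMonth() + 1).padStart(2, '0');
    const dd = String(d.getDate()).padStart(2, '0');
    prefixes.push(`${yyyy}${mm}${dd}`);
  }
  // An exclude filter built from this pattern would drop any hit whose
  // transaction id carries a date from the previous `days` days.
  return `^(${prefixes.join('|')})-`;
}

// Regenerate the filter at midnight, e.g. covering a 30-day cache window:
console.log(buildStaleIdPattern(30));
```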
This question relates to WordPress's wp-cron function but is general enough to apply to any DB-intensive calculation.
I'm creating a site theme that needs to calculate a time-decaying rating for all content in the system at regular intervals. This rating determines the order of posts on the homepage, which is paged to allow visitors to potentially view all content. This rating value needs to be calculated frequently to make sure the site has fresh content listed in the proper order.
The rating calculation is not heavy but the rating needs to be calculated for, potentially, 1,000s of items and doing that hourly via wp-cron will start to cause problems for sites with lots of content. Ignoring the impact on page load (wp-cron processes requests on page loads once a certain interval has been reached), at some point the script will reach a time limit. Setting up the site to use "plain ol' cron" will solve the page loading issue but not the timeout one.
Assuming that I have no control over the sites that this will run on, what's the best way to handle this rating calculation on a regular basis? A few things that came to mind:
Only calculate the rating for the most recent 1,000 posts, assuming that the rest won't be seen much. I don't like the idea of ignoring all old content, though.
Calculate the first, say, 100 or so, then only calculate the rating for older groups if those pages are loaded. This might be hard to get right, though, and lead to incorrect listing and ratings (which isn't a huge problem for older content but something I'd like to avoid)
Batch process 100 or so at regular intervals, keeping track of the last one processed. This would cycle through the whole body of content eventually.
Any other ideas? Thanks in advance!
Depending on the host, you're in for a potentially sticky situation. Let me outline a couple of ideal cases and you can pick/choose where you need to.
Option 1
Mirror the database first and use a secondary app (WordPress or otherwise) to do the calculations asynchronously against that DB mirror. When they're done, they can update a static file in the project root, write data to a shared Memcached instance, trigger a POST to WordPress' admin_post endpoint to write some internal state, whatever.
The idea here is that you're removing your active site from the equation. The last thing you want to do is have a costly cron job lock the live site's database or cause queries to slow down as it does its indexing.
Option 2
Offload the calculation entirely to a separate application. Tracking ratings in real time with WordPress is a poor idea as it bypasses page caching and triggers an uncachable request every time a new rating comes in. Pushing this off to a second server means your WordPress site is super fast, and it also means you can have the second server do the calculations for you in the first place.
If you're already using something like Elasticsearch on the site, you can add the rating as an additional indexing facet. Then just update posts as their ratings change, and use the ES API to query the most popular posts later (a rough sketch follows below).
Alternatively, you can use a hosted service like Keen IO to record and aggregate ratings.
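If you go the Elasticsearch route, a query for the most popular posts might look roughly like this, assuming a "posts" index with a numeric "rating" field that you keep updated:

```typescript
// Sketch only: assumes a "posts" index with a numeric "rating" field that is
// updated whenever ratings change.
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function mostPopularPosts(limit = 10) {
  const result = await client.search({
    index: 'posts',
    size: limit,
    sort: [{ rating: { order: 'desc' } }],
    query: { match_all: {} },
  });
  // Return just the stored documents, ordered by rating.
  return result.hits.hits.map((hit) => hit._source);
}

mostPopularPosts().then((posts) => console.log(posts));
```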
Option 3
Still use cron, but don't schedule it as a cron job inside WordPress. Instead, write a WP-CLI routine that does the reindexing for you, then schedule a real cron job to run it.
This has the advantage of using PHP's command line version, which can be configured to skip the timeouts and memory limits imposed on the FPM/CGI/whatever version used to serve the site. It also means you don't have to wait for site traffic to trigger the job - and a long-running job won't block other cron events within WordPress from firing.
If using this process, I would set the job to run hourly and, each hour, process a batch of 1/24th of the total posts in the database. You can keep track of offsets or even processed post IDs in the database; the point is just that you're silently re-indexing posts throughout the day.
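The batching logic itself is simple. Here's a language-agnostic sketch of the idea, shown in TypeScript purely for illustration (a real WordPress implementation would live in a PHP WP-CLI command, and every helper here is hypothetical):

```typescript
// Language-agnostic sketch of the hourly batching idea; a real WordPress
// implementation would live in a PHP WP-CLI command. Every helper passed in
// here is hypothetical.
interface Post {
  id: number;
}

interface BatchStore {
  getTotalPosts(): Promise<number>;
  getPosts(offset: number, limit: number): Promise<Post[]>;
  recalculateRating(post: Post): Promise<void>;
  readOffset(): Promise<number>;
  saveOffset(offset: number): Promise<void>;
}

async function reindexHourlyBatch(store: BatchStore): Promise<void> {
  const total = await store.getTotalPosts();
  // Process roughly 1/24th of the posts per run, so the whole catalogue
  // is re-scored once a day.
  const batchSize = Math.ceil(total / 24);
  const offset = await store.readOffset();

  const posts = await store.getPosts(offset, batchSize);
  for (const post of posts) {
    await store.recalculateRating(post);
  }

  // Wrap around once the end of the catalogue is reached.
  const next = offset + batchSize >= total ? 0 : offset + batchSize;
  await store.saveOffset(next);
}
```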
OK! So I have spoken to a Google representative about this issue, but since I am not enterprise level he can't push me to tech support, and he suggested that I use SO for answers. Here is the question...
In Google Maps Terms it states the following:
(b) No Pre-Fetching, Caching, or Storage of Content. You must not pre-fetch, cache, or store any Content, except that you may store: (i) limited amounts of Content for the purpose of improving the performance of your Maps API Implementation if you do so temporarily (and in no event for more than 30 calendar days), securely, and in a manner that does not permit use of the Content outside of the Service; and (ii) any content identifier or key that the Maps APIs Documentation specifically permits you to store. For example, you must not use the Content to create an independent database of "places" or other local listings information.
This led me to originally believe that google would not allow caching of any type of information. However, then I read the following:
When to Use Client-Side Geocoding
The basic answer is "almost always." As geocoding limits are per user session, there is no risk that your application will reach a global limit as your userbase grows. Client-side geocoding will not face a quota limit unless you perform a batch of geocoding requests within a user session. Therefore, running client-side geocoding, you generally don't have to worry about your quota.
Two basic architectures for client-side geocoding exist.
Run the geocoding and display entirely in the browser. For instance, the user enters an address on your page. Your application geocodes it. Then your page uses the geocode to create a marker on the map. Or your app does some simple analysis using the geocode. No data is sent to your server. This reduces load on your server, but doesn't give you any sense of what your users are doing.
Run the geocode in the browser and then send it to the server. For instance, the user enters an address. Your application geocodes it in the browser. The app then sends the data to your server. The server responds with some data, such as nearby points of interest. This allows you to customize a response based on your own data, and also to cache the geocode if you want. This cache allows you to optimize even more. You can even query the server with the address, see if you have a recently cached geocode for it, and if you do, use that. If you don't, then return no result to the browser, and let it geocode the result and send it back to the server to for caching.
So one side says you cannot cache, while the other tells you that you should. Another suggestion is to always use client-side geocoding when you can, but that becomes a grey area as well, because both examples assume the user has input the data. What if jQuery read the data from a div or span and then geocoded it? The user wouldn't have actually triggered the geocode, but it was still done client-side. I'm trying to create a site with a bunch of user-generated events, and it could get pretty busy, so I am trying to determine the best practice for doing this. Google suggested posting here, so before you say this is "off-topic", please note this is where they told me to post.
Any feedback would be greatly appreciated.
The first quote does not explicitly forbid caching data at all. It is ambiguous about how much you can cache (how much exactly counts as "limited amounts"?), but it does not forbid caching.
You are allowed to cache the data if it helps improve the performance of your site as long as you retain the data for no longer than 30 days and do not make it available in any way to any other service except the service that originally retrieved the data.
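A minimal sketch of a server-side cache that respects those constraints (the in-memory map is just a stand-in for whatever storage you actually use):

```typescript
// Minimal sketch of a server-side geocode cache that honours the 30-day limit.
// The in-memory map is a stand-in for whatever storage you actually use.
interface CachedGeocode {
  lat: number;
  lng: number;
  cachedAt: number; // epoch milliseconds
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;
const geocodeCache = new Map<string, CachedGeocode>();

// Returns a cached geocode only if it is still inside the allowed window;
// otherwise the caller should geocode client-side and send the result back.
function getCachedGeocode(address: string): CachedGeocode | null {
  const entry = geocodeCache.get(address);
  if (!entry) return null;
  if (Date.now() - entry.cachedAt > THIRTY_DAYS_MS) {
    geocodeCache.delete(address);
    return null;
  }
  return entry;
}

function storeGeocode(address: string, lat: number, lng: number): void {
  geocodeCache.set(address, { lat, lng, cachedAt: Date.now() });
}
```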
Regarding user interaction: if your user explicitly enters a page with the expectation that they will be shown geocoded information, I would assume that this fulfills "user interaction".
As an example, on a project I worked on last year I had it set up to do the following:
- Show markers on the map
- If the user clicked a marker they were shown a popup with data from the cache if available, otherwise a geocode would be performed and the returned information would be cached along with the date/time of the cache.
Another page of the site showed a history of these markers at 5-minute intervals throughout the day. If cached data was present (from clicking the map marker as in the previous part) it would be shown; otherwise a geocode would be performed and the data cached as before. The user clicking to run the report was (in my opinion) enough "user interaction" not to count as pre-fetching, since the user had to manually select a timeframe before the report would be displayed.
A cron job then ran every day at midnight, going through each record whose cached data was over 25 days old and removing it.
As it was, I was caching much less than 10% of the marker positions being shown (20+ markers updated every minute, but the report was run on maybe 3-5 markers a day, and data was only geocoded for every 5th point).
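For illustration, the nightly purge described above might look roughly like this (the db handle and its method are placeholders for whatever storage actually holds the cached geocodes):

```typescript
// Sketch of the nightly purge described above. The `db` handle and its method
// are placeholders for whatever storage actually holds the cached geocodes.
const TWENTY_FIVE_DAYS_MS = 25 * 24 * 60 * 60 * 1000;

async function purgeStaleGeocodes(db: {
  deleteGeocodesOlderThan(cutoff: Date): Promise<number>;
}): Promise<void> {
  const cutoff = new Date(Date.now() - TWENTY_FIVE_DAYS_MS);
  // Remove anything cached more than 25 days ago, well inside the 30-day limit.
  const removed = await db.deleteGeocodesOlderThan(cutoff);
  console.log(`Purged ${removed} cached geocodes older than 25 days`);
}
```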
I have a page that shows statistics for users. The page itself cannot be cached, because each user has different statistics and there are many users, so a real-time query has to be made.
What is the way to avoid overloading the database server when users hit F5 to refresh, or run different queries in short intervals?
I think @Jens A. is halfway there: this is a perfect case for caching. Calculate the stats, stick them into the cache with a fixed expiry time, and then only recalculate them if they're not in the cache. With the expiry time set to an appropriate value (5 minutes, or less?), the stats will still be reasonably up to date and will update at a reasonable rate without having to be calculated every time the page is refreshed continuously.
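A minimal sketch of that cache-aside pattern, with an in-memory map standing in for Memcached/Redis (all names are illustrative):

```typescript
// Minimal sketch of the cache-aside pattern described above, with an in-memory
// map standing in for Memcached/Redis. All names are illustrative.
interface CacheEntry<T> {
  value: T;
  expiresAt: number; // epoch milliseconds
}

const statsCache = new Map<string, CacheEntry<unknown>>();
const TTL_MS = 5 * 60 * 1000; // 5 minutes

async function getUserStats<T>(
  userId: string,
  calculateStats: (userId: string) => Promise<T>,
): Promise<T> {
  const cached = statsCache.get(userId);
  if (cached && cached.expiresAt > Date.now()) {
    // Repeated F5 refreshes within the TTL never hit the database.
    return cached.value as T;
  }

  const value = await calculateStats(userId);
  statsCache.set(userId, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```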
You could store the generated statistics in your database for some time, and just show the old values if the statistics are requested again.