Concrete5.7.5.2: Full page caching turned on but page doesn't seem to get retrieved from cache - concrete5

I've turned on "Full Page Caching" to "On - In all cases" under "Cache & Speed Settings" and tested it by clearing the cache and loading my home page in a separate browser. But the load time seems to remain the same no matter how many times I refresh the browser, leading me to believe that the CMS is still going to the DB to retrieve the page instead of serving it from the cache.
I managed to track the code down to this specific bit inside /concrete/src/Cache/Page/PageCache.php:
/**
 * Note: can't use the User object directly because it might query the database.
 * Also can't use the Session wrapper because it starts session which triggers
 * before package autoloaders and so certain access entities stored in session
 * will break.
 */
public function shouldCheckCache(Request $req) {
    $session = \Config::get('concrete.session.name');
    $r = \Cookie::get($session);
    if ($r) {
        return false;
    }
    return true;
}
If I force this function to always return true, subsequent reloads in the browser show a much shorter load time, which I believe means the page is being retrieved from the cache.
The function seems to check whether the "CONCRETE5" session cookie has been set; if so, the cache is bypassed. I don't really understand what this all means, so I hope someone can shed some light on whether I'm doing something wrong, or what I should do instead.
Thank you for any help!

The initial reason this check exists is to disable the cache completely if the user is logged in. There's likely a better way to do this today, so we could benefit from a discussion on GitHub about it.
The peculiarity you're seeing is an edge case where all of these are true:
You are not logged in
You have a session cookie
Cache is built and enabled
This generally doesn't happen for real site users, since serving the site from the cache prevents the session from starting. The way to really test this is to open a new incognito tab after the cache has already been built and navigate around your site. From what I can tell, if the cache is invalidated and any one page loads without the cache, every subsequent request from that browser bypasses the cache.
Nicolai is correct that we could likely benefit from making this a little smarter and more robust; it's probably a good thing to open a new discussion issue for on GitHub.
EDIT: I opened this GitHub issue for the discussion

Related

What is the difference between fallback false vs true vs blocking of getStaticPaths with and without revalidate in Next.js SSR/ISR?

As of Next.js 10, the getStaticPaths function returns an object that must contain the very important fallback key as documented at: https://nextjs.org/docs/basic-features/data-fetching#the-fallback-key-required
While the documentation is precise, it is quite hard to digest for someone who is just beginning with Next.js. Could someone provide a simpler or more concrete overview of those options?
How to test
First of all, when testing things out to make sure I had understood them, I was getting really confused, because when you run in development mode (next dev) the behavior is quite different from production mode (next build && next start): development is much more forgiving, to help you develop quickly. Notably, in development, getStaticPaths gets called on every render, so everything is always rendered to its latest version, unlike production, where more caching might be enabled.
The docs describe the production behavior, so to test things out, you really need to use production mode.
The next issue is that I couldn't easily find an example app where you can create and update pages from inside the app itself to observe the behavior. I finally ended up building that at: https://github.com/cirosantilli/node-express-sequelize-nextjs-realworld-example-app while porting the awesome Realworld example project, which produces a simple multi-user blog website (mini Medium clone).
With those tools in hand, I was able to confirm what the docs say. This answer was tested at this commit which has Next.js 10.2.2.
fallback: false
This one is simple: only pages that are generated during next build (i.e. returned from the paths property of getStaticPaths) will be visible.
E.g., if a user creates a new blog page at /post/[post-id], it will not be immediately visible afterwards, and visiting that URL will lead to a 404.
That new post only becomes visible after you re-run next build so that getStaticPaths returns it under paths, which it does in the typical setup where getStaticPaths returns every possible [post-id].
fallback: true
With this option, Next checks if the page has been pre-rendered to HTML under .next/server/pages.
If it has not:
Next first quickly returns a dummy pre-render with empty data that had been created at build time.
In this, you are expected to tell the user that the page is loading.
You must handle that case, or else it could lead to exceptions being thrown due to missing properties.
The way to handle this is described in the docs by checking router.isFallback:
import { useRouter } from 'next/router'

function Post({ post }) {
  const router = useRouter()

  // If the page is not yet generated, this will be displayed
  // initially until getStaticProps() finishes running.
  if (router.isFallback) {
    return <div>Loading...</div>
  }

  // Render the post once its data is available.
  return <div>{post.body}</div>
}
So in this example, if we hadn't done the router.isFallback check, the props would initially be an empty object, post would be undefined, and doing post.body would throw an exception.
After the actual page finishes rendering for the first time with data (the data is fetched with getStaticProps at runtime), the user's browser gets automatically updated to show it, and Next.js stores the resulting HTML under .next/server/pages.
If the page is present under .next/server/pages however, either because:
it was rendered by next build
it was rendered for the first time at runtime
then Next.js just returns it, without rendering it again.
Therefore, if you edit the post, the page cache is not re-rendered. The outdated page will be returned at all times, because it is already present under .next/server/pages, so Next.js does not re-render it.
You will have to re-run next build to see updated versions of the pages.
Therefore, this is not what you generally want for the multi-user blog described above. This approach is generally only suitable for websites that don't have user-generated content, e.g. an e-commerce website where you control all the content.
fallback: true: what about pages that don't exist?
If the user accesses a page that does not exist, like /post/i-dont-exist, Next.js will try to render it just like any other page, because it sees that it is not in .next/server/pages and thinks that it simply hasn't been rendered before.
This is unlike fallback: false, where Next.js never generates new pages at runtime and just returns a 404 directly.
In this case, your code will notice that the page does not exist when getStaticProps queries the database, and you then tell Next.js that this is a 404 with notFound: true, as mentioned at: How to return a 404 Not Found page and HTTP status when an invalid parameter of a dynamic route is passed in Next.js?, so Next.js renders a 404 page and caches nothing.
fallback: 'blocking'
This is quite similar to fallback: true, except that it does not return the dummy loading page when a page that hasn't been cached is hit for the first time.
Instead, it just makes the browser hang, until the page is rendered for the first time.
Future requests to that page are quickly served from the cache however, just like fallback: true.
https://dev.to/tomdohnal/blocking-fallback-for-getstaticpaths-new-next-js-10-feature-1727 mentions the rationale for 'blocking': the loading page of fallback: true appears to break certain rather specific features, and 'blocking' is generally not what you want unless you need one of those specific features.
Note that the Next.js documentation explicitly states that with fallback: true, it detects crawlers (TODO how exactly? User agent or something else? Which user agents?) and does not return the loading page to crawlers, since that would defeat the purpose of SSR. https://nextjs.org/docs/basic-features/data-fetching#the-fallback-key-required mentions:
Note: this "fallback" version will not be served for crawlers like Google and instead will render the path in blocking mode.
so there doesn't seem to be a huge advantage for SEO purposes in using 'blocking' over true.
However, if your user is a security freak and disables JavaScript, they will only see the loading page. And are you sure the Wayback machine won't show the loading page? What about wget? Since I like such use cases, I'm tempted to just use fallback: 'blocking' everywhere.
revalidate: Incremental Static Regeneration (ISR)
When revalidate is given, new requests to a page that is in the .next/server/pages cache also make the cache be regenerated. This is called "Incremental Static Regeneration".
revalidate: n means that our server will do at most 1 re-render every n seconds. If a second request comes in before the n seconds, the previously rendered page is returned and a new re-render is not triggered. So large n means users see more outdated pages, but less server workload.
A large revalidate could therefore help the server handle large traffic peaks by caching the reply.
This is what we have to use if we want website users to both publish and update their own posts:
either fallback: true or fallback: 'blocking'
together with revalidate: <integer>
revalidate does not make much sense with fallback: false.
When revalidate: <number> is given, behavior is as follows:
if the page is present under .next/server/pages, return that prerendered page immediately, possibly with outdated data.
At the same time, also kick off a page rebuild with the newest data.
When the rebuild is finished, the target page won't be automatically updated to the latest version. The user would have to refresh the page to see the updated version.
otherwise, if the page is not cached, do the same as true or 'blocking' would, by either returning a dummy loading page or blocking until rendering is done, and create the cached page
After a page is built by either of the above cases (first time or not), if it gets accessed again within the next revalidate seconds, no rebuild is triggered. This way, if a very large number of users is visiting the website, most of the requests won't require expensive server render work: we will do at most one re-render every revalidate seconds.
Explicit invalidation with res.unstable_revalidate
This is currently beta, but it seems that at last they are introducing a way to explicitly invalidate pages as currently mentioned at: https://vercel.com/docs/concepts/next.js/incremental-static-regeneration
This way, instead of ugly revalidate timeouts, we will be able to rebuild pages only when needed, if we are able to detect on the server that a page has become outdated. Then we can just serve from the cache directly every time.
It will presumably be renamed to res.revalidate once it becomes stable.
This awesome development was brought to my attention by Sebastian in the comments.
SSR for a single request (i.e. ignore revalidate) so that users can see the results of their blog page edits
Edit: this use case might be best resolved with the upcoming res.unstable_revalidate/res.revalidate.
If for example:
the blog author clicks submit after updating an existing post
and they get redirected to the post view page, as is usual behavior, to see if everything looks OK
they first see the outdated version of the post. This visit would then trigger a rebuild with the new data they provided on the edit page, and only after that finishes and the user refreshes would they see the updated page.
So this behavior is also not ideal UI behavior for the editor, as the user would be left thinking:
What just happened, was my edit not registered?
for a few seconds.
This can be solved with "preview mode", which is documented at: https://nextjs.org/docs/advanced-features/preview-mode It was added in Next.js 12. Preview mode checks if some cookies are set, and if so makes getStaticProps rerun on every request regardless of revalidate, just like getServerSideProps.
However, even preview mode does not solve this use case super nicely, because it does not invalidate/update the cache, which is a widely requested thing, related:
Next.js ISR ( Incremental Static Regeneration ), how to rebuild or update a specific page manually or dynamically before the interval/ISR time start?
How to clear/delete cache in NextJs?
so it could still happen that a user visits the page without the preview cookies and sees the outdated cached page. I could work around this by removing the cookies and making an extra GET request, but this produces a useless GET request and adds more complexity.
I learned about this after opening an issue about it at: https://github.com/vercel/next.js/discussions/25677 thanks to #sergioengineer for pointing it out.
Related threads:
https://github.com/vercel/next.js/discussions/11698#discussioncomment-351289
https://github.com/vercel/next.js/discussions/11552
SSR vs ISR: per user-login-based information
ISR is an optimization over SSR. However, like every optimization, it can increase the complexity of the system.
For example, suppose that users can "favorite" blog posts.
If we use ISR, it only really makes sense to pre-render the logged-off version of a page, because only the content that is common to all users can be pre-rendered.
Therefore, if we want to show to the user the information:
Have I starred this page yet or not?
then we have to do a second API request and then update the page state with it.
While it may sound simple, this adds considerable extra complexity to the code in my experience.
With SSR however, we could simply check the login cookies sent by the user as usual, and fully render the page perfectly customized to the current user on the server, so that no further API requests will be needed. Much simpler.
So you should really only use ISR here if you benchmark it and it is worth it.
Here's an example of checking login cookies: https://github.com/cirosantilli/node-express-sequelize-nextjs-realworld-example-app/blob/8dff36e4bcf659fd048e13f246d50c776fff0028/back/IndexPage.ts#L23 That sample setup sends the exact same tokens used by the JavaScript (SWR) API requests, but also via cookies. We don't have to worry about CSRF in that demo because we only use the cookie login on GET requests; all modifying requests like POST are done from JavaScript exclusively and don't authenticate from cookies.
The ISR dream: infinite revalidate + explicit invalidation + CDN hooks
As of Next.js 12, ISR is wonky for such a CRUD website. What I would really want is for things to work as follows:
when the user creates a blog post, we use a post creation hook to upload the result to a CDN of choice
when a user views a blog post, it goes to the CDN directly and does not touch the server. Only if the user wants to fetch user-specific data such as "have I starred this page" does it make a small API request to the server
when a user updates a blog post, it just updates the result on the CDN of choice
This approach would really lead to ultra-fast page loads and minimal server workload Nirvana.
I think Vercel, the company behind Next.js, might have such a CDN system running on their product, but I don't see how to nicely use an arbitrary CDN of choice, because I don't see such hooks. I hope I'm wrong :-)
But just the explicit invalidation + infinite revalidate would already be a great thing to have even without the CDN hook system. Edit: this might be coming with res.unstable_revalidate, see section above.
Whenever we want to implement ISR or SSG techniques on dynamic routes, we are supposed to pass the paths that we want to be statically generated at build time to the getStaticPaths function. Although, in some situations, we might have new paths that are not returned by getStaticPaths, and we have to handle these paths with the fallback property that is also returned from getStaticPaths (Next.js official docs).
The fallback property can accept 3 values:
false:
new paths will result in a 404 page
true:
the new path will be statically generated (getStaticProps is called)
a loading state is shown while the page is generated (via router.isFallback and the fallback page)
the page is rendered with the required props after it is generated
the new path will be cached in the CDN (later requests will get the cached page)
crawler bots may index the fallback page (not good for SEO)
"blocking":
the new path will wait for the HTML to be generated (via SSR)
there will be no loading state (no fallback page)
the new path will be cached in the CDN (later requests will get the cached page)
NOTE: since Next.js 12, fallback: true in the ISR technique won't show the fallback page to crawler bots. Read more
When creating dynamic pages in our app (for example a video app), we need to configure how Next.js will fall back during the request.
If we know that our pages and data sources are quick, so the response will be near instant, we can use fallback: 'blocking'. We do not need to show a loading state, because Next.js will wait for the page to be fully pre-generated on the server before serving it.
With fallback: false, if a new page is not found, a 404 page will be displayed. false is used if you want to generate ALL of your dynamic paths at build time. In this case, getStaticPaths needs to fetch every item you have in your database. Since pre-built pages are served from the CDN, look-up time is quick and you are not actually fetching data, so you do not need a "loading state"; you are just checking whether the given URL path has a pre-generated page or not. If in the future you need to add more paths, you need to rebuild your app.
In your video app, you might have too many videos, so you only prebuild the most popular video pages. If a user visits a video whose page was not pre-generated, you have to do data fetching, so you need a "loading" state. Now you need to set fallback: true. Since data fetching will take time, if you do not show a different component while loading, you might get an error like "Cannot read property 'title' of undefined", since at that moment the title of the video is not defined yet.
import { useRouter } from 'next/router'

function Video({ videoId }) {
  const router = useRouter()

  // If the page is still being generated, show a loading state.
  if (router.isFallback) {
    return <div>Loading...</div>
  }

  // Then return the main component...
}

asp.net: Is there a way to "disable" the browser's back button after logging out?

Is there a way to "disable" the browser's back button after logging out?
I've read several posts and now I know that I can disable caching.
( e.g. ASP.NET authentication login and logout with browser back button )
This is working, but I want to disable the back button for security reasons only after logging out (= when there's no Session available anymore).
If I disable caching, the user cannot use the browser's back button while logged in.
I'm using custom authentication, not the standard ASP.NET mechanism.
Is there a secure (= no JavaScript) way to do this?
As I'm sure you already know, you can't directly disable the "back" button on a browser.
The only methods for preventing a user from going back rely on either setting the page not to cache, or involve the use of JavaScript. Based on the fact that neither of these works for you, there isn't a solution to manage this. I've looked at many articles over the years, and researched this several times, and all of the suggestions use either client-side script or the cache.
My best suggestion in your case is to use the cache disable method, and look at how your UI responds to the "back" button and see if there are changes you can make to the design to make it smoother. This may involve checking the session variables, or checking to see if the user is still authenticated, but given your requirements, I believe you're out of luck.
In short, you're going to need to choose the lesser of two evils.
Using the page-cache approach will ensure that people can't use the "back" button at all, without relying on JavaScript - presumably better security.
Using JavaScript to delete the page history on logout will allow you to prevent users from going back after logging out, but someone with NoScript turned on, or someone malicious, can disable your control.
You didn't specify exactly who you are trying to protect, and from what, but if I'm guessing right, and you're concerned about the user who leaves their PC after logging out, but without closing the browser window, then is the Javascript really a concern?
Thinking it through, the type of person who would do this isn't thinking about how the info can be used maliciously. Someone who is malicious, presumably, is already "thinking like a bad guy" and knows enough to close the browser window.
Either option could be bypassed via malware that intercepts/alters the HTTP headers, JavaScript, etc., so neither is really 100% effective. The only difference I see is that the JavaScript option can be broken both by altering the HTML as it travels across the wire (using something like Fiddler or malware) AND by simply having JavaScript disabled, so the page cache option is marginally better for security purposes.
Using https instead of plain http offers a lot more protection in combination with the header method, making it much more effective, because it greatly increases the difficulty of manipulating the data across the wire, and it's not disabled simply by disabling JavaScript.
Either way, I think you need to weigh your options and choose one or the other. As sad as it seems, we can only do so much to protect the users from themselves.
Anything running on the browser can be intercepted and/or disabled. Any page sent to the client-side can be saved: any link URLs, content, javascript, etc.
If you let me load a webpage in a browser on my machine, I can view and save the source of every page I see, and capture every piece of communication to and from the server. If you want HTML to render or javascript to run on my machine, I get to see it and keep it forever if i want.
You could control this by only permitting access to your application through a remote connection to a controlled machine where the application runs, but for a consumer app, this is probably prohibitive.
You could, however, discourage most users with something like this:
function logout() {
  // Open the logged-out page in a new window, then close the current one,
  // taking its history with it.
  window.open(loggedOutPageURL)
  self.close()
}
A malicious user will have no problem disabling this JavaScript, or just saving content as he receives it, but this might be the best you can do.
Use the Abandon method to dispose of the session on logout, and check Session != null on your data or dashboard pages; then I think there is no need to disable the Back button.
Write the following script below the logout button:
<script type="text/javascript">
    window.history.forward(1);
</script>

Read/write data with blocks on Boost cached pages

I have a module that supplies a block. The block is set to BLOCK_NO_CACHE, and its content is pulled from a function. It lets a site admin create a 'message' to display on the site, kind of like CNN, where a breaking update is displayed at the top and a user can close it by hitting X. When they close it, the action and the message's UUID are written to their cookie so they don't see that message again.
I am getting reports from Boost users that when someone closes a message, it closes it for everyone. I assume this is because Boost is caching the page and serving a cached page after someone closed the message.
How can I make my module work for people using Boost?
I thought maybe hook_boot might work, but then again I am not sure if there is a better way to address this.
hook_boot will not do it. Once that page is in the cache, no PHP is run. You need to have that block loaded via AJAX, because the state of that block depends on a cookie.
http://drupal.org/project/ajaxblocks and http://drupal.org/project/ajaxify_regions are two projects that do this easily.
Also, it would be hard to get breaking updates out if the page is cached. You will have similar issues for Varnish users as well.

IE8 Session sharing problem in ASP .Net Application

I have an ASP.NET application that runs perfectly in IE 7.0, but due to session sharing in IE 8.0 (also in the case of a new window), the application shows unexpected behavior because the session can be modified by another window.
Some quick facts
I know about the -NoCache option and the "New Session" File menu item of IE 8.
I just want to know whether there is any option to disable this session-sharing behavior in a new window through ASP.NET code (by detecting the browser), or any other solution.
I would also like your suggestions for future web application development: what do we need to take care of to avoid session-sharing issues?
Session sharing has always been there and is not unique to Internet Explorer 8. New tabs and Ctrl-N in any browser (IE5, 6, 7, FF1, 2, 3, OP6-10, etc.) share the session data of the global process. It just received a fancy name because now tabs can have multiple processes on the computer (not new either), but they will still "share" the sessions. And that's kinda "new".
It is good that you're aware of this, but it's not so good if you're trying to take this "experience" or "feature" away from the user. If you want that, I'd check into JScript/JavaScript solutions instead and issue a warning when a user tries to open several sessions, but I doubt you'd get a good "prohibit sharing sessions across windows" solution. Even notable banks have already given up on this (they never liked this session sharing thing)
From a design perspective: on the server side, it is rather simple. Just always assume that the session is changed. This can, for instance, mean that on one screen, the user is not logged in, on another he is. That's ok. If he refreshes or goes to another page, you'll show him the correct view: logged in user for the same page.
Just make sure that you check for invalidated data as the result of a changed session in another window (i.e., request). But that's a general advice: be liberal in what you accept, but make sure you validate any input.
EDIT: On extra sessions: just treat them like that. It has always been possible for users to open more than one session for the same user (two different browsers), just as it has always been possible to change a session through another tab or window of the same browser.
On the "solving" side: configure the session as cookieless. This places the session ID in the URL query params. Any old window not having the session ID in the URL will not be considered part of the session. However, a warning is in place: this approach eventually causes more trouble than it solves (i.e., now you have to worry about requests with and without the session from the same user, same browser, same IP, and it's still possible to "copy" a session by copying the URL or tab).
Moving some of your information from Session to ViewState may help you solve the issues you are having.

How do I force expiration of an ASP.Net session when a user leaves the site?

We have a scenario in which we like to detect when the user has left our site and immediately expire their .Net session. We're using Forms Authentication. We're not talking about a session timeout, which we already have. We would like to know when a user has browsed away from our site, either via a link, by typing in an address or following a bookmark. If they return to our site, even if right away, they will have to log back in (I understand this is not great usability - this is a security requirement we've been given by our client).
My initial instinct is that this is either not possible, or that any solutions will be extremely unreliable. The only solutions we've come up with are:
Add a JavaScript onBlur event handler that tells the server to log out the session when the user leaves the site.
Once the user has logged in, check the HTTP referrer to ensure that the user has navigated from within the site.
Add AJAX polling back to the server to keep the session refreshed, possibly on a 10-second interval. When the call isn't received on time the session would end.
The onBlur seems like the easiest, but possibly least reliable method - I'm not sure if it would even work. There are also issues with the referrer method, as the user could type in an address within the site and not follow a link. The AJAX method seems like it would work, but it's complicated - I'm not even sure how to handle it on the back-end. I'm thinking there might also be scenarios in which that wouldn't always work.
Any ideas would be appreciated. Thanks.
I have gone for a heartbeat-type scenario like you describe above: either AJAX polling or an IFRAME. When the user closes the browser and a certain timeout elapses (10 seconds?), then you can log them out.
Another alternative would be to have the site run entirely on AJAX. Thus there is only one "URL" that a user can visit and all content is loaded dynamically. Of course you break all sorts of usability stuff this way, but at least you achieve your goal.
If the user closes their browser, or types in a different URL (including selecting a favourite) there is not much for you to detect.
For links on your site, you could create links that forward via your site (i.e. rather than linking to http://example.com/foo you link to http://mysite.com/forwarder?dest=http://example.com/foo).
Just be careful to only forward to sites you intend to, otherwise you can open up security issues with "universal forwarding" being used for phishing etc..
You absolutely, positively need to tell the client that this is not possible. They are having a basic misunderstanding of how the Web works. Be diplomatic, obviously... hell, it's probably someone else's job... but it needs to be done.
Your suggestions, or a combination of them, may work in a simple proof-of-concept... but they will bring you nothing but support nightmares and will not work consistently enough. Worse, you will undoubtedly also create situations where users cannot use the application at all due to the security hacks misfiring on them.
Javascript has an onUnload event, which is triggered when the browser is told to leave the page. You can see this on StackOverflow when you try to press the back button or click a link while editing an answer.
You may use this event to trigger an auto-logoff for your site.
I am unsure, however, if this will handle cases wherein the browser is deliberately closed or the browser process externally terminated (I'm guessing it doesn't happen in the 2nd case).
If all navigation within your site is done through .NET postbacks (no simple html links or javascript open statements), you can do automatic logoff and redirect to the login page if the page load is not a postback. This does not end the session on exit, but it looks like it because it enforces a login if manually navigating to your web app. To get this functionality for all pages, you can use a Master page that does this in the Page_Load.
private void Page_Load(object sender, System.EventArgs e)
{
    if (!IsPostBack)
    {
        System.Web.Security.FormsAuthentication.SignOut();
        System.Web.Security.FormsAuthentication.RedirectToLoginPage();
    }
}
