Asset is not getting unpublished after its expiration time is reached - Adobe AEM

AEM provides OOTB functionality to automatically unpublish an asset when it reaches its expiration time. It works in my local environment; however, it is not working on Staging.
I have tried changing the cron expression in the "Adobe CQ DAM Expiry Notification" configuration, but no luck.
Is there any configuration that I am missing?
Please suggest.

Related

Firebase: still the old version after deploying to Hosting - how?

I have received for maintenance a rather primitive React app hosted on Firebase (developed by someone else). Now weird things are happening:
I change the content of a file (even a simple text change)
I run a deploy (hosting) and see in the Firebase console that the deploy was successful
Still: the page content is old, without my changes!
How can that be? I use Chrome incognito mode to rule out caching on the browser side, and the Firebase docs say that a deployment clears the server cache.
It turns out it is about the cache settings of Firebase Hosting. I've seen people write that the default is about one hour (sic!), so I overrode the cache header settings as referenced, and now changes are visible immediately after publishing.
https://firebase.google.com/docs/hosting/manage-cache
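For reference, the override lives in firebase.json; a minimal sketch, assuming you want to disable CDN caching entirely (tune the value to taste):

{
  "hosting": {
    "headers": [
      {
        "source": "**",
        "headers": [
          { "key": "Cache-Control", "value": "max-age=0, no-cache" }
        ]
      }
    ]
  }
}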

How can I make the Next.js "dev" server safe for production?

My team's primary goal is to be able to "snapshot" a CMS-driven site to static HTML. This is straightforward using getStaticProps and next export.
But we also need to host an intranet version always fetching latest content from the CMS. Using getStaticProps this is not really possible because its output is cached, and if you use the older getInitialProps you can't "freeze" the server version of its output during export.
next dev makes this easy; it has a service that offers up fresh versions of the JSON files that will be made static during export.
On a long-running site, are there important configuration changes that would make next dev safe/safer to use?
In 9.5, Next.js added Incremental Static Regeneration, a fancy way of saying that getStaticProps will invalidate its output some time after each request.
This is still not ideal because someone who makes a CMS edit will want to see the change reflected on the very next request, but instead they'll see the old content, wait a few seconds, and reload the page.
On static export, nothing changes: getStaticProps turns into a static JSON file.
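For reference, ISR is opt-in per page via the revalidate key returned from getStaticProps; a minimal sketch, assuming a hypothetical CMS endpoint:

// pages/index.js (illustrative; the CMS URL is a placeholder)
export async function getStaticProps() {
  const res = await fetch('https://cms.example.com/api/page');
  const page = await res.json();
  return {
    props: { page },
    // regenerate in the background at most once every 10 seconds;
    // next export ignores this and still emits a static JSON snapshot
    revalidate: 10,
  };
}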

Delay when publishing a new GTM container version

I am wondering how many hours it takes for a newly published version of a Google Tag Manager container to take the modifications into account.
I tried to find the answer, without result...
Thanks in advance for your help.
Coki
It does not take hours. Publishing a new container version immediately creates a new gtm.js file with your changes.
New visitors will receive the new file immediately. Recurring visitors might have a cached version of the file, but GTM sets HTTP cache headers so that the file should not be cached for too long. Some users (on company networks etc.) might sit behind proxy servers that cache an old version of the file.
But most users should receive the updated version of the file within minutes after you have published it. I recommend you send the built-in Container Version variable as a custom dimension to Google Analytics, so you can always check whether changes in your KPIs correspond to specific container versions.
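Inside GTM this is a tag-configuration setting (reference the built-in Container Version variable in your Google Analytics tag's custom dimensions) rather than code, but as a rough illustration of the same idea with plain gtag.js (the IDs and the dimension index are placeholders):

// map a custom parameter onto a GA custom dimension, then send it
gtag('config', 'UA-XXXXXXX-1', {
  custom_map: { dimension1: 'container_version' } // dimension1 must exist in GA admin
});
gtag('event', 'page_view', { container_version: '42' });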

I am using Scripts.Render() in ASP.NET MVC - what can be the reason that 6 hours after an update users still get the old (bundled) script file?

I've updated some scripts within our TypeScript files, but users still seem to get the old versions of the bundled script. How do I know? Because I'm seeing (remote) JavaScript errors in our logs that refer to a class that 100% exists in the latest version.
It has been more than 6 hours and I can verify that the ?v={hash} has changed.
I have a feeling the suspects are, alone or in combination, any of these:
Static content (IIS)
Scripts.Render("")
TypeScript compilation
Azure
I have the feeling it's some kind of cache, or that it has to do with static content serving. Or with Scripts.Render().
What can the reason be? It drives me crazy, because I know the site is broken for some users, but I can't get a fix out.
I've included the headers below.
This is the code for the bundle. Note that core.js is being generated by TypeScript.
{
    var bundle = new Bundle("~/scripts/main-bundle");
    // pull in all .js files, including subdirectories
    bundle.IncludeDirectory("~/scripts/fork", "*.js", searchSubdirectories: true);
    bundle.IncludeDirectory("~/scripts/app", "*.js", searchSubdirectories: true);
    // core.js is generated by the TypeScript compilation
    bundle.Include("~/scripts/bundles/core.js");
    bundles.Add(bundle);
}
Update
If they are getting the old HTML because it's cached, and the hashbomb therefore doesn't change, shouldn't they still get a consistent pair of HTML and JS?
https://poules.com/us doesn't have a cache-control: max-age specified, so you probably have some users on browser-cached HTML, which would reference the old scripts.
There is an s-maxage set, but that only overrides max-age for public (shared) caches; max-age isn't set and this page's cache is set to private, so I don't think it is doing anything.
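To illustrate, a response like

Cache-Control: private, s-maxage=31536000

does effectively nothing: s-maxage only applies to shared caches, which private already rules out, and no max-age is given for the browser itself.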
Also, you can check Azure to see what is deployed if you have access to the Azure portal.
The best way I've found around any web interface that requires forcing a client update is to use a manifest file, and have that specify the scripts. Your bundle step then needs to update the manifest with the correct hash; I normally just use a Grunt task for this with placeholders in the manifest.
You then manage the manifest in code, with listeners for the updateready and cached events so you know when to refresh the browser (see the sketch below).
On another note, Application Cache is for older browsers; for newer browsers you have to use a Service Worker in order to provide a facility to update your app.
With both applied you will have a release schedule you can manage, and you can mitigate issues like the ones you have been getting.
Another method, if you have an API, is to let the API serve up JavaScript based on a version number for the user's current web front end. This way you can inform users that a more recent version is available and offer a reload button.
However, Application Cache through a manifest, or Service Workers, is more accessible to other teams if you have a split front-end and back-end setup.
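For illustration, the (now long-deprecated) Application Cache update hooks look roughly like this:

// fires when a new cache manifest has been downloaded
window.applicationCache.addEventListener('updateready', function () {
  if (window.applicationCache.status === window.applicationCache.UPDATEREADY) {
    window.applicationCache.swapCache(); // switch to the freshly downloaded assets
    window.location.reload();            // reload so the page actually uses them
  }
});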
Another reason could be that your web font is blocked by ad blockers like Ghostery and AdGuard, which in turn creates an unhandled error:
auth-dialog-window?openerOrigin=https%3a%2f%2fpoules.com&color=FF533C&pool=&openerType=iframe:82 Uncaught ReferenceError: WebFont is not defined
at auth-dialog-window?openerOrigin=https%3a%2f%2fpoules.com&color=FF533C&pool=&openerType=iframe:82
This could potentially stop other things from working and loading properly. You need to make sure that you are picking up and catching all errors and events for all of your users, no matter what plugins they use.
To be fair, looking at it, the ad blockers could very well be your main issue here.
Anyway, hope this all helps.
Even if the hashbomb stays the same, the asset/bundle files do expire at their own pace, but only after a year; at the moment of writing I see the following dates:
Date: Fri, 25 May 2018 17:11:21 GMT
Expires: Sat, 25 May 2019 17:11:22 GMT
If you update your main-bundle at Fri 18:00:00, it might not be fetched again until after Sat, 25 May 2019 17:11:22.
I also notice you are mixing public and private cache locations; by caching the bundles publicly, any proxy server involved maintains its own cache, from which the browser might get served, which might also result in a delay.
Is it an option to not cache your webpages so that the hashbomb is always up to date?
I also usually serve my assets without caching but with the Last-Modified header.
Doing so ensures that the browser at least makes a request for each asset, providing the If-Modified-Since header, upon which IIS responds with the 304 status code if there's no change (so that no bytes go over the wire).
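Schematically, that exchange looks like this (headers trimmed for brevity):

GET /scripts/main-bundle HTTP/1.1
If-Modified-Since: Fri, 25 May 2018 17:11:21 GMT

HTTP/1.1 304 Not Modified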

Deploy to firebase hosting from a firebase function

Is it possible to deploy static assets from a firebase function to the firebase hosting?
Use case: a blog with static HTML files. Blog content and meta info would be stored in the database (content as markdown). On publish or update, a Firebase function is triggered which parses the markdown, generates a static HTML file for the blog post, and deploys it to Firebase Hosting. After deployment, the function would store the live URL in the database.
Would this workflow be possible? In the current documentation, I cannot find anything about deploying from functions.
As a workaround, I could imagine a setup with travis-ci. The function triggers a rebuild on travis, travis builds the static assets and deploys them to firebase hosting, but this seems like a huge overhead.
I could also pull the markdown content from the db and build on the client, but I really like to try the static file approach for initial loading time reasons.
I have been wanting to do this for a long time, and it seems that with the newly unveiled Firebase Functions Hosting Integration... well, we still can't do exactly what we want. But we can get close!
If you read the post above, you can see how we can now edit firebase.json to rewrite a URL (or URLs) to point to a Firebase function, which can build the page from markdown stored in Firebase and serve it to the client.
The thing is, this happens on every GET request for each page. Which is dumb (for a largely static page like a typical blog). We want static pages that are instantly available, without needing to wait for a function to generate anything (even though that happens really fast). We can mitigate that by setting the Cache-Control header to an arbitrarily large number on the response object, as in
res.set('Cache-Control', 'public, max-age=600, s-maxage=31536000');
Which will tell the browser to cache the result for 10 minutes, but the CDN to cache it for a year. This almost solves the problem of wanting pre-rendered, instantly available pages for all but the first hit, which will incur the render cost. Plus, the CDN can evict your cached content if it determines that there is not enough traffic to warrant storing it.
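Put together, a sketch of that setup might look like this (the database path and the markdown renderer are assumptions, not from the post; the function sits behind a firebase.json rewrite):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.blogPost = functions.https.onRequest(async (req, res) => {
  // e.g. firebase.json rewrites /blog/** to this function
  const slug = req.path.split('/').pop();
  const snap = await admin.database().ref(`posts/${slug}`).once('value');
  const html = renderMarkdown(snap.val().markdown); // hypothetical renderer
  // browser: 10 minutes; CDN: one year
  res.set('Cache-Control', 'public, max-age=600, s-maxage=31536000');
  res.status(200).send(html);
});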
Getting closer.
But we aren't quite where we need to be. Say you publish your post and a few days later, notice a typo? Well, I think you are pretty much hosed. Your cached content will continue to be served for the rest of the year unless you do something like:
Change the URL of the Post - This is probably a bad idea, as it will tank any SEO and break links to the page that are already in the wild.
There may be a way to force the CDN to update, perhaps by augmenting your 'publish blog post' process to include a JavaScript GET request with something odd in the request header, or maybe there is a way to do it with a Firebase function any time the post gets updated. This is where I get stuck.
Firebase uses Google Cloud Platform's CDN, which includes a mechanism for cache invalidation, but I don't know that this is readily available from functions -- and even if it is, it still doesn't solve getting evicted from the cache.
Personally, I will probably use the setup I described with a CDN cache age limit of intermediate length. This beats my current approach of sending markdown to the client and rendering locally using (the excellent) showdown.js, which is still really fast but does require client-side JavaScript and a few CPU cycles.
Hopefully someone will have a solve for this (or someone at firebase can slip pushing to hosting from functions into the next release :) ). I'll update my answer if I get it nailed down.
I haven't tried this yet, but I hope your Cloud Function could deploy new static files to Firebase Hosting with the Hosting REST API.
I'll update this answer with function code and tutorial after some tests.
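For reference, the flow that API expects is: create a new site version, declare the SHA-256 hashes of the gzipped files, upload whichever bodies Hosting says it is missing, finalize the version, then release it. A rough, untested sketch (the site ID and the access token are placeholders; in a function you could mint the token with google-auth-library and the https://www.googleapis.com/auth/firebase scope):

const fetch = require('node-fetch');
const zlib = require('zlib');
const crypto = require('crypto');

async function deployFile(token, siteId, path, html) {
  const api = 'https://firebasehosting.googleapis.com/v1beta1';
  const auth = { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' };

  // 1. create a new (empty) version of the site
  const version = await (await fetch(`${api}/sites/${siteId}/versions`, {
    method: 'POST', headers: auth, body: '{}' })).json();

  // 2. announce the file by the SHA-256 hash of its gzipped contents
  const gz = zlib.gzipSync(html);
  const hash = crypto.createHash('sha256').update(gz).digest('hex');
  const populate = await (await fetch(`${api}/${version.name}:populateFiles`, {
    method: 'POST', headers: auth,
    body: JSON.stringify({ files: { [path]: hash } }) })).json();

  // 3. upload the gzipped body if Hosting doesn't have it yet
  if ((populate.uploadRequiredHashes || []).includes(hash)) {
    await fetch(`${populate.uploadUrl}/${hash}`, {
      method: 'POST',
      headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/octet-stream' },
      body: gz });
  }

  // 4. finalize the version, then release it to the live site
  await fetch(`${api}/${version.name}?update_mask=status`, {
    method: 'PATCH', headers: auth, body: JSON.stringify({ status: 'FINALIZED' }) });
  await fetch(`${api}/sites/${siteId}/releases?versionName=${version.name}`, {
    method: 'POST', headers: auth, body: '{}' });
}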
I haven’t fully investigated this yet but I wonder if this is what you’re looking for:
https://gist.github.com/puf/e00c34dd82b35c56e91adbc3a9b1c412
# clone the gist and install its dependencies
git clone https://gist.github.com/e00c34dd82b35c56e91adbc3a9b1c412.git firebase-hosting-deploy-file
cd firebase-hosting-deploy-file
npm install

# perform a dry run, make sure you're not doing something you'll regret
node deployFile.js contentsite /index.html

# do the deletion for real
node deployFile.js contentsite /index.html commit
