We're running AEM 6.5.8.0 and have a range of content in grid containers that no longer renders visibly, even though the markup is present when viewing the page source.
For instance, on https://www.redfcu.org the content below "featured rates" can be seen and edited in author, and its source is present when published, but it does not render in the browser. Our implementation partner has not responded to the issue in several days, and Adobe's only suggestion is to restore from backup, but without knowing the root cause, I'm hesitant to restore just for the sake of it.
(Screenshots: content in author / content when published)
I see the content now, so I guess the issue has been solved. What was the root cause? In the future, if you have access to the logs, there should be errors or warnings printed in error.log (or in the project's default log, project-name-project.log) when you try to access that page. Missing content on publish is a common issue. In this case, it could be that the component reads the child pages below a given path and those pages or Experience Fragments are not present on the publish instance. Similar scenarios happen when the content is tagged but the tags are not published, or when there is a failure in logic that checks whether you are in author or publish mode.
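For example (a minimal HTL sketch, not the actual component on that site), a component that simply lists child pages renders nothing on publish if those children were never activated, even though it looks fine in author:

<!--/* renders one list item per child page of the current page;
       unpublished children don't exist on the publish instance, so the list comes out empty */-->
<ul data-sly-list.child="${currentPage.listChildren}">
  <li><a href="${child.path @ extension='html'}">${child.title}</a></li>
</ul>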
Related
So I've been having issues with a large website trying to rebuild itself over and over during publishing; being lazy, I wanted the publish to take it offline and bring it back online again automatically.
I followed these instructions:
http://blogs.msdn.com/b/webdev/archive/2013/10/30/web-publishing-updates-for-app-offline-and-usechecksum.aspx
<EnableMSDeployAppOffline>true</EnableMSDeployAppOffline>
Trouble is, the page it creates appears empty. The title says "Site under Construction", but nothing shows on the page; the body has a mention of being IE friendly:
additional hidden content so that IE Friendly Errors don't prevent this message from displaying (note: it will show a 'friendly' 404 error if the content isn't of a certain size).
but that's in a comment; the page is just blank white.
I found this article: Custom app_offline.htm file during publish
It suggests that C:\Users\[user]\AppData\Roaming\Microsoft\VisualStudio\11.0\app_offline.htm holds the content MSDeploy uses. I'm using VS2013, so I went to the 12.0 folder accordingly, found the file and edited it, and MSDeploy just laughs in the face of my hopes and smashes my dreams of any sort of on-screen message.
I don't care if all my websites get the same screen; it's an intranet and all my websites look similar anyway. I just need an onscreen message and a meta tag that refreshes the page after one minute.
Does anyone know where or what to edit?
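For reference, the kind of page I'm after is something like this (just a sketch, with the refresh interval and text to taste):

<!DOCTYPE html>
<html>
<head>
  <title>Site under Construction</title>
  <!-- send the browser back to the site after 60 seconds -->
  <meta http-equiv="refresh" content="60" />
</head>
<body>
  <h1>Site under Construction</h1>
  <p>This site is being updated and should be back in about a minute.</p>
  <!-- enough padding would go here (roughly 512+ bytes) so IE's "friendly"
       error page doesn't replace this message -->
</body>
</html>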
Our team is having an issue where a publish from the content authoring server to the content delivery web database does not refresh the content delivery server's cache.
We have a content authoring server which has a master, core, and web database. We also have a content delivery server which has its own master, core, and web. We have two publishing options: one publishes to the content authoring web database, the other to the content delivery server's web database.
My question(s):
1) How would the publish on the content authoring server know to clear the cache on the content delivery server?
In our site definition file we have defined two events named "publish:end" and "publish:end:remote". The handler we've attached to each is type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel", method="ClearCache".
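For reference, the declarations look roughly like this (the site name here is a placeholder, not our actual configuration):

<event name="publish:end">
  <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
    <sites hint="list">
      <site>website</site>
    </sites>
  </handler>
</event>
<event name="publish:end:remote">
  <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
    <sites hint="list">
      <site>website</site>
    </sites>
  </handler>
</event>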
2) What is the difference between "publish:end" and "publish:end:remote"?
We have separate site definition files for each environment: one for the content authoring server and one for the content delivery server. Since the publish to the content delivery server's web database occurs on the content authoring server, one would assume it uses the events declared in the content authoring server's site definition file.
3) Can we add the 'content delivery sites' into the content authoring site definition, and then add them into the "publish:end" and "publish:end:remote" event declarations?
4) Does one need to add them to one or both?
5) What exactly does the content authoring site do when it pulls the list of sites in the "publish:end" and "publish:end:remote" configurations?
As mentioned by Ruud, the Sitecore Scaling guide will help you with a lot of your questions.
The first thing to check is whether you have EnableEventQueues set to true. That will make sure that your distributed nodes are looking for the events in the master database.
The second thing is to make sure you have your instance names specified so that the content delivery node knows which events to look for. These settings are InstanceName and Publishing.PublishingInstance. Your authoring node will have the same value in both, but your delivery node should have the authoring node's name in the PublishingInstance setting.
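A sketch of what those settings might look like on the delivery node (the instance names CM01/CD01 are made up for illustration; put them in a patch/include file or your settings section as appropriate):

<settings>
  <setting name="EnableEventQueues" value="true" />
  <!-- this delivery instance's own name -->
  <setting name="InstanceName" value="CD01" />
  <!-- the name of the authoring instance that performs the publishes -->
  <setting name="Publishing.PublishingInstance" value="CM01" />
</settings>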
Lastly, and this might just be me, it seems from your question that you have two core and two master databases? I'm not sure why your delivery node needs its own core and master databases, unless you are replicating. From my understanding, the EventQueue table in the Core database is what is used by all the instances to communicate events to each other. Make sure that in some way your delivery and authoring instances are able to 'share' the EventQueue table, either by using the same Core DB, or by replicating your authoring Core out to your delivery instance.
Between those three things and the Sitecore Scaling guide, you should be able to get your delivery instance to listen to the publishing events from authoring.
When I searched for solutions to this issue, none of the standard advice worked for me.
Only when I followed the steps from this page did my cache start clearing correctly again: http://sitecoreskills.blogspot.co.uk/2015/08/sitecore-html-cache-doesnt-clear.html
The solution turned out to be completely unrelated to the HTML cache or the event queues.
Sitecore maintains a legacy Lucene index called __System. In our case that index was locked so it couldn't be updated. That somehow prevented the clearance of the HTML cache. The answer was to simply delete it!
It turns out that the __System index can be removed without any problems - Sitecore just recreates it afterwards.
After hours of troubleshooting, we tried changing the InstanceName in the ScalabilitySettings.config file, and the cache started to clear after a publish.
I believe you can change the InstanceName to any value; in our case it started working after that.
I am trying to implement the Features module on one of my Drupal 7 sites for managing blocks. I have a couple of questions though. First, when you create a new feature on the source site, do you then take that newly created feature, put it in your modules directory, and enable it on the source server AND the destination server, or JUST the destination server?
Also, I'm wondering how it works when you are trying to manage blocks with a test server and a live server, where the live server is a clone of test. In other words, we create a test server, construct our site including content and blocks, and when it's finished we clone test to live. Then we install the Features module on test and create a feature that contains ALL of our custom blocks. When I did this, though, and moved that feature to the live server and enabled it, it was immediately in an overridden state. Are features only meant for moving NEW blocks from one site to another, and not meant to manage blocks that already exist on BOTH servers? Should I create the feature containing all the blocks on the test server, then delete the blocks on the live server, and THEN enable the feature on live, which would populate the blocks on live? I'm just not sure if I'm missing something or going about this the wrong way.
THANKS
UPDATE: OK, I'm pulling my hair out over here. Again, I have two sites, a source and a destination. The destination is an exact clone of the source. I have three blocks on both sites that I would like to manage via Features. So, on the source site, I decided to test with just ONE block first. I first edited the block so it would be different than the one on the destination site. I then created the feature including the block and block settings (by the way, I'm using Features Extra to accomplish this), placed the feature on the destination site, and when I activate the feature, it is actually NOT in an overridden state and the changes that I made to the block on the source site show up on the destination site, no problem.
HOWEVER, if I try to add the other two blocks to this feature on the source site, recreate it, and export it out to the destination site, the feature on the destination site is now in an overridden state, which is fine, but no matter how many times I "Revert" the feature to take the blocks out of the database and into code, it will NOT get out of an overridden state. I have flushed the cache, disabled the feature and re-enabled it, and tried reverting, and it is stuck as overridden, and I do not see the changes to the other two blocks that I made.
I then thought maybe it's because I am doing three blocks at once. I took JUST block number two by itself, created a feature for it, and put it on the destination site, and it gets stuck in overridden status. Same goes for block number three. Block number one by itself is fine and does not get stuck in overridden status; it's just blocks two and three. As far as I can tell, all three blocks were created the exact same way and do not have any different settings as far as roles, pages, etc. I am stumped on this one for sure.
A comment doesn't allow this long a post, so I'm posting it as an answer.
I can't say much without seeing the exact problem, but this is how Features works: you make changes on the source site, then create a feature from those changes. On the destination site you enable that feature. If the destination site already has those changes, the feature will show as overridden; you revert it and get the changes.
As you say, you added the two other blocks to the feature, but you didn't change anything in those blocks, so they already exist on the destination site. That's why the feature is in an overridden state. When you revert, it applies the changes, but sometimes it doesn't update the state shown at admin/structure/features.
I don't know your exact requirement, but I think you should make the changes on the source site, then capture them in the feature and enable it on the destination site.
In %TRIDION_HOME%\web\WebUI\WebRoot\Configuration\System.config we can increment the modification attribute's value to instruct the Content Manager to force a download of items.
The setting is mentioned on the PowerTools discussion but also on the Skinning the Content Manager Explorer topic on SDL Live Content.
<server version="6.1.0.55920" modification="7">
Alternatives to updating the CME include clearing browser cache (CTRL+Shift+Delete in Chrome) or setting cache settings per user.
Question
Should I use this for any CM-side changes such as GUI extensions, schema changes, or template linked schemas? Or does it only apply to certain parts of the Content Manager Explorer?
In other words, after a schema and template change, what's the best way to make users get the latest versions of components, schema drop-downs, and template selections?
The values of the modification and version attributes become part of the URL of every CSS and JavaScript file that the Tridion UI generates/merges, and of many of the static (image) files too. So the URLs look like this: edit_v.6.1.0.55920.7.aspx?mode=css. Since the browser sees this as a new URL, there is no way it can have the file in its cache yet, and thus it will always have to download the files from the server instead of using (possibly outdated) files from the local cache.
This technique of injecting some version information into the URL is known as "URL fingerprinting". Google commonly embeds a hash value of the file into the URL, ensuring that the fingerprinting happens without requiring the developers to increase a version number manually. But whichever way of fingerprinting is used, the technique is a pretty efficient way to ensure that all browsers download the latest version of your code.
If you are developing a GUI extension, you can indeed typically get the same effect by clearing your browser cache or even disabling it completely (for the Tridion domain). But once you roll out your extension to a non-development server, changing the modification attribute is the most certain way to ensure that all your users get the latest JavaScript/CSS changes without each of them having to clear their cache manually.
The URL fingerprinting in Tridion only affects CSS, JavaScript and image files. The actual CMS data (such as Schemas and Components) is loaded using XMLHttpRequests and thus not affected by the modification attribute.
As far as I know,
<server version="6.1.0.55920" modification="7">
This clears only JS- and CSS-related caching. When a user accesses the CM, the CM loads all the files, including the latest copies.
Should I use this for any CM-side changes such as GUI extensions, schema changes, or template linked schemas? Or does it only apply to certain parts of the Content Manager Explorer?
For this part, the answer is no. Whenever a user makes any changes to a schema, the changes should refresh across all Publications; currently this is not happening in the browser.
Hopefully this will be fixed in upcoming versions.
In other words, after a schema and template change, what's the best way to make users get the latest versions of components, schema drop-downs, and template selections?
Currently the user has to do a forced refresh to get updated info across all Publications.
The SDL Tridion CMS interface caches CMS Items in order to provide faster browsing and loading of its own interface. This does mean that sometimes:
Custom GUI extensions may not display the latest versions of their files
Recently created or modified CMS items may not be shown, or may not show the latest version.
This is why sometimes a new keyword isn't shown within a component field, or a new component template isn't shown when trying to add a component page.
Incrementing the modification number in the <server> node will cause all CMS items to show their latest versions to the CMS user(s). You'll see it uses this value to reference the CSS and JS files used by the CMS GUI.
As a developer I also turn off my Firefox cache (I prefer Firefox for the Firebug extension, which is great for working with GUI extensions), as this means you don't need to go and change this value; a simple browser refresh always seems to do the trick. Turning off the cache is explained here: https://superuser.com/questions/23134/how-to-turn-off-the-firefox-cache
So I have an Umbraco setup with a 'content' root node and then a 'home' node under that. Under the 'home' node is the content, and the URLs are the names of those nodes; for example, I have an 'about us' node under home and its URL is '/about-us/'.
In the case of the 'news' node, below 'about us', its children sometimes get '/home/about-us/news/' plus the title of the story as the URL, which throws a 404. I can see that this is the URL of the node on the properties tab, but if I republish it, the URL goes back to '/about-us/news/' plus the title for a period before reverting to the broken link.
I have only seen this behaviour on this node, which contains news-item document types. I basically watched the Umbraco TV video and created it following along.
It seems to me to be an Umbraco bug, but I would really appreciate any help with the issue.
In the web.config, there is a setting called umbracoHideTopLevelNodeFromPath. This causes the behavior you are describing when it is set to false. Do you perhaps have multiple people working on the site and publishing different versions of the web.config that have this setting changed?
When publishing a node with the setting set to false, it would add the /home part to the URL. Otherwise, it would leave off the /home.
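That key lives in the appSettings section of web.config, something like this:

<appSettings>
  <!-- true: strip the top-level node (e.g. /home) from generated URLs; false: include it -->
  <add key="umbracoHideTopLevelNodeFromPath" value="true" />
</appSettings>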
Once the Umbraco application has started, there are several processes that run on a regular basis (e.g. to check for expired content). It is possible to piggy-back on these by creating a custom class that inherits from umbraco.cms.businesslogic.ApplicationBase. If you have created one that uses the Document.AfterPublish event handler, then I would check that it is not causing the issue.
I'm assuming that you haven't written one of these, though, so the only other thing I can suggest is checking whether an installed package is causing the issue. Have you installed any Umbraco packages? If so, do they have any automated behaviours, like creating folders, etc.? That may be causing the issue. The author of a package will usually have a website, CodePlex project, etc., and they will usually have an issue list or blog.
Edit
I've just quickly checked, and uBlogsy, one of the plugins you mention, does exactly this. It has automatic moving and sorting of posts. This is described in the release notes. If you are using this tool for creating news pages, then this will be your issue.
I followed obsidian's link in his answer and read about someone else having the same issue. It seems to trace back to an umbraco.library:NiceUrlFullPath call in the RSS creator that was feeding the news items. I replaced the umbraco.library:NiceUrlFullPath calls with umbraco.library:NiceUrl calls and the issue has disappeared.
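For anyone hitting the same thing, the change in the RSS XSLT was along these lines (the variable name is illustrative and the exact parameters will depend on your template):

<!-- before: built the URL from the full node path, which pulled in /home -->
<xsl:value-of select="umbraco.library:NiceUrlFullPath($newsItem/@id)" />
<!-- after: NiceUrl produces the normal /about-us/news/... URL -->
<xsl:value-of select="umbraco.library:NiceUrl($newsItem/@id)" />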