CDN integration seems to be a hot topic among the Tridion crowd. Somehow, though, the available discussions mainly revolve around pushing content to/from the CDN. What I'm specifically interested in is:
What is the proper way of modifying/prefixing inline images' outbound links so that they use the CDN?
The simplest way to go would be to create a post-processing TBB that operates on the Output item and place it inside 'Default Finish Actions'. Though doing this on the CD side would seem to be more correct, wouldn't it?
EDIT
Consider fancier case: what if not only I want to modify image paths, but wrap the whole image links into ASP.Net controls. Where do I do this?
EDIT 2
So far, I have implemented the tag-to-ASP.NET-control replacement via a TBB. It went smoothly; I only needed to keep an eye on the following subtle matters:
Consider CSS inline styles (e.g. background-image: url(...))
The new TBB needs to be placed after any link-manipulating logic (e.g. Extract Binaries from Html, Publish Binaries in Package, Link Resolver)
The quickest and most robust implementation is probably simple string replacement (as opposed to regexes or XML parsing); see the sketch after this list
To keep the standard "Preview" logic intact, a condition is needed so the replacement only triggers in the appropriate publish context
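For reference, here is a minimal sketch of the string-replacement core such a TBB might use. It assumes the rendered Output text has already been pulled from the Package, that images live under a site-relative /images/ path, and that the CDN host and the publish/preview condition are supplied from elsewhere (all names are hypothetical):

// Hypothetical helper illustrating the simple string-replacement approach;
// intended to be called on the Output item's text from the TBB's Transform method,
// only when the publish condition mentioned above is met.
public static class CdnLinkRewriter
{
    public static string PrefixImageLinks(string output, string cdnHost)
    {
        // Plain string replacements, no regexes or XML parsing.
        // Covers both HTML src attributes and CSS url(...) references.
        return output
            .Replace("src=\"/images/", "src=\"" + cdnHost + "/images/")
            .Replace("url(/images/", "url(" + cdnHost + "/images/");
    }
}

Usage would be something like CdnLinkRewriter.PrefixImageLinks(outputText, "https://cdn.example.com") before writing the text back to the Output item.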
If you decide to go with ASP.NET controls for your CDN-hosted images, you may consider these phases/steps:
write a TCDL tag (e.g. <tcdl:image id="..." path="..." />) on the CM side during rendering
write a TCDL TagHandler implementation that transforms the TCDL into an ASP.NET include during deployment
write the ASCX control to do the CDN lookup proper when the visitor requests the page
I'm not sure if both step 2 and 3 are needed. You might also simply write the CDN path during the deployment phase (step 2 above).
At the same time I'd expect you to upload (updated) images to the CDN using a deployer extension, so that it also happens during phase 2.
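To illustrate step 3, here is a rough sketch of what the ASCX code-behind could do at request time, assuming the CDN host is kept in web.config appSettings and the control receives the original binary path from the include generated in step 2 (the control name, Path property and CdnHost key are all made up for the example):

using System.Configuration;
using System.Web.UI;

// Hypothetical control emitted for each tcdl:image during deployment.
public partial class CdnImage : UserControl
{
    // Site-relative path of the published binary, set by the generated include.
    public string Path { get; set; }

    protected override void Render(HtmlTextWriter writer)
    {
        // Assumed appSetting: <add key="CdnHost" value="https://cdn.example.com" />
        var cdnHost = ConfigurationManager.AppSettings["CdnHost"];
        var url = string.IsNullOrEmpty(cdnHost) ? Path : cdnHost + Path;
        writer.Write("<img src=\"{0}\" />", url);
    }
}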
I am using a Repeater web part in Kentico to pick out pages from the content tree and generate nicely repeatable snippets of structured HTML, based on an ASCX transformation. (No surprises here - it's been working great!)
However, a new requirement landed whereby, alongside the existing HTML structure mentioned above, each repeated item must also have an area where we can add any amount of additional content based on other web parts.
I have previously written a few "layout"-type web parts implementing CMSAbstractLayoutWebPart, as described here, which has allowed me to generate a repeating number of web part zones, so I feel like I'm half way there. The issue with that, though, is that as it stands I don't seem to be able to make use of the great power and flexibility of the transformations on the page type (which I really think I need to do, and which seems like it should be possible).
I thought I may be able to specify the WebPartZone control in the transformation markup directly, like in the following:
<%@ Register Src="~/CMSInlineControls/WebPartZone.ascx" TagName="CMSWebPartZone" TagPrefix="cms" %>
<cms:CMSWebPartZone ZoneID="ZoneIDHere" runat="server" />
<div>
<h3><%# Eval("Heading") %></h3>
<p><%# Eval("Summary") %></p>
</div>
But the design view doesn't seem to pick up the web part zone; so I'm assuming the page lifecycle may not allow me to do this as I'd hoped.
So what I would like to know is:
Is it possible to include a WebPartZone control in a transformation such that I can then bring in new web parts in Design view?
If not, what is the recommended way to go about this? (If a custom web part is the way to go, I'd like to clone the Repeater web part in the first instance, as many of its existing properties will be needed - but presumably this must still inherit from CMSAbstractLayoutWebPart?)
Thanks!
Update
Good point about the editor's experience; I would definitely like to keep this as consistent as possible. The issue for me is that the requirements that drive my data structures are not always fully understood - and are certainly subject to change. Also, they are liable to vary (albeit subtly) across different products. So I've been trying to keep templates and page types more or less the same across the board, and push out the differences into page properties that drive web part config through macros. So given that the transformation approach won't work, I expect a custom web part is the right fit for me.
I shall post my findings!
I think adding a web part zone to a transformation is not the right direction, as a web part zone should be part of the page template (not the transformation) in order to be usable.
I'd probably try to organize my content so that each item you're currently showing in the repeater has any number of child pages (potentially of a different type) and use something like the hierarchical viewer to present all of them on the page. It allows using a different transformation based on either page type or node level. Another advantage of this approach is that you keep the editors' experience consistent.
In the end, I was able to use transformation markup to specify the generation of web part zones. I went down the route of creating a custom web part that inherits from CMSAbstractLayoutWebPart, rather than using CMSRepeater web part or similar...
Here's a breakdown of what I needed to do this:
Gave the custom layout-type web part some properties with which to query the content tree, and supplied them to a TreeProvider.SelectNodes() method in the web part code once it has initialised (by overriding the OnInit() method)
Gave the web part a TransformationName property so that the raw markup can be retrieved using TransformationInfoProvider.GetTransformation(this.TransformationName)
Used the markup above and resolved macros within it using each node from the node query
Example of macro resolution code (HTML transformations with macros)
protected virtual string ResolveNode(TreeNode node)
{
    // Create a child macro resolver, make the node's fields available as
    // anonymous source data, then resolve the raw transformation markup against them.
    var resolver = this.ContextResolver.CreateChild();
    resolver.AddAnonymousSourceData(node);
    return resolver.ResolveMacros(rawTransformationMarkup);
}
Then I go looking for placeholder text in the transformation markup and use the methods available in the CMSAbstractLayoutWebPart parent class(es), as detailed here, to Append() the resolved markup and also call AddZone() as necessary to tap into the response string builder
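To make that concrete, here is a heavily simplified sketch of how those pieces can fit together inside the layout web part. It assumes a hypothetical $$ZONE$$ placeholder in the transformation markup, that rawTransformationMarkup has already been loaded via TransformationInfoProvider.GetTransformation(), and that AddZone() takes an id and a title (check the exact signature in your Kentico version):

// Inside the custom web part inheriting from CMSAbstractLayoutWebPart.
// Iterates the selected nodes, resolves the markup per node, and splits it
// around the placeholder so a web part zone can be injected for each item.
protected void RenderItems(IEnumerable<TreeNode> nodes)
{
    int index = 0;
    foreach (var node in nodes)
    {
        string resolved = ResolveNode(node);      // method shown above
        int pos = resolved.IndexOf("$$ZONE$$");   // hypothetical placeholder

        if (pos < 0)
        {
            Append(resolved);                      // no zone in this item
        }
        else
        {
            Append(resolved.Substring(0, pos));    // markup before the zone
            AddZone("zone_" + index, "Item zone " + index);
            Append(resolved.Substring(pos + "$$ZONE$$".Length));
        }
        index++;
    }
}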
Summary: The great functionality of the API allowed me to completely avoid the use of any repeater controls. I could generate web part zones as part of the layout web part usual layout generation process.
It would be nice if I could figure out how to resolve the expressions in SCRIPT tags in ASCX transformations to complete the story, but by using HTML transformations I can use the above to accomplish what I need.
I have a website I’m converting from Classic ASP to ASP.NET. The old site allowed users to edit the website template to create their own design for their section of the website (think MySpace, only LESS professional.)
I’ve been racking my brain trying to figure out how to do this with .NET. My sites generally use master pages, but obviously that won’t work for end-users.
I’ve tried loading the HTML templates as regular text files and parsing them to ‘fit around’ the content placeholders. It is as ugly as it sounds.
There’s got to be something generally regarded as the best practice here, but I sure can’t find it.
Any suggestions?
How much control do you want your users to have?
There are a few ways to implement this. I'll give you a quick summary of all the ideas I can think of:
Static HTML with predefined fields.
With this approach you get full control and you minimize the risk of any security vulnerabilities. You would store per-user HTML in a database table somewhere. This HTML would have predefined fields with some markup, like using {fieldName}. You would then use a simple parser to identify curly brackets and replace fieldName with a string value pulled from a Dictionary<String,String> somewhere.
This approach can be fast if you're smart with string processing (i.e. using a state-machine parser to find the curly brackets, sending the rebuilt string either directly to the output or into a StringBuilder, etc., and importantly not using String.Replace). The downside is that it isn't a real templating system: there's no provision for looping over result sets (assuming you want to allow that) or expression evaluation, but it works for simple "insert this content into this space"-type designs.
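As a rough illustration of option 1, here is a small self-contained parser along those lines: it walks the template once, streams literal text into a StringBuilder, and looks up {fieldName} tokens in a dictionary (unknown tokens are emitted verbatim in this sketch):

using System.Collections.Generic;
using System.Text;

public static class SimpleTemplate
{
    // Single-pass replacement of {fieldName} tokens; no String.Replace calls.
    public static string Render(string template, IDictionary<string, string> fields)
    {
        var sb = new StringBuilder(template.Length);
        int i = 0;
        while (i < template.Length)
        {
            if (template[i] == '{')
            {
                int close = template.IndexOf('}', i + 1);
                if (close > i)
                {
                    string name = template.Substring(i + 1, close - i - 1);
                    string value;
                    // Known field: emit its value; unknown field: keep the token as-is.
                    sb.Append(fields.TryGetValue(name, out value)
                        ? value
                        : template.Substring(i, close - i + 1));
                    i = close + 1;
                    continue;
                }
            }
            sb.Append(template[i]);
            i++;
        }
        return sb.ToString();
    }
}

For example, Render("<h1>{title}</h1>", new Dictionary<string, string> { { "title", "Hello" } }) returns "<h1>Hello</h1>".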
Allow users to edit their own ASPX or ASCX files.
Why build your own templating system if you can use ASP.NET's? Well, this approach is the simplest if you want to build a quick 'n' dirty reporting system, but it fails terribly for security. Unfortunately you cannot secure any <% %> / <script runat="server"> code in ASPX files in a sandbox or use CAS owing to how ASP.NET works (I looked into this myself earlier: Code Access Security on a per-view ASP.NET MVC basis ).
You don't need to store the actual ASPX and ASCX files in the website's filesystem, you can store the files in your database using a VirtualPathProvider implementation, but getting it to work right can be a bit of a pain (especially as the ASP.NET runtime compiles ASPX files, so you'd need to inform it if an ASPX file was modified by the user). You also need to be aware that ASPX loading is tied into the user's request path (unless you're using Routing or MVC) so you're better off using ASCX, not that it matters really.
A custom ASP.NET handler that runs in its own CAS sandbox and implements a fully-fledged templating engine.
This is the most painful option, and it sits between the other two: you get the flexibility of a first-class templating engine (loops, fields, and evaluation if necessary) without needing to open your application up to gaping security flaws. The downside is that you need to build pretty much everything yourself. I won't go into detail here.
Which option you go for depends on what your requirements are. MySpace's system was more "skinning" than "templating", in that the users were free to set a stylesheet and define some arbitrary common HTML rather than modify their page's general template directly.
You can easily implement a MySpace-like system in ASP.NET, assuming that each skinnable feature is implemented as a Control subclass: just extend your Render method to allow for the insertion of said arbitrary HTML. Adding custom stylesheets is also easy: just add them inside a <style type="text/css"> element in your page's <head>.
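By way of illustration, a skinnable control along those lines might look roughly like this; the UserHtmlBefore/UserHtmlAfter properties are invented for the example and would hold the (already sanitised) per-user HTML:

using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical skinnable control: wraps its normal output with
// user-supplied (sanitised!) HTML fragments.
public class SkinnablePanel : Panel
{
    public string UserHtmlBefore { get; set; }
    public string UserHtmlAfter { get; set; }

    protected override void Render(HtmlTextWriter writer)
    {
        if (!string.IsNullOrEmpty(UserHtmlBefore))
            writer.Write(UserHtmlBefore);

        base.Render(writer);   // the control's standard output

        if (!string.IsNullOrEmpty(UserHtmlAfter))
            writer.Write(UserHtmlAfter);
    }
}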
When/if you do allow the user to enter HTML directly, I strongly recommend you parse it and filter out any dangerous elements (such as <script>, <style>, <object>, etc). If you want to allow the embedding of YouTube videos and the like, then you should analyse <object> elements to ensure they actually are YouTube videos, extract the video ID, then recreate the element from a known, trusted template. It is important that any custom HTML is "tag-balanced" (you can verify this by passing it through a strict XML parser instead of a more forgiving HTML parser, as XHTML is (technically) a subset of HTML anyway); that way any custom markup won't break the page.
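One cheap way to do the tag-balance check is exactly what's suggested above, running the fragment through a strict XML parser; a sketch that also rejects the obviously dangerous elements:

using System.Linq;
using System.Xml.Linq;

public static class UserHtmlValidator
{
    private static readonly string[] Forbidden = { "script", "style", "object", "embed", "iframe" };

    // True if the fragment is well-formed (tag-balanced) and contains none of
    // the forbidden elements. Wrapping in a root element lets us parse a fragment.
    public static bool IsAcceptable(string fragment)
    {
        try
        {
            var doc = XDocument.Parse("<root>" + fragment + "</root>");
            return !doc.Descendants()
                       .Any(e => Forbidden.Contains(e.Name.LocalName.ToLowerInvariant()));
        }
        catch (System.Xml.XmlException)
        {
            return false;   // not well-formed, i.e. not tag-balanced
        }
    }
}

Note that a strict XML parse will also reject named HTML entities such as &nbsp;, so you may need to translate those before validating.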
Have fun.
We all know that we're supposed to combine our CSS into one file, but per site or per page? I've found pro's and cons to both.
Here's the scenario:
Large site
CSS files broken out into one file for global styles and many for modules
Solution A: Combine ALL the CSS files for the whole site into one file:
The best part is that the one file would be cached on every page after the initial hit! The downside is that the naming convention for your selectors (classes and IDs) becomes more important, as the chance of a namespace collision increases. You also need a system for styling the same module differently on separate pages. This leads to extra selectors in your CSS, which is more work for the browser. That can cause problems on mobile devices like the iPad that don't have as much memory and processing power. If you're using media queries for responsive design, your troubles compound even further as you add in the extra styles.
Solution B: Combine one CSS file per page template:
(By page template I mean one layout, but many different pages, like an article page)
In this scenario, you lose most of the selector issues described above, but you also lose some of the cache advantages. The worst part of this technique is that if you have the same styles on two different page templates, they'll be downloaded twice, once for each page! For instance, this would happen with all your global files. :(
Summary:
So, as is common in programming, neither solution is perfect, but if anyone has run into this and found an answer I'd love to hear it! Especially, if you know of any techniques that help with the selector issue of Solution A.
Of course, combine and minify all the global styles, like your site template, typography, forms, etc. I would also consider combining the most important and most frequently used module styles into the global stylesheet, certainly the ones that you plan to use on the home page or entry point.
Solution B isn't a good one: the user ends up downloading the same content for each unique layout/page when you could have just loaded parts of it from the last page's cache. There is no advantage whatsoever to this method.
For the rest, I would leave them separate (and minified) and just load them individually as needed. You can use any of the preloading techniques described on the Yahoo! Developer network's "Best Practices for Speeding Up Your Web Site" guide to load the user's cache beforehand:
Preload Components
By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you'll need in the future. This way, when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user. There are actually several types of preloading:
Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components. Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the consecutive search result page.
Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
As far as the conflicting selectors go: combining all the files, or doing it any other way, should not make a difference; this is a problem with your design. If possible, have all modules "namespaced" somehow, perhaps by using a common prefix for classes specific to the module, like blog-header or storefront-title. There is a concept called "Object-oriented CSS" that might reduce the need for lots of redundant CSS and module-specific class names, but this is normally done during the design phase, not something you can "tack on" to an existing project.
Fewer HTTP requests are better, but you have to take file size into consideration, as well as user behavior and activity. The initial download time of the entry page is the most important thing, so you don't want to bog it down with stuff you won't use until later. If you really want to crunch the numbers, try a few different things and profile your site with Firebug or Chrome's developer tools.
I think you can make a global.css that stores the styles needed by every template.
And you could make a CSS file for each template.
Or simply use a CSS framework like LESS.
I developed a (small) company website in Visual Studio, and I'm addicted to learning more. I really just have two simple questions that I can't google.
1 - Asp:hyperlinks:
What is the purpose of an asp:hyperlink? I know I can't use these in my resource files -- I have to convert them all back to HTML links. At first, asp:hyperlinks looked sophisticated, so I made all my links asp:hyperlinks. Now I'm reverting back. What's the purpose of an asp:hyperlink, if any?
2 - Resource Files and strings:
In localizing my website, I have found that I'm putting the .master resource files in the directory's App_LocalResources folder VS created, because you can't change the top line stuff in a .master file and put a culture/uiculture in there. But all of my regular .aspx pages are going into the root App_GlobalResources folder into 1 of 4 language resource files (de, es-mx, fr, en). I'm making 2 or 3 strings per .aspx page. So when you have 47 pages in your website, that's about 100 strings on a resource page.
I just learned about all of the resources stuff from this forum and MSDN tutorials, so I have to ask, 'cause it's a lot of work. Is this okay? Is it normal? Am I going about this the wrong way?
I've never used resources, so can't comment on that.
Differences between asp:hyperlink and a tag that I know of:
asp:hyperlink is converted to an A tag by the ASP.NET engine when output to the browser.
It is possible asp:hyperlink could make browser-specific adjustments to overcome browser bugs/quirks, which is kind of the point of ASP.NET, or at least one of them. If such adjustments aren't already in it, they could be added later, and by using these controls you'll get them when/if they're added.
Both can be used in code-behind (you can set runat="server" on an A tag), but the asp:hyperlink has better compile-time checking in most cases -- strong typing for more of its members versus generic objects.
asp:hyperlinks make it easier to produce HTML bloat, but only if used with a poor design. For example, it is easy to set font styles and colors on them, but I wouldn't, since that generates inline styles that are usually pretty bloated compared to what you would write by hand or in a CSS file.
asp:hyperlinks support the "~/Folder/File.ext" syntax for the TargetUrl (href), which is nice in some projects if you use a lot of different URLs and sub-folders and want the server to handle mapping in a "smart" way.
The purpose of asp:hyperlink is to display a link to another webpage.
With the resource files, since you're not a programmer and are just developing a small site, use something you're comfortable with. Resource files are easy for beginners to use when you want to localize web content -- and yes, it's normal to be adding many strings if you need them.
For #1
Using a hyperlink control over just a piece of text allows you to access the control at runtime and manipulate its contents if you want to change the link dynamically. If you have static links that will never change, it's simpler to just use plain text, i.e. <a href=''>.
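For example (illustrative names only), an asp:HyperLink declared as <asp:HyperLink ID="lnkProfile" runat="server" /> can be adjusted per request in the code-behind, which a plain <a href=''> in the markup can't do without its own runat="server":

// Code-behind sketch; lnkProfile and CurrentUserId are hypothetical.
protected void Page_Load(object sender, EventArgs e)
{
    lnkProfile.NavigateUrl = "~/Users/Profile.aspx?id=" + CurrentUserId; // "~/" is resolved server-side
    lnkProfile.Text = "My profile";
}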
A couple of years ago, we had a graphic designer revamp our website. His results looked great, but he unfortunately introduced a new font not supported by web browsers.
At first I was like, "What!?!"... since most of our content is dynamic and there was no real way to pre-make all of the images. There was also the issue of multiple languages (since we knew Spanish was on the horizon).
Anyway, I decided to create some classes to auto-generate images via GDI+ and programmatically cache them as needed. This solved most of our initial problems. However, now that our load has increased dramatically, there has been a drain on our UI server.
Now to the question... I am looking to replace most of the dynamic GDI+ images with a standard web browser font. I am thinking of keeping some of the rendered GDI+ images and putting them in a resx file, but plan to replace most of them with Tahoma or Arial fonts via asp:Labels.
Which have you found to be a better localized image solution?
Embedding images into the resx
Only adding the image url into the resx
Some other solution
My main concern is to limit the processing on the UI server. If that is the case, would adding the image url to the resx be a better solution compared to actually embedding the image into the resx?
You should only need to generate each image once, and then save it on the hard disk. The load on your site shouldn't increase the amount of processing you have to do. That being said, it almost sounds like you are using images for things you shouldn't be. If there are so many different images that you can't keep up with generating them, it's time to abandon your fancy images for things that shouldn't be images, and go back to straight text. If the user doesn't have the specified font installed, it should just fall back to a similar looking font. CSS has good support for this.
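To illustrate the generate-once idea: render the image with GDI+ only when the file isn't already on disk, then let the web server serve it as a static file from then on. A rough sketch (folder, font, sizes and the naming scheme are placeholders):

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

public static class TextImageCache
{
    // Returns the path of a cached PNG for the given text, rendering it at most once.
    public static string GetOrCreate(string text, string cacheFolder)
    {
        // Simplistic file naming; a real implementation might use a proper hash of text + font + language.
        string path = Path.Combine(cacheFolder, text.GetHashCode().ToString("X") + ".png");

        if (!File.Exists(path))
        {
            using (var font = new Font("Tahoma", 16f))
            using (var bmp = new Bitmap(400, 40))
            using (var g = Graphics.FromImage(bmp))
            {
                g.Clear(Color.White);
                g.DrawString(text, font, Brushes.Black, 0f, 0f);
                bmp.Save(path, ImageFormat.Png);
            }
        }
        return path;
    }
}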
see my response here
This can be done manually or using some sort of automated (CMS) system.
The basic method is to cache your images in a language-specific directory structure and then write an HTTP handler that effectively removes the additional directory layer, e.g.:
/images/
    /en/
        header1.gif
    /es/
        header1.gif
In your markup or CSS you would just reference /images/header1.gif. The HTTP handler then uses session (if the language is user-specific) or config (if it is site-specific) to choose which directory to serve the image from.
This provides a clean line between code and content, and allows for client-side caching. Resx is great for small strings, but I much prefer a system like this for images and larger content, especially on the web where it is typically easy to switch images around.
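A minimal sketch of such a handler (the session key, appSetting and content type are assumptions); it maps an incoming /images/... request to the language-specific folder and streams the file:

using System.IO;
using System.Web;
using System.Web.SessionState;

// Registered in web.config for the /images/* path (registration omitted here).
public class LocalizedImageHandler : IHttpHandler, IReadOnlySessionState
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. "/images/header1.gif" -> "header1.gif"
        string file = Path.GetFileName(context.Request.Path);

        // Language from session if user-specific, falling back to a site-wide setting.
        string lang = context.Session != null ? context.Session["Language"] as string : null;
        if (string.IsNullOrEmpty(lang))
            lang = System.Configuration.ConfigurationManager.AppSettings["DefaultLanguage"] ?? "en";

        string physical = context.Server.MapPath("~/images/" + lang + "/" + file);
        context.Response.ContentType = "image/gif";   // header1.gif in the example above
        context.Response.TransmitFile(physical);
    }
}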
I had the same problem a few years back, and our interface team pointed us to sIFR. http://wiki.novemberborn.net/sifr/
You embed your font into a Flash movie and then use the sIFR JavaScript to dynamically convert your text into your font. Because it's client-side, there is no server-side impact.
If the user doesn't have Flash installed or JavaScript enabled, they get the closest web-friendly font.
As an added bonus: because your content is still text, Google can search and index it -- a huge SEO benefit.
Because of caching, I'd rather add only the image URL into the resx. Caching works much better for static content (i.e. plain files) than for generated content.
I'd be very cautious about putting text in images at all; CSS with an appropriate font-family fallback is probably the correct response on accessibility and good MVC grounds.
Where generation really is required, I think Kiblee and JayArr outline good solutions.