Question about Page Output Caching in ASP.NET

I wanted to use this directive
<%@ OutputCache Duration="20" Location="Any" VaryByParam="none" %>
for our homepage.
(This works, by the way.)
But there are multiple domains pointing to the same site, like mydomain.fr and mydomain.ch.
In the base page I then set the culture of the site to fr-FR if they typed mydomain.fr, to de-CH if they typed mydomain.ch, etc.
I was wondering: since both URLs load the same page, /default.aspx, is the cached page served to both users (so when .fr comes first, the .ch visitor sees the cached .fr page)? Or does the framework say: hey, mydomain.fr/default.aspx is not the same as mydomain.ch/default.aspx, even if it's the same physical page, so let's NOT take the cached one and instead recreate (and cache) a new version?
I've read about VaryByHeader for page output caching, but .fr vs .ch is not a header, I think?

You could vary it by the Host header, which would mean that each domain gets its own cache entry.
The Host header contains the hostname/domain name that the browser requested: mydomain.fr, mydomain.ch, etc.
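For this question that would look like the following (a sketch; the only change from the directive above is the added VaryByHeader attribute):

<%@ OutputCache Duration="20" Location="Any" VaryByParam="none" VaryByHeader="Host" %>

With that in place, ASP.NET keeps one cached copy of the page per distinct Host value, so the .fr visitor and the .ch visitor each get their own cached version.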

I would highly suggest that you don't automatically set the culture based on the domain they used to get to your site.
Instead, just respect the culture settings of the browser. One reason is that a visitor could very well be going to your French site, but prefer things in English. Browsers send their preferred culture settings with every request, in the Accept-Language header.
Also, offer the users a little language icon (usually a flag) at the top of your site. This should allow them to change their language of choice.
As to your actual question: if you are implementing culture resources properly then you don't have to worry about caching. It's taken care of for you.
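If you go the "respect the browser" route, a minimal sketch in a Web Forms base page (InitializeCulture and Request.UserLanguages are standard ASP.NET APIs; the fallback behaviour here is an assumption):

using System;
using System.Globalization;
using System.Threading;
using System.Web.UI;

public class BasePage : Page
{
    protected override void InitializeCulture()
    {
        // Request.UserLanguages reflects the browser's Accept-Language header.
        string[] languages = Request.UserLanguages;
        if (languages != null && languages.Length > 0)
        {
            // Entries look like "fr-FR" or "fr;q=0.8"; strip the quality suffix.
            string preferred = languages[0].Split(';')[0];
            try
            {
                Thread.CurrentThread.CurrentUICulture = new CultureInfo(preferred);
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(preferred);
            }
            catch (ArgumentException)
            {
                // Unknown culture string: keep the site default.
            }
        }
        base.InitializeCulture();
    }
}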

Related

How to invalidate browser cache using just configuration in the webserver?

For a long time I've been updating ASP.NET pages on the server and have never found the correct way to make changes to files like CSS and images visible.
I know that if I append something to the URL, the browser will think the file is a different one:
<img src="/images/myLogo.png?v=1"/>
or perhaps changing its name:
<img src="/images/myLogo.v1.png"/>
Unfortunately this does not seem like the correct way. In cases where I'm using App_Themes, the files in that folder are automatically injected into the page, in a way that I can't easily change the URL.
So my question is:
When I'm publishing the ASP.NET application on the server, what is the correct way to signal to IIS (so that it notifies the browser after that) that a file has changed? Is it not automatic? Should I change some configuration in IIS, or perhaps add some "decoration" in the code?
I've already tried many questions here on SO, like "ASP.NET - Invalidate browser cache", "How to refresh the browser cache of an image?", "Handle cached images? How to get the browser to show the new version?", and even "What is an elegant way to force browsers to reload cached CSS/JS files?", but none of them takes a different approach: they all handle it manually in the code instead of through IIS or ASP.NET configuration.
The closest I could find is "Asking browsers to cache our images (ASP.NET/IIS)", where they set an expiration, but not based on whether the files were updated. Instead they cache those files for a set number of days or hours, so they would be re-fetched even when no changes were made.
I want to know if IIS or ASP.NET offers something for this: automatically telling the browser that a file has changed. Is it possible/built in?
The options you have to update browser-side cached items are:
Change the file name.
Add a URL parameter.
Cache it for a limited time (e.g. for a couple of hours).
Compare the date-time of creation (Last-Modified).
Signal with an ETag.
With the first two you avoid one server call for each item, but the third option loads the item again after some time.
With the others you have to make one call to the server per item to see if it needs to be loaded again.
So you cannot have everything here; there is no single correct way, and you need to choose what is best for you and what you can do. The fastest from the client's perspective are options (1) and (2).
The direct answer to your question is to use an ETag, or a date-time comparison of the file's creation, but that way you still spend a call to the server per item; what you save is only the size of what travels back.
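A rough sketch of the ETag approach in ASP.NET (the handler name and the way the tag is derived are illustrative assumptions, not the only way to do it; the pattern is: send an ETag with the file, compare it against the If-None-Match header on later requests, and answer 304 Not Modified when they match):

using System;
using System.IO;
using System.Web;

// Hypothetical handler for static files; map it to the desired paths in web.config.
public class ETagFileHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath(context.Request.FilePath);

        // Derive a validator from the file's last write time.
        string etag = "\"" + File.GetLastWriteTimeUtc(path).Ticks.ToString("x") + "\"";

        // If the browser already holds this version, reply 304 with no body.
        if (context.Request.Headers["If-None-Match"] == etag)
        {
            context.Response.StatusCode = 304;
            context.Response.SuppressContent = true;
            return;
        }

        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetETag(etag);
        context.Response.ContentType = "application/octet-stream"; // set per file type in practice
        context.Response.WriteFile(path);
    }
}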
Some more links:
http eTag
How do I support ETags in ASP.NET MVC?
Configuring ETags with Http module in asp.net
How to control web page caching, across all browsers?
Jquery getScript caching
and you can find even more.

Upgrade Dnn 5 to Dnn7 now page names with dashes not matching static links

So I upgraded a very large DNN5 site to DNN7 for a client. Now all static links in the page content that point to page names with dashes are broken.
So the page name is Mid-Size Truck in the CMS.
The static link from the old site that worked is www.somesite.com/MidSizeTruck.aspx
The upgraded DNN now renders the link as www.somesite.com/Mid-SizeTruck.aspx
So now when you click on an old static link, it can't find the new page URLs that include the dash. There are thousands of these static links; is there a way to get DNN to remove the dashes like it used to?
I did, however, notice that if there is a space on both sides of the dash in the page name, DNN removes it from the URL.
so ch1 - First Article
becomes
/ch1FirstArticle.aspx
Did you switch to the Advanced URL Provider (which is a web.config change)? Otherwise, I wouldn't expect the URLs to change.
There's a TabUrl table that has been introduced, which lets you specify URL aliases for pages (these will 301 redirect to the "real" URL). However, you may need to switch to the advanced URL provider to use it (which is theoretically risky since it might change URLs, but since your URLs are already mismatched, it may not make things any worse). To make that switch, find the <friendlyUrl /> element in the web.config, then change the urlFormat attribute to advanced on the DNNFriendlyUrl entry.
If DNNFriendlyUrl isn't the default friendly URL provider (e.g. you're using iFinity's 3rd party URL provider), that may be part of the issue, as well. At that point, you'd need to do some research with the developer of that URL provider.
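For reference, the relevant web.config section looks roughly like this (a sketch; keep whatever attributes your existing DNNFriendlyUrl entry already has and change only urlFormat):

<friendlyUrl defaultProvider="DNNFriendlyUrl">
  <providers>
    <clear />
    <add name="DNNFriendlyUrl"
         type="DotNetNuke.Services.Url.FriendlyUrl.DNNFriendlyUrlProvider, DotNetNuke.HttpModules"
         urlFormat="advanced" />
  </providers>
</friendlyUrl>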

Browser Caching in ASP.NET application

Any suggestions on how to do browser caching within an ASP.NET application? I've found some different methods online but wasn't sure which would be best. Specifically, I would like to cache my CSS and JS files. They do change; however, it is usually once a month at most.
Another technique is to store your static images, CSS and JS on another server (such as a CDN) which has the Expires header set properly. The advantage of this is two-fold:
The expires header will encourage browsers and proxies to cache these static files
The CDN will offload from your server serving up static files.
Using another domain name for your static content also lets browsers download it faster, because serving resources from four or five different hostnames increases parallelization of downloads.
If the CDN is configured properly and uses a cookieless domain, then you don't have unnecessary cookies going back and forth.
It's worth bearing in mind that even without Cache-Control or Expires headers, most browsers will cache content such as JS and CSS. What should happen, though, is that the browser requests the resource every time it's needed but typically gets a "304 Not Modified" response, and the browser then uses the cached item. This can still be quite costly, since it's a round trip to the server, but the resource itself isn't sent, so the bytes transferred are limited.
IE, left with no specific instructions regarding caching, will by default use its own heuristics to decide whether it should even bother to re-request an item it has cached, despite not being explicitly told that it can cache the resource. Its heuristics are based on the Last-Modified date of the resource: the older it is, the less likely it is to have changed by now, goes its typical reasoning. Very woolly.
Frankly, if you want to make a site performant you need to have control over such cache settings. If you don't have access to those settings, then I wouldn't worry about performance. Just inform the sponsor that it may not perform well because they haven't provided a platform that lets you deliver that.
Your best bet is to set an Expires header in IIS on the folders whose content you want cached. This will tell most modern browsers and proxies to cache the static content. In IIS 6:
Right-click on the folder (for example, CSS or JS) you want the browser to cache.
Click Properties.
Go to the HTTP Headers tab.
Check "Enable content expiration".
Set some long period for expiration, like "Expires after 90 days".
Yahoo Developer's Blog talks about this technique.
Unless you configure IIS to give ASP.NET control of JS/CSS/image requests, it won't see them by default. Hence your best plan (for long-term maintainability) is either to deliberately tweak the response headers at your firewall/traffic manager/server, or (better, and what most of the world does at this point) to version your files in the path, i.e.:
Instead of writing this in your mark-up:
http://www.foo.com/cachingmakesmesad.css
Use this:
http://www.foo.com/cachingmakesmesad.css?v1
...and change the version number when you need to effectively clear the cache. If that's every time, then you could even append a GUID or a datestamp instead, but I can't think of any occasion where I would really want to do that.
I thought your question was anti-cache but re-reading it I see I wasted a good answer there :P
Long story short: browsers are normally very aggressive about caching "simple" resources, so you shouldn't have to worry about this. But if you did want to do something about it, you would need access to a firewall/traffic manager/IIS for the reasons above (ASP.NET won't be given a chance by default).
However... there is no way you can absolutely force caching, nor should you. What is and isn't cached is rightfully the decision of the end user; all you can do is strongly request.
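If you want that version number maintained automatically, a minimal sketch (the StaticUrl/Versioned names are hypothetical; it stamps the URL with the file's last write time, so the query string changes whenever the deployed file changes):

using System;
using System.IO;
using System.Web;

public static class StaticUrl
{
    // Hypothetical helper: append the file's last write time as a cache-busting token.
    public static string Versioned(string virtualPath)
    {
        string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
        long stamp = File.GetLastWriteTimeUtc(physicalPath).Ticks;
        return VirtualPathUtility.ToAbsolute(virtualPath) + "?v=" + stamp.ToString("x");
    }
}

Used in mark-up as <link rel="stylesheet" href="<%= StaticUrl.Versioned("~/css/site.css") %>" />.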
In .NET you can set up your JavaScript, CSS and images as embedded resources.
.NET will then handle the file expiration for you.
The downside to this approach is that you have to do a new build for each set of changes (this might be an upside, depending on your deployment and workflow).
You could also use ETags, but from what I understand they don't work well in some cases if you have a mix of IIS and Apache web servers hosting your images (or if you plan to switch in the future).
You can also just make sure the file date is newer and let the server handle it for you, but you've got to make sure the server is configured correctly.
You can cache static content by adding the following to web.config:
<system.webServer>
  <staticContent>
    <clientCache httpExpires="Tue, 12 Apr 2016 00:00:00 GMT" cacheControlMode="UseExpires" />
  </staticContent>
</system.webServer>
See the clientCache documentation for more details.
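Note that a fixed httpExpires date like the one above eventually passes. A sliding alternative in the same element (standard IIS 7+ configuration; the 30-day lifetime here is just an example) is UseMaxAge:

<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
  </staticContent>
</system.webServer>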

Master Page compilation and Absolute URLs

UPDATED 03/04/09
In response to some comments, a sample from the master page looks like this. This is not an ASP.NET control; it is hard-coded HTML:
<a href="http://MYDOMAIN/aboutus.aspx"><span class="topleft"><span class="bottomleft">About us</span></span></a>
This renders on the production server as
<a href="http://NEWDOMAIN/aboutus.aspx"><span class="topleft"><span class="bottomleft">About us</span></span></a>
MYDOMAIN is the true domain name of our main site, NEWDOMAIN is a perfectly valid DNS entry which points to the same site.
UPDATED 02/04/09
All the URLs are absolute in the sense that they begin with http://.
I don't think this can be a browser issue, as the actual rendered source code (viewed via View Source) has changed. I checked in both IE7/8 and Firefox 3 and witnessed the same behaviour.
Original Question
I have an ASP.NET 2.0 application which has several master pages. It is essentially mocked up to look exactly like our main website, but because it runs on a different server, all of the URLs for menu items etc. are given as absolute URLs to our main site.
This works fine on my development machine, but on the production server all of the absolute URLs are changing at runtime, though they still end up at the same pages when clicked.
Is this a DNS issue? Does ASP.NET do some DNS resolution of URLs when the master page and content are merged? If so, why does it not have the same effect on my local machine? They are on the same domain.
No, it's not a DNS issue, and ASP.net doesn't do any DNS resolution. That's all the responsibility of the browser you're viewing the page in.
However, there are several circumstances that can lead to inconsistent URLs being served in the page's mark-up, which may be interpreted differently by the client's browser.
Browsers will always interpret a URL beginning with "http://" the same way: it's an absolute URL, so the destination will always resolve to the same thing. Make sure all the URLs to your main site begin with "http://".
URLs beginning with "www." (no http://) will be treated as relative URLs, i.e. if the page containing the URL is at http://www.google.com, you're essentially asking for http://www.google.com/YourUrl. You'll find this almost certainly isn't the behaviour you're looking for.
URLs beginning with a leading slash (/) will be treated as absolute on the current domain. For example, "/Tools" within Google will result in a request to "http://www.google.com/Tools". If there's no leading slash, the browser will treat the URL as relative to the page currently being viewed (i.e. a URL of "Tools" when you're viewing a page in the "en" folder would result in a request to "en/Tools").
I think this is where your problems are arising. For consistent behaviour, I find it's a good rule of thumb to ensure all URLs begin with a leading slash. If you want to ensure all your hyperlinks generated by your ASP code are correct, use the tilde (which ASP will replace with the path to the application root folder):
<asp:Hyperlink id="Test1" runat="server" NavigateUrl="~/Tools/Default.aspx">Tools</asp:Hyperlink>
This way, it doesn't matter where your page is in your site structure, whether you're using Cassini, a web site in IIS or a virtual directory in IIS - the URL will always resolve to the correct address.
If you want to output a URL that isn't a property of a server control, use the ResolveUrl method:
<a href="<%= ResolveUrl("~/Tools/Default.aspx") %>">Tools</a>
Hope this helps.
By "absolute" I assume you mean they start with a "/" rather than a folder name?
If you are using the ASP.NET Hyperlink control, then these will tend to be modified to start at the application root.
Edit for comment
Can you give us an example of how the urls are being changed? i.e. From http://www.example.com/somepage.aspx to http://www.example.com/trackingpage.aspx?somequerysting - or is the domain changing? or something else?
You say "they still end up at the same pages", so clearly things are working. Have you got any HttpModules registered in the web.config on your production servers that could be modifying the URLs so that they all go through some logging system? That is, taking the response from the server, processing the resultant HTML and modifying all links. Does it happen with plain anchor tags as well as Hyperlink controls?
Are you using a custom base page that is performing additional steps in PreRender or Render that's different on production to your developer machine that is changing the URLs?

Where do content-based websites store their content?

Sites like cnn.com or foxnews.com.
Where do they store all the articles? In HTML files? In a database?
It seems more logical to store everything in a DB, but then how do you generate a static link to something that lives inside the DB?
It's not that they have a dynamic page load like LoadArticle.aspx?ArticleID=123; every article has its own address.
Please explain how this is done.
They use a special content management library called VoodooLib.dll.
Seriously, when you write something to a database, you normally generate some kind of unique identifier: 123, for example. It gets permanently associated with that record (the article content). After that, it is used to generate the same ID as part of a URL at any later time.
As for the static link, it is a simple matter of URL rewriting.
You generate static-looking links to display on a page because they work much better for SEO. When a request for that static URL hits the server, it gets substituted for something "server friendly" and then gets processed.
They probably use some form of Content Management System (CMS). There are many different ones out there; most store the actual content in a database or as XML (some store XML in a database). They will then either publish that content as static HTML pages or, more commonly now, serve it as dynamic pages that are cached. Many use what are known as "friendly URLs": virtual addresses that are mapped to the actual physical file path using URL-rewriting techniques.
Note you can't tell whether a page is dynamic or static simply from the extension. It is quite possible to have dynamic pages that end in the .html extension.
Just because the URL looks "static" doesn't mean it is; they could be using something like mod_rewrite or an IIS ISAPI filter to make the URLs more search-engine friendly.
For the high-volume news sites that you mention, however, they may very well generate the pages statically in order to prevent overloading the database with repeated requests for the same article.
Look at the URL of this page: it doesn't have xxx.aspx?some-query-string.
You are referring to friendly URLs.
To do something like that, one common way is to use URL rewriting and/or a custom HttpModule.
Here's a good reference: http://weblogs.asp.net/scottgu/archive/2007/02/26/tip-trick-url-rewriting-with-asp-net.aspx
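In the spirit of that article, a minimal sketch of rewriting in a custom HttpModule (the /articles/{id}/{slug} pattern is an illustrative assumption; LoadArticle.aspx?ArticleID= is borrowed from the question; HttpContext.RewritePath serves the dynamic page while the browser keeps showing the friendly URL):

using System;
using System.Text.RegularExpressions;
using System.Web;

public class FriendlyUrlModule : IHttpModule
{
    // Matches e.g. /articles/123/my-article-title
    private static readonly Regex ArticleUrl =
        new Regex(@"^/articles/(\d+)/[^/]+$", RegexOptions.Compiled | RegexOptions.IgnoreCase);

    public void Init(HttpApplication application)
    {
        application.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpContext context = ((HttpApplication)sender).Context;
            Match m = ArticleUrl.Match(context.Request.Path);
            if (m.Success)
            {
                // Route internally to the dynamic page; the visitor still sees the friendly URL.
                context.RewritePath("~/LoadArticle.aspx?ArticleID=" + m.Groups[1].Value);
            }
        };
    }

    public void Dispose() { }
}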
Just because a page has a normal URL does not mean that it isn't serving dynamic content. With the Apache mod_rewrite module, it is possible to manipulate URLs. So, for example, a page like http://www.domain.tld/permalink/12345/message-title-slug can be converted internally to http://www.domain.tld/permalink/index.php?id=12345&slug=message-title-slug.
I do not know exactly what cnn.com and foxnews.com use, but I would bet that they use a Content Management System (CMS) which serves all pages dynamically, with the content stored either in a database or on the filesystem, and with authoring/publishing all being performed through the particular CMS.
Just checking cnn.com, the article links contain:
Year
Location (US or WORLD/specificlocationid)
Month
Day
Article name.
All of this information together can be used to uniquely identify any article (probably even less of it is actually needed). The dynamic content-loading page address can easily be hidden by some method of URL rewriting, and the information in the requested URL is then used to determine which article in the DB is to be served up.
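As a sketch of that idea (the URL shape mimics the pattern just listed and is purely illustrative):

using System;

public class ArticleKey
{
    public int Year, Month, Day;
    public string Location, Slug;

    // Parse a path like "/2009/US/04/03/truck-sales-fall" into its parts.
    public static ArticleKey Parse(string path)
    {
        string[] parts = path.Trim('/').Split('/');
        if (parts.Length != 5)
            throw new ArgumentException("Unexpected article URL shape", "path");

        return new ArticleKey
        {
            Year = int.Parse(parts[0]),
            Location = parts[1],
            Month = int.Parse(parts[2]),
            Day = int.Parse(parts[3]),
            Slug = parts[4] // together these fields identify one article in the DB
        };
    }
}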
I don't know why all the other answerers seem to assume that some form of URL rewriting is necessary to create friendly URLs. It's not true at all.
It's perfectly possible to write web-serving code that splits a URL into parameters, e.g. year, month, title, and passes them directly to the code that gets the content from the database, without any need to rewrite the URL. Most modern web frameworks, such as Django and Rails, include this functionality out of the box.
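ASP.NET grew the same facility with System.Web.Routing; as a sketch, an ASP.NET MVC route that passes those parameters straight to a controller (the route, controller, and action names are illustrative):

using System.Web.Mvc;
using System.Web.Routing;

public static class ArticleRoutes
{
    public static void Register(RouteCollection routes)
    {
        // /2009/04/some-article-title -> ArticlesController.Show(year, month, title)
        routes.MapRoute(
            "Article",
            "{year}/{month}/{title}",
            new { controller = "Articles", action = "Show" },
            new { year = @"\d{4}", month = @"\d{2}" });
    }
}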
This is done through mod-rewrite techniques.
Here's an article about the mod_rewrite engine: http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html
And here's their "guide": http://httpd.apache.org/docs/2.0/misc/rewriteguide.html
I hope that helps. It should make for a good starting point. Good luck.
