Is it possible to always serve stale/cached data from CDN edge servers like Akamai?
The reason is that if there is some problem with the origin server, it might take 2-3 days to solve. My origin server responds properly, but I don't want it to get overloaded, and I want the CDN to keep serving the cached data instead for some time.
Best Regards,
Saurav
Yes, Akamai can serve stale content if the request to the origin times out or produces an error code. This is configured with the "Caching" and "Cache HTTP Error Responses" behaviors.
Note, however, that your content will need to be fairly popular to remain in cache. If it's not popular, then it may be evicted before you're able to repair your origin.
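On the plain-HTTP side, the standard way for an origin to opt into this behavior is the stale-if-error Cache-Control extension from RFC 5861; CDN support for it varies, so treat this as a sketch rather than a guarantee:

    Cache-Control: max-age=3600, stale-if-error=259200

Here max-age=3600 keeps the object fresh for an hour, and stale-if-error=259200 allows a stale copy to be served for up to three days while the origin is failing.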
A better alternative is to implement a Site Failover ruleset, which allows you to serve your page with alternate content from a separate origin, or static assets from Akamai's NetStorage. A typical rule uses a Match on a failed origin together with the standard Fail Over behavior.
The "Action" field provides the following options, which can each be configured to your needs:
Serve stale content
Redirect to a different location
Use alternate hostname in this property
Use alternate hostname on provider network
Serve alternate content from NetStorage
I need to send different resources (especially images) for the same URLs depending on complex logic based on different factors (cookie, IP, time, random). I want to take advantage of CDNs (cache, availability, proximity). So I want the CDN to make a call to my server in order to decide which resource to serve for any given request. It is very important not to use redirects, so the user never sees a 30X status code.
For clarification:
The user makes a request to http://resources.mydomain.com/img/a.jpg, whose domain is behind the CDN
The CDN makes a call to my server, sending the requested URL, the cookies, and the user's IP
My server returns the name of the real resource to serve (http://hidden.mydomain.com/img/a-version3.jpg)
The CDN requests that image if it is not in cache
The CDN responds to the user's request with the a-version3.jpg data, but without any redirect
Is it possible using any current commercial solution?
Yes, I believe CDNetworks has supported this for a long time.
It is called "Origin Logic Control" now. You can check the description at http://www.cdnetworks.com/wp-content/uploads/2013/08/CDNetworks-ContentAccel-DS-EN2.pdf:
Allows a customer's domain to require checking with the origin on every request.
You can return a special HTTP header (or a special HTTP body, I am not sure now) to tell CDNetworks to return the resource directly (using the cached version if available) rather than a 30x status code.
You can enable Redirect Chasing to get what you are looking for. Alternatively, look at the Akamai blog post on Edge Redirect for a faster option.
I have an HTML5 app written in static html/js/css (it's actually written in Dart, but compiles down to javascript). I'm serving the application files via CDN, with the REST api hosted on a separate domain. The app uses client-side routing, so as the user goes about using the app, the url might change to be something like http://www.myapp.com/categories. The problem is, if the user refreshes the page, it results in a 404.
Are there any CDNs that would allow me to create a rule such that, if the user requests a page that is a valid client-side route, it just returns the (in my case) client.html page?
More detailed explanation/example
The static files for my web app are stored on S3 and served via Amazon's CloudFront CDN. There is a single HTML file that bootstraps the application, client.html. This is the default file served when visiting the domain root, so if you go to www.mysite.com the browser is actually served www.mysite.com/client.html.
The web app uses client-side routing. Once the app loads and the user starts navigating, the URL is updated. These URLs don't actually exist on the CDN. For example, if the user wanted to browse widgets, she would click a button, client-side routing would display the "widgets" view, and the browser's URL would update to www.mysite.com/widgets/browse. On the CDN, /widgets/browse doesn't actually exist, so if the user hits the refresh button in the browser, they get a 404.
My question is whether or not any CDNs support looking at the request URI and rewriting it. So, I could see a request for /widgets/browse and rewrite it to /client.html. That way, the application would be served instead of returning a 404.
I realize there are other solutions to this problem, namely placing a server in front of the CDN, but it's less ideal.
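For what it's worth, CloudFront's Custom Error Responses can get close to this without a server in front: you map the 404 coming from S3 back to /client.html with a 200 status. A hedged sketch of the relevant DistributionConfig fragment (whether this fits your routing is an assumption, and note that every unknown path will then answer with client.html, so genuinely missing assets no longer surface as 404s):

    "CustomErrorResponses": {
      "Quantity": 1,
      "Items": [{
        "ErrorCode": 404,
        "ResponsePagePath": "/client.html",
        "ResponseCode": "200",
        "ErrorCachingMinTTL": 0
      }]
    }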
I do this using CloudFront, but with my own server running Apache behind it. I realize you're using a server with Amazon, but since you didn't say you're restricted to that, I figured I'd answer with how to accomplish what you're looking for anyway.
It's pretty simple. Any time you request something that isn't already in the CloudFront cache, or exists in the cache but has expired, CloudFront goes back to your web server and asks it to serve up the content. At that point, you have total control over the request. I use mod_rewrite in Apache to capture the request, then determine what content to serve depending on the request. In fact, there isn't a single file (save for one PHP script) on my server, yet CloudFront believes there are thousands. URL rewriting is standard on most web servers; I can only confirm lighttpd and Apache from my own experience, though.
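A minimal sketch of that kind of rule, assuming Apache with mod_rewrite enabled (serve.php is a hypothetical script name, not the author's actual file):

    RewriteEngine On
    # If the requested file doesn't exist on disk...
    RewriteCond %{REQUEST_FILENAME} !-f
    # ...hand the request to a single script that decides what to serve.
    RewriteRule ^.*$ /serve.php [L]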
More Info
All you're doing here is just telling your server to rewrite incoming requests in order to satisfy them. This would not be considered a proxy or anything of the sort.
The flow of content between your app and your server, with CloudFront in between, is like this:
appRequest->cloudFront
If CloudFront has the file, it returns the data to the user without asking your server for the file.
If CloudFront DOESN'T have the file (or it has expired), it goes back to the origin server and asks it for a new copy to cache.
So basically, what is happening in your situation is this:
A) The app asks CloudFront for a URL CloudFront doesn't have.
B) CloudFront then asks your source server for the file.
C) The file doesn't exist there, so the server tells CloudFront to go fly a kite.
D) CloudFront comes back empty-handed and makes your app 404.
E) The app crashes and burns, and users run away and use something else.
So, all you're doing with mod_rewrite is telling your server how to reinterpret certain formatted requests and act accordingly. You could point all .jpg requests to singleImage.jpg, then have your app ask for:
www.mydomain.com/image3.jpg
www.mydomain.com/naughtystuff.jpg
Neither of those images even has to exist on your server. Apache would just honor the request by sending back singleImage.jpg. But as far as CloudFront or your app is concerned, those are two different files residing at two different unique places on the server.
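As a hedged sketch of that rule (singleImage.jpg being the hypothetical file from above):

    RewriteEngine On
    # Answer every .jpg request with the same physical file.
    RewriteRule \.jpg$ /singleImage.jpg [L]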
Hope this clears it up.
http://httpd.apache.org/docs/current/mod/mod_rewrite.html
I think you are using the URL structure in the wrong way. The path defined by forward slashes is supposed to lead to a specific resource, in your example client.html. For routing beyond that point (within that resource) you should make use of the # fragment, as is done in many JavaScript frameworks, e.g. www.mysite.com/client.html#/widgets/browse. The fragment tells your router what the state of the resource (your HTML page or app) is, and since it is never sent to the server, refreshing cannot produce a 404. If other resources are referenced, e.g. images, you should give them distinct paths that go through the CDN.
We have nearly 13 domains within our company, and we would like to serve images from one application in order to leverage caching.
For example, we will have c1.example.com and we will put all of our product images under this application. But here I have some doubts:
1- How can I force client browsers to cache the image and not request it again?
2- When I reference those images in my application, I will use the following HTML markup:
<img src="http://c1.example.com/core/img1.png" />
But this causes a problem when I run the website under HTTPS: the browser gives a warning about the page. It should have been https://c1.example.com/core/img1.png when my apps run under HTTPS. What should I do here? Should I always use HTTPS? Or is there a way to switch automatically?
I will run my apps under IIS 7.
Yes, you need to serve all resources over HTTPS when the HTML page is served over HTTPS. That's the whole point of using HTTPS.
If the hrefs are hardcoded in the HTML, one solution could be to use a Response Filter that parses all content sent to the client and replaces http with https where necessary. A simple regular expression should do the trick. There are plenty of articles out there about how these filters work.
For caching, you need to send the correct cache headers and an ETag. There are several questions and answers on this on SO, like this one: IIS7 Cache-Control
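A minimal sketch of the IIS 7 side of this, assuming the images are served as static files (the one-year lifetime is an arbitrary choice):

    <configuration>
      <system.webServer>
        <staticContent>
          <!-- Ask clients to cache static files for up to a year -->
          <clientCache cacheControlMode="UseMaxAge"
                       cacheControlMaxAge="365.00:00:00" />
        </staticContent>
      </system.webServer>
    </configuration>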
You need to use HTTP headers to tell the browser how to cache. It should work by default (assuming you have no query string in your URLs) but if not, here's a knowledge base article about the cache-control header:
http://support.microsoft.com/kb/247404
I really don't know much about IIS, so I'm not sure if there are any other potential pitfalls. Note that browsers may still send HEAD requests sometimes.
I'd recommend you set up the image server so that HTTP and HTTPS are interchangeable, then just serve HTTPS URLs in response to HTTPS requests.
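If the image host answers on both schemes, another common trick is a protocol-relative URL, which inherits the scheme of the containing page, so the same markup works over HTTP and HTTPS:

    <img src="//c1.example.com/core/img1.png" />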
I am trying to determine if there is a way to check the availability of a potentially large list of URLs (> 1,000,000) without having to send a GET request to every single one.
Is it safe to assume that if http://www.example.com is inaccessible (as in unable to connect to the server, or the DNS request for the domain fails), or I get a 4XX or 5XX response, then anything from that domain will also be inaccessible (e.g. http://www.example.com/some/path/to/a/resource/named/whatever.jpg)? Would a 302 response (say, for whatever.jpg) be enough to invalidate the first assumption? I imagine subdomains should be considered distinct, as http://subdomain.example.com and http://www.example.com may not resolve to the same IP?
I seem to be able to think of a counter example for each shortcut I come up with. Should I just bite the bullet and send out GET requests to every URL?
Unfortunately, no you cannot infer anything from 4xx or 5xx or any other codes.
Those codes are for individual pages, not for the server. It's quite possible that one page is down and another is up, or one has a 500 server-side error and another doesn't.
What you can do is use HEAD instead of GET. That retrieves the HTTP headers for the page but not the page content. This saves time server-side (because it doesn't have to render the page body) and for yourself (because you don't have to buffer and then discard content).
Also I suggest you use keep-alive to accelerate responses from the same server. Many HTTP client libraries will do this for you.
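A minimal sketch in Python, assuming the third-party requests library (a Session reuses connections, which gives you the keep-alive behavior mentioned above):

    import requests

    session = requests.Session()  # connection pooling = keep-alive

    def is_available(url):
        """HEAD the URL; treat network errors and 4xx/5xx as unavailable."""
        try:
            r = session.head(url, allow_redirects=True, timeout=5)
            return r.status_code < 400
        except requests.RequestException:
            return False  # DNS failure, refused connection, timeout, ...

    print(is_available("http://www.example.com/whatever.jpg"))

Be aware that some servers answer HEAD incorrectly or not at all, so you may want to fall back to a GET when a HEAD fails.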
A failed DNS lookup for a host (e.g. www.example.com) should be enough to invalidate all URLs for that host. Subdomains or other hosts would have to be checked separately though.
A 4xx code might tell you that a particular page isn't available, but you couldn't make any assumptions about other pages from that.
A 5xx code really won't tell you anything. For example, it could be that the page is there, but the server is just too busy at the moment. If you try it again later it might work fine.
The only assumption you should make about the availability of a URL is that "getting a URL can and will fail".
It's not safe to assume that a subdomain request will fail when a parent one does, namely because between your two requests your network connection can go up, down, or generally misbehave. It's also possible for the domains to be changed between requests.
Ignoring all internet connection issues, you are still dealing with a live web site that can and will change constantly. What is true now might not be true in 5 minutes, when they decide to alter their page structure or change the way they display a particular page. Your best bet is to assume any GET can fail.
This may seem like an extreme viewpoint, but these events will happen. How you handle them will determine the robustness of your program.
First, don't assume anything based on a single page failing. I have seen many cases where IIS will continue to serve static content but not be able to serve any dynamic content.
You have to treat each host name as unique; you cannot assume subdomain.example.com and example.com point to the same IP. And even if they do, there is no guarantee that they are the same site: IIS, again, has host headers that allow you to run multiple sites on a single IP address.
If the connection to the server actually fails, then there's no reason to check URLs on that server. Otherwise, you can't assume anything.
In addition to what everyone else is saying, use HEAD requests instead of GET requests. They function the same, but the response doesn't contain the message body, so you save everyone some bandwidth.
I have URLs of the form http://domain/image/⟨uuid⟩/42x42/some_name.png. The web server (nginx) is configured to look for a file /some/path/image/⟨uuid⟩/thumbnail_42x42.png and, if it does not exist, to pass the request to the backend (Django via mod_wsgi), which then generates the thumbnail. The backend then emits a 302 redirect to exactly the same URL that the client requested, the idea being that on this second request the server will find the thumbnail file and serve it directly.
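A minimal sketch of the nginx side of such a setup (the paths, the regex, and the backend address are all assumptions, not the asker's actual config):

    location ~ ^/image/(?<uuid>[^/]+)/(?<size>\d+x\d+)/ {
        root /some/path;
        # Serve the pre-generated thumbnail if it is already on disk...
        try_files /image/$uuid/thumbnail_$size.png @backend;
    }

    location @backend {
        # ...otherwise the Django backend generates it and replies with
        # a 302 back to the original URL, which then hits the file above.
        proxy_pass http://127.0.0.1:8000;
    }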
The question is, will this work with all browsers? So far testing has shown no problems, but can I be sure all user agents will interpret this as intended?
Update: Let me clarify the intent. Currently this works as follows:
The client requests a thumbnail of an image.
The server sees the file does not exist, so it forwards the request to the backend.
The backend creates the thumbnail and returns 302.
The backend releases all its resources, letting the server serve the newly generated file to the current and any subsequent clients.
Having the backend serve the newly created image is worse for two reasons:
Two ways of serving the same data would have to be maintained;
The server is much better at serving static content. What if the client has an extremely slow link? The backend is neither particularly fast nor memory-efficient, and keeping it in memory while spoon-feeding the client would be wasteful.
So I keep the backend working for the minimum amount of time.
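For concreteness, a hedged sketch of such a backend view (make_thumbnail and the parameter names are hypothetical, not the asker's actual code):

    from django.http import HttpResponseRedirect

    def thumbnail(request, uuid, size, name):
        # Hypothetical helper that writes
        # /some/path/image/<uuid>/thumbnail_<size>.png to disk.
        make_thumbnail(uuid, size)
        # Redirect to the very URL that was requested; the front-end
        # server will now find the file and serve it itself.
        return HttpResponseRedirect(request.get_full_path())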
Update²: I’d really appreciate some RFC references or the opinion of someone with experience across many browsers. All those affirmative answers are pleasant, but they look somewhat groundless.
If it doesn't work, the client is broken. Most clients will follow redirect loops up to a maximum count. So yes, it should be fine, unless your backend fails to generate the thumbnail for some reason.
You could instead change the URLs to be like http://domain/djangoapp/generate_thumbnail and have that return the thumbnail with the proper Content-Type, and so on.
Yes, it's fine to redirect to the same URI as you were at previously.