I am implementing Rackspace Cloud Files for a site. If a user uploads a profile image, we want to store it on Cloud Files in a CDN-enabled container. This works, only it takes a couple of seconds before the file is available on the CDN.
So when you upload your profile image -> we store it in the cloud -> you reload the page, it often isn't available yet, resulting in a broken image.
Has anybody experienced this issue and if so how did you work around it?
Yes, this is a function of how CDNs work.
The CDN (in the case of Rackspace, Akamai) must become aware of your content, so it takes some time for your content to show up on the CDN (usually just a matter of minutes).
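For reference, a common client-side workaround is to point the img element at the CDN URL and fall back to a copy served from your own origin until the CDN copy is available. This is only a sketch; the element and both URLs are illustrative, not anything Rackspace-specific:

    // Fall back to a locally served copy until the CDN copy becomes available.
    function showProfileImage(imgElement, cdnUrl, localUrl) {
      imgElement.onerror = () => {
        imgElement.onerror = null;   // avoid an error loop if both URLs fail
        imgElement.src = localUrl;   // serve from your origin while the CDN catches up
      };
      imgElement.src = cdnUrl;
    }

    // Usage (illustrative paths):
    // showProfileImage(document.getElementById('avatar'), cdnUrl, '/uploads/avatar.jpg');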
There is a PWA based on Vue.js, which is connected to Firebase.
The app has several pages with images. Each page has several images; to see all of them you need to scroll down the page. All images are stored in Firebase Storage.
When you scroll down the page, the images are not displayed at first because the app is still downloading them. After a few seconds the images appear.
The task is to avoid downloading the images while scrolling and to display them immediately as the user scrolls.
At this moment:
all images are uploaded to Firebase Storage with the following metadata: metadata = { cacheControl: 'public, max-age=300000000, s-maxage=300000000' }
when an image is downloaded, it appears in the app cache. Once you have scrolled through all pages, all images are in the app cache, and in that case all images are displayed immediately while scrolling.
When you refresh the page, all images are deleted from the cache.
To solve this, my suggestion is to download all images into the cache when the PWA is opened, and ideally keep them cached for the next 3 days.
Please help me find a way to download all images into the cache for the next 3 days at the point the PWA is opened.
Or, if my suggestion is not the best approach, please advise a better way to solve the task.
Thank you!
It would be nice if you could share some code with us, especially how your Service Worker (SW) is working.
There are multiple points you should consider:
Every SW has a lifecycle, and some are set up to clear the cache on the lifecycle event "activate". Check whether yours is doing that.
One thing you should do is catch all GET requests for the images and return them from the cache if they are there, and from the server if not (this should be done in the SW).
You can store them in the cache while the SW is being installed. Just make sure not to save too much at once, because the installation can easily fail then.
Try to save them to the cache in smaller batches, ahead of the scroll position and asynchronously. You can also save data to the cache from the front-end code. A sketch covering these points follows below.
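A minimal sketch of what that could look like, assuming a cache named 'image-cache-v1' and a hypothetical /image-manifest.json that your app exposes with the list of Firebase Storage URLs to precache (both names are illustrative, not part of any API):

    // sw.js - cache-first serving plus batched precaching during install.
    const CACHE_NAME = 'image-cache-v1';

    self.addEventListener('install', (event) => {
      event.waitUntil((async () => {
        const cache = await caches.open(CACHE_NAME);
        // Assumption: the app publishes the list of image URLs it wants precached.
        const urls = await fetch('/image-manifest.json').then((r) => r.json());
        // Precache in small batches so one slow or failed download does not abort the install.
        const BATCH = 10;
        for (let i = 0; i < urls.length; i += BATCH) {
          await Promise.allSettled(urls.slice(i, i + BATCH).map((url) => cache.add(url)));
        }
      })());
    });

    self.addEventListener('activate', (event) => {
      // Keep the image cache across activations; only drop caches you no longer use.
      event.waitUntil(caches.keys().then((keys) =>
        Promise.all(keys.filter((k) => k !== CACHE_NAME).map((k) => caches.delete(k)))
      ));
    });

    self.addEventListener('fetch', (event) => {
      // Cache-first for image GET requests: serve from the cache if present,
      // otherwise hit the network and store the response for next time.
      if (event.request.method !== 'GET' || event.request.destination !== 'image') return;
      event.respondWith(caches.open(CACHE_NAME).then(async (cache) => {
        const cached = await cache.match(event.request);
        if (cached) return cached;
        const response = await fetch(event.request);
        if (response.ok) cache.put(event.request, response.clone());
        return response;
      }));
    });

Entries in the Cache Storage API persist across page refreshes, so the images no longer disappear on reload; for the 3-day requirement you could store a timestamp alongside each entry (for example in IndexedDB) and re-fetch anything older than that.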
I'm making a gallery site for a client, for internal use. It is for browsing hi-res images; there's literally over 1 GB of images, each 3-4 MB, so loading the images over the web isn't an option due to load time.
My idea was to store the images on each machine locally, but maintain a central database online so all machines are in sync, and load the images using "file:///C:/images/file.jpg". But apparently browsers don't allow a website to load files from the local computer (for obvious security reasons).
How can I get around this?
Do I have to create a browser plugin myself to get access to the file system?
Alternatively, is there a better way to achieve my goal of (a) a centralized database of images and data, but (b) images stored locally?
Thanks for any advice you can offer.
You can store your images in your centralized database, but it would also be worth storing smaller, resized versions, so the page loads thumbnails first; if the user is interested, they can click or hover over one to load the larger version. 3-4 MB isn't that insane for most computers to load, as long as the page isn't trying to load them all at once.
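A rough sketch of that thumbnail-then-full-size idea (the markup, the thumb class, and the data-full attribute are assumptions, not an existing API):

    // Markup assumption: <img class="thumb" src="thumbs/file.jpg" data-full="full/file.jpg">
    // Swap the thumbnail for the full-resolution image the first time it is hovered or clicked.
    document.querySelectorAll('img.thumb').forEach((img) => {
      const loadFull = () => {
        img.removeEventListener('mouseenter', loadFull);
        img.removeEventListener('click', loadFull);
        const full = new Image();
        full.src = img.dataset.full;                   // start downloading the hi-res version
        full.onload = () => { img.src = full.src; };   // swap once it has arrived
      };
      img.addEventListener('mouseenter', loadFull);
      img.addEventListener('click', loadFull);
    });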
To get access to the file system, you can use the web-host's file access links, or you can use an FTP client, given that you know the FTP username/password.
I'm just realizing that I may have erred in setting up Amazon CloudFront (origin pull), not S3 buckets.
When navigating to the homepage, http://www.occupyhln.org (WordPress domain), the browser tries to connect to the A record I set up, which is http://cdn.occupyhln.org ... and it eventually loads as www.occupy in the browser address bar.
However, when I type in http://cdn.occupyhln.org, that loads in the address bar as well. I was under the impression that this isn't recommended either.
Am I correct in assuming this is adding an unnecessary redirect and slowing down page load times? I thought I only wanted my static files to be hosted by Amazon (.js, .css, .jpg, .png, etc.).
What can I do to remedy this error -- assuming it is one -- and prevent it from happening in the future? Any guidance would be appreciated!
I just saw the cdn.occupyhln.org when I went to that page... so your mistake is that you should have created a CNAME (aka alias) rather than an A record that resolves to the domain... in this case the subdomain cdn.occupyhln.org. You should use the W3 Total Cache plugin. It takes a lot of the guesswork out of optimizing your site. In addition, the other optimizations that W3 Total Cache can help you with are just as important (if not more important) than the CDN.
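For illustration, the record could look something like this in a zone file (the CloudFront distribution hostname is a placeholder, not taken from the question):

    ; Point the cdn subdomain at the CloudFront distribution with a CNAME,
    ; instead of an A record that resolves back to the main site.
    cdn.occupyhln.org.   300   IN   CNAME   d1234abcd.cloudfront.net.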
If you're not using AWS Route 53 for your name servers, I would recommend doing that.
I wanted to know: is there some special requirement for a website to make use of a CDN?
I mean, is there some special scheme (or at least considerations) on which your website must be built right from the start to make use of a CDN (content delivery network)?
Is there anything that can stop a website from making use of a CDN, for example the way it references content files, static file paths, or anything else conceivable?
Thanks
It depends.
You have two kinds of CDN services:
Services like AWS CloudFront that require you to upload the files to some special place that they read from (e.g. AWS S3) - in this case you need a step in your build process to correctly upload the files and handle the addresses somehow inside your application (see the sketch below)
Services like Akamai that just need you to change and tweak your DNS records so they will serve the requests to your users instead of you - in this case you would have two domains (image.you.com and image2.you.com) and have image.you.com pointing to Akamai and image2.you.com pointing to the original source of the file. Whenever a user requests an image from Akamai, they come to you through the "back door", fetch it, and start serving that file from then on.
If you use the second approach it's really simple to have a CDN supporting your application.
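For the first kind of service, "handling the addresses inside your application" can be as small as a helper that prefixes asset paths with the CDN host. A sketch, where the host name is a placeholder that would normally come from your build or config:

    // Turn a relative asset path into a CDN URL; leave absolute URLs alone.
    const CDN_HOST = 'https://d1234abcd.cloudfront.net'; // placeholder

    function assetUrl(path) {
      if (/^https?:\/\//.test(path)) return path;
      return `${CDN_HOST}/${path.replace(/^\/+/, '')}`;
    }

    console.log(assetUrl('images/logo.png')); // https://d1234abcd.cloudfront.net/images/logo.png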
There are a whole bunch of concerns when dealing with CDN solutions.
The first one is that a CDN can't serve a dynamic page - i.e. a page that is unique to every user. Typically, that includes PHP, ASPX, JSP, RubyOnRails etc. - so if you're hoping to support lots of users for a dynamic site, you have to come up with another solution. Some CDN providers support "Edge Side Includes" - this allows you to glue dynamic pages together with cached content on the CDN, but this creates quite a complex application.
Of course, even on a dynamic application, a CDN can still serve static files - images, stylesheets, javascript files, videos etc.
@Tucaz explains the two major options here (actually, Akamai also provides a "filestore" CDN option). If you select the second option - effectively, the CDN becomes a caching reverse proxy in front of your website - it makes sense to tweak the cache headers on your HTTP server and tell the CDN to honour those. Make sure you set your .ASPX pages not to cache!
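For illustration, the cache-header tweak might look roughly like this; Node/Express is used here only as an assumption to keep the example short, since the headers themselves are what the CDN honours (an ASP.NET/IIS site would set the same values through its own configuration):

    const express = require('express');
    const app = express();

    // Static assets: let the CDN and browsers cache them for a year.
    app.use('/static', express.static('public', {
      setHeaders: (res) => res.setHeader('Cache-Control', 'public, max-age=31536000'),
    }));

    // Dynamic, per-user pages: tell the CDN (and browsers) not to cache them.
    app.get('/account', (req, res) => {
      res.setHeader('Cache-Control', 'private, no-store');
      res.send('per-user content');
    });

    app.listen(3000);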
I'm working on my first Windows Azure application and I'm wondering how people go about managing CSS & JS files within their apps?
At the moment my CSS and JS are just part of my cloud app, so every time I make a small CSS change the app needs to be redeployed, which isn't ideal. Is it best practice to remove those components from the cloud app and deploy them elsewhere? If that is the case, where is the best place to store them? Inside a cloud storage account using blobs, or something else?
Bear in mind that if you put your assets in storage, each time there is a page request that includes a link to storage, it counts as a storage transaction. Currently, they are priced at $0.01 per 10,000, so it would take a while to be costly. But if you have 2 CSS files, 2 JS files and 4 images on a given page, that's 8 transactions per page request.
If you get 1,000 page requests per day * 30 days * 8 transactions per page, that's 240,000 transactions per month; 240,000 / 10,000 * $0.01 = $0.24. Not a big deal if your page requests stay low. But, if your site is even remotely higher traffic, it can start to add up quickly.
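The same back-of-the-envelope calculation, spelled out (the traffic numbers are just the illustrative ones above):

    // Rough monthly cost of storage transactions for page assets.
    const transactionsPerPage  = 8;     // 2 CSS + 2 JS + 4 images
    const pageRequestsPerDay   = 1000;
    const transactionsPerMonth = transactionsPerPage * pageRequestsPerDay * 30; // 240,000
    const costPerMonth = (transactionsPerMonth / 10000) * 0.01;                 // $0.24
    console.log(transactionsPerMonth, costPerMonth.toFixed(2));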
Yeah, throw your assets into a public container in storage and build absolute URLs to the storage account container from the web app (use a helper method). This way you can just sparsely upload assets as they change.
Next step would be to expose the container over the CDN to get the distributed edge caching too.
We store our JS and CSS in blobs with the Azure CDN and it works great.
A completely different 'solution' might be to check out:
http://blogs.msdn.com/b/windowsazure/archive/2011/07/12/now-available-windows-azure-accelerator-for-web-roles.aspx
I personally haven't used them yet but they're supposed to let you alter/update your web role projects without needing to redeploy the entire thing.
I am not sure if this will work as easily as you might expect for CSS files if they are being referenced from a different domain.
CSS files that are hosted on a different domain might be blocked by the browser (in practice this mostly affects resources the stylesheet references, such as web fonts). See Cross-Origin Resource Sharing: http://www.w3.org/TR/cors/ However, I am not sure if this is widely implemented.
An alternative might be to use a handler which forwards requests for the CSS files on your server to the blob.
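A rough sketch of such a forwarding handler, using Node/Express purely as an illustration (the storage account URL is a placeholder; an ASP.NET site would do the same with an HTTP handler):

    const express = require('express');
    const https = require('https');
    const app = express();

    const BLOB_BASE = 'https://myaccount.blob.core.windows.net/assets'; // placeholder

    // Serve /css/* same-origin while the files actually live in blob storage.
    app.get('/css/:file', (req, res) => {
      https.get(`${BLOB_BASE}/css/${req.params.file}`, (blobRes) => {
        res.set('Content-Type', 'text/css');
        res.set('Cache-Control', blobRes.headers['cache-control'] || 'public, max-age=3600');
        blobRes.pipe(res); // stream the blob's body straight through
      }).on('error', () => res.sendStatus(502));
    });

    app.listen(3000);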