I'm trying to find out whether it is possible to prevent Cloudflare from caching my images.
The link to the files would be https://my-domain/wp-content/uploads/2019/*
The * at the end indicates that everything in /2019/ should be ignored.
I read this post: https://www.itsupportguides.com/knowledge-base/tech-tips-tricks/how-to-exclude-wp-admin-from-cloudflare/ which indicates that it is possible for http://my-domain/wp-admin/, but I'm not sure if it will also work for my uploads folder.
I want this folder ignored because I'm using another CDN for my images.
You should be able to do that with a Page Rule.
If a Page Rule is set to "Bypass Cache", then the resources that match that Page Rule will not be cached. Note that we will still act as a proxy, and our other performance features will still be active; the content just won't be served from our cache and will instead be fetched directly from the origin.
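For example (using the URL pattern from the question; the exact wording in the Cloudflare dashboard may differ slightly), the Page Rule would look something like this:
If the URL matches: my-domain/wp-content/uploads/2019/*
Then the setting is: Cache Level: Bypass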
Ryan
Related
I'm generating dynamic CSS URLs for cache-busting. I.e. they're in the format styles-thisisthecontenthash123.css.
I also want to use HTTP Link headers to load the files slightly faster. I.e. have the header Link: <styles-thisisthecontenthash123.css>; rel=stylesheet
I'm pretty sure it's possible to do this in Fastly using VCL, but I'm not familiar enough with the ecosystem to figure it out. The CSS URL is in index.html, which is cached. I'm thinking I can open index.html and maybe use regex to parse out the CSS URL. How would I do this?
If I'm understanding your question correctly, you want to include a link header for all requests for index.html. You can do that with Fastly, but if the URL for the CSS file is changing you're not going to be able to pull that info out with VCL (you can't inspect the response body).
You could use edge dictionaries and, whenever your CSS filename changes, update the reference via the API.
The thing is, if you're going to make an API call whenever the file changes, you might as well just keep the filename consistent (styles.css) and send a cache invalidation (purge) whenever you publish a new version. Fastly will clear the cache in ~150ms, so then all you have to do is add the header, which can be done in the Fastly web portal with a condition.
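For completeness, here's a rough sketch of what the edge-dictionary approach could look like as custom VCL. The dictionary name css_assets, the key current, and the URL check are all placeholders; with a real edge dictionary you'd update the entry via the API rather than hard-coding it:

# Static table standing in for an edge dictionary named "css_assets".
table css_assets {
  "current": "/styles-thisisthecontenthash123.css"
}

sub vcl_deliver {
  #FASTLY deliver
  # Only attach the Link header when serving the cached HTML document.
  if (req.url == "/" || req.url == "/index.html") {
    set resp.http.Link = "<" table.lookup(css_assets, "current") ">; rel=stylesheet";
  }
  return(deliver);
}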
I've been using S3 to host static websites, and in the past I've made changes to the HTML and CSS files and seen those changes reflected. For some reason, when I went to do the exact same thing I've done before (change the style of one of my sites), no change took place. In fact, after deleting all previous files, the old build was still rendering. I had no version control on that particular bucket.
Content-Type is set to 'text/css'. My file structure is normal, with index.html in the root. My normal steps for creating or updating new or existing sites have not changed, but S3 has, for some reason.
When I click on the index.html file and go to the public URL, it reflects all my changes.
My only fix is to add the full URL to the style link.
<link rel="stylesheet" href="https://s3.amazonaws.com/{bucket-name}/css/style.css">
Does anyone know why this is happening, or how to fix it other than adding the full URL? If not, I hope my solution helps others with this weird S3 issue. Normally you can just upload your files to a bucket, set the policy, and finally enable hosting after specifying the root HTML file.
It might be due to browser caching: the browser loads locally stored assets (the CSS stylesheet) from a previous visit to the URL rather than fetching the new resources, in an attempt to speed up load times. There are settings you can change in your browser to determine how long it will hold onto cached resources before fetching new ones.
Setting the stylesheet link directly to the S3 bucket URL forces it to fetch the new stylesheet every time the page is loaded, which leads me to believe that caching is the issue here.
Try clearing your cache and see if it solves the problem.
Here is a deeper explanation of the concept with respect to browsers, and a list of commands to perform a cache refresh depending on what browser/OS you have!
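If browser caching does turn out to be the problem, another option besides clearing the cache is to upload the stylesheet with a Cache-Control header so browsers revalidate it instead of reusing a stale copy. A rough example with the AWS CLI (bucket name and paths are placeholders):
aws s3 cp css/style.css s3://your-bucket/css/style.css --content-type "text/css" --cache-control "no-cache"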
I think it's that the CSS folder doesn't allow you to access the files inside. If you make the folder public, it will work.
Select all your files and folders, go to the Actions tab, and then select 'Make public' so the objects are publicly readable.
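If you prefer the command line, the same thing can be done per object with the AWS CLI (bucket name and key are placeholders):
aws s3api put-object-acl --bucket your-bucket --key css/style.css --acl public-read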
The situation is the following: I created a site with Plone, developed and used it, but behind a test URL. Now it has to be published, but the test URL is not appropriate and I don't want to move the site. I think that if I use a redirect, the new URL won't appear in the URL bar except on the site's start page. Am I wrong? (The test URL should not be used, because it will be a "semi-official" site.) What do you suggest I do?
As far as I can see, Plone uses absolute URLs everywhere. I can add relative URLs myself, but if I create a new page, a new event, etc., they get absolute URLs on the other automatically generated inner pages. Is there any way to convert these URLs to relative paths? Is there a setting, ideally just a checkbox, that changes this default?
Plone does not store your URLs in the database. It uses the inbound host header (and any virtual hosting configuration set up with rewrite rules in Apache or Nginx) to calculate the correct absolute URL when rendering the page.
In other words - as soon as you actually point the relevant domain name to the server with your Plone instance, it'll just work.
P.S.
You should put a bit more effort into asking your question. This is just a copy and paste of a half-finished email chain where you tried to get the answer from me in private. It's not very easy to understand what you're asking.
I think what you are looking for is URL rewriting to handle virtual hosting, i.e. to get your site to appear as if it's at the root URL of a domain.
This is normally done via the web server that sits in front of Plone. For Apache, here is a howto:
http://plone.org/documentation/kb/plone-apache/virtualhost
For other servers:
http://plone.org/documentation/manual/plone-community-developer-documentation/hosting
You can also achieve this directly in Zope (via the ZMI) using something called the Virtual Host Monster. See http://docs.zope.org/zope2/zope2book/VirtualHosting.html
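As a rough sketch, the Apache side of those howtos boils down to a rewrite rule using the VHM URL syntax. The domain, port, and the site id "Plone" below are placeholders for your own setup, and the [P] flag needs mod_proxy enabled:

<VirtualHost *:80>
    ServerName www.example.com
    RewriteEngine On
    # Rewrite every request into the Virtual Host Monster URL on the Zope instance
    RewriteRule ^/(.*)$ http://localhost:8080/VirtualHostBase/http/www.example.com:80/Plone/VirtualHostRoot/$1 [P,L]
</VirtualHost>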
PS. I don't think your question is badly worded. Plone does serve pages with a "base" tag and what appear to be absolute URLs. They aren't baked into the database, but it's also not obvious that the solution to getting the URL you want is the VHM URL syntax and a proxying frontend web server. There is a reason why it doesn't use relative URLs... which I can't remember, it was so long ago.
I have a web page containing an entry form. HTTPS is enabled via an Apache redirect for all requests matching that page. Unfortunately, because the CSS pulls in external images using 'background-image: url(/images/...)', the browser generates a warning that the page contains mixed content.
What's the best way to resolve this issue?
Update 2014.12.17:
Now that SSL is encouraged for everyone and doesn’t have performance concerns, this technique is now an anti-pattern. If the asset you need is available on SSL, then always use the https:// asset.
Allowing the snippet to request over HTTP opens the door for attacks like the recent Github Man-on-the-side attack. It’s always safe to request HTTPS assets even if your site is on HTTP, however the reverse is not true.
More guidance and details in Eric Mills’ guide to CDNs & HTTPS.
Source: Paul Irish – The Protocol-relative URL
Here is a very popular solution:
There's this little trick you can get away with that'll save you some headaches:
In HTML
<img src="//domain.com/img/logo.png">
In CSS
div { background: url(//domain.com/path/to/image.png); }
You should also enable HTTPS for your static resources, and then make sure that the <link> refers explicitly to the HTTPS URL for the CSS resource (whose relative URLs will then be interpreted relative to the HTTPS base of the CSS file).
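For example (host and path are placeholders):
<link rel="stylesheet" href="https://static.example.com/css/styles.css">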
You should use the full URL for your image:
https://your.domain.com/img/image.png
or
https://your.domain.com/route/to/img/image.png
This solved my problem some time ago.
I have a widget that is added to random websites. The widget needs to fill an iframe with content. I need the iframe source to be from the same domain as the website it is embedded in.
To do this I want to ask the site owners to put a file in their root folder that will be used as a proxy to my server.
My question is: how can I implement such a proxy with a static HTML/JS (or similar) file, without using any server-side scripting?
I'm not really clear how such a proxy would work/help either way. Are you looking for something like <base />?
The only issue with the base element is that it can't be turned off once it's turned on. If the iframe is the last thing on the page, or at least the last thing with either src or href, you could set it just above. But I'm not sure that this will allow js to access the iframe as though it were by proxy.
And again, I'm still not sure how a file on the remote server will make the iframe seem like it's on your domain. And I have serious doubts whether the site owners will extend such a favor, since doing so would allow hackers to use your site as a backdoor into their server.
I'm not sure what the browser/JS policy is in terms of redirects and rewrites, but maybe you could go with something like pointing the iframe at your own server and having that page actually go to their page, either via mod_rewrite or a redirect. Either way would be server-side, so maybe that's not an option. I have heard tale of another thing that works, but have yet to see it in action... You have the site owners add a script with:
document.domain = "yourserver.net";
And be sure to set it in your script as well. This makes them play nice, supposedly. But they may not go for that if it breaks their site for other things, unless there is some way their page can tell it's inside an iframe and can set that property conditionally.
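If they did want to set it conditionally, the check itself is simple; keep in mind, though, that document.domain can only be relaxed to the page's own domain or a parent of it, so this only helps when both pages already share a common parent domain. The value below is just the one from the snippet above:

<script>
  // True when this page is being displayed inside a frame on another page
  if (window.self !== window.top) {
    // Only valid if "yourserver.net" is this page's domain or a parent of it
    document.domain = "yourserver.net";
  }
</script>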
Good luck