CSS changes not reflecting on site - asp.net

Whenever we make changes to the CSS, it generally takes 24 hours for those changes to show up on my site. I have tried clearing the server cache and the browser cache, but that doesn't help either. Is there any other way to make CSS changes take effect immediately after an update?
It happens in all browsers. When I check in the browser, I can access my CSS file via two paths. For example, I store my CSS in a folder named "Cssfolder" and the file is called 135.css.
When I access the two folder paths, Cssfolder/135.css and cssfolder/135.css, one of them shows me the latest CSS whereas the other shows the old CSS. Notice the "C" is capital in one path and lowercase in the other.
Thanks.

I've found this to be a pretty common problem in a lot of my projects. I would suggest two things...
If it's just an app that you are working on, you can use the CSS Cachebuster during development.
Following the idea behind the Cachebuster, I have found that adding the timestamp of the CSS file as a query string on the CSS link often helps tell the browser that the file is different... something like whatever.css?12212009035543
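For instance, here's a minimal sketch of that idea in ASP.NET; the class and method names are my own invention, not any library's. It stamps the link with the file's last write time, so the query string only changes when the file actually changes.

using System;
using System.IO;
using System.Web;

// Hypothetical helper: builds a CSS href whose query string is the file's
// last-write timestamp, so the URL changes whenever the file changes.
public static class CssUrlHelper
{
    public static string TimestampedHref(string virtualPath)
    {
        string physicalPath = HttpContext.Current.Server.MapPath(virtualPath);
        string stamp = File.GetLastWriteTimeUtc(physicalPath).ToString("yyyyMMddHHmmss");
        return virtualPath + "?" + stamp;
    }
}

Used in the markup like: <link rel="stylesheet" type="text/css" href="<%= CssUrlHelper.TimestampedHref("/Cssfolder/135.css") %>" />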

You might want to use a monitoring tool, like Live HTTP Headers for Firefox, to see the requests and responses to and from the server. This usually solves a lot of problems for me. Take a look at the "Expires" headers and conditional requests (like "If-Modified-Since"). That said, take a look at server and client local times and timezones: it might be that they differ significantly, and conditional GET requests only "seem to be" handled correctly because of future or otherwise mangled timestamps.
You can force the current CSS to load directly from the server by appending a random unique value to the URL, like http://example.com/Cssfolder/135.css?983274928374 and http://example.com/cssfolder/135.css?08973249827. There's no way that this would ever get cached unless you use the same random value twice.
This way you learn where to look for the solution to your problem: at the server, at an ISP or proxy, or in your browser.
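If you want to script that check rather than eyeball it in a browser plugin, a small console probe (a sketch, reusing the example URL above) prints the caching headers the server actually sends:

using System;
using System.Net;

class CssHeaderCheck
{
    static void Main()
    {
        // A random query value defeats every cache between you and the origin server.
        string url = "http://example.com/Cssfolder/135.css?" + Guid.NewGuid().ToString("N");
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Last-Modified: " + response.Headers["Last-Modified"]);
            Console.WriteLine("Expires:       " + response.Headers["Expires"]);
            Console.WriteLine("Cache-Control: " + response.Headers["Cache-Control"]);
        }
    }
}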

You really need to see whether this is server side or client side. If the server is still serving the old CSS then clearly you've got no chance on the client side.
I've occasionally seen cases where I've had to open the CSS file itself in the browser, and the next time I visited the real page, it used that new CSS. Usually just hitting refresh does it.
Do you have any web caches like Akamai involved anywhere?
If you try to go to the CSS page from a computer which has never seen the old version, which version does it show?
EDIT: Changed answer to reflect edits in question.

I have dealt with this issue in the past, and ended up writing an HttpModule to handle it.
It's pretty simple: it finds all script/CSS links in the head tag (they now need to have runat=server) and appends the assembly version number to the link, in the same way Tim K describes. This way I'm sure my clients always fetch the newest CSS/scripts when my app is updated in production, and I never have to deal with this issue again.
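The module itself isn't posted here, but the core idea is small. Here's a minimal sketch (class name is mine) written as a base Page rather than an HttpModule; it walks the header controls and appends the assembly version to every stylesheet link:

using System;
using System.Reflection;
using System.Web.UI;
using System.Web.UI.HtmlControls;

// Hypothetical base page: appends the assembly version to each <link runat="server">
// inside <head runat="server">, so every deployment produces new CSS URLs.
public class VersionedAssetsPage : Page
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        string version = Assembly.GetExecutingAssembly().GetName().Version.ToString();
        foreach (Control control in Header.Controls)
        {
            HtmlLink link = control as HtmlLink;
            if (link != null && link.Href.EndsWith(".css", StringComparison.OrdinalIgnoreCase))
            {
                link.Href += "?v=" + version;
            }
        }
    }
}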

Maybe Internet Service Provider cache, as in this case?

I was perplexed by this issue until someone said Ctrl+F5. Worked for me :)

When I am developing and I need to be sure that I am seeing changes as I work, I stick the CSS in the page itself, i.e.
<style type="text/css">
/* your css */
</style>
Or you could constantly change the name of the CSS file itself. That's not very useful in a production environment, but perhaps okay while developing.
I know it doesn't solve the problem, but for development it is okay.

How to check for live issues with Google Tag Manager?

Previously working code that downloads a CSV file from our site now fails. Chrome, Safari and Edge don't display anything helpful except "Blob Blocked", but Firefox shows a stack trace:
Uncaught TypeError: Location.href setter: Access to 'blob:http://oursite.test/7e283bab-e48c-a942-928c-fae0907fdc82' from script denied.
Then comes a stack dump from googletagmanager.
This appears to be a fault in the Tag Manager code introduced in the last couple of weeks.
The fault appears in all browsers and is resolved immediately by commenting out the Tag Manager. The problem was reported by a customer on the production system, and then reproduced both on staging and locally. The customer advised they had used the export function successfully two weeks ago.
The question really is, do Google maintain a public facing issues log for things like the tag manager?
It's not really about GTM as a library; it's about poor user implementation. It's not up to Google to check for user-introduced conflicts with the rest of the site's functionality.
What you could do is go to GTM, and see what has been released in the past two weeks. Inspect things and look for anything that could interfere with the site's functionality. At the same time - do the opposite, see all the front-end changes introduced during this time frame by the web-dev team.
The main thing to watch for is unclosured JS deployed in custom HTML tags. Junior GTM implementation specialists like to use the global scope, claiming global variable names, often after the page has loaded, thus overwriting the front-end's variables.
Sometimes people deploy minified, unclosured code to the DOM, chaotically claiming short variable names, to the same end.
This is likely the easiest and most common way for GTM to break a front-end, though there are certainly many other ways.
If this doesn't help, there's another easy way to debug it: make a new workspace from Default (or whatever is live), go into preview mode and confirm that the issue still happens. Now start disabling the most recently created tags one by one and pinpoint which one causes the issue.
Let us know what it was.
The solution was to replace the previous Tag Manager code with the latest recommended snippet.

How does it appear that MDN can detect a request from an iframe on the Server Side and send no content?

Please note: this question is not directly related to Server-side detection that a page is shown inside an iframe. I'm showing you an instance where it would appear that the people at MDN (Mozilla Developer Network) are already detecting that content is being delivered to an iframe, although, if you read through this, I discuss the possibility that this isn't server-side at all; it might be some sort of "rights" issue declared somehow, or in some way I don't know about. The point is to understand how something that already exists works.
First of all, I do not desire to rip off MDN content as my own. I'm asking because I'm truly puzzled by it. The people at MDN seem to have pulled off a nice trick, and I'd like to know it, but maybe it's simpler than I realized.
The code is only:
<iframe src="https://developer.mozilla.org/en-US/docs/HTML/HTML5"></iframe>
Take, for example, this fiddle:
http://jsfiddle.net/jfcox/D3UNZ/
Do you notice how there's no content in the iframe? There doesn't appear to be any content related to the request on the Chrome network tab.
I assure you that this would work on a "normal" website, like example.org. See http://jsfiddle.net/jfcox/nPwcu/
So, I ask, what is it that they are doing to detect that a request is being made from an iframe?
Is there some Browser-Fu I don't know about? Oddly enough, that might be the case. From IE9:
To help protect the security of information you enter into this
website, the publisher of this content does not allow it to be
displayed in a frame.
Wow! OK, so maybe it's not server-side; maybe it's all Browser-Fu. Even so, how do IE9 and these other browsers know what I don't know? What do I need to look up to learn about this?
I have my own suspicions, namely that there's some file at the root of the website like crossdomain.xml for flash that defines permissions about content usage or whatever, but I still wouldn't even know where to start if that's the case.
Turns out, it's a pretty simple copy protection. All you need to do is set a response header.
https://developer.mozilla.org/en-US/docs/HTTP/X-Frame-Options
https://datatracker.ietf.org/doc/html/draft-ietf-websec-x-frame-options-00
Yes, "frame", instead of "iframe".
:eyeroll:
I suppose the name makes sense, considering the possibility that somebody could still attempt to use old HTML 4 frame tags for whatever purpose, and I would expect most browser/DOM engines to have baked-in support for frame tags given HTML's history. Netscape created/supported frames as early as version 2.0, and iframe was a later, purely-Microsoft invention that found wide adoption, IIRC.
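Since this is an ASP.NET-flavoured thread: here's a minimal sketch of sending that header yourself from an IHttpModule. The module name is mine, and whether MDN sends DENY or SAMEORIGIN is an inference from the behaviour above, not something I've confirmed.

using System;
using System.Web;

// Hypothetical IHttpModule: adds X-Frame-Options to every response so browsers
// refuse to render the pages inside a frame or iframe.
public class FrameDenyModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.PreSendRequestHeaders += delegate
        {
            app.Response.AppendHeader("X-Frame-Options", "DENY");
        };
    }

    public void Dispose() { }
}

Register it in web.config under httpModules (IIS6/classic mode) or modules (IIS7 integrated mode).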

Protocol-relative URL in CSS for multiple subdomains

Our PHP-driven website has recently added SSL certificates to support the https protocol, and we are having problems with IE6 through IE8 even though our pages do not request any resources over http.
I have read this post : http://paulirish.com/2010/the-protocol-relative-url/
So, basically, I need to replace all the
background: url('/images/whatever.gif');
With :
background: url('//www.mydomain.com/images/whatever.gif');
I'm not quite a fan of using my domain name across several hundred CSS files to start with, but suppose I do: what would be the best practice for my development, test and staging environments, which are all on different subdomains from the production site? I would need to use dynamic representations of the domain name in the CSS files, most probably driven from some sort of config file, but how?
You don't have to add your hostname to get protocol-relative behaviour. The root-relative form you're already using inherits the page's protocol, because it doesn't specify one.
Can you detail the problems you are having? Have you confirmed with a test that the URL with a domain name will solve your problem?
PS: If you have hundreds of CSS files, you'll probably be happier with a dynamic generation system anyway, but that's a separate matter.
The problems are popups in IE6, 7 and 8 that say there is mixed content in the page (which would mean http resources included in an https page). Chrome, FF4 and up, and IE9 do not show those popups, and this is correct: there are no http resources included.
Several blog posts seem to point to background URLs as the source of this problem. One of the posts (http://blogs.msdn.com/b/ieinternals/archive/2009/06/22/https-mixed-content-in-ie8.aspx) has a comment from Eric Law at MSFT, who states:
The debugger reports that the following is the URL that is triggering
the prompt:
"about:/images/lightview/inner_slideshow_play.png"
Of course, that URL doesn't actually exist in your markup. It looks
like there's dynamic creation of an IFRAME and injection of content
into that frame. The default URL for an empty frame is about:blank,
which leads to the prompt.
and ...
Other quirks to be aware of: In IE6, we treat "about:blank" as
insecure content, as well as "javascript:" and "res:". In IE7, we
fixed the "about:blank" case, but we have not (yet) changed javascript
and res.
So the problem is known and confirmed by MSFT for their older browsers, which create an IFRAME and inject content that then generates the error.
Most workarounds I have stumbled upon point to using protocol-relative URLs, like the first URL I showed. I'm not sure you can consider background: url('/images/whatever.gif'); a protocol-less call, because of this infamous IE6-to-8 bug.
Edit: working on a solution. We have found this in our JavaScript files, and it seems it has been the real problem from the beginning:
<input target="_blank"class="sub" type="button" style="background-image:url(../images/name.gif);">
OK! Got it.
By the way, if anybody ever runs across the need to find exactly what problems they are having with IE6, IE7 or IE8 on https pages that are incorrectly reported as containing mixed content, use this script: http://www.enhanceie.com/dl/scriptfreesetup.exe
So in the end it was the button I talked about in the last post. Changing it to an imported class, swapping background-image for just background, and getting rid of the ../ at the beginning did the trick.
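For the record, the fix amounted to something like this (keeping the file and class names from the snippet above):

<input type="button" class="sub" />

/* in the stylesheet, replacing the inline style */
.sub { background: url(/images/name.gif); }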
Thanks all for your help, I'll still flag an answer on Ned's input, since it was of some help.

ASP.NET renders incomplete HTML

I have an ASP.NET C# .NET 3.5 page which contains several user controls. I am noticing that sometimes the HTML loaded in the browser is incomplete; it seems to get cut off.
Any suggestions on how to troubleshoot the root cause?
This can be symptomatic of server errors or proxy problems. I would use Fiddler to check what's going back and forth between your browser and the server. If you are getting any 500 (server error) response codes, that would be a good place to look.
Another thing to check would be JavaScript errors on the page, because depending on what your scripts are doing, errors can prevent loading of other content in some cases.
womp probably has most of the bases covered, but the other angle that can lead to issues like this is exceptions getting swallowed while still halting processing, thereby sending half the page or so.
Verify that your HTML is being written to the page by viewing the source code of the page after it loads. My guess is that the HTML that is being output is invalid, and that the browser isn't able to properly display it. Make sure all your HTML tags are properly closed and balanced.
It could also be an issue with the request being ended midway through. Try removing one control at a time from the page and see if the situation improves. If it does, you'll know which control is to blame.
It is quite unlikely to be the same problem, but I had this happen before where the page had a custom filter attached to Response.Filter which reformatted the output to fix up some .NET SEO problems. The one we wrote had a bug where one regular expression consumed a bit too much copy in some instances and broke the output. A bare-bones sketch of the filter pattern follows.
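For anyone who hasn't seen the pattern, this is roughly the shape of such a filter (a pass-through sketch with names of my own choosing; the rewriting, and the greedy regex that bit us, would live in Write):

using System;
using System.IO;

// Minimal pass-through Response.Filter. A rewriting filter like the one
// described would buffer and transform the HTML in Write; a greedy regex
// there can swallow real markup and truncate the page.
public class PassThroughFilter : Stream
{
    private readonly Stream _inner;
    public PassThroughFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // A real SEO filter would decode, transform and re-encode here.
        _inner.Write(buffer, offset, count);
    }

    public override void Flush() { _inner.Flush(); }
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}

// Wire-up, e.g. in a page or Global.asax:
//   Response.Filter = new PassThroughFilter(Response.Filter);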

IIS CSS Caching

When we are developing new sites, or testing changes to existing ones that involve CSS, after the new code is committed and someone goes to check the changes, they always see a cached version of the old CSS. This is causing a lot of problems in testing, because people are never sure whether they have the latest CSS on screen (I know Shift+Refresh clears this cache, but I can't expect end users to know to do this). What are my possible solutions?
If you're serving your CSS from static files (or anything for which the query string doesn't matter), try varying the query string to ensure that the browser makes a fresh request, as it will think it's pulling a completely different resource. For example, use
"styles.css?token=1234" in the CSS reference in your markup, and change the value of "token" on each CSS check-in.
In your development environment, set the Expires header much lower. In your production environment, set it higher, and then set it low about a week before you do your release.
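On IIS7 this can be set for static content in web.config; for example (a sketch, adjust the max-age to taste):

<!-- Hypothetical dev setting: let browsers cache static files for five minutes only -->
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="00:05:00" />
  </staticContent>
</system.webServer>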
It's not a great solution, but I've gotten around this before at the page level by adding a query string to the end of the call to the CSS file:
<link href="/css/global.css?id=3939" type="text/css" rel="stylesheet" />
I'd randomize the id value so it loads a different value on each page load, then take this code out before pushing to production. I suppose you could also pull the value from a config file, so it only has to change once per commit; a sketch of that follows.
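For example (the type and key names are my own):

using System.Configuration;

// Hypothetical helper: reads the cache-busting token from web.config, e.g.
//   <appSettings><add key="CssVersion" value="3939" /></appSettings>
// Bump the value whenever you commit CSS changes.
public static class CssVersion
{
    public static string Current
    {
        get { return ConfigurationManager.AppSettings["CssVersion"] ?? "0"; }
    }
}

Used in the markup like: <link href="/css/global.css?id=<%= CssVersion.Current %>" type="text/css" rel="stylesheet" />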
Similar (slightly more detailed) answers were given for the JavaScript version of this question, which has the same problem/solution:
Help with aggressive JavaScript caching
