Can you solve my odd SharePoint CSS cache / customising problem?

I have a weird situation with my SharePoint CSS.
It is deployed as part of a .wsp solution, and up until now everything has been fine.
The farm it deploys to has a couple of web front ends, a single apps server, and a SQL box.
The symptom is that if I deploy the solution and then use a web browser to view the page, the page has no styles, and if I access the .css directly I see only the first 100 or so bytes of the file.
However, if I go into SharePoint Designer and look at the file, it looks fine, and if I check it out and publish it (customising the file but not actually changing anything in it), then the website works fine and the CSS downloads completely.
There is some fairly complex caching on the servers: disk-based and object caches. As far as I can tell I have cleared these (and an iisreset should clear them anyway... shouldn't it?).
I have used this tool to clear the BLOB cache across the whole farm: http://blobcachefarmflush.codeplex.com/

The problem you're describing is one I've encountered before. Let me share what I know, what I suspect, and how I'd go about troubleshooting your scenario.
First off, it sounds like you suspect caching as a potential problem source. In the case of the MOSS publishing feature set, you really have three different cache mechanisms in operation: the object cache, the BLOB cache, and the page output cache. The only mechanism that should be in play, assuming it's turned on with default settings, is the BLOB cache. Neither the object cache nor the page output cache should be touching stand-alone stylesheets like yours.
You've tried flushing the cache using the farm-level BLOB cache flush feature, and that will instruct MOSS to dump all BLOB cache data. You can verify this by reviewing the file system to ensure that only the three .bin folders remain following a flush.
To your specific question regarding an IISRESET: no, an IISRESET actually won't clear the BLOB cache. The contents of the BLOB cache persist beyond the life of the application pool servicing the web application. You either need to use a feature to clear out the cache (as you have been), or perform a manual file delete. I don't recommend the latter unless you absolutely have no other course of action. If you do elect to go the manual route, ensure that you shut down the W3SVC service before deleting files out of the file system. If you don't, the actual file deletion process can get into a race condition with cache repopulation and lead to corruption. After you've deleted files with a stopped W3SVC, you can start the W3SVC back up again.
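If you do end up going the manual route, the sequence looks roughly like this from an elevated command prompt on each web front end (a sketch; the BLOB cache folder is whatever the location attribute in your web.config points to):

net stop W3SVC
rem ...now delete the contents of the folder named in <BlobCache location="..." />...
net start W3SVC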
For more information on the internals of the BLOB cache and how it operates, I'll point you to a blog article of mine: http://sharepointinterface.com/2009/06/18/we-drift-deeper-into-the-sound-as-the-flush-comes/
To see if the BLOB cache is a factor in the behavior you're seeing, you can modify the web.config for your web application(s) and adjust the file pattern to remove CSS from the list of file types in the <BlobCache> element and then restart IIS (or at least recycle the app pool).
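For illustration, here's roughly what that element looks like with css removed from the path expression (the location and the exact file-type list below are just examples; keep your own values):

<BlobCache location="C:\BlobCache\14" path="\.(gif|jpg|jpeg|png|ico|js)$" maxSize="10" enabled="true" />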
Another possibility, based on experience, is that you're seeing something other than BLOB cache abnormalities. The key observation for me is that a direct request for the CSS stylesheet returns only the first 100 bytes or so.
Do you, by any chance, have any intelligent network hardware (that is, intrusion detection hardware or anything that might be performing application/layer-7 filtering) between the WFE and you, the caller? Intrusion detection and IPS systems are a common source of the type of problem you're seeing, and they're one of my first stops whenever I see "oddball" behavior like you're describing. In the case of one of my clients, I saw a problem matching your description (CSS and JS files getting truncated) due to an intervening Juniper firewall with active IPS. Turning off IPS (to test) cleared things up immediately. After that, the networking team sought an update from Juniper to correct the issue so that IPS could remain active.
Try turning off BLOB caching (or removing the CSS extension from the file pattern) to see if that makes a difference. If not, talk to your network team to see if something is happening to the response stream coming back to you. That's where I'd start; hopefully, one of those two things will do the trick for you.
Small side note: if you have a free moment and are up to it, I'd like to hear about your experience with the BlobCacheFarmFlush solution you pulled down from CodePlex. I authored it, and I'd love to hear your thoughts -- good or bad :-)
Sean (sean#sharepointinterface.com)

Related

Gradual increase in I/O usage on website

So I've attached the resource usage of my WordPress website over the past 30 days.
You can see the I/O usage has been getting higher and more frequent. I think this is a problem that has caused a massive drop in visits to my site.
I asked my host why this is, and he said backups usually contribute largely to this. The only thing is, I back up once a month, not every day.
I've tried optimising my database and disabling plugins, but I don't understand why it keeps getting higher.
I have an analytics plugin that refreshes every hour, but I've had that all year and the I/O usage only started getting high recently.
The only thing I can think of is WP Super Cache and CloudFlare not working well together. I've tried different configurations, but it hasn't helped.
Any help would be appreciated.
I think this is a pretty standard I/O log. Over time your database gets a lot bigger, and so does your user base, which ends up using a lot of I/O. I don't think there is anything to panic about right away, but obviously if this is a huge difference from what you normally see, then you should look into it seriously. I take caching very seriously, and I usually use W3 Total Cache for this kind of performance optimization. It's a bit tricky in the beginning, but once you are used to it, it's very easy.
I know you might just want to improve the I/O, for which you mostly just need caching, but here are some things I would do to get the most performance out of a site.
1) If you are using a VPS or dedicated server, install memcached or something like Redis, and then configure your plugin accordingly. You might have to enable it in your php.ini file, but once it's installed you will see the difference. The first request executes the code and saves the results in RAM; on the next request, instead of executing the PHP code again, it just hands over the same results. Whether you want to cache a given page depends on your website; you can set up caching for individual pages as well.
2) If your plugin has options to automatically minify and combine HTML/CSS/JS files, use them; if not, you should minify and combine them into a single file (or as few files as possible) and upload them to your server manually. This cuts a lot of the time spent requesting files and waiting for responses. It's usually milliseconds per file, but with a lot of files it adds up to seconds, plus unnecessary load on the server.
3) If your plugin has a gzip feature, enable it. It lets browsers download gzipped CSS and JS files instead of the original, larger files, which enormously reduces the number of bytes a browser has to download on every visit (see the sketch after this list).
4) Enable caching of files in the browser. Your plugin might already do this; if not, you will have to set some headers telling the browser to cache the CSS and JS files (also covered in the sketch below). The next time the user goes to another page on your website, the browser loads the CSS/JS directly from its cache instead of requesting them from the server.
5) Upload your CSS/JS/image files to a CDN; that way, whenever someone requests a file, it takes the shortest route to your user's browser.
6) If your site is not just a personal blog and is making serious money, or you just want to keep pleasing a hugely growing number of users, then I would suggest you look into auto-scaling server platforms, where you set some triggers and the number of servers automatically increases when facing a lot of users/IO, then automatically scales back down once the number of users returns to normal. The big players for this sort of service are AWS Elastic Beanstalk and Microsoft Azure, or you can use beanstalkd with DigitalOcean, which is a cheap alternative.
7) WordPress is quite compatible with Facebook's HHVM, an open-source virtual machine that runs PHP with just-in-time (JIT) compilation. PHP is an interpreted language (the interpreter itself is written in C/C++; you can check out the code on GitHub), so whenever you refresh a page, hundreds of lines of PHP code are interpreted, compiled, and executed. What HHVM does is compile the code and keep it in memory, so when someone else requests the same page it already has a compiled version and just executes and serves it. That removes 30-40% of the compile time from every request, which in turn makes your site 30-40% faster. PHP 7 is also out as of last month, and it has a lot of performance upgrades, so if you are still not sure about HHVM you should definitely try upgrading to PHP 7.
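As a rough illustration of points 3 and 4, here's a minimal sketch for an Apache host, assuming mod_deflate and mod_expires are enabled (if you're on nginx or your host manages this for you, the plugin settings are the better route):

<IfModule mod_deflate.c>
    # Compress text assets before sending them to the browser
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>

<IfModule mod_expires.c>
    # Tell browsers they may keep static assets in their cache for a month
    ExpiresActive On
    ExpiresByType text/css "access plus 1 month"
    ExpiresByType application/javascript "access plus 1 month"
    ExpiresByType image/png "access plus 1 month"
</IfModule>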

ASP.NET: are aspx/ascx files accessed from disk on every request?

I googled forever, and I couldn't find an answer to this; the answer is either obvious (and I need more training) or it's buried deep in documentation (or not documented). Somebody must know this.
I've been arguing with somebody who insisted on caching some static files on an ASP.NET site, where I thought it wasn't necessary for the simple fact that all other files that produce dynamic HTML are not cached (by default; let's ignore output caching for now; let's also ignore the caching mechanism that person had in mind [in-memory or out on the network]). In other words, why cache some XML file (regardless of how frequently it's accessed) when all aspx files are read from disk on every request that maps to them? If I'm right, caching such static files would gain very little (fewer disk-read operations), but more memory would be spent (if cached in memory) or more network operations would be caused (if cached on an external machine). Does somebody know what in fact happens when an aspx file is [normally] requested? Thank you.
If I'm not mistaken, ASPX files are compiled at run time, on first access. After the page is compiled into an in-memory instance of a Page class, requests to the same resource (ASPX page) are serviced against the object in memory. So in essence, they are cached with respect to disk access.
Obviously the dynamic content is generated for every request, unless otherwise cached using output caching mechanisms.
Regarding memory consumption vs. disk access time, I have to say that from the performance standpoint it makes sense to store objects in memory rather than reading them from disk every time, if they are used often. Disk access is roughly five orders of magnitude slower than access in RAM (see the figures below). Although inappropriate caching strategies could push frequently used objects out of memory to make room for seldom-used objects, which would hurt performance for obvious reasons. That being said, caching is really important for a high-performance website or web application.
As an update, consider this:
Typical DRAM access times are between 50 and 200 nanoseconds
Average disk-access times are in the range of 10 to 20 milliseconds
That means that, without caching, a hit against disk will be on the order of 100,000 times slower than accessing RAM (10 ms ÷ 100 ns = 100,000). Of course, the operating system, the hard drive, and possibly other components in between may do some caching of their own, so the slow-down may only occur on the first hit if you only have a couple of such files you're reading from.
Finally, the only way to be certain is to do some benchmarking. Stress-test both implementations and choose the version that works best in your case!
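If you want a quick-and-dirty feel for the difference, here's a minimal sketch (the file name and iteration count are arbitrary placeholders; note that the OS file cache will blunt the disk numbers after the first read, exactly as noted above):

using System;
using System.Diagnostics;
using System.IO;

class CacheBenchmark
{
    static void Main()
    {
        const string path = "settings.xml"; // hypothetical file to test with
        const int iterations = 10000;

        // Variant 1: hit the disk on every iteration
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            string text = File.ReadAllText(path);
        }
        sw.Stop();
        Console.WriteLine("Disk reads:   {0} ms", sw.ElapsedMilliseconds);

        // Variant 2: read once, then serve from memory
        sw.Restart();
        string cached = File.ReadAllText(path);
        for (int i = 0; i < iterations; i++)
        {
            string text = cached;
        }
        sw.Stop();
        Console.WriteLine("Cached reads: {0} ms", sw.ElapsedMilliseconds);
    }
}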
IIS does a large amount of caching, so directly, no. But IIS checks for ANY changes in the web directory and reloads any changed files as they get changed. Sometimes IIS gets borked and you have to restart it to detect changes, but usually it works pretty well.
P.S. The caching mechanisms may flush data frequently based on server usage, but the caching works for all files in the web directory. Any detected change to source code causes IIS to flush the web application and re-compile/re-load it as well.
I believe that the answer to your question depends on both the version of IIS you're using and your configuration settings.
But I believe it's possible to configure some combinations of IIS/.NET to avoid checking the files - there's an option to pre-compile sites, so no code actually needs to be deployed to the web server.

Which Has Better Performance - Configuration In AppSettings or Database?

I'm looking for feedback as to where I should store application configuration data and default text values for the best overall performance. For example, I've been sticking things like default URL values and the default text for error messages or application instructions inside the web.config, but now I'm wondering whether that will scale... and whether it's even the right thing to do in the first place.
As mentioned before, this really shouldn't matter: the settings, be they in the web.config or in a database, should be read once and then cached.
I can almost guarantee that there will be other parts of your code much slower than this.
As a side note, and not performance-related: if you need to worry about site uptime, note that you can edit configuration in a database on the fly, but changing the web.config will cause an app domain restart and a subsequent loss of sessions.
If it's a single-server setup (as opposed to a web farm), store it in the web.config file. There is a new release of the Web Farm Framework, and you can check details for that type of scenario here:
http://weblogs.asp.net/scottgu/archive/2010/09/08/introducing-the-microsoft-web-farm-framework.aspx
If the database is on the same machine, the performance difference may be negligible. If you need to cross a wire to get to some database, it's most likely slower than a directly accessible and not-too-large web.config.
I would really prefer keeping things simple here: just use the web.config. It is probably already being cached in memory by some system component. If you feel it's too slow, measure it, and then perhaps go for a more complicated solution.
Having said that, you could simply read in all configuration at application start-up and hold it in memory. This way any performance difference is reduced to just the application's start-up time. You also get the added benefit of being able to validate the configuration files at start-up.
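A minimal sketch of that read-once-and-validate idea (the class name and setting keys are hypothetical; each field is populated once per app domain):

using System.Configuration;

// Reads all settings once, and fails fast if one is missing.
public static class AppConfig
{
    public static readonly string DefaultUrl = Require("DefaultUrl");
    public static readonly string ErrorText = Require("ErrorText");

    private static string Require(string key)
    {
        string value = ConfigurationManager.AppSettings[key];
        if (string.IsNullOrEmpty(value))
            throw new ConfigurationErrorsException("Missing appSetting: " + key);
        return value;
    }
}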
Not sure about your defaults, but just ask yourself whether they are environment-specific and whether they're really something you want to change without recompilation. These being configuration files, what is the use case? Is some operations guy or developer going to set these defaults? Are you planning to document them? Are you going to make your application robust against whatever is configured?
Are these defaults going to be used on multiple environments/installations (for localization, for instance)? Then perhaps it's better to use a different, separate file which you can redeploy separately when needed.
How often are the settings going to change?
If they never change, you can probably put them in web.config, and configure IIS so that it doesn't monitor the file for changes. That will impose a small startup penalty, but after that there should be no performance penalty.
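On .NET 4.5 and later, one knob for that monitoring is the fcnMode attribute on httpRuntime (a sketch; verify the restart behavior in your own environment before relying on it):

<system.web>
  <!-- "Disabled" turns off file change notifications, so file edits no longer trigger an app domain restart -->
  <httpRuntime targetFramework="4.5" fcnMode="Disabled" />
</system.web>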
But honestly, there are probably dozens of other places to improve before you start worrying about this - remember, "Premature Optimization is the root of all evil" :)

Is it commonplace/appropriate for third party components to make undocumented use of the filesystem?

I have been utilizing two third-party components for PDF document generation (in .NET, but I think this is a platform-independent topic). I will leave the companies' names out of it for now, but I will say they are not extremely well-known vendors.
I have found that both products make undocumented use of the filesystem (i.e., putting temp files on disk). This has created a problem for me in my ASP.NET web application, as I now have to identify the file locations and set permissions on them as appropriate. Since my web application is set up for impersonation using Windows authentication, this essentially means I have to assign write permissions to a few file locations on my web server.
Not that big a deal once I figured out why the components were failing, but... I see this as a maintenance issue. What happens when we upgrade our servers to some OS that changes one of the temporary file locations? What happens if the vendor decides to change the temporary file location? Our application will "break" without a line of our code changing. Relatedly, if we have to stand this application up on a "fresh" machine (regardless of environment), we have to know about this issue and set permissions appropriately.
Unfortunately, the components do not provide a way to make this temporary file path "configurable", which would certainly at least make it more explicit about what is going on under the covers.
This isn't really a question that I need answered, but more of a kick off for conversation about whether what these component vendors are doing is appropriate, how this should be documented/communicated to users, etc.
Thoughts? Opinions? Comments?
First, I'd ask whether these PDF generation tools are designed to be run within ASP.NET apps. Do they make claims that this is something they support? If so, then they should provide documentation on how they use the file system and what permissions they need.
If not, then you're probably using an inappropriate tool set. I've been there and done that. I worked on a project where a "well-known address lookup tool" was used, but the version we used was designed for desktop apps. As such, it wasn't written to cope with hundreds of requests - many simultaneous - and it caused all sorts of hard-to-repro errors.
Commonplace? Yes. Appropriate? Usually not.
Temp files are one of the appropriate uses, IMHO, as long as they use the proper %TEMP% folder or, even better, the integrated Path.GetTempPath/Path.GetTempFileName functions.
In an ideal world, each third-party component would come with a Code Access Security description, listing in detail what is needed (and for what purpose), but CAS is possibly one of the most-ignored features of .NET...
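For reference, a minimal sketch of the pattern being endorsed here - resolving the temp location through the framework instead of hard-coding a path:

using System;
using System.IO;

class TempFileDemo
{
    static void Main()
    {
        // Resolves to the %TEMP% of whatever identity the process (or impersonated user) runs as
        string tempDir = Path.GetTempPath();

        // Creates a uniquely named, zero-byte file in that folder and returns its full path
        string tempFile = Path.GetTempFileName();

        Console.WriteLine("Temp directory: " + tempDir);
        Console.WriteLine("Temp file:      " + tempFile);

        File.Delete(tempFile); // clean up after ourselves
    }
}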
Writing temporary files would not be considered outside the normal functioning of any piece of software. Unless it is writing temp files to a really bizarre place, this seems more likely something they never thought to document than something they did to go out of their way to cause you trouble. I would simply contact the vendor, explain what you are doing, and ask if they can provide documentation.
Also, Martin makes a good point about whether it's an app that should run under ASP.NET or a desktop app.

Single ASP.net site with Multiple Instances & web.configs

We have a legacy ASP.NET-powered site running on an IIS server. The site was developed by a central team and is used by multiple customers. Each customer, however, has their own copy of the site's aspx files plus a web.config file. This is causing problems, as changes made by well-meaning support engineers to the copies of the source aspx files are not being folded back into the central source, so our code base is diverging. Our current folder structure looks something like:
OurApp/Source aspx & default web.config
Customer1/Source aspx & web.config
Customer2/Source aspx & web.config
Customer3/Source aspx & web.config
Customer4/Source aspx & web.config
...
This is something I'd like to change so that each customer has just a customised web.config file and all the customers share a common set of source files. So something like:
OurApp/Source aspx & default web.config
Customer1/web.config
Customer2/web.config
Customer3/web.config
Customer4/web.config
...
So my question is, how do I set this up? I'm new to ASP.NET and IIS, as I usually use PHP and Apache at home, but we use ASP.NET and IIS here at work.
Source control is used, and I intend to retrain the support engineers, but is there any way to avoid having multiple copies of the source aspx files? I hate that sort of duplication!
If you're dead set on the single app instance, you can accomplish what you're after using a custom ConfigurationSection in your single web.config. For the basics, see:
http://haacked.com/archive/2007/03/12/custom-configuration-sections-in-3-easy-steps.aspx
http://msdn.microsoft.com/en-us/library/2tw134k3.aspx
Example XML might be:
<YourCustomConfigSection>
  <Customers>
    <Customer Name="Customer1" SomeSetting="A" Another="1" />
    <Customer Name="Customer2" SomeSetting="B" Another="2" />
    <Customer Name="Customer3" SomeSetting="C" Another="3" />
  </Customers>
</YourCustomConfigSection>
Now, in your ConfigSection properties, expose Name, SomeSetting, and Another. When a property is accessed or set, use a condition (the request domain or something else that uniquely identifies the customer) to decide which to use.
With the proper implementation, the app developers don't need to be aware of what's going on behind the scenes. They just use CustomSettings.Settings.SomeSetting and don't worry about which customer is accessing the app.
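A minimal sketch of the section classes behind that XML (type names are hypothetical and mirror the example above; you'd also register the section under <configSections> with a matching name and type):

using System.Configuration;

public class YourCustomConfigSection : ConfigurationSection
{
    [ConfigurationProperty("Customers")]
    public CustomerCollection Customers
    {
        get { return (CustomerCollection)this["Customers"]; }
    }
}

public class CustomerCollection : ConfigurationElementCollection
{
    // BasicMap + ElementName let the collection read the <Customer ... /> child elements
    public override ConfigurationElementCollectionType CollectionType
    {
        get { return ConfigurationElementCollectionType.BasicMap; }
    }

    protected override string ElementName
    {
        get { return "Customer"; }
    }

    protected override ConfigurationElement CreateNewElement()
    {
        return new CustomerElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((CustomerElement)element).Name;
    }
}

public class CustomerElement : ConfigurationElement
{
    [ConfigurationProperty("Name", IsKey = true, IsRequired = true)]
    public string Name { get { return (string)this["Name"]; } }

    [ConfigurationProperty("SomeSetting")]
    public string SomeSetting { get { return (string)this["SomeSetting"]; } }

    [ConfigurationProperty("Another")]
    public string Another { get { return (string)this["Another"]; } }
}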
I know it might seem annoying, but the duplication is actually a good thing. The problem here is with your process, not with the way the systems are set up.
Keeping the sites separate is actually a good thing. Whilst it looks like "duplication", it's actually not; it's separation. Changes made to production code by your support engineers should be actively discouraged.
You should be looking at changing your process to change once, deploy everywhere. This will make everything a lot easier for you in the long run.
To actually answer your question: no, you can't do it. The reason is that web.config isn't designed to store user-level settings; it's designed to store per-application-instance settings. In your case, you need an application instance per user, which means separate config files.
For your system to work, you would need to be able to tell the application preemptively which config file to use, which isn't possible without some sort of input from the user.
Use an external source control application and keep rolling out updates as required.
It isn't really a good idea to let your live site be updated by support engineers in real time anyway.
Depending on what is actually in the web.config, and which settings differ between customers, you could opt to use a single web.config and store the other customer-specific configuration options in a database or some custom XML/text file. As long as the customer-specific settings in the web.config don't affect how IIS operates, and you are just using it to store values, this solution might work out well for you.
Thank you all again for your answers. After reading through them and having a think, what I will do is leave the multiple instances alone for now and try to improve our update process first. Then I will develop a new version of the application that keeps the user configuration information in the database layer and picks the user based on the request domain or URL, as someone suggested. That way I can have a single application instance supporting multiple different client configurations cleanly.
Most of the client configuration data is really presentation- or data-source-related; nothing complicated. I think we ended up with multiple application instances mostly because the original programmer hadn't been expecting multiple customers and didn't design for that, so when someone came along later and added a second customer they just duplicated the application, which is wasteful, as each instance is about 99.99% identical to the original.
I am implementing this as we speak.
In the main web.config, I have one item per installation. It points me toward the custom config file I built for each client (and toward the custom master page, CSS, images, etc.).
Using WebConfigurationManager.OpenWebConfiguration, I open the per-client web.configs in their subdirectories. I determine which one to use with System.Web.HttpContext.Current.Request.Url.OriginalString, working out which URL called me. Based on that URL, I know which web.config to use.
From that point forward, the clients all use the same codebase. They have their own databases, too.
The idea of having to update 30-40 installations when we make an update scares me to death. We do not want to support 30-40 codebases, so there won't be customization beyond the master page, CSS, and images.
I wrote a custom class library that knows how to switch to the proper web.config and read the custom section I built with all our settings.
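A rough sketch of that switching logic (the host-to-folder mapping via appSettings below is a hypothetical stand-in for the per-installation items mentioned above):

using System.Configuration;
using System.Web;
using System.Web.Configuration;

public static class ClientConfig
{
    // Opens the per-client web.config that lives in its own subdirectory,
    // chosen by the host name of the current request.
    public static Configuration Open()
    {
        string host = HttpContext.Current.Request.Url.Host;
        string clientFolder = MapHostToFolder(host);
        return WebConfigurationManager.OpenWebConfiguration("/" + clientFolder);
    }

    private static string MapHostToFolder(string host)
    {
        // Hypothetical mapping, e.g. "customer1.example.com" -> "Customer1"
        return ConfigurationManager.AppSettings[host];
    }
}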
The only issue I have now is the FormsAuthentication cookie. I need to be able to switch that as well. Unfortunately, the property for the name is read-only.
If I understand correctly, it sounds like you have multiple deployments (one for each client) where the only difference is the web.config, right?
First off, although I don't know your unique situation, I would generally urge you to stay with separate installs. It usually allows much more flexibility. Off the top of my head: are you ever going to have customizations, or different clients running different versions? Are you sure? The easiest way to stay flexible here is to keep going with separate installs.
In my opinion, it isn't ugly at all if your practices are aligned properly. Based on some things you mentioned, you have trouble in that area - obviously, possible source control buy-in/training issues. But you are aware of that. I would also take a hard look at your deployment procedures and so on. I have a feeling you might have further issues in that area, and I mean absolutely no offense.
That said, let's say you want to move forward with this.
You didn't say whether all the clients share a single common database, but I'm guessing no, since designing that type of system is often not worth the extra complexity (which can be severe in systems of any size), so people often opt to keep them separate.
What that means is that you have to store your connection string somewhere. Usually that would be web.config... so that seems to break our plan.
Really, the apparent elegance of this situation is almost always wildly offset by the challenges it introduces. If I thought about it hard enough, I could maybe find a way around this by introducing another database that intelligently manages connection strings, or maybe by delving into keeping all your login info directly in web.config (which is possible but... not ideal). However, my gut says the work would be wasted, because some day you will end up going back to how you're doing it now.
Also: changing code directly in production is obviously not the best practice here. But if you are on a monolithic shared platform with any amount of traffic, that can never ever happen. Food for thought.
Let me know if I'm missing something!
