What is this part doing? In particular, why is it time-consuming if I don't have ViewState or a big object graph? It can sometimes take a third of the total request time.
I've found that this problem only occurs on the dev server.
I have also noticed that this stage is comparatively expensive even without ViewState (even with a nearly blank page).
I was able to speed this stage up drastically by running my project on my local IIS instance rather than on the Visual Studio development web server. Compiling in release mode rather than debug mode seemed to give a marginal improvement as well.
My guess is that 1) it is nothing to worry about and 2) the VS web server may be less optimized than IIS for some piece of the process. For example, IIS may cache machine values (such as registry settings, certificates, etc.), and the VS web server may not.
If during SaveStateComplete an encryption routine is run (such as when EnableViewStateMac="true"), a call to local machine resources might be much more expensive running on the VS web server even if nothing is actually being encrypted.
I don't consider this a great answer; if you are really concerned you could profile ASP.NET to see what it is actually doing (for example, which BCL methods are being invoked).
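For what it's worth, one low-effort way to measure just this step is to override the page's SavePageStateToPersistenceMedium and time the base call. A diagnostic sketch only (the page class name is an example; remove it once you've measured):

    using System.Web.UI;

    public partial class MyPage : Page
    {
        // Times the save-state step (which runs around SaveStateComplete)
        // before handing off to the default persistence logic.
        protected override void SavePageStateToPersistenceMedium(object state)
        {
            var sw = System.Diagnostics.Stopwatch.StartNew();
            base.SavePageStateToPersistenceMedium(state);
            sw.Stop();
            System.Diagnostics.Debug.WriteLine(
                "SaveState took " + sw.ElapsedMilliseconds + " ms");
        }
    }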
The role of SaveStateComplete is to save control ViewState/ControlState before the page is rendered. So unless you have disabled ViewState (EnableViewState="false" at the page level) for your page/controls, that work is always done. Moreover, disabling ViewState doesn't disable ControlState.
Do you have many controls on your page?
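If you want to see how much of the cost is ViewState versus ControlState, a quick experiment is to switch ViewState off and re-measure. A minimal code-behind sketch (page class name is an example; assumes AutoEventWireup so Page_Init is wired up automatically):

    using System;
    using System.Web.UI;

    public partial class MyPage : Page
    {
        protected void Page_Init(object sender, EventArgs e)
        {
            // Same effect as EnableViewState="false" in the @Page directive.
            // ControlState (e.g. a GridView's paging state) still round-trips.
            Page.EnableViewState = false;
        }
    }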
Our company releases updates of our rich client application (written mainly with ASP.NET, WCF services, and ASP.NET AJAX) on our client's Windows Server 2008 IIS 7 web server. Once in a while, we have large releases of updates, and sometimes there are bugs that users catch right after the release that were not caught during automation testing or stage testing.

Is there a way to smoothly deploy ASP.NET code on IIS 7 while users are still on, without disrupting workflows that only touch code that was not affected? I've found that if I just copy the code from stage (without the web.config) manually and paste it into the production web root folder, nobody is really kicked off. But I'm wondering if there are any side effects of this strategy for users diligently working in the application. Do any other connections get interrupted, and how are they handled in this situation (i.e., SQL connections, WCF service calls, whether users keep the same session and whether that has any impact, etc.)?

If I chose this method, I'd have something in the web.config that would display a message to every user (in the master page), like a banner that says "Please log off, and clear your cache", so they would see updates to the issues addressed. But this would only be relevant to the impacted users.
If someone doesn't think this is a good strategy for minor updates and has a better one, like changing a web.config setting that forces users to a different server while the deployment takes place, or some other methodology, my ears are listening. Obviously the latter sounds safer, but I just don't know how it could be done. I've read about load-balanced servers, but I thought that kind of setup served a different purpose, like failover when a server goes down, doesn't it? Or would it be the best solution here, taking down one site at a time? I'm open to any ideas.
I used to stress about minimal-impact releases too, but now we just take the site down. The reality is twofold:
You cannot guarantee that what someone is working on right now is NOT something you're about to update. Consider this: a user is working on x.aspx and is in the middle of a postback. You drop a new x.aspx.
With enough notice, maintenance windows are a way of life. Users should expect that, from time to time, you need exclusive access to the application to make updates, etc.
It's just too hard to keep all the plates in the air when you really don't know what someone might be working on while you deploy. Especially if database updates are in the mix!
If there is a load balancer in the mix: you would remove the server(s) with the old code and add the server(s) with the new code. This lets the traffic die out on the old-code servers without kicking people off. The new server(s) pick up the traffic.
We do this with a new release, one server at a time, until we've replaced all of the code in our server farm. It gives the application time to bake in the real world. If issues come up, and they have, you only have to revert a single server. Using the load balancer makes it easier.
It is (usually) a seamless transition. Of course you need to make sure your app can handle any database changes, etc.
Recently our customers started to complain about poor performance on one of our servers.
The server hosts multiple large CMS implementations and a lot of small websites using Sitefinity.
Our hosting team is now trying to find the bottlenecks in our environments, since there are some major issues with load times. I've been given the task of compiling one big list of things to look out for, divided into parts (IIS, ASP.NET, web-specific).
I think it would be good to find out how many instances of the Sitecore CMS we can run on one server according to the Sitecore documentation, and so on. We want to be able to monitor and find out where our bottleneck is at this point. Some of our websites load terribly slowly, while others load very fast. Most of the Sitecore implementations running on this server have poor back-end performance and terrible load times after a compilation.
Our Sitecore solutions run on a 64-bit Windows Server 2008 machine with Microsoft SQL Server 2008 for the databases.
I understand that it might be handy to provide more detailed information about our setup, but I'm hoping we can get some useful basic information on how to monitor and find bottlenecks, and the like.
What tools / hints / tips & tricks do you have?
Do NOT use too many different ASP.NET application pools (what Plesk calls a "dedicated pool"). Place more sites in the same pool.
Add more memory, or stop unused programs/services on the server.
Check whether you have memory limits on the application pool that cause it to recycle continuously (see the config sketch after this list).
On the database, set the recovery model to Simple.
Shrink the database files, and reindex the database, from inside SQL Server.
After all that, defragment your disks.
Check the memory with Process Explorer.
To check what starts with your server, use Autoruns, but be careful not to stop any critical service or the computer may never start again. Do not stop services from Autoruns; use the Services manager to change their startup type to Manual instead. Also, many SQL Server services don't need to run if you've never used them.
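Regarding the memory limits mentioned above: in IIS 7 they live on the application pool's recycling settings in applicationHost.config. A rough sketch (the pool name is an example; both values are in KB, and 0 disables memory-based recycling):

    <applicationPools>
      <add name="MyPool">
        <recycling>
          <!-- memory = virtual memory limit, privateMemory = private bytes
               limit, both in KB; values set too low cause constant recycling -->
          <periodicRestart memory="0" privateMemory="0" />
        </recycling>
      </add>
    </applicationPools>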
Some other tips:
Move the temporary files (and maybe the ASP.NET build directory) to a different disk; see the web.config sketch after these tips.
Delete all files from the temporary directory (cd %temp%).
Make sure the free physical memory is not zero, using Process Explorer. If it's near zero, your server needs more memory, or you need to stop unused programs from running.
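As for moving the ASP.NET build output: the compilation element in web.config has a tempDirectory attribute for exactly this. A sketch with an example path:

    <system.web>
      <!-- Redirects the "Temporary ASP.NET Files" build output to another
           disk; the worker process identity needs write access to the path. -->
      <compilation tempDirectory="D:\AspNetTemp\" />
    </system.web>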
To place many sites in the same pool, you need to change the permissions of the sites under the new shared pool. It's not difficult; it just takes some time and organization to know which site runs under which pool. Say you have 10 sites: it's better to use 2 different pools and spread the sites across them based on the load of each site.
There is no immediate answer to Sitecore performance tuning, but here are some vital tips:
1) CACHING
Caching is everything. The default Sitecore cache parameters are rarely correct for any application. If you have lots of memory, you should increase the cache sizes:
http://learnsitecore.cmsuniverse.net/en/Developers/Articles/2009/07/CachingOverview.aspx
http://sitecorebasics.wordpress.com/2011/03/05/sitecore-caching/
http://blog.wojciech.org/?p=9
Unfortunately this is something the developer should be aware of when deploying an installation, not something the system admin should care about...
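As an illustration of the kind of settings involved (the values here are made up; check the links above for what's right for your version and memory budget), the HTML cache is enabled and sized per site definition in Sitecore's web.config:

    <sites>
      <!-- Sketch only: real site nodes carry many more attributes.
           cacheHtml turns the rendering cache on; htmlCacheSize caps it. -->
      <site name="website" cacheHtml="true" htmlCacheSize="50MB" />
    </sites>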
2) DATABASE
The database is the last bottleneck to check; I rarely touch it. However, DB performance can be improved with the proper settings:
Database properties that improve performance:
http://www.theclientview.net/?p=162
This article on index fragmentation is very helpful:
http://www.theclientview.net/?p=40
I can't speak for Sitefinity, but here are some tips for Sitecore.
Use Sitecore's caching whenever possible, especially on XSLTs (they tend to be simpler than layouts & sublayouts, so Sitecore caching doesn't break them the way it breaks ASP.NET postbacks). Of course, this only helps if the renderings & sublayouts are accessed a lot. Use /sitecore/admin/stats.aspx?site=website to check what isn't cached.
Use Sitecore's profiler: open up an item in the profiler and see which sublayouts etc. are taking time.
Only use XSLTs for the simplest content; if it gets any more complicated than that, I'd go for sublayouts (ASP.NET controls). This is a bit biased, as I'm not fond of XSLT, but experience indicates that .ascx files are faster.
Use IIS's content expiration on static files (probably all of /sitecore, plus any images, JavaScript & CSS files you have). This is for IIS 6: msdn link
Check database access times with Sitecore's Databasetest.aspx (the one for Sitecore 6 is a lot better than the simple one that works on Sitecore 5 & 6): Sitecore SDN link
And that's what I can think of from the top of my head.
Sitecore has a major flaw: it uses GUIDs for primary keys (amongst other poorly chosen data types). This fragments the tables from the first insert, and if you have a heavily utilised Sitecore database, the fragmentation can exceed 90% within an hour. This is not a well-designed database, and I recommend looking at other products until they fix this; it is causing us a major performance headache (time and money).
We are at a standstill: we cannot add any more RAM, and we cannot rebuild the indexes any more often.
Also, set IIS to recycle the app pool ONLY once a day, at a specific time. I usually set mine for 3 AM. This way the application never goes to sleep or gets recycled mid-day; it's the best way to reduce spin-up times.
Additionally, configure the application pool to be 'always running' instead of starting on demand. This way, when the application restarts, it recompiles immediately and, again, is ready to roar.
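Both tips translate to application pool settings. A hedged applicationHost.config sketch (the pool name is an example; startMode="AlwaysRunning" needs IIS 8+ or the Application Initialization module on IIS 7.5):

    <applicationPools>
      <add name="MySitePool" startMode="AlwaysRunning">
        <!-- never shut down due to inactivity -->
        <processModel idleTimeout="00:00:00" />
        <recycling>
          <!-- time="00:00:00" turns off the rolling 29-hour default recycle;
               the schedule below recycles once a day at 3 AM instead -->
          <periodicRestart time="00:00:00">
            <schedule>
              <add value="03:00:00" />
            </schedule>
          </periodicRestart>
        </recycling>
      </add>
    </applicationPools>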
Sitefinity is really a fantastic piece of software (hopefully my tips above get the thumbs up, and not my endorsement of the product). haha
Ok, this is one of those really weird errors that seems like the machine's just messing with you.
We have two ASP.NET websites; both were 2.0, and we upgraded them both to 4.0.
They're the exact same codebase, but the web.config files are different, they point at different databases, and they run as separate web apps in IIS.
After the upgrade, one works and one doesn't.
The one that doesn't work throws a bunch of JavaScript errors around the Microsoft AJAX Control Toolkit, like 'Sys is not defined', 'Type is not defined', and '__nonMSDOMBrowser is not defined' (in Firebug). When I use the Script panel in Firebug it lists all the different '...ScriptResource.axd?d=IOBqtxq...' scripts, but when I try to view them, many of them return 'Failed to load source for: /ScriptResource.axd?d=IOBqtxq7p...'.
A couple of them do come back with the CodePlex copyright and some JavaScript, but many of them don't. And the truly weird thing? If we recycle the app pool for the broken site, we don't get those errors the first time we hit the site. The postback works, we log in, etc. Then we go back and hit it again, and the JavaScript errors are back and postbacks stop working.
Any ideas?
Ok, I hate answering my own questions, but since no one else is weighing in, this is the best we've come up with.
There's a setting in IIS on the application pool, Maximum Worker Processes, which turns the site into a 'web garden' when it's greater than 1, presumably for load-balancing across worker processes. We had this new site set to 7, which is how it was under .NET 2.0. Apparently 2.0 is more forgiving (or ignores it), but 4.0 freaks out. A single hit on the site requests lots of different resources, which end up being handled by different worker processes, which as you can imagine makes for chaos. And it's different every time, depending on which processes handle what.
So, unless anyone else has an explanation for this, I'll close this.
Running many applications out of the same app pool can cause really strange AJAX behaviors. Oftentimes you'll see this with apps sharing the DefaultAppPool.
Try creating a separate app pool for the application.
I've recently deployed an ASP.NET application to my shiny new VPS, and while I'm happy with the general performance increase that a VPS gives over a shared hosting solution, I'm unhappy with the startup time of my application.
My web application takes a fair amount of time to start up when my client first hits it. I'm not running it in debug mode (disabled that in my web.config), and it doesn't have any real work to do on startup - I have no code in my application start event handler, I don't start any extra threads, nothing. The first time my client hits my application it takes a good 15-20 seconds to respond. Subsequent calls take 1-2 seconds, unless I wait a few minutes for my application to shut down. Then it's back to a 15-20 second startup time.
(I'm aware that my timing benchmark is very unscientific, those numbers should just give a feel for the performance on startup of my app).
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
So, after that book-sized preface, here are my questions:
Is my understanding of ASP.NET's compilation incorrect? How does it actually work?
Is there a way I can force IIS to cache my binaries, or keep my application alive indefinitely?
If it's a bad idea to do either of the things in my previous question, why is it a bad idea, and what can I do instead to increase startup performance?
Thanks!
Edit: it appears my question is a slight duplicate of this question (I thought I did a better job of searching for an answer to this on here, haha). I think, however, that my question is more comprehensive, and I'd appreciate if it wasn't closed as a duplicate unless there are better, already-asked questions on here that address this.
IIS also shuts down your web app after a given period of inactivity, depending on its configuration. I'm not as familiar with IIS 7 and where to configure this, so you might want to do a little research on how to change it (starting point?).
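If it helps, the relevant IIS 7 setting appears to be the application pool's idle time-out, which defaults to 20 minutes; that would line up with the app shutting down after a few idle minutes. An applicationHost.config sketch (pool name is an example):

    <applicationPools>
      <add name="MyAppPool">
        <!-- default is 20 minutes; 00:00:00 disables idle shutdown -->
        <processModel idleTimeout="00:00:00" />
      </add>
    </applicationPools>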
Is it bad? Depends on how good your code is. If you're not leaking memory or resources, probably not.
The other solution is to precompile your website. This might be the better option for you. You'll have to check it out and see, however, as it may come with downsides depending on how you interact with your website.
My understanding of ASP.NET was that IIS (7.0, in this case), compiles a web application the first time it is ever run, and then caches those binaries until such a time as the web application is changed. Is my understanding incorrect?
That is correct. Specifically, the assemblies are built as shadow copies (not to be confused with the volume snapshot service / shadow copy feature). This enables you to replace the code in the folder on the fly without affecting existing running sessions. ASP.NET will detect the change, and compile new versions into the target directory (typically Temporary ASP.NET Files). More on that process: Understanding ASP.NET Dynamic Compilation
If it's purely compilation time, then often the most efficient approach is to hit the website yourself after the recycle. Make a call at regular intervals to ensure that it is you who receives the 15-second delay, not your client.
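A minimal keep-alive sketch along those lines (the URL and interval are examples; run it as a console app or scheduled task, and keep the interval below the app pool's idle timeout):

    using System;
    using System.Net;
    using System.Threading;

    class KeepAlive
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                while (true)
                {
                    try
                    {
                        // You absorb the warm-up delay, not your client.
                        client.DownloadString("http://www.example.com/");
                    }
                    catch (WebException ex)
                    {
                        Console.WriteLine("Ping failed: " + ex.Message);
                    }
                    Thread.Sleep(TimeSpan.FromMinutes(5));
                }
            }
        }
    }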
I would be surprised if that is all compilation time, however (depending on hardware). Do you have a lot of static instances of classes? Do they do a lot of work on start-up?
With either tracing or profiling you could probably quite quickly work out where the start-up time is being spent.
As to why keeping a process around is a bad idea, I believe it's due to clean-up. No matter how well we look after our data or how well behaved the GC is, restarting the process gives a thorough clean-up. Things like fragmentation go away, and any other resource issues that build up over time are cleared down. Therefore it is quite a bad idea to keep a server process running indefinitely.
I'm new to ASP.NET, but not to C#, .NET, or web development.
In PHP it was nice to be able to go right to the browser and refresh whenever I made a change. Using ASP.NET with VS08, however, seems a bit awkward.
Should I launch a development server and keep it open, refreshing the browser when I make a change or should I close the development server between editing code?
Sorry if this sounds silly but I'm just not sure what the "accepted" practice is here.
When developing with Cassini, I always just let it run, as it saves the startup time whenever I want to debug. It doesn't hurt to do so; it always reflects the currently built DLLs and works pretty well.
You have two options when going from Visual Studio to the browser. The first is to debug the application by pressing F5 and running it. The second is to choose the "View In Browser" command (or use the keyboard shortcut Ctrl+Shift+W). The latter doesn't attach the debugger to the web server, which makes it much faster than debugging, but it doesn't support things like code-behind breakpoints or breaking into code on unhandled ASP.NET exceptions.
For fastest development, minimize the number of times that your web server is started and how many times you attach the debugger to the web server.
I launch my dev server and then leave it running all the time. Instead of closing down the dev server, I detach the debugger, and when I need to step through code I attach it again. This gives me the best of both worlds: saves me the web server startup time AND runs pages fast most of the time.
It depends on what stage of development I'm in.
If I am adding lots of files (something you can't do in debug mode), changing lots of code in App_Code (which resets the app), or adding lots of components from the toolbox, then I tend to leave debugging off.
Once I start actively debugging a site, I leave it on as much as possible.
I use IIS even for development, so there is no "launch" of the web server at all. That being said, I typically alt-tab between the code I'm working on and the browser window while fine-tuning HTML, AJAX calls, or the code-behind. Bigger changes require a restart, and you will receive a message (during an exception) that the code is no longer in sync with the website when you must restart.
The bottom line: you can debug an ASP.NET application while it is running and, as long as you can get away with it, it is an effective way of tuning your software.