How to prevent useless disk access for Web Api endpoints? - asp.net

I'm trying to optimize our .NET 4.0 Web API server (Classic, 64-bit), and by tracing disk access I found out that whenever I hit a Web API URL (/collections/products/4452), IIS performs several disk accesses. Using procmon.exe I can see it checking for the existence of all of these files and directories:
/collections
/collections/web.config
/collections/products/
/collections/products/web.config
The IIS server serves content from a UNC path (because it's a web farm), so all those useless disk accesses add up. Tracing shows they account for about 8ms of latency.
I'm trying to figure out how to prevent those file accesses.
So far I have tried:
Remove all handlers except the ExtensionlessUrlHandler-ISAPI-4.0_64bit
Remove all handlers from both <system.web><httpHandlers> and <system.webServer><handlers> and write a dummy handler that returns 'Hello World'
In both cases the file checks still occur (verified with procmon.exe). I'm about to try writing a low-level C++ module, but before going that way I wanted to ask for advice here.
I suspect that before the request reaches my handler some magic happens in aspnet_isapi.dll, but since I can't access that code, I'm hoping there is some configuration I could try before moving forward.
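For reference, the dummy handler from my second attempt looked roughly like this (a minimal sketch; the class name is illustrative):
using System.Web;

// Minimal handler that just returns "Hello World".
public class HelloHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello World");
    }
}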

Related

Application Insights - Getting only client side data, no server data.

I have an ASP.Net MVC 4 application hosted on Windows Server 2008. I'm using Microsoft Application Insights, and it's working perfectly for client side metrics such as Client Processing Time, Custom Events, Users, Sessions, Page Views, etc. However, I cannot get any server-side metrics such as Processor Time or Available Memory. The areas are all covered by a banner that says something to the effect of "Learn how to collect server request data". When I click on the banner, it shows a blade with instructions, all of which I've already completed (the quick start).
In addition to installing the Application Insights SDK through VS 2013 (0.12.0-build17386), I've also installed and configured the Application Insights Status Monitor on the server. I've restarted IIS, and even restarted the server. Despite all this, I cannot get any server metrics. I've read the troubleshooting guide, and I've checked everything mentioned therein such as making sure the app pool identity is part of the "Performance Monitor Users" group.
I feel as though there is something I have to do to the ApplicationInsights.config file in order to either turn on and / or define the server metrics I want, but I simply cannot find any documentation on this.
Any help or suggestions would be greatly appreciated. Thanks!
No, you shouldn't need to do anything additional with ApplicationInsights.config. Performance counters are part of the default monitoring package, and almost all problems come from the user not being a member of the 'Performance Monitor Users' group, but that's not your case.
To be sure that the config is correct, you can check that the following module is defined in ApplicationInsights.config:
<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCollector.PerformanceCollectorModule, Microsoft.ApplicationInsights.Extensibility.PerfCollector"/>
Also, do you see any notifications in the Status Monitor and/or traces/exceptions in Diagnostic Search on the Application Insights resource overview blade?
Ok, we've got it. There was an ApplicationInsights.config in the root folder of the application, and that was the only one I've ever looked at. At Yulia Safarova's suggestion, I discovered another one inside the bin folder. This one did NOT have the module definition specified. (It was basically empty). I copied all the contents of the one from the root into the one in the bin folder, and all the data started to flow.
If you are looking for server data like CPU, memory, and response rate to be displayed in Azure Application Insights, then along with adding the module above, also make sure that the web application's identity user is part of the Administrators group on the server, and that the flag below is turned on in web.config:
"EnableAppInsightUsageCollection" value="true"

Run Batch File in Asp.net C#

I am using ASP.NET C# 4.0.
I have a batch file which opens a text file.
Batch file:
ECHO OFF
start D:\accounts\request\08__processing\0377e792-4ca9-4550-b78c-de2bdf26611f.txt
ASP.NET code:
System.Diagnostics.Process.Start("D:\\batchFile.bat");
When I double-click the batch file it opens the text file, but when I run the above code it doesn't open Notepad.
It doesn't show any exception either.
Please help.
Thanks
This is the wrong way to read a text file from ASP.NET; you need to use System.IO and put the files you need to read somewhere ASP.NET can get at them, e.g. App_Data. That said, here is how to do it the wrong way: IIS runs in an invisible virtual window because it's a service. There is a way to get some services to display their UI (I forget how). In any case, you'd have to RDP onto the server to see such a window after starting the service and goosing it into displaying a UI (and this trick might not even work for IIS).
Next, as commenters noted, on IIS you'll have a set of credentials different from your own (exactly which depends on the version of IIS); in any case, it will have restricted access and be somewhat sandboxed. If you set up impersonation and Windows authentication in the web.config, sometimes you can get your request to run with your credentials.
Next, if the web host runs in medium trust, you might not be able to launch arbitrary apps from the asp.net appdomain.
Finally, the only way this could ever work is if your app is always running locally with Cassini (the Visual Studio development server), but in that case you could simplify things a lot by using a console app instead of ASP.NET, unless you really need the HTML templating, say, for output.
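Coming back to the first point, reading the file the right way with System.IO looks roughly like this (a minimal sketch; the file name under App_Data is illustrative):
using System.IO;
using System.Web;

// Read a text file the worker process can reach instead of launching a batch file.
string path = HttpContext.Current.Server.MapPath("~/App_Data/request.txt");
string contents = File.ReadAllText(path);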

ASP.NET File Monitoring errors in Event Log

We are getting frequent errors in the Event Viewer, Application section. The source is ASP.NET 4.0.30319.0, category is File Monitoring. The Event ID is 1185. Text says "Failed to start monitoring changes to "file-path-here" because the network BIOS command limit has been reached." Then there is a reference to Microsoft knowledge base article 810886.
The question is: what process or service is doing this file monitoring, and why? We are not aware of how this is running or how it started. The monitoring seems to look at various folders on our web site, some are .NET folders, some are not.
We are looking for explanation of what is causing this monitoring; then we will try to address the errors.
When ASP.NET starts running a site, it monitors one basic file in the root of the web site: app_offline.htm. If it finds it, it stops the application and shows only this file.
If it finds that other files have changed, it recompiles them if necessary, but it still shows app_offline.htm if that file exists and does not run the site.
Once you remove app_offline.htm the web pages start running again, but ASP.NET keeps monitoring for this file, whether it exists or not.
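ASP.NET does this internally; purely to illustrate the idea (this is not ASP.NET's actual implementation), similar monitoring with FileSystemWatcher would look roughly like this:
using System.IO;

// Illustration only: watch the site root for app_offline.htm appearing or disappearing.
var watcher = new FileSystemWatcher(@"C:\inetpub\wwwroot\MySite", "app_offline.htm");
watcher.Created += (s, e) => { /* take the application offline */ };
watcher.Deleted += (s, e) => { /* let requests run again */ };
watcher.EnableRaisingEvents = true;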
So this is the ASP.NET monitoring you are looking for; it is the default behaviour of ASP.NET. If you have installed other software, or something else has been on the computer and filled it with monitoring, that is something different. I assume you have too many ASP.NET web sites on the same server? 500 or more? If not, then maybe you should start looking for other software that is monitoring your files.
Analysis
To find out for yourself what is happening, download Handle from Sysinternals and run it, redirect the output to a text file (handle.exe >> result.txt), and look through the results.
http://technet.microsoft.com/en-us/sysinternals/bb896655
Look for any suspicious program that has a huge number of files open, and see what program it is. Monitored files and directories are shown like this:
runningprogram.exe pid: 1352 ServerName\User
AC: File (RW-) D:\Monitor1
E8: File (RW-) D:\Monitor2
F8: File (RW-) D:\Monitor3
408: File (RWD) D:\InetPub\MySite
More
I checked on my servers and found that a blog creation program had added monitoring on every blog directory. I do not know why, but that is the way they made it, monitoring every blog for some reason; maybe you have something similar that creates a lot of file/directory monitoring.
The monitoring is being done by IIS (or the aspnet process with IIS6). It's watching for changes to files so that the site can be recompiled when needed.
You didn't mention your environment, but I used to run into this problem frequently when trying to run websites from Windows XP when the sites were located on a remote file share. I think the error comes up due to a limitation in CIFS (the network stack for file shares). Windows Server didn't seem to have the same limitations.
So, a few possible fixes:
Switch to Windows Server (or possibly Win 7)
Switch to a Web Application (doesn't allow recompiles)
Move your files from a remote share to a local drive

IIS7 Profiling

Is there a way to profile IIS7? (freeware?)
Number of connections
Bandwidth usage
Errors (Event Viewer?)
...
Thanks, Lieven Cardoen
P.S.: Something similar to SQL Server's Profiler.
There's nothing quite like SQL Server's Profiler, but there is a set of tools:
Perfmon will show you the number of current connections per website: run perfmon.msc, choose the Web Service object and the Current Connections counter, select the website, and click Add. Don't like the interactive nature of Perfmon? No problem: use logman.exe, a nice CLI for Perfmon.
Bandwidth usage you can get from your log files if you enable Bytes Sent and Bytes Received in your IIS logging settings. This is also available via performance counters (Web Service: Bytes Sent/sec and Bytes Received/sec). I think the two complement each other fairly well.
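Those counters can also be read from code (a sketch assuming the standard "Web Service" counter category; the site instance name is illustrative):
using System;
using System.Diagnostics;
using System.Threading;

// Read IIS "Web Service" counters programmatically; rate counters need two samples.
var connections = new PerformanceCounter("Web Service", "Current Connections", "Default Web Site");
var bytesSent = new PerformanceCounter("Web Service", "Bytes Sent/sec", "Default Web Site");
connections.NextValue();
bytesSent.NextValue();
Thread.Sleep(1000);
Console.WriteLine("Connections: {0}, Bytes sent/sec: {1}", connections.NextValue(), bytesSent.NextValue());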
IIS7 has a new feature called Failed Request Tracing. You can tell it to log all 500s, or any .aspx page that takes 15 seconds to run, or based on event severity. It saves all of this information in an XML file for you under \inetpub, so it's easily parseable, and it also gives you a nice XSLT to view it in your browser and drill down if you like.
http://learn.iis.net/page.aspx/266/troubleshooting-failed-requests-using-tracing-in-iis7/
Try out the Administration Pack for IIS 7.0. It has:
Configuration Editor:
The configuration editor module will help you manage your configuration files. This tool is available for server administrators only. It allows you to edit any section, attribute, element or collection in your configuration file. In addition to editing these values you are also able to lock and unlock them. The configuration editor also allows you to generate scripts based on the actions you take as well as search the file to see where values are being used.
IIS Reports:
The IIS Reports module enables you to view key statistics about your website. You can also generate your own module reports to gather information relevant to you and your business. Currently you can view the output of these reports as charts and/or tables.
Database Manager:
This module is no longer part of the Administration Pack and instead is offered as a separate download in the IIS Download Center.
UI Extensions:
UI Extension modules allow you to manage existing features through IIS Manager.
The FastCGI module allows you to manage your FastCGI settings.
The two ASP.NET modules allow you to manage your authorization and custom errors settings.
Finally, HTTP Request Filtering allows you to set up rules for HTTP request filtering.

How to deploy an ASP.NET Application with zero downtime

To deploy a new version of our website we do the following:
Zip up the new code, and upload it to the server.
On the live server, delete all the live code from the IIS website directory.
Extract the new code zipfile into the now empty IIS directory.
This process is all scripted, and happens quite quickly, but there can still be a 10-20 second downtime when the old files are being deleted, and the new files being deployed.
Any suggestions on a 0 second downtime method?
You need 2 servers and a load balancer. Here it is in steps:
Direct all traffic to Server 2
Deploy on Server 1
Test Server 1
Direct all traffic to Server 1
Deploy on Server 2
Test Server 2
Direct traffic to both servers
Thing is, even in this case you will still have application restarts and loss of sessions if you are using "sticky sessions". If you have database sessions or a state server, then everything should be fine.
The Microsoft Web Deployment Tool supports this to some degree:
Enables Windows Transactional File System (TxF) support. When TxF support is enabled, file operations are atomic; that is, they either succeed or fail completely. This ensures data integrity and prevents data or files from existing in a "half-way" or corrupted state. In MS Deploy, TxF is disabled by default.
It seems the transaction is for the entire sync. Also, TxF is a feature of Windows Server 2008, so this transaction feature will not work with earlier versions.
I believe it's possible to modify your script for 0-downtime using folders as versions and the IIS metabase:
For an existing path/URL:
path: \web\app\v2.0\
url: http://app
Copy the new (or modified) website to the server under \web\app\v2.1\
Modify the IIS metabase to change the website path from \web\app\v2.0\ to \web\app\v2.1\
This method offers the following benefits:
In the event the new version has a problem, you can easily roll back to v2.0
To deploy to multiple physical or virtual servers, you could use your script for file deployment. Once all servers have the new version, you can simultaneously change all servers' metabases using the Microsoft Web Deployment Tool.
You can achieve zero downtime deployment on a single server by utilizing Application Request Routing in IIS as a software load balancer between two local IIS sites on different ports. This is known as a blue-green deployment strategy, where only one of the two sites is available in the load balancer at any given time. Deploy to the site that is "down", warm it up, and bring it into the load balancer (usually by passing an Application Request Routing health check), then take the original site that was up out of the "pool" (again by making its health check fail).
A full tutorial can be found here.
I went through this recently and the solution I came up with was to have two sites set up in IIS and to switch between them.
For my configuration, I had a web directory for each A and B site like this:
c:\Intranet\Live A\Interface
c:\Intranet\Live B\Interface
In IIS, I have two identical sites (same ports, authentication, etc.), each with its own application pool. One of the sites is running (A) and the other is stopped (B). The live one also has the live host header.
When it comes time to deploy to live, I simply publish to the STOPPED site's location. Because I can access the B site using its port, I can pre-warm the site so the first user doesn't cause an application start. Then, using a batch file, I copy the live host header to B, stop A and start B.
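The batch-file step could also be scripted against IIS directly; a rough sketch with Microsoft.Web.Administration (the site names and host header are illustrative):
using Microsoft.Web.Administration;

// Give site B the live host header, then stop A and start B.
using (var serverManager = new ServerManager())
{
    var siteA = serverManager.Sites["Live A"];
    var siteB = serverManager.Sites["Live B"];

    siteB.Bindings.Add("*:80:www.example.com", "http");
    serverManager.CommitChanges();

    siteA.Stop();
    siteB.Start();
}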
OK so since everyone is downvoting the answer I wrote way back in 2008*...
I will tell you how we do it now in 2014. We no longer use Web Sites because we are using ASP.NET MVC now.
We certainly do not need a load balancer and two servers to do it. That's fine if you have three servers for every website you maintain, but it's total overkill for most websites.
Also, we don't rely on the latest wizard from Microsoft - too slow, and too much hidden magic, and too prone to changing its name.
Here's how we do it:
We have a post build step that copies generated DLLs into a 'bin-pub' folder.
We use Beyond Compare (which is excellent**) to verify and sync changed files (over FTP because that is widely supported) up to the production server
We have a secure URL on the website containing a button which copies everything in 'bin-pub' to 'bin' (taking a backup first to enable quick rollback). At this point the app restarts itself. Then our ORM checks if there are any tables or columns that need to be added and creates them.
That is only milliseconds of downtime. The app restart can take a second or two, but during the restart requests are buffered, so there is effectively zero downtime.
The whole deployment process takes anywhere from 5 seconds to 30 minutes, depending how many files are changed and how many changes to review.
This way you do not have to copy an entire website to a different directory but just the bin folder. You also have complete control over the process and know exactly what is changing.
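A rough sketch of the "copy bin-pub to bin" button handler described above (paths and backup naming are illustrative):
using System;
using System.IO;
using System.Web;

// Back up the current bin, then copy the new DLLs over it (this restarts the app).
string binPub = HttpContext.Current.Server.MapPath("~/bin-pub");
string bin = HttpContext.Current.Server.MapPath("~/bin");
string backup = bin + "-backup-" + DateTime.Now.ToString("yyyyMMddHHmmss");

Directory.CreateDirectory(backup);
foreach (var file in Directory.GetFiles(bin, "*.dll"))
    File.Copy(file, Path.Combine(backup, Path.GetFileName(file)), true);

foreach (var file in Directory.GetFiles(binPub, "*.dll"))
    File.Copy(file, Path.Combine(bin, Path.GetFileName(file)), true);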
**We always do a quick eyeball of the changes we are deploying as a last-minute double check, so we know what to test and, if anything breaks, we're ready. We use Beyond Compare because it lets you easily diff files over FTP. I would never do this without BC; otherwise you have no idea what you are overwriting.
*Scroll to the bottom to see it :( BTW I would no longer recommend Web Sites because they are slower to build and can crash badly with half compiled temp files. We used them in the past because they allowed more agile file-by-file deployment. Very quick to fix a minor issue and you can see exactly what you are deploying (if using Beyond Compare of course - otherwise forget it).
Using Microsoft.Web.Administration's ServerManager class you can develop your own deployment agent.
The trick is to change the PhysicalPath of the VirtualDirectory, which results in an online atomic switch between old and new web apps.
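A minimal sketch of that switch (the site name and paths are illustrative):
using Microsoft.Web.Administration;

// Point the root virtual directory of the site at the new version's folder.
using (var serverManager = new ServerManager())
{
    var app = serverManager.Sites["MySite"].Applications["/"];
    app.VirtualDirectories["/"].PhysicalPath = @"D:\web\app\v2.1";
    serverManager.CommitChanges();
}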
Be aware that this can result in old and new AppDomains executing in parallel!
The problem is how to synchronize changes to databases etc.
By polling for the existence of AppDomains with old or new PhysicalPaths it is possible to detect when the old AppDomain(s) have terminated, and if the new AppDomain(s) have started up.
To force an AppDomain to start you must make an HTTP request (IIS 7.5 supports an Autostart feature).
Now you need a way to block requests for the new AppDomain.
I use a named mutex - which is created and owned by the deployment agent, waited on by the Application_Start of the new web app, and then released by the deployment agent once the database updates have been made.
(I use a marker file in the web app to enable the mutex wait behaviour)
Once the new web app is running I delete the marker file.
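A rough sketch of the mutex handshake (the mutex name and marker file are illustrative):
using System.IO;
using System.Threading;
using System.Web;

public static class DeploymentGate
{
    // Deployment agent side: own the mutex while database updates run, then release it.
    public static void RunDeployment()
    {
        bool createdNew;
        using (var gate = new Mutex(true, @"Global\DeployGate", out createdNew))
        {
            // ... switch the PhysicalPath, run the database updates ...
            gate.ReleaseMutex();
        }
    }

    // Web app side, called from Application_Start: block until the agent releases the gate.
    public static void WaitForGate(HttpServerUtility server)
    {
        if (!File.Exists(server.MapPath("~/deploy.marker")))   // marker file enables the wait
            return;
        using (var gate = Mutex.OpenExisting(@"Global\DeployGate"))
        {
            gate.WaitOne();
            gate.ReleaseMutex();
        }
    }
}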
The only zero downtime methods I can think of involve hosting on at least 2 servers.
I would refine George's answer a bit, as follows, for a single server:
Use a Web Deployment Project to pre-compile the site into a single DLL
Zip up the new site, and upload it to the server
Unzip it to a new folder located in a folder with the right permissions for the site, so the unzipped files inherit the permissions correctly (perhaps e:\web, with subfolders v20090901, v20090916, etc)
Use IIS Manager to change the name of the folder containing the site
Keep the old folder around for a while, so you can fall back to it in the event of problems
Step 4 will cause the IIS worker process to recycle.
This is only zero downtime if you're not using InProc sessions; use SQL mode instead if you can (even better, avoid session state entirely).
Of course, it's a little more involved when there are multiple servers and/or database changes....
To expand on sklivvz's answer, which relied on having some kind of load balancer (or just a standby copy on the same server):
Direct all traffic to Site/Server 2
Optionally wait a bit, to ensure that as few users as possible have pending workflows on the deployed version
Deploy to Site/Server 1 and warm it up as much as possible
Execute database migrations transactionally (strive to make this possible)
Immediately direct all traffic to Site/Server 1
Deploy to Site/Server 2
Direct traffic to both sites/servers
It is possible to introduce a bit of smoke testing, by creating a database snapshot/copy, but that's not always feasible.
If possible and needed, use "routing differences", such as different tenant URLs (customerX.myapp.net) or different users, to deploy to an unknowing group of guinea pigs first. If nothing fails, release to everyone.
Since database migrations are involved, rolling back to a previous version is often impossible.
There are ways to make applications play nicer in these scenarios, such as using event queues and playback mechanisms, but since we're talking about deploying changes to something that is in use, there's really no foolproof way.
This is how I do it:
Absolute minimum system requirements:
1 server with
1 load balancer/reverse proxy (e.g. nginx) running on port 80
2 ASP.NET-Core/mono reverse-proxy/fastcgi chroot-jails or docker-containers listening on 2 different TCP ports
(or even just two reverse-proxy applications on 2 different TCP ports without any sandbox)
Workflow:
start transaction myupdate
try
Web-Service: Tell all applications on all web-servers to go into primary read-only mode
Application switch to primary read-only mode, and responds
Web sockets begin notifying all clients
Wait for all applications to respond
wait (custom short interval)
Web-Service: Tell all applications on all web-servers to go into secondary read-only mode
Application switch to secondary read-only mode (data-entry fuse)
Updatedb - secondary read-only mode (switches database to read-only)
Web-Service: Create backup of database
Web-Service: Restore backup to new database
Web-Service: Update new database with new schema
Deploy new application to apt-repository
(for windows, you will have to write your own custom deployment web-service)
ssh into every machine in array_of_new_webapps
run apt-get update
then either
apt-get dist-upgrade
OR
apt-get install <packagename>
OR
apt-get install --only-upgrade <packagename>
depending on what you need
-- This deploys the new application to all new chroots (or servers/VMs)
Test: Test new application under test.domain.xxx
-- everything that fails should throw an exception here
commit myupdate;
Web-Service: Tell all applications to send web-socket request to reload the pages to all clients at time x (+/- random number)
#client: notify of reload and that this causes loss of unsaved data, with option to abort
# time x: Switch load balancer from array_of_old_webapps to array_of_new_webapps
Decommission/recycle array_of_old_webapps, etc.
catch
rollback myupdate
switch to read-write mode
Web-Service: Tell all applications to send web-socket request to unblock read-only mode
end try
A workaround with no downtime that I use regularly is:
Rename the running .NET Core application DLL to filename.dll.backup
Upload the new .dll (the web application stays available and keeps serving requests while the file is being uploaded)
Once the upload is complete, recycle the application pool. This requires either RDP access to the server or a function to recycle the application pool in your hosting control panel.
IIS overlaps the app pool when recycling, so there usually isn't any downtime during a recycle. Requests still come in without ever knowing the app pool has been recycled, and they are served seamlessly with no downtime.
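If you have script access to the server, the recycle step can also be automated (a sketch using Microsoft.Web.Administration; the pool name is illustrative):
using Microsoft.Web.Administration;

// Recycle the application pool; IIS overlaps worker processes during the recycle.
using (var serverManager = new ServerManager())
{
    serverManager.ApplicationPools["MyAppPool"].Recycle();
}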
I am still searching for a better method than this! :)
IIS/Windows
After trying every possible solution we use this very simple technique:
IIS application points to a folder /app that is a symlink (!) to /app_green
We deploy the app to /app_blue
We change the symlink to point to /app_blue (the app keeps working)
We recycle the application pool
Zero downtime, but the app does choke for 3-5 seconds (JIT compilation and other initialization tasks)
Someone called it a "poor man's blue-green deployment" without a load balancer.
Nginx/linux
On nginx/linux we use "proper" blue-green deployment:
nginx reverse proxy points to localhost:3000
we deploy to localhost:3001
warmup the localhost:3001
switch the reverse proxy
shut down localhost:3000
(or use docker)
Both the Windows and Linux solutions can be easily automated with PowerShell/Bash scripts and invoked via GitHub Actions or a similar CI/CD engine.
I would suggest keeping the old files there and simply overwriting them. That way the downtime is limited to single-file overwrite times and there is only ever one file missing at a time.
Not sure this helps in a "web application" though (I think you are saying that's what you're using), which is why we always use "web sites". Also, with "web sites" deploying doesn't restart your site and drop all the user sessions.
