Slow AppFabric, high CPU and memory usage - ASP.NET

I added AppFabric 1.1 to my ASP.NET web application. I am using the read-through approach because I just need to read images from my SQL database and store them in the cache, so that I can retrieve the data as fast as possible.
Watching the admin shell, I can see that my application successfully reads from the cache, and writes to it when the cache is empty. However, AppFabric is not as fast as I expected: the version without AppFabric is faster than the one with it. On top of that, when I use AppFabric I see high CPU and memory usage.
What are the potential reasons for this? What do you suggest?
I'd appreciate your ideas.

Without more details it's hard to tell for sure, but I can try to help from my experience with AppFabric. Are we talking about high memory usage on the AppFabric server or on the client machine (not sure if you are using a web app or something else)?
AppFabric will be slower than in-process memory; also, AppFabric should not run on the same server as your application.
How are you creating the AppFabric DataCacheFactory? Are you creating one for every request? That is bad, as it is expensive to construct; it should be a static/singleton. I do something like:
public class AppFabricDistributedCacheManagerFactory
{
    private static DataCacheFactory _dataCacheFactory;

    public void Initialize()
    {
        // DataCacheFactory is expensive to create, so build it once and share it
        if (_dataCacheFactory == null)
        {
            _dataCacheFactory = new DataCacheFactory();
        }
    }

    // ... remaining members elided ...
}
Do you have local cache enabled in AppFabric? For images it seems appropriate.
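If it isn't enabled, here is a minimal sketch of turning local cache on when the factory is built; the host name, port, object count and timeout below are assumptions you would tune for your cluster:

using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheFactoryBuilder
{
    public static DataCacheFactory Build()
    {
        var config = new DataCacheFactoryConfiguration
        {
            // "CacheHost01" and port 22233 are placeholder values
            Servers = new[] { new DataCacheServerEndpoint("CacheHost01", 22233) },
            LocalCacheProperties = new DataCacheLocalCacheProperties(
                10000,                   // max objects kept in the local (in-process) cache
                TimeSpan.FromMinutes(5), // how long a local copy is considered fresh
                DataCacheLocalCacheInvalidationPolicy.TimeoutBased)
        };
        return new DataCacheFactory(config);
    }
}

With local cache on, repeated reads of the same image are served from the client process without a network hop to the cache host.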
Make sure your provider is not throwing exceptions and is only calling AppFabric when it really should. Put Fiddler on your dev box and watch the requests. In particular:
On the first call to AppFabric, are you using regions? Make sure you create them.
If you are creating regions, do you make sure each region exists before you save? Just in case, take a look at this code; before I did this I had a few issues:
public void SaveToProvider(string key, TimeSpan duration, string regionName, object toSave)
{
    try
    {
        Cache.Put(key, toSave, duration, regionName);
    }
    catch (DataCacheException cacheError)
    {
        // Look at the ErrorCode property to see if the region is missing
        if (cacheError.ErrorCode == DataCacheErrorCode.RegionDoesNotExist)
        {
            // Create the region and retry the Put call
            Cache.CreateRegion(regionName);
            Cache.Put(key, toSave, duration, regionName);
        }
    }
}
Watch the requests when you ask for an item that is not in the cache: you should see a call to AppFabric, then the image being loaded, then another call to AppFabric to save it.
Watch the requests when you know the item is already loaded: if you are using local cache you should see no AppFabric requests; one if you are not.

Related

Does an IIS recycle clean memory?

I have a web application deployed to IIS; it uses a static Dictionary which is filled from an external API frequently.
Sometimes I observe that the Dictionary is cleared once in a while, and I suspect it is because of an automatic IIS recycle.
Can anyone please confirm that this could be the reason?
So basically my question is: will an IIS recycle clean up the static memory that a web app is using? (Although I understand this only happens when there are no active connections to the server.)
Yes. By default IIS shuts down your app pool after 20 minutes of inactivity (and also recycles it on a schedule); a recycle replaces the worker process, so all in-process memory, including statics, is cleared.
You can see the Idle Time-out setting under your app pool -> Advanced Settings, but it is usually better not to change it.
Static state is generally "bad" in web apps; do not rely on it. Your option is caching. You can make a generic cache service that uses the default MVC cache and make it thread safe.
You can also use the [OutputCache] attribute on child actions and set a duration; within that interval the data will be served from cache (a short sketch follows this list).
Or you can implement your own caching logic.
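For the second option, a minimal sketch of a cached child action; the controller, action, and data-access names here are made up for illustration:

using System.Web.Mvc;

public class ArticlesController : Controller
{
    // The rendered output is cached; within Duration the action body is not executed again.
    [ChildActionOnly]
    [OutputCache(Duration = 600)] // Duration is in seconds, so this is 10 minutes
    public ActionResult NewestPosts()
    {
        var posts = LoadNewestPosts(); // hypothetical data-access call
        return PartialView(posts);
    }

    private object LoadNewestPosts()
    {
        // Placeholder for the real query
        return new object();
    }
}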
Of the three options I suggest the first one, using the default MVC cache. Here is a sample implementation, thanks to #TelerikAcademy and #NikolayKostov:
namespace Eshop.Services.Common
{
    using System;
    using System.Web;
    using System.Web.Caching;
    using Contracts;

    public class HttpCacheService : IHttpCacheService
    {
        private static readonly object LockObject = new object();

        public T Get<T>(string itemName, Func<T> getDataFunc, int durationInSeconds)
        {
            if (HttpRuntime.Cache[itemName] == null)
            {
                lock (LockObject)
                {
                    if (HttpRuntime.Cache[itemName] == null)
                    {
                        var data = getDataFunc();
                        HttpRuntime.Cache.Insert(
                            itemName,
                            data,
                            null,
                            DateTime.Now.AddSeconds(durationInSeconds),
                            Cache.NoSlidingExpiration);
                    }
                }
            }

            return (T)HttpRuntime.Cache[itemName];
        }

        public void Remove(string itemName)
        {
            HttpRuntime.Cache.Remove(itemName);
        }
    }
}
Usage is very simple, with an anonymous function and a time interval.
You can expose it as a protected property of a base controller and inherit BaseController in every controller you use. Then you will have the cache service in every controller and can use it like this:
var newestPosts = this.Cache.Get(
    "newestPosts",
    () => this.articlesService.GetNewestPosts(16).To<ArticleViewModel>().ToList(),
    GlobalConstants.DefaultCacheTime);
Let's assume that GlobalConstants.DefaultCacheTime = 10
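The base-controller wiring mentioned above might look like this; it's just a sketch, and whether you new up the service or inject it (and where IHttpCacheService actually lives) is an assumption:

using System.Web.Mvc;
using Eshop.Services.Common;

public abstract class BaseController : Controller
{
    // Every derived controller gets this.Cache for free
    protected IHttpCacheService Cache { get; private set; }

    protected BaseController()
    {
        this.Cache = new HttpCacheService();
    }
}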
Hope that this answer will be useful to you. :)
If you look at this MS article, https://technet.microsoft.com/pl-pl/library/cc753179(v=ws.10).aspx, it says:
In addition to recycling an application pool on demand when problems occur, you can configure an application pool to recycle a worker process for the following reasons:
At a scheduled time
After an elapsed time
After reaching a number of requests
After reaching a virtual memory threshold
After reaching a used memory threshold
So if an IIS recycle did not clean up memory, recycling on a memory threshold would not make sense. Additionally, an IIS recycle restarts the application, so it obviously clears its memory too.

Using ffmpeg in ASP.NET

I need an audio conversion library. After pulling my hair out, I have given up on finding one that just works: every library out there has some problem or other.
The only option left is ffmpeg, which is the best, but unfortunately you cannot use it in ASP.NET directly. Every user on the website that converts a file will launch an exe; I think I will hit the server's memory ceiling pretty soon.
Bottom line: I will try using ffmpeg.exe and see how many simultaneous users it can support.
I went to the ffmpeg website, and in the Windows download section I found 3 different builds: static, shared and dev.
Does anyone know which would be best for use from ASP.NET: everything packed into one exe (static), or separate DLLs with a small exe (shared)?
PS: if anyone knows a good library out there, it would be great if you could share.
Static builds provide one self-contained .exe file for each program (ffmpeg, ffprobe, ffplay).
Shared builds provide each library as a separate .dll file (avcodec, avdevice, avfilter, etc.), and .exe files that depend on those libraries for each program
Dev packages provide the headers and .lib/.dll.a files required to use the .dll files in other programs.
ffmpeg is the best tool out there from what I have used, but I wouldn't recommend trying to call it directly from ASP.NET.
What I have done is accept the upload, store it on the server (or S3 in my case), then have a worker role (if using something like Azure) with a process that continuously monitors for new files to convert.
If you need a realtime-like solution, you can set flags in your database and have an AJAX poll against it to provide progress updates, then show a download link once the conversion is complete.
Personally my approach would be
Azure Web Roles
Azure Worker Role
ServiceBus
The WorkerRole starts up and is monitoring the ServiceBus Queue for messages.
The ASP.NET site uploads and stores the file in S3 or Azure
The ASP.NET site then records information in your DB if needed and sends a message to the ServiceBus queue.
The WorkerRole picks this up and converts.
AJAX will be needed on the ASP.NET site if you want a realtime monitoring solution. Otherwise you could send an email when complete if needed.
Using a queuing process also helps with load: when you are under heavy load, people just wait a little longer and nothing grinds to a halt. You can also scale out your worker roles as needed to balance the load, should it ever become too much for one server. A rough sketch of the worker loop follows.
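This is just a sketch against the ServiceBus client library; the queue name, connection string, and ConvertFile helper are assumptions:

using System;
using Microsoft.ServiceBus.Messaging; // from the WindowsAzure.ServiceBus package

public class ConversionWorker
{
    public void Run(string connectionString)
    {
        var client = QueueClient.CreateFromConnectionString(connectionString, "convert-queue");
        while (true)
        {
            BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(30));
            if (message == null) continue;                // nothing queued, poll again
            try
            {
                var blobPath = message.GetBody<string>(); // where the uploaded file was stored
                ConvertFile(blobPath);                    // hypothetical: shells out to ffmpeg
                message.Complete();                       // done - remove from the queue
            }
            catch (Exception)
            {
                message.Abandon();                        // make the message visible for a retry
            }
        }
    }

    private void ConvertFile(string path) { /* call ffmpeg here */ }
}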
Here is how I run ffmpeg from C# (you will need to change the parameters for your requirements). Note that params is a reserved word in C#, so the variable is named args:
string args = string.Format("-i {0} -s 640x360 {1}", input.Path, "C:\\FilePath\\file.mp4");
RunProcess(args);
private string RunProcess(string parameters)
{
    // Configure the process: hidden window, both output streams redirected so we can capture them
    var oInfo = new ProcessStartInfo(this._ffExe, parameters)
    {
        UseShellExecute = false,
        CreateNoWindow = true,
        RedirectStandardOutput = true,
        RedirectStandardError = true
    };

    var output = new StringBuilder();
    try
    {
        using (var proc = Process.Start(oInfo))
        {
            // ffmpeg writes its progress to stderr, so collect both streams
            proc.OutputDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
            proc.ErrorDataReceived += (s, e) => { if (e.Data != null) output.AppendLine(e.Data); };
            proc.BeginOutputReadLine();
            proc.BeginErrorReadLine();
            proc.WaitForExit();
        }
    }
    catch (Exception)
    {
        // Capture/log the error as appropriate
    }
    return output.ToString();
}

Webmatrix.Data.Database Connection String Cleared After Form Submit

I'm developing an ASP.NET (Razor v2) web site and using the WebMatrix.Data library to connect to a remote DB. I have the Database wrapped in a singleton, because that seemed better than constantly opening and closing DB connections. It is implemented like so:
public class DB
{
    private static DB sInstance = null;

    private Database mDatabase = null;

    public static DB Instance
    {
        get
        {
            if (sInstance == null)
            {
                sInstance = new DB();
            }
            return sInstance;
        }
    }

    private DB()
    {
        mDatabase = Database.Open("<Connection String name from web.config>");
    }

    <Query Functions Go Here>
}
("Database" here refers to the WebMatrix.Data.Database class)
The first time I load the page with the form on it and submit, a watch on mDatabase's Database.Connection property shows the following (sorry, not enough rep to post images yet):
http://i.stack.imgur.com/jJ1RK.png
The form submits, the page reloads, the submitted data shows up, everything is a-ok. Then I enter new data and submit the form again, and here's the watch:
http://i.stack.imgur.com/Zorv0.png
The connection has been closed and its connection string blanked, despite Database.Close() never being called anywhere in my code. I have absolutely no idea what is causing this; has anyone seen it before?
I'm currently working around the problem by calling Database.Open() before and Database.Close() immediately after every query, which seems inefficient.
The Web Pages framework will ensure that connections opened via the Database helper class are closed and disposed when the current page has finished executing. This is by design. It is also why you rarely see connections explicitly closed in any Web Pages tutorial where the Database helper is used.
It is very rarely a good idea to keep permanently open connections in ASP.NET applications; it can cause memory leaks. And when Close is called, the connection is not actually terminated by default: it is returned to a pool of connections that ADO.NET connection pooling keeps alive. That way the cost of instantiating new connections is minimised, but they are still managed properly. So all you need to do is call Database.Open in each page; it's the recommended approach.
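In other words, the usual per-page pattern is simply this (the connection-string name and query are placeholders):

using WebMatrix.Data;

// Open per page/request; the Web Pages framework disposes it at the end of the request,
// and connection pooling makes the repeated Open calls cheap.
var db = Database.Open("MyConnectionStringName");
foreach (var row in db.Query("SELECT Id, Name FROM Products"))
{
    // Rows are dynamic; columns are accessed by name
    var name = row.Name;
}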

Adobe AIR HTTP Connection Limit

I'm working on an Adobe AIR application which can upload files to a web server running Apache and PHP. Several files can be uploaded at the same time, and the application also calls the web server for various API requests.
The problem I'm having is that if I start two file uploads, any other HTTP requests made while they are in progress will time out, which is a problem for the application and from a user's point of view.
Are Adobe AIR applications limited to 2 HTTP connections, or is something else likely the issue?
From searching about this issue I've not found much, but one article did indicate that it isn't limited to just two connections.
The file uploads are performed by calling the File class's upload method, and the API calls are done using the HTTPService class. The development web server I am using is a WAMP server; when the application is released it will be talking to a LAMP server.
Thanks,
Grant
Here is the code I'm using to upload the file:
protected function btnAddFile_clickHandler(event:MouseEvent):void
{
    // Create a new File object and display the browse-file dialog
    var uploadFile:File = new File();
    uploadFile.browseForOpen("Select File to Upload");
    uploadFile.addEventListener(Event.SELECT, uploadFile_SelectedHandler);
}

private function uploadFile_SelectedHandler(event:Event):void
{
    // Get the File object which was used to select the file
    var uploadFile:File = event.target as File;
    uploadFile.addEventListener(ProgressEvent.PROGRESS, file_progressHandler);
    uploadFile.addEventListener(IOErrorEvent.IO_ERROR, file_ioErrorHandler);
    uploadFile.addEventListener(Event.COMPLETE, file_completeHandler);

    // Create the request URL based on the download URL
    var requestURL:URLRequest = new URLRequest(AppEnvironment.instance.serverHostname + "upload.php");
    requestURL.method = URLRequestMethod.POST;

    // Set the POST parameters
    var params:URLVariables = new URLVariables();
    params.name = "filename.ext";
    requestURL.data = params;

    // Start uploading the file to the server
    uploadFile.upload(requestURL, "file");
}
Here is the code for the API calls:
private function sendHTTPPost(apiFile:String, postParams:Object, resultCallback:Function, initialCallerResultCallback:Function):void
{
    var httpService:mx.rpc.http.HTTPService = new mx.rpc.http.HTTPService();
    httpService.url = AppEnvironment.instance.serverHostname + apiFile;
    httpService.method = "POST";
    httpService.requestTimeout = 10;
    httpService.resultFormat = HTTPService.RESULT_FORMAT_TEXT;
    httpService.addEventListener("result", resultCallback);
    httpService.addEventListener("fault", httpFault);

    var token:AsyncToken = httpService.send(postParams);

    // Add the initial caller's result callback function to the token
    token.initialCallerResultCallback = initialCallerResultCallback;
}
If you are on a Windows system, Adobe AIR uses Microsoft's WinINet library to access the web. By default this library limits the number of concurrent connections to a single server to 2:
WinInet limits the number of simultaneous connections that it makes to a single HTTP server. If you exceed this limit, the requests block until one of the current connections has completed. This is by design and is in agreement with the HTTP specification and industry standards.
... Connections to a single HTTP 1.1 server are limited to two simultaneous connections
There is an API to change this limit, but I don't know whether it is accessible from AIR.
Since this limit also affects page-loading speed for web sites, some sites use multiple DNS names for artifacts such as images, JavaScript and stylesheets, allowing a browser to open more parallel connections.
So if you control the server part, a workaround could be to create DNS aliases: for example www.example.com for uploads and api.example.com for API requests.
So as I was looking into this, I came across this info about using File.upload() in the documentation:
Starts the upload of the file to a remote server. Although Flash Player has no restriction on the size of files you can upload or download, the player officially supports uploads or downloads of up to 100 MB. You must call the FileReference.browse() or FileReferenceList.browse() method before you call this method.
Listeners receive events to indicate the progress, success, or failure of the upload. Although you can use the FileReferenceList object to let users select multiple files for upload, you must upload the files one by one; to do so, iterate through the FileReferenceList.fileList array of FileReference objects.
The FileReference.upload() and FileReference.download() functions are nonblocking. These functions return after they are called, before the file transmission is complete. In addition, if the FileReference object goes out of scope, any upload or download that is not yet completed on that object is canceled upon leaving the scope. Be sure that your FileReference object remains in scope for as long as the upload or download is expected to continue.
I wonder if something there could be giving you issues with uploading multiple files. I see that you are using browseForOpen() instead of browse(). It seems like they probably do the same thing... but maybe not.
I also saw this in the File class documentation
Note that because of new functionality added to the Flash Player, when publishing to Flash Player 10, you can have only one of the following operations active at one time: FileReference.browse(), FileReference.upload(), FileReference.download(), FileReference.load(), FileReference.save(). Otherwise, Flash Player throws a runtime error (code 2174). Use FileReference.cancel() to stop an operation in progress. This restriction applies only to Flash Player 10. Previous versions of Flash Player are unaffected by this restriction on simultaneous multiple operations.
When you say you let users upload multiple files, do you mean subsequent calls to browse() and upload(), or one call that includes multiple files? If you are making multiple separate calls, that may be the issue.
Anyway, I don't know if this is much help. It definitely seems that what you are trying to do should be possible; I can only guess that something is going wrong in the implementation. Good luck :)
Reference: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#upload()
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#browse()
Just because I was thinking about a very similar question due to an error in one of my own apps, I decided to write down the answer I found.
I instantiated 11 HttpConnections and wondered why my Flex 4 application stopped working and threw an HTTP error, although it had previously worked well with just 5 simultaneous connections to the same server.
I tested this myself because I did not find anything about it in the Flex docs or on the internet.
I found that using more than 5 HTTPConnections was what caused the Flex application to throw the runtime error.
As a temporary workaround I decided to make the connections one after another: load the next one after the previous one has received its data, and so on.
That is of course only temporary, since one of the next steps will be to alter the server code so that a single response contains the results of queries against more than one table. Of course the client application logic needs to change too.

NHibernate Memory Leak

My company has an ASP.NET application that runs out of memory and throws out-of-memory exceptions after only a couple of days of activity by our customers. I was able to reproduce the error in our testing environment, and I created a hang dump using adplus. Looking at the largest/most numerous objects on the heap, I noticed that we have over 500,000 NHibernate.SqlCommand.Parameter objects. This cannot be correct! We have 33 session factories instantiated in total, one session factory per client database. The NHibernate version we are using is 2.1.0.4000.
We have disabled the second-level cache, the query plan cache, and the query cache. We still see 500,000 NHibernate.SqlCommand.Parameter objects in the memory dump.
Has anybody seen this behavior?
We have a similar problem with our application (NHibernate 2.1.2.4000, ODP.NET 2.111.7.0 on Windows 7). When we insert data into the database, we end up with a huge memory and handle leak:
for (int i = 1; i < 10000; i++)
{
    using (var session = _sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        // insert a few rows into one table
        tx.Commit();
    }
}
The only fix for the problem is to set Enlist=false in the connection string, or to use the OracleClientDriver instead of the OracleDataClientDriver (a hedged configuration sketch follows). This problem did not happen in NHibernate 1.2. There was an even worse connection leak when we tried this with TransactionScope.
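For reference, one way to apply that fix through NHibernate's programmatic configuration; the connection-string values are placeholders:

using NHibernate.Cfg;

var cfg = new Configuration();
// Enlist=false stops ODP.NET from auto-enlisting every connection in an ambient transaction
cfg.SetProperty(Environment.ConnectionString,
    "Data Source=MyOracleDb;User Id=app;Password=secret;Enlist=false");
// ...or switch drivers instead, as described above:
cfg.SetProperty(Environment.ConnectionDriver, "NHibernate.Driver.OracleClientDriver");
var sessionFactory = cfg.BuildSessionFactory();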
