Why does Image.FromFile keep a file handle open sometimes?

I am doing a lot of image processing in GDI+ in .NET in an ASP.NET application.
I frequently find that Image.FromFile() is keeping a file handle open.
Why is this? What is the best way to open an image without the file handle being retained?
NB: I'm not doing anything stupid like keeping the Image object lying around - and even if I were, I wouldn't expect the file handle to be kept active.

I went through the same journey as a few other posters on this thread. Things I noted:
Image.FromFile does seem unpredictable about when it releases the file handle. Calling Image.Dispose() did not release the file handle in all cases.
Using a FileStream and the Image.FromStream method works, and releases the handle on the file if you call Dispose() on the FileStream or wrap the whole thing in a using {} statement, as recommended by Kris. However, if you then attempt to save the Image object to a stream, the Image.Save method throws the exception "A generic error occurred in GDI+". Presumably something in the Save method wants to know about the originating file.
Steven's approach worked for me. I was able to delete the originating file with the Image object in memory. I was also able to save the Image to both a stream and a file (I needed to do both of these things). I was also able to save to a file with the same name as the originating file, something that is documented as not possible if you use the Image.FromFile method (I find this weird, since surely this is the most likely use case, but hey).
So to summarise, open your Image like this:
Image img = Image.FromStream(new MemoryStream(File.ReadAllBytes(path)));
You are then free to manipulate it (and the originating file) as you see fit.
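For example (an illustrative snippet of my own; the delete and re-save are assumptions added to demonstrate the point, not code from the original answer):

Image img = Image.FromStream(new MemoryStream(File.ReadAllBytes(path)));
File.Delete(path);                 // succeeds - no handle is held on the file
img.Save(path, ImageFormat.Png);   // saving back to the same name also works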

I have had the same problem and resorted to reading the file using
return Image.FromStream(new MemoryStream(File.ReadAllBytes(fileName)));

Image.FromFile keeps the file handle open until the image is disposed. From the MSDN:
"The file remains locked until the Image is disposed."
Use Image.FromStream, and you won't have the problem.
using (var fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
{
    return Image.FromStream(fs);
}
Edit: (a year and a bit later)
The above code is dangerous, as it is unpredictable: at some point (after closing the FileStream) you may get the dreaded "A generic error occurred in GDI+". I would amend it to:
Image tmpImage;
Bitmap returnImage;
using (var fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
{
    tmpImage = Image.FromStream(fs);
    returnImage = new Bitmap(tmpImage);
    tmpImage.Dispose();
}
return returnImage;
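The same pattern reads a little tighter with stacked using blocks; a small helper along these lines (the name LoadBitmapCopy is just illustrative):

public static Bitmap LoadBitmapCopy(string filename)
{
    using (var fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
    using (var tmpImage = Image.FromStream(fs))
    {
        // The copy is backed by its own memory, so it safely outlives fs
        return new Bitmap(tmpImage);
    }
}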

Make sure you are Disposing properly.
using (Image.FromFile("path")) {}
The using statement is shorthand for:
IDisposable obj = Image.FromFile("path");
try
{
    // ... use obj ...
}
finally
{
    obj.Dispose();
}
@Rex: in the case of Image, Dispose() calls the GdipDisposeImage extern/native Win32 call.
IDisposable is used as a mechanism to free unmanaged resources (which file handles are).

I also tried all your tips (ReadAllBytes, FileStream => FromStream => new Bitmap() to make a copy, etc.) and they all worked. However, I wondered if something shorter would do, and
using (Image temp = Image.FromFile(path))
{
    return new Bitmap(temp);
}
appears to work too, as it disposes the file handle as well as the original Image object and creates a new Bitmap object that is independent of the original file and can therefore be saved to a stream or file without errors.

I would have to point my finger at the garbage collector. Leaving the Image around is not really the issue if you are at the mercy of garbage collection.
This guy had a similar complaint... and he found a workaround of using a FileStream object rather than loading directly from the file.
public static Image LoadImageFromFile(string fileName)
{
    Image theImage = null;
    // Read the whole file into memory and close the handle before creating the Image
    using (var fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        byte[] img = new byte[fileStream.Length];
        fileStream.Read(img, 0, img.Length);
        theImage = Image.FromStream(new MemoryStream(img));
    }
...
It seems like a complete hack...

As mentioned above, the Microsoft workaround causes a GDI+ error after several images have been loaded. The VB solution for me, as described above by Steven, is:
picTemp.Image = Image.FromStream(New System.IO.MemoryStream(My.Computer.FileSystem.ReadAllBytes(strFl)))

I just encountered the same problem, where I was trying to merge multiple single-page TIFF files into one multipart TIFF image. I needed to use Image.Save() and Image.SaveAdd(): https://msdn.microsoft.com/en-us/library/windows/desktop/ms533839%28v=vs.85%29.aspx
The solution in my case was to call ".Dispose()" for each of the images, as soon as I was done with them:
' Iterate through each single-page source .tiff file
' (encoderInfo, the TIFF ImageCodecInfo, is assumed to be set up elsewhere)
Dim encoderParams As New EncoderParameters(2)
Dim initialTiff As System.Drawing.Image = Nothing
For Each filePath As String In srcFilePaths
    Using fs As System.IO.FileStream = File.Open(filePath, FileMode.Open, FileAccess.Read)
        If initialTiff Is Nothing Then
            ' ... Save 1st page of multi-part .TIFF
            initialTiff = Image.FromStream(fs)
            encoderParams.Param(0) = New EncoderParameter(Encoder.Compression, EncoderValue.CompressionCCITT4)
            encoderParams.Param(1) = New EncoderParameter(Encoder.SaveFlag, EncoderValue.MultiFrame)
            initialTiff.Save(outputFilePath, encoderInfo, encoderParams)
        Else
            ' ... Save subsequent pages
            Dim newTiff As System.Drawing.Image = Image.FromStream(fs)
            encoderParams = New EncoderParameters(2)
            encoderParams.Param(0) = New EncoderParameter(Encoder.Compression, EncoderValue.CompressionCCITT4)
            encoderParams.Param(1) = New EncoderParameter(Encoder.SaveFlag, EncoderValue.FrameDimensionPage)
            initialTiff.SaveAdd(newTiff, encoderParams)
            newTiff.Dispose()
        End If
    End Using
Next
' Make sure to close the file
initialTiff.Dispose()

Related

load large image from database and return it to the client side

Hi:
In my application I have some images saved in the db, so I created an ImgDownLoad.aspx page to retrieve the images and return them. Since an image in the db may be very large (some are more than 20M), I generate some thumbnails. This is the code:
page_load()
{
    string id = Request.QueryString["id"];
    string imgType = Request.QueryString["itype"];
    if (imgType == "small")
    {
        // request the thumbnail
        string small_location = getSmallLocationById(id);
        if (!File.Exists(small_location))
        {
            byte[] img_stream = getStreamFromDb(id);
            Image img = Image.FromStream(new MemoryStream(img_stream)); // here I often get the out of memory error, but I am not sure when it will happen
            generateSmallImage(img, small_location);
        }
        Response.TransmitFile(small_location);
    }
    else if (imgType == "large")
    {
        byte[] img_stream = getStreamFromDb(id);
        new MemoryStream(img_stream).WriteTo(Response.OutputStream);
    }
}
Anything wrong?
Also, since I do not know the image format, I cannot set
Response.ContentType = "image/xxx";
What confuses me most is the out of memory error, so I changed the code:
byte[] img_stream = getStreamFromDb(id);
try
{
    Image img = Image.FromStream(new MemoryStream(img_stream)); // here I often get the out of memory error, but I am not sure when it will happen
    generateSmallImage(img, small_location);
}
catch (Exception e)
{
    // the small image cannot be generated, just return the whole image
    new MemoryStream(img_stream).WriteTo(Response.OutputStream);
    return;
}
In this case I avoid the out of memory problem, but some large images sometimes cannot be downloaded.
So I wonder if there are any ways to handle the large image stream?
Take a large image for example:
resolution: 12590x4000
size: 26M.
In fact, I opened a large image (almost 24M) with mspaint and then saved it again, and I found that its size was much smaller than at first. So is it possible to resize the image on the server side? Or are there other good ways to handle my problem?
Firstly, you're not disposing of the Image and Stream instances that you create - given subsequent calls, over time, this is bound to cause issues, particularly with images around the 20meg mark!
Also, why create the thumbnails on every call? Create once and cache, or flush to disk: either way, serve 'one you made earlier' rather than do this processing over and over.
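For the in-memory flavour, something like this could work (a sketch of my own, assuming ASP.NET's HttpRuntime.Cache and the id from the query string as the key):

byte[] cached = HttpRuntime.Cache["thumb_" + id] as byte[];
if (cached == null)
{
    cached = File.ReadAllBytes(small_location);           // or generate it here
    HttpRuntime.Cache.Insert("thumb_" + id, cached);
}
Response.OutputStream.Write(cached, 0, cached.Length);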
I would also recommend, however, that you try to minimise the size (in bytes) of the images. Some might argue that anything over 1meg shouldn't be in the database at all: store the files on disk and just a file name in the database. I guess that's open for debate; browse around if interested.
To your comment, I'd urge you not to allow other scopes to take control of resources 'owned' by another; dispose of items in the scope that creates them (obviously some things sometimes need to stick around, but what is responsible for them should be clear). Here's a little rework of some of your code:
if (imgType == "small")
{
    string small_location = getSmallLocationById(id);
    if (!File.Exists(small_location))
    {
        byte[] imageBytes = getStreamFromDb(id);
        using (var imageStream = new MemoryStream(imageBytes))
        using (var image = Image.FromStream(imageStream))
        {
            generateSmallImage(image, small_location);
        }
    }
    Response.TransmitFile(small_location);
}
else if (imgType == "large")
{
    byte[] imageBytes = getStreamFromDb(id);
    Response.OutputStream.Write(imageBytes, 0, imageBytes.Length);
}
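generateSmallImage isn't shown in the question; a minimal sketch of what it might look like (the 200px bound and JPEG output are my assumptions, not from the original post):

static void generateSmallImage(Image source, string outputPath)
{
    const int maxEdge = 200; // assumed thumbnail bound
    double scale = Math.Min((double)maxEdge / source.Width, (double)maxEdge / source.Height);
    int w = Math.Max(1, (int)(source.Width * scale));
    int h = Math.Max(1, (int)(source.Height * scale));
    using (var thumb = new Bitmap(w, h))
    {
        using (var g = Graphics.FromImage(thumb))
        {
            g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, 0, 0, w, h);
        }
        thumb.Save(outputPath, System.Drawing.Imaging.ImageFormat.Jpeg);
    }
}

Re-encoding at a smaller resolution like this is also exactly why the file shrank when you re-saved it from mspaint.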

ASP.NET to PowerPoint: File gets corrupted when adding image

I have used this example when exporting data to PowerPoint:
I have modified the GenerateSlidesFromDB() method:
public void GenerateSlidesFromDB()
{
    string slideName = @"C:\Users\x\Desktop\output.pptx";
    File.Copy(@"C:\Users\x\Desktop\Test.pptx", slideName, true);
    using (PresentationDocument presentationDocument = PresentationDocument.Open(slideName, true))
    {
        PresentationPart presentationPart = presentationDocument.PresentationPart;
        SlidePart slideTemplate = (SlidePart)presentationPart.GetPartById("rId2");
        string firstName = "Test User";
        SlidePart newSlide = CloneSlidePart(presentationPart, slideTemplate);
        InsertContent(newSlide, firstName);
        newSlide.Slide.Save();
        DeleteTemplateSlide(presentationPart, slideTemplate);
        presentationPart.Presentation.Save();
    }
}
As you can see, I overwrite the placeholder with "Test User", and it works like a charm.
I need to add an image (as a placeholder) to this pptx file.
When I do that (and run the code again), I get a corrupted pptx file.
Error message:
PowerPoint removed unreadable content
in output.pptx. You should review
this presentation to determine whether
any content was unexpectedly changed
or removed.
Edit: If I try the original code (which is slightly modified, since I don't have AdventureWorks), I get some other kind of error message:
This file may have become corrupt or damaged for the following reasons:
Third-party XML editors sometimes create files that are not compatible with Microsoft Office XML specifications.
The file has been purposely corrupted with the intent to harm your computer or your data.
Be cautious when opening a file from an unknown source.
PowerPoint can attempt to recover data from the file, but some presentation data, such as shapes, text, and formatting, may be lost.
Do one of the following:
If you want to recover data from the file, click Yes.
If you do not want to recover data from the file, click No.
Ok, sorry for this useless post. My bad.
Solution:
string imgId = "rIdImg" + i;
ImagePart imagePart = newSlide.AddImagePart(ImagePartType.Jpeg, imgId);
using (FileStream file = File.Open(@"C:\Users\x\Desktop\Test.jpg", FileMode.Open))
{
    byte[] buffer = new byte[file.Length];
    file.Read(buffer, 0, (int)file.Length);
    imagePart.FeedData(new MemoryStream(buffer));
}
SwapPhoto(newSlide, imgId);
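Incidentally (my observation, not part of the original solution), the intermediate buffer isn't strictly needed either; OpenXmlPart.FeedData accepts any readable stream, so the file stream can be fed directly:

using (FileStream file = File.Open(@"C:\Users\x\Desktop\Test.jpg", FileMode.Open))
{
    imagePart.FeedData(file);
}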

File not found error with FileStreamResult controller action

I have a controller action declared as follows:
[Authorize(Order = 0, Roles = "Requester,Controller,Installer")]
public FileStreamResult ExportJobCards()
The body of this method builds a collection of CSV lines, and attempts to return them as a file as follows:
using (var sw = new StreamWriter(new MemoryStream()))
{
    foreach (var line in lines)
    {
        sw.WriteLine(line);
    }
    return new FileStreamResult(sw.BaseStream, "text/csv");
}
When I request this action using the following action link...
Html.ActionLink("Export to Excel", "ExportJobCards")
...the export method executes properly, i.e. all the required CSV data is present in the lines collection in the above code, but I get a File Not Found error rendered as the end result.
EDIT:
In agreement with Tommy's observation, I moved the return out of the using, and I now get a file, but the file is empty. The new code that actually produces a file, albeit empty, is:
var sw = new StreamWriter(new MemoryStream());
foreach (var line in lines)
{
    sw.WriteLine(line);
}
sw.Flush();
return new FileStreamResult(sw.BaseStream, "text/csv");
With your current setup, the using statement is disposing of the StreamWriter (and its underlying MemoryStream) before the return can complete, which is resulting in the null reference/file not found error. Remove the using statement, or copy the stream out to another variable before you exit, and you should be good to go on getting rid of the File Not Found error.
A thought on your second issue now. Looking into memory streams as file stream results, you may need to change your return to this:
sw.BaseStream.Seek(0, SeekOrigin.Begin);
return new FileStreamResult(sw.BaseStream, "text/csv");
as the pointer is still at the end of the stream when you return.
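Putting both fixes together, a minimal working version of the action might look like this (a sketch; lines stands in for the CSV collection built earlier):

public FileStreamResult ExportJobCards()
{
    var ms = new MemoryStream();
    var sw = new StreamWriter(ms);
    foreach (var line in lines)
    {
        sw.WriteLine(line);
    }
    sw.Flush();                   // push buffered text into the stream
    ms.Seek(0, SeekOrigin.Begin); // rewind so MVC reads from the start
    return new FileStreamResult(ms, "text/csv"); // MVC disposes the stream after writing the response
}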
It throws that error because you're not giving it a file stream. What you want is FileContentResult, into which you can pass arbitrary content. This content needs to be a byte array of your content, so it's probably easiest to:
use a StringBuilder rather than a StreamWriter
get your string from the builder
use System.Text.Encoding.Unicode.GetBytes(string) to get the byte array
give the byte array to FileContentResult, as sketched below
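Concretely, those steps might look like this (a sketch, assuming the lines collection from the question):

var sb = new StringBuilder();
foreach (var line in lines)
{
    sb.AppendLine(line);
}
return new FileContentResult(Encoding.Unicode.GetBytes(sb.ToString()), "text/csv");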
As you have to write this code anyway, the easiest thing to do would be to create a new FileStringResult that inherits from the base FileResult and that can take in a string or StringBuilder. Override WriteFile(HttpResponseBase response) to do the string-to-byte[] conversion and push that into the response. Take a look at the FileStreamResult class from the MVC sources; it's very small and easy to do.
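For instance, a minimal custom result along those lines (my sketch; only the FileResult base class is MVC's API):

public class FileStringResult : FileResult
{
    private readonly string _content;

    public FileStringResult(string content, string contentType)
        : base(contentType)
    {
        _content = content;
    }

    protected override void WriteFile(HttpResponseBase response)
    {
        // Convert the accumulated string to bytes and push it into the response
        byte[] bytes = Encoding.Unicode.GetBytes(_content);
        response.OutputStream.Write(bytes, 0, bytes.Length);
    }
}

Usage: return new FileStringResult(sb.ToString(), "text/csv");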

Out Of Memory exception on System.Drawing.Image.FromFile()

I have an image uploader and cropper which creates thumbnails and I occasionally get an Out Of Memory exception on the following line:
Dim bm As Bitmap = System.Drawing.Image.FromFile(imageFile)
The occurrence of the error is tiny and very rare, but I always like to know what might be causing it. The imageFile variable is just a Server.MapPath to the path of the image.
I was curious if anyone had experience this issue previously and if they had any ideas what might be causing it? Is it the size of the image perhaps?
I can post the code if necessary and any supporting information I have, but would love to hear people's opinions on this one.
It's worth knowing that OutOfMemoryException doesn't always really mean it's out of memory - particularly not when dealing with files. I believe it can also happen if you run out of handles for some reason.
Are you disposing of all your bitmaps after you're done with them? Does this happen repeatably for a single image?
If this wasn't a bad image file but was in fact the normal issue with Image.FromFile, wherein it leaves file handles open, then the solution is to use Image.FromStream instead.
using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
    using (Image original = Image.FromStream(fs))
    {
        ...
    }
}
Using an explicit Dispose(), a using() statement or setting the value to null on the bitmap doesn't solve the issue with Image.FromFile.
So if your app runs for a while and opens a lot of files, consider using Image.FromStream() instead.
I hit the same issue today while creating thumbnail images for a folder full of images. It turns out that the "Out Of Memory" occurred at exactly the same point each time. When I looked at the folder with the images to be converted, I found that the file creating the problem was thumbs.db. I added some code to make sure that only image files were being converted, and the issue was resolved.
My code is basically
For Each imageFile as FileInfo in fileList
    If imageFile.Extension = ".jpg" Or imageFile.Extension = ".gif" Then
        ...proceed with the conversion
    End If
Next
Hope this helps.
Also check that you haven't opened the same file somewhere else. Apparently, when you open a file twice (even with File.Open()), an OutOfMemoryException is thrown too...
You can also open it in read mode, if you want to use it in two places at the same time:
public Image OpenImage(string previewFile)
{
    // Note: GDI+ needs the stream to stay open for the lifetime of the Image,
    // so the FileStream is deliberately not disposed here
    FileStream fs = new FileStream(previewFile, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
    return Image.FromStream(fs);
}
This happens when the image file is corrupted. It is a bad error message, because memory has nothing to do with it. I haven't worked out the coding, but a try/catch/finally will stop the program from abending.
I had a similar problem today when I was trying to resize an image and then crop it. What happened is I used this code to resize the image:
private static Image resizeImage(Image imgToResize, Size size)
{
    int sourceWidth = imgToResize.Width;
    int sourceHeight = imgToResize.Height;
    float nPercent = 0;
    float nPercentW = 0;
    float nPercentH = 0;
    nPercentW = ((float)size.Width / (float)sourceWidth);
    nPercentH = ((float)size.Height / (float)sourceHeight);
    if (nPercentH < nPercentW)
        nPercent = nPercentH;
    else
        nPercent = nPercentW;
    int destWidth = (int)(sourceWidth * nPercent);
    int destHeight = (int)(sourceHeight * nPercent);
    Bitmap b = new Bitmap(destWidth, destHeight);
    Graphics g = Graphics.FromImage((Image)b);
    g.InterpolationMode = InterpolationMode.HighQualityBicubic;
    g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
    g.Dispose();
    return (Image)b;
}
And then this code for the crop...
private static Image cropImage(Image img, Rectangle cropArea)
{
    Bitmap bmpImage = new Bitmap(img);
    Bitmap bmpCrop = bmpImage.Clone(cropArea, bmpImage.PixelFormat);
    return (Image)(bmpCrop);
}
Then this is how I called the above code...
Image img = Image.FromFile(@"C:\Users\****\Pictures\image.jpg");
img = ImageHandler.ResizeImage(img, new Size(400, 300));
img = ImageHandler.CropImage(img, new Rectangle(0, 25, 400, 250));
long quality = 90;
I kept getting errors on the crop part; the resizer worked fine!
Turns out, what was happening inside the resizer was throwing errors in the crop function. The resize calculations were making the actual dimensions of the image come out to something like 399 rather than the 400 I passed in.
So, when I passed in 400 as the argument for the crop, it was trying to crop a 399px-wide image with a 400px-wide rectangle, and that threw the out of memory error!
Most of the above code was found on http://www.switchonthecode.com/tutorials/csharp-tutorial-image-editing-saving-cropping-and-resizing
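One way to guard against that mismatch (my suggestion, not from the original post) is to clamp the crop rectangle to the actual bitmap bounds before cloning:

private static Image cropImageSafe(Image img, Rectangle cropArea)
{
    Bitmap bmpImage = new Bitmap(img);
    // Clone throws OutOfMemoryException if cropArea falls outside the bitmap,
    // so intersect it with the bitmap's bounds first
    Rectangle safeArea = Rectangle.Intersect(cropArea, new Rectangle(0, 0, bmpImage.Width, bmpImage.Height));
    return bmpImage.Clone(safeArea, bmpImage.PixelFormat);
}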
If an image is an icon, then different loading handling is required, as in the following function:
public static Image loadImage(string imagePath)
{
    Image loadedImage = null;
    if (!File.Exists(imagePath)) return loadedImage;
    try
    {
        FileInfo fileInfo = new FileInfo(imagePath);
        if (fileInfo.Extension.Equals(".jpg") || fileInfo.Extension.Equals(".jpeg") ||
            fileInfo.Extension.Equals(".bmp") || fileInfo.Extension.Equals(".png") ||
            fileInfo.Extension.Equals(".gif"))
        {
            loadedImage = Image.FromFile(imagePath);
        }
        else if (fileInfo.Extension.Equals(".ico"))
        {
            Bitmap aBitmap = Bitmap.FromHicon(new Icon(imagePath, new Size(200, 200)).Handle);
            loadedImage = ImageFuncs.ResizeImage(aBitmap, new Size(30, 30));
        }
    }
    catch (Exception eLocal)
    {
        MessageBox.Show(imagePath + " loading error: " + eLocal.Message);
    }
    return loadedImage;
}
I had the same problem with a utility I wrote to convert TIFF(s) to PDF(s). Often I would get the "out of memory" error on the same line as you.
System.Drawing.Image.FromFile(imageFile)
Then I discovered the error only happened when the file extension was ".tiff", and it worked fine after I renamed it with an extension of ".tif".
I have had the same issue. Before looking elsewhere in the code, I wanted to make sure I could open the image with an image viewer, and figured out that the image was corrupted/damaged, even though it was a .PNG file of 1KB. I added a new image in the same location, and then it worked fine.
I was having the same problem batch-processing TIFF files. Most of the files weren't throwing an exception, but a few were throwing "Out of Memory" exceptions in ASP.NET 4.0. I used the binary data to find out why it was just those few files, from within the same folder. It can't be a permission issue for the ASPNET or NETWORK SERVICE account, because the other files work fine.
I opened the iTextSharp.text.Image class and found that there are many overloaded GetInstance() methods. I resolved my problem using the following code (note: the catch block runs for the problematic files):
iTextSharp.text.Image image = null;
try
{
    var imgStream = GetImageStream(path);
    image = iTextSharp.text.Image.GetInstance(imgStream);
}
catch
{
    iTextSharp.text.pdf.RandomAccessFileOrArray ra = null;
    ra = new iTextSharp.text.pdf.RandomAccessFileOrArray(path);
    image = iTextSharp.text.pdf.codec.TiffImage.GetTiffImage(ra, 1);
    if (ra != null)
        ra.Close();
}
If you're serving from IIS, try recycling the Application Pool. This solved a similar image upload "Out of Memory" error for me.
I created a minimal form example that still gives me errors.
private void button1_Click(object sender, EventArgs e)
{
    string SourceFolder = ImageFolderTextBox.Text;
    string FileName = "";
    DirectoryInfo Mydir = new DirectoryInfo(SourceFolder);
    FileInfo[] JPEGS = Mydir.GetFiles("*.jpg");
    for (int counter = 0; counter < JPEGS.Count(); counter++)
    {
        FileName = Mydir + "\\" + JPEGS[counter].Name;
        //using (Image MyImage = System.Drawing.Image.FromFile(FileName))
        using (FileStream fs = new FileStream(FileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            StatusBtn.BackColor = Color.Green;
        }
    }
}
I tried both the commented-out line using Image.FromFile() and the line using FileStream(). Both produced file errors.
The Image.FromFile() error was:
System.OutOfMemoryException: 'Out of Memory'
The FileStream() error was:
System.UnauthorizedAccessException: 'Access to the path 'E:\DCIM\100Canon\dsc_7218.jpg' is denied.'
I placed a breakpoint just prior to the lines producing the error, and I was able to open the image file using the Windows image viewer. I then closed the viewer, and after I advanced to the next line and got the error, I could no longer view the image with the Windows viewer. Instead, I got a message that I do not have permission to access the file. I was able to delete the file.
This error is repeatable. I've done it over 10 times. Each time, after I get the error, I delete the file used for FileName.
All files were verified to be non-corrupt.
My original code that used Image.FromFile() worked fine when I compiled it 2 years ago. In fact, that .exe file still runs just fine. I made a minor change somewhere else in the code and was surprised to find that the newly compiled code would not run without this error. I tried the FileStream() method based on the information on this page.

Storing XSLT in SQL Server 2005 with xml type?

I have a lot of XSL files in my ASP.NET web app. A lot. I generate a bunch of AJAX HTML responses using this kind of generic transform method:
public void Transform(XmlDocument xml, string xslPath)
{
    ...
    XslTransform myXslTrans = new XslTransform();
    myXslTrans.Load(xslPath);
    myXslTrans.Transform(xml, null, HttpContext.Current.Response.Output);
}
I'd like to move the XSL definitions into SQL Server, using a column of type xml.
I would store an entire XSL file in a single row in SQL, and each XSL is self-contained (no imports). I would read out the XSL definition from SQL into my XslTransform object.
Something like this:
public void Transform(XmlDocument xml, string xslKey)
{
    ...
    SqlCommand cmd = new SqlCommand("GetXslDefinition");
    cmd.Parameters.Add("@xslKey", SqlDbType.VarChar).Value = xslKey;
    // where the result set has a single column of XSL: "<xslt:stylesheet>..."
    ...
    SqlDataReader dr = cmd.ExecuteReader();
    if (dr.Read())
    {
        SqlXml xsl = dr.GetSqlXml(0);
        XslTransform myXslTrans = new XslTransform();
        myXslTrans.Load(xsl.CreateReader());
        myXslTrans.Transform(xml, null, HttpContext.Current.Response.Output);
    }
}
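(A side note of mine on the snippet above, not part of the original question: XslTransform has been obsolete since .NET 2.0, and the same load pattern works with its replacement:

XslCompiledTransform myXslTrans = new XslCompiledTransform();
myXslTrans.Load(xsl.CreateReader(), XsltSettings.Default, new XmlUrlResolver());
myXslTrans.Transform(xml, null, HttpContext.Current.Response.Output);
)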
It seems like a straightforward way to:
add metadata to each XSL, like lastUsed, useCount, etc.
bulk update/search capabilities
prevent lots of disk access
avoid referencing relative paths and organizing files
allow XSL changes without redeploying (I could even write an admin page that selects/updates the XSL in the database)
Has anyone tried this before? Are there any caveats?
EDIT
Caveats that responders have listed:
disk access isn't guaranteed to diminish
this will break xsl:includes
The two big issues I can see are:
We use a lot of includes to ensure that we only do things once; storing the XSLT in the database would stop us from doing that.
It makes updating XSLs more interesting - we've been quite happy to dump new .xsl files into deployed sites without doing a full update of the site. For that matter, we've got bits of code that look for client-specific XSL in a folder, and those bits of code can reach back up to common code (templates) in the root - so I'm not sure about the redeploy thing at all, but this will depend very much on the particular use case; yours is certainly different to ours.
In terms of disk access, hmm... the db still has to go to the disk to pull the data, and if you're talking about caching then the db isn't a requirement for enabling caching.
Have to agree about the update/search options - you can do stuff with PowerShell, but that needs to be run on the server, and that's not always a good idea.
Technically I can see no reason why not (excepting the wish to do includes as above) but practically it seems to be fairly balanced with good arguments either way.
I store XSLTs in a database in my application dbscript. (However, I keep them in an NVARCHAR column, since it also runs on SQL Server 2000.)
Since users are able to edit their XSLTs, I needed to write a custom validator which loads the text of the TextBox into a .NET XslCompiledTransform object, like this:
args.IsValid = true;
if (args.Value.Trim() == "")
    return;
try
{
    System.IO.TextReader rd = new System.IO.StringReader(args.Value);
    System.Xml.XmlReader xrd = System.Xml.XmlReader.Create(rd);
    System.Xml.Xsl.XslCompiledTransform xslt = new System.Xml.Xsl.XslCompiledTransform();
    System.Xml.Xsl.XsltSettings xslts = new System.Xml.Xsl.XsltSettings(false, false);
    xslt.Load(xrd, xslts, new System.Xml.XmlUrlResolver());
    xrd.Close();
}
catch (Exception ex)
{
    this.ErrorMessage = (string.IsNullOrEmpty(sErrorMessage) ? "" : sErrorMessage + "<br/>") +
        ex.Message;
    if (ex.InnerException != null)
    {
        ex = ex.InnerException;
        this.ErrorMessage += "<br />" + ex.Message;
    }
    args.IsValid = false;
}
As for your points:
file I/O will be replaced by database-generated disk I/O, so no gains there
deployment changes to providing an INSERT/UPDATE script containing the new data
