I am trying to upload a file with the FileUpload control. When the file is uploaded, I extract information from it, and then I want to delete it.
I manage to upload it, save it and get the info from it, but when I try to delete it I get the following exception:
"The process cannot access the file 'D:\IIS**OMITTED***\V75 personal ny.csv' because it is being used by another process."
string fn = Path.GetFileName(fu.PostedFile.FileName);
string SaveLocation = Server.MapPath("UploadedCSVFiles") + "\\" + fn;
FileInfo fi = new FileInfo(SaveLocation);
fu.PostedFile.SaveAs(SaveLocation);
fu.PostedFile.InputStream.Dispose();
DataTable dt = AMethodThatUsesFile(SaveLocation);
fi.Delete(); // this is where the exception is thrown
Try this code to delete the file:
System.IO.File.Delete(SaveLocation);
You mentioned a method, AMethodThatUsesFile(SaveLocation). If it uses a class like StreamReader to read the file, close the reader with StreamReader.Close() before trying to delete.
Dispose whatever holds the file (for example, the upload's InputStream) before deleting, and then use File.Delete(). Remember to use using statements with disposable objects, or dispose them after use.
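For illustration, here is a hedged sketch of what AMethodThatUsesFile could look like (its body is an assumption; only its name appears in the question). Wrapping the reader in a using statement guarantees the handle is released even if parsing throws, so the File.Delete afterwards can succeed:
DataTable AMethodThatUsesFile(string path)
{
    var dt = new DataTable();
    using (var reader = new StreamReader(path))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // ... parse the CSV line and add a row to dt ...
        }
    } // the reader (and its file handle) is closed here
    return dt;
}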
using System.IO;
string path = Server.MapPath("../Nurturing/" + fnevents);
File.Delete(path);
// or, equivalently, through FileInfo:
FileInfo fInfoEvent = new FileInfo(path);
fInfoEvent.Delete();
Here fnevents is the name of the file that you are deleting, and Nurturing is the name of the folder.
I'm using iText to create a PDF from a PDF template which is then emailed. The code is to be executed repeatedly. The created temporary file can't be deleted or overwritten. The error message is "The process cannot access the file 'Tmp.pdf' because it is being used by another process."
string path = Server.MapPath("files");
string tmp = path + @"\Tmp.pdf";
PdfDocument pdfDoc = new PdfDocument(new PdfReader(path + @"\Template.pdf"), new PdfWriter(tmp));
PdfAcroForm form = PdfAcroForm.GetAcroForm(pdfDoc, true);
form.GetField("Content").SetValue(tmpItems.Text);
form.FlattenFields();
pdfDoc.Close();
// Email PDF
MailMessage mailMsg = new MailMessage();
Attachment data = new Attachment(tmp, MediaTypeNames.Application.Octet);
// Add time stamp information for the file.
ContentDisposition disposition = data.ContentDisposition;
disposition.CreationDate = System.IO.File.GetCreationTime(tmp);
disposition.ModificationDate = System.IO.File.GetLastWriteTime(tmp);
disposition.ReadDate = System.IO.File.GetLastAccessTime(tmp);
mailMsg.Attachments.Add(data);
File.Delete(tmp);
I assumed the file had been closed (pdfDoc.Close();), releasing the resource. The second time the snippet runs (to create another version to be emailed) and tries to overwrite the file, the error occurs at line 3. As a potential fix I tried to delete the file first, but the same error occurs at the deletion point instead.
This is a very short snippet of code. What is the other process holding on to the file? What am I doing wrong?
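One likely suspect, though the snippet alone cannot confirm it, is the Attachment: constructing it from a path opens a stream on the file, and that stream stays open until the attachment (or the MailMessage that owns it) is disposed, which never happens in the code above. A minimal sketch of that fix, reusing the same variable names:
using (MailMessage mailMsg = new MailMessage())
using (Attachment data = new Attachment(tmp, MediaTypeNames.Application.Octet))
{
    mailMsg.Attachments.Add(data);
    // ... set sender, recipient, subject, and send with SmtpClient ...
}
// the message and attachment are disposed here, releasing the handle on tmp
File.Delete(tmp);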
I am trying to read an Excel sheet using C#, which is to be loaded by the end user from a FileUpload control.
I am writing my code to save the file on the server in the event handler of another button control (Upload). But when I click the Upload button I get this exception:
The process cannot access the file 'E:\MyProjectName\App_Data\sampledata.xlsx' because it is being used by another process.
Here is the code that I have used in the event handler:
string fileName = Path.GetFileName(file_upload.PostedFile.FileName);
string fileExtension = Path.GetExtension(file_upload.PostedFile.FileName);
string fileLocation = Server.MapPath("~/App_Data/" + fileName);
//if (File.Exists(fileLocation))
// File.Delete(fileLocation);
file_upload.SaveAs(fileLocation);
Even deleting the file does not work; it throws the same exception.
Make sure no other process is accessing that file.
This error can occur when you upload a file without explicitly releasing it from memory.
So try this:
try
{
string fileName = Path.GetFileName(file_upload.PostedFile.FileName);
string fileExtension = Path.GetExtension(file_upload.PostedFile.FileName);
string fileLocation = Server.MapPath("~/App_Data/" + fileName);
//if (File.Exists(fileLocation))
// File.Delete(fileLocation);
file_upload.SaveAs(fileLocation);
}
catch (Exception)
{
    throw; // rethrow as-is; ex.Message is a string and cannot be thrown
}
finally
{
    // release the file from memory after uploading
    file_upload.PostedFile.InputStream.Flush();
    file_upload.PostedFile.InputStream.Close();
    file_upload.FileContent.Dispose();
}
The references are hanging around in memory. If you are using Visual Studio, try Clean Solution and then Rebuild; if you are on IIS, just recycle your application pool.
To avoid this problem, dispose files once you have used them. Note that FileInfo is not IDisposable; wrap the actual stream instead, something like:
using (var file = new FileStream(path, FileMode.Open, FileAccess.Read))
{
    // use the file
    // it is automatically disposed when the block exits
}
If I have understood the scenario properly:
For the Upload control, I don't think you have to write code for the Upload button. When you click your button, the upload control has locked the file and is using it, so it is already being used by one process; the code written for the button acts as another.
Before anything else, check that your file is not open anywhere else or pending an edit.
In a ListView EditTemplate, I need to allow the user to replace an image. When the form is submitted for updating, how can I determine whether the user is uploading a new image, and how do I get that file's info?
Thanks,
James
You could try something like this if you want to compare file sizes. Granted, file size comparison is not the best check, but FileInfo has lots of other attributes you should be able to use to make the match more reliable.
FileInfo oldFileInfo; // get old file's fileInfo
var tempPath = "some-temp-path-";
var tempFile = String.Format(@"{0}\{1}", tempPath, FileUpload1.FileName);
FileUpload1.SaveAs(tempFile);
FileInfo tempFileInfo = new FileInfo(tempFile);
if(tempFileInfo.Length == oldFileInfo.Length)
{
// ask to upload a different image
}
else
{
// do other stuff
}
Grab the name off of the FileUpload.PostedFile.
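For example, a minimal sketch (the control name FileUpload1 is an assumption): HasFile tells you whether the user actually chose a new image, and PostedFile carries its name and size.
if (FileUpload1.HasFile)
{
    // the user is uploading a replacement image
    string fileName = Path.GetFileName(FileUpload1.PostedFile.FileName);
    int sizeInBytes = FileUpload1.PostedFile.ContentLength;
    // ... save it and update the ListView row ...
}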
I have a lot of XSL files in my ASP.NET web app. A lot. I generate a bunch of AJAX HTML responses using this kind of generic transform method:
public void Transform(XmlDocument xml, string xslPath)
{
...
XslTransform myXslTrans = new XslTransform();
myXslTrans.Load(xslPath);
myXslTrans.Transform(xml,null, HttpContext.Current.Response.Output);
}
I'd like to move the XSL definitions into SQL Server, using a column of type xml.
I would store an entire XSL file in a single row in SQL, and each XSL is self-contained (no imports). I would read out the XSL definition from SQL into my XslTransform object.
Something like this:
public void Transform(XmlDocument xml, string xslKey)
{
...
SqlCommand cmd = new SqlCommand("GetXslDefinition");
cmd.Parameters.Add("@xslKey", SqlDbType.VarChar).Value = xslKey;
// where the result set has a single column of XSL: "<xslt:stylesheet>..."
...
SqlDataReader dr = cmd.ExecuteReader();
if(dr.Read()) {
SqlXml xsl = dr.GetSqlXml(0);
XslTransform myXslTrans = new XslTransform();
myXslTrans.Load(xsl.CreateReader());
myXslTrans.Transform(xml,null, HttpContext.Current.Response.Output);
}
}
It seems like a straightforward way to:
add metadata to each XSL, like lastUsed, useCount, etc.
gain bulk update/search capabilities
prevent lots of disk access
avoid referencing relative paths and organizing files
allow XSL changes without redeploying (I could even write an admin page that selects/updates the XSL in the database)
Has anyone tried this before? Are there any caveats?
EDIT
Caveats that responders have listed:
disk access isn't guaranteed to diminish
this will break xsl:includes
The two big issues I can see are:
We use a lot of includes to ensure that we only do things once; storing the XSLT in the database would stop us from doing that.
It makes updating XSLTs more interesting. We've been quite happy to drop new .xsl files into deployed sites without doing a full update of the site. For that matter, we've got bits of code that look for client-specific XSL in a folder, and those bits can reach back up to common code (templates) in the root. So I'm not sure about the redeploy point at all, but this will depend very much on the particular use case; yours is certainly different from ours.
In terms of disk access, hmm... the DB still has to hit the disk to pull the data, and if you're talking about caching, then the DB isn't a requirement for enabling caching.
I have to agree about the update/search options. You can do some of it with PowerShell, but that needs to run on the server, and that's not always a good idea.
Technically I can see no reason why not (excepting the wish to do includes, as above), but practically it seems fairly balanced, with good arguments either way.
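On the caching point: a minimal sketch of caching compiled transforms in memory, which works the same whether the stylesheet text comes from disk or from the database. The cache dictionary and key scheme are assumptions, and XslCompiledTransform stands in for the older XslTransform used above (assumes System.Collections.Generic, System.IO, System.Xml, and System.Xml.Xsl).
private static readonly Dictionary<string, XslCompiledTransform> _xslCache =
    new Dictionary<string, XslCompiledTransform>();

public XslCompiledTransform GetTransform(string xslKey, string xslText)
{
    lock (_xslCache)
    {
        XslCompiledTransform xslt;
        if (!_xslCache.TryGetValue(xslKey, out xslt))
        {
            // compile once; reuse the compiled transform on later requests
            xslt = new XslCompiledTransform();
            using (var reader = XmlReader.Create(new StringReader(xslText)))
            {
                xslt.Load(reader);
            }
            _xslCache[xslKey] = xslt;
        }
        return xslt;
    }
}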
I store XSLTs in a database in my application, dbscript. (However, I keep them in an NVARCHAR column, since it also has to run on SQL Server 2000.)
Since users are able to edit their XSLTs, I needed to write a custom validator which loads the text of the TextBox into a .NET XslCompiledTransform object, like this:
args.IsValid = true;
if (args.Value.Trim() == "")
return;
try
{
System.IO.TextReader rd = new System.IO.StringReader(args.Value);
System.Xml.XmlReader xrd = System.Xml.XmlReader.Create(rd);
System.Xml.Xsl.XslCompiledTransform xslt = new System.Xml.Xsl.XslCompiledTransform();
System.Xml.Xsl.XsltSettings xslts = new System.Xml.Xsl.XsltSettings(false, false);
xslt.Load(xrd, xslts, new System.Xml.XmlUrlResolver());
xrd.Close();
}
catch (Exception ex)
{
this.ErrorMessage = (string.IsNullOrEmpty(sErrorMessage) ? "" : sErrorMessage + "<br/>") +
    ex.Message;
if (ex.InnerException != null)
{
ex = ex.InnerException;
this.ErrorMessage += "<br />" + ex.Message;
}
args.IsValid = false;
}
As for your points:
file I/O will be replaced by database-generated disk I/O, so no gains there
deployment changes to providing an INSERT/UPDATE script containing the new data
I am doing a lot of image processing in GDI+ in .NET in an ASP.NET application.
I frequently find that Image.FromFile() is keeping a file handle open.
Why is this? What is the best way to open an image without the file handle being retained?
NB: I'm not doing anything stupid like keeping the Image object lying around, and even if I was, I wouldn't expect the file handle to be kept active.
I went through the same journey as a few other posters on this thread. Things I noted:
Using Image.FromFile does seem unpredictable about when it releases the file handle. Calling Image.Dispose() did not release the file handle in all cases.
Using a FileStream and the Image.FromStream method works, and releases the handle on the file if you call Dispose() on the FileStream or wrap the whole thing in a using {} statement, as recommended by Kris. However, if you then attempt to save the Image object to a stream, the Image.Save method throws the exception "A generic error occurred in GDI+". Presumably something in the Save method wants to know about the originating file.
Steven's approach worked for me. I was able to delete the originating file with the Image object in memory. I was also able to save the Image to both a stream and a file (I needed to do both). I was even able to save to a file with the same name as the originating file, something that is documented as not possible if you use Image.FromFile. (I find this weird, since surely this is the most likely use case, but hey.)
So to summarise, open your Image like this:
Image img = Image.FromStream(new MemoryStream(File.ReadAllBytes(path)));
You are then free to manipulate it (and the originating file) as you see fit.
I have had the same problem and resorted to reading the file using:
return Image.FromStream(new MemoryStream(File.ReadAllBytes(fileName)));
Image.FromFile keeps the file handle open until the image is disposed. From the MSDN:
"The file remains locked until the Image is disposed."
Use Image.FromStream, and you won't have the problem.
using(var fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
{
return Image.FromStream(fs);
}
Edit (a year and a bit later):
The above code is dangerous, as its timing is unpredictable: at some point (after closing the FileStream) you may get the dreaded "A generic error occurred in GDI+". I would amend it to:
Image tmpImage;
Bitmap returnImage;
using(var fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
{
tmpImage = Image.FromStream(fs);
// copy into a fresh Bitmap so nothing references the stream after it closes
returnImage = new Bitmap(tmpImage);
tmpImage.Dispose();
}
return returnImage;
Make sure you are disposing properly.
using (Image.FromFile("path")) {}
The using expression is shorthand for:
IDisposable obj = ...;
try
{
    // use obj
}
finally
{
    obj.Dispose();
}
@Rex: in the case of Image.Dispose, it calls the GdipDisposeImage extern/native Win32 call in its Dispose().
IDisposable is used as a mechanism to free unmanaged resources (which file handles are).
I also tried all your tips (ReadAllBytes, FileStream => FromStream => new Bitmap() to make a copy, etc.) and they all worked. However, I wondered whether something shorter could be found, and
using (Image temp = Image.FromFile(path))
{
return new Bitmap(temp);
}
appears to work too, as it disposes the file handle as well as the original Image object and creates a new Bitmap object that is independent of the original file and can therefore be saved to a stream or file without errors.
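A brief usage sketch of that claim (the path variable and the PNG output format are assumptions): the handle on the file is released when temp is disposed, so saving over the originating path succeeds.
Bitmap copy;
using (Image temp = Image.FromFile(path))
{
    // Bitmap(Image) copies the pixel data, so copy has no tie to the file
    copy = new Bitmap(temp);
} // temp is disposed here, releasing the handle on the file
copy.Save(path, System.Drawing.Imaging.ImageFormat.Png); // overwrite the originating file
copy.Dispose();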
I would have to point my finger at the garbage collector. Leaving the Image around is not really the issue if you are at the mercy of garbage collection.
This guy had a similar complaint... and he found a workaround of using a FileStream object rather than loading directly from the file.
public static Image LoadImageFromFile(string fileName)
{
    Image theImage = null;
    // read the whole file into memory, then release the handle immediately
    using (FileStream fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read))
    {
        byte[] img = new byte[fileStream.Length];
        fileStream.Read(img, 0, img.Length);
        theImage = Image.FromStream(new MemoryStream(img));
    }
...
It seems like a complete hack...
As mentioned above, the Microsoft workaround causes a GDI+ error after several images have been loaded. The VB solution for me, as mentioned above by Steven, is:
picTemp.Image = Image.FromStream(New System.IO.MemoryStream(My.Computer.FileSystem.ReadAllBytes(strFl)))
I just encountered the same problem while trying to merge multiple single-page TIFF files into one multipart TIFF image. I needed to use Image.Save() and Image.SaveAdd(): https://msdn.microsoft.com/en-us/library/windows/desktop/ms533839%28v=vs.85%29.aspx
The solution in my case was to call .Dispose() on each of the images as soon as I was done with them:
' Set up the TIFF encoder and its parameters first (setup assumed; requires System.Linq):
Dim encoderInfo As ImageCodecInfo = ImageCodecInfo.GetImageEncoders().First(Function(e) e.MimeType = "image/tiff")
Dim encoderParams As New EncoderParameters(2)
' Iterate through each single-page source .tiff file
Dim initialTiff As System.Drawing.Image = Nothing
For Each filePath As String In srcFilePaths
Using fs As System.IO.FileStream = File.Open(filePath, FileMode.Open, FileAccess.Read)
If initialTiff Is Nothing Then
' ... Save 1st page of multi-part .TIFF
initialTiff = Image.FromStream(fs)
encoderParams.Param(0) = New EncoderParameter(Encoder.Compression, EncoderValue.CompressionCCITT4)
encoderParams.Param(1) = New EncoderParameter(Encoder.SaveFlag, EncoderValue.MultiFrame)
initialTiff.Save(outputFilePath, encoderInfo, encoderParams)
Else
' ... Save subsequent pages
Dim newTiff As System.Drawing.Image = Image.FromStream(fs)
encoderParams = New EncoderParameters(2)
encoderParams.Param(0) = New EncoderParameter(Encoder.Compression, EncoderValue.CompressionCCITT4)
encoderParams.Param(1) = New EncoderParameter(Encoder.SaveFlag, EncoderValue.FrameDimensionPage)
initialTiff.SaveAdd(newTiff, encoderParams)
newTiff.Dispose()
End If
End Using
Next
' Make sure to close the file
initialTiff.Dispose()