I'm a little stuck trying to upload files into our SQL database using FILESTREAM. I've followed this example http://www.codeproject.com/Articles/128657/How-Do-I-Use-SQL-File-Stream but the difference is that we upload the file in 10 MB chunks.
On the first chunk a record is created in the DB with empty content (so that a file is created), and then OnUploadChunk is called for each chunk.
The file uploads OK, but when I check, a new file has been created for each chunk, so for a 20 MB file, for example, I have one file which is 0 KB, another which is 10 MB, and a final one which is 20 MB. I'm expecting one file of 20 MB.
I'm guessing this has to do with how I get the transaction context, or that I'm using TransactionScope incorrectly, which I don't quite fully grasp yet. I presume the context may be different for each chunk, since each chunk is a separate round trip from client to server.
Here is the method which is called every time a chunk is sent from the client (using Plupload, if that's of any relevance).
protected override bool OnUploadChunk(Stream chunkStream, string DocID)
{
    BinaryReader b = new BinaryReader(chunkStream);
    byte[] binData = b.ReadBytes((int)chunkStream.Length);
    using (TransactionScope transactionScope = new TransactionScope())
    {
        // Folder path the file is sitting in
        string FilePath = GetFilePath(DocID);
        // Gets size of file that has been uploaded so far
        long currentFileSize = GetCurrentFileSize(DocID);
        // Essentially this is just SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()
        byte[] transactionContext = GetTransactionContext();
        SqlFileStream filestream = new SqlFileStream(FilePath, transactionContext, FileAccess.ReadWrite);
        filestream.Seek(currentFileSize, SeekOrigin.Begin);
        filestream.Write(binData, 0, binData.Length);
        filestream.Close();
        transactionScope.Complete();
    }
    return true;
}
UPDATE:
I've done a little research and I believe the issue is around this:
FILESTREAM does not currently support in-place updates. Therefore an update to a column with the FILESTREAM attribute is implemented by creating a new zero-byte file, which then has the entire new data value written to it. When the update is committed, the file pointer is then changed to point to the new file, leaving the old file to be deleted at garbage collection time. This happens at a checkpoint for simple recovery, and at a backup or log backup.
So do I just have to wait for the garbage collector to remove the chunked files? Or should I perhaps upload the file somewhere on the file system first and then copy it across?
Yes, you will have to wait for SQL Server to clean up the files for you.
Unless you have other system constraints, you should be able to stream the entire file all at once, inside a single transaction. This will give you a single file on the SQL Server side.
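For illustration, here is a minimal sketch of that approach. It reuses the GetFilePath and GetTransactionContext helpers from the question (assumed to run inside the ambient transaction); the method name OnUploadComplete is hypothetical. Because the SqlFileStream is opened and fully written inside one TransactionScope, SQL Server sees a single update and so creates a single FILESTREAM file:
protected bool OnUploadComplete(Stream fileStream, string DocID)
{
    using (TransactionScope transactionScope = new TransactionScope())
    {
        string filePath = GetFilePath(DocID);                 // helper from the question
        byte[] transactionContext = GetTransactionContext();  // helper from the question
        using (SqlFileStream sqlStream = new SqlFileStream(filePath, transactionContext, FileAccess.Write))
        {
            // Copy the whole upload in one pass; one transaction means one file.
            byte[] buffer = new byte[64 * 1024];
            int bytesRead;
            while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                sqlStream.Write(buffer, 0, bytesRead);
            }
        }
        transactionScope.Complete();
    }
    return true;
}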
My question is a bit similar to this one, but mine concerns ASP.NET and my requirements are slightly different: Android append files to a zip file without having to re-write the entire zip file?
I need to insert data into a zip file downloaded by users (not much, 1 KB of data at most; this is data for AdWords offline conversion, actually). The zip file is downloaded through an ASP.NET website. Because the zip file is already large enough (tens of MB), to avoid overloading the server I need to insert this data without re-compressing everything. I can think of two ways to do this.
Way A: Find a zip technology that lets me embed a particular file in the ZIP file, with this particular file stored uncompressed. Assuming there is no checksum, it would then be easy to just overwrite the bits of this uncompressed file with my specific data, in the zip file itself. If possible, this would have to be supported by all unzip tools (Windows integrated zip, WinRAR, 7-Zip...).
Way B: Append an extra file to the original ZIP file without having to recompress it. This extra file would have to be stored in an embedded folder in the ZIP file.
I looked a bit at SevenZipSharp, which has an enumeration SevenZip.CompressionMode with the values Create and Append; that leads me to think Way B could be implemented. DotNetZip also seems to work pretty well with streams, according to its FAQ.
But if Way A were possible I'd much prefer it, since no extra zip library would be needed on the server side!
OK, thanks to DotNetZip I am able to do what I want in a very resource-efficient way:
using System.IO;
using Ionic.Zip;

class Program {
    static void Main(string[] args) {
        byte[] buffer;
        using (var memoryStream = new MemoryStream()) {
            using (var zip = new ZipFile(@"C:\temp\MylargeZipFile.zip")) {
                // The file whose content is overridden in MylargeZipFile.zip
                // has the path "Path\FileToUpdate.txt"
                zip.UpdateEntry(@"Path\FileToUpdate.txt", "Hello My New Content");
                zip.Save(memoryStream);
            }
            buffer = memoryStream.ToArray();
        }
        // Here the buffer will be sent to httpResponse
        // httpResponse.Clear();
        // httpResponse.AddHeader("Content-Disposition", "attachment; filename=MylargeZipFile.zip");
        // httpResponse.ContentType = "application/octet-stream";
        // httpResponse.BinaryWrite(buffer);
        // httpResponse.BufferOutput = true;
        // Just to check it worked!
        File.WriteAllBytes(@"C:\temp\Result.zip", buffer);
    }
}
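If it matters that the small entry stays uncompressed (closer to Way A), DotNetZip can also store entries without compression. A hedged sketch, reusing the hypothetical paths from above; I haven't verified how every unzip tool treats such an entry:
using (var zip = new ZipFile(@"C:\temp\MylargeZipFile.zip")) {
    // Store the small data file without deflating it, so its bytes stay
    // directly addressable inside the archive.
    zip.CompressionLevel = Ionic.Zlib.CompressionLevel.None;
    zip.UpdateEntry(@"Path\FileToUpdate.txt", "Hello My New Content");
    zip.Save(); // rewrites the archive in place
}
Note that DotNetZip copies unchanged entries as-is when saving, so the large entries are not re-deflated either way.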
I am creating a custom timer job service in SharePoint 2010 using ASP.NET 3.5 and C#. The business logic is that I have to create a zip file containing a set of Excel application reports for each client. For this I am using the third-party Ionic.Zip DLL, with the ZipFile class creating the zip file and storing it on disk at some path. The code contains two foreach loops: the outer one over the list of clients and the inner one over the list of applications, since each client may have any number of applications. I add these application reports to the zip file, store it on disk, and attach the file to a mail sent to the client.

My problem is that when I try to delete the zip file before moving on to the next client (so that no files are left on disk), I get the error "The process cannot access the file because it is being used by another process". I have also tried attaching the Excel report's output stream directly as the mail attachment, but then I get zero bytes in the attachment. How should I overcome this error?
Here is simplified code:
foreach (list of clients) // may have any number of clients
{
    string zipFileDownloadPath = String.Empty;
    foreach (list of applications) // may have any number of applications
    {
        HttpWebResponse resp = (HttpWebResponse)httpReq.GetResponse();
        Stream excelReport = resp.GetResponseStream();
        zipFile.AddEntry(appName, excelReport);
    }
    zipFileDownloadPath = clientFolder + @"\" + client["client_name"] + "_" + reportDate + ".zip";
    zipFile.Save(zipFileDownloadPath);
    mail.Attachments.Add(new Attachment(zipFileDownloadPath));
    smtp.Send(mail); // mail has body, subject, etc.
    // here I am deleting the files
    if (Directory.Exists(clientFolder))
    {
        Directory.Delete(clientFolder, true); // here I am getting the error
    }
}
In the above code I also tried to save the zip file to an output stream, so that there would be no need to store files on disk, and to attach this stream to the mail attachment. The problem is that I get the proper bytes in the output stream, but when the mail is sent I get zero bytes in the attachment.
Here is the code for attaching the output stream to the mail:
foreach (list of clients) // may have any number of clients
{
    foreach (list of applications) // may have any number of applications
    {
        HttpWebResponse resp = (HttpWebResponse)httpReq.GetResponse();
        Stream excelReport = resp.GetResponseStream();
        zipFile.AddEntry(appName, excelReport);
    }
    Stream outputStream = new MemoryStream();
    zipFile.Save(outputStream);
    mail.Attachments.Add(new Attachment(outputStream, "ZipFileName.zip", MediaTypeNames.Application.Zip));
    smtp.Send(mail); // mail has body, subject, etc.
}
Try moving the position of the stream to its beginning before handing it to the attachment:
outputStream.Seek(0, SeekOrigin.Begin);
Also, before deleting your file, make sure you dispose of the zipFile object:
zipFile.Dispose();
Or alternatively (better), wrap it in a using statement.
Also, unless I am missing something: if you are using streams anyway, why do you need to save the files to the hard drive at all? Just use the streams, something along the lines of:
var ms = new MemoryStream();
zipFile.Save(ms);
ms.Seek(0, SeekOrigin.Begin);
mail.Attachments.Add(new Attachment(ms, "ZipFileName.zip", MediaTypeNames.Application.Zip));
zipFile.Dispose();
Special thanks to Luis. Luis has solved my problem.
Hi everyone, I have finally solved my problem. The problem was that after saving the zip file to the output stream, the stream had been read to its last position, and I was attaching that same stream to the attachment; that's why I was getting zero bytes in the mail attachment. The solution is to seek the output stream back to the beginning after saving the zip file to it, and before attaching it to the mail. Please refer to the following code:
Stream outputStream = new MemoryStream();
zipFile.Save(outputStream);
outputStream.Seek(0, SeekOrigin.Begin);
mail.Attachments.Add(new Attachment(outputStream, "ZipFileName.zip", MediaTypeNames.Application.Zip));
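For completeness, here is a hedged sketch of the whole per-client loop done fully in memory, with using blocks handling disposal as Luis suggested. Names like clients, applications, httpReq, appName, mail and smtp are the placeholders from the question, and "Report.zip" is an arbitrary example filename:
foreach (var client in clients) // placeholder collection from the question
{
    using (var zipFile = new ZipFile())
    using (var outputStream = new MemoryStream())
    {
        foreach (var app in applications) // placeholder collection from the question
        {
            HttpWebResponse resp = (HttpWebResponse)httpReq.GetResponse();
            zipFile.AddEntry(appName, resp.GetResponseStream());
        }
        zipFile.Save(outputStream);
        outputStream.Seek(0, SeekOrigin.Begin); // rewind before attaching
        // In a real implementation the MailMessage would be rebuilt per client.
        mail.Attachments.Add(new Attachment(outputStream, "Report.zip", MediaTypeNames.Application.Zip));
        smtp.Send(mail);
    }
    // Nothing is written to disk, so there is nothing to delete.
}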
Long-time lurker, first-time poster. I've been working with .NET / LINQ for just a few years, so I'm sure I'm missing something here. After countless hours of research I need help.
I based my code on a suggestion from http://damieng.com/blog/2010/01/11/linq-to-sql-tips-and-tricks-3
The following code currently saves a chosen file (pdf, doc, png, etc.) stored in a SQL database to C:\temp. It works great. Now I want to take it one step further: instead of saving it automatically to C:\temp, can I have the browser prompt the user so they can save it to a location of their choosing?
{
    var getFile = new myDataClass();
    // retrieve attachment id from selected row
    int attachmentId = Convert.ToInt32((this.gvAttachments.SelectedRow.Cells[1].Text));
    // retrieve attachment information from data class (SQL attachment table)
    var results = from file in getFile.AttachmentsContents
                  where file.Attachment_Id == attachmentId
                  select file;
    string writePath = @"c:\temp";
    var myFile = results.First();
    File.WriteAllBytes(Path.Combine(writePath, myFile.attach_Name), myFile.attach_Data.ToArray());
}
So instead of using File.WriteAllBytes, can I take the data returned from my LINQ query (myFile) and pass it to something that prompts the user to save the file instead? Can this returned object be used with Response.TransmitFile? Thanks so much.
Just use the BinaryWrite(myFile.attach_Data.ToArray()) method to send the data, since it is already in memory.
But first set headers appropriately, for example:
"Content-Disposition", "attachment; filename="+myFile.attach_Name
"Content-Type", "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
Content-Type guides the receiving system on how it should handle the file. Here are more MS Office content types. If the content type is known at the point the data is stored, it should be stored too.
Also, since the file content is the only data you want in the response, call Clear before and End after the BinaryWrite.
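Put together, a minimal sketch of that response (myFile comes from the LINQ query above; "application/octet-stream" is just a generic fallback, the stored content type is better when you have it):
// Send the stored bytes back as a browser download prompt.
Response.Clear();
Response.AddHeader("Content-Disposition", "attachment; filename=" + myFile.attach_Name);
Response.ContentType = "application/octet-stream"; // or the stored content type
Response.BinaryWrite(myFile.attach_Data.ToArray());
Response.End();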
I have an ASP.NET web application that allows the user to upload a file from their PC to a SQL Server database (it is later used to generate an image for an <img> tag). Is there an "easy" way to test the image within .NET to validate that it does not contain anything malicious before saving it?
Right now, I use this:
// Re-encode the uploaded file as a PNG in memory
MemoryStream F = new MemoryStream();
Bitmap TestBitmap = new Bitmap(Filename);
TestBitmap.Save(F, System.Drawing.Imaging.ImageFormat.Png);
// Copy the re-encoded image into the byte array that gets saved to the database
int PhotoSize = (int)F.Length;
Photo = new byte[PhotoSize];
F.Seek(0, SeekOrigin.Begin);
int BytesRead = F.Read(Photo, 0, PhotoSize);
F.Close();
Creating TestBitmap fails if the file is not an image (e.g. if Filename is the name of a text file). Apparently, though, that doesn't stop a file that is a valid image with malicious code appended to it from loading as an image, so re-encoding it into a MemoryStream and then writing the stream to a byte array (which is later saved in the database) supposedly fixes this.
To prevent people from passing programs and other data through your site's photo-upload feature, you can take two main steps (sketched in code after this list):
Read the image and save it again with your own code, to strip out anything else.
Limit the size of each image to a sensible number.
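A minimal sketch of both steps, along the lines of the question's own code (the 5 MB cap is an arbitrary example):
// Reject anything over an arbitrary 5 MB cap before touching it
if (new FileInfo(Filename).Length > 5 * 1024 * 1024)
    throw new InvalidOperationException("Image too large.");
// Re-encoding through Bitmap drops any trailing bytes appended to the original file
using (var bitmap = new Bitmap(Filename))
using (var cleanStream = new MemoryStream())
{
    bitmap.Save(cleanStream, System.Drawing.Imaging.ImageFormat.Png);
    byte[] cleanBytes = cleanStream.ToArray(); // safe to store in the database
}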
To keep someone from uploading bad code and running it on your server, keep uploads in an isolated folder without permission to execute anything. More information about that:
I've been hacked. Evil aspx file uploaded called AspxSpy. They're still trying. Help me trap them‼
And a general topic on the same subject: Preparing an ASP.Net website for penetration testing
I have written file downloading in a simple manner:
@ResourceMapping(value = "content")
public void download(ResourceRequest request, ResourceResponse response) {
    //...
    SerializableInputStream serializableInputStream = someService.getSerializableInputStream(id_of_some_file);

    response.addProperty(HttpHeaders.CACHE_CONTROL, "max-age=3600, must-revalidate");
    response.setContentType(contentType);
    response.addProperty(HttpHeaders.CONTENT_TYPE, contentType);
    response.addProperty(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename*=UTF-8''"
            + URLEncoder.encode(fileName, "UTF-8"));

    OutputStream outputStream = response.getPortletOutputStream();
    byte[] parcel = new byte[4096];
    int bytesRead;
    while ((bytesRead = serializableInputStream.read(parcel)) != -1)
        outputStream.write(parcel, 0, bytesRead);
    outputStream.flush();
    serializableInputStream.close();
    outputStream.close();
    //...
}
The SerializableInputStream is described here - JavaDocs. It allows an InputStream to be serialized and, for instance, passed over remoting.
I read from the input and write to the output, not all bytes at once. But unfortunately the portlet isn't "streaming" the contents: the file (e.g. an image) is sent to the browser only after the entire input stream has been read - this is how it looks. I can see the file being read from the database (in live logs), but I don't see any "growing" image on the screen.
What am I doing wrong? Is it possible to really stream a file in Liferay 6.0.6 and Spring Portlet MVC?
Where are you doing this? I fear you're doing it instead of rendering your portlet's HTML (i.e. in the render phase). Typically the portlet content is embedded in an HTML page, so you need the resource phase, which (roughly) behaves like a servlet.
Also, the code you give does not match the actual question you ask: you use a comment //read from input stream (file), write file to os and ask what to do differently in order to not have the full content in memory.
As the comment does not keep anything in memory, and you could loop through reading from the input file while writing to the output stream: what's the underlying question? Do you have problems implementing download streaming in a portal environment, or difficulties (i.e. using too much memory) reading from a file while writing to a stream?
Edit: Thanks for clarifying. Have you tried flushing the stream earlier? You can do that whenever you want, e.g. on every loop iteration (though that might be a bit much). Also, keep in mind that the browser as well as the file format itself must handle it the way you expect: if an image is not encoded "incrementally", a browser might not show it progressively.
Have you tried this with huge files as well? It might be that automatic flushing is just not triggered because your files are too small for it.
Also, I think that filename*=UTF-8'' looks strange. It might be valid encoding, but I've never seen it.