Save multiple images to local disk from a folder in ASP.NET

I've got a file download issue; can you help me with it?
Here is the code:
DirectoryInfo directoryInfo = new DirectoryInfo(Server.MapPath(@"/Bailiffs/BailiffFiles/"));
string cukurNumber = string.Empty;
if (txtCukurNumber.Text != string.Empty) {
    cukurNumber = txtCukurNumber.Text;
}
FileInfo[] fileInfoEnum = directoryInfo.GetFiles(cukurNumber + "*");
Response.Clear();
Response.AddHeader("Content-Disposition", "attachment;filename=" + txtCukurNumber.Text + ".zip");
Response.ContentType = "application/zip";
using (ZipOutputStream zipStream = new ZipOutputStream(Response.OutputStream)) {
    zipStream.SetLevel(9);
    byte[] zipBuffer = new byte[4096];
    foreach (FileInfo fileInfo in fileInfoEnum) {
        string fileFullName = fileInfo.FullName;
        ZipEntry zipEntry = new ZipEntry(Path.GetFileName(fileFullName));
        zipEntry.DateTime = DateTime.Now;
        zipStream.PutNextEntry(zipEntry);
        using (FileStream fileStream = File.OpenRead(fileFullName)) {
            int sourceBytes = 0;
            do {
                sourceBytes = fileStream.Read(zipBuffer, 0, zipBuffer.Length);
                zipStream.Write(zipBuffer, 0, sourceBytes);
            } while (sourceBytes > 0);
        }
    }
    zipStream.Finish();
    zipStream.Close();
    Response.Flush();
    Response.End();
}
This code is supposed to get all the image files matching the filter and save them to disk, but the browser's save file dialog only opens once and a single bizarre file is saved. Where am I going wrong?
Thanks.
Edit: The bizarre file issue is solved; the main issue now is that a single file is saved instead of multiple files.
Thanks again.

Although you are looping over each file in the directory, once you do your Response.End() on the first iteration of the loop, the response to the user is done. They would only get the first file that is found by the enumerator.
The browser doesn't have a concept of receiving multiple files in the way you are attempting.
You may consider collecting the various image files and putting them together in a ZIP file, and then returning a single ZIP back to the user.
Here is example code that will build a ZIP (using SharpZipLib) of the images and reply with a single file called "images.zip".
Include these using statements for SharpZipLib:
using ICSharpCode.SharpZipLib.Core;
using ICSharpCode.SharpZipLib.Zip;
using ICSharpCode.SharpZipLib.Checksums;
Then in the method where you want to stream back the ZIP file:
DirectoryInfo directoryInfo = new DirectoryInfo(Server.MapPath(@"/Bailiffs/BailiffFiles/"));
string cukurNumber = string.Empty;
if (txtCukurNumber.Text != string.Empty) {
    cukurNumber = txtCukurNumber.Text;
}
IEnumerable<FileInfo> fileInfoEnum = directoryInfo.EnumerateFiles(cukurNumber + "*");
Response.Clear();
Response.AddHeader("Content-Disposition", "attachment;filename=images.zip");
Response.ContentType = "application/zip";
using (ZipOutputStream zipstream = new ZipOutputStream(Response.OutputStream)) {
    zipstream.SetLevel(9); // 0-9, 9 being the highest compression
    byte[] buffer = new byte[4096];
    foreach (FileInfo fileInfo in fileInfoEnum) {
        string file = fileInfo.FullName;
        ZipEntry entry = new ZipEntry(Path.GetFileName(file));
        entry.DateTime = DateTime.Now;
        zipstream.PutNextEntry(entry);
        using (FileStream fs = File.OpenRead(file)) {
            int sourceBytes;
            do {
                sourceBytes = fs.Read(buffer, 0, buffer.Length);
                zipstream.Write(buffer, 0, sourceBytes);
            } while (sourceBytes > 0);
        }
    }
    zipstream.Finish();
    zipstream.Close();
}
Response.Flush();
Response.End();

Related

File Upload to Database for ASP.Net Webpages

I'm having some trouble finding a way to upload a document to the database as varbinary(max) with ASP.NET Web Pages 2, and I would also like to download it.
So far what I have is the code below, which is supposed to upload a file to a directory on the website, but it isn't doing anything. Any help would be great. Thanks.
var fileName = "";
var fileSavePath = "";
int numFiles = Request.Files.Count;
int uploadedCount = 0;
for (int i = 0; i < numFiles; i++)
{
    var uploadedFile = Request.Files[i];
    if (uploadedFile.ContentLength > 0)
    {
        fileName = Path.GetFileName(uploadedFile.FileName);
        fileSavePath = Server.MapPath("~/UploadedFiles/" + fileName);
        uploadedFile.SaveAs(fileSavePath);
        uploadedCount++;
    }
}
message = "File upload complete. Total files uploaded: " + uploadedCount.ToString();
The following code goes at the top of the page where you have your file upload. Note that you should amend the table and field names according to your database. Also, you should ensure that the form that includes your upload control has the enctype attribute set to multipart/form-data:
@{
    int id = 0;
    var fileName = "";
    var fileMime = "";
    if (IsPost) {
        var uploadedFile = Request.Files[0];
        fileName = Path.GetFileName(uploadedFile.FileName);
        if (fileName != String.Empty)
        {
            fileMime = uploadedFile.ContentType;
            var fileStream = uploadedFile.InputStream;
            var fileLength = uploadedFile.ContentLength;
            byte[] fileContent = new byte[fileLength];
            fileStream.Read(fileContent, 0, fileLength);
            var db = Database.Open("FileUploading");
            var sql = "INSERT INTO Files (FileName, FileContent, MimeType) VALUES (@0, @1, @2)";
            db.Execute(sql, fileName, fileContent, fileMime);
        }
    }
}
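For reference, the upload form that posts to this page might look something like the following minimal sketch (the control name "fileUpload" is a placeholder, not part of the original answer); the important part is the enctype attribute mentioned above:
<form method="post" enctype="multipart/form-data">
    <input type="file" name="fileUpload" />
    <input type="submit" value="Upload" />
</form>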
To display a file from the database, you need a separate "handler" file that contains just this code:
@{
    int id = 0;
    if (Request["Id"].IsInt()) {
        id = Request["Id"].AsInt();
        var db = Database.Open("FileUploading");
        var sql = "Select * From Files Where FileId = @0";
        var file = db.QuerySingle(sql, id);
        if (file.MimeType.StartsWith("image/")) {
            Response.AddHeader("content-disposition", "inline; filename=" + file.FileName);
        } else {
            Response.AddHeader("content-disposition", "attachment; filename=" + file.FileName);
        }
        Response.ContentType = file.MimeType;
        Response.BinaryWrite((byte[])file.FileContent);
    }
}
This file is used as the src attribute for an image file or as the URL for a link to a file that should be downloaded such as a PDF or Word file. If you call this handler file "download.cshtml", the link for an image file saved in the database should look like this:
<img src="download.cshtml?Id=@id" alt="" />
where the Id parameter value is the id of the file in the database. A download link looks like this:
<a href="download.cshtml?Id=@id">Click Here</a>
All of this has been taken from my article: http://www.mikesdotnetting.com/Article/148/Save-And-Retrieve-Files-From-a-Sql-Server-CE-Database-with-WebMatrix. The only difference is that the article features a SQL Server Compact database, where the data type for files is image as opposed to varbinary(max) in SQL Server.
Based on your code, you are not uploading the image to the database; instead you're saving the image to a folder located in your site root (/UploadedFiles).
To store the image in the database, you should use code like this:
using (Stream fs = uploadedFile.PostedFile.InputStream)
{
    using (BinaryReader br = new BinaryReader(fs))
    {
        byte[] bytes = br.ReadBytes((Int32)fs.Length);
        string contentType = uploadedFile.PostedFile.ContentType;
        SqlParameter[] arParams = new SqlParameter[3];
        arParams[0] = new SqlParameter("@ID", SqlDbType.Int);
        arParams[0].Value = 1; // example
        arParams[1] = new SqlParameter("@contentType", SqlDbType.VarChar, 50);
        arParams[1].Value = contentType;
        arParams[2] = new SqlParameter("@document", SqlDbType.VarBinary, -1); // -1 = varbinary(max)
        arParams[2].Value = bytes;
        SqlHelper.ExecuteNonQuery(SQLConn, CommandType.StoredProcedure, "Upload_Attachment", arParams);
    }
}
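Since the question also asks about downloading the stored file, here is a minimal sketch of how that side might look with plain ADO.NET. The table and column names (Files, ContentType, Document) are assumptions based on the stored procedure's parameter names, and connectionString and id are placeholders; this is not part of the original answer:
// Hypothetical sketch: stream a varbinary(max) document back to the browser.
// Table/column names and connectionString/id are illustrative assumptions.
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SELECT ContentType, Document FROM Files WHERE ID = @ID", conn))
{
    cmd.Parameters.AddWithValue("@ID", id);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        if (reader.Read())
        {
            Response.Clear();
            Response.ContentType = reader.GetString(0);        // stored MIME type
            Response.BinaryWrite((byte[])reader["Document"]);  // raw file bytes
            Response.End();
        }
    }
}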

Problems with downloading pdf file from web api service

I'm trying to set up a web api service that searches for a .pdf file in a directory and returns the file if it's found.
The controller
public class ProductsController : ApiController
{
    [HttpPost]
    public HttpResponseMessage Post([FromBody]string certificateId)
    {
        string fileName = certificateId + ".pdf";
        var path = @"C:\Certificates\20487A" + fileName;
        // check the directory for a pdf matching the certificate id
        if (File.Exists(path))
        {
            // if there is a match then return the file
            HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
            var stream = new FileStream(path, FileMode.Open);
            stream.Position = 0;
            result.Content = new StreamContent(stream);
            result.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment") { FileName = fileName };
            result.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/pdf");
            result.Content.Headers.ContentDisposition.FileName = fileName;
            return result;
        }
        else
        {
            HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.Gone);
            return result;
        }
    }
}
I'm calling the service with the following code
private void GetCertQueryResponse(string url, string serial)
{
    string encodedParameters = "certificateId=" + serial.Replace(" ", "");
    HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(url);
    httpRequest.Method = "POST";
    httpRequest.ContentType = "application/x-www-form-urlencoded";
    httpRequest.AllowAutoRedirect = false;
    byte[] bytedata = Encoding.UTF8.GetBytes(encodedParameters);
    httpRequest.ContentLength = bytedata.Length;
    Stream requestStream = httpRequest.GetRequestStream();
    requestStream.Write(bytedata, 0, bytedata.Length);
    requestStream.Close();
    HttpWebResponse response = (HttpWebResponse)httpRequest.GetResponse();
    if (response.StatusCode == HttpStatusCode.OK)
    {
        byte[] bytes = null;
        using (Stream stream = response.GetResponseStream())
        using (MemoryStream ms = new MemoryStream())
        {
            int count = 0;
            do
            {
                byte[] buf = new byte[1024];
                count = stream.Read(buf, 0, 1024);
                ms.Write(buf, 0, count);
            } while (stream.CanRead && count > 0);
            ms.Position = 0;
            bytes = ms.ToArray();
        }
        var filename = serial + ".pdf";
        Response.ContentType = "application/pdf";
        Response.Headers.Add("Content-Disposition", "attachment; filename=\"" + filename + "\"");
        Response.BinaryWrite(bytes);
    }
}
This appears to be working, in the sense that the download file dialog is shown with the correct file name and size, but the download takes only a couple of seconds (when the file sizes are over 30 MB) and the files are corrupt when I try to open them.
Any ideas what I'm doing wrong?
Your code looks similar to what I've used in the past, but below is what I typically use:
Response.AddHeader("content-length", myfile.Length.ToString())
Response.AddHeader("content-disposition", "inline; filename=MyFilename")
Response.AddHeader("Expires", "0")
Response.AddHeader("Pragma", "Cache")
Response.AddHeader("Cache-Control", "private")
Response.ContentType = "application/pdf"
Response.BinaryWrite(finalForm)
I post this for two reasons. One, add the Content-Length header; you may have to indicate how large the file is so the application waits for the whole response.
If that doesn't fix it, set a breakpoint: does the byte array contain the appropriate number of bytes (i.e. 30 million bytes for a 30 MB file)? Have you used Fiddler to see how much content is coming back over the HTTP call?
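Applied to the calling code above, the Content-Length suggestion amounts to adding the header before the bytes are written. A minimal sketch, reusing the bytes and filename variables from the question; whether this alone fixes the corruption is not confirmed in the answer:
// After the response bytes have been collected into the byte[] bytes variable:
Response.ContentType = "application/pdf";
Response.AddHeader("Content-Length", bytes.Length.ToString()); // tell the browser how much data to expect
Response.Headers.Add("Content-Disposition", "attachment; filename=\"" + filename + "\"");
Response.BinaryWrite(bytes);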

Server.MapPath not working in ASP.NET

I am using this code to download an Excel file which exists in my solution. I have added a folder FileUpload and added an Excel file UploadCWF.xlsx. My code works on localhost, but not when I host it on the server; I get the error "Could not find a part of the path". My code:
string filePath = HttpContext.Current.Server.MapPath("~/FileUpload/");
string _DownloadableProductFileName = "UploadCWF.xlsx";
System.IO.FileInfo FileName = new System.IO.FileInfo(filePath + "\\" + _DownloadableProductFileName);
FileStream myFile = new FileStream(filePath + "\\" + _DownloadableProductFileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
//Reads file as binary values
BinaryReader _BinaryReader = new BinaryReader(myFile);
//Check whether file exists in specified location
if (FileName.Exists)
{
    try
    {
        long startBytes = 0;
        string lastUpdateTiemStamp = File.GetLastWriteTimeUtc(filePath).ToString("r");
        string _EncodedData = HttpUtility.UrlEncode(_DownloadableProductFileName, Encoding.UTF8) + lastUpdateTiemStamp;
        Response.Clear();
        Response.Buffer = false;
        Response.AddHeader("Accept-Ranges", "bytes");
        Response.AppendHeader("ETag", "\"" + _EncodedData + "\"");
        Response.AppendHeader("Last-Modified", lastUpdateTiemStamp);
        Response.ContentType = "application/octet-stream";
        Response.AddHeader("Content-Disposition", "attachment;filename=" + FileName.Name);
        Response.AddHeader("Content-Length", (FileName.Length - startBytes).ToString());
        Response.AddHeader("Connection", "Keep-Alive");
        Response.ContentEncoding = Encoding.UTF8;
        //Send data
        _BinaryReader.BaseStream.Seek(startBytes, SeekOrigin.Begin);
        //Dividing the data in 1024 bytes package
        int maxCount = (int)Math.Ceiling((FileName.Length - startBytes + 0.0) / 1024);
        //Download in block of 1024 bytes
        int i;
        for (i = 0; i < maxCount && Response.IsClientConnected; i++)
        {
            Response.BinaryWrite(_BinaryReader.ReadBytes(1024));
            Response.Flush();
        }
    }
    catch (Exception es)
    {
        throw es;
    }
    finally
    {
        Response.End();
        _BinaryReader.Close();
        myFile.Close();
    }
}
else
{
    System.Web.UI.ScriptManager.RegisterStartupScript(this, GetType(),
        "FileNotFoundWarning", "alert('File is not available now!')", true);
}
Please, can someone help me?
You should first concatenate the folder path and file name, and then resolve the result with Server.MapPath.
You should write the code like this:
string filePath = HttpContext.Current.Server.MapPath("~/FileUpload/UploadCWF.xlsx");
System.IO.FileInfo FileName = new System.IO.FileInfo(filePath);

Downloading file on client side using absolute path .NET

string FilePath = HttpUtility.UrlDecode(Request.QueryString.ToString());
string[] s = FilePath.Split(new char[] { ',' });
string path = s[0];
string FileName = s[1];
String str = HttpContext.Current.Request.Url.AbsolutePath;
System.Web.HttpResponse response = System.Web.HttpContext.Current.Response;
response.ClearContent();
response.Clear();
// response.ContentType = "text/plain";
response.AddHeader("Content-Disposition", "attachment; filename=" + FileName+ ";");
response.TransmitFile(path+FileName);
response.Flush();
response.End();
Above is the code in which I get the location of an audio file from another page. The audio file is located on a remote machine which is accessible via a URL, e.g. http://servername/audiofiles/filename.wav. Response.TransmitFile and Response.WriteFile require a virtual path, whereas Response.Write() does not download the file. How can I use the absolute URL instead of a virtual path to download the file?
Found the answer myself from another place:
string FilePath = HttpUtility.UrlDecode(Request.QueryString.ToString());
string[] s = FilePath.Split(new char[] { ',' });
string path = s[0];
string FileName = s[1];
int bytesToRead = 10000;
byte[] buffer = new Byte[bytesToRead];
Stream stream = null;
try
{
    HttpWebRequest fileReq = (HttpWebRequest)HttpWebRequest.Create(path + FileName);
    HttpWebResponse fileResp = (HttpWebResponse)fileReq.GetResponse();
    if (fileReq.ContentLength > 0)
        fileResp.ContentLength = fileReq.ContentLength;
    stream = fileResp.GetResponseStream();
    var resp = HttpContext.Current.Response;
    resp.ContentType = "application/octet-stream";
    resp.AddHeader("Content-Disposition", "attachment; filename=\"" + FileName + "\"");
    resp.AddHeader("Content-Length", fileResp.ContentLength.ToString());
    int length;
    do
    {
        if (resp.IsClientConnected)
        {
            // Read data into the buffer.
            length = stream.Read(buffer, 0, bytesToRead);
            // and write it out to the response's output stream
            resp.OutputStream.Write(buffer, 0, length);
            resp.Flush();
            //Clear the buffer
            buffer = new Byte[bytesToRead];
        }
        else
        {
            // cancel the download if client has disconnected
            length = -1;
        }
    } while (length > 0); //Repeat until no data is read
}
finally
{
    if (stream != null)
    {
        //Close the input stream
        stream.Close();
    }
}

Streaming a zip file over http in .net with SharpZipLib

I'm making a simple download service so a user can download all his images from our site.
To do that I just zip everything to the HTTP stream.
However, it seems everything is stored in memory, and the data isn't sent until the zip file is complete and the output closed.
I want the service to start sending at once, and not use too much memory.
public void ProcessRequest(HttpContext context)
{
    List<string> fileNames = GetFileNames();
    context.Response.ContentType = "application/x-zip-compressed";
    context.Response.AppendHeader("content-disposition", "attachment; filename=files.zip");
    context.Response.ContentEncoding = Encoding.Default;
    context.Response.Charset = "";
    byte[] buffer = new byte[1024 * 8];
    using (ICSharpCode.SharpZipLib.Zip.ZipOutputStream zipOutput = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(context.Response.OutputStream))
    {
        foreach (string fileName in fileNames)
        {
            ICSharpCode.SharpZipLib.Zip.ZipEntry zipEntry = new ICSharpCode.SharpZipLib.Zip.ZipEntry(fileName);
            zipOutput.PutNextEntry(zipEntry);
            using (var fread = System.IO.File.OpenRead(fileName))
            {
                ICSharpCode.SharpZipLib.Core.StreamUtils.Copy(fread, zipOutput, buffer);
            }
        }
        zipOutput.Finish();
    }
    context.Response.Flush();
    context.Response.End();
}
I can see the worker process memory growing while it makes the file, and then it releases the memory when it's done sending. How do I do this without using too much memory?
Disable response buffering with context.Response.BufferOutput = false; and remove the Flush call from the end of your code.
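Applied to the handler above, that suggestion looks roughly like this (a minimal sketch of the change, reusing GetFileNames() from the question; not the answerer's exact code):
public void ProcessRequest(HttpContext context)
{
    context.Response.BufferOutput = false; // stream data to the client as it is written
    List<string> fileNames = GetFileNames();
    context.Response.ContentType = "application/x-zip-compressed";
    context.Response.AppendHeader("content-disposition", "attachment; filename=files.zip");
    byte[] buffer = new byte[1024 * 8];
    using (var zipOutput = new ICSharpCode.SharpZipLib.Zip.ZipOutputStream(context.Response.OutputStream))
    {
        foreach (string fileName in fileNames)
        {
            zipOutput.PutNextEntry(new ICSharpCode.SharpZipLib.Zip.ZipEntry(fileName));
            using (var fread = System.IO.File.OpenRead(fileName))
            {
                ICSharpCode.SharpZipLib.Core.StreamUtils.Copy(fread, zipOutput, buffer);
            }
        }
        zipOutput.Finish();
    }
    // No Response.Flush()/End() here: with buffering disabled,
    // the data has already been streamed to the client.
}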
Use Response.BufferOutput = false; at the start of ProcessRequest and flush the response after each file.
FYI, this is working code that recursively adds an entire tree of files, streaming to the browser:
string path = @"c:\files";
Response.Clear();
Response.ContentType = "application/zip";
Response.AddHeader("Content-Disposition", string.Format("attachment; filename=\"{0}\"", "hive.zip"));
Response.BufferOutput = false;
byte[] buffer = new byte[1024 * 1024];
using (ZipOutputStream zo = new ZipOutputStream(Response.OutputStream, 1024 * 1024)) {
    zo.SetLevel(0);
    DirectoryInfo di = new DirectoryInfo(path);
    foreach (string file in Directory.GetFiles(di.FullName, "*.*", SearchOption.AllDirectories)) {
        string folder = Path.GetDirectoryName(file);
        if (folder.Length > di.FullName.Length) {
            folder = folder.Substring(di.FullName.Length).Trim('\\') + @"\";
        } else {
            folder = string.Empty;
        }
        zo.PutNextEntry(new ZipEntry(folder + Path.GetFileName(file)));
        using (FileStream fs = File.OpenRead(file)) {
            ICSharpCode.SharpZipLib.Core.StreamUtils.Copy(fs, zo, buffer);
        }
        zo.Flush();
        Response.Flush();
    }
    zo.Finish();
}
Response.Flush();
