I need to create some block diagrams on my ASP.NET page. Is it best done by drawing on Bitmap? How to display this dynamically generated Bitmap object?
Create an HTTP handler that writes the bitmap to the response stream.
Here's a link on handlers themselves: http://www.dotnetperls.com/ashx.
If you can, write the image to the file system using some form of naming convention, so that you're not regenerating it over and over again.
Once it's written to a file, you can send it to the response stream using context.Response.WriteFile(path);
You'll need to set appropriate headers on the response if you want the result cached; something like the following should be OK:
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetLastModified(lastWrite);
context.Response.Cache.SetETag(string.Format("\"{0}\"", lastWrite.Ticks));
context.Response.ContentType = "image/png";
You can check these headers on an incoming request and return a 304 with something like the following (do a null check on the headers first):
DateTime ifModifiedSince = DateTime.Parse(context.Request.Headers["If-Modified-Since"]).ToUniversalTime();
string ifNoneMatch = context.Request.Headers["If-None-Match"];
if (ifModifiedSince >= lastWrite || ifNoneMatch == string.Format("\"{0}\"", lastWrite.Ticks))
{
    context.Response.StatusCode = 304;
    context.Response.StatusDescription = "Not Modified";
    return;
}
If you need to generate the image fresh every time, don't worry about caching and just write your image to context.Response.OutputStream.
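For example, a bare-bones handler with no caching might look like this (a rough sketch; the handler name and the drawing code are just placeholders):

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web;

public class ChartHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        using (var bitmap = new Bitmap(400, 300))
        using (var graphics = Graphics.FromImage(bitmap))
        using (var memory = new MemoryStream())
        {
            graphics.Clear(Color.White);
            graphics.FillRectangle(Brushes.SteelBlue, 20, 60, 80, 200); // placeholder block

            // Save to a MemoryStream first; the PNG encoder needs a seekable stream,
            // which Response.OutputStream is not.
            bitmap.Save(memory, ImageFormat.Png);

            context.Response.ContentType = "image/png";
            memory.WriteTo(context.Response.OutputStream);
        }
    }
}

On the page you then point an img tag's src at the handler, e.g. <img src="Chart.ashx" />, using whatever .ashx name you register.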
Related
I'm using an .aspx page to serve an image file from the file system according to the given parameters.
Server.Transfer(imageFilePath);
When this code runs, the image is served, but no Last-Modified HTTP header is created, as opposed to the same file being requested directly by its URL on the same server.
Therefore the browser doesn't send an If-Modified-Since header and doesn't cache the response.
Is there a way to make the server create the HTTP headers it normally creates for a direct request of a file (an image in this case), or do I have to create the headers manually?
When you make a transfer to the file, the server will return the same headers as it does for an .aspx file, because it's basically executed by the .NET engine.
You basically have two options:
Make a redirect to the file instead, so that the browser makes the request for it.
Set the headers you want, and use Response.BinaryWrite (or similar) to send the file data back in the response (see the sketch below).
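A rough sketch of that second option (assuming imageFilePath is a virtual path to the image and that it's a PNG; adjust the content type to match the actual file):

// requires System.IO
string physicalPath = Server.MapPath(imageFilePath);

Response.ContentType = "image/png";
Response.AddHeader("Last-Modified", File.GetLastWriteTimeUtc(physicalPath).ToString("R"));
Response.BinaryWrite(File.ReadAllBytes(physicalPath));
Response.End();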
I'll expand on #Guffa's answer and share my chosen solution.
When calling the Server.Transfer method, the .NET engine treats the request like an .aspx page, so it doesn't add the HTTP headers (e.g. for caching) needed when serving a static file.
There are three options:
Using Response.Redirect, so the browser makes the appropriate request
Setting the headers needed and using Response.BinaryWrite to serve the content
Setting the headers needed and calling Server.Transfer
I chose the third option; here is my code:
try
{
DateTime fileLastModified = File.GetLastWriteTimeUtc(MapPath(fileVirtualPath));
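// Drop the sub-second part: the HTTP date format used by Last-Modified/If-Modified-Since
// is only accurate to the second, so comparing full-precision times would never match.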
fileLastModified = new DateTime(fileLastModified.Year, fileLastModified.Month, fileLastModified.Day, fileLastModified.Hour, fileLastModified.Minute, fileLastModified.Second);
if (Request.Headers["If-Modified-Since"] != null)
{
DateTime modifiedSince = DateTime.Parse(Request.Headers["If-Modified-Since"]);
if (modifiedSince.ToUniversalTime() >= fileLastModified)
{
Response.StatusCode = 304;
Response.StatusDescription = "Not Modified";
return;
}
}
Response.AddHeader("Last-Modified", fileLastModified.ToString("R"));
}
catch
{
Response.StatusCode = 404;
Response.StatusDescription = "Not found";
return;
}
Server.Transfer(fileVirtualPath);
We have a webforms application that generates parametric documents. The user supplies some information, clicks a button, and our web service generates Word documents.
The service works for one document at a time but not batches. We want to add the ability to process more than one document. We now have the code below, where contactIdsForLetters is a List<int>.
foreach (int contactId in contactIdsForLetters)
{
string parameters = string.Format("ContactID~{0}", contactId);
string defaultFilename = Reporting.Utilities.CreateDefaultFileName(outputformat);
byte[] bytes = Reporting.Reports.CreateReport(selectedReportId, parameters, outputformat, out serviceCallWasSuccessful);
if (!serviceCallWasSuccessful || bytes == null)
{
Reporting.Reports.LogReportActivity(selectedReportId, string.Empty, parameters, userLogin, false);
return;
}
Reporting.Reports.LogReportActivity(selectedReportId, string.Empty, parameters, userLogin, true);
Reporting.Utilities.SendResponse(defaultFilename, bytes);
}
When running the above code, only one document is ever returned. One document is processed (the For-Each never gets to the second item in contactIdsForLetters), a dialog pops up asking to open or save the file, and after clicking open, Word opens with the document. Everything is happening like it should but we can't get the For-Each to process the second and subsequent documents.
The users want a separate Word session for each document returned. Subsequent documents will need to open in their own Word session.
How do I loop through a List<int>, send each int to a service one-at-a-time, and open a Word session for each returned document?
Here is SendResponse() ...
public static void SendResponse(string defaultFilename, byte[] bytes)
{
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.ClearHeaders();
HttpContext.Current.Response.Buffer = false;
HttpContext.Current.Response.ContentType = "application/octet-stream";
HttpContext.Current.Response.AppendHeader("Content-Disposition", string.Format("attachment; filename={0}", defaultFilename));
HttpContext.Current.Response.AppendHeader("Content-Length", bytes.Length.ToString());
HttpContext.Current.Response.BinaryWrite(bytes);
HttpContext.Current.Response.Flush();
HttpContext.Current.Response.End();
}
There should be no problem sending a List<int> to the service and having the service return List<byte[]>.
I don't know what you mean by "open a Word session". I hope you're not trying to call Word from an ASP.NET application. That's not supported, is unreliable, and doesn't work very well.
ASP.NET runs on top of HTTP.
HTTP works on a per-request basis: each request returns a single response, provided the browser can reach the server.
That response contains one stream (we can call it the file in your case).
After the server flushes that stream (the first file), the connection for the request is closed and the request life cycle ends, terminating any threads related to that request. In your code, the Response.End() call inside SendResponse aborts the request thread outright (it throws a ThreadAbortException), so the foreach never reaches the second item.
This is why you never get to the second document.
You are going to have to deliver the documents in a compressed container like a zip file so all the documents go in a single stream. First get all the documents, package them and send the response to the client.
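For example, a rough sketch using ZipArchive from System.IO.Compression (available in .NET 4.5+ and requiring a reference to System.IO.Compression.dll; the method name and the way documents are passed in are just illustrative). You would collect the documents in the loop, then call something like this once instead of calling SendResponse per document:

// needs: using System.Collections.Generic; using System.IO; using System.IO.Compression; using System.Web;
private static void SendDocumentsAsZip(IList<KeyValuePair<string, byte[]>> documents)
{
    using (var zipStream = new MemoryStream())
    {
        using (var archive = new ZipArchive(zipStream, ZipArchiveMode.Create, leaveOpen: true))
        {
            foreach (var document in documents)
            {
                // Key = file name inside the zip, Value = the Word document bytes from CreateReport
                ZipArchiveEntry entry = archive.CreateEntry(document.Key);
                using (Stream entryStream = entry.Open())
                {
                    entryStream.Write(document.Value, 0, document.Value.Length);
                }
            }
        }

        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.ContentType = "application/zip";
        HttpContext.Current.Response.AppendHeader("Content-Disposition", "attachment; filename=letters.zip");
        HttpContext.Current.Response.BinaryWrite(zipStream.ToArray());
        HttpContext.Current.Response.End();
    }
}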
Hope this helps.
UPDATE: You might also want to look at using AJAX, making a JavaScript call to the server that returns different links for the different documents. It would be like clicking on different invisible links that each open a document after the user clicks a single button or link. With this approach it's easy to achieve what you want: you can trigger the click event for all the invisible links with JavaScript.
This is my first time writing code that allows a user to download a file uploaded by another user.
I've written an ASHX file, download.ashx, with code that looks like this:
string s = context.Request.QueryString.ToString();
byte[] buffer = new ReplacementTicketFileIO().GetSpecifiedFile(s);
context.Response.BinaryWrite(buffer);
context.Response.Flush();
context.Response.End();
When a user clicks on a link to download.ashx with the appropriate querystring, the file is downloaded, but the browser wants to display the content in the browser window. If the user right-clicks on the link, he can download the file, but the name of the file defaults to download.ashx.
I would like to accomplish two things:
1) I would like to be able to specify the default name of the file downloaded on the user's device based on the querystring.
For instance, if the user clicks on download.ashx?linkedfile=car.pdf, I would like for the browser to default to car.pdf for the name of this file.
2) I would like for the browser to default to saving the link, as opposed to opening the link in the browser window.
Is it reasonable for me to want to do this, or is there a better way to download files? Please let me know.
Set the Content-Disposition HTTP header. E.g.
Content-Disposition: attachment; filename=hello.jpg
You can do that in C# using:
Response.AddHeader("Content-Disposition", "attachment; filename=hello.jpg");
Here is something I have for Excel files, and I believe it forces a download rather than opening in a new window. There is a page property for the QueryString; you would just need to capture the query string and use it in this code, as well as determining the content type. The String.Format call keeps the code clean.
private string _ExcelFilename
{
get
{
return (Request.QueryString["xls"] != null) ? Request.QueryString["xls"] : "bis";
}
}
Page.Response.Clear();
Page.EnableViewState = false;
Page.Response.Clear();
Page.Response.ContentType = "application/vnd.ms-excel";
Page.Response.AppendHeader("Content-Disposition", String.Format("attachment; filename={0}_{1}.xls", _ExcelFilename, DateTime.Now.ToString("yyyyMMdd")));
Page.Response.Write(excel);
Page.Response.Flush();
Page.Response.End();
I've found a few posts about retrieving HTML from an ASPX page, mostly by overriding the render method, using a WebClient, or creating an HttpWebRequest. All these methods return the HTML of the page as it's loaded, but I was hoping to actually retrieve the HTML after the user has entered information.
The purpose behind this is that I work in IT, and I'm attempting to build a logging library with an overload that essentially does a "screen-scrape" of the page just as the user encounters an exception. That way I can log the exception, create an HTML file in a sub-directory of the logging directory that shows the page exactly as the user had it before clicking "submit" (or hitting some other random error), and add an "ID" to the logged error telling whoever is fixing the issue which page to look at.
I hope I've provided enough information, because I really have no idea where to start.
Also, We'd like to do this through our own library, because our logging library is included in our common library, and many of our common library functions use our logging class.
Hmmm...
If you want to see what the user sees after they've been using the page, you're most likely going to have to do some fancy client-side scripting.
A naive approach:
When the user clicks the submit button, fire a JavaScript event that encodes the DOM and either passes it as a form variable to the server, or executes a separate AJAX request with the encoded data as a parameter. ("Encode" in this case may be as simple as grabbing document.innerHtml, but I haven't checked.)
This potentially introduces a lot of overhead to every form submission, so I'd keep it out of production code.
I'm not sure why you need the rendered HTML as part of your exception log - I've never found it necessary for server-side debugging.
If you just want to get the HTML of a page from a website, you can use code like this:
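// requires System.Net, System.IO and System.Text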
string urlAddress = "http://www.jobdoor.in";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(urlAddress);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode == HttpStatusCode.OK)
{
Stream receiveStream = response.GetResponseStream();
StreamReader readStream = null;
if (response.CharacterSet == null)
readStream = new StreamReader(receiveStream);
else
readStream = new StreamReader(receiveStream, Encoding.GetEncoding(response.CharacterSet));
string data = readStream.ReadToEnd();
response.Close();
readStream.Close();
}
For an application I'm working on, I need to allow the user to upload very large files--i.e., potentially many gigabytes--via our website. Unfortunately, ASP.NET MVC appears to load the entire request into RAM before beginning to service it--not exactly ideal for such an application. Notably, trying to circumvent the issue via code such as the following:
if (request.Method == "POST")
{
request.ContentLength = clientRequest.InputStream.Length;
var rgbBody = new byte[32768];
using (var requestStream = request.GetRequestStream())
{
int cbRead;
while ((cbRead = clientRequest.InputStream.Read(rgbBody, 0, rgbBody.Length)) > 0)
{
requestStream.Write(rgbBody, 0, cbRead);
}
}
}
fails to circumvent the buffer-the-request-into-RAM mentality. Is there an easy way to work around this behavior?
It turns out that my initial code was basically correct; the only change required was to change
request.ContentLength = clientRequest.InputStream.Length;
to
request.ContentLength = clientRequest.ContentLength;
The former streams in the entire request to determine the content length; the latter merely checks the Content-Length header, which only requires that the headers have been sent in full. This allows IIS to begin streaming the request almost immediately, which completely eliminates the original problem.
Sure, you can do this. See RESTful file uploads with HttpWebRequest and IHttpHandler. I have been using this method for a few years and have a site that has been tested with files of at least several gigabytes. Essentially, you want to create your own IHttpHandler, which is easier than it sounds.
In a nutshell, you create a class that implements the IHttpHandler
interface, meaning you have to support the IsReusable property and the ProcessRequest method. On top of that, there is a minor change to your web.config, and it works like a charm. At this stage in the life cycle of the request, the entire file being uploaded does not get loaded into memory, so it neatly steps around out of memory issues.
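A minimal sketch of such a handler (the class and namespace match the web.config entry below; the chunked copy from the request body is just one way to do it and may differ in detail from the linked article):

using System.IO;
using System.Web;

namespace TestUploadService
{
    public class FileUploadHandler : IHttpHandler
    {
        public bool IsReusable { get { return false; } }

        public void ProcessRequest(HttpContext context)
        {
            // Copy the request body to disk in chunks instead of holding it all in memory.
            // (On .NET 4+ you could use context.Request.GetBufferlessInputStream() here to
            // avoid ASP.NET's own request buffering as well.)
            string uploadFolder = Path.Combine(context.Request.PhysicalApplicationPath, "Upload");
            string savePath = Path.Combine(uploadFolder, Path.GetRandomFileName());
            var buffer = new byte[32768];
            using (FileStream fileStream = File.Create(savePath))
            {
                int bytesRead;
                while ((bytesRead = context.Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    fileStream.Write(buffer, 0, bytesRead);
                }
            }

            context.Response.ContentType = "text/plain";
            context.Response.Write("OK");
        }
    }
}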
Note that in the web.config,
<httpHandlers>
<add verb="*" path="DocumentUploadService.upl" validate="false" type="TestUploadService.FileUploadHandler, TestUploadService"/>
</httpHandlers>
the file referenced, DocumentUploadService.upl, doesn't actually exist. That is just there to give an alternate extension so that the request is not intercepted by the standard handler. You point your file upload form to that path, but then your FileUploadHandler class kicks in and actually receives the file.
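One related point (not from the article, just something to be aware of): for multi-gigabyte uploads the default ASP.NET/IIS request size limits are far too small, so you would also need something along these lines in web.config (the values here are an illustrative 2 GB; maxRequestLength is in kilobytes, maxAllowedContentLength in bytes):

<system.web>
  <httpRuntime maxRequestLength="2097152" executionTimeout="3600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="2147483648" />
    </requestFiltering>
  </security>
</system.webServer>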
Update: Actually, the code I use is different from that article, and I think I stumbled on the reason it works. I use the HttpPostedFile class, in which "Files are uploaded in MIME multipart/form-data format. By default, all requests, including form fields and uploaded files, larger than 256 KB are buffered to disk, rather than held in server memory."
if (context.Request.Files.Count > 0)
{
string tempFile = context.Request.PhysicalApplicationPath;
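// PhysicalApplicationPath is the physical root of the site; files are saved under its Upload subfolder below.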
for(int i = 0; i < context.Request.Files.Count; i++)
{
HttpPostedFile uploadFile = context.Request.Files[i];
if (uploadFile.ContentLength > 0)
{
uploadFile.SaveAs(string.Format("{0}{1}{2}",
tempFile,"Upload\\", uploadFile.FileName));
}
}
}