ASP.NET: Generating a file to download

I have a file that I need to copy, run a command against the copy to specialize it for the person downloading it, and then provide that copy to the user to download. I'm using ASP.NET MVC 2, and I've never done anything like this. I've searched around for the simplest way to do it, but I haven't found much, so I've come up with a plan.
I think what I'll do is generate a GUID, which will become the name of a folder I'll create at the same level as the source file the copy is made from. I'll then copy the file to that folder, run my command against it, provide a link to the file, and have some service that runs every now and then and deletes directories that are more than a day old.
Am I over thinking this? Is there an easier, simpler, or at least more formal way to do this? My way seems a bit convoluted and messy.

Can you process in memory and stream it to the client with a handler?
That is how I do things like this.
Basically, your download link points to an HttpHandler, typically async, that performs the processing and then streams the bits with a content-disposition of 'attachment' and a filename.
EDIT: I don't do much MVC, but what Charles is describing sounds like an MVC version of what I describe above.
Either way, processing in memory and streaming it out is probably your best bet. It obviates a lot of headaches, code and workflow you would have to maintain otherwise.
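As a rough illustration (the handler name, content type, file name, and BuildPersonalizedCopy helper below are placeholders, not part of the original answer), a minimal synchronous version might look like:

using System.Web;

public class PersonalizedFileHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical helper: builds the specialized copy entirely in memory.
        byte[] bytes = BuildPersonalizedCopy(context.User.Identity.Name);
        context.Response.Clear();
        context.Response.ContentType = "application/octet-stream";
        // Content-disposition of "attachment" with a filename, as described above.
        context.Response.AddHeader("Content-Disposition", "attachment; filename=specialized.dat");
        context.Response.BinaryWrite(bytes);
    }

    private byte[] BuildPersonalizedCopy(string userName)
    {
        // Load the source file, run the specialization command, return the result.
        return new byte[0]; // placeholder
    }
}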

What sort of command do you have to run against it?
Because it would be ideal to process it in memory in a controller's action, using MVC's FileResult to send it to the client (a rough sketch follows below).
Charles
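A minimal sketch of that FileResult approach (the action name, content type, and BuildPersonalizedCopy helper are illustrative, not from the original comment):

using System.Web.Mvc;

public class DownloadsController : Controller
{
    public ActionResult Personalized()
    {
        // Hypothetical in-memory specialization step.
        byte[] bytes = BuildPersonalizedCopy(User.Identity.Name);
        // File() wraps the bytes in a FileContentResult and sets the
        // content-disposition header from the download name.
        return File(bytes, "application/octet-stream", "specialized.dat");
    }

    private byte[] BuildPersonalizedCopy(string userName)
    {
        return new byte[0]; // placeholder for the real processing
    }
}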

Related

What are the benefits of deploying binary dll for website rather than source code?

I have a small internal app, and I am debating with myself why I should not just copy the entire source folder to production, as opposed to using Publish, which compiles the .cs files into a .dll.
But I can't think of any realistic benefits one way or another, other than to reduce the temptation to make direct logic change on production. What do you think?
It eliminates the temptation to just change that one little thing in production...
Also, it secures the code against malicious changes, it adds an extra step between "build" and "deploy" that can serve as a natural QA speed bump, it improves start-up time, and a billion other things.
Two main things:
As antisanity points out, it lets you verify that all the pages on your site actually compile, which goes a long way toward catching a number of bugs before they get very far.
The website will end up compiling these files the first time they get accessed anyway. By precompiling them, you'll save time on the first load, which will make your application feel a little more responsive to a few of your users.
Well, I can agree with you only if you're talking just about views. If you're talking about controllers, I guess you'd need 'em compiled in order to run :).
Okay, joking aside, I'm for a complete binary deployment mainly for:
being sure that my code compiles (at least)
speeding up view generation (no first-time compile)
simplifying patch management (I deliver just a dll, not the entire webapp)
regards
M.
Well, for one thing... it makes sure that your site compiles.
Apart from that, check out Hanselman's Web Deployment Made Awesome: If You're Using XCopy, You're Doing It Wrong
There are a number of reasons why you should publish your application:
It will perform better;
You know that the code compiles;
It's cleaner (no .cs files cluttering the folder);
Some security benefits by not exposing the source code;
You can package your application for deployment to testing, staging, and production.
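If you want the compile check without a full Publish, the stock aspnet_compiler tool can precompile a site from the command line. A rough example (the paths here are illustrative):

aspnet_compiler -v / -p C:\source\MyWebApp C:\build\MyWebApp-precompiled

It reports compile errors for anything that doesn't build, and the output folder is what you deploy.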

How to get a web page to return a value to a Desktop application caller?

I am trying to create a moderately complex web page. This is not something I have ever done before, nor do I feel happy about doing it, but I need to do it. I am not asking you to write it for me, merely to tell me what to research and learn, so that I can hopefully, eventually, get it done.
I have not got a clue about how to start, and this is my question.
I need to make an autoupdater for my application. This has been done, and it is working perfectly. However, my application currently downloads version.txt, and reads it to work out whether it needs to download the new application. This is hideous, and slow, and was only supposed to be very temporary. It is also very annoying that we have to update this file every time we release a new version.
My boss wants me to create a webpage that reads the version data from the uploaded .exe, and then returns that to the Desktop application. Therefore, I would be able to call www.example.com/version.aspx, and it would return the version number, such as 1.1. I could then compare to the current version (don't worry, it is generated on the fly, and not hard coded) and then I could download the application if required.
Here comes my question: how would I go about this? I have heard of CGI scripts and ASP.NET. Which of these has the power to solve my problem? If you could just tell me that, then I will be all sorted, as I could read up on it, learn, and broaden my knowledge.
If this is not possible, or not easily possible, is there any way of reading the file version of a remote .exe without downloading it? That would also be preferable in many ways.
Thank you so much, and I am so sorry for my complete ignorance in this topic.
Richard
P.S. I did try to explain this to my boss, and suggested that maybe he could either do it, or help me, but he is not very good at web applications, and refused, saying that it would broaden my education in this matter. Ahhh!
EDIT: Somehow forgot to add: I normally program in C#, although this application should be so small, that it would not really matter. Also, C# code would be ideal, if there is a way to check the version of a file on a remote server.
EDIT: Thanks!
If you can read the version of some exe file from within a desktop application using C#, then you can use the exact same code to read it from within an ASP.NET web application.
The advantage of ASP.NET over CGI (in your case) is that you can use C# in the backend.
A couple of hints:
Server.MapPath("file.exe") returns the complete pathname of a "file.exe" next to the requested aspx file, independent of where you install that web-application.
If you only want to return the version number, use this code:
Response.Clear();
Response.Write(versionnumber);
Response.End();
in a Page_Load method, after you have read the versionnumber of the exe, of course.
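Putting the hints together, a minimal sketch of the whole code-behind might look like this (the file name MyApp.exe is a placeholder; FileVersionInfo does the actual version reading, and nothing gets executed in the process):

using System;
using System.Diagnostics;
using System.Web.UI;

public partial class VersionPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Resolve the exe sitting next to version.aspx, wherever the app is installed.
        string path = Server.MapPath("MyApp.exe");
        // Read the embedded file version without running the exe.
        string versionnumber = FileVersionInfo.GetVersionInfo(path).FileVersion;
        Response.Clear();
        Response.ContentType = "text/plain";
        Response.Write(versionnumber);
        Response.End();
    }
}

On the desktop side, fetching it is then a one-liner: new System.Net.WebClient().DownloadString("http://www.example.com/version.aspx").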

100% cpu usage in xml/xslt driven asp.net web app

The web app uses XML from a web service, which is then transformed to HTML using XSLT. The app uses an HttpModule to get the XML, using AddOnPreRequestHandlerExecuteAsync.
Classes Used:
XmlDocument - stores the XML.
XslCompiledTransform - stores the transform, is cached in Application.
Asynchronous HttpWebRequest using BeginGetResponse/EndGetResponse
HttpModule with hooked AddOnPreRequestHandlerExecuteAsync events.
I do not want to use the XPathDocument unless there are no other possible optimizations. It would take some complicated code to get all the XML together without the ability to write to the XmlDocument. There is additional XML that does not come from the web service that must also be added to the document.
Any suggestions would be nice. The server doesn't seem to be having memory issues, if that is telltale of anything; just really high CPU usage.
Thanks.
UPDATE
After much searching I found that the issue causing the CPU to race was actually an infinite (or nearly infinite) loop, which was not in my code at all and was hidden from my profiling due to where it was coming up. Lesson here: if it doesn't make sense, look for alternative explanations before tearing your code apart.
What version of .NET? It's been a while since I've done anything with XML/XSL, but .NET 2.0 had some memory issues in XslCompiledTransform. While that could be the issue, it's more likely something in the code. Can you provide some sample XML and the XSL doc?
What happens if you save both out as static files and try to run the transform (create a small standalone script or unit test that just does this, to see if it's an issue)? Make sure you release your references to the XslCompiledTransform object as soon as you're done with it (and the XML doc as well).
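A minimal standalone test along those lines might look like this (the file names are placeholders for saved copies of the real documents); it runs the exact same classes completely outside the web pipeline, so you can time the transform by itself:

using System.Xml;
using System.Xml.Xsl;

class TransformTest
{
    static void Main()
    {
        // Saved copy of the web service response.
        XmlDocument xml = new XmlDocument();
        xml.Load("sample.xml");
        // The production stylesheet.
        XslCompiledTransform xslt = new XslCompiledTransform();
        xslt.Load("transform.xslt");
        // Run the transform outside IIS entirely.
        using (XmlWriter writer = XmlWriter.Create("output.html"))
        {
            xslt.Transform(xml, writer);
        }
    }
}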
When I run into issues with XSL transforms, I usually save a sample XML document and apply my XSL in Cooktop. It's a little hard to figure out at first, but it's a good sanity check to make sure you don't have a glaring error in your XSL.
Consider using LINQ to XML to do the transformation - 350 kB is a large text/xml document from a transformation standpoint - it might be faster than an XSLT transformation. See here for a sample.
Is the web service on the same server? If so, does testing the service by itself show high CPU usage?
How are you putting the transformed document into cache?
Try using a profiler. dotTrace and ANTS have trial versions. This should let you pinpoint your problems. (The nice thing about dotTrace is that it integrates with unit tests.)

What's the best way to structure this kind of remote service?

I'm not sure if this is technically a web service or not, but I have a Flash file that periodically needs to make a round trip to a DB. As it stands, AS3 uses the URLLoader class to exchange XML with an ASP.NET/VB file on the server. The aspx code then goes to the DB and returns whatever information is requested back to the Flash file.
As my program grows and I need to execute a larger variety of tasks on the server, I'm wondering if I should just keep placing functions in that same aspx file and specify in AS3 which function I should load for any given task. Or is it better to break up my functionality into several different aspx files and call the appropriate file for the task?
Are there any obvious pros and cons to either method that I should consider?
(Note: I have put all of the VB functions on the aspx pages rather than in the code-behind files, because I was having trouble accessing the I/O stream from the code-behind.)
Thanks.
T
Since you say you need to execute a large variety of tasks, you should think about breaking the code down into multiple files. This question cannot be answered in general, and the solution is always specific to the problem, but this might help:
Reasons to keep all code in one file on the server side:
Code for different tasks heavily depends on each other
Effort for separating the tasks into files is too high
The variety/count of different tasks is manageable
You are the only developer
Every task works correctly and is fail-safe (since they are all in one file, I assume one error will break all tasks)
Reasons to separate tasks into different files:
The file is getting too big, unreadable and unmaintainable
Different tasks should not depend on each other (Separation of concerns)
There are multiple developers working on different tasks
Many new tasks will be added
A task could contain errors and should not break every other task
That is all I can think of right now. You will surely find more reasons yourself. As said, I would separate the tasks, as I think the effort is not too high.
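For what it's worth, a minimal sketch of the single-entry-point style the question describes (the task names and XML payloads here are hypothetical) keeps each task in its own method, which makes splitting them into separate files later fairly painless:

using System.IO;
using System.Web;

public class TaskHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/xml";
        // The Flash client picks the task via ?task=...
        switch (context.Request["task"])
        {
            case "loadScores":
                context.Response.Write(LoadScores());
                break;
            case "saveScore":
                context.Response.Write(SaveScore(context.Request.InputStream));
                break;
            default:
                context.Response.StatusCode = 400; // unknown task
                break;
        }
    }

    private string LoadScores()
    {
        // Query the DB and serialize the result as XML.
        return "<scores/>";
    }

    private string SaveScore(Stream postedXml)
    {
        // Parse the posted XML and write it to the DB.
        return "<ok/>";
    }
}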

ASP.NET XML ObjectDataSource Wrapper Class Examples

I want to use XML instead of SQLServer for a simple website.
Are there any good tutorials, code examples, and/or tools available for making a (preferably VB.NET) wrapper class to handle the basic list, insert, edit, and delete (CRUD) code?
The closest thing I found was a Telerik Trainer video (with code) for their Scheduler component, where they used XML to handle the scheduling data in the demo. They created an ObjectDataSource class. Here is a LINK to that demo if anyone is interested.
In these days of SQL Server Express, I'd say there's really no reason for you not to use a database.
I know this doesn't really answer your question, but I'd hate to see you roll out code that will be a nightmare to maintain and scale.
Maybe you could tell us why you want to use XML files instead of a proper database.
It would make deployment easier for clients that use GoDaddy, where the database isn't in the App_Data folder. Also, backing up those websites would be as simple as FTPing the entire thing.
I do have concerns about possible collisions on saving, especially if I add something as simple as a click counter to, say, a list of mp3 files visitors to the site can access.
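On the collision concern, here is a minimal sketch of an XML-backed CRUD class that an ObjectDataSource could call (C# rather than VB.NET, and the item schema, file path, and lock strategy are all illustrative). A process-wide lock serializes readers and writers within one worker process, which is the main collision risk for something like a click counter; it does not protect a file shared across several worker processes:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public class ItemStore
{
    // One lock per app domain; all access to the file goes through it.
    private static readonly object FileLock = new object();
    private readonly string _path;

    public ItemStore(string path) { _path = path; }

    public List<string> ListItems()
    {
        lock (FileLock)
        {
            return XDocument.Load(_path)
                .Root.Elements("item")
                .Select(e => (string)e.Attribute("name"))
                .ToList();
        }
    }

    public void Insert(string name)
    {
        lock (FileLock)
        {
            XDocument doc = XDocument.Load(_path);
            doc.Root.Add(new XElement("item", new XAttribute("name", name)));
            doc.Save(_path);
        }
    }

    public void Delete(string name)
    {
        lock (FileLock)
        {
            XDocument doc = XDocument.Load(_path);
            doc.Root.Elements("item")
               .Where(e => (string)e.Attribute("name") == name)
               .Remove();
            doc.Save(_path);
        }
    }
}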
