I have a scenario in my application where I need to provide multiple-file download functionality.
As far as I know, multiple downloads are not possible over HTTP: either we have to use multiple popups with JavaScript, or we can do it using WebClient.
I want to know whether there is any other way to do this.
Can we use FTP for multiple downloads in ASP.NET?
If yes, then how?
Check out this jQuery plug-in, which does most of the work for you (and doesn't require you to ZIP all the files):
https://github.com/biesiad/jquery-multidownload/
No, you can't use FTP in ASP.NET for this; however, you could add all the files to a single ZIP file and let the user download that one archive (containing all the files).
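As a minimal sketch of that approach (assuming .NET 4.5+ for System.IO.Compression; file paths and the archive name are just placeholders):

```csharp
using System.IO;
using System.IO.Compression;
using System.Web;

public static class ZipDownload
{
    // Zip a set of files on the server and send the archive as one download.
    public static void SendAsZip(HttpResponse response, string[] paths)
    {
        response.ContentType = "application/zip";
        response.AddHeader("Content-Disposition", "attachment; filename=files.zip");

        // Create mode writes sequentially, so it works on the (non-seekable)
        // response output stream.
        using (var zip = new ZipArchive(response.OutputStream, ZipArchiveMode.Create, true))
        {
            foreach (var path in paths)
            {
                var entry = zip.CreateEntry(Path.GetFileName(path));
                using (var entryStream = entry.Open())
                using (var file = File.OpenRead(path))
                    file.CopyTo(entryStream);
            }
        }
    }
}
```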
I would be tempted to zip the files on the server and send them down that way. Is that possible?
Alternatively, you could try emailing the files to them.
Via HTTP you would probably need to use AJAX or iframes. The iframe option would be akin to opening new windows, except the user wouldn't need to see them; this way you would have multiple HTTP requests. I'm not sure how all of the browsers would deal with this, though; at the moment I am just speculating.
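Purely as a sketch of the iframe idea (the handler name and file list below are made up), an ASP.NET page could emit one hidden iframe per file, so each fires its own download request without popups:

```csharp
using System;
using System.Web;

public partial class MultiDownloadPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // "download.ashx" is a hypothetical handler that streams one file.
        foreach (var name in new[] { "report.pdf", "data.xlsx", "photo.jpg" })
        {
            Response.Write(
                "<iframe style=\"display:none\" src=\"download.ashx?file=" +
                HttpUtility.UrlEncode(name) + "\"></iframe>");
        }
    }
}
```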
Most modern websites solve this problem by sending you the collection of files in a zipped package; sites like Google Drive, for example. This is a perfect solution, especially if you are expecting a large number of files.
However, if you expect a relatively small, fixed number of files, and these files are stored statically on the server (that is, not generated at run time), you can write a client-side AJAX script (jQuery, presumably) that downloads the files one by one.
Initially this script would request a list of files to download (in JSON format, for example); once received, it would enumerate the result, firing an asynchronous request for each file in turn.
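On the server side, that initial "list of files" request could be served by something as small as this ASHX handler (the ~/downloads folder is an assumption for illustration):

```csharp
using System.IO;
using System.Linq;
using System.Web;
using System.Web.Script.Serialization;

public class FileListHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Enumerate the statically stored files and return their names as JSON.
        string folder = context.Server.MapPath("~/downloads");
        var names = Directory.GetFiles(folder)
                             .Select(Path.GetFileName)
                             .ToArray();

        context.Response.ContentType = "application/json";
        context.Response.Write(new JavaScriptSerializer().Serialize(names));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```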
I have a .jpg file which represents the current image from a webcam. Users will be downloading this file at an interval of once a second. Because there could be dozens of users reading it, this could mean dozens of reads a second (which is normal for any web server).
The problem is that this image is also updated once a second by a 3rd-party application that "spiders" our local network's webcam portal image. This is so we can build our webcams into our current administration panel.
The problem I am already finding is that ASP.NET sometimes gets an error saying it cannot access the file because it is open for writing by the bot. Likewise, the bot cannot access it because IIS is feeding it to a user.
The bot uses IO.StreamWriter to save the data to the file, and my script uses Response.WriteFile to send the file to the user. (I need to use an actual ASP.NET page with a JPG content type to serve the file, to make sure only users with an active session can view the JPG.)
My question is: what are the best practices for this? I know why it's happening, but what is the best resolution? Would storing the image as a BLOB in a database be smarter, since databases are built for concurrent reading/writing already? Is there an easier way of doing this with a file that I have not thought of yet?
Thanks in advance,
Anthony Greco
Using a BLOB will work if the readers use SNAPSHOT isolation model (SQL Server 2005 and up). See Download and Upload images from SQL Server via ASP.Net MVC for how to stream an image from a BLOB, and see Understanding Row Versioning-Based Isolation Levels for a lecture on SNAPSHOT.
But using a BLOB may be overkill; you could get away with something much simpler. For instance, if you only have one ASP.Net process, you could keep the current file name in a global volatile variable. The writer writes the JPG into a new file, then updates the global 'current' file name with an Interlocked.CompareExchange operation (it has to be a compare-exchange because a newer writer might actually finish faster, outrunning a previous writer, and you want to preserve the latest update). There are still some issues left to solve (finding the file name at startup, cleaning up old files etc.), but they are all fairly easy to solve.
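A minimal sketch of that idea, assuming a single ASP.Net process (all names here are illustrative):

```csharp
using System.Threading;

// The 'current image' pointer described above. The writer saves each
// new frame under a fresh name, then publishes it here.
public static class CurrentWebcamImage
{
    private static string _currentPath = "webcam_000.jpg"; // example name

    // Readers (the ASP.NET page serving the JPG) call this.
    public static string Path
    {
        get { return Volatile.Read(ref _currentPath); }
    }

    // The writer calls this after the new file is fully on disk.
    // observedPath is what it saw before writing; the swap only succeeds
    // if no newer writer has published in the meantime.
    public static void TryPublish(string observedPath, string newPath)
    {
        Interlocked.CompareExchange(ref _currentPath, newPath, observedPath);
    }
}
```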
If you have a farm of servers, or multiple ASP.Net processes serving the site, then things could get complicated. I would still do a rotating file name and take a trial-and-error approach (try to respond with the newest file, fall back to the previous older one if a conflict is detected).
You could get the bot to write the data to a different file name and then do a delete-and-rename to the file name being served by ASP.Net. This should cut the file lock time down to the time a delete and rename take. To clarify:
1) ASP.Net serves the image from "webcam.jpg"
2) The bot writes image data to "temp.jpg"
3) When the last image byte is written, the bot deletes "webcam.jpg" and renames "temp.jpg" to "webcam.jpg"
4) ASP.Net should check that "webcam.jpg" exists; if not, wait 10 ms (or a suitably small increment) and check again.
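A hedged sketch of the bot side (paths are examples); File.Replace keeps the swap nearly atomic on the same volume, so the lock window shrinks to the rename itself:

```csharp
using System.IO;

public static class FramePublisher
{
    // The bot writes the whole frame to temp.jpg first, then swaps it in,
    // so "webcam.jpg" is never observed half-written.
    public static void PublishFrame(byte[] imageBytes)
    {
        const string live = @"C:\webcam\webcam.jpg";
        const string temp = @"C:\webcam\temp.jpg";

        File.WriteAllBytes(temp, imageBytes);
        if (File.Exists(live))
            File.Replace(temp, live, null);   // atomic on the same NTFS volume
        else
            File.Move(temp, live);            // first frame: nothing to replace
    }
}
```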
I'm building an app in ASP.NET that will store some pictures of objects. The pictures will be uploaded by suppliers and downloaded by subscribers. In between, they will have to be edited before becoming available to subscribers.
The editing involves creating a cropping path tightly around the object in the picture, for which, I suppose, some advanced desktop image-editing software will have to be used.
My problem is in exchanging pictures between my ASP.NET app and the desktop software in a manner that is easy and transparent for the user.
I've done some thinking and I've come up with:
- Manually downloading and uploading the images (not very user-friendly...)
- An image-editing program that can upload to a web service (haven't found one yet...)
- Developing a plug-in for an image-editing program (too advanced...)
I'd appreciate any suggestions you may have, thank you!
It sounds like you need some automation to move files between the web server and a file share. I am assuming that the number of images that need to be processed is pretty large, because if it's not, then the overhead of downloading/re-uploading each would not be that much.
So do the following:
1) Create an API for your web app that lists the files that are available, or new files since some date/time, or files that have been marked as "new". The API should probably also allow setting a status on them (so you can tell it when you've finished pulling something down and it won't be offered again), if you don't want to trust date/time as an indicator of newness.
2) Write a (non-web) app that runs on a schedule and uses this API to automatically download files to a shared filesystem area on your local network, marking them as "downloaded".
The app should also monitor these files (the ones it downloaded and saved to your local share) for changes and, if one changes, upload it back to your web app. To do this you may need to keep a database of file names and modification dates/times.
This shouldn't be too hard to write in whatever language you are using for your web app (I assume C# or VB). By "API" I just mean a web page that provides a list in a standardized format (e.g. JSON) that you can parse with your automation application, and another page that allows posting a file back for re-upload.
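As a rough sketch of what the automation app's download pass might look like (every URL, field name, and folder below is invented for illustration; adjust to whatever your list API actually returns):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

// Hypothetical shape of one entry from the "new files" API.
class RemoteFile
{
    public string Name { get; set; }
    public string DownloadUrl { get; set; }
}

class ImageSync
{
    static readonly HttpClient Http = new HttpClient();

    // Both URLs and the share path are made up for illustration.
    const string ListUrl    = "https://example.com/api/files?status=new";
    const string MarkUrlFmt = "https://example.com/api/files/{0}/downloaded";
    const string LocalShare = @"\\fileserver\images";

    static async Task PullNewFilesAsync()
    {
        string json = await Http.GetStringAsync(ListUrl);
        var files = JsonSerializer.Deserialize<RemoteFile[]>(json) ?? Array.Empty<RemoteFile>();

        foreach (var f in files)
        {
            // Pull the file down to the shared area the editors can reach.
            byte[] bytes = await Http.GetByteArrayAsync(f.DownloadUrl);
            File.WriteAllBytes(Path.Combine(LocalShare, f.Name), bytes);

            // Mark it so the API won't offer it again.
            await Http.PostAsync(string.Format(MarkUrlFmt, f.Name), null);
        }
    }
}
```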
I'm assuming that the web server is not your own, or more generally, that you can't simply have it save the file uploads directly to some area where your image editors can access them. Otherwise you could just do that.
Meanwhile I came up with another possible solution.
I'm thinking of having our own Windows app on the editors' computers. This app would be associated with a custom file extension. When an editor downloads a file (with this extension) for editing, it would be opened in our application, which in turn would open the image in some editor program.
This app would monitor the files for changes and, when one changes, upload the image back.
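For the monitoring part, something along these lines (the folder and the upload call are placeholders):

```csharp
using System.IO;

class EditWatcher
{
    // Placeholder folder where the downloaded images are opened for editing.
    private readonly FileSystemWatcher _watcher =
        new FileSystemWatcher(@"C:\EditorWorkArea", "*.jpg")
        {
            NotifyFilter = NotifyFilters.LastWrite
        };

    public void Start()
    {
        // When the editor saves, push the image back to the web app.
        _watcher.Changed += (s, e) => UploadToServer(e.FullPath);
        _watcher.EnableRaisingEvents = true;
    }

    private void UploadToServer(string path)
    {
        // Hypothetical: POST the file back to the ASP.NET app here.
    }
}
```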
Any thoughts on this?
I'm doing a project where I need to download files (images, videos) through a web service. The download MUST go through an existing web service. The existing web service was made when there was no need to upload and download files, but the project has changed and now we need to do it through the web service.
Right now I have implemented the download as a method that returns a byte[]: I open a stream reader, read the entire file into a byte[], and return it to my method. This works fine on small files (< ~1 MB); above that it takes too long. I want to show some progress (e.g. when the user downloads a 20 MB video), which I cannot do right now. And I want to make the download much faster (is it a viable strategy to use multithreading, with several threads each downloading a part of the file?). I need to do this within a WPF application.
Any ideas on how to approach this?
You can't do what you want with old ASMX web services. They buffer the input internally, several times.
You need a way to move to WCF, at least for this new function. You can keep the old code, but you need a new WCF service to properly handle the new functionality.
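For what it's worth, a minimal sketch of the streamed WCF approach (contract and names are illustrative); returning a Stream with TransferMode.Streamed avoids the internal buffering:

```csharp
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IFileService
{
    // With a streamed binding, WCF sends the response without buffering
    // the whole file; the client reads it chunk by chunk.
    [OperationContract]
    Stream GetFile(string name);
}

public static class Bindings
{
    public static BasicHttpBinding Streamed()
    {
        return new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,
            MaxReceivedMessageSize = 1024L * 1024 * 1024 // allow up to 1 GB
        };
    }
}
```

On the WPF client, read the returned Stream in fixed-size chunks and raise a progress event after each chunk; that also gives you the progress reporting the question asks about.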
I'm coding in ASP.NET and want to store audio files (.mp3, or smaller formats) in a MySQL database, which I can then retrieve based on certain conditions. Is this possible? Are there any preferred methods for having audio files on your web pages (besides embedding them in the HTML)?
Most solutions that store files in a database do not scale well, but you can certainly store audio files, or any other type of file, as a BLOB (binary large object) in MySQL. You can create an ASHX handler that retrieves the row from the database and writes the content to the ASP.NET output stream as raw binary data. You can then create links that point to the ASHX handler and perform any query logic you want in there, based on URL parameters.
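A hedged sketch of such a handler (table, column, and connection string are invented; assumes the MySQL Connector/NET driver):

```csharp
using System.Web;
using MySql.Data.MySqlClient;

public class AudioHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        int id = int.Parse(context.Request.QueryString["id"]);

        using (var conn = new MySqlConnection("...connection string..."))
        using (var cmd = new MySqlCommand("SELECT data FROM audio WHERE id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();

            // The BLOB column comes back as a byte[]; write it out raw.
            byte[] data = (byte[])cmd.ExecuteScalar();
            context.Response.ContentType = "audio/mpeg";
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```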
If you are using a MySQL database, it seems to do well (at least in my experience) with BLOBs. It takes a relatively short time to load an MP3, and if you tune your database for audio you can probably get even better performance (I pretty much use the default settings).
One thing to remember is to define the MIME type so that users know what they are getting when they click a link to access your MP3.
Again, all of this is my own experience. YMMV.
I prefer to store large files outside of the database, unless there is some overwhelming need to keep everything there.
You could store the location of each file in the database and keep the files outside the webapp directory, so they can't be accessed directly.
Then, in the URL for playing the music, you can just have a CGI program that sends that data to the browser with the correct MIME type.
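In ASP.NET the serving side of that can stay tiny; for instance (the store folder is an example path outside the web root, and in practice you would look the file name up from the database):

```csharp
using System.IO;
using System.Web;

public class AudioFileHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // GetFileName strips any directory parts from the query value,
        // which also blocks simple path-traversal attempts.
        string name = Path.GetFileName(context.Request.QueryString["name"]);
        string path = Path.Combine(@"D:\audio-store", name); // example store

        context.Response.ContentType = "audio/mpeg";
        // TransmitFile streams from disk without buffering the whole file.
        context.Response.TransmitFile(path);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```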
I need to download two Excel files onto the client and then run a (diff) executable against them. I know how to download a single Excel file, from
here. But how do I download a second one automatically in succession? And then how do I run a batch command on them? Is this even realistic? Any guidance or pointers would be greatly appreciated.
Thanks,
Mike
To download multiple files at once you have two main options:
1) Just open multiple windows to your page-generation script to download multiple files, as per http://www.webdeveloper.com/forum/showpost.php?s=b4f6b25edeb6b7ea55434c4685a675fe&p=950225&postcount=6
2) Archive the files into a package (zip/arj/7z etc.) and send the archive to the client.
eg. http://www.motobit.com/tips/detpg_multiple-files-one-request/
As for doing the diff client-side, that is a lot more tricky, as Shhnap has already mentioned. If you are doing this for a controlled client base, you may be able to get them to allow permissions for an ActiveX script that runs something client-side (or fires off a console application); but if you don't have fine control over the client environment, then I can't think of a way to do it.
As Shhnap suggested, can you not just do the comparison server-side (and then send the result to the client as a third file)?
Well, just some pointers, because I'm not sure I completely understand the problem. You want a user to be given two downloads at the same time and then run a diff command against those two files? On the server or the client, I'm not sure. You'll have a lot of problems automating the client-side version, because forcing people to run client-side code is usually frowned upon by virus-protection software.
The server-side diff sounds exactly like a CGI moment to me: http://www.cs.tut.fi/~jkorpela/perl/cgi.html. That will let you generate a web page that shows the diff between the two. CGI allows you to run programs on your server and display their output in a web page; that's the simple explanation.
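In ASP.NET terms the same idea looks roughly like this (the diff tool and the file names are placeholders):

```csharp
using System;
using System.Diagnostics;

public partial class DiffPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Run the diff tool server-side against the two generated files
        // and return its output as the response body.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\tools\diff.exe",       // whatever diff tool you have
            Arguments = "\"old.xlsx\" \"new.xlsx\"",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            Response.ContentType = "text/plain";
            Response.Write(output);
        }
    }
}
```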
If that was not quite what you wanted, feel free to leave a comment and I'll try to edit my answer accordingly.