I currently have an ASP.NET page that a logged-in user goes to, with a bunch of dynamically generated links to zip files that he or she owns and can download.
Currently they click download, and I have no way of knowing whether it completes successfully, so I can't log that. I do log the attempt.
Is there a good download manager or solution I can use so that users get progress bars on the site and can queue up multiple downloads, and, most importantly, so that I can track failed and successful downloads?
Thanks!
You cannot do this with pure ASP.NET and AJAX, as the browser sandbox doesn't give you access to the user's computer to save the files.
So you need to use some sort of plugin.
Here's an ActiveX plugin that does what you want, but it only works in IE and it's expensive. I wouldn't go there...
As you are using ASP.NET, a better option would be to write your own download utility in Silverlight.
Bear in mind, though, that you'll probably annoy some of your users by forcing them to use your downloader, and it will take considerable effort to build a high-speed, robust downloader that can compete with the existing browser download managers out there, e.g. Free Download Manager.
I'm currently developing a Chrome extension to use with LinkedIn Sales, and I'm having issues while testing the front end.
Due to some style changes, I had to refresh the page multiple times, and now my account is temporarily banned because LinkedIn mistook me for an automated tool.
Does anyone know a workaround for this? Or, alternatively, can I create some type of developer account to use?
TIA!
"Due to some style changes, I had to refresh the page multiple times"

That really is a suspicious-looking action.
So isn't your "Chrome extension" also an automated tool?
Either get some API access (you can find the available APIs here: https://developer.linkedin.com/product-catalog),
or, if you're just doing some visual brush-up for users, you could make an offline copy of the page once (with a tool like HTTrack) and then use that copy for development.
For the Sales API it says:
"It's required that all integrations are built by approved partners for the SNAP program. To become a partner, visit this page."
I am trying to restrict the user from downloading the page as an .html or .aspx file from the browser.
Or is there a way to change the content of the file if it is downloaded?
This is a complex area, with lots of moving parts. The short answer is "there is no way to do this with 100% success; there are a few things you can do which make it harder".
Firstly, you can include JavaScript to disable the right-click context menu. This doesn't stop Ctrl+S, but might discourage casual attempts.
Secondly, you can use DRM in the browser (though this is primarily aimed at protecting media content). As browser support is all over the show, this isn't realistic right now.
Thirdly, you could write your site as a single page web application, and build some degree of authentication into the "retrieve content" logic. This way, saving the page to disk wouldn't bring the content along, just the "page furniture". However, any mechanism you include to only download content when you think you should is likely to be easily subverted by anyone who is moderately motivated.
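A minimal sketch of that third approach, as an ASP.NET generic handler that the page's JavaScript would call for its content. GetContentFor and the JSON shape are hypothetical placeholders, not a drop-in implementation:

    using System.Web;

    // Minimal sketch: the saved page shell contains no content; script fetches
    // it from this handler, which refuses unauthenticated requests.
    public class ContentHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            if (!context.Request.IsAuthenticated)
            {
                // A copy saved to disk (or fetched outside a session) gets nothing.
                context.Response.StatusCode = 401;
                return;
            }

            context.Response.ContentType = "application/json";
            // GetContentFor is a hypothetical placeholder for your content lookup.
            context.Response.Write(GetContentFor(context.Request.QueryString["id"]));
        }

        private static string GetContentFor(string id)
        {
            return "{\"html\": \"...\"}"; // placeholder
        }
    }

As noted above, anyone moderately motivated can call the same handler from their own client with valid credentials, so this raises the bar rather than closing the door.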
Also, any steps you take to stop people persisting your pages locally are likely to break the caching mechanisms on which the internet depends for performance, so your site would likely be dramatically slower.
No, you can't stop them.
Consider how the web actually works here: once the user has visited your website and loaded your page into their browser, they have already downloaded it - the web page was transmitted from your server to their computer and appeared on their screen.
All they have to do then is click the Save button to keep it permanently on their disk. That doesn't involve downloading it again, it just copies the page data from a temporary folder to a permanent one. Of course it's also possible for people to use another HTTP client (i.e. not a browser, but maybe an existing program, or some code they wrote themselves) to visit the URL of your page and save the returned contents.
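To illustrate that last point, here's a trivial sketch (the URL is a placeholder) showing how little a browser has to do with it:

    using System.IO;
    using System.Net;

    class SavePage
    {
        static void Main()
        {
            // Any HTTP client can "save the page"; no browser involved.
            using (var client = new WebClient())
            {
                File.WriteAllText("page.html",
                    client.DownloadString("http://www.example.com/your-page.aspx")); // placeholder URL
            }
        }
    }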
It's not clear what problem you think you would solve by stopping people from saving pages. Saving the page is something done within the browser; you as a site developer don't control the user's browser, so you can't prevent it. And if you stop them from downloading your page in the first place then, by definition, you also stop them from using your website... which kind of defeats the point of having one :-).
If you've got some sort of worry about security, you'll have to clarify exactly what you are concerned about, and maybe you can get advice about a sensible way to deal with it.
My web application allows authorised users to upload videos using the ASP.NET WebForms FileUpload control; in the past these have been around 100-200MB. I obviously had to make some changes to the web.config so that files of this size could be uploaded.
However, the authorised users now want to upload video files of 500MB+.
The maxAllowedContentLength has now been set to 629145600 (600MB).
However, when uploading the videos, after a while the page responds with:
Page not found
This only happens with large videos, so I know this issue has something to do with the file size.
Why is this happening? And also, should I really be increasing the limit to 500MB+? Is there a better way of getting such large files onto the web server?
Check out this blog post by Jon Galloway; it's a bit old but still relevant:
Large file uploads in ASP.NET
It's got answers to your questions about:
the "page not found" error
setting the correct maxAllowedContentLength (see the web.config sketch below)
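One gotcha worth spelling out: there are two separate limits, and both must cover the file. IIS request filtering uses maxAllowedContentLength (in bytes), while ASP.NET's httpRuntime uses maxRequestLength (in kilobytes, default 4096, i.e. 4MB); exceeding either kills the upload. A minimal web.config sketch with both set to 600MB (the executionTimeout bump is an assumption, to keep slow uploads from timing out):

    <configuration>
      <system.web>
        <!-- maxRequestLength is in KB: 614400 KB = 600 MB -->
        <httpRuntime maxRequestLength="614400" executionTimeout="3600" />
      </system.web>
      <system.webServer>
        <security>
          <requestFiltering>
            <!-- maxAllowedContentLength is in bytes: 629145600 = 600 MB -->
            <requestLimits maxAllowedContentLength="629145600" />
          </requestFiltering>
        </security>
      </system.webServer>
    </configuration>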
There are recommendations for various controls you can use, both free and commercial.
I've used the Flash control and it worked great.
Alternative Solution
Provide an FTP area for each user to upload to.
It allows users to:
easily batch-upload many files (harder in the browser)
take advantage of resume on disconnect
Then you provide a GUI for the user to consume the files.
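A minimal sketch of the "consume" side, assuming the FTP area maps to a per-user folder the web app can read (the base path is a placeholder):

    using System.IO;

    public static class FtpDropBox
    {
        // Lists whatever a given user has uploaded to their FTP folder,
        // so your GUI can display and process the files.
        public static FileInfo[] GetUploadsFor(string userName)
        {
            string baseDir = @"D:\FtpRoot"; // placeholder: your FTP root
            var dir = new DirectoryInfo(Path.Combine(baseDir, userName));
            return dir.Exists ? dir.GetFiles() : new FileInfo[0];
        }
    }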
Have you considered using jQuery File Upload (https://github.com/blueimp/jQuery-File-Upload/)? There are versions available for .NET and MVC (see the GitHub wiki). It takes all the heartache out of implementing large file uploads in .NET and provides a lovely interface too. Since discovering this a while ago I never use anything else! I've successfully implemented it a few times now and seen uploads of ~2GB working successfully.
I have this ASP.NET web site that allows users to download program installation packages (just normal files). I want to be able to track when a download is completed (i.e. the file has been fully downloaded to the user's computer) and then invoke a Google Analytics script that reports a completed download as a 'Goal' (obviously, one of my goals is to increase file downloads).
The problem is that I need to support direct file URLs, as opposed to the "redirect page" solution, because a lot of traffic comes from software download sites that explicitly demand a direct file URL when submitting a product. Perhaps they do their own file analysis (e.g. virus checking). But with this set of limitations, a typical scenario is:
The user visits my product listing on a software download site
The user clicks the "Download" button on this site
The "Download" page is typically a redirect that finally brings the user to my file via the direct URL I initially submitted, i.e. http://www.ko-sw.com/somefile.exe
If an exact monitoring solution is not possible under these conditions, maybe there is a workaround? What comes to mind is temporarily storing the number of completed downloads on the server, then having an administrative page that reports this number to Google Analytics and finally resets it to zero. With this workaround, there is at least no need to try to attach a JavaScript handler to a non-HTML resource. But even then there are issues:
How to track if a download has completed?
How to track user geolocation and browser capabilities to make them further visible in the reports?
Thanks everybody in advance
According to awstats, an aborted download has HTTP status code 206, so if you analyze the server log for that code you can find the downloads that were not completed.
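A minimal sketch of that log analysis for IIS W3C-format logs, assuming the #Fields line includes cs-uri-stem and sc-status (the log path and file name are placeholders). Note that 206 also shows up for legitimate resumed/range requests, so treat the count as an approximation:

    using System;
    using System.IO;

    class DownloadLogStats
    {
        static void Main()
        {
            string logPath = @"C:\inetpub\logs\LogFiles\W3SVC1\u_ex250101.log"; // placeholder
            string file = "/downloads/somefile.zip";                            // placeholder

            int completed = 0, partial = 0;
            string[] fields = null;

            foreach (string line in File.ReadLines(logPath))
            {
                if (line.StartsWith("#Fields:"))
                {
                    // W3C logs declare their column layout in a #Fields directive.
                    fields = line.Substring("#Fields:".Length).Trim().Split(' ');
                    continue;
                }
                if (fields == null || line.StartsWith("#")) continue;

                string[] cols = line.Split(' ');
                int uriIdx = Array.IndexOf(fields, "cs-uri-stem");
                int statusIdx = Array.IndexOf(fields, "sc-status");
                if (uriIdx < 0 || statusIdx < 0 || cols.Length <= Math.Max(uriIdx, statusIdx))
                    continue;

                if (!cols[uriIdx].Equals(file, StringComparison.OrdinalIgnoreCase)) continue;
                if (cols[statusIdx] == "200") completed++;
                else if (cols[statusIdx] == "206") partial++;
            }

            Console.WriteLine("200 (full): {0}, 206 (partial): {1}", completed, partial);
        }
    }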
@Kerido, I'm curious what the business case is here. Are you trying to track installs or downloads? If installs, go with @SamMeiers' solution.
However, if you're trying to track downloads, then the next question is what webserver base are you using? IIS? Apache? Something else?
In IIS, assuming you're using version 7 (or later), you could (easily?) write an HttpHandler that checks for the last bytes of the file to be sent, and on that, record a log somewhere.
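A minimal sketch of such a handler, assuming IIS 7+. The file path and log destination are placeholders, and IsClientConnected only proves the socket was still open at the last write (proxies and OS buffers can still hide an abort), so treat "completed" as a good approximation rather than a guarantee:

    using System;
    using System.IO;
    using System.Web;

    public class TrackedDownloadHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            string path = context.Server.MapPath("~/files/somefile.zip"); // placeholder
            context.Response.ContentType = "application/octet-stream";
            context.Response.AddHeader("Content-Disposition", "attachment; filename=somefile.zip");
            context.Response.AddHeader("Content-Length", new FileInfo(path).Length.ToString());
            context.Response.BufferOutput = false; // stream instead of buffering the whole file

            bool completed = false;
            using (FileStream fs = File.OpenRead(path))
            {
                byte[] buffer = new byte[64 * 1024];
                int read;
                while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
                {
                    if (!context.Response.IsClientConnected) return; // client went away: aborted
                    context.Response.OutputStream.Write(buffer, 0, read);
                }
                completed = context.Response.IsClientConnected;
            }

            if (completed)
            {
                // Placeholder logging: swap in your database or analytics call.
                File.AppendAllText(context.Server.MapPath("~/App_Data/downloads.log"),
                    DateTime.UtcNow.ToString("o") + " completed " + path + Environment.NewLine);
            }
        }
    }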
On Apache, just set up logging to tell you how many bytes were transferred (a trivial change in httpd.conf) and then parse the logs daily (awstats [amongst others] is pretty good for this, but you might have to write a sed/awk script) to find out how many full transfers were completed. It just depends on how thorough you're trying to be.
But I go back to, what's the business case for this? What does it matter if there were unfinished downloads?
It's possible to track links as a goal, which may be of use to you. However, this won't track when the download was completed.
http://www.google.com/support/analytics/bin/answer.py?answer=55529
Hope this helps.
Cheers
Tigger
I think @SamMeiers' solution is very good, but you could optimize it by calling a web service after the installation completes. One small problem: the user might be installing the app in an environment without internet access, so you may have to check whether there is a connection first.
You can set a flag when the installation starts; then, when it finishes, check whether the start flag exists. If it does, you know the app has been both downloaded and installed.
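A minimal sketch of that web service call from the installed app's side; the endpoint URL is a hypothetical placeholder, and the try/catch is there precisely for the no-internet case mentioned above:

    using System;
    using System.Net;

    static class InstallReporter
    {
        public static void ReportInstallComplete(Guid downloadId)
        {
            try
            {
                using (var client = new WebClient())
                {
                    // Fire-and-forget report back to the server.
                    client.UploadString(
                        "https://www.example.com/api/install-complete", // placeholder URL
                        "POST",
                        "downloadId=" + downloadId);
                }
            }
            catch (WebException)
            {
                // Offline install: persist the start flag and retry on next launch.
            }
        }
    }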
I'm developing an Online Examination System in C#/.NET and want to copy files to the client machine as soon as the exam starts, so that even if the internet gets disconnected the examinee can continue with the test.
You may wish to consider a client-server solution, such as WPF or WinForms, as these are more suited to this type of development. You can use ClickOnce deployment so the app is still launched from the web and updated on every run.
If you do decide to use ASP.NET, this will result in a very JavaScript-heavy site with a very slow load on the first page.
To do this, you would load all your test questions into a JavaScript data structure on the first page. Whenever the user went to the next page, you would need to collect all the answers and store them in JavaScript, then re-render the entire page from your JavaScript test definitions with no trip back to the server. Once the test was complete, you would need to send the results back to the server, so the internet must be active again by the time the test is finished.
You'll have to create a download package and provide a link for the user to click to request the files. You can't force a download.
If your exam is all in one web page, you don't need to do anything. Once the page appears in the user's browser, it has already been "copied locally".