Is there any reasonable method to allow users of a webapp to download large files? I'm looking for something other than the browser's built-in download dialog. The requirements are that the user initiates the download from the browser, and then some other application takes over, downloads the file in the background, and doesn't exit when the browser is closed. It might possibly work over HTTP, FTP or even BitTorrent. Platform independence would be nice to have, but I'm mostly concerned with Windows.
This might be a suitable use for BitTorrent. It works using a separate program (in most browsers), and will keep running after the browser is closed. Not a perfect match, but it meets most of your demands.
Maybe BITS is something for you?
Background Intelligent Transfer Service

Purpose
Background Intelligent Transfer Service (BITS) transfers files (downloads or uploads) between a client and server and provides progress information related to the transfers. You can also download files from a peer.

Where Applicable
Use BITS for applications that need to:
Asynchronously transfer files in the foreground or background.
Preserve the responsiveness of other network applications.
Automatically resume file transfers after network disconnects and computer restarts.

Developer Audience
BITS is designed for C and C++ developers.
Windows only
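Although the API is aimed at C/C++ developers, you can try BITS without writing any code: recent versions of Windows ship a bitsadmin command-line tool that creates a background download job. The URL and target path below are placeholders:

bitsadmin /transfer myDownloadJob /download /priority normal http://example.com/big-file.zip C:\Downloads\big-file.zip

As the quoted documentation says, the job automatically resumes after network disconnects and computer restarts, which matches the requirement that the download should outlive the browser.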
Try Free Download Manager. It integrates with IE and Firefox.
Take a look at this:
http://msdn.microsoft.com/en-us/library/aa753618(VS.85).aspx
It's only for IE, though.
Another way is to write a Band Object for IE, which hooks into all links and starts your application.
http://www.codeproject.com/KB/shell/dotnetbandobjects.aspx
Depending on how large the files are, pretty much all web browsers have built-in download managers. Just put a link to the file, and the browser will take over when the user clicks. You could simply recommend that people install a download manager before downloading the file, linking to a recommended free client for Windows/Linux/OS X.
Depending on how large the files are, BitTorrent could be an option. You would offer a .torrent file, which people open in a separate download client that runs independently of the browser.
There are drawbacks, mainly depending on your intended audience:
BitTorrent is rarely allowed on corporate or school networks.
It can be difficult to use (as it's a new concept to lots of people). For example, if someone doesn't have a torrent client installed, they get a tiny file they cannot open, which can be confusing.
Problems with NAT/port-forwarding/firewalls are quite common.
You have to run a torrent tracker and seed the file.
...but there are also benefits - mainly reduced bandwidth usage on the server, since people who download the file also seed it.
I am trying to find a good combination of libraries for managing real-time communication (client/server) using Haxe (only Haxe, not OpenFL or another framework based on Haxe), targeting Flash (SWF) for the client, with no preference for the server except not using Neko.
The goal is to make a simple chat and display a representation of all clients in an area. Each client can move its own representation around this area, and the others see the movement.
I found some libraries for this:
https://github.com/soywiz/haxe-ws
https://github.com/MattTuttle/hxnet
haxe-js-kit
But I'm not sure which one is best to adopt.
Do you have any suggestions/remarks/tips for choosing the best way?
Disclaimer: I wrote the library that I am sharing here.
My somewhat new library mphx may be able to help you. It can manage 'rooms' of connections, allows client-to-server and server-to-client messaging in the form of events, and, best of all, is cross-platform. It also works on the web via WebSockets.
It was originally an extension of hxnet, but I wanted it to be easier to use. Connecting and sending a 'message' with data takes just a few lines.
I have a few examples in the GitHub repository, the simplest being the 'basic' example. One of your requirements is that it doesn't rely on one of the big frameworks (OpenFL, etc.), and mphx doesn't. The basic example proves that, as it runs purely in the terminal. That being said, it can be used with HaxeFlixel; for that, see the other examples.
It sounds like your main goal is simple, graphical multiplayer. For that you can look at the 'movement' HaxeFlixel example.
Documentation is still a little slim, and the code is alpha, so it might change or break. That can probably be said for most of the libraries you listed, though. The best way to install it is like this:
haxelib git mphx https://github.com/5Mixer/mphx.git
That will not install the examples though. To run them, either download the repository as a zip, or just git clone it, and go into the examples folder.
Library: https://github.com/5Mixer/mphx
Old videos I made, most likely a little outdated:
Video 1: https://www.youtube.com/watch?v=07J0wLXwH0g
Video 2: https://www.youtube.com/watch?v=MUx2CUtsnTU
With Chrome on the verge of definitively breaking compatibility with NPAPI, and IE breaking with ActiveX, the future of Java applets is dark. Currently we actively use a secure applet that enables users at our client organizations to upload a batch of files from their file system to our servers with the click of a button. The applet has full access to any configured drive, including network drives.
With the imminent death of the applet, this functionality will be lost if we don't find an alternative. I have already explored different solutions, including the Chrome FileSystem API, but that is currently only available in Chrome (http://caniuse.com/#feat=filesystem) and has limited access.
Does anybody know of an alternative that would let us keep supporting this much-appreciated functionality? Unfortunately we are obligated to support all browsers down to IE8.
I've written a post about this here.
Google Chrome was the first to announce that it won't be supporting NPAPI anymore, and it was also the first to provide a new architecture for rewriting your code to work in its browser. Take a look at Native Messaging, which "can exchange messages with native applications using an API that is similar to the other message passing APIs". The problem is that this approach only works in Chrome; it is not something you can adapt to other browsers.
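If you do go the Native Messaging route, the wire protocol itself is simple: Chrome launches your native host and exchanges messages over stdin/stdout, each message being a 32-bit length prefix (native byte order, little-endian on x86) followed by UTF-8 JSON. Here is a minimal host sketch in Java; the class name and the reply payload are made up for illustration, and registering the host manifest with Chrome is not shown:

import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class NativeMessagingHost {
    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(System.in);
        OutputStream out = System.out;

        // Read the 4-byte length prefix, then the JSON payload from the extension.
        byte[] lenBytes = new byte[4];
        in.readFully(lenBytes);
        int len = ByteBuffer.wrap(lenBytes).order(ByteOrder.LITTLE_ENDIAN).getInt();
        byte[] message = new byte[len];
        in.readFully(message);

        // Reply in the same length-prefixed format.
        byte[] reply = "{\"received\":true}".getBytes("UTF-8");
        out.write(ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN)
                .putInt(reply.length).array());
        out.write(reply);
        out.flush();
    }
}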
A more useful approach is FireBreath, a browser plugin framework for a post-NPAPI world. Check the words below from one of the project's members:
“FireBreath 2 will allow you to write a plugin that works in NPAPI, ActiveX, or through Native Messaging; it’s getting close to ready to go into beta. It doesn’t have any kind of real drawing support, but would work for what you describe. The install process is a bit of a pain, but it works. The FireWyrm protocol that the native messaging component uses could be used with any connection that allows passing text data; it should be possible to make it work with js-ctypes on firefox or plausibly WEB-RTC or even CORS AJAX in some way. For now the only thing we needed to solve was Chrome, but we did it in a way that should be pretty portable to other technologies.”
In light of the answer provided by Uly Marins, I have researched the suggested options. Unfortunately these options weren't viable for our application, because the majority of our users do not have sufficient rights to install third-party plugins. Additionally, the API is still in beta, which won't do any good in a stable production environment.
The main problem we wanted to solve was the ability to delete files from the accessed folders. It seems that one of the major goals of removing NPAPI support was exactly to prevent this kind of possibility. Therefore we had to reduce our goals to a simpler solution that was still acceptable for our users, with additional training on how to clear the selected folder manually (because most of our users are almost computer illiterate and need to access network folders).
Long story short: the requested solution is simply not possible anymore and had to be replaced by a simpler solution plus additional training.
We need to be able to supply big files to our users. The files can easily grow to 2 or 3GB. These files are not movies or similar; they are software needed to control and develop robots in an educational capacity.
We have some conflict in our project group over how to approach this challenge. First of all, BitTorrent is not a solution for us (despite the good it could bring us). The files will be available through HTTP (not FTP), served via a file stream so we can control who gets access to them.
As a former pirate in the early days of the internet, I have often struggled with corrupt files and used file hashes and file sets to minimize the amount of re-downloading required. I advocate a small application that downloads and verifies a file set, and extracts the big install file once it is completely downloaded and verified.
My colleagues don't think this is necessary and point to the TCP/IP protocol's inherent capabilities for avoiding corrupt downloads. They also mention that Microsoft has moved away from a download manager for their MSDN files.
Are corrupt downloads still a widespread issue or will the amount of time we spend creating a solution to this problem be wasted, compared to the amount of people who will actually be affected by it?
If a download manager is the way to go, what approach would you suggest we take?
-edit-
Just to clarify: is downloading 3GB of data in one chunk over HTTP a problem, or should we make our own EXE that downloads the big file in smaller chunks (and verifies them)?
You do not need to build your own download manager; you can use a few really smart approaches instead:
1. Split the files into smaller chunks, say 100MB each. That way, even if a download is corrupted, the user only ends up re-downloading that particular chunk.
2. Most web servers are capable of understanding and serving Range headers. You can recommend that users use a download manager or browser add-on that can take advantage of this capability; if your users are on Unix/Linux systems, wget is such a utility.
It's true that TCP/IP has mechanisms for preventing corruption, but it basically assumes that the network is still up and accessible. #2 mentioned above can be one possible workaround for the problem where the network goes down completely in the middle of a download.
And finally, it is always good to provide a file hash to your users. This not only verifies the download but also ensures the security of the software you are distributing.
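If you do write a small downloader EXE, the core of it is only a Range request plus a hash check. A rough sketch in Java - the URL, the chunk boundaries, and the published per-chunk hash you would compare against are all assumptions:

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.security.MessageDigest;

public class ChunkDownloader {
    // Downloads bytes [start, end] of the remote file into dest and
    // returns the SHA-256 hash of the chunk for verification.
    static byte[] downloadChunk(String fileUrl, long start, long end, File dest) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
        conn.setRequestProperty("Range", "bytes=" + start + "-" + end); // server replies 206 Partial Content
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(dest)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                sha256.update(buf, 0, n); // hash while writing to disk
                out.write(buf, 0, n);
            }
        }
        return sha256.digest(); // compare against the hash you publish per chunk
    }
}

If a chunk's hash doesn't match the published one, only that chunk is re-requested, which is exactly the benefit over one monolithic 3GB download.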
HTH
I'm going to create a Java program that allows "locking" a USB drive by making its files accessible only with a password. Similar software that does this is USB Safeguard.
Here is what I am thinking of doing:
Store all files in a single archive on the USB drive.
Encrypt the archive using AES or Blowfish.
Hide the archive.
The problem is, how can I "unlock" the USB? What approach can I take here? Here is what I have thought of:
RAM disk - It is very hard, if not impossible, to load a RAM disk from an encrypted archive. While it may be plausible in C++, I think it would be much harder in Java and might involve messing with the system classes, which would kill the compatibility of the software and defeat the whole purpose of using Java.
Loading the unencrypted archive onto the USB drive - Nobody likes waiting 10 minutes just to view a file on a USB drive, and copying all the files might take some time. Also, what about free space on the drive?
Loading the unencrypted archive onto the hard drive - While very insecure and error-prone, this looks like the only feasible way to get it done.
Creating a custom file browser allowing the user to browse the archive - Do you use WinRAR to browse your files? Would you like to? No. Creating a custom file browser will take a lot of time, and again, it is an error-prone and user-unfriendly approach.
I can't think of any other way of doing this. Can anyone think of a better way? Note that this is going to be free and open-source software.
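For what it's worth, the encryption step itself (step 2 above) is straightforward with the standard javax.crypto API. Here is a rough sketch of what I have in mind; the iteration count, key size, and the choice to prepend salt and IV to the output file are just illustrative decisions:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class ArchiveEncryptor {
    static void encrypt(File archive, File encrypted, char[] password) throws Exception {
        byte[] salt = new byte[16];
        byte[] iv = new byte[16];
        SecureRandom rnd = new SecureRandom();
        rnd.nextBytes(salt);
        rnd.nextBytes(iv);

        // Derive an AES key from the password with PBKDF2 (part of the standard library).
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        byte[] keyBytes = factory.generateSecret(
                new PBEKeySpec(password, salt, 65536, 128)).getEncoded();
        SecretKeySpec key = new SecretKeySpec(keyBytes, "AES");

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

        try (InputStream in = new FileInputStream(archive);
             OutputStream raw = new FileOutputStream(encrypted);
             OutputStream out = new CipherOutputStream(raw, cipher)) {
            // Salt and IV are not secret; store them in the clear so the
            // unlock code can re-derive the key and initialize the cipher.
            raw.write(salt);
            raw.write(iv);
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}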
TrueCrypt is free, open-source software for storing encrypted files on a storage device (e.g. a USB drive). It runs on Windows, Linux, and Mac OS X, and even allows hidden volumes. I would start with their source code and proceed from there.
I'm planning on doing more coding from home but in order to do so, I need to be able to edit files on a Samba drive on our dev server. The problem I've run into with several editors is that the network latency causes the editor to lock up for long periods of time (Eclipse, TextMate). Some editors cope with this a lot better than others, but are there any file system or other tweaks I can make to minimize the impact of lag?
A few additional points:
There's a policy against having company data on personal machines, so I'd like to avoid checking out the code locally.
The mount is over a PPTP VPN connection.
The share is mounted on a Linux or OS X client.
Use a source control system — Subversion, Perforce, Git, Mercurial, Bazaar, etc. — so you're never editing code on a shared server. Instead you should be editing a local work area and committing changes to a repository located on the network.
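For example, with Git the day-to-day flow would look like this (the repository URL and branch are placeholders):

git clone ssh://devserver/srv/git/project.git
# ...edit files in the local working copy with any editor...
git commit -am "Describe the change"
git push origin master

Only the clone/pull/push operations touch the VPN, so editor responsiveness is unaffected by network latency.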
Also, convince your company to amend their policy such that company code is allowed on personal machines if it's kept on an encrypted volume. Encrypted disk images that you can use for this are trivial to create using Disk Utility, and can use strong cryptography. You can get even more security by not storing your encryption passphrase in your keychain, and instead typing it every time you mount the encrypted volume; this means that even if your local user account is compromised, as long as you don't have the volume mounted, nobody else will be able to mount it.
I did this all the time when I was consulting and none of my clients — some of whom had similar rules about company code — ever had a problem with it once I explained how things worked. (I think some of them even started using encrypted disk images even within their offices.)
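If you go the encrypted-disk-image route, creating one from the command line looks something like this (the size, filesystem, volume name, and filename are placeholders):

hdiutil create -size 4g -fs HFS+J -encryption AES-256 -volname WorkCode WorkCode.dmg

hdiutil will prompt you for the passphrase when creating and mounting the image; just don't save it in your keychain if you want the extra security described above.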
The Remate plugin simply disables TextMate's dreadful refresh-on-focus feature.
Download, unpack, double-click, and choose "Disable Refresh on Regaining Focus" from the "Window" menu (you can refresh manually by right-clicking the project in the drawer). Voila!
If you are accessing the data from your personal computer, it is in your RAM, so we will assume that you just can't store it on your hard drive, floppy, USB stick, etc.
Your solution is a RAM drive. Copy the files you need to edit there using whatever method you prefer (I would suggest source control), and then you can edit them without lag. When you are done, commit them back to the server.
As was pointed out, your editor may be caching changes to your temp directory, or maybe even your swap file (if it is in memory, it can get swapped out). The solution to that is to get a much larger RAM drive and run a virtual machine inside it. Not sure what OS you are running, but you can get a pretty slim install of most OSes if all you are doing is editing source code.
If you don't have enough RAM, get a Gigabyte i-RAM solid-state drive and remove the battery; that way it will lose everything when you power down.
Set VMware to not allow the host OS to swap out any of the virtual machine's memory. Keep a baseline VM on your hard drive and copy it to your RAM drive before booting it up. Then you can use the "hard drive" in the VM like a normal disk, even though it is actually RAM.
It might be a good idea to run a secure erase on your RAM drive before powering down. Also keep in mind that it has been found that if you super-cool a RAM chip before removing it from a running computer and place it in another machine quickly enough, the data may still be intact.
I guess it all comes down to how detailed that policy is, and how it is interpreted.
Good luck!
Short answer: there is no trick. CIFS is really geared towards LANs with reasonably calm traffic, so you have zero chance of avoiding intermittent lag when accessing a share through a VPN. At some point the editor needs to access the file with blocking I/O, because it makes no real sense to do otherwise.
You could switch editors and use Emacs + TRAMP, which is designed for working on remote files.
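With TRAMP you open remote files directly by path, and the file only crosses the network on load and save rather than on every read the editor makes. For example (the method, user, host, and path are placeholders):

C-x C-f /ssh:you@devserver:/srv/code/main.c

TRAMP also has an smb method (it drives smbclient under the hood), so you can reach the same Samba share without mounting it at all:

C-x C-f /smb:you@devserver:/share/code/main.c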