filesystem encryption over the wire

Let's assume the following scenario: I need to open an encrypted filesystem (like I can do with TrueCrypt locally) over a network, but
I want the encryption/decryption to happen strictly on the client, so no magic tokens get outside my machine
I want to read/write the filesystem on an on-demand basis: my encrypted filesystem might contain 3 GB of files, but if I only need to edit a 1 MB file, my bandwidth consumption should stay far below that
It seems to me the only way to satisfy both requirements is with block-level encryption: the client decrypts the filesystem structure, requests specific blocks over the network, edits some of the requested blocks, and sends back the updated (already encrypted) blocks (a rough sketch of what I have in mind is below).
What tools exist for that? I've heard that eCryptfs does block-level encryption, but I'm not sure whether there is a nice frontend for it as there is with TrueCrypt.
My understanding is that with TrueCrypt I would need to download the full 3 GB volume, open it, edit some files, unmount it, and then re-upload the whole 3 GB. Is this correct?
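To illustrate what I have in mind (purely a sketch, nothing I have working: the block size, URL scheme and key handling below are made up, and a real scheme would derive a per-block IV/tweak from the block number):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class RemoteBlockSketch {
    static final int BLOCK_SIZE = 4096; // hypothetical block size

    // Fetch one encrypted block with an HTTP Range request and decrypt it locally.
    static byte[] readBlock(String volumeUrl, long blockIndex, byte[] key, byte[] iv) throws Exception {
        long offset = blockIndex * BLOCK_SIZE;
        HttpURLConnection conn = (HttpURLConnection) new URL(volumeUrl).openConnection();
        conn.setRequestProperty("Range", "bytes=" + offset + "-" + (offset + BLOCK_SIZE - 1));

        byte[] encrypted = new byte[BLOCK_SIZE];
        try (InputStream in = conn.getInputStream()) {
            int read = 0, n;
            while (read < BLOCK_SIZE && (n = in.read(encrypted, read, BLOCK_SIZE - read)) != -1) {
                read += n;
            }
        }

        // Decryption happens on the client; the key never leaves this machine.
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return cipher.doFinal(encrypted);
    }

    // Writing would be the reverse: re-encrypt the modified block locally and upload
    // only that block back to the server (however the server lets you write a range).
}
```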

You can use a protocol that allows you to connect to a raw disk over the network, then run a standard partition-encryption tool (like TrueCrypt) on top of that.
Examples of such protocols are NBD (Network Block Device) and iSCSI (SCSI over IP).

If you are looking for a file system library, then our SolFS offers exactly what you need. You can keep the storage on the server (encrypted) and open it from the client. When opening, only some pages are downloaded, and they are decrypted on the client side (and re-encrypted and uploaded back when changed).

Network block devices should make this possible. Not sure how stable that protocol is or whether it even supports multiple clients.

Related

What is the difference between FTP and HTTP?

HTTP is used to display information and can also be used to transfer files from one host to another.
FTP is used to transfer files from one host to another.
So it seems to me that FTP and HTTP are doing almost the same work. What, then, is the exact benefit of using FTP when I can do the same thing with HTTP?
Correct me if I am wrong.
Thanks
FTP
FTP is the File Transfer Protocol; as the name says, it is for transferring files.
FTP is significantly older; it is a protocol designed to enable the transfer of files over a long-running session. There is a wide array of commands, and the intent is to allow you to navigate and browse a remote file system and retrieve files (originally over a separate data connection).
FTP still sees a lot of use, but many files are actually transferred over HTTP instead.
HTTP
The HyperText Transfer Protocol was originally designed to transfer hypertext documents and the various assets needed to render them. In practice, this is the way information is transferred on the web -- HTML, CSS, images, and data are all transferred this way between web servers and web browsers, as well as between one server and another.
HTTP was designed to retrieve a resource from a URL that may or may not match the remote file system (in many web apps, the structure of the URLs has very little to do with the file locations). There is often only a single request per HTTP connection, and the data uses the same connection as the request.
So it seems to me that FTP and HTTP are doing almost the same work.
Not really. FTP can be used for file transfer and not much more. HTTP is far more flexible, since it not only transfers byte streams but also metadata (what kind of data this is), supports implicit compression and client-specific responses (for example, based on supported languages), offers more flexible authentication, and is tuned for less overhead (i.e. it can be faster).
What, then, is the exact benefit of using FTP when I can do the same thing with HTTP?
There is no real benefit to FTP today. On the contrary, compared to alternatives like HTTP, the design of FTP leads to lots of problems in today's infrastructure, where NAT is heavily used (i.e. multiple internal systems behind a single router with a public IP address).
FTP remains mostly in places where clients or servers don't support more modern ways of exchanging files. A typical example is cheap web hosting, where files on the server are often updated via FTP, since many tools have FTP built in and it is easy to set up on the server too. Alternatives like WebDAV (HTTP-based) or SFTP (SSH-based) are less used here because they have less support in clients and servers, even though they would offer more security, more flexibility, and fewer problems.

How can I split a single direct ftp download into multiple simultaneous parts?

Is there any way, commonly known or purely theoretical, to take a single link to a single file -- say, a typical file download in your regular browser -- but split the transfer itself into multiple parts from the client side?
Essentially, I want to know whether it's possible for a computer to split a single network file transfer into two (or more), so that if the computer has multiple network cards (assuming the ISP isn't the bottleneck), it can effectively download the file at twice the rate. Assume that the download source isn't doing anything to monitor this probably-angering behavior.
FTP supports this via the REST command: http://www.ipswitch.com/support/ws_ftp-server/guide/v5/a_ftpref3.html#10694
Clients usually do feature detection on the FTP server to see if it supports this by issuing the FEAT command.
HTTP also supports this via the Range request header: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35
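Rolling it yourself over HTTP is only a few lines; here is a sketch (the URL and byte ranges are made up, and it assumes the server honours Range -- look for a 206 Partial Content response):

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangeDownloadSketch {
    // Download just the bytes [from, to] of a remote file using the Range header.
    static byte[] downloadRange(String fileUrl, long from, long to) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
        conn.setRequestProperty("Range", "bytes=" + from + "-" + to);
        // A server that supports ranges answers 206 Partial Content; 200 means it ignored the header.
        if (conn.getResponseCode() != HttpURLConnection.HTTP_PARTIAL) {
            throw new IllegalStateException("Server does not support Range requests");
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = conn.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical example: fetch the first and second megabyte in two requests
        // (a real downloader would run these on separate threads/connections).
        byte[] part1 = downloadRange("http://example.com/big.iso", 0, 1_048_575);
        byte[] part2 = downloadRange("http://example.com/big.iso", 1_048_576, 2_097_151);
        System.out.println(part1.length + " + " + part2.length + " bytes downloaded");
    }
}
```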
My favourite client that can do the above is aria2: http://aria2.sourceforge.net/

mounting a truecrypt file across a network

If I have a TrueCrypt file on a shared drive and I mount it using the shared path, does my password data get sent in plain text across the network? Basically my question is: is it safe to mount a TrueCrypt file across a network without copying the file to your local machine first?
Your password data is not sent across the network, because the cryptographic operations take place on your computer, in the TrueCrypt driver. The password is used to derive a key, which is then used on your computer to decrypt the encrypted sectors sent across the network.
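To make that concrete, conceptually it looks like the following (a sketch only -- TrueCrypt's actual header format, KDF parameters, and ciphers differ):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class LocalKeyDerivationSketch {
    public static void main(String[] args) throws Exception {
        char[] password = "correct horse battery staple".toCharArray(); // never sent anywhere
        byte[] salt = {1, 2, 3, 4, 5, 6, 7, 8};                         // read from the volume header

        // The decryption key is derived on the local machine from password + salt.
        // Only already-encrypted sectors travel over the network; password and key stay here.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
        byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                     .generateSecret(spec)
                                     .getEncoded();
        System.out.println("Derived a " + (key.length * 8) + "-bit key locally");
    }
}
```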
The TrueCrypt FAQ has a section on this. I believe item 2 is what you want to achieve. Their warning is that someone looking at the encrypted traffic could get some side-channel information, like the amount of data read and written, and the offsets within the encrypted file.
Unless you want protection from your government or another well-funded attacker, I believe you should be OK, password-wise. You might test what happens when a network failure occurs while writing a large file; it might corrupt the file system you mounted.
What I did:
mounted the TrueCrypt drive and a TrueCrypt container with VeraCrypt (which is newer)
created a Windows (Samba) and Mac (AFP) share of the drive and container, with a password in the share settings (whatever software you use)
Mounting the container prevented it from being overwritten by someone else opening the container directly.

can the Scanner class be used to read files off of a different computer through the internet?

I need to be able to read and write .txt files to a folder that is on a server. If this is possible, do you have any advice on how to get started?
Scanner can work with a number of sources, including a File, a String, and an InputStream.
If the remote computer's filesystem is mounted via SMB ("Windows shares"), NFS, or similar, then it can be accessed transparently "as a normal file".
If the remote file is accessible as the response content of an HTTP GET request, then getInputStream of HttpURLConnection might be appropriate. (Or, just read the entire response into a String and use that...)
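For example (a minimal sketch -- the URL is made up and assumes the file is reachable with a plain GET):

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Scanner;

public class RemoteTextFileSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL; the .txt file has to be served by something (web server, servlet, ...).
        URL url = new URL("http://example.com/files/notes.txt");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();

        // Feed the response body straight into a Scanner and read it line by line.
        try (Scanner scanner = new Scanner(conn.getInputStream(), "UTF-8")) {
            while (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        }
        // Writing back is a different story over HTTP (you need a PUT/POST endpoint);
        // with an SMB/NFS mount you can just use the ordinary java.io file APIs.
    }
}
```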
Otherwise, find out how the remote file/resource can be accessed and use that: break down the problem and requirements, come up with a set of solutions, pick the least painful approach, and get to work :-)
Happy coding.

Efficient reliable incremental HTTP multi-file (or whole directory) upload software

Imagine you have a web site to which you want to send a lot of data. Say 40 files, totaling the equivalent of 2 hours of upload bandwidth. You expect to have 3 connection losses along the way (think: mobile data connection, WLAN vs. microwave). You can't be bothered to retry again and again; this should be automated. Interruptions should not cause more data loss than necessary. Retrying a complete file is a waste of time and bandwidth.
So here is the question: Is there a software package or framework that
synchronizes a local directory (contents) to the server via HTTP,
is multi-platform (Win XP/Vista/7, MacOS X, Linux),
can be delivered as one self-contained executable,
recovers partially uploaded files after interrupted network connections or client restarts,
can be generated on a server to include authentication tokens and upload target,
can be made super simple to use
or what would be a good way to build one? (A rough sketch of the resume logic I have in mind is at the end of this question.)
Options I have found until now:
Neat packaging of rsync. This requires an rsync (server) instance on the server side that is aware of a privilege system.
A custom Flash program. As I understand it, Flash 10 is able to read a local file as a byte array (indicated here) and is obviously able to speak HTTP to the originating server. Seen in question 1344131 ("Upload image thumbnail to server, without uploading whole image").
A custom native application for each platform.
Thanks for any hints!
Related work:
HTML5 will allow multiple files to be uploaded or at least selected for upload "at once". See here for example. This is agnostic to the local files and does not feature recovery of a failed upload.
Efficient way to implement a client multiple file upload service basically asks for SWFUpload or YUIUpload (Flash-based multi-file uploaders, otherwise "stupid")
A comment in question 997253 suggests JUpload - I think using a Java applet will at least require the user to grant additional rights so it can access local files
GearsUploader seems great but requires Google Gears - that is going away soon
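If I were to build one myself, the core resume logic could be as simple as the sketch below. It is entirely hypothetical: it assumes a server that can report how many bytes of a given file it already holds, and that accepts the remainder via POST with a made-up X-Upload-Offset header.

```java
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResumableUploadSketch {
    // Ask the (hypothetical) server how much of this file it already has,
    // then send only the missing tail. Re-run after every interruption.
    static void uploadWithResume(String uploadUrl, String localPath) throws Exception {
        long alreadyUploaded = queryServerOffset(uploadUrl); // 0 on the first attempt

        try (RandomAccessFile file = new RandomAccessFile(localPath, "r")) {
            file.seek(alreadyUploaded);

            HttpURLConnection conn = (HttpURLConnection) new URL(uploadUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("X-Upload-Offset", String.valueOf(alreadyUploaded)); // made-up header
            conn.setChunkedStreamingMode(8192); // don't buffer the whole file in memory

            try (OutputStream out = conn.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = file.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            System.out.println("Server replied: " + conn.getResponseCode());
        }
    }

    // Placeholder: in a real protocol this would be a HEAD request or a small status resource.
    static long queryServerOffset(String uploadUrl) {
        return 0;
    }
}
```

Run once per file; authentication tokens and the upload target could be baked into the generated executable.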
