mounting a truecrypt file across network - networking

If I have a TrueCrypt file on a shared drive and I mount it by using the shared path, does my password data get sent in plain text across the network? Basically my question is: is it safe to mount a TrueCrypt file across a network without copying the file to your local machine first?

Your password data is not sent across the network, because the cryptographic operations take place on your computer, in the TrueCrypt driver. The password is used to derive a key that is used on your computer to decrypt the encrypted sectors sent across the network.
The TrueCrypt FAQ has a section on this. I believe item 2 is what you want to achieve. Their warning is that someone looking at the encrypted traffic could get some side-channel information, like the amount of data read and written, and the offset in the encrypted file.
Unless you need protection from your government or another well-funded attacker, I believe you should be OK, password-wise. You might test what happens when a network failure occurs while writing a large file; it might corrupt the file system you mounted.
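As a rough illustration of that point, here is a minimal sketch in Go of deriving a key from a password locally, so that only already-encrypted sectors ever cross the network. PBKDF2 stands in for TrueCrypt's actual header-key derivation, and the salt, iteration count, and CTR mode here are illustrative assumptions, not TrueCrypt's real on-disk format.

    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/sha512"
        "fmt"

        "golang.org/x/crypto/pbkdf2"
    )

    func main() {
        // Everything below happens on the local machine; only ciphertext would
        // ever be read from or written to the network share.
        password := []byte("correct horse battery staple") // never transmitted
        salt := []byte("illustrative-salt")                // assumption: a real volume stores a random salt in its header

        // Stretch the password into a 256-bit key (PBKDF2-HMAC-SHA512 as a stand-in).
        key := pbkdf2.Key(password, salt, 500000, 32, sha512.New)

        // Decrypt one "sector" fetched from the share (AES-CTR as a stand-in mode).
        block, err := aes.NewCipher(key)
        if err != nil {
            panic(err)
        }
        iv := make([]byte, aes.BlockSize)
        encryptedSector := make([]byte, 512) // pretend this came over the network
        plainSector := make([]byte, len(encryptedSector))
        cipher.NewCTR(block, iv).XORKeyStream(plainSector, encryptedSector)

        fmt.Printf("derived key (prefix): %x\n", key[:8])
    }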

What I did:
mounted the TrueCrypt drive and a TrueCrypt container with VeraCrypt (which is newer)
created a Windows (Samba) and Mac (AFP) share of the drive and container, with a password in the share settings (whatever software you use)
Mounting the container prevented it from being overwritten by someone else opening the container directly.

Related

Is there any reason to encrypt a VM? (already have file vault encryption for Mac enabled)

I currently have File Vault enabled for encryption on my Mac. Is there any need to encrypt a virtual machine that is installed on my Mac? Or does File Vault cover that?
I am running macOS Sierra 10.12 (host) and using VirtualBox for my VMs. The guest OS is Ubuntu 14.04. If someone got access to my Mac that is protected by File Vault, would they have any way of accessing my Ubuntu VM?
As #manannan mentioned, it is not easy to give you a proper answer to your question given the little context you provided. But I'll give it a try - feel free to edit your question and I'll update the answer.
With File Vault activated on your Mac, every file that is written to the encrypted disk will be encrypted. A virtual machine consists of one or more virtual disks which are stored on the Mac's file system. Hence they are encrypted on your Mac and can't be read by someone who disassembles your Mac and tries to read your hard disk.
If you export the virtual disk file (for example, onto non-encrypted USB flash storage), the virtual machine is no longer encrypted - so for a safe transfer to another user, you have to come up with your own encryption.
Having the virtual machine encrypt the disk "again" would enable you to safely export and transfer your disk image to another user but would also add overhead to using the machine, as the virtualized OS would have to take care of encryption on top of File Vault.
For a customer project which handled very sensitive data, we once created a virtual machine inside a TrueCrypt-encrypted folder on the Mac. That way you basically get the same effect as having the virtualized OS encrypt its hard disks - but I think that TrueCrypt is no longer actively maintained, and I don't know of an alternative product.
Hope this helps.

How to authorize a file to be opened in just one device using encryption

I have developed software that opens encrypted files. The files are encrypted on the server side with a key generated from a mixture of the device's hard disk serial number and MAC ID.
To prevent that file from being opened on any other device, the client software generates the same key from the device's hard disk serial number and MAC ID, and decrypts the file with that key.
Is this the correct way to prevent unauthorized computers from opening the file? If yes, what if someone debugs the assembly code of my software and understands how the key is generated?
Is this the correct way to prevent unauthorized computers from opening the file?
There is no one "correct" way. Your way will work great right up until...
what if someone debugs the assembly code of my software and understands how the key is generated?
that happens. Unless you can lock down the hardware so well that no one can disassemble and debug into your executable, someone will always be able to reverse-engineer your scheme, get the key, and open the file.
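For context, here is a minimal sketch of the kind of hardware-bound key derivation the question describes, assuming a SHA-256 hash over the two identifiers (the disk serial is a hard-coded placeholder, since reading it is OS-specific, and this is not the asker's actual scheme). It also makes the weakness concrete: anyone who recovers the two identifiers and this algorithm can reproduce the key.

    package main

    import (
        "crypto/sha256"
        "fmt"
        "net"
    )

    // deriveDeviceKey builds a key from machine identifiers. Anyone who learns
    // the identifiers and this algorithm (e.g. by disassembling the client) can
    // rebuild the same key, which is exactly the point made above.
    func deriveDeviceKey(diskSerial string) ([]byte, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        var mac string
        for _, iface := range ifaces {
            if len(iface.HardwareAddr) > 0 {
                mac = iface.HardwareAddr.String() // first interface with a MAC address
                break
            }
        }
        sum := sha256.Sum256([]byte(diskSerial + "|" + mac))
        return sum[:], nil
    }

    func main() {
        // Placeholder serial: obtaining the real one is OS-specific (WMI, ioctl, ...).
        key, err := deriveDeviceKey("EXAMPLE-DISK-SERIAL-1234")
        if err != nil {
            panic(err)
        }
        fmt.Printf("device-bound key: %x\n", key)
    }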

SqLite3 NFS mount issue with locking - can I use something like CIFS nobrl?

I'm having a locking problem where an SQLite3 database is permanently locked when created on an NFS file system. I have read that an option called nobrl can help with this issue when the file system in question is CIFS (it's an option to the mount command).
From: http://linux.die.net/man/8/mount.cifs
nobrl
Do not send byte range lock requests to the server. This is necessary for certain applications that break with cifs style mandatory byte range locks (and most cifs servers do not yet support requesting advisory byte range locks).
Is there any way to stop byte-range lock requests in NFS if they occur, or am I heading in the wrong direction by even thinking about this? I'm happy to change the mount command, as was done for the CIFS solution.
I recommend opening your SQLite DB in software with the nolock parameter enabled, e.g. in Go:
sql.Open("sqlite3", "file:/media/R/Databases//your.db?nolock=1")
where /media/R is a mounted Windows NFS network drive. Be careful: you have to lock your DB interactions in software, otherwise you could corrupt your DB when accessing it simultaneously.
You can read more about sqlite parameters here:
https://www.sqlite.org/c3ref/open.html
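Expanding that one-liner into a fuller sketch (using the mattn/go-sqlite3 driver; the path and table are illustrative), with a mutex to serialize access in-process, since nolock=1 disables SQLite's own file locking:

    package main

    import (
        "database/sql"
        "log"
        "sync"

        _ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
    )

    // Serialize all DB access ourselves, since nolock=1 turns off file locking.
    var dbMu sync.Mutex

    func main() {
        // nolock=1 skips the byte-range/POSIX locks that misbehave on NFS mounts.
        db, err := sql.Open("sqlite3", "file:/media/R/Databases/your.db?nolock=1")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        dbMu.Lock()
        _, err = db.Exec(`CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)`)
        dbMu.Unlock()
        if err != nil {
            log.Fatal(err)
        }
    }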

filesystem encryption over the wires

Let's assume the following scenario: I need to open an encrypted filesystem (like I'm able to do with TrueCrypt locally) over a network, but
I want the encryption/decryption to happen strictly on the client, so no magic tokens leave my machine
I want to read/write the filesystem on an on-demand basis: my encrypted filesystem might contain 3 GB of files, but I may only need to edit a 1 MB file, so my bandwidth consumption should not be a significant portion of that
It seems to me the only way to satisfy both requirements is with block-level encryption, so the client decrypts the filesystem structure, requests specific blocks over the network, edits some of the requested blocks, and sends back the updated (already encrypted) blocks.
What tools exist for that? I've heard that eCryptFS does block-level encryption, but I'm not sure if there is a nice frontend for it as with TrueCrypt.
My understanding is that with TrueCrypt you would need to download the full 3 GB partition, open it, edit some files, unmount it and then resend the whole 3 GB. Is this correct?
You can use a protocol that allows you to connect to a raw disk over the network, then run a standard partition-encryption tool (like TrueCrypt) on top of that.
Examples of such protocols are NBD (Network Block Device) and iSCSI (SCSI over IP).
If you are looking for a file system library, then our SolFS offers exactly what you need. You can keep the storage on the server (encrypted) and open it from the client. When opening, only some pages will be downloaded, and they will be decrypted on the client side (and re-encrypted and uploaded back upon change).
Network block devices should make this possible. Not sure how stable that protocol is or whether it even supports multiple clients.
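A minimal sketch of the block-level idea from the question, using AES-XTS from golang.org/x/crypto/xts (the sector size and random key here are illustrative, not the format any particular tool uses). Only the sectors you actually touch need to be downloaded, decrypted, re-encrypted, and uploaded:

    package main

    import (
        "crypto/aes"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/xts"
    )

    const sectorSize = 512 // illustrative sector size

    func main() {
        // XTS needs a double-length key: two AES-256 keys, i.e. 64 bytes.
        key := make([]byte, 64)
        if _, err := rand.Read(key); err != nil {
            panic(err)
        }
        c, err := xts.NewCipher(aes.NewCipher, key)
        if err != nil {
            panic(err)
        }

        // Pretend sector 42 is being prepared for upload to the remote store.
        plaintext := make([]byte, sectorSize)
        copy(plaintext, "hello, sector 42")
        encrypted := make([]byte, sectorSize)
        c.Encrypt(encrypted, plaintext, 42) // the sector number is the XTS tweak

        // Client-side edit: fetch just this sector, decrypt it, change it,
        // re-encrypt it, and upload only this sector again.
        decrypted := make([]byte, sectorSize)
        c.Decrypt(decrypted, encrypted, 42)
        fmt.Printf("round trip ok: %q\n", decrypted[:16])
    }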

beginner backend web programming questions about SSH

So, I've taken a handful of programming courses (object-oriented, web) but never had "hands-on" projects outside of the coding itself.
Now I'm trying to figure out what this SSH stuff is about. I can't even figure out which client to use, so I picked FileZilla for now.
My question is: where can I read more about terms like ports and so on, in a way that I'm not learning aimlessly?
Thanks!
Basically, SSH is a way to tell another computer exactly what to do over the Internet. You can execute any command the remote system has, and that your user has permission for.
The Internet
The Internet runs on a series of protocols collectively named TCP/IP. TCP/IP defines a way to find and address individual computers (IP) and a way to communicate between them (TCP).
You can think of computers on the Internet as a large collection of office buildings all close together. Each office has the exact same number of windows: 65535. Offices (computers) communicate by stringing channels between windows (ports). Each channel has two ends, called sockets. Each socket is associated with a port on the respective computer. We send data back and forth, and then the connection is closed.
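A tiny sketch of that "windows and channels" picture in Go (the loopback address and port 8080 are arbitrary choices): one program listens on a port, another connects to it, and the two ends of the resulting connection are the sockets.

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Server: open "window" 8080 and wait for a channel to be strung to it.
        ln, err := net.Listen("tcp", "127.0.0.1:8080")
        if err != nil {
            panic(err)
        }
        defer ln.Close()

        go func() {
            conn, err := ln.Accept() // the server's end of the channel (a socket)
            if err != nil {
                return
            }
            defer conn.Close()
            fmt.Fprintln(conn, "hello from the server")
        }()

        // Client: connect from a random local port to port 8080 on the server.
        conn, err := net.Dial("tcp", "127.0.0.1:8080")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        buf := make([]byte, 64)
        n, err := conn.Read(buf)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client received: %s", buf[:n])
    }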
Client/Server
There are two types of computers on the Internet: clients, and servers. Clients request information, and servers provide it. Ports 1-1024 are reserved for servers, 1 port per protocol. The full list is here, and as you can see, it is not without contention.
Let's say you visit a website
Your browser, the client program, sees that you typed "stackoverflow.com", and using DNS, discovers that stackoverflow.com is computer number 64.34.119.12. This is its IP address. It allows your computer to find the network stackoverflow.com is located in, route to it, and establish a connection to the Stack Overflow web server. The web server is a program that accepts client requests from a browser like yours.
They speak in a protocol called HTTP - it allows your browser to request a page determined by a URL. The server sees the request, runs a program to construct a web page (or retrieves an HTML file, image, or any other file), and sends the result back to the browser. Port 80 has been reserved for HTTP. That means, your computer chooses a random port to connect from, and connects to port #80 on the server.
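Roughly the same steps in code (the IP you get back today will differ from the one quoted above, and the site may immediately redirect you to HTTPS on port 443): resolve the name with DNS, then make an HTTP request, which implies port 80.

    package main

    import (
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // DNS: turn the name into one or more IP addresses.
        addrs, err := net.LookupIP("stackoverflow.com")
        if err != nil {
            panic(err)
        }
        fmt.Println("resolved to:", addrs)

        // HTTP: connect to port 80 (implied by the http:// scheme) and request a page.
        resp, err := http.Get("http://stackoverflow.com/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println("status:", resp.Status, "bytes received:", len(body))
    }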
Unix and the shell
The majority of the Web (the Internet, even) runs on an OS called Linux (a Unix variant), instead of something like Windows. Unix systems provide a command-line interface, running a program called a "shell", which is a direct interface to the system. The shell accepts input, one command at a time. You type text in, and it spits out the output of the command.
Secure Shell
SSH allows you to do this securely. All data traffic is encrypted using a well-studied published "public-key" cryptographic system. (In fact, it was major news when a vulnerability was discovered in a supporting encryption scheme, see these advisories).
SSH is a protocol commonly running on port 22. Anyone with a computer on the Internet (not behind a firewall) can run an SSH server, and allow users to connect to it and execute commands.
The majority of systems administrators and software developers using Unix on the server use SSH to configure, control, and upload programs to that server (located in some data center somewhere).
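As a sketch of what "connect and execute commands" looks like programmatically, here is a minimal client using golang.org/x/crypto/ssh (the host, user, and password are placeholders, and a real client should verify the server's host key rather than using InsecureIgnoreHostKey):

    package main

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        config := &ssh.ClientConfig{
            User:            "alice",                                  // placeholder user
            Auth:            []ssh.AuthMethod{ssh.Password("secret")}, // placeholder credential
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // demo only: verify host keys in real code
        }

        // Connect to the SSH server on the conventional port 22.
        client, err := ssh.Dial("tcp", "example.com:22", config)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Run one command on the remote machine, just as if you had typed it into the shell.
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("uname -a")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }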
More
There are many many more details to all of this. Any term or acronym above can be typed into Wikipedia for pretty comprehensive information. There are plenty of books on Unix, Networking, and Web programming.
SSH was originally a secure replacement for telnet. The need for SSH arose from the fact that telnet does not support encryption, and therefore everything (commands, output and passwords) was plainly visible on the network for all to see.
Because in the beginning SSH encryption (based on key exchange) was supposed to be strong (and it was indeed a marked improvement), and was open source, it took off rapidly and several extensions to the protocol were added, especially in the domain of remote file management and transfer.
In addition, SSH is used in tunnelling and port forwarding configurations.
In the domain of file copy there are several options.
SCP: secure cp (copy), an early file transfer extension to SSH, inspired by rcp.
SFTP: SSH File Transfer Protocol, a newer SSH extension to support file copy and browsing (but not really like FTP with 2 ports). It is more feature-rich than both scp and ftp. Think of it as a remote file system protocol (however somewhat slower than scp).
FTPS: FTP over TLS/SSL. Needs 2 ports like ftp, one for commands and one for data. Both connections can be encrypted.
Secure FTP: real FTP tunnelled over SSH.
The site to which you will need to connect probably offers SFTP. You just need to declare the remote server connection configuration in the FileZilla site manager. You will need to provide the server IP address or name, the SSH server port (usually 22, but there are other possibilities; you should have been provided with this info), and select SFTP as the server type. When the connection is established, accept the public key and that should be it.
You can then drop your development files onto the remote server.
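If you later want to do the same transfer from code instead of FileZilla, here is a minimal sketch using github.com/pkg/sftp on top of an SSH connection (the host, credentials, and paths are placeholders, and again a real client should verify the host key):

    package main

    import (
        "io"
        "log"
        "os"

        "github.com/pkg/sftp"
        "golang.org/x/crypto/ssh"
    )

    func main() {
        config := &ssh.ClientConfig{
            User:            "alice", // placeholder
            Auth:            []ssh.AuthMethod{ssh.Password("secret")},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
        }
        conn, err := ssh.Dial("tcp", "example.com:22", config)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // SFTP runs as a subsystem over the established SSH connection.
        client, err := sftp.NewClient(conn)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Upload one local file to the remote server.
        local, err := os.Open("site/index.html")
        if err != nil {
            log.Fatal(err)
        }
        defer local.Close()

        remote, err := client.Create("/var/www/index.html")
        if err != nil {
            log.Fatal(err)
        }
        defer remote.Close()

        if _, err := io.Copy(remote, local); err != nil {
            log.Fatal(err)
        }
    }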
OS choice
You should first make a choice between two worlds (MS or Linux).
Note that the Linux community is significantly less reluctant to share explanations. Also, you will lose less time by choosing one or the other, and avoid asking the same questions twice with different answers depending on which OS you chose.
I have experienced both, starting by searching for solutions in the MS world, which I knew. Big mistake, and a loss of time. Then I switched, too late, to the Linux world. So I would advise going straight to Linux for learning. There are really many distributions for this; I would advise Debian (open, user-friendly, simple, safe, huge community), but you'll get as many proposals as there are admins.
OS understanding
http://www.linuxfromscratch.org/lfs/
http://www.ibm.com/developerworks/library/l-bash.html
http://tldp.org/LDP/abs/html/
Specific Questions about SSH
It depends a lot on the system you choose, but you could easily build a small client and a small server, then configure both and use SSH. The two could even be hosted on the same machine, locally if you wish. Then you will learn how to set up the SSH client side (configured in ssh_config) and the SSH server side (configured in sshd_config, with the "d" standing for daemon).
Here you can find explanations about ssh for both worlds :
http://support.suso.com/supki/SSH_Tutorial_for_Linux
Some keywords for your google searches
List_of_TCP_and_UDP_port_numbers
ssh-keygen: encrypted keys (private/public)
ssh-add: ssh agent
Gentoo keychain
and later, but soon if you administer your server on your own:
The two main ones:
1) iptables
You may start with this and then go further with that one
2) fail2ban
this is a complementary tool for which you'll easily find plenty of docs
...
Have fun :-)
EDIT: you can easily experiment with a Linux machine hosted in a Windows OS, using virtualization (VirtualBox, VMware...). It's a safe start and offers a good payback for the time investment. It would allow you to host as many machines (for example one Linux server and one Linux client) as you wish, within the limits of your hard disk room.
I assume you need to learn shell scripting. I recommend this book.
FileZilla is an FTP client. Try PuTTY, a free SSH client. And of course you need a Linux server.
If you want to learn about SSH in depth, then may I advise you this book: SSH: The Secure Shell: The Definitive Guide.
See here for more info: http://www.snailbook.com/
I've read the book and learned really a lot. It teaches you all about setting up servers, clients, key agents and various (practical) applications.

Resources