I had a requirement to mount an NFS share. After some trial and error, I was able to mount an NFS file system from a NAS on my Linux system. We were also evaluating whether CIFS could be used when NFS does not work. The man pages were too confusing and I could not find any lucid explanation on the web. My question is: if NFS is a problem, can mount -t cifs be used in its place? Is CIFS always available as a replacement for NFS?
It's hard to answer, because it depends on the server.
NFS and CIFS aren't different filesystems - they're different protocols for accessing a server side export.
Generally speaking:
NFS is what Unix uses, because it aligns neatly with the Unix permissions model.
CIFS is (generally) what Windows uses. (It uses a different permissions model too).
A key difference between the two is that CIFS operates in a user context - a user accesses a CIFS share - whereas NFS operates in a host context: the host mounts an NFS filesystem, and local users' permissions are mapped onto it (in a variety of ways, depending on NFS version and authentication modes).
But because - pretty fundamentally - they use different permissioning and authorization mechanisms, you can't reliably just mount an NFS export as CIFS. It relies on the server supporting it, and handling the permission mapping. You would need to ask the person who owns that server for details.
CIFS is not always available (but it often is). When NFS works, it tends to be a better fit for Unixy clients than CIFS.
To see whether the server offers CIFS, use the smbclient(1) program, e.g. smbclient -L servername.
To use CIFS from Unix, you typically need to know a username and password for the CIFS server, and reference them in the mount command or fstab entry. You can put the password in a file that is protected and have the mount read it from that file.
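For example, a rough sketch (the server name, share name, and uid/gid are placeholders; the exact options depend on how the server is set up):

    # List the shares the server exposes over SMB/CIFS
    smbclient -L nas.example.com -U myuser

    # Credentials file, readable only by root, containing:
    #   username=myuser
    #   password=secret
    chmod 600 /root/.smbcredentials

    # Mount the share; uid/gid decide which local user appears to own the files
    mount -t cifs //nas.example.com/share /mnt/share \
        -o credentials=/root/.smbcredentials,uid=1000,gid=1000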
If you don't know the CIFS server admin to get a user/pass, you have many problems.
Say I have a Docker image and I have deployed it on some server, but I don't want other users to be able to access the image. Is there a good way to encrypt a Docker image?
Realistically, no: if a user has permission to run the Docker daemon then they are going to have access to all of the images. This is due to the elevated privileges Docker requires in order to run.
See the extract from the Docker security guide below for more on why this is.
Docker daemon attack surface
Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.
First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory will be the / directory on your host; and the container will be able to alter your host filesystem without any restriction. This is similar to how virtualization systems allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem (or even your root block device) with a virtual machine.
This has a strong security implication: for example, if you instrument Docker from a web server to provision containers through an API, you should be even more careful than usual with parameter checking, to make sure that a malicious user cannot pass crafted parameters causing Docker to create arbitrary containers.
For this reason, the REST API endpoint (used by the Docker CLI to communicate with the Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you happen to run Docker directly on your local machine, outside of a VM). You can then use traditional UNIX permission checks to limit access to the control socket.
You can also expose the REST API over HTTP if you explicitly decide to do so. However, if you do that, being aware of the above mentioned security implication, you should ensure that it will be reachable only from a trusted network or VPN, or protected with e.g. stunnel and client SSL certificates. You can also secure it with HTTPS and certificates.
The daemon is also potentially vulnerable to other inputs, such as image loading from either disk with 'docker load', or from the network with 'docker pull'. This has been a focus of improvement in the community, especially for 'pull' security. While these overlap, it should be noted that 'docker load' is a mechanism for backup and restore and is not currently considered a secure mechanism for loading images. As of Docker 1.3.2, images are extracted in a chrooted subprocess on Linux/Unix platforms, the first step in a wider effort toward privilege separation.
Eventually, it is expected that the Docker daemon will run with restricted privileges, delegating operations to well-audited sub-processes, each with its own (very limited) scope of Linux capabilities, virtual network setup, filesystem management, etc. That is, most likely, pieces of the Docker engine itself will run inside of containers.
Finally, if you run Docker on a server, it is recommended to run exclusively Docker on the server, and move all other services within containers controlled by Docker. Of course, it is fine to keep your favorite admin tools (probably at least an SSH server), as well as existing monitoring/supervision processes (e.g. NRPE, collectd, etc.).
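To make the UNIX-socket point above concrete (a sketch; the socket path and the docker group are common defaults and may differ on your distribution): the control socket is normally owned by root and the docker group, so any account in that group can drive the daemon and is therefore effectively root.

    # Who can reach the Docker control socket?
    ls -l /var/run/docker.sock
    # srw-rw---- 1 root docker 0 ... /var/run/docker.sock

    # Anyone in that group can, for instance, see the host's root filesystem
    docker run --rm -it -v /:/host alpine ls /host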
Say only some strings need to be encrypted. You could encrypt that data using openssl or an alternative solution, with the encryption set up inside the Docker container: when the container is built, the data is encrypted; when the container is run, the data is decrypted (possibly by an entrypoint using a passphrase passed in from a .env file). That way the container image can be stored safely.
I am going to play with it this week as time permits, as I am pretty curious myself.
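A minimal sketch of that idea, assuming openssl is available in the image; the file names, variable names, and entrypoint are made up for illustration:

    # At build time: encrypt the sensitive strings before they go into the image
    openssl enc -aes-256-cbc -pbkdf2 -salt \
        -in secrets.txt -out secrets.txt.enc -pass pass:"$BUILD_PASSPHRASE"

    # entrypoint.sh, run when the container starts:
    #!/bin/sh
    # Decrypt using a passphrase supplied at run time, not stored in the image
    openssl enc -d -aes-256-cbc -pbkdf2 \
        -in /app/secrets.txt.enc -out /app/secrets.txt -pass env:SECRETS_PASSPHRASE
    exec "$@"

    # Run with the passphrase taken from a .env file kept outside the image
    docker run --env-file .env myimage

Note that anyone who can exec into the running container can still read the decrypted data, so this only protects the image at rest.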
I want to use golang's built-in ListenAndServeTLS() function to serve up my webserver (it's a very simple one), and I need to show it where my keys are stored. The keys are stored in a location only the root user can access (Let's Encrypt did this by default), and I can't listen on port 80 or 443 unless I'm the root user.
Does this mean I have to be running the script as root all the time? Is that not insecure?
To blatantly quote the well-written Caddy FAQ:
No. On Linux, you can use setcap to give Caddy permission to bind to low ports. Something like setcap cap_net_bind_service=+ep ./caddy should work. Consult the man pages of your OS to be certain. You could also use iptables to forward to higher ports.
Privilege de-escalation is another option, but it is not yet a reliable solution. It will be implemented as soon as this becomes a robust possibility. Concerned readers are encouraged to get involved to help this become a reality.
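For the iptables route mentioned in the quote, a sketch (assumes your server actually listens on an unprivileged port such as 8443; adjust to taste):

    # Redirect incoming traffic on 443 to the unprivileged port 8443
    iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
    # Connections originating on the same machine go through OUTPUT, not PREROUTING
    iptables -t nat -A OUTPUT -o lo -p tcp --dport 443 -j REDIRECT --to-port 8443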
To add to #ma_il's answer: you can use setcap, but you would still have to change the permissions on the cert.
Or build your app and then run it as root, for example: go build && sudo ./app
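Roughly, for a Go binary (the binary name, group name, and service account are made up; the Let's Encrypt paths are the usual defaults, but check your system):

    go build -o webserver
    # Allow the unprivileged binary to bind ports below 1024
    sudo setcap 'cap_net_bind_service=+ep' ./webserver

    # Let a dedicated group read the Let's Encrypt keys instead of running as root
    sudo groupadd tlscerts
    sudo chgrp -R tlscerts /etc/letsencrypt/live /etc/letsencrypt/archive
    sudo chmod -R g+rX /etc/letsencrypt/live /etc/letsencrypt/archive
    sudo usermod -aG tlscerts webserveruser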
I'm trying to find a way to update client software while reducing traffic and the load on the update server.
Case:
The server is just an HTTP server that hosts the latest non-compressed/packed version of the software.
The client uses rsync to download changes.
Does the server have to run an rsync instance/host/service (I don't know what to call it) in order to produce delta files?
I've seen some forum questions about downloading files with rsync, and it seemed like the server didn't need an rsync instance. If the server isn't running an rsync instance, is that download going to be done without delta files?
Do you know of other solutions which can reduce network and server load?
The server doesn't need any special software other than an ssh server.
I was incorrect about this for your use case. I believe what you are looking for is rsync's daemon mode on the server; this has rsync listening on a port to serve requests.
I misunderstood what you were trying to do at first. While in theory it might still be possible with only ssh or telnet, I think daemon mode is a better solution.
See: SSH vs Rsync Daemon
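A minimal daemon-mode sketch (the module name, paths, and host are placeholders):

    # /etc/rsyncd.conf on the server
    [software]
        path = /srv/software
        comment = latest release
        read only = yes

    # Start the daemon (listens on TCP port 873 by default)
    rsync --daemon

    # On the client: after the first full download, only changed parts of files are sent
    rsync -av --delete rsync://updates.example.com/software/ /opt/myapp/

With a plain HTTP server and no rsync on the other end, the client can only fetch whole files; the delta algorithm needs an rsync process on both sides (either over ssh or via the daemon).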
I'm thinking about configuring the remind calendar program so that I can use the same .reminders file from my Ubuntu box at home and from my Windows box at work. What I'm going to try to do is make the directory on my home machine that contains the file externally visible through WebDAV on Apache. (Security doesn't really concern me, because my home firewall only forwards ssh; to hit port 80 on my home box, you need to use ssh tunneling.)
Now my understanding is that webdav was designed to arbitrate simultaneous access attempts. My question is whether this is compatible with direct file access from the host machine. That is, I understand that if I have two or more remote webdav clients trying to edit the same file, the webdav protocol is supposed to provide locking, so that only one client can have access, and hence the file will not be corrupted.
My question is whether these protections will also protect against local edits going through the filesystem, rather than through webdav. Should I mount the webdav directory, on the host machine, and direct all local edits through the webdav mount? Or is this unnecessary?
(In this case, with only me accessing the file, it's exceedingly unlikely that I'd get simultaneous edits, but I like to understand how systems are supposed to work ;)
If you're not accessing the files via the WebDAV protocol, you're not honoring locks set via the LOCK and UNLOCK methods, and you therefore open up the potential to overwrite changes made by another client. This situation is described in the WebDAV RFC here: https://www.rfc-editor.org/rfc/rfc4918#section-7.2
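If you do want local edits to respect the same locking, one option is to mount your own share locally over WebDAV and edit through that mount rather than touching the file directly. A sketch, assuming the davfs2 package is installed (the URL and mount point are placeholders):

    sudo mount -t davfs http://localhost/webdav/ /mnt/remind
    $EDITOR /mnt/remind/.reminders
    sudo umount /mnt/remind

Whether that indirection is worth it for a single user is debatable, but it keeps all access on the same code path.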
I know that rsync can be run with or without ssh for the file transfer. So, if ssh is not used, does that mean rsync does not do any encryption at all?
Also, the reason I asked the above question is that we use the rsync module as part of our file transfer, and there is nothing in the module that specifies that ssh encryption will be used.
If rsync does not use any encryption, then I can theoretically open a port on both source and destination machines and push the file from source to destination.
If you use the rsync:// protocol scheme (i.e. when you connect to an rsyncd daemon) then no encryption is used (although password authentication is done using an MD4-based challenge-response system and is probably still reasonably secure).
If you use the hostname:/some/path scheme then rsync transparently calls SSH, which encrypts everything, and uses SSH's native authentication mechanisms. As far as I can tell, some OpenSSH versions supported an option Ciphers null in the configuration file, but this has been removed in later versions.
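For illustration, the two schemes look like this on the command line (host names and paths are placeholders):

    # Daemon / rsync:// scheme - no encryption on the wire
    rsync -av rsync://server.example.com/module/dir/ /local/dir/
    # Equivalent double-colon syntax
    rsync -av server.example.com::module/dir/ /local/dir/

    # Remote-shell scheme - everything is tunnelled through ssh
    rsync -av server.example.com:/remote/dir/ /local/dir/
    # Or with the transport given explicitly
    rsync -av -e ssh server.example.com:/remote/dir/ /local/dir/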
Generally you shouldn't worry about the encryption overhead unless you are working on a 1 Gbit network or have old computers.
rsync performs no encryption on its own. If you don't use ssh and don't tunnel the rsync traffic through stunnel or some kind of VPN, then no encryption is performed. Yes, you can save some CPU cycles this way.