I am working on a project that requires using raw sockets, and raw sockets need CAP_NET_RAW to work. We used setcap and it worked fine, but now the executable is on NFS and setcap can't be used. Is there a workaround? Thanks in advance.
I tried chown root and chmod u+s to increase the privilege of my executable, but it didn't work.
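For reference, this is roughly what we tried (myapp is a hypothetical name for our binary):
$ sudo setcap cap_net_raw+ep ./myapp   # works on a local filesystem, fails on the NFS mount
$ sudo chown root ./myapp
$ sudo chmod u+s ./myapp               # setuid attempt; did not help either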
Your app uses raw sockets, and raw sockets require the process to have the CAP_NET_RAW capability, correct?
https://manpages.ubuntu.com/manpages/kinetic/en/man7/packet.7.html:
In order to create a packet socket, a process must have the CAP_NET_RAW capability in the user namespace that governs its network namespace.
You've been relying on extended attributes to associate CAP_NET_RAW capability with your app's executable file, but your NFS server doesn't support this, correct?
Here's a potential workaround:
https://stackoverflow.com/a/44103544/421195
You can use fuse_xattrs (a FUSE filesystem layer) to emulate extended attributes (xattrs) on NFS shares. Basically you have to:
Mount the NFS share, e.g. at /mnt/shared_data.
Mount the fuse xattrs layer:
$ fuse_xattrs /mnt/shared_data /mnt/shared_data_with_xattrs
Now all the files on /mnt/shared_data can be accessed through /mnt/shared_data_with_xattrs with xattr support. The extended attributes are not stored on the server filesystem as real extended attributes; they are stored in sidecar files.
Sadly this is only a work-around.
disclaimer: I'm the author of fuse_xattrs.
fbarriga
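Putting that together for your case, a rough and untested sketch (myapp is a hypothetical binary name; whether the kernel honors file capabilities through a FUSE mount depends on its mount options, e.g. nosuid, so verify this on your setup):
$ sudo mount -t nfs server:/export /mnt/shared_data
$ fuse_xattrs /mnt/shared_data /mnt/shared_data_with_xattrs
$ sudo setcap cap_net_raw+ep /mnt/shared_data_with_xattrs/myapp
$ getcap /mnt/shared_data_with_xattrs/myapp   # check that the capability is now reported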
Related
Can Docker containers share a Unix abstract socket, like the ones used for D-Bus?
If it can be done, how do you do it?
If it cannot, or cannot yet, is there a way to share a D-Bus connection between the host and containers, or between containers?
Here is the answer from another site:
DBus uses abstract sockets, which are network-namespace specific. So the only real way to fix this is to not use a network namespace (i.e. docker run --net=host). Alternatively, you can run a process on the host which proxies access to the socket. I think that's what xdg-app does basically (also for security reasons to act as a filter). There might be some other way, but that's all I can think of offhand.
http://ask.projectatomic.io/en/question/3647/how-to-connect-to-session-dbus-from-a-docker-container/
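In practice that boils down to something like the following (the image name is a placeholder; the bind-mount variant only helps if the bus is also listening on a path-based socket rather than an abstract one):
$ docker run --net=host myimage   # share the host's network namespace, so abstract sockets are visible
$ docker run -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket myimage   # path-based system bus socket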
We have an IBM System z host sitting in our cellar. Now the issue is that I have no clue about mainframes! (It's not USS, btw.)
The problem: how can I transfer a file from the host system to a Windows machine?
Usually on UNIX systems I would just install an SSH daemon and connect to it with a program called WinSCP, then transfer the file in binary so that nothing gets converted (UltraEdit and other editors can handle this).
With the host system it seems a bit more difficult, as IBM's native encoding is EBCDIC and I have no idea whether there is a state-of-the-art SFTP server for the host. Could anybody be so kind as to enlighten me? From my experience with IT there must be a modern SFTP option for that system. I appreciate any help/hints/solutions.
Thank you,
O.S
If the mainframe "sitting in [your] cellar" is running z/OS then it has Unix System Services installed. You can't have z/OS without it.
There is an SFTP package available (for free) for z/OS.
You can test for Unix System Services by firing up a 3270 emulator, going to ISPF option 3.17, putting a forward slash (/) in the Pathname field, and pressing the mainframe Enter key. Another way would be to key OMVS at a TSO READY prompt, which will start up a 3270-based Unix shell.
It is possible that USS is simply not made available to you: if you're running any supported release of z/OS then USS is present, but there could be concerns about supporting something outside a particular group.
Or, depending on what OS you have running on your System z, it's possible you don't have z/OS. You could have z/VM, you could have zLinux, you could have TPF. However, if you're running zLinux, you have linux, which has sftp installed, and which uses ASCII, not EBCDIC.
As cschneid says, however, if you have z/OS, you have USS. TCP/IP, among other things, won't run without it. Also note that z/OS TCP/IP has an FTP server, so you can connect that way if the FTP server is set up. If security is an issue, FTPS is supported, although it's painful to set up. With the native FTP server, you can convert from EBCDIC to ASCII when you're doing the transfer. There's also an NFS server available. And SMB as well, I believe.
And there's an FTP client available as well, so you could FTP from z/OS to your system, if you wanted to.
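If the FTP server is set up, a session from the Windows side might look roughly like this (the host name and data set name are made up):
C:\> ftp mainframe.example.com
ftp> ascii
ftp> get 'HLQ.MY.DATASET' mydata.txt
ftp> quit
In ascii mode the server does the EBCDIC-to-ASCII translation during the transfer; use binary mode instead if you want the bytes left untouched.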
Maybe a better thing to do would be to explain what you're trying to do with the data, and what the data is in general. You can edit files directly on the mainframe, using TSO, ISPF, or OMVS editors. There are a lot of data types that the mainframe supports that you're not going to be able to handle on a non-z system unless you go through an export process. I'm not really clear on whether you want to convert the file to ASCII when you transfer it or not.
While the others are correct that all recent releases of z/OS have USS built-in, there's quite a bit of setup work that needs to be done in order for individual users to have access to USS capabilities like SFTP. Out of the box, you get USS "minimal mode" that just has enough of USS to support the TCP/IP stack and so forth. USS "full function mode" requires setup:
HFS filesystems need to be allocated
Your security package needs to manage UIDs/GIDs for your users
etc etc etc
Still, once these details are taken care of, and with nothing more than the software you're entitled to as part of your z/OS license, you can certainly run SFTP and all the other UNIX-style network services you're used to.
A good place to start is the UNIX Services Planning guide: http://publibz.boulder.ibm.com/epubs/pdf/bpxzb2c0.pdf
I have a weird but interesting use-case. I use CIFS to mount shares from a file server (NetApp, EMC, etc.) to an application server (a Windows/Linux server where my application runs). My application needs to process each of the files from the shares that I mount via CIFS. It also needs access to the metadata of these files, such as name, size, ACLs, etc.
I would like to see if I can achieve the same via NDMP. I have some very basic questions regarding this use-case. It would be great if you could help me out here.
Is this even something which is achievable?
Can I transfer only shares that are interesting to me instead of the entire volume?
NDMP is essentially an application protocol to control backup/restore operations. The protocol is supple enough to do interesting things like data migration or tape cloning as well.
However, it is not a file-access protocol, so "mounting" anything via NDMP isn't possible unless an NDMP server vendor writes an NDMP extension to do so, which would be rather silly given that there are specialized protocols that do just that.
Hope this helps.
NDMP is designed for data management (backup, recovery, etc.) and not as a file-access protocol such as CIFS. If your application is a backup application then yes, you can use NDMP to control and copy subsets of the data from your filer to the application server. Note that the format of the data will be what the filer provides (via NDMP) and not in your control. Hope that helps!
Has anyone used the lockfile utility that ships with procmail in conjunction with NFS mounted directories?
The lockfile man page states that "Lockfile is NFS-resistant and eight-bit clean."
I've used it. My company had a very NFS-intensive infrastructure at one point (less so now) and many Perl sysadmin tools dating back to the mid-90s. We wrapped lockfile in a Perl module so that we could do consistent locking across NFS mounts. For that matter, our home directories were NFS-mounted, and we used procmail to deliver mail into them using the same style of locking, and we never had any problems with it (procmail delivering mail via NFS from server-a, and mail being read via direct file access or UW-IMAP from a bunch of other servers).
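The basic pattern looks something like this (paths are just illustrative):
$ lockfile -r 5 /nfs/shared/mydata.lock   # retry up to 5 times before giving up
$ # ... read or update the shared files while holding the lock ...
$ rm -f /nfs/shared/mydata.lock           # release the lock by removing the file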
What is the easiest way to manage the authorized_keys file for OpenSSH across a large number of hosts? If I need to add or revoke a key for an account on, say, 10 hosts, I must log in and add the public key manually or through a clumsy shell script, which is time-consuming.
Ideally there would be a central database linking keys to accounts on machines, with some sort of grouping support (e.g., add this key to username X on all servers in the web category). There's a fork of SSH with LDAP support, but I'd rather use the mainline SSH packages.
I'd check out the Monkeysphere project. It uses OpenPGP's web-of-trust concepts to manage ssh's authorized_keys and known_hosts files, without requiring changes to the ssh client or server.
I use Puppet for lots of things, including this.
(using the ssh_authorized_key resource type)
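If Puppet is already on a host, a quick way to see what that resource type does is a one-off command like this (the comment, user, and key are placeholders; normally you'd declare the resource in a manifest):
$ puppet resource ssh_authorized_key 'alice@laptop' \
    ensure=present user=deploy type=ssh-ed25519 key='AAAAC3Nza...'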
I've always done this by maintaining a "master" tree of the different servers' keys, and using rsync to update the remote machines. This lets you edit things in one location, push the changes out efficiently, and keeps things "up to date" -- everyone edits the master files, no one edits the files on random hosts.
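For example, something along these lines (host names, user, and paths are purely illustrative):
$ for h in web01 web02 web03; do
>   rsync -a master/web/authorized_keys "$h:/home/appuser/.ssh/authorized_keys"
> done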
You may want to look at projects designed for running commands across groups of machines, such as Func at https://fedorahosted.org/func, or at other server configuration management packages.
Have you considered using clusterssh (or similar) to automate the file transfer? Another option is one of the centralized configuration systems.
/Allan