How do I make tcpdump use SPDK (Storage Performance Development Kit) to write to my storage devices, such as a hard disk?
By cloning the tcpdump and libpcap source code from their Git repositories and modifying libpcap's file reading/writing code to use SPDK.
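For reference, a minimal sketch of fetching the relevant source trees (these are the projects' public GitHub repositories; SPDK's repository is included as well):

$ git clone https://github.com/the-tcpdump-group/libpcap.git
$ git clone https://github.com/the-tcpdump-group/tcpdump.git
$ git clone https://github.com/spdk/spdk.git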
Related
I am working on a project that requires raw sockets, and raw sockets need CAP_NET_RAW to work. We used setcap and it worked fine, but now the executable is on NFS, where setcap can't be used. Is there a workaround? Thanks in advance.
I tried chown root and chmod u+s to increase the privileges of my executable, but that didn't work.
Your app uses raw sockets, and raw sockets require that the process have the CAP_NET_RAW capability, correct?
https://manpages.ubuntu.com/manpages/kinetic/en/man7/packet.7.html
In order to create a packet socket, a process must have the
CAP_NET_RAW capability in the user namespace that governs its network
namespace.
You've been relying on extended attributes to associate CAP_NET_RAW capability with your app's executable file, but your NFS server doesn't support this, correct?
Here's a potential workaround:
https://stackoverflow.com/a/44103544/421195
You can use fuse_xattrs (a
fuse filesystem layer) to emulate extended attributes (xattrs) on NFS
shares. Basically you have to do:
mount the NFS share. e.g.: /mnt/shared_data
mount the fuse xattr layer:
$ fuse_xattrs /mnt/shared_data /mnt/shared_data_with_xattrs
Now all the files on /mnt/shared_data can be accessed on
/mnt/shared_data_with_xattrs with xattrs support. The extended
attributes are not stored on the server filesystem as extended
attributes; they are stored in sidecar files.
Sadly this is only a workaround.
Disclaimer: I'm the author of fuse_xattrs.
fbarriga
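Putting the workaround together, a hedged sketch might look like this (the NFS export, mount points, and executable name are assumptions; note that the kernel may ignore file capabilities on mounts with the nosuid option):

$ mount -t nfs server:/export /mnt/shared_data                 # mount the NFS share
$ fuse_xattrs /mnt/shared_data /mnt/shared_data_with_xattrs    # add the xattr emulation layer
$ setcap cap_net_raw+ep /mnt/shared_data_with_xattrs/myapp     # the capability lands in a sidecar file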
I am looking for JCL scripts/procedures on the mainframe that can facilitate file transfer from a Unix server to the mainframe. I am required to use FTPS for the outbound jobs (pulling the file from the UNIX server to the mainframe host).
Rather than JCL, just do it in a shell script. Here is a good site on using such commands:
https://blog.eduonix.com/shell-scripting/how-to-automate-ftp-transfers-in-linux-shell-scripting/
Once you have that working as a shell script in USS, you should be able to call the shell script from JCL so you can execute it as a scheduled batch job if you need to.
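For instance, assuming a client like curl with FTPS support is available in your USS environment (an assumption; the host, credentials, and paths below are placeholders), a minimal pull over explicit FTPS might look like:

$ curl --ssl-reqd --user myuser:mypass \
      ftp://unixserver.example.com/outbound/data.csv \
      --output /u/myuser/inbound/data.csv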
Kenny's suggestion is fairly reasonable. IBM's documentation on how to write JCL for FTP(S)-related tasks is available in their "z/OS Communications Server: IP User's Guide and Commands" publication, IBM Publication No. SC27-3662. The current revision appears to be SC27-3662-30, but later revisions are possible. You can easily find this publication online, and make sure you don't skip the section beginning with the title "Submitting FTP requests in batch." Make sure you set the security options correctly (of course).
Please note that you're asking about FTPS, i.e. TLS encryption applied to either or both (preferably both) of the FTP channels (control and data). SFTP is another file transfer protocol based on SSH that z/OS also supports.
Another possible approach that you'll fairly often find available on z/OS installations is to use IBM MQ Advanced for z/OS's Managed File Transfer (MFT) feature to retrieve the file(s) using FTPS. As the name suggests, this'll be managed and have at least some error handling capabilities.
Yet another possible approach if you prefer HTTPS protocol is to use the z/OS Client Web Enablement Toolkit's HTTPS protocol enabler to fetch the file. That's a built-in, standard feature in all currently supported z/OS releases, and you can use it from a relatively simple REXX script for example. Details are available here (z/OS 2.3 variant of the documentation):
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.3.0/com.ibm.zos.v2r3.ieac100/ieac1-cwe-http.htm
I have a requirement to perform a checksum (for data integrity) on SFTP transfers. I was hoping this could be done during the SFTP file transfer itself. I realize this could be product dependent (FYI: using CLEO VLTrader), but I was wondering if this is customary?
I am also looking for alternative data integrity checking options that are as good (or better) than using a checksum algorithm. Thanks!
With SFTP running over an encrypted SSH session, there's a negligible chance the file contents could get corrupted while transferring. SSH itself does data integrity verification.
So unless the contents get corrupted when reading the local file or writing the remote file, you can be pretty sure that the file was uploaded correctly if no error is reported. That implies the risk of data corruption is about the same as if you were copying the files between two local drives.
If you would not consider it necessary to verify data integrity after copying files from one local drive to another, then I do not think you need to verify integrity after an SFTP transfer, and vice versa.
If you want to test explicitly anyway:
While there's the check-file extension to the SFTP protocol for calculating a remote file checksum, it's not widely supported. In particular, it's not supported by the most widespread SFTP server implementation, OpenSSH. See What SFTP server implementations support check-file extension.
Not many clients/client libraries support it either. You didn't specify what client/library you are using, so I cannot provide more details.
For details about some implementations, see:
Python Paramiko: How to check if Paramiko successfully uploaded a file to an SFTP server?
.NET WinSCP: Verify checksum of a remote file against a local file over SFTP/FTP protocol
What SFTP server implementations support check-file extension
Other than that, your only option is to download the file back (if uploading) and compare locally.
If you have shell access to the server, you can of course try to run a shell checksum command (e.g. sha256sum) over a separate shell/SSH connection (or the "exec" channel) and parse the results. But that's not an SFTP solution anymore.
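A minimal sketch of that approach, assuming sha256sum is available on both ends (the paths and hostname are placeholders):

$ local_sum=$(sha256sum bigfile.dat | awk '{print $1}')
$ remote_sum=$(ssh user@server "sha256sum /data/bigfile.dat" | awk '{print $1}')
$ [ "$local_sum" = "$remote_sum" ] && echo "match" || echo "MISMATCH"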
Examples:
Calculate hash of file with Renci SSH.NET in VB.NET
Comparing MD5 of downloaded files against files on an SFTP server in Python
We have an IBM Host System Z sitting in our cellar. Now the issue is that I have no clue about mainframes! (It's not USS, btw.)
The problem: how can I transfer a file from the host system to a Windows machine?
Usually on UNIX systems I would just install an SSH daemon and connect to it via a program called WinSCP, then transfer the file in binary mode so that nothing gets converted (UltraEdit and other editors can handle this).
With the host system it seems to be a bit more difficult, as IBM's native character encoding is EBCDIC, and I have no idea if there is a state-of-the-art SFTP server program for the host. Could anybody be so kind as to enlighten me? From my experience with IT, there must be a state-of-the-art SFTP connection to that system. I appreciate any help/hints/solutions.
Thank you,
O.S
If the mainframe "sitting in [your] cellar" is running z/OS then it has Unix System Services installed. You can't have z/OS without it.
There is an SFTP package available (for free) for z/OS.
You can test for Unix System Services by firing up a 3270 emulator, going to ISPF option 3.17, putting a forward slash (/) in the Pathname field, and pressing the mainframe Enter key. Another way would be to type OMVS at a TSO READY prompt, which will start a 3270-based Unix shell.
It is possible that USS is simply not made available to you, even though it is present on any supported release of z/OS; there could be concerns about supporting something outside a particular group.
Or, depending on what OS you have running on your System z, it's possible you don't have z/OS. You could have z/VM, zLinux, or TPF. However, if you're running zLinux, you have Linux, which has sftp installed and which uses ASCII, not EBCDIC.
As cschneid says, however, if you have z/OS, you have USS. TCP/IP, among other things, won't run without it. Also note that z/OS TCP/IP has an FTP server, so you can connect that way if the FTP server is set up. If security is an issue, FTPS is supported, although it's painful to set up. With the native FTP server, you can convert from EBCDIC to ASCII when you're doing the transfer. There's also an NFS server available. And SMB as well, I believe.
And there's an FTP client available as well, so you could FTP from z/OS to your system, if you wanted to.
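For instance, a plain FTP session from the Windows side might look like this (the hostname and data set names are made up); in ASCII mode the z/OS FTP server converts EBCDIC text to ASCII during the transfer, while binary mode moves raw bytes unconverted:

ftp mainframe.example.com
ftp> ascii
ftp> get 'PROD.REPORT.DATA' report.txt
ftp> binary
ftp> get 'PROD.BINARY.FILE' file.bin
ftp> quit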
Maybe a better thing to do would be to explain what you're trying to do with the data, and what the data is, in general. You can edit files directly on the mainframe using TSO, ISPF, or OMVS editors. There are a lot of data types that the mainframe supports that you're not going to be able to handle on a non-z system unless you go through an export process. I'm not really clear on whether you want to convert the file to ASCII when you transfer it or not.
While the others are correct that all recent releases of z/OS have USS built-in, there's quite a bit of setup work that needs to be done in order for individual users to have access to USS capabilities like SFTP. Out of the box, you get USS "minimal mode" that just has enough of USS to support the TCP/IP stack and so forth. USS "full function mode" requires setup:
HFS filesystems need to be allocated
Your security package needs to be configured to manage UIDs/GIDs for your users
etc etc etc
Still, with these details taken care of, and with nothing more than the software you're entitled to as part of your z/OS license, you can certainly run SFTP and all the other UNIX-style network services you're used to.
A good place to start is the UNIX Services Planning guide: http://publibz.boulder.ibm.com/epubs/pdf/bpxzb2c0.pdf
An article about setting up Ghost blogging says to use scp to copy from my local machine to a remote server:
scp -r ghost-0.3 root@your-server-ip:~/
However, Railscast 339: Chef Solo Basics uses scp to copy in the opposite direction (from the remote server to the local machine):
scp -r root@178.xxx.xxx.xxx:/var/chef .
In the same Railscast, when the author wants to copy files to the remote server (same direction as the first example), he uses rsync:
rsync -r . root@178.xxx.xxx.xxx:/var/chef
Why use the rsync command if scp will copy in both directions? How does scp differ from rsync?
The major difference between these tools is how they copy files.
scp basically reads the source file and writes it to the destination. It performs a plain linear copy, locally, or over a network.
rsync also copies files locally or over a network. But it employs a special delta-transfer algorithm and a few optimizations to make the operation a lot faster. Consider the call:
rsync A host:B
rsync will check the file sizes and modification timestamps of both A and B, and skip any further processing if they match.
If the destination file B already exists, the delta transfer algorithm will make sure only differences between A and B are sent over the wire.
rsync will write data to a temporary file T, and then replace the destination file B with T to make the update look "atomic" to processes that might be using B.
Another difference between them concerns invocation. rsync has a plethora of command line options, allowing the user to fine tune its behavior. It supports complex filter rules, runs in batch mode, daemon mode, etc. scp has only a few switches.
In summary, use scp for your day to day tasks. Commands that you type once in a while on your interactive shell. It's simpler to use, and in those cases rsync optimizations won't help much.
For recurring tasks, like cron jobs, use rsync. As mentioned, on multiple invocations it will take advantage of data already transferred, performing very quickly and saving on resources. It is an excellent tool to keep two directories synchronized over a network.
Also, when dealing with large files, use rsync with the -P option. If the transfer is interrupted, you can resume it where it stopped by reissuing the command. See Sid Kshatriya's answer.
Finally, note that the rsync:// protocol is similar to plain HTTP: unencrypted and with no integrity checks. Be sure to always use rsync via SSH (as in the examples from the question above), not via the plain rsync protocol, unless you really know what you're doing. scp always uses SSH as its underlying transport mechanism, which provides both integrity and confidentiality guarantees, so that is another difference between the two utilities.
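To make that distinction concrete, a hedged sketch (the host, module, and paths are placeholders): the host:path form runs over SSH, while the rsync:// form uses the unencrypted daemon protocol:

$ rsync -avP ./data/ user@host:/srv/data/          # over SSH: encrypted transport
$ rsync -avP ./data/ rsync://host/module/data/     # daemon protocol: unencrypted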
rsync can be useful to run on slow and unreliable connections. So if your download aborts in the middle of a large file, rsync will be able to continue from where it left off when invoked again.
Use rsync -vP username@host:/path/to/file .
The -P option preserves partially downloaded files and also shows progress.
As usual, check man rsync.
Differences between scp and rsync on different parameters:
1. Performance over latency
scp: comparatively less optimized, so slower over high-latency links.
rsync: comparatively more optimized, so faster over high-latency links.
https://www.disk91.com/2014/technology/networks/compare-performance-of-different-file-transfer-protocol-over-latency/
2. Interruption handling
scp: the scp command-line tool cannot resume aborted downloads after a lost network connection.
rsync: if an rsync session is interrupted, you can resume it as many times as you want by typing the same command; rsync will automatically restart the transfer where it left off.
http://ask.xmodulo.com/resume-large-scp-file-transfer-linux.html
3. Command Example
scp
$ scp source_file_path destination_file_path
rsync
$ cd /path/to/directory/of/partially_downloaded_file
$ rsync -P --rsh=ssh userid@remotehost.com:bigdata.tgz ./bigdata.tgz
The -P option is the same as --partial --progress, allowing rsync to work with partially downloaded files. The --rsh=ssh option tells rsync to use ssh as a remote shell.
4. Security
scp is more secure. You have to use rsync --rsh=ssh to make it as secure as scp.
See the man pages for more details:
scp : http://www.manpagez.com/man/1/scp/
rsync : http://www.manpagez.com/man/1/rsync/
One major feature of rsync over scp (besides the delta algorithm and encryption, if used with ssh) is that it automatically verifies that the transferred file was transferred correctly. scp does not do that, which occasionally can result in corruption when transferring larger files. So in general, rsync is a copy with a guarantee.
The CentOS man pages mention this at the end of the --checksum option description:
Note that rsync always verifies that each transferred file was
correctly reconstructed on the receiving side by checking a whole-file
checksum that is generated as the file is transferred, but that
automatic after-the-transfer verification has nothing to do with this
option’s before-the-transfer “Does this file need to be updated?”
check.
There's a distinction to me that scp is always encrypted with ssh (secure shell), while rsync isn't necessarily encrypted. More specifically, rsync doesn't perform any encryption by itself; it's still capable of using other mechanisms (ssh for example) to perform encryption.
In addition to security, encryption also has a major impact on transfer speed and CPU overhead. (My experience is that rsync can be significantly faster than scp.)
Check out this post for when rsync has encryption on.
scp is best for one file, or for a tar-and-compress combination for smaller data sets like source code trees with small resources (i.e. images, SQLite files, etc.).
Yet, when you begin dealing with larger volumes, say:
media folders (40 GB)
database backups (28 GB)
mp3 libraries (100 GB)
it becomes impractical to build a zip/tar.gz file to transfer with scp, due to the physical limits of the hosted server.
As an exercise, you can do some gymnastics like piping tar into ssh and redirecting the result into a remote file, saving the need to build a temporary archive (a zip or tar.gz); see the sketch below.
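A sketch of those gymnastics (the host and paths are placeholders): tar streams the compressed archive to stdout, and ssh forwards it to the remote side, so no temporary archive is ever written locally:

$ tar czf - media/ | ssh user@host "cat > /backup/media.tar.gz"
$ tar czf - media/ | ssh user@host "tar xzf - -C /backup"      # or unpack remotely on the fly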
However, rsync simplifies this process and allows you to transfer data without consuming any additional disk space. Also, continuous (cron?) updates transfer only minimal changes rather than full cloned copies, which speeds up large data migrations over time; see the example below.
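For example, a hypothetical crontab entry that syncs a media folder nightly at 02:00, transferring only what changed since the last run (the schedule and paths are assumptions):

0 2 * * * rsync -az /srv/media/ user@backuphost:/backup/media/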
tl;dr
scp == small scale (with room to build compressed files on the same drive)
rsync == large scale (with the necessity to backup large data and no room left)
It's better to think in a practical context. On our team, we use rsync -aP to replace a bad Cassandra host in our cluster. We can't do this with scp (it's slow, and there's no progress preservation).