tcpprep: Command line arguments not allowed - networking

I'm not sure why executing the command below on an Ubuntu terminal throws an error. The tcpprep syntax and options follow the help doc, yet it still throws an error.
root@test-vm:~# /usr/bin/tcpprep --cachefile='cachefile1' --pcap='/pcaps/http.pcap'
tcpprep: Command line arguments not allowed
tcpprep (tcpprep) - Create a tcpreplay cache cache file from a pcap file
root@test-vm:~# /usr/bin/tcpprep -V
tcpprep version: 3.4.4 (build 2450) (debug)

There are two problems with your command (and it doesn't help that tcpprep errors are vague or wrong).
Problem #1: Arguments out of order
tcpprep requires that -i/--pcap come before -o/--cachefile. You can fix this as below, but then you get a different error:
bash$ /usr/bin/tcpprep --pcap='/pcaps/http.pcap' --cachefile='cachefile1'
Fatal Error in tcpprep_api.c:tcpprep_post_args() line 387:
Must specify a processing mode: -a, -c, -r, -p
Note that the error above is not even accurate! -e/--mac can also be used!
Problem #2: Processing mode must be specified
tcpprep is used to preprocess a capture file into client and server halves using a heuristic that you provide. Looking through the tcpprep manpage, there are 5 valid mode options (-acerp). The examples below use this capture file as input.pcapng, with server 192.168.122.201 and next-hop MAC 52:54:00:12:35:02.
-a/--auto
Let tcpprep decide based on one of 5 heuristics: bridge, router, client, server, first. Ex:
tcpprep --auto=first --pcap=input.pcapng --cachefile=input.cache
-c/--cidr
Specify the server side by CIDR range. We see servers at 192.168.122.201, 192.168.122.202, and 192.168.3.40, so summarize with 192.168.0.0/16:
tcpprep --cidr=192.168.0.0/16 --pcap=input.pcapng --cachefile=input.cache
-e/--mac
This is not as useful in this capture as ALL traffic in this capture has dest mac of next hop of 52:54:00:12:35:02, ff:ff:ff:ff:ff:ff (broadcast), or 33:33:00:01:00:02 (multicast). Nonetheless, traffic from the next hop won't be client traffic, so this would look like:
tcpprep --mac=52:54:00:12:35:02 --pcap=input.pcapng --cachefile=input.cache
-r/--regex
This is for IP ranges, and is an alternative to summarizing subnets with --cidr. This would be more useful if you have several IPs like 10.0.20.1, 10.1.20.1, 10.2.20.1, ... where summarization won't work and regex will. This is one regex we could use to summarize the servers:
tcpprep --regex="192\.168\.(122|3).*" --pcap=input.pcapng --cachefile=input.cache
-p/--port
Looking at Wireshark > Statistics > Endpoints, we see ports [135,139,445,1024]/tcp and [137,138]/udp associated with the server IPs. 1024/tcp, used with dcerpc, is the only one that falls outside the range 0-1023, so we have to specify it manually. Per services syntax, we'd represent this as 'dcerpc 1024/tcp'. In order to specify a port, we also need to specify a --services file. We can supply one inline as a temporary file descriptor with process substitution. Altogether:
tcpprep --port --services=<(echo "dcerpc 1024/tcp") --pcap=input.pcapng --cachefile=input.cache
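Whichever mode you use, the resulting cache file is what tcpreplay consumes to decide which interface each packet leaves from: client-side traffic goes out one interface and server-side traffic out the other. A minimal sketch, where the interface names are assumptions:
tcpreplay --cachefile=input.cache --intf1=eth0 --intf2=eth1 input.pcapng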
Further Reading
For more examples and information, check out the online docs.

Ubuntu (Oracle VM) - Mounted Samba shares hang indefinitely

I have a VM instance on Oracle Cloud (Ubuntu 22.04) set up with ZeroTier to act as a web server for some services that should work with my local Synology NAS.
For some of those services I also need to mount three SMB shares from my NAS with the ZeroTier tunnel, but I can't make it work.
I have used mount and mount.cifs plenty of times, with automounting too; this time it acts very strange:
running the mount command seems to succeed from the console, but /var/log/syslog reads
CIFS: VFS: \\XXX.XXX.XXX.XXX has not responded in 180 seconds.
Reconnecting...
if trying to access one of the shares (ls or lsof or cd or any other command), it succeeds for only one of the shares (always the same one), but only for the first time any command is given:
$ ls /temp
folder1 folder2 folder3
any other following command just "hangs" as if the system is working on something, but it stays like that indefinitely most of the time:
$ ls /temp
█
Just a few times it spits out this error:
lsof: WARNING: can't stat() cifs file system /temp
Output information may be incomplete.
ls 1475 ubuntu 3r DIR 0,44 0 123207681 /temp
findmnt reads:
└─/temp //XXX.XXX.XXX.XXX/Downloads cifs rw,relatime,vers=2.0,cache=strict, username=[redacted],uid=1005,noforceuid,gid=0,noforcegid,addr=XXX.XXX.XXX.XXX,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,bsize=1048576,echo_interval=60,actimeo=1
for the remaining two "mounted" shares, neither seems to respond to any command, not even the very first one; they just hang like the share that at least lets me browse once;
umount and umount -l take at least 2-3 minutes to successfully unmount the shares.
Same behavior when using smbclient and also with NFS shares from the same NAS.
What I have already tried:
updated the kernel and all packages;
removed, purged and reinstalled cifs-utils, smbclient and so on...;
tried mounting the same shares in another client / node within the ZeroTier network and it works just fine; also browsing from Windows and Android file manager apps with and without ZeroTier works flawlessly;
tried all SMB versions including SMBv3 and SMBv1 (CIFS);
tried different browsing or mounting methods / commands including mount, mount.cifs, autofs, smbclient;
tried to debug what happens behind the console, but didn't find anything that seems related in the logs, htop or anything else. During the "hanging" sessions there is no spike in CPU, RAM or network usage on either the Oracle VM or the Synology NAS;
checked, reset and reconfigured all permissions on my NAS for shares, folders and files recursively, and reconfigured user and group permissions.
What I haven't tried yet (I'll try as soon as possible):
reproduce this on another Oracle VM configured the same as the faulty one and another with a different base image (maybe Oracle Linux?);
It seems to me that the mount.cifs process doesn't really succeed in mounting the share correctly, as it doesn't show as such anywhere. It also seems to be an issue not related to folder/file permissions, but rather something related to networking.
A note on something that may or may not be related: ZeroTier on my Synology NAS does not seem to work with IPv4 only - it stays OFFLINE. The node goes ONLINE only when IPv6 is enabled, but I must say this is the only node in my ZT network that shows an IPv6 public IP in the ZT web GUI - the other nodes show IPv4 public addresses.
If anyone has any clue on this, I'll be happy to support and reproduce any advice. Thank you!
I'm using Tailscale, but I presume it will work the same.
You need to add port 445 to /etc/iptables/rules.v4, just under the SSH rule, like below:
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 445 -j ACCEPT
Then you need to edit the interfaces in /etc/samba/smb.conf to:
interfaces = lo tailscale0 100.0.0.0/24
Obviously, my interface is tailscale0, but yours will be different. Use ip link show to find yours. You may also need to change the IP range to suit ZeroTier's; 100.0.0.0/24 is what Tailscale uses.
Then reboot!
I couldn't get it working without doing this.
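For what it's worth, a full reboot shouldn't be strictly necessary. Assuming the iptables-persistent layout above, something like this should apply both changes in place (on some setups the Samba service is named smb or samba rather than smbd):
sudo iptables-restore < /etc/iptables/rules.v4
sudo systemctl restart smbd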

Network traffic through a particular port using iftop

I have a process using https. I found its PID using ps and used the command lsof -Pan -p PID -i to get the port number it is running on.
I need iftop to see the data transfer. The filter I am using now is
iftop -F "port http 57787".
I don't think this is giving me the right output.
Can someone help me with the right filter to use with iftop so that I see the traffic going through only this port?
I can see 2 problems here:
1/ Is that a typo? The correct option for filtering is -f (small "f"). The -F (capital "F") option is for net/mask.
2/ Though not explicitly stated by the iftop documentation, the filtering syntax seems to be the pcap one, judging from the few examples given (and using ldd I can see that, yes, the iftop binary is linked with libpcap). So a filter with http is simply not valid. For the pcap filtering syntax, have a look at the pcap-filter(7) man page. In your example, a filter such as "tcp port 57787" would be OK. pcap does not do layer 5 and above protocol dissection such as http (pcap filters are handled by BPF in the kernel, so above layer 4 you're on your own, because that's none of the kernel's business).
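Putting that together, something like this should show only that port's traffic (the interface name is just an example):
sudo iftop -i eth0 -f "tcp port 57787"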
All in all, these look like iftop bugs. It should refuse your "-F" option, and even with "-f" instead it should exit with an error code because pcap will refuse the filter expression. No big deal, iftop is a modest program. See the edit below.
EDIT:
I just checked the iftop version 1.0pre4 source code, and there is no such obvious bug from a look at set_filter_code() and its caller packet_init() in iftop.c. It correctly exits with an error, but...
Error 2, using the "-f" option but with your incorrect filter syntax:
jbm@sumo:~$ sudo iftop -f "port http 57787"
interface: eth0
IP address is: 192.168.1.67
MAC address is: 8c:89:a5:57:10:3c
set_filter_code: syntax error
That's OK.
Error 1, with "-F" instead of "-f", there is a problem:
jbm@sumo:~$ sudo iftop -F "port http 57787"
(everything seems more or less OK, but then, upon quitting the program:)
Could not parse net/mask: port http 57787
interface: eth0
IP address is: 192.168.1.67
MAC address is: 8c:89:a5:57:10:3c
Oops! "Could not parse net/mask: port http 57787"! That's a bug: it should exit right away.

nmap warning: giving up on port because retransmission cap hit (2)

I am trying to scan a large set of domain names using nmap. I used the following command:
nmap -PN -p443 -sS -T5 -oX out.xml -iL in.csv
I get the following warning:
Warning: xx.xx.xx.xx giving up on port because retransmission cap hit (2).
Why does this happen? How do I resolve the issue?
The option -T5 instructs nmap to use "insane" timing settings. Here's the relevant part of the current source code that illustrates what settings this implies:
} else if (*optarg == '5' || (strcasecmp(optarg, "Insane") == 0)) {
    o.timing_level = 5;
    o.setMinRttTimeout(50);
    o.setMaxRttTimeout(300);
    o.setInitialRttTimeout(250);
    o.host_timeout = 900000;
    o.setMaxTCPScanDelay(5);
    o.setMaxSCTPScanDelay(5);
    o.setMaxRetransmissions(2);
}
As you can see, the maximum number of retransmissions is 2. The warning you saw gets printed when there is a non-default cap on the number of retransmissions (set with -T5, -T4, or manually with --max-retries), and that cap is hit.
To avoid this problem, try scaling back your timing settings. -T4 is still very fast, and should work for nearby networks. -T3 is the default. If you are certain that your latency and bandwidth are not a problem, but that you may be dropping packets due to faulty hardware, you can manually set --max-retries to a higher value, and keep the rest of the -T5 settings.
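For example, a middle-ground invocation based on your original command might look like this (the retry cap of 4 is just an illustration):
nmap -PN -p443 -sS -T4 --max-retries 4 -oX out.xml -iL in.csv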
I had the same problem, and changing the -T parameter and --max-retries didn't change anything.
The problem for me was that my network adapter in VirtualBox was configured as NAT and not bridged.
It may happen because the virtual card gets saturated by all the packets.
Changing this configuration solved the problem in my case.
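If you prefer the command line to the VirtualBox GUI, the adapter type can be switched with VBoxManage while the VM is powered off; a sketch where the VM name and host interface are placeholders:
VBoxManage modifyvm "my-vm" --nic1 bridged --bridgeadapter1 eth0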

How to nmap without rDNS and write the DNS in the output

I need to use nmap to check if port 443 is open for a list of websites, so I saved them into a file. I need the output to tell me whether the port is open or not. I used the command:
nmap -PN -p443 -oG logs/output.gnmap -iL myfolder/input.txt
The problem: the output file is giving me different domain names. Nmap did an rDNS lookup and I found that the IP points to a different domain name. Please explain. Does this mean both domains are hosted on the same server? However, I checked their certificates and found each domain has a different certificate. I care about port 443 on the domains in my list so I can check their certificates later, so I don't want to check any domain's certificate other than the one I entered in the file.
To solve the issue, I used the -n option. But then the output file contains IPs only. How can I produce an output file that contains the results for my domains without rDNS?
The "Grepable" output format (-oG) is deprecated because it cannot show the full output of an Nmap scan. There is no way to get the output you want with the -oG option unless you modify Nmap and recompile it.
Luckily, the XML output format (-oX) contains the information you want and more:
<hostnames>
<hostname name="bonsaiviking.com" type="user"/>
<hostname name="li34-105.members.linode.com" type="PTR"/>
</hostnames>
In this example, from scanning my domain, the hostname provided on the command line has the attribute type="user", and the hostname that was a result of the reverse lookup has type="PTR".
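If you add -n to skip the reverse lookups, the type="PTR" entries disappear, but the type="user" names you supplied should remain in the XML, and you can pull them out afterwards. A sketch, assuming xmllint from libxml2 is available:
nmap -PN -n -p443 -oX logs/output.xml -iL myfolder/input.txt
xmllint --xpath '//hostname[@type="user"]/@name' logs/output.xml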

Unable to rsync between my server and my Mac

I have a server where I store data from Mac A and Mac B.
I use rsync to keep the files updated between my Macs.
I run the following code unsuccessfully
#!/bin/zsh
# to copy files from my server to my folder
rsync -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
# to copy files from my folder to my server
rsync -Pav ~/Dropbox/Courses/math $Masi:~/private/
I get the following error message
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
rsync error: unexplained error (code 255) at io.c(600) [receiver=3.0.5]
ssh: connect to host port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.5]
I have ssh keys in place so the connection should work, since I can use scp without problems.
How can I use rsync between my server and my Macs?
I used to do a lot of this. Just ran a test, a few suggestions.
Spell out your entire user@host pattern
Run the ssh connection sans the rsync first; you may need to approve the host fingerprint first
You do not seem to pass a flag to protect extended attributes; this can yield broken files on OS X. If you do not need resource forks, you are OK, but most of the time you do need them.
My test case:
$ rsync -Pav ~/Desktop/ me@remote.example.com:~/rsyc-test
In that case, all the files within ~/Desktop were copied to the remote host, into my home dir. Since the directory 'rsyc-test' did not exist, it was made for me. I had a .app on my Desktop; it made it over and, surprisingly, it works. Even some .webloc files made it and appear to work, though I do not trust it.
I would strongly suggest adding in the -E flag
-E, --extended-attributes
Apple specific option to copy extended attributes, resource
forks, and ACLs. Requires at least Mac OS X 10.4 or suitably
patched rsync.
I ran a new test: I moved an Interarchy bookmark to my desktop; I know for a fact these break if they are copied sans resource forks. Running without the -E versus with the -E, there is a difference of 152 bytes in transferred data. The first file on the remote machine did not work; the second transferred file did work.
I cannot help but notice that one of your paths is ~/Dropbox, so this may all not matter, since Dropbox, the app, does not currently support resource forks at all, though I hear there are plans to in the future.
You are also not sending the --delete flag. If your end goal is a mirror of your data, you are not getting that; if your end goal is a backup that continually grows, keeping everything that was ever on the source, the lack of --delete is good.
Other notes:
You can exclude those silly .DS_Store files
--exclude '.DS_Store'
You can also set rsync up in a way to be a true mirror, so you would not need to run your other command; see the man page for details.
My final working command to shove the Desktop of my laptop to a remote machine:
$ rsync -PEav --delete --exclude '.DS_Store' ~/Desktop/ me@remote.example.com:~/rsycn-test
Check "$Masi". Is that the hostname you are trying to reach?
Try the following command to debug it:
rsync -e 'ssh -v' -Pav $Masi:~/private/ ~/Dropbox/Courses/math/
The Connection refused error usually happens when there is a connection issue with the remote host (e.g. a firewall).
In your case the problem is that the $Masi variable is empty. If it's not a variable, use Masi.
As per this error:
ssh: connect to host port 22: Connection refused
Notice the double space above after the word host.
The connect to host message doesn't say which host, so you're trying to connect to an empty host. It sounds like the host name is empty or mistyped.
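A quick way to confirm (the hostname below is a placeholder):
echo "Masi=[$Masi]"    # prints Masi=[] if the variable is empty
Masi=server.example.com
rsync -Pav "$Masi":~/private/ ~/Dropbox/Courses/math/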
