What does the following command do? tcpdump -i eth1 -s 0 -w capnet1.cap

What does the following command do?
tcpdump -i eth1 -s 0 -w capnet1.cap
I said it collects data.
More specifically: -i eth1 captures traffic on interface eth1; -s 0 sets the snap length to "unlimited", so whole packets are captured instead of being truncated to the default snap length; and -w capnet1.cap writes the raw packets to the file capnet1.cap (in pcap format) instead of decoding and printing them.
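Since -w saves the raw packets in pcap format, a quick way to inspect the result is to read the file back with tcpdump itself (a minimal sketch; the icmp filter is just an example):
tcpdump -r capnet1.cap
tcpdump -r capnet1.cap icmp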


Suricata doesn't drop packets

I have a server with Suricata (169.69.1.11) installed and a specific rule:
drop ICMP any any -> 169.69.1.11 any (msg: "ping dropped";sid:10001;)
In other VM I execute:
ping 169.69.1.11 -c 5
At this point nothing works as intended: the pings get through and nothing is registered in fast.log, so on the Suricata machine I execute
sudo suricata -i enp0s8
and I ping again with the same command (5 pings).
On my other machine everything seems okay, the 5 pings appear to get through, but when I look at the Suricata log /var/log/suricata/fast.log it shows this line:
03/25/2022-11:11:05.231735 [wDrop] [**] [1:10001:0] ping dropped [**] [Classification: (null)] [Priority: 3] {ICMP} 169.69.1.10:8 -> 169.69.1.11:0
Why do the pings get through instead of being blocked?
Why do I ping 5 times but only 1 is logged?
My first problem was that Suricata was not running as an IPS: started with -i it only sniffs the interface, so drop rules are merely logged as [wDrop] ("would drop") and nothing is actually blocked. To run it inline, first delete your iptables rules and send traffic to the NFQUEUE target:
sudo iptables -F
sudo iptables -I INPUT -j NFQUEUE
sudo iptables -I OUTPUT -j NFQUEUE
sudo iptables -I FORWARD -j NFQUEUE
and then execute Suricata in NFQUEUE mode, with -D to leave it running in the background:
sudo suricata -q 0 -D
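As a quick check (a sketch, assuming the NFQUEUE rules above are in place and the rule file is loaded): repeat the ping from the other VM and watch fast.log; the pings should now time out, and the log entries should read [Drop] rather than [wDrop].
ping 169.69.1.11 -c 5
sudo tail -f /var/log/suricata/fast.log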

Gatttool Non-Interactive mode, multiple char-write-req

I would like to retrieve the data of a Stryd footpod, listening on 2 separate UUIDs. In interactive mode, I would connect using
sudo gatttool -t random -b XX:XX:XX:XX:XX:XX -I
connect
char-write-req 0x001a 0100
char-write-req 0x000f 0100
However, as I use this as part of a perl script, I would like to leverage non-interactive mode.
Starting gatttool with a single handle works fine:
gatttool -t random -i hci0 -b XX:XX:XX:XX:XX:XX --char-write-req --handle=0x001a --value=0100 --listen
However, how do I pass both handles at the same time? The following does not work:
gatttool -t random -i hci0 -b XX:XX:XX:XX:XX:XX --char-write-req --handle=0x001a --value=0100 --char-write-req --handle=0x000f --value=0100 --listen
Thanks!
Found the solution on http://www.humbug.in/2014/using-gatttool-manualnon-interactive-mode-read-ble-devices/
gatttool -t random -i hci0 -b XX:XX:XX:XX:XX:XX --char-write-req --handle=0x001a --value=0100; sleep 1; gatttool -t random -i hci0 -b XX:XX:XX:XX:XX:XX --char-write-req --handle=0x000f --value=0100 --listen
Does the trick!
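If this is called from a script more than once, a small wrapper keeps it readable (a sketch; the MAC address and handles are the placeholders/values from above):
#!/bin/sh
MAC=XX:XX:XX:XX:XX:XX
# Enable notifications (write 0100) on the given handle; any extra
# arguments (e.g. --listen) are passed through to gatttool.
write0100() {
    handle=$1; shift
    gatttool -t random -i hci0 -b "$MAC" --char-write-req --handle="$handle" --value=0100 "$@"
}
write0100 0x001a
sleep 1                   # give the device a moment between connections
write0100 0x000f --listen # keep listening for notifications on the last call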

Temporary disabling exposed ports on Docker

I would like to temporarily disable some Docker container ports at runtime, so without changing the image or stopping/starting the container.
I have some services running: a web client, an authentication service, a MongoDB instance and also a load balancer, all of them in the same VM.
Since there is no API to modify exposed ports at runtime in Docker, I have to work with the iptables command.
So I've built some code which disables the ports related to a particular container name passed as a parameter.
I have the following rule for the authentication server:
-A DOCKER -d 172.18.0.16/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 8081 -j ACCEPT
Which my code modify as the following:
-A DOCKER -d 172.18.0.16/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 8081 -j DROP
At this point I expect that I can't authenticate anymore, but I still can.
At the same time, if I run the same code against the load balancer, everything works as expected: I can't access the URL anymore.
These are the original rules for nginx:
-A DOCKER -d 172.18.0.11/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 81 -j ACCEPT
-A DOCKER -d 172.18.0.11/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -d 172.18.0.11/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 443 -j ACCEPT
Here are the modified ones:
-A DOCKER -d 172.18.0.11/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 81 -j DROP
-A DOCKER -d 172.18.0.11/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 80 -j DROP
-A DOCKER -d 172.18.0.11/32 ! -i br-3ec61cf14e6e -o br-3ec61cf14e6e -p tcp -m tcp --dport 443 -j DROP
Below is the output of the docker ps command:
[root@sandbox-test-28 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d007479faaf4 service-auth-nodejs "/bin/sh -c \"/usr/bin" 2 days ago Up 2 days 0.0.0.0:8081->8081/tcp authentication-microservice
c073989b49ce nginx "/bin/bash -c /etc/ng" 2 days ago Up 2 days 0.0.0.0:443->443/tcp, 0.0.0.0:9000->80/tcp, 0.0.0.0:10000->81/tcp nginx-microservice
432ea895d90a web "/bin/sh -c \"/usr/bin" 2 days ago Up 2 days 0.0.0.0:8000->8000/tcp webclient-microservice
0c8141da8c0b mongo "/entrypoint.sh mongo" 2 days ago Up 2 days 0.0.0.0:27017->27017/tcp mongo-microservice
[root@sandbox-test-28 ~]#
Am I missing something?
The addresses in your rules are different:
172.18.0.16/32 (authentication service)
vs
172.18.0.11/32 (nginx)
So presumably the authentication container does not actually live at 172.18.0.16, and its packets are still allowed by a different rule; check which IP the container really has.
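A quick way to verify this is to ask Docker for the container's actual address and to watch the packet counters on the DOCKER chain (a sketch; the container name comes from the docker ps output above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' authentication-microservice
sudo iptables -L DOCKER -n -v --line-numbers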

File transfer between 2 vmware workstations on same host

I need to transfer a file from Process1 on VM1 to VM2, both running under the same VMware Workstation hypervisor on the same host, so as to calculate the rate of data transfer between these 2 virtual machines.
Should I write an FTP client/server program, and if so, how do I calculate the transfer time?
And also, how do I manage the ports in the virtual machines (let's say both are running Ubuntu) when writing the client/server program?
What you need to do is not trivial and you'll have to invest some effort. First of all you need to create a bridge between the two VMs and give each VM a tap interface on that bridge.
There is a script below you can look at as an example - it creates some screen sessions (you will need a basic .screenrc) and launches a VM in each screen tab. Really, the only bits that should interest you are the bridge setup and how to launch qemu.
For the networking setup you want static routes - below is an example of what I had, with eth0 being a user-mode network interface and eth1 the interface connected to the peer VM. You could get rid of eth0. The peer VM's routes were a mirror of this:
sudo vi /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
up route add default gw 10.0.2.2 eth0
auto eth1
iface eth1 inet static
address 20.0.0.1
netmask 255.255.255.0
network 20.0.0.0
broadcast 20.0.0.255
gateway 20.0.0.2
up route add -host 21.0.0.1 gw 20.0.0.2 dev eth1
up route add -host 21.0.0.2 gw 20.0.0.2 dev eth1
up route del default gw 20.0.0.2 eth1
# You want it like this:
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 20.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
20.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
# on the peer
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
default 21.0.0.2 0.0.0.0 UG 0 0 0 eth1
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
21.0.0.0 * 255.255.255.0 U 0 0 0 eth1
link-local * 255.255.0.0 U 1000 0 0 eth0
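(These tables are in the format printed by the net-tools route command; to check them on each VM, route -n gives the same listing with numeric addresses, and ip route show is the iproute2 equivalent.)
route -n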
Once you have the VMs up and have configured the static routes on each of them, you can use iperf with one end as a sink (server) and the other as a source (client), e.g.:
iperf -s
iperf -c 20.0.0.1 -t 10 -i 1
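If you want to time an actual file transfer rather than a synthetic stream, a minimal netcat sketch also works (assuming nc is installed on both VMs; port 5001 and the file names are arbitrary):
# on the receiving VM (20.0.0.1)
nc -l -p 5001 > received.file           # BSD netcat wants: nc -l 5001
# on the sending VM; divide the file size by the elapsed time to get the rate
time nc -q 0 20.0.0.1 5001 < big.file   # -q 0 (or -N) makes nc exit at EOF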
bridge script setup:
#
# Settings
#
INCLUDE_QEMU_AS_SCREEN=
A_TELNET_1=6661
A_NAME="A-VM"
A_MEMORY=1G
B_TELNET_1=6665
B_NAME="B-VM"
B_MEMORY=1G
A_DISK_IMAGE=.A.disk.img
B_DISK_IMAGE=.B.disk.img
A_PID=.A.pid
B_PID=.B.pid
A_CMD_1=.A.cmd.1
B_CMD_1=.B.cmd.1
#
# Run QEMU in background or foreground
#
if [ "$INCLUDE_QEMU_AS_SCREEN" != "" ]
then
SCREEN_QEMU_A='screen -t "A-qemu"'
SCREEN_QEMU_B='screen -t "B-qemu"'
else
SCREEN_QEMU_A='bg'
SCREEN_QEMU_B='bg'
fi
#
# Store logs locally and use the date to avoid losing old logs
#
LOG_DATE=`date "+%a_%b_%d_at_%H_%M"`
HOME=$(eval echo ~${SUDO_USER})
LOG_DIR=logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=/tmp/$LOGNAME/logs/$LOG_DATE
mkdir -p $LOG_DIR
if [ $? -ne 0 ]; then
LOG_DIR=.
fi
fi
LOG_DIR_TEST=$LOG_DIR
mkdir -p $LOG_DIR_TEST
#
# create the tap
#
echo
echo ================ create taps ================
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap1
sudo tunctl -b -u $LOGNAME -t $LOGNAME-tap2
#
# bring up the tap
#
echo
echo =============== bring up taps ===============
sudo ifconfig $LOGNAME-tap1 up
sudo ifconfig $LOGNAME-tap2 up
#
# show the tap
#
echo
echo =================== tap 1 ===================
ifconfig $LOGNAME-tap1
echo
echo =================== tap 2 ===================
ifconfig $LOGNAME-tap2
#
# create the bridge
#
sudo brctl addbr $LOGNAME-br1
#
# bring up the bridge
#
sudo ifconfig $LOGNAME-br1 1.1.1.1 up
#
# show my bridge
#
echo
echo =================== bridge 1 ===================
ifconfig $LOGNAME-br1
brctl show $LOGNAME-br1
brctl showmacs $LOGNAME-br1
#
# attach tap interface to bridge
#
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl addif $LOGNAME-br1 $LOGNAME-tap2
SCRIPT_START="echo Starting..."
SCRIPT_EXIT="echo Exiting...; sleep 3"
cat >$A_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-A -f -c 'telnet localhost $A_TELNET_1'
$SCRIPT_EXIT
%%%
cat >$B_CMD_1 <<%%%
$SCRIPT_START
script -f $LOG_DIR_TEST/VM-B -f -c 'telnet localhost $B_TELNET_1'
$SCRIPT_EXIT
%%%
chmod +x $A_CMD_1
chmod +x $B_CMD_1
run_qemu_in_screen_or_background()
{
SCREEN=$1
shift
if [ "$SCREEN" = "bg" ]
then
$* &
else
$SCREEN $*
fi
}
echo
echo
echo
echo "##########################################################"
echo "# Starting QEMU #"
echo "##########################################################"
echo
echo
echo
run_qemu_in_screen_or_background \
$SCREEN_QEMU_A \
qemu-system-x86_64 -nographic \
-m $A_MEMORY \
-enable-kvm \
-drive file=$A_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$A_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=10:16:3e:00:01:12 \
-net tap,ifname=$LOGNAME-tap1,vlan=21,script=no \
-boot c \
-pidfile $A_PID
run_qemu_in_screen_or_background \
$SCREEN_QEMU_B \
qemu-system-x86_64 -nographic \
-m $B_MEMORY \
-enable-kvm \
-drive file=$B_DISK_IMAGE,if=virtio,media=disk \
-serial telnet:localhost:$B_TELNET_1,nowait,server \
-net nic,model=e1000,vlan=21,macaddr=30:16:3e:00:03:14 \
-net tap,ifname=$LOGNAME-tap2,vlan=21,script=no \
-boot c \
-pidfile $B_PID
sleep 1
screen -t "$A_NAME" sh -c "sh $A_CMD_1"
screen -t "$B_NAME" sh -c "sh $B_CMD_1"
sleep 5
echo
echo
echo
echo "##########################################################"
echo "# Hit enter to quit #"
echo "##########################################################"
echo
echo
echo
read xx
cat $A_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $A_PID 2>/dev/null
cat $B_PID 2>/dev/null | xargs kill -9 2>/dev/null
rm -f $B_PID 2>/dev/null
rm -f $A_CMD_1 2>/dev/null
rm -f $B_CMD_1 2>/dev/null
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap1
sudo brctl delif $LOGNAME-br1 $LOGNAME-tap2
sudo ifconfig $LOGNAME-br1 down
sudo brctl delbr $LOGNAME-br1
sudo ifconfig $LOGNAME-tap1 down
sudo ifconfig $LOGNAME-tap2 down
sudo tunctl -d $LOGNAME-tap1
sudo tunctl -d $LOGNAME-tap2
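For reference, the script assumes a few tools that are often not installed by default; on Debian/Ubuntu the packages would be roughly (tunctl comes from uml-utilities, brctl from bridge-utils):
sudo apt-get install uml-utilities bridge-utils screen qemu-system-x86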
Supposing your VM1 and VM2 have some active ISO-OSI L2/L3 connectivity, so there is some transport available, a far better approach to set up process-to-process communication is to use an industry-proven messaging framework rather than spend time building just another FTP client/server.
For sending / receiving anything (incl. BLOBs like whole files) try the ZeroMQ or nanomsg libraries, as these are broker-less frameworks, have bindings ready for many programming languages & offer genuine performance / low latency.
Any distributed process-to-process systems project will benefit from early adoption of this approach.
Check http://zguide.zeromq.org/c:fileio3 to get some additional insights about add-on benefits like load balancing, failure recovery et al.
As per your Q1: FTP servers, per se, report the file transfer times. In ZeroMQ you can programmatically measure the amount of time spent on a file transfer as the difference between two timestamps.
As per your Q2: for ZeroMQ, use any port allowed (not restricted) in Ubuntu.
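The timestamp idea is independent of the framework; a minimal shell sketch (transfer_command is a hypothetical placeholder for whatever actually moves the file, e.g. the netcat pair shown earlier; GNU date and bc are assumed):
start=$(date +%s.%N)
transfer_command                  # hypothetical placeholder for the real transfer
end=$(date +%s.%N)
echo "elapsed: $(echo "$end - $start" | bc) s"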

Snort - Error while running

Running Snort (in packet dump mode) with the command sudo snort -C snort.conf -A console -i eth0, the following problem occurred:
--== Initializing Snort ==--
Initializing Output Plugins!
Snort BPF option: snort.conf
pcap DAQ configured to passive.
The DAQ version does not support reload.
Acquiring network traffic from "eth0".
ERROR: Can't set DAQ BPF filter to 'snort.conf' (pcap_daq_set_filter: pcap_compile: syntax error)!
Fatal Error, Quitting..
Can someone please suggest a solution?
You're using the wrong option to load the configuration; it should be the lower case '-c'. With the upper case '-C', snort.conf is not read as a configuration file at all, which is why the log shows Snort trying (and failing) to compile it as a BPF filter.
sudo snort -c snort.conf -A console -i eth0
Also, you can test your configuration with '-T' before running it:
sudo snort -T -c snort.conf
Just put "-i" before eth0 in the command; it will solve the problem.
Try this:
sudo service snort start
ps ax | grep snort
The output I got was
/usr/sbin/snort -m 027 -D -d -l /var/log/snort -u snort -g snort -c /etc/snort/snort.conf -S HOME_NET=[192.168.0.0/16] -i enp4s0
The man page says
-D Run Snort in daemon mode. Alerts are sent to
/var/log/snort/alert unless otherwise specified.
So when I drop the -D and add -A console:
sudo /usr/sbin/snort -m 027 -d -l /var/log/snort -u snort -g snort -c /etc/snort/snort.conf -S HOME_NET=[192.168.0.0/16] -i enp4s0 -A console
This works for Snort version 2.9.7.0 GRE (Build 149).
