I have a setup at home as follows:
DHCP clients -----> (wifi)(bridge) Openwrt -----> (eth)Main Router
The device I'm using is a TP-Link MR3020 running Barrier Breaker, and I'm trying to set up a transparent proxy for bridged traffic - I want to redirect packets passing through the bridge to a proxy server (Privoxy). I tried to use ebtables, but when I enter the following command:
ebtables -t broute -A BROUTING -p IPv4 --ip-protocol 6 --ip-destination-port 80 -j redirect --redirect-target ACCEPT
I get the following error:
Unable to update the kernel. Two possible causes:
1. Multiple ebtables programs were executing simultaneously. The ebtables
userspace tool doesn't by default support multiple ebtables programs running
concurrently. The ebtables option --concurrent or a tool like flock can be
used to support concurrent scripts that update the ebtables kernel tables.
2. The kernel doesn't support a certain ebtables extension, consider
recompiling your kernel or insmod the extension.
I tried to load the ebtables IPv4 kernel module with insmod, but no luck.
Any ideas on how to accomplish this?
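On a stock OpenWrt image the second cause is the usual one: the ebtables IPv4 match extensions ship as a separate kernel-module package. A sketch of what I'd try (package names assumed from the OpenWrt feeds; verify with opkg list on the device), plus the flock workaround the error message suggests for the first cause:

```shell
# Install the ebtables userspace tool and the IPv4 match extensions
# (package names assumed from the OpenWrt feeds; check with `opkg list`):
opkg update
opkg install ebtables kmod-ebtables-ipv4

# If several scripts update ebtables concurrently (cause 1), serialize
# them with flock; /var/lock/ebtables.lock is an arbitrary lock file:
flock /var/lock/ebtables.lock ebtables -t broute -A BROUTING \
  -p IPv4 --ip-protocol 6 --ip-destination-port 80 \
  -j redirect --redirect-target ACCEPT
```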
I've created a new network namespace for a USB Ethernet interface and a wireless interface.
I run a dnsmasq DHCP server on one interface with:
sudo dnsmasq --port 5353 --interface wlp2s0 -F 123.12.1.101,123.12.1.200,24h
Which works like a charm.
I then want to set up another dnsmasq DHCP server on the other interface with:
sudo dnsmasq --port 5454 --interface enx65ad574sa -F 123.12.1.101,123.12.1.200,24h
But this just reports
dnsmasq: failed to bind DHCP server socket: Address already in use.
I am able to set up multiple dnsmasq DHCP servers if I run dnsmasq outside the namespace, but inside a namespace I can only have it running once.
If I create a configuration file:
interface=wlp2s0
dhcp-range=wlp2s0,123.12.1.101,123.12.1.200,255.255.255.0,24h
interface=enx65ad574saaw
dhcp-range=enx65ad574saaw,192.168.0.101,192.168.0.200,255.255.255.0,24h
listen-address=::1,127.0.0.1,192.168.0.1,123.12.1.1
It also works just fine on both interfaces inside the namespace... So what's the difference? I need to run this dynamically from the command line, so I can't use the configuration file.
I ended up creating two separate dnsmasq configuration files, each of which sets up a DHCP server on the same interface, and then running the configurations concurrently, each in its own network namespace. That means that when the wireless interface is added to the network namespace, dnsmasq is already serving as the DHCP server on that interface.
Apparently, setting up and starting two dnsmasq instances, each with its own configuration but with the same interface in both, does not make them interfere with or break each other, and it works quite seamlessly (at least when they are started in separate network namespaces).
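For what it's worth, there may also be a single-namespace way (an assumption on my part, untested here): dnsmasq's bind-interfaces option stops an instance from binding the wildcard DHCP socket, which is how e.g. libvirt runs one dnsmasq per bridge. As a per-instance config fragment (path hypothetical):

```
# /etc/dnsmasq-wlp2s0.conf (hypothetical path; one file per instance)
port=5353
bind-interfaces
interface=wlp2s0
dhcp-range=123.12.1.101,123.12.1.200,24h
```

The same option also exists as a command-line flag (--bind-interfaces), so it should combine with dynamic invocations like the ones above.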
I have a kitchen-ansible test that runs Serverspec as a verifier. The test runs on two containers, one running Amazon Linux 1 and the other Amazon Linux 2. The Ansible code installs a Keycloak server, which listens on ports 8080 and 8443.
In the Amazon Linux 1 container, everything's fine and Serverspec reports the ports as listening.
In the Amazon Linux 2 container, the installation also finishes without any errors, but Serverspec reports the ports as not listening. As I found out, Serverspec is wrong.
After logging into the container and running netstat -tulpen | grep LISTEN, the ports show as listening. Serverspec checks with the ss command: /bin/sh -c ss\ -tunl\ \|\ grep\ -E\ --\ :8443\\\
So I logged in to the Amazon Linux 1 container to check the output of the ss command there, and it showed no listening sockets on either port.
So does anyone have a clue why Serverspec succeeds on Amazon Linux 1 and fails on Amazon Linux 2, even though the ss command reports no listening ports in both containers?
The root cause was that the ports aren't bound quickly enough: Serverspec starts checking while the service hasn't finished starting. Logging in to the container takes more time, so by then the service has started successfully and the ports are bound.
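One way to close that race in the test setup is to poll until the port is actually bound before Serverspec starts checking. A sketch (the helper name and the timeout are my own):

```shell
#!/bin/sh
# wait_for_port PORT ATTEMPTS: poll `ss` once per second until the TCP
# port shows up among the listening sockets, or give up after ATTEMPTS tries.
wait_for_port() {
  port=$1
  attempts=$2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if ss -tnl | grep -q ":$port "; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

Calling something like wait_for_port 8443 60 in a step that runs after the Ansible converge and before the verifier would give Keycloak time to bind.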
I'm trying to set up DPDK on a Mellanox ConnectX-3 card and run some of the applications that come with it, e.g., l2fwd.
My understanding is that I need to use the dpdk_nic_bind.py script that comes with the DPDK distribution to bind ports to the Mellanox PMD. However, dpdk_nic_bind.py doesn't list my Mellanox card:
./dpdk_nic_bind.py -s
Network devices using DPDK-compatible driver
============================================
<none>
Network devices using kernel driver
===================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth0 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth1 drv=ixgbe unused=igb_uio,vfio-pci,uio_pci_generic
Other network devices
=====================
<none>
In general, do I need to do the binding? If yes, how? If not, how is the PMD driver enabled?
If you want to bind it with dpdk_nic_bind.py, you should run dpdk_nic_bind --bind <userspace driver> <BDF>, where the BDF is what you see from ethtool -i ethName. The userspace driver might be ib_ipoib in this case. You can find the required driver by running dpdk_nic_bind.py -s and looking for the ConnectX driver under the "Network devices using kernel driver" section.
For Mellanox you should follow the procedure described here:
http://dpdk.org/doc/guides/nics/mlx4.html
Basically, the answers are:
No, you do not need to bind your card to UIO, but you do need to load the Mellanox kernel modules:
modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib
You should use the whitelist EAL argument to run a DPDK app on Mellanox NIC, i.e.:
testpmd -w 0000:83:00.0 -w 0000:84:00.0 ...
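One more prerequisite worth checking (hedged; details depend on your DPDK version): the mlx4 PMD is disabled by default in the make-based build system, so it has to be enabled before compiling. The flag lives in config/common_base in newer trees (config/common_linuxapp in older ones):

```shell
# Enable the mlx4 poll-mode driver in the DPDK build config, rebuild,
# then load and verify the Mellanox kernel modules:
sed -i 's/\(CONFIG_RTE_LIBRTE_MLX4_PMD\)=n/\1=y/' config/common_base
make install T=x86_64-native-linuxapp-gcc
modprobe -a ib_uverbs mlx4_en mlx4_core mlx4_ib
lsmod | grep mlx4
```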
I've got two servers on a LAN with fresh installs of CentOS 6.4 minimal and R 3.0.1. Both computers have the doParallel, snow, and snowfall packages installed.
The servers can ssh to each other fine.
When I attempt to make clusters in either direction, I get a prompt for a password, but after entering the password, it just hangs there indefinitely.
makePSOCKcluster("192.168.1.1",user="username")
How can I troubleshoot this?
edit:
I also tried calling makePSOCKcluster on the above-mentioned computer with a host that IS capable of being used as a slave (from other computers), but it still hangs. So, is it possible there is a firewall issue? I also tried using makePSOCKcluster with port 22:
> makePSOCKcluster("192.168.1.1",user="username",port=22)
Error in socketConnection("localhost", port = port, server = TRUE, blocking = TRUE, :
cannot open the connection
In addition: Warning message:
In socketConnection("localhost", port = port, server = TRUE, blocking = TRUE, :
port 22 cannot be opened
here's my iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
You could start by setting the "outfile" option to an empty string when creating the cluster object:
makePSOCKcluster("192.168.1.1",user="username",outfile="")
This allows you to see error messages from the workers in your terminal, which will hopefully provide a clue to the problem. If that doesn't help, I recommend using manual mode:
makePSOCKcluster("192.168.1.1",user="username",outfile="",manual=TRUE)
This bypasses ssh, and displays commands for you to execute in order to manually start each of the workers in separate terminals. This can uncover problems such as R packages that are not installed. It also allows you to debug the workers using whatever debugging tools you choose, although that takes a bit of work.
If makePSOCKcluster doesn't respond after you execute the specified command, it means that the worker wasn't able to connect to the master process. If the worker doesn't display any error message, it may indicate a networking problem, possibly due to a firewall blocking the connection. Since makePSOCKcluster uses a random port by default in R 3.X, you should specify an explicit value for port and configure your firewall to allow connections to that port.
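With the iptables configuration from the question, that means inserting an ACCEPT rule for the chosen port above the catch-all REJECT in /etc/sysconfig/iptables (11234 here is just an example value):

```
-A INPUT -m state --state NEW -m tcp -p tcp --dport 11234 -j ACCEPT
```

Then restart the firewall (service iptables restart) and pass port=11234 to makePSOCKcluster.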
To test for networking or firewall problems, you could try connecting to the master process using "netcat". Execute makePSOCKcluster in manual mode, specifying the hostname of the desired worker host and the port on the local machine that should allow incoming connections:
> library(parallel)
> makePSOCKcluster("node03", port=11234, manual=TRUE)
Manually start worker on node03 with
'/usr/lib/R/bin/Rscript' -e 'parallel:::.slaveRSOCK()' MASTER=node01
PORT=11234 OUT=/dev/null TIMEOUT=2592000 METHODS=TRUE XDR=TRUE
Now start a terminal session on "node03" and execute "nc" using the indicated values of "MASTER" and "PORT" as arguments:
node03$ nc node01 11234
The master process should immediately return with the message:
socket cluster with 1 nodes on host ‘node03’
while netcat should display no message, since it is quietly reading from the socket connection.
However, if netcat displays the message:
nc: getaddrinfo: Name or service not known
then you have a hostname resolution problem. If you can find a hostname that does work with netcat, you may be able to get makePSOCKcluster to work by specifying that name via the "master" option: makePSOCKcluster("node03", master="node01", port=11234).
If netcat returns immediately, that may indicate that it wasn't able to connect to the specified port. If it returns after a minute or two, that may indicate that it wasn't able to communicate with the specified host at all. In either case, check netcat's exit status to verify that it was an error:
node03$ echo $?
1
Hopefully that will give you enough information about the problem that you can get help from a network administrator.
I have two servers (serv1, serv2) that communicate, and I'm trying to sniff packets matching certain criteria that are transferred from serv1 to serv2. Tshark is installed on my desktop (desk1). I have written the following script:
while true; do
tshark -a duration:10 -i eth0 -R '(sip.CSeq.method == "OPTIONS") && (sip.Status-Code) && ip.src eq serv1' -Tfields -e sip.response-time > response.time.`date +%F-%T`
done
This script seems to run fine on serv1 (since serv1 is sending packets to serv2). However, when I try to run it on desk1, it can't capture any packets. They are all on the same LAN. What am I missing?
Assuming that serv1 or serv2 is on the same physical Ethernet switch as desk1, you can sniff transit traffic between serv1 and serv2 by using a feature called SPAN (Switched Port Analyzer).
Assume your server is on FastEthernet 4/2 and your desktop is on FastEthernet 4/3 of the Cisco switch. You should telnet or SSH into the switch and enter these commands:
4507R#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
4507R(config)#monitor session 1 source interface fastethernet 4/2
!--- This configures interface Fast Ethernet 4/2 as source port.
4507R(config)#monitor session 1 destination interface fastethernet 4/3
!--- This configures interface Fast Ethernet 4/3 as destination port.
4507R#show monitor session 1
Session 1
---------
Type : Local Session
Source Ports :
Both : Fa4/2
Destination Ports : Fa4/3
4507R#
This feature is not limited to Cisco devices... Juniper / HP / Extreme and other Enterprise ethernet switch vendors also support it.
How about using the (misnamed) tcpdump, which will capture all traffic from the wire? What I suggest is just capturing packets on the interface - do not filter at the capture level. Afterwards you can filter the pcap file. Something like this:
tcpdump -w myfile.pcap -n -nn -i eth0
If your LAN is a switched network (most are) or your desktop NIC doesn't support promiscuous mode, then you won't be able to see any of the packets. Verify both of those things.
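Once SPAN or promiscuous capture is sorted out, the capture-then-filter split would look roughly like this (a sketch; note the display-filter flag is -R in older tshark releases and -Y in newer ones, and single quotes keep the filter's inner double quotes away from the shell):

```shell
# 1. Capture everything on desk1's interface, no capture-level filtering:
tcpdump -w myfile.pcap -n -nn -i eth0

# 2. Filter the pcap offline for SIP OPTIONS responses coming from serv1:
tshark -r myfile.pcap \
  -Y 'sip.CSeq.method == "OPTIONS" && sip.Status-Code && ip.src == serv1' \
  -T fields -e sip.response-time
```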