Postfix relayhost communication through Squid proxy? - postfix-mta

I'm running Postfix inside a private network where external communication can happen only through a Squid proxy. My relay host is AWS SES. Is there a way to make the relay host communication happen through the Squid proxy?
I've been looking into rinetd and sconnect. I think we can do this with sconnect according to this post, but I can't find a package to install on Ubuntu for that.

This is an old post; hope this can still help. It is based upon this post, translated from German.
You can find sconnect today here.
It is a small program that you can use to run SSH via a proxy, specifically via a SOCKS or HTTP proxy:
To make it runnable you can do:
$ gcc -o sconnect connect.c
$ cp sconnect /usr/local/bin (or another directory where it is globally reachable)
$ nano ~/.ssh/config
And then edit the ssh config like this:
# Usage with Socks proxy
ProxyCommand /usr/local/bin/sconnect -4 -S your-socks-server:1080 %h %p
# Usage with HTTP Proxy
ProxyCommand /usr/local/bin/sconnect -4 -H proxy.local.net:8080 %h %p
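Coming back to the original Postfix question: Postfix has no native proxy support, but one possible approach (a sketch, not tested against SES) is to run a local tunnel through Squid's CONNECT method and point relayhost at it. The socat invocation below assumes Squid at squid.internal:3128 and the us-east-1 SES endpoint, and it requires Squid to allow CONNECT to port 587:
# Listen locally on 2525 and forward through Squid to SES via HTTP CONNECT
socat TCP4-LISTEN:2525,fork,reuseaddr,bind=127.0.0.1 \
    PROXY:squid.internal:email-smtp.us-east-1.amazonaws.com:587,proxyport=3128 &
# Relay all outbound mail through the local end of the tunnel
postconf -e 'relayhost = [127.0.0.1]:2525'
postfix reload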

Related

Privoxy as intercepting proxy

I want to set up Privoxy to filter all HTTP requests that my WordPress page is sending and receiving, but I'm having a hard time trying to do it.
I set up WordPress with the Bitnami package and Privoxy with apt-get install, and found out that in order to intercept all requests I have to turn on "accept-intercepted-requests" and actually redirect them with iptables.
I tried this command to do so:
sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8118
But when I try to access the website from outside of localhost I get a connection refused message.
My question is: is it possible to intercept all http request of webserver with privoxy and iptables or maybe I have to use some other software to achieve this?
I figured it out, so I am posting the solution for anyone else who struggles with this:
sudo iptables -t nat -A PREROUTING -i {INTERFACE_NAME} -p tcp --dport {WEBSITE_PORT} -j REDIRECT --to-port {PROXY_PORT}
where:
INTERFACE_NAME - name of your VM interface, which you can get with the ifconfig command (for me it was ens33)
WEBSITE_PORT - port on which your apache2 service is listening (default is 80 or 8080)
PROXY_PORT - port of Privoxy (default is 8118)
It works with every website that is hosted using Apache. (I also tested it with phpBB and it works with no problems.)
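For a concrete instance of the template above, using the example values from the list (ens33, Apache on 80, Privoxy on 8118) together with the interception option mentioned in the question:
# In /etc/privoxy/config (path may differ): accept redirected requests
accept-intercepted-requests 1
# Redirect inbound HTTP on ens33 to Privoxy
sudo iptables -t nat -A PREROUTING -i ens33 -p tcp --dport 80 -j REDIRECT --to-port 8118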

OpenVPN: Route SquidProxy

I am trying to set up a public Squid proxy that routes its traffic via a VPN server elsewhere in the world. It's running inside a Docker container on a VPS host.
Using the default settings with push gateway, I can access the Squid proxy on the VPS itself and it does route its traffic via the VPN.
However, no external IPs can access the Squid proxy.
I do have Docker forwarding the port 3128:3128.
It is something to do with the OpenVPN routes that are created (as the Squid proxy is accessible until OpenVPN starts).
I found it is this route that seems to "block" my external traffic:
128.0.0.0/1 via 10.91.10.5 dev tun0
(10.91.10.5 is the gateway of the VPN)
If I remove it I can access squid again but then outgoing requests don't use the VPN.
I can make my external IP work by explicitly adding it like so
ip route add 203.X.X.X via 172.18.0.1 dev eth0
(172.18.0.1 is the docker gateway)
But I need it to work with any external IPs.
I have tried ip route add 0.0.0.0/0 via 172.18.0.1 dev eth0.
But this doesn't work as 128.0.0.0/1 is more specific so matches first.
In conclusion
1) Need any IP to access the SquidProxy (port 3128)
2) Need all outgoing SquidProxy requests (80,443) to go via the VPN
Any help would be greatly appreciated!
UPDATE:
So I have this working:
1) Start OpenVPN with the below command
openvpn --route-nopull --script-security 2 --up /etc/openvpn/up.sh
This stops OpenVPN from setting up the VPN routes, so all traffic in and out uses the default route, not the VPN.
2) In up.sh, I run the below commands:
#!/bin/sh
# $1 is the tun interface name that OpenVPN passes to the --up script.
# Send everything in routing table 100 out through the VPN interface.
/sbin/ip route add 0.0.0.0/0 dev $1 table 100
# Make packets carrying fwmark 1 use table 100 (and therefore the VPN).
/sbin/ip rule add from all fwmark 1 table 100
# Mark outgoing HTTP/HTTPS traffic so it takes the VPN route.
/sbin/iptables -A OUTPUT -t mangle -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
# NAT traffic leaving via the tun interface.
/sbin/iptables -t nat -A POSTROUTING -o $1 -j MASQUERADE
I have then set up Squid to only allow ports 80 & 443. Docker has port 3128 open for access to the container.
I also needed to use --sysctl net.ipv4.conf.all.rp_filter=0 in the docker run command.
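For reference, a hypothetical docker run invocation tying these pieces together; the image name and config paths are assumptions, not from the original post:
docker run -d --name squid-vpn \
  --cap-add=NET_ADMIN --device /dev/net/tun \
  --sysctl net.ipv4.conf.all.rp_filter=0 \
  -p 3128:3128 \
  my-squid-openvpn-image \
  openvpn --config /etc/openvpn/client.conf --route-nopull --script-security 2 --up /etc/openvpn/up.sh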

Docker: Unable to run Docker commands

I have installed Docker Engine v1.12.3 on Ubuntu 14.04 LTS, and since making the following changes to enable the Remote API, I'm not able to pull or run any Docker images:
Added DOCKER_OPTS="-H tcp://127.0.0.1:2375" in /etc/default/docker.
/etc/init.d/docker start.
The following is the error received:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Note: I have added the logged-in user to the docker group.
If you configure the Docker daemon to listen on a TCP socket (as you did), you should use the -H command line option with the docker command to point it to that socket instead of the default Unix socket.
@mustaccio is correct. The docker command defaults to using a Unix socket, normally at /var/run/docker.sock. You can either make your options setup like:
DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock" and restart, or always use docker -H tcp://127.0.0.1:2375 whenever you interact with the host from the command line.
The only good scenario I've seen for removing the Unix socket is pure user security. If your Docker host is TLS enabled, you can ensure only authorized people access the host via signed certificates, not just anyone with access to the system.
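Putting the two suggestions together, a sketch of both options:
# Option 1: let the daemon listen on both sockets (in /etc/default/docker)
DOCKER_OPTS="-H tcp://127.0.0.1:2375 -H unix:///var/run/docker.sock"
# Option 2: point the client at the TCP socket, per command or via the environment
docker -H tcp://127.0.0.1:2375 ps
export DOCKER_HOST=tcp://127.0.0.1:2375
docker ps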

docker nginx container not receiving request from outside, connection refused

I have a running nginx container: # docker run --name mynginx1 -P -d nginx;
And got its PORT info by docker ps: 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp
Then I could get a response from within the container (id: c30991a04b2f):
docker exec -i -t c3099 bash
curl http://localhost => which returns the default index.html page content; it works
However, when I run curl http://localhost:32769 outside of the container, I get this:
curl: (7) failed to connect to localhost port 32769: Connection refused
I am running on a Mac with Docker version 1.9.0; nginx latest.
Does anyone know what causes this? Any help? Thank you.
If you are on OS X, you are probably using a VirtualBox VM for your Docker environment.
Make sure you have forwarded your port 32769 to your actual host (the Mac), in order for that port to be visible from localhost.
This is valid for the old boot2docker, or the new docker-machine.
VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port32769,tcp,,32769,,32769"
VBoxManage controlvm "boot2docker-vm" natpf1 "udp-port32769,udp,,32769,,32769"
(controlvm if the VM is running, modifyvm if the VM is stopped)
(replace "boot2docker-vm" by the name of your VM: see docker-machine ls)
I would recommend not using -P, but a static port mapping: -p xxx:80 -p yyy:443.
That way, you can do that port forwarding once, using fixed values.
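For example (the host ports 8080 and 8443 are arbitrary choices):
docker run --name mynginx2 -p 8080:80 -p 8443:443 -d nginx
curl http://$(docker-machine ip vmname):8080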
Of course, you can access the VM directly through docker-machine ip vmname
curl http://$(docker-machine ip vmname):32769
Solved. I misunderstood how Docker port mapping works.
Since I'm using a Mac, the host for the nginx container is a VM; 0.0.0.0:32769->80/tcp maps port 80 of the container to port 32769 of the VM.
solution:
docker-machine ip vm-name => 192.168.99.xx
curl http://192.168.99.xx:32769
Not exactly an answer to your question, but I spent some time trying to figure out a similar thing in the context of "why is my docker container not connecting to elastic search localhost:9200", and this was the first S.O. question that popped up, so I hope it helps some other googling person.
If you are linking containers together (e.g. docker run --rm --name web2 --link db:db training/webapp env)
... then Docker adds environment variables:
DB_NAME=/web2/db
DB_PORT=tcp://172.17.0.5:5432
DB_PORT_5432_TCP=tcp://172.17.0.5:5432
DB_PORT_5432_TCP_PROTO=tcp
DB_PORT_5432_TCP_PORT=5432
DB_PORT_5432_TCP_ADDR=172.17.0.5
... and also updates your /etc/hosts
# /etc/hosts
#...
172.17.0.9 db
so you can technically connect to it: ping db
https://docs.docker.com/v1.8/userguide/dockerlinks/
so for Elasticsearch it is:
# /etc/hosts
# ...
172.17.0.28 elasticsearch f9db83d0dfb5 ecs-awseb-qa-3Pobblecom-env-f7yq6jhmpm-10-elasticsearch-fcbfe5e2b685d0984a00
so wget elasticsearch:9200 will work
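A minimal sketch of the linking pattern described above; the image names are assumptions:
docker run -d --name elasticsearch elasticsearch
docker run --rm --link elasticsearch:elasticsearch busybox \
    wget -qO- http://elasticsearch:9200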

Could not resolve hostname, ping works

I have installed Raspbian on my RasPi, and now I can't do ssh or git clone; it seems only local host names are being resolved. And yet ping works:
pi ~ $ ssh test.com
ssh: Could not resolve hostname test.com: Name or service not known
pi ~ $ git clone gitosis@test.com:test.git
Cloning into 'test'...
ssh: Could not resolve hostname test.com: Name or service not known
fatal: The remote end hung up unexpectedly
pi ~ $ ping test.com
PING test.com (174.36.85.72) 56(84) bytes of data.
I sort of worked around it for github by using http://github.com instead of git://github.com, but this is not normal and I would like to pinpoint the problem.
Googling found similar issues, but the solutions offered were either typo corrections or adding domains to the hosts file.
This sounds like a DNS issue. Try switching to another DNS server and see if it works.
OpenDNS
208.67.222.222
208.67.220.220
GoogleDNS
8.8.8.8
8.8.4.4
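For example, on the Pi you could try Google DNS directly (note that dhcpcd or resolvconf may overwrite this file on reboot):
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf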
Try resetting the contents of the DNS client resolver cache.
(For Windows) Fire up a command prompt and type:
ipconfig /flushdns
If you are a Linux or Mac user, they have their own ways of flushing the DNS cache.
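For example (assuming systemd-resolved on Linux; the macOS command varies by version):
# Linux with systemd-resolved
sudo resolvectl flush-caches
# macOS
sudo dscacheutil -flushcache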
Had the same error; I just needed to specify a folder:
localmachine $ git pull ssh://someusername@127.0.0.1:38765
ssh: Could not resolve hostname : No address associated with hostname
fatal: The remote end hung up unexpectedly
localmachine $ git pull ssh://someusername@127.0.0.1:38765/
someusername@127.0.0.1's password:
That error message is just misleading.
If you have network-manager installed,
check /etc/nsswitch.conf.
If you've got a line
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
remove the [NOTFOUND=return]
and restart networking: /etc/init.d/networking restart
The [NOTFOUND=return] prevents further lookups if the first name service doesn't respond correctly.
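After the edit, the line should read:
hosts: files mdns4_minimal dns mdns4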
This may be an issue with the proxy. Unset it and try again:
git config --global --unset http.proxy
git config --global --unset https.proxy
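To verify that nothing is still set (no output means unset):
git config --global --get http.proxy
git config --global --get https.proxy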
