mitmproxy in docker between traefik and service - nginx

I'd like to analyze the traffic by placing mitmproxy between the load balancer (traefik or nginx) and the service, but I can't really understand how to do that. I don't want to set up mitmproxy as an ordinary proxy (that works like a charm), because I'd like to see how the load balancer modifies the requests.
I read the documentation on the available modes of operation but couldn't recognize which one suits my situation. I tend to exclude transparent mode (which I have used on firewalls), and I don't really understand what --mode reverse:http://... does: I thought it was a way to forward everything to the given address, so I tried it with:
mitmweb:
  image: mitmproxy/mitmproxy
  tty: true
  ports:
    - "8080:8080" # proxy
    - "8081:8081" # web-interface
  command: mitmweb --web-host 0.0.0.0 --no-web-open-browser -p 8080 --mode reverse:http://django:8000/
...
but mitmproxy complains that
403: To protect against DNS rebinding, mitmweb can only be accessed by
IP at the moment. (https://github.com/mitmproxy/mitmproxy/issues/3234)
Is this possible at all, and if so, how?

I misinterpreted the message: it was not refusing to proxy. It just hinted that I needed to access the web interface via an IP address rather than a DNS name.
The result is awesome. You get the traffic and can inspect the request/response cycle in a very useful way.
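The remaining piece is the load-balancer side, which simply has to point at the mitmproxy container instead of the service. A minimal sketch of the full chain (traefik -> mitmweb -> django) in one compose file; the hostname app.example.com, the router name and the traefik version are illustrative placeholders, not part of the original setup:

traefik:
  image: traefik:v2.10
  command:
    - --providers.docker=true
    - --entrypoints.web.address=:80
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro

mitmweb:
  image: mitmproxy/mitmproxy
  command: mitmweb --web-host 0.0.0.0 --no-web-open-browser -p 8080 --mode reverse:http://django:8000/
  ports:
    - "8081:8081" # web-interface
  labels:
    # traefik sends requests for this host to mitmweb, which reverse-proxies them to django
    - "traefik.http.routers.app.rule=Host(`app.example.com`)"
    - "traefik.http.routers.app.entrypoints=web"
    - "traefik.http.services.app.loadbalancer.server.port=8080"

Requests hit traefik on port 80, are routed to mitmweb on 8080, and mitmweb's reverse mode forwards them to django:8000, so the web interface on 8081 shows exactly what the load balancer sent.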

Related

Setting up Tabula on a remote server

New here. I'm currently trying to set up an instance of Tabula on a Windows Server. I've configured the service to run on a non-standard port (8090) and have set up firewall rules, but I can't seem to make it work. I've also been assured by the hosting company that they don't filter ports, so any blocked port would have to be blocked by either the router or the machine itself.
I've also set up port forwarding (with settings similar to ones that already work on the router). That didn't work either.
I've also tried using port 80, then temporarily turning off the Apache server on that machine so it would free up that port. To no avail, alas.
I've also tried ProxyPass, with the same failed result:
<Location /tab>
ProxyPass http://release.123-246.com:8090/
ProxyPassReverse http://release.123-246.com:8090/
</Location>
It works on localhost (127.0.0.1:8090) and on the local network address (192.168.0.4:8090, but only from the machine's own browser), yet I can't seem to make it work on the public address (78.46.210.12:8090).
Pretty sure I'm missing something, but I don't know what it is. Help please? I'm open to different approaches on this.
Did you check that Tabula is listening on the appropriate interface?
The version of Tabula that is packaged as a Windows application might not bind to the interface fronted by your reverse proxy.
Try this command to make Tabula listen on all available interfaces:
jruby -G -r jbundler -S rackup -o 0.0.0.0 config.ru
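As a quick check that this is indeed the problem, assuming the port 8090 from the question, you can inspect from the server itself which address the service is bound to:
netstat -ano | findstr :8090
If the listening address shows 127.0.0.1:8090 rather than 0.0.0.0:8090, the service is only reachable from the machine itself, which would match the symptoms described above.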

Supporting SSL/TLS for a Kubernetes NodePort Service

The Problem
I need to expose a Kubernetes NodePort service externally over https.
The Setup
I've deployed Kubernetes on bare metal and have deployed Polyaxon on the cluster via Helm.
I need to access Polyaxon's dashboard via the browser, from a virtual machine that's external to the cluster.
The dashboard is exposed as a NodePort service, and I'm able to connect to it over http. I am not able to connect over https, which is a hard requirement in my case.
Following an initial "buildout" period, neither the cluster nor the virtual machine will have access to the broader internet; they will connect to one another and that's it.
Polyaxon supposedly supports SSL/TLS through its own configs, but there's very little documentation on this. I've made my best attempts to solve the issue that way and also bumped an issue on their github, but haven't had any luck so far.
So I'm now wondering if there might be a more general Kubernetes hack that could help me here.
The Solutions
I'm looking for the simplest solution, rather than the most elegant or scalable. There are also some things that might make my situation simpler than the average user who would want https, namely:
It would be OK to support https on just one node, rather than every node
I don't need (or really want) a domain name; connecting at https://<ip_address>:<port> is not just OK but preferred
A self-signed certificate is also OK
So I'm hoping there's some way to manipulate the NodePort service directly such that https will work on the virtual machine. If that's not possible, other solutions I've considered are using an Ingress Controller or some sort of proxy, but those solutions are both a little half-baked in my mind. I'm a novice with both Kubernetes and networking ideas in general, so if you're going to propose something more complex please speak very slowly :)
Thanks a ton for your help!
An Ingress controller is the standard way to expose an HTTP backend over a TLS connection from the cluster to the client.
Your existing NodePort service already has a ClusterIP, which can be used as the backend for the Ingress. A ClusterIP-type service is enough, so you can change the service type later to prevent plain-HTTP access via nodeIP:nodePort.
The Ingress controller lets you terminate the TLS connection or pass the TLS traffic through to the backend.
You can use a self-signed certificate or use cert-manager with the Let's Encrypt service.
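A minimal sketch of that approach, assuming a self-signed certificate and placeholder names (the service name polyaxon-dashboard and port 80 are illustrative, and the manifest uses the current networking.k8s.io/v1 Ingress API):
# self-signed certificate stored as a TLS secret
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=polyaxon-dashboard"
kubectl create secret tls dashboard-tls --key tls.key --cert tls.crt
# Ingress terminating TLS and forwarding to the existing service's ClusterIP
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
spec:
  tls:
    - secretName: dashboard-tls
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: polyaxon-dashboard
                port:
                  number: 80
On bare metal the ingress controller itself is typically exposed as a NodePort service, so the dashboard ends up at https://<node_ip>:<https_node_port>, which fits the one-node, no-domain-name constraints above.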
Note that starting from version 0.22.0 the Nginx ingress rewrite syntax has changed, so some examples in the articles below may be outdated.
Check the links:
TLS termination
TLS/HTTPS
How to get Kubernetes Ingress to terminate SSL and proxy to service?
Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure

Best software for dynamic dns proxying to docker containers

Currently I am using haproxy with manually updated backends that point to separate docker nginx containers for different apps.
What is the best software for proxying requests to different local nginx containers based on hostname?
I would have a simple map file, or even /etc/hosts, which my script would update when docker containers change, for example:
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
So ideally the chain would be haproxy -> some software proxy or DNS -> docker nginx,
and the software would pick up the map file on the fly, without reloading, and point requests to the local IP address.
Maybe I would put a varnish cache in front, so it would need to be compatible with that too (and why wouldn't it be), so the flow would be:
request -> haproxy (for load balancing in multiple servers)
-> varnish on the public server IP (for in-memory caching based on host and route, so if there is a cache hit the response is returned immediately)
-> SOME PROXY OR DNS BASED ON A SIMPLE MAP FILE, which further proxies to the local IP of one of multiple docker nginx containers
-> docker nginx inside custom network
-> some app in container
What is best practice for this flow? Should I put varnish somewhere else, and what is the software I am looking for?
I am currently using one extra nginx instance, mapping $host to a custom IP address in a custom maps.conf file and gracefully reloading nginx on change, but I have a feeling that there is a better solution for this.
Also, I forgot to mention that I don't need only HTTP proxying based on the map file, but TCP (ssh, smtp, ftp, ...) too; just in those cases I will not have haproxy and varnish in front, and this app would be public-facing on those ports.
for example:
port:22
domain1 1.1.1.1
domain2 1.1.1.2
domain3 1.1.1.3
port:25
domain1 1.1.1.4
domain2 1.1.1.5
domain3 1.1.1.6
I think something like Muguet might solve your issue.
From their github repo:
When using Docker, it's sometimes a pain to access your containers
using specific IPs/ports.
Muguet provides you with a DNS Server that resolves auto-generated
hostnames to your containers IPs, plus a Reverse Proxy to access all
your web apps on port 80.
I think what you want is dnsmasq. This is basically a lightweight DNS service that you run on the host running your docker containers, and it allows you to use hostnames instead of IP addresses. It's a pretty common way to solve this issue.
A nice guide to setting up dnsmasq can be found at:
http://docs.blowb.org/setup-host/dnsmasq.html
and searching dnsmasq and docker will point you to many more resources.
One thing to remember: on your haproxy host, make sure you modify /etc/resolv.conf to include your dnsmasq server.
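A rough sketch of that setup, assuming the map file is kept in hosts format (the file paths are placeholders):
# /etc/dnsmasq.conf
addn-hosts=/etc/dnsmasq.containers   # extra hosts file maintained by your update script
# /etc/dnsmasq.containers  (hosts format: IP then name)
1.1.1.1 domain1
1.1.1.2 domain2
1.1.1.3 domain3
# after the script rewrites the file, have dnsmasq re-read it without a full restart
kill -HUP $(pidof dnsmasq)
Note that the SIGHUP only clears the cache and re-reads the hosts files, so it comes close to the "no reloading" requirement in the question.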

Docker nginx-proxy : proxy between containers

I am currently running a development stack using Docker-Compose in my company, to provide developers with everything they need to work on our applications.
It includes in particular:
a Gitlab container (sameersbn/gitlab) to manage private GIT repositories,
a Jenkins container (library/jenkins) for building and continuous integration,
an Archiva container (ninjaben/archiva-docker) to manage Maven repositories.
In order to secure the services through HTTPS, and exposing them to the outside world, I installed the excellent nginx-proxy container (jwilder/nginx-proxy) which allows automated nginx proxy configuration using environment variables on containers, and automated HTTP to HTTPS redirection.
DNS is configured to map each dockerized service's public URL to the IP of the host.
Finally, using Docker-Compose, my docker-compose.yml file looks like this:
version: '2'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/config/nginx-proxy/certs:/etc/nginx/certs:ro
  postgresql:
    # Configuration of postgresql container ...
  gitlab:
    image: sameersbn/gitlab
    ports:
      - "10022:22"
    volumes:
      - /var/data/gitlab:/home/git/data
    environment:
      # Bunch of environment variables ...
      - VIRTUAL_HOST=gitlab.my-domain.com
      - VIRTUAL_PORT=80
      - CERT_NAME=star.my-domain.com
  archiva:
    image: ninjaben/archiva-docker
    volumes:
      - /var/data/archiva:/var/archiva
    environment:
      - VIRTUAL_HOST=archiva.my-domain.com
      - VIRTUAL_PORT=8080
      - CERT_NAME=star.my-domain.com
  jenkins:
    image: jenkins
    volumes:
      - /var/data/jenkins:/var/jenkins_home
    environment:
      - VIRTUAL_HOST=jenkins.my-domain.com
      - VIRTUAL_PORT=8080
      - CERT_NAME=star.my-domain.com
For a developer workstation, everything works as expected. One can access the different services through https://gitlab.my-domain.com, https://repo.my-domain.com and https://jenkins.my-domain.com.
The problem occurs when one of the dockerized services accesses another dockerized service. For instance, if I try to access https://archiva.my-domain.com from the Jenkins container, I get a timeout error from the proxy.
It seems that even though archiva.my-domain.com resolves to the public host IP from inside the container, requests coming from dockerized services are not proxied by nginx-proxy.
As far as I understand, nginx-proxy handles requests coming from the host network, but does not care about those coming from the internal container network (_dockerconfig_default_ for a Docker-Compose stack).
You could ask: why would I need to use the proxy from a container? Of course, I could use the URL http://archiva:8080 from the Jenkins container, and it would work. But this kind of configuration does not scale.
For example, when a Gradle build compiles one of our applications, build.gradle needs to declare my private repository as https://archiva.my-domain.com. That works if the build is launched from a developer workstation, but not from the Jenkins container ...
Another example is Jenkins authentication via the GitLab OAuth service, where the same GitLab authentication URL needs to be reachable both from the outside and from inside the Jenkins container.
My question here is then: how to configure nginx-proxy to proxy a request from one container to another container?
I did not find any topic discussing this problem, and I do not understand it well enough to build a solution from the nginx configuration.
Any help would be really appreciated.
BMitch, the odds were good: it was indeed an iptables rules problem, not a misconfiguration of nginx-proxy.
The default policy of the INPUT chain in the filter table was DROP, and no rule was in place to ACCEPT requests from the container IPs (172.20.X.X).
So, for the record, here are some details of the situation in case other people face the same problem.
To make containers reachable from the outside world, Docker adds PREROUTING and FORWARD rules that DNAT traffic arriving on the host IP to the container IPs. These default rules allow any external IP, which is why limiting access to containers requires some advanced iptables customization.
See this link for an example : http://rudijs.github.io/2015-07/docker-restricting-container-access-with-iptables/
But if your containers need to access host resources (services running on the host, or in my case an nginx-proxy container listening on the host's HTTP/HTTPS ports and proxying to containers), you need to take care of the iptables rules in the INPUT chain.
In fact, a request coming from a container and addressed to the host is routed to the host network stack by the Docker daemon, but it then has to pass the INPUT chain (with the container's IP as source). So if you want to protect host resources while still letting containers access them, do not forget to add something like this:
iptables -A INPUT -s 172.20.X.X/24 -j ACCEPT
where 172.20.X.X/24 is the virtual network on which your containers are running.
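If you are not sure which subnet Docker assigned to the compose network, you can look it up first; the network name myproject_default below is a placeholder (docker network ls shows the real one):
docker network inspect myproject_default --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'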
Thank you a lot for your help.

do docker container IPs change on restart?

I am new to docker and I have been dockerizing all my applications on a single server. So far, everything is fine and working. However, I don't understand one thing. I am using docker-compose for everything (I haven't created Dockerfiles for my projects yet) and there is this ports attribute in docker-compose. If I write something like this:
ports:
  - "8085:80"
It will listen on 0.0.0.0:8085, which means the outside world has access to my server. After some discussions and googling, I found that I can take an IP address on my docker bridge network and do the port mapping on it instead:
ports:
  - "172.17.0.1:8085:80"
This will listen only on 172.17.0.1:8085, which is great: it only listens internally, and my nginx proxies the traffic to the necessary ports (e.g. proxy_pass http://172.17.0.1:8085). After learning more about docker and understanding how containers work, I realized that all these containers have their own IP addresses, with ports exposed only on those addresses. For example, one of my "web" containers has the IPv4 address 172.17.0.10 and port 80 exposed. If I do docker inspect on one of these containers, I will see the container's IP address.
Now, I want to use these IP addresses in my nginx. Instead of proxy_pass http://172.17.0.1:8085, I want to do http://172.17.0.10. I personally think that this is a much more elegant interface, but there is one thing that concerns me. What happens if I restart my machine? All the containers will start in some kind of order. If I have 5 web containers and they start in random order, can I be sure that the IPs for these containers will be the same? Or will they change? Should I always use ports in docker-compose for use by nginx? If yes, how can I have different IPs per container instead of different ports on the same IP? Would it be okay if I created another docker network interface (let's say in subnet 172.17.1.0), and assigned different IPs from that interface to the exposed "public" ports? By this I mean basically using 172.17.1.1:80:80 in one container, 172.17.1.2:80:80 in another, etc.
I'm not an expert in the docker networking domain, but I'll try my best to answer the questions you have there.
Q: What happens if I restart my machine? All the containers will start in some kind of order.
A: Unless you use the links or depends_on keywords, you cannot guarantee the start order.
Q: If I have 5 web containers and they start in random order, can I be sure that the IPs for these containers will be the same?
A: I did a little experiment on my machine by taking note of the IP addresses of my 2 existing containers (PostgreSQL and InfluxDB).
They are running on
Postgres: 172.17.0.2
InfluxDB: 172.17.0.3
I shut it down and booted it again. Probably because they booted in the same order this time, the IP addresses seem to have been maintained. I then added the depends_on keyword to force the InfluxDB container to start before Postgres; now the IP addresses of both containers are:
Postgres: 172.17.0.3
InfluxDB: 172.17.0.2
I think the IPs are handed out on a first-come, first-served basis. If you didn't specify an ordering for booting up the containers, I think there is a small chance that the IPs will differ between restarts. It really depends on who runs first.
Q: Should I always use ports in docker-compose for use by nginx?
A: Yes, if you would like to forward an nginx instance's port to the outside world; otherwise no one will be able to hit that web server. For example, exposing port 443 lets HTTPS traffic come through.
Q: how can I have different IPs per container instead of different ports with the same IP?
A: I didn't know whether that was possible, but after doing some research in the docker-compose documentation it seems possible by using the ipam keyword.
See:
https://docs.docker.com/compose/compose-file/#/ipam
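For the record, a rough sketch of what a fixed address could look like with ipam (the network name, subnet and address are made up for illustration):
version: '2'
services:
  web:
    image: nginx
    networks:
      app_net:
        ipv4_address: 172.18.0.10   # fixed address for this container
networks:
  app_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/24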
That looks scary to me, so what I did for my own project was to use the service name instead.
Example:
container_bbb:
  image: banana

my_nginx:
  image: apple
  environment:
    - MOUNT_SRC0=http://container_bbb:80
    - MOUNT_DEST0=/
  links:
    - container_bbb
In this instance, inside the my_nginx container the service name container_bbb can be used as the hostname of that container.
Then I have a Python script that dynamically generates the nginx config from this information in the container's entrypoint script.
It sounds like a bit of overkill, but it gives me more control over what I want to do with my nginx.
So in my /etc/nginx/conf.d/default-locations/ the configs would be something like:
location /container_bbb/ {
proxy_pass http://container_bbb:3000/;
}
Note:
I am using this nginx server instance as a reverse proxy server.
I guess what I am trying to say here is that you can essentially use the hostname instead of the IP address, and to reach the container next door you would use http://CONTAINER_SERVICE_NAME:PORT
You can't rely on the containers' IP addresses. If all your services are on the same docker-compose config, they will automatically be part of the same internal network and you can simply use the service names as the hostnames.
E.g., if your web app service were named php, your nginx proxy config would look something like:
location / {
proxy_pass http://php;
proxy_set_header Host $host;
proxy_redirect off;
}
(Also note you might want to enable your firewall, just in case any of the port mappings leaks out to the public IP address of the host.)
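For reference, a minimal sketch of the compose file that the snippet above assumes (service and image names are illustrative):
version: '2'
services:
  php:
    image: my-web-app   # hypothetical image serving HTTP on port 80, not published to the host
  nginx:
    image: nginx
    ports:
      - "80:80"         # only the proxy is reachable from outside
Both services share the compose project's default network, so inside the nginx container the name php resolves to the other container's current IP, whichever one it gets after a restart.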
