How do I channel all browsing traffic through an encrypted stunnel session?

I have stunnel running on my client and server, and I can't get my head around how to run it in a sort of "silent mode": if I were abroad, I could fire up the stunnel connection on my client, connect to my server, and my browsing traffic would then behave as if I were in the UK (an encrypted proxy).
On the client conf I have:
accept = localhost:xxx (I understand this means the local stunnel installation listens on port xxx and grabs any traffic sent to that port).
connect = serverip:xxx (This is the instruction for where it needs to be forwarded, i.e. the server).
On my server:
accept: clientIP:xxx (the source IP address of my client)
connect: localhost:xxx (the loopback address of the server)
What am I failing to see here? As I see it, I can only use this tunnel if I explicitly target a port with my browser, and even then wouldn't the traffic only make it as far as the stunnel server and not onward to the intended website? Do I need to set up proxy settings in the browser?
thanks a lot

I'm not sure stunnel is what you're looking for here.
What you describe would be best accomplished with OpenSSH, and its dynamic SOCKS5 proxy functionality, e.g. ssh -D1080 from the client.
This generally doesn't require any extra settings on the server side (unless it was specifically disabled by your system administrator). On your roaming client, you simply establish an SSH connection to your server as usual, but add an extra -D1080 parameter to your ssh invocation.
Or, if using PuTTY, set up dynamic port forwarding under Connection, SSH, Tunnels: enter a source port of 1080, select Dynamic as the destination, and click Add.
Subsequently, change your browser settings to use a SOCKS proxy at localhost, port 1080; make sure to specify SOCKS v5, and ensure the checkbox for resolving hostnames remotely is ticked, too.
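Putting it together, a typical invocation from the roaming client might look like this (user and hostname are placeholders):
ssh -N -D 1080 user@your-server.example.com
Here -D 1080 opens the local SOCKS5 listener the browser will use, and -N skips starting a remote shell, since the tunnel is all you need.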

Related

Why won't Nginx Proxy Manager's Stream feature work?

I'm currently trying to set up a tunneling tool, specifically for game servers,
so you can start the server locally and everyone can join without opening your own ports or becoming insecure.
Basically, I create a reverse SSH tunnel to one of my dedicated Linux servers, where the game port gets mapped to a different port (for example 8888). The server is then exposed to the internet and available to anyone, and the user doesn't have to compromise security by opening his own ports. Everyone can connect to the following address: SERVERADRESS:8888.
The command which gets executed looks like this:
ssh -N -R "*:8888:localhost:25565" root@SERVERADRESS
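(Note: for a remote forward to bind to * rather than just the server's loopback interface, the server's sshd_config generally has to allow it, e.g.:
GatewayPorts clientspecified
Otherwise sshd quietly binds the forwarded port to localhost only.)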
This works fine, just as I want. But I also want to secure my "forwarding" server. I'm relatively new to networking, but I found reverse proxies, watched some tutorials, and installed the "Nginx Proxy Manager" tool, which comes with a web interface and looks very good and easy. It has an option to create a Stream (pictures below), where you can enter the incoming port and the forwarding host + port, for example: REVERSEPROXY:7777 -> FORWARDINGSERVER:8888. With this I want to hide the IP address of the server where all the SSH tunnels terminate. Sadly, this Stream tool won't work; I've already seen some other topics about that. They all said to enter the port into the docker-compose.yml, which I already did, plus a restart, but it still won't work. Any other solutions for this problem? Or completely different ideas to protect my server?
https://i.stack.imgur.com/FolLe.png https://i.stack.imgur.com/KuJbt.png https://i.stack.imgur.com/2SN4a.png https://i.stack.imgur.com/9kzbj.jpg
I'm trying to build my own tunneling tool, but with protection so that my server doesn't get damaged.
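For reference, an NPM stream can only listen on a port that is actually published on its Docker container. A minimal docker-compose.yml sketch (the service name and image tag are assumptions) would be:
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'      # HTTP
      - '81:81'      # admin web UI
      - '443:443'    # HTTPS
      - '7777:7777'  # the stream's incoming port; publish it, then recreate the container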

Shadowsocks client cannot connect to my Shadowsocks server

I am trying to set up a Shadowsocks connection from China. To do that, I:
1) downloaded and installed a Shadowsocks client (ShadowsocksX-NG.app) on my local machine and configured it;
2) created a Shadowsocks service on a server abroad.
For 2), I have created one instance on AWS in the US East zone, and the service is already started with the following configuration. The server instance is secured with a key pair for connection.
{
    "server": "0.0.0.0",
    "local_address": "127.0.0.1",
    "local_port": 1080,
    "port_password": {
        "7777": "password1",
        "8888": "password2"
    },
    "timeout": 300,
    "method": "aes-256-cfb",
    "fast_open": false
}
For 1), I connect with the server instance's address, port number 7777, and password = password1.
I use global mode (to ensure Shadowsocks kicks in) for the Shadowsocks client and start it, but no website loads (both sites blocked and sites not blocked by the GFW). I assume there is a problem with the connection between the Shadowsocks client and server sides. I also tried different encryption algorithms, but pages still don't load.
I need some hints on where the problem might be!
I suspect something is wrong with the cryptography? I think the concept of Shadowsocks is that:
the client side encrypts the URL and sends it to the server side;
the server side then receives the encrypted text, decrypts it, and fetches the result with the decrypted URL.
I am guessing the problem might occur at this part: I don't see how my client side encrypts or how my server side can decrypt, since I didn't share any keys between the two sides.
Set the inbound rules to allow traffic to ports 7777 and 8888:
type = TCP
port = 7777 or 8888
source = 0.0.0.0/0
Then Shadowsocks will be able to connect.
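If you manage the instance's security group from the command line, the equivalent rules would look something like this (the security-group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 7777 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8888 --cidr 0.0.0.0/0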

How to suppress the Windows Security Alert for Windows Firewall?

When I create the Hello World example in C++ from The Guide on ZeroMQ found here:
http://zguide.zeromq.org/page:all#Ask-and-Ye-Shall-Receive
and run the application, I get a Windows Security Alert that asks if I would like to allow the application to communicate on public or private networks.
Here is where things get interesting.
I only need my program to listen on port 5555 for connections from localhost and I do NOT need to allow incoming connections on port 5555. This is because I only want to communicate between applications on the localhost.
Client and server are both running on the same machine.
Here is my current process: I start the server and the Windows Security Alert comes up; since I am running the application under a non-administrator account, I only have standard permissions, so I click Cancel on the alert.
Clicking cancel on the alert puts an explicit deny inbound rule on all ports for HelloWorldServer.exe. This is totally fine.
Then I start the client. Since the client is connecting to localhost, it does not actually need to send messages outside of the local machine, and all of its messages arrive at the server just fine.
Given an explicit deny rule on incoming connections to HelloWorldServer.exe, the messages can still arrive from the client on the local host. This is a desirable result.
Now the question becomes: is there any way to automatically respond to the Windows Security Alert and click Cancel? Is there any way to suppress it from popping up at all, since the allow is not needed?
The prompt is undesirable because it implies that the application needs to create a vulnerability when it does not.
Please assume that Named Pipes are not a valid alternative to tcp as a means of inter-process communication.
When binding the socket the caller may specify the IP address the socket is bound to. The coding samples provided by ZeroMQ specify
socket.bind ("tcp://*:5555");
where * appears to specify all possible addresses (INADDR_ANY in BSD-socket-derived parlance), which triggers the Windows firewall since it allows both remote and local addresses.
Calling socket.bind with the localhost address 127.0.0.1
socket.bind ("tcp://127.0.0.1:5555");
limits the sockets allowed to connect to the local machine and should silence the firewall warning for most Windows firewall configurations.
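For illustration, here is the server side of the zguide's hello-world example with only the bind line changed to loopback (a minimal sketch, assuming the classic cppzmq zmq.hpp API used in the guide):
#include <zmq.hpp>
#include <cstring>

int main () {
    zmq::context_t context (1);
    zmq::socket_t socket (context, ZMQ_REP);
    // Loopback-only bind: unlike "tcp://*:5555" this accepts no remote
    // connections, so Windows has nothing to warn about.
    socket.bind ("tcp://127.0.0.1:5555");
    while (true) {
        zmq::message_t request;
        socket.recv (&request);                   // wait for "Hello" from the client
        zmq::message_t reply (5);
        std::memcpy (reply.data (), "World", 5);  // answer with "World"
        socket.send (reply);
    }
    return 0;
}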

Is there a Burp Suite alternative that allows MITM to be performed not only on a browser but also on any program whose local ports are randomly spawned?

Recently I came across a 0-day in the most popular software in, let's just say, the "entertainment" industry, where remote code execution can be achieved via MITM.
Usually I use Burp to accomplish MITM. But this one is a client-side program that spawns random local ports to send HTTP requests to its server. Since the ports are randomized, the Burp proxy can't channel the traffic to its listener, because Burp relies on a predefined proxy port being configured in Firefox/Chrome
(the software I mentioned above is not a browser, though it exhibits some browser-like behavior, so configuring it to use a proxy is basically out of the question).
So, is there any alternative program that could serve as a proxy while providing real-time capabilities similar to Burp's?
Firstly, you could still use Burp. You have three options; one might work:
Look for a proxy setting in the client. Lots of clients allow you to use proxies; look for a config parameter, a command-line switch, etc.
Set the system proxy to use Burp, so that all HTTP traffic is sent to Burp. On Linux you can use the http_proxy and https_proxy environment variables (see the sketch after this list); on Windows, the Internet Settings.
If the client connects to a hostname rather than an IP, you can point this hostname at 127.0.0.1 in the OS's hosts file and configure Burp to listen on the port the client tries to connect to. Of course this won't work if the server port is also randomized, but that would be really weird. You also have to configure Burp to forward the whole traffic to the target server and to work as a transparent proxy.
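For the second option on Linux, a minimal sketch (127.0.0.1:8080 is Burp's default listener; the client binary name is a placeholder):
export http_proxy=http://127.0.0.1:8080
export https_proxy=http://127.0.0.1:8080
./client-app        # launch the client under test with the proxy variables set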
If none of these work, you can try bettercap, which is a MITM tool.

Multiple certificates for HTTPS on a software NLB'd IIS7 cluster

We're currently trying to set up HTTPS with multiple certificates. We've had some limited success, but we're getting some results I can't make any sense of...
Basically, we have two servers in our NLB (10.0.51.51 and 10.0.51.52) and two IPs assigned to the NLB (10.0.51.2 and 10.0.51.4), and we have IIS listening on both of these IPs with different wildcard certificates (to avoid giving out public IPs, let's say A:443 routes to 10.0.51.2:443 and B:443 routes to 10.0.51.4:443). We also have a Cisco router using port address translation to route port 443 from two external IPs to these internal NLB IPs.
The weird thing is, this works if we request A:443 or B:443, but if you go internally to 10.0.51.51:443, 10.0.51.52:443, 10.0.51.2:443 or 10.0.51.4:443 you ALWAYS get the same SSL cert. This cert was in the past assigned to *:443, but we've made sure there are no * bindings defined in IIS anymore.
When I run "netsh http show sslcert" and trim out all the irrelevant entries, I get:
IP:port : 0.0.0.0:443
Certificate Hash : <Removed: Cert 1>
IP:port : 10.0.51.2:446
Certificate Hash : <Removed: Cert 3 - Another site>
IP:port : 10.0.51.3:446
Certificate Hash : <Removed: Cert 3 - Another site>
IP:port : 10.0.51.4:443
Certificate Hash : <Removed: Cert 2>
This tells me that the * binding is still in there, which is a bit weird, but I can't see why that would prevent the others from working (or, even more strangely, why the requests through the router would work).
It's got me wondering whether it's actually treating the requests as arriving on the machine's own IP rather than the NLB IP. Unfortunately our dev environment is only a single server, which rather limits the amount of trial and error I can apply (since all I can test on is the live environment) without convincing management to buy more servers for the test environment, which is something I'm trying to do.
Does anyone have any idea:
Why there's a difference between internal and through the router?
Why the internal request is getting the wrong cert?
How I can remedy this so that we get the same behavior on both sides?
I ended up tracking the problem down. Leaving this as a hint for anyone else who falls in the same trap...
The problem was caused by our use of a shared configuration model on our IIS servers. When you set up an HTTPS binding, IIS appears to only actually bind it on the box you're managing it from (leaving the other completely unbound). Since our * binding still existed, it was catching the requests on the server we didn't configure through the UI and had just let pick up the shared config.
Crazy bad luck with single-affinity NLB sent us down the garden path of suspecting the router, by making our internal requests go to one server and our external requests to another.
We ended up finding this by running "netsh http show sslcert > certs.txt" on both servers and diff'ing the outputs.
Going forward, our plan is to no longer use the IIS UI for SSL configuration, instead following the steps below:
Install the certificates on each server.
Run a command-line binding of the SSL port: "netsh http add sslcert ipport=?:? certhash=? appid=?" (ip:port is easy to work out; certhash can be copied from the "certificate hash" section of the server certificates page; appid can be copied from an existing IIS binding shown by netsh http show sslcert).
Edit the IIS ApplicationHost.config file directly to add the bindings without the UI being involved.
Our understanding is that this will prevent a repeat of this error.
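As a concrete illustration (the certificate thumbprint and appid GUID below are placeholders), the binding for the 10.0.51.2 address would look like:
netsh http add sslcert ipport=10.0.51.2:443 certhash=0102030405060708090a0b0c0d0e0f1011121314 appid={00112233-4455-6677-8899-aabbccddeeff}
Running the same command on every server in the NLB keeps the bindings from drifting apart again.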
