Accessing web-server in private network from arbitrary clients - http

I am looking for the best practice for allowing access to a web server on a private network from a client running on an arbitrary machine (possibly on another private network). Here is the typical setup:
The web server runs on a Linux machine behind NAT (not directly accessible by the client) and is used for JPEG streaming.
The client may run on any device and cannot be directly accessed by the web server.
An intermediate server (a Linux machine) can be directly accessed by both the web server and the clients, since it has a static IP address.
The first solution that comes to mind is using the intermediate server as a web proxy and creating an SSH tunnel between the intermediate machine and the web server; however, this requires clients to change their proxy settings, which is not an option in my case. The other solution that would probably work is having the intermediate machine forward HTTP requests to the web server, though I do not know of an easy way to do so. Since this looks like a common case to me, I feel there should be some service that does this. I am also open to other solutions that would offer transparent access to the web server.
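For what it's worth, one common pattern matching this setup is a reverse SSH tunnel from the web server out to the intermediate machine, combined with a reverse proxy on the intermediate machine, so clients need no proxy settings at all. A sketch under assumptions: the hostname, user and ports below are placeholders, and nginx is assumed on the intermediate server.

```
# Run on the web-server machine behind NAT: keep a reverse tunnel open so
# the intermediate server can reach the local web server (port 80 here)
# via its own loopback port 8080. Hostname and user are placeholders.
ssh -N -R 8080:localhost:80 tunnel@intermediate.example.com
```

On the intermediate server, nginx can then forward incoming HTTP requests into the tunnel:

```
# /etc/nginx/conf.d/forward.conf on the intermediate server (sketch)
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;  # the tunnel endpoint from above
    }
}
```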

Related

What will happen if an SSL-configured Nginx reverse proxy passes to a web server without SSL?

I use Nginx to manage a lot of my web services. They listen on different ports, but are all reached through the Nginx reverse proxy under one domain. For example, to access a RESTful API server I can use http://my-domain/api/, and to access a video server I can use http://my-domain/video.
I have generated an SSL certificate for my-domain and added it to my Nginx conf, so my Nginx server is HTTPS now, but those original servers still use HTTP.
What will happen when I visit https://my-domain/<path>? Is this as safe as configuring SSL on the original servers?
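For context, the setup being described boils down to something like this (a sketch; the backend ports and certificate paths are invented): Nginx terminates TLS and talks plain HTTP to the backends.

```
server {
    listen 443 ssl;
    server_name my-domain;

    # Certificate paths are examples only.
    ssl_certificate     /etc/ssl/certs/my-domain.crt;
    ssl_certificate_key /etc/ssl/private/my-domain.key;

    location /api/ {
        proxy_pass http://127.0.0.1:8080/;   # plain-HTTP API server
    }
    location /video/ {
        proxy_pass http://127.0.0.1:8081/;   # plain-HTTP video server
    }
}
```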
One of the goals of making sites HTTPS is to prevent the data transmitted between two endpoints from being intercepted by outside parties, either to be modified in transit (a man-in-the-middle attack) or to be stolen and misused. On the public Internet, any data transmitted between two endpoints needs to be secured.
On private networks, this need isn't quite so great. Many services run just fine over plain HTTP on private networks. However, there are a couple of points to take into consideration:
Make sure unused ports are blocked:
While you may have an NGINX reverse proxy listening on port 443, is port 80 blocked, or can the sites still be accessed via HTTP?
Are the other ports to the services blocked as well? Say your web server runs on port 8080 and the NGINX reverse proxy forwards certain traffic to localhost:8080. Can the site still be accessed at http://example.com:8080 or https://example.com:8080? One way to prevent this is to use a firewall and block all incoming traffic on any ports you don't intend to accept traffic on. You can always unblock them later if you add a service that requires that a port be open.
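As a sketch of that policy (assuming ufw as the firewall; the port numbers are the examples used above), a default-deny ruleset with only the intended ports opened looks like this. localhost:8080 stays reachable to the reverse proxy but not from outside:

```
# Block everything inbound by default, then allow only what is intended.
sudo ufw default deny incoming
sudo ufw allow 22/tcp     # SSH for administration
sudo ufw allow 443/tcp    # the NGINX reverse proxy
sudo ufw enable           # ports 80 and 8080 now refuse outside connections
```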
Internal services are accessible by other services on the same server
The next consideration relates to other software that may be running on the server. While it's within a private ecosystem, any service running on the server can access localhost:8080. Since the traffic between the reverse proxy and the web server is not encrypted, that traffic can also be sniffed, even if authentication is required to access localhost:8080. All a rogue service would need to do is monitor the port and wait for a user to log in; it could then capture everything between the two endpoints.
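To see how little this takes, here is a sketch of that kind of eavesdropping (assuming tcpdump is available and the rogue process has the privileges to capture packets):

```
# Any sufficiently privileged local process can read the unencrypted
# proxy-to-backend traffic straight off the loopback interface:
sudo tcpdump -i lo -A 'tcp port 8080'
```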
One strategy to mitigate the danger of such spyware is to either use virtualisation to separate a single server into logical servers, or to use different hardware for unrelated things. This at least keeps things separate, so that the people responsible for application A don't assume that service X belongs to the team running application B. Anything out of place will be more likely to stand out.
For instance, a company website and an internal wiki probably don't belong on the same server.
The simpler we can keep the setup and configuration on the server by limiting what that server's job is, the more easily we can keep tabs on what's happening on the server and prevent data leaks.
Use good security practices
Follow good security best practices on the server. For instance, don't run as root; use a non-root user for administrative tasks. Don't run long-lived services as root either.
For instance, NGINX is capable of running as the user www-data. With specific users for different services, we can create groups and assign the different users to them and then modify the file ownership and permissions, using chown and chmod, to ensure that those services only have access to what they need and nothing more. As an example, I've often wondered why NGINX needs read access to logs. It really should, in theory, only need write access to them. If this service were to somehow get compromised, the worst it could do is write a bunch of garbage to the logs, but an attacker might find their hands are tied when it comes to retrieving sensitive information from them.
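As a small illustration of that idea (the path, user and modes below are examples, not a drop-in recipe):

```
# Let the www-data user (which nginx workers run as, per the "user"
# directive in nginx.conf) read the site files but not modify them,
# and keep everything hidden from other unprivileged users.
sudo chown -R root:www-data /var/www/example
sudo chmod -R 750 /var/www/example   # owner rwx, group r-x, others none
```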
localhost SSL certs are generally for development only
While I don't recommend this for production, there are ways to make localhost use HTTPS. One is a self-signed certificate. The other uses a tool called mkcert, which lets you be your own CA (certificate authority) for issuing SSL certificates. The latter is a great solution, since the browser and other services will implicitly trust the generated certificates, but the general consensus, even from the author of mkcert, is that this is recommended for development only, not production. I've yet to find a good solution for HTTPS on localhost in production; I don't think one exists, and in my experience, I've never seen anyone worry about it.
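For development, the mkcert flow is short. A sketch (the output file names come from this particular invocation):

```
# Install mkcert's local CA into the system/browser trust stores,
# then issue a certificate covering localhost.
mkcert -install
mkcert localhost 127.0.0.1 ::1
# -> writes localhost+2.pem and localhost+2-key.pem to the current directory
```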

How to make My PC work as Host Server?

I have an ASP.NET web application hosted in IIS on my local machine.
My question is:
Is there any free or paid method that allows this web application to be browsed from the internet, with my PC as the host server?
Thanks
The easiest way is to publish it directly onto the internet. You do run the risk of attackers then being able to attack your machine, so you will need to brush up on your security skills. It might be worth looking into one of the free hosting options from AWS, Azure or Google Cloud.
To use your local machine as a web server, first configure it to use a static IP. It's been a while since I've done it on Windows, but this guide looks about right: http://www.howtogeek.com/howto/19249/how-to-assign-a-static-ip-address-in-xp-vista-or-windows-7/.
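If you prefer the command line, something like this should work on recent Windows versions (a sketch: the adapter name and addresses are examples for a 192.168.1.x LAN, and it needs an elevated prompt):

```
:: Assign a static address, netmask and gateway to the "Ethernet" adapter.
netsh interface ip set address "Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1
:: Point DNS at the router as well.
netsh interface ip set dns "Ethernet" static 192.168.1.1
```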
Next you will need to configure port forwarding on your modem: send all traffic on port 80 to your machine, using its new fixed IP address. If you're using HTTPS as well, forward port 443 to your machine too. There are too many modem brands, all of which handle this slightly differently, to offer step-by-step help here; you will need to read up on your particular modem.
If your internet connection is using a fixed IP, then you can stop here.
If not, or if you just want a domain name, then it's worth signing up for a dynamic DNS service. I use No-IP: it's free, it integrates with my modem, and I haven't had any problems with it in the last few years. Once this is in place, you will be able to hit your web server just like a real one, using something like "http://mypc.no-ip.biz/mydemoapp/".
But again, be warned about exposing your machine on the internet. There are nasty people out there who love to hijack other people's computers.
Update:
This should give you some guidance on port forwarding
http://www.howtogeek.com/66214/how-to-forward-ports-on-your-router/
Try http://www.noip.com. I just logged in and it seemed happy. Otherwise, click through all the settings in your modem looking for DDNS or dynamic DNS; there is usually a drop-down of all the providers it will talk to. Some providers also have apps that you run on your PC, which is easier than working with the modem for some people (or for modems that don't support DDNS).

Is there a Burp Suite alternative that allows MITM to be performed not only on a browser but also on any program whose local ports are randomly spawned?

Recently I came across a 0-day in the most popular software in, let's just say, the "Entertainment" industry, where remote code execution can be achieved via MITM.
Usually I use Burp to accomplish MITM. But this one is a client-side program that spawns random local ports to send HTTP requests to its server. Since the ports are randomized, Burp's proxy couldn't channel the traffic to its listener, as Burp requires a predefined proxy port to be bound to Firefox/Chrome.
(The software I mentioned above is not a browser, though it exhibits some browser-like behavior, so configuring it to use a proxy is basically out of the question.)
So, is there any alternative program that could serve as a proxy while providing real-time capabilities similar to Burp's?
Firstly, you could still use Burp. You have three options; one of them might work:
Look for a proxy setting in the client. Lots of clients allow you to use proxies: look for a config parameter, a command-line switch, etc.
Set the system proxy to use Burp, so that all HTTP traffic is sent to Burp. On Linux you can use the http_proxy and https_proxy environment variables; on Windows, the Internet Settings (sketched below).
If the client connects to a hostname rather than an IP, you can make this hostname resolve to 127.0.0.1 in the OS's hosts file and configure Burp to listen on the port the client tries to connect to (also sketched below). Of course, this won't work if the server port is also randomized, but that would be really unusual. You also have to configure Burp to forward all traffic on to the target server, working as a transparent proxy.
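A sketch of options 2 and 3 on Linux (Burp is assumed to listen on 127.0.0.1:8080; the client binary and hostname are placeholders):

```
# Option 2: system-wide proxy via environment variables.
export http_proxy=http://127.0.0.1:8080
export https_proxy=http://127.0.0.1:8080
./target-client            # hypothetical client; it inherits the proxy
# (HTTPS interception additionally requires the client to trust Burp's CA.)

# Option 3: pin the server's hostname to localhost so Burp, listening on
# the server's port in invisible/transparent mode, receives the traffic.
echo '127.0.0.1 api.vendor.example' | sudo tee -a /etc/hosts
```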
If all these don't work, you can try bettercap, a dedicated MITM tool.

Is it possible to optimally route browser data: try localhost then local network then internet

Let's say that I'm building a web app that will be required to exchange data with 3 possible entities:
Another browser window open on the same machine.
A browser on a different machine that is still within the same intranet.
A browser on a machine outside of the intranet.
Is it possible to somehow finagle the HTTP protocol so that the data is optimally routed?
If the transfer is on the same machine, then the request should never even reach the router.
If the transfer is in the intranet, then the request should never make it onto the internet at large.
If the transfer is outside of the intranet - then so be it.
This has nothing to do with HTTP.
What you want is exactly what properly configured routing does.
You need a combination of properly configured DNS and routing; then communication with local hosts will never pass through the router.
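In practice this is often done with split-horizon DNS: the same name resolves to a private address inside the LAN and a public one outside, so local traffic never leaves the network. An illustration (all names and addresses invented):

```
# Asked from inside the LAN: the internal DNS view answers with the
# private address, so the transfer stays on the local network.
$ dig +short app.example.com
192.168.1.20

# Asked from the internet: the public view answers with the public address.
$ dig +short app.example.com
203.0.113.10
```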

Website currently being viewed

I have 50 machines on a LAN, each with internet access. Can a program be developed using VC++ that will tell which websites are being opened by users on each machine?
You can accomplish this by writing an application which captures outbound packets on port 80 (and the associated DNS information). The problem is that this application must run on every client computer you want to trace. The easier method, as stated by others, is to take advantage of your network architecture and tunnel all traffic through a central proxy, which can record the same information.
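For a quick look at what such a capture involves, tcpdump on one client shows the raw material (the interface name varies; this is only the capture side, not the reporting):

```
# Outbound HTTP plus the DNS lookups, with numeric addresses (-n):
sudo tcpdump -i eth0 -n 'tcp dst port 80 or udp port 53'
```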
In the latter case, there are many enterprise tools suited to exactly this task.
Route your internet traffic through a centralized proxy and monitor the traffic from the proxy, say using Fiddler or something similar. If proxying is not possible, use Fiddler on each machine to log data to a known location and then collate it at the required intervals.
Install a firewall, if you don't already have one, and use it to log connections.
