What is the advantage of using a proxy in a network for accessing the Internet? - networking

My college has different proxies for accessing the Internet, like 192.168.0.2/3/4, each with a specific port number. What is the advantage of using this? I would also like to know what exactly happens there. I have also heard that my institution has different ISP connections shared over the same network. What is the role of the proxy there?

This will be easy to understand once you know what proxies do and why they are generally used, which you could look up on a magical website called www.google.com. By using a proxy, you get more control over the network because all requests go through it. Your school may want to do things like traffic shaping, content filtering, etc. Using the proxy server makes sure all requests to the Internet are routed there first.

Proxies are good for a few things:
Filtering. By using a proxy, your college can filter out viruses, porn, Facebook or torrent downloads.
Logging. By requiring a username and password, the college can track what you do with your Internet time, and can tell you off if you go somewhere you shouldn't, or help you by using that data for traffic shaping or other network maintenance.
Line Bonding. For example, if you have two 5 Mb ADSL lines, you can bond them to get a 10 Mb line (normally this is done at the gateway stage, not the proxy, but it is possible at this stage of the network).
Failover. Again, this would normally be done at the gateway/router stage. This detects which lines are active and routes your traffic to those lines.
Network Connectivity. If your college is in-turn part of a bigger academic network, this could allow crossing those network boundaries to get internet access.
Although those are valid possibilities, it's probably just for Filtering...
On the wider Internet, proxies are used to allow access to blocked content - like giving users in China access to Google...
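To make the "all requests go through there" point concrete, here's a minimal sketch of a client explicitly sending its traffic through a proxy, written in Python with the requests library. The proxy address comes from the question; the port 8080 is just a placeholder, since the real port is whatever the college assigns.

```python
# Minimal sketch: sending a request through an explicit HTTP proxy.
# The address/port below are placeholders based on the question.
import requests

proxies = {
    "http": "http://192.168.0.2:8080",   # hypothetical proxy host:port
    "https": "http://192.168.0.2:8080",  # HTTPS is usually tunnelled via CONNECT
}

# Every request made with this `proxies` mapping hits the proxy first,
# which is what lets the college filter, log, or shape the traffic.
response = requests.get("http://example.com/", proxies=proxies, timeout=10)
print(response.status_code)
```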

Related

HTTP TCP connection to web server behind NAT

My question is the same as this one but hopefully adds clarity to get an answer. After reading this fantastic article on the specifics behind NAT traversal, along with a general summary of methods found here, I'm wondering if this scenario has been accomplished or is possible. I'm writing software that serves web pages on any specified port, and I'm wondering if it is possible to have a web client on the WAN side connect to this server while it is behind a NAT router. The reasons I'm finding this difficult are:
I don't want to tell the user (who owns the web server) to configure their router to port forward (and in many cases the user may not have the privileges to do so).
UPnP, I believe, is often disabled by default, and is another configuration privilege not afforded to the user.
UDP hole punching looked promising until I realized the client is using a browser with HTTP, and thus can communicate only over TCP, which limits my options further by restricting me to browser scripts.
I have not found a successful implementation of TCP hole punching, considering the difficulties of maintaining state information. (Currently I'm looking at chownat, but I'm wondering how to implement TCP over a UDP tunnel from a web browser, or if that's even possible.)
Using a proxy to forward all traffic doesn't scale well (though using an external server that is not behind a NAT would be perfectly fine for setting up the initial connection or NAT traversal). By scaling, I mean many users each having their own web server, not one user's web server getting high traffic (which is not a concern, given that the user's upload bandwidth is often severely limited).
Right now I'm starting to think there will have to be some client-side browser script to help implement this, so the task won't be handled completely by the server. If anybody has any ideas or experience with having a user connect to a web server behind a NAT router, I'd greatly appreciate some direction! Thanks!
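For what it's worth, the external-server idea mentioned above can be sketched as a reverse tunnel: the NATed web server dials out to a public relay, so no port forwarding or hole punching is needed, and WAN clients connect to the relay instead. A rough Python sketch, assuming a relay exists at a placeholder address (relay.example.com is not real infrastructure):

```python
# Hypothetical sketch of the "reverse tunnel" idea: the NATed web server makes
# an OUTBOUND connection to a public relay, which traverses the NAT like any
# normal client connection, so no port forwarding is required.
import socket
import threading

LOCAL_WEB = ("127.0.0.1", 8000)        # the web server running behind NAT
RELAY = ("relay.example.com", 9000)    # placeholder public relay server

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the connection closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

relay = socket.create_connection(RELAY)
local = socket.create_connection(LOCAL_WEB)

# Pump both directions; a real implementation would also multiplex many
# WAN clients over the tunnel and handle reconnects.
threading.Thread(target=pump, args=(relay, local), daemon=True).start()
pump(local, relay)
```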

Restricting access to IP addresses on a web application

I'm working on an application that will be used from different locations, so it has to be on a network, and since the workstations that will use it are quite far apart, it will be on the Internet - definitely on a dedicated Windows server.
I have security concerns because it is the kind of application that black-hat hackers and crackers would like to abuse for their own ends.
So I'm thinking, since I am the IT head of the company, I can procure a static IP address for each workstation that will use the application, then compile a whitelist of IP addresses. If a request is not coming from an IP address on the whitelist, it will be denied. Does this make sense?
I could also use more security tips on securing the server and the application.
It's an ASP.NET MVC application.
Does this make sense?
At a network level? Somewhat. At an application level? Probably not.
IP filtering is something that makes sense at the network level: setting firewall rules to dictate which IPs are allowed to access certain ports on a server is both sensible and common.
Trying to do the same thing at the application layer is error prone and problematic. For instance, if your application is behind a load balancer, the IP address your application sees may well be the one belonging to the load balancer, not the client who originated the request.
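To make that pitfall concrete, here's a rough sketch of an application-layer whitelist check, in Python rather than ASP.NET purely for illustration (the whitelist ranges are placeholders):

```python
# Sketch of an application-layer IP whitelist check. The entries are
# placeholders; real entries would be the workstations' static IPs.
from ipaddress import ip_address, ip_network

WHITELIST = [ip_network("203.0.113.0/28"), ip_network("198.51.100.7/32")]

def is_whitelisted(remote_addr: str) -> bool:
    addr = ip_address(remote_addr)
    return any(addr in net for net in WHITELIST)

# The pitfall: behind a load balancer, the remote address the app sees is the
# balancer's IP, not the client's. The client IP may arrive in an
# X-Forwarded-For header instead, which clients can spoof unless the
# balancer strips or overwrites it.
print(is_whitelisted("203.0.113.5"))   # True
print(is_whitelisted("10.0.0.1"))      # False (e.g. a load balancer's address)
```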
As an additional note, just because a request is coming from a trusted IP, doesn't mean that you don't have to be careful. Your "trusted" client systems could be compromised or an attacker could be using a CSRF attack.

Meteor-based websites can't be accessed from corporate networks

I built two apps based on Meteor for big clients - MAN and BMW Denmark. Unfortunately, neither client can see the apps (just a white screen in the browser) from their own internal networks. I am able to see them, my partners are able to see them; the issue happens only from their [BMW and MAN] networks. I think it's somehow related to their firewalls or some kind of security settings/services, but it's impossible to get any info from their tech departments. What kinds of issues could cause this scenario? I'm 100% sure it's related to Meteor only, because old-school solutions (Django-based) work fine from the same domain. And it's not related to the specific apps, because they work in any browser outside the corporate networks.
If it's not the port, as suggested, the IT group may have restrictions in place, like an aggressive web filter or strict site whitelisting. Speaking from my team's experience inside a large corporation, we even have trouble building our apps behind the firewall because of the package checks on every server refresh. We get the same thing with npm and Bower.
What do you see happening if you run Fiddler? Blank screens can be due to a bad request.
We had a situation similar to yours, and the issue was related to an F5 load-balancing appliance stripping HTTP headers while the corporate network used a proxy server. The header we lost was X-Forwarded-For, which caused issues when going through the corporate proxy server. We had to troubleshoot with their IT to resolve it.
Compare the Fiddler trace from outside the corporate network with one from within the corporate network.
If it works outside the corporate network, the issue is probably something like I'm describing above.
It's most likely that the websocket connection is defaulting to a port other than 80/443. A lot of corporate firewalls block traffic on ports other than 80 and 443.
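A quick way to test that theory from inside the corporate network is a plain TCP connect against the ports the app actually uses. A minimal Python sketch (the hostname is a placeholder for the real deployment):

```python
# Quick check: can we even open a TCP connection to the port the app uses?
# Host and port list below are placeholders for the real deployment.
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

for port in (80, 443, 3000):  # 3000 is a common Meteor dev default
    status = "reachable" if port_reachable("yourapp.example.com", port) else "blocked"
    print(port, status)
```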

Website currently being viewed

I have 50 machines on a LAN, and each of them has Internet access. Can a program be developed using VC++ that will report all the websites being opened by users on each machine?
You can easily accomplish this by writing an application which captures outbound packets on port 80 (and the associated DNS information). The problem is that this application must run on every client computer you want to trace. The easier method, as stated by others, is to take advantage of your network architecture and tunnel all traffic through a central proxy, which can record the same information.
In the latter case, there are many enterprise tools suited to exactly this task.
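As a rough illustration of the capture approach, here's a sketch in Python with scapy rather than VC++ (assuming scapy is installed and the script has packet-capture privileges). Note it only sees plain HTTP; HTTPS payloads are encrypted, so there you'd have to fall back on DNS lookups or the TLS SNI field.

```python
# Rough sketch: log the Host header of outbound HTTP requests on port 80.
# Requires scapy and capture privileges; plain HTTP only.
from scapy.all import sniff, Raw

def log_host(pkt):
    if pkt.haslayer(Raw):
        # HTTP headers are CRLF-separated lines in the TCP payload.
        for line in bytes(pkt[Raw].load).split(b"\r\n"):
            if line.lower().startswith(b"host:"):
                print(line.decode(errors="replace"))

sniff(filter="tcp dst port 80", prn=log_host, store=False)
```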
Route your Internet traffic through a centralized proxy and monitor the traffic from the proxy, say using Fiddler or something else. If proxying is not possible, use Fiddler to generate data at a known location and then collate it at the required intervals.
Install a firewall, if you don't already have one, and use it to log connections.

How to get browser IP or hostname?

I have a web application that should behave differently for internal users than external ones. The web application is available over the Internet, and therefore obviously to the internal users as well.
All the users are anonymous, not authenticated, but the page should render differently for internal users than for external ones. What I'm doing in my code is using Request.UserHostName and then Dns.GetHostEntry. The result is then compared to a setting in my web.config (which holds something like *.mydomain.local). If the comparison gives a positive result, I render the HTML the internal user should see; otherwise I render the HTML the external user should see.
However, my problem is that I don't always get the expected value from Request.UserHostName. On the development site I get the IP address of the machine running the browser, but on the customer site I don't get the IP address of the user's machine - I get some other IP address. The browsers don't have any proxies set or anything like that.
Should I be using something else than Request.UserHostName?
I recommend using IP addresses as well. I'm dealing with this exact situation right now, setting up an authentication system, and the conditions described by Epso and Robin M are exactly what is happening: external users coming to the site give me their actual IP address, while all internal users present the IP of the gateway machine (router) onto the private subnet the web servers sit on.
To deal with it, I just check for that one IP. If I get the IP of the gateway, I provide internal access. If I get anything else, they get the external version, which in my case requires additional authentication. In yours, it would just mean a different interface.
Try Request.UserHostAddress, which returns the client's IP address. Assuming your internal network uses IP addresses reserved for LANs, it should be relatively simple to check if an IP is internal or external.
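That range check is easy to express. A minimal sketch in Python (the original question is ASP.NET, where Request.UserHostAddress would supply the address string):

```python
# Sketch: classify a client as internal if its address is in a private range.
# Python's ipaddress module knows the RFC 1918 ranges (10/8, 172.16/12,
# 192.168/16); in ASP.NET you'd feed this the Request.UserHostAddress value.
from ipaddress import ip_address

def is_internal(remote_addr: str) -> bool:
    addr = ip_address(remote_addr)
    return addr.is_private or addr.is_loopback

print(is_internal("192.168.0.42"))  # True  -> render the internal HTML
print(is_internal("203.0.113.9"))   # False -> render the external HTML
```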
There might be a firewall doing some sort of NAT to enable inside clients to use the external DNS name to reach the server.
Is the IP address you get on the customer site the same as the customer's external server IP? In that case you can hard-code that one IP address: all internal computers behind that firewall will appear to have the same IP address, and you can classify them as "internal".
It looks like you're being returned a public-facing IP address. Get the user to go to http://www.myipaddress.com. If this is the same as the IP address returned to your software, then this is definitely the case.
The only solution I can see to get around this is to either get them to connect to the machine holding the asp.net application via a VPN, or to use some other kind of authentication. The latter is probably the best option.
It does sound like there is a proxy between users and the server on the customer site (it doesn't need to be configured in the browser). It may be an internal or external proxy depending on your network configuration.
I would avoid using UserHostName for what is effectively authentication, as it is presented by the browser during the request and would be easy to spoof. An IP address would be much more effective, as it's difficult to spoof an IP address in a TCP/IP connection (and still maintain the connection). It's still weak authentication, but it may be sufficient in this scenario.
Even if you are using IP address, if there's a NAT proxy between client and server, you may have to accept that anything coming through that proxy is trusted (I'm assuming that external/untrusted clients don't come through that proxy).
If that isn't acceptable, you're back to other methods of authentication. Rather than requiring a logon or VPN connection, you might consider a permanent cookie or client certificates, given out only to internal clients, but you would need some way of delivering them. You could certainly deliver a permanent cookie based on a one-time logon. Cookies can be spoofed in a similar way to the UserHostName; however, you have a better opportunity to create a cookie value that is far less guessable than a domain name.
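The "less guessable cookie value" idea boils down to a signed token. A minimal, hypothetical sketch using an HMAC (the secret and client IDs are placeholders, and a real secret would be persisted server-side rather than regenerated per process):

```python
# Minimal sketch of a hard-to-guess "internal client" cookie value: an
# HMAC-signed token handed out once, e.g. after a one-time logon.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # placeholder; persist this securely

def make_internal_token(client_id: str) -> str:
    sig = hmac.new(SERVER_SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return f"{client_id}.{sig}"

def is_internal_token(token: str) -> bool:
    client_id, _, sig = token.partition(".")
    expected = hmac.new(SERVER_SECRET, client_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)  # constant-time comparison

tok = make_internal_token("workstation-17")
print(is_internal_token(tok))            # True  -> treat as internal
print(is_internal_token("guess.abc123")) # False -> treat as external
```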
