I have a requirement that I believe may be impossible and wanted to confirm this with experts in this community.
A client wants us to configure a DNS server that points all non-whitelisted domains to the IP address of a server on the internet. That server should forward all non-HTTP traffic to the IP address behind the real DNS record as accurately as possible, but intercept all port 80 traffic and hand it to a web proxy. In theory this could work if we had a large block of public IP addresses and could intelligently route based on the sender's IP to the proper destination, but the engineering effort required to keep the DNS response and the subsequent requests to that same domain in sync would be immense, and we would also be limited in terms of concurrency. This is probably similar to how OpenDNS does its DNS-plus-proxying, but they only seem to do it for google.com; this needs to work for an arbitrary set of domains (potentially all of them).
Is the above approach feasible? If not, are there other ways this problem can be approached short of requiring specialized gateway hardware?
Ideally the system will minimize bandwidth usage and latency for non-HTTP traffic without requiring anything besides DNS or firewall configuration. I realize we can forward all HTTP traffic at the firewall level, but the client wants to avoid pushing HTTP requests to CDNs or media-heavy sites through the proxy, and to minimize deployment effort across disparate network configurations.
OpenDNS works by blacklisting instead of whitelisting.
When a host is blacklisted, OpenDNS resolves the name to their own IP address, which in turn prevents the client from reaching the real IP.
In your case, it looks like you need a transparent proxy, where you can route all HTTP traffic to your proxy server:
See:
http://www.howtoforge.com/dansguardian-content-filtering-with-transparent-proxy-on-ubuntu-9.10-karmic
This might not be exactly what you are looking for, but take a look at my article "How To Setup A Transparent Content Filtering Proxy", in which I utilize OpenDNS's blacklisting capabilities.
You can do it using two pieces:
A DNS resolver configured with a wildcard record (*.) pointing to IP A.B.C.D
An NGINX reverse proxy listening on A.B.C.D that proxies each request to the domain present in the Host header.
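Roughly, the NGINX piece could look like the sketch below; treat it as a minimal illustration rather than a drop-in config (the listen port and the resolver address are placeholders):

    server {
        listen 80 default_server;

        # $host comes from the request's Host header. Because proxy_pass uses a
        # variable, NGINX needs an explicit resolver, and it must be one that
        # returns the real addresses, not your wildcard zone, or requests would
        # loop back to this box.
        resolver 8.8.8.8;

        location / {
            proxy_set_header Host $host;
            proxy_pass http://$host$request_uri;
        }
    }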
I am trying to get the big picture although my primary domain is not networking.
Some questions I've narrowed down, for which I'm not getting enough/proper answers online:
Is the IP that is resolved by the DNS server when I hit www.google.com the same as any of Google's routers' gateway IPs?
Do bigger companies like Amazon do port forwarding?
If point 2 is true, I suppose they must be port forwarding only on port 443 (HTTPS), which means that to use multiple static IPs across different data centers they would need that many routers. So, if they have N static IP addresses that resolve to a website, they must have N routers, right? Is this a fair assumption?
A gateway IP refers to a device on a network that sends local network traffic to other networks. It sits between you and the internet (or another network), like a watchman.
Question 1: Say google.com has multiple IP addresses. Yes, that is possible; it just needs multiple A records for the same name. This is called round-robin DNS, and clients will semi-randomly use one of the addresses.
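If you want to see that from code, here is a small illustrative C# sketch that asks the system resolver for all addresses of a name (www.google.com is only an example, and how many addresses come back depends on where and when you resolve it):

    using System;
    using System.Net;

    class RoundRobinLookup
    {
        static void Main()
        {
            // The resolver may return several records for one name;
            // clients typically pick one of them (round-robin style).
            foreach (IPAddress address in Dns.GetHostAddresses("www.google.com"))
            {
                Console.WriteLine(address);
            }
        }
    }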
Question 2: Yes, port forwarding happens more often than we think. All VPCs (virtual private clouds, such as those on AWS, GCP, Azure, etc.) use it, because they don't want to expose servers and internal resources directly to the internet.
Depending on the port number, a particular service is exposed to the requesting client. Say we want to make a website public: we explicitly expose ports 80 (HTTP) and 443 (HTTPS) so that web crawlers and users can reach it.
Port forwarding, sometimes called port mapping, allows computers or services in private networks to connect over the internet with other public or private computers or services.
For example, https://www.google.com:444/ won't work because they did not expose port 444 on their cloud router, but https://www.google.com:443/ will work because the server corresponding to google.com has explicitly left that port open.
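A small, hypothetical way to observe this from C# is to try opening a TCP connection to both ports: 443 should complete the handshake, while 444 will typically time out or be refused because nothing is exposed there (the host and the timeout are just illustrative):

    using System;
    using System.Net.Sockets;

    class PortProbe
    {
        static void Probe(string host, int port)
        {
            try
            {
                using var client = new TcpClient();
                // A filtered port usually just times out; a closed one is refused.
                bool connected = client.ConnectAsync(host, port).Wait(TimeSpan.FromSeconds(5));
                Console.WriteLine($"{host}:{port} -> {(connected ? "open" : "timed out")}");
            }
            catch (AggregateException)
            {
                Console.WriteLine($"{host}:{port} -> refused/unreachable");
            }
        }

        static void Main()
        {
            Probe("www.google.com", 443);   // exposed: the TCP handshake completes
            Probe("www.google.com", 444);   // not exposed: expect a timeout here
        }
    }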
How an IP address is resolved:
Step 1 - Send a Request to Resolve a Domain Name
When you type www.google.com into a browser, your computer needs the IP address in order to load the webpage. Computers do not know in advance where to find that information, so they search the DNS cache and any available external sources, proceeding from lower-level caches up to the root servers.
Step 2+3 - Try to resolve an IP Locally
Before going externally, your computer checks its local DNS cache to see if you have already requested the IP for that domain name. Every computer keeps a temporary cache of its most recent DNS requests and attempted connections. If the required record is present locally, this is called a "cache hit" and the query stops there.
However, a computer's local DNS cache does not always contain the data needed to resolve a domain name; this is called a "cache miss". In that case, the request goes on to your Internet Service Provider (ISP) and its DNS server.
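A rough way to see the cache at work from C# (illustrative only) is to resolve the same name twice and compare timings; the second lookup is usually answered from a nearby cache (the OS or the ISP resolver) and comes back faster, although the exact behaviour depends on the platform's resolver cache, and the host name is just an example:

    using System;
    using System.Diagnostics;
    using System.Net;

    class CacheHitDemo
    {
        static void Time(string label, string host)
        {
            var sw = Stopwatch.StartNew();
            Dns.GetHostAddresses(host);             // resolve via the OS resolver
            Console.WriteLine($"{label}: {sw.ElapsedMilliseconds} ms");
        }

        static void Main()
        {
            Time("cold lookup (likely cache miss)", "www.example.com");
            Time("warm lookup (likely cache hit)", "www.example.com");
        }
    }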
Step 4 - The ISP Asks Outside DNS Servers to Provide an IP Address (Only on a Cache Miss)
ISP DNS resolvers are configured to ask other DNS servers for correct IP address mapping until they can provide data back to the requester. These are iterative DNS queries.
When a DNS client sends such a request, the first responding server does not provide the needed IP address. Instead, it directs the request to another server that is lower in the DNS hierarchy, and that one to another until the IP address is fully resolved. There are a few stops in this process.
The hierarchy looks like this (just for reference):
Root domain nameservers. Root servers themselves do not map IP addresses to domain names. Instead, they hold the information about all top-level domain (TLD) nameservers and point to their location. TLD is the rightmost section of a domain name... Root servers are critical since they are the first stop for all DNS lookup requests.
TLD nameservers. These servers contain the data for second-level domains, such as ‘phoenixnap’ in phoenixnap.com. In the previous step, the root server pointed to the location of the TLD server; the TLD server then directs the request toward the server that contains the necessary data for the website we are trying to reach.
Authoritative nameserver. Authoritative servers are the final destination for DNS lookup requests. They provide the website’s IP address back to the recursive DNS servers. If the site has subdomains, the local DNS server will keep sending requests to the authoritative server until it finally resolves the IP address.
Step 5 - Receive the IP Address
Once the ISP's recursive DNS server obtains the IP address by sending multiple iterative DNS queries, it finally returns it to your computer. The record for this request is now cached locally, so the browser can fetch the IP from the cache and connect to the website's server.
All of this usually happens in less than a second. If you have just registered a new domain, it might take a few hours for the records to propagate through DNS caches globally, which is why newly registered websites sometimes do not show up right away.
About companies owning multiple IPs
Big companies have pools of IPs reserved, for example 123.234.xxx.xxx, which means the company has reserved a /16 block of 256*256 = 65,536 addresses. These are mapped onto a VPC (virtual private cloud) and made accessible via subnet masks and CIDR ranges, like your EC2 instances on AWS.
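For a back-of-the-envelope feel for those sizes, here is a small illustrative C# sketch that uses plain bit arithmetic on the CIDR prefix length (the prefixes themselves are just examples):

    using System;

    class CidrSize
    {
        static void Main()
        {
            // An IPv4 address has 32 bits; a /prefix leaves (32 - prefix) host bits.
            int prefix = 16;                          // e.g. 123.234.0.0/16
            Console.WriteLine($"/{prefix} block: {1L << (32 - prefix)} addresses");   // 65,536

            prefix = 24;                              // a smaller subnet carved out inside the VPC
            Console.WriteLine($"/{prefix} block: {1L << (32 - prefix)} addresses");   // 256
        }
    }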
Is the IP that is resolved by the DNS server when I hit www.google.com the same as any of Google's routers' gateway IPs?
For sure it should, but it is mostly a question about Google's own network management that only they can answer correctly. The thing is that we must understand how DNS queries work here.
Let's take a look at it:
Device A requests the IP address of device B through a DNS query.
To do this it uses port 53 (domain), asking whichever DNS server is configured at the time, which is usually the home router. The router then asks the ISP's DNS server, which answers from its cache or, if it has nothing cached, passes the query on to another server above it; this process continues until a reliable cached response is found or the authoritative server is reached, that is, the nameserver that manages the domain in question.
Only the authoritative server holds the reliable information about which IP the domain being reached actually points to.
I suppose that within Google's own network they use Google's DNS servers, 8.8.8.8 and 8.8.4.4, where DNS records are obtained and cached from many sites.
In general terms, Google's IP will change depending on where you are. I made a dig query to Google's authoritative servers; however, I received a location-based result, 142.250.73.238, intended to improve the route and the loading time of the site.
Do bigger companies like Amazon do port forwarding?
Yes, they do, to handle queries with load balancers or similar devices, and even for caching DNS requests.
If point 2 is true, I suppose they must be port forwarding only on port 443 (HTTPS), which means that to use multiple static IPs across different data centers they would need that many routers. So, if they have N static IP addresses that resolve to a website, they must have N routers, right? Is this a fair assumption?
This has multiple answers. By the way, they actually can do a secure DNS query.
if they have N static IP addresses that resolve to a website, then they must have N routers, right?
They don't have to, but if they want to they can.
"Is this a fair assumption?"
No. The IPs don't depend on a router; the router only routes to a computer/server, which can have multiple IPs. On the other hand, every device (computer, server, etc.) must have an IP, which can also be a WAN IP.
I have a basic question about DNS infrastructure.
I'm wondering how the IP addresses of upstream DNS servers are configured within DNS servers. For example, when my router needs to satisfy a DNS query on behalf of a machine on my LAN, it asks its upstream DNS server that it was given through DHCP. However, how does the upstream DNS server know how to reach the root DNS server or some authoritative DNS server if it doesn't have that information cached? Is the root DNS server's IP address hardcoded anywhere to achieve this? Are backbone DNS servers always configured with some DNS server upstream from it?
I recall setting up a Microsoft DNS server in which any requests that couldn't be satisfied by it would be forwarded. However, since an upstream DNS server wasn't configured, it forwarded those requests right to the root. This behavior makes sense; however, how did it know where to contact the root?
Your reasoning is correct.
Q: How does the upstream DNS server know how to reach the root DNS server or some authoritative DNS server if it doesn't have that information cached? Is the root DNS server's IP address hardcoded anywhere to achieve this?
A: A small-scale DNS server (for example, one serving clients in a single organization) will sometimes have manually configured forwarders (usually the ISP's nameservers) in order to benefit from the large cache of the ISP's nameservers and faster queries. In my experience, with faster and lower-latency internet links in recent years, this setup is used less often; instead, root hints are used.
Q: Is the root DNS server's IP address hardcoded anywhere to achieve this?
A: Yes. For Microsoft DNS server it is located in systemroot\System32\dns\cache.dns, for BIND it is usually in /etc/bind/db.root or /var/named/named.root. An updated copy (if needed) can be retrieved from https://www.internic.net/domain/db.cache
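For a rough idea of the format, the hints file is a plain zone-style list of the root server names and their addresses, along these lines (abridged):

    .                        3600000    NS      A.ROOT-SERVERS.NET.
    A.ROOT-SERVERS.NET.      3600000    A       198.41.0.4
    A.ROOT-SERVERS.NET.      3600000    AAAA    2001:503:ba3e::2:30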
Q: Are backbone DNS servers always configured with some DNS server upstream from it?
A: As far as I know, never.
A recursive server has the (or at least a) list of root servers provided out-of-band. This is often called "root hints" or something similar. Once it knows how to talk to the root servers, everything else follows from that. In practice, a recursor will quite quickly come to cache the name server addresses for the more common TLDs (like .COM and .ORG), so it doesn't always have to start at the root. But the root server addresses are manually provided to start things off.
I am trying to understand Tor and I'm confused about one thing. If one modifies a conventional web browser to use Tor, does this give access to .onion websites? It seems that the browser would still not be able to resolve the .onion domain suffix. If true, then what is the purpose of trying to add the Tor feature to a conventional web browser? If it's only for anonymity, then how does this differ from using a VPN?
Modifying a conventional browser just involves changing its proxy settings to use Tor as a SOCKS proxy. Tor Browser has a number of other security enhancements and anonymity features, but at the most basic level it communicates with the Tor controller and proxies everything over SOCKS.
When proxied through Tor's SOCKS port, .onion addresses are routed transparently over the Tor network to the hidden service destination (if it is available) and back. When accessing regular internet sites, Tor resolves DNS itself, bypassing your local DNS, and proxies your traffic through an exit relay.
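As a concrete sketch of the "point the client at Tor's SOCKS port" idea, here is a hypothetical C# example. It assumes a local tor daemon listening on its default SOCKS port 9050 (Tor Browser exposes 9150 instead) and a .NET runtime whose HttpClient supports SOCKS5 proxies (.NET 6 and later); note that reaching .onion names additionally requires the client to hand the hostname to the proxy instead of resolving it locally.

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class TorSocksExample
    {
        static async Task Main()
        {
            // Assumption: a local tor instance exposes a SOCKS5 proxy on 127.0.0.1:9050.
            var handler = new SocketsHttpHandler
            {
                Proxy = new WebProxy("socks5://127.0.0.1:9050"),
                UseProxy = true
            };

            using var http = new HttpClient(handler);

            // A regular site fetched through a Tor exit relay instead of directly;
            // check.torproject.org reports whether the request arrived via Tor.
            string body = await http.GetStringAsync("https://check.torproject.org/");
            Console.WriteLine(body.Length);
        }
    }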
It's different from a VPN in the way it routes your traffic to the destination. With a single VPN alone, your traffic is potentially one hop away from your destination. If the VPN is being monitored or is subverted, it could be possible to see your unencrypted traffic, or at least know what IP addresses you may be communicating with. VPN traffic might be more detectable, require the use of special software, or be more complicated to set up in general.
Since Tor traffic is encrypted with TLS and there are thousands of potential entry points, and roughly 1000 exit relays as of today with an additional random hop in between, your traffic is potentially more difficult to trace back to you without massive or very targeted surveillance.
The Tor Overview and Hidden Services pages can be helpful to read too.
Hope that helps.
I need to use my computer as a server, but my ISP blocks ports 80, 21, 23, etc. I can use other ports and some dynamic DNS service, but I don't want:
(HTTP) Users having to type http://mydynamicdnsaddress:#port#
(HTTP) Users being redirected from http://mydynamicdnsaddress to http://mydynamicdnsaddress:#port#
(HTTP) Some kind of service that gets the HTTP response and changes it before resending it to users. No-IP and GoDaddy do that; they change some parts of the HTML, e.g. the title.
(FTP) Users having to type ftp://mydynamicdnsaddress:#port#
I believe I need some kind of dynamic DNS service that points to a router that forwards TCP packets to another address while changing the port. Do you know of any online service like that?
Many "dynamic DNS companies use HTTP redirection to send the browser from port 80 to a different port. When you ask a dynamic DNS company to point your domain to a port other than 80, what they actually do is point the domain to their own web-server IP address (in DNS), and then on their web-server (running on port 80) they have a simple server side script which redirects the browser to the your web-server on whatever port you specified - optionally "cloaked" so the visitor won't notice." Can I specify a TCP/IP port number for my web-server in DNS? (Other than the standard port 80)
Here's a reference article for a redirection script: Redirect Script.
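As a rough sketch of what such a server-side redirect script could look like (hypothetical; it uses .NET's HttpListener, and 203.0.113.10:8080 is just a placeholder for your own server and unblocked port):

    using System;
    using System.Net;

    class PortRedirector
    {
        static void Main()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:80/");   // port 80 is the only port visitors ever type
            listener.Start();                        // (on Windows this may need elevation or a URL ACL)

            while (true)
            {
                HttpListenerContext ctx = listener.GetContext();

                // Send the browser to the real server on its non-standard port,
                // keeping the path and query string of the original request.
                string target = "http://203.0.113.10:8080" + ctx.Request.RawUrl;
                ctx.Response.Redirect(target);       // sets the Location header and a redirect status
                ctx.Response.Close();
            }
        }
    }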
What you are asking for is a tunnel or proxy. You'd set up a server which receives communications via port (e.g.) 80 and proxies that request to your home server on port-whatever. You'd probably need to get a dedicated host (or VM like linode) in order to do this. At that point, you might as well move your webserver to the unblocked host.
Also, to be clear, this is impossible with pure DNS. DNS, "Domain Name System", resolves names to IP addresses, NOT to IP address/port pairs.
Most dynamic DNS service providers, such as dynu.com, also provide free web redirects or port forwarding.
Please note that the cloak works by loading the page in a frame of sorts, and it does not work with all browsers; for example, Chrome does not support cloaking.
As far as I know, you cannot specify the port number in the DNS unless the web server which performs redirection is clever enough to read out the TXT record and use it for redirection. Any web server doing that would be really nice though.
Am I able to depend on a requestor's IP coming through on all web requests?
I have an asp.net application and I'd like to use the IP to identify unauthenticated visitors. I don't really care if the IP is unique as long as there is something there so that I don't get an empty value.
If not I guess I would have to handle the case where the value is empty.
Or is there a better identifier than IP?
You can get this from Request.ServerVariables["REMOTE_ADDR"].
It doesn't hurt to be defensive. If you're worried about some horrible error condition where this isn't set, check for that case and deal with it accordingly.
There could be many reasons for this value not to be useful. You may only get the address of the last hop, like a load balancer or SSL decoder on the local network. It might be an ISP proxy, or some company NAT firewall.
On that note, some proxies may provide the IP for which they're forwarding traffic in an additional HTTP header, accessible via
Request.ServerVariables["HTTP_X_FORWARDED_FOR"]. You might want to check this first, then fall back to Request.ServerVariables["REMOTE_ADDR"] or Request.UserHostAddress.
It's certainly not a bad idea to log these things for reference/auditing.
I believe this value is set by your web server, and there is really no way to fake it, as the response to their request wouldn't be able to get back to them if they set their IP to something else.
The only thing you should worry about is proxies: everyone behind a given proxy will appear to have the same IP.
You'll always get an IP address, unless your web server is listening on some sort of network that is not an IP network. But the IP address won't necessarily be unique per user.
Well, a web request is an HTTP connection, which is a TCP connection, and all TCP connections have two endpoints. So an address always exists. But that's about as much as you know about it: it's neither unique nor reliably accurate (with all the proxies and whatnot).
Yes, every request must have an IP address, but as stated above, some ISPs use proxies, NAT, or gateways, which may not give you the individual computer's address.
You can easily get this IP (in C#) with:
string IP = Context.Request.ServerVariables["REMOTE_ADDR"];
or in ASP/VBScript with
IP = request.servervariables("REMOTE_ADDR")
An IP address is not much use for identifying users. As mentioned already, corporate proxies and other private networks can appear as a single IP address.
How are you authenticating users? Typically you would have them log in and then store that state in their session in your app.
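For unauthenticated visitors, a minimal sketch of the same session idea, assuming classic ASP.NET session state (the key name is arbitrary):

    // Inside a page or handler where Session is available: tag the visitor
    // with a random ID instead of trying to identify them by IP address.
    string visitorId = Session["VisitorId"] as string;
    if (visitorId == null)
    {
        visitorId = Guid.NewGuid().ToString("N");   // opaque per-visitor identifier
        Session["VisitorId"] = visitorId;
    }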