I am interested in how DNS requests to political sites differ between countries.
I need to know how to send a DNS query to a remote resolver, let's say in China, and then compare the results with those from the US. The goal of the experiment is to get hands-on experience with the concept of DNS poisoning; my lectures feel too theoretical.
How can I compare DNS responses between China and the US so that I can investigate DNS poisoning?
This depends a bit on how the queries are being altered. If the server is giving different results based on your locality, then asking it directly will not be of any use. If your queries are being poisoned by a caching server in between, these methods might help.
If you have shell accounts in different parts of the world you can perform a simple test.
I'm using 'dig', which is available on most *nix systems. If you're running Windows, you might want to search for an alternative in a list of DNS tools.
To find the responsible DNS servers:
dig ns domain-in-question.com @the.dns.server.you.want.to.use
To get the IP address for the hostname:
dig a host.domain-in-question.com @the.dns.server.you.want.to.use
(You can skip the @... part to run the query against your current default server.)
I recommend trying both of these from different parts of the world to see if the server itself is giving different results or if the caching servers on the way there are being poisoned.
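If you'd rather script the comparison than run dig by hand, here's a minimal sketch in Python that builds the DNS query packet itself, so no third-party libraries are needed. The resolver IPs in the comment are just examples, and parsing the answer section is left out:

```python
import socket
import struct

def build_query(name, qtype=1, txid=0x1234):
    """Build a minimal DNS query packet (qtype 1 = A record)."""
    # Header: id, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then qtype and qclass (IN = 1)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def query(server, name):
    """Send the query over UDP to a specific resolver; return the raw reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(build_query(name), (server, 53))
        reply, _ = s.recvfrom(512)
        return reply

# Compare the raw answers two resolvers give for the same name, e.g.:
# query("8.8.8.8", "example.com") vs. query("114.114.114.114", "example.com")
```

If the two replies differ in their answer sections, something between you and the authoritative servers is giving different results.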
Also, searching for 'how to poison dns' gave me a number of practical results.
You can just use nslookup (its 'server' command lets you specify the DNS server to ask).
Try this web tool:
http://www.kloth.net/services/dig.php
As for learning about DNS poisoning, every computer has settings for which DNS server to trust, and so on. If one of them in a chain is compromised, every computer downstream will receive bad information.
If the remote servers are correctly configured, they won't let you interrogate them.
Any recursive resolver should be configured to only provide answers to the clients it's intended to serve.
Related
Assuming that the IP address a domain maps to is known, is there any advantage to using that IP address directly rather than the domain name? What determines the routing decision? Because DNS servers translate domain names to IP addresses, I am compelled to say that using an IP address is quicker, albeit unnoticeably so. However, because DNS servers process these requests at high volume and presumably cache the most popular sites, I am also compelled to say that a DNS server might know the fastest route to the server, which would make the domain slightly quicker. I understand that the difference may be at the nanosecond or microsecond scale.
Technically, yes. At least the first time. The first time your computer asks the internet "Where is this domain name located?" and some machine out there responds with its IP address.
However, when it gets this response back, it keeps a copy (called caching) so it doesn't have to ask again for a while (these things CAN change, but rarely do).
So, if your computer currently has the IP cached, then they are equal. If it doesn't, the IP address is faster, but only for the first request in a while, and only by a small amount.
As for the question of how the fastest route is picked: there are several routing protocols, most of which take into account factors such as load on a connection, bandwidth, latency, jitter, and distance. Long story short, the routers of the internet are constantly telling each other things like "such-and-such link is down" or "I just got a new address connected", and they run algorithms to figure out which way is best.
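The "figure out which way is best" step in link-state protocols like OSPF boils down to a shortest-path computation over advertised link costs. A toy sketch (the node names and costs below are made up for illustration):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over link costs, the same shortest-path idea
    link-state routing protocols run over their link databases."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            break
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Rebuild the path by walking back from dst to src
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical link costs (a cost could encode latency, load, etc.)
links = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
}
```

With these costs, the best A-to-D route goes via B and C (total cost 3) rather than over the "shorter-looking" direct links.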
N.B. A side note: an IP address won't always give you access to a certain website. Take, for instance, a site hosted on a hosting service. Such sites rarely have their own dedicated IP address; instead, requests for lots of different sites can come into one IP. In this case the domain name being requested is used to determine which site to return to the requester.
Both of the examples that you gave are correct. Entering an IP address directly will bypass the need for a DNS lookup, but the advantage you gain could be pointless if that IP address takes you to a server halfway around the world instead of one nearby. Ultimately, you wouldn't benefit enough to make it worth your while, especially since your computer will cache the response from the DNS lookup, making the difference effectively zero.
This question was answered pretty well by @PsychoData, but I think there are a few things worth noting and restating here:
When using IP, you bypass DNS which will save you the DNS resolution time on the first call until the TTL (Time To Live) expires. TTL is usually 1 hour. The difference is usually not worth noticing in most applications. If you're only making one call, you won't notice the milliseconds delay. If you make multiple calls, all calls after the first won't have the delay.
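The TTL-based caching described above can be illustrated with a toy cache (this is a simplified model, not how any particular resolver is implemented; the IP and TTL are placeholders):

```python
import time

class DnsCache:
    """Toy resolver cache: remembers answers until their TTL expires."""

    def __init__(self, resolve, now=time.monotonic):
        self._resolve = resolve   # real lookup function, called on cache miss
        self._now = now           # injectable clock, handy for testing
        self._entries = {}        # name -> (ip, expires_at)

    def lookup(self, name):
        entry = self._entries.get(name)
        if entry and entry[1] > self._now():
            return entry[0]       # cache hit: no network round-trip
        ip, ttl = self._resolve(name)          # miss: pay the lookup cost once
        self._entries[name] = (ip, self._now() + ttl)
        return ip
```

Only the first lookup (and the first after each TTL expiry) pays the resolution delay; every call in between is answered from memory.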
When entering a name vs. an IP you can be calling several different networking daemons, including NetBIOS (\\ServerX), DNS FQDN (\\ServerX.domain.com), and DNS shortname (\\ServerX, which MAY get automatically lengthened or guessed to the FQDN \\ServerX.domain.com by your OS or DNS server).
Microsoft has two primary Authentication Mechanisms in play with SMB shares: NTLMv2 (NTLMv1 and CHAP are insecure) and Kerberos. Depending on lots of configurations on your client, the server, and the authentication server (Active Directory if in play) and the way you called the name, you may get one or the other. Kerberos is generally faster than NTLMv2, at least for repeated calls, as it gets and keeps an authentication token and doesn't need to reauthenticate via password hash each time.
NetBIOS uses different ports than DNS which can play into network latency due to ACLs/routers/Firewalls.
NetBIOS can actually give you a different answer than DNS because it's a different resolution system. Generally the first PC to boot on a subnet will act as the NetBIOS server, and a new server can randomly declare itself to the network as the new NetBIOS master. Also, \\FileShareServer.domain.com wouldn't come back in a NetBIOS lookup, as it's not the machine name (ServerX) but a DNS alias.
There's probably even more that I'm missing here but I think you get the idea that a lot of factors can be in play here.
(please redirect my question to relevant stack site, if I am in wrong place, however here I feel guaranteed to get help)
When playing with the traceroute command, I want to be sure I am not connecting to a virtual host that may be dynamically mapped to a number of geographically dispersed servers (since it does not make much sense to track packets jumping between continents).
So, more precisely, with a concrete example: how do I prove, with the help of nslookup -querytype=NS google.com, that Google may redirect me to different servers across the world? I tried an IP locator for all the values returned by nslookup; it always returns the same location: Mountain View, California.
It seems I don't understand something really important in here. Thanks.
Update: I tried nslookup from an Australian server; all the IP addresses still point to the same location.
You cannot prove the location of any host. At the very best you can make an educated guess.
Geolocation databases are a big list of IP addresses and where the machines hosting those addresses are believed to be located. But they are just a guess and even the best of them are only 90% accurate to the state/regional level, meaning 10% of the addresses are someplace completely different. I use MaxMind because they have a fairly accurate free version and their commercial versions are not too expensive. They also have a free web-form where you can do 25 lookups per day.
You can use tools like traceroute to see some of the machines between you and your destination. Sometimes they have geographic locations in their DNS names. Sometimes their IP addresses will be listed in Geolocation databases. However, not all routers respond, many segments are virtualized and so their hops/routers are invisible, and firewalls may block the trace before it completes.
WHOIS databases list the address of the organization that owns an address or domain. DNS names themselves can be anything anyone wants, so even if they contain geolocation information, there is no reason to believe it is true. In particular, a router might have a DNS name indicating the destination it's connecting to, or even the administrative office responsible for it, and not the physical location of the device itself.
The IP address you are talking to can forward anything it wants to anywhere else it wants and there's absolutely no way you can detect that. So you can only follow the trail up to a point.
To make a good guess for the location of a host, look-up its IP address in a geolocation database, then run a traceroute and look-up the IP address of the last router before the destination. That will get you as close as you can.
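That last-hop heuristic can be sketched in a few lines. Everything here is hypothetical sample data (the hop IPs and the geolocation table are made up); in practice the hops would come from a real traceroute and the table from a database like MaxMind:

```python
def guess_location(hops, geodb):
    """Given traceroute hop IPs (destination last) and a geolocation table,
    return the closest-to-destination hop the table knows about."""
    for ip in reversed(hops[:-1]):   # walk backwards, skipping the destination
        if ip in geodb:
            return ip, geodb[ip]
    return None                      # no hop could be located

# Hypothetical traceroute output and a made-up geolocation table
hops = ["192.0.2.1", "198.51.100.7", "203.0.113.9", "203.0.113.50"]
geodb = {"198.51.100.7": "Sydney, AU", "203.0.113.9": "Los Angeles, US"}
```

Here the last router before the destination is in the table, so the destination is probably near Los Angeles; remember this is still only an educated guess.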
I have a web service where I do different things according to where one's IP is from. I have a simple test application where I open a WebClient and make it call the web service. I would like to be able to change the IP on the test application so that it seems to come from different countries (this will help me test goals in Google Analytics too). Is it possible to change/simulate that my application is located in another country (France, Germany, Belgium, England, US, etc.)?
It's possible to use a proxy or a VPN tunnel, but you'll need an endpoint in the country you want. There are plenty of lists of these around the web.
The other answers more accurately provide a solution, but you could always fake it. Utilise your own small private network and provide a facade to handle IP location for DEBUG vs. PRODUCTION mode. All of this of course wouldn't trick Google ;-) but it would help solidify your application.
Sorry for possibly being redundant.
The obvious solution is to "bounce" through a proxy server in each of the countries you wish to test for. I've had good luck in the past with sites such as proxy2free or publicproxyservers.
Other solutions would involve running a client from a host in one of these countries, by way of a VPN / RDP / RAdmin-type session, but that implies owning assets or knowing people in these countries who would trust you with using their hosts in this fashion.
Another solution involves a bit of a program change in your application. On detecting a particular trigger (it could be one of several different IPs from the country where you reside, or some added parameter on the URL such as &ctrytest=Spain), your application would substitute one of several foreign IPs (from the desired countries) at the level of the country-detection logic in your code, while otherwise using the real IP from the client request to serve the application.
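A minimal sketch of that override idea, assuming a query-string trigger (the parameter name `ctrytest` comes from the suggestion above; the sample IPs are documentation addresses, made up for illustration):

```python
# Hypothetical debug override: a ?ctrytest=... parameter substitutes a
# sample foreign IP before the country-detection logic runs.
SAMPLE_IPS = {             # made-up addresses, for illustration only
    "Spain":   "203.0.113.10",
    "Germany": "203.0.113.20",
    "US":      "203.0.113.30",
}

def effective_ip(client_ip, params, debug=True):
    """Return the IP address the geo-detection code should use."""
    country = params.get("ctrytest")
    if debug and country in SAMPLE_IPS:
        return SAMPLE_IPS[country]   # pretend the request came from there
    return client_ip                 # normal path: use the real client IP
```

The rest of the application keeps using the real client IP; only the country-detection step sees the substituted one, and only in debug mode.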
You probably realize it based on the previous answers, but just to be sure: IP addresses are not a certain indicator of the country a user is in. For example, I once worked in the US for a UK-based company, and we used IP addresses allocated to a UK-based ISP.
Ultrasurf may help: http://ultrasurf.en.softonic.com/
I don't think you can specify though, exactly where in the world your request is sent from.
I'm making a network game (1v1) where the in-game networking is p2p, so there's no need for a game server.
However, for players to be able to "find each other", without the need to coordinate in another medium and enter IP addresses (similar to the modem days of network games), I need to have a coordination/matching server.
I can't use regular web hosting because:
The clients will communicate in UDP.
Therefore I'll need to do UDP Hole Punching to be able to go through the NAT
That would require the server to talk in UDP and know the client's IP and port
AFAIK, with regular web hosting (PHP, etc.) I can only get the client's IP address and can only communicate over TCP (HTTP).
Options I am currently considering:
Use a hosting solution where my program can accept UDP connection. (any recommendations?)
UDPonNAT seems to do this but uses GTalk and requires each client to have a GTalk account for this (which probably makes it an unsuitable solution)
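For reference, the core of such a coordination server is small. Here's a toy rendezvous sketch: each peer sends any UDP datagram to register, and the server tells each one the other's public (ip, port). On real NATs the server sees the translated endpoints, and the peers would then punch the hole by sending datagrams to each other; none of that NAT logic is shown here:

```python
import socket

def rendezvous(server_sock, expected=2):
    """Toy matching server: wait for two peers to register over UDP,
    then send each one the other's (ip, port) endpoint as text."""
    peers = []
    while len(peers) < expected:
        _, addr = server_sock.recvfrom(64)   # any datagram registers the sender
        if addr not in peers:
            peers.append(addr)
    a, b = peers
    server_sock.sendto(f"{b[0]}:{b[1]}".encode(), a)  # tell a about b
    server_sock.sendto(f"{a[0]}:{a[1]}".encode(), b)  # tell b about a
```

The point is that the server only needs to see one datagram from each client to learn the externally visible address and port, which is exactly what a hole-punching handshake requires.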
Any ideas? Thanks :)
First, let me say that this is well out of my realm of expertise, but I found myself very interested, so I've been doing some searching and reading.
It seems that the most commonly prescribed solution for UDP NAT traversal is to use a STUN server. I did some quick searches to see if there are any companies that will just straight-up provide you with a STUN hosting solution, but if there even were any, they were buried in piles of ads for simple web hosting.
Fortunately, it seems there are several STUN servers that are already up and running and free for public use. There is a list of public STUN servers at voip-info.org.
In addition, there is plenty more information to be had if you explore SO questions tagged "nat".
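To give a feel for how simple the STUN side is, here's a sketch that builds a STUN Binding Request per RFC 5389 (fixed message type, magic cookie, random transaction ID). Parsing the XOR-MAPPED-ADDRESS attribute out of the response is omitted, and you'd substitute a real public STUN server's hostname:

```python
import os
import socket
import struct

def stun_binding_request():
    """Build a STUN Binding Request (RFC 5389): message type 0x0001,
    zero-length body, magic cookie, random 96-bit transaction ID."""
    return struct.pack(">HHI", 0x0001, 0, 0x2112A442) + os.urandom(12)

def ask_stun(server, port=3478):
    """Send the request to a STUN server; the reply's XOR-MAPPED-ADDRESS
    attribute carries your public IP and port (parsing not shown)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(stun_binding_request(), (server, port))
        return s.recvfrom(1024)[0]
```

Because the server reports the source address it saw, the client learns its own NAT-translated endpoint, which it can then hand to the matchmaking service.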
I don't see any other choice than to have a dedicated server running your code. The other solutions you propose are, shall we say, less than optimal.
If you start small, virtual hosting will be fine. Costs are pretty minimal.
Rather than a full-blown dedicated server, you could just get a cheap shared hosting service and have the application interface with a PHP page, which in turn interfaces with a MySQL database backend.
For example, Lunarpages has a $3/month starter package that includes 5gb of space and 50gb of bandwidth. For something this simple, that's all you should need.
Then you just have your application poll the web page for the list of games, and submit a POST request in order to add their own game to the list.
Of course, this method requires learning PHP and MySQL if you don't already know them. And if you do it right, you can have the PHP page enter a sort of infinite loop to keep the connection open and just feed updates to the client, rather than polling the page every few seconds and wasting a lot of bandwidth. That's way outside the scope of this answer though.
Oh, and if you're looking for something absolutely free, search for a free PHP host. Those exist too! Even with an ad-supported host, your app could just grab the page and ignore the ads when it parses the list of games. I know T35 used to be one of my favorites because their free plan doesn't track space or bandwidth (it limits the per-file size, to prevent their service from being used as a media share, but that shouldn't be a problem for PHP files). But of course, I think in the long run you'll be better off going with a paid host.
Edit: T35 also says "Free hosting allows 1 domain to be hosted, while paid offers unlimited domain hosting." So you can even just pay for a domain name and link it to them! I think in the short term, that's your best (cheapest) bet. Of course, this is all assuming you either know or are willing to learn PHP in order to make this happen. :)
There's no single technique that every net connection will support. STUN is probably good; UPnP can work for this too.
However, it's rumored that most firewalls can be enticed to pass almost anything through UDP port 53 (DNS). You might have to argue with the OS about your access to that port though.
Also, check out SIP, it's another protocol designed for this sort of thing. With the popularity of VOIP, there may be decent built-in support for this in more firewalls.
If you're really committed to UDP, you might also consider tunneling it over HTTP.
How about you break the problem into two parts: make a game-matcher client (distinct from the game itself) that communicates via HTTP with your cheap/shared web host. All gamers who want to use the game-matching function use this. The matcher client then launches the actual game with the correct parameters (IP, etc.) after obtaining the info from your server.
The game will then use the standard way to UDP-punch through NAT, as per your network code. The game doesn't actually need to know anything about the matcher client or matcher server: in the true sense of p2p (like torrents, once you can obtain your peers' IPs, you can even disconnect from the tracker).
That way, your problems become smaller.
An intermediate solution between hosting your own dedicated server and a strictly P2P networking environment is the gnutella model. In that model, there are superpeers that act like local servers, having known IP addresses and being connected to (and thus having knowledge of) more clients than a typical peer. This still requires you to run at least one superpeer yourself, but it gives you the option to let other people run their own superpeers.
I'm a Java coder and not very familiar with how networks work (other than basic UDP/TCP connections)
Say I have servers running on machines in the US, Asia, Latin America and Europe. When a user requests a service, I want their request to go to the server closest to them.
Is it possible for me to have one address: mycompany.com, and somehow get requests routed to the appropriate server? Apparently when someone goes to cnn.com, they receive the pictures, videos, etc. from a server close to them. Frankly, I don't see how that works.
By the way, my servers don't serve web pages, they serve other services such as stock market data....just in case that is relevant.
Since I'm a programmer, I'm interested to know how one would do it in software. Since this is little more than an idle curiosity, pointers to commercial products or services won't be very helpful in understanding this problem :)
One simple approach would be to look at the first byte (Class A) of the IP address coming into the UDP DNS request and then based off that you could deliver the right geo-located IP.
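A crude sketch of that idea in Python. To be clear, real geo-DNS services use full geolocation databases, not the first octet; the ranges and server IPs below are entirely made up for illustration:

```python
# Map the client's first octet to a region, then answer with that
# region's server IP. Ranges and addresses are hypothetical.
REGION_BY_FIRST_OCTET = {
    range(1, 64):    "us",
    range(64, 128):  "eu",
    range(128, 224): "asia",
}
SERVER_IP = {"us": "192.0.2.1", "eu": "192.0.2.2", "asia": "192.0.2.3"}

def geo_answer(client_ip, default="us"):
    """Pick which server IP to put in the DNS answer for this client."""
    first = int(client_ip.split(".")[0])
    for octets, region in REGION_BY_FIRST_OCTET.items():
        if first in octets:
            return SERVER_IP[region]
    return SERVER_IP[default]
```

The DNS server runs this (or a database-backed equivalent) per query, so different clients asking for the same name get different A records.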
Another approach would be a little more complicated. Instead of using the server that is geographically closest to the user, you could use the server that has the lowest latency for that user.
The lower latency will provide faster transfer speeds, and latency is easier to measure than geographic location is to determine.
For a much more detailed look, check out this article on CDNs (pay attention to the Technology Section):
Content Delivery Network - Wikipedia
These are the kinds of networks that the large sites use to distribute their content over the net (Akamai is a popular example). As you can see, things can get pretty complicated pretty quickly with CDNs having their own proprietary protocols, etc...
Update: I didn't see the disclaimer about commercial solutions at the end of the original post. I'll leave this up for those who may find it of interest.
--
Take a look at http://ultradns.com/. A managed DNS service like that may be just what you need to accomplish what you are looking for.
Amazon.com, Forbes.com, Oracle, all use them...
Quote From http://ultradns.com/solutions/traffic.html:
UltraDNS Traffic Management solution provides a set of tools allowing IT administrators to define load balancing configurations for content servers residing in one or more geographic locations. The Traffic Management Solution manages traffic directed to the servers by dynamically changing the responses to DNS requests. Load balancing is performed based on dynamic metrics obtained from the host servers on a continual monitoring basis. The UltraDNS Traffic Management solution is not a single application, but combines the capabilities of several existing UltraDNS systems to control traffic, manage site failures, and optimize web content systems.
One approach is, as Jeff mentioned, using the IP address: http://en.wikipedia.org/wiki/Geolocation_software
In my experience, this is precise to the nearest relatively large city (in the US at least). There are several open databases to aid in this (see the wiki link). You can then generate image tags, download links, and such based on this information.
As for locating the nearest server, I'm sure you can think of a few ways to do it. For instance, if the best return you can get is major city, you can lookup that city in a list of Latitude/Longitude and calculate the nearest server based on that.
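The "nearest server from latitude/longitude" step is just great-circle distance. A sketch using the haversine formula (the server names and coordinates are hypothetical):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))   # 6371 km = mean Earth radius

def nearest_server(client, servers):
    """servers maps name -> (lat, lon); return the closest server's name."""
    return min(servers, key=lambda name: haversine_km(client, servers[name]))

# Hypothetical server locations
servers = {"us": (37.4, -122.1), "eu": (50.1, 8.7), "asia": (1.35, 103.8)}
```

So once the geolocation lookup gives you an approximate (lat, lon) for the client's city, picking the closest server is a one-liner.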