Clearing/Editing the Internet Explorer DNS cache - ASP.NET

I am using IE 7 and IE 8.
My application is running in a DNS failover environment with a primary and a backup server. When the primary server goes down, failover switches to the secondary server after 2-3 minutes.
The problem is that the currently opened page in IE keeps sending requests to the primary server because of DNS caching (which stores the IP of the primary server for 30 minutes by default), so the page hangs.
This problem could be solved if we could clear or edit the DNS cache from C# ASP.NET code.
Thanks in advance for replying.

You cannot access a client machine's DNS cache from your ASP.NET server or from anything running in the browser; allowing that would be a huge security hole in either of those environments.
If you're looking for DNS failover, the better approach is to talk to your network administrator. Ask him/her to set the TTL for your DNS records to a value smaller than your failover time. This will increase the frequency with which client machines refresh their caches (for your site only) and shorten their downtime in the event of a failover.
The negligible drawback is that it can increase (ever so slightly) the wait time for the site, because clients have to perform DNS lookups more often.
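For example, in a BIND-style zone file, a record with a deliberately short TTL looks something like this (the 60-second value, the name and the address are placeholders; pick a TTL that fits your failover window):

; short TTL so clients re-resolve soon after a failover
www    60    IN    A    203.0.113.10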

There is an undocumented API called DnsFlushResolverCache in dnsapi.dll; see this link for an example of how to use it from C# (not ASP.NET).
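For reference, a minimal P/Invoke sketch of that call. Note that it has to run on the client machine itself (e.g. from a desktop app), not in ASP.NET code on the server, and since the API is undocumented its behaviour may change between Windows versions:

using System.Runtime.InteropServices;

static class DnsCacheHelper
{
    // Undocumented Win32 API: flushes the local Windows DNS resolver cache,
    // roughly the same effect as running "ipconfig /flushdns".
    [DllImport("dnsapi.dll", EntryPoint = "DnsFlushResolverCache", SetLastError = true)]
    static extern uint DnsFlushResolverCache();

    public static void Flush()
    {
        DnsFlushResolverCache();
    }
}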

I have a similar issue with a Java applet. The JVM has a bug where it doesn't honor the system property for disabling DNS caching. Any workarounds? Changing the java.security file (the networkaddress.cache.ttl setting) works, but I am looking for a better solution.

Related

DNS Lookup Time and Windows DNS Cache

For DNS resolution testing purposes, I want to disable all DNS caches on my Windows 7 machine.
Still, I keep seeing "DNS Lookup: 0 ms" for consecutive requests to the same domain.
I've tried the obvious "ipconfig /flushdns", and also stopping the service completely:
net stop dnscache
This command has the same effect:
net stop "DNS Client"
I also know browsers cache DNS lookups for a very short time, so I flush their caches, close and reopen the browser, or open the same domain in different browsers (Firefox, Chrome, Chrome incognito, IE) to bypass that DNS cache.
So the first time, the DNS lookup can take 25 ms (using 8.8.8.8), but the next lookup is cached somewhere in the system and takes 0 ms, and that only goes away if I wait around 3 to 5 minutes before repeating the request.
What can I do to force the system to resolve the DNS name every single time, even if only 5 seconds pass between identical requests?
Does it have anything to do with keep-alive or some kind of reuse of TCP connections by Windows? It shouldn't, because I reopen the browser, but I'm out of ideas.
Could you shed some light on this issue?
Thank you
It sounds like your goal is to simulate a configuration that doesn't exist in the wild (since all clients have DNS caches). It's not entirely clear why that's an interesting configuration to test, but it is possible to do.
As you mentioned, all browsers have DNS caches. Windows' DNS client itself has a cache. Any upstream proxy you might be using also has a DNS cache.
In this case, you are hitting two problems: first, Fiddler itself maintains a DNS cache; second, Fiddler pools keep-alive connections to the server, regardless of whether you close your browser client or not.
As described in the Fiddler book, you can control Fiddler's DNS cache using the preference fiddler.network.timeouts.dnscache. The default value is 150000 (measured in milliseconds, so that's 2.5 minutes). Set it to 0 to prevent DNS caching.
To prevent reuse of connections, you can either hit CTRL+X in the Fiddler session list, or call the FiddlerApplication.oProxy.PurgeServerPipePool method as desired.
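A rough sketch of driving both of those from code; the preference name and PurgeServerPipePool come from the notes above, while the assumption is that these statements run inside FiddlerScript or a FiddlerCore host with the Fiddler assembly referenced:

// Disable Fiddler's internal DNS cache (value is in milliseconds; 0 = do not cache).
FiddlerApplication.Prefs.SetInt32Pref("fiddler.network.timeouts.dnscache", 0);

// Drop any pooled keep-alive server connections so the next request opens a fresh socket,
// the same effect as pressing CTRL+X in the session list.
FiddlerApplication.oProxy.PurgeServerPipePool();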

Intermittent 'the remote name could not be resolved'?

I have an ASP.NET application that I use to frequently read the contents of a web page with an HttpWebRequest. There's no problem with the remote address, and my application normally works fine.
Without my changing anything, sometimes (about once a day) I get this error:
the remote name could not be resolved.
Why does a previously resolved DNS name sometimes fail to resolve?
The intermittent nature of this is going to make it extremely difficult to resolve, and it's going to take a configuration change rather than a code change. (Hint: read everything ;)
I would guess that the remote server's DNS records are set to expire fairly often, probably daily or maybe even every 12 hours or so. This is the TTL (time to live) setting. Admins sometimes set it artificially low if they need the ability to quickly move the site to a new server.
You can determine how often it expires by going to a command prompt and running:
nslookup
set debug
www.theserverdomain.com
At the top of the output will be a section that says "AUTHORITY RECORDS:" with an item under it that says "ttl".
Now (and I'm making an educated guess here), when you query your DNS server to resolve that host name, your server will have this value cached.
However, once it expires, your server has to contact another server upstream to get the IP address resolution; this is called DNS forwarding. If there are a lot of hops between yours and the remote server, or if one of the DNS servers between the sites is overloaded, the lookup can time out and send back the message you are receiving.
If this is true, then the ONLY thing you can do is hardcode the DNS-name-to-IP-address mapping in your web server's hosts file. This is a file named "hosts", usually located at C:\Windows\System32\drivers\etc. There is an example of how to edit it properly within the file itself.
Once you create the host mapping in that file, your web server no longer has to contact the DNS server to perform name resolution, and it won't matter what the TTL is set to.
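A hypothetical hosts entry for this scenario (the address is a placeholder; use whatever nslookup reports for the remote host):

203.0.113.25    www.theserverdomain.com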
The only danger here is that they might move the web site to a new IP address, at which point you would simply update your hosts file again...
The first thing I would check is whether DNS is misconfigured or malfunctioning.
Try (from a Windows command line)
nslookup MyDnsNameHere
and see if you get the IP you would expect.
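The same check can be done from code; a minimal sketch, where the host name is a placeholder for whatever your application resolves:

using System;
using System.Net;
using System.Net.Sockets;

class DnsCheck
{
    static void Main()
    {
        try
        {
            // Ask the system resolver for the addresses behind the name the application depends on.
            foreach (IPAddress address in Dns.GetHostAddresses("www.theserverdomain.com"))
            {
                Console.WriteLine(address);
            }
        }
        catch (SocketException ex)
        {
            // This is the same underlying failure that HttpWebRequest reports as
            // "the remote name could not be resolved".
            Console.WriteLine("Resolution failed: " + ex.Message);
        }
    }
}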

Redirecting http traffic to another server temporarily

Assume you have one box (a dedicated server) that's online 24/7 and several other boxes that are user machines with unused bandwidth, and assume you want to host several web pages. How can the dedicated server redirect HTTP traffic to the user machines? It is desirable that the address field in the web browser still displays the right address, not an IP; i.e. I don't want to redirect to another web page, I want to tell the web browser that it should request the same web page from a different server. I have been browsing through the 3xx codes, and I don't think they are made for anything like this.
It should work somewhat along these lines:
1. Dedicated server is online all the time.
2. User machine starts and tells the dedicated server that it's online.
(several other user machines can do likewise)
3. Web browser looks up domain name and finds out that it points to dedicated server.
4. Web browser requests page.
5. Dedicated server tells web browser to repeat the request to a user machine
Is it possible to use some kind of redirect, and preferably tell the browser to keep sending further requests to the user machine? The user machine can shut down at almost any point in time, but it is assumed that it will wait for ongoing transactions to finish; no closing the server program in the middle of a GET or anything like that.
What you want is called a proxy server or a load balancer, which would sit in front of your web server.
The web browser would always talk to the load balancer, and the load balancer would forward the request to one of several back-end servers. No redirect is needed on the client side, as the client always thinks it is just talking to the load balancer.
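As a very rough illustration of the idea, here is a minimal, GET-only reverse-proxy sketch; the listening port, back-end addresses and round-robin choice are all placeholder assumptions, and real load balancers do far more (error handling, header forwarding, health checks):

using System;
using System.IO;
using System.Net;

class MiniProxy
{
    // Hypothetical back-end machines on the same network.
    static readonly string[] Backends = { "http://192.168.1.10", "http://192.168.1.11" };
    static int next;

    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/");   // the address the clients actually talk to
        listener.Start();

        while (true)
        {
            HttpListenerContext ctx = listener.GetContext();

            // Naive round robin over the back-end servers.
            string backend = Backends[next++ % Backends.Length];

            var upstream = (HttpWebRequest)WebRequest.Create(backend + ctx.Request.RawUrl);
            using (var upstreamResponse = (HttpWebResponse)upstream.GetResponse())
            using (Stream body = upstreamResponse.GetResponseStream())
            {
                // Copy the back-end reply straight through to the browser,
                // which only ever sees the proxy's own address.
                ctx.Response.StatusCode = (int)upstreamResponse.StatusCode;
                ctx.Response.ContentType = upstreamResponse.ContentType;
                body.CopyTo(ctx.Response.OutputStream);
            }
            ctx.Response.Close();
        }
    }
}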
ETA:
Looking at your various comments and re-reading the question, I think I misunderstood what you wanted to do. I was thinking that all the machines serving content would be on the same network, but now I see that you are looking for something more like a P2P web server setup.
If that's the case, using DNS and HTTP 30x redirects is probably what you need. It would look something like this:
Your "master" server would serve as an entry point for the app, and would have a well known host name, e.g. "www.myapp.com".
Whenever a new "user" machine came online, it would register itself with the master server, and the master server would create or update a DNS entry for that user machine, e.g. "user123.myapp.com".
If a request came to the master server for a given page, e.g. "www.myapp.com/index.htm", it would do a 302 redirect to one of the user machines based on whatever DNS entry it had created for that machine - e.g. redirect them to "user123.myapp.com/index.htm".
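In ASP.NET, the master server's redirect could look roughly like this; the host name is the hypothetical one from the example above, and how you pick the target machine is up to you:

using System;
using System.Web.UI;

// Hypothetical entry page on www.myapp.com that bounces the browser to a user machine.
public partial class EntryPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Pick whichever user machine is currently registered as online (placeholder).
        string targetHost = "user123.myapp.com";

        // Response.Redirect sends an HTTP 302 with a Location header,
        // so the browser repeats the same request against the user machine.
        Response.Redirect("http://" + targetHost + Request.RawUrl, true);
    }
}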
Some problems I see with this approach:
First, once a user gets redirected to a user machine and that machine goes offline, it would seem like the app was dead. You could avoid this by having all the links on every page point specifically to "www.myapp.com" instead of using relative links, but then every single request has to be routed through the master server, which would be relatively inefficient.
You could potentially solve this by changing the DNS entry for a user machine when it goes offline to point back to the master server, but that wouldn't work without an extremely short TTL.
Another issue you'll have is tracking sessions. You probably wouldn't be able to use sessions very effectively with this setup without a shared session state server of some sort accessible by all the user machines. Although cookies should still work.
In networking, load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).
There is more interesting stuff about it here.
Apart from load balancing, you will need to set up a more or less similar environment on the user machines.
This sounds like 1 part proxy, 1 part load balancer, and about 100 parts disaster.
If I had to guess, I'd say you're trying to build some type of relatively anonymous torrent... But I may be wrong. If I'm right, HTTP is entirely the wrong protocol for something like this.
You could use DNS. Off the top of my head, you could set up a host name for each machine that is going to serve users:
www    IN    A    xxx.xxx.xxx.xxx    ; IP address of machine 1
www    IN    A    xxx.xxx.xxx.xxx    ; IP address of machine 2
www    IN    A    xxx.xxx.xxx.xxx    ; IP address of machine 3
Then as other machines come online, you can add them to the DNS entries:
www    IN    A    xxx.xxx.xxx.xxx    ; IP address of machine 4
The only problem is that you'll have to lower the time to live (TTL) for each record so clients re-resolve more often (I think the default is 86400 seconds, i.e. 1 day).
If a machine goes down, you'll have to remove its DNS entry, though I do think this is the least intensive way of adding capacity to any website. Jeff Atwood has more info here: Is round robin DNS good enough?

ASP.NET session lost but only for one particular user

I have an ASP.NET application that is running on two load-balanced servers. Everything is working fine except for one group of customers, all of whom come from the same company. Randomly, an unhandled NullReferenceException is thrown. It happens at random times in random places, as if the session is just totally gone. Since this is only happening for a specific group of users, I have to assume it has something to do with their environment. I have seen these users coming in with IE6, IE7, IE8 and FF, and the error occurs in all cases.
I am not 100% sure how to troubleshoot this. Does anyone have any ideas?
EDIT: Session is set to "InProc"
<sessionState mode="InProc" cookieless="false" timeout="20" />
InProc session state isn't shared between servers, so it sounds like this group of users is moving from one server to another while the others aren't. Maybe your load balancer is trying to achieve sticky sessions using something like the IP address, and this organisation is blocking that information.
I got in contact with the user that was having the problem. I asked him to open a browser, go to whatsmyip.org and tell me what it reported as his IP address, then to refresh the screen a few times. Well, wouldn't you know it, the IP address changed; it kept switching between two different addresses. These were not the IP address of his machine but of two different proxies, and each request could apparently come from either one.
Our load balancer (something called Zeus; I am not a network guy) was set to establish session affinity (a.k.a. sticky connections) using IP addresses. We changed the settings so that the load balancer drops a cookie and uses that to maintain affinity, and everything works correctly now.
If you're using SQL to store session state, check that all the servers in the farm are looking at the same SQL database - I've been caught out by this one before and it took quite a while to work it out!
Edit:
Actually you might need to set it to StateServer as you're running in a web farm.
See this about Session-State Modes from MSDN.
If your load balancing is based on directing every hit to the least-busy server, then InProc isn't going to work; you would need to use StateServer or SQLServer mode.
Imagine the first hit from a client is directed to server A; that starts a new session on server A. The second hit from the same client could go to server B, supplying the session cookie from server A, which server B doesn't recognise.
If you have 'sticky' (or client-affinity) load balancing, where the first hit is allocated to the least-busy server but subsequent hits from the same session are directed to the same server, then InProc should still work.
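For reference, the out-of-process modes are configured in web.config along these lines; the server names and connection strings below are placeholders:

<sessionState mode="StateServer" stateConnectionString="tcpip=stateserver.example.local:42424" cookieless="false" timeout="20" />

or, to keep session data in a shared database:

<sessionState mode="SQLServer" sqlConnectionString="Data Source=sqlserver.example.local;Integrated Security=SSPI" cookieless="false" timeout="20" />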

Why can I see my website even though it's down?

I'm wondering if anyone knows how this happens. My website is down, but every computer on my internet connection/router can see it. I've cleared my cache etc., but another computer in the house shouldn't be seeing a site that's offline. How weird is that?
It's hosted remotely, not on my network or anything.
The first question to ask yourself is, how certain are you that it's down? If computer A can access it and computer B cannot, either one could be "right":
The site could be down, and computer A could be looking at a cached version from the ISP.
The site could be up, but computer B could be having general internet connectivity problems, or problems accessing this site in particular (bad DNS cache, etc.)
One way to tell is to add some new content to the site (via FTP or an in-place content management system like wordpress, for example) and see if the computer that can access it (computer A) can see the changes. If so, then you're looking at a "live" site, where the pages are being served directly from the server. (If the server is active and runs web software like PHP or ASP, then that would be another way to "prove" that the site is up and running).
Do you know the IP address of your web server?
Do you have direct access to the Internet on port 80?
You can tell whether your server is up or down by doing the following:
telnet 255.255.255.255 80
where 255.255.255.255 is your web server's IP address. On Windows the screen will go blank if the server answers. Then type
GET / HTTP/1.0
and hit Enter twice. You should see the content of your default page. If you're running as a virtual host, you'll probably need to use HTTP/1.1 and the Host header:
GET / HTTP/1.1
Host: www.yourservername.com
There is one return after HTTP/1.1 and two returns after the Host line. If you get content (the correct content) back from your web server, it is definitely not down. If the connection fails, then your web server really is down, and the content your computers are seeing could be coming from any of the following:
local page cache
local proxy server
ISP proxy server
a local ARP poisoning attack redirecting you to an attacker's local web server that mirrors your site
DNS poisoning directing your browsers to someone else's web server that mirrors your site
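The same telnet check can be scripted; here is a minimal C# sketch, where the IP address and host name are placeholders for your own server:

using System;
using System.IO;
using System.Net.Sockets;

class SiteCheck
{
    static void Main()
    {
        // Connect straight to the web server's IP, bypassing any local DNS cache.
        using (var client = new TcpClient("203.0.113.10", 80))
        using (var stream = client.GetStream())
        using (var writer = new StreamWriter(stream))
        using (var reader = new StreamReader(stream))
        {
            // The same request the telnet session sends by hand.
            writer.Write("GET / HTTP/1.1\r\nHost: www.yourservername.com\r\nConnection: close\r\n\r\n");
            writer.Flush();

            // If this prints your page, the server is up; if the connection fails
            // or times out, the server (or the route to it) really is down.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}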
If your site is up, but geographically separated folks can't see your site, it is most likely a DNS issue or an ISP level routing issue.
A good tool to check for DNS issues is OpenDNS's CacheCheck. As for the routing issue, the best bet is to call your web hosting company and see if they've had any other complaints from their other customers, or if they are currently working on a routing issue.
Your internet provider's cache, maybe.
What DNS servers are your friends using? Same as yours?
Your ISP is probably caching the content.
I know it's down because I asked my friends in other locations to look at it. Then I ran a test using this site I found:
http://www.websitepulse.com/help/tools.php
I'm switching hosts, and we're dealing with my main domain name; that's the other reason I expected this interruption. I just want to know when it's finally switched.
Is the ISP cache a bad thing?
