DNS caching using DDoS protection services - networking

I'm currently using CloudFlare's services for my domain.
The interesting thing is that when I change my A record, the new website shows up after a few minutes.
I remember that before I used them, I had to wait 24 hours, and even 48 hours on some computers.
Is this because of them? If so, I guess it's because I change the A record while the domain itself stays pointed at the same name servers (theirs)?

Every DNS record has a "Time To Live" (a.k.a. TTL) which specifies how long DNS resolvers should remember an answer before they go get a fresh copy of it.
For example:
dig +noall +answer stackoverflow.com
stackoverflow.com. 144 IN A 104.16.37.249
stackoverflow.com. 144 IN A 104.16.35.249
stackoverflow.com. 144 IN A 104.16.33.249
stackoverflow.com. 144 IN A 104.16.36.249
stackoverflow.com. 144 IN A 104.16.34.249
In this case, my resolver will remember this answer to the question "stackoverflow.com" for 144 more seconds. CloudFlare is probably using a smaller TTL than wherever your DNS records used to come from.
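The caching behavior described above can be sketched in a few lines of Python. This is a toy cache, not a real resolver, and `lookup_upstream`/`fake_upstream` are hypothetical stand-ins for an actual recursive query:

```python
import time

# Toy sketch of a resolver cache that honors TTL: serve a cached answer
# while it is fresh, fetch a new copy once the TTL has elapsed.
class ResolverCache:
    def __init__(self):
        self._cache = {}  # name -> (answer, expires_at)

    def resolve(self, name, lookup_upstream, ttl):
        entry = self._cache.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]              # still fresh: serve from cache
        answer = lookup_upstream(name)   # expired or unseen: fetch upstream
        self._cache[name] = (answer, time.monotonic() + ttl)
        return answer

cache = ResolverCache()
calls = []

def fake_upstream(name):
    calls.append(name)
    return "104.16.37.249"

first = cache.resolve("stackoverflow.com", fake_upstream, ttl=144)
second = cache.resolve("stackoverflow.com", fake_upstream, ttl=144)
```

After the two calls, only one upstream lookup has happened; the second answer came from the cache, which is exactly why a change to the A record is invisible until the TTL runs out.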

Related

Is a website's downtime a DNS problem if you can ping and traceroute the domains successfully?

I've been with a web host for 10 years and their hosting is good. I have 12 clients hosted on one account, all with low traffic.
I upgraded to a better package as mine was being phased out. They promised no downtime. It's now almost 48 hours and all sites have been down since the migration.
In the 3 chats I've had with support they have agreed the downtime (12, 24, and 36 hours roughly each time) was excessive and they had a known issue. Each support person said the same thing: "It takes time for DNS to propagate, I have escalated your ticket, This is a known issue, I have pushed the DNS again, just wait 4 hours."
I said this didn't make sense to me because:
It is the same host
It is the same dedicated IP address
It uses the same DNS servers
If I ping any of the 12 domains they resolve to the right IP and give me good ms responses
If I traceroute any of the 12 domains they resolve to the right IP and show me reasonable hops
If I connect through HTTP I get a standard parking page with ads
I've told them this each time and asked whether it might instead be an internal switching problem on their side. They told me not to worry and that it would be resolved soon; it's a DNS issue.
What do I not understand about DNS that justifies their assertion?
Thanks in advance!
This does not look like a public DNS issue, but an internal one on their side, and the downtime is excessive.
My bet is that they have an internal problem.
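The poster's reasoning can be sketched as a small decision function. This is a toy example: the `classify_outage` helper and the IP addresses are hypothetical, and a real check would use ping, traceroute, and an HTTP client as the question describes:

```python
# Sketch of the diagnostic logic above: if the name resolves to the
# correct IP but HTTP serves a parking page, the fault is server-side,
# not DNS propagation.
def classify_outage(resolved_ip, expected_ip, http_body):
    if resolved_ip != expected_ip:
        return "DNS: name still resolves to an old or wrong address"
    if "parking" in http_body.lower():
        return "server: DNS is fine, the web server is serving the wrong site"
    return "ok"

verdict = classify_outage("203.0.113.10", "203.0.113.10",
                          "Domain Parking - ads")
```

Here DNS resolves correctly yet a parking page is returned, so `verdict` points at the host's internal configuration, matching the answer above.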

Amazon Cloudfront settings to reduce waiting time

UPDATE: This was my mistake, see my comment below. Now CloudFront works great with the new settings.
Sometimes the DNS lookup takes 600 ms, and then the wait adds another half second, which makes a 90 KB file take more than 1 second. Sometimes Pingdom's wait time even shows 1 second. On another test it will sometimes drop to 90 ms altogether.
I understand that the first request will take more time because CloudFront first needs to fetch the file from our server. I set the cache time to 86400 s, which means it should serve the file from cache for a whole 24 hours. But if I try Pingdom just 2 hours after the first test, it is very slow again.
Below are my results and settings. Am I missing something?
In most cases it's DNS that causes the delay, because Amazon itself is really scalable.
I had similar issues with my ISP and was able to resolve them quickly by changing DNS servers.
Try changing your DNS to Google Public DNS:
IPv4
8.8.8.8
8.8.4.4
IPv6
2001:4860:4860::8888
2001:4860:4860::8844
Google Public DNS documentation
Or use OpenDNS
208.67.220.220
208.67.222.222
OpenDNS documentation
CloudFront is not only scalable; it also aims to eliminate bottlenecks and speed up delivery.
AWS CloudFront is a service with low latency and fast transfer rates.
Here are some of the reasons requests may be slower when using CloudFront
(this covers most problems):
The edge location receiving the request may be handling a large number of requests.
The edge server closest to the client may be farther away than the origin web server
(geographic delay).
DNS lookups can be delayed.
It's unlikely, but make sure the x-cache response header reports a hit from CloudFront.
The object may be missing from the cache.
Detailed troubleshooting is difficult without knowing exactly what the test is and under what conditions it runs.
If logging is enabled, further troubleshooting is possible, so it is generally recommended to enable logging.
If you have any questions, please feel free to ask.
Thank you.

How to connect an Amazon EC2 instance to external domain registrar

Hello (I'm posting this here because I apparently can't post on Amazon's forums). I'm new to Amazon AWS and I'm trying to figure out how to run a WordPress site on EC2 (is this the best choice?), and I just want to point my registrar (hover.com) at it.
I'm used to setting up sites and pointing domain names on other hosts but AWS is different so I need some help.
For starters, I seem to be constantly directed at using Route 53. That's another AWS service, and it sounds optional. Is it really optional? I ran the cost estimator for Route 53 with 1 hosted zone and 1000 hits a month, and it estimated around $400. That can't be right; I must be missing something. You're telling me that for 1000 hits on a particular domain name I'm going to have to pay $400?
Where are the name servers?
To give a little history, I set up a WordPress site on an EC2 instance, created an Elastic IP, and attached it to that instance. I see a public IP address, great. The public IP address works fine, but now I need to hook up the domain name I have registered.
I put the public IP address in the DNS settings at Hover, but I need the name servers. The only way I was able to get name servers was through Route 53, which, again, is it really needed?
All in all, is this even the right route? Should I be using S3 instead of EC2? I'm on the free tier for a year, but I'd like to stick with AWS and I don't want to have to change to another host a year from now. I know Elastic IPs are another cost; is it an astronomical one? Do I even need an Elastic IP? I'd assume so. Lots of questions... any help would be much appreciated. Thanks.
Amazon Route 53 costs pennies to use in most cases, so your estimate is way, way off.
You need to either tell your registrar that Route 53 will handle your DNS, by getting the 4 name servers from Route 53 when you set up the domain and entering them on your domain at your registrar, or, if your registrar can do DNS for you, then you don't need Route 53 at all. Most registrars will do your DNS for free; even so, I prefer to pay the few dollars a month to have it all under one roof, so to speak.
In either case, once you know who is going to handle your DNS, you need to create at minimum a single 'A' record that points your domain name to your EC2 instance IP - there are lots of variations to this, but at its simplest, that is what needs to happen.
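For the Route 53 path, a single A record boils down to one change batch. The sketch below shows roughly the payload shape that boto3's Route 53 `change_resource_record_sets` call expects; the domain name, TTL, and IP address are placeholders:

```python
# Minimal Route 53 change batch for the single 'A' record described above.
# With boto3 this dict would be passed as the ChangeBatch argument to
# route53.change_resource_record_sets(...).
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",  # create the record, or update it if it exists
            "ResourceRecordSet": {
                "Name": "example.com.",   # placeholder: your domain
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "203.0.113.10"}  # placeholder: your Elastic IP
                ],
            },
        }
    ]
}
```

Whoever hosts the DNS, the essential content is the same: name, type A, TTL, and the EC2 instance's IP.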
Your math is wrong:
I did a cost estimator for Route 53 and I did 1 hosted zone, with 1000 hits a month and it estimated around $400.
Route53 pricing for zones: $0.50 per hosted zone / month for the first 25 hosted zones
So, 50 cents.
Route 53 pricing for traffic: $0.400 per million queries – first 1 Billion queries / month
So, 0.04 cents.
That's a total of 50.04 cents for a month. It'll take over 60 years to add up to $400 for Route53 DNS at that volume.
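The arithmetic above can be checked in a few lines (prices as quoted in this answer; Route 53's actual rates may have changed since):

```python
# Reproducing the Route 53 cost math from the answer.
ZONE_PRICE = 0.50                   # USD per hosted zone per month (first 25 zones)
QUERY_PRICE = 0.40 / 1_000_000      # USD per query (first 1 billion queries/month)

queries_per_month = 1000
monthly_cost = ZONE_PRICE + queries_per_month * QUERY_PRICE   # 0.5004 USD
months_to_400 = 400 / monthly_cost                            # ~799 months
```

At 1000 queries a month the bill is about 50 cents, and it would take roughly 66 years to reach $400.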
I'm having trouble tracking the rest of your question.
I created an Elastic IP on AWS and bound it to the EC2 instance. I used the public IP on my registrar's admin screen and left their existing name servers (Hover's) in the name server area.
EC2 wouldn't be my first choice if you don't expect high loads. For what you'll pay for EC2 (starting from around $10 a month) you can get space on shared hosting and even your own domain for a whole year. Moreover, you won't have to deal with DNS, because almost all hosting providers handle it for free.
Just to notify others: OP's math was wrong because the AWS calculator counts queries in millions per month. He likely entered "1000 million queries per month", bringing the estimate to $400.

Best TCP port number range for internal applications [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I work in a place where each of our internal applications runs on an individual Tomcat instance and uses a specific TCP port. What would be the best IANA port range to use for these apps in order to avoid port number collisions with any other process on the server?
Based on http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xml, these are the options as I currently see them:
System Ports (0-1023): I don't want to use any of these ports because the server may be running services on standard ports in this range.
User Ports (1024-49151): Given that the applications are internal I don't intend to request IANA to reserve a number for any of our applications. However, I'd like to reduce the likelihood of the same port being used by another process, e.g., Oracle Net Listener on 1521.
Dynamic and/or Private Ports (49152-65535): This range is ideal for custom port numbers. My only concern is if this were to happen:
a. I configure one of my applications to use port X
b. The application is down for a few minutes or hours (depending on the nature of the app), leaving the port unused for a little while,
c. The operating system allocates port number X to another process, for instance, when that process acts as a client requiring a TCP connection to another server. This succeeds given that it falls within the dynamic range and X is currently unused as far as the operating system is concerned, and
d. The app fails to start because port X is already in use
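A quick way to sanity-check a candidate port before deploying is simply to try binding it. This is a sketch (the `port_is_free` helper is hypothetical), and note that a successful bind now does not rule out the race in steps a-d above; it only catches the common case:

```python
import socket

# Try to bind a candidate port on this host; if the bind fails, something
# (a service, or a client using it as an ephemeral port) already holds it.
def port_is_free(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

Running this for each port in a shortlist from the unassigned ranges below would flag any local collisions before an application is configured to use one.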
I decided to download the assigned port numbers from IANA, filter out the used ports, and sort each "Unassigned" range in order of most ports available, descending. This did not work, since the CSV file has ranges marked as "Unassigned" that overlap other port number reservations. So I manually expanded the ranges of assigned port numbers, leaving me with a list of all assigned port numbers. I then sorted that list and generated my own list of unassigned ranges.
Since this stackoverflow.com page ranked very high in my search about the topic, I figured I'd post the largest ranges here for anyone else who is interested. These are for both TCP and UDP where the number of ports in the range is at least 500.
Total Start End
829 29170 29998
815 38866 39680
710 41798 42507
681 43442 44122
661 46337 46997
643 35358 36000
609 36866 37474
596 38204 38799
592 33657 34248
571 30261 30831
563 41231 41793
542 21011 21552
528 28590 29117
521 14415 14935
510 26490 26999
Source (via the CSV download button):
http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml
I can't see why you would care. Other than the "don't use ports below 1024" privilege rule, you should be able to use any port because your clients should be configurable to talk to any IP address and port!
If they're not, then they haven't been done very well. Go back and do them properly :-)
In other words, run the server at IP address X and port Y then configure clients with that information. Then, if you find you must run a different server on X that conflicts with your Y, just re-configure your server and clients to use a new port. This is true whether your clients are code, or people typing URLs into a browser.
I, like you, wouldn't try to get numbers assigned by IANA since that's supposed to be for services so common that many, many environments will use them (think SSH or FTP or TELNET).
Your network is your network and, if you want your servers on port 1234 (or even the TELNET or FTP ports for that matter), that's your business. Case in point, in our mainframe development area, port 23 is used for the 3270 terminal server which is a vastly different beast to telnet. If you want to telnet to the UNIX side of the mainframe, you use port 1023. That's sometimes annoying if you use telnet clients without specifying port 1023 since it hooks you up to a server that knows nothing of the telnet protocol - we have to break out of the telnet client and do it properly:
telnet big_honking_mainframe_box.com 1023
If you really can't make the client side configurable, pick one in the second range, like 48042, and just use it, declaring that any other software on those boxes (including any added in the future) has to keep out of your way.
Short answer: use an unassigned user port
Over achiever's answer - Select and deploy a resource discovery solution. Have the server select a private port dynamically. Have the clients use resource discovery.
The risk that a server will fail to start because the port it wants to listen on is unavailable is real; at least, it's happened to me. Another service or a client might get there first.
You can almost totally eliminate the risk of a collision with a client by avoiding the private ports, which are dynamically handed out to clients.
The risk of a collision with another service is minimal if you use a user port. With an unassigned port, the only risk is that another service happens to be configured to use (or dynamically uses) that port. But at least that's probably under your control.
The huge doc with all the port assignments, including User Ports, is here: http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.txt. Look for the token "Unassigned".

DNS A Record problem, how do I flush the server side DNS records? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
Around 24 hours ago I set a new IP address for the A record on my website and it appears to be working well by pointing visitors to that new IP address. But, sometimes it still points users to the old IP address which is now set up as a restricted access test environment. How can I go about ensuring that only the new DNS A record are sent to clients? How can I refresh/flush the DNS on the server?
EDIT: Can one lower the timeout BEFORE the IP change so that they flush the old one sooner? How?
Looking at the SOA record for the domain:
primary name server = ns21.ixwebhosting.com
responsible mail addr = admin.ixwebhosting.com
serial = 2011060963
refresh = 10800 (3 hours)
retry = 3600 (1 hour)
expire = 604800 (7 days)
default TTL = 86400 (1 day)
The default TTL says that anyone can cache the result for up to 1 day. Besides, the refresh value says that a slave server should fetch new data from the master every three hours, so you have to wait at least 24 + 3 = 27 hours before you can trust everyone to have the new information.
The best way to handle this kind of DNS changes is to prepare at least 24 hours (or whatever TTL you have) ahead by temporarily setting down the TTL (maybe to 600, which is 10 minutes). Then you can do the changes and they take effect within 10 minutes. When you see that everything works and you don't need the possibility for a quick rollback, you can reset the TTL to 86400 again.
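The schedule described above can be written out explicitly. The dates in this sketch are hypothetical; only the TTL values come from the zone:

```python
from datetime import datetime, timedelta

old_ttl = 86400   # 1 day: the zone's default TTL
temp_ttl = 600    # 10 minutes: the temporary low TTL

change_time = datetime(2011, 6, 9, 12, 0)  # hypothetical planned IP change

# Lower the TTL at least one old-TTL before the change, so that by change
# time every cached copy of the record carrying the old TTL has expired.
lower_ttl_by = change_time - timedelta(seconds=old_ttl)

# After the change, resolvers pick up the new address within the low TTL.
fully_propagated = change_time + timedelta(seconds=temp_ttl)
```

So the lead time is a full day before the change, but the actual switchover then completes within about ten minutes, after which the TTL can be raised back to 86400.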
When you change the DNS on the server, the change is immediate, but for others around the world it can take 24-48 hours to see the change. So mainly, you have to wait :D
If you are close to your server's location it could take 2 or 3 hours, but that depends on when your ISP and other ISPs flush their DNS servers' caches.
You can't.
DNS is a distributed system, and clients and intermediate caching resolvers will regard the cached values as correct until they time out.
An approach to make this faster is to reduce the TTL (time-to-live) on the record well in advance of the actual change, and then put it back up after you make the change. That way, once the old record with the long TTL times out, caching resolvers will refresh more frequently from the authoritative server. But if you've already changed it, it's too late for that and you can only wait.

Resources