Set up an NTP server that can be queried globally

I want to set up a server (hosted on AWS, or a running system in some other part of the world) as an NTP server that can be queried globally.
Currently, I have modified the ntp.conf file on the node that is to become the server. But the problem is that when I use an NTP client to query time from this server, i.e. using sudo ntpdate, it says no suitable server found.
However, if I replicate the same setup on my local network (the server, as well as the querying node, are all on the local network), then it works perfectly fine.
I think the problem might lie in the ntp.conf file. Do I need to add some specific restrict lines for this to work publicly as well? And no, I cannot list the server on public NTP pages. Is it at all possible?

Solved. This was a port issue. I was testing it on AWS and had to manually open the related UDP ports.
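For reference, ntpd serves time on UDP port 123, so on AWS that port has to be allowed in the instance's security group. For example, with the AWS CLI (the group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 123 --cidr 0.0.0.0/0
As for the restrict question, here is a minimal sketch of an ntp.conf that serves time publicly while refusing configuration changes; the upstream pool hosts are examples, and note that noquery only blocks ntpq/ntpdc control queries, not time service:
# Upstream time sources (example pool hosts)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/ntp.drift
# Serve time to anyone, but refuse modification, traps, peering, and control queries
restrict default kod limited nomodify notrap nopeer noquery
# Unrestricted access from localhost
restrict 127.0.0.1
restrict ::1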

Deno Server doesn't go through the Internet

I built a simple web server with just the serve function from the std http module. It just redirects a request to a new URL:
import { serve } from "https://deno.land/std@0.120.0/http/server.ts";
serve(req => Response.redirect("https://google.com"));
It works when I access the server through a browser on my laptop, where the server is running, but when I try to access it from another machine on the same network using my laptop's IP address, there is simply no response at all. Is this one of Deno's security features, and if so, how can I deactivate it?
Update:
So I tried looking at the requests I make from my local machine in Wireshark, but when I run the server and send a request, it doesn't show up there. I disabled my Wi-Fi connection to see if that changed anything, and to my surprise I still got an answer from the server when I sent a request through the browser. I came to the conclusion that the Deno server somehow doesn't serve over the local network, which really confuses me. Is there a way to change that behaviour?
This is not related to Deno, but rather the firewall features of your device/router/network or an error in the method that you are using to connect from the other device (typo, network configuration, etc.).
Without additional configuration (by default), serve binds to 0.0.0.0:8000, so, as an example, if your laptop is assigned the local address 192.168.0.100 by your router, you could reach the server at http://192.168.0.100:8000.
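For example, the binding can be made explicit like this. This is just a sketch spelling out the defaults, assuming this std version's serve accepts a hostname/port options object:
import { serve } from "https://deno.land/std@0.120.0/http/server.ts";
// Listen on all interfaces on port 8000, so other machines on the LAN
// can reach the server at http://<laptop-ip>:8000.
serve(req => Response.redirect("https://google.com"), {
  hostname: "0.0.0.0",
  port: 8000,
});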
You might want to do research on SE/NetworkEngineering and elsewhere to determine the cause of the blocked connection.

Can't connect to local server

Currently we have a system in place where multiple servers back up to a server in house. There are a total of 11 different servers backing up to this one storage server. Without any change (any that we are aware of), one of the servers stopped being able to connect to the storage server. It's odd, too, because the one that can't connect is actually our DNS server. It can ping the storage server, and nslookup returns the appropriate value. However, when I try to browse to the server in Windows Explorer via Network, I get the following message:
"Check the spelling of the name. Otherwise, there might be a problem with your network. To try to identify and resolve network problems, click Diagnose." - Error code: 0x80004005 Unspecified error.
If at all possible, I would like a solution that does not require restarting the server (obviously that's a big request), but we run 24/7 and can't have the DNS server down for the next few weeks.
Thanks in advance!
I am completely guessing here, but let's start with this: does it work if you try to connect to the share using the IP address?
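For example, from a command prompt (the IP address and share name are placeholders for your storage server's actual values):
net use Z: \\192.168.1.50\Backups
If the share maps by IP but not by name, the issue is name resolution or authentication rather than the share itself.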
A few things to consider in the meantime: what OS is it?
-> Is network discovery off?
-> Have any firewalls been accidentally turned on?
-> We had a similar sort of problem when the server lost its trust relationship with AD (that one required a reboot, I am afraid).
Unfortunately this error can relate to a range of problems, including network devices, anti-virus, firewalls, shares, user accounts, etc.

R httpd issue - help pages fail to load using local IP

This may be a general topic, but I came across the issue while working on some code using the Rook package.
Recent versions of R include an HTTP server. You may have seen this while checking help topics in RGui: it opens the browser at an IP/port, etc.
For example, if I enter ?paste, this brings up
http://127.0.0.1:31234/library/.../paste.html
But if I use my IP, say 192.168.1.2, in place of 127.0.0.1, the page fails to load and I get an error:
While trying to retrieve the URL: http://192....
The following error was encountered:
We can not connect to the server you have requested
I have other apps that have httpd interfaces, and I can reach those apps' HTTP interfaces using both 127.0.0.1 and 192.168.1.2, etc. So, as far as system/network permissions are concerned, I do not think that is the issue here.
Rather, is there something specific to the R httpd process that disallows it from being accessed using any address other than 127.0.0.1?
The above was tested on a corporate network. When I tried the same process from my home network, it worked fine. However, since I already access the HTTP interfaces of many other locally installed apps from the corporate PC, I think there might be something specific to R's HTTP process that needs to be checked.
The workstation is running Windows XP.
Please let me know if you have any thoughts on the above,
Regards,
Raj.
Fixed it. The trick is to specify
library(Rook)  # Rhttpd is provided by the Rook package
s <- Rhttpd$new()
s$start(listen = "0.0.0.0", port = "20000")
when starting the Rook process. Specifying 0.0.0.0 makes it listen on all interfaces, and now I can access it using my external IP. Thanks a lot for your help nonetheless!
When opening a TCP port, the local IP address may be chosen. For incoming connections, typically INADDR_ANY (0.0.0.0) is supplied to bind(), which means the port is opened on every available interface.
However, it is quite possible to open a port on just one interface of your machine (in this case, 127.0.0.1), simply by supplying the IP address of that interface to bind(). It seems that R does just this.
My guess is that you may have a proxy in place on your corporate network. Your browser is probably configured to use that proxy to access the Internet. Most browsers will exclude an address which they know to be local (127.0.0.1 or localhost) from using the proxy, but might not exclude any other IP.
Try disabling the proxy in your browser (even "Auto-Detect", completely turn the proxy off) and see if you're able to connect.
I had the same problem.
If you are using RStudio, this might be a bug in RStudio. Check out this link:
https://support.rstudio.com/hc/communities/public/questions/202656007-Cryptic-error-on-starting-RStudio-daily-with-R-devel
Updating to the latest version of RStudio with the latest version of R fixes the problem.

Sending files from server to client with ASP.NET

I am developing a C# ASP.NET 4.0 application that will reside on a Windows Server 2003 machine. By accessing this application from a networked computer, any user can upload files to the Windows server. But also, once these files are stored on the server, he/she should be able to copy them from the Windows server to another networked computer.
I have found a way to upload files to a specified location on the server disk, but now I need to send the files on the server disk out to the client computers.
My question is: is there any way to send or copy files from the server to other client computers (not the one that is accessing the web service) without needing a program receiving those files on the client computers? FTP, WCF, cmd commands, sockets?
Any idea?
If you want users of your webapp to download files, I'd look into an "ashx generic handler". It will allow you to send files back down to clients over HTTP(S).
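A minimal sketch of such a handler; the file name, path, and content type are placeholders rather than code from the original project:
<%@ WebHandler Language="C#" Class="Download" %>
using System.Web;
public class Download : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Stream a stored file back to the browser as an attachment.
        string path = context.Server.MapPath("~/Uploads/report.pdf");
        context.Response.ContentType = "application/octet-stream";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");
        context.Response.TransmitFile(path);
    }
    public bool IsReusable
    {
        get { return false; }
    }
}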
If you are looking to have remote users tell your webserver to copy files to other servers ON THE SAME LAN AS THE SERVER, you would write that using normal System.IO operations.
Over a LAN, if you have the correct permissions and so on, you can write to a disk on a different machine using File.Copy -- there's nothing special about that.
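As a minimal sketch (the paths are hypothetical, and the account running the app pool is assumed to have write access to the target share):
using System.IO;
class CopyToShare
{
    static void Main()
    {
        // Source file on the web server's disk; the destination is a UNC
        // path to a share on another machine on the same LAN.
        string source = @"C:\Uploads\report.pdf";
        string destination = @"\\CLIENT-PC01\Incoming\report.pdf";
        File.Copy(source, destination, true);  // true = overwrite if it exists
    }
}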
If we're talking about remote machines over the internet, that's a different story. Something has to be listening whether it's FTP, WCF, DropBox, etc.
If the problem is that it can be painful to get something like WCF to work from a client due to problems like firewall issues under Windows 7, you could take a different route and have the client periodically ping the server looking for new content. To give the server a point of reference, the ping could contain the name or creation date of the most recent file received. The server could reply with a list of new files, and then the client could make several WCF calls, one by one, to pull the content down. This pattern keeps all the client traffic outbound.
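A sketch of that polling pattern, using plain HTTP via WebClient in place of the WCF calls for brevity; the URL and query parameter are hypothetical:
using System;
using System.Net;
class PollingClient
{
    static void Main()
    {
        // The server's point of reference: the creation date of the most
        // recent file this client has already received.
        string lastReceived = "2016-01-01T00:00:00Z";
        using (var client = new WebClient())
        {
            // Ask the server for the list of files newer than that date;
            // the client would then pull each one down, one by one,
            // keeping all traffic outbound.
            string newFiles = client.DownloadString(
                "https://example.com/files?newerThan=" + Uri.EscapeDataString(lastReceived));
            Console.WriteLine(newFiles);
        }
    }
}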
You can, if you run the program under an account that has access to that computer. However, having this sort of access on your network, which would let the outside world put an unfiltered file on your internal network, is just asking to be hacked.
Finally, I decided to install a FileZilla FTP server on each client computer, and my page is working very well. Another option is to create a workgroup on the Windows server and put every client computer in that workgroup, so that the Windows server has access to the computers in the same workgroup.
Here are some links that may help with creating the workgroups:
http://helpdeskgeek.com/networking/cannot-see-other-computers-on-network-in-my-network-places/
http://www.computing.net/answers/windows-2003/server-2003-workgroup-setup-/1004.html

Intermittent 'the remote name could not be resolved'?

I have an ASP.NET application that I use to frequently read the contents of a web page via an HttpWebRequest. There's no problem with the remote address, and my application normally works fine.
While I don't change anything, sometimes (about once a day) I get this error:
the remote name could not be resolved.
Why does a previously resolved DNS name sometimes fail to resolve?
The intermittent nature of this is going to be extremely difficult to resolve and it's going to take a configuration change instead of a code solution. (hint: read everything ;)
I would guess that the remote server's DNS is set to expire pretty often, probably daily or maybe even every 12 hours or so. This is the TTL (time to live) setting. Admins sometimes set this to an artificially low value if they need the ability to quickly move the site to a new server.
You can determine how often it expires by going to a command prompt and running:
nslookup
set debug
www.theserverdomain.com
At the top of this will be a section that says "AUTHORITY RECORDS:" with an item under it that says "ttl".
Now (and I'm making an educated guess here), what's probably happening is that when you query your DNS server to resolve that host name, your server has the value cached.
However, once it expires, your server has to contact another server upstream to get the IP address resolution; this is called DNS forwarding. If there are a lot of hops between yours and the remote server, or if one of the DNS servers between the sites is overloaded, then the lookup can time out and send back the message you are receiving.
If this is true, then the ONLY thing you can do is hardcode the DNS name and IP address combination in your web server's hosts file. This is a file named "hosts", usually at C:\Windows\System32\drivers\etc. There is an example of how to properly edit it within the file itself.
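For example, a hosts entry pairing the IP with the host name looks like this (the IP is a placeholder from the documentation range; use the address the remote host actually resolves to):
203.0.113.10    www.theserverdomain.com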
Once you create the host mapping in that file, your web server will no longer have to contact the DNS server to perform name resolution and it won't matter what the TTL is set to.
The only danger here is if they move the web site to a new IP address, at which point you could simply update your hosts file again...
The first thing I would check is whether DNS is no longer correctly configured or is malfunctioning.
Try (from a Windows command line)
nslookup MyDnsNameHere
and see if you get the IP you would expect.
