prove network is truly unavailable

I have an old-school FoxPro web app that I am trying to help limp along while I rewrite the system. Every day, multiple times, I get the following error message: The specified network name is no longer available.
Does anyone have any suggestions on how to troubleshoot this, and perhaps prove to my IT guys that there really is a network issue? I have theories, but I have no idea how to prove anything; it always comes back to "FoxPro sucks, rewrite it now."
I'll take any help, tools, and will answer any questions that may clarify this for you.
thanks

We have a very large multi-user VFP application on hundreds of sites. Occasionally you get this sort of problem. It is almost always down to environmental issues.
Had one just recently where a client had two machines continually crashing out of the VFP application. Network IT guys swearing up and down that it's not their problem. But what's this in the System Log of both machines? Why, it's the Broadcom NIC reporting a network link loss detected at the same times the application crashed.
Check if the client and server NICs in your situation can report this.
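If clicking through Event Viewer gets tedious, you can dump and filter the System log from the command line instead. A minimal sketch in Python, assuming a Windows machine (wevtutil ships with the OS); the "link" filter string is just a guess to adapt, since the exact wording of link-loss events varies by NIC driver:

```python
import subprocess

# Dump the 200 most recent System log events as text (newest first)
# and print any that mention the network link. Requires Windows;
# link-loss wording varies by NIC vendor, so adjust the filter.
result = subprocess.run(
    ["wevtutil", "qe", "System", "/c:200", "/rd:true", "/f:text"],
    capture_output=True, text=True,
)

for block in result.stdout.split("Event["):
    if "link" in block.lower():
        print("Event[" + block[:400].strip())
        print("-" * 60)
```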

You could consider writing a small program that pings the network resource periodically. You might just look for a file; if the network is failing and the program cannot find the file, email the folks in charge of the network and yourself. This would be an independent app, and it is best if it's not written in FoxPro, so you can independently prove it is not the application or the language/tool it was written in.
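A minimal sketch of that watchdog idea in Python; the UNC path, SMTP host, and addresses are placeholders to replace with your own, and logging to a local file as well preserves the evidence trail even when the outage blocks the email:

```python
import os
import smtplib
import time
from email.message import EmailMessage

SHARE_FILE = r"\\fileserver\foxpro\heartbeat.txt"  # placeholder UNC path
SMTP_HOST = "mail.example.com"                     # placeholder SMTP server
ALERT_TO = "netadmin@example.com, me@example.com"  # placeholder recipients

def alert(text):
    # Email the folks in charge of the network, and yourself.
    msg = EmailMessage()
    msg["Subject"] = "Network share check FAILED"
    msg["From"] = "watchdog@example.com"
    msg["To"] = ALERT_TO
    msg.set_content(text)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

while True:
    if not os.path.exists(SHARE_FILE):
        line = f"{time.ctime()}: could not see {SHARE_FILE}"
        with open("watchdog.log", "a") as log:  # local evidence trail
            log.write(line + "\n")
        alert(line)
    time.sleep(60)  # check once a minute
```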
I have seen this when networks have bad wiring, a bad port on the switch/hub, a failing NIC in the mix, and sometimes when the network is just flooded with requests from workstations.
You also did not mention whether this is a wireless connection. I am hoping not, but I have seen wireless (especially slower wireless) hubs fail under network overload, with slow and unreliable performance, especially compared to a wired network.
Rick Schummer

In addition to the comments about the IP address: is the network controller set to be energy efficient, and thus turning itself off when not actively in use?
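On recent Windows versions you can check that setting without clicking through Device Manager; a sketch that shells out to PowerShell's Get-NetAdapterPowerManagement cmdlet (assumes Windows 8/Server 2012 or later, where the NetAdapter module is available):

```python
import subprocess

# Ask Windows whether the NICs are allowed to power down to save
# energy. Requires the NetAdapter PowerShell module (Windows 8+).
subprocess.run([
    "powershell", "-NoProfile", "-Command",
    "Get-NetAdapterPowerManagement | "
    "Format-List Name, AllowComputerToTurnOffDevice",
])
```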

Related

for multiplayer, is a server needed to connect players to each other?

I am trying to understand the mechanism of how the internet works... I understand DNS. But this raises the question of whether the same is true for multiplayer games as well.
There are two types of multiplayer games that I have seen: Local/LAN and Online.
In online games, either you connect to a server or one of the people you are playing with becomes the host.
So my question is basically: can a lobby be constructed without needing a server to refer each player to the pool of players?
If not, then isn't that primitive? Shouldn't there be a way to create unique virtual strings, effectively infinite in number, that each client could send requests to and tell the internet, "hey, direct all data headed to this address to me too"?
The two most common architectures for multiplayer games are server-based and peer-to-peer. Any code you could run on a server could run on a peer-to-peer basis as well: simply run the server code on the peer acting as the "server".
So in your lobby example, perhaps you could have the client code seek out peer-to-peer-based servers instead of normal servers. The seeking is the same as in the traditional client-server architecture... just different machines doing the work.
The point is, the client code and server code YOU WRITE have plenty of flexibility to do what you are asking. It just isn't necessarily the easiest way to do these things, so you may not find that people do what you are describing very often. Big games like WoW have plenty of big server machines.
I mean, versus a normal server architecture, you could go for something like a peer-hosted lobby where one player's machine takes on the server role.
It boils down to what you mean by "server". If you are asking whether it is necessary for some machine to run different code than all the others in order to produce a multiplayer game, my answer is no. But then each peer will likely be running some code that has traditional "server" functionality.
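To make the "peer runs the server code" point concrete, here is a minimal sketch in Python; the port and the roster-over-TCP exchange are invented for illustration, not from any particular game:

```python
import socket
import sys

PORT = 5000  # arbitrary example port

def host_lobby():
    # This peer runs the traditional "server" code: accept joiners
    # and send each one the current lobby roster.
    players = []
    with socket.create_server(("", PORT)) as srv:
        while True:
            conn, addr = srv.accept()
            players.append(addr)
            conn.sendall(repr(players).encode())
            conn.close()

def join_lobby(host_ip):
    # The same program in the client role: connect to whichever
    # peer happens to be hosting.
    with socket.create_connection((host_ip, PORT)) as conn:
        print("Lobby roster:", conn.recv(4096).decode())

if __name__ == "__main__":
    if sys.argv[1] == "host":
        host_lobby()
    else:
        join_lobby(sys.argv[1])
```

One player runs `python lobby.py host`; everyone else runs `python lobby.py <host's IP>`. The only "server" is a peer that agreed to take the role.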

How to test network monitoring?

I'm currently building a network monitoring system that will notify me of any interface errors or network issues. After building it, we would like to be able to test that it works before deploying it on our network, so I need a way of simulating network interface errors on a switch or other networking device.
I was thinking about cutting Ethernet cables or terminating them incorrectly, but ideally I need something that can create lots of different types of interface errors.
Any help would be much appreciated.
Sean
You could download Nagios, which is a powerful, enterprise-class host, service, application, and network monitoring program, designed to be fast, flexible, and rock-solid stable. Nagios runs on *NIX hosts and can monitor Windows, Linux/Unix/BSD, NetWare, and network devices.
You can download other network monitoring systems from SourceForge; they have many different network tools written in different languages, and most of them are open source. You can take notes on their design and maybe apply ideas to the application you are building.
If you want to test your application, the best thing to do is to test it in a real environment, though I believe there might be one or two virtual labs that could work.
But ideally I would test on real interfaces.
One of the ways to simulate network failures would be to dynamically change the firewall settings. You can make packets drop, hosts disappear, etc. This doesn't require any physical damage to anything :)
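For instance, on a Linux box you can make a monitored host "disappear" by inserting and later removing a DROP rule; a sketch assuming iptables and root privileges (the target address is an example):

```python
import subprocess
import time

TARGET = "192.0.2.10"  # example address of the host to "lose"

def set_blackhole(enabled):
    # Insert (-I) or delete (-D) iptables rules that silently drop
    # all traffic to and from the target. Linux + root required.
    action = "-I" if enabled else "-D"
    for chain, flag in (("INPUT", "-s"), ("OUTPUT", "-d")):
        subprocess.run(
            ["iptables", action, chain, flag, TARGET, "-j", "DROP"],
            check=True)

set_blackhole(True)   # simulate the outage
time.sleep(120)       # give the monitor two minutes to notice
set_blackhole(False)  # restore connectivity
```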

How to avoid crashing my user's router?

It appears that cheap consumer routers are fairly easy to crash: hanging around in various backup/sync software forums, I see this mentioned from time to time. Developers seem to be putting a fair amount of effort into making sure they don't crash the routers.
What are the "do"s and "don't"s for my network-heavy application to ensure that it doesn't cause issues with badly designed routers? Especially one that intends to connect to a number of peers?
IMO trying to work around bad hardware is the road to nowhere, because every router fails in its own remarkable way :).
What you can do in a network-heavy application is assume that the network is not a stable medium (routers can crash, etc.) and design the application's network operations accordingly.
For instance, provide reconnect logic, connection timeouts, and some sort of state caching so users can keep working with the app even if network connectivity is gone.
Concerning faulty routers: they usually crash because of a great number of simultaneous connections (e.g. downloading via BitTorrent or another P2P protocol). So maintaining a minimal number of connections can help.
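A minimal sketch of the reconnect-logic-plus-timeouts advice in Python; the retry count and backoff cap are arbitrary defaults:

```python
import socket
import time

def connect_with_retry(host, port, attempts=5, timeout=5.0):
    # Treat the network as unstable: a bounded timeout per attempt,
    # exponential backoff between attempts, and a clean failure at
    # the end instead of hanging forever.
    delay = 1.0
    for attempt in range(attempts):
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            if attempt == attempts - 1:
                raise  # let the caller fall back to cached state
            time.sleep(delay)
            delay = min(delay * 2, 30.0)  # cap the backoff

# Usage (hypothetical peer): conn = connect_with_retry("peer.example.com", 6881)
```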

Distributed network monitoring - how to tell if the monitored resource has fallen over, or the monitor itself is at fault

I'm building a system for monitoring several large web sites (resources), using distributed web services controlled by a central controller.
I'm coming to a specific part of the design - the actual reporting of resources that are thought to have fallen over.
My problem is that there is always the chance that the monitor itself is at fault, or has lost its network connection to a resource while the resource is actually fine. I don't want to report issues if they are not really there.
My plan at the moment is to have the monitor that encounters a problem request that all other monitors check the resource, and then decide whether the resource has really fallen over based on the collective results.
I'm sure there's someone out there with more experience of this type of programming than myself.
Is there a common solution to this type of problem? Is my solution a decent way of looking at this?
Your solution is one of the only pragmatic ones.
There is nothing new under the sun. The IETF Routing Information Protocol wasn't the first attempt at addressing this problem, but it is well documented and works.
Note well, that there is no optimal (or perfect) solution to the class of problems which you are facing, the best you can do with in-band monitoring is make good guesses about where the fault is. In systems that need a very high degree of accuracy of fault information (e.g. the public switched telephone network) a parallel out-of-band monitoring network is established which itself must necessarily be monitored by humans.
Quis custodiet ipsos custodes? (Who will watch the watchers?) -- Juvenal, "Satires"
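For what it's worth, the collective decision the questioner sketches can be as simple as a majority vote. A sketch where `probe` stands in for whatever cross-monitor "please check this resource" call you build (nothing here is a real API):

```python
def resource_is_down(resource, peer_monitors, probe):
    # We saw a failure ourselves, so we start with one "down" vote.
    # Ask every other monitor to check too; unreachable monitors get
    # no vote. Report "down" only if a majority of voters agree.
    votes_down, votes_up = 1, 0
    for peer in peer_monitors:
        try:
            if probe(peer, resource):  # True means this peer sees it down
                votes_down += 1
            else:
                votes_up += 1
        except ConnectionError:
            continue  # can't reach this monitor; maybe *we* are cut off
    return votes_down > votes_up
```

A useful corollary: a monitor that cannot reach the resource and also cannot reach most of its peers should suspect its own connectivity rather than raise the alarm.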

is it possible to limit network traffic from my PC to my PC?

Hi guys, I'm debugging a client/server program, and to view the performance of the application on a slow internet connection I have tried many different approaches. The best setup, however, would be the server and the client on the same PC: my debugging environments for both the server side and the client are set up on one PC.
So I'm wondering, is there any way to limit the speed? I'm using TCP, but I don't have much in-depth knowledge of it.
Thank you
There are two important factors regarding a "slow" internet connection that you need to test, since they have different implications for your application: bandwidth and latency.
If you provide some more details about what OS you are running your tests on, it will be easier to recommend a way to limit the network performance.
On a related side note, it's generally a bad idea to performance-test any kind of networking using the loopback device on your machine, since many aspects of it will perform very differently from the regular network device.
You mention in the comments that this needs to be done on Windows, while the network emulators I know of (e.g. netem, TCN, and other variants) all require Linux. So one thing you could do is create a virtual machine (VirtualBox is fine; I did similar things with it), install Linux on it, configure two network interfaces, emulate the slow/long/lossy/jittery network between them, and route the test traffic through it from Windows.
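Inside that Linux VM, the emulation itself is a couple of `tc` invocations on the forwarding interface; a sketch assuming the interface is eth1, root privileges, and a kernel with the netem qdisc (the delay/loss/rate numbers are examples covering both the latency and bandwidth factors mentioned above):

```python
import subprocess

def tc(cmd):
    # Thin wrapper so each tc command is visible as it runs.
    print("#", cmd)
    subprocess.run(cmd.split(), check=True)

# 200ms delay with 50ms jitter, 1% packet loss, 256 kbit/s rate cap.
tc("tc qdisc add dev eth1 root netem delay 200ms 50ms loss 1% rate 256kbit")

# ... run the client/server test traffic through the VM here ...

tc("tc qdisc del dev eth1 root")  # restore the interface when done
```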
Finally, I found that this does what I need:
http://www.nirsoft.net/utils/socket_sniffer.html
It captures Windows socket traffic, whether it's local or not.
