Peer-to-Peer Networking [closed] - networking

In a peer-to-peer network, is every node connected to the network a client or do they act as both a server and a client? And how can you make the peer-to-peer network robust enough to support an unlimited number of nodes connected at one time?

Well, as the name already says, it's P2P, so you can call each node both a client and a server; in this case the distinction doesn't matter, because every node does the job of both. When a peer in this kind of network needs to update something, it has to send the data once to EVERY peer it is connected to. Meanwhile, the same peer waits for any incoming data sent by the peers it is connected to and updates its own state accordingly.
As for your second question: I wouldn't recommend a P2P pattern for an unlimited number of peers. With a larger number of peers, the network usage grows quickly, because in a fully connected P2P network every single peer needs a connection to every other peer and must send its data to all of them; it also becomes very easy for the shared state to get desynchronised. P2P is good for smaller networks with a small number of peers. In that case, the raw transfer speed can be higher than in a server-client model, because there is no single choke point in the path (in a server-client model, that choke point is the server). For a larger number of total connections, I would stick to a client-server model.
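To make the growth argument concrete, here is a rough, illustrative C++ sketch (the update size and rate are assumptions, not figures from the question): in a full mesh of n peers there are n(n-1)/2 open connections, and each peer must upload every update n-1 times.

// Back-of-the-envelope sketch: how connection count and per-peer upload grow
// in a fully connected (mesh) P2P network. Update size and rate are assumed.
#include <cstdio>

int main() {
    const double update_bytes = 100.0;    // assumed size of one state update
    const double updates_per_sec = 30.0;  // assumed update rate per peer

    for (int peers : {4, 16, 64, 256}) {
        // In a full mesh, every pair of peers keeps a connection open.
        long long connections = static_cast<long long>(peers) * (peers - 1) / 2;
        // Each peer must push every update to all (peers - 1) neighbours.
        double upload_Bps = update_bytes * updates_per_sec * (peers - 1);
        std::printf("%4d peers: %8lld connections, %8.1f kB/s upload per peer\n",
                    peers, connections, upload_Bps / 1000.0);
    }
    // A client-server topology needs only 'peers' connections, and each client
    // uploads a single copy of its update, which is why it scales further.
    return 0;
}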

Related

No means of detecting collision at the application layer? Hmm [closed]

Given the degree of uselessness one has come to expect from both tutorials
https://inet.omnetpp.org/docs/tutorials/wireless/doc/step5.html
and manual pages:
https://doc.omnetpp.org/omnetpp/manual/#sec:ned-lang:warmup:network
how can collision be modelled at the application layer?
You did not find a tutorial on how collisions can be modelled at the application layer simply because collisions do not occur at the application layer.
Generally, a collision may occur when some medium (or layer) cannot be accessed simultaneously by many elements. However, there is no such limitation at the application layer. An application may send a packet at any time; that packet is processed by the transport layer (TCP or UDP) and then passed down to the network layer. The network layer has a buffer, so even when two or more applications send packets at the same time, no conflict occurs.
Regarding the details presented in your question:
How can hostSink check whether hostA or hostB are still sending packets [originally: signals]? Answer: hostSink cannot determine whether hostA is still sending packets. The simulation reflects the behaviour of a real network, and in a real network a host does not know whether another host is still sending packets.
How does time "pass" in a simulation? Answer: OMNeT++ is a discrete event simulator, and according to the Simulation Manual:
A discrete event system is a system where state changes (events) happen at discrete instances in time, and events take zero time to happen.
This means that the simulation internally maintains a variable called currentSimtime. At the beginning, currentSimtime = 0. When the first event (for example, sending an ARP packet) is scheduled at, say, t = 0.003 s, currentSimtime is set to 0.003 s and the sending of the ARP packet is executed.
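To make the "time jumps" idea concrete, here is a minimal, self-contained C++ sketch of such an event loop (this is not OMNeT++ code; the event times and actions are made up for illustration):

// Minimal sketch (not OMNeT++ itself) of how a discrete event simulator
// advances time: the clock jumps from one scheduled event to the next.
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

struct Event {
    double time;                    // simulated time at which the event fires
    std::function<void()> action;   // what happens at that instant
    bool operator>(const Event& o) const { return time > o.time; }
};

int main() {
    // The "future event list": a min-heap ordered by scheduled time.
    std::priority_queue<Event, std::vector<Event>, std::greater<Event>> fel;
    double currentSimtime = 0.0;

    // Schedule two "packet send" events, e.g. an ARP request at t = 0.003 s.
    fel.push({0.003, [] { std::puts("send ARP request"); }});
    fel.push({0.010, [] { std::puts("send data packet"); }});

    while (!fel.empty()) {
        Event ev = fel.top();
        fel.pop();
        currentSimtime = ev.time;    // the clock jumps straight to the event
        std::printf("t = %.3f s: ", currentSimtime);
        ev.action();                 // events themselves take zero simulated time
    }
    return 0;
}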

how exactly does http.sys work [closed]

I'm trying to get a deeper understanding of how IIS works.
http.sys, I understand, is one of its major components. However, I have been having trouble finding easily digestible information about it. I couldn't get a good mental model going until I heard about WSK; then I think it all fell into place.
From a lot of random googling and a little experimentation, this is my current high-level understanding of why it exists and how it does its stuff.
Why:
Port sharing and higher-performance caching.
How:
User-mode processes use the WinSock API to open a socket listening on a port and thus gain access to the networking subsystem, e.g. TCP/IP. Kernel-mode software like the http.sys driver uses the Winsock Kernel (WSK) API to achieve the same end, drawing on the same pool of TCP port numbers as the WinSock API.
IIS, a web service, or anything else that wants to use HTTP registers itself with http.sys using a unique URL/port combination. http.sys opens a socket on this port using WSK (if it hasn't already done so for another URL/port combination with the same port) and listens.
When the transport layer (tcpip.sys) has reassembled a set of IP packets back into the HTTP request that a client sent, it hands the request to http.sys via the port in the request. http.sys uses the URL/port number to send it to the appropriate process, which parses it however it pleases.
I know it seems like I'm answering my own question, but I'm really not that sure of myself on this and would like some closure so I can get on with more interesting things.
Am I close?
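For what it's worth, here is a minimal sketch of the user-mode side of that registration, using the v1 HTTP Server API functions that sit on top of http.sys; the URL prefix is an example value and error handling is trimmed, so treat it as an illustration rather than production code:

// Hedged sketch of the registration step described above, using the legacy v1
// HTTP Server API (the user-mode interface to http.sys). Windows-only; link
// against httpapi.lib. A real server would then loop on HttpReceiveHttpRequest
// and answer with HttpSendHttpResponse, omitted here for brevity.
#include <windows.h>
#include <http.h>
#include <cstdio>
#pragma comment(lib, "httpapi.lib")

int main() {
    // Tell http.sys that this process wants to act as an HTTP server.
    HTTPAPI_VERSION version = HTTPAPI_VERSION_1;
    ULONG rc = HttpInitialize(version, HTTP_INITIALIZE_SERVER, nullptr);
    if (rc != NO_ERROR) return 1;

    // A request queue is this process's "mailbox" inside http.sys.
    HANDLE queue = nullptr;
    rc = HttpCreateHttpHandle(&queue, 0);
    if (rc != NO_ERROR) return 1;

    // Claim a URL prefix. Other processes can register different prefixes on
    // the same port (e.g. http://+:80/other/), which is the port-sharing part.
    rc = HttpAddUrl(queue, L"http://+:80/myapp/", nullptr);
    if (rc != NO_ERROR) return 1;
    std::puts("registered http://+:80/myapp/ with http.sys");

    HttpRemoveUrl(queue, L"http://+:80/myapp/");
    CloseHandle(queue);
    HttpTerminate(HTTP_INITIALIZE_SERVER, nullptr);
    return 0;
}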

Calculating Real Network Round-trip [closed]

I would like to understand network latency a bit better.
Let's say there's one client and two servers. The client sends 1000 bytes to each of the servers, and each server responds instantly with 1000 bytes.
Ping round trip times from Client:
To Server 1 - 2ms
To Server 2 - 20ms
Assume both the client and the servers are connected to a quality 1 Gbps pipe (but not via a dedicated line between them).
Question: how do I calculate the real time from when the client starts sending its 1000 bytes to when it fully receives the last byte of the response data? Will it be something close to 2 ms for Server 1 and 20 ms for Server 2?
Yes, that's essentially right. At 1 Gbps, pushing 1000 bytes onto the wire takes only about 8 microseconds, so the serialization delay of the request and the response is negligible next to the 2 ms and 20 ms round-trip times; the total will be very close to the ping RTT.
The ping round-trip delay measures how long it takes a small packet of data to travel from one host on the network to another and back to the original host.
Keep in mind that the numbers you get will fluctuate a bit based on network conditions and the load on the hosts' processors. Average the round-trip delay over a few samples, but be prepared for any individual packet to experience an unusual delay for a variety of reasons.
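A quick back-of-the-envelope sketch of that arithmetic in C++ (the 1 Gbps link speed and 1000-byte message come from the question; the rest is assumed):

// Rough sketch of the arithmetic behind "close to the ping time": the time to
// push 1000 bytes onto a 1 Gbps link is tiny compared to the round-trip delay.
#include <cstdio>

int main() {
    const double payload_bits = 1000.0 * 8.0;   // 1000-byte message
    const double link_bps = 1e9;                // 1 Gbps access link
    const double serialize_ms = payload_bits / link_bps * 1000.0; // ~0.008 ms

    for (double rtt_ms : {2.0, 20.0}) {
        // Request and response each add one serialization delay on top of the
        // propagation/queuing delay captured by the ping RTT.
        double total_ms = rtt_ms + 2.0 * serialize_ms;
        std::printf("RTT %5.1f ms -> total %6.3f ms (serialization adds %.3f ms)\n",
                    rtt_ms, total_ms, 2.0 * serialize_ms);
    }
    return 0;
}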

Would a custom IP-based protocol work? [closed]

Let's say somebody invented a new protocol to put on top of IP. Would two computers on opposite ends of the world be able to communicate with it, i.e. would routers forward packets that aren't standard TCP/UDP/ICMP?
Yes, if it is built on top of IP then it would be routable over the Internet. The IP protocol defines the header and the payload, and the header is all that routers use for forwarding, so you would be able to send your custom IP-based protocol data from one computer to another over the Internet (although middleboxes such as NATs and strict firewalls may drop protocol numbers they don't recognise).
However, both computers will need custom drivers to send, receive and understand the data.
I'm not sure why you'd bother, though. If you're sending custom data, you're much better off writing an application-level protocol on top of TCP or UDP and taking advantage of the networking stack already built into every computer and operating system. It'll be easier to write, maintain, and debug.
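If you still want to experiment, here is a hedged sketch of the sending side using Linux raw sockets (requires root/CAP_NET_RAW; 253 is an IP protocol number reserved for experimentation per RFC 3692, and the destination address is a placeholder):

// Hedged sketch (POSIX/Linux, needs elevated privileges): sending a datagram
// with a custom IP protocol number instead of TCP/UDP/ICMP.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Raw IPv4 socket bound to protocol number 253; the kernel builds the
    // IP header for us and sets its protocol field to 253.
    int fd = socket(AF_INET, SOCK_RAW, 253);
    if (fd < 0) { perror("socket (needs CAP_NET_RAW/root)"); return 1; }

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);   // placeholder address (TEST-NET-1)

    const char payload[] = "hello over a non-TCP/UDP protocol";
    ssize_t n = sendto(fd, payload, sizeof(payload), 0,
                       reinterpret_cast<sockaddr*>(&dst), sizeof(dst));
    if (n < 0) perror("sendto");
    else std::printf("sent %zd bytes with IP protocol 253\n", n);

    // A receiver would open socket(AF_INET, SOCK_RAW, 253) and call recvfrom();
    // it sees the full IP header plus this payload.
    close(fd);
    return 0;
}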

Choosing a Server For my game [closed]

I'm about to release an MMORPG. In my game, every second each player sends 30 TCP messages to the server and gets 30 back. Each message is short, around 20 characters.
The point is that I have never worked on multiplayer games before. I have programmed the whole server and client, but I don't know what kind of server I'm going to need in terms of RAM, CPU, etc. I still don't know what to be ready for, but let's say 15K concurrent clients. As said, every second each client needs to send and receive 30 TCP messages, and in most cases I also need to update my non-SQL DB with the data.
Update: it's a multiplayer game, so I must have 30 msgs/sec. Most of the messages carry the player's current position. Also, I'm using C++.
It depends on what your (already implemented) server requires. You'll never know until you try it on particular hardware. Rent a powerful dedicated server for a month and profile your game server; at the very least, check the CPU usage. You'll need multithreaded, asynchronous networking.
The details you provided only help to calculate how much bandwidth you need:
~94 bytes (TCP + IP + Ethernet headers) + ~20 bytes (your data) = 114 bytes per packet × 30 per second × 15,000 users ≈ 50 MB/s × 8 bits ≈ 400 Mbps of both incoming and outgoing traffic. It seems you're in trouble here. Consider something smarter than sending every message in its own TCP segment. For example, implement a buffer that collects data ready to be sent and is filled by your game-logic threads, plus a separate thread that pushes this data to the network; that way several small logical messages can be combined into one TCP packet, greatly reducing your bandwidth requirements (see the sketch below).
But even after this you're still in trouble. I'd recommend waiting for users before investing in a complicated solution. Once you have them, you'll need to implement some kind of clustering, which is a separate story, much more complicated than plain networking.
Try to handle at least 1K users with a single server. That can bring in some money to hire somebody experienced in game networking/clustering.
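Here is a hedged C++ sketch of that batching idea (the length-prefix framing, message contents, and flush-per-tick scheme are illustrative assumptions, not something from the question):

// Hedged sketch of the batching idea above: game logic appends many small
// messages to a per-connection buffer, and a network thread/tick flushes the
// whole buffer as one TCP write instead of 30 separate ones.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

class OutgoingBuffer {
public:
    // Append one game message with a 2-byte length prefix so the receiver
    // can split the batch back into individual messages.
    void queueMessage(const std::string& msg) {
        uint16_t len = static_cast<uint16_t>(msg.size());
        buffer_.push_back(static_cast<char>(len >> 8));
        buffer_.push_back(static_cast<char>(len & 0xFF));
        buffer_.insert(buffer_.end(), msg.begin(), msg.end());
    }

    // Called once per tick: hand the accumulated bytes to the socket layer
    // in a single send() instead of one send per message.
    void flush() {
        if (buffer_.empty()) return;
        std::printf("flushing %zu bytes in one TCP send\n", buffer_.size());
        // send(socketFd, buffer_.data(), buffer_.size(), 0);  // real code here
        buffer_.clear();
    }

private:
    std::vector<char> buffer_;
};

int main() {
    OutgoingBuffer out;
    for (int i = 0; i < 30; ++i)                 // 30 small position updates
        out.queueMessage("pos:123.4,567.8");     // ~20 characters each
    out.flush();                                 // one segment instead of 30
    return 0;
}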
If you know you are sending 30 messages every second, why not bundle them into one request per second? That makes a lot of difference in terms of server resources.
And in which language are you going to run your server? I hope you are writing something dedicated to processing and managing these connections. If so, do some profiling and just measure what you need.
And what is your processor doing every second to process those 30 × 15K messages?
There is no generic answer to your question. It all depends on your software.
