I want to develop a simple serverless LAN chat program just for fun. How can I do this? What type of architecture should I use?
Last year I worked on a TCP/UDP client/server application project. It was simple (the server listens on a certain port/socket and the client connects to it, etc.), but I have no idea how to develop a "serverless" LAN chat program. How should I do it: UDP, TCP, multicast, broadcast? Or should the program behave as both server and client?
The simplest way would be to use UDP and simply broadcast your messages all over the network.
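For illustration, here is a minimal Python sketch of that approach (the port number and message format are arbitrary assumptions): every instance of the program broadcasts its messages and listens on the same UDP port.

```python
import socket

PORT = 5005  # arbitrary port chosen for this sketch

# Sending side: broadcast a chat line to everyone on the local subnet.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
tx.sendto(b"alice: hello, LAN!", ("255.255.255.255", PORT))

# Receiving side: every instance also listens on the same port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
data, (ip, _) = rx.recvfrom(4096)
print(ip, data.decode())
```

Note that broadcasts normally don't cross routers, which is fine for a single LAN segment but worth knowing.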
A slightly more advanced version would be to use the broadcast only to discover other nodes in the network.
Every node maintains a list of known peers.
Messages are sent with TCP to all known peers.
When a node starts up, it sends out a UDP broadcast to discover other nodes.
When a node receives a discovery broadcast, it sends "itself" to the source of the broadcast in order to make itself known. The receiving node also adds the broadcaster to its own list of known peers.
When a node leaves the network, it sends another broadcast so that the remaining nodes remove it from their lists.
You would also have to handle nodes that drop out without informing the rest of the network (e.g., a crash or a pulled cable); a periodic heartbeat or a send timeout is a common way to detect them.
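A rough Python sketch of that scheme, assuming arbitrary port numbers and a trivial HELLO/HERE/BYE text protocol (both are illustrative assumptions, not a worked-out design):

```python
import socket
import threading

DISCOVERY_PORT = 5006  # assumption: UDP port used for peer discovery
CHAT_PORT = 5007       # assumption: TCP port each peer listens on for messages
peers = set()          # known peers, stored as IP addresses

def discovery_loop():
    """Answer discovery broadcasts and learn about new peers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    while True:
        data, (ip, _) = sock.recvfrom(1024)
        if data == b"HELLO":
            peers.add(ip)                               # remember the broadcaster
            sock.sendto(b"HERE", (ip, DISCOVERY_PORT))  # make ourselves known to it
        elif data == b"HERE":
            peers.add(ip)
        elif data == b"BYE":
            peers.discard(ip)                           # peer left gracefully

def announce():
    """On startup, broadcast our presence to the subnet."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(b"HELLO", ("255.255.255.255", DISCOVERY_PORT))

def send_chat(text):
    """Deliver a chat line to every known peer over TCP."""
    for ip in list(peers):
        try:
            with socket.create_connection((ip, CHAT_PORT), timeout=2) as conn:
                conn.sendall(text.encode())
        except OSError:
            peers.discard(ip)  # crude handling of silently dropped nodes

threading.Thread(target=discovery_loop, daemon=True).start()
announce()
```

(Each peer would also run a TCP listener on CHAT_PORT to receive messages, and a real implementation would filter out its own broadcasts; both are left out to keep the sketch short.)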
The Spread toolkit may be a bit of overkill for what you want, but it is an interesting starting point.
From the blurb:
Spread is an open source toolkit that provides a high performance messaging service that is resilient to faults across local and wide area networks. Spread functions as a unified message bus for distributed applications, and provides highly tuned application-level multicast, group communication, and point to point support. Spread services range from reliable messaging to fully ordered messages with delivery guarantees.
Spread can be used in many distributed applications that require high reliability, high performance, and robust communication among various subsets of members. The toolkit is designed to encapsulate the challenging aspects of asynchronous networks and enable the construction of reliable and scalable distributed applications.
Spread consists of a library that user applications are linked with, a binary daemon which runs on each computer that is part of the processor group, and various utility and demonstration programs.
Some of the services and benefits provided by Spread:
Reliable and scalable messaging and group communication.
A very powerful but simple API simplifies the construction of distributed architectures.
Easy to use, deploy and maintain.
Highly scalable from one local area network to complex wide area networks.
Supports thousands of groups with different sets of members.
Enables message reliability in the presence of machine failures, process crashes and recoveries, and network partitions and merges.
Provides a range of reliability, ordering and stability guarantees for messages.
Emphasis on robustness and high performance.
Completely distributed algorithms with no central point of failure.
Apple's iChat is an example of the very product you are envisioning. It uses Bonjour (Apple's zero-conf networking protocol) to identify peers on a LAN. You can then text, audio, or video chat with them.
I'm not entirely sure how Bonjour works internally, but I know it uses multicast. Clients "register" services on the LAN, and the Bonjour protocol allows each host to pull up a directory of hosts offering a given service (all without central management).
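You can experiment with the same idea from Python via the python-zeroconf package, which speaks the same mDNS/DNS-SD protocols that Bonjour uses; the "_lanchat._tcp" service type below is a made-up example:

```python
import socket
from zeroconf import ServiceBrowser, ServiceInfo, Zeroconf

zc = Zeroconf()

# Register ourselves under a hypothetical "_lanchat._tcp" service type so
# other peers can find us without any central server.
info = ServiceInfo(
    "_lanchat._tcp.local.",
    "alice._lanchat._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.10")],  # our LAN address (example)
    port=5007,                                     # where our chat listener runs
)
zc.register_service(info)

# Browse for everyone else offering the same service.
class PeerListener:
    def add_service(self, zc, type_, name):
        print("peer appeared:", name)
    def remove_service(self, zc, type_, name):
        print("peer left:", name)
    def update_service(self, zc, type_, name):
        pass

browser = ServiceBrowser(zc, "_lanchat._tcp.local.", PeerListener())
```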
I was wondering if DDS could be used over the internet, and if it would be a good choice for online gaming.
I have seen on the RTI website that they support WAN, but does that mean I can subscribe to a topic from another participant that is on the other side of the world?
What would happen to the QoS guarantees if this was the case?
Thanks.
Disclaimer: I work on OpenDDS full time, but have no experience in networked games programming.
An internet-enabled DDS could be used for connecting game clients. Whether or not it's a good idea is something I can't answer without more specific information, but the QoS part is a good question. In OpenDDS, as far as I'm aware, we try to adhere to the QoS defined by the user as if it were a normal RTPS connection. This means using it over the Internet might require some tuning, depending on which QoS policies you want to use. For example, if the deadline QoS was being used on a local network, the time period might have to be relaxed given the greater latency of the Internet.
For OpenDDS, internet-enabled RTPS is described in Chapter 15 of the OpenDDS Developer's Guide: http://download.objectcomputing.com/OpenDDS/OpenDDS-latest.pdf. In addition to using ICE to overcome NATs, we also have a feature called the RTPS Relay to enable connections when a client can't use ICE.
I'm not familiar with the specific capabilities RTI Connext has here, but as far as I'm aware they are similar, in that they use ICE as well. It should also be noted that internet-enabled RTPS is not standardized, so Connext and OpenDDS wouldn't be able to talk to each other over a WAN.
OpenDDS would only be appropriate for games in very constrained environments because of the bandwidth requirements. If all users are on the same LAN then the UDP multicast approach that RTPS uses would be effective for a peer-to-peer game architecture. However, if remote users are added, then the requirement of every peer having to send every update directly to every other peer will very quickly explode the bandwidth requirements.
Given that the RTPS relay is already another application that needs to be run, a game server that collates updates from peers and sends world state would be far more effective for cases where users are not all on a single LAN segment.
What are the advantages of using multiple ports in a game? I understand why some would use a combination of TCP and UDP for different purposes, but why do some games use multiple TCP or UDP ports? Is there any advantage to this? I am asking because I'm writing the networking code for my game and I wonder why others go out of their way to use multiple ports.
For example, GTA V uses 5 UDP ports and Assassin's Creed Revelations uses 4 TCP and 4 UDP ports.
There is always a reason.
Quite often they are not (entirely) technical. For instance, one team is working on the inter-game chat functionality while another is working on the server-client protocol for game X. Then they are integrated into the same product, but nobody bothers unifying the protocols due to costs, time constraints, concerns about future maintainability, etc.
There are also purely technical reasons:
If the game server and the chat server run in different locations, it is natural to use multiple connections; the alternative is a sort of reverse-NAT box on the server side, but that's risky since it's a bottleneck and a single point of failure.
Stability: if the chat server crashes or malfunctions, you don't want it to also bring down the stream between the client and the game server, so it's safer to communicate over parallel connections.
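To make the stability point concrete, here is a small Python sketch (server name and port numbers are invented for illustration): the game and chat streams live on independent sockets, so a failure on one doesn't disturb the other.

```python
import socket

SERVER = "game.example.com"        # hypothetical server
GAME_PORT, CHAT_PORT = 7777, 7778  # invented port numbers

game = socket.create_connection((SERVER, GAME_PORT))  # critical stream

try:
    chat = socket.create_connection((SERVER, CHAT_PORT))  # auxiliary stream
    chat.sendall(b"/join lobby\n")
except OSError:
    chat = None  # chat being down must never take the game stream with it

game.sendall(b"PLAYER_MOVE 12 34\n")  # game traffic keeps flowing regardless
```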
Overall it's a classic example of theory meets practice: it would be great to use a single port (and connection), but it is more practical to separate the different transmissions and interactions for a variety of reasons.
A bit off topic: have you noticed how many connections are opened when accessing a web page nowadays? It's often several dozen. It would probably be an order of magnitude fewer without all the ads, though. Anyway, compared to that, a game opening 5 connections is nothing.
I'm researching SDN and NFV.
The Wikipedia article on NFV says: "Network Functions Virtualization (NFV) is a network architecture concept that proposes using IT virtualization related technologies, to virtualize entire classes of network node functions into building blocks that may be connected, or chained, together to create communication services." So the first thing to consider is that it should reduce the cost of facilities.
So in a real-life implementation, for example, how can we virtualize a network node like a router?
NFV was created so that networks can be extended dynamically (virtualize the router) rather than statically (buy a new router); that is, we implement the router functions on a server or a computer instead of buying a new router and adapting it to the current network. In this case I don't see the difference, because buying a server to implement a virtualized router is not cheaper than buying a new router.
Can anyone explain this to me, or am I misunderstanding the NFV concept?
Thanks.
SDN is just that: software-defined networking. In a hybrid SDN model, SDN decouples the logic from the physical box, rendering the physical box a simple "forwarding" box. The logic rests with the SDN controller, where developers create APIs that manage these forwarding boxes (we call them network elements now) with flow tables that get pushed to them. The benefit here is that the devices can now be configured and provisioned through this controller, as opposed to having to log into each and every box.
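As a concrete, hedged illustration using the Ryu controller framework (one OpenFlow controller among several; nothing here prescribes a specific one): the app below pushes a default flow entry to every switch that connects, which is exactly the "logic lives in the controller, the box just forwards" split described above.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class SimpleFlowPusher(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Push a lowest-priority flow: send unmatched packets to the controller,
        # which then decides the forwarding logic centrally.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```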
Then you have the cloud. A small office can literally get away with porting all of their apps and services into the cloud, doing away with most of their physical boxes. Of course you still need a LAN in the office and a way to get out to the Internet and eventually the cloud. You can even ask the cloud provider to provision load-balancing on specific applications, firewalls and content delivery services. So basically your office applications and most of the supporting LAN and databases can be safely ported to cloud providers.
When you said "...because buying a server to implement a virtualized router is not cheaper than buying a new router", it depends: since it's a virtualized resource, you can use this new server to run your router and other resources from your infrastructure, if the machine has more hardware capacity than you need for a single router.
In fact, you might not even need to buy a new machine. If your resources are in a cloud like AWS (or your own private cloud), then when you need more routers you can flexibly allocate more hardware resources and spawn new router instances (scale out), and whenever your router demand is lower than what you have allocated, you can reduce the number of routers (scale in) and stop paying for infrastructure you are not using at the moment.
Consider that a really high-level explanation; if you want to know the details of how a Virtual Network Function scales in and out in an NFV implementation, I recommend reading the ETSI specifications on how it should work: http://www.etsi.org/standards-search#page=1&search=&title=1&etsiNumber=1&content=0&version=1&onApproval=1&published=1&historical=0&startDate=1988-01-15&endDate=2017-04-13&harmonized=0&keyword=&TB=789,,832,,831,,795,,796,,800,,798,,799,,797,,828&stdType=&frequency=&mandate=&collection=&sort=3
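As a very rough sketch of the scale-out/scale-in idea on AWS with boto3 (the AMI ID is a placeholder for an image containing your virtual router software):

```python
import boto3

ec2 = boto3.client("ec2")

# Scale out: traffic is up, so launch another virtual router instance.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: image with your router software
    InstanceType="t3.medium",         # example size; pick one matching your load
    MinCount=1,
    MaxCount=1,
)
router_id = resp["Instances"][0]["InstanceId"]

# Scale in: demand dropped, so stop paying for the extra router.
ec2.terminate_instances(InstanceIds=[router_id])
```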
Let me continue with your example of the router. Traditionally, these routers are vendor-specific; the major sellers are companies like Cisco, Juniper, etc. They are implemented on proprietary hardware, so if you want a new router you have to buy it from them. Further, when they run into problems, you need a dedicated engineer to repair them. The telecom operator therefore has to bear a high Capital Expenditure (CAPEX) and Operational Expenditure (OPEX).
With NFV, the entire router function is implemented as software and deployed on general-purpose servers (GPPs) or in the cloud. These GPPs are very cheap compared to proprietary hardware. Thanks to cloud computing, even small companies can afford servers on the Amazon and Google clouds. Because of this cheap availability, CAPEX is now lower. Further, you don't need a dedicated engineer when the hardware runs into a problem; the same engineer who maintains the GPP servers is enough. This way OPEX is reduced.
Now imagine: besides routers, there are many other networking elements in a telecommunication network. If every networking element requires a dedicated engineer, think how much money a telecom operator would be spending. Apart from this, thanks to the software implementation, when you have much higher traffic than expected, you can just roll out a new router (a software network function) on a GPP or the cloud instead of buying a whole new router, which is very costly. And as you already know, in the cloud you pay based on usage.
There are many more uses; to learn more, read the research papers in this area.
Quick question: do most chat applications (e.g., AIM, Skype, ooVoo) use peer-to-peer UDP exchange for talking to other users, or an echoing TCP connection with a server? Or some combination in between?
Traditionally, most applications used a TURN-like solution (i.e., communication via a server) to overcome NAT traversal issues. Since chat does not consume much bandwidth, servers could support thousands of communications.
But now that P2P has evolved and the NAT traversal issues are well understood, some use direct UDP communication, provided the users' NATs allow it (i.e., STUN-like communication). They still need a central server to punch the hole, though. Direct communication is also helpful when lots of data needs to be transmitted.
I believe it is fair to say that most modern frameworks use a combination of both.
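For the curious, here is a stripped-down Python sketch of the hole-punching step itself; the rendezvous exchange, in which the central server tells each client the other's public address, is assumed to have already happened (addresses below are illustrative):

```python
import socket

MY_PORT = 40000
PEER_ADDR = ("203.0.113.7", 40001)  # peer's public IP:port, learned via the server

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", MY_PORT))

# Both sides send first: the outgoing packet creates a mapping in our own NAT,
# so the peer's packets addressed to that mapping are then let through.
sock.sendto(b"punch", PEER_ADDR)
data, addr = sock.recvfrom(1024)
print("direct P2P path established with", addr)
```

Whether this works depends on the NAT type on both ends (symmetric NATs defeat it), which is why a TURN-style relay remains the fallback.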
When you only need small fragments of data, such as text messaging, there's no need to use P2P; data can be transmitted from client 1 to the server, and from the server back to client 2.
When you need to transfer data quickly between clients, as in VoIP (voice over IP) or file transfer, you will use P2P.
A pretty standard IM protocol is XMPP. I know it's used by Google Talk, as well as a few other big names in chat.
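If you want to experiment, slixmpp (one of several Python XMPP libraries; using it here is my suggestion, not something those services mandate) makes a minimal echo client quite small. The JID and password are placeholders:

```python
from slixmpp import ClientXMPP

class EchoBot(ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        self.send_presence()
        await self.get_roster()

    def on_message(self, msg):
        # Echo plain chat messages back to the sender.
        if msg["type"] in ("chat", "normal"):
            msg.reply("You said: %s" % msg["body"]).send()

bot = EchoBot("user@example.com", "secret")  # placeholder credentials
bot.connect()
bot.process(forever=True)
```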
We're about to design an in-house industrial network consisting basically of the following: 1 server connected via wire to up to 100 proprietary RF access points (basically embedded devices), each of which can be connected via radio to up to 100 endpoint embedded devices. Something like this:
Now, I'm wondering about some design decisions that we need to take and I'm sure there are plenty of similar designs out there and lots of folks with experiences of them, both good and bad. Maybe you can chime in?
All endpoint devices are independent and will communicate their own unique data to the server, and the other way around. The server therefore needs to be able to target each endpoint device individually. Each endpoint device pairs itself with one access point and then talks a proprietary RF protocol to it; TCP/IP is not an option there.
The server will know which endpoint device is paired with which access point, so when the server needs to talk to an individual endpoint device, the communication must go through the paired access point. Hence, the server needs to directly address the access point.
Question: Considering the limited resources available in the proprietary access point, is TCP/IP between server and access point recommended for this scenario? Or would you suggest something entirely different?
I find the diagram confusing:
If this isn't its own network and the server-to-AP link is running on your internal company network, there isn't really an option; there must be a TCP/IP stack on the AP.
If this is its own isolated network then what is the router for?
If this is, in fact, its own isolated network, then you are right: there really isn't a need for the Ethernet connectivity at all. The overhead you will see on the wireless is huge: your zero-overhead ideal data rate is 250 kbit/s, but running ZigBee on 802.15.4 @ 2.4 GHz point to point, your real data throughput is usually around 20 kbit/s. A custom protocol should be able to achieve lower overhead, but it would need to be defined.
If I were designing this, I would choose a SoC for the AP that had on-board 802.15.4 and CAN (Controller Area Network). Depending on size and data rate, just get a PCI CAN card for the server and connect it up, and use something like DeviceNet as your protocol layer for server-to-AP communications. This can be expanded by using CAN switches and repeaters. CAN is used all the time in industrial automation; a little googling will find you examples of tens of thousands of nodes used in some manufacturing plants.
There are small TCP/IP stacks, for example lwIP.
You didn't mention the amount of data to be communicated or any bandwidth considerations.
A third-party TCP/IP stack targeted at the 8051 would simplify all the networking issues of connecting 100 units. You will probably still end up with a proprietary protocol that sits on top of the TCP/IP stack, but then it is just simple point-to-point communication between the server and each endpoint.
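For example, the proprietary layer on top of TCP can be as simple as a length-prefixed frame; shown below in Python for brevity (the 8051 side would be C), with an invented frame layout:

```python
import socket
import struct

def send_frame(conn, payload: bytes):
    # Invented framing: 2-byte big-endian length, then the payload.
    conn.sendall(struct.pack(">H", len(payload)) + payload)

def recv_frame(conn) -> bytes:
    (length,) = struct.unpack(">H", conn.recv(2))  # sketch: assumes the header arrives whole
    payload = b""
    while len(payload) < length:
        payload += conn.recv(length - len(payload))
    return payload

# Server side: one point-to-point TCP connection per access point.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 9000))  # port chosen arbitrarily
srv.listen()
conn, addr = srv.accept()
send_frame(conn, b"\x01poll-endpoint-42")  # invented command bytes
print("AP replied:", recv_frame(conn))
```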