A friend of mine made a small LAN-playable game and asked me to change it so it can be played over the Internet. I don't want to make huge changes to the client application.
When a game is created, the server keeps sending UDP BROADCAST packets to tell everyone that a game has been created. Now I just need to change this BROADCAST so that these packets are sent to a group of Internet IP addresses.
Can you tell me if the following solution is a good one: I would create a room server, let's call it 'room-broadcast-server', that contains the IP addresses of everyone who joined the room. Then the clients, instead of sending that BROADCAST packet, would send the packet to the room-broadcast-server, which would broadcast it to everyone who joined the room.
The problem is: the clients would receive these packets from the 'room-broadcast-server' and would try to communicate with the room-broadcast-server instead of communicating with the machine that created the game. I would like to fool the clients so that they think the packet came from the game server, and not from the room-broadcast-server. How can I do that?
Are you modifying only the server, or both the server and the clients? I would think that it would be simpler to just remove the broadcasts altogether and have the clients explicitly choose the server they want to connect to, rather than relying on the server broadcasting.
As a version 1, you could simply require players to type in the IP address/DNS name of the server in the client in order to connect.
For version 2, you could add support for a "lobby" where you have a (known) central lobby server that both clients and server connect to in order to find each other (so servers connect to the lobby to announce their presence and then clients connect to the lobby in order to browse servers that they want to connect to).
In a game that I have been writing (currently on hold due to lack of free time :p) I wrote the "lobby" server as a simple PHP+MySQL web application and had the clients and servers use HTTP requests to poll it for updates and so on. That way I could host the central lobby server on a cheap web host and anybody could host games. (The drawback is that cheap web hosts don't allow arbitrary socket connections, so I wasn't able to implement NAT punch-through on it, but if/when the game became popular my plan was to move the lobby server to a more expensive host that did allow arbitrary socket connections...)
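As a rough sketch of what such a lobby could look like — here in Python with the standard library rather than the PHP+MySQL version I actually used; the port, endpoints, and JSON fields are just examples:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

GAMES = {}  # game name -> {"host": ..., "port": ...}

class LobbyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Game servers announce themselves with a JSON body like {"name": "...", "port": 7777}.
        length = int(self.headers.get("Content-Length", 0))
        game = json.loads(self.rfile.read(length))
        GAMES[game["name"]] = {"host": self.client_address[0], "port": game["port"]}
        self.send_response(204)
        self.end_headers()

    def do_GET(self):
        # Clients poll this to browse the current list of games.
        body = json.dumps(GAMES).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LobbyHandler).serve_forever()
```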
Spoofing the source address isn't a suitable mechanism for your normal application protocol. It requires special permissions on client machines, it will be dropped by some networking filters, and is just generally gross and anti-social.
Since you're modifying the clients anyway (to send the messages to the game server instead of to the broadcast address), you can simply have the game server append the "true source" of the packet to each packet it sends out, and have the clients expect and process this information in the packets that they receive from the game server.
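For instance, a minimal sketch of that idea, assuming the relay (the room-broadcast-server) prefixes each packet with the original sender's address using a simple "ip:port|payload" framing; the ports and room list are made-up examples:

```python
import socket

ROOM = [("203.0.113.10", 5000), ("203.0.113.20", 5000)]  # example addresses of joined clients

relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
relay.bind(("0.0.0.0", 6000))  # game servers send their announcement packets here

while True:
    payload, (src_ip, src_port) = relay.recvfrom(4096)
    # Prepend the real game server's address before relaying to the room.
    framed = f"{src_ip}:{src_port}|".encode() + payload
    for member in ROOM:
        relay.sendto(framed, member)
```

On the client side, splitting on the first "|" recovers the original announcement plus the address of the machine that actually created the game, so the client can connect there instead of to the relay.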
The basic concept (one central lobby server + multiple game servers) of the 2nd option from Dean is good and very common, even in retail online games. One thing I don't understand is why you want to fake IP addresses. Clients should not care what the server's IP address is as long as they get valid packets from it. Also, IMO, you don't need to create a separate room(?) server, since a game server can/should manage a list of clients as clients are connected via the lobby server.
My team wants to build a chat app, so we are researching all the technologies available in our arsenal. I am concerned about XMPP. So I was reading O'Reilly's "XMPP: The Definitive Guide" and came across these lines, which I quote:
In XMPP, messages are delivered as fast as possible over the network. Let's say that Alice sends a message from her new account on the wonderland.lit server to her sister on the realworld.lit server. Her client effectively "uploads" the message to wonderland.lit by pushing a message stanza over a client-to-server XML stream. The wonderland.lit server then stamps a from address on the stanza and checks the to address in order to see how the stanza needs to be handled (without performing any deep packet inspection or XML parsing, since that would eat into the delivery time). Seeing that the message stanza is bound for the realworld.lit server, the wonderland.lit server then immediately routes the message to realworld.lit over a server-to-server XML stream (with no intermediate hops). (Page 45)
Like email, but unlike the Web, XMPP systems involve a great deal of inter-domain connections. However, when you send an XMPP message to one of your contacts at a different domain, your client connects to your "home" server, which then connects directly to your contact's server without intermediate hops (see Figure 2-4). (Page 13)
Can anyone please help me understand how there can be no intermediate hops (unlike email)?
E-mail (SMTP) also has no intermediate hops. I assume you are confusing the OSI application layer, where XMPP, SMTP and so on live, with the network layer (IP).
For mobile apps it is a valid assumption that the network may be intermittent, or that it may switch from one network to another as the user keeps moving. For example, your device is connected to a Starbucks wifi and you are using the app before you grab your coffee and walk out of the store: your device's network may switch from wifi to the carrier network (3G/4G/LTE). Even within the carrier network itself, it may switch among 3G/4G/LTE depending on coverage at your position.
Questions:
Will this intermittent network, or frequent network switching, affect HTTP communication?
For example, an HTTP request was sent out over wifi, and while the server is processing the request the device has already switched to 4G. Will the device still be able to receive the response?
If yes, how are HTTP or TCP designed to support this scenario?
If no, should we try to solve the problem at the application layer? And how?
Will the device still be able to receive the response?
In current practice, no. After the network switch:
The device's public IP address has changed.
TCP connections are based on the IP protocol, so all current TCP connections are destroyed.
HTTP runs on top of a TCP connection, so it is destroyed too.
Actually, you can run a simple experiment to verify this: put a web page on the internet and make the web server delay delivery of the page for 30 seconds. Visit this page and switch networks while waiting for the response.
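A minimal sketch of such a slow server, assuming Python's standard library (the 30-second delay and port number are arbitrary):

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(30)  # hold the response open long enough to switch networks
        body = b"Did this arrive after the network switch?"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8000), SlowHandler).serve_forever()
```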
However, this is a classic problem in the mobile world, so work is being done to give a mobile device a constant IP address, which would keep TCP and HTTP alive when the device switches from one network to another. You can look up Mobile IP on Wikipedia for more information on the various technologies and protocols.
If No, should we try to solve the problem from the application layer?
It depends on whether your application can tolerate network interruptions. If it is a static web page, I think it is totally OK to leave this problem alone and wait for Mobile IP technology to improve in the future. If it is a highly network-dependent application, such as online video or a stock market app, I think this problem should be solved at the application layer.
and How?
There are three methods to fix or work around this problem (maybe more):
Cache. Prefetch resources, so that when the TCP connection is destroyed and reconnected the device can use cached resources. This works well in online audio/video apps, but it does not apply when nothing can be prefetched (real-time stock market data, for example).
Treat TCP reconnection as the first priority. Check your code: when an HTTP request fails because the TCP connection was destroyed, re-send the request as early as possible (see the sketch after this list).
Improve the user experience when a network interruption does happen.
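As a rough sketch of the second point, assuming an idempotent GET and made-up retry counts:

```python
import time
import urllib.error
import urllib.request

def fetch_with_retry(url, attempts=5, backoff=1.0):
    for i in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError):
            # The connection was torn down (e.g. wifi -> 4G switch); back off and retry.
            time.sleep(backoff * (i + 1))
    raise ConnectionError(f"giving up on {url} after {attempts} attempts")
```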
Reading about WebRTC I get the feeling that "it will drop server bandwidth usage dramatically" except for "a few corner enterprise-firewall cases" where one needs a TURN server which relays the whole traffic between the peers.
For example, although not WebRTC related, the idea is similar: the Wikipedia article on Chatroulette states: The website uses Adobe Flash to display video and access the user's webcam. Flash's peer-to-peer network capabilities (via RTMFP) allow almost all video and audio streams to travel directly between user computers, without using server bandwidth. However, certain combinations of routers will not allow UDP traffic to flow between them, and then it is necessary to fall back to RTMP.
Similar articles on WebRTC also focus on "yeah there might be problems with firewalls so you need a TURN server, but ignore this and look at my awesome PeerConnection JavaScript code".
What I don't understand:
A connection between two peers requires a server socket to be open so the peers can connect to it; even UDP requires the concept of a UDP server socket. And nearly all non-server, internet-connected peers are behind some kind of router: every smartphone uses a wifi router, desktop PCs use the router of the service provider, and so on.
It shouldn't be possible to connect to a server socket hosted on a smartphone (a browser WebRTC server socket) or on a desktop because of the router/firewall.
Thus my understanding is that practically no two peers which need to send their traffic through the internet will be able to use a direct P2P connection, right?
So the only useful case for WebRTC is a LAN-like environment, right?
Furthermore, a video chat service like Chatroulette based on WebRTC would need to use a bunch of TURN servers to relay nearly ALL traffic, which makes WebRTC just as costly in server bandwidth as hosting my own solution.
So my question is: am I right? If not, what is the technical detail that allows a PeerConnection to be used without a TURN server between two nodes separated by the Internet? How is the connection established at Layer 4, the TCP/UDP transport layer? Is it using UDP, and do all wifi routers allow hosting UDP server sockets or some such? That wouldn't make much sense because of NAT and security.
UPDATE 1:
Digging a bit further I found what "symmetric NAT" means and what it has to do with enterprises: in most enterprises it seems that the device connected to the internet implements symmetric NAT. This means that the routing table which maps internal "internal-ip:internal-port" tuples to "internet-ip:internet-port" also stores "destination-ip:destination-port". So such routers/NATs store a table entry for every (TCP?) connection with six columns: "internal-ip:internal-port:internet-ip:internet-port:destination-ip:destination-port". This means no one but the destination is allowed to communicate with internal-ip:internal-port.
Non-enterprise routers, on the other hand, seem to store only the "internal-ip:internal-port:internet-ip:internet-port" combination. That is also what is meant by "poking a hole in the firewall".
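To illustrate the difference I mean, modelled as lookup keys (an illustration only, not a real NAT implementation; the addresses are examples):

```python
# "Cone"-style NAT: the mapping is keyed on the internal endpoint only, so any
# outside host can reach internal 10.0.0.5:4000 via the public 198.51.100.1:62000.
cone_nat = {
    ("10.0.0.5", 4000): ("198.51.100.1", 62000),
}

# Symmetric NAT: the destination is part of the key, so the mapping is valid
# only for packets coming back from that one destination endpoint.
symmetric_nat = {
    ("10.0.0.5", 4000, "203.0.113.9", 3478): ("198.51.100.1", 62001),
}
```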
You're not right. All peers have IP addresses in order to communicate, and can be reached on those same addresses, provided a firewall allows it.
NATs tend to be optimized for client-initiated client-server traffic only. That typically means they initially allow outbound traffic only, and only allow inbound traffic on the same line after outbound traffic has happened. Perfect for servers. See this WebRTCHacks article for an intro to the problem.
This is where ICE comes in to attempt to poke holes in the firewall from the inside (client-side), in order to establish a line of communication directly between two peers, without needing any "server" socket, whatever that means.
How ICE works is quite complicated, and is explained in detail in the RFC.
But in broad terms it works in a number of steps:
Each peer (e.g. browser) has an "ICE agent" that collects candidates. Candidates are addresses (IP:port numbers) at which this peer can be reached, including:
Host candidates: e.g. immediate LAN/wifi/VPN IPs of the machine.
Server-reflexive candidates: public (outside-NAT) addresses of the machine, obtained by bouncing requests off mirroring (STUN) servers on the internet (see the STUN sketch after this list).
Relay candidates: addresses to a shared TURN server to forward data if all else fails.
Once discovered, candidates are inserted into the local SDP and trickled over the signaling channel to the other peer, where they are inserted into its remote description, and the other agent sees them.
Once an ICE agent has both local and remote candidates, it starts pairing local and remote candidates, and checks them for connectivity by sending STUN requests on them (effectively attempts at reaching the peer).
Successful pairs are ones both ICE agents have gotten a response back on (a 4-way handshake if you will).
If there's more than one successful pair, they're sorted by some metric, and the best pair becomes selected.
The selected pair is then used to send media over. One pair is needed for each track of (video or audio) media.
If a better pair is found later, the selected pair may change, affecting what address media is sent on.
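If you want to see what a server-reflexive candidate actually is, here is a hedged sketch of a single STUN Binding request (RFC 5389) that asks a public STUN server for the address your NAT presents to the outside; the STUN server shown is just a commonly used public one:

```python
import os
import socket
import struct

STUN_SERVER = ("stun.l.google.com", 19302)
MAGIC_COOKIE = 0x2112A442

def server_reflexive_address():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    txn_id = os.urandom(12)
    # Binding request: type 0x0001, zero-length body, magic cookie, transaction ID.
    sock.sendto(struct.pack("!HHI12s", 0x0001, 0, MAGIC_COOKIE, txn_id), STUN_SERVER)
    data, _ = sock.recvfrom(1024)

    # Walk the attributes after the 20-byte header, looking for XOR-MAPPED-ADDRESS.
    pos = 20
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            port = struct.unpack_from("!H", data, pos + 6)[0] ^ (MAGIC_COOKIE >> 16)
            raw_ip = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", raw_ip)), port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are padded to 32 bits
    return None

print(server_reflexive_address())
```

Run it from behind your NAT and the printed IP:port is the "public" address that gets offered to the other peer as a candidate.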
TURN should only be needed in cases where either both clients are behind symmetric NATs, or UDP traffic is blocked entirely.
My job is to write a distributed client/server application with some concurrent tasks, so I decided to use Akka.NET for the concurrency issues. Akka remoting is used to implement the IPC between server and client. For various reasons more than one client of the same type may run on a workstation, so I configured these clients for dynamic assignment of a TCP port. This worked fine for sending messages to the server.
My problem is pushing some information to the clients. To accomplish this task an actor exists on the client. Now the server creates a reference for this actor, and for that it needs the port the client is listening on. My idea is to send the TCP port the client uses to the server as part of some sort of connection procedure, using an actor on the server.
After searching for some hours I didn't find any hint of where to find the dynamically assigned TCP port. So how would the client get the assigned TCP port?
OK, I could use Akka.Cluster. But using Akka.Cluster is breaking a fly on the wheel, I think. And whether it solves my issue remains to be seen.
Two suggestions, assuming that it is your client that makes the first contact with the server.
I'd have the server keep track of which clients are connected. I'd probably have a heartbeat message that gets sent once every few seconds from each client system. This way you can store an IActorRef for each alive client and send messages back without the need for finding the port. IActorRefs are preferable wherever possible for location transparency.
If you actually need to explicitly find the port, you may be able to extract it from the Path property of the IActorRef of one of the actors on the client system.
Thanks to Patrick's suggestions my issue is solved.
The solution is to extract the needed information from the sender's path, which is available while handling the hello message. With this information the server is able to maintain a list of all connected clients and their network addresses.
Thanks a lot @patrick.
Regards Gregor
Unfortunately I don't know much about networks. I am writing a program that has two versions: a server version and a client version. Let's assume that the client version is installed on, say, 20 PCs that are connected to the server over Ethernet. The client versions need to CONSTANTLY get some data from the server. The data is kind of serial. I want to know a way to broadcast the data, which gets updated every second, and make it available to all the other PCs in the network. Could I use the HTTP port for this, like writing the data to an HTML page or something? Or is there a better port or method for doing this?
Any ideas will be greatly appreciated.
This sounds like a pretty straightforward application of TCP sockets. The server would be set up to "listen" on a particular port (you pick the port number, say 12345), and each client would make a TCP connection to the server on that port.
Whenever the server has data to send, it would send it once to each connected client. This could mean that the server sends the data up to 20 times on different sockets, but that's fine. The client would read the data from its connected socket to the server.
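A bare-bones sketch of that approach (Python, thread-per-client for clarity rather than performance; port 12345 and the once-per-second timestamp payload are just examples):

```python
import socket
import threading
import time

clients = []
lock = threading.Lock()

def accept_loop(server):
    # Keep accepting new client connections and remember their sockets.
    while True:
        conn, _ = server.accept()
        with lock:
            clients.append(conn)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 12345))
server.listen()
threading.Thread(target=accept_loop, args=(server,), daemon=True).start()

while True:
    update = time.strftime("%H:%M:%S\n").encode()  # whatever data changes each second
    with lock:
        for conn in list(clients):
            try:
                conn.sendall(update)
            except OSError:
                clients.remove(conn)  # drop clients that have disconnected
                conn.close()
    time.sleep(1)
```

Each client simply connects to port 12345 and reads from its socket in a loop; if the connection drops, it reconnects.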
There are other alternatives, such as UDP or even UDP multicast, but these usually end up being a lot more complicated because UDP doesn't guarantee that packets always arrive at the destination (and they may even be duplicated or out of order). TCP ensures that the data you send either arrives complete in the correct order, or doesn't arrive at all (in that case the connection would be dropped).
An example of this sort of multiple TCP connection is VNC:
VNC is widely used in educational contexts, for example to allow a distributed group of students simultaneously to view a computer screen being manipulated by an instructor, or to allow the instructor to take control of the students' computers to provide assistance.
There are many ways; you can choose any of them, but I think the document below will help you a lot.
Multicast over TCP/IP HOWTO:
http://www.ibiblio.org/pub/Linux/docs/howto/other-formats/html_single/Multicast-HOWTO.html#sect-trans-prots
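If you do go the multicast route, a minimal sketch looks roughly like this (Python; the group address and port are arbitrary examples, and the sender and receiver are meant to run as separate processes):

```python
import socket
import struct
import sys

GROUP, PORT = "239.255.0.1", 5007

if sys.argv[1:] == ["send"]:
    # Run with "send" on the server: one datagram goes to every group member.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    tx.sendto(b"hello group", (GROUP, PORT))
else:
    # Run without arguments on each client PC: join the group and wait for data.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(rx.recvfrom(1024))
```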