How does Skype work in the imo.im and IM+ services? Any guesses?
I think there are only three ways:
Running a copy of the Skype client on the server for each connecting client
Running a copy of the SkypeKit runtime on the server for each client
Reverse-engineering the Skype protocol...
(Yes, I know that 1 and 2 are illegal.)
Does anyone have any information?
Probably some kind of SIP-Skype hardware gateway?
http://shop.skype.com/phones/#pbx-systems lists some.
Just my guess about possible legal ways, though. Reverse-engineering the Skype protocol does not seem realistic to me; those guys are very protective of the protocol's obscurity and change the protocol details quite often.
I'm currently working on an app that will need some realtime communication between two clients. Not necessarily text chat. I was wondering if I can utilize free IRC services like Freenode to act as the backend of sorts for my app's communication?
I skimmed through their TOS and I couldn't find anything against it. But I want to know if there are some gotchas that I need to be aware of.
It sounds like what you're really asking is something like the following...
How can I communicate between two clients over the Internet, even if both of these clients are behind some kind of firewall that prevents a direct TCP or UDP connection?
Answer: The usual solution is to use a server on the Internet that is reachable by both clients as an intermediary, to relay their traffic. Until recently this was accomplished in a very application-specific manner and required managing a dedicated server on the Internet. But what if there were a way to offload the burden to someone else...
I was wondering if I can utilize free IRC services like Freenode to act as the backend of sorts for my app's communication?
Answer: Probably not. Or if it works for your test app, you will quickly get banned in production when you try to send a significant amount of traffic through the IRC servers. Fortunately, this kind of relay service is actually available for developers and production use cases. WebRTC was designed specifically to make these kinds of real-time applications possible. The firewall-busting buzzwords you should be Googling for are STUN and TURN.
I'm currently looking into Twilio's hosted service for my own apps, but it's also possible to host your own TURN relay on Amazon EC2. Unfortunately there's no such thing as a free lunch, so you'll have to pay some amount for each of these services, but you'll be able to bask in the warm glow of writing robust, standards-compliant code.
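To make the STUN/TURN part concrete, here is a minimal browser-side sketch of an RTCPeerConnection configured with both; the TURN URL and credentials are placeholders, not a real service:

```typescript
// Minimal sketch of a browser RTCPeerConnection configured with STUN and TURN.
// The TURN URL and credentials below are placeholders, not a real service.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },   // public STUN server
    {
      urls: "turn:turn.example.com:3478",       // hypothetical TURN relay
      username: "demo-user",
      credential: "demo-pass",
    },
  ],
});

// A data channel gives you "realtime communication between two clients" once
// the offer/answer exchange (over your own signaling server) has completed.
const channel = pc.createDataChannel("chat");
channel.onopen = () => channel.send("hello from peer A");
channel.onmessage = (event) => console.log("received:", event.data);
```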
My knowledge of network programming is limited, so all comments are more than welcome. Essentially my question boils down to the following:
Q1. Is there really such a thing as decentralized asynchronous cross-platform peer-to-peer communication?
Let me explain myself.
If we have two HTTP servers running on computers with public IP addresses, then clearly the answer is yes, assuming one writes a protocol for the interaction.
To go one step further, if one of them (or both) is behind a router, then with port forwarding the communication can still be established. However, this is where the problems start: if someone wants to run such a server in the background, say on a mobile phone, an app relying on that server only really works when the user is at home (we cannot really expect to set up port forwarding everywhere we go).
But even beyond that,
Q2. Do mobile phones obtain an actual IP address from telecom companies when not using Wi-Fi?
If this is true, then clearly one can have cross-platform asynchronous peer-to-peer communication, at the expense of not using Wi-Fi, by running an HTTP server on a smartphone. (I understand that this is not convenient, but it is certainly doable.)
To conclude, the two (perhaps there are more) relevant questions I can think of are:
Q3. How does Skype really work?
Q4. How does Viber really work?
Based on an answer about Skype: if one of the call parties, or both of them, does not have a public IP, they send voice traffic to another online Skype node over UDP or TCP.
So it appears there is no direct communication in Skype in that case, because the traffic has to pass through an intermediary node.
Regarding Viber, I could not find a thorough answer to this particular question. Do people talk to each other through a centralized Viber server, or do they establish a direct connection? If they do establish a direct connection, I really want to know how they manage it, since a mobile phone may or may not have a reachable address. How is a Viber message routed to my phone from a friend of mine even when Viber is not running and I am behind a router?
I guess the answer for Viber is really push notifications, but as far as I understand, all variations of push notifications rely on open connections, through which the platform's servers deliver notifications to the clients. So this approach gives the feeling of being asynchronous, but essentially it is not. We are cheating, in the sense that there is a constantly open connection to a server, and moreover the application server has to push the notification through that server. Schematically:
A > Central App Server > Central Server w/ open connection to my cellphone > me
So, this seems to be once again a centralized approach.
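As a rough illustration of that "constantly open connection" pattern, here is a minimal sketch; the wss://push.example.com endpoint and the message shapes are hypothetical:

```typescript
// Minimal sketch of the "constantly open connection" push pattern.
// wss://push.example.com and the message shapes are hypothetical.
const socket = new WebSocket("wss://push.example.com/devices/12345");

socket.onopen = () => {
  // The device keeps this connection open so the central server can push to it.
  socket.send(JSON.stringify({ type: "register", deviceId: "12345" }));
};

socket.onmessage = (event) => {
  // A message from A arrives via: A -> app server -> push server -> this open socket.
  const notification = JSON.parse(event.data);
  console.log("pushed notification:", notification);
};
```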
Honestly, the only approach I can think of that is both decentralized and asynchronous (on mobile phones as well) is to run an HTTP server on every platform/device, but this comes at the expense of not using Wi-Fi and assumes that a telecom company really assigns a reachable IP address to every mobile phone (which I do not know to be true; do you?).
What about WASTE, darknets, F2F networks, etc.? Do they offer advantages in the sense of more direct asynchronous communication between the interested parties? Are there real-world applications (including on mobile phones) using such approaches for communication?
Really, this is not the actual problem I would like to work on, but I would like to know what the state of the art is so that I can figure out how to proceed from there. So all comments are really more than welcome. If you have references for the state of the art I would like to know about them as well, but a brief description would also be nice.
I appreciate all your time and effort in advance.
You asked many questions; here is the beginning of the answers:
Q1: Yes. For example, take BitTorrent's very successful 10 million+ node network. Aside from the bootstrapping process, the protocol is entirely decentralized and asynchronous. See here for more info.
Q2: Yes! Go to www.whatismyip.com on your mobile phone and you will see your assigned IP. However, you are likely to be heavily filtered (e.g., incoming traffic on port 80 is likely to be blocked).
Q3: It has elements of P2P and clever tricks to get around NAT issues - see here for more info.
Q4: I don't know.
I'm a newbie to WebRTC and there is some stuff I did not get. If possible, I would like answers to these questions, and I think they will be a good reference for other people on the web.
1) The WebRTC server code, which is left to the developer to handle: what is its job? I mean, there are a lot of signaling methods using WebSocket and socket.io, but what do they send to the server?
2) I have seen some GitHub sources in my learning path that provide these "id"s. I am wondering: does the server code provide these ids, and what is its job?
3) I did not get how I can do video conferencing in a real-world scenario. Any concrete example or explanation?
4) I am wondering if I can use a combination of SignalR and WebRTC. Is that possible, since SignalR provides real-time communication and data delivery, while WebRTC provides many services like video conferencing, audio, data exchange, etc.? And is it a valid choice for the server code?
1) The server side differs depending on the method used for signaling. For WebRTC specifically, because any browser that supports WebRTC will also support WebSocket, WebSocket is the likely candidate for the signaling method.
Now, the server side for WebSocket can be somewhat complex if you write it from scratch: you first have to handle the handshake that upgrades the connection to ws or wss, and after that you have to handle the framing and masking/unmasking of all messages sent over the WebSocket. This is not trivial at all, but if you do some searching around SO and the web in general for information about how to code the server side for WebSocket, you should be able to find what you're looking for.
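For illustration, here is a minimal signaling-relay sketch; it assumes the Node ws package (which takes care of the handshake and framing mentioned above) and simply forwards offers, answers, and ICE candidates between connected peers:

```typescript
// Minimal signaling relay using the Node "ws" package (an assumption; a library
// like this handles the WebSocket handshake and framing for you). It just
// forwards SDP offers/answers and ICE candidates between connected peers.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const peers = new Set<WebSocket>();

wss.on("connection", (socket) => {
  peers.add(socket);

  socket.on("message", (data) => {
    // Relay the signaling message to every other connected peer.
    for (const peer of peers) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString());
      }
    }
  });

  socket.on("close", () => peers.delete(socket));
});
```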
2) I can't understand what you're asking in this question. Could you please provide an example/link? Thanks.
3) You use WebRTC to establish a peer-to-peer connection between two clients to quickly transfer data back and forth. One benefit of this peer-to-peer connection (and the speed at which you can transfer data) is the ability to establish video connections. Also, you can establish video links between more than two clients at a time, although with too many connections, there can be bandwidth issues.
What specifically do you want to know about how to use this technology for video conferencing?
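As a rough example for question 3, here is a browser-side sketch of one end of a two-party video call; sendSignal() is a hypothetical helper standing in for whatever signaling channel (WebSocket, SignalR, etc.) you actually use:

```typescript
// Browser-side sketch of one end of a two-party WebRTC video call.
// sendSignal() is a hypothetical signaling helper; the incoming answer and
// candidates would be applied with setRemoteDescription() and addIceCandidate()
// when they arrive over that same channel.
declare function sendSignal(message: object): void;

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

async function startCall(): Promise<void> {
  // Capture the local camera/microphone and send the tracks to the peer.
  const localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

  // Show the remote peer's video when its tracks arrive.
  pc.ontrack = (event) => {
    const remoteVideo = document.querySelector<HTMLVideoElement>("#remote")!;
    remoteVideo.srcObject = event.streams[0];
  };

  // Trickle ICE candidates to the peer via the signaling channel.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendSignal({ candidate: event.candidate });
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal({ sdp: pc.localDescription });
}
```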
4) I'm not too familiar with SignalR, but looking at the home page, SignalR is used to push data from the server. WebRTC doesn't use a server at all (once the peer-to-peer connection has been established). By that rationale, WebRTC will likely always provide a better, faster connection than SignalR.
Please clarify some of your questions as noted above, and I will help in any way I can. Thanks.
I can answer number 4...
You can of course use SignalR to do the signaling between clients to get WebRTC running, but SignalR has no built-in functionality for WebRTC signaling, so you are in for a pretty nasty job if you are planning on doing it yourself.
Since you are asking about SignalR, I am jumping to conclusions here and guessing that you are a .NET developer. If so, there are .NET libraries out there that have already taken care of the signaling for you. One of them is XSockets.NET.
Just install the sample package from XSockets and you will have a multi video chat up and running in a minute.
Sorry for not answering 1, 2 and 3... but I hope that the sample package from XSockets will answer those questions :)
For example, say we have a chat client application executable (and we can change the server endpoint).
How can I analyze the sockets and the packets sent to the server and back, so that I can write a server emulator for that client?
This is just an example. I know this is a very general question, but I need a general answer. What techniques can you suggest? What tools? Any tutorials or books?
This is for educational purposes and I have no intention of violating any law.
Edited: Basically, I want to work out the protocol the client and server use to communicate.
If you are considering writing a server emulator, you must know what to emulate, and therefore you must know the details of the protocol. So I doubt that approach can help you discover the unknown features.
I think the Wireshark protocol analyzer can help you see all the network dialogue between the server and the client. You do not have to write a custom server; just observe the actual exchanges :o)
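If you also want something scriptable alongside Wireshark, and since the question says the client's server endpoint can be changed, one common trick is to point the client at a small logging proxy that forwards traffic to the real server and dumps what it sees (only useful if the protocol is not encrypted). A rough sketch, with the real server address as a placeholder:

```typescript
// Rough sketch of a TCP logging proxy: point the chat client at localhost:9000
// and it forwards traffic to the real server while hex-dumping both directions.
// REAL_HOST / REAL_PORT are placeholders for the actual server endpoint.
import * as net from "net";

const REAL_HOST = "chat.example.com"; // placeholder
const REAL_PORT = 5222;               // placeholder

net.createServer((client) => {
  const upstream = net.connect(REAL_PORT, REAL_HOST);

  client.on("data", (chunk) => {
    console.log("client -> server:", chunk.toString("hex"));
    upstream.write(chunk);
  });
  upstream.on("data", (chunk) => {
    console.log("server -> client:", chunk.toString("hex"));
    client.write(chunk);
  });

  client.on("close", () => upstream.end());
  upstream.on("close", () => client.end());
  client.on("error", () => upstream.destroy());
  upstream.on("error", () => client.destroy());
}).listen(9000, () => console.log("logging proxy on localhost:9000"));
```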
I'm developing a multi-player game and I know nothing about how to connect one client to another via a server. Where do I start? Are there any whizzy open source projects which provide the communication framework into which I can drop my message data, or do I have to write a load of complicated multi-threaded sockety code? Does the picture change at all if the clients are running on phones?
I am language agnostic, although ideally I would have a Flash or Qt front end and a Java server, but that may be a bit greedy.
I have spent a few hours googling, but the whole topic is new to me and I'm a bit lost. I'd appreciate help of any kind - including how to tag this question.
If latency isn't a huge issue, you could just implement a few web services to do message passing. This would not be as slow as you might think, and it is easy to implement across languages. The downside is that the client has to poll the server to get updates, so you could be looking at a few hundred ms to get from one client to another.
You can also use the built-in Flex messaging interface. There are provisions there to allow client-to-client interactions.
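As a rough sketch of the polling approach described above, assuming a hypothetical /messages endpoint on your game server:

```typescript
// Rough sketch of the polling approach: each client periodically asks a
// hypothetical /messages endpoint for anything posted since its last check.
async function pollMessages(serverUrl: string, playerId: string): Promise<void> {
  let lastSeen = 0;

  while (true) {
    const res = await fetch(`${serverUrl}/messages?player=${playerId}&since=${lastSeen}`);
    const messages: { id: number; body: string }[] = await res.json();

    for (const msg of messages) {
      console.log("new message:", msg.body);
      lastSeen = Math.max(lastSeen, msg.id);
    }

    // Poll every 250 ms; this is where the "few hundred ms" latency comes from.
    await new Promise((resolve) => setTimeout(resolve, 250));
  }
}
```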
Typically game engines send UDP packets because of latency. The fact is that TCP is just not fast enough and reliability is less of a concern than speed is.
Web services would compound the latency issues inherent in TCP due to additional overhead. Further, they would eat up memory depending on the number of expected players. Finally, they have a large amount of payload overhead that you just don't need (XML, anyone?).
There are several ways to go about this. One way is centralized messaging (client/server). This means that you would have a Java server listening for UDP packets from the clients. It would then rebroadcast them to any of the relevant users.
A second way is decentralized (peer-to-peer). A client registers with the server to state what game/world it's in. From that it gets a list of other clients in that world. The server maintains that list and notifies the other clients of people who join or drop out.
From that point forward, clients send UDP packets directly to the other users.
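Here is a rough sketch of the centralized (client/server) variant; it is written against Node's dgram module purely for illustration (the answer above assumes a Java server), and the port number is arbitrary:

```typescript
// Rough sketch of the centralized (client/server) variant: a UDP relay that
// remembers every client it has heard from and rebroadcasts each packet to
// the others. Port 41234 is arbitrary.
import * as dgram from "dgram";

const server = dgram.createSocket("udp4");
const clients = new Map<string, { address: string; port: number }>();

server.on("message", (msg, rinfo) => {
  const key = `${rinfo.address}:${rinfo.port}`;
  clients.set(key, { address: rinfo.address, port: rinfo.port });

  // Rebroadcast the packet to every other known client.
  for (const [otherKey, other] of clients) {
    if (otherKey !== key) {
      server.send(msg, other.port, other.address);
    }
  }
});

server.bind(41234, () => console.log("UDP relay listening on port 41234"));
```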
If you are looking for a high-performance communication framework, take a look at the ACE C++ framework (it has Java bindings).
The official web site is: http://www.cs.wustl.edu/~schmidt/ACE-overview.html
You could also look into Flash Media Interactive Server, or if you want a Java implementation, Wowza or Red5. Those use AMF and provide native functionality for SharedObjects, including syncing of the SharedObjects among connected clients.
Those aren't peer-to-peer though (yet; it's coming soon, I hear). They use centralized messaging managed by the server.
Good luck