This is not a specific coding question but more of a research project.
I am very interested in streaming services. To be more specific, I am interested in streaming audio from a server to a device. I understand the basic logic behind it, but I want to get a better understanding of how it actually works.
I would like to try implementing a streaming server and a streaming client (server on Mac and client on iOS), but I am having some trouble finding any pages that offer "tutorials" on how it's done. I have managed to get the client side working somehow, and in its example it plays a stream from shoutmedia. How can I implement the server side? I would prefer it in C++ (a server in Qt, for example, would be a bullseye), but any links will be appreciated.
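For a rough idea of what the Qt server side could look like, here is a minimal sketch that simply pushes chunks of an audio file to each connected client over TCP. The file name, port, chunk size, and pacing are arbitrary placeholders; a real streaming server would wrap the audio in a proper transport (HTTP/SHOUTcast-style metadata, or RTP) rather than raw chunks:

```cpp
// Minimal sketch: accept TCP connections and push raw audio chunks on a timer.
#include <QCoreApplication>
#include <QFile>
#include <QHostAddress>
#include <QTcpServer>
#include <QTcpSocket>
#include <QTimer>

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);

    QFile source("audio.mp3");               // hypothetical audio source
                                             // (one shared read cursor, so really one client at a time)
    source.open(QIODevice::ReadOnly);

    QTcpServer server;
    QObject::connect(&server, &QTcpServer::newConnection, [&]() {
        QTcpSocket* client = server.nextPendingConnection();

        // Push a fixed-size chunk every 100 ms to pace the stream roughly.
        QTimer* timer = new QTimer(client);  // parented to the socket's lifetime
        QObject::connect(timer, &QTimer::timeout, [client, timer, &source]() {
            QByteArray chunk = source.read(4096);
            if (chunk.isEmpty()) {
                timer->stop();
                client->disconnectFromHost();
            } else {
                client->write(chunk);
            }
        });
        timer->start(100);
    });

    server.listen(QHostAddress::Any, 8000);
    return app.exec();
}
```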
I think streaming is quite an interesting topic.
Thanks!
I'm a newbie to WebRTC, and there is some stuff I did not get. If possible, I would like answers to these questions, and I think they could be a good reference for other people on the web.
1) The WebRTC server code, the part that is left for the developer to handle: what is its job? I mean, there are a lot of signalling methods using WebSocket and socket.io, but what do they actually send to the server?
2) In my learning path I have seen some GitHub sources that provide these "id"s. I'm wondering: does the server code provide these ids, and what is their job?
3) I did not get how I can do video conferencing in a real-world scenario. Any concrete example or explanation?
4) I'm wondering if I can use a combination of SignalR and WebRTC. Is that possible, since SignalR provides real-time communication and data delivery, while WebRTC provides many services like video conferencing, audio, data exchange, etc.? And would that be valid server code?
1) The server side differs depending on the method used for signalling. For WebRTC specifically, because any browser that supports WebRTC will also support WebSocket, WebSocket is the likely candidate for the signalling channel.
Now, the server side for WebSocket can be somewhat complex: you first have to handle the HTTP handshake that upgrades the connection to ws (or wss, if you add TLS), and after that you have to handle the framing of every message sent over the line, including unmasking the frames the client sends. This is not trivial at all, but if you do some searching around SO and the web in general for how to code the server side for WebSocket, you should be able to find what you're looking for.
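To make the handshake part concrete, here is a hedged sketch of just the step that computes the Sec-WebSocket-Accept value from the client's Sec-WebSocket-Key, assuming OpenSSL is available for SHA-1 and base64; frame parsing, unmasking, and the rest of RFC 6455 still have to be handled after this:

```cpp
// Sketch of the server's side of the WebSocket upgrade handshake (RFC 6455).
#include <openssl/bio.h>
#include <openssl/evp.h>
#include <openssl/sha.h>
#include <string>

std::string websocketAcceptKey(const std::string& clientKey) {
    // RFC 6455: append the fixed GUID to the client's Sec-WebSocket-Key,
    // SHA-1 hash the result, then base64-encode the 20-byte digest.
    static const std::string guid = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
    const std::string input = clientKey + guid;

    unsigned char digest[SHA_DIGEST_LENGTH];
    SHA1(reinterpret_cast<const unsigned char*>(input.data()), input.size(), digest);

    // Base64-encode the digest via an OpenSSL memory BIO.
    BIO* b64 = BIO_new(BIO_f_base64());
    BIO_set_flags(b64, BIO_FLAGS_BASE64_NO_NL);
    BIO* mem = BIO_new(BIO_s_mem());
    BIO_push(b64, mem);
    BIO_write(b64, digest, SHA_DIGEST_LENGTH);
    BIO_flush(b64);

    char* encoded = nullptr;
    long len = BIO_get_mem_data(mem, &encoded);
    std::string accept(encoded, len);
    BIO_free_all(b64);
    return accept;
}

// The server completes the upgrade by answering the client's GET request with:
//   HTTP/1.1 101 Switching Protocols
//   Upgrade: websocket
//   Connection: Upgrade
//   Sec-WebSocket-Accept: <websocketAcceptKey(clientKey)>
```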
2) I can't understand what you're asking in this question. Could you please provide an example/link? Thanks.
3) You use WebRTC to establish a peer-to-peer connection between two clients to quickly transfer data back and forth. One benefit of this peer-to-peer connection (and the speed at which you can transfer data) is the ability to establish video connections. Also, you can establish video links between more than two clients at a time, although with too many connections, there can be bandwidth issues.
What specifically do you want to know about how to use this technology for video conferencing?
4) I'm not too familiar with SignalR, but looking at the home page, SignalR is used to push data from the server. WebRTC doesn't use a server at all (once the peer-to-peer connection has been established). By that rationale, WebRTC will likely always provide a better, faster connection than SignalR.
Please clarify some of your questions as noted above, and I will help in any way I can. Thanks.
I can answer number 4...
You can of course use SignalR to do the signalling between clients to get WebRTC running, but SignalR has no built-in functionality for WebRTC signalling, so you are in for a pretty nasty job if you are planning on doing it yourself.
Since you are asking about SignalR, I am jumping to conclusions here and guessing that you are a .NET developer? If so, there are .NET libraries out there that have already taken care of the signalling for you. One of them is XSockets.NET.
Just install the sample package from XSockets and you will have a multi-user video chat up and running in a minute.
Sorry for not answering 1, 2 and 3... but I hope that the package from XSockets will cover those questions :)
I've been doing some research into how to implement a server-free, point-to-point video/audio chat (i.e., my own Skype without text messaging).
I've been looking for ways to implement it and have come up with the following ideas:
A multithreaded C++ program (because I know some C++) capturing audio and video (with Qt), sending them through 2 different UDP sockets and reading video and audio from 2 other UDP sockets at the other 'point'. So I would have to write the UDP server and client multithreaded, with a total of 4 threads: 2 for sending audio and video, and the other 2 for receiving them (a rough sketch of this socket plumbing follows the list).
Writing my own protocol to carry video and audio in the same thread, something like splitting each packet's payload between the audio and video buffers, which would leave me with only 2 threads in the application but a lot more error-prone code to write.
I've been looking at some real-time media protocols, and some of them look interesting. Maybe I should study these protocols and implement interfaces to them, and use them instead of 'creating' my own.
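To make idea 1 a bit more concrete, here is a rough sketch of the socket plumbing for one media type using Qt's QUdpSocket. The port numbers, peer address, and packet contents are arbitrary placeholders; video would follow the same pattern on different ports:

```cpp
// Sketch: one QUdpSocket receiving audio datagrams, one sending them.
#include <QCoreApplication>
#include <QHostAddress>
#include <QUdpSocket>

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);

    // Receiving side: bind a local port and drain datagrams as they arrive.
    QUdpSocket rxAudio;
    rxAudio.bind(QHostAddress::Any, 5004);
    QObject::connect(&rxAudio, &QUdpSocket::readyRead, [&rxAudio]() {
        while (rxAudio.hasPendingDatagrams()) {
            QByteArray packet;
            packet.resize(int(rxAudio.pendingDatagramSize()));
            rxAudio.readDatagram(packet.data(), packet.size());
            // ...hand the packet to the audio decoder / jitter buffer...
        }
    });

    // Sending side: push each encoded audio frame to the peer as a datagram.
    QUdpSocket txAudio;
    QByteArray encodedFrame(160, '\0');   // stand-in for one encoder output frame
    txAudio.writeDatagram(encodedFrame, QHostAddress("192.0.2.10"), 5004);

    return app.exec();
}
```

Qt's event-driven sockets also hint at an alternative to the 4-thread design: all the sockets can live in one event-loop thread, with the encoders/decoders doing the heavy lifting elsewhere.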
Now, the actual question(s):
Is there any documentation on how to accomplish this? Maybe some 'state of the art' APIs/protocols that are being used, or well-implemented solutions suited to this problem?
If I choose to implement audio separately from video, is VoIP a possible solution for the audio connection?
Is Qt a good tool for this purpose? I have never used Qt before, and for the video and audio interfaces I also thought about openFrameworks, so I was wondering whether anyone has ever used one of these frameworks and whether this is the right choice.
I know that my question has no code and that the range of possible answers is wide, but I really need some help here.
Thanks.
First, you should answer one question: how should your clients connect and authorize each other without a server part?
Notes: 1) Skype has servers. 2) A lot of internet users access the web through NAT / proxies.
Of course, you can try to implement something for learning purposes, but if you want to create something useful, try third-party solutions created by specialists, for example Google's libjingle.
You need a VoIP library :)
There's no need to start from scratch; you can use open-source libraries like opalvoip.
I am searching for a good method to transfer data over the internet, and I work in a C++/Windows environment. The data is binary, a compressed blob of an extracted image. The input and requirements are as follows:
6 kB/packet at 10 packets/sec (60 kB per second)
Reliable data transfer
I am new to network programming, and so far I have figured out that one of the following methods might be suitable:
Sockets
MSMQ (MS Message Queuing)
The client runs in a browser (showing real-time images in the browser), while the server runs native C++ code. Please let me know if there are any other methods for achieving the same thing. Which one should I go for, and why?
If the server determines the pace at which images are sent, which is what it looks like, a server-push style solution would make sense. What most browsers (and even non-browser clients) are settling on these days is WebSockets.
The main advantage WebSockets have over most proprietary protocols, apart from becoming a widely adopted standard, is that they start out as an ordinary HTTP request (an upgrade handshake) and can thus pass through (most) proxies, firewalls, etc.
On the server side, you could potentially integrate node.js, which lets you implement WebSockets easily and comes with a lot of other libraries. node.js itself is written in C++ and is extensible via C++ as well as JavaScript, for which it hosts a VM. node.js's main feature is being asynchronous at every level, making that style of programming the default.
But of course there are other ways to implement WebSockets on the server side, maybe node.js is more than you need. I have implemented a C++ extension for node.js on Windows and use socket.io to do WebSockets and non-WebSocket transports for older browsers, and that has worked out fine for me.
But that was textual data. In your binary data case, socket.io wouldn't do it, so you could check out other libraries that do binary over WebSockets.
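For orientation, this is roughly what a server-to-client binary WebSocket frame looks like at the byte level per RFC 6455 (server-to-client frames are not masked); any real library handles this, plus fragmentation and control frames, for you:

```cpp
// Sketch: build a single unmasked binary WebSocket frame around a payload.
#include <cstdint>
#include <vector>

std::vector<uint8_t> makeBinaryFrame(const uint8_t* payload, size_t len) {
    std::vector<uint8_t> frame;
    frame.push_back(0x82);                          // FIN = 1, opcode 0x2 = binary

    if (len < 126) {
        frame.push_back(static_cast<uint8_t>(len)); // 7-bit payload length
    } else if (len <= 0xFFFF) {
        frame.push_back(126);                       // 16-bit extended length follows
        frame.push_back(static_cast<uint8_t>(len >> 8));
        frame.push_back(static_cast<uint8_t>(len & 0xFF));
    } else {
        frame.push_back(127);                       // 64-bit extended length follows
        for (int shift = 56; shift >= 0; shift -= 8)
            frame.push_back(static_cast<uint8_t>((static_cast<uint64_t>(len) >> shift) & 0xFF));
    }

    frame.insert(frame.end(), payload, payload + len);
    return frame;                                   // write this to the TCP socket
}
```

A 6 kB image blob lands in the 16-bit extended-length case, so the per-frame overhead is only 4 bytes.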
Is there any specific reason why you cannot run a server on your Windows machine? At 60 kB/second, it looks like some kind of embedded device?
Based on your description, you need to show image information in real time in a browser. You could possibly use HTTP, but it is stateless, meaning once the information is transferred, you lose the connection; your client would need to poll the C++/Windows machine. If you are pretty confident the information is generated periodically, you can use this approach. This requires a server, so it only applies if the answer to my first question is yes.
Another option is a chat protocol: something like a Jabber client running on your client and a Jabber server on your C++/Windows machine. Chat protocols allow almost real-time delivery.
While it may seem to make sense, I wouldn't use MSMQ in this scenario. You may not run into a problem now, but MSMQ messages are limited in size and you may eventually hit a wall because of this.
I would use TCP for this application; TCP is built with reliability in mind, and you can simply feed data through a socket. You may have to figure out a very simple protocol yourself, but it should be the best choice.
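As an illustration of such a "very simple protocol", here is a hedged sketch of length-prefixed framing over TCP (the function names are made up for this example; the actual Winsock send()/recv() calls are left to your existing code):

```cpp
// Sketch: 4-byte big-endian length prefix in front of every image blob,
// because TCP is a byte stream and the receiver must find packet boundaries.
#include <cstdint>
#include <optional>
#include <vector>

// Sender side: wrap one compressed image blob; write the result with send().
std::vector<uint8_t> frameImagePacket(const std::vector<uint8_t>& blob) {
    const uint32_t n = static_cast<uint32_t>(blob.size());
    std::vector<uint8_t> out = {
        uint8_t(n >> 24), uint8_t(n >> 16), uint8_t(n >> 8), uint8_t(n)
    };
    out.insert(out.end(), blob.begin(), blob.end());
    return out;
}

// Receiver side: append whatever recv() delivers to `buffer`, then call this in
// a loop; each successful call pops one complete image blob off the front.
std::optional<std::vector<uint8_t>> popImagePacket(std::vector<uint8_t>& buffer) {
    if (buffer.size() < 4)
        return std::nullopt;
    const uint32_t n = (uint32_t(buffer[0]) << 24) | (uint32_t(buffer[1]) << 16) |
                       (uint32_t(buffer[2]) << 8)  |  uint32_t(buffer[3]);
    if (buffer.size() < 4 + static_cast<size_t>(n))
        return std::nullopt;
    std::vector<uint8_t> blob(buffer.begin() + 4, buffer.begin() + 4 + n);
    buffer.erase(buffer.begin(), buffer.begin() + 4 + n);
    return blob;
}
```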
Unless you are using an embedded device that understands MSMQ out of the box, your best bet for using MSMQ would be to go through a proxy, and then you are still forced to deal with TCP and possibly HTTP.
In my personal time I do home automation that includes security cameras, and I use the .NET Micro Framework; even if it did have MSMQ capabilities, I still wouldn't use MSMQ for this.
I recommend that you look into MJPEG (Motion JPEG), which sounds exactly like what you would like to do.
http://www.codeproject.com/Articles/371955/Motion-JPEG-Streaming-Server
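As a sketch of the core idea (not the linked article's code): the server keeps a single HTTP response open and keeps pushing JPEG frames separated by a multipart boundary, and the browser replaces the displayed image each time a new part arrives. The std::ostream below stands in for whatever socket-backed stream you already have, and the boundary name is arbitrary:

```cpp
// Sketch of MJPEG over HTTP using multipart/x-mixed-replace.
#include <ostream>
#include <vector>

void writeMjpegResponseHeader(std::ostream& out) {
    out << "HTTP/1.1 200 OK\r\n"
           "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n"
           "Cache-Control: no-cache\r\n"
           "\r\n";
}

void writeMjpegFrame(std::ostream& out, const std::vector<char>& jpeg) {
    out << "--frame\r\n"
           "Content-Type: image/jpeg\r\n"
           "Content-Length: " << jpeg.size() << "\r\n\r\n";
    out.write(jpeg.data(), static_cast<std::streamsize>(jpeg.size()));
    out << "\r\n";
    out.flush();   // push the frame out immediately; no buffering between frames
}
```

Calling writeMjpegResponseHeader() once and then writeMjpegFrame() ten times per second matches the 10 packets/sec rate in the question.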
For example, we have a chat client application executable (and we can change the server endpoint).
How can I analyze the sockets and the packets sent to the server and back, so that I can write a server emulator for that client?
This is just an example. I know this is a very general question, but I need a general answer. What techniques can you suggest? What tools? Any tutorials or books?
This is for educational purposes, and I have no intention of violating any law.
Edited: Basically, I want to work out the protocol that the client and server use to communicate.
If you are considering writing a server emulator, you must know what to emulate, and therefore you must know the details of the protocol. So I doubt that approach can help you discover the unknown features.
I think the Wireshark protocol analyser can help you see all the network dialogue between the server and the client. You do not have to write a custom server; just spy on the actual exchanges :o)
Greetings. I'm planning on building a Flex-based multiplayer game, and I'm researching what will be required for the server end. I have PHP experience, so I started looking at ZendAMF.
Now, in this game I'll need the concept of rooms, and real-time updates to clients in those rooms, so it looks like I'll be using remote shared objects (correct, yes?). I'm not seeing where ZendAMF supports this.
So I found this page: http://arunbluebrain.wordpress.com/2009/03/04/flex-frameworks-httpcorlanorg/
It seems to indicate that ZendAMF isn't going to do what I want. WebORB for PHP seems to be the only PHP-based solution that does messaging, but on that page it doesn't mention "real-time" next to it like it does for the Java-based ones below it.
What should I be looking at for the server piece with my requirements? Do I need to make the jump to something like BlazeDS and try to pick up a bit of Java knowledge?
Thanks.
I'd highly recommend Flash Media Server if you have the cash.
I've had good experience with it in the past.
Both ZendAMF and WebORB use HTTP long polling. Think of it as pinging to check for updates. If you really need TRUE real-time push notifications, then PHP will not be your answer, because it has no threads or long-running processes. WebORB has servers in several other languages, along with BlazeDS, RubyAMF, PyAMF, and of course LCDS from Adobe, which allow for true messaging.
I think you already know the answer, but for other people looking into this as well:
All *AMF solutions use HTTP as the transfer protocol and can't keep a permanent connection: the AMF payload is sent encoded over HTTP and the connection is then closed.
When you want "real" real-time (RTMP, RTMPT), you have choices like:
open source: Red5 (Java), BlazeDS (Java), FluorineFX (.NET)
commercial: Wowza Media Server (Java), WebORB (.NET and Java)