I'm trying to stream several iPad screens to a single Python client (a computer) on a local network, but I don't know which protocol to use.
I can do it with one iPad using MonaServer, an app that streams over RTMP, plus a little Python script to read the video.
But I'm running into problems with several iPads because, as far as I can tell, RTMP uses a single port on Windows (1935), and I'm not sure it's possible to multi-stream over RTMP.
I'm not a networking pro, so I'm open to any suggestions.
What you need is to follow the wikis and usage guides of open-source projects, to build some intuition about live streaming to multiple clients.
For example, you could use OBS to publish streams to a media server, like SRS, and play them over different protocols such as RTMP, HTTP-FLV, HLS, or WebRTC.
You can publish multiple streams; they are not mutually exclusive. And you can play them with different players, depending on the protocol you choose; please read this post.
Try it.
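To make the single-port point concrete: RTMP multiplexes many streams over one port (1935) by application and stream key, so each iPad can publish to its own key on the same server. A rough sketch with ffmpeg/ffplay against a local media server (the host, application name, stream keys, and input files are placeholder assumptions):
# Publish two independent streams to the same server and port,
# distinguished only by their stream keys (ipad1, ipad2)
ffmpeg -re -i ipad1_capture.mp4 -c copy -f flv rtmp://localhost:1935/live/ipad1
ffmpeg -re -i ipad2_capture.mp4 -c copy -f flv rtmp://localhost:1935/live/ipad2
# Each stream can then be played (or read by a Python client) independently
ffplay rtmp://localhost:1935/live/ipad1
ffplay rtmp://localhost:1935/live/ipad2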
I have a server (not internet connected) that hosts a webpage with company data on an internal website. The server also contains videos (thousands of them) in a defined directory structure.
When a client connects, I can display the videos to them on the internal website. The problem is some of the video files are 1 GB or larger, and the connection to some clients is rather slow; the browser seems to be trying to download them in order to play them rather than stream them.
Is there a video streaming server that I could send a file path to and it would serve the video back to the client as a stream?
I guess what I essentially need is transcoding of the video. I'm not sure if PLEX or something like that is able to do it dynamically, as there are hundreds of videos and new videos are added all the time.
Sorry if I'm not being clear about my need. Ask me a question if I haven't been clear on a point.
...the browser seems to be trying to download them in order to play them rather than stream them.
To echo what @Offbeatmammal said in the comments, if you're using MP4 files, you need to ensure the MOOV atom is at the beginning of the file. Without it, the browser doesn't know what byte offsets to request.
Ideally, encode your video files as fragmented MP4. In FFmpeg:
ffmpeg -i ... -f mp4 -movflags frag_keyframe+empty_moov output.mp4
See also: https://stackoverflow.com/a/9734251/362536
That should allow the client to stream the MP4 files from any web server that supports HTTP/1.1 range requests. (Almost all do, unless configured otherwise.)
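If re-encoding existing files isn't practical, relocating the MOOV atom to the front of the file also fixes the problem, and FFmpeg can do that by remuxing without re-encoding (file names here are placeholders):
# Remux only (-c copy) and move the MOOV atom to the start of the file,
# so the browser can begin playback before the download completes
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4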
However, there is another point to address:
The problem is some of the video files are 1Gb or larger and the connection to some clients is rather slow...
While fixing the streaming issue means the clients won't have to download the whole file first, they still need the bandwidth to keep up with the stream. If it's possible they won't, you'll want to implement some sort of transcoder.
I would recommend using an existing segmented streaming method such as DASH or HLS. HLS is currently the most compatible, thanks to Apple's platform policies. Either will enable adaptive bitrate switching, which will allow slow clients to automatically switch to a lower bitrate stream that they can smoothly keep up with. That way, slower clients can still see the video, albeit a lower quality one, while fast clients can get the full quality video.
You can use FFmpeg to do the transcoding and HLS playlist creation.
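As a rough single-rendition sketch of that (file names, bitrates, and segment length are placeholder assumptions; a real adaptive setup would produce several renditions plus a master playlist):
# Transcode to H.264/AAC and segment into an HLS playlist with 6-second segments
ffmpeg -i input.mp4 -c:v libx264 -b:v 2000k -c:a aac -b:a 128k -hls_time 6 -hls_playlist_type vod out/stream.m3u8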
I'm not sure if PLEX or something like that is able to do it dynamically, as there are hundreds of videos and new videos are added all the time.
As for when you do this transcoding, I suppose it depends on how much load you're expecting. If it's just one or two people viewing a file, you can transcode on demand if your servers can keep up. Ideally, you'd keep at least a couple of stream variants around for less popular files, and add more later if needed.
If you're doing this live, I'd recommend doing all of your transcoding up front. You can always prune old files/variants if you need the storage back.
I'm trying to figure out what's the best method to receive live streaming video at a server and make it available back to the client.
I noticed two modules for nginx:
https://github.com/arut/nginx-rtmp-module
https://github.com/arut/nginx-ts-module
It looks like both modules support HLS for video streaming.
What is the difference then between the options?
Apparently you can stream HLS/DASH with both, but nginx-ts-module has fewer features. You can set up both using Docker and test which one suits your needs better. I'd almost always go with the simpler option.
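For a starting point, a minimal nginx-rtmp-module configuration that ingests RTMP and serves HLS might look roughly like this (the application name, paths, and ports are placeholder assumptions):
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            hls on;                # write HLS segments and playlists
            hls_path /tmp/hls;
            hls_fragment 5s;
        }
    }
}
http {
    server {
        listen 8080;
        location /hls {
            # serve the playlists and segments generated above
            types { application/vnd.apple.mpegurl m3u8; video/mp2t ts; }
            root /tmp;
        }
    }
}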
I want to encode a live stream for MPEG-DASH at various bitrates and resolutions for live playback.
Everything I found so far either uses only the source resolution (Nimble, nginx-rtmp-module) or seems to be only for VOD streaming (DASHEncoder).
Is it possible to use DASHEncoder with a live input (rtmp stream) and how would I do that?
If not, is it possible to use nginx-rtmp + ffmpeg for what I want to do?
There are several different services available which support such use cases, like NGINX Plus.
I also successfully managed to run a live stream with Bitmovin, and to the best of my knowledge livestream.com is also capable of doing that.
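On the nginx-rtmp + ffmpeg part of the question: one common pattern is to have ffmpeg pull the RTMP ingest, encode two or more renditions, and package them with its DASH muxer. A rough sketch under those assumptions (the URL, resolutions, bitrates, and output path are placeholders):
# Pull the RTMP ingest, encode two video renditions plus audio,
# and write a live DASH manifest with segments
ffmpeg -i rtmp://localhost:1935/live/stream \
    -map 0:v -map 0:a -map 0:v -map 0:a \
    -c:v libx264 -c:a aac \
    -b:v:0 2400k -s:v:0 1280x720 \
    -b:v:1 800k -s:v:1 640x360 \
    -adaptation_sets "id=0,streams=v id=1,streams=a" \
    -f dash /var/www/dash/stream.mpd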
I've been doing some research on how to implement a server-free, point-to-point video/audio chat (i.e., my own Skype without text messaging).
I've been looking for ways to implement it, and I've come up with the following ideas:
A multithreaded C++ program (because I know some C++) capturing audio and video (with Qt), sending them through two different UDP sockets, and reading video and audio from two other UDP sockets on the other 'point'. So I'd have to write the UDP server and client multithreaded, with a total of four threads: two to send audio and video, and two more to receive them.
Writing my own protocol to carry video and audio in the same thread, something like splitting each packet's payload between audio and video buffering, which would leave me with only two threads in the application and a lot more error-prone code to write.
I've been looking at some real-time media protocols, and some of them looked interesting. Maybe I could study and implement interfaces to these protocols and use them instead of 'creating' my own.
Now, the actual question(s):
Is there any documentation on how to accomplish this? Maybe some 'state of the art' APIs/protocols that are being used, or well-implemented, well-suited solutions to this problem?
If I choose to implement audio separately from video, is VoIP a possible solution for the audio connection?
Is Qt a good tool for this purpose? I've never used Qt before, and for the video and audio interfaces I also thought about openFrameworks, so I was wondering if anyone has ever used one of these frameworks and whether this is the right choice.
I know that my question has no code and that the range of possible answers is wide, but I really need some help here.
Thanks.
First, you should answer this question: how will your clients connect and authorize without a server component?
Notes: 1) Skype has servers. 2) A lot of internet users access the web through NAT / proxies.
Of course, you can try to implement something for learning purposes, but if you want to create something useful, try third-party solutions created by specialists, for example Google's libjingle.
You need a VoIP library :)
There's no need to start from scratch; you can use an open-source library like opalvoip.
I'm building an application that will serve up video files to users on a variety of different platforms. As such, I need the ability to set up a server that will serve up video files that might need to be transcoded into a number of different formats. Basically, I want to replicate the functionality that TVersity provides.
The ideal solution would allow me to access the video stream via HTTP, specifying some sort of transcoding parameters in the call.
Anyone have any good ideas?
Thanks!
Chris
HTTP is not a streaming protocol. Have a look at progressive download; there are lots of PHP implementations / Flash players available. ffmpeg is a good tool for converting formats, sizes, frame rates, etc.
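As a hedged illustration of the kind of conversion meant here (input/output names and target values are placeholder assumptions):
# Re-encode to H.264/AAC, scale to 720p height, and cap the frame rate at 30 fps
ffmpeg -i input.avi -c:v libx264 -vf scale=-2:720 -r 30 -c:a aac output.mp4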