If I broadcast a video and divide it into packets, and a user connects to the NetGroup and receives the object from the group, can that user start receiving from a specific point in time? Say the actual video is 10 minutes long, and a user who joins the group seeks to the last 5 minutes. How can I achieve this? Is it possible? I am using Flash Player 10.1.
Yes, it is possible, but it is a little complicated.
Flash video over HTTP uses progressive display and download. Random access into the stream is not technically possible. It may work in some instances when the file is already in the browser's cache, but it is not truly reliable. If you are stuck with HTTP only, then the only real option is to edit your video into chunks that represent your random access points. For example, if you have a one-hour video, you can make twelve videos representing five-minute offsets that play to the end (i.e., a 60-minute file, a 55-minute file, etc.). There are also some techniques to use a custom server and player which inject metadata to allow random access (I know colleagues who have done this, but have never had to do it myself).
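If you go the chunking route, the cutting can be scripted rather than done by hand in an editor. Here is a rough sketch that shells out to ffmpeg (an assumption on my part; the source name, the five-minute step, and the output naming are just examples, and note that seeking with stream copy snaps to the nearest keyframe):

```typescript
// Sketch: pre-cut a one-hour source into twelve offset files, each
// playing from its offset to the end. Assumes ffmpeg is on the PATH.
import { execFileSync } from "child_process";

const totalMinutes = 60;
const stepMinutes = 5;

for (let offset = 0; offset < totalMinutes; offset += stepMinutes) {
  execFileSync("ffmpeg", [
    "-ss", String(offset * 60),     // seek offset in seconds
    "-i", "lecture.flv",            // source file (hypothetical name)
    "-c", "copy",                   // stream copy, no re-encode
    `lecture_from_${offset}m.flv`,  // e.g. lecture_from_5m.flv
  ]);
}
```

The player then simply loads the file whose offset matches the user's seek position.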
Flash video can also play over an RTMP connection. Flash Media Server provides this, as do one or two alternatives. RTMP / FMS give you many more options for streaming your video and allow for true random access into the stream. You can either purchase and host FMS yourself, or go with a hosted solution like Influxis. Some cloud-based solutions are also starting to become available.
I'm planning to build an HTTP Live Streaming server, using NGINX and its RTMP module, that also uses FFmpeg to encode the incoming stream into different bitrate levels, enabling adaptive bitrate for live video streaming.
What I want to do beyond that, and I'm not able to find any reference or a similar question, is to enable and disable one or more bitrate levels based on the number of current users consuming the stream. So if I'm running out of bandwidth because of the high number of connected users, the server can automatically disable some bitrate levels rather than exceed the bandwidth limit and block the whole service.
Does anyone have any suggestions?
I don't think you will find an out-of-the-box solution for this, and you may even find that paying for extra bandwidth is cheaper in the long run than adding extra complexity to your solution to provide this.
If you do want to build this yourself, then you could update the manifest being delivered in real time to remove the higher-bandwidth renditions.
For a live stream the manifest, or HLS playlist, is updated every few seconds with a new version containing the new segments in the live stream, including the versions available for each bandwidth variant.
If you remove the higher variant from the playlist, in theory the player should recognise this and choose the next segment from the remaining bandwidth variants, but you would likely need to test it with the player(s) you are using to verify it works smoothly.
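To make that concrete, here is a minimal sketch of the server-side filtering step, assuming an HLS master playlist on disk; the file name and the 2.5 Mbps cap are placeholders:

```typescript
// Sketch: drop variant streams above a bandwidth cap from an HLS
// master playlist before serving it to clients.
import { readFileSync } from "fs";

function filterMasterPlaylist(playlist: string, maxBandwidth: number): string {
  const lines = playlist.split("\n");
  const out: string[] = [];
  for (let i = 0; i < lines.length; i++) {
    const line = lines[i];
    const match = line.startsWith("#EXT-X-STREAM-INF:")
      ? line.match(/[:,]BANDWIDTH=(\d+)/)
      : null;
    if (match && Number(match[1]) > maxBandwidth) {
      i++; // also skip the variant's URI on the following line
      continue;
    }
    out.push(line);
  }
  return out.join("\n");
}

// e.g. under load, stop advertising anything above ~2.5 Mbps
const filtered = filterMasterPlaylist(
  readFileSync("master.m3u8", "utf8"),
  2_500_000
);
```

You would re-run this filter whenever your user count crosses a threshold, while keeping the underlying FFmpeg renditions running so you can re-enable them later.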
Based on this article, when implementing a WebRTC solution without a media server (which I take to mean a full mesh, with no SFU), the bottleneck is that only 4-6 participants can be supported.
Is there a solution that can work around this? For example, I want to use Firebase as the only backend, mainly for signaling, with no SFU. What is the general implementation strategy to achieve at least 25-50 participants in WebRTC?
Update: This GitHub project makes a different claim. It states: "A full mesh is great for up to ~100 connections".
Your real bottleneck with MESH is that each RTCPeerConnection will do its own video encoding in the browser.
The p2p concept naturally includes the requirement that both peers should adjust encoding quality based on network conditions. So, when your browser sends two streams to peers X (good download speed) and Y (bad download speed), the encodings for X and Y will be different - Y will receive lower framerate and bitrate than X.
Sounds reasonable, right? But, unfortunately, this mandates a separate video encoding for each peer connection.
If multiple peer connections could re-use the same video encoding, then MESH would be much more viable. But Google didn't provide that option in the browser. Simulcast requires an SFU, so that's not your case.
So, how many concurrent video encodings can a browser perform on a typical machine for 720p 30 fps video? 5-6, not more. For 640x480 at 15 fps? Maybe 20 encodings.
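To see why, consider this sketch of a mesh setup: the same camera track is handed to every RTCPeerConnection, but the browser still spins up one encoder per connection (remoteUserIds and the signaling exchange are assumed to exist elsewhere):

```typescript
// Sketch: full mesh. One getUserMedia capture, N peer connections,
// and therefore N independent video encoders in the browser.
async function startMesh(remoteUserIds: string[]) {
  const localStream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720, frameRate: 30 },
    audio: true,
  });

  const peers = new Map<string, RTCPeerConnection>();
  for (const id of remoteUserIds) {
    const pc = new RTCPeerConnection();
    for (const track of localStream.getTracks()) {
      // Same track object, but each pc encodes it separately,
      // adapting to that particular peer's network conditions.
      pc.addTrack(track, localStream);
    }
    peers.set(id, pc);
    // ...offer/answer exchange over your signaling channel goes here
  }
  return peers;
}
```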
In my opinion, the encoding layer and networking layer could be separated in WebRTC design, and even getUserMedia could be extended to getEncodedUserMedia, so that you could send the same encoded content to multiple peers.
So that's the real practical reason people use SFU for multi-peer WebRTC.
If you want to make a conference with 25 people all sending their video, then a regular WebRTC setup will not work, unless you massively lower your video quality (see the bitrate-capping sketch below). The reason for this is that every participant would need to send 24 separate streams to every other client. So let's say your stream is 128 KB/s; then you would need 3 MB/s of upload speed available, which isn't always there, and you would be downloading the same amount on top of that.
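One way to lower quality is to cap each sender's bitrate with RTCRtpSender.setParameters(); here is a hedged sketch (the 250 kbps figure is an arbitrary example):

```typescript
// Sketch: cap outgoing video bitrate on one mesh connection.
async function capVideoBitrate(pc: RTCPeerConnection, maxBitrate: number) {
  for (const sender of pc.getSenders()) {
    if (sender.track?.kind !== "video") continue;
    const params = sender.getParameters();
    if (!params.encodings || params.encodings.length === 0) {
      params.encodings = [{}]; // guard for older implementations
    }
    params.encodings[0].maxBitrate = maxBitrate;
    await sender.setParameters(params);
  }
}

// e.g. 24 peers at 250 kbps is still ~6 Mbps of upload in total
// await capVideoBitrate(pc, 250_000);
```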
The problem is that this isn't scalable. That's why you need an SFU: you then send only a single stream and receive one from each of the others. The other positive thing about SFUs is that you can use simulcast, which adapts the quality of your received streams depending on your network speed.
You can use the Janus gateway or mediasoup, for example. Here is an already set-up, scalable mediasoup video conferencing application: github repository
For example, 3 users are streaming a video from a remote URL. 1 user is the master and can play, pause, and set the current playback position. They are talking to each other while they watch (VoIP), so their video streams need to stay synced.
A solution off the top of my head is that the master broadcasts high-level actions (play, stop, scrub position). For minor deviations, the clients could regularly ping the master to get the master's playback position, and apply a speed factor to their playback to speed up or slow down to keep in sync.
I can find a few papers on the subject (eg, https://www.sciencedirect.com/science/article/pii/S0306437908000525, https://link.springer.com/article/10.1007/s00530-012-0278-9) but nothing in terms of example projects or community discussion.
Any guidance would be appreciated.
Synching video across clients is not easy but there are some examples.
This is an open source client based solution:
https://github.com/Syncplay/syncplay
And these are a couple of browser based ones:
https://pdfs.semanticscholar.org/c0bc/b42b63b6d88ebbb5fb4c6686662300d3611b.pdf
https://github.com/povdocs/sync-player
As you suggest, some sort of feedback, either to a master or to a synching server, along with responses suggesting synch adjustments, is the most frequent approach.
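For the fine-grained part, a hedged browser-side sketch of the rate-adjustment loop might look like this (the transport for fetching the master's position is left abstract and is an assumption):

```typescript
// Sketch: periodically compare our position to the master's and nudge
// playbackRate to converge; hard-seek only on large drift.
function startSyncLoop(
  video: HTMLVideoElement,
  fetchMasterPosition: () => Promise<number> // your transport (hypothetical)
) {
  setInterval(async () => {
    const masterTime = await fetchMasterPosition();
    const drift = masterTime - video.currentTime; // + means we are behind
    if (Math.abs(drift) > 2) {
      video.currentTime = masterTime; // large drift: hard seek
      video.playbackRate = 1;
    } else if (Math.abs(drift) > 0.1) {
      video.playbackRate = 1 + Math.sign(drift) * 0.05; // gentle 5% nudge
    } else {
      video.playbackRate = 1; // close enough: play normally
    }
  }, 1000);
}
```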
After learning about some great features of WebRTC, I am considering using it for one-to-one audio/video calls in my web application. The web application serves many organizations/entities in a given category, which can register and record several entries daily for their internal work and about their clients. The clients of these individual organizations/entities also have access to the web application to view their details.
The purpose of using WebRTC is communication between clients and organizations, and also daily inquiries to these organizations from new people about products and services.
While going through articles on Google, etc., I found that broadcasting, or one-to-many calling, requires very high bandwidth from users if we don't make use of a media server.
So what is the case for one-to-one calls?
Will it affect the performance of the web application, or cause any critical situation, if several users are making one-to-one audio/video calls to each other simultaneously as a routine?
The number of users will be very large, and users will be recording several entries daily as their routine work. That load is still manageable and the application will run smoothly, but I am not sure about WebRTC, which is new to me. Will it require a very high-end hosting plan? Is using WebRTC suitable or advisable for the current scenario?
WebRTC by its nature is peer-to-peer, meaning that the streaming data is handled CLIENT side. All decoding, encoding, ICE candidate gathering/negotiation, and media encrypting/transmitting will happen on the client side and not on the server side. So you will be providing the pages, client-side JS, and some data exchange (session negotiation signalling), but all in all it is not a huge amount of work. It should be easily handled without having to worry about your host machine being overworked.
All that said, here are the only performance concerns that could POSSIBLY affect your hosting server.
Signalling: session startup, negotiation, and teardown. This is very minimal (only some JSON data at the beginning of a session). This should not be too much of a burden, but you should be aware that if 1000 sessions start at the same time, you will have a queue of messages to direct to the needed parties. How you determine the parties, forward the messages, and what work you do server side could all affect performance. If written smartly (how you store sessions, how you forward messages, etc.), it should not be a terrible burden. This could easily be done with SignalR since you are on ASP.NET, or you could use a separate server running Node.js (or the same box, it does not matter) if you so desired.
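For a sense of scale, a signalling relay really can be that small. Here is a hedged Node.js sketch using the ws package (the { room, payload } message shape is my own convention, not a standard):

```typescript
// Sketch: minimal WebSocket signaling relay. Peers join a room and
// every message (offer/answer/ICE candidate) is forwarded to the
// other members of that room.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const rooms = new Map<string, Set<WebSocket>>();

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    const { room, payload } = JSON.parse(raw.toString());
    if (!rooms.has(room)) rooms.set(room, new Set());
    rooms.get(room)!.add(socket);
    for (const peer of rooms.get(room)!) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify({ room, payload }));
      }
    }
  });
});
```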
RTP TURN relay, if needed. This will probably be through a different server (or the same one as your hosting server if you want). For SOME connections, a TURN server is needed, and any production-ready WebRTC solution should take this into account. Here is a good open source TURN server. Bandwidth usage here could be very high, as RTP packets are sent to this server and then forwarded to the peer in the connection.
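On the client side, pointing at the TURN server is just configuration; here is a sketch with placeholder URLs and credentials:

```typescript
// Sketch: RTCPeerConnection configured with STUN plus a TURN relay
// fallback. All values below are placeholders.
const config: RTCConfiguration = {
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: "turn:turn.example.com:3478",
      username: "demo-user",       // placeholder credential
      credential: "demo-password", // placeholder credential
    },
  ],
};
const pc = new RTCPeerConnection(config);
```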
If you are recording the streams, you may have increased hosting traffic depending on how you implement it. Firefox supports client-side recording of the streams but Chrome does not (they say it is in the works currently). You could use existing JS libraries to record the feeds client side and then push them anywhere you want. You could also push all the data through a media server that will mux, demux, and forward the data to be recorded anywhere you like. Janus-Gateway's videoroom is a good lightweight example of a media server.
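Where the MediaRecorder API is available, client-side capture can be sketched like this (the /upload endpoint is an assumption):

```typescript
// Sketch: record a MediaStream client side and POST chunks to the
// server as they become available.
function recordStream(stream: MediaStream): MediaRecorder {
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
  recorder.ondataavailable = (event) => {
    if (event.data.size > 0) {
      fetch("/upload", { method: "POST", body: event.data }); // hypothetical endpoint
    }
  };
  recorder.start(5000); // emit a chunk roughly every 5 seconds
  return recorder;
}
```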
Client side is a different story.
There are higher-level concerns in the JavaScript. If you use one of the recording JS libraries, this is especially evident, as they do canvas captures numerous times a second, which is a heavy hit and would degrade the user experience.
CPU utilization by the browser will increase as the quality of the video being streamed increases. This is rather obvious as HD video frames take more CPU power to encode/decode than SD frames.
Client-side bandwidth usage can also be an issue. Chrome and Firefox try to modify the bitrate of each video/audio feed dynamically, but the video bitrate can go all the way up to 2 Mbps. You can cap this in Chrome (by adding an attribute in the SDP) but not yet in Firefox (last I checked).
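The Chrome cap mentioned above is the SDP "b=AS" line; here is a hedged munging sketch (the 500 kbps value is arbitrary, and SDP munging is fragile in general):

```typescript
// Sketch: insert a b=AS bandwidth line into the video m-section of
// the SDP before applying the description.
function capSdpBandwidth(sdp: string, kbps: number): string {
  // Place b=AS after the c= line of the video media section.
  return sdp.replace(/(m=video[\s\S]*?c=IN[^\r\n]*\r\n)/, `$1b=AS:${kbps}\r\n`);
}

// usage during offer creation (sketch):
// const offer = await pc.createOffer();
// await pc.setLocalDescription({ type: "offer", sdp: capSdpBandwidth(offer.sdp!, 500) });
```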
I'm no expert in audio, so if any of you folks are, I'd appreciate your insights on this.
My client has a handful of MP3 podcasts stored at a relatively high bit rate, and I'd like to be able to serve those files to her users at "different" bit rates depending on that user's credentials. (For example, if you're an authenticated user, you might get the full, unaltered stream, but if you're not, you'd get a lower-bit-rate version -- or at least a purposely tweaked lower-quality version than the original.)
Seems like there are two options: downsampling at the source and downsampling at the client. In this case, knowing of course that the source stream would arrive at the client at a high bit rate (and that there are considerations to be made about that, which I realize), I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Is doing so possible with the Flash Player and ActionScript alone, at runtime (even with a third-party library), or does a scenario like this one require a server-based solution? If the latter, can Flash Media Server handle this requirement specifically? Again, I'd like to avoid using FMS if I can, since she doesn't really have the budget for it, but if that's the only option and it's really an option, I'm open to considering it.
Thanks in advance...
Note: Please don't question the sanity of the request -- I realize it might sound a bit strange, but the requirements are what they are. In that light, for purposes of answering the question, you can ignore the source and delivery path of the bits; all I'm really looking for is an explanation of whether (and ideally how) a Flash client can downsample an MP3 audio stream at runtime, irrespective of whether the audio's arriving over a network connection or being read directly from disk. Thanks much!
I'd prefer to alter the stream at the client somehow, rather than on the server, for several reasons.
Please elucidate the reasons, because resampling on the client end would normally be considered crazy: wasting bandwidth sending the higher-quality version to a user who cannot hear it, and risking a canny user ripping the higher-quality stream as it comes in over the network.
In any case the Flash Player doesn't give you the tools to process audio, only play it.
You shouldn't need FMS to process audio at the server end. You could have a server-side script that loads newly-uploaded podcasts and saves them back out as lower-bitrate files, which could then be served to lowly users via a normal web server. For Python, see e.g. PyMedia or py-lame; even a shell script calling lame or ffmpeg from the command line should be pretty easy to pull off.
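For instance, a hedged sketch of that upload-time step from Node, shelling out to ffmpeg (file names and the 64 kbps target are placeholders):

```typescript
// Sketch: re-encode an uploaded podcast to a lower bitrate for
// unauthenticated users. Assumes ffmpeg is on the PATH.
import { execFileSync } from "child_process";

function makeLowBitrateCopy(src: string, dest: string): void {
  execFileSync("ffmpeg", [
    "-i", src,                 // e.g. podcast_original.mp3
    "-codec:a", "libmp3lame",  // MP3 encoder
    "-b:a", "64k",             // target bitrate for the cheap version
    dest,                      // e.g. podcast_low.mp3
  ]);
}

makeLowBitrateCopy("podcast_original.mp3", "podcast_low.mp3");
```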
If storage is at a premium, have you looked into AAC audio? I believe Flash 9 and 10 on desktop browsers will play it. In my experience AAC takes only half the size of a comparable MP3 (i.e., an 80 kbps AAC will sound the same as a 160 kbps MP3).
As for playback quality, if I recall correctly there are audio playback settings in the Publish Settings section of the Flash editor. Whether or not the playback bitrate can be changed at runtime is something I'm not sure of.