Based on this article, when implementing a WebRTC solution without a server (I assume the server in question means an SFU), the bottleneck is that only 4-6 participants can work.
Is there a solution that can work around this? For example, I want to use Firebase as the only backend, mainly for signaling, with no SFU. What is the general implementation strategy to achieve at least 25-50 participants in WebRTC?
Update: this GitHub project makes a different claim. It states: "A full mesh is great for up to ~100 connections."
Your real bottleneck with MESH is that each RTCPeerConnection will do its own video encoding in the browser.
The p2p concept naturally includes the requirement that both peers should adjust encoding quality based on network conditions. So, when your browser sends two streams to peers X (good download speed) and Y (bad download speed), the encodings for X and Y will be different - Y will receive lower framerate and bitrate than X.
Sounds reasonable, right? But, unfortunately, it mandates a separate video encoding for each peer connection.
If multiple peer connections could re-use the same video encoding, then MESH would be much more viable. But Google didn't provide that option in the browser. Simulcast requires an SFU, so that's not your case.
So, how many concurrent video encodings can a browser perform on a typical machine, for 720p 30 fps video? 5-6, not more. For 640x480 at 15 fps? Maybe 20 encodings.
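To make the per-connection encoding cost concrete, here is a minimal browser sketch of a mesh fan-out (inside an async function); remotePeerIds and the signaling exchange are hypothetical placeholders. Note that even though the same track object is added to every connection, each RTCPeerConnection still runs its own encoder:

    // Full-mesh fan-out: one local track, N peer connections.
    // remotePeerIds is a hypothetical list obtained from your signaling channel.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    const [track] = stream.getVideoTracks();

    const peers = new Map(); // peerId -> RTCPeerConnection (signaling omitted)
    for (const peerId of remotePeerIds) {
      const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });
      pc.addTrack(track, stream); // same track object, but a separate encoder per connection
      peers.set(peerId, pc);
    }

So CPU cost grows linearly with the number of peers, which is exactly the bottleneck described above.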
In my opinion, the encoding layer and the networking layer could be separated in WebRTC's design, and getUserMedia could even be extended to a getEncodedUserMedia, so that you could send the same encoded content to multiple peers.
So that's the real practical reason people use an SFU for multi-peer WebRTC.
If you want to make a conference with 25 people all sending their video, then a regular WebRTC mesh setup will not work, unless you massively lower your video quality. The reason is that every participant would need to send a separate stream to each of the 24 other clients. So let's say your stream is 128 KB/s: you would then need 3 MB/s of upload speed available, which isn't always there. And you'd be downloading that same amount on top.
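To generalize that back-of-the-envelope math (the 128 KB/s per-stream figure is the assumption from above):

    // Back-of-the-envelope mesh bandwidth, using the numbers above.
    function meshUploadMBps(participants, perStreamKBps) {
      return ((participants - 1) * perStreamKBps) / 1024; // one stream per other peer
    }
    console.log(meshUploadMBps(25, 128)); // -> 3 MB/s up, and the same again down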
The problem is that this isn't scalable. That's why you need an SFU: then you only send a single stream and receive the others from the SFU. The other positive thing about SFUs is that you can use simulcast, which adapts the quality of your received streams to your network speed.
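For reference, here is roughly what enabling simulcast looks like on the publisher side (inside an async function); the rid names and bitrates below are illustrative, and your SFU has to support receiving simulcast:

    // Publish three simulcast layers; the SFU forwards whichever layer
    // fits each receiver's bandwidth.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    const pc = new RTCPeerConnection();
    pc.addTransceiver(stream.getVideoTracks()[0], {
      direction: 'sendonly',
      sendEncodings: [
        { rid: 'q', scaleResolutionDownBy: 4, maxBitrate: 150000 },  // quarter resolution
        { rid: 'h', scaleResolutionDownBy: 2, maxBitrate: 500000 },  // half resolution
        { rid: 'f', maxBitrate: 1500000 },                           // full resolution
      ],
    });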
You can use the Janus gateway or mediasoup, for example. Here is an already set-up mediasoup video conferencing application that is scalable: github repository
Related
I have a client with a 1-2 thousand viewer audience, with everyday streams and the same number of concurrent viewers.
I've got a server set up for their website etc., but am in the process of figuring out the best way to stream with OBS onto that server, and then re-distribute that stream to clients (as an embed on the website).
Now, from the calculations I did, serving that many concurrent viewers is very problematic, as it forces you into a 10 Gbit link, which is very expensive; I would ideally like to fit within 1-2 Gbps if possible.
A friend of mine recommended looking into "multicast", which supposedly uses MUCH less bandwidth than regular live-streaming options. Is multicast doable? I've had an NGINX live stream set up on my server by a friend before, but never looked into the config or whether multicast is supported there. Are there any other options? What would you recommend?
Also, this live stream isn't a high-profit/organisation type of deal, so any pre-made services just don't make sense, as they would easily cost 40+ dollars per stream, which is just too much for my client.
Thank you for any help!
Tom
Rather than multicast, P2P is the more practical solution on the internet; it saves money rather than bandwidth.
Especially in HTML5 browsers, it's possible to use the WebRTC DataChannel to transport P2P data.
Multicast, on the other hand, does not work across internet routers.
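For a sense of what that looks like, a minimal DataChannel sketch (the signaling exchange between the two peers is omitted); the unreliable, unordered settings mimic UDP, which suits live media chunks:

    // Open an unreliable, unordered DataChannel for P2P data transport.
    const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.l.google.com:19302' }] });
    const channel = pc.createDataChannel('media', {
      ordered: false,     // tolerate out-of-order delivery
      maxRetransmits: 0,  // no retries: UDP-like, which suits live media
    });
    channel.binaryType = 'arraybuffer';
    channel.onopen = () => channel.send(new Uint8Array([1, 2, 3])); // placeholder payload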
Multicast works by sending a single stream across the network to edge points where clients can 'join' the multicast to get an individual stream for them.
It requires that the network supports multicast protocols and the edges align with your users.
It is typically used when an operator has their own IP network for a service like IPTV, rather than for services over the internet.
For your scenario, you would usually use an origin server and a CDN; this reduces the load on your own server because the video is cached on the network and multiple users can access the same 'chunks' of the video.
You can see an AWS example for on-demand video here; other vendors and cloud providers have solutions too, so this is just an example:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/tutorial-s3-cloudfront-route53-video-streaming.html
You can find more complex on-demand and live tutorials too, but they are likely more than you need: https://aws.amazon.com/cloudfront/streaming/
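On the playback side, the website embed then just points an HLS player at the CDN URL. A sketch using the open-source hls.js library; the manifest URL below is a placeholder for your CDN endpoint:

    // Play a CDN-hosted HLS stream in the browser with hls.js.
    import Hls from 'hls.js';

    const video = document.querySelector('video');
    const src = 'https://cdn.example.com/live/stream.m3u8'; // placeholder CDN URL

    if (Hls.isSupported()) {
      const hls = new Hls();
      hls.loadSource(src);
      hls.attachMedia(video);
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
      video.src = src; // Safari plays HLS natively
    }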
Exploring P2P may be an option too, as Winton suggests; some CDNs also leverage P2P technology internally.
I want to fetch the data of a stock. Since the data changes very fast, is there any way to pull it 50-100 times a second from trading websites?
And can we implement that using a Raspberry Pi 4 8 GB model?
A RasPi 4 should be more than adequate for this task; both its Ethernet and WiFi hardware are capable of connections at these speeds (unless you're running a bunch of other stuff on it). Consider where your bottlenecks may be, likely your ISP or other network traffic. Consider avoiding WiFi in favor of Cat5e or Cat6. Consider hanging this device off your router (at the edge) to keep LAN traffic lower, and consider QoS settings if you think this traffic may compete with other LAN traffic.
This appears to be a general question with no specific platform in mind. For stocks, there are lots of platforms to choose from.
APIs for trading platforms often include a method to open a stream. Instead of a full TCP conversation for each price check, a stream tells the server to just keep sending data. There are timeout mechanisms, of course, but it is good to close the stream gracefully: you're consuming server resources at a different scale, and I've seen some financial APIs monitor and throttle stream subscribers who leave sessions open.
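What that usually looks like in practice is a WebSocket subscription. A Node.js sketch using the `ws` package; the endpoint, auth, and message shape are hypothetical placeholders, so check your platform's API docs:

    // Subscribe to a price stream over WebSocket instead of polling.
    import WebSocket from 'ws';

    const ws = new WebSocket('wss://api.example-broker.com/v1/stream'); // placeholder endpoint

    ws.on('open', () => {
      ws.send(JSON.stringify({ action: 'subscribe', symbols: ['AAPL'] }));
    });
    ws.on('message', (raw) => {
      const tick = JSON.parse(raw.toString()); // hypothetical message shape
      console.log(tick.symbol, tick.price);    // arrives as fast as the server pushes
    });
    // Close gracefully -- some APIs throttle clients that leave streams open.
    process.on('SIGINT', () => { ws.close(); process.exit(0); });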
For some APIs/languages you can find solid classes already built on GitHub. Although, if simply pulling and reading a stream then the platform API doc code snippets should be enough to get you going.
Be sure to find out what other overhead may be implicated. For example, if an account or API key is needed to open a stream then either a session must be opened first or the creds must be passed with the stream being opened. The API docs will say. If you’re new to this sort of thing, just be a detective and try to infer what is needed. API docs usually try to be precise and technically correct with the absolute minimum word count.
Simply checking the stream should be easy. Depending on how the stream can be handled by your code/script, it may be harder to perform logic on it while it is being updated; that's usually a threading issue or a variable-scope issue, depending on the language. For what you're doing, I would consider Python or PowerShell depending on your skill-set and other design parameters.
I understand that in a client-server model gRPC can do bidirectional streaming of data.
I have not tried it yet, but I want to know: would it be possible to stream audio and video data from a source to a cloud server using gRPC and then broadcast it to multiple clients, all in real time?
TLDR: I would not recommend video over gRPC. Primarily because it wasn't designed for it, so doing it would take a lot of hacking. You should probably take a look at WebRTC + a specific video codec.
More information below:
gRPC has no video compression
When sending video, we want to send it efficiently, because sending it raw could require 1 GB/s of connectivity.
So we use video compression / video encoding. For example, H.264, VP8 or AV1.
It helps to understand how video compression works (e.g. saving bandwidth by minimising the similar data shared between frames of a video).
There is no video encoder for protobufs (the format used by gRPC).
You could instead try image compression and save the images in a bytes field (e.g. bytes image_frame = 1;), but this is less efficient and definitely takes up unnecessary space for video.
It's probably possible to encode frames with a video encoder (e.g. H.264) into protobufs and then decode them for playback in applications. However, it might take a lot of hacking/engineering effort; this use case is not what gRPC/protobufs are designed for, and it's not commonly done. Let me know if you hack something together, I would be curious.
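If you do attempt it, the browser-side encoding half might look like this WebCodecs sketch (Chrome-only APIs at the time of writing; sendOverGrpcStream is a hypothetical stand-in for your gRPC streaming call). Framing the bytes into a protobuf message and decoding on the other end is where the real hacking starts:

    // Encode camera frames to H.264 with WebCodecs, then ship each chunk's
    // bytes (e.g. into a protobuf bytes field over a gRPC stream).
    // Run inside an async function.
    const encoder = new VideoEncoder({
      output: (chunk) => {
        const buf = new Uint8Array(chunk.byteLength);
        chunk.copyTo(buf);
        sendOverGrpcStream(buf); // hypothetical: your gRPC streaming call goes here
      },
      error: (e) => console.error(e),
    });
    encoder.configure({ codec: 'avc1.42001f', width: 1280, height: 720, bitrate: 1000000 });

    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    const processor = new MediaStreamTrackProcessor({ track: stream.getVideoTracks()[0] });
    const reader = processor.readable.getReader();
    for (let f = await reader.read(); !f.done; f = await reader.read()) {
      encoder.encode(f.value); // hardware-accelerated where available
      f.value.close();         // release the frame's memory promptly
    }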
gRPC is reliable
gRPC uses TCP (not UDP), which is reliable.
At a glance, reliability might sound handy, to avoid corrupted or lost data. However, depending on the use case (realtime video or audio calls), we may prefer to skip frames that are dropped or delayed; the losses may be unnoticeable or painless to the user. With a reliable transport:
If a packet is delayed, the receiver waits for it before playing the rest (in-order delivery, i.e. head-of-line blocking).
If a packet is dropped, the sender re-sends it (retransmission), adding latency.
Therefore, video conferencing apps usually use WebRTC/RTP (configured to be unreliable)
Having said that, it looks like Zoom was able to implement video-over-WebSockets, which is also a reliable transport (over TCP). So it's not game over, just strongly discouraged and a lot more effort. They have since moved over to WebRTC, though.
Data received on the WebSockets goes into a WebAssembly (WASM) based decoder. Audio is fed to an AudioWorklet in browsers that support that. From there the decoded audio is played using the WebAudio “magic” destination node.
After learning about some great features of WebRTC, I thought of using it for one-to-one audio/video calls in my web application. The web application is for the many organizations/entities of a category that can register and record several entries daily for their internal work and about their clients. The clients of these individual organizations/entities also have access to the web application to see their details.
The purpose of using WebRTC is communication between clients and organizations, and also daily inquiries from new people asking these organizations about products and services.
While going through articles on Google etc., I found that broadcasting or one-to-many calls require very high bandwidth from users if we don't make use of a media server.
So what is the case for one to one calls?
Will it affect the performance of the web application, or cause any critical situation, if several users are routinely making one-to-one audio/video calls to each other simultaneously?
The number of users will be very large, and users will be recording several entries daily as their routine work. That load is manageable and the application runs smoothly, but I am not sure about WebRTC, which is a new concept to me. Will it require a very high hosting plan? Is using WebRTC suitable or advisable for this scenario?
WebRTC is by its nature peer-to-peer, meaning that the streaming data is handled CLIENT side. All decoding, encoding, ICE candidate gathering/negotiation, and media encrypting/transmitting happen on the client side, not on the server. So you will be providing the pages, the client-side JS, and some data exchange (session-negotiation signalling), but all in all it is not a huge amount of work. It should be easily handled without having to worry about your host machine being overworked.
All that said, here are the only performance concerns that would POSSIBLY affect your hosting server.
Signalling: session startup, negotiation, and tear-down. This is very minimal (only some JSON data at the beginning of a session), so it should not be too much of a burden, but you should be aware that if 1000 sessions start at the same time, you will have a queue of messages to direct to the needed parties. How you determine the parties, how you forward the messages, and what work you do server side can all affect performance, but if written smartly (how you store sessions, how you forward messages, etc.) it should not be a terrible burden. This could easily be done with SignalR since you are on ASP.NET, or with a separate service running Node.js (on the same box if you like, it does not matter). A minimal relay sketch follows after this list.
RTP TURN relay, if needed. This will probably be a different server (or the same one as your hosting server if you want). For SOME connections, a TURN server is needed, and any production-ready WebRTC solution should take this into account. Here is a good open source TURN server. Bandwidth usage here can be very high, as RTP packets are sent to this server and then forwarded to the other peer in the connection.
If you are recording the streams, you may have increased hosting traffic depending on how you implement it. Firefox supports client-side recording of the streams but Chrome does not (they say it is in the works currently). You could use existing JS libraries to record the feeds client side and then push them anywhere you want (see the MediaRecorder sketch below). You could also push all the data through a media server that will mux, demux, and forward the data to be recorded anywhere you like. Janus-Gateway's videoroom is a good lightweight example of a media server.
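For the signalling concern, a minimal Node.js sketch of the message-forwarding core, using the `ws` package; the in-memory map for party lookup is a simplification, and a real deployment needs auth and a proper session store:

    // Minimal WebSocket signaling relay: forwards SDP/ICE JSON blobs between peers.
    import { WebSocketServer } from 'ws';

    const clients = new Map(); // clientId -> socket (in-memory; fine for a sketch)
    const wss = new WebSocketServer({ port: 8080 });

    wss.on('connection', (ws) => {
      ws.on('message', (raw) => {
        const msg = JSON.parse(raw.toString()); // assumed shape: { from, to, type, payload }
        if (msg.type === 'register') {
          clients.set(msg.from, ws);            // remember who this socket belongs to
          return;
        }
        const target = clients.get(msg.to);
        if (target) target.send(JSON.stringify(msg)); // relay offer/answer/candidate
      });
      ws.on('close', () => {
        for (const [id, sock] of clients) if (sock === ws) clients.delete(id);
      });
    });

And for the recording concern, a sketch of client-side recording with the MediaRecorder API, where the browser supports it; the upload endpoint is a placeholder:

    // Record a MediaStream client side, then push the result to a server.
    const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
    const chunks = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = async () => {
      const blob = new Blob(chunks, { type: 'video/webm' });
      await fetch('/upload-recording', { method: 'POST', body: blob }); // placeholder endpoint
    };
    recorder.start(1000); // emit a chunk every second
    // ...later, when the call ends: recorder.stop();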
Client side is a different story.
There are higher-level concerns in the JavaScript. If you use one of the recording JS libraries, this is especially evident, as they do canvas captures numerous times a second, which is a heavy hit and will degrade the user experience.
CPU utilization by the browser will increase as the quality of the video being streamed increases. This is rather obvious as HD video frames take more CPU power to encode/decode than SD frames.
Client-side bandwidth usage can also be an issue. Chrome and Firefox try to modify the bitrate of each video/audio feed dynamically, but the video bitrate can go all the way up to 2 Mbps. You can cap this in Chrome (by adding an attribute to the SDP, as sketched below) but not in Firefox (last I checked) as of yet.
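The SDP attribute in question is a b=AS bandwidth line on the video section; a sketch of munging it into the offer before setLocalDescription, where the 500 kbps cap is just an example:

    // Insert a b=AS (kbps) line into the video section of the SDP.
    function capVideoBitrate(sdp, kbps) {
      const lines = sdp.split('\r\n');
      const mIdx = lines.findIndex((l) => l.startsWith('m=video'));
      if (mIdx === -1) return sdp;              // no video section, nothing to cap
      let i = mIdx + 1;
      while (i < lines.length && !lines[i].startsWith('c=')) i++; // b= belongs after c=
      lines.splice(i + 1, 0, 'b=AS:' + kbps);
      return lines.join('\r\n');
    }

    // Inside an async function, before signaling the offer:
    const offer = await pc.createOffer();
    await pc.setLocalDescription({ type: 'offer', sdp: capVideoBitrate(offer.sdp, 500) });

In newer browsers, the cleaner route is RTCRtpSender.setParameters() with encodings[0].maxBitrate, which avoids SDP munging entirely.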
We are making an application involving a server (Tomcat, Apache, Linux) and multiple mobile clients (Android, iPhone, Windows, Nokia J2ME).
Normally the clients and the server will communicate over HTTP.
I would like to know the download and upload speeds of the client from the http request that it made.
Ideally I would not like to have to upload and then download a file to come up with these speeds. I am assuming there might be something at the HTTP protocol level, or at some lower layer of the network, that can give me this.
If only it were that simple.
Even where the bandwidth and latency of a network are very well defined, the actual throughput will be limited by the congestion window and where the end points are in establishing the slow start threshold. These can affect throughput by a factor of 20 or more.
There's nothing in HTTP which will provide metrics for these. Some TCP stacks will expose limited information about throughput (as used by iftop, iptraf).
However, if you really want to gather useful metrics on HTTP throughput, then you need to start shoving data across the network; have a look at Yahoo's Boomerang for an implementation.
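The do-it-yourself version of that is timing a download of a known-size payload. A rough browser sketch; the test asset URL is a placeholder, and a single sample is noisy for exactly the slow-start reasons above:

    // Crude throughput probe: download a known-size file and time it.
    async function measureDownloadKbps(url, bytes) {
      const t0 = performance.now();
      const res = await fetch(url, { cache: 'no-store' }); // bypass the HTTP cache
      await res.arrayBuffer();                             // wait for the whole body
      const seconds = (performance.now() - t0) / 1000;
      return (bytes * 8) / 1000 / seconds;                 // kilobits per second
    }

    // e.g. (inside an async function), with a 1 MB test asset on your server:
    const kbps = await measureDownloadKbps('/speedtest/1MB.bin', 1000000);
    console.log('~' + Math.round(kbps) + ' kbps (average several runs)');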
If the HTTP connection goes to the Apache server first, you can use Apache Bench (ab) to do all sorts of load testing. It comes with Apache and can be invoked with something like the following.
Suppose we want to see how fast Yahoo can handle 100 requests, with a maximum of 10 requests running concurrently:
ab -n 100 -c 10 http://www.yahoo.com/
HTTP does not deal with connection speeds, although I could imagine a solution involving an HTTP (reverse) proxy that estimates speeds on a connection and sets custom headers to pass this info along. You would also need to associate the stats of different connections with a particular client. I have not yet seen a readily available solution for this.
Also note the following:
Network traffic can be buffered or shaped, so download speed may depend on the amount of data transferred or on the previous load of the network. So even downloading a file would not be accurate.
The amount of data transferred depends on which protocol layer you look at (payload wrapped in HTTP, wrapped in gzip, wrapped in TLS, wrapped in TCP). Which one do you want to measure? And what do you want to achieve with this measured speed?
I've seen some Real User Monitoring (RUM) tools that can do this passively (they get a feed from a SPAN port or a network TAP in front of the servers at the data centre).
There are probably ways of integrating the data they produce into your applications, but I'm not sure it would be easy, or, given the way latency and bandwidth can change dynamically on a mobile network, that accurate.
I guess the real thing to focus on is the design of the app: how much data travels across the network, how you can minimise it, etc.
The other thing to consider is whether you could offer a solution that hosts some of the application in the telco's POPs (some telcos route all their towers back to a central POP, others have multiple POPs).