Suppose there is a server producing events continuously. That server has clients accessing it via their browsers, which must receive every event produced. If the server sends the events directly to each client, after a number of clients it will certainly exhaust its bandwidth. Are there technologies that allow using your own clients as peers for the distribution of those events?
You said your clients are browsers, so the server cannot open connections to the clients on its own. But the clients can ask the server via AJAX whether there are new events. This technique is called long polling; see http://techoctave.com/c7/posts/60-simple-long-polling-example-with-javascript-and-jquery
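A bare-bones long-polling loop might look like the sketch below; the /events endpoint is a hypothetical server route that holds the request open until an event is available (or a timeout elapses) and then responds with JSON:

```javascript
// Minimal long-polling client. Each request stays pending on the
// (hypothetical) /events endpoint until the server has something to say,
// then we immediately re-issue the request.
async function pollEvents() {
  while (true) {
    try {
      const response = await fetch('/events');
      if (response.ok) {
        const events = await response.json();
        events.forEach(e => console.log('received event:', e));
      }
    } catch (err) {
      // Network hiccup: back off briefly before reconnecting.
      await new Promise(resolve => setTimeout(resolve, 2000));
    }
  }
}

pollEvents();
```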
Try http://socket.io/. It lets you very easily target a specific client (browser), a specific group of clients (a room), or everyone connected, and push data to them so that the frontend JS can then handle it.
I don't know your backend technology, but it ties in very nicely with Node.js.
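A minimal sketch of the room-based broadcast described above, assuming a Node.js server with socket.io installed (the event names and port are made up for illustration):

```javascript
// Server side (Node.js). Clients join a room for the event stream they
// care about; the server broadcasts each new event to that room only.
const { Server } = require('socket.io');
const io = new Server(3000); // arbitrary example port

io.on('connection', (socket) => {
  socket.on('subscribe', (topic) => {
    socket.join(topic); // rooms are created on demand
  });
});

// Wherever events are produced, push them to every subscriber of the topic:
function publishEvent(topic, payload) {
  io.to(topic).emit('event', payload);
}

// Client side (browser):
//   const socket = io('http://localhost:3000');
//   socket.emit('subscribe', 'news');
//   socket.on('event', (payload) => console.log(payload));
```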
SignalR documentation says that scaleout/backplane works well in the case of server-broadcast type load/implementation. However, I suspect that in the case of pure server broadcast it will cause duplicate messages to be sent to the clients. Consider the following scenario:
I have two instances of my hub sitting on two web servers behind a load balancer on my web farm.
The hub on each server implements a timer for database polling to fetch some updates and broadcast to clients in groups, grouped on a topic id.
The clients for a group/topic might be divided between the two servers.
Both the hub instances will fetch the same or overlapping updates from the database.
Now as each hub sends the updates to clients via the backplane, will it not result in duplicate updates sent to the clients?
Please advise.
The problem is not with SignalR, but with your database polling living inside your hubs. A backplane deals correctly with broadcast replication, but if you add another responsibility to your hubs, it's a different story. That polling is the part that duplicates messages, not SignalR, because you now have N pollers, each doing a broadcast across all server instances.
You could, for example, move that logic out of the hubs into a separate component, and let just one instance of your server applications use this new piece to generate the messages by polling, perhaps using a piece of configuration to decide which instance. You would then send messages only from there, and SignalR's backplane would take care of the replication. It's just a very basic suggestion and it could be done differently, but the key point is that your poller should not be replicated, and that's not directly related to SignalR. A sketch of the idea follows below.
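The same separation, sketched here with Node.js and Redis pub/sub standing in for the backplane (the channel name and the fetchUpdatesFromDb helper are hypothetical), just to show the shape: exactly one process polls, and every web server instance subscribes and relays:

```javascript
// Poller: run exactly ONE instance of this process. It polls the database
// and publishes each update to a Redis channel.
const { createClient } = require('redis');

async function runPoller() {
  const pub = await createClient().connect();
  setInterval(async () => {
    const updates = await fetchUpdatesFromDb(); // hypothetical DB query
    for (const update of updates) {
      await pub.publish('topic-updates', JSON.stringify(update));
    }
  }, 5000);
}

// Web servers: EVERY instance runs this, relaying each published update
// to its own locally connected clients (the backplane's job).
async function runSubscriber(broadcastToLocalClients) {
  const sub = await createClient().connect();
  await sub.subscribe('topic-updates', (message) => {
    broadcastToLocalClients(JSON.parse(message));
  });
}
```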
It's also true that polling might not be the best way to deal with your scenario, but IMO that would be answering a different question.
After learning about some great features of WebRTC, I thought of using WebRTC for one-to-one audio/video calls in my web application. The web application is for many organizations/entities of a category who can register and record several entries daily for their internal work and about their clients. The clients of these individual organizations/entities also have access to the web application to access their details.
The purpose of using WebRTC is communication between clients and organizations, and also daily inquiries from new people to these organizations about products and services.
While going through articles on Google and elsewhere, I found that broadcasting or one-to-many calls requires very high bandwidth unless you make use of a media server.
So what is the case for one-to-one calls?
Will it affect the performance of the web application, or cause any critical situation, if several users are making one-to-one audio/video calls to each other simultaneously as a routine?
The number of users will be very large, and users will be recording several entries daily as their routine work. That much is manageable and the application will run smoothly, but I am not sure about WebRTC, which is new to me. Will it require a very high hosting plan? Is using WebRTC suitable or advisable for this scenario?
WebRTC is by its nature peer-to-peer, meaning that the streaming data is handled CLIENT side. All decoding, encoding, ICE candidate gathering/negotiation, and media encrypting/transmitting happen on the client side, not on the server side. So you will be providing the pages, the client-side JS, and some data exchange (session negotiation signalling), but all in all it is not a huge amount of work. It should be easily handled without having to worry about your host machine being overworked.
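For context, the server's part is limited to relaying session descriptions and ICE candidates between the two browsers. A bare-bones caller-side sketch, where sendToPeer and onMessageFromPeer are hypothetical stand-ins for whatever signalling transport you pick:

```javascript
// Hypothetical signalling hooks: wire these to a WebSocket, socket.io,
// SignalR, or any other channel that relays JSON between the two browsers.
function sendToPeer(msg) { /* relay msg to the other browser */ }
function onMessageFromPeer(handler) { /* call handler with relayed msgs */ }

const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// Forward our ICE candidates to the remote peer as they are gathered.
pc.onicecandidate = (event) => {
  if (event.candidate) sendToPeer({ type: 'candidate', candidate: event.candidate });
};

async function startCall(localStream) {
  localStream.getTracks().forEach(track => pc.addTrack(track, localStream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ type: 'offer', sdp: pc.localDescription });
}

onMessageFromPeer(async (msg) => {
  if (msg.type === 'answer') {
    await pc.setRemoteDescription(msg.sdp);
  } else if (msg.type === 'candidate') {
    await pc.addIceCandidate(msg.candidate);
  }
});
```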
All that said, here are the performance concerns that could POSSIBLY affect your hosting server.
Signalling: session startup, negotiation, and teardown. This is very minimal (only some JSON data at the beginning of a session). It should not be much of a burden, but you should be aware that if 1000 sessions start at the same time, you will have a queue of messages to direct to the right parties. How you determine the parties, how you forward the messages, and what work you do server side all affect performance. If written smartly (how you store sessions, how you forward messages, etc.), it should not be a terrible burden. This could easily be done with SignalR, since you are on ASP.NET, or with a separate server running Node.js (or the same box; it does not matter) if you so desired.
RTP TURN relay, if needed. This will probably be a different server (or the same one as your hosting server if you want). For SOME connections a TURN server is needed, and any production-ready WebRTC solution should take this into account; here is a good open source TURN server. Bandwidth usage here can be very high, as RTP packets are sent to this server and then forwarded to the other peer in the connection (see the configuration sketch after this list).
If you are recording the streams, you may have increased hosting traffic depending on how you implement it. Firefox supports client-side recording of the streams, but Chrome does not (they say it is currently in the works). You could use existing JS libraries to record the feeds client side and then push them anywhere you want. You could also push all the data through a media server that will mux, demux, and forward the data to be recorded anywhere you like; Janus-Gateway's videoroom is a good lightweight example of a media server.
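As an aside on the TURN relay point above, pointing clients at a relay is just a matter of the ICE server configuration; the hostname and credentials below are placeholders:

```javascript
// Hypothetical TURN/STUN configuration for RTCPeerConnection. The browser
// falls back to relaying media through the TURN server only when a direct
// peer-to-peer path cannot be established.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: 'turn:turn.example.com:3478',
      username: 'demo-user',        // placeholder credentials
      credential: 'demo-password',
    },
  ],
});
```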
Client side is a different story.
There are higher-level concerns in the JavaScript. If you use one of the recording JS libraries, this is especially evident, as they do canvas captures numerous times a second, which is a heavy hit and degrades the user experience.
CPU utilization by the browser will increase as the quality of the video being streamed increases. This is rather obvious as HD video frames take more CPU power to encode/decode than SD frames.
Client-side bandwidth usage can also be an issue. Chrome and Firefox try to adjust the bitrate of each video/audio feed dynamically, but the video bitrate can go all the way up to 2 Mbps. You can cap this in Chrome (by adding an attribute to the SDP) but not in Firefox (as of the last time I checked).
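The SDP attribute in question is a b=AS bandwidth line inserted into the video media section before the description is applied; a rough sketch, with 500 kbps as an arbitrary example cap (it assumes each media section carries its own c= line, as Chrome's SDP does):

```javascript
// Insert a b=AS (kbps) bandwidth cap into the video m-section of an SDP
// blob. Per SDP ordering rules, b= belongs right after the c= line.
function capVideoBitrate(sdp, kbps) {
  const out = [];
  let inVideo = false;
  for (const line of sdp.split('\r\n')) {
    out.push(line);
    if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
    if (inVideo && line.startsWith('c=')) out.push('b=AS:' + kbps);
  }
  return out.join('\r\n');
}

// Usage (before applying the description):
//   const capped = { type: answer.type, sdp: capVideoBitrate(answer.sdp, 500) };
//   await pc.setLocalDescription(capped);
```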
All that I know of are HTTP GET and POST requests.
On our school servers, we can play some games but not others. It seems that we cannot play any games where there is real-time information. So I'm thinking that real-time information requires a different method for the game to communicate with the server, and this method is blocked.
What other ways are there for webpages to communicate with servers?
Using GET/POST requests is the old-school way to communicate with a web server. Once a webpage has been loaded, it can use Java/Flash applets, AJAX, or WebSockets to communicate with the web server over a separate connection without having to leave the current page. This allows for bidirectional communication and real-time updating of the webpage content.
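For instance, a WebSocket gives the page a persistent two-way channel; a minimal browser-side sketch (the ws://example.com/game endpoint and message shape are made up):

```javascript
// Minimal WebSocket client: one persistent, bidirectional connection over
// which both the page and the server can send messages at any time.
const socket = new WebSocket('ws://example.com/game'); // hypothetical endpoint

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({ type: 'join', player: 'alice' }));
});

socket.addEventListener('message', (event) => {
  const update = JSON.parse(event.data);
  console.log('real-time update from server:', update);
});
```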
Consider a web application such as Google Chat, where the servers serve hundreds of millions of clients simultaneously. In such application, the servers have to push notifications to clients at near real time (in the chat example - incoming messages, presence notification etc.).
How do they implement it? A significant part of the clients are browser based. I suppose polling would overload even Google's servers. So, are they using something like Comet? If so, do they need to allocate a server for every 65536 clients (the supposed maximum number of TCP connections per machine)? I understand that there is a way to circumvent this limitation, but I don't know how it's implemented.
Chat is not handled by a single application / piece of hardware / instance.
They definitely use many instances with load balancing, which allows the chat system to scale horizontally. It might be dedicated per region or a single clustered system (I believe it is dedicated per region but still clustered within each region).
Also, you can have as many connections as the hardware and network will handle, not just 64k.
The 64k limit (actually a bit less than that) concerns the ports one machine can bind for outgoing client sockets, not the connections a server can accept: each TCP connection is identified by the four-tuple of source IP, source port, destination IP, and destination port, so a single listening port can serve far more than 65536 clients.
In the case of Google, and given the browsers they support, they definitely use a mix of technologies to communicate, selecting the most capable transport the browser supports. That can be long polling, sockets, or even the oldest one: simple AJAX.
Likewise, Facebook chat is based on Erlang, and there are many examples of Erlang systems holding more than a million connections.
I don't know how Google handles this, and they probably won't tell us. But today you can use HTTP streaming, WebSockets, or long polling to build such an application. To give you an example, the Atmosphere framework is a tool to build real-time, efficient, and scalable web applications.
I'm developing a multi-player game and I know nothing about how to connect one client to another via a server. Where do I start? Are there any whizzy open source projects which provide the communication framework into which I can drop my message data, or do I have to write a load of complicated multi-threaded socket code? Does the picture change at all if the clients are running on phones?
I am language agnostic, although ideally I would have a Flash or Qt front end and a Java server, but that may be being a bit greedy.
I have spent a few hours googling, but the whole topic is new to me and I'm a bit lost. I'd appreciate help of any kind - including how to tag this question.
If latency isn't a huge issue, you could just implement a few web services to do message passing. This would not be as slow as you might think, and it is easy to implement across languages. The downside is that the client has to poll the server to get updates, so you could be looking at a few hundred ms to get from one client to another.
You can also use the built-in Flex messaging interface. There are provisions there to allow client-to-client interactions.
Typically game engines send UDP packets because of latency. The fact is that TCP is just not fast enough, and reliability is less of a concern than speed.
Web services would compound the latency issues inherent in TCP due to additional overhead. Further, they would eat up memory depending on the number of expected players. Finally, they carry a large amount of payload overhead that you just don't need (XML, anyone?).
There are several ways to go about this. One way is centralized messaging (client/server). This means that you would have a Java server listening for UDP packets from the clients and rebroadcasting them to the relevant users (a toy sketch of this variant follows below).
A second way is decentralized (peer to peer). A client registers with the server to state what game/world it's in, and gets back a list of the other clients in that world. The server maintains that list and notifies the other clients when people join or drop out.
From that point forward, clients broadcast UDP packets directly to the other users.
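A toy illustration of the centralized variant, sketched with Node's dgram module rather than the Java server the answer mentions (the port and registration scheme are made up):

```javascript
// Toy centralized relay: remembers every client it has heard from and
// rebroadcasts each incoming UDP packet to all the others.
const dgram = require('dgram');
const server = dgram.createSocket('udp4');
const clients = new Map(); // "ip:port" -> { address, port }

server.on('message', (msg, rinfo) => {
  const key = `${rinfo.address}:${rinfo.port}`;
  clients.set(key, { address: rinfo.address, port: rinfo.port });

  // Rebroadcast to everyone except the sender.
  for (const [otherKey, peer] of clients) {
    if (otherKey !== key) {
      server.send(msg, peer.port, peer.address);
    }
  }
});

server.bind(41234); // arbitrary example port
```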
If you are looking for a high-performance communication framework, take a look at the ACE C++ framework (it has Java bindings).
The official website is: http://www.cs.wustl.edu/~schmidt/ACE-overview.html
You could also look into Flash Media Interactive Server, or, if you want a Java implementation, Wowza or Red5. Those use AMF and provide native functionality for SharedObjects, including syncing of the SharedObjects among connected clients.
Those aren't peer to peer though (yet; it's coming soon, I hear). They use centralized messaging managed by the server.
Good luck