Developer Remote Collaboration Multiple Monitor Screen Share - collaboration

Does anybody have any recommendations for a good way for developers to collaborate in real time via screen share with multiple displays? Essentially what we need is something identical to RDP (because of its multi-monitor support in a maximized window) but with audio.
Needs:
Screen share multiple monitors (assuming both developers have the same monitor layout)
Easily interact and annotate
Easy to quickly launch
Easy to talk via audio
Needs to be secure and from a trustworthy US-based vendor (must have very low risk of a trojan, e.g. SolarWinds)

Integrate custom device into Google Home

My idea is to have individually addressable RGBW LED strips in all my rooms. For the sake of practice and interest, I do not simply want to buy some controller; I want to start this project with some custom self-built infrastructure, consisting of some Arduinos and/or Raspberry Pis. My initial idea was to just set up a simple local server on a Raspberry Pi (which controls the Arduinos connected to the LEDs) and build myself an app to control the lighting. That part is clear to me and should not be a problem, but I thought it might be a plus to integrate my devices directly into Google Home so I do not need any extra app.
I read through the Smart Home platform documentation, but things are not 100% clear to me. I read about requirements like a public OAuth2 server. I was wondering if it is possible to get this working without setting up any server that has to be publicly reachable, because otherwise I won't spend time on that topic.
If you want to control your room devices from a smartphone and are satisfied with local operation from a few meters away, then you should consider BLE on both the phone and the devices.
Obviously, you would need to write your own app, but luckily with BLE you can use publicly available apps such as LightBlue for the dev phase and maybe even for later use (I have not looked into that lately).
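If you prototype in a browser rather than a native app, Chrome's Web Bluetooth API can drive such a peripheral directly. A minimal TypeScript sketch, assuming a hypothetical LED service whose firmware accepts one byte per RGBW channel (the UUIDs and payload format are placeholders, not anything from the question):

    // Web Bluetooth sketch (Chrome) for a custom BLE LED peripheral.
    // The UUIDs are placeholders: they must match what your firmware advertises.
    const LED_SERVICE_UUID = '0000ffe0-0000-1000-8000-00805f9b34fb'; // hypothetical
    const LED_CHAR_UUID    = '0000ffe1-0000-1000-8000-00805f9b34fb'; // hypothetical

    async function setColor(r: number, g: number, b: number, w: number): Promise<void> {
      // requestDevice must be triggered by a user gesture (e.g. a button click).
      const device = await navigator.bluetooth.requestDevice({
        filters: [{ services: [LED_SERVICE_UUID] }],
      });
      const server = await device.gatt!.connect();
      const service = await server.getPrimaryService(LED_SERVICE_UUID);
      const characteristic = await service.getCharacteristic(LED_CHAR_UUID);
      // One byte per channel; your firmware defines the real format.
      await characteristic.writeValue(new Uint8Array([r, g, b, w]));
    }

On the Arduino side the matching service would be exposed with whatever BLE library your board supports; the firmware, not this sketch, defines the actual byte layout.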

What is the current best practice for synchronising streaming video between two clients? (Inter-destination multimedia synchronization)

For example, 3 users are streaming a video from a remote URL. 1 user is the master, and can play, pause, and set the current playback position. They are talking to each other while they watch (VoIP), so their video streams need to be synced.
A solution off the top of my head is that the master broadcasts high-level actions (play, stop, scrub position). For minor deviations, the clients could regularly ping the master to get its playback position, and apply a speed factor to their playback to speed up or slow down and stay in sync.
I can find a few papers on the subject (eg, https://www.sciencedirect.com/science/article/pii/S0306437908000525, https://link.springer.com/article/10.1007/s00530-012-0278-9) but nothing in terms of example projects or community discussion.
Any guidance would be appreciated.
Syncing video across clients is not easy, but there are some examples.
This is an open source client based solution:
https://github.com/Syncplay/syncplay
And these are a couple of browser based ones:
https://pdfs.semanticscholar.org/c0bc/b42b63b6d88ebbb5fb4c6686662300d3611b.pdf
https://github.com/povdocs/sync-player
As you suggest, some sort of feedback, either to a master or to a syncing server, along with responses suggesting sync adjustments, is the most common approach.
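As a concrete illustration of the speed-factor idea from the question, here is a minimal TypeScript sketch; the threshold and gain values are invented tuning numbers, and the transport that delivers the master's position (e.g. a WebSocket ping) is assumed:

    // Drift correction for a follower client. Call correctDrift() whenever a
    // position report arrives from the master. Small drifts are absorbed by
    // nudging playbackRate; only large ones trigger a visible hard seek.
    const HARD_SEEK_THRESHOLD = 2.0; // seconds (assumed tuning value)
    const RATE_GAIN = 0.05;          // rate adjustment per second of drift (assumed)

    function correctDrift(video: HTMLVideoElement, masterPosition: number): void {
      const drift = masterPosition - video.currentTime;
      if (Math.abs(drift) > HARD_SEEK_THRESHOLD) {
        video.currentTime = masterPosition; // too far off: jump
        video.playbackRate = 1.0;
      } else {
        // Play slightly faster when behind the master, slower when ahead,
        // clamped so the speed change stays unobtrusive to the viewer.
        video.playbackRate = 1.0 + Math.max(-0.1, Math.min(0.1, drift * RATE_GAIN));
      }
    }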

Is it possible to make my own network requests to a “smart” device without an API?

What I'm asking here may not be possible at all, due to my lack of knowledge about networks.
I want to start playing around with IoT devices in my house. I would love to be able to control various objects at the touch of a button on my phone.
I have bought a "smart" plug outlet which enables me to turn the power on or off via an app over my home WiFi, however I want to be able to build my own app and control the device exactly how I want to, just for fun.
This app I'm using at the moment comes with the outlet and as far as I can see, it was not meant to be customizable in any way.
My question is, is it possible to figure out the requests being made to and from the device, and create my own API to work with it?
I am a software developer day-to-day however my knowledge in networks is very basic. Any help is really appreciated!
If there is no documented API, you can, in theory, reverse engineer the API using a sniffer. If you control the device from your phone, you can install a sniffer on the phone and see the incoming and outgoing requests. The bigger problem is whether the device and the app implement some kind of security mechanism: the protocol may be encrypted, so you won't be able to understand the network traffic, or there may be some kind of key that lets the device accept orders only from a specific app.
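If the traffic does turn out to be plain HTTP on the LAN (as it is for some cheap plugs), replaying a captured request can be as simple as the sketch below; the address, path, and JSON body are invented placeholders, so substitute whatever your capture actually shows:

    // Hypothetical replay of a sniffed request to a smart plug.
    // Everything here (IP, path, payload shape) is a placeholder.
    async function setPlugState(on: boolean): Promise<void> {
      const response = await fetch('http://192.168.1.50/api/relay', { // placeholder address
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ state: on ? 'on' : 'off' }), // placeholder payload
      });
      if (!response.ok) {
        throw new Error(`Plug returned HTTP ${response.status}`);
      }
    }

If the sniffed payload looks like gibberish even on an unencrypted transport, you are probably in the encryption/keying case described above.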
So my suggestion, if you are not experienced with this kind of work, is to approach the device vendor and ask them for the API; some vendors would be happy to expose it if you publish your code and let other customers use it to expand their product.

Does integrating WebRTC one-to-one audio/video calls affect the performance of a web application

After learning about some great features of WebRTC, I thought of using WebRTC one-to-one audio/video calls in my web application. The web application is for many organizations/entities of a category who can register and record several entries daily for their internal work and about their clients. The clients of these individual organizations/entities also have access to the web application to access their details.
The purpose of using WebRTC is communication between clients and organizations, and also daily inquiries from new people to these organizations about products and services.
While going through articles on Google etc., I found that broadcasting or one-to-many calls require very high bandwidth if we don't make use of a media server.
So what is the case for one-to-one calls?
Will it affect the performance of the web application, or cause any critical situation, if several users are making audio/video calls (one-to-one) to each other simultaneously as a routine?
The number of users will be very large, and users will be recording several entries daily as their routine work. That load is manageable and the application runs smoothly, but I am not sure about WebRTC. Will it require a very expensive hosting plan? Is using WebRTC suitable or advisable for this scenario?
WebRTC is by its nature peer-to-peer, meaning that the streaming data is handled CLIENT side. All decoding, encoding, ICE candidate gathering/negotiation, and media encrypting/transmitting will happen on the client side and not on the server side. So you will be providing the pages, client-side JS, and some data exchange (session negotiation signalling), but all in all it is not a huge amount of work. It should be easily handled without having to worry about your host machine being overworked.
All that said, here are the only performance concerns that would POSSIBLY affect your hosting server.
Signalling session startup, negotiation, and tear down. This is very minimal (only some JSON data at the beginning of a session). This should not be too much of a burden, but you should be aware that if 1000 sessions start at the same time, you will have a queue of messages to direct to the needed parties. How you determine the parties, forward the messages, and what work you do server side could all affect performance. If written smartly (how to store sessions, how to forward messages, etc.), it should not be a terrible burden. This could easily be done with SignalR since you are on ASP.NET, or you could use a separate server running Node.js (or the same box, it does not matter) if you so desired.
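To make the relay role concrete, here is a minimal sketch using Node.js and the ws package; the { room, payload } message shape is illustrative rather than any standard, since the server just forwards opaque SDP/ICE blobs between peers:

    // Minimal signalling relay: clients join a room with their first message,
    // then everything they send is forwarded verbatim to the other peer(s).
    import { WebSocketServer, WebSocket } from 'ws';

    const rooms = new Map<string, Set<WebSocket>>();
    const wss = new WebSocketServer({ port: 8080 });

    wss.on('connection', (socket) => {
      let joined: string | null = null;
      socket.on('message', (data) => {
        const msg = JSON.parse(data.toString()) as { room: string; payload: unknown };
        if (!joined) {
          joined = msg.room;
          if (!rooms.has(joined)) rooms.set(joined, new Set());
          rooms.get(joined)!.add(socket);
        }
        // The server never inspects payload: it is an opaque SDP or ICE blob.
        for (const peer of rooms.get(joined)!) {
          if (peer !== socket && peer.readyState === WebSocket.OPEN) {
            peer.send(JSON.stringify(msg));
          }
        }
      });
      socket.on('close', () => { if (joined) rooms.get(joined)?.delete(socket); });
    });

With SignalR the room map would correspond to groups, but the forwarding job is the same.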
RTP TURN relay, if needed. This will probably be through a different server (or the same one as your hosting server if you want). For SOME connections, a TURN server is needed, and any production-ready WebRTC solution should take this into account; there are good open source TURN servers available. Bandwidth usage here could be very high, as RTP packets are sent to this server and then forwarded to the peer in the connection.
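On the client, pointing the browser at such a relay is just configuration; the TURN URL and credentials below are placeholders for your own deployment:

    // ICE configuration with a STUN server for discovery and a TURN relay
    // as fallback. Replace the turn: entry with your own server/credentials.
    const pc = new RTCPeerConnection({
      iceServers: [
        { urls: 'stun:stun.l.google.com:19302' },   // public STUN server
        {
          urls: 'turn:turn.example.com:3478',       // placeholder TURN server
          username: 'user',                         // placeholder credential
          credential: 'secret',
        },
      ],
    });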
If you are recording the streams, you may have increased hosting traffic depending on how you implement it. Firefox supports client-side recording of the streams but Chrome does not (they say it is in the works currently). You could use existing JS libraries to record the feeds client side and then push them anywhere you want. You could also push all the data through a media server that will mux, demux, and forward the data to be recorded anywhere you like. Janus-Gateway videoroom is a good lightweight example of a media server.
Client side is a different story.
There are higher-level concerns in the JavaScript. If you use one of the recording JS libraries, this is especially evident, as they do canvas captures numerous times a second, which is a heavy hit and would degrade the user experience.
CPU utilization by the browser will increase as the quality of the video being streamed increases. This is rather obvious as HD video frames take more CPU power to encode/decode than SD frames.
Client-side bandwidth usage can also be an issue. Chrome and Firefox try to modify the bitrate of each video/audio feed dynamically, but the video bitrate can go all the way up to 2 Mbps. You can cap this in Chrome (by adding an attribute in the SDP) but not in Firefox (last I checked) as of yet.
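The SDP attribute in question is a b=AS:<kbps> bandwidth line in the video media section. A minimal munging sketch, applied to the offer's SDP string before calling setLocalDescription (it assumes the video section carries its own c= line, which WebRTC offers normally do):

    // Insert a "b=AS:<kbps>" cap into the video section of an SDP string.
    // Per SDP ordering, b= must follow the c= line of the media section.
    function capVideoBitrate(sdp: string, kbps: number): string {
      const out: string[] = [];
      let inVideo = false;
      for (const line of sdp.split('\r\n')) {
        out.push(line);
        if (line.startsWith('m=')) inVideo = line.startsWith('m=video');
        if (inVideo && line.startsWith('c=')) out.push(`b=AS:${kbps}`);
      }
      return out.join('\r\n');
    }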

Does Flash Media Server require large bandwidth?

I'm wondering how media servers work. Do they require large bandwidth if you are doing, let's say, live streaming something like Ustream, and there are 10k people watching? Do you need a lot of bandwidth, or is it something like P2P?
I'm more on the client development side with Flash than server admin, but more than likely, yes, you would need a lot of bandwidth to have 10k people watching. The good thing is that with streaming video, you're only downloading the data you watch (unlike progressive download). More of an issue would be the number of concurrent connections you could handle per FMS install. 10k would probably require a lot more than 1 server running FMS apps. I'm currently working on a project where we are streaming from 2 installs (beyond the installations of FMS, I'm not sure how they load balanced it) with the hope of supporting up to something like 2k concurrent connections. I found this article to be pretty helpful (users + bandwidth stats):
http://www.adobe.com/devnet/flashmediaserver/articles/performance_tuning_webcasts.html
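For a rough sense of scale, a back-of-envelope calculation; the 500 kbps per-viewer figure is an assumption for a typical SD web stream, not a number from the article:

    // 10k concurrent viewers at an assumed 500 kbps each.
    const viewers = 10_000;
    const kbpsPerViewer = 500;                               // assumed stream bitrate
    const totalGbps = (viewers * kbpsPerViewer) / 1_000_000; // kbps -> Gbps
    console.log(`${totalGbps} Gbps aggregate egress`);       // 5 Gbps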
The part where "code" meshes with server administration can get pretty daunting (if you ask me)... and every client wants "YouTube but with X feature." At 1K a license plus bandwidth, this can get super pricey.
Depending on your needs, you may want to use a 3rd-party FMS company to handle your streaming (especially if it's just for a single event; you can get per-event pricing). Also, I recently used the justin.tv API to create a streaming video feed in Flex. It was pretty painless and all the bandwidth is on them :)
The good part is that once FMS is running, it's super easy to develop with in ActionScript :)
