I am trying to understand the networking details of Chromecast. Consider this case: there is a YouTube server (S), a hand-held device (H) and a Chromecast (C). These are the steps I would follow.
1) Initially, I would pair H and C, either automatically or explicitly.
2) I would play, say, a YouTube video on my hand-held device (H). H will form a TCP session with the server S.
3) Now, I cast this video to my TV.
Questions
A) Is there a separate TCP session between the server and the Chromecast, or does the hand-held device mirror whatever it gets from the server?
B) Surprisingly, even after switching off the hand-held device, the Chromecast kept streaming until completion. So I expect some kind of TCP state between the server and the Chromecast. If so, who initiates this connection?
D) How does the hand-held device know about the current streaming state?
Thanks
A) If you initiated a casting session with the cast button in the app, then yes -- there is a separate session between the server and the Chromecast. Your hand-held device tells the Chromecast how to discover the server, how to request the videos, etc., but the Chromecast sends the requests for the media assets directly to their source. There is no mirroring going on. (Keep in mind that Android CAN mirror to the Chromecast, as can Chrome with tab casting, but this is different from using the cast button.)
B) As stated above, the hand-held device provides instructions to the Chromecast (usually in the form of an app ID that the Chromecast resolves by asking Google's servers where the web app is hosted), along with the URLs of the media. But once media playback starts, the playback can continue even if you turn off the sender device (this is one of the big benefits of the Chromecast; in fact, it lets you come along with a different device and connect to the session if desired).
D) The Chromecast and the hand-held device also maintain their connection, so that the Chromecast can send back the status of playback, and so that the user can issue playback control instructions that tell the Chromecast to pause/skip/etc. the media session.
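To make the division of labour concrete, here is a rough sender-side sketch using the third-party pychromecast library (a Python illustration, not what the YouTube app itself does; the exact API can differ between library versions, and the media URL is a placeholder):

```python
import pychromecast

# Discover Chromecasts on the LAN (mDNS) and connect to the first one found.
chromecasts, browser = pychromecast.get_chromecasts()
cast = chromecasts[0]
cast.wait()

# The sender only hands over a URL and a content type; the Chromecast then
# opens its own connection to that server and fetches the media itself.
mc = cast.media_controller
mc.play_media("http://media.example.com/video.mp4", "video/mp4")
mc.block_until_active()

# The sender keeps its control connection to the Chromecast and can read the
# playback status back (this is answer D above).
print(mc.status.player_state)
```

Turning the sender off after the load request does not stop playback, because the media connection belongs to the Chromecast, not to the phone.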
Related
We have an already-running MQTT setup for communication between smart home devices and a remote server, used for remotely controlling the devices. Now we want to integrate our devices with Google Home and Alexa. These two use HTTP for communication with third-party device clouds.
I have implemented this for Google Home: after the request reaches the device cloud, it is converted to MQTT and sent to the smart home device. The device cloud then waits a few seconds for a reply from the device. If no reply is received within the predefined time, it sends a failure HTTP response to Google Home; otherwise it sends back the received reply.
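For reference, a stripped-down sketch of what this bridge currently looks like (Flask and paho-mqtt here; the topic names and the 5-second timeout are placeholders):

```python
import json
import threading
import uuid

import paho.mqtt.client as mqtt
from flask import Flask, jsonify, request

app = Flask(__name__)
pending = {}  # request_id -> (event, reply holder)

mqttc = mqtt.Client()  # paho-mqtt 1.x style constructor

def on_message(client, userdata, msg):
    # Device replies arrive here and wake up the waiting HTTP handler.
    reply = json.loads(msg.payload)
    entry = pending.get(reply.get("request_id"))
    if entry:
        entry[1]["reply"] = reply
        entry[0].set()

mqttc.on_message = on_message
mqttc.connect("mqtt.example.local")   # assumed broker address
mqttc.subscribe("devices/+/reply")    # assumed reply topic layout
mqttc.loop_start()

@app.route("/smarthome/execute", methods=["POST"])
def execute():
    cmd = request.get_json()
    request_id = str(uuid.uuid4())
    done, holder = threading.Event(), {}
    pending[request_id] = (done, holder)

    # Forward the Google Home request to the device over MQTT.
    mqttc.publish(f"devices/{cmd['device_id']}/command",
                  json.dumps({"request_id": request_id, **cmd}))

    # Wait a few seconds for the device to answer; report failure otherwise.
    if done.wait(timeout=5):
        return jsonify(holder["reply"])
    pending.pop(request_id, None)
    return jsonify({"status": "ERROR", "errorCode": "deviceOffline"}), 504
```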
Is there a better way to handle this? Since this is a commercial project I want to get this implemented in the correct way.
Any help will be appreciated.
Thanks
We're using AWS IoT, and I think it's a good way to handle IoT problems. Below are some of its features:
Certificates: each device is a "thing" with its own attached policy, which takes care of security.
Shadow: a JSON document holding the device's current state. The Device Shadow service acts as an intermediary, allowing devices and applications to retrieve and update a device's shadow.
Serverless: we use Lambda to build the skill and the servers, which keeps things flexible.
Rules: we use them to intercept MQTT messages so that we can report device state changes to Google and Alexa. By the way, for Google, a Report State implementation has become mandatory for all partners to launch and certify.
You can choose either MQTT or HTTP.
It's time-consuming but totally worth it! We've sold 8k+ products; so far, so good.
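For reference, reporting a device's state into its shadow is just an MQTT publish on the thing's reserved $aws topic; a minimal sketch (the endpoint, certificate paths, and thing name are placeholders):

```python
import json
import paho.mqtt.client as mqtt

THING = "living-room-lamp"   # hypothetical thing name

client = mqtt.Client()       # paho-mqtt 1.x style constructor
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="private.pem.key")
client.connect("your-endpoint.iot.us-east-1.amazonaws.com", 8883)

# The Device Shadow service listens on reserved $aws topics; publishing a
# "reported" state here updates the shadow document for this thing, and an
# IoT Rule can intercept the same message to trigger Report State.
client.publish(f"$aws/things/{THING}/shadow/update",
               json.dumps({"state": {"reported": {"on": True, "brightness": 70}}}))
```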
At least Google Home doesn't really require synchronous operation there. Once you get the EXECUTE intent via their API, you just need to send it to your device (it doesn't necessarily have to report its state back).
Once its state changes, you either store it for later QUERY intents or provide this data to the Google Home Graph server using the "Report State" interface.
I'm developing gBridge.io as a project providing quite similar functionality (but for another target group). There, it is strictly split as described: an HTTP endpoint listener reacts to commands from Google Home and sends them to a cache, from which they are eventually forwarded to the matching MQTT topic. Another worker listens to the users' MQTT topics and stores their information in the cache, so it can be sent back to Google when required.
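A stripped-down sketch of that split, assuming Flask and paho-mqtt (the topic names and the in-memory cache are illustrative only; gBridge.io's actual implementation may differ):

```python
import json
import threading

import paho.mqtt.client as mqtt
from flask import Flask, jsonify, request

app = Flask(__name__)
cache = {}                 # device_id -> last reported state
lock = threading.Lock()

def on_state(client, userdata, msg):
    # Worker side: e.g. topic "devices/<device_id>/state", payload {"on": true}
    device_id = msg.topic.split("/")[1]
    with lock:
        cache[device_id] = json.loads(msg.payload)

mqttc = mqtt.Client()      # paho-mqtt 1.x style constructor
mqttc.on_message = on_state
mqttc.connect("mqtt.example.local")   # assumed broker
mqttc.subscribe("devices/+/state")
mqttc.loop_start()

@app.route("/smarthome/query", methods=["POST"])
def query():
    # HTTP side: answer QUERY intents from the cache, never from the device.
    device_id = request.get_json()["device_id"]
    with lock:
        state = cache.get(device_id, {"online": False})
    return jsonify(state)
```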
High Level Description
Let's say I have a client program (an iOS app in my specific case) that should communicate with a server program running on a remote host. The system should work as follows:
The server has a set of indexed audio files and exposes them to the client using the indexes as identifiers
The client can query the server for an item with a given identifier and the server should stream its contents so the client can play it in real time
The data streamed by the server should only be used by the client itself, i.e. someone sniffing the traffic should not be able to interpret the contents and the user should not be able to access the data.
From my perspective, this is a simple implementation of what Spotify does.
Technical Questions
How should the audio data be streamed between server and client? What protocols should be used? I'm aware that using something on top of TLS will protect the information from someone sniffing the traffic, however it won't protect it from the user himself if he has access to the encryption keys.
The data streamed by the server should only be used by the client itself, i.e. someone sniffing the traffic should not be able to interpret the contents…
HTTPS is the best way for this.
…and the user should not be able to access the data.
That's not possible. Even if you had some sort of magic to prevent capture of decrypted data (which isn't possible), someone can always record the audio output, even digitally.
From my perspective, this is a simple implementation of what Spotify does.
Spotify doesn't do this. Nobody does, and nobody can. It's impossible. If the client must decode data, then you can't stop someone from modifying how that data gets decoded.
What you can do
Use HTTPS
Sign your URLs so that the raw media is only accessible for short periods of time. Everyone effectively gets their own URL to the media. (Check out how AWS S3 handles this for an excellent example; see the sketch after this list.)
If you're really concerned, you can watermark your files on the fly, encoding an ID within them so that, should someone leak the media, you can go after them based on their account data. This is expensive, so make sure you really have a business case for doing so.
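Here is a minimal sketch of the signed-URL idea above, using boto3 against S3 (the bucket and key names are placeholders; a home-grown HMAC scheme on your own server works the same way):

```python
import boto3

s3 = boto3.client("s3")

# Presigned URL: anyone holding it can fetch the object, but only briefly.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-audio-bucket", "Key": "tracks/42.m4a"},
    ExpiresIn=300,   # the link stops working after 5 minutes
)

print(url)   # hand this URL to the authenticated client over HTTPS
```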
I'd like my Pinocc.io lead scout to make a POST request (e.g. to inform a remote service of an event that has been triggered).
Note that I don't want to listen to a constant stream of results (as detailed here), as I don't want to be constantly connected to HQ (I'm going to enable the Wi-Fi connection only when required, to minimize battery usage), and the events I'm detecting are infrequent.
I would have thought that this is a very common use case, yet I can find no examples of the lead scout POSTing any messages.
I posted the same message directly on the Pinoccio website, and I got this answer from an admin:
Out of the gate, that's not supported via HQ, mainly because, to get real-time performance between API/HQ and a Lead Scout, it makes most sense to leave a TCP socket open continually and transfer data that way. HTTP, as you know, requires a connection, setup, transfer, and teardown upon each request.
However, that doesn't mean you can't get it working. In fact, you can do both if you wanted: leave the main TCP socket connected to HQ, and have a separate TCP client socket connect to any site/server you want and send whatever you like. It will require a custom Bootstrap, but you can then expose any aspect of that functionality to HQ/ScoutScript directly.
If you take a look at this code, that's the client object you'd use to open an HTTP connection.
So, in a nutshell, the lead scout cannot make a POST request out of the box. To do so you'll need to create a custom bootstrap (e.g. using the Arduino IDE).
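For illustration only, this is roughly the connect-send-teardown sequence that separate client socket has to perform for each POST (shown in Python rather than the Arduino/ScoutScript code the custom bootstrap would actually contain; the host, path, and payload are placeholders):

```python
import socket

body = b'{"event": "door_opened"}'
req = (
    b"POST /events HTTP/1.1\r\n"
    b"Host: api.example.com\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
    b"Connection: close\r\n\r\n" + body
)

# Open a TCP client socket, write the whole HTTP request, read the response,
# then let the connection close -- the setup/teardown cost mentioned above.
with socket.create_connection(("api.example.com", 80)) as s:
    s.sendall(req)
    print(s.recv(4096).decode(errors="replace"))   # status line + headers
```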
I have the system below:
one server
20 clients (could increase)
The clients display a webpage that hosts the Flex SWF file. It displays a list and polls the server every 1 second for any changes in the data; if there are any, it refreshes the data.
The polling is handled via a URL that returns a JSON object.
Now I want to have a web application that I can use to see the current status of all the monitors on the network.
Any smart solution?
You could potentially have all the monitor apps connect to a NetConnectionGroup along with your status-check app. They could then post pings and health checks into the group, and if your status-check app is also connected, it could report those statuses. (This of course doesn't help if one of the monitors has crashed or never connected, but you'll have that problem with any solution!)
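If a Flash group isn't an option, the same ping/health-check idea can be sketched over plain HTTP, which your clients already use for polling; a rough illustration (the endpoint names and the 5-second staleness window are assumptions, not part of the existing setup):

```python
import time

from flask import Flask, jsonify

app = Flask(__name__)
last_seen = {}   # monitor_id -> timestamp of last ping

@app.route("/ping/<monitor_id>", methods=["POST"])
def ping(monitor_id):
    # Each monitor POSTs here periodically as its heartbeat.
    last_seen[monitor_id] = time.time()
    return "", 204

@app.route("/status")
def status():
    # The status web app reads this to see which monitors reported recently.
    now = time.time()
    return jsonify({mid: ("up" if now - ts < 5 else "stale")
                    for mid, ts in last_seen.items()})
```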
Many times clients ask for features like instant messaging (IM) and other client-to-client (P2P) communication for their web apps. Typically, how is this done in normal web browsers? For example, I've seen demos of Google Wave (and Gmail) that are able to IM from a regular browser. Is this done via HTTP? Or does XmlHttpRequest (AJAX) provide the necessary backend for such communication?
More than anything, I wonder how a server can "wake up" the remote client, let's say to deliver an IM. Or does the client have to keep polling the message server for new IMs?
Typically the browser will poll the server for new messages. One approach often used to make this more efficient is the 'long poll' (see also this link): the server responds immediately if it has anything; otherwise, it keeps the connection open for a while. If a message comes in, it immediately wakes up and sends it; otherwise it comes back with a 'nope, check back' after a few tens of seconds. The client then immediately reconnects to go back into the long-polling state.
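A bare-bones sketch of that long-poll behaviour (Flask here; the endpoint name and the 30-second hold are assumptions):

```python
import queue

from flask import Flask, jsonify

app = Flask(__name__)
inbox = queue.Queue()    # new IMs get put() here by the rest of the app

@app.route("/poll")
def poll():
    try:
        # Hold the request open for up to 30 s instead of answering instantly.
        msg = inbox.get(timeout=30)
        return jsonify({"message": msg})
    except queue.Empty:
        return jsonify({"message": None})   # the "nope, check back" response

# Client side: loop forever, re-requesting as soon as each poll returns.
#
#   import requests
#   while True:
#       data = requests.get("http://server.example/poll", timeout=35).json()
#       if data["message"]:
#           handle(data["message"])
```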