Characterize network camera stream

I am trying to play a network camera stream in an application, but first I need to identify how to access the stream. Unfortunately, the manufacturer seems to prefer that I access the camera's local web page and use its built-in viewer, so there's no documentation on how to access the raw stream with another application.
For starters, I opened up the camera's web viewer and captured the connection traffic, trying to identify an address to point a player at. Here's the traffic that caught my attention (IP edited):
GET http://127.0.0.1/mpeg4 HTTP/1.1\r\n
So I pointed Chrome (with the VLC plugin installed) at 127.0.0.1/mpeg4 and got the VLC plugin. The tab stays "busy" downloading, but it never stops or plays the stream. My guess is that the plugin treats the stream as a file and waits for the end of the file before playing, which never comes.
Then I switched to VLC standalone and pointed it at the same address with the same results. No errors, but there's no buffering indicator or progress.
IE pointed at the same address wants to download a file called mpeg4.mpeg from 127.0.0.1, but again the download never finishes.
So with that backstory, my question is: How can I detect exactly what this stream is and how to play it with VLC?
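For reference, one way to see what the camera is actually sending is to replay the captured GET yourself and dump the first response headers; the Content-Type header usually identifies the stream (e.g. video/mp4, or multipart/x-mixed-replace for MJPEG). Below is a minimal C++ sketch, assuming the camera really answers on port 80 at /mpeg4 as in the capture above.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);                        // port from the capture
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  // camera IP (edited, as above)
        if (connect(fd, (sockaddr*)&addr, sizeof(addr)) != 0) {
            perror("connect");
            return 1;
        }

        const char* req = "GET /mpeg4 HTTP/1.1\r\n"
                          "Host: 127.0.0.1\r\n"
                          "Connection: close\r\n\r\n";
        send(fd, req, strlen(req), 0);

        // Read only the first couple of KB -- enough to see the headers of
        // a stream that otherwise never ends.
        char buf[2048];
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n > 0) fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }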

Related

Video streaming: how does it work?

As I'm starting to work with video streaming, I've got a question:
Video streaming is the process of breaking a video file into small data packets that are sent over the network. But where are they stored, and what happens to them after streaming has finished? I'm asking because, unlike a download, streaming doesn't keep the file locally; at least, that's how it's described online. What is the process of handling stream buffers under the hood? Can someone point me in the right direction?
Any help appreciated
Thanks
Most video streams are actually HTTP request and response based - i.e. the client (player) requests the video chunk by chunk and plays each chunk as it receives it.
To answer your question about what happens to the chunks once they're downloaded: this depends on the player and the device. In general the chunks are reassembled into the particular video container being used, e.g. mp4, and then played.
How long they are stored will depend on the device and the player's caching rules and capacity.
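As a rough illustration of what a player does with those chunks (all names here are hypothetical, not any particular player's API): downloaded chunks sit in a bounded in-memory buffer, the decoder consumes them as it plays, and consumed data is simply discarded rather than written to disk.

    #include <deque>
    #include <iostream>
    #include <string>

    int main() {
        const size_t kMaxBuffered = 3;          // how many chunks we cache ahead
        std::deque<std::string> buffer;

        for (int i = 0; i < 10; ++i) {
            // "Download" the next chunk (stand-in for an HTTP segment request).
            buffer.push_back("chunk-" + std::to_string(i));

            // Once buffered far enough ahead, "play" (consume) the oldest chunk.
            if (buffer.size() > kMaxBuffered) {
                std::cout << "decoding " << buffer.front() << "\n";
                buffer.pop_front();             // evicted after playback, never saved
            }
        }
        // Drain whatever is left once the download finishes.
        while (!buffer.empty()) {
            std::cout << "decoding " << buffer.front() << "\n";
            buffer.pop_front();
        }
        return 0;
    }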

Networking details of Chromecast

I am trying to understand the networking details of Chromecast. Consider this case: there is a YouTube server (S), a hand-held device (H), and a Chromecast (C). These are the steps I would take:
1) Initially, pair H and C, either automatically or explicitly.
2) Play, say, a YouTube video on my hand-held device (H). H will form a TCP session with the server S.
3) Now, I cast this video to my TV. So:
Questions
A) Is there a separate TCP session between the server and the Chromecast, or does the hand-held device mirror whatever it gets from the server?
B) Surprisingly, even after switching off the hand-held device, the Chromecast kept streaming until completion. So I expect some kind of TCP state between the server and the Chromecast. If so, who initiates this connection?
D) How does the hand-held device know about the current streaming state?
Thanks
A) If you initiated a casting session with the cast button in the app, then yes -- there is a separate session between the server and the Chromecast. Your hand-held device tells the Chromecast how to discover the server, how to request the videos, etc., but the Chromecast sends the requests for the media assets directly to their source. There is no mirroring going on. (Keep in mind that Android CAN mirror to the Chromecast, as can Chrome with tab-casting, but this is different from using the cast button.)
B) As stated above, the hand-held device provides instructions to the Chromecast (usually in the form of an app ID that the Chromecast resolves by looking to Google's servers to see where the web app is hosted), along with the URLs of the media. But once media playback starts, it can continue even if you turn off the sender device (this is one of the big benefits of the Chromecast; in fact, it allows you to come along with a different device and connect to the session if desired).
D) The Chromecast and hand-held device also maintain their connection, so that the Chromecast can send back the status of playback, and so that the user can issue playback control instructions that tell the Chromecast to pause, skip, etc. the media session.

Sending HTTP requests obtained from Wireshark

I have an app controlling my AVR on a local network, and I'm trying to embed some of its functionality into another app I've written myself. I fired up Wireshark and started controlling the volume, which shows up as:
GET /ctrl-int/1/setproperty?dmcp.device-volume=-15.750000 HTTP/1.1
I'm not totally up on this type of HTTP control, but I'd like to know if this is enough data to send the same request via a browser, terminal, etc.
cheers
Without knowing the AVR you can't really tell. But you should be able to send the command via
avr-ip/ctrl-int/1/setproperty?dmcp.device-volume=-15.750000
in the browser or from your app. The IP should be in the Wireshark logs as well.
If that works, it was enough information.
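If the browser test works and you want to send it from code, here's a minimal C++ sketch of replaying the captured request. The AVR's IP and port below are placeholders -- take the real ones from the Wireshark capture -- and the AVR may also expect other headers (e.g. session IDs) that appear in the capture.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in avr{};
        avr.sin_family = AF_INET;
        avr.sin_port = htons(80);                          // port from the capture
        inet_pton(AF_INET, "192.168.1.50", &avr.sin_addr); // hypothetical AVR IP
        if (connect(fd, (sockaddr*)&avr, sizeof(avr)) != 0) {
            perror("connect");
            return 1;
        }

        const char* req =
            "GET /ctrl-int/1/setproperty?dmcp.device-volume=-15.750000 HTTP/1.1\r\n"
            "Host: 192.168.1.50\r\n"
            "Connection: close\r\n\r\n";
        send(fd, req, strlen(req), 0);

        char buf[512];  // the status line tells you whether the AVR accepted it
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
        close(fd);
        return 0;
    }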

Using NetConnection and URLStream to send/receive data at high frequency

I'm writing a Comet-like app using Flex on the client and my own hand-written server.
I need to be able to send short bursts of data from the client at quite a high frequency (e.g. of the order of 10ms between sends).
I also need the server to push short bursts of data at a similarly high frequency.
I'm using NetConnection.call() to send the data to the server, and URLStream (with chunked encoding) to push the data from the server to the client.
What I've found is that the data isn't being sent/received as soon as it's available. For example, in IE, it seems the data is sent every 200ms rather than as soon as NetConnection.call() is called. Similarly, URLStream isn't making the data available as soon as the server is sending it.
Judging by the difference in behaviour between the browsers, it seems as though the Flash Player (version 10) is relying on the host browser to do all the comms. Can anyone confirm this? Update: This is very likely as only the host browser would know about the proxy settings that might be set.
I've tried using the Socket class and there's no problem with speed there: it works perfectly. However, I'd like to be able to use HTTP-based (port 80) connections so that my app can run in heavily firewalled environments (I tried using a Socket over port 80, but that has its problems).
Incidentally, all development/testing has been done on an internal LAN, so bandwidth/latency is not an issue.
Update: The data being sent/received is in small packets and doesn't need to be in any particular format. For example, I might need to send a short array of Numbers, and this could either be encoded in AMF (e.g. via NetConnection.call()) or could be put into GET parameters (e.g. using sendToURL()). The main point of my question is really to see whether anyone else has experienced the same problem in calling NetConnection/URLStream frequently, and whether there is a workaround (it's also possible that the fault lies with my server code of course, rather than Flash).
Thanks.
Turns out the problem had nothing to do with Flash/Flex or any of the host browsers. The problem was in my server code (written in C++ on Linux), and without access to my source code the cause is hard to find (so I couldn't have hoped for an answer from this forum).
Still - thank you everyone who chipped in.
It was only after looking carefully at the output shown in Wireshark that I noticed the problem, which was twofold:
Nagle's algorithm
I was sending replies in multiple packets by calling write() multiple times (e.g. once for the HTTP response header, and again for the HTTP response body). Because of Nagle's algorithm, the server's TCP/IP stack was waiting for an ACK for the first packet before sending the second, and because of the client's delayed-ACK timer the client was waiting up to 200ms before sending back the ACK to the first packet, so the server took at least 200ms to send the full HTTP response.
The solution is to use send() with the flag MSG_MORE until all the logically connected blocks are written. I could also have used writev() or setsockopt() with TCP_CORK, but it suited my existing code better to use send().
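For example (a sketch of the idea rather than my actual server code; Linux-specific, since MSG_MORE is a Linux extension):

    #include <sys/socket.h>
    #include <cstring>

    // fd is an already-connected TCP socket (the surrounding context is hypothetical).
    void send_http_response(int fd, const char* header, const char* body) {
        // MSG_MORE tells the kernel more data is coming, so it coalesces the
        // header with the body instead of emitting a small packet and then
        // stalling on the client's delayed ACK.
        send(fd, header, strlen(header), MSG_MORE);
        send(fd, body, strlen(body), 0);  // no flag: flush both as one segment
    }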
Chunk-encoded streams
I'm using a never-ending HTTP response with chunked encoding to push data back to the client. Nagle's algorithm needs to be turned off here because even if each chunk is written as one packet (using MSG_MORE), the client's delayed-ACK timer means its TCP/IP stack can still wait up to 200ms before sending back an ACK, and the server won't push a subsequent small chunk until it gets that ACK.
The solution here is to ask the server not to wait for an ACK for each sent packet before sending the next packet, and this is done by calling setsockopt() with the TCP_NODELAY flag.
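A sketch of both pieces together (again Linux-oriented; the function names are mine, not from the original server):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <cstdio>

    // Disable Nagle so small chunks are pushed immediately instead of
    // waiting on the ACK for the previous packet.
    void disable_nagle(int fd) {
        int one = 1;
        setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }

    // Write one chunk of a chunked-encoded response:
    // hex length, CRLF, payload, CRLF.
    void send_chunk(int fd, const char* data, size_t len) {
        char head[32];
        int n = snprintf(head, sizeof(head), "%zx\r\n", len);
        send(fd, head, n, MSG_MORE);    // held back until the payload follows
        send(fd, data, len, MSG_MORE);
        send(fd, "\r\n", 2, 0);         // trailing CRLF flushes the whole chunk
    }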
MSG_MORE and TCP_CORK are Linux-specific and not POSIX-compliant (TCP_NODELAY, by contrast, is standard), but that isn't a problem for me.
I'm almost 100% sure the player relies on the browser for such communications. Can't find an official page stating so atm, but check this out for example:
Applications hosting the Flash Player ActiveX control or Flash Player plug-in can use the EnforceLocalSecurity and DisableLocalSecurity API calls to control security settings.
Which I think somehow implies the idea. Also, I've suffered some network related bugs on FF/IE only which again points out to the player using each browser for networking (otherwise there wouldn't be such differences).
And regarding your latency problem, I think that if speed is critical, your best bet is sockets. You have some work to do, but it seems possible; check out the docs again:
This error occurs in SWF content. Dispatched if a call to Socket.connect() attempts to connect either to a server outside the caller's security sandbox or to a port lower than 1024. You can work around either problem by using a cross-domain policy file on the server.
HTH,
Juan

BlackBerry buffered playback demo?

Can someone help me buffer an MP3 file from a server using the BlackBerry buffered playback demo app provided with the JDE?
I have loaded it in the simulator and my MDS is started, but I'm unable to play the audio.
There is no error, but it doesn't play or load.
The code looks fine.
This may help:
BlackBerry Enterprise Server limitations
By default, the BlackBerry Enterprise Server (BES) limits the response size of a single HTTP response to 128K. If you try to fetch anything bigger your application will receive a 413 (Request Entity Too Large) response code. To get around this you must use the BES management console to change the value of the Maximum number of kilobytes per connection field to a higher value, up to 1024K.
Note that this limit also applies to the MDS simulator, so you'll need to change the simulator's settings as well. Edit the mds\config\rimpublic.property file in your JDE installation directory and change the value of the IPPP.connection.MaxNumberOfKBytesToSend property to match the BES setting and then restart the simulator.
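For example, the relevant line in rimpublic.property would look something like this (1024 here is an assumption, chosen to match the BES maximum mentioned above):

    # mds/config/rimpublic.property -- raise the MDS per-connection cap (value in KB)
    IPPP.connection.MaxNumberOfKBytesToSend=1024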
Also check that you're using the correct IP address instead of localhost.
Finally, you might want to check the file in the BlackBerry browser before opening it in the app; don't forget to enable streaming in the browser settings.
