Does YouTube stream Videos via TCP?

I just sniffed some traffic using Wireshark and noticed that the YouTube traffic relies on TCP. I thought they were using UDP? But it looks as if they use HTTP octet streams. Is YouTube really using TCP for streams, or am I missing something?

Because they need everything TCP provides (slow start, transmit pacing, exponential backoff, receive windows, reordering, duplicate rejection, and so on), they would either have to use TCP or reimplement all of those mechanisms themselves. There is no way they could do that better than each operating system's optimized TCP implementation.

That said, Google is currently experimenting with protocol implementations of its own, such as QUIC (Quick UDP Internet Connections), as one can see when examining the HTTP response:
HTTP/1.1 200 OK
...
Content-Type: video/mp4
Alternate-Protocol: 80:quic
...
However, for now they seem to rely on TCP, as David mentioned above.
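If you want to check what a server advertises yourself, here is a quick sketch; the URL and the exact header names are illustrative, since newer deployments advertise QUIC/HTTP3 via the Alt-Svc header where older ones used Alternate-Protocol:

import urllib.request

req = urllib.request.Request("https://www.youtube.com/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    for name in ("Alternate-Protocol", "Alt-Svc"):
        if name in resp.headers:
            print(f"{name}: {resp.headers[name]}")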

From http://www.crazyengineers.com/threads/youtube-use-tcp-or-udp.38419/:
...of course the YouTube page uses HTTP [which runs over TCP]. But the real work does not happen via the HTTP page; it happens via the Flash object embedded in that page. The Flash object that appears on YouTube is the Flash video player. The video player acts as a kind of iframe (technically an incorrect term) for the content that is called for streaming via the Flash object. For storing media content, YouTube has installed media servers whose content is requested when you press the play button.
For streaming media to the Flash player, the Real Time Streaming Protocol (RTSP) is used. The play button on the Flash player acts as an RTSP invoker for the media being requested, and the media is streamed via UDP packets. In fact, you never need to navigate away from the page, because it is the embedded object, not the HTTP page, that calls for the video; but since the object is embedded in the HTTP page, once you close the page the object is closed as well.

Related

Media (video) negotiation in Asterisk

First, let me describe the call flow and the nodes involved:
UA1 <----> Proxy1 (Kamailio)/RTPProxy1 <----> Asterisk <----> Proxy2 (Kamailio)/RTPProxy2 <----> UA2
Currently, Asterisk acts as a B2BUA, and location lookup/registration is handled by the proxies. Asterisk is in the signaling path as well as the media (audio) path.
Problem Statement:
Asterisk should be in the audio path but not in the video path when the call is an audio+video call. That is, audio flows UA1 -> RTPProxy1 -> Asterisk -> RTPProxy2 -> UA2 and back, while video flows UA1 -> RTPProxy1 -> RTPProxy2 -> UA2, bypassing Asterisk.
Question:
Can Asterisk be configured/programmed so that, for video, it negotiates the IP/port of RTPProxy1/2, while for audio it keeps negotiating its own IP and port as it currently does?
Thanks
Abhijit
No, Asterisk's video support is very limited. The negotiation options are the same as for audio, so video will be handled the same way audio is.
If you want to treat them differently, create TWO calls: one audio-only call and one video call without audio.
However, if you use Kamailio as the proxy, it is in theory POSSIBLE to do what you want. But it is very unlikely that your UA will support it (at least I have never heard of one that does).
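For what it's worth, here is a minimal sketch of what "splitting" the media paths means at the SDP level: each m= section may carry its own c= (connection) line, which overrides the session-level one, so a proxy could retarget video at a different relay than audio. This is illustrative Python with made-up addresses, not Asterisk or Kamailio configuration:

SDP_OFFER = """v=0
o=ua1 2890844526 2890844526 IN IP4 198.51.100.1
s=-
c=IN IP4 198.51.100.1
t=0 0
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 96
a=rtpmap:96 H264/90000"""

def rewrite_video_connection(sdp, relay_ip, relay_port):
    """Retarget only the video stream by giving its m= section a
    media-level c= line and a new port; audio is left untouched."""
    out = []
    for line in sdp.splitlines():
        if line.startswith("m=video"):
            parts = line.split()
            parts[1] = str(relay_port)          # relay's video port
            out.append(" ".join(parts))
            out.append(f"c=IN IP4 {relay_ip}")  # media-level override
        else:
            out.append(line)
    return "\n".join(out)

print(rewrite_video_connection(SDP_OFFER, "203.0.113.7", 40000))

Running it prints an offer whose audio m= section still points at the UA while the video m= section points at the relay.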

YouTube video streaming protocol

I was capturing YouTube video packets using Wireshark and saw that it was HTTP tunneled over TCP (even in the case of YouTube live streaming).
But as far as I know, YouTube uses Flash video technology and HTML5. Some websites also mention the DASH protocol.
My question is: what protocol exactly does YouTube use? And how can I interpret the data I captured in Wireshark? The capture shows it only as 'Data'; nothing is marked as video data or anything like that.
YouTube primarily uses the VP9 and H.264/MPEG-4 AVC video formats, and the Dynamic Adaptive Streaming over HTTP (DASH) protocol.
By January 2019, YouTube had begun rolling out videos in AV1 format.
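Since DASH runs over plain HTTP, the client-side flow is easy to sketch: download the MPD manifest, pick a Representation, then fetch its media segments with ordinary GETs. A rough Python sketch, where the manifest URL is hypothetical:

import urllib.request
import xml.etree.ElementTree as ET

MPD_URL = "https://example.com/video/manifest.mpd"   # hypothetical manifest
NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}

with urllib.request.urlopen(MPD_URL) as resp:
    root = ET.fromstring(resp.read())

# Each Representation is one bitrate/resolution rung the player can
# switch between; segments themselves are fetched as plain HTTP GETs.
for rep in root.iterfind(".//mpd:Representation", NS):
    print(rep.get("id"), rep.get("bandwidth"), rep.get("codecs"))

Real players additionally parse the SegmentTemplate/SegmentList elements to build segment URLs and switch Representations as available bandwidth changes.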
For mobile, YouTube servers sometimes send data using RTSP, which is an application-layer protocol.
At the transport layer, RTSP uses both TCP and UDP.
If you want to interpret the YouTube data you captured in Wireshark, you will have to store it and play it back in a Flash player: the video is sent as a Flash object embedded in the HTML page, which is delivered to you via HTTPS.
Source:
https://en.wikipedia.org/wiki/YouTube#Features
The exact protocol is TCP, although YouTube has lately been switching over to UDP. The inability to interpret the packet data is intentional: the way YouTube breaks up the streamed data prevents capture tools like Wireshark from exposing anything about what is being transferred. To interpret the data, you would need to capture a substantial number of packets and reassemble them into a part of the file being sent. It is usually easier to take the source IP of the packet sender, resolve it to a domain name via DNS, and then research what kind of data can be expected from that domain, but obviously this is extremely unreliable.

How to live stream a desktop to an HTML5 video tag

I have some output from a program that I'd like to stream live to an HTML5 video tag. So far I've used VLC to capture the screen, transcode it to Ogg, and stream it using its built-in HTTP server. It works insofar as I see the desktop image in the browser window.
The catch is this: every time I refresh the page, the video starts from the beginning, whereas I'd like to see only the current screen, so that I can use it to build a sort of limited remote-desktop solution that lets me control the Ubuntu desktop program from the browser.
I was thinking of WebSockets for sending the mouse events to the program, but I'm stuck on how to get the live picture instead of the whole stream.
Thanks in advance!
If you are building the server side as well, I would suggest handling that operation yourself.
What you can do is use MJPEG for HTML streaming: write a server application that accepts HTTP connections, sends the header of an MJPEG stream, and then sends a fresh picture for every update. That way you have a real-time stream in the browser.
This option is good because it gives you control over the stream from the server side, while on the client side it is just a tag pointing at the MJPEG stream.
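A minimal sketch of such a server in Python, where grab_frame() is a hypothetical helper returning the current screen as JPEG bytes:

import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BOUNDARY = "frame"

def grab_frame():
    # Hypothetical capture helper: re-reads a JPEG that a screen-capture
    # tool is assumed to keep updating on disk.
    with open("current.jpg", "rb") as f:
        return f.read()

class MJPEGHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         f"multipart/x-mixed-replace; boundary={BOUNDARY}")
        self.end_headers()
        try:
            while True:                      # one JPEG part per screen update
                jpg = grab_frame()
                self.wfile.write(f"--{BOUNDARY}\r\n".encode())
                self.wfile.write(b"Content-Type: image/jpeg\r\n")
                self.wfile.write(f"Content-Length: {len(jpg)}\r\n\r\n".encode())
                self.wfile.write(jpg + b"\r\n")
                time.sleep(0.1)              # ~10 fps
        except (BrokenPipeError, ConnectionResetError):
            pass                             # the browser closed the stream

ThreadingHTTPServer(("", 8080), MJPEGHandler).serve_forever()

On the client side, an <img src="http://localhost:8080/"> tag is all that is needed.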
Regarding WebSockets: yes, you can build that, but you will have to implement input-device control on the remote computer's side.
Here is an MJPEG streaming server that might be interesting to you: http://www.codeproject.com/Articles/371955/Motion-JPEG-Streaming-Server

RTMP tunneling - how different is it from an HTTP request?

When an RTMP session is tunneled through HTTP, how does it differ from a plain HTTP request?
What are the performance implications of tunneling while using RTMP?
The advantage of RTMP streams over casual HTTP-based progressive downloading is far too significant to ignore:
You can serve Flash Video over the Internet using RTMP, a special protocol for real-time server applications ranging from instant messaging to collaborative data sharing to video streaming. Whereas HTTP-delivered Flash Video is referred to as progressive download video, RTMP-delivered Flash Video is called streaming video. However, because the term streaming is so often misused, I prefer the term real-time streaming video.
One of the benefits of RTMP delivery for the viewer is near-instantaneous playback of video, provided the Flash Video file is encoded with a bitrate appropriate to the viewer's connection speed. Real-time streaming video can also be seeked to any point in the content. This feature is particularly advantageous for long-duration content because the viewer doesn't have to wait for the video file to load before jumping ahead, as is the case for HTTP-delivered video.
http://www.cisco.com/en/US/prod/collateral/video/ps11488/ps11791/ps11802/white_paper_c11-675935.html
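As for what tunneling changes on the wire: RTMPT wraps the RTMP byte stream in a series of short-lived HTTP POSTs, so the client must poll for server data instead of reading one long-lived TCP stream, and that polling plus per-request HTTP header overhead is where the performance cost comes from. A rough sketch of the pattern; the paths and framing follow the convention used by open-source implementations such as rtmpdump and Red5, so treat the details as approximate:

import http.client

conn = http.client.HTTPConnection("media.example.com")   # illustrative host
headers = {"Content-Type": "application/x-fcs"}

# 1. Open a tunnel session; the response body carries a session id.
conn.request("POST", "/open/1", b"", headers)
session = conn.getresponse().read().strip().decode()

# 2. RTMP chunk data travels upstream as POST bodies; each response returns
#    whatever RTMP data the server has buffered for this session.
conn.request("POST", f"/send/{session}/0", b"...rtmp chunk bytes...", headers)
downstream = conn.getresponse().read()

# 3. With nothing to send, the client still has to poll for server data.
conn.request("POST", f"/idle/{session}/1", b"", headers)
downstream = conn.getresponse().read()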

HTTP streaming

Is HTTP streaming possible without using any streaming servers?
Of course. You can output and flush; the data reaches the client before you end the script, so it is streaming.
For live streaming, only segmented delivery, like Apple HLS, is an option; other variants of segmented streaming (like OSMF) are not widely supported at the moment.
Microsoft's IIS can also do Smooth Streaming (and Apple HLS as well).
Apple HLS can be served from any web server when you pre-segment the stream into chunks and simply upload them to a web-server path; see the sketch after this answer.
For VoD streaming, there are lots of modules for all the major web servers.
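A sketch of the pre-segmenting idea: given TS chunks produced by some segmenter, the playlist that a plain web server can host is just a text file. The file names and durations below are illustrative:

SEGMENT_SECONDS = 10
segments = ["seg0.ts", "seg1.ts", "seg2.ts"]      # produced by a segmenter

lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    f"#EXT-X-TARGETDURATION:{SEGMENT_SECONDS}",
    "#EXT-X-MEDIA-SEQUENCE:0",
]
for name in segments:
    lines.append(f"#EXTINF:{SEGMENT_SECONDS:.1f},")
    lines.append(name)
lines.append("#EXT-X-ENDLIST")                    # VoD marker; omit while live

with open("playlist.m3u8", "w") as f:
    f.write("\n".join(lines) + "\n")

Upload the .ts chunks and playlist.m3u8 to any web-server path and point an HLS-capable player at the playlist.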
Yes, although libraries have varying levels of support. What needs to be used is HTTP chunking ("chunked" transfer encoding), so that the library does not try to buffer the whole request/response in memory (to compute the Content-Length header) and instead indicates that the content arrives in chunks.
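To make the chunked framing concrete, here is a hand-rolled sketch using nothing but the Python standard library, so no streaming server is involved:

import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class ChunkedHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"             # chunked encoding requires 1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        for i in range(5):                    # each chunk: <hex length>\r\n<data>\r\n
            data = f"chunk {i}\n".encode()
            self.wfile.write(f"{len(data):X}\r\n".encode() + data + b"\r\n")
            self.wfile.flush()
            time.sleep(1)
        self.wfile.write(b"0\r\n\r\n")        # zero-length chunk ends the body

HTTPServer(("", 8000), ChunkedHandler).serve_forever()

Fetching the URL with an unbuffered client (for example curl -N) shows the chunks arriving one per second.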
Yes. Not only is it possible, it has been implemented by various media-server companies; the main reason they still use dedicated servers is commercial. Basically, the content you want to stream is divided into chunks/packets, and the client machine can then request those chunks via simple HTTP GET requests.
Well, if you have WebSockets available, you can actually get quite low-latency streaming in low-fps scenarios by sending video frames as JPEGs.
You can also send the audio separately and play it using WebAudio in the browser. I imagine it could work for scenarios where you do not require perfect audio-video sync.
Another approach is to stream MPEG chunks through WebSockets, decode them in JS using jsmpeg, and render to a canvas. You can find more here (video-only):
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets
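A minimal sketch of the frames-as-JPEGs approach described above, using the third-party websockets package (assumed installed; single-argument handlers need websockets >= 10.1) and a hypothetical grab_jpeg() capture helper:

import asyncio
import websockets

def grab_jpeg():
    # Hypothetical capture helper standing in for a real screen/camera grab.
    with open("frame.jpg", "rb") as f:
        return f.read()

async def send_frames(ws):
    while True:
        await ws.send(grab_jpeg())       # one binary message per frame
        await asyncio.sleep(0.1)         # ~10 fps, the "low-fps scenario"

async def main():
    async with websockets.serve(send_frames, "0.0.0.0", 8765):
        await asyncio.Future()           # serve until cancelled

asyncio.run(main())

The browser side would read each binary message and draw it to a canvas or an img element.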
Yes. The answer to your problem with HTTP streaming is MPEG-DASH technology.
