Wowza + AWS CloudFront stream availability delay

I'm pretty new to Wowza and CloudFront.
I'm using Wowza Streaming Engine 4.4.1 together with AWS CloudFront to deliver a live RTSP video stream. I'm able to deliver the video through the CDN, but I noticed a significant delay (~200 seconds) between when the stream becomes available on the Wowza MCU (just after its creation) and when it becomes available on the CDN. Long story short, the stream is not immediately available.
I don't understand the reason for this difference. I need to use the CDN to make the stream easily available to many people, but this delay makes the approach unsuitable for my purposes.
Are there some configurations I have to take into account, or is this a known "drawback" of using a CloudFront CDN?

The correct answer is the one pointed out by "Michael - sqlbot".
I updated the value of Error Caching Minimum TTL to 0 and now, in the case of a 404 error, the error is not cached, so subsequent requests don't get the cached 404 and the stream becomes available almost immediately (~10 seconds delay, which is just fine).
AWS reference here
Simone
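For reference, the same fix can also be scripted instead of done in the console. Below is a rough sketch using boto3; the distribution ID is a placeholder and AWS credentials are assumed to be configured.

```python
# Sketch only: set CloudFront's "Error Caching Minimum TTL" for 404s to 0.
# DISTRIBUTION_ID is a placeholder; boto3 credentials are assumed configured.
import boto3

DISTRIBUTION_ID = "E1234567890ABC"  # placeholder - use your distribution's ID

cf = boto3.client("cloudfront")

# Fetch the current distribution config plus its ETag (needed for the update).
resp = cf.get_distribution_config(Id=DISTRIBUTION_ID)
config = resp["DistributionConfig"]
etag = resp["ETag"]

# Don't cache 404 responses: a stream that appears a few seconds after the
# first request is then served instead of a cached error.
config["CustomErrorResponses"] = {
    "Quantity": 1,
    "Items": [{"ErrorCode": 404, "ErrorCachingMinTTL": 0}],
}

cf.update_distribution(Id=DISTRIBUTION_ID, DistributionConfig=config, IfMatch=etag)
```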

Related

What are Download and Upload packets?

Note: My question remains unanswered.
After hours of reading I'm confused about download/upload speeds.
Let's suppose I send an HTTP request to fetch an image; all websites refer to this as a "download", but why?
My computer is sending the request and getting a response back, so it's both uploading (sending a packet) and downloading (getting a packet). Thus it's not a pure download.
Plus, I read that internet providers prioritize download over upload, but how? Given a packet from A to B, how can they decide whether it's an upload or a download?
From A's perspective it's an upload, but according to B it's a download...
Where did you read about prioritizing download speed over upload? ISPs can only prioritize traffic by IP (to move it onto another channel outside the shaper, or to make it follow another shaper rule) or by shaper rules set for UDP, VoIP, or traffic per VLAN/port. Websites can optimize data for display, for example by serving content cached and compressed, or by offloading uploads to other upstream servers. But you, as a user, upload raw data (an uncompressed set of bytes) to the server. To manage per-direction speeds, an ISP would have to monitor every packet from every user, and then it could limit speed either by packet size (which would only work about half the time, since files are uploaded in a few requests) or by metadata from the packet. That is a massive load on the whole infrastructure, and accordingly a very costly process for the business.
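As a rough illustration of why the whole exchange is conventionally booked as a "download": the request your computer sends is a few hundred bytes, while the response body is the entire image. A small sketch using only the Python standard library (host and path are placeholders for any image URL):

```python
# Rough sketch: compare the size of an HTTP request with the size of the
# response it triggers. HOST/PATH are placeholders; any image URL will do.
import http.client

HOST = "example.com"        # placeholder host
PATH = "/some/image.jpg"    # placeholder path

conn = http.client.HTTPSConnection(HOST)
conn.request("GET", PATH)

# The outgoing request is just a request line plus a few headers.
approx_request_bytes = len(f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\n\r\n".encode())

resp = conn.getresponse()
body = resp.read()
conn.close()

print(f"bytes sent (approx.):  {approx_request_bytes}")
print(f"bytes received (body): {len(body)}")
# The response is typically orders of magnitude larger than the request,
# so the exchange is counted as a download even though packets flow both ways.
```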

NGINX as warm cache in front of Wowza for HLS live streams - get per-stream duration and data transferred?

I've set up NGINX as a warm cache server in front of a Wowza HTTP-Origin application to act as an edge server. The config is working great, streaming over HTTPS with nDVR and adaptive streaming support. I've combed the internet looking for examples and help on configuring NGINX and/or other solutions to give me live statistics (number of viewers per stream_name) as well as parsing the logs to give me stream duration and data transferred per stream_name/session. The logging in NGINX for HLS streams logs each video chunk. With Wowza, it is a bit easier to get this data by reading the duration or bytes-transferred values from the logs when the stream is destroyed... Any help on this subject would be hugely appreciated. Thank you.
Nginx isn't aware of what the chunks are. It's only serving resources to clients over HTTP, and doesn't know or care that they're interrelated. Therefore, you'll have to derive the data you need from the logs.
To associate client requests together as one, you need some way to track state between requests, and then log that state. Cookies are a common way to do this. Alternatively, you could put some sort of session identifier in the request URI, but this hurts your caching ability since each client is effectively requesting a different resource.
Once you have some sort of session ID logged, you can process those logs with tools such as Elastic Stack to piece together the reports you're looking for.
Depending on your goals with this, you might find it better to get your data client-side. There, you have a better idea of what a session actually is, and you can log client-side items such as buffer levels, latency, and so on. The HTTP requests don't really tell you much about the experience the end users are getting. If that's what you want to know, you should use logs from the clients, not from your HTTP servers. Your HTTP server log is much more useful for debugging underlying technical infrastructure issues.
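To give the log-processing idea a concrete shape, here is a small sketch that aggregates an NGINX access log by a session token carried in the request URI. The default "combined" log format and a query parameter named session are assumptions; adapt both to your configuration (or feed the same fields into the Elastic Stack instead).

```python
# Sketch: aggregate per-session bytes and viewing duration from an NGINX
# access log. Assumes the default "combined" log format and a hypothetical
# ?session=<id> parameter on each playlist/chunk request.
import re
from collections import defaultdict
from datetime import datetime

LINE = re.compile(
    r'\S+ \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<uri>\S+) [^"]+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)
SESSION = re.compile(r'[?&]session=([^&\s]+)')
TIME_FMT = "%d/%b/%Y:%H:%M:%S %z"

sessions = defaultdict(lambda: {"bytes": 0, "requests": 0, "first": None, "last": None})

with open("access.log") as log:
    for raw in log:
        m = LINE.match(raw)
        if not m:
            continue
        s = SESSION.search(m.group("uri"))
        if not s:
            continue
        sid = s.group(1)
        ts = datetime.strptime(m.group("time"), TIME_FMT)
        rec = sessions[sid]
        rec["bytes"] += int(m.group("bytes"))
        rec["requests"] += 1
        rec["first"] = ts if rec["first"] is None else min(rec["first"], ts)
        rec["last"] = ts if rec["last"] is None else max(rec["last"], ts)

for sid, rec in sessions.items():
    duration = (rec["last"] - rec["first"]).total_seconds()
    print(f"{sid}: {rec['requests']} requests, {rec['bytes']} bytes, ~{duration:.0f}s watched")
```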

Why is HTTPS faster than HTTP?

It's driving me crazy.
I know that HTTP connections must be faster than HTTPS, since we need some time for the SSL handshake and for encrypting/decrypting data.
But I checked two images, one from DeviantArt and one from Flickr, and got the same results.
I also checked the results in the Firefox network tab and in HTTP Debugger Pro and got the same results (I don't know why FF shows different sizes for the same image).
Here is the test image with and without HTTPS:
http://fc05.deviantart.net/fs70/f/2014/082/a/0/flying_jellyfish_wallpaper_by_andrework-d7bcloj.jpg
https://fc05.deviantart.net/fs70/f/2014/082/a/0/flying_jellyfish_wallpaper_by_andrework-d7bcloj.jpg
As a protocol, HTTPS is not faster than HTTP. Holding this claim [3], then:
HTTPS might benefit from QoS circumvention on your path [2]. I get the same speed for both resources, which is to be expected [1].
Alternatively, it could be another artifact on your path such as an HTTP proxy; or pretty much anything which only slows down the HTTP traffic.
I suspect your path is the issue because the same symptom - which is a significant time difference! - is seen when connecting to different servers.
[1] Any handshake overhead is dominated by the transfer time on a low-latency connection. Likewise, any encryption overhead is dominated by the network transfer speed.
[2] The particular network path taken from your browser to the server, whatever that is.
[3] This is a very weak claim (that is, I did not claim that HTTP was faster) and is not a difficult proposition to back up. If HTTPS traffic were fundamentally faster (much less twice as fast!), nobody would still be using HTTP.
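For what it's worth, the comparison the asker did can be reproduced in a few lines. A rough sketch that times both URLs above; as footnote [1] says, the network transfer dominates, so small differences are noise.

```python
# Rough sketch: time the same image over HTTP and HTTPS a few times each.
# Network conditions dominate; differences of a few percent are noise.
import time
import urllib.request

URLS = [
    "http://fc05.deviantart.net/fs70/f/2014/082/a/0/flying_jellyfish_wallpaper_by_andrework-d7bcloj.jpg",
    "https://fc05.deviantart.net/fs70/f/2014/082/a/0/flying_jellyfish_wallpaper_by_andrework-d7bcloj.jpg",
]

for url in URLS:
    timings = []
    for _ in range(3):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            body = resp.read()
        timings.append(time.perf_counter() - start)
    scheme = url.split(":", 1)[0].upper()
    print(f"{scheme:5s}: {len(body)} bytes, best of 3 = {min(timings):.3f}s")
```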

Find out connection speed on HTTP request?

Is it possible to find out the connection speed of the client when it requests a page on my website?
I want to serve video files, but depending on how fast the client's network is, I would like to serve higher or lower quality videos. Google Analytics shows me the clients' connection types; how can I find out what kind of network the visitor is connected to?
Thanks.
No, there's no feasible way to detect this server-side, short of monitoring the network stream's send buffer while streaming something. If you can switch quality mid-stream, that is a viable approach: if the user's Internet connection suddenly gets burdened by a download, you could detect this and switch to a lower-quality stream.
But if you just want to detect the speed initially, you'd be better off doing this detection on the client and sending the results to the server with the video request.
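As an illustration of that client-side approach, here is a minimal sketch that times the download of a small probe file and maps the result to a quality tier before requesting the video. The probe URL, its size, and the thresholds are made-up placeholders; in a browser you would do the same thing in JavaScript and attach the result to the video request.

```python
# Sketch of client-side bandwidth probing: download a small known-size file,
# estimate throughput, then choose a quality tier to request from the server.
# PROBE_URL and the thresholds are hypothetical placeholders.
import time
import urllib.request

PROBE_URL = "https://example.com/speedtest/256kb.bin"  # placeholder probe file

def measure_throughput_kbps(url: str) -> float:
    """Download the probe once and return the measured throughput in kbit/s."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    elapsed = time.perf_counter() - start
    return (len(data) * 8 / 1000) / elapsed

def choose_quality(kbps: float) -> str:
    """Map measured throughput to a video rendition (made-up thresholds)."""
    if kbps > 5000:
        return "1080p"
    if kbps > 2500:
        return "720p"
    return "480p"

speed = measure_throughput_kbps(PROBE_URL)
quality = choose_quality(speed)
print(f"measured ~{speed:.0f} kbit/s, requesting {quality} stream")
# The client would then include this in the video request, e.g.
# GET /videos/data.mp4?quality=720p
```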
Assign each request a token, e.g. /videos/data.flv?token=uuid123, and calculate the amount of data your web server sends for this token per second (you could also check multiple tokens for one username over a time period). You can do this with the Apache sources and APR.

HTTP streaming

Is HTTP streaming possible without using any streaming servers?
Of course. You can output and flush; the data gets to the client before you end the script, so it's streaming.
For live streaming, only segmented approaches like Apple HLS work; other variants of segmented streaming (like OSMF) are not widely supported at the moment.
Microsoft IIS can also do Smooth Streaming (and Apple HLS as well).
Apple HLS can be supported on any web server if you pre-segment the stream into chunks and just upload them to a web server path, as sketched below.
For VoD streaming, there are lots of modules for all web servers.
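To make the "pre-segment and upload" point above concrete, here is a small sketch that writes a VoD HLS playlist for segments you have already produced (for example with ffmpeg's segment muxer); the segment names and durations are placeholders.

```python
# Sketch: write a simple VoD HLS playlist (.m3u8) for pre-cut segments.
# Any plain web server can then serve playlist.m3u8 and the .ts files.
# Segment names and durations below are placeholders.
import math

segments = [("seg0.ts", 10.0), ("seg1.ts", 10.0), ("seg2.ts", 7.5)]

lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    f"#EXT-X-TARGETDURATION:{math.ceil(max(d for _, d in segments))}",
    "#EXT-X-MEDIA-SEQUENCE:0",
]
for name, duration in segments:
    lines.append(f"#EXTINF:{duration:.3f},")  # per-segment duration in seconds
    lines.append(name)
lines.append("#EXT-X-ENDLIST")  # marks the playlist as complete (VoD)

with open("playlist.m3u8", "w") as f:
    f.write("\n".join(lines) + "\n")
```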
Yes, although libraries have varying levels of support. What needs to be used is HTTP chunking (chunked transfer encoding), so that the library does not try to buffer the whole request/response in memory (to compute the Content-Length header) and instead indicates that the content comes in chunks.
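A minimal sketch of that chunked approach using only the Python standard library: the server below frames each piece of data as an HTTP/1.1 chunk, and the data_source generator is a stand-in for whatever actually produces your media bytes.

```python
# Sketch: stream a response in chunks over plain HTTP (Transfer-Encoding:
# chunked) using only the standard library. data_source() is a stand-in
# for whatever actually produces your media bytes.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def data_source():
    """Stand-in generator: yields a small chunk of bytes every half second."""
    for i in range(10):
        yield f"chunk {i}\n".encode()
        time.sleep(0.5)

class StreamingHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # chunked encoding requires HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        for chunk in data_source():
            # Each chunk is framed as: <hex length>\r\n<bytes>\r\n
            self.wfile.write(f"{len(chunk):x}\r\n".encode() + chunk + b"\r\n")
            self.wfile.flush()
        self.wfile.write(b"0\r\n\r\n")  # zero-length chunk ends the stream

if __name__ == "__main__":
    HTTPServer(("", 8080), StreamingHandler).serve_forever()
```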
Yes, not only is it possible, it has already been implemented by various media server companies; the only reason they still use dedicated servers is commercial. Basically, the content you want to stream is divided into chunks/packets, and then the client machine can request those chunks via simple HTTP GET requests.
Well, if you have WebSockets available, you can actually get quite low-latency streaming for low-fps scenarios by sending video frames as JPEGs.
You can also send audio separately and play it using WebAudio on your browser. I imagine it could work for scenarios where you do not require perfect audio-video sync.
Another approach is to stream MPEG chunks through WebSockets, decode them in JS using jsmpeg and render to a canvas. You can find more here (video-only):
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets
Yes, the answer to your problem with HTTP streaming is MPEG-DASH technology.
