nginx RTMP vs MPEG-TS

I'm trying to figure out the best method to receive live streaming video at a server and make it available back to clients.
I noticed two modules for nginx:
https://github.com/arut/nginx-rtmp-module
https://github.com/arut/nginx-ts-module
It looks like both modules support HLS for video streaming.
What is the difference then between the options?

Apparently you can stream HLS/DASH with both, but nginx-ts-module has fewer features. You can set up both using Docker and test which one suits your needs better. I'd almost always go with the simpler option.
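If it helps to compare them hands-on, here is a minimal HLS sketch for nginx-rtmp-module; it assumes nginx was built with that module, and the port, paths, and fragment length are illustrative rather than recommendations:

    rtmp {
        server {
            listen 1935;                  # default RTMP ingest port
            application live {
                live on;                  # accept publishers at rtmp://host/live/<key>
                hls on;                   # write HLS playlists and segments
                hls_path /tmp/hls;
                hls_fragment 3s;
            }
        }
    }

    http {
        server {
            listen 8080;
            location /hls {               # players fetch http://host:8080/hls/<key>.m3u8
                types {
                    application/vnd.apple.mpegurl m3u8;
                    video/mp2t ts;
                }
                root /tmp;
            }
        }
    }

Publish with any RTMP encoder (OBS, ffmpeg) and point an HLS player at the .m3u8 URL. nginx-ts-module is set up similarly, except it ingests MPEG-TS over HTTP instead of RTMP.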

Related

Which protocol to use for a multi-stream application? RTMP?

I'm trying to stream several iPad screens to a single Python client (a computer) on a local network, but I don't know which protocol to use.
I can do it with one iPad using MonaServer, an app that streams over RTMP, and a little Python script to read the video.
But I'm running into problems using several iPads because, as far as I can tell, RTMP uses a single port on Windows (1935), and I'm not sure it's possible to multi-stream over RTMP.
I'm not a networking pro, so I'm open to any suggestions.
What you need is to follow the wikis and usage examples of open-source projects to get a feel for live streaming with multiple clients.
For example, you could use OBS to publish several streams to a media server such as SRS, then play them back over different protocols like RTMP/HTTP-FLV/HLS/WebRTC.
You can publish multiple streams; they are not mutually exclusive. Each can be played by a different player depending on the protocol you choose; please read this post.
Try it.
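On the single-port worry: one RTMP port can carry many streams, each identified by its stream key, so several iPads can publish to the same server at once. A rough Python sketch of a single client reading several streams (the server address and keys are made up, and it assumes OpenCV built with FFmpeg support plus a media server such as SRS on the default port 1935):

    import cv2

    # One stream key per publishing iPad; all share the same server and port.
    STREAM_KEYS = ["ipad1", "ipad2", "ipad3"]

    captures = {key: cv2.VideoCapture(f"rtmp://192.168.1.10/live/{key}")
                for key in STREAM_KEYS}

    while True:
        for key, cap in captures.items():
            ok, frame = cap.read()        # blocking; use a thread per stream in practice
            if ok:
                cv2.imshow(key, frame)    # one window per iPad
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    for cap in captures.values():
        cap.release()
    cv2.destroyAllWindows()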

Rapid transfer of binary files

We have a daily need to ship about 500 MB of compressed image files (about 280K each).
Currently we do this the fast, easy way: a web server, and downloads via HTTP.
We are now looking at putting a better client (nw.js) on the client side, so we have the opportunity to improve the transport protocol.
Data flows only one way.
We have a couple of thoughts but I would love to hear better ideas.
Using an HTTP/2 (SPDY) compliant server, and the Chromium hooks in nw.js for HTTP/2 receiving.
Using a TCP connection (custom node.js server -> node code in nw.js).
Perhaps we should look at QUIC: https://www.chromium.org/quic
Would bundling this into a zip file (which would not decrease the byte count, since the images are already compressed) help?
What do OneDrive, Google Drive, and Dropbox do in these cases?
Any thoughts?
Has anyone tried Aspera? http://asperasoft.com/software/transfer-clients/
Windows 10 systems can take advantage of "TCP Fast Open" technology, which you can read about here:
https://en.wikipedia.org/wiki/TCP_Fast_Open
To enable this technology, Chromium accepts this parameter:
--enable-tcp-fastopen
From what I've read, this only works on Windows 10, but I have no idea about other platforms. Good luck.
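Whichever transport you settle on, with roughly 1,800 files of ~280 KB each, connection reuse and a little parallelism usually matter more than the protocol itself, so it is worth benchmarking a plain-HTTP baseline first. A rough Python sketch (the URL scheme and file names are made up):

    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    import requests

    BASE_URL = "https://files.example.com/images/"   # hypothetical server
    FILES = [f"img_{i:05d}.bin" for i in range(1800)]
    DEST = Path("downloads")
    DEST.mkdir(exist_ok=True)

    session = requests.Session()   # keep-alive: reuses TCP/TLS connections

    def fetch(name: str) -> None:
        resp = session.get(BASE_URL + name, timeout=30)
        resp.raise_for_status()
        (DEST / name).write_bytes(resp.content)

    # Eight workers is a reasonable starting point; tune against your link.
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(fetch, FILES))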

MPEG-DASH Encoding for Live Streaming

I want to encode a live stream for MPEG-DASH in various bitrates and resolutions for live playback.
Everything I found so far either uses only the source resolution (Nimble, nginx-rtmp-module) or seems to be only for VOD streaming (DASHEncoder).
Is it possible to use DASHEncoder with a live input (an RTMP stream), and how would I do that?
If not, is it possible to use nginx-rtmp + ffmpeg for what I want to do?
There are several services available which support such use cases, like NGINX Plus.
I also successfully managed to run a live stream with Bitmovin, and to the best of my knowledge livestream.com is also capable of doing that.
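On the nginx-rtmp + ffmpeg question: yes, the module's exec directive can fan one incoming stream out into several renditions, which a second application then packages. A sketch along the lines of the module's wiki example (bitrates, sizes, and paths are illustrative):

    rtmp {
        server {
            listen 1935;
            application ingest {
                live on;
                # Re-encode each incoming stream into two renditions and push
                # them back into the "live" application under suffixed names.
                exec ffmpeg -i rtmp://localhost/ingest/$name
                    -c:v libx264 -b:v 2500k -s 1280x720 -c:a aac -f flv rtmp://localhost/live/$name_720
                    -c:v libx264 -b:v 1000k -s 854x480 -c:a aac -f flv rtmp://localhost/live/$name_480;
            }
            application live {
                live on;
                dash on;                  # MPEG-DASH segments and manifest
                dash_path /tmp/dash;
            }
        }
    }

One caveat: as far as I know, plain nginx-rtmp writes a separate manifest per rendition rather than a single adaptive MPD, so for true bitrate switching you may still need a packaging step on top.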

Simulating a remote website locally for testing

I am developing a browser extension. The extension works on external websites we have no control over.
I would like to be able to test the extension. One of the major problems I'm facing is displaying a website 'as-is' locally.
Is it possible to display a website 'as-is' locally?
I want to be able to serve the website exactly as-is locally for testing. This means I want to simulate the exact same HTTP data, including iframe ads, etc.
Is there an easy way to do this?
More info:
I'd like my system to act as closely to the remote website as possible. I'd like to be able to run a fetch command, for example, which would let me go to the site in my browser (with the internet off) and get exactly the same thing I would otherwise (including content that is not from a single domain: Google ads, etc.).
I don't mind using a virtual machine if this helps.
I figured this would be quite useful for testing, especially when I have a bug I need to reliably reproduce on sites with many random factors (which ads show, etc.).
As was already mentioned, caching proxies should do the trick for you (BTW, this is the simplest solution). There are quite a lot of different implementations, so you just need to spend some time selecting a proper one (in my experience, Squid is a good choice). Anyway, I would like to highlight two other interesting options:
Option 1: Betamax
Betamax is a tool for mocking external HTTP resources, such as web services and REST APIs, in your tests. The project was inspired by the VCR library for Ruby. Betamax works by intercepting HTTP connections initiated by your application and replaying previously recorded responses.
Betamax comes in two flavors. The first is an HTTP and HTTPS proxy that can intercept traffic made in any way that respects Java’s http.proxyHost and http.proxyPort system properties. The second is a simple wrapper for Apache HttpClient.
BTW, Betamax has a very interesting feature for you:
Betamax is a testing tool and not a spec-compliant HTTP proxy. It ignores any and all headers that would normally be used to prevent a proxy caching or storing HTTP traffic.
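Betamax itself targets the JVM; if your test harness is Python, the vcrpy library follows the same record-once, replay-forever idea. A minimal sketch (the cassette path and URL are placeholders):

    import vcr
    import requests

    # First run: the real request goes out and the response is recorded to
    # the cassette file. Later runs: the response is replayed, no network.
    with vcr.use_cassette("fixtures/frontpage.yaml"):
        response = requests.get("https://example.com/")
        print(response.status_code)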
Option 2: Wireshark and replay proxy
Grab all the traffic you are interested in using Wireshark and replay it. I would say it is not that hard to implement the required replay tool yourself, but you can also use an available solution called replayproxy, which:
parses HTTP streams from .pcap files,
opens a TCP socket on port 3128 and listens as an HTTP proxy, using the extracted HTTP responses as a cache while refusing all requests for unknown URLs.
Such an approach gives you full control and a bit-for-bit precise simulation.
I don't know if there is an easy way, but there is a way.
You can set up a local webserver, something like IIS, Apache, or minihttpd.
Then you can grab the website contents using wget (it has an option for mirroring). Many browsers also have a "save whole web page" option that will grab everything, like images.
Ads will most likely come from remote sites, so you may have to manually edit those lines in the HTML to either not reference the actual ad-servers, or set up a mock ad yourself (like a banner image).
Then you can navigate your browser to http://localhost to visit your local website, assuming port 80, which is the default.
Hope this helps!
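For instance, a rough version of that workflow (example.com is a placeholder; the wget flags are the usual mirroring set):

    # Mirror the site, rewriting links so the copy works offline.
    wget --mirror --convert-links --adjust-extension --page-requisites \
         --no-parent https://example.com/

    # Serve the mirrored copy locally; any web server works, and Python's
    # built-in one is handy for a quick test.
    cd example.com
    python3 -m http.server 80    # then browse http://localhost/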
I assume you want to serve a remote site that's not under your control. In that case you can use a proxy server and have that server cache every response aggressively. However, this has its limits: first, you will have to visit every page you intend to use through this proxy (with a browser, for example); second, you will not be able to emulate form processing.
Alternatively you could use a spider to download all content of a certain website. Depending on the spider software, it may even be able to download JavaScript-built links. You then can use a webserver to serve that content.
The service http://www.json-gen.com provides mocks for HTML, JSON, and XML via REST. This way, you can test your frontend separately from the backend.

Using NGINX to forward tracking data to Flume

I am working on providing analytics for our web property based on instrumentation data we collect via a simple image beacon. Our data pipeline starts with Flume, and I need the fastest possible way to parse query-string parameters, form a simple text message, and shove it into Flume.
For performance reasons, I am leaning towards nginx. Since serving a static image from memory is already supported, my task is reduced to handling the query string and forwarding a message to Flume. Hence, the question:
What is the simplest reliable way to integrate nginx with Flume? I am thinking about using syslog (Flume supports syslog listeners), but I'm struggling with how to configure nginx to forward custom log messages to a syslog (or plain TCP) listener running on a remote server on a custom port. Is this possible with existing third-party modules for nginx, or would I have to write my own?
Separately, any existing tools you can recommend for writing a fast $args parser would be much appreciated.
If you think I am on a completely wrong path and can recommend something better performance-wise, feel free to let me know.
Thanks in advance!
You should parse the nginx log file, following it the way tail -f does, and then pass the results to Flume. That will be the simplest and most reliable way. The problem with syslog is that it blocks nginx and may get completely stuck under high load or if something goes wrong (this is why nginx doesn't support it).
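A rough sketch of that tail-and-forward approach, with the log path, log format, and Flume endpoint all assumptions; Python's standard urllib.parse also covers the $args parsing part of the question:

    import socket
    import time
    from urllib.parse import parse_qs, urlparse

    LOG_PATH = "/var/log/nginx/access.log"    # hypothetical path
    FLUME_ADDR = ("flume.example.com", 5140)  # hypothetical TCP/syslog source

    def follow(path):
        """Yield lines as they are appended to the file, like `tail -f`."""
        with open(path) as f:
            f.seek(0, 2)                      # start at the current end of file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.1)
                    continue
                yield line

    sock = socket.create_connection(FLUME_ADDR)
    for line in follow(LOG_PATH):
        # Assumes the default "combined" format, where the request line is the
        # first quoted field: ... "GET /beacon.gif?uid=42&page=home HTTP/1.1" ...
        try:
            request_path = line.split('"')[1].split()[1]
        except IndexError:
            continue
        params = parse_qs(urlparse(request_path).query)
        message = " ".join(f"{k}={v[0]}" for k, v in sorted(params.items()))
        sock.sendall(message.encode() + b"\n")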