Custom backup for RTMP-push streams with nginx-rtmp-module - nginx

I need to back up RTMP streams that I send to my server (nginx with nginx-rtmp-module): if one of them fails, another should be substituted automatically when I pull the stream from the server.
Is it possible?

I have figured out a somewhat tricky way to do this and put the solution on GitHub.
It is a bunch of shell scripts that allow you to do the following (assuming your server's DNS name is yourserver.ex):
Send your main stream to rtmp://yourserver.ex/main/somekey, your backup stream to rtmp://yourserver.ex/backup/somekey, and watch the result on rtmp://yourserver.ex/out/somekey. (More instructions on GitHub.)
There may be a slight delay when the streams switch, but it works better than nothing. A sketch of the server side is shown below.
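For orientation, a minimal nginx-rtmp-module sketch of the three applications involved might look like the following. The hook-script paths are hypothetical, and the actual switching logic (repointing out at whichever source is alive) lives in the external shell scripts, not in this config:

    rtmp {
        server {
            listen 1935;

            # Publishers push the primary feed here.
            application main {
                live on;
                # Hypothetical hook scripts: tell the switcher when the
                # primary feed starts and stops.
                exec_publish      /opt/switcher/main_up.sh $name;
                exec_publish_done /opt/switcher/main_down.sh $name;
            }

            # Publishers push the fallback feed here.
            application backup {
                live on;
            }

            # Viewers pull from here; the switcher pushes whichever
            # source is currently alive into this application.
            application out {
                live on;
            }
        }
    }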

Related

nginx RTMP vs MPEG-TS

I'm trying to figure out the best method to receive live streaming video at a server and make it available back to the client.
I noticed two modules for nginx:
https://github.com/arut/nginx-rtmp-module
https://github.com/arut/nginx-ts-module
It looks like both modules support HLS for video streaming.
What is the difference then between the options?
Apparently you can stream HLS/DASH with both, but nginx-ts-module has fewer features. You can set up both using Docker and test which one suits your needs better. I'd almost always go with the simpler option; a minimal HLS setup with nginx-rtmp-module is sketched below.
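As a point of comparison, a minimal HLS configuration for nginx-rtmp-module might look like this (paths, ports, and the application name are assumptions, not from the question):

    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                # Cut the incoming RTMP stream into HLS fragments on disk.
                hls on;
                hls_path /tmp/hls;
                hls_fragment 3s;
            }
        }
    }

    http {
        server {
            listen 8080;
            # Serve the generated playlist and fragments over plain HTTP.
            location /hls {
                types {
                    application/vnd.apple.mpegurl m3u8;
                    video/mp2t ts;
                }
                root /tmp;
                add_header Cache-Control no-cache;
            }
        }
    }

Publish with any RTMP encoder to rtmp://host/live/mystream and play http://host:8080/hls/mystream.m3u8. nginx-ts-module reaches a similar result but is fed MPEG-TS over HTTP instead of RTMP.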

Using NGINX to forward tracking data to Flume

I am working on providing analytics for our web property based on instrumentation data we collect via a simple image beacon. Our data pipeline starts with Flume, and I need the fastest possible way to parse query string parameters, form a simple text message and shove it into Flume.
For performance reasons, I am leaning towards nginx. Since serving a static image from memory is already supported, my task is reduced to handling the query string and forwarding a message to Flume. Hence, the question:
What is the simplest reliable way to integrate nginx with Flume? I am thinking about using syslog (Flume supports syslog listeners), but I struggle with how to configure nginx to forward custom log messages to a syslog (or just TCP) listener running on a remote server and on a custom port. Is it possible with existing 3rd party modules for nginx or would I have to write my own?
Separately, anything existing you can recommend for writing a fast $args parser would be much appreciated.
If you think I am on a completely wrong path and can recommend something better performance-wise, feel free to let me know.
Thanks in advance!
You should tail the nginx log file, the way tail -f does, and then pass the results to Flume. That is the simplest and most reliable way. The problem with syslog is that it blocks nginx and may stall completely under high load or if something goes wrong (which is why nginx did not support it for a long time).
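A minimal sketch of that tail-based pipeline, assuming a Flume netcat source listening on flume-host:44444 (hostname, port, and file paths are hypothetical):

    # nginx side: log only what Flume needs, e.g. a timestamp plus the
    # raw query string from the beacon request.
    log_format beacon '$msec $args';
    access_log /var/log/nginx/beacon.log beacon;

    # shipper side: follow the log across rotations and push each line
    # to the Flume netcat source as a newline-delimited event.
    tail -F /var/log/nginx/beacon.log | nc flume-host 44444

Any heavier parsing of $args can then happen on the Flume side, keeping nginx itself on the fast path.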

What is the best method to send data from a device to a server

I am currently developing a website for an energy-monitoring company. We are trying to send high volumes of data from the devices that record it to a server, where it can be processed into a database. The guy developing the firmware seems to think that the best way to send the data is to produce CSV files and send them via FTP; a program on the server then monitors the files received via FTP and runs a PHP script to process them. I, however, feel that the best way of sending the data is via HTTP POST.
We had HTTP POST working, and then I began trying to work with the CSVs, which became a pain: reliably monitoring the files received via FTP meant editing the ProFTPD configuration file (which I found to be a near-impossible task) and installing a package called mod_exec (which comes with security risks) so that ProFTPD could run a PHP script. These issues, plus the fact that I am unfamiliar with the Linux console I am required to use extensively to set this up, make the CSV method very difficult to set up. HTTP POST seems like a more direct way of sending the data, without having to worry about files or rely on ProFTPD. It would also allow us to use identifiers that give the data meaning, as opposed to a string of values whose meaning is not immediately apparent. In addition, the payload could be URL-encoded to pass a multidimensional array (e.g. data[0][kwh]=1.25), which would work well given the type of data being passed.
Nevertheless, just because the HTTP POST method would be easier doesn't mean the CSV method has no advantages. Furthermore, the firmware guy has far more experience with computers than I do, so I trust his opinion.
Can you please help me understand his point of view on the advantages of the CSV method, and explain which method is best?
You're right. FTP has major issues with firewalls, and especially doesn't work well on mobile (NAT'ted) IPv4. HTTP POST works far, far better under such circumstances, if only because nobody accepts an "internet" connection that breaks HTTP.
Furthermore, HTTP is a lot easier on the device as well. It's just a single-socket protocol, with trivial read/write semantics on that socket.
Some more benefits? HTTP has almost-native support for compression (gzip). HTTP transmission can start before the input is complete. HTTP is easier to secure (HTTPS)...
No, there really is little reason to use FTP.
The 'CSV method' (I'd call it the 'FTP method' though) has the advantage of being known to the embedded developer. The receiving side will have to create some way of checking if there is a file though. That adds complexity.
The 'HTTP method' has several advantages:
HTTP is easy to implement on the sending side
No need to create a file-checker
You can reply to the embedded device if everything went OK
I actually implemented a system just like that recently (not too much data, but still) and used HTTP POST to send the data. I implemented the HTTP POST myself.
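For concreteness, the raw request such a device might emit can be as small as this (path, host, and field names are made up for illustration):

    POST /ingest HTTP/1.1
    Host: data.example.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 44
    Connection: close

    device=1234&ts=1700000000&kwh=1.25&volts=230

The HTTP status code in the response doubles as the delivery acknowledgment mentioned above: the device knows immediately whether the reading was accepted.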

Proxies for WebDAV

I'd like to set up a reverse proxy for my WebDAV server. The main reason is so that I can better control which files are being uploaded to the WebDAV server. I cannot do this at the WebDAV server itself: it's a service provided by Alfresco, and I have no idea whether it's possible to configure the WebDAV service at all.
In particular, I'd like to prevent my Mac from doing the AppleDouble thing on the WebDAV server, i.e. stop it from uploading ._* files for every real file I upload. As far as I know, there is no way to stop my Mac from attempting this.
Does the proxy server need to do more than merely relay HTTP requests back and forth? Does it also need to know something about WebDAV for this to work?
Which proxy servers could you recommend for this?
Günther
Unless I'm missing something, a reverse proxy will have to rewrite header fields (such as Destination: and If:), and potentially even request/response bodies, to work properly, and thus is unlikely to work well.
A "proper" (forward) proxy shouldn't get in the way, though.
You could do this with SabreDAV. It has a TemporaryFileFilter plugin that does exactly what you need. Not only does it intercept these resource forks, it also places them in a temporary 'quarantine'. This is important, because OS X will check whether the file was successfully written and fail horribly otherwise.
There are two things you will still need to do to make this work, though:
Automatic cleanup of these files (a script suitable for cron is also supplied).
The actual proxy bit. This means you'll have to implement a Collection and a File class that perform the HTTP requests.
Disclaimer: I authored SabreDAV

Node.js: Converting TCP to stdin/stdout

Node.js seems limited in its ability to live-update code and to automatically isolate exceptions, both of which you get practically by default in Java.
One very effective way to live-update is to have a listener process that simply echoes communication to/from a child process. To update, the listener starts a new child (which reads the updated code automatically), then starts sending requests to the new child, ending the old child when all of its requests are complete.
Is there already a system that provides this HTTP functionality through stdin/stdout?
Is there a system that provides TCP-server or UDP-server functionality through stdin/stdout?
By this I mean a module that looks like the http or net module, with the exception that it uses stdin/stdout for the underlying I/O.
Similar to this CGI module, with which some applications only have to change require('http') to require('cgi').
I intend to do something similar. I hope to reuse code if it is already out there, and also to easily convert a small or single-purpose web server into this listener layer that runs many web apps. It is important that cleanup occurs properly: connections that end or error should be freed, and the end/error events/commands should be properly echoed both ways.
(I believe a common approach is to have the children listen on ports and the parent communicate with those ports, but I think a stdin/stdout solution would be more efficient.)
Use nginx (HttpUpstreamModule) or HAProxy. In both cases you'd run them in front, mark a backend as down, and then bring it back up when you need to do a live upgrade, as in the sketch below.
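A minimal nginx upstream sketch of that pattern (ports are hypothetical; the two backends are two instances of your Node app):

    upstream node_app {
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://node_app;
        }
    }

To upgrade, add the down parameter to one server line, reload nginx (nginx -s reload), restart that instance on the new code, then repeat for the other; reloads are graceful, so in-flight requests on the old workers are allowed to finish.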
I'm not certain that this is what you're looking for (indeed, I'm not certain that I understand your question), but Remy Sharp has written a very helpful node module called nodemon. It promises to "monitor for any changes in your node.js application and automatically restart the server." This may help with the issue of live updating code.
