VOD not playing after S3 recording enabled in Ant Media Server

I have enabled S3 recording in the application settings on the dashboard, but when I try to play a VOD from the dashboard it gives the error "The media could not be loaded, either because the server or network failed or because the format is not supported."

Problem:
This error generally occurs when HTTP forwarding is not configured on Ant Media Server for S3 recording.
HTTP forwarding forwards incoming HTTP requests to another location. It is typically used to forward incoming requests to a storage service such as S3.
Solution:
Here's how HTTP forwarding can be configured on Ant Media Server:
Open the file {AMS-DIR} / webapps / {APPLICATION} / WEB-INF / red5-web.properties with your text editor.
Add settings.httpforwarding.extension=mp4 to the file.
Add the base URL with settings.httpforwarding.baseURL=https://{YOUR_DOMAIN} for forwarding. Replace {YOUR_DOMAIN} with your own URL, and make sure there are no leading or trailing white spaces.
For example:
If you are using an AWS S3 bucket, {YOUR_DOMAIN} will look like:
{s3BucketName}.s3.{awsLocation}.amazonaws.com
If you are using DigitalOcean Spaces, {YOUR_DOMAIN} will look like:
{BucketName}.{BucketLocation}.digitaloceanspaces.com
Save the file and restart the Ant Media Server with sudo service antmedia restart
If it's configured properly, your incoming MP4 requests such as https://{SERVER_DOMAIN}:5443/{APPLICATION_NAME}/streams/vod.mp4 will be forwarded to https://{YOUR_DOMAIN}/streams/vod.mp4
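Putting it together, the added lines in red5-web.properties would look like this (the bucket name my-vod-bucket and region us-east-1 are made-up examples; substitute your own values):

```properties
# Forward incoming requests for .mp4 files to the storage below
settings.httpforwarding.extension=mp4
# Base URL of the S3 bucket (no leading or trailing whitespace)
settings.httpforwarding.baseURL=https://my-vod-bucket.s3.us-east-1.amazonaws.com
```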

Related

How can I use TCP instead of UDP for WebRTC publish/play in Ant Media Server?

I'm using Ant Media Server on AWS and it works perfectly fine. However, some of our users have blocked UDP ports and therefore I want to know if it is possible to use TCP instead of UDP for WebRTC.
Yes, we can make use of TCP for WebRTC.
Open the TCP port range 50000-60000 in the AWS Security Group (for AMS v2.4.2.1 and above; for older versions use the port range 5000-65000).
Go to the application settings:
/usr/local/antmedia/webapps/<AppName>/WEB-INF/red5-web.properties
Edit the red5-web.properties file and set
settings.webrtc.tcpCandidateEnabled=true
Restart Ant Media Server:
sudo service antmedia restart
If you are using a cloud service like OVH, or if a public IP is directly associated with the instance, WebRTC should work at this point.
If you are using a cloud service like AWS, where the instance has separate private/public IPs, some additional settings need to be configured.
Go to the server configuration settings:
/usr/local/antmedia/conf/red5.properties
Edit the red5.properties file and set
server.name=Instance_Public_IP
On AWS you can automate this: with the following in your instance's User Data, the current public IP is inserted into server.name automatically on boot:
sed -i "s/server.name=.*/server.name=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)/g" /usr/local/antmedia/conf/red5.properties
Go to the application settings again and edit red5-web.properties:
/usr/local/antmedia/webapps/<AppName>/WEB-INF/red5-web.properties
Set
settings.replaceCandidateAddrWithServerAddr=true
Save the settings and restart Ant Media Server:
sudo service antmedia restart
WebRTC should work fine afterwards.
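The property edits above can be scripted. A minimal sketch, run here against a scratch copy so it can be tried anywhere; on a real server you would point APP_PROPS at /usr/local/antmedia/webapps/<AppName>/WEB-INF/red5-web.properties (the helper name set_prop is my own, not an AMS tool):

```shell
# set_prop replaces a key=value line if the key exists, otherwise appends it.
set_prop() {
  file="$1"; key="$2"; value="$3"
  if grep -q "^${key}=" "$file"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$file"
  else
    printf '%s=%s\n' "$key" "$value" >> "$file"
  fi
}

# Demo on a scratch file standing in for red5-web.properties:
APP_PROPS="$(mktemp)"
printf 'settings.webrtc.tcpCandidateEnabled=false\n' > "$APP_PROPS"

set_prop "$APP_PROPS" settings.webrtc.tcpCandidateEnabled true
set_prop "$APP_PROPS" settings.replaceCandidateAddrWithServerAddr true

cat "$APP_PROPS"
# settings.webrtc.tcpCandidateEnabled=true
# settings.replaceCandidateAddrWithServerAddr=true
```

After editing the real file, restart with sudo service antmedia restart as described above.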
Thank you.

Configuring NiFi UI access via HTTP

I am looking to configure NiFi UI access via HTTP. I've set the necessary values (or so I thought) in nifi.properties.
properties set:
nifi.web.http.host=192.168.1.99
nifi.web.http.port=8080
I know NiFi does not allow both HTTP and HTTPS to be used simultaneously, so I removed the default values below and left them unset:
nifi.web.https.host=
nifi.web.https.port=
Once I saved the file, I restarted the service with systemctl restart nifi.service so it would read the new config. I then ran netstat -plnt to check whether the port was open, to no avail.
Did you set the HTTP values? If not, you're not providing a port for NiFi to listen on.
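For reference, a consistent HTTP-only nifi.properties carries values like these (host and port are taken from the question above; since NiFi listens on either HTTP or HTTPS but not both, the HTTPS values must remain empty):

```properties
nifi.web.http.host=192.168.1.99
nifi.web.http.port=8080
nifi.web.https.host=
nifi.web.https.port=
```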

Cert/client authentication with Nginx with multiple clients using different certs

I'm working on a small Flask-based service behind Nginx that serves some content over HTTP. It will do so using two-way certificate authentication - which I understand how to do with Nginx - but users must log in and load their own certificate that will be used for the auth piece.
So the scenario is:
User has a server that generates a cert that is used for client authentication.
They log into the service to upload that cert for their server.
Systems that pull the cert from the user's server can now reach an endpoint on my service that serves the content and authenticates using the cert.
I can't find anything in the Nginx docs that says I can have a single keystore or directory that Nginx consults to match the cert on an incoming request. I know I can configure this per-server in Nginx.
The idea I currently have is to let the web app trigger a script that reads the Nginx conf file, inserts a new server entry on a specified port with the path to the uploaded cert, and then sends the HUP signal to reload Nginx.
I'm wondering if anyone in the community has done something similar to this before with Nginx or if they have a better solution for the odd scenario I'm presenting.
After a lot more research and reading some of the documentation on nginx.com I found that I was way over complicating this.
Instead of modifying my configuration in sites-available I should be adding and removing config files from /etc/nginx/conf.d/ and then telling Nginx to reload by calling sudo nginx -s reload.
I'll have the app call a script to run the needed commands and add the script into the sudoers file for the www-data user.
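The per-client files in conf.d can be generated by such a script. A minimal sketch, assuming one dedicated port per client; the server cert/key paths, Flask port, and naming scheme are all illustrative, not Nginx requirements:

```shell
# Write a server block for one client into an Nginx conf.d-style directory.
# Arguments: client name, listen port, path to the client's uploaded cert,
# and (optionally) the target conf directory.
make_client_conf() {
  client="$1"; port="$2"; ca_cert="$3"; conf_dir="${4:-/etc/nginx/conf.d}"
  cat > "${conf_dir}/client-${client}.conf" <<EOF
server {
    listen ${port} ssl;
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    # Two-way TLS: only requests presenting a cert verifiable
    # against the uploaded cert are allowed through.
    ssl_client_certificate ${ca_cert};
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:5000;  # the Flask service
    }
}
EOF
}

# Usage (as root, from the app-triggered script):
#   make_client_conf acme 8443 /etc/nginx/client-certs/acme.pem
#   nginx -s reload
```

Removing a client is then just deleting its file from conf.d and reloading again.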

How to get Meteor server to listen on Unix domain socket?

My Meteor server needs to run behind an NGINX proxy which receives HTTP requests, adds the Kerberos-authenticated user name to the header and forwards them to another webserver (assumed to be NodeJS) over a Unix domain socket which is accessed through a file secured by Unix permissions.
I would like to use Meteor instead of NodeJS, but the only way I can get Meteor to listen on a Unix domain socket is to hack a file called run-proxy.js deep inside my Meteor installation and modify a call to server.listen(...) to pass it a file name instead of a port number.
This works, but is there a better way to achieve this? Ideally without modifying Meteor's code. I did try meteor --port /home/me/file_name but it complains that there is no port number.

IMAP Proxy that can connect to multiple IMAP servers

What I am trying to achieve is a central webmail client that I can use in an ISP environment, with the capability to connect to multiple mail servers.
I have now been looking at Perdition, NGINX and Dovecot.
But most of the articles have not been updated in a very long time.
The one I am really looking at is the NGINX IMAP proxy, as it can do almost everything I require.
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly the issue I have is you can no longer compile NGINX from source with those flags.
And secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl,
does not give detailed information about the updated project.
So all I am trying to achieve is one webmail server that can connect to any one of my mail servers, where the user's location resides in a database. The location is a hostname, not an IP.
You can tell Nginx to do auth_http with any http URL you set up.
You don't need an embedded perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header based protocol Nginx uses.
You can implement the protocol described above in any language - a CGI script with Apache, if you like.
You do the auth and database query and return the appropriate backend servers in this script.
(Personally, I use a python + WSGI server setup.)
Say you set up your script on apache at http://localhost:9000/cgi-bin/nginx_auth.py
In your Nginx config, you use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py
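Such a script only has to speak Nginx's small header-based protocol: read the Auth-User/Auth-Pass request headers, look the user up, and answer with Auth-Status, Auth-Server and Auth-Port. A minimal sketch as a shell CGI script (lookup_backend is a stub of my own; in practice it would run your database query and resolve the stored hostname to an IP, since auth_http expects an address):

```shell
#!/bin/sh
# Minimal CGI responder for Nginx's mail auth_http protocol.
# CGI exposes the request headers as HTTP_AUTH_USER / HTTP_AUTH_PASS.
lookup_backend() {
  # Stub: replace with your database query for user "$1";
  # must print an "ip port" pair for the user's mail server.
  echo "127.0.0.1 143"
}

auth_response() {
  backend="$(lookup_backend "$HTTP_AUTH_USER")"
  host="${backend% *}"; port="${backend#* }"
  printf 'Auth-Status: OK\r\n'
  printf 'Auth-Server: %s\r\n' "$host"
  printf 'Auth-Port: %s\r\n' "$port"
  printf '\r\n'
}

auth_response
```

On a failed lookup you would instead return Auth-Status with an error message, which Nginx passes back to the IMAP client.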
