Limit Ant Media Server's recording size - ant-media-server

I have enabled recording for streams. An mp4.tmp_extension file is generated, but it never becomes an mp4 until the recording is stopped manually, and the resulting mp4 can be very large. How can I limit the mp4 size so that the mp4.tmp_extension is muxed to mp4 periodically?

Related

AntMedia server overwriting the HLS recording on republish

I have a use case where a stream ID will be reused, but when I republish, it overwrites the previous stream's HLS files. Even if I enable timestamps for recording, the timestamp is only added to the MP4 file, not to the .ts or .m3u8 files. I have S3 recording enabled.
We are using AntMedia to record drone video.
This issue becomes critical when there is a disruption in the source's network, which stops and restarts the stream within a short duration.

ffmpeg can play video but not a stream containing the same data

This is my first time encountering video codecs/video streaming.
I am receiving raw H.264 packets over TCP. When I connect to the socket, listen to it, and simply save the received data to a file, I am able to play it back using
ffplay data.h264
However, when I try to directly play it from the stream without saving it, using
ffplay tcp://addr:port
all I get is the error
Invalid data found when processing input
Why is that?
Specify the format: ffplay -f h264 tcp://addr:port
Alright, I found another way to display the video stream:
ffplay -f h264 -codec:v h264 tcp://addr:port?listen
The ?listen parameter makes ffplay create its own TCP server; all I do now is send the data to the specified address.
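For example, here is a minimal sketch of the round trip, assuming port 5000 and using netcat as a stand-in for the real sender:
ffplay -f h264 -codec:v h264 "tcp://127.0.0.1:5000?listen"
cat data.h264 | nc 127.0.0.1 5000
The first command starts ffplay as the TCP server; the second, run from another shell, pushes the previously captured data to it.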

Save bandwidth in an HTTP partial download case

I have a special requirement:
Download a big file from a server over HTTP.
If the network turns out to be poor, give up the current download and download a smaller file instead.
But in my tests, once I send an HTTP GET to the server, it keeps sending the file even if I only read 1024 bytes. I tried closing the connection after detecting that bandwidth is low, but the number of bytes actually downloaded is larger than what I read.
To save bandwidth, I want the server not to send more data than I request, but that seems impossible. So what is the actual mechanism to stop the server from sending data when the client only wants part of the file, e.g. 1024 bytes? If I do not disconnect, will the server keep sending data until the whole file has been transferred?
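For what it's worth, the standard mechanism for requesting only part of a file is an HTTP range request: the client sends a Range header, and a range-capable server answers with 206 Partial Content and stops after the requested bytes. A minimal sketch with curl (the URL is hypothetical):
curl -r 0-1023 -o probe.bin http://example.com/bigfile
If the server ignores the Range header (it answers 200 instead of 206), closing the connection early is the only way to stop the transfer, and whatever data is already in flight or buffered will still arrive.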

How do I set up a live audio streaming HTTP server?

I was hoping to build an application that streams audio (mp3, ogg, etc.) from my microphone to a web browser.
I think I can use the html5 audio tag to read/play the stream from my server.
The area I'm really stuck on is how to set up the streaming HTTP endpoint. What technologies will I need, and how should my server be structured to get the live audio from my mic and make it accessible from my server?
For example, for streaming mp3, do I constantly respond with mp3 frames as they are recorded?
Thanks for any help!
First off, let's split this problem up into a few parts. You have the audio capture (recording), the encoding/codec, the server, and the receiving clients.
Capture -> Codec -> Server -> Several Clients
For audio capture, you will need to use the Web Audio API along with getUserMedia. This will allow you to get 32-bit floating point PCM samples from the recording device. This data stream takes up a ton of bandwidth... a few megabits per second for a stereo stream. This stream is not directly playable in an HTML5 audio tag, and while you could play it on the receiving end with the Web Audio API, it takes up too much bandwidth to be useful. You need to use a codec to get the bandwidth usage down.
The codecs you want to look at include MP3, AAC (and its variants such as HE-AAC), and Opus. Not all browsers support all codecs. MP3 is the most widely compatible, but AAC provides better quality at a given bitrate. Opus is a free and open codec but still doesn't have the greatest client adoption. In any case, there isn't yet a codec that you can run in-browser with any real stability. (Although it's being worked on! There are a lot of test projects made with Emscripten.) I solved this problem by reducing the bit depth of my samples to 16-bit signed integers and sending this PCM stream to a server to do the encoding, over a binary WebSocket.
This encoding server took the PCM stream and ran it through a codec server-side. Here you can use whatever you'd like, such as a licensed codec binary or a tool like FFmpeg which encapsulates multiple codecs.
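As a rough sketch of that encode step with FFmpeg, assuming 16-bit/44.1 kHz stereo PCM arriving on stdin and an Icecast mount named /live.mp3 (the credentials and port here are placeholders):
ffmpeg -f s16le -ar 44100 -ac 2 -i - -codec:a libmp3lame -b:a 128k -content_type audio/mpeg -f mp3 icecast://source:hackme@localhost:8000/live.mp3
In practice you would feed ffmpeg's stdin from whatever process receives the WebSocket PCM stream.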
Next, this server streamed the data to a real streaming media server like Icecast. SHOUTcast and Icecast servers take the encoded stream and relay it to many clients over an HTTP-like connection. (Icecast is HTTP-compliant, whereas SHOUTcast is close but not quite there, which can cause compatibility issues.)
Once you have your streaming server set up, it's as simple as referencing the stream URL in your <audio> tag.
Hopefully that gets you started. Depending on your needs, you might also look into WebRTC which does all of this for you but doesn't give you options for quality and also doesn't scale beyond a few users.

RTSP in Flex

We are currently working on a Flex application that needs to connect to a set of traffic-detection cameras via RTSP. Being totally new to the world of video streaming in general, I was wondering if that is possible.
AFAIK it is not possible to consume an RTSP feed in the Flash player, so I'm thinking that we would need some sort of converter on the server that takes the RTSP stream and converts it to RTMP so we can consume the feed in our Flex app. We were hoping that Red5 could help us do that.
Am I correct in my assumption, and has anyone done this?
Wowza Media seems to support RTSP-to-RTMP conversion: http://www.wowzamedia.com/comparison.html
And there is also Xuggle (http://www.xuggle.com/), a general video-stream transcoder based on Red5 and FFmpeg.
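For reference, the RTSP-to-RTMP conversion itself can also be sketched with plain ffmpeg (the URLs are hypothetical, and -c copy assumes the camera's H.264 can be passed through without transcoding):
ffmpeg -rtsp_transport tcp -i rtsp://camera.example/stream -c copy -f flv rtmp://localhost/live/cam1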
You could try restreaming it via Red5 and connecting your Flex app to the Red5 server.
Read more at: http://red5wiki.com/wiki/SteamStream
Based on this work, I tried to convert an H.264 signal to a SWF stream that could easily be displayed in Flash. Here is the recipe. (This recipe is for Linux.)
Download Live555 streaming media, from http://www.live555.com/liveMedia/
The source file you get is usually named live555-latest.tar.gz
Unpack and compile:
Unpack: tar xzvf live555-latest.tar.gz. This will create a directory named live.
cd live
./genMakefiles linux (if you have a 32-bit system) or ./genMakefiles linux-64bit (if your system is 64-bit)
Run make, and after a while you'll have freshly compiled code
Live555 has a lot of good stuff, but we are only interested in the "testProgs"
directory, where openRTSP resides. openRTSP will let us receive a signal and send it
to ffmpeg, a program which feeds ffserver. ffserver is a server that receives
the signal from ffmpeg and converts it to SWF (and other formats).
Download, unpack, configure and install ffmpeg
Download ffmpeg from http://www.ffmpeg.org/. The version I tested is 0.6.1: http://www.ffmpeg.org/releases/ffmpeg-0.6.1.tar.gz
Unpack: tar xzvf ffmpeg-0.6.1.tar.gz. This will create a directory named ffmpeg-0.6.1
cd ffmpeg-0.6.1
All the funny video streaming things are packaged in VideoLAN, so you
had better install VideoLAN right now. Go to http://www.videolan.org/ and see how easy it is to
install it. You may be surprised that the package dependencies contain ffmpeg libraries.
After installing VideoLAN, do ./configure and then make.
After 3 or 4 hours you will have ffmpeg and ffserver compiled and working.
Now we are almost ready to stream the whole world. First of all, let's try to
get openRTSP working.
Go to your "live" directory (remember 3.2) and do: cd testProgs
Try this:./openRTSP -v -c -t rtsp://<hostname>:<port>/<cam_path> First of
all, you'll see logs which says something like:
- opening connection blah blah.
- sending DESCRIBE blah blah.
- receiving streamed data.
If all goes OK, your console will start printing a lot of strange characters very quickly.
These characters are bytes of video, but you can't watch it (yet). If you don't see your screen
printing characters, there is something wrong with your configuration. Check the steps up
to this point.
We got the signal! Now let's send it to a useful component: ffmpeg, which is bound to
ffserver. We need to create a configuration file for ffserver.
Use your favorite editor to create this text file:
Port 8090
BindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
NoDaemon
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 200K
ACL allow 127.0.0.1
</Feed>
<Stream testFlash.swf>
Feed feed1.ffm
Format swf
VideoFrameRate 25
VideoSize 352x288
VideoIntraOnly
NoAudio
</Stream>
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
Name the file, for example, ffserver.conf. Save it anywhere, for example in the same directory as ffserver.
So, ffserver will be bound to port 8090, for input and output. The <Feed> tag configures the
input stream. The name of the configured feed in this case is feed1.ffm. Remember it for the pipeline command below.
<Stream> contains the configuration for the output stream. In this case the name will be testFlash.swf (remember it too), and the format will be SWF. The video frame rate will be 25, the size 352x288, and it won't contain audio. The last stream is an HTML page (stat.html) that will show you the status of the server.
Start ffserver: ./ffserver -f ffserver.conf (or wherever you have left the config file). The -f parameter indicates
that the configuration is loaded from a custom file.
Open a browser and go to http://localhost:8090/stat.html. A status page for the server will show up, and we'll see a line of information about our testFlash.swf stream. It seems very quiet now, so let's feed this stream with the output of openRTSP (from the test above).
Do this:
<path to openRTSP>/openRTSP -v -c -t rtsp://<hostname>:<port>/<cam_path> | <path to ffmpeg>/ffmpeg -i - http://localhost:8090/feed1.ffm
The first part (before the "|") is the same openRTSP command as before. "|" is a symbol that connects the output of
openRTSP (the sequence of video signal, aka strange chars) to the input of ffmpeg. "-i -" means that
the input of ffmpeg is taken from the pipe "|", and http://localhost:8090/feed1.ffm is the destination (output)
of ffmpeg, which is basically the input of ffserver.
So with this command we have connected openRTSP -> ffmpeg -> ffserver
When you enter this command, a lot of information will be shown. It is important to note that the input params
and the output params are shown, and these params NEED to be "compatible". In my case, this is shown:
Input #0, h264, from 'pipe:':
Duration: N/A, bitrate: N/A
Stream #0.0: Video: h264, yuv420p, 352x288, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
Metadata:
encoder: Lavf52.64.2
Stream #0.0: Video: FLV, yuv420p, 352x288, q=2-31, 200 kb/s, 1000k tbn, 25 tbc
Stream mapping:
Stream #0.0 -> #0.0
And then the stream begins to play. You will see the numbers in the last line CONSTANTLY changing,
telling you the live frame rate at each moment. Something like
frame= 395 fps= 37 q=31.7 Lsize= 1404kB time=15.80 bitrate= 727.9kbits/s
If you don't see this line of metrics, then there is something wrong with your output configuration. Go back and change the parameters of testFlash.swf.
Everything is done. You can see the video at http://localhost:8090/testFlash.swf. You can use this URL to embed a Flash movie or, as in my case, a Flex application.
Red5 is fine, especially now with Xuggle support, which allows off-the-shelf integration of FFmpeg with Red5 (e.g., there is a great tutorial on doing live stream conversion).
If you're familiar with programming in Flex (or whatever it takes to bring it into a SWF), you could try to implement RTSP-over-TCP; AFAIK UDP isn't available in Flash.
I just tested this with Wowza.
Input was an RTSP stream from an Etrovision H.264 DVS.
Take a look at this thread and use the Application.xml file from there if you want to try it:
http://96.30.11.104/forums/showthread.php?p=24806
Video plays in the Flash player, but the price is a 5-second delay for a single stream, with all equipment on the office LAN and the server running on a Core2Duo/2.8GHz/3GB RAM.
Not sure if it can go faster or if that's the expected transcoding penalty for this setup...
While its public/open-source version is a bit dated now, you can have a look at this project, which did RTSP and image-based camera transcoding into RTMP streams with Xuggler and Red5.
http://sourceforge.net/projects/imux/
(disclaimer: I worked for the company that created the original source)
