I am using VLC to transcode the RTSP stream from an IP camera to an HTTP MJPEG stream via the following command:
cvlc -vvv -Idummy -q rtsp://user:password@hostname:554 --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8081/}'
This works fine.
I do not need the transcoded stream all the time, only a fraction of it.
VLC transcodes even when no client is connected, which keeps a whole CPU core on my server busy all the time.
Is there any way to start transcoding only when at least one client is connected, and to stop when the last client disconnects?
Thank you very much!
I think you are asking whether you can run the command above from your server. If so, it depends on the server and language, but in general, yes, you can.
Your server logic would be something like:
When a client connects:
if this is the first client, run the command to start the transcoding
provide the link to the stream to the client
When a client disconnects:
if this is the last client, stop the transcoding
There will be a delay for the first client as the stream is buffered, but I am guessing you know that already.
The way to run the command will depend on the server, but you can usually find examples - e.g. for Node: Execute a command line binary with Node.js (see the sketch below).
If you are using Java, there is a very well-known and useful article on running command-line processes from Java - even if you are not using Java it is good reading: https://www.javaworld.com/article/2071275/core-java/when-runtime-exec---won-t.html
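As a minimal sketch of that logic in Node (the cvlc arguments are taken from the question; the ports, retry count, and error handling are illustrative placeholders, not a hardened implementation):

// on-demand-transcode.js - minimal sketch, no dependencies beyond Node itself.
// Assumes cvlc is on the PATH; ports and retry count are illustrative.
const http = require('http');
const { spawn } = require('child_process');

const INTERNAL_PORT = 8082; // where cvlc publishes the MJPEG stream
const PUBLIC_PORT = 8081;   // where clients connect

let vlc = null;
let clients = 0;

function startVlc() {
  vlc = spawn('cvlc', [
    '-Idummy', '-q', 'rtsp://user:password@hostname:554',
    '--sout',
    '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:' + INTERNAL_PORT + '/}',
  ]);
  vlc.on('exit', () => { vlc = null; });
}

http.createServer((req, res) => {
  if (clients++ === 0 && !vlc) startVlc(); // first client: start transcoding

  // VLC needs a moment before it accepts connections, so retry the upstream.
  const connect = (attemptsLeft) => {
    http.get({ host: '127.0.0.1', port: INTERNAL_PORT, path: '/' }, (upstream) => {
      res.writeHead(upstream.statusCode, upstream.headers);
      upstream.pipe(res);
    }).on('error', () => {
      if (attemptsLeft > 0) setTimeout(() => connect(attemptsLeft - 1), 500);
      else res.end();
    });
  };
  connect(20);

  res.on('close', () => {
    if (--clients === 0 && vlc) vlc.kill(); // last client gone: stop transcoding
  });
}).listen(PUBLIC_PORT);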
I was hoping to build an application that streams audio (mp3, ogg, etc.) from my microphone to a web browser.
I think I can use the html5 audio tag to read/play the stream from my server.
The area I'm really stuck on is how to set up the streaming HTTP endpoint. What technologies will I need, and how should my server be structured to get the live audio from my mic and make it accessible from my server?
For example, for streaming mp3, do I constantly respond with mp3 frames as they are recorded?
Thanks for any help!
First off, let's split this problem up into a few parts. You have the audio capture (recording), the encoding/codec, the server, and the receiving clients.
Capture -> Codec -> Server -> Several Clients
For audio capture, you will need to use the Web Audio API along with getUserMedia. This will allow you to get 32-bit floating-point PCM samples from the recording device. This data stream takes up a ton of bandwidth... a few megabits for a stereo stream. This stream is not directly playable in an HTML5 audio tag, and while you could play it on the receiving end with the Web Audio API, it takes up too much bandwidth to be useful. You need to use a codec to get the bandwidth usage down.
The codecs you want to look at include MP3, AAC (and its variants such as HE-AAC), and Opus. Not all browsers support all codecs. MP3 is the most widely compatible, but AAC provides better quality for a given bitrate. Opus is a free and open codec but still doesn't have the greatest client adoption. In any case, there isn't yet a codec that you can run in-browser with any real stability. (Although it's being worked on! There are a lot of test projects made with Emscripten.) I solved this problem by reducing the bit depth of my samples to 16-bit signed integers and sending this PCM stream to a server, over a binary WebSocket, to do the encoding there.
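As a rough sketch of that capture-and-downconvert step (the WebSocket URL and buffer size are placeholders, and ScriptProcessorNode is used here for brevity):

// Browser side: capture mic audio, downconvert the 32-bit float samples
// to 16-bit signed integers, and ship them over a binary WebSocket.
const ws = new WebSocket('ws://example.com:8080/pcm'); // placeholder URL
ws.binaryType = 'arraybuffer';

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const processor = ctx.createScriptProcessor(4096, 1, 1); // mono

  processor.onaudioprocess = (e) => {
    const float32 = e.inputBuffer.getChannelData(0);
    const int16 = new Int16Array(float32.length);
    for (let i = 0; i < float32.length; i++) {
      // clamp to [-1, 1], then scale into the signed 16-bit range
      const s = Math.max(-1, Math.min(1, float32[i]));
      int16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
    }
    if (ws.readyState === WebSocket.OPEN) ws.send(int16.buffer);
  };

  source.connect(processor);
  processor.connect(ctx.destination); // ScriptProcessor only runs when connected
});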
This encoding server took the PCM stream and ran it through a codec server-side. Here you can use whatever you'd like, such as a licensed codec binary or a tool like FFmpeg which encapsulates multiple codecs.
Next, this server streamed the data to a real streaming media server like Icecast. SHOUTcast and Icecast servers take the encoded stream and relay it to many clients over an HTTP-like connection. (Icecast is HTTP-compliant, whereas SHOUTcast is close but not quite there, which can cause compatibility issues.)
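A sketch of those two hops on the server side, assuming mono 16-bit/44.1 kHz PCM arrives on a binary WebSocket; the 'ws' npm package, the Icecast credentials, and the mount name are placeholders, and ffmpeg's icecast: protocol handles the relay to the streaming server:

// encode-server.js - receive raw PCM over a WebSocket and pipe it through
// ffmpeg into an Icecast mount. Requires: npm install ws; ffmpeg on the PATH.
const WebSocket = require('ws');
const { spawn } = require('child_process');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (socket) => {
  // The input format must match what the capture side sends:
  // signed 16-bit little-endian PCM, 44.1 kHz, mono.
  const ffmpeg = spawn('ffmpeg', [
    '-f', 's16le', '-ar', '44100', '-ac', '1', '-i', '-',
    '-codec:a', 'libmp3lame', '-b:a', '128k',
    '-content_type', 'audio/mpeg', '-f', 'mp3',
    'icecast://source:hackme@localhost:8000/live.mp3', // placeholder mount
  ]);

  socket.on('message', (pcm) => ffmpeg.stdin.write(pcm));
  socket.on('close', () => ffmpeg.stdin.end()); // lets ffmpeg finish and exit
});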
Once you have your streaming server set up, it's as simple as referencing the stream URL in your <audio> tag, e.g. <audio src="http://yourserver:8000/live.mp3" controls></audio>, where the host and mount point come from your Icecast configuration.
Hopefully that gets you started. Depending on your needs, you might also look into WebRTC which does all of this for you but doesn't give you options for quality and also doesn't scale beyond a few users.
I want to record an audio stream using ffmpeg. The problem is that this stream is not available over the whole timespan that I want to record.
I have a device which takes input from a microphone, encodes it and broadcasts a rtp stream. This device can be controlled to start and stop this stream via a button (or a telnet command).
So basically I want to start a recording session. ffmpeg should capture the RTP stream while it is available; otherwise silence should be recorded, and when the stream becomes available again, it should record the stream again, and so on.
How can I achieve this? When I start ffmpeg, it always aborts as soon as the stream ends.
For now my ffmpeg command just looks like this:
ffmpeg -i rtp://192.168.2.255:3131 out.wav
If this is not possible with an ffmpeg command, what would be the best (or easiest) way to do this?
If I understand the question correctly, you want to record when there is audio and not record when there is silence, using ffmpeg. Try the following:
ffmpeg -i rtp://192.168.2.255:3131 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-30dB output-file.mp3
More details available here: https://ffmpeg.org/ffmpeg-filters.html
Hope that helps.
After researching for a few days, I am still lost on this issue:
I have a webcam connected over WiFi to my Android device.
I wrote an Android app to connect to a specified socket of the webcam (IP and port). From this socket I get an InputStream which is already encoded in H.264. I then redirect this InputStream from the Android device to my server, where I managed to decode it to images/frames using Xuggler.
I would like to stream my webcam live to the internet to a flash player or something.
I know i have to use Wowza, FMS or RED5 for this.
My problem is that I don't understand how to proceed with the InputStream I have. All the examples I've read need an MP4/FLV or other container file to stream from... but I have a continuous live InputStream.
Other examples suggest using the Flash Encoder, but my InputStream is already encoded in H.264.
This is a general understanding question. Please advise me on how to solve this.
Thank you
You have the following options:
Encode into an FLV container. Yes, you can transmit a live stream using the FLV container: set the 'duration' field in the header to an arbitrarily large value; YouTube, for example, uses this trick for live streaming. (See the example command after this list.)
Wrap the stream in RTMP. ffmpeg has RTMP code which can be read to understand the protocol, and I believe there are other open-source RTMP muxers available.
Convert the stream into HLS; there are Flash-based HLS players available.
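For illustration, assuming your server has ffmpeg available and the raw H.264 stream arrives on stdin, a command along these lines would wrap it into FLV and push it to an RTMP server without re-encoding (the server and stream names are placeholders):

ffmpeg -f h264 -i - -c:v copy -f flv rtmp://yourserver/live/stream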
Why Flash, if I may ask? I hope you know that the HTML5 video tag now directly accepts H.264-encoded video.
We are currently working on a Flex application that needs to connect to a set of traffic detection cameras via RTSP. Being totally new to the world of video streaming in general, I was wondering if that is possible.
AFAIK it is not possible to consume an RTSP feed in the Flash player, so I'm thinking that we would need some sort of converter on the server that takes the RTSP stream and converts it to RTMP so we can consume the feed in our Flex app. We were hoping that Red5 could help us do that.
Am I correct in my assumption and has anyone done this?
Wowza Media seems to support RTSP to RTMP conversion: http://www.wowzamedia.com/comparison.html
There is also Xuggle (http://www.xuggle.com/), a general video stream transcoder based on Red5 and FFmpeg.
You could try restreaming it via Red5 and connecting your Flex app to the Red5 server.
Read more at: http://red5wiki.com/wiki/SteamStream
Based on this work, I tried to convert an H.264 signal to a SWF stream that can easily be displayed in Flash. Here is the recipe. (This recipe is for Linux.)
Download Live555 streaming media, from http://www.live555.com/liveMedia/
The source file is usually named live555-latest.tar.gz
Unpack and compile:
Unpack: tar xzvf live555-latest.tar.gz. This will create a directory named live.
cd live
./genMakefiles linux (if you have a 32-bit system) or ./genMakefiles linux-64bit (if your system is 64-bit)
make; after a while you'll have freshly compiled binaries
Live555 has a lot of good stuff, but we are only interested in the "testProgs" directory, where openRTSP resides. openRTSP will let us receive the signal and send it to ffmpeg, a program which feeds ffserver. ffserver is a server that receives the signal from ffmpeg and converts it to SWF (and other formats).
Download, unpack, configure and install ffmpeg
Download ffmpeg from http://www.ffmpeg.org/. The version I tested is 0.6.1: http://www.ffmpeg.org/releases/ffmpeg-0.6.1.tar.gz
Unpack: tar xzvf ffmpeg-0.6.1.tar.gz. This will create a directory named ffmpeg-0.6.1.
cd ffmpeg-0.6.1
Many of the useful video streaming components are packaged in VideoLAN, so you had better install VideoLAN right now. Go to http://www.videolan.org/ and see how easy it is to install. You may be surprised that the package dependencies contain ffmpeg libraries.
After installing VideoLAN, do ./configure and then make.
After 3 or 4 hours you will have ffmpeg and ffserver compiled and working.
Now we are almost ready to stream the whole world. First of all, let's try to
get openRTSP working.
Go to your "live" directory (from the Live555 step above) and do: cd testProgs
Try this: ./openRTSP -v -c -t rtsp://<hostname>:<port>/<cam_path>
First of all, you'll see logs that say something like:
- opening conection blah blah.
- sending DESCRIBE blah blah.
- receiving streamed data.
If all goes OK, your console will start to print a lot of strange characters very quickly. These characters are bytes of video, but you can't watch it (yet). If your screen is not printing characters, there is something wrong with your configuration; check the steps up to now.
We got the signal! Now let's send it to a useful component: ffmpeg, which is bound to ffserver. We need to create a configuration file for ffserver.
Use your favorite editor to create this text file:
Port 8090
BindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
NoDaemon
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 200K
ACL allow 127.0.0.1
</Feed>
<Stream testFlash.swf>
Feed feed1.ffm
Format swf
VideoFrameRate 25
VideoSize 352x288
VideoIntraOnly
NoAudio
</Stream>
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
Name the file, for example, ffserver.conf. Save it anywhere, for example in the same directory as ffserver.
So, ffserver will be bound to port 8090, for both input and output. The <Feed> tag configures the input stream. The name of the configured feed in this case is feed1.ffm; remember it for the piping step below.
<Stream> contains the configuration for the output stream. In this case the name will be testFlash.swf (remember it too), and the format will be SWF. The video frame rate will be 25 fps, the size 352x288, and it won't contain audio. The last stream is an HTML file (stat.html) that will show you the status of the server.
Start ffserver: ./ffserver -f ffserver.conf (or wherever you have left the config file). The -f parameter indicates that the configuration is loaded from a custom file.
Open a browser and go to http://localhost:8090/stat.html. A status page of the server will show up, and we'll see a line of information about our testFlash.swf stream. It seems very quiet now, so let's feed this stream with the output of openRTSP (from the earlier step).
Do this:
<path to openRTSP>/openRTSP -v -c -t rtsp://<hostname>:<port>/<cam_path> | <path to ffmpeg>/ffmpeg -i - http://localhost:8090/feed1.ffm
The first part (before the "|") is the same openRTSP command as before. The "|" connects the output of openRTSP (the sequence of video bytes, aka the strange characters) to the input of ffmpeg. "-i -" means that the input of ffmpeg is taken from the pipe, and http://localhost:8090/feed1.ffm is the destination (output) of ffmpeg, which is basically the input of ffserver.
So with this command we have connected openRTSP -> ffmpeg -> ffserver
When you enter this command, a lot of information will be shown. It is important to note that the input params and the output params are shown, and these params NEED to be "compatible". In my case, this is shown:
Input #0, h264, from 'pipe:':
Duration: N/A, bitrate: N/A
Stream #0.0: Video: h264, yuv420p, 352x288, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
Metadata:
encoder: Lavf52.64.2
Stream #0.0: Video: FLV, yuv420p, 352x288, q=2-31, 200 kb/s, 1000k tbn, 25 tbc
Stream mapping:
Stream #0.0 -> #0.0
And then the stream begins to play. You will see the numbers in the last line CONSTANTLY changing, telling you the live frame rate at each moment. Something like:
frame= 395 fps= 37 q=31.7 Lsize = 1404kB time=15.80 bitrate = 727.9kbits/s
If you don't see this line of metrics, then there is something wrong with your output configuration. Go back and change the parameters of testFlash.swf.
Everything is done. You can see the video at http://localhost:8090/testFlash.swf. You can use this URL to embed the stream in a Flash movie or, as in my case, a Flex application.
Red5 is fine, especially now with Xuggler support, which allows off-the-shelf integration of FFmpeg with Red5 (there is a great tutorial on doing live stream conversion, etc.).
If you're familiar with programming in Flex (or whatever it takes to bring it into a SWF), you could try to implement RTSP over TCP; AFAIK, UDP isn't available in Flash.
I just tested this with Wowza.
Input was RTSP stream from Etrovision H264 DVS.
Take a look at this thread and use the Application.xml file from there if you want to try it:
http://96.30.11.104/forums/showthread.php?p=24806
Video plays in the Flash player, but the price is a 5-second delay for a single stream, with all equipment on the office LAN and the server running on a Core2Duo/2.8GHz/3GB RAM.
Not sure if it can go faster or if that's the expected transcoding overhead for this setup...
While its public/open-source version is a bit dated now, you can have a look at this project, which did RTSP and image-based camera transcoding into RTMP streams with Xuggler and Red5.
http://sourceforge.net/projects/imux/
(disclaimer: I worked for the company that created the original source)