I'd like to use OBS to stream via RTMP to an nginx server, and then locally send the RTMP fragments to WebRTC, so that they can be transmitted to the client via a MediaStream. I think this is possible, as it is essentially what is described here. I'm doing this because the multi-second latency of HLS is not appropriate for what I'm trying to do.
I'm having trouble extracting the RTMP fragments from nginx. The only plausible directive I could find for doing this in the documentation was pull rtmp://.... When I tried it, I did not see any files appearing in my root folder, where I would normally find the HLS files if I were using hls on. Does anyone know how to accomplish what I'm trying to achieve above?
Thanks!
This is easily possible! You could base it off Pion's rtp-to-webrtc example, which makes it easy to get media from ffmpeg into the browser.
The ffmpeg command you would run instead looks like this one:
ffmpeg -re -i rtmp://localhost:1935/$app/$name -vn -acodec libopus -f rtp rtp://localhost:6000 -vcodec copy -an -f rtp rtp://localhost:5000 -sdp_file video.sdp
I would consider transcoding to VP8, since not all browsers support H264.
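If you do transcode, a VP8 variant of the command above might look like the following (a sketch only; the realtime deadline and bitrate are example settings I've added, and the ports and SDP file still have to match whatever your WebRTC side expects):
ffmpeg -re -i rtmp://localhost:1935/$app/$name -vn -acodec libopus -f rtp rtp://localhost:6000 -an -vcodec libvpx -deadline realtime -b:v 1M -f rtp rtp://localhost:5000 -sdp_file video.sdp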
---
If you want sub-second playback in the browser, I would check out Project Lightspeed; that's your best option today, IMO.
I am using VLC to transcode the RTSP stream from an IP camera to an HTTP MJPEG stream via the following command:
cvlc -vvv -Idummy -q rtsp://user:password@hostname:554 --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8081/}'
This works fine.
I do not need to transcode the stream all the time, only a fraction of it.
VLC is transcoding even if no client is connected. That utilizes a whole CPU core on my server all the time.
Is there any possibility to start transcoding only if at least one client is connected and stop transcoding if the last client is disconnected?
Thank you very much!
I think you are asking whether you can run the command-line command above from your server. If so, it depends on the server and language, but in general, yes, you can.
Your server logic would be something like:
When a client connects:
- if this is the first client, run the command to start the transcoding
- provide the link to the stream to the client
When a client disconnects:
- if this is the last client, stop the transcoding
There will be a delay for the first client as the stream is buffered, but I am guessing you know that already.
The way to run the command will depend on the server, but you can usually find examples, e.g. for Node: Execute a command line binary with Node.js
If you are using Java, there is a very well-known and useful article on running command-line programs from Java; even if you are not using Java it is good reading: https://www.javaworld.com/article/2071275/core-java/when-runtime-exec---won-t.html
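For illustration, here is a minimal sketch of that logic in Python. The connect/disconnect hooks are hypothetical and depend entirely on your server framework; the cvlc command is the one from the question.

import subprocess

# cvlc invocation from the question (URL and sout chain unchanged)
CMD = [
    "cvlc", "-Idummy", "-q", "rtsp://user:password@hostname:554",
    "--sout",
    "#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:"
    "standard{access=http{mime=multipart/x-mixed-replace;"
    "boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8081/}",
]

transcoder = None  # running cvlc process, if any
clients = 0        # number of currently connected clients

def client_connected():
    """Call this from your server whenever a client connects."""
    global transcoder, clients
    clients += 1
    if clients == 1:  # first client: start transcoding
        transcoder = subprocess.Popen(CMD)

def client_disconnected():
    """Call this from your server whenever a client disconnects."""
    global transcoder, clients
    clients -= 1
    if clients == 0 and transcoder is not None:  # last client: stop
        transcoder.terminate()
        transcoder.wait()
        transcoder = None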
This is my first time encountering video codecs/video streaming.
I am receiving raw h.264 packets over TCP. When I connect to the socket, listen on it, and simply save the received data to a file, I am able to play it back using
ffplay data.h264
However, when I try to directly play it from the stream without saving it, using
ffplay tcp://addr:port
all I get is the error
Invalid data found when processing input
Why is that?
Specify the format: ffplay -f h264 tcp://addr:port
Alright, I found another way to display the video stream.
ffplay -f h264 -codec:v h264 tcp://addr:port?listen
The ?listen parameter makes ffplay create its own TCP server. All I do now is send the data to the specified address.
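For reference, one way to feed such a listener is with ffmpeg itself, e.g. replaying the capture file from above (assuming the same address and port as the listener):
ffmpeg -re -i data.h264 -c:v copy -f h264 tcp://addr:port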
I want to record an audio stream using ffmpeg. The problem is that this stream is not available over the whole timespan that I want to record.
I have a device which takes input from a microphone, encodes it, and broadcasts an RTP stream. This device can be told to start and stop the stream via a button (or a telnet command).
So basically, I want to start a recording session in which ffmpeg captures the RTP stream whenever it is available; otherwise silence should be recorded, and when the stream becomes available again it should be recorded again, and so on.
How can I achieve this? When I start ffmpeg, it always aborts when the stream ends.
For now my ffmpeg command just looks like this:
ffmpeg -i rtp://192.168.2.255:3131 out.wav
If this is not possible with an ffmpeg command, what would be the best (or easiest) way to do this?
If I understand the question correctly, you want to record when there is audio and not record when there is silence, using ffmpeg. Try the following:
ffmpeg -i rtp://192.168.2.255:3131 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-30dB output-file.mp3
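If you want WAV output as in your original command, the same filter should work there too, e.g.:
ffmpeg -i rtp://192.168.2.255:3131 -af silenceremove=stop_periods=-1:stop_duration=1:stop_threshold=-30dB out.wav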
More details available here: https://ffmpeg.org/ffmpeg-filters.html
Hope that helps.
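Separately, if the problem is ffmpeg aborting whenever the stream stops (as described in the question), one crude workaround is a wrapper loop that keeps restarting the capture and writes numbered segments you can join afterwards. This is only a sketch; note it records nothing (rather than silence) during the gaps:
while true; do
  ffmpeg -i rtp://192.168.2.255:3131 "part-$(date +%s).wav"
  sleep 1
done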
We are currently working on a Flex application that needs to connect to a set of traffic detection cameras via RTSP. Being totally new to the world of video streaming in general, I was wondering if that is possible.
AFAIK it is not possible to consume an RTSP feed in the Flash player, so I'm thinking that we would need some sort of converter on the server that takes the RTSP stream and converts it to RTMP, so we can consume the feed in our Flex app. We were hoping that Red5 could help us do that.
Am I correct in my assumption and has anyone done this?
Wowza Media seems to support RTSP-to-RTMP conversion: http://www.wowzamedia.com/comparison.html
There is also Xuggle (http://www.xuggle.com/), a general video stream transcoder based on Red5 and FFMPEG.
You could try restreaming it via Red5 and connecting your Flex app to the Red5 server.
Read more at: http://red5wiki.com/wiki/SteamStream
Based on this work, I tried to convert an H264 signal to a SWF stream that can easily be displayed in Flash. Here is the recipe. (This recipe is for Linux.)
Download Live555 Streaming Media from http://www.live555.com/liveMedia/. The source file is usually named live555-latest.tar.gz.
Unpack and compile:
Unpack: tar xzvf live555-latest.tar.gz. This will create a directory named live.
cd live
./genMakefiles linux (if you have a 32-bit system) or ./genMakefiles linux-64bit (if your system is 64-bit)
make. After a while you'll have freshly compiled code.
Live555 has a lot of good stuff, but we are only interested in the "testProgs" directory, where openRTSP resides. openRTSP will let us receive a signal and send it to ffmpeg, a program which feeds ffserver. ffserver is a server that receives the signal from ffmpeg and converts it to SWF (and other formats).
Download, unpack, configure and install ffmpeg
Download ffmpeg from http://www.ffmpeg.org/. The version I tested is 0.6.1: http://www.ffmpeg.org/releases/ffmpeg-0.6.1.tar.gz
Unpack: tar xzvf ffmpeg-0.6.1.tar.gz. This will create a directory named ffmpeg-0.6.1.
cd ffmpeg-0.6.1
All the fun video-streaming components are packaged in VideoLAN, so you had better install VideoLAN right now. Go to http://www.videolan.org/ and see how easy it is to install. You may be surprised that the package dependencies contain ffmpeg libraries.
After installing VideoLAN, do ./configure and then make.
After 3 or 4 hours you will have ffmpeg and ffserver compiled and working.
Now we are almost ready to stream the whole world. First of all, let's try to
get openRTSP working.
Go to your "live" directory (remember 3.2) and do: cd testProgs
Try this:./openRTSP -v -c -t rtsp://<hostname>:<port>/<cam_path> First of
all, you'll see logs which says something like:
- opening connection blah blah.
- sending DESCRIBE blah blah.
- receiving streamed data.
If all goes OK, your console will start to print a lot of strange characters very quickly. These characters are bytes of video, but you can't see it (yet). If you don't see your screen printing characters, there is something wrong with your configuration; check the steps up to now.
We got the signal! Now let's send it to a useful component: ffmpeg, which is bound to ffserver. We need to create a configuration file for ffserver.
Use your favorite editor to create this text file:
Port 8090
BindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 1000
CustomLog -
NoDaemon
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 200K
ACL allow 127.0.0.1
</Feed>
<Stream testFlash.swf>
Feed feed1.ffm
Format swf
VideoFrameRate 25
VideoSize 352x288
VideoIntraOnly
NoAudio
</Stream>
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
Name the file, for example, ffserver.conf. Save it anywhere, for example in the same directory as ffserver.
So, ffserver will be bound to port 8090, for input and output. The <Feed> tag configures the input stream; the name of the configured feed in this case is feed1.ffm (remember it for later).
<Stream> contains the configuration for the output stream. In this case the name will be testFlash.swf (remember it too), and the format will be SWF. The video frame rate will be 25, the size 352x288, and it won't contain audio. The last stream is an HTML page (stat.html) that will show you the status of the server.
Start ffserver: ./ffserver -f ffserver.conf (or wherever you have left the config file). The -f parameter indicates that the configuration will be loaded from a custom file.
Open a browser and go to http://localhost:8090/stat.html. A status page for the server will show up, and we'll see a line of information about our testFlash.swf stream. It seems very quiet now, so let's feed this stream with the output of openRTSP (the command we tested above).
Do this:
<path to openRTSP>/openRTSP -v -c -t rtsp://<hostname>:<port>/<cam_path> | <path to ffmeg>/ffmpeg -i - http://localhost:8090/feed1.ffm
The first part (before the "|") is the same openRTSP command as above. "|" is the pipe symbol, which connects the output of openRTSP (the sequence of video signal, a.k.a. the strange characters) to the input of ffmpeg. "-i -" means that the input of ffmpeg is taken from the pipe "|", and http://localhost:8090/feed1.ffm is the destination (output) of ffmpeg, which is basically the input of ffserver.
So with this command we have connected openRTSP -> ffmpeg -> ffserver
When you enter this command, a lot of information will be shown. It is important to note that the input params and the output params are shown, and these params NEED to be "compatible". In my case, this is shown:
Input #0, h264, from 'pipe: ':
Duration: N/A, bitrate: N/A
Stream #0.0: Video: h264, yuv420p, 352x288, 25 fps, 25 tbr, 1200k tbn, 50 tbc
Output #0, ffm, to 'http://localhost:8090/feed1.ffm':
Metadata:
encoder: Lavf52.64.2
Stream #0.0: Video: FLV, yuv420p, 352x288, q=2-31, 200 kb/s, 1000k tbn, 25 tbc
Stream mapping:
Stream #0.0 -> #0.0
</pre>
And then the stream begins to play. In the last line you will see numbers CONSTANTLY changing, telling you the live frame rate at each moment. Something like
frame= 395 fps= 37 q=31.7 Lsize = 1404kB time=15.80 bitrate = 727.9kbits/s
If you don't see this line of metrics, then there is something wrong with your output configuration. Go back and change the parameters of testFlash.swf.
Everything is done. You can see the video at http://localhost:8090/testFlash.swf. You can use this URL to embed the stream in a Flash movie or, as in my case, a Flex application.
Red5 is fine, especially now with Xuggle support, which allows for off-the-shelf integration of FFMPEG and provides great integration with Red5 (e.g. there is a great tutorial on doing live stream conversion).
If you're familiar with programming in Flex (or whatever it takes to bring it into a SWF), you could try to implement RTSP over TCP; AFAIK, UDP isn't available in Flash.
I just tested this with Wowza.
Input was an RTSP stream from an Etrovision H264 DVS.
Take a look at this thread and use the Application.xml file from there if you want to try it:
http://96.30.11.104/forums/showthread.php?p=24806
Video plays in the Flash player, but the price is a 5-second delay for a single stream, with all equipment on the office LAN and the server running on a Core2Duo/2.8GHz/3GB RAM.
Not sure if it can go faster or if that's the expected transcoding penalty for this setup.
While its public/open-source version is a bit dated now, you can have a look at this project, which did RTSP and image-based camera transcoding into RTMP streams with Xuggler and Red5:
http://sourceforge.net/projects/imux/
(Disclaimer: I worked for the company that created the original source.)