GStreamer: internal data flow error: H264 to MJPEG to TCP

I want to take an H264 video, decode it, re-encode it as MJPEG, and stream it over TCP.
For this, I use a raspivid video capture, which gives an H264 output video piped into GStreamer, which decodes, re-encodes, and transmits over TCP:
raspivid -n -t 0 -b 7000000 -fps 25 -o - | \
gst-launch-1.0 fdsrc ! video/x-h264,framerate=25/1,stream-format=byte-stream ! decodebin ! videorate ! video/x-raw,framerate=10/1 ! \
videoconvert ! jpegenc ! tcpserversink host=192.168.24.5 port=5000 &
To receive I use:
gst-launch-1.0 tcpclientsrc host=192.168.24.5 port=5000 ! jpegdec ! autovideosink
On my TCP server the CPU works at 90% and there are no errors. You might think it's OK, but ...
On my TCP client I get this error:
ERROR: from element /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2865): gst_base_src_loop (): /GstPipeline:pipeline0/GstTCPClientSrc:tcpclientsrc0:
streaming task paused, reason not-negotiated (-4)
ERROR: pipeline doesn't want to preroll.
Do you have any idea why my pipeline is broken?

Have you tried adding a videoconvert before the videosink yet?
Also, you should specify the caps for the TCP source, as the second pipeline needs to know at least the framerate. I'd do something like:
gst-launch-1.0 tcpclientsrc host=192.168.24.5 port=5000 ! image/jpeg, framerate=25/1 ! jpegparse ! jpegdec ! queue ! videoconvert ! autovideosink
If that still doesn't work, a GST_DEBUG=6 log from the receiving pipeline should help pinpoint the issue.
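For reference, one way to capture that log (a sketch based on the receiver above; note the sender re-rates to 10/1 with videorate, so that is probably the framerate to declare here, not 25/1):
# full GStreamer debug output goes to stderr, so redirect it to a file for inspection
GST_DEBUG=6 gst-launch-1.0 tcpclientsrc host=192.168.24.5 port=5000 ! image/jpeg,framerate=10/1 ! \
jpegparse ! jpegdec ! queue ! videoconvert ! autovideosink 2> receiver-debug.log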

Related

Concurrent streaming from filesrc to udpsink and appsink using Python OpenCV with GStreamer support

Hi, I am trying to open a video file using OpenCV with GStreamer support in Python. The idea is to grab frames from the file and simultaneously pass them to my Python 3 application for processing, while also encoding them to H264 and sending them to a udpsink. Each of these streams works when run independently, but I run into errors when trying to run them together. This pipeline works if I pull from a web camera instead of a filesrc.
The code that I used to open the cv2.VideoCapture is below. I am running this on a TX2 with JetPack 4.3 and a recompiled OpenCV 4.1.1.
video_stream = cv2.VideoCapture("filesrc location=video.mp4 ! \
qtdemux name=demux demux.video_0 ! h264parse ! omxh264dec ! tee name=t \
t. ! queue leaky=downstream ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink \
t. ! queue leaky=downstream ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)320, height=(int)240 ! omxh264enc ! video/x-h264, streamformat=byte-stream ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.1 port=1234")
The error I get is as follows:
[ WARN:0] global /usr/local/src/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module demux reported: Internal data stream error.
[ WARN:0] global /usr/local/src/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /usr/local/src/opencv-4.1.1/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Any suggestions on how I should proceed? Thanks!
I figured it out. I needed to add another nvvidconv before the tee. I'm not sure exactly why, but it allows the entire pipeline to flow correctly.
video_stream = cv2.VideoCapture("filesrc location=video.mp4 ! \
qtdemux name=demux demux.video_0 ! h264parse ! omxh264dec ! nvvidconv ! tee name=t \
t. ! queue leaky=downstream ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink \
t. ! queue leaky=downstream ! nvvidconv ! video/x-raw(memory:NVMM), width=(int)320, height=(int)240 ! omxh264enc ! video/x-h264, streamformat=byte-stream ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.1.1 port=1234")
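A useful sanity check in cases like this (a sketch assuming the same video.mp4 and the Jetson omxh264dec/nvvidconv elements) is to run the appsink branch standalone with gst-launch-1.0, swapping appsink for fakesink, so caps negotiation problems show up outside of OpenCV:
# -v prints the caps negotiated on each link; fakesink stands in for appsink
gst-launch-1.0 -v filesrc location=video.mp4 ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! \
'video/x-raw, format=(string)BGRx' ! videoconvert ! 'video/x-raw, format=(string)BGR' ! fakesink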

Using GStreamer, how can I generate encrypted HLS streams?

Can anyone help me generate a GStreamer pipeline to create an encrypted HLS stream? I have been able to do the following, which works well, but I would like to add encryption to the final output.
gst-launch-1.0 -v souphttpsrc location=http://192.168.1.20/1.ts ! tsdemux program-number=1 name=tsmux tsmux.video_0_0044 ! queue ! muxer. tsmux.audio_0_0045 ! queue ! aacparse ! muxer. mpegtsmux name=muxer ! hlssink location="test/hlssink.%05d.ts" playlist-location="test/playlist.m3u8" max-files=6 target-duration=6
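As far as I know, hlssink itself does not encrypt its output, so one option is to post-process the segments with the standard HLS AES-128 scheme. A sketch of encrypting a single segment and the matching playlist entry (the key, IV and output path are placeholders, not anything hlssink produces):
# encrypt one segment with AES-128-CBC using a hex key and IV (placeholders)
openssl aes-128-cbc -e -K 00112233445566778899aabbccddeeff -iv 00000000000000000000000000000001 \
-in test/hlssink.00000.ts -out test/encrypted/hlssink.00000.ts
# the playlist then needs a key entry such as:
# #EXT-X-KEY:METHOD=AES-128,URI="key.bin",IV=0x00000000000000000000000000000001
The raw 16-byte key has to be served wherever the URI in the playlist points, and each client fetches it from there to decrypt the segments.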

Streaming with GStreamer to VLC using tcpserversink

I'm attempting to stream an H264-encoded video using GStreamer and TCP. The command is:
gst-launch-1.0 videotestsrc is-live=true ! videoconvert ! videoscale ! video/x-raw,width=800,height=600 ! x264enc key-int-max=12 ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink port=5000
The GOP size is set to 12, and the configuration is sent every second. I can't receive this stream using VLC (neither on the same machine nor on another machine). The VLC command is:
vlc rtp://localhost:5000
but nothing shows up. Can anyone help?
Regards
Wrap the stream up in a container like MPEG-TS:
gst-launch-1.0 -v videotestsrc ! x264enc key-int-max=12 byte-stream=true ! mpegtsmux ! tcpserversink port=8888 host=localhost
Now open it in VLC using tcp://localhost:8888.
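Note that this pipeline also drops rtph264pay and gdppay, since over plain TCP VLC expects an MPEG-TS byte stream rather than GDP-wrapped RTP. Assuming playback on the same machine as the server, the client side is simply:
vlc tcp://localhost:8888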

DirectShow stream using ffmpeg: point-to-point streaming over the TCP protocol

I had set up a point-to-point stream using ffmpeg over the UDP protocol and the stream worked, but there was screen tearing etc. I already tried raising the buffer size, but it did not help. This is a work network, so the UDP protocol won't work.
Here is the full command:
ffmpeg -f dshow -i video="UScreenCapture" -r 30 -vcodec mpeg4 -q 12 -f mpegts udp://192.168.1.220:1234?pkt_size=188?buffer_size=65535
I've tried to make this work with TCP, with no success.
Here's what I've got now:
ffmpeg -f dshow -i video="UScreenCapture" -f mpegts tcp://192.168.1.194:5555
This returns an error:
real-time buffer [UScreenCapture] [Video input] too full or near too
full <323% of size: 3041280 [rtbufsize parameter]>! frame dropped!
This last message repeated xxxx times (it went up to around 1400 and I just turned it off).
I've tried to implement the -rtbufsize parameter and raising the buffer size up to 800000000; it didn't help.
I would appreciate any suggestions on how to solve this.
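For what it's worth, two details usually matter with ffmpeg over TCP: -rtbufsize only applies to the DirectShow input if it comes before -i, and one end of the connection has to listen. A sketch under those assumptions (using ffplay as the receiver is my own choice, not something from the post):
# receiver (start first): listen for the incoming MPEG-TS stream
ffplay -f mpegts tcp://192.168.1.194:5555?listen
# sender: rtbufsize placed before -i so it applies to the dshow capture
ffmpeg -f dshow -rtbufsize 100M -i video="UScreenCapture" -r 30 -vcodec mpeg4 -q 12 -f mpegts tcp://192.168.1.194:5555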

tshark: extract RTP payload of the G.723 codec

In order to extract the RTP payload from a pcap file captured by Wireshark, I'm using tshark with the command:
tshark -nr stream.pcap -i wlan1 -R 'rtp && ip.dst==192.168.1.64' -T fields -e rtp.payload
This succeeded with the G.729 and iLBC codecs, but not with G.723. I think the problem is that the payload field of the RTP protocol no longer appears (when inspecting the capture in Wireshark).
Any idea how to extract the payload of the G.723 codec?
I did it this way:
I used rtpxtract.pl from here, then used ffmpeg to convert it to a format the user can listen to, like MP3:
ffmpeg -f g723_1 -i ${infile} ${outfile}.mp3
To solve this problem, you just have to disable the G723 protocol in Wireshark under Enabled Protocols in the Analyze menu. The "payload" field will then appear in the RTP protocol, and the command
tshark -nr stream.pcap -i wlan1 -R 'rtp && ip.dst==192.168.1.64' -T fields -e rtp.payload
will succeed!
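If you then want to feed the extracted payload to ffmpeg as in the other answer, the -T fields output is hex (colon-separated in some tshark versions), so a sketch like the following could turn it into a raw file first (the output file name is a placeholder, and this simply concatenates the payload bytes):
# strip separators and convert the hex dump back to raw bytes
tshark -nr stream.pcap -R 'rtp && ip.dst==192.168.1.64' -T fields -e rtp.payload | tr -d ':,\n' | xxd -r -p > stream.g723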
