H.265 raw files from an RTSP stream not decoding to an image format
I want to store each raw frame from an RTSP stream, copy the files to another system, and decode those frames from H.265 to JPEG there. My three client systems have low-end hardware, so I want to offload decoding of some frames to another machine.
I am trying the pipelines below, but the frames do not decode.
Pipeline on the client system: gst-launch-1.0 rtspsrc location=rtsp://(camera) latency=0 ! rtph265depay ! h265parse ! multifilesink location="C:/img/RAW%d.raw"
Pipeline on the main system: gst-launch-1.0 filesrc location="C:/img/RAW15.raw" ! d3d11h265dec ! video/x-raw,format=NV12,width=3840,height=2160 ! jpegenc ! filesink location="C:/img/DEC15.jpeg"
I have been stuck on this for the last three months and have not been able to resolve it. Please help.
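A minimal sketch of the same capture/decode split that keeps h265parse on both sides, so parameter sets (VPS/SPS/PPS) are written into the dumped files and the decoder can start from a keyframe. The config-interval=-1 property, the byte-stream caps filter, and the use of avdec_h265 plus videoconvert in place of d3d11h265dec are assumptions about a standard GStreamer 1.x install, not a verified fix:
Capture side: gst-launch-1.0 rtspsrc location=rtsp://(camera) latency=0 ! rtph265depay ! h265parse config-interval=-1 ! video/x-h265,stream-format=byte-stream ! multifilesink location="C:/img/RAW%d.raw"
Decode side: gst-launch-1.0 filesrc location="C:/img/RAW15.raw" ! h265parse ! avdec_h265 ! videoconvert ! jpegenc ! filesink location="C:/img/DEC15.jpeg"
Note that a dump file that does not begin at an IDR frame with its parameter sets will still fail to decode on its own.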
Related
I have a problem streaming a video source over HTTP using GStreamer on Windows.
The send command (server side) looks like:
gst-launch-1.0 -v filesrc location=d:/TestVideos/costarica.mp4 ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=192.168.1.162 port=6001
And the command on the client side:
gst-launch-1.0 -v tcpclientsrc host=192.168.1.162 port=6001 ! gdpdepay ! rtph264depay ! h264parse ! decodebin ! autovideosink sync=false
The pipeline starts, but no display window opens on my screen.
It would be great if anyone has a solution. Thank you.
The H.264 decoder will not display video when it cannot reconstruct the frames from the received packets. Since you are using TCP, the chances of losing packets are low, but TCP retransmissions can introduce latency. I suggest the following:
Insert a videorate element to limit the rate at which video is transferred.
Also use a queue on the receiver side to accommodate the latency.
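A rough sketch of the receiver command with a queue added and videorate placed after decodebin (videorate needs raw video, so on the sending side it would have to go before the encoder); the placement here is an assumption, not a tested pipeline:
gst-launch-1.0 -v tcpclientsrc host=192.168.1.162 port=6001 ! gdpdepay ! rtph264depay ! h264parse ! queue ! decodebin ! videorate ! autovideosink sync=false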
This is my first time encountering video codecs/video streaming.
I am receiving raw H.264 packets over TCP. When I connect to the socket, listen on it, and simply save the received data to a file, I can play it back using
ffplay data.h264
However, when I try to play it directly from the stream without saving it, using
ffplay tcp://addr:port
all I get is the error
Invalid data found when processing input
Why is that?
Specify the format: ffplay -f h264 tcp://addr:port
Alright, I found another way to display the video stream.
ffplay -f h264 -codec:v h264 tcp://addr:port?listen
The ?listen parameter makes ffplay create its own TCP server; all I do now is send the data to the specified address.
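For completeness, one possible sending side is plain ffmpeg pushing a raw H.264 bitstream to that listener; the input file name is a placeholder, and addr:port must match the ffplay command above:
ffmpeg -re -i input.mp4 -an -c:v libx264 -f h264 tcp://addr:port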
I am trying to develop AllJoyn applications using C as my language binding. I have understood and implemented the basic tutorial, customized it, and can build applications on both the server and the client. The second part of my development is to program a file transfer server and client by reading files and putting them into the AllJoyn bus reply.
Since an AllJoyn reply can only be 65536 bytes, I framed my own protocol between server and client, where the server breaks the message down and the client receives the chunks sequentially, one after another. I am now facing a problem, which I will describe briefly.
(1) If I transmit text messages, I receive them perfectly.
(2) If I transmit binary data, I lose data. My understanding is that the AllJoyn bus reply is a string, so once a NULL byte is encountered, all subsequent characters are read as zeros at the receiver.
What can I do to mitigate this?
Is there a way to mask off the NULL characters in my binary data string, or is the approach I am following itself flawed?
I just started using the AllJoyn framework and am very much a newbie. Any help would be greatly appreciated.
You should be using an array of bytes (signature 'ay') to send binary data. That will prevent AllJoyn from truncating your data when it sees a NULL; AllJoyn can handle binary data as long as you tell it that is what you are sending.
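A rough sketch of what that could look like with the C binding. The alljoyn_msgarg_* and methodreply calls below are written from memory of the alljoyn_c API and should be checked against your headers; the buffer and length variables are hypothetical:
#include <stdint.h>
#include <stddef.h>
#include <alljoyn_c/MsgArg.h>
#include <alljoyn_c/BusObject.h>

/* Sketch only: send one chunk of binary data as an 'ay' (array of bytes) reply.
   The length travels with the data, so embedded NULL bytes are preserved. */
void reply_with_chunk(alljoyn_busobject obj, alljoyn_message msg,
                      const uint8_t *chunk, size_t chunk_len)
{
    alljoyn_msgarg arg = alljoyn_msgarg_create();
    alljoyn_msgarg_set(arg, "ay", chunk_len, chunk);       /* 'ay' = byte array */
    alljoyn_busobject_methodreply_args(obj, msg, arg, 1);   /* reply with one arg */
    alljoyn_msgarg_destroy(arg);
}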
Does anybody know where within tshark or Wireshark the code is that I could use to reassemble pcap files? I am working on an app and need to reassemble pcap files, but I don't need the rest of Wireshark/tshark's functionality, so I am hoping to use that code as guidance.
Thanks.
If it's a tcpdump-format file (not the pcapng format), you can throw away the first 24 bytes (the file header) of the second file and concatenate the rest onto the first file, then do the same for each remaining file.
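For example, with classic (non-pcapng) captures this can be done from a Unix-like shell; the file names are placeholders, and the result is only meaningful if both captures share the same link-layer type and snap length:
tail -c +25 capture2.pcap >> capture1.pcap
Here tail -c +25 starts output at byte 25, i.e. it skips the 24-byte global header of the second file.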
mergecap (from the Wireshark suite) will merge two or more pcap files.
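For instance (file names are placeholders): mergecap -w merged.pcap capture1.pcap capture2.pcap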
Say I have a media file like .mkv or .mp3 and I am playing it on my computer via streaming. When the data comes from the server to the client, who does the demuxing?
Does a demuxer program on the server side open the file and produce frames, and does the streaming application then packetize those demuxed frames and transmit them to the client?
or
does the streaming application on the server side just read the media file in chunks of bytes and transmit them to the client, and then a demuxer program on the client side parses them, extracts the actual frames, and plays them?
Definitely the latter. I'm not sure why the server would do any work to understand the data it sends; that would just burn CPU cycles on the server, and I can't see any benefit in doing so.
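As a concrete sketch of that arrangement, the server can be a dumb byte server while the client's player does all the demuxing; the host, port, and file name below are placeholders:
Server (just serves the directory holding the file, no media processing): python3 -m http.server 8000
Client (pulls the bytes over HTTP, then demuxes and decodes locally): ffplay http://server:8000/movie.mkv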