Streaming video source using HTTP and GStreamer - http

I have a problem streaming a video source over HTTP using GStreamer on Windows.
The send command (server side) looks like:
gst-launch-1.0 -v filesrc location=d:/TestVideos/costarica.mp4 ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=192.168.1.162 port=6001
And the command on the client side:
gst-launch-1.0 -v tcpclientsrc host=192.168.1.162 port=6001 ! gdpdepay ! rtph264depay ! h264parse ! decodebin ! autovideosink sync=false
The pipeline starts, but no display window opens on my screen.
It would be great if anyone has a solution. Thank you.

The H.264 decoder will not display video when it cannot reconstruct frames from the received packets. Since you are using TCP, the chances of losing packets are lower, but TCP retransmissions can introduce latency. I suggest the following:
Insert a videorate element to limit the rate at which video is transferred.
Also use a queue on the receiver side to accommodate the latency.
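For illustration (this pipeline is not from the original answer), the receiver pipeline from the question could be extended with a queue, and a videorate placed after the decoder, since videorate operates on raw video; the 30/1 frame rate here is only an example value:
gst-launch-1.0 -v tcpclientsrc host=192.168.1.162 port=6001 ! gdpdepay ! rtph264depay ! h264parse ! queue ! decodebin ! videorate ! video/x-raw,framerate=30/1 ! autovideosink sync=false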

Related

h265 raw file frames to jpeg conversion using gstreamer

h265 raw files not decoding to image format from RTSP stream
I want to store each raw frame from an RTSP stream, copy the files to another system, and decode those frames from H.265 to JPEG, because I have three client systems with low specs and want to decode some of the frames on another machine.
I am trying to use the pipelines below, but the frames do not decode.
Pipeline on the client system: gst-launch-1.0 rtspsrc location=rtsp://(camera) latency=0 ! rtph265depay ! h265parse ! multifilesink location="C:/img/RAW%d.raw"
Pipeline on the main system: gst-launch-1.0 filesrc location="C:/img/RAW15.raw" ! d3d11h265dec ! video/x-raw,format=NV12,width=3840,height=2160 ! jpegenc ! filesink location="C:/img/DEC15.jpeg"
I have been stuck on this for the last 3 months and the problem is still not resolved; please help me.

Transcoding camera stream on demand

I am using VLC to transcode the RTSP stream from an IP camera to an HTTP MJPEG stream via the following command:
cvlc -vvv -Idummy -q rtsp://user:password#hostname:554 --sout '#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8081/}'
This works fine.
I do not need to transcode the stream all the time but only a fraction of it.
VLC is transcoding even if no client is connected. That utilizes a whole CPU core on my server all the time.
Is there any possibility to start transcoding only if at least one client is connected and stop transcoding if the last client is disconnected?
Thank you very much!
I think you are asking whether you can run the command above from your server - if so, it depends on the server and language, but in general, yes, you can.
Your server logic would be something like:
When a client connects:
if this is the first client, run the command to start the transcoding
provide the link to the stream to the client
When a client disconnects:
if this is the last client, stop the transcoding
There will be a delay for the first client as the stream is buffered, but I am guessing you know that already.
The way to run the command will depend on the server, but you can usually find examples - e.g. for Node: Execute a command line binary with Node.js
If you are using Java, there is a very well-known and useful article on running the command line from Java as well - even if you are not using Java it is good reading: https://www.javaworld.com/article/2071275/core-java/when-runtime-exec---won-t.html
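As a rough sketch of that logic in Node (written here as TypeScript), which is not from the original answer: the /join and /leave endpoints and the control port 8080 are assumptions made for illustration, while the cvlc arguments and the :8081 stream port are taken from the question.
import { spawn, ChildProcess } from "child_process";
import * as http from "http";

let clients = 0;
let vlc: ChildProcess | null = null;

// cvlc arguments copied from the question's command line (hostname/credentials are the question's placeholders).
const vlcArgs = [
  "-vvv", "-Idummy", "-q", "rtsp://user:password#hostname:554",
  "--sout",
  "#transcode{vcodec=MJPG,venc=ffmpeg{strict=1}}:standard{access=http{mime=multipart/x-mixed-replace;boundary=--7b3cc56e5f51db803f790dad720ed50a},mux=mpjpeg,dst=:8081/}",
];

http.createServer((req, res) => {
  if (req.url === "/join") {
    // First client: start the transcoder.
    if (++clients === 1 && vlc === null) {
      vlc = spawn("cvlc", vlcArgs);
      vlc.on("exit", () => { vlc = null; });
    }
    res.end("http://hostname:8081/");   // hand the stream URL to the client
  } else if (req.url === "/leave") {
    // Last client: stop the transcoder.
    if (clients > 0 && --clients === 0 && vlc !== null) {
      vlc.kill();
      vlc = null;
    }
    res.end("ok");
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);
A real implementation would also need to handle clients that disappear without calling /leave (for example with a timeout), but the start/stop decision stays the same.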

VLC streaming over HTTP always stops after 10 seconds

I am trying to run my Ubuntu machine as a VLC server, where I run the command below to stream my local video over HTTP:
vlc 1.avi :sout=#transcode{vcodec=theo,vb=800,acodec=vorb,ab=128,channels=2,samplerate=44100}:duplicate{dst=http{dst=:8080/test.ogg}} :sout-all :sout-keep
Below is the VLC client command to display the HTTP stream, which always stops after 10 seconds. Subsequent attempts do not work ("failed to find url"):
vlc http://localhost:8080/test.ogg
Please suggest a workaround. Also, please let me know if I should switch to ffmpeg if this is a legacy problem, and please suggest the command as well.
Note: I am using the latest VLC.
Thanks in advance!
This was a VLC version mismatch; once I used the same VLC version on both client and server, it worked perfectly.

Does Paw.app include support for sending HTTP requests to UNIX sockets?

Does Paw.app support sending HTTP requests to UNIX sockets similar to curl --unix-socket=/tmp/my.sock?
Thanks!
--
-a
No, unfortunately, this isn't supported by Paw. Maybe later :) It should be doable since we have our own HTTP library. OS X sandboxing may get in the way, but we can find workarounds…

Boost ASIO will send fast and then very slow

The product I work on uses Boost ASIO (TCP) for network communication. During a test I noticed something very strange: ASIO would send at ~60MB/s for 13 seconds, then drop to ~300K/s for 9 seconds, then go back to ~60MB/s for 13 seconds; this would repeat for the entire transfer. I wrote a test application to see if I could reproduce this and, to my surprise, I could.
In my test app, after the client and server connect the code is very basic. Of note, blocking and non-blocking sockets produce the same results. Here is how the server receives data in a blocking situation:
while(true)
{
boost::system::error_code ec;
serverSocket.read_some(boost::asio::buffer(serverBuffer, bufferLength), ec);
}
The client does this:
while(true)
boost::asio::write(clientSocket, boost::asio::buffer(clientBuffer, bufferLength));
If bufferLength is 4K, I see the problem, but if I push it up to 32K the transfer speed is fast (>120MB/s) and consistent.
Can anyone shed some light on what may be happening? This is a Windows application running on Windows Server 2008.
Edit: I did a Wireshark capture and there is sometimes a delay of > 150ms before the server sends an ACK, i.e. the client sends the last bit of data [PSH,ACK]; the server responds 150ms later with an [ACK]. No other traffic in between.
2nd Edit: I wrote a C# app that exhibits the same behavior. Client sends 4k packets with NetworkStream.Write, and server reads them with NetworkStream.Read. Any suggestions on where to go from here?
I don't know exactly what is in your program, but the 150ms one-way delay you see in Wireshark is abnormal. I think you should check whether the network cards are working in the same mode (duplex or half-duplex, 100M or 1000M). Then check the error packet counter; it should be 0 or a very small number. If it is large and increases during your client/server communication, you should change your cable.
I hope this helps. :)
Did you try disabling Nagle's algorithm (tcp::no_delay)?
Also try changing the socket buffer sizes:
socket->set_option( boost::asio::ip::tcp::no_delay( true ) );                   // disable Nagle's algorithm
socket->set_option( boost::asio::socket_base::send_buffer_size( 65536 ) );      // larger OS send buffer
socket->set_option( boost::asio::socket_base::receive_buffer_size( 65536 ) );   // larger OS receive buffer
