RTPDump file does not play correctly (chattering noise) when converted via SoX, but plays correctly in Wireshark

I am trying to convert a *.rtpdump file, created by Wireshark, into a WAV file with SoX.
In Wireshark the original capture plays back without any chattering in the audio, but when I convert it to a WAV file via SoX (on Windows), there is a continuous chattering noise throughout the clip and the actual voice stays in the background.
I tried u-law encoding, a-law, and others; the best result is with u-law, but even that is barely audible. I tried the lowpass, gain, and treble effects, but none of them help, and changing the channels, bit rate, and other options makes it worse.
I have tried many things, but the chattering will not go away:
sox.exe -t raw -r 8000 -e u-law -c 1 66.rtpdump -t wav d:\out.wav -V
sox.exe -t raw -r 8000 -e a-law -c 1 66.rtpdump -t wav d:\out.wav -V

The first few bytes within each packet (the rtpdump per-packet headers) were causing the chattering: SoX was interpreting them as audio samples. I removed these bytes from every packet and concatenated the remaining payloads, which produced chatter-free audio.
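In case it helps someone, here is a minimal shell sketch of that stripping step (an untested reconstruction; file names are illustrative). It assumes the rtptools/Wireshark rtpdump layout, i.e. a "#!rtpplay1.0 ..." text line, a 16-byte binary file header, and then, per packet, an 8-byte record header followed by a 12-byte RTP header and the payload, with no CSRC lists or header extensions:
in=66.rtpdump
out=66.raw
: > "$out"
# skip the "#!rtpplay1.0 addr/port" text line plus the 16-byte binary file header
off=$(( $(head -n1 "$in" | wc -c) + 16 ))
size=$(wc -c < "$in")
while [ "$off" -lt "$size" ]; do
    # first 2 bytes of the 8-byte record header = total record length (big-endian)
    len=$(dd if="$in" bs=1 skip="$off" count=2 2>/dev/null | od -An -tu1 | awk '{print $1*256+$2}')
    [ "$len" -gt 20 ] || break
    # payload starts after the 8-byte record header and the 12-byte RTP header
    dd if="$in" bs=1 skip=$((off+8+12)) count=$((len-8-12)) 2>/dev/null >> "$out"
    off=$((off+len))
done
The resulting 66.raw can then be fed to the sox commands above (with -t raw and -e u-law or -e a-law, matching whichever codec was actually negotiated in the call).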

Related

How to create an audio file from a Pcap file with Tshark?

I want to create audio data from a pcap file with Tshark.
I have successfully created audio data from a pcap file using Wireshark's RTP analysis function.
The pcap file was captured from a VoIP phone conversation.
Next I want to do the same thing with Tshark.
What command would do that?
I read the Tshark manual to find out how, but couldn't find it. Do I need any additional tools?
On Linux, extracting the RTP payloads from a pcap file is possible with tshark together with the shell tools tr and xxd, but you may then need other tools to convert the result to an audio format.
If the pcap holds a single call recording, so that all RTP packets belong to it, try:
tshark -n -r call.pcap -2 -R rtp -T fields -e rtp.payload | tr -d '\n:' | xxd -r -ps >call.rtp
If the pcap has recordings from many calls, then you have to identify the calls and their RTP streams by source/destination IPs or by SSRC and build the filter accordingly; for example, if the SSRC is 0x7f029328:
tshark -n -r call.pcap -2 -R "rtp && rtp.ssrc == 0x7f029328" -T fields -e rtp.payload | tr -d '\n:' | xxd -r -ps >call.rtp
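If the SSRC values are not known in advance, a small variant of the same pipeline lists them (a sketch reusing the flags above):
tshark -n -r call.pcap -2 -R rtp -T fields -e rtp.ssrc | sort -u
Each distinct value is one RTP stream; a two-way call typically shows two.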
Tools like sox or ffmpeg can be used to convert the call.rtp file to WAV format, depending on the codec that was used in the call. If the codec was G711u (PCMU) with a sample rate of 8000:
sox -t ul -r 8000 -c 1 call.rtp call.wav
The audio formats supported by sox are listed by sox -h. ffmpeg might be needed for codecs such as G729 or G722; an example for G722 with a sample rate of 16000:
ffmpeg -f g722 -i call.rtp -acodec pcm_s16le -ar 16000 -ac 1 call.wav
These guidelines come from brief notes I made in the past when I had similar needs. I hope they are still valid, or at least point in the right direction for further exploration.

DirectShow: How to capture audio and video

I am looking for a way to capture my desktop. I came across something called DirectShow, but I cannot seem to get the syntax right in ffmpeg.
What can I do to capture the audio and video?
I tried the syntax given on the DirectShow site, but I am not sure about it.
I just got mine to work, and below are two examples of how you can do it and play it back.
The first one is:
ffmpeg -f dshow -i video="screen-capture-recorder":audio="virtual-audio-capturer" -vcodec h264_nvenc -f mpegts udp://10.1.0.0:1234
This will stream it over UDP on the same network.
Play it by typing ffplay udp://10.1.0.0:1234.
You can change the UDP address to whatever you want; try different variations until it works, or open the same address in VLC (as udp://@10.1.0.0:1234), which will also work.
The second is:
ffmpeg -f dshow -i video="screen-capture-recorder":audio="virtual-audio-capturer" -vcodec h264_nvenc output.mp4
You will get an MP4 file of the recording. Press Ctrl+C to stop recording, or, if you know in advance how long to record, add -t *seconds* (replace *seconds* with the actual number of seconds) just before the output file name.
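For illustration (the -t 60 value is just an example), a 60-second capture with the same devices would be:
ffmpeg -f dshow -i video="screen-capture-recorder":audio="virtual-audio-capturer" -vcodec h264_nvenc -t 60 output.mp4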

DirectShow stream using ffmpeg point to point streaming through TCP protocol

I had set up a point-to-point stream using ffmpeg over the UDP protocol and the stream worked, but there was screen tearing etc. I already tried raising the buffer size, but it did not help. This is a work network, so the UDP protocol won't do.
Here is the full command:
ffmpeg -f dshow -i video="UScreenCapture" -r 30 -vcodec mpeg4 -q 12 -f mpegts "udp://192.168.1.220:1234?pkt_size=188&buffer_size=65535"
I've tried to make this work over TCP with no success. Here's what I've got now:
ffmpeg -f dshow -i video="UScreenCapture" -f mpegts tcp://192.168.1.194:5555
This returns an error:
real-time buffer [UScreenCapture] [video input] too full or near too full (323% of size: 3041280 [rtbufsize parameter])! frame dropped!
This last message repeated xxxx times (it went up to around 1400 before I turned it off).
I've tried adding the -rtbufsize parameter and raising the buffer size up to 800000000, but it didn't help.
I would appreciate any suggestions on how to solve this.
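A possible direction to explore: ffmpeg's tcp:// protocol requires one end to listen before the other connects, so if nothing is accepting the connection, the dshow real-time buffer fills up exactly as in the error above. Assuming 192.168.1.194 is the receiving machine, the player there would be started first in listen mode, then the sender launched:
ffplay tcp://192.168.1.194:5555?listen
ffmpeg -f dshow -i video="UScreenCapture" -f mpegts tcp://192.168.1.194:5555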

How to stream with ffmpeg via http protocol

I'm currently doing a stream that is supposed to display correctly within Flowplayer.
First I send it to another PC via RTP. There I also checked with VLC that the codecs etc. arrive correctly, which they do.
Now I want to expose this stream to Flowplayer as a file, so it can be displayed, via something like what I used in VLC:
http://localhost:8080/test.mp4
for example.
The full line I've got is: ffmpeg -i input -f mp4 http://localhost:8080/test.mp4
However, no matter how I try to do this, I only get an input/output error. Is this only possible with something like ffserver or another tool?
What I think is that this doesn't work because ffmpeg can't act as a server; with VLC it works since VLC can. (Though VLC mangles the codecs I set, and the result can't be read afterwards for some reason.)
A (sort of) workaround I can use is saving the RTP stream to a file and then letting Flowplayer load it. This, however, only works once the file is no longer being written to; I get a codec error otherwise.
To have FFmpeg act as an HTTP server, you need to pass the -listen 1 option. Additionally, -f mp4 will result in a non-fragmented MP4, which is not suitable for streaming. You can get a fragmented MP4 with -movflags frag_keyframe+empty_moov. A full working command line is:
ffmpeg -i input -listen 1 -f mp4 -movflags frag_keyframe+empty_moov http://localhost:8080
Other options you may find helpful are -re to limit the streaming speed to the input framerate, -stream_loop -1 to loop the input, and -c copy to avoid reencoding.
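Once ffmpeg is listening, any HTTP client can pull the stream from that URL; Flowplayer would point at it directly, and for a quick local test something like the following should work (illustrative, using the same port as above):
ffplay http://localhost:8080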
You need a command line like this (it pushes the capture to an ffserver feed; ffserver must already be running and listening on port 8090):
ffmpeg -f v4l2 -s 320x240 -r 25 -i /dev/video0 -f alsa -ac 1 -i hw:0 http://localhost:8090/feed1.ffm
Make sure that your feed name ends with ".ffm"; if it does not, add "-f ffm" before your feed URL to specify the output format manually (because ffmpeg will no longer be able to figure it out automatically), like this: "-f ffm http://localhost:8090/blah.bleh".
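Since an ".ffm" URL is an ffserver feed, the server side also has to declare it. A minimal ffserver.conf sketch with illustrative names and sizes (note that ffserver has been removed from recent FFmpeg releases, so this applies only to older versions):
HTTPPort 8090
HTTPBindAddress 0.0.0.0
<Feed feed1.ffm>
  File /tmp/feed1.ffm
  FileMaxSize 5M
</Feed>
<Stream live.mpg>
  Feed feed1.ffm
  Format mpeg
  VideoSize 320x240
  VideoFrameRate 25
  AudioCodec mp2
</Stream>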

Sox: Failed reading file: Sorry don't understand .wav size

I want to convert a 44,100 Hz, 24-bit mono WAV file to a format that I can play in Asterisk. How can I do this using SoX? When I use sox filename.wav -t raw -r 44100 -s -w -c 1 filename.sln I get the error "Sox: Failed reading file: Sorry don't understand .wav size". What is the problem?
Your SoX version can't handle 24-bit samples. You will have to use a different program for those.
Try sndfile-convert.
http://itdp.fh-biergarten.de/transcode-users/2005-12/msg00150.html
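Note that recent SoX releases do handle 24-bit WAV input, so with a current SoX the conversion should work directly. As a hedged sketch for Asterisk's .sln format (raw signed-linear 16-bit audio at 8 kHz), using the modern spellings -e signed-integer -b 16 of the old -s -w flags:
sox filename.wav -t raw -r 8000 -e signed-integer -b 16 -c 1 filename.sln
SoX resamples from 44,100 Hz to 8,000 Hz automatically here, because the requested output rate differs from the input rate.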
