DVB recording of a channel - DirectShow

I'm trying to record a DVB channel with a DVB-T tuner.
I've already done a lot of research on this topic, but I haven't found concrete information on what to do.
I'm already able to build my own graph in GraphEdit, make a tune request, and watch a channel. Converting the graph to C# code with DirectShowLib, or to C++, isn't a big problem for me.
What I don't know is the right approach to recording the program, without decoding it or re-encoding it to MPEG/AVI and so on.

The most important parts of the graph are the tuning-related filters; they connect to the demultiplexer (demux), and the demux outputs a video and an audio stream.
The easiest way to get the MPEG stream is to insert a filter before the demux, for example a SampleGrabber. There you receive the complete transport stream as it is broadcast. That stream normally contains multiple programs multiplexed onto the same frequency, so if you only need one program, you have to filter the other programs out of the stream.
If you only need a single program, it is probably easier to connect the audio and video streams coming out of the demultiplexer directly to a multiplexer and write its output to a file. Make sure there is no decoder or any other filter between the demux and the mux. The problem is that you need to find a DirectShow multiplexer, as Windows does not ship with one; I don't know of any free multiplexer.
You can also write the audio and video streams directly to files (again, without decoding or anything else), then use, for example, ffmpeg to join the audio and video into a single file:
C:\> ffmpeg -i input.m2v -i input.mp2 -vcodec copy -acodec copy output.mpg
You probably also need to delay the audio or video stream to get them in sync.
One addition: of course you can also use ffmpeg to convert the multi-program transport stream into a single-program stream.
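A minimal sketch of that conversion, assuming the captured transport stream is capture.ts and the program you want has ID 101 (the file name and program ID are placeholders; run ffprobe on your capture to list the real program IDs):
C:\> ffprobe capture.ts
C:\> ffmpeg -i capture.ts -map 0:p:101 -c copy single_program.ts
The -c copy keeps the selected streams as-is, so nothing is decoded or re-encoded.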

Related

How to combine audio and video from a stream.mpd file?

I have downloaded DRM-protected audio and video files together with a stream.mpd file. The audio and video files are encrypted with a key that can be found in the stream.mpd file. How can I decrypt them, combine the audio and video files, and produce a playable MP4 file?
Just a quick check first: if the video and/or audio were protected by a standard DRM, it would not be normal for the key to be included in the mpd file, so I am guessing you are using ClearKey protection (https://github.com/Dash-Industry-Forum/ClearKey-Content-Protection).
Assuming this is the case, you can concatenate the segments into an MP4 file - see an example, and some discussion of the limitations on Windows systems, here: https://stackoverflow.com/a/27017348/334402
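As a rough sketch, assuming an initialization segment named init.mp4 and media segments named segment_001.m4s, segment_002.m4s, and so on (your actual file names will differ; zero-padded numbering matters because the shell glob sorts lexically):
cat init.mp4 segment_*.m4s > combined.mp4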
You can use ffmpeg to decrypt - e.g.:
ffmpeg -decryption_key {key} -i {input-file} {output-file}
(https://ffmpeg.org/ffmpeg-formats.html#Options-1)
One thing to also be aware of is that most DASH videos have multiple bitrate renditions, and the client downloads whatever bitrate is appropriate for the device and network conditions at each point during streaming. For this reason you may end up with a mix of bitrates/resolutions, and hence quality, in the final video. If this is an issue, your client may allow you to select a single bitrate for the entire video instead of switching.

How can I save video after a live stream with nginx-rtmp-module and play it back using HLS?

How can I save the video after live streaming with nginx-rtmp-module and play it back with HLS? I use the record directive to save to FLV and then convert the FLV to m3u8, but that takes a lot of time if the video is large. If I use hls_cleanup off, I can't actively choose to turn recording on or off. What is the correct way to save and play back using HLS? Please show me if you know. Thanks very much.
For small video files, either DVR to FLV or HLS is OK.
For large video files, as you mentioned, HLS is better. You need to track each ts file and its duration so that you can generate the m3u8 index when the stream finishes.
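The index you generate is just a text playlist. A minimal VoD m3u8 might look like this (the segment names and durations are illustrative):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.000,
seg-0.ts
#EXTINF:10.000,
seg-1.ts
#EXT-X-ENDLIST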
HLS is also better if you need to merge multiple published streams into one, for example when you adjust the encoder, switch to another encoder, or reconnect to the server after a network failure. With DVR to FLV you end up with more than one FLV file, and they are hard to merge (you need to convert them to ts, concatenate them, then transcode).
Furthermore, HLS is much better for producing output during the stream itself, as with sports programming, where you may need to produce many VoD files during the live stream and can't wait for it to end:
encoder ---RTMP---> Server --HLS--> VoD During Streaming

HTTP Video Streaming

I have a server (not internet connected) that hosts a webpage with company data on an internal website. The server also contains videos (thousands of them) in a defined directory structure.
When a client connects I can display the videos to them on the internal website. The problem is that some of the video files are 1 GB or larger, and the connection to some clients is rather slow; the browser seems to be trying to download them completely in order to play them, rather than stream them.
Is there a video streaming server that I could send a file path to and it would serve the video back to the client as a stream?
I guess what I need is essentially transcoding of the videos. I'm not sure if Plex or something like that is able to do it dynamically, as there are hundreds of videos and new ones are added all the time.
Sorry if I'm not being clear about my need. Ask me a question if I haven't been clear on a point.
...the browser seems to be trying to download them completely in order to play them, rather than stream them.
To echo what @Offbeatmammal said in the comments, if you're using MP4 files, you need to ensure the MOOV atom is at the beginning of the file. Without it, the browser doesn't know what byte offsets to request.
Ideally, encode your video files as fragmented. In FFmpeg:
ffmpeg -i ... -f mp4 -movflags frag_keyframe+empty_moov output.mp4
See also: https://stackoverflow.com/a/9734251/362536
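As an aside, if you'd rather keep ordinary, non-fragmented MP4s, moving the MOOV atom to the front of each file has a similar effect. This is a remux only, with no re-encoding; the file names are placeholders:
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4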
That should allow the client to stream the MP4 files from any web server that supports HTTP/1.1 range requests. (Nearly all do, unless configured otherwise.)
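To verify that a server honors range requests, ask for a byte range and check for a 206 Partial Content status and a Content-Range header in the response (the URL is a placeholder):
curl -s -o /dev/null -D - -H "Range: bytes=0-1023" http://intranet.example/videos/demo.mp4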
However, there is another point to address:
The problem is some of the video files are 1Gb or larger and the connection to some clients is rather slow...
While fixing the streaming issue means the clients won't have to download the whole file first, they still need the bandwidth to keep up with the stream. If it's possible they won't, you'll want to implement some sort of transcoder.
I would recommend using an existing segmented streaming method such as DASH or HLS. HLS is currently the most compatible, thanks to Apple's platform policies. Either will enable adaptive bitrate switching, which will allow slow clients to automatically switch to a lower bitrate stream that they can smoothly keep up with. That way, slower clients can still see the video, albeit a lower quality one, while fast clients can get the full quality video.
You can use FFmpeg to do the transcoding and HLS playlist creation.
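For example, a basic single-variant HLS transcode might look like the following (the bitrates and segment length are illustrative, and you'd add more variants for real adaptive streaming):
ffmpeg -i input.mp4 -c:v libx264 -b:v 1500k -c:a aac -b:a 128k -f hls -hls_time 6 -hls_playlist_type vod -hls_segment_filename "seg_%03d.ts" out.m3u8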
I'm not sure if Plex or something like that is able to do it dynamically, as there are hundreds of videos and new ones are added all the time.
As for when you do this transcoding, I suppose it depends on how much load you're looking at. If it's just one or two people viewing a file, you can transcode on demand if your servers can keep up. Ideally, keep at least a couple of stream variants around for less popular files, and add more later if needed.
If you're doing this live, I'd recommend doing all of your transcoding up front. You can always prune old files/variants if you need the storage back.

Incremental HTTP live streaming

I want to use the HTTP Live Streaming standard for video. I'd like to eliminate any delay while a user is working with our app, but the current architecture requires fully re-encoding the audio whenever video clips are added or removed.
Is there an incremental encoding approach to HTTP Live Streaming that can:
- keep the audio track separate, but play it back seamlessly with the video stream
- allow .ts chunks to be independently encoded and streamed back to a user faster than re-encoding an entire video
References:
https://datatracker.ietf.org/doc/html/draft-pantos-http-live-streaming
https://developer.apple.com/streaming/
You could re-encode the required segments fairly easily -- there's no need to have the entire stream encoded before playing it (otherwise live events would be impossible). You have to be careful with the timestamps in the TS packets if you want it to be truly seamless. But what might be easiest is to use EXT-X-DISCONTINUITY markers around the re-created portions.
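In the playlist, the discontinuity marker sits between segments that come from different encodes, e.g. (segment names and durations are illustrative):
#EXTINF:6.0,
original-seg-41.ts
#EXT-X-DISCONTINUITY
#EXTINF:6.0,
regenerated-seg-42.ts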
As for audio, there's no need to re-encode it. You should be able to just copy the encoded audio from one TS container to another. For example, if you're using ffmpeg, you would use -acodec copy to take it from the original ts.
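For example, to mux freshly re-encoded video together with the untouched audio track from the original segment, copying both without another encode (the file names are illustrative):
ffmpeg -i new_video.ts -i original.ts -map 0:v -map 1:a -c:v copy -acodec copy combined.ts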

How to pipe raw PCM data from /dev/ttyUSB0 to the soundcard?

I'm currently working on a small microphone connected to a PC via an FPGA. The FPGA spits a raw data stream via UART/USB into my computer. I'm able to record, play, and analyze the data.
But I can't play the "live" audio stream directly.
What works is saving the data stream in raw PCM format with a custom-made C program and piping the contents of the file into aplay. But that adds a 10-second lag to the stream... not so nice for demos or testing.
tail -f snd.raw | aplay -t raw -f S16_LE -r 9000
Does someone have another idea how to get the audio stream into my ears faster? And why does
cat /dev/ttyUSB0 | aplay
not work? (nothing happens)
Thanks so far
marvin
You need an API that lets you stream audio buffers directly to the sound card. I haven't done it on Linux, but I've used FMOD for this purpose. You might find another API in this question; SDL seems popular.
The general idea is that you set up a streaming buffer, and your C program stuffs the incoming bytes into an array whose size is chosen to balance lag against jitter in the incoming stream. When the array is full, you pass it to the API and start filling another one while the first plays.
That would seem to be the domain of the alsaloop program. However, alsaloop requires two ALSA devices to work with, and you can see from its options that it goes to considerable effort to match the data flow of the two devices, something you would not necessarily want to do yourself.
This Stack Overflow topic talks about how to create a virtual userspace device available to ALSA: maybe that is a route worth pursuing.
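As for why the naive pipe produces nothing, a guess: two things are likely going wrong. The serial port is not in raw mode, so the tty line discipline mangles the bytes, and aplay falls back to its defaults (8000 Hz, unsigned 8-bit, mono) when no format is given, which doesn't match your stream. Something along these lines may work; the baud rate is an assumption and must match the FPGA's UART:
stty -F /dev/ttyUSB0 raw 115200
cat /dev/ttyUSB0 | aplay -t raw -f S16_LE -r 9000 -c 1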
