I've been stuck with this requirement for a long time. How can I do that? Any suggestions would be appreciated!
I think there are several options:
a) Do the processing in the Flash Player (skip frames). I think this is inefficient and you won't get a good user experience, but you can give it a try.
b) Write a plugin for your streaming server (in FMS you can do it in C++) that does the same thing: skips frames to obtain the desired effect.
c) Encode your video files at several speeds (1x, 2x, 4x, etc.) and switch the stream from the Flash Player accordingly. I think this is the easiest solution; a rough sketch of the switching follows below.
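For option (c), the client-side switch is just a NetStream.play() call against a differently encoded rendition. Here is a minimal, untested sketch; the stream names ("myvideo_1x", "myvideo_2x", ...), the RTMP URL and the class name are placeholders made up for illustration, not anything from your setup:

package {
    import flash.display.Sprite;
    import flash.events.NetStatusEvent;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;

    // Sketch of option (c): one rendition of the video encoded per speed
    // ("myvideo_1x", "myvideo_2x", "myvideo_4x") living on the server; the
    // player simply re-plays the matching stream when the speed changes.
    public class SpeedSwitchPlayer extends Sprite {
        private var connection:NetConnection = new NetConnection();
        private var stream:NetStream;
        private var video:Video = new Video(640, 480);

        public function SpeedSwitchPlayer() {
            addChild(video);
            connection.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
            connection.connect("rtmp://yourserver/yourapp"); // placeholder URL
        }

        private function onStatus(event:NetStatusEvent):void {
            if (event.info.code == "NetConnection.Connect.Success") {
                stream = new NetStream(connection);
                stream.client = { onMetaData: function(md:Object):void {} };
                video.attachNetStream(stream);
                playAtSpeed(1); // start with the normal-speed rendition
            }
        }

        // Switch renditions; mapping the playhead between them
        // (1x time t corresponds to 2x time t/2, and so on) is up to you.
        public function playAtSpeed(speed:int):void {
            stream.play("myvideo_" + speed + "x");
        }
    }
}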
I am looking for a way to read a raw H.264 video stream in a web browser.
It is a live stream (so there is no file to convert), coming from a Raspberry Pi camera.
I can successfully play this stream in VLC with a workaround (forcing the input codec: https://www.unifore.net/ip-video-surveillance/how-to-play-264-video-files-from-ip-cameras-dvrs.html ).
But raw H.264 without a container (like MP4) does not seem to be supported by browsers.
There are not many resources on this point, and some questions on Stack Overflow like this one go unanswered.
Maybe I am not going about this the right way, so all suggestions are welcome. :)
Thanks!
I've been doing some work in VB.NET with DirectShow over the past 3-4 weeks. I'm creating an application to keep tags on a video and eventually want to be able to extract the tagged parts of the video to a new file. In a video that is 2 hours long I might want to extract, say, 50 "clips" of 10-15 seconds each, up to 15 times (event tagging). This will be a free application.
I've found it brilliant (and easy) to render, seek, and play clips, etc. on XP through Windows 7 with no issues. I've "discovered" the joys of GraphEdit, creating graphs, the issues with COM in VB.NET, GMFBridge, and so on.
Now I need some advice: am I using the right technology? DirectShow seems to be very resistant to the idea of "open video, seek to clip, write clip to file, repeat for all clips, close file". I can sort of do this already if I visibly render the video, but I would need to do it as a background task, faster than real-time render speed.
Things that seem to be missing are:
- an example of anyone doing anything similar (export multiple clips to a single file)
- no easily available 64-bit compressors (lots of 32-bit stuff around)
- all the references and examples I do find are VERY old
- VB.NET is not the first "port of call" for DirectShow developers
So, the question is, should I be using something else?
If not, has anyone done anything similar before? I'm not looking for their code; I just want some guidelines, as it takes ages to figure things out in DirectShow and VB.NET using just trial and error (and Google).
I've looked at AForge.NET (no sound), FFmpeg (a command-line toolset), Media Foundation (I'm reluctant to throw away XP support) and a variety of commercial helper libraries, but I'm not really getting any further.
Apologies for the length but I wanted readers to understand the background.
All help appreciated.
To output clips to a single file, Microsoft created DirectShow Editing Services. Sometimes it works, sometimes it doesn't. We use it in our software to create videos from clips, just like you want to. With a little extra work you can also add effects to the video.
It is also possible to use AviSynth. It's a scripting system and frameserver for DirectShow.
As far as I know, you can also create a video from multiple clips with Media Foundation, but I have never tried this.
I'm working on an application based on DirectShow that has to convert an AVI source file to an MP4 file that can be played back with QuickTime.
Since 3ivx, which according to my web research is the most popular way to fulfill this task, has become commercial (and my budget is quite limited), I decided to use a solution based on ffdshow.
I created a simple graph in GraphEdit, using LAME for audio encoding and the GDCL MPEG-4 Multiplexor for the muxing, but every time I try to play the movie with QuickTime, I get an error indicating a wrong "sample description".
Playback with Windows Media Player is working, except that there is no sound.
My guess is that there's a problem with the muxer, because every time I try to add audio encoding, GraphEdit automatically adds a decoder after the encoding unit (see the picture linked below).
http://imageshack.us/photo/my-images/39/graphjrgr.png/
Any ideas on how to integrate ffdshow in a better way, tips for alternative MP4 muxers, or a completely different approach are appreciated!
The GDCL muxer supports only a limited number of audio formats; you should probably check the muxer's source code to see whether the formats you are using are in fact supported. Basically, you need to choose an audio encoder whose output the muxer recognizes as valid. It may also be possible to use GraphEdit to set different properties on the encoder filter so that things work better.
I have had some luck with the Monogram x264 (video) and AAC (audio) encoders. See http://blog.monogram.sk/janos/directshow-filters/
Finally, try the debug version of the GDCL mp4 muxer.
Also, be aware of the MPEG LA licensing requirements for H.264/AVC: http://www.mpegla.com/main/programs/AVC/Pages/FAQ.aspx
I am embedding an MP3 into my Flex project for use as a sound effect, but I am finding that every time I play it, there is a delay of about half a second from when I call .play() to when you can hear the sound. This is a problem because I want the sound effects to sync with game events. The MP3 itself is only about a fifth of a second long, so the delay isn't caused by the contents of the MP3.
I'm embedding with
[Embed(source="assets/Tock.mp3")]
[Bindable]
public static var TockSound:Class;
public var tock_sound:SoundAsset;
and then playing with
if (tock_sound == null) {
    tock_sound = new TockSound() as SoundAsset;
}
Alert.show("tock");
tock_sound.play();
I know there's a delay because the sound plays about a half second after the Alert displays. I did consider that maybe it was the initial loading time of constructing the TockSound, but the delay is there on all the subsequent calls as well.
How can I avoid this delay on playing a sound?
Update: It turns out this delay is only present when playing the swf on Linux. I believe it is a Linux-specific flaw in Adobe's flash player.
I'm not sure about the reason, other than that Flash has always had some bad audio latency issues. Read Tinic's blog to stay on top of this stuff: http://www.kaourantin.net/
One thing that might help: make sure your MP3 is 44.1kHz or else Flash will need to resample it.
You can actually embed a WAV file; it just takes some work. You embed it as a byte array and, in FP9, dynamically construct a SWF file on the fly. Pretty horrible, but doable. :-) In FP10, you can use the dynamic sound API, so it's easy.
Try StandingWave
http://code.google.com/p/standingwave/
It has the ability to "cache" the sound before playing, getting rid of the delays and clicks you normally hear.
I haven't worked with audio in Flash too much, but it sounds like the half-second delay might be the Flash Player opening the file and reading it into memory. You could try doing a play() and a stop() when you load the application; that might push it into memory (see the sketch below).
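Something along these lines might do as a "warm-up" (untested sketch; it assumes it lives in the same component as the [Embed] code from the question, with TockSound being the embedded class from there):

import flash.media.SoundChannel;
import flash.media.SoundTransform;
import mx.core.SoundAsset;

private var tock_sound:SoundAsset;

// Call this once at startup (e.g. from creationComplete): play the embedded
// sound silently and stop it straight away, so the first real play() does
// not pay the setup cost.
private function warmUpSound():void {
    tock_sound = new TockSound() as SoundAsset;
    var channel:SoundChannel = tock_sound.play(0, 0, new SoundTransform(0));
    if (channel != null) {
        channel.stop();
    }
}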
The other option is using the StandingWave library which was built by the guys at Noteflight. You can get some additional control over the audio files with that library and hopefully it'll help your delay problem.
The problem is that all MP3s have some amount of blank time at the beginning of the file that is put there during the compression process. Modern software jukeboxes (iTunes, Songbird, etc.) compensate for this by scanning the file before it's played and determining the song's actual starting point. Your best bet for sound effects is to use .wav files, since their format allows instant playback, at the cost of a larger file size.
You might also try http://www.mptrim.com/ - they claim to be able to trim that space off the MP3.
I have a legacy file format that contains sounds embedded in it (in various encodings). I would like to be able to play these sounds in Flash (Air?) by reading the sound bytes out of the file and instantiating a Sound object with them.
If the sound is unencoded (e.g., raw PCM), I've found that I can use the new Flex 4 SampleDataEvent.SAMPLE_DATA event to play the sound.
However, if the sound is encoded (e.g., MP3), then I'm at a loss. The sound expected by SampleDataEvent.SAMPLE_DATA has to be raw PCM. From what I've seen, encoded sounds can only be instantiated by [Embed]ing them or by using a URLRequest with Sound.load().
Surely there's a third way? AMF or e4x?
There are really only two routes for you to go. The first is to write a decoder in ActionScript. You may be able to use Alchemy to port over some C/C++ code to make this job significantly easier (and possibly more performant). This is exactly how I got Ogg Vorbis playback to work with Flash.
The other option is to dynamically create a valid SWF inside of a ByteArray. That SWF could contain an embedded sound object that was made up of your sound data. A number of folks have pulled off similar hacks in the past before Flash Player 10 was available. I believe you can find a good place to start in Andre Michelle's and Joa Ebert's PopForge codebase.
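As a footnote on the first route: once you do have raw PCM (either because it was stored unencoded, or as the output of an ActionScript/Alchemy decoder), feeding it to the dynamic sound API mentioned in the question looks roughly like this. An untested sketch assuming 44.1 kHz stereo float samples in a ByteArray; the variable and function names here are just for illustration:

import flash.events.SampleDataEvent;
import flash.media.Sound;
import flash.utils.ByteArray;

private var pcm:ByteArray;        // decoded samples, filled elsewhere
private var playbackSound:Sound;

private function playPcm(decoded:ByteArray):void {
    pcm = decoded;
    pcm.position = 0;
    playbackSound = new Sound();
    playbackSound.addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleData);
    playbackSound.play();
}

private function onSampleData(event:SampleDataEvent):void {
    // Hand the player up to 8192 stereo samples per callback; once the
    // ByteArray runs out we write fewer than 2048 and playback stops.
    var samplesWritten:int = 0;
    while (samplesWritten < 8192 && pcm.bytesAvailable >= 8) {
        event.data.writeFloat(pcm.readFloat()); // left channel
        event.data.writeFloat(pcm.readFloat()); // right channel
        samplesWritten++;
    }
}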