I want to make an app that records video from a webcam.
My approach is to grab each frame as a bitmap and write it to a file with AForge's VideoFileWriter.WriteVideoFrame method.
I open the file with VideoFileWriter's Open method:
writer.Open(path, VideoWidth, VideoHeight, frameRate, VideoCodec.H264, bitRate);
It is hard to determine the bitRate; when the bit rate is wrong, the whole program dies without any error.
I think the bit rate is related to the video frame width, height, frame rate, and bit count, as well as the codec,
but I'm not sure of the specific formula to calculate it.
I want to compress the video using the H.264 codec.
Can anyone help me find a solution?
Thank you very much.
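For what it's worth, a common rule of thumb is to derive the bit rate from the pixel rate: width × height × frame rate, multiplied by a bits-per-pixel factor, where around 0.1 bits per pixel is a typical starting point for H.264. This is only a heuristic, not anything AForge documents; a minimal sketch:

    #include <cstdio>

    // Rough rule-of-thumb estimate, not an AForge formula:
    //   bits per second ~= width * height * fps * bitsPerPixel
    // where ~0.1 bits per pixel is a commonly used starting point for H.264.
    static int EstimateBitRate(int width, int height, int fps, double bitsPerPixel = 0.1)
    {
        return static_cast<int>(width * height * static_cast<double>(fps) * bitsPerPixel);
    }

    int main()
    {
        // Example: 1280x720 at 30 fps -> roughly 2.8 Mbit/s
        std::printf("%d bps\n", EstimateBitRate(1280, 720, 30));
        return 0;
    }

The bits-per-pixel factor would still need tuning up or down depending on how much motion and detail the footage contains.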
I want to prevent my file from being copied from any device. That is, I have a memory card, and when I insert it into any device, such as an Android phone or a computer, the file should not be copyable from it.
Are there any resources to read, or any place where I can get some information about copy prevention?
Maybe you could partition the SD card, leave some space unpartitioned, and write some magic bytes to it. When your program starts, you determine which device the application was run from. If it is the SD card, you try to read the raw bytes from the SD card and compare them with the magic bytes; if the program was not run from the SD card, or if the magic bytes do not match, it refuses to execute. Done!
Please don't get me wrong, this won't be easy, but maybe it could work. Copying would still work, but the copied file would be useless. Also, this is not a ready-made solution, but rather an outline of how you could achieve your goal.
For accessing an SD card's raw data, please see
http://www.codeproject.com/Articles/28314/Reading-and-Writing-to-Raw-Disk-Sectors
And for partitioning: http://geeks.lockergnome.com/profiles/blogs/how-to-partition-an-sd-card
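A minimal sketch of the check itself, assuming Windows and the raw-sector access from the CodeProject article above; the drive path, offset, and marker bytes are placeholders for illustration, and reading a physical drive normally requires administrator rights:

    #include <windows.h>
    #include <cstring>

    // Sketch of the magic-bytes check described above (Win32 only).
    // "\\.\PhysicalDrive1", the offset, and the marker value are placeholders;
    // you would have to locate the actual removable drive and your own
    // unpartitioned area.
    bool HasMagicBytes()
    {
        const unsigned char kMagic[4] = { 0xDE, 0xAD, 0xBE, 0xEF };  // example marker
        const LONGLONG kOffsetBytes = 512LL * 2048;                  // a sector inside the unpartitioned space

        HANDLE hDrive = CreateFileA("\\\\.\\PhysicalDrive1", GENERIC_READ,
                                    FILE_SHARE_READ | FILE_SHARE_WRITE,
                                    NULL, OPEN_EXISTING, 0, NULL);
        if (hDrive == INVALID_HANDLE_VALUE)
            return false;

        LARGE_INTEGER pos;
        pos.QuadPart = kOffsetBytes;
        SetFilePointerEx(hDrive, pos, NULL, FILE_BEGIN);

        unsigned char sector[512];
        DWORD bytesRead = 0;
        BOOL ok = ReadFile(hDrive, sector, sizeof(sector), &bytesRead, NULL);
        CloseHandle(hDrive);

        return ok && bytesRead >= sizeof(kMagic)
                  && std::memcmp(sector, kMagic, sizeof(kMagic)) == 0;
    }

If the check fails (wrong device or missing marker), the program simply exits instead of opening the protected file.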
I'm working on a speech recognition project, and my program can recognize words from audio files. Now I need to work with the audio stream coming from the microphone. I'm using QAudio to get sound data from the mic, and QAudio has a function to start the process. This start(*QBuffer) function writes the data into a QBuffer object (which wraps a QByteArray). When I'm not dealing with a continuous stream, I can stop recording from the mic whenever I want, copy the whole contents of the QBuffer into a QByteArray, and do whatever I want with the data. But with a continuous stream, the QBuffer's size grows over time and reaches about 100 MB in 15 minutes.
So I need to use some kind of circular buffer, but I can't figure out how to do that, especially with this start(*QBuffer) function. I also want to avoid cutting the streamed sound at a point where the speech is still going.
What is the basic way to handle streaming audio data for speech recognition?
Is it possible to change the start(*QBuffer) function into start(*QByteArray) and have the function overwrite that QByteArray to build a circular buffer?
Thanks in advance
Boost offers a circular buffer:
http://www.boost.org/doc/libs/1_37_0/libs/circular_buffer/doc/circular_buffer.html#briefexample
It should meet your needs.
Alain
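A minimal sketch of that approach, assuming you read the mic data in chunks yourself and copy each chunk into the ring buffer (the 30-second capacity and the feedAudio/snapshot helpers are just illustrative names):

    #include <boost/circular_buffer.hpp>
    #include <QByteArray>

    // Ring buffer holding roughly the last 30 seconds of 16 kHz, 16-bit mono audio.
    // Once it is full, push_back silently overwrites the oldest samples.
    static boost::circular_buffer<char> ring(16000 * 2 * 30);

    // Hypothetical hook: call this with each chunk read from the microphone,
    // instead of letting a QBuffer grow without bound.
    void feedAudio(const QByteArray &chunk)
    {
        for (char c : chunk)
            ring.push_back(c);
    }

    // When the recognizer wants data, copy the buffered window out as one block.
    QByteArray snapshot()
    {
        QByteArray out;
        out.reserve(static_cast<int>(ring.size()));
        for (char c : ring)
            out.append(c);
        return out;
    }

To avoid cutting words in half, one option is to keep the window comfortably larger than an utterance and only take a snapshot when a pause (a stretch of low energy) is detected.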
I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high-resolution input (such as 1920x1200x32). At least, FillBuffer() seems to be able to process only around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
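Roughly what that change looks like, for reference (DirectShow base classes; the class name and numbers here are placeholders):

    #include <streams.h>   // DirectShow base classes

    // Sketch of the DecideBufferSize override mentioned above.
    // CMyPushPin stands in for the push source's output pin class.
    // Raising cBuffers only gives the downstream encoder more queue room;
    // it does not make the encoder itself any faster.
    HRESULT CMyPushPin::DecideBufferSize(IMemAllocator *pAlloc,
                                         ALLOCATOR_PROPERTIES *pRequest)
    {
        CheckPointer(pAlloc, E_POINTER);
        CheckPointer(pRequest, E_POINTER);

        if (pRequest->cBuffers < 8)
            pRequest->cBuffers = 8;               // queue a few more frames
        pRequest->cbBuffer = 1920 * 1200 * 4;     // one uncompressed RGB32 frame

        ALLOCATOR_PROPERTIES actual;
        HRESULT hr = pAlloc->SetProperties(pRequest, &actual);
        if (FAILED(hr))
            return hr;

        return (actual.cbBuffer < pRequest->cbBuffer) ? E_FAIL : S_OK;
    }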
What are my options to speed up the process? What's the right way to do live high res encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.
There are already good answers posted to your question here: High resolution capture and encoding too slow. The task is too complex for the CPU in your system, which is simply not fast enough to perform real-time video encoding in the configuration you have set up.
I am using the jQuery plugin below for playing MP3s:
www.happyworm.com/jquery/jplayer
However, there is a bug in Flash where the total play (track) time won't show up correctly until the whole MP3 has finished downloading.
I wonder if there is a way to work around this and get the correct total time using JavaScript, another Flash component, or even a backend library in ASP.NET. Any suggestion helps. Thanks.
Are you sure that's a bug? Looking at the header definition for the MP3 format, I don't see any field for the length of the file. Generally, applications that play MP3s have to calculate the time themselves, and that may not be doable until the entire file has been downloaded. So the behavior you're seeing from Flash might be expected.
Theoretically, if it's a fixed-bitrate file (as opposed to VBR), then knowing the bitrate (taken from the header) and the total size of the file should be enough to calculate it. However, the server would have to report the size of the file in the response headers (and that's not guaranteed to be accurate).
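The arithmetic for the constant-bitrate case is simple (this ignores ID3 tag bytes, which is one reason the result is only approximate):

    // Approximate duration of a constant-bitrate MP3:
    //   seconds = file size in bits / bitrate in bits per second.
    // ID3 tags and VBR files throw this off, so treat it as an estimate.
    double EstimateMp3DurationSeconds(long long fileSizeBytes, int bitrateBitsPerSecond)
    {
        return (fileSizeBytes * 8.0) / bitrateBitsPerSecond;
    }

    // e.g. a 4,000,000-byte file at 128 kbit/s -> 4,000,000 * 8 / 128,000 = 250 seconds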
My guess is you'd need some service on the server that could calculate the length and report that to you in a separate request.
I am working on a Flex application/game where a lot of UIComponents are moved around on a canvas.
I would like to "record" an flv movie of the movement on the canvas. Is there anyway this can be accomplished ?
I essentially want my users to be able to record small flv videos of their games to be uploaded on youtube.
Any ideas or suggestions about how to do this ?
There is SimpleFlvWriter (for AIR). You may be able to modify it to get a non-AIR version, but memory management will be an issue, since the BitmapData will take up a lot of memory. It may be workable for a few seconds of FLV, but definitely not for several minutes.
Usually we stream things to a Flash media server (e.g. Flash Media Server, Red5) and let the server create the FLV. But you would need to find a way to feed the screen captures into a NetStream. Or you may find another server-side technology that can create an FLV from a sequence of BitmapData. Either way, it will consume a lot of bandwidth.
An alternative I can think of is to save all the game commands (in XML or another text format) and send them to the server, then write a server-side program that generates the FLV from the game commands alone. But that would be the most difficult solution to implement.
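To make that last idea concrete: the recording side only needs a timestamped log of what moved where, which the server can then replay frame by frame when rendering the FLV. A rough, language-neutral sketch (the struct and field names are purely illustrative):

    #include <string>
    #include <vector>

    // One entry per user action; serialized to XML (or another text format)
    // and uploaded when the game ends.
    struct GameCommand {
        double timeSeconds;   // when the action happened, relative to game start
        std::string action;   // e.g. "move", "rotate" (illustrative names)
        std::string target;   // which UIComponent was affected
        double x, y;          // where it ended up on the canvas
    };

    std::vector<GameCommand> commandLog;   // appended to as the game is played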