I'm using JavaFX in a game to play music and sounds. Originally I used mp3 files for the audio, but then I learned that, because of the license attached to that format, it was probably not wise to keep using it. So I switched to m4a (AAC). However, after testing the program with m4a versus mp3, I've found that using m4a increases the program's memory consumption by up to a gigabyte, with the increases occurring when new music is loaded (an SSCCE is not possible due to the size of the program). The file format was the only thing changed between tests.
My questions:
Is m4a known to be problematic in JavaFX programs?
What audio formats should I be using in JavaFX?
My current code follows this path: disk > QByteArray > QImage (or QPixmap) > QPainter
// Step 1: disk to QByteArray
QFile file("/path/to/file");
file.open(QIODevice::ReadOnly);
QByteArray blob = file.readAll();
file.close();
// Step 2: bytearray to QImage
this->mRenderImg = new QImage();
this->mRenderImg->loadFromData(blob, "JPG");
// Step 3: render
QPainter p(this);
QRect target; // draw rectangle
p.drawImage(target, *(this->mRenderImg));
Step 1 takes 0.5s
Step 2 takes 3.0s - decoding jpeg is the slowest step
Step 3 takes 0.1s
Apparently, decoding the JPEG data is the slowest step. How do I make it faster?
Is there a third-party library that can transform a byte array of JPEG data into a byte array of PPM data that QImage can load faster?
Btw, using QPixmap takes the same time as QImage.
I was able to reduce the QImage load time significantly using libjpeg-turbo. Here are the steps:
Step 1: Load the .jpeg into a memory file buffer
Step 2a: Use tjDecompress2() to decompress the file buffer into an uncompressed buffer
Step 2b: Use QImage(uncompressedbuffer, int width, int height, Format pixfmt) to load the QImage
Step 3: Render
Steps 2a and 2b combined offer at least a 3x speedup compared to QImage::loadFromData().
Notes
The pixel format used in libjpeg-turbo's tjDecompress2() should match the format specified in step 2b.
You can obtain the width and height used in step 2b from tjDecompressHeader2().
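For what it's worth, a minimal sketch of those steps might look like this (the helper name is mine, and it assumes the TurboJPEG C API from libjpeg-turbo with an RGB target format; error handling is trimmed for brevity):
#include <turbojpeg.h>
#include <QFile>
#include <QImage>
// Sketch only: decode a JPEG file with TurboJPEG and wrap the result in a QImage.
QImage loadJpegTurbo(const QString &path)
{
    // Step 1: read the .jpeg into a memory buffer
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return QImage();
    QByteArray jpeg = file.readAll();
    tjhandle tj = tjInitDecompress();
    int width = 0, height = 0, subsamp = 0;
    tjDecompressHeader2(tj, reinterpret_cast<unsigned char*>(jpeg.data()),
                        jpeg.size(), &width, &height, &subsamp);
    // Step 2a: decompress to a raw RGB buffer
    QByteArray pixels(width * height * tjPixelSize[TJPF_RGB], 0);
    tjDecompress2(tj, reinterpret_cast<unsigned char*>(jpeg.data()), jpeg.size(),
                  reinterpret_cast<unsigned char*>(pixels.data()),
                  width, 0 /* pitch */, height, TJPF_RGB, TJFLAG_FASTDCT);
    tjDestroy(tj);
    // Step 2b: wrap the buffer in a QImage; TJPF_RGB matches Format_RGB888.
    // copy() detaches from "pixels" before it goes out of scope.
    return QImage(reinterpret_cast<uchar*>(pixels.data()), width, height,
                  width * tjPixelSize[TJPF_RGB], QImage::Format_RGB888).copy();
}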
I think your question is addressing the end result of a design issue rather than the design issue itself. Here are my thoughts on the loading times of JPEGs, or of images in general, in GUIs.
Loading data from a hard drive is a common bottleneck in software design. You have to wait for the drive to seek to the right location, read the data, and copy it into RAM. Only then can it be pushed to the video buffer or the graphics card.
An SSD will be much faster, but requiring your end users to have one is not practical in most situations. Loading the image once on startup and never closing your program is one way to avoid hitting this delay multiple times.
This is a big reason why lots of programs show a loading bar or a splash screen at startup, so the user doesn't get a slow experience while data is being pulled up.
Other common ways programs handle long operations are the classic hourglass, the spinning beach ball, or a "please wait" animation.
Probably the best examples of handling lots of large JPEGs are Google Maps and some of the higher-quality photo managers such as Picasa.
Google Maps stores the same area at many different resolutions, split into tiles, and loads only the tiles appropriate for the current zoom level. Picasa "processes" all the images it finds and stores a few different thumbnails for each one, which can be loaded much faster than the full-resolution image (in most cases).
My suggestion would be to either store a lower-resolution copy of your JPEG, show that one first, and swap in the high-resolution version once it has loaded, or to look into breaking your image into tiles and loading them as needed.
On another related note, if your UI is getting bogged down by the JPEG loading, move the loading into a worker thread and keep your UI responsive!
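As a rough sketch of that last point (assuming Qt's QtConcurrent module; "Viewer" is a stand-in widget and mRenderImg is the QImage* member from the question):
#include <QtConcurrent/QtConcurrent>
#include <QFutureWatcher>
#include <QFile>
#include <QImage>
#include <QWidget>
// Sketch only: decode the JPEG off the GUI thread, then repaint when it is ready.
class Viewer : public QWidget
{
public:
    void loadImageAsync(const QString &path)
    {
        auto *watcher = new QFutureWatcher<QImage>(this);
        connect(watcher, &QFutureWatcher<QImage>::finished, this, [this, watcher]() {
            *mRenderImg = watcher->result();   // back on the GUI thread
            update();                          // repaint now that the image is decoded
            watcher->deleteLater();
        });
        watcher->setFuture(QtConcurrent::run([path]() {
            QFile file(path);
            file.open(QIODevice::ReadOnly);
            QImage img;
            img.loadFromData(file.readAll(), "JPG");   // the slow decode happens here
            return img;
        }));
    }
private:
    QImage *mRenderImg = new QImage();
};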
Hope that helps.
You asked about third-party software:
ImageMagick can do this task quickly (even with larger files):
convert 1.jpg 2.jpg 3.jpg outputfilenamehere.ppm
While you have the filestream open, you can do numerous operations...
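If you want to drive that from Qt, a minimal sketch might shell out to convert with QProcess (this assumes the ImageMagick binary is on the PATH; note that convert still has to decode the JPEG itself, so this only pays off if the PPM files are produced ahead of time):
#include <QProcess>
#include <QImage>
// Sketch only: convert a JPEG to PPM via ImageMagick, then load the PPM.
QImage loadViaImageMagick(const QString &jpegPath)
{
    const QString ppmPath = jpegPath + ".ppm";
    QProcess convert;
    convert.start("convert", QStringList() << jpegPath << ppmPath);
    if (!convert.waitForFinished() || convert.exitCode() != 0)
        return QImage();              // conversion failed
    return QImage(ppmPath);           // PPM loads much faster than JPEG
}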
Hope that helps,
Gette
I am developing a scanner application in C++. Currently I am able to scan documents and get the images in file transfer mode. But all of the scanned files have the same size, even though the contents of the documents are different.
FileFormat: TWFF_TIFF
Pixel flavor: TWPF_CHOCOLATE
XResolution: 75
YResolution: 75
ICAP_UNITS: TWUN_INCHES
ICAP_PIXELTYPE: TWPT_GRAY
ICAP_BRIGHTNESS: 0
ICAP_CONTRAST: 0
ICAP_BITDEPTH: 8
Every scanned image comes out at 327 KB. Why would this be?
Also, how can I set JPEG compression? Does file transfer mode support JPEG compression?
Probably your scanner/driver is writing uncompressed TIFF files, so the file size depends only on the dimensions of the image. If each image is the same width & height, the resulting files will be the same size.
All the file-transfer stuff in TWAIN is implemented by the driver (not TWAIN itself) and all the features are optional. So you need to check if your scanner/driver supports JPEG compression when transferring TIFF files. It might, it might not.
You can try setting ICAP_COMPRESSION to TWCP_JPEG, after setting ICAP_IMAGEFILEFORMAT to TWFF_TIFF. Probably if both succeed you will get JPEG compression in your TIFFs, although it might be either "Old Style" JPEG or "New Style" JPEG. If you don't know what that means, you probably should find out.
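For reference, a rough sketch of that capability negotiation (this assumes a DSM_Entry pointer you've already resolved from the Data Source Manager, plus the appId/srcId identities from your open session; the helper name is illustrative):
#include "twain.h"
#include <windows.h>
// Sketch only: set a TW_UINT16 capability through a TWON_ONEVALUE container.
bool SetCapOneValue(TW_IDENTITY *appId, TW_IDENTITY *srcId,
                    TW_UINT16 cap, TW_UINT16 itemType, TW_UINT32 value)
{
    TW_CAPABILITY twCap;
    twCap.Cap = cap;
    twCap.ConType = TWON_ONEVALUE;
    twCap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
    TW_ONEVALUE *pVal = (TW_ONEVALUE *)GlobalLock(twCap.hContainer);
    pVal->ItemType = itemType;
    pVal->Item = value;
    GlobalUnlock(twCap.hContainer);
    // DSM_Entry is the entry point you resolved from TWAIN_32.DLL / TWAINDSM.DLL.
    TW_UINT16 rc = DSM_Entry(appId, srcId, DG_CONTROL, DAT_CAPABILITY,
                             MSG_SET, (TW_MEMREF)&twCap);
    GlobalFree(twCap.hContainer);
    return rc == TWRC_SUCCESS;
}
// Usage: select TIFF first, then ask for JPEG compression; check both return values.
// SetCapOneValue(&gAppId, &gSrcId, ICAP_IMAGEFILEFORMAT, TWTY_UINT16, TWFF_TIFF);
// SetCapOneValue(&gAppId, &gSrcId, ICAP_COMPRESSION,     TWTY_UINT16, TWCP_JPEG);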
I wrote a tool for this kind of experimenting, years ago, still maintained and free from Atalasoft: Twirl TWAIN Probe
Caution: Many scanners don't support File Transfer Mode (it is optional) and those that do may not support the TIFF file format (the only required file format is BMP!) If you need to support a wide variety of scanners, you'll have to use TWAIN's Native Transfer Mode or Memory Transfer Mode, and write the images to file yourself e.g. using LibTiff.
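If you do end up writing the files yourself, a minimal LibTiff sketch for an 8-bit grayscale buffer (say, assembled from TWAIN memory transfer strips; the function name and buffer layout are assumptions) might look like this:
#include <tiffio.h>
// Sketch only: write a tightly packed 8-bit grayscale buffer to a JPEG-compressed TIFF.
bool WriteGrayTiff(const char *path, const unsigned char *pixels,
                   int width, int height)
{
    TIFF *tif = TIFFOpen(path, "w");
    if (!tif) return false;
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_JPEG);   // "new style" JPEG-in-TIFF
    TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, TIFFDefaultStripSize(tif, 0));
    for (int row = 0; row < height; ++row)
    {
        if (TIFFWriteScanline(tif, (void *)(pixels + row * width), row, 0) < 0)
        {
            TIFFClose(tif);
            return false;
        }
    }
    TIFFClose(tif);
    return true;
}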
I am using DirectShow in my application to capture video from webcams. I have issues when using cameras to preview and capture 1080p video, e.g. with Logitech's HD Pro Webcam C910.
The 1080p video preview was very jerky and had no HD clarity. I could see that the enumerated device name was "USB Video Device".
Today we installed the Logitech webcam software on these XP machines. In that application, we could see the 1080p video without any jerkiness. We also recorded 1080p video in the Logitech application and it played back in high quality.
But when I test my application:
I can see that the enumerated device name has changed to "Logitech Pro Webcam C910" instead of the "USB Video Device" of the previous case.
The CPU eaten up by my application is 20%, but the process "SYSTEM" eats up 60%+ and the overall CPU hovers around 100%.
Even though the video quality has greatly improved, the jerks are still there, possibly due to the 100% CPU.
When I close my application, the high CPU utilization by the "System" process goes away.
Regarding my application: it uses ICaptureGraphBuilder2::RenderStream to create the Preview and Capture streams.
In the Capture stream, I connect the camera filter to a NULL renderer with a sample grabber as the intermediate filter.
In the Preview stream, I have
g_pBuild->RenderStream(&PIN_CATEGORY_PREVIEW,&MEDIATYPE_Video,cam,NULL,NULL);
The preview is displayed in a window specified via the IVideoWindow interface. I use the following:
g_vidWin->put_Owner((OAHWND)(HWND)hWnd);
g_vidWin->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS);
g_vidWin->put_MessageDrain((OAHWND)hWnd);
I tried setting the frame rate to different values (AvgTimePerFrame = 500000 (20 fps), 666667 (15 fps), etc.).
All the trials still give the same result: clarity has improved, but some jerks remain and the CPU sits at almost 100% due to the 60%+ utilization by "System". When I close my video application, usage by "System" goes back to 1-2%.
Any help on this is most welcome.
Thanks in advance,
Use IAMStreamConfig.SetFormat() to select the frame rate, dimensions, color space, and compression of the output streams (Capture and Preview) from a capture device.
Aside: The comment above of "It doesn't change the source filter's own rate" is completely wrong. The whole purpose of this interface is to define the output format and framerate of the captured video.
Use IAMStreamConfig.GetStreamCaps() to determine what frame rates, dimensions, color spaces, and compression formats are available. Most cameras provide a number of different formats.
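For illustration, a rough sketch of that negotiation, reusing the g_pBuild and cam variables from your question and picking a 1920x1080 format at roughly 15 fps (adjust to whatever GetStreamCaps actually reports, and call this before building the preview/capture streams):
#include <dshow.h>
// Local media-type cleanup helpers, as in the DirectShow SDK samples.
static void FreeMediaTypeFields(AM_MEDIA_TYPE &mt)
{
    if (mt.cbFormat != 0) { CoTaskMemFree(mt.pbFormat); mt.cbFormat = 0; mt.pbFormat = NULL; }
    if (mt.pUnk != NULL)  { mt.pUnk->Release(); mt.pUnk = NULL; }
}
static void DeleteMediaTypeLocal(AM_MEDIA_TYPE *pmt)
{
    if (pmt) { FreeMediaTypeFields(*pmt); CoTaskMemFree(pmt); }
}
// Sketch only: find a 1920x1080 capability on the capture pin and apply it.
HRESULT Select1080p(ICaptureGraphBuilder2 *pBuild, IBaseFilter *pCam)
{
    IAMStreamConfig *pConfig = NULL;
    HRESULT hr = pBuild->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                                       pCam, IID_IAMStreamConfig, (void**)&pConfig);
    if (FAILED(hr)) return hr;
    int count = 0, size = 0;
    pConfig->GetNumberOfCapabilities(&count, &size);
    for (int i = 0; i < count; ++i)
    {
        VIDEO_STREAM_CONFIG_CAPS caps;
        AM_MEDIA_TYPE *pmt = NULL;
        if (FAILED(pConfig->GetStreamCaps(i, &pmt, (BYTE*)&caps)))
            continue;
        if (pmt->formattype == FORMAT_VideoInfo && pmt->pbFormat)
        {
            VIDEOINFOHEADER *vih = (VIDEOINFOHEADER*)pmt->pbFormat;
            if (vih->bmiHeader.biWidth == 1920 && vih->bmiHeader.biHeight == 1080)
            {
                vih->AvgTimePerFrame = 666667;     // ~15 fps, in 100 ns units
                hr = pConfig->SetFormat(pmt);      // apply the chosen format
                DeleteMediaTypeLocal(pmt);
                break;
            }
        }
        DeleteMediaTypeLocal(pmt);
    }
    pConfig->Release();
    return hr;
}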
It sounds like the fundamental issue you're facing is that USB bandwidth (at least prior to USB3) can't sustain 30fps 1080P without compression. I'm most familiar with the Microsoft LifeCam Studio family of USB cameras, and these devices perform hardware compression to send the video over the wire, and then eat up a substantial fraction of your CPU on the receiving end converting the compressed video from Motion JPEG into a YUV format. Presumably the Logitech cameras work in a similar fashion.
The framerate that cameras produce is influenced by the additional workload of performing auto-focus, auto-color correction, and auto-exposure in software. Try disabling all these features on your camera if possible. In the era of Skype, camera software and hardware has become less attentive to maintaining a high framerate in favor of better image quality.
The DirectShow timing model for capture continues to work even if the camera can't produce frames at the requested rate, as long as the camera indicates that frames are missing. It does this using a "dropped frame" count field that rides along with each captured frame. The sum of the dropped frames plus the "real" frames must equal the requested frame rate set via IAMStreamConfig.SetFormat().
Using the LifeCam Studio on an I7 I have captured at 30fps 720p with preview, compressed to H.264 and written an .mp4 file to disk using around 30% of the CPU, but only if all the auto-focus/color/exposure settings on the camera are disabled.
I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high resolution input (such as 1920x1200x32). At least, FillBuffer() seems to only be able to process around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
What are my options to speed up the process? What's the right way to do live high res encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.
You have great answers posted to your question here: High resolution capture and encoding too slow. The task is too complex for the CPU in your system, which is simply not fast enough to perform realtime video encoding in the configuration you set it to work in.
I am working on a Flex application/game where a lot of UIComponents are moved around on a canvas.
I would like to "record" an flv movie of the movement on the canvas. Is there anyway this can be accomplished ?
I essentially want my users to be able to record small flv videos of their games to be uploaded on youtube.
Any ideas or suggestions about how to do this ?
There is SimpleFlvWriter (for AIR). You may be able to modify it to get a non-AIR version. But memory management will be an issue, since the BitmapData will take up a lot of memory... It may work for a few seconds of flv, but definitely not for several minutes.
Usually we stream things to a Flash media server (e.g. Flash Media Server, Red5) and let the server create the flv. But you need to find a way to feed the screen captures into a NetStream, or find another server-side technology that can create an flv from a sequence of BitmapData. Either way it will consume a lot of bandwidth.
An alternative I can think of is to save all the game commands (in XML or another text format) and send them to the server, then write a server-side program that generates the flv from the game commands alone. But that would be the most difficult solution to implement.