Maximum quality of video with Flex AIR - apache-flex

I want to use a camera installed in my computer in a Flex AIR application I'm writing, and I have a few questions regarding the quality options:
Do I have any limitation on the video image quality? If my camera supports HD recording, will I be able to record in HD format via the AIR application?
How can I export the recorded video in any format I want?
If I want to use the same camera for shooting stills, how can I ensure (within the code) the best quality for the resulting pictures?
Thanks for your answers.

1) AIR can play 1080p without a problem; however, it cannot 'record' it from within the application. There are some workarounds, but you won't have sound. This is something the OS should handle.
2) You can't; see above.
3) Shooting stills with a camera for picture quality is handled on the hardware side, not in software. In software, it would essentially be equivalent to taking a single frame out of the video.

Related

Maptiler Pro Demo 12 Core using only 12%

Using MapTiler Pro Demo. Testing zoom levels 1-21 for a Google Maps export from a TIFF image (about a 21 MB file covering polygons over 2000 km).
At the moment it has been running for an hour with constant usage at 12% of 12 vcores (about 1.5 of 12) maxed to about 2.5 GHz. No tiles have been exported yet, only the associated HTML files.
Am I too quick to judge performance?
Edit: Progress bar at 0%.
Edit 2: Hour 8, still 0%. Memory usage has increased from 400 MB to 2 GB.
From your 21 MB input file, you are trying to generate about 350 GBytes of tiles (approx. 10 billion map tiles at zoom level 21) with the options you have set in the software. Is this really what you want to do?
It makes no sense to render a very low-resolution image (2600 x 2000 pixels) covering a large area (such as South Africa) down to zoom level 21!
The software suggested the default maxzoom of 6. If your data are coverage maps or a similar dataset, it makes sense to render them down to perhaps zoom level 12, and definitely no deeper than 14. For standard input data (aerial photos), the natively suggested maxzoom +1 or +2 is the maximum that really makes sense. Deeper zoom levels do not add any visual advantage.
The user can always zoom deeper, but the upper tiles can be displayed (over-zoomed) on the client side, so you don't really need to generate and save all of these images at all...
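As a sanity check, the "approx. 10 billion tiles" figure can be reproduced with the standard slippy-map (Web Mercator) tile formula. A minimal sketch, using the South Africa bounding box from the geofabrik calculator linked in this answer:

```python
import math

def deg2num(lat, lon, zoom):
    """Convert WGS84 coordinates to slippy-map tile indices (Web Mercator)."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

def tile_count(bbox, zoom):
    """Number of tiles covering bbox = (west, south, east, north) at one zoom level."""
    west, south, east, north = bbox
    x1, y1 = deg2num(north, west, zoom)  # top-left tile
    x2, y2 = deg2num(south, east, zoom)  # bottom-right tile
    return (x2 - x1 + 1) * (y2 - y1 + 1)

# Bounding box of South Africa (from the geofabrik calculator link)
bbox = (16.44, -34.85, 32.82, -22.16)
total = sum(tile_count(bbox, z) for z in range(1, 22))
print(f"{total:,} tiles for zoom levels 1-21")  # roughly 10 billion
```

Each extra zoom level quadruples the tile count, which is why zoom 21 alone accounts for about three quarters of the total.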
MapTiler automatically provides you with a Google Maps V3 viewer, which does this client-side over-zooming out of the box.
See a preview here:
http://tileserver.maptiler.com/#weather/gmapsmaptiler.embed
If you are interested in the math behind the map tiles, check:
http://tools.geofabrik.de/calc/#type=geofabrik_standard&bbox=16.44,-34.85,32.82,-22.16
Thanks for providing the report (with http://www.maptiler.com/how-to/submit-report/) to us. Your original email to our support did not contain any technical data at all (not even the data you have posted here on Stack Overflow).
Please, before you publicly complain about the performance of a piece of software, double-check that you know what you are doing. MapTiler Pro is a powerful tool, but the user must understand what they are asking it to do.
Based on your feedback, we have decided to implement an estimated final output size in a future version of the MapTiler software, and to warn the user in the graphical user interface if the chosen options are probably unwanted.

Preview issues for 1080P video using DirectShow

I am using DirectShow in my application to capture video from webcams. I have issues while using cameras to preview and capture 1080P videos. Eg: HD Pro Webcam C910 camera of Logitech.
The 1080P video preview was very jerky and no HD clarity was observed. I could see that the enumerated device name was "USB Video Device".
Today we installed the Logitech webcam software on these XP machines. In that application, we could see the 1080P video without any jerkiness. We also recorded 1080P video in the Logitech application and played it back in high quality.
But when I test my application:
I can see that the enumerated device name has changed to "Logitech Pro Webcam C910" instead of "USB Video Device" as in the previous case.
My application uses about 20% CPU, but the "System" process eats up 60%+, and overall CPU usage hovers around 100%.
Even though the video quality has greatly improved, the jerkiness is still there, possibly due to the 100% CPU usage.
When I close my application, the high CPU utilization by the "System" process goes away.
Regarding my application - It uses ICaptureGraphBuilder2::RenderStream to create Preview and Capture streams.
In the Capture stream, I connect the camera filter to a NULL renderer with a sample grabber as the intermediate filter.
In the Preview stream, I have:
g_pBuild->RenderStream(&PIN_CATEGORY_PREVIEW,&MEDIATYPE_Video,cam,NULL,NULL);
The preview is displayed on a window specified using the IVideoWindow interface. I use the following:
g_vidWin->put_Owner((OAHWND)(HWND)hWnd);
g_vidWin->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS);
g_vidWin->put_MessageDrain((OAHWND)hWnd);
I tried setting the frame rate to different values (AvgTimePerFrame = 500000 (20 fps), 666667 (15 fps), etc.).
All trials still give the same result: clarity has improved, but some jerkiness remains, and the CPU sits at almost 100% due to the 60%+ utilization by "System". When I close my video application, usage by "System" goes back to 1-2%.
Any help on this is most welcome.
Thanks in advance,
Use IAMStreamConfig.SetFormat() to select the frame rate, dimensions, color space, and compression of the output streams (Capture and Preview) from a capture device.
Aside: the claim that "It doesn't change the source filter's own rate" is completely wrong. The whole purpose of this interface is to define the output format and frame rate of the captured video.
Use IAMStreamConfig.GetStreamCaps() to determine what frame rates, dimensions, color spaces, and compression formats are available. Most cameras provide a number of different formats.
It sounds like the fundamental issue you're facing is that USB bandwidth (at least prior to USB3) can't sustain 30fps 1080P without compression. I'm most familiar with the Microsoft LifeCam Studio family of USB cameras, and these devices perform hardware compression to send the video over the wire, and then eat up a substantial fraction of your CPU on the receiving end converting the compressed video from Motion JPEG into a YUV format. Presumably the Logitech cameras work in a similar fashion.
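The bandwidth argument above can be checked with back-of-the-envelope arithmetic. A sketch (assuming an uncompressed YUY2 stream at 16 bits per pixel, a common webcam format):

```python
# Uncompressed 1080p at 30 fps in YUY2 (4:2:2, 16 bits per pixel)
width, height, fps, bits_per_pixel = 1920, 1080, 30, 16
raw_mbps = width * height * fps * bits_per_pixel / 1e6
print(f"{raw_mbps:.0f} Mbit/s")  # ~995 Mbit/s

# USB 2.0 signals at 480 Mbit/s, and real-world isochronous throughput
# is considerably lower, so uncompressed 1080p30 cannot fit on the bus.
usb2_mbps = 480
print(raw_mbps > usb2_mbps)  # True
```

This is why such cameras compress on the device and leave the decompression (and the CPU cost) to the host.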
The frame rate that cameras produce is influenced by the additional workload of performing auto-focus, auto-color correction, and auto-exposure in software. Try disabling all of these features on your camera if possible. In the era of Skype, camera software and hardware have become less attentive to maintaining a high frame rate in favor of better image quality.
The DirectShow timing model for capture continues to work even if the camera can't produce frames at the requested rate, as long as the camera indicates that frames are missing. It does this using a "dropped frame" count field that rides along with each captured frame. The sum of the dropped frames plus the "real" frames must equal the requested frame rate set via IAMStreamConfig.SetFormat().
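That accounting can be illustrated with a toy model (a Python sketch of the invariant, not the DirectShow API itself):

```python
def check_timing(requested_fps, seconds, delivered_frames):
    """Model of the capture timing invariant: delivered frames plus the
    reported dropped-frame count must add up to the requested rate."""
    expected = requested_fps * seconds
    dropped = expected - delivered_frames  # what the driver must report
    assert delivered_frames + dropped == expected
    return dropped

# A camera asked for 30 fps that only manages 22 fps over 10 seconds
# must report 80 dropped frames for downstream timestamps to stay correct.
print(check_timing(30, 10, 220))  # 80
```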
Using the LifeCam Studio on an I7 I have captured at 30fps 720p with preview, compressed to H.264 and written an .mp4 file to disk using around 30% of the CPU, but only if all the auto-focus/color/exposure settings on the camera are disabled.

cheapest method to send small (24 bytes) data over long distances (600 miles)

I have a friend who is working on a project where they need to deploy a large number of devices over the midwest. For simplicity let's say these are temperature gauges - they read the current temperature and transmit that information to a server. The server would just need to know what device is reporting what temperature (412X|10c).
These devices will be in forests, near highways, in cities, and in swamps. All the other technology is prototyped and working (the ability to read the temperature, the hardware for the device); the open question right now is: what is the cheapest way to send this information to the primary server?
I think they'll need to go with a wireless carrier (verizon/sprint/at&t) and use something similar to mobile broadband. Is there really any other option?
You could do it with ham radio and something like APRS, assuming they don't care about encryption and don't have a pecuniary interest in the project.
You wouldn't need full mobile broadband, as your data would fit in a text message. You can get cellular shields for Arduino that would probably fit your needs.
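For scale: a reading like "412X|10c" fits a fixed 24-byte packet with plenty of room to spare. A sketch of one hypothetical layout (illustrative only, not an established protocol):

```python
import struct

# Hypothetical 24-byte packet: 16-byte device id, 4-byte timestamp
# (minutes since epoch), 2-byte temperature in centi-degrees C,
# 2-byte checksum. Big-endian throughout.
PACKET = struct.Struct(">16sIhH")
assert PACKET.size == 24

def pack_reading(device_id: bytes, minute: int, temp_c: float) -> bytes:
    centi = int(round(temp_c * 100))
    checksum = (sum(device_id) + minute + centi) & 0xFFFF
    return PACKET.pack(device_id.ljust(16, b"\0"), minute, centi, checksum)

def unpack_reading(packet: bytes):
    device_id, minute, centi, _checksum = PACKET.unpack(packet)
    return device_id.rstrip(b"\0"), minute, centi / 100

msg = pack_reading(b"412X", 28_000_000, 10.0)
print(len(msg), unpack_reading(msg))  # 24 (b'412X', 28000000, 10.0)
```

At 24 bytes per report, even one SMS (140 payload bytes) could batch several readings, which matters when pricing per message.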

Can't get good quality audio/video

I'm creating a video chat application, and no matter what combination of Camera/Microphone/NetStream properties and functions I use, I cannot get high-quality video/audio. I get occasional audio latency, pixelated video, and occasionally frozen video, and the degree of each depends on the combination of properties/functions I set/call.
Others such as TokBox, TinyChat, and Chatroulette have achieved great video/audio quality with FMS. What is the secret? At least point me in the right direction, because right now I'm not impressed with FMS's ability to provide a good video/audio experience.
BTW, I'm using a P2P mesh using a group specifier, not NetStream.DIRECT_CONNECTIONS.
Thanks in advance!
Try the Camera.setQuality() function (on the client), particularly the 'bandwidth' parameter. You might also want to look at your own internet/server connection, since by default the quality adapts to the available bandwidth. Those other sites have server farms with massive amounts of bandwidth, something I doubt you have yourself.
If you are using RTMFP, you cannot depend on the quality, as it is a peer-to-peer technology. If you need quality and recording, go with RTMP.

Best way to show image sequence as a movie in Adobe AIR

I need to show an image sequence as a movie in an Adobe AIR application, i.e. treat lots of images as video frames and show the result. For now I am going to try simply loading them and displaying them in a movie clip, but this might be too slow. Any advanced ideas on how to make it work? The images are located on a hard drive or a very fast network share, so bandwidth should be sufficient. There can be thousands of them, so preloading everything into memory doesn't seem feasible.
Adobe AIR is not 100% decided; I am open to other ideas for building a cross-platform desktop application for this purpose quickly enough.
You could have an image control as your movie frame, then load up a buffer of BitmapData objects. Fill the BitmapData objects with the images as they come in, then display the next image in the buffer:
private function drawNextImage(bitmapData:BitmapData):void {
    // With a Spark Image control, source accepts a Bitmap directly.
    movieFrame.source = new Bitmap(bitmapData);
}
If the images aren't big but you have a lot of them, it can be worthwhile to group sequences onto single bitmaps (à la mipmap, or a sprite sheet). This way you can load in, say, one bitmap containing 50 images, forming 2 seconds of video playback at 25 fps.
This method is especially useful online, where you want to limit the number of requests and handshakes that cause slowness, but I reckon it can also be useful for optimizing loading, unloading, and memory access.
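The frame lookup for such a grouped bitmap is simple grid arithmetic, shown here as a language-agnostic Python sketch (in AIR the resulting rectangle would feed BitmapData.copyPixels):

```python
def frame_rect(index, frame_w, frame_h, columns):
    """Top-left corner and size of frame `index` inside a grid atlas
    laid out row by row, `columns` frames per row."""
    col = index % columns
    row = index // columns
    return (col * frame_w, row * frame_h, frame_w, frame_h)

# 50 frames laid out 10 per row, each 320x240:
# frame 23 sits at column 3, row 2.
print(frame_rect(23, 320, 240, 10))  # (960, 480, 320, 240)
```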
