VideoMaterial appears pixelated in Away3D

I'm working on a spherical movie viewer in Away3D and am having a problem when I apply a VideoMaterial texture to a 3D primitive. The video appears heavily pixelated, as if it's being scaled down or heavily compressed. When I apply a BitmapMaterial made from a single still frame of the video, it looks fine, so I don't think the resolution of the video is the problem.
I found [this discussion][1] suggesting a solution: specify "fixedHeight" and "fixedWidth" when calling the constructor. But those arguments seem to have no effect, and I can't find them in the API either. I do see properties called "lockH" and "lockW" [in the API][3], but they don't seem to have any effect either.
Here's the code constructing the VideoMaterial.
//basic intro setup stuff and then...
var videoURL:String = "assets/clip.flv";
this.primitive = new Sphere({material:"blue:#cyan", radius:50000, rotationX:100, segmentsW:30, segmentsH:30});
//more code to setup the rest of the scene, and implement some texture switching, then...
this.primitive.material = new VideoMaterial({file:videoURL, lockH:1000, lockW:2000});
For reference, I'm building off this example as a starting point, and I'm using Away3D 3.6 & Flex 4.5.1 in Eclipse Indigo.
[1]:
[3]:

To get rid of the pixelation, set smooth to true. This will obviously not increase the resolution, but it will activate anti-aliasing, the same way that smoothing=true on a native BitmapData does (internally that's exactly what it does.)
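A minimal sketch of that, assuming Away3D 3.x's VideoMaterial exposes smooth as a settable property (worth checking against your build):
var videoMaterial:VideoMaterial = new VideoMaterial({file:videoURL});
videoMaterial.smooth = true; //activates bitmap smoothing when the texture is mapped
this.primitive.material = videoMaterial;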
If you are going to use a video or bitmap material on a sphere that serves as an environment in a full-screen view, you will need a really high-resolution video/bitmap. At any one time you can see at most about a third of the sphere's surface, and that third covers a screen area of more than 1000 pixels in width, which tells me that your video will need to be at least around 3000 pixels wide for it not to suffer from stretching.

I'm afraid to say that this is "normal". It mostly has to do with the efficiency of ActionScript code and the lack of hardware acceleration and anti-aliasing. It's essentially impossible to transform your video onto a primitive without some loss of quality because, frankly, ActionScript isn't really made for this kind of intense calculation.
With that said, however, there is hope. There's a new Flash Player coming out "soonish" (or so I've heard) that will have a basic hardware-accelerated 3D renderer (codename "Molehill"), which Away3D and other 3D engines (like Alternativa) are already hard at work adopting. That would mean the video gets anti-aliased and should therefore look smooth, but I can't confirm this since I've never tried it.

Related

Texture taken from Item: can I make its filtering be gamma-correct?

If this was a texture that I created, I'd simply make its internalFormat be GL_SRGB. But I'm passing a Qt Quick Item foo into my custom QQuickFramebufferObject GL code, where I take foo->textureProvider()->texture() and use that texture to render.
So can I make the filtering of the texture (when bilinearly sampling it) be gamma-correct?
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
I've searched Google for hooks Qt may provide to configure this, but found nothing except QQuickTextureFactory, which, however, does not solve my problem, at least AFAICS.
I need to support OpenGL ES 2.0.

"Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way."
Well, from the filtered result color there is simply no way to get back the original colors used as input, even if you know the interpolation factors, so any correction has to happen before filtering rather than after.
"Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage)."
A more efficient variation of this strategy would be to create a second view onto the texture data with an sRGB format (see the GL_ARB_texture_view extension, core since GL 4.3), which completely avoids the copy and the additional RAM usage.
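A minimal sketch of that approach in plain GL (no Qt specifics). Two caveats: texture views require the source texture to have immutable storage (GL_TEXTURE_IMMUTABLE_FORMAT true, i.e. allocated with glTexStorage2D), which the Qt-provided texture may not have, and they are not available on OpenGL ES 2.0, so this only helps on desktop GL:
//Sketch: create an sRGB view of an existing GL_RGBA8 texture so that
//hardware bilinear filtering happens in linear space (GL 4.3+ or ARB_texture_view).
GLuint srgbView = 0;
glGenTextures(1, &srgbView);
glTextureView(srgbView, GL_TEXTURE_2D, srcTex, //srcTex: assumed immutable-storage GL_RGBA8
              GL_SRGB8_ALPHA8,                 //reinterpret the same texels as sRGB
              0, 1,                            //minlevel, numlevels
              0, 1);                           //minlayer, numlayers
glBindTexture(GL_TEXTURE_2D, srgbView);        //a view has its own sampling parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//Sample srgbView instead of srcTex; the GPU now converts sRGB to linear
//before interpolating, so the filtering itself is gamma-correct.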

SVG optimization and performance: human (shapes like polygon, circle...) vs application (path)

Has anybody experienced the difference in performance between:
Human SVG (shapes like polygon, circle, ellipse, rect, line, text), typed by a human in a text editor
Application SVG (path), generated by an application / exported as SVG
What renders faster in browsers? What has smaller size in bytes?
I suspect this depends largely on the actual image, but I wonder if there are some general rules, and whether we can come up with a pattern as a community.
This is a follow-up to the question Examples of polygons drawn by path vs polygon in SVG by Chris Frisina.
Context: I'm working on reducing paint times by replacing simple JPEG backgrounds with SVGs (to save bytes) used as CSS3 background images via data URIs (to save on DOM nodes and HTTP requests). I can't decide whether I should go with Human SVG or Application SVG, and whether I should use Base64 encoding (I've read a lot that modern browsers can take UTF-8 for SVG, but I wonder if that's true for Human SVG or only for paths!).
Assuming the SVG image resulting from the manually created and the application-generated SVG is exactly the same, you wouldn't be able to notice a difference in draw time. The code generated by a program may be substantially larger, though, even when saved in its most optimized form; that is something to watch for. Don't bother using something like Inkscape if all you need is a triangle: manual creation will always be simpler in that case.
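For a sense of scale, here is the same triangle written by hand as a polygon and in the kind of decimal-heavy path form an exporter typically emits (illustrative markup only):
<!-- hand-written -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <polygon points="50,0 100,100 0,100"/>
</svg>
<!-- typical application output for the same shape -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <path d="M 50.000,0.000 L 100.000,100.000 L 0.000,100.000 Z"/>
</svg>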
I'm answering because I happened upon this question while working on a semi-complex SVG application for a larger app. I used paths to create a large portion of the shapes, but I wondered whether polygons would perform better. My shapes are draggable and can have corners added/removed/dragged as well; many of them move in tandem. In my small amount of testing of the two versions, I couldn't see a difference in loading, drawing, etc. It doesn't seem like this applies to your scenario, but I will add this: since I am dynamically updating the d/points attribute, I had to test both versions of the code that joins the strings from sets of x, y coordinates (see the sketch below). Again, whether observing the running application or benchmarking the code itself, there was no clear winner.
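For what it's worth, the two string-building variants just mentioned look roughly like this (illustrative JavaScript; the names are made up):
var pts = [[0, 0], [100, 0], [50, 100]];
//points attribute for a <polygon>:
var pointsAttr = pts.map(function (p) { return p.join(","); }).join(" "); //"0,0 100,0 50,100"
//d attribute for an equivalent <path>:
var dAttr = "M" + pts.map(function (p) { return p.join(" "); }).join(" L") + " Z"; //"M0 0 L100 0 L50 100 Z"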
So, use whichever you think is best. A polygon may read more naturally in context if you don't need some qualities of the path element, such as curves.
If others have more in-depth tests of runtime performance, I'd love to see them.

JavaFX Depth Testing in 3d Scene leading to Z-fighting

I've created an app that takes DTED positional data and creates a basic contour mesh. With depth testing enabled this works fine and I don't have issues simply rendering the terrain.
The problem I've run into is that when I place objects on the terrain surface I get a lot of z-fighting, causing visual corruption in the boxes/spheres. Is there any way to mitigate this besides modifying nearClip/farClip?
I've tried a nearClip of 0.1 and a farClip of 5000 and I still get a lot of flicker. Keep in mind my terrain may be 100k units wide, so I want to keep my farClip high enough to view the entire terrain at once. I've gone through every question related to the depth buffer in JavaFX and have not yet found anything that helps beyond the near/far clip settings.
I had the same problem with JavaFX. Your solution worked for me, but I was curious why that was, especially because I found someone with the same problem, but in Autodesk Maya.
The reason behind this, of course, is depth buffer precision. As the zFar plane moves away, or the zNear plane comes closer, too much precision is lost; the stored depth value is essentially a function of the reciprocal of the eye-space depth, which packs most of the precision right next to the near plane, so the position of zNear matters much more than that of zFar.
A good explanation is on the Khronos OpenGL wiki: Depth Buffer Precision.
TL;DR: Always move the zNear plane as far away as possible and the zFar plane as near as possible.
After a lot of tinkering, I found that adjusting my farClip never helped reduce flicker. However, if I set my nearClip to 1, I can have a farClip of 500k with virtually no flicker.
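In JavaFX terms that amounts to something like this (a minimal sketch; only the clip values come from the answers above, the rest is illustrative):
import javafx.scene.PerspectiveCamera;
//Raising nearClip from 0.1 to 1 recovers far more depth precision than
//lowering farClip, because depth values cluster near the near plane.
PerspectiveCamera camera = new PerspectiveCamera(true); //fixed eye, as used for 3D scenes
camera.setNearClip(1.0);
camera.setFarClip(500000.0);
//then e.g.: subScene.setCamera(camera);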

Depth Of Field in OpenCL

This might be a "homework" question, but I think I've done enough on my own to ask for help here.
In my assignment, we have a working OpenGL/OpenCL application. The OpenGL side renders a scene, and OpenCL should apply a depth-of-field-like effect. The OpenCL part gets a texture in which each pixel holds the original color and depth, and it should output the color for that pixel. I'm only supposed to change the per-pixel function that is part of the OpenCL code.
I already have a working solution using a variable-size Gaussian filter that samples the area around the pixel being computed. But it gets laggy at higher resolutions, even on my dedicated NVIDIA graphics card. I've tried optimizing away most of the redundant operations, but I haven't gained much performance.
I also tried searching the web, but all the algorithms I'm finding are closely tied to the graphics pipeline of OpenGL or DirectX; nothing that can be used in my scenario.
Are there any algorithms, that could work in my situation?
AMD APP SDK has a sample called URNGGL (Uniform Random Noise Generator with OpenGL/OpenCL interoperability).
Have a look at https://github.com/clockfort/amd-app-sdk-fixes/tree/master/samples/opencl/cl/app/URNG.
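Beyond that sample, one trick that fits a per-pixel-only kernel is to replace the variable-size Gaussian (whose cost grows with the square of the radius) with a fixed number of taps on a disc scaled by the circle of confusion, so the per-pixel cost stays constant regardless of blur size. A rough sketch in OpenCL C; the kernel signature, the tap pattern, and the assumption that depth sits in the alpha channel are all placeholders to adapt to the assignment's harness:
const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                      CLK_ADDRESS_CLAMP_TO_EDGE |
                      CLK_FILTER_LINEAR;

//8 fixed taps on a unit disc (a tiny Poisson-like pattern).
__constant float2 TAPS[8] = {
    (float2)( 0.707f,  0.000f), (float2)(-0.707f,  0.000f),
    (float2)( 0.000f,  0.707f), (float2)( 0.000f, -0.707f),
    (float2)( 0.354f,  0.354f), (float2)(-0.354f,  0.354f),
    (float2)( 0.354f, -0.354f), (float2)(-0.354f, -0.354f)
};

__kernel void dof(read_only image2d_t src,  //rgb = color, a = depth (assumed layout)
                  write_only image2d_t dst,
                  float focusDepth,         //depth that stays sharp
                  float cocScale,           //blur growth per unit of defocus
                  float maxRadius)          //upper bound on blur radius in pixels
{
    int2 p = (int2)(get_global_id(0), get_global_id(1));
    float2 pos = convert_float2(p) + 0.5f;

    float4 center = read_imagef(src, smp, pos);
    //Circle of confusion: radius grows with distance from the focus depth.
    float coc = clamp(fabs(center.w - focusDepth) * cocScale, 0.0f, maxRadius);

    float4 sum = center;
    for (int i = 0; i < 8; ++i)
        sum += read_imagef(src, smp, pos + TAPS[i] * coc);

    write_imagef(dst, p, (float4)(sum.xyz / 9.0f, center.w));
}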

QGLWidget is slower than QWidget

The problem is mostly stated in the title. I tried Qt's 2dpainting example and noticed that the same code consumes more CPU if I draw on a QGLWidget and less if I simply draw on a QWidget. I thought the QGLWidget was supposed to be faster. One more interesting phenomenon: on the QGLWidget, the antialiasing hint seems to be ignored.
OpenGL version: 3.3.0
So why is that?
Firstly, note this text at the bottom of the documentation that you link to:
"The example shows the same painting operations performed at the same time in a Widget and a GLWidget. The quality and speed of rendering in the GLWidget depends on the level of support for multisampling and hardware acceleration that your system's OpenGL driver provides. If support for either of these is lacking, the driver may fall back on a software renderer that may trade quality for speed."
Putting that aside, hardware rendering is not always guaranteed to be faster than software rendering; it all depends upon what the renderer is being asked to do.
An example of where software can exceed hardware is when what is being rendered changes constantly. If you have a drawing program that draws a line following the mouse, implemented by adding points to a painter path that is drawn every frame, a hardware renderer will suffer constant pipeline stalls as new points are added to the painter path. Setting up the graphics pipeline again after a stall takes time, which is not something a software renderer has to deal with.
In the 2dpainting example you ask about, the helper class that performs the paint calls is doing a lot of unnecessary work: saving the painter state, setting the pen/brush, rotating the painter, restoring the state. All of this is a bigger overhead in hardware than in software. To really see hardware rendering outperform software, pre-calculate the objects' positions outside of the render loop (the paint function) and do nothing but the actual rendering inside the paint function; that is likely to show a noticeable difference here.
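As a rough sketch of that separation (hypothetical class; Qt 4-era QGLWidget API as used by the example): mutate the scene data outside painting, and keep the paint handler down to a single draw call:
#include <QGLWidget>
#include <QPainter>
#include <QPainterPath>
#include <QPointF>

class Canvas : public QGLWidget {
public:
    //Scene mutation happens here, outside the paint path.
    void addPoint(const QPointF &pt) {
        if (path.elementCount() == 0)
            path.moveTo(pt);
        else
            path.lineTo(pt);
        update(); //just schedule a repaint
    }
protected:
    void paintEvent(QPaintEvent *) {
        QPainter painter(this);
        painter.setRenderHint(QPainter::Antialiasing);
        painter.drawPath(path); //paint does nothing but draw
    }
private:
    QPainterPath path; //built incrementally, not rebuilt every frame
};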
Finally, regarding anti-aliasing, the documentation that you linked to states: "the QGLWidget will also use anti-aliasing if the required extensions are supported by your system's OpenGL driver"
