Depth Of Field in OpenCL

This might be a "homework" question, but I think I've done enough on my own to ask for help here.
In my assignment, we have a working OpenGL/OpenCL application. The OpenGL part renders a scene, and the OpenCL part should apply a depth-of-field-like effect. The OpenCL code gets a texture where each pixel holds the original color and depth, and it should output the color for the given pixel. I'm only supposed to change the per-pixel function that is part of the OpenCL code.
I already have a working solution using a variable-size Gaussian filter that samples the area around the pixel being computed, but it gets laggy at higher resolutions, even on my dedicated NVIDIA graphics card. I tried optimizing away most of the redundant operations, but I haven't gained much performance.
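For reference, here is a minimal sketch of the kind of per-pixel function I mean (the identifiers, the color/depth packing, and the depth-to-radius mapping are illustrative, not my exact assignment code):

__kernel void depthOfField(__read_only image2d_t inputImage,
                           __write_only image2d_t outputImage,
                           float focalDepth,
                           float blurScale)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;
    int2 pos = (int2)(get_global_id(0), get_global_id(1));

    // Each texel packs the original color in xyz and depth in w
    // (illustrative layout; the real packing may differ).
    float4 center = read_imagef(inputImage, smp, pos);

    // Blur radius grows with distance from the focal plane.
    int radius = (int)(fabs(center.w - focalDepth) * blurScale);

    float4 sum = (float4)(0.0f);
    float weightSum = 0.0f;
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            // Gaussian-like falloff over the sampled square.
            float w = exp(-(float)(dx * dx + dy * dy) /
                          (2.0f * (float)(radius * radius) + 1.0f));
            sum += w * read_imagef(inputImage, smp, pos + (int2)(dx, dy));
            weightSum += w;
        }
    }
    write_imagef(outputImage, pos, sum / weightSum);
}

The nested loop is the expensive part: the cost per pixel grows quadratically with the blur radius, which matches the slowdown I see at higher resolutions.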
I also tried searching the web, but all the algorithms I find are closely tied to the graphics pipeline of OpenGL or DirectX; nothing I can use in my scenario.
Are there any algorithms that could work in my situation?

AMD APP SDK has a sample called URNGGL (Uniform Random Noise Generator with OpenGL/OpenCL interoperability).
Have a look at https://github.com/clockfort/amd-app-sdk-fixes/tree/master/samples/opencl/cl/app/URNG.

Related

QR Code Recognition in AGV (Auto Guided Vehicle)

I have some questions.
The first question is which equipment should be used to recognize QR codes.
I'm thinking of two options.
The first is the kind of QR code scanner used in industrial settings.
The second is a camera module (OpenCV would be used).
However, the situation to consider is that the code should be recognized at a speed of 50 cm/s.
What do you think?
And if I use a camera, is there a library you can recommend for recognizing QR codes? (C/C++ only)
Always start with the simplest solution and then go more complex if needed. If you're using ROS/OpenCV, OpenCV has a QR code scanner, for example. Other options include ZBar, quirc, and more, which you can find by searching GitHub or the web.
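For example, a minimal sketch using OpenCV's cv::QRCodeDetector (available since OpenCV 3.4.4/4.x; the camera index and loop are illustrative):

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0); // default camera; adjust for your setup
    cv::QRCodeDetector detector;
    cv::Mat frame, points;

    while (cap.read(frame)) {
        // detectAndDecode returns the decoded payload, or an empty
        // string if no QR code was found in this frame.
        std::string data = detector.detectAndDecode(frame, points);
        if (!data.empty())
            std::cout << "Decoded: " << data << std::endl;
    }
    return 0;
}

Timing this loop at your target distances is exactly the kind of benchmark described below.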
As for a camera, if you don't need the intrinsic matrix, then you only need to decide on the resolution: higher resolution takes (non-linearly) longer to process, while lower resolution makes it harder to see the code clearly.
Your comment about "recognize at 50 cm/s" doesn't make much sense as stated. I assume you mean that you want to be able to decode a QR code that's up to 50 cm away, and do it in less than a second (to have time to stop). First you'll have to check whether the algorithm, running on your hardware, can detect the QR code at the desired distances, and how that changes when scaling the image up/down in OpenCV. Then you'll have to time how long it takes to detect/decode it at those distances/resolutions/scales. If it isn't good enough, you can try another algorithm, try different compilation settings, give it its own thread, change the image scaling, accept the limitations, or change the hardware.

Texture taken from Item: can I make its filtering be gamma-correct?

If this was a texture that I created, I'd simply make its internalFormat be GL_SRGB. But I'm passing a Qt Quick Item foo into my custom QQuickFramebufferObject GL code, where I take foo->textureProvider()->texture() and use that texture to render.
So can I make the filtering of the texture (when bilinearly sampling it) be gamma-correct?
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
I've searched Google for hooks Qt may provide to configure this, but found nothing except QQuickTextureFactory, which does not solve my problem, at least as far as I can see.
I need to support OpenGL ES 2.0.
> Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Well, from the filtered result color, there is simply no way to get back the original colors used as input, even if you know the interpolation factors.
> Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
A more efficient variation of this strategy would be creating a second view onto the texture data, with an SRGB format (see GL_ARB_texture_view extension, core since GL 4.3), which completely avoids the copy and additional RAM usage.
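A minimal sketch of that approach, assuming the Qt texture is RGBA8 and was allocated with immutable storage (a requirement of texture views); note this needs GL 4.3 or GL_ARB_texture_view, so it will not work on plain OpenGL ES 2.0:

// 'qtTextureId' stands in for the id behind foo->textureProvider()->texture().
GLuint makeSrgbView(GLuint qtTextureId)
{
    GLuint srgbView;
    glGenTextures(1, &srgbView);

    // Second view of the same storage, reinterpreted as sRGB, so the
    // hardware linearizes texels before bilinear filtering.
    glTextureView(srgbView, GL_TEXTURE_2D, qtTextureId,
                  GL_SRGB8_ALPHA8,
                  0, 1,  // base mip level, level count
                  0, 1); // base layer, layer count

    glBindTexture(GL_TEXTURE_2D, srgbView);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return srgbView;
}

Sampling through the returned texture then gives gamma-correct filtering with no copy and no extra texel storage.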

Efficiency in drawing arbitrary meshes with OpenGL (Qt)

I am in the process of coding a level design tool in Qt with OpenGL (for a relevant example see Valve's Hammer, as Source games are what I'm primarily designing this for) and have currently written a few classes to represent 3D objects (vertices, edges, faces). I plan to implement an "object" class which ties the three together, keeps track of its own vertices, etc.
After having read up on rendering polygons on http://open.gl, I have a couple of questions regarding the most efficient way to render the content. Bear in mind that this is a level editor, so I am anticipating needing to render a large number of objects with arbitrary shapes and numbers of vertices/faces.
Edit: Updated to be less broad.
At what point would it be best to create the VBO? The Qt OpenGL example creates a VBO when a viewport is initialized, but I'd expect it to be inefficient to create a copy for each viewport.
Regarding the submitted answer, would it be sensible to create one VBO for geometry, another for mesh models, etc.? What happens if/when a VBO overflows?
VBOs should be (re)initialized whenever there's a need for it. Think of VBOs as memory pools: you don't allocate one VBO per object, but group similar objects into a single VBO. When you run out of space in one VBO, you allocate another one.
Today's GPUs are optimized for rendering indexed triangles, so GL_TRIANGLES will suffice in 90% of all cases.
Frankly, modern OpenGL implementations largely ignore the buffer object usage hint. So many programs made ill use of that parameter that it became more efficient for drivers to profile the actual usage pattern and adjust their behavior to it. However, it's still a good idea to use the right mode, and in your case that's GL_STATIC_DRAW.
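Putting those three points together, a minimal sketch (buffer sizes, offsets, and the example triangle are illustrative) of pooling geometry into one GL_STATIC_DRAW VBO/IBO pair and drawing it as indexed triangles:

// Example data for one object: a single triangle.
const float objVertices[] = {
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
};
const unsigned int objIndices[] = { 0, 1, 2 };
const GLintptr objVertexOffset = 0; // this object's slice of the pool
const GLintptr objIndexOffset  = 0;

// One pooled VBO/IBO shared by many similar objects; when a pool
// runs out of space, allocate another one.
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glGenBuffers(1, &ibo);

// Reserve the pools once, with the hint matching mostly-static geometry.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 16 * 1024 * 1024, NULL, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 4 * 1024 * 1024, NULL, GL_STATIC_DRAW);

// Upload the object's data into its slice of each pool.
glBufferSubData(GL_ARRAY_BUFFER, objVertexOffset,
                sizeof(objVertices), objVertices);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, objIndexOffset,
                sizeof(objIndices), objIndices);

// Draw that object as indexed triangles out of the shared pool.
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT,
               (const void*)objIndexOffset);

If a pool overflows, nothing bad happens on the GL side: you simply glGenBuffers a fresh VBO and place subsequent objects there.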

VideoMaterial appears pixellated in Away3D

I'm working on a spherical movie viewer in Away3D and am having a problem when I apply a VideoMaterial texture to a 3D primitive. The video appears heavily pixelated, as if it's being scaled down or heavily compressed. When I apply a BitmapMaterial of a single still image from the video, it looks fine, so I don't think the resolution of the video is the problem.
I found a discussion suggesting a solution of specifying "fixedHeight" and "fixedWidth" when calling the constructor, but those arguments seem to have no effect, and I can't find them in the API either. I do see something called "lockH" and "lockW" in the API, but they don't seem to have any effect either.
Here's the code constructing the VideoMaterial.
//basic intro setup stuff and then...
var videoURL:String = "assets/clip.flv";
this.primitive = new Sphere({material:"blue:#cyan", radius:50000, rotationX:100, segmentsW:30, segmentsH:30});
//more code to setup the rest of the scene, and implement some texture switching, then...
this.primitive.material = new VideoMaterial({file:videoURL, lockH:1000, lockW:2000});
For reference, I'm building off this example as a starting point, and I'm using Away3D 3.6 & Flex 4.5.1 in Eclipse Indigo.
To get rid of the pixelation, set smooth to true on the material. This will obviously not increase the resolution, but it will activate anti-aliasing, the same way that smoothing=true does on a native BitmapData (internally that's exactly what happens).
If you are going to use a video or bitmap material on a sphere that serves as an environment in a full-screen view, you will need a really high-resolution video/bitmap. At any one time you can only see at most a third of the sphere's surface, and it covers a screen area of more than 1000 pixels in width, so your video will need to be at least 3000 pixels wide for it not to suffer from stretching issues.
I'm afraid to say that this is "normal". It mostly has to do with the efficiency of ActionScript code and the lack of hardware acceleration and anti-aliasing. It's essentially impossible to transform your video onto a primitive without some loss in quality because, frankly, ActionScript isn't really made for this kind of intense calculation.
With that said, however, there is hope. There's a new Flash Player coming out "soonish" (or so I've heard) that will have a basic hardware-accelerated 3D renderer (codename "Molehill"), which Away3D and other 3D engines (like Alternativa) are already hard at work adopting. That would mean the video would be anti-aliased and should therefore look smooth, but I can't confirm this since I've never tried it.

jogl picking example

Hi guys,
I'm having trouble adding object picking to a JOGL project.
I know this could be done with the pick buffer, but I can't find any examples.
Can anyone help?
In general, as you are probably aware, JOGL code translates directly from any other OpenGL examples you might see on the web.
GL_SELECT based picking seems to be very much out of favour these days; deprecated in the spec and poorly implemented by drivers.
Alternatives you can use are:
Rendering each object with a unique color (with all lighting, fog, etc. disabled) so you can determine which object the mouse is over via glReadPixels, clearing the buffers after the picking stage so that you can then render your normal graphics. This approach is explained in the top-rated answer to "OpenGL GL_SELECT or manual collision detection?", for example; a minimal sketch follows at the end of this answer.
Ray-casting into your geometry (see the selection FAQ link below). This also means you don't have to have an active GL context in the thread you call the code from, FWIW.
I've used both of these methods in the same application, and I'm currently having good results with the latter; but since most of the objects in that application are spheres, it's a lot cheaper there than it might be with arbitrary models.
http://www.opengl.org/resources/faq/technical/selection.htm
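Since JOGL maps one-to-one onto the C API (as noted above), here is a minimal sketch of the color-ID approach in plain OpenGL; drawObjectFlat is a hypothetical callback, and the id-to-color packing is illustrative:

#include <GL/gl.h>

// Hypothetical helper: draws object i in the current flat color, with
// lighting, texturing and fog disabled.
void drawObjectFlat(int i);

// Returns the index of the object under the mouse, or -1 for none.
int pickObject(int mouseX, int mouseY, int viewportHeight, int objectCount)
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for (int i = 0; i < objectCount; ++i) {
        // Encode (i + 1) into RGB; color 0 is reserved for "background".
        int id = i + 1;
        glColor3ub((GLubyte)(id & 0xFF),
                   (GLubyte)((id >> 8) & 0xFF),
                   (GLubyte)((id >> 16) & 0xFF));
        drawObjectFlat(i);
    }

    // GL's framebuffer origin is bottom-left, so flip the mouse y.
    GLubyte pixel[3];
    glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);

    // Clear again afterwards and render the normal frame, so the flat
    // picking colors are never presented on screen.
    int id = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);
    return id - 1;
}

In JOGL the calls are the same, just made through the GL object (gl.glReadPixels(...), etc.).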
