Efficiency in drawing arbitrary meshes with OpenGL (Qt)

I am in the process of coding a level design tool in Qt with OpenGL (for a relevant example see Valve's Hammer, as Source games are what I'm primarily designing this for) and have currently written a few classes to represent 3D objects (vertices, edges, faces). I plan to implement an "object" class which ties the three together, keeps track of its own vertices, etc.
After having read up on rendering polygons on http://open.gl, I have a couple of questions regarding the most efficient way to render the content. Bear in mind that this is a level editor, so I am anticipating needing to render a large number of objects with arbitrary shapes and numbers of vertices/faces.
Edit: Updated to be less broad.
At what point is it best to create the VBO? The Qt OpenGL example creates a VBO when a viewport is initialized, but I'd expect it to be inefficient to create a copy for each viewport.
Regarding the submitted answer, would it be sensible to create one VBO for geometry, another for mesh models, etc.? What happens if/when a VBO overflows?

VBOs should be (re)initialized whenever there's a need for it. Think of VBOs as memory pools: rather than allocating one VBO per object, group similar objects into a single VBO. When you run out of space in one VBO, you allocate another one.
Today's GPUs are optimized for rendering indexed triangles. So GL_TRIANGLES will suffice in 90% of all cases.
Frankly, modern OpenGL implementations largely ignore the buffer object usage hint. So many programs misused that parameter that it became more efficient for drivers to profile the actual usage pattern and adjust their behavior to match. However, it's still a good idea to use the right hint, and in your case that's GL_STATIC_DRAW.
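To make the pooling idea concrete, here is a minimal sketch (VboPool and its members are illustrative names, not an established API). Create the pool once a GL context is current, and share it between viewports via context sharing rather than duplicating it per viewport:

```cpp
// Minimal sketch of a VBO used as a memory pool.
struct VboPool {
    GLuint vbo = 0;
    GLsizeiptr capacity = 0;   // total size in bytes
    GLsizeiptr used = 0;       // bytes handed out so far

    void init(GLsizeiptr bytes) {
        capacity = bytes;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Editor geometry is drawn far more often than it changes,
        // hence GL_STATIC_DRAW.
        glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STATIC_DRAW);
    }

    // Returns the byte offset of the uploaded block, or -1 when the pool
    // is full -- the caller then allocates the next pool.
    GLintptr upload(const void* data, GLsizeiptr bytes) {
        if (used + bytes > capacity)
            return -1;
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferSubData(GL_ARRAY_BUFFER, used, bytes, data);
        GLintptr offset = used;
        used += bytes;
        return offset;
    }
};
```

Each object then remembers which pool it lives in and its byte offset, and gets drawn with glDrawElements(GL_TRIANGLES, ...) using indices into that block.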


How expensive are QPainter::save() and QPainter::restore()?

I want to build a scene graph to store and manage my scene layout which will be painted using QPainter (like QPicture, but the layout should be modifiable).
The scene graph will contain nodes for transformations, clipping and primitives. The first two will need to store the current state of the painter to restore it afterwards. It seems natural to use QPainter::save() and QPainter::restore() respectively.
I am a bit concerned about the efficiency of these two functions. Qt's documentation gives no information here. Looking at Qt's source code, it seems
QPainter::save() copies every element of the state: the pen, the brush, the transformation, the clipping path, and many more. It seems to me that storing the previous state of the one or two relevant properties I actually need myself would be far more efficient. Does anyone have any experience with this?
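For comparison, the hand-rolled alternative described above would look roughly like this; it is only a sketch, and TransformNode, m_local, and paintChildren() are hypothetical names. Whether it actually beats save()/restore() is something only profiling the scene graph can settle:

```cpp
// Sketch: restore only the one property this node actually modifies,
// instead of copying the full painter state with save()/restore().
void TransformNode::paint(QPainter *painter) const
{
    const QTransform old = painter->transform();   // copy a single property
    painter->setTransform(m_local, /*combine=*/true);
    paintChildren(painter);                        // hypothetical recursion
    painter->setTransform(old);                    // put it back afterwards
}
```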

Texture taken from Item: can I make its filtering be gamma-correct?

If this was a texture that I created, I'd simply make its internalFormat be GL_SRGB. But I'm passing a Qt Quick Item foo into my custom QQuickFramebufferObject GL code, where I take foo->textureProvider()->texture() and use that texture to render.
So can I make the filtering of the texture (when bilinearly sampling it) be gamma-correct?
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
I've searched Google for hooks Qt may provide to configure this, but found nothing except QQuickTextureFactory, which, however, does not solve my problem, at least AFAICS.
I need to support OpenGL ES 2.0.
Regarding manual bilinear filtering with 4 texture taps and lerping: from the filtered result color, there is simply no way to get back the original colors used as input, even if you know the interpolation factors.
Regarding blitting into a GL_SRGB texture of your own: a more efficient variation of this strategy would be to create a second view onto the texture data with an sRGB format (see the GL_ARB_texture_view extension, core since GL 4.3), which completely avoids the copy and the additional RAM usage.
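A minimal sketch of that texture-view variation, assuming a desktop GL 4.3+ context and that the Qt Quick texture (qtTex here) was allocated with immutable storage, which glTextureView requires. Note that texture views are not available on OpenGL ES 2.0, so this only helps where a desktop path is taken:

```cpp
// Create a second, sRGB view onto the existing texture data (no copy).
GLuint srgbView = 0;
glGenTextures(1, &srgbView);                 // fresh, never-bound name
glTextureView(srgbView,
              GL_TEXTURE_2D,                 // target of the view
              qtTex,                         // original Qt Quick texture
              GL_SRGB8_ALPHA8,               // reinterpret texels as sRGB
              0, 1,                          // minlevel, numlevels
              0, 1);                         // minlayer, numlayers
// Sampling srgbView with GL_LINEAR now decodes sRGB -> linear *before*
// the bilinear blend, giving gamma-correct filtering.
```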

glTF on demand and LOD for massive glTF load

I am trying to load a very complex set of glTF models in A-Frame.
My problem is very simple; my goal is to load about 9 million glTF models into a single scene.
My idea was to combine different levels of detail in the glTF models depending on camera distance, and also to load only those glTFs that are visible to the camera. Otherwise, all the assets get loaded into memory and my browser eventually hangs due to memory consumption.
Is this possible in A-Frame?
With some attention to A-Frame best practices, you should be able to make a performant scene with tens of thousands or even hundreds of thousands of polygons. But it will not be possible to load millions of distinct glTF models simultaneously in A-Frame, or any WebGL renderer for that matter.
Assuming you just want to show as many models as possible, try to take advantage of certain special cases:
If you need to render many copies of the same model, you can use a technique called "instancing". Check out aframe-instancing for some example code on how to do that, and see the sketch after this answer for the underlying idea. Depending on the complexity of your model, you may be able to show thousands (but probably not millions) of copies at once.
If you're making something like an RPG — which needs many things in the world, but only a few are in sight at any given time — then you can be clever about dividing your world into zones, and only loading models for the current zone.
Both of these are non-trivial to implement, and beyond the scope of a Stack Overflow question. My suggestion would be to try to get started on your own, and when you run into trouble, post new questions with the minimum amount of code necessary to see what you're trying to do. You may also find the A-Frame Slack group to be useful.
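For what it's worth, aframe-instancing builds on WebGL's instanced drawing; the equivalent desktop OpenGL call sequence looks like this (a sketch, with offsetVbo, vertexCount, and instanceCount as assumed variables, attribute location 1 as an assumed layout, and a vertex shader that adds the per-instance offset):

```cpp
// Bind a buffer holding one vec3 position per instance.
glBindBuffer(GL_ARRAY_BUFFER, offsetVbo);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glVertexAttribDivisor(1, 1);   // advance once per instance, not per vertex

// A single draw call renders every copy of the model.
glDrawArraysInstanced(GL_TRIANGLES, 0, vertexCount, instanceCount);
```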

Is it possible to programmatically create a simple 3d object in Unreal?

Is there a way to create and use a simple 3d model on the Unreal Engine?
Your best bet would be to create the initial 3D asset in a third party tool and import it into the IDE. From there you can change the texture map, and manipulate the aesthetics in one way or another, but the initial 3D model should be in an external 3D format, and then placed as a prefab into your world.
Creating an object dynamically in UDK is cumbersome, requires lots of tweaking, and won't save much in terms of resource cost, especially if you want it to look like more than just 3D meshes thrown together rudimentarily. It is possible, but rarely worth it, especially when you have 3DSMax, Maya, Cinema4D, MotionBuilder, or one of the other hundred tools available to do the grunt work for you.
Most 3D engines (e.g. Unity, UDK, Torque, CryEngine, and now Havok) support many formats, especially the universal FBX. You could even use Google SketchUp and export to DAE or FBX format to get it into your engine. Granted, you lose a lot of the elements, but the basic 3D mesh stays relatively intact.
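For what it's worth, engine versions newer than the UDK this answer describes do expose a fairly direct C++ route: the ProceduralMeshComponent plugin in Unreal Engine 4 and later. A minimal sketch (AMyMeshActor is a hypothetical actor, and the component is assumed to have been created elsewhere, e.g. in the constructor):

```cpp
#include "ProceduralMeshComponent.h"

void AMyMeshActor::BuildTriangle(UProceduralMeshComponent* Mesh)
{
    // Three vertices and one counter-clockwise triangle.
    TArray<FVector> Vertices = {
        FVector(0.f, 0.f, 0.f),
        FVector(0.f, 100.f, 0.f),
        FVector(0.f, 0.f, 100.f)
    };
    TArray<int32> Triangles = { 0, 1, 2 };

    // Normals, UVs, vertex colors, and tangents may be left empty
    // for a bare mesh.
    Mesh->CreateMeshSection(0, Vertices, Triangles,
                            TArray<FVector>(), TArray<FVector2D>(),
                            TArray<FColor>(), TArray<FProcMeshTangent>(),
                            /*bCreateCollision=*/false);
}
```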

jogl picking example

Hi guys,
I am having trouble adding object picking to a JOGL project.
I know this could be done with the pick buffer, but I can't find any examples.
Anyone?
In general, as you are probably aware, JOGL code translates directly from any other OpenGL examples you might see on the web.
GL_SELECT based picking seems to be very much out of favour these days; deprecated in the spec and poorly implemented by drivers.
Alternatives you can use are:
Rendering each object with a unique color (and all lighting/fog etc. disabled) so you can determine which object the mouse is over via glReadPixels, then clearing the buffers after the picking stage so you can render your normal graphics. This approach is explained by the top-rated answer to "OpenGL GL_SELECT or manual collision detection?", for example; a minimal sketch follows at the end of this answer.
Ray-casting into your geometry (see the selection FAQ link below). This also means that you don't have to have an active GL context in the thread you call the code from, FWIW.
I've used both of these methods in the same application, currently having good results with the latter, but since most of the objects in that application are spheres it is a lot cheaper than it might be with arbitrary models.
http://www.opengl.org/resources/faq/technical/selection.htm
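A minimal sketch of the color-picking approach (drawObject, objectCount, mouseX/mouseY, and viewportHeight are assumed names; the GL calls map one-to-one onto JOGL's gl.* methods):

```cpp
// Render pass with flat, unique ID colors (lighting etc. disabled).
glDisable(GL_LIGHTING);
glDisable(GL_DITHER);          // dithering would corrupt the encoded IDs

for (unsigned id = 0; id < objectCount; ++id) {
    // Pack the object index into 24 bits of RGB.
    glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
    drawObject(id);            // hypothetical per-object draw routine
}

// Read the pixel under the cursor (GL's y axis runs bottom-up).
unsigned char pixel[3];
glReadPixels(mouseX, viewportHeight - mouseY, 1, 1,
             GL_RGB, GL_UNSIGNED_BYTE, pixel);
unsigned pickedId = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16);

// Clear and draw the normal frame afterwards.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
```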
