How to use a shader for postprocessing? - directx-10

Assume that I render a frame to a texture (YUY2 format); how can I use a shader for the YUY2->RGB conversion? The idea is to get RGB at the render target output, but the GPU should handle the conversion.
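A sketch of the conversion arithmetic, assuming the common BT.601 video-range formula: the code below is CPU reference code in C++; a pixel shader would do the same multiply-adds, sampling the YUY2 frame (e.g. bound as a four-channel 8-bit texture, one texel per Y0/U/Y1/V pixel pair; exactly how you bind it depends on your setup) and writing RGB to the render target.

#include <algorithm>
#include <cstdint>

// Clamp an intermediate result into the 0..255 byte range.
static uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

// BT.601 video-range YUV -> RGB (integer approximation).
// In YUY2, a pixel pair is packed as [Y0 U Y1 V]; both pixels share U and V.
void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                uint8_t& r, uint8_t& g, uint8_t& b)
{
    const int c = y - 16, d = u - 128, e = v - 128;
    r = clamp8((298 * c + 409 * e + 128) >> 8);
    g = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
    b = clamp8((298 * c + 516 * d + 128) >> 8);
}

In the shader version, each texel of the bound texture covers two output pixels, so the output pixel's horizontal parity decides whether Y0 or Y1 feeds the formula.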

Related

How can I apply different normal map textures for different faces of minecraft-like cubic terrain blocks in Unity?

I'm making a procedurally generated minecraft-like voxel terrain in Unity. Mesh generation and albedo-channel texturing are flawless; however, I need to apply different normal map textures to different cube faces depending on whether they neighbor another cube or not. A Material accepts only a single normal map file and doesn't provide sprite-sheet-editor-like functionality for normal maps, so I have no idea how to use selected slices of a normal map file as if they were albedo textures. I couldn't find any related resources about the problem. Any help will be appreciated. Thanks...
First of all, I'm not an expert in this area, though I am going to try to help you based on my limited and incomplete understanding of parts of Unity.
If there are only a finite number of "normal face maps" that can exist, I suggest something like you indicated ("sprite sheet"): create a single texture (also sometimes called a texture atlas) that contains all of these normal maps.
The next step, which I'm not sure the Standard material shader can handle for your situation, is to generate UV/texture coordinates for the normal map and pass those along with your vertex xyz positions to the shader. The UV coordinates need to be specified for each vertex of each face; each is a 2-D (U, V) offset into your atlas of normal maps, with floating-point values in the range [0.0, 1.0] that map to the full X and Y extent of the atlas texture. For instance, if you had an atlas with a grid of textures in 4 rows and 4 columns, a face that should use the top-left texture would have UV coords of [(0,0), (0.25,0), (0.25,0.25), (0,0.25)], as sketched below.
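To make the atlas arithmetic concrete, here is a minimal sketch in C++ (the helper name and the row/column convention are mine for illustration, not a Unity API):

// UV rectangle of tile (row, col) in a gridSize x gridSize atlas.
struct UVRect { float u0, v0, u1, v1; };

UVRect tileUV(int row, int col, int gridSize)
{
    const float s = 1.0f / gridSize;              // one tile's extent in UV space
    return { col * s,       row * s,              // lower-left corner
             (col + 1) * s, (row + 1) * s };      // upper-right corner
}

Each of the four vertices of a face then gets one corner of that rectangle as its UV coordinate.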
The difficulty here may depend on whether you are already using UV coordinates for other texture mapping (e.g. in the Albedo or wherever else). If so, the Unity Standard Shader permits two sets of texture coordinates, and if you need more, you might have to roll your own shader or find a shader asset elsewhere that allows more UV sets. This is where my understanding gets shaky: I'm not exactly sure how the shader uses these two UV coordinate sets, or whether there is an existing convention for how they are used. Since the standard shader supports secondary/detail maps, you may have to share the UV0 set among all non-detail maps (albedo, normal, height, occlusion, etc.).

Copying a single layer of a 2D Texture Array from GPU to CPU

I'm using a 2D texture array to store some data. As I often want to bind single layers of this 2D texture array, I create individual GL_TEXTURE_2D texture views for each layer:
for (int l(0); l < m_layers; l++)
{
    // Create a GL_TEXTURE_2D view onto mip level 0 of layer l.
    QOpenGLTexture * view_texture =
        m_texture.createTextureView(QOpenGLTexture::Target::Target2D,
                                    m_texture_format,
                                    0, 0,   // mip level range
                                    l, l);  // layer range
    assert(view_texture != nullptr);  // check before using the view
    view_texture->setMinMagFilters(QOpenGLTexture::Filter::Linear,
                                   QOpenGLTexture::Filter::Linear);
    view_texture->setWrapMode(QOpenGLTexture::WrapMode::MirroredRepeat);
    m_texture_views.push_back(view_texture);
}
These 2D TextureViews work fine. However, if I want to retrieve the 2D texture data from the GPU side using that texture view, it doesn't work.
In other words, the following copies no data (but throws no GL errors):
glGetTexImage(GL_TEXTURE_2D, 0, m_pixel_format, m_pixel_type, (GLvoid*)m_raw_data[layer]);
However, retrieving the entire GL_TEXTURE_2D_ARRAY does work:
glGetTexImage(GL_TEXTURE_2D_ARRAY, 0, m_pixel_format, m_pixel_type, (GLvoid*)data);
There would obviously be a performance loss if I need to copy across all layers of the 2D texture array when only data for a single layer has been modified.
Is there a way to copy GPU->CPU only a single layer of a GL_TEXTURE_2D_ARRAY? I know there is for the opposite direction (i.e. CPU->GPU), so I would be surprised if there wasn't.
Looks like you found a solution using glGetTextureSubImage() from OpenGL 4.5. There is also a simple solution that works with OpenGL 3.2 or higher.
You can set the texture layer as an FBO attachment, and then use glReadPixels():
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
// Attach a single layer of the array texture to the read framebuffer.
glFramebufferTextureLayer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          textureId, 0, layer);
glReadPixels(...);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
What version of GL are you working with?
You are probably not going to like this, but... GL 4.5 introduces glGetTextureSubImage (...) to do precisely what you want. That is a pretty hefty version requirement for something so simple; it is also available in extension form (GL_ARB_get_texture_sub_image), but that extension is relatively new as well.
There is no special hardware requirement for this functionality, but it requires a very recent driver.
I would not despair just yet, however.
You can copy the entire texture array into a PBO and then read a sub-range of that PBO back using the buffer object API (e.g. glGetBufferSubData (...)). That requires extra memory on the GPU side, but it will let you transfer a single slice of this 2D array.
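A minimal sketch of that PBO route, assuming layerBytes is width * height * bytes-per-pixel for your m_pixel_format/m_pixel_type:

GLuint pbo = 0;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, layerBytes * m_layers, nullptr, GL_STREAM_READ);

// With a pack buffer bound, the last parameter of glGetTexImage is an
// offset into the PBO, so the whole array stays on the GPU side.
glBindTexture(GL_TEXTURE_2D_ARRAY, m_texture.textureId());
glGetTexImage(GL_TEXTURE_2D_ARRAY, 0, m_pixel_format, m_pixel_type, nullptr);

// Transfer only the one layer that changed back to the CPU.
glGetBufferSubData(GL_PIXEL_PACK_BUFFER, layer * layerBytes, layerBytes,
                   m_raw_data[layer]);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);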

Fast paletted screen blit with Direct3D 9

A game uses software rendering to draw a full-screen paletted (8-bit) image in memory.
What's the fastest way to put that image on the screen, using Direct3D?
Currently I convert the paletted image to RGB in software, then put it on a D3DUSAGE_DYNAMIC texture (which is locked with D3DLOCK_DISCARD).
Is there a faster way? E.g. using shaders to perform palettization?
Related questions:
Fast paletted screen blit with OpenGL - same question with OpenGL
How do I improve Direct3D streaming texture performance? - similar question from SDL author
Create a D3DFMT_L8 texture containing the paletted image, and a 256x1 D3DFMT_X8R8G8B8 texture containing the palette.
HLSL shader code:
uniform sampler2D image;
uniform sampler1D palette;
float4 main(in float2 coord:TEXCOORD) : COLOR
{
    return tex1D(palette, tex2D(image, coord).r * (255./256) + (0.5/256));
}
Note that the luminance (palette index) is adjusted with a multiply-add operation. This is necessary because palette index 255 is considered white (maximum luminance), which becomes 1.0f when represented as a float; reading the palette texture at that coordinate would wrap around (as only the fractional part is used) and fetch the first palette entry instead. The multiply-add maps index i to (i/255)*(255/256) + 0.5/256 = (i + 0.5)/256, the center of texel i, so index 255 samples at 255.5/256, safely inside the last texel.
Compile it with:
fxc /Tps_2_0 PaletteShader.hlsl /FhPaletteShader.h
Use it like this:
// ... create and populate texture and paletteTexture objects ...
d3dDevice->CreatePixelShader((DWORD*)g_ps20_main, &shader);
// ...
d3dDevice->SetTexture(1, paletteTexture);
d3dDevice->SetPixelShader(shader);
// ... draw texture to screen as textured quad as usual ...
You could write a simple pixel shader to handle the palettization. Create an L8 dynamic texture, copy your palettized image into it, and create a palette lookup texture (or an array of colors in constant memory). Then just render a full-screen quad with the palettized image set as a texture and a pixel shader that performs the palette lookup from the lookup texture or constant buffer.
That said, performing the palette conversion on the CPU shouldn't be very expensive on a modern CPU. Are you sure that is your performance bottleneck?

Convert 4 corners into a matrix for OpenGL to Direct3D sprite conversion

I am working on code for Scrolling Game Development Kit. An old release (2.0) of this program was based on DirectX and used Direct3D Sprite objects to draw all the graphics. It used the Transform property of the sprite object to specify how the texture rectangle would be transformed as it was being output to the display. The current release (2.1) was a conversion to OpenGL and uses GL TexCoord2 and GL Vertex2 calls to send the coordinates of the source and output rectangles when drawing sprites. Now someone says that their video card worked great with DirectX, but their OpenGL drivers do not support the GL_ARB extension necessary to use NPOTS textures (pretty basic). So I'm trying to go back to DirectX without reverting everything back to 2.0. Unfortunately, it seems much easier to get 4 points given a matrix than to get a matrix given 4 points. I have done away with all the matrix info in version 2.1, so I only have the 4 corner points left when calling the function that draws images on the display. Is there any way to use the 4 corner information to transform a Direct3D Sprite?
Alternatively, does anybody know why DirectX would be able to do something that OpenGL can't? Are some video cards' drivers just that bad, such that DirectX supports NPOTS textures but OpenGL doesn't?
It's probably worth reading up on how they do bump mapping. See e.g. this site. You end up with a tangent space matrix, which maps from world space to tangent space (the space relative to the current face). The purpose of that is taking a vector in world space, generally a vector from a light, and converting it into a vector in tangent space, that being the space that your texture defines surface normals in.
Anyway, if you inverted that matrix you'd have a mapping from tangent space to world space. Which I think is what you want? The mapping produced in that tutorial is purely for direction vectors, but expanding out to a 4x4 and anchoring the origin somewhere meaningful shouldn't be difficult.
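If the four corners always form a parallelogram (the usual case for rotated and scaled sprites), you can skip the tangent-space detour and assemble the affine matrix directly from three of them. A minimal sketch with D3DX types; the function name and corner convention are illustrative, not part of the kit:

#include <d3dx9math.h>

// p0, p1 and p3 are where the source rectangle's top-left, top-right and
// bottom-left corners should land; the fourth corner follows by affinity.
D3DXMATRIX TransformFromCorners(const D3DXVECTOR2& p0,
                                const D3DXVECTOR2& p1,
                                const D3DXVECTOR2& p3,
                                float srcWidth, float srcHeight)
{
    const D3DXVECTOR2 xAxis = (p1 - p0) / srcWidth;   // image of +x, per pixel
    const D3DXVECTOR2 yAxis = (p3 - p0) / srcHeight;  // image of +y, per pixel
    return D3DXMATRIX(xAxis.x, xAxis.y, 0, 0,
                      yAxis.x, yAxis.y, 0, 0,
                      0,       0,       1, 0,
                      p0.x,    p0.y,    0, 1);  // translation row (row vectors)
}

A quad whose corners don't form a parallelogram needs a projective transform, which this affine construction can't express; that asymmetry is why recovering a matrix from four arbitrary points is the harder direction.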

How to blur 3d object? (Papervision 3d)

How do I blur a 3D object in Papervision 3D, and save the newly created object as a new 3D model? (This could help with sky/cloud generation.)
Like in the 2D picture below, I've turned a rectangle into some blurry structure.
(image source: narod.ru)
Set useOwnContainer to true, then add the filter:
your3DObject.useOwnContainer = true;
your3DObject.filters = [new BlurFilter(4,4,2)];
When you set useOwnContainer to true, a new 2D DisplayObject is created to render the 3D projection into, and you can apply any of the usual DisplayObject properties to that.
Andy Zupko has a good post about this and render layers.
Using this will cost your processor a bit, so use it wisely. For example, in the twigital I worked on at disturb media, we used one Glow for the layer that holds all the characters, not individual render layers for each character. On other projects we 'baked' the filters into bitmaps and used those; this meant a bit more memory, but freed up the processor a bit for other tasks.
HTH
I'm not familiar with Papervision 3D, but blurring in 3D is normally just blurring in 2D. You pick the object you want blurred, determine the blurring you want for that object, then apply a 2D blur before compositing other objects into the scene.
This is a cheat because in principle, different parts of the object may need different degrees of (depth of field) blurring. But it's not the only cheat in 3D graphics.
That said, there are other approaches. Ray-tracing can give true depth-of-field effects (if you're willing to pay the render-time costs). It's also possible to apply a blur to a 3D "voxel" grid instead of a 2D pixel grid - though I imagine that's more useful for smoothing shapes from e.g. medical scanners than for giving depth-of-field effects.
Blur is a 2D operation; try rendering the object into a texture and blurring that texture.
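To make the "blur that texture" step concrete, here is a minimal sketch of one pass of a separable box blur over an 8-bit single-channel buffer, in C++ for brevity (a second pass over columns completes the blur):

#include <cstdint>
#include <vector>

// Horizontal box blur; dst must already be sized to width * height.
void boxBlurH(const std::vector<uint8_t>& src, std::vector<uint8_t>& dst,
              int width, int height, int radius)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            int sum = 0, count = 0;
            for (int k = -radius; k <= radius; ++k)
            {
                const int xx = x + k;
                if (xx < 0 || xx >= width) continue;  // skip samples off the edge
                sum += src[y * width + xx];
                ++count;
            }
            dst[y * width + x] = (uint8_t)(sum / count);
        }
}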
