OpenCL image with mirroring at border

Can I set up an OpenCL image so that coordinate accesses past the boundary
of the image return the mirrored pixels?
For example, if the image has dimensions width by height, then
read_floati(width, 0)
will return
read_floati(width-2,0)

Yes. Read section 6.11.13 of the OpenCL specification. OpenCL images are read using (for example) the read_imagef function, which takes a sampler that can be set up for mirroring with CLK_ADDRESS_MIRRORED_REPEAT.
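A minimal kernel-side sketch (kernel and argument names are mine, not from the question); note that CLK_ADDRESS_MIRRORED_REPEAT can only be used with normalized coordinates, so the pixel position is converted to the [0, 1] range first:
const sampler_t mirrorSampler = CLK_NORMALIZED_COORDS_TRUE |
                                CLK_ADDRESS_MIRRORED_REPEAT |
                                CLK_FILTER_NEAREST;
kernel void copyWithMirror(read_only image2d_t src, global float4 *dst)
{
    int2 pos = (int2)(get_global_id(0), get_global_id(1));
    int2 dim = get_image_dim(src);
    // Sample one pixel to the right of pos; at the right edge this coordinate
    // lies outside the image and the sampler mirrors it back inside.
    float2 coord = (float2)((pos.x + 1 + 0.5f) / dim.x, (pos.y + 0.5f) / dim.y);
    dst[pos.y * dim.x + pos.x] = read_imagef(src, mirrorSampler, coord);
}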

Related

What is the convention for image (RGB) dimensions in Torch

What is the convention for image dimensions in Torch? I'm not a Torch user, but I'm reading some code and need to know whether Torch typically represents an RGB image as {height, width, channels} or as {channels, width, height}, which is what I think I'm seeing in the code I'm looking at.
If you are using the image package (which is the usual way to go), then it usually represents images as tensors of size (channels, height, width).

Reading generated image from OpenCL kernel

I have the following OpenCl kernel code:
kernel void generateImage(global write_only image2d_t output_image)
{
    const int2 pos = {get_global_id(0), get_global_id(1)};
    write_imagef(output_image, (int2)(pos.x, pos.y), (float4)(1.0f, 0.0f, 0.0f, 0.0f));
}
How can I read the generated image on the CPU side to render it? I am using plain C. Also, a link to a good tutorial would be great.
The clEnqueueReadImage() function is an image object's equivalent to a buffer object's clEnqueueReadBuffer() function - with similar semantics. The main difference is that (2D) images have a "pitch" - this is the number of bytes by which you advance in memory if you move 1 pixel along the y axis. (This is not necessarily equal to width times bytes per pixel but can be larger if your destination has special storage/alignment requirements.)
The alternative, much as is the case with buffer objects, is to memory-map the image using clEnqueueMapImage().
How you further process the image once your host program can access it depends on what you're trying to do and what platform you're developing for.
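As a hedged sketch (queue, output_image, width and height are assumed to exist, and the image is assumed to have been created as CL_RGBA / CL_FLOAT to match the write_imagef above):
size_t origin[3] = {0, 0, 0};            // top-left corner of the read
size_t region[3] = {width, height, 1};   // whole 2D image; depth must be 1
float *pixels = (float*)malloc(width * height * 4 * sizeof(float));
// Blocking read; a row pitch of 0 means the destination is tightly packed.
cl_int err = clEnqueueReadImage(queue, output_image, CL_TRUE, origin, region,
                                0, 0, pixels, 0, NULL, NULL);
// pixels[(y * width + x) * 4 + c] now holds channel c of pixel (x, y).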

QPainter::drawImage prints different size than QImage::save and print from Photoshop

I'm scaling a QImage, currently as so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the reason the latter case prints at the original size is that the image has already been drawn before the calls that set the dots per meter. Alternatively, set the dots per meter on the original image before loading its content.
In contrast, when saving, it appears that the device you save to copies the values you have set for the dots per meter on the image, then draws to that device.
I would expect that creating a second QImage, setting its dots per meter, and then copying from the original into that second image would achieve the result you're looking for. Alternatively, you may just be able to set the dots per meter before loading the content into the original QImage.
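A hedged sketch of the second-image approach (untested; myQPainter is assumed to be the painter already open on the printer):
QImage printImg(img.size(), img.format());
printImg.setDotsPerMeterX(img.dotsPerMeterX() * 2); // resolution set before any drawing
printImg.setDotsPerMeterY(img.dotsPerMeterY() * 2);
{
    QPainter p(&printImg);
    p.drawImage(0, 0, img);                         // copy the original content across
}
myQPainter.drawImage(0, 0, printImg);               // print the copy instead of the original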

Fast paletted screen blit with Direct3D 9

A game uses software rendering to draw a full-screen paletted (8-bit) image in memory.
What's the fastest way to put that image on the screen, using Direct3D?
Currently I convert the paletted image to RGB in software, then put it on a D3DUSAGE_DYNAMIC texture (which is locked with D3DLOCK_DISCARD).
Is there a faster way? E.g. using shaders to perform palettization?
Related questions:
Fast paletted screen blit with OpenGL - same question with OpenGL
How do I improve Direct3D streaming texture performance? - similar question from SDL author
Create a D3DFMT_L8 texture containing the paletted image, and a 256x1 D3DFMT_X8R8G8B8 texture containing the palette.
HLSL shader code:
uniform sampler2D image;
uniform sampler1D palette;
float4 main(in float2 coord:TEXCOORD) : COLOR
{
    return tex1D(palette, tex2D(image, coord).r * (255./256) + (0.5/256));
}
Note that the luminance value (the palette index) is adjusted with a multiply-add operation. This is necessary because palette index 255 is treated as white (maximum luminance), which becomes 1.0f when represented as a float. Reading the palette texture at that coordinate would cause it to wrap around (as only the fractional part is used) and read the first palette entry instead.
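To see why the scale and bias work: an L8 texel holding palette index i samples as i/255, and i/255 * (255./256) + 0.5/256 = (i + 0.5)/256, which is exactly the center of texel i in the 256-wide palette texture, so index 255 maps to the last entry instead of wrapping to the first.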
Compile it with:
fxc /Tps_2_0 PaletteShader.hlsl /FhPaletteShader.h
Use it like this:
// ... create and populate texture and paletteTexture objects ...
d3dDevice->CreatePixelShader((DWORD*)g_ps20_main, &shader);
// ...
d3dDevice->SetTexture(1, paletteTexture);
d3dDevice->SetPixelShader(shader);
// ... draw texture to screen as textured quad as usual ...
You could write a simple pixel shader to handle the palettization. Create an L8 dynamic texture and copy your palettized image to it, and create a palette lookup texture (or an array of colors in constant registers). Then just render a full-screen quad with the palettized image set as a texture and a pixel shader that performs the palette lookup from the lookup texture or the shader constants.
That said, performing the palette conversion on the CPU shouldn't be very expensive on a modern CPU. Are you sure that is your performance bottleneck?
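For completeness, a hedged host-side sketch of the texture setup both answers describe (device, width, height and srcPixels are assumed names; error checking omitted):
IDirect3DTexture9 *imageTex = NULL, *paletteTex = NULL;
device->CreateTexture(width, height, 1, D3DUSAGE_DYNAMIC, D3DFMT_L8,
                      D3DPOOL_DEFAULT, &imageTex, NULL);
device->CreateTexture(256, 1, 1, D3DUSAGE_DYNAMIC, D3DFMT_X8R8G8B8,
                      D3DPOOL_DEFAULT, &paletteTex, NULL);
// Upload the 8-bit frame whenever it changes, honoring the surface pitch.
D3DLOCKED_RECT lr;
imageTex->LockRect(0, &lr, NULL, D3DLOCK_DISCARD);
for (UINT y = 0; y < height; ++y)
    memcpy((BYTE*)lr.pBits + y * lr.Pitch, srcPixels + y * width, width);
imageTex->UnlockRect(0);
// The 256-entry palette is copied into paletteTex the same way, 4 bytes per entry.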

DirectShow: IVMRWindowlessControl::SetVideoPosition stride(?)

I have my own video source and am using VMR7. When I use 24-bit color depth, my graph contains the Color Space Converter filter, which converts the 24-bit format to ARGB32, and everything works fine. When I use 32-bit color depth, my image looks disintegrated; in this case my source produces RGB32 images and passes them directly to VMR7 without color conversion. While resizing the window I noticed that at certain values of the destination height the image becomes "integrated" (normal). I do not know where the problem is. Here are example photos: http://talbot.szm.com/desintegrated.jpg and http://talbot.szm.com/integrated.jpg
Thank you for your help.
You need to check for a MediaType change in your FillBuffer method.
HRESULT hr = pSample->GetMediaType((AM_MEDIA_TYPE**)&pmt);
if (S_OK == hr)
{
    SetMediaType(pmt);
    DeleteMediaType(pmt);
}
Depending on your graphics hardware you can get a different width for your buffer. This means you may connect with an image width of 1000 pixels, but with the first sample you get a new width for your buffer; in my example it was 1024 px.
Now you have the new width in BITMAPINFOHEADER.biWidth and the old size in VIDEOINFOHEADER.rcSource. So one line of your image is 1024 pixels wide, not 1000 pixels. If you don't take this into account you can end up with pictures like yours.
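A hedged sketch of what that means inside FillBuffer (32-bit pixels assumed; pSrc, srcWidth, height and pVih are illustrative names, with pVih pointing at the updated VIDEOINFOHEADER):
BYTE *pDst = NULL;
pSample->GetPointer(&pDst);
const LONG dstStride = pVih->bmiHeader.biWidth * 4; // e.g. 1024 * 4 after the change
const LONG srcStride = srcWidth * 4;                // e.g. 1000 * 4
for (LONG y = 0; y < height; ++y)                   // copy line by line at the new stride
    memcpy(pDst + y * dstStride, pSrc + y * srcStride, srcStride);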
