I have my own video source and am using VMR7. When I use 24-bit color depth, my graph contains the Color Space Converter filter, which converts the 24-bit frames to ARGB32, and everything works fine. When I use 32-bit color depth, my image looks disintegrated. In this case my source produces RGB32 images and passes them directly to VMR7 without color conversion. While resizing the window I noticed that, as the destination height changes, the image becomes "integrated" (normal) at certain specific values of destination height. I do not know where the problem is. Here are example photos: http://talbot.szm.com/desintegrated.jpg and http://talbot.szm.com/integrated.jpg
Thank you for your help.
You need to check for a MediaType change in your FillBuffer method.
CMediaType *pmt = NULL;
HRESULT hr = pSample->GetMediaType((AM_MEDIA_TYPE**)&pmt);
if (S_OK == hr)   // S_FALSE means the media type did not change
{
    SetMediaType(pmt);    // adopt the new format for subsequent samples
    DeleteMediaType(pmt); // GetMediaType allocates; the caller must free it
}
Depending on your graphics hardware you can get a different width for your buffer. That is, you connect with an image width of 1000 pixels, but with the first sample you get a new width for your buffer; in my example it was 1024 px.
Now you have the new row width in BITMAPINFOHEADER.biWidth and the old visible size in VIDEOINFOHEADER.rcSource. So one line of your buffer is 1024 pixels wide, not 1000. If you ignore this, you can end up with pictures like yours.
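To illustrate (a minimal sketch, not your filter's actual code; pSrc, pDst, and the helper name are assumed), a frame copy that honors the negotiated stride would look something like this:

#include <dshow.h>
#include <cstring>
#include <cstdlib>

// Copy one frame into the sample buffer, advancing by the negotiated
// stride (biWidth) rather than the visible width (rcSource).
void CopyFrameWithStride(BYTE* pDst, const BYTE* pSrc, const VIDEOINFOHEADER* pVih)
{
    const LONG visibleWidth = pVih->rcSource.right - pVih->rcSource.left; // e.g. 1000 px
    const LONG stride       = pVih->bmiHeader.biWidth;                    // e.g. 1024 px
    const LONG height       = labs(pVih->bmiHeader.biHeight);
    const LONG bpp          = pVih->bmiHeader.biBitCount / 8;             // 4 for RGB32

    for (LONG y = 0; y < height; ++y)
    {
        memcpy(pDst + y * stride * bpp,       // destination rows include padding
               pSrc + y * visibleWidth * bpp, // source rows are tightly packed
               visibleWidth * bpp);           // copy only the visible pixels
    }
}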
Summary
I want to write the OpenGL pixels (GL_RGB) read by glReadPixels into a QImage. This renders correctly, but when I resize the window, it scales weirdly and distorts my shape (a triangle).
What I tried
I tried (QImage)img.scale(width(), height(), Qt::KeepAspectRatio), but it didn't solve the problem.
I also played with how I write the pixel buffer from glReadPixels into the QImage, but that didn't work either.
Should I read the pixels into three buffers (GLubyte *rpixel, *gpixel, *bpixel) or into one (GLubyte **pixels)? Which one is easier? I will have to resize the array when I resize my window, so I want dynamic arrays.
Some code
I have uploaded minimal code reproducing the bug/weird behaviour to GitHub. Download and compile it using Qt Creator.
https://github.com/rivenblades/GlReadPixelsQT/tree/master
Pictures
Here is how I want it to look (it works when not resizing):
Here is after resizing (weird behaviour):
As you can see, when resizing, the image gets split at the right and continues at the left, probably on another row. So I am guessing the size of the image is wrong (it needs more width?).
By default, the start of each row of an image is assumed to be aligned to 4 bytes. This is because the GL_PACK_ALIGNMENT (respectively GL_UNPACK_ALIGNMENT) parameter is 4 by default; see glPixelStore.
When the framebuffer is read by glReadPixels, the GL_PACK_ALIGNMENT parameter is taken into account.
If you want to read the image into tightly packed memory, with no alignment at the start of each line, then you have to change the GL_PACK_ALIGNMENT parameter to 1 before reading the color planes of the framebuffer:
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_RED, GL_UNSIGNED_BYTE, tga.rpic);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_GREEN, GL_UNSIGNED_BYTE, tga.gpic);
glReadPixels(0,0,unchangable_w, unchangable_h, GL_BLUE, GL_UNSIGNED_BYTE, tga.bpic);
If that is missed, it causes a shift effect at each line of the image, unless the length of a line of the image in bytes is divisible by 4.
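Since the target is a QImage, you can also do a single tightly packed read; here is a minimal sketch (an assumption-based example, not the repository's code; it requires a current GL context):

#include <GL/gl.h>
#include <QImage>
#include <vector>

// Read the framebuffer tightly packed, then wrap the buffer in a QImage
// with an explicit bytes-per-line, so Qt does not assume its own padding.
QImage grabFramebuffer(int w, int h)
{
    std::vector<GLubyte> pixels(static_cast<size_t>(w) * h * 3); // packed RGB
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // no alignment at the start of a row
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    QImage img(pixels.data(), w, h, w * 3, QImage::Format_RGB888);
    return img.mirrored(); // OpenGL rows start at the bottom; mirrored()
                           // also makes a deep copy of the local buffer
}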
I'm scaling a QImage, currently like so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX it states: -
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect that the reason the latter case prints at the original size is that the image content has already been drawn before the calls that set the dots per meter.
In contrast, when saving, it appears that the device you save to copies the values you have set for the dots per meter on the image, then draws to that device.
I would expect that creating a second QImage, setting its dots per meter, and then copying the original image into it would achieve the result you're looking for (a sketch follows below). Alternatively, you may just be able to set the dots per meter before loading the content into the original QImage.
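A minimal sketch of that first approach (untested, and the variable names are illustrative; it assumes img already holds the loaded image):

#include <QImage>
#include <QPainter>

// Create a second image and set the doubled dots-per-meter *before*
// any drawing takes place, then paint the original into it.
QImage scaledCopy(img.size(), img.format());
scaledCopy.fill(Qt::white);  // start from a defined background
scaledCopy.setDotsPerMeterX(img.dotsPerMeterX() * 2);
scaledCopy.setDotsPerMeterY(img.dotsPerMeterY() * 2);

QPainter painter(&scaledCopy);
painter.drawImage(0, 0, img);
painter.end();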
My project is a large map that can be panned around, containing "info spots" that can be clicked. For now I use four large images, each spanning 5000x5000 pixels (so the total map size is 20,000x20,000 pixels). On my AMD Phenom 9950 Quad-Core with 8 GB RAM and an NVIDIA GeForce 610 this takes quite a while to load, though panning is fast afterwards. I tried tiling the map, but there was no visible improvement in loading speed, as each image still has to be loaded completely before it can be split into tiles.
The only way to get a real improvement in speed and memory usage would be to load only those parts of the map image that are actually shown.
Does PyGame offer any way of doing so? I'm thinking of a "theoretical" tile map which contains the needed x and y values of each tile (grouped a little, so there is less to compute each frame) and theoretical image information (like which image, and which position within it). Only when a tile comes near the visible part of the screen is its image information loaded; otherwise it remains just a number and a string value.
Would this make any sense? Is there any way to achieve this?
The only way to accomplish this with Pygame is to break the images themselves into smaller squares (say 250x250) and then, as the user pans, take the current top-left x,y coordinates and the screen size, load any tiles that fit into that screen (or into a buffer zone around its edge) into memory, and clear out any tiles outside that range. The math is fairly straightforward unless you add support for rotation and/or zooming. I would name the tiles after their location as a multiple of the square size (for example, the tile at 500,500 would be named 2-2.png). That makes it trivial to generate the name of the tile needed at each location: take the current x/y coordinates, integer-divide by 250, subtract the buffer tile amount, and then loop over the screen width integer-divided by 250, plus 1, plus the buffer tile amount, for each row; do the same loop for each column. A sketch of this follows below.
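A minimal sketch of that idea (the 250 px tile size, the file naming, and the helper names are illustrative assumptions; loading with convert() assumes the display is already initialized):

import pygame

TILE = 250   # tile edge length in pixels
BUFFER = 1   # extra ring of tiles kept loaded around the screen

def visible_tiles(cam_x, cam_y, screen_w, screen_h):
    """Yield the (col, row) index of every tile touching the view plus buffer."""
    first_col = cam_x // TILE - BUFFER
    first_row = cam_y // TILE - BUFFER
    cols = screen_w // TILE + 1 + 2 * BUFFER
    rows = screen_h // TILE + 1 + 2 * BUFFER
    for row in range(first_row, first_row + rows):
        for col in range(first_col, first_col + cols):
            yield col, row

tile_cache = {}  # (col, row) -> Surface, loaded on demand

def get_tile(col, row):
    """Load a tile image named like '2-2.png' on first use and cache it."""
    if (col, row) not in tile_cache:
        tile_cache[(col, row)] = pygame.image.load("%d-%d.png" % (col, row)).convert()
    return tile_cache[(col, row)]

def drop_offscreen(wanted):
    """Evict any cached tile that is no longer in the wanted set."""
    for key in list(tile_cache):
        if key not in wanted:
            del tile_cache[key]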
After reading @lukevp's reply, I was interested and tried this:
http://imgur.com/Q1N2UtU
Grab this image and create a folder named 'test_data'. Now place the image and the code outside the test_data folder and run it. The output will be cropped tiles named in order (it's a bit off at the edges, as the image is 1920x1080). You can try it with your own size, though. Also note that I am on Ubuntu, so adjust the paths as appropriate.
OUTPUT: http://imgur.com/2v4ucGI Final link: http://imgur.com/a/GHc9l
import pygame, os

pygame.init()

original_image = pygame.image.load('test_pic.jpg')
x_max = 1920
y_max = 1080
current_x = 0
current_y = 0
count = 1

begin_surf = pygame.Surface((x_max, y_max), flags=pygame.SRCALPHA)
begin_surf.blit(original_image, (0, 0))
cropped_surf = pygame.Surface((100, 100), flags=pygame.SRCALPHA)

while current_y + 100 < y_max:
    while current_x + 100 < x_max:
        # Copy a 100x100 region of the big surface into the small one ...
        cropped_surf.blit(begin_surf, (0, 0), (current_x, current_y, 100, 100))
        # ... and save it, numbered in reading order.
        pygame.image.save(cropped_surf, os.path.join("test_data", str(count) + '.jpg'))
        current_x += 100
        count += 1
    current_x = 0
    current_y += 100
The next step would be to actually load those tiles and span them as he said.
I have a PDF file which needs to be converted to PostScript (using pdf2ps) and sent to an old (Lanier 2145) printer. This file has a code at the very bottom of one of the pages which is unfortunately being cut off.
After spending a long time mucking about with CUPS trying to get it to scale, we have given up and started manipulating the actual PS file.
I can scale the pages using
<</Install { .80 .80 scale } bind >> setpagedevice
However, the scaling is anchored to the bottom left-hand corner rather than the center of the page. This means that the very bottom of the page is still cut off, no matter what the scale.
How can you set where the scaling is done from? I would prefer to scale from the center, but could live with scaling from the top.
Thanks
You can do a translate in the procedure to move the origin. For example:
<</Install { 18 18 translate .80 .80 scale } bind >> setpagedevice
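To scale from the center instead, translate by half of the space freed by the scaling before applying it. As a worked example, assuming a 612x792 pt (US Letter) page (for other sizes compute (1 - 0.80) x dimension / 2): 612 x 0.20 / 2 = 61.2 and 792 x 0.20 / 2 = 79.2, so:

<</Install { 61.2 79.2 translate .80 .80 scale } bind >> setpagedevice
% To pin the top edge instead, use: 0 158.4 translate .80 .80 scale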
I've got an array of differently sized images. I want to place these images on a canvas in a sort of automated collage.
Does anyone have an idea of how to work the logic behind this concept?
All my images have heights divisible by 36 pixels and widths divisible by 9 pixels. They have mouseDown functions that allow you to drag and drop them. When dropped, an image snaps to the closest x point divisible by 9 and y point divisible by 36. There is a grid drawn on top of the canvas.
I've sorted the array of images based on height, then based on their widths.
imagesArray.sortOn("height", Array.NUMERIC | Array.DESCENDING);
imagesArray.sortOn("width", Array.NUMERIC | Array.DESCENDING);
I'd like to take the largest image (imagesArray[0]) and put it in the corner at x,y = 0,0, then randomize the rest of the images and fit them into the collage canvas.
What you are trying to do sounds like treemapping.
I think this is what's known as a "packing problem", or maybe a "2D bin packing problem". Googling those should find you some information; doing it efficiently is not a simple task. If you only have a small number of images, the easy methods would be:
Random... just randomly place images until no more can fit. Run this random placement 10, 100, 1000 or more times and pick the best result, where "best" is determined by some criterion like the least wasted space or the most pictures placed (a sketch of this follows after the list).
Brute force... try every single possible combination, one by one, and pick the best one. The downside to this method is that as the number of items scales up, the amount of computation scales up very quickly.
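A rough ActionScript 3 sketch of the random approach (illustrative only; rectangles stand in for the images, with positions kept on the 9x36 px snapping grid described above):

import flash.geom.Rectangle;

// Try several random layouts and keep the one that places the most images.
function randomCollage(sizes:Array, canvasW:int, canvasH:int, tries:int):Array {
    var best:Array = null;
    for (var t:int = 0; t < tries; t++) {
        var placed:Array = [];
        for each (var s:Object in sizes) {   // s has .w and .h pixel sizes
            for (var attempt:int = 0; attempt < 20; attempt++) {
                var r:Rectangle = new Rectangle(
                    9 * int(Math.random() * ((canvasW - s.w) / 9)),
                    36 * int(Math.random() * ((canvasH - s.h) / 36)),
                    s.w, s.h);
                var ok:Boolean = true;
                for each (var p:Rectangle in placed) {
                    if (r.intersects(p)) { ok = false; break; }
                }
                if (ok) { placed.push(r); break; }
            }
        }
        if (best == null || placed.length > best.length) best = placed;
    }
    return best; // "best" here means the layout that fit the most images
}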
I researched treemapping and packing problems, and eventually decided to create an array of all the points on the canvas and assign each a value of "empty". I then looped through my array of images and placed them on points that were "empty", reassigning all the points each image occupied with the source name of that image. It worked beautifully, but it definitely takes time to create the array. A sketch of the idea is below.
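Roughly, in ActionScript 3 (an illustrative sketch, not my production code; canvasWidth and canvasHeight are assumed, and one cell covers a 9x36 px grid square to match the snapping):

import flash.display.DisplayObject;

// One cell per 9x36 px grid square; null means "empty".
var cols:int = canvasWidth / 9;
var rows:int = canvasHeight / 36;
var grid:Array = [];
for (var r:int = 0; r < rows; r++) {
    grid[r] = [];
    for (var c:int = 0; c < cols; c++) grid[r][c] = null;
}

// True if img (pixel width/height) fits at the given cell with no overlap.
function fits(img:DisplayObject, row:int, col:int):Boolean {
    var w:int = img.width / 9;
    var h:int = img.height / 36;
    if (row + h > rows || col + w > cols) return false;
    for (var rr:int = row; rr < row + h; rr++)
        for (var cc:int = col; cc < col + w; cc++)
            if (grid[rr][cc] != null) return false;
    return true;
}

// Mark the occupied cells with the image's name and position the image.
function place(img:DisplayObject, row:int, col:int):void {
    var w:int = img.width / 9;
    var h:int = img.height / 36;
    for (var rr:int = row; rr < row + h; rr++)
        for (var cc:int = col; cc < col + w; cc++)
            grid[rr][cc] = img.name;
    img.x = col * 9;
    img.y = row * 36;
}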
I took a different approach: I just fit all the images to a tile size and tile them into a document.
Images are virtually center-cropped to the tile size via a layer mask.
Paste Image Roll Script http://www.mouseprints.net/old/dpr/PasteImageRoll.html
http://www.mouseprints.net/old/dpr/PasteImageRoll.jsx