I need to create a clickable component with a custom shape. Its appearance is set by an SVG file, and the clickable area must be constrained to the SVG shape. I found a great example of what I need, but it uses a pixel mask or a circle mask. Can you help me find a solution?
Most probably you will need to create a pixel mask yourself from the SVG shape.
The question is how to approach this, since Qt does not offer a simple way of doing it directly. However, you can render the SVG offscreen into an image that you initialize with transparent pixels or a color key, and then use that image as a mask.
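For example, here is a rough sketch of that approach (the helper name and resource path are made up; it needs the Qt SVG module):

#include <QBitmap>
#include <QImage>
#include <QPainter>
#include <QPixmap>
#include <QSvgRenderer>

// Render the SVG offscreen into a transparent image and derive a mask
// from the resulting alpha channel.
QBitmap maskFromSvg(const QString &svgPath, const QSize &size)
{
    QImage image(size, QImage::Format_ARGB32_Premultiplied);
    image.fill(Qt::transparent);   // start fully transparent

    QSvgRenderer renderer(svgPath);
    QPainter painter(&image);
    renderer.render(&painter);     // draw the SVG shape offscreen
    painter.end();

    // Every pixel the SVG touched becomes part of the mask.
    return QPixmap::fromImage(image).mask();
}

Passing the result to QWidget::setMask() restricts both painting and mouse hit-testing to the shape, e.g. setMask(maskFromSvg(":/shape.svg", size())) in your widget.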
If the size of your viewport changes frequently, you might want to render the mask at a higher resolution first and then scale it down accordingly for performance. Also note that if your SVG is animated, you would have to account for that.
Or you might use a different library than Qt to obtain the mask. Also, if your SVG contains only a single polygon, you could go for a point-in-polygon test. But I doubt that is the case, and such a test is also not trivial when the polygon is non-convex (you typically end up with a scanline algorithm anyway).
Here's the issue at hand. I need to be able to pick a background (an image showing an object, let's say a starship model). I want to be able to apply various previously prepared textures to different areas on it, as a kind of "colour your own object" app, but without the need to prepare dozens of individual segments.
OK, so this is one, newbie-friendly way to do it. We have these images:
Two slightly different versions: the original photo and a quickly Photoshopped one. Let's say we only want the Borg-ish green deflector and warp nacelle from the second picture, without the odd pink hull. You need a mask: basically an image of equal resolution (or at least the same aspect ratio, so it can be reliably scaled to the image's resolution), with the area of interest filled with color (or whatever else) and everything else transparent. As the mask, I used a few brush strokes on an empty layer set to overlay mode, then saved it as a PNG with transparency. And this is how the code went:
First, import the images:
QPixmap background("orig.png"); //import base image
//import alt version/texture/whatever you want, anything will work with a good mask
QPixmap element("alt.png");
QPixmap mask("deflector.png"); //mask. Just nacelles and deflector.
Then, isolate the area that interests us from the alt version:
QPainter painter(&element);
painter.setCompositionMode(QPainter::CompositionMode_DestinationIn);
painter.drawPixmap(0, 0, mask.width(), mask.height(), mask);
painter.end(); // finish painting on 'element' before using it as a source
And finally, draw it onto the background:
QPainter inter(&background);
inter.drawPixmap(0, 0, element);
inter.end(); // release the painter before handing the pixmap to the label
ui->label->setPixmap(background);
The result:
This method respects any and all transparency you may have set up in Photoshop or other image editing software.
A simple but effective solution for when your app has to work with graphics prepared by someone else, elsewhere.
I have an SCNNode whose geometry is populated from a COLLADA file (.dae) and displays correctly on screen. I can apply materials to the geometry easily enough; however, I'd like to change the scale of the material.
I currently populate it with
nodeArray[0].geometry?.firstMaterial!.diffuse.contents = "wood.png"
but the scale of the material is too small. While I could edit the PNG in GIMP or something similar and import it as wood2.png, is there any way I can set the material scale programmatically?
What do you mean by "too small"?
Geometries are made of several sources, such as the vertices' positions but also their texture coordinates. These texture coordinates (they live in [0,1] x [0,1]) are specified per vertex and indicate where to look in the texture.
In your 3D modeler, please check that your texture coordinates match what you want (i.e. they cover the whole image, going from 0 to 1 in every direction), and make sure that your image has no extra transparent margin or other wasted space.
You can have a look at SCNMaterialProperty's contentsTransform property. But please check your model and texture before using it.
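For instance, a small sketch of the contentsTransform approach (the 3x3 tiling factor is just an example, and the node lookup mirrors the question):

import SceneKit

if let material = nodeArray[0].geometry?.firstMaterial {
    material.diffuse.contents = "wood.png"
    material.diffuse.wrapS = .repeat   // tile instead of clamping at the edges
    material.diffuse.wrapT = .repeat
    // Scales the texture coordinates: a factor > 1 tiles the texture more
    // (the grain looks smaller), a factor < 1 stretches it so it appears larger.
    material.diffuse.contentsTransform = SCNMatrix4MakeScale(3, 3, 1)
}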
You need to open your UV snapshot in image editing software like Photoshop, scale the wood texture over your UVs there, then resave your PNG/JPG and move it back into Xcode.
Hello!
I am not much of a coder but a visual artist, so I made a graphic to explain my project a bit.
http://imgur.com/OlllSyz
In the image you can see my setup.
I want the WebGL background to be visible only through the pattern layer masks.
These masks are moving and should be additive where they overlap (like in the graphic shown at the bottom) while still masking. (I don't mean the blend mode at this point.)
Here you can see what I mean by pattern, and my progress so far:
http://www.kevinbock.de/webgl/index.html
The moving black pattern elements should mask the background but not each other. So basically: all black elements should let the WebGL background show through, and instead of masking each other they should add together.
At the moment no masking is applied because I don't know how to do it.
Now here are my questions:
How do I turn the three layers into masks that add to each other where they overlap?
Do I need SVG masking, CSS masking, or path clipping?
At the moment I am using three separate SVG files.
Do I need to add mask commands to the SVG files, to some part of the HTML, and/or to the CSS?
The movement of each pattern layer is achieved by animating the CSS of the div it sits in.
Does it matter for making masks additive?
Or should all three pattern layers be in one SVG file and be animated there with SVG animations?
Do I need a fifth layer between the WebGL layer and the first pattern layer, i.e. a fullscreen black rectangle that the patterns are masked out of, so the WebGL layer can shine through?
I'm currently upgrading from a DirectDraw system (yeah I know, it's very old) to DirectX10. It's a 2D system but simulates real world as each object has a range/depth in meters. There is a background image that is rendered and kept on the farthest z-order. All other objects are drawn on top of it and scaled according to what their range/depth would be. However, there is a certain type of object I have that is defined as a polygon and renders a bit different. It acts as an invisible occluder. For instance, an occluder is at a range/depth of 40 (my units are meters) and is defined by 5 vertices (a pentagon) in the middle of the viewport. There is a sprite object at the same viewport position but at a range/depth of 50. The desired output is to have the sprite object not rendered, but the background should be seen through both of them. So in essence these are invisible occluders, except that they do not occlude the background.
As a note, the occluders and the sprites all derive from the same base object type and are mixed together in a depth-sorted container.
My idea was to override the occluders' Render method so they draw to a render target, writing out their range/depth values. I would then render the sprites as normal, but in the vertex or pixel shader compare the sprite's range value with the range values in the render target. However, it seems to me that I'd have to read from and write to the same render target in one pass before Present is called, and that's undefined. If I were to render the occluders, unbind the render target, and pass the texture in for a lookup by the other objects, I'd have to convert the sprite positions into that texture space, which may be non-trivial. Are either of these methods possible?
After thinking some more about it, another idea came to mind. I could take the occluders and set their texture coordinates in reference to the background texture. That way they would draw the same color values as the background, and because of the depth sorting, if a sprite were behind an occluder the user would still see the "background", though really it is the occluder looking like it.
Sorry if this is less a question and more thinking out loud, but I wanted to get impressions and ideas on the best way to go about this. It seems to me I have options, but I wasn't certain which is most efficient and which is easiest. Thanks in advance for any responses.
As stated in my comments, I went with setting the texture coordinates in reference to the background image and then making sure the occluder, which was a simple polygon, was triangulated properly to make use of those texture coordinates.
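For reference, a minimal sketch of how those texture coordinates can be derived (the struct and function names are made up, and it assumes the background texture fills the whole viewport):

#include <cstddef>

// Give each occluder vertex texture coordinates that sample the background
// texture at the same screen position, so the occluder reproduces the
// background exactly where it is drawn.
struct OccluderVertex
{
    float x, y, z;   // viewport-space position (z carries the range/depth)
    float u, v;      // coordinates into the background texture
};

void assignBackgroundUVs(OccluderVertex *verts, std::size_t count,
                         float viewportWidth, float viewportHeight)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        // Normalized screen position == background UV when the background
        // spans the full viewport.
        verts[i].u = verts[i].x / viewportWidth;
        verts[i].v = verts[i].y / viewportHeight;
    }
}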
I have a Flex component with a background image. The image is sharp in the beginning, but becomes jagged whenever I scale the component using scaleX and scaleY. How would I make the image anti-alias so that, if it's scaled to 0.75, the lines are smooth rather than jagged?
Here is the image
Here is the scaled version
And the unscaled (good) one
If you load the image with an Image component, you can cast the content property of the component to a Bitmap and then set smoothing to true. Unfortunately, the Image component doesn't provide this functionality out of the box. However, it's rather easy to hack in.
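For example, a quick sketch of that idea (it assumes 'img' is an mx:Image whose content has already finished loading, e.g. inside its complete handler):

import flash.display.Bitmap;

var bmp:Bitmap = img.content as Bitmap;
if (bmp != null)
{
    bmp.smoothing = true; // bilinear smoothing when the bitmap is scaled
}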
Here is a tutorial to show you how to create such a component:
http://www.adobe.com/cfusion/communityengine/index.cfm?event=showdetails&productId=2&postId=4001
However, if this is set using the backgroundImage style of a component, you just might be out of luck unless you override updateDisplayList and perform the drawing of the bitmap yourself by using Graphics.beginBitmapFill (which does provide smoothing support).
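If you do go the updateDisplayList route, a rough sketch might look like this (backgroundData is a hypothetical BitmapData you supply yourself):

import flash.display.BitmapData;
import flash.geom.Matrix;

override protected function updateDisplayList(unscaledWidth:Number,
                                              unscaledHeight:Number):void
{
    super.updateDisplayList(unscaledWidth, unscaledHeight);

    var m:Matrix = new Matrix();
    m.scale(unscaledWidth / backgroundData.width,
            unscaledHeight / backgroundData.height);

    graphics.clear();
    // The last argument enables smoothing for the scaled bitmap fill.
    graphics.beginBitmapFill(backgroundData, m, false, true);
    graphics.drawRect(0, 0, unscaledWidth, unscaledHeight);
    graphics.endFill();
}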
Why smoothing of images doesn't have better support (such as different interpolation methods) in Flex (and, by extension, Flash) boggles my mind. At least Pixel Bender filters will help a bit by letting us implement such filters ourselves.
If both dimensions of your Bitmap are powers of two (e.g. 256 or 512), then the Flash Player will automatically use a technique called mip-mapping that will dramatically improve the look (and performance) of scaled bitmap images.