Depth map for particle system using a texture map

I'm trying to generate a depth map for a particle system, but if I render the particle system using MeshDepthMaterial, every particle is rendered as only a single point per vertex, not covering the entire area over which the texture-mapped particle is displayed.
Do I need to use MeshDepthMaterial to generate a depth map, or are there other options?

Right now there is no way to get the MeshDepthMaterial to respect the size or texture of the ParticleSystem. However, it is not too hard to implement a custom ShaderMaterial that does this. First, you need a vertex shader and a fragment shader:
<script type="x-shader/x-vertex" id="vertexShader">
uniform float size;
void main() {
gl_PointSize = size;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position , 1.0 );
}
</script>
<script type = "x-shader/x-fragment" id="fragmentShader">
uniform sampler2D map;
uniform float near;
uniform float far;
void main() {
float depth = gl_FragCoord.z / gl_FragCoord.w;
float depthColor = 1.0 - smoothstep( near, far, depth );
vec4 texColor = texture2D( map, vec2( gl_PointCoord.x, 1.0 - gl_PointCoord.y ) );
gl_FragColor = vec4( vec3(depthColor), texColor.a);
}
</script>
The vertex shader is totally standard. The fragment shader takes the texture (sampler2D map), but instead of using its color values it only uses the alpha channel, texColor.a. For the RGB, a grayscale value based on the depth is used, just like in MeshDepthMaterial. To use this shader you just need to grab the shader source from the HTML and create a THREE.ShaderMaterial like so:
var material = new THREE.ShaderMaterial({
    uniforms: {
        size: { type: 'f', value: 20.0 },
        near: { type: 'f', value: camera.near },
        far:  { type: 'f', value: camera.far },
        map:  { type: "t", value: THREE.ImageUtils.loadTexture( url ) }
    },
    attributes: {},
    vertexShader: vertShader,
    fragmentShader: fragShader,
    transparent: true
});
Here you have provided the shader with all the info it needs: the camera's near/far range, the size of the particle and the texture it needs to map.
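For reference, vertShader and fragShader above are just the text content of the two script tags; one way to grab them (using the ids from the snippets above) is:
var vertShader = document.getElementById( 'vertexShader' ).textContent;
var fragShader = document.getElementById( 'fragmentShader' ).textContent;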
You can see a jsFiddle demo of it here.

Related

Cut-out effect using a QML ShaderEffect

I'd like to achieve a "cutout" effect using a custom QML ShaderEffect item. The area that is cut out should display the pixels of the image (src) but only the pixels that are directly under the ShaderEffect item in the z order. In other words, only the pixels that exist at the same coordinates as the area of cutout square. The final effect would be exactly like if you had two images on top of each other and the top image was being masked in an area to allow the lower image to show through. Like so:
Because of application-specific details, I need to achieve this using custom vertex and fragment shaders, but I am almost a complete stranger to GLSL. What I currently have in the code is this:
ShaderEffect {
    id: shader_element
    x: resizeable.x
    y: resizeable.y
    width: resizeable.width
    height: resizeable.height
    property Image src: global_image_reference // from the app's root scope
    vertexShader: "
        uniform highp mat4 qt_Matrix;
        attribute highp vec4 qt_Vertex;
        attribute highp vec2 qt_MultiTexCoord0;
        varying highp vec2 coord;
        void main() {
            coord = qt_MultiTexCoord0;
            gl_Position = qt_Matrix * qt_Vertex;
        }"
    fragmentShader: "
        varying highp vec2 coord;
        uniform sampler2D src;
        uniform lowp float qt_Opacity;
        void main() {
            gl_FragColor = texture2D(src, coord);
        }"
}
I'm passing a global reference to the underlying image (the one I want to show through) to the ShaderEffect item and using that reference in the fragment shader. This works, but instead of a cutout effect I get a squish effect, where the referenced image is squished as the container is resized:
Any advice on how I need to change my vertex or fragment shader to achieve the cutout effect instead of the squish effect? I was thinking maybe something utilizing Item's mapToItem() or mapFromItem() functions, but I'm not sure how the points returned by those functions can be passed to the vertex or fragment shader.
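For what it's worth, here is a rough, untested sketch of that idea (the property names are mine, not from the question). QML exposes ShaderEffect properties as uniforms with the same names, so the item's offset inside the source image and the two sizes can be passed in and used to remap coord. It assumes the ShaderEffect and the image share the same parent coordinate space and that the image is shown at its natural size:
ShaderEffect {
    // ... same element as above, with these additions:
    property point srcOrigin: Qt.point(x - global_image_reference.x, y - global_image_reference.y)
    property size srcSize: Qt.size(global_image_reference.width, global_image_reference.height)
    property size itemSize: Qt.size(width, height)
    fragmentShader: "
        varying highp vec2 coord;
        uniform sampler2D src;
        uniform highp vec2 srcOrigin;
        uniform highp vec2 srcSize;
        uniform highp vec2 itemSize;
        void main() {
            // sample only the part of src that lies directly underneath this item
            highp vec2 uv = (srcOrigin + coord * itemSize) / srcSize;
            gl_FragColor = texture2D(src, uv);
        }"
}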

Canvas with glsl shader breaks at certain css transform scale

I have a canvas with a shader on it; to do that I am using glslCanvas. I want to transform that canvas with a CSS transform. This works fine in general, but when I scale it up so it becomes significantly bigger, the shader freezes.
I tried to discover what causes this, but have had no luck so far. It seems I can make scaleX as large as I want, but not scaleY or scale. I checked whether there is a certain resolution or scale at which it freezes, but found no consistent value; it depends on the size of the canvas.
Minimal example
Here is an example where the canvas gets enlarged incrementally, and breaks on scale(4). Starting the canvas at scale(4) instead of incrementally increasing just results in a white canvas.
setTimeout(() => document.getElementById('glsl').style.transform = "scale(2)", 2000);
setTimeout(() => document.getElementById('glsl').style.transform = "scale(3)", 5000);
setTimeout(() => document.getElementById('glsl').style.transform = "scale(4)", 8000);
setTimeout(() => document.getElementById('glsl').style.transform = "scale(5)", 12000);
<script src="https://rawgit.com/patriciogonzalezvivo/glslCanvas/master/dist/GlslCanvas.js"></script>
<canvas id="glsl" class="glslCanvas" data-fragment="
precision mediump float;
uniform vec2 u_resolution;
uniform float u_time;
void main()
{
// Normalized pixel coordinates (from 0 to 1)
vec2 uv = gl_FragCoord.xy/u_resolution.xy;
// Time varying pixel color
vec3 col = 0.5 + 0.5*cos(u_time+uv.xyx+vec3(0,2,4));
// Output to screen
gl_FragColor = vec4(col,1.0);
}"></canvas>
I'm not sure whether this is specific to the glslCanvas library or not, but haven't found any issues regarding this topic on their repository on Github.
Investigating this led me to uncover an issue with glslCanvas.
The PR I submitted is here: http://github.com/patriciogonzalezvivo/glslCanvas/pull/47
Essentially, scaling can cause the code to falsely believe that the canvas is not visible, and so it stops rendering.
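As an illustration only (not the library's exact code), the failure mode is roughly this kind of viewport check: the render loop only runs while the canvas is considered visible, and mixing the CSS-transformed bounding rect with the untransformed canvas height can make that test fail once the scale gets large enough.
// hypothetical sketch of the kind of visibility test that can misfire under CSS scaling
function isCanvasVisible(canvas) {
    var rect = canvas.getBoundingClientRect();
    // rect reflects the CSS-scaled box, but canvas.height is the unscaled attribute size,
    // so a large scale can push this sum below zero and the canvas is treated as off-screen
    return (rect.top + canvas.height) > 0 &&
           rect.top < (window.innerHeight || document.documentElement.clientHeight);
}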

Implementing 3D Shader to 2D Object with Qt?

I am trying to implement the following shader from here:
https://gamedev.stackexchange.com/questions/68401/how-can-i-draw-outlines-around-3d-models
My base is a 2D Image that has pre-applied shaders.
I was unsure how to apply this:
glDrawBuffer( GL_COLOR_ATTACHMENT1 );
Vec3f clearVec( 0.0, 0.0, -1.0f );
// from normalized vector to rgb color; from [-1,1] to [0,1]
clearVec = (clearVec + Vec3f(1.0f, 1.0f, 1.0f)) * 0.5f;
glClearColor( clearVec.x, clearVec.y, clearVec.z, 0.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
So I didn't; this is what my QML code looks like:
ShaderEffect {
    id: outline
    anchors.fill: swirls
    visible: true
    property variant source: swirls
    //property variant source: mascot
    // first render target from the first pass
    property variant uTexColor: swirls
    // second render target from the first pass
    property variant uTexNormals: swirls
    property variant uResolution: Qt.vector2d(960, 640) // screen resolution
    property variant delta: Qt.size(0.1 / width, 0.2 / height)
    fragmentShader: "qrc:effects/shaders/outline.frag"
    layer.enabled: true
    layer.effect: OpacityMask {
        maskSource: swirls
    }
}
I don't know much about normal/diffuse maps and have no idea what
in vec2 fsInUV;
is, which seems to be important to getting this to work. I am trying to create sprite-like outlines around a circle I have made with an opacity mask + shaders (it is animated to look like water).
The original author of the shaders is inactive, and I'm not familiar with how QML implements shaders, as I'm very unfamiliar with shaders in general.

How to scale the contents of a QGraphicsView using the QPinchGesture?

I'm implementing an image viewer on an embedded platform. The hardware is a sort of tablet and has a touch screen as input device. The Qt version I'm using is 5.4.3.
The QGraphicsView is used to display a QGraphicsScene which contains a QGraphicsPixmapItem. The QGraphicsPixmapItem contains the pixmap to display.
The relevant part of the code is the following:
void MyGraphicsView::pinchTriggered(QPinchGesture *gesture)
{
    QPinchGesture::ChangeFlags changeFlags = gesture->changeFlags();
    if (changeFlags & QPinchGesture::ScaleFactorChanged) {
        currentStepScaleFactor = gesture->totalScaleFactor();
    }
    if (gesture->state() == Qt::GestureFinished) {
        scaleFactor *= currentStepScaleFactor;
        currentStepScaleFactor = 1;
        return;
    }
    // Compute the scale factor based on the current pinch level
    qreal sxy = scaleFactor * currentStepScaleFactor;
    // Get the pointer to the currently displayed picture
    QList<QGraphicsItem *> listOfItems = items();
    QGraphicsItem *item = listOfItems.at(0);
    // Scale the picture
    item->setScale(sxy);
    // Adapt the scene to the scaled picture
    setSceneRect(scene()->itemsBoundingRect());
}
As a result of the pinch, the pixmap is scaled starting from the top-left corner of the view.
How do I scale the pixmap with respect to the center of the QPinchGesture?
From The Docs
The item is scaled around its transform origin point, which by default is (0, 0). You can select a different transformation origin by calling setTransformOriginPoint().
That function takes a QPointF, so you would need to find your centre point first and then set the origin point.
void QGraphicsItem::setTransformOriginPoint(const QPointF & origin)
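A minimal sketch of how that could slot into the pinch handler above (this pivots on the pixmap item's own centre; you could equally map the gesture's centerPoint() into item coordinates and use that instead):
// pivot the scaling on the item's centre instead of its (0, 0) corner
QGraphicsItem *item = items().at(0);
item->setTransformOriginPoint(item->boundingRect().center());
item->setScale(sxy);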

Maintaining relative child position after applying QGraphicsItem::ItemIgnoresTransformations

I have a QGraphicsTextItem parented to a QGraphicsItem. I want the QGraphicsTextItem to always reside directly above the QGraphicsItem, but I also want the text to remain the same size when the scale factor goes below 1, i.e. the text remains the size it is at a scale factor of 1 even when the parent graphics item is scaled smaller. I have found that setting the QGraphicsItem::ItemIgnoresTransformations flag to true when the scale factor is below 1 does the trick for retaining the size.
But I can’t seem to find a way to get the position of the text to always remain above the QGraphicsItem. Is there a way to do this? I tried using the deviceTransform() function, but the text still moved off the QGraphicsItem as I scrolled out. What was worse is that some of the text items started “jiggling”, i.e. they started continuously changing their position ever so slightly, so that it looked like they were shaking. If this is the function I need to use, I guess I don’t know how to use it properly.
In the constructor of my QGraphicsItem I’ve added a QGraphicsTextItem:
fTextItem = new QGraphicsTextItem(getName(), this);
fTextItem->setFlag(QGraphicsItem::ItemIgnoresTransformations);
Here is a code snippet from the paint function of the QGraphicsItem:
qreal lod = painter->worldTransform().m22();
if (lod <= 1.0) {
    fTextItem->setFlag(QGraphicsItem::ItemIgnoresTransformations);
    fTextItem->setPos(fTextItem->deviceTransform(view->viewportTransform()).inverted()
                          .map(view->mapFromScene(mapToScene(0, 0))));
} else {
    fTextItem->setFlag(QGraphicsItem::ItemIgnoresTransformations, false);
    fTextItem->setPos(0, 0);
}
My suggestion is to subclass QGraphicsSimpleTextItem in this manner:
class TextItem : public QGraphicsSimpleTextItem
{
public:
    TextItem(const QString &text)
        : QGraphicsSimpleTextItem(text)
    {
    }
    void paint(QPainter *painter,
               const QStyleOptionGraphicsItem *option, QWidget *widget)
    {
        painter->translate(boundingRect().topLeft());
        QGraphicsSimpleTextItem::paint(painter, option, widget);
        painter->translate(-boundingRect().topLeft());
    }
    QRectF boundingRect() const
    {
        QRectF b = QGraphicsSimpleTextItem::boundingRect();
        return QRectF(b.x() - b.width() / 2.0, b.y() - b.height() / 2.0,
                      b.width(), b.height());
    }
};
QGraphicsSimpleTextItem *mText = new TextItem("Item");
scene()->addItem(mText);
mText->setFlag(QGraphicsItem::ItemIgnoresTransformations, true);
mText->setPos(itemToFollow->pos());
Disclaimer: this may be overkill for what you are trying to do. We had some additional restrictions in our project that made this solution the easiest for us.
We had to do something similar in a project, and it ended up being easiest for us to not use ItemIgnoresTransformations and instead roll our own transform. Here is the main function we use to create a translation-only (no scaling) transform for drawing an item at a specific location. You might be able to modify it for your usage.
static QTransform GenerateTranslationOnlyTransform(
    const QTransform &original_transform,
    const QPointF &target_point) {
  // To draw the unscaled icons, we desire a transform with scaling factors
  // of 1 and shearing factors of 0 and the appropriate translation such that
  // our icon center ends up at the same point. According to the
  // documentation, QTransform transforms a point in the plane to another
  // point using the following formulas:
  // x' = m11*x + m21*y + dx
  // y' = m22*y + m12*x + dy
  //
  // For our new transform, m11 and m22 (scaling) are 1, and m21 and m12
  // (shearing) are 0. Since we want x' and y' to be the same, we have the
  // following equations:
  // m11*x + m21*y + dx = x + dx[new]
  // m22*y + m12*x + dy = y + dy[new]
  //
  // Thus,
  // dx[new] = m11*x - x + m21*y + dx
  // dy[new] = m22*y - y + m12*x + dy
  qreal dx = original_transform.m11() * target_point.x()
             - target_point.x()
             + original_transform.m21() * target_point.y()
             + original_transform.m31();
  qreal dy = original_transform.m22() * target_point.y()
             - target_point.y()
             + original_transform.m12() * target_point.x()
             + original_transform.m32();
  return QTransform::fromTranslate(dx, dy);
}
To use, take the QPainter transform that is passed to the paint method and do something like:
painter->save();
painter->setTransform(GenerateTranslationOnlyTransform(painter->transform(),
some_point));
// Draw your item.
painter->restore();
I've found another solution, which does not involve messing with any transformations or scaling/positioning by hand. There is a hint in the QGraphicsItem::ItemIgnoresTransformations flag description:
QGraphicsItem::ItemIgnoresTransformations
The item ignores inherited transformations (i.e., its position is
still anchored to its parent, but the parent or view rotation, zoom or
shear transformations are ignored). [...]
And that's the key! We need two items: a parent that will keep the relative position (without any flags set) and a child item that will do the drawing at parent's (0,0) point (with QGraphicsItem::ItemIgnoresTransformations flag set). Simple as that!
I've encapsulated this functionality into a single class - here is some code:
#include <QGraphicsItem>
#include <QPainter>

class SampleShape : public QGraphicsItem
{
private:
    /* This class implements shape drawing */
    class SampleShapeImpl : public QGraphicsItem
    {
    public:
        SampleShapeImpl(qreal len, QGraphicsItem *parent = nullptr)
            : QGraphicsItem(parent), m_len(len)
        {
            /* ignore transformations (!) */
            setFlag(QGraphicsItem::ItemIgnoresTransformations);
        }
        QRectF boundingRect(void) const override
        {
            /* sample bounding rectangle */
            return QRectF(-m_len, -m_len, m_len * 2, m_len * 2);
        }
        void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *) override
        {
            /* draw a shape, (0,0) is an anchor */
            painter->drawLine(0, -m_len, 0, m_len);
            painter->drawLine(-m_len, 0, m_len, 0);
            // ...
        }
    private:
        qreal m_len; // sample shape parameter
    };

public:
    /* This is actually almost an empty class, you only need to set
     * a position and pass any parameters to a SampleShapeImpl class.
     */
    SampleShape(qreal x, qreal y, qreal len, QGraphicsItem *parent = nullptr)
        : QGraphicsItem(parent), m_impl(len, this) // <-- IMPORTANT!!!
    {
        /* set position at (x, y), view transformations will apply */
        setPos(x, y);
    }
    QRectF boundingRect(void) const override
    {
        return QRectF(); // it's just a point, no size
    }
    void paint(QPainter *, const QStyleOptionGraphicsItem *, QWidget *) override
    {
        // empty, drawing is done in SampleShapeImpl
    }
private:
    SampleShapeImpl m_impl;
};
Great answer by Dave Mateer! I had the problem that I wanted to define a different scale factor at different zoom levels. This is how I did it:
void MyGraphicsItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget)
{
    // save painter for later operations
    painter->save();
    QTransform originalTransform = painter->transform();
    QPointF originalCenter = rect().center();
    qreal dx = originalTransform.m11() * originalCenter.x() + originalTransform.m21() * originalCenter.y() + originalTransform.m31();
    qreal dy = originalTransform.m22() * originalCenter.y() + originalTransform.m12() * originalCenter.x() + originalTransform.m32();
    // normally our target scale factor is 1, meaning the item keeps its size regardless of zoom;
    // we adjust the scale factor though when the item is smaller than one pixel in comparison to the background image
    qreal factor = 1.0;
    // check if the scale factor is bigger than the item size, and thus it occupies less than a pixel in comparison to the background image
    if (rect().width() < originalTransform.m11()) {
        // calculate adjusted scale factor
        factor = originalTransform.m11() / rect().width();
    }
    // adjust position according to scale factor
    dx -= factor * originalCenter.x();
    dy -= factor * originalCenter.y();
    // set the new transform for painting
    painter->setTransform(QTransform::fromScale(factor, factor) * QTransform::fromTranslate(dx, dy));
    // now paint...
    QGraphicsXYZItem::paint(painter, option, widget);
    // restore original painter
    painter->restore();
}
You do need to adjust the bounding rectangle too in that case:
QRectF MyGraphicsItem::boundingRect() const
{
    QRectF rect = QGraphicsEllipseItem::boundingRect();
    // this is a bit hackish, let me know if you know another way...
    if (scene() != NULL && scene()->views().at(0) != NULL)
    {
        // get viewport transform
        QTransform itemTransform = scene()->views().at(0)->transform();
        QPointF originalCenter = rect.center();
        // calculate back-projected original size of item
        qreal realSizeX = rect.width() / itemTransform.m11();
        qreal realSizeY = rect.height() / itemTransform.m11();
        // check if the scale factor is bigger than the item size, and thus it occupies less than a pixel
        // in comparison to the background image, and adjust the size back to the equivalent of 1 pixel
        realSizeX = realSizeX < 1.0 ? 1.0 : realSizeX;
        realSizeY = realSizeY < 1.0 ? 1.0 : realSizeY;
        // set adjusted position and size according to scale factor
        rect = QRectF(rect.center().x() - realSizeX / 2.0, rect.center().y() - realSizeY / 2.0, realSizeX, realSizeY);
    }
    return rect;
}
With this solution the item works very well in my case.
Adding to Dave Mateer's answer, I think it's worth noting that in some scenarios you should also maintain a proper bounding rectangle (as well as shape) for the object. In my case, I needed to modify boundingRect() a little too for proper object-selection behavior. Remember that the bounding rect of the object will be scaled and transformed as usual if we do NOT use the ItemIgnoresTransformations flag, so we also need to rescale the boundingRect to maintain the view-independence effect.
Maintaining the view-independent bounding rectangle turns out to be quite easy: just grab the scaling factor from deviceTransform(m_view->viewportTransform()).inverted().m11() and multiply this constant into your local-coordinate bounding rectangle. For example:
qreal m = this->deviceTransform(m_view->viewportTransform()).inverted().m11();
return QRectF(m*(m_shapeX), m*(m_shapeY),
m*(m_shapeR), m*(m_shapeR));
Here is a solution I devised of very moderate complexity:
1) Get the boundingRect() of the parent and map it to the scene.
2) Take the minimum X and Y of this list of points; this is the real origin of your item, in scene coordinates.
3) Set the position of the child.
In PySide:
br = parent.mapToScene(parent.boundingRect())
realX = min([item.x() for item in br])
realY = min([item.y() for item in br])
child.setPos(parent.mapFromScene(realX, realY)) #modify according to need
