I am trying to implement the following shader from here:
https://gamedev.stackexchange.com/questions/68401/how-can-i-draw-outlines-around-3d-models
My base is a 2D Image that already has shaders applied to it. I was unsure how to apply this part:
glDrawBuffer( GL_COLOR_ATTACHMENT1 );
Vec3f clearVec( 0.0, 0.0, -1.0f );
// from normalized vector to rgb color; from [-1,1] to [0,1]
clearVec = (clearVec + Vec3f(1.0f, 1.0f, 1.0f)) * 0.5f;
glClearColor( clearVec.x, clearVec.y, clearVec.z, 0.0f );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
So I didn't; this is what my QML code looks like:
ShaderEffect {
    id: outline
    anchors.fill: swirls
    visible: true
    property variant source: swirls
    //property variant source: mascot
    // first render target from the first pass
    property variant uTexColor: swirls
    // second render target from the first pass
    property variant uTexNormals: swirls
    property variant uResolution: Qt.vector2d(960, 640) // screen resolution
    property variant delta: Qt.size(0.1 / width, 0.2 / height)
    fragmentShader: "qrc:effects/shaders/outline.frag"
    layer.enabled: true
    layer.effect: OpacityMask {
        maskSource: swirls
    }
}
I don't know much about normal/diffuse maps and have no idea what
in vec2 fsInUV;
is, which seems to be important for getting this to work. I am trying to create sprite-like outlines around a circle I have made with an OpacityMask + shaders (it's animated to look like water with shaders).
The original author of the shaders is inactive, and I'm not familiar with how QML implements shaders, as I'm very unfamiliar with shaders in general.
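For context, in a Qt Quick ShaderEffect the interpolated UV that fsInUV stands for is exposed as the built-in qt_TexCoord0 varying supplied by the default vertex shader. A minimal, purely illustrative pass-through over the same swirls item (not the outline shader itself) might look like this:
ShaderEffect {
    anchors.fill: swirls
    property variant source: swirls
    fragmentShader: "
        varying highp vec2 qt_TexCoord0;  // the UV playing the role of fsInUV
        uniform sampler2D source;         // the 'source' property declared above
        uniform lowp float qt_Opacity;
        void main() {
            gl_FragColor = texture2D(source, qt_TexCoord0) * qt_Opacity;
        }"
}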
Related
I have been working on a project where I need to apply image masking that applies an effect like this:
Pic1: https://i.stack.imgur.com/6zI2x.jpg
Pic2: https://i.stack.imgur.com/z7IVX.jpg
Mask frame: https://i.stack.imgur.com/3syEm.jpg
Desired effect: https://i.stack.imgur.com/t2kO5.jpg
I got it to work by using OpacityMask; however, to do that I had to edit my mask frame image in Photoshop first. I need to apply this effect to multiple mask frames with different shapes, so editing all of them in Photoshop seems troublesome. Moreover, the insides of the mask frame images aren't all transparent either.
Are there any ideas you can give me to solve this issue without pre-editing each mask frame image in Photoshop? I tried to look into ShaderEffect, but I could not really understand how I should use it for my purpose. I also searched for an OpacityMask-like effect that works only on the part of the mask image that has a specific color or a specifically shaped area, but I could not find any.
Given what you said in the comments, that the frame shape could be anything, ShaderEffect appears to be the only option.
The code examples below show how to solve your issue with ShaderEffect.
QML code
The only property you set on the QML side is rect, which defines the x, y, width, and height of the frame; frect is the same rectangle scaled down to the 0 to 1 range.
Image { id: img; visible: false; source: "2.jpg" }
Image { id: frme; visible: false; source: "3.jpg" }
Image { id: back; visible: false; source: "1.jpg" }
ShaderEffect {
    width: img.sourceSize.width / 3.5
    height: img.sourceSize.height / 3.5
    property var back: back
    property var image: img
    property var frame: frme
    property vector4d rect: Qt.vector4d(width / 2 - 50, height / 2 - 60, 100, 120);
    readonly property vector4d frect: Qt.vector4d(rect.x / width, rect.y / height,
                                                  rect.z / width, rect.w / height);
    fragmentShader: "qrc:/shader.glsl"
}
shader.glsl
After using a color picker at different points of the frame image, I found that the saturation inside the frame is very different from that of the other areas.
So, to decide where to mask in the image, I used saturation.
uniform highp sampler2D back;
uniform highp sampler2D image;
uniform highp sampler2D frame;
varying highp vec2 qt_TexCoord0;
uniform highp vec4 frect;
uniform highp float qt_Opacity;

// From https://gist.github.com/983/e170a24ae8eba2cd174f
vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

void main() {
    vec2 u = qt_TexCoord0;
    vec2 frameCoord = (u - frect.xy) / frect.zw;
    gl_FragColor = texture2D(back, u);
    if (frameCoord.x > 0. && frameCoord.y > 0. && frameCoord.x < 1. && frameCoord.y < 1.) {
        vec4 mask = texture2D(frame, frameCoord);
        vec3 hsv = rgb2hsv(mask.xyz);
        gl_FragColor = mask;
        // Check that the saturation is between 0 and 0.2.
        if (abs(hsv.y - 0.1) < 0.1) {
            gl_FragColor = texture2D(image, u);
        }
    }
}
Note
You can also change the innermost assignment if you want the frame's shadow to cover your image:
gl_FragColor = mix(texture2D(image, u), mask, 1. - hsv.z);
Result
If you know that the geometry of your picture frame is just a rectangle, you can simply create a rectangular mask aligned within the picture frame. I worked out that your picture frame was 410x500 pixels and decided to shrink it to 50%, i.e. 205x250 pixels. At that scale, the picture frame has a border of about 18 pixels. So I created an inner rectangle based on those dimensions and used it as the OpacityMask maskSource:
import QtQuick 2.15
import QtQuick.Controls 2.15
import QtGraphicalEffects 1.0

Page {
    Image {
        id: pic1
        anchors.fill: parent
        source: "https://i.stack.imgur.com/6zI2x.jpg"
    }
    Image {
        id: pic2
        anchors.fill: parent
        visible: false
        source: "https://i.stack.imgur.com/z7IVX.jpg"
    }
    Image {
        id: rawMask
        anchors.centerIn: parent
        width: 205
        height: 250
        source: "https://i.stack.imgur.com/3syEm.jpg"
    }
    Item {
        id: mask
        anchors.fill: parent
        Rectangle {
            id: borderMask
            x: rawMask.x + 18
            y: rawMask.y + 18
            width: rawMask.width - 36
            height: rawMask.height - 36
            color: "white"
        }
    }
    OpacityMask {
        anchors.fill: parent
        source: pic2
        maskSource: mask
    }
}
I'd like to achieve a "cutout" effect using a custom QML ShaderEffect item. The area that is cut out should display the pixels of the image (src), but only the pixels that are directly under the ShaderEffect item in the z order. In other words, only the pixels that exist at the same coordinates as the area of the cutout square. The final effect would be exactly as if you had two images on top of each other and the top image was being masked in an area to allow the lower image to show through. Like so:
Because of application-specific details, I need to achieve this using a custom vertex shader and a fragment shader, but I am almost a complete stranger to GLSL. What I currently have in the code is this:
ShaderEffect {
    id: shader_element
    x: resizeable.x
    y: resizeable.y
    width: resizeable.width
    height: resizeable.height
    property Image src: global_image_reference // from the app's root scope
    vertexShader: "
        uniform highp mat4 qt_Matrix;
        attribute highp vec4 qt_Vertex;
        attribute highp vec2 qt_MultiTexCoord0;
        varying highp vec2 coord;
        void main() {
            coord = qt_MultiTexCoord0;
            gl_Position = qt_Matrix * qt_Vertex;
        }"
    fragmentShader: "
        varying highp vec2 coord;
        uniform sampler2D src;
        uniform lowp float qt_Opacity;
        void main() {
            gl_FragColor = texture2D(src, coord);
        }"
}
I'm passing a global reference to the underlying image (that I want to show through) to the ShaderEffect item and using that reference in the fragment shader. This works, but instead of a cutout effect I get a squish effect, where the referenced image is squished whenever the container is resized:
Any advice on how I need to change my vertex shader or my fragment shader to achieve the cutout effect instead of the squish effect? I was thinking of maybe something utilizing Item's mapToItem() or mapFromItem() functions, but I'm not sure how the points returned by those functions can be passed to the vertex or fragment shader.
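One common way to get the cutout rather than the squish is to pass the item's rectangle expressed in the source image's normalized texture space and remap the UV in the fragment shader. A sketch, assuming the Qt 5 style inline GLSL of the snippet above and that the ShaderEffect and src share the same coordinate space (otherwise map the corners with mapToItem() instead of subtracting positions directly):
ShaderEffect {
    id: shader_element
    x: resizeable.x
    y: resizeable.y
    width: resizeable.width
    height: resizeable.height
    property Image src: global_image_reference
    // This item's rect in the source's normalized (0..1) texture space;
    // a QML rect property is exposed to the shader as a vec4 (x, y, w, h).
    readonly property rect srcRect: Qt.rect((x - src.x) / src.width,
                                            (y - src.y) / src.height,
                                            width / src.width,
                                            height / src.height)
    fragmentShader: "
        varying highp vec2 qt_TexCoord0;  // provided by the default vertex shader
        uniform sampler2D src;
        uniform highp vec4 srcRect;
        uniform lowp float qt_Opacity;
        void main() {
            // Remap the item-local UV into the source image's UV so the same
            // pixels show through no matter how the item is moved or resized.
            highp vec2 uv = srcRect.xy + qt_TexCoord0 * srcRect.zw;
            gl_FragColor = texture2D(src, uv) * qt_Opacity;
        }"
}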
I have a window of size width: 1500, height: 780. I would like to render a 3D object at 2D coordinates, giving z = 1 as a dummy value. The QML code follows. (If I render at x = 0, y = 0, the 3D object should appear exactly where a Rectangle in QML would render, i.e. at window coordinates.)
Entity {
    id: root
    property real x: 2.0
    property real y: 0.0
    property real z: 0.0
    property real scale: 1
    property var mainCam
    property var forwardRenderer
    property Material material
    components: [ trefoilMeshTransform, mesh, root.material ]

    Transform {
        id: trefoilMeshTransform
        property real userAngle: 900.0
        translation: Qt.vector3d(0, 1, 1)
        property real theta: 0.0
        property real phi: 0.0
        property real roll: 0.0
        rotation: fromEulerAngles(theta, phi, roll)
        scale: root.scale
    }

    Mesh {
        id: mesh
        source: "assets/obj/cube.obj"
    }
}
The code is exactly the same as the wireframe example, including the camera settings. I tried to use the Qt 3D unproject API in QML but was unsuccessful. Can you please give me a clue?
Please read this document about how to convert world to screen coordinates and vice versa; it is good for understanding some of the logic.
And also this.
In Qt 3D I get the 3D coordinate by using RayCaster and ScreenRayCaster.
They give you the local position, the world position, and the screen x, y.
See this site from KDAB and its example:
RenderSettings {
    // all components that you need, like Viewport, InputSettings, ...
    ScreenRayCaster {
        id: screenRayCaster
        onHitsChanged: printHits("Screen hits", hits)
    }
}

MouseHandler {
    id: mouseHandler
    sourceDevice: MouseDevice {}
    onReleased: { screenRayCaster.trigger(Qt.point(mouse.x, mouse.y)) }
}

function printHits(desc, hits) {
    console.log(desc, hits.length)
    for (var i = 0; i < hits.length; i++) {
        console.log("  " + hits[i].entity.objectName, hits[i].distance,
                    hits[i].worldIntersection.x, hits[i].worldIntersection.y, hits[i].worldIntersection.z)
    }
}
I tried to add an environment map to my PhongMaterial, but when I do so my geometry disappears. Here is my code:
var reflection = THREE.ImageUtils.loadTextureCube( [
    'textures/hdr/pos-x.png', 'textures/hdr/neg-x.png',
    'textures/hdr/pos-y.png', 'textures/hdr/neg-y.png',
    'textures/hdr/pos-z.png', 'textures/hdr/neg-z.png'
] );

material = new THREE.MeshPhongMaterial( {
    map: textures.color,
    normalMap: textures.normal,
    specularMap: textures.specular,
    envMap: reflection,
    combine: THREE.MixOperation,
    reflectivity: 0.25,
    specular: 0xffffff
} );
If I change the Phong to a Lambert material, I can see the geometry and the reflection. Do you have any idea what I did wrong?
Update: I have found out that the normalMap and the envMap don't work together. The envMap works if I don't use a normalMap, and the normalMap only works without the envMap. Is this a known issue, and is there any way I can add both maps to my MeshPhongMaterial?
An envMap and a normalMap can work together - example. You must have a spotlight in the scene; otherwise you will get the "'vWorldPosition' : undeclared identifier" error.
The reason seems to be this part of the Phong shader:
"#if MAX_SPOT_LIGHTS > 0 || defined( USE_BUMPMAP )",
"varying vec3 vWorldPosition;",
"#endif"
I'm trying to generate a depth map for a particle system, but if I render the particle system using MeshDepthMaterial, every particle is rendered as just a single point per vertex, not covering the whole area over which the texture-mapped particle is displayed.
Do I need to use MeshDepthMaterial to generate a depth map, or are there other options?
Right now there is no way to get the MeshDepthMaterial to respect the size or texture of the ParticleSystem. However, it is not too hard to implement a custom ShaderMaterial that does that. First, you need a vertex shader and fragment shader.
<script type="x-shader/x-vertex" id="vertexShader">
    uniform float size;
    void main() {
        gl_PointSize = size;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
</script>

<script type="x-shader/x-fragment" id="fragmentShader">
    uniform sampler2D map;
    uniform float near;
    uniform float far;
    void main() {
        float depth = gl_FragCoord.z / gl_FragCoord.w;
        float depthColor = 1.0 - smoothstep( near, far, depth );
        vec4 texColor = texture2D( map, vec2( gl_PointCoord.x, 1.0 - gl_PointCoord.y ) );
        gl_FragColor = vec4( vec3( depthColor ), texColor.a );
    }
</script>
The vertex shader is totally standard. The fragment shader takes the texture (sampler2D map), but instead of using it for color values it only uses its alpha channel (texColor.a). For the RGB, a grayscale value based on the depth is used, just like in MeshDepthMaterial. To use this shader you just need to grab the HTML and create a THREE.ShaderMaterial like so:
var material = new THREE.ShaderMaterial({
    uniforms: {
        size: { type: 'f', value: 20.0 },
        near: { type: 'f', value: camera.near },
        far:  { type: 'f', value: camera.far },
        map:  { type: 't', value: THREE.ImageUtils.loadTexture( url ) }
    },
    attributes: {},
    vertexShader: vertShader,
    fragmentShader: fragShader,
    transparent: true
});
Here you have provided the shader with all the info it needs: the camera's near/far range, the size of the particle and the texture it needs to map.
You can see a jsFiddle demo of it here.
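For completeness, a hypothetical usage sketch, assuming the same older three.js version as the snippets above (where THREE.Geometry and THREE.ParticleSystem still exist), that attaches the material to a particle system:
// Build some random particle positions and attach the custom depth material.
var geometry = new THREE.Geometry();
for (var i = 0; i < 1000; i++) {
    geometry.vertices.push(new THREE.Vector3(
        Math.random() * 200 - 100,
        Math.random() * 200 - 100,
        Math.random() * 200 - 100));
}
var particles = new THREE.ParticleSystem(geometry, material);
scene.add(particles);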