This is weird, and I can't find anything like it online. I am rendering a reflective sphere in OpenGL, repositioning the camera and using glCopyTexImage2D to render directly into the cube map faces. The problem is that my reflective sphere ends up reflecting everything on my desktop EXCEPT the OpenGL scene! How does something like this even happen? It's tripping me out. Here is my entire drawReflectiveSphere function:
void drawReflectiveSphere(){
float sphere_pos[] = {10.0, 0.0, -40.0};
glMatrixMode(GL_MODELVIEW);
glDisable(GL_TEXTURE_2D);
glPushMatrix();
//Positive x
gluLookAt(sphere_pos[0], sphere_pos[1], sphere_pos[2],
sphere_pos[0] + 1, sphere_pos[1], sphere_pos[2],
0.0, 1.0, 0.0);
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X, 0, GL_RGB, -128, -128, 128, 128, 0);
//Negative x
gluLookAt(sphere_pos[0], sphere_pos[1], sphere_pos[2],
sphere_pos[0] - 1, sphere_pos[1], sphere_pos[2],
0.0, 1.0, 0.0);
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_X, 0, GL_RGB, -128, -128, 128, 128, 0);
//Positive y
gluLookAt(sphere_pos[0], sphere_pos[1], sphere_pos[2],
sphere_pos[0], sphere_pos[1] + 1, sphere_pos[2],
0.0, 0.0, 1.0);
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Y, 0, GL_RGB, -128, -128, 128, 128, 0);
//Negative y
gluLookAt(sphere_pos[0], sphere_pos[1], sphere_pos[2],
sphere_pos[0], sphere_pos[1] - 1, sphere_pos[2],
0.0, 0.0, 1.0);
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, 0, GL_RGB, -128, -128, 128, 128, 0);
//Positive z
gluLookAt(sphere_pos[0], sphere_pos[1], sphere_pos[2],
sphere_pos[0], sphere_pos[1], sphere_pos[2] + 1,
0.0, 1.0, 0.0);
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0, GL_RGB, -128, -128, 128, 128, 0);
//Negative z
gluLookAt(sphere_pos[0], sphere_pos[1], sphere_pos[2],
sphere_pos[0], sphere_pos[1], sphere_pos[2] - 1,
0.0, 1.0, 0.0);
glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGB, -128, -128, 128, 128, 0);
glPopMatrix();
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);
glEnable(GL_TEXTURE_CUBE_MAP);
glPushMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(sphere_pos[0], sphere_pos[1], sphere_pos[2]);
gluSphere(quadratic, 5.0f, 128, 128);
glPopMatrix();
//Reset frustum, etc.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, depth + 15.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);
glDisable(GL_TEXTURE_CUBE_MAP);
glDrawBuffer(GL_BACK);
glReadBuffer(GL_BACK);
}
I assume I'm doing a few things wrong, but where would OpenGL get the contents of my desktop from? And how can I set myself straight to do what I want?
I eventually solved this, but the problem was in code not shown above. Essentially, I had initialized my cube map texture at a size much too large.
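For anyone who finds this later, here is a rough sketch (placeholder names, not my exact code) of the per-face flow that makes the symptom easier to understand: allocate the cube map at the same size you copy, render the scene for each face, and copy from inside the window starting at (0, 0). Copying a region that was never rendered (for example with negative offsets, or when the copy size does not match what was actually drawn) gives undefined pixel data, which is presumably where the stray desktop contents came from. setCubeFaceCamera(), drawScene(), cube_map_texture and the window size variables below are placeholders for your own code.
void updateCubeMap()
{
    const int size = 128;                      // keep this consistent with how the cube map was allocated

    glBindTexture(GL_TEXTURE_CUBE_MAP, cube_map_texture);
    glViewport(0, 0, size, size);              // the copy region has to lie inside the rendered area

    for (int face = 0; face < 6; ++face)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        setCubeFaceCamera(face);               // gluLookAt along +X, -X, +Y, -Y, +Z, -Z from the sphere

        drawScene();                           // actually render something for the copy to pick up

        // Copy from the lower-left corner of the back buffer, not from (-128, -128).
        glCopyTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGB,
                         0, 0, size, size, 0);
    }

    glViewport(0, 0, window_width, window_height);  // restore the main viewport for the normal pass
}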
So... I am trying to filter an environment map for a BRDF shader, as explained here: https://learnopengl.com/PBR/IBL/Specular-IBL. However, I can't get my filtered result to be stored properly (when loaded, I get a black texture full of artifacts).
I figure it must have something to do with the framebuffer, since glCheckFramebufferStatus() keeps returning 0 inside the LOD/side loop, but I have spent a couple of hours trying to understand why... and I can't see the problem. glGetError() returns 0, I made sure to generate the framebuffer/renderbuffer before the loop starts, and at that point everything seemed complete. The rest of the program runs fine, and there were no errors compiling the shader I am using.
I am quite new to OpenGL; is there something obvious I am missing? I am assuming the problem must be in this section... but does it look like it should work? Could it be something I did wrong elsewhere?
This is the code:
if (cubeMapGenerated == false){
//Frame Buffer:
glGenFramebuffers(1, &frameBuffer);
glGenRenderbuffers(1, &renderBuffer);
glGenTextures(1, &genCubeMap);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glBindTexture(GL_TEXTURE_CUBE_MAP, genCubeMap);
for (unsigned int i = 0; i < 6; ++i)
{glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA16F, 128, 128, 0, GL_RGB, GL_FLOAT, nullptr);}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); //params
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP); //generate mipmaps
glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width_, height_);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderBuffer);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
std::cout << "Framebuffer is not complete at gen. Staus: " << glCheckFramebufferStatus(GL_FRAMEBUFFER) << std::endl;
GLuint projection_location, view_location, model_location, normal_matrix_location,
specular_map_location, roughness_location;
cubeMapGen_program_->bind(); //bind irradiance shader
projection_location = cubeMapGen_program_->uniformLocation("projection");
view_location = cubeMapGen_program_->uniformLocation("view");
model_location = cubeMapGen_program_->uniformLocation("model");
normal_matrix_location = cubeMapGen_program_->uniformLocation("normal_matrix");
specular_map_location = cubeMapGen_program_->uniformLocation("specular_map");
roughness_location = brdf_program_->uniformLocation("roughness");
glUniformMatrix4fv(projection_location, 1, GL_FALSE, e_captureProjection.data());
glUniformMatrix4fv(model_location, 1, GL_FALSE, model.data());
glUniformMatrix3fv(normal_matrix_location, 1, GL_FALSE, normal.data());
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, specular_map_);
glUniform1i(specular_map_location, 0);
for (unsigned int mip = 0; mip < maxMipLevels; ++mip){//render each mip
// resize framebuffer according to mip-level size.
unsigned int mipWidth = 128 * std::pow(0.5, mip);
unsigned int mipHeight = 128 * std::pow(0.5, mip);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, mipWidth, mipHeight);
std::cout << "width: " << mipWidth << " height: " << mipHeight << std::endl;
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderBuffer);
glViewport(0, 0, mipWidth, mipHeight);
float mproughness = (float) mip / (float)(maxMipLevels - 1);
glUniform1f (roughness_location, mproughness);
for (unsigned int i = 0; i < 6; ++i)//render each side
{
glUniformMatrix4fv(view_location, 1, GL_FALSE, e_captureViews[i].data());
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, genCubeMap, mip);
if(i == 0 && glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{std::cout << "ERROR::FRAMEBUFFER:: Framebuffer is not complete! Map: " << mip << std::endl;}
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBegin(GL_TRIANGLES);
glVertex3f(2, -2, -2); glVertex3f(2, -2, 2); glVertex3f(2, 2, 2); //Right
glVertex3f(2, -2, -2); glVertex3f(2, 2, 2); glVertex3f(2, 2, -2);
glVertex3f(-2, -2, -2); glVertex3f(-2, 2, 2); glVertex3f(-2, -2, 2); //Left
glVertex3f(-2, -2, -2); glVertex3f(-2, 2, -2); glVertex3f(-2, 2, 2);
glVertex3f(-2, -2, 2); glVertex3f(-2, 2, 2); glVertex3f(2, 2, 2); //Front
glVertex3f(-2, -2, 2); glVertex3f(2, 2, 2); glVertex3f(2, -2, 2);
glVertex3f(-2, -2, -2); glVertex3f(2, 2, -2); glVertex3f(-2, 2, -2); //Back
glVertex3f(-2, -2, -2); glVertex3f(2, -2, -2); glVertex3f(2, 2, -2);
glVertex3f(-2, 2, -2); glVertex3f(2, 2, -2); glVertex3f(2, 2, 2); //Top
glVertex3f(-2, 2, -2); glVertex3f(2, 2, 2); glVertex3f(-2, 2, 2);
glVertex3f(-2, -2, -2); glVertex3f(2, -2, 2); glVertex3f(2, -2, -2); //Bottom
glVertex3f(-2, -2, -2); glVertex3f(-2, -2, 2); glVertex3f(2, -2, 2);
}
//std::cout << glGetError() << ", " << glCheckFramebufferStatus(GL_FRAMEBUFFER) << std::endl;
}
std::cout<<"New pre filtered map generated"<<std::endl;
cubeMapGenerated = true;
}//cubemapgen
glEnd();
I recommend reading about Vertex Specification and using the modern approach of Vertex Array Objects for drawing.
But if you draw the objects in the deprecated fixed-function pipeline style, then geometric objects are drawn by enclosing a series of vertex coordinates between glBegin/glEnd pairs.
You have to finish the drawing sequence with glEnd before you can change or manipulate the framebuffer. This is also why glCheckFramebufferStatus keeps returning 0 in your loop: calling it between glBegin and glEnd is an invalid operation, and on error the function returns zero.
Move the glEnd call into the inner loop and your code should work:
for (unsigned int mip = 0; mip < maxMipLevels; ++mip)
{
.....
for (unsigned int i = 0; i < 6; ++i)//render each side
{
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, genCubeMap, mip);
glBegin(GL_TRIANGLES);
glVertex3f(2, -2, -2); glVertex3f(2, -2, 2); glVertex3f(2, 2, 2);
.....
glVertex3f(-2, -2, -2); glVertex3f(-2, -2, 2); glVertex3f(2, -2, 2);
glEnd(); // <---- "end" the draw sequence
}
}
See OpenGL 3.0 API Specification; 2.6.3 GL Commands within Begin/End; page 24
or OpenGL 4.6 API Compatibility Profile Specification; 10.7.5 Commands Allowed Between Begin and End; page 433:
The only GL commands that are allowed within any Begin/End pairs are the commands for specifying vertex coordinates, vertex colors, normal coordinates, texture coordinates, generic vertex attributes, and fog coordinates ...
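If you later move away from the Begin/End path altogether, the same cube can be drawn through a VAO/VBO. The following is only a minimal sketch, assuming a vertex shader that declares the position attribute at location 0 and a float cubeVertices[108] array holding the 36 corner positions used above:
// One-time setup
GLuint cubeVAO = 0, cubeVBO = 0;
glGenVertexArrays(1, &cubeVAO);
glGenBuffers(1, &cubeVBO);

glBindVertexArray(cubeVAO);
glBindBuffer(GL_ARRAY_BUFFER, cubeVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(cubeVertices), cubeVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glBindVertexArray(0);

// Per face, after glFramebufferTexture2D, the uniform updates and glClear
glBindVertexArray(cubeVAO);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);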
I'm working on importing a 3D model (obj file) and using JavaFX triangle mesh to add it to the scene.
First, I read the obj file, parse it, and save its content to a float array "Vertices" and an integer array "Faces". My mesh points: [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0], and mesh faces: [1, 0, 7, 0, 5, 0, 1, 0, 3, 0, 7, 0, 1, 0, 4, 0, 3, 0,........]
Then I add it to my scene:
MeshView cubeMesh = new MeshView(mesh);
cubeMesh.setDrawMode(DrawMode.FILL);
cubeMesh.setTranslateX(20);
cubeMesh.setTranslateY(10);
cubeMesh.setTranslateZ(20);
displayPane.getChildren().add(cubeMesh);
Unfortunately, nothing shows up in the scene. Would anybody be able to suggest a solution, a tutorial, or a book?
Here is an initial tutorial directly from Oracle: https://docs.oracle.com/javase/8/javafx/graphics-tutorial/javafx-3d-graphics.htm
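If the link alone is not enough to get going, below is a rough sketch of a TriangleMesh cube. The point and face data are illustrative rather than your OBJ data, and the material, cull-face, and size settings are only there to make the mesh hard to miss; the key detail is that each face entry is a pair of point index and texture-coordinate index, so the mesh needs at least one texture coordinate for those indices to reference.
// Illustrative sketch only. Uses javafx.scene.shape.{TriangleMesh, MeshView, DrawMode, CullFace}
// and javafx.scene.paint.{PhongMaterial, Color}.
TriangleMesh mesh = new TriangleMesh();
float s = 100;                                       // a 1-unit cube is only about a pixel on screen
mesh.getPoints().addAll(
    0, 0, 0,   0, 0, s,   0, s, 0,   0, s, s,
    s, 0, 0,   s, 0, s,   s, s, 0,   s, s, s);
mesh.getTexCoords().addAll(0, 0);                    // one dummy tex coord that every face references
mesh.getFaces().addAll(
    // pointIndex, texCoordIndex repeated three times per triangle
    0, 0, 1, 0, 3, 0,   0, 0, 3, 0, 2, 0,            // x = 0 side
    4, 0, 7, 0, 5, 0,   4, 0, 6, 0, 7, 0,            // x = s side
    0, 0, 5, 0, 1, 0,   0, 0, 4, 0, 5, 0,            // y = 0 side
    2, 0, 3, 0, 7, 0,   2, 0, 7, 0, 6, 0,            // y = s side
    0, 0, 2, 0, 6, 0,   0, 0, 6, 0, 4, 0,            // z = 0 side
    1, 0, 7, 0, 3, 0,   1, 0, 5, 0, 7, 0);           // z = s side

MeshView cubeMesh = new MeshView(mesh);
cubeMesh.setMaterial(new PhongMaterial(Color.RED));  // a bright material makes the mesh easy to spot
cubeMesh.setDrawMode(DrawMode.FILL);
cubeMesh.setCullFace(CullFace.NONE);                 // rules out winding/culling issues while debugging
displayPane.getChildren().add(cubeMesh);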
I am trying to learn how to use QPainter with a QGLFramebufferObject. When I try to display the texture in a QGLWidget, it is not visible. (complete code below)
The end goal is to use QPainter to draw text onto textures and then alpha blend the texture on top of the 2D line geometry.
texture.pro
QT += core gui widgets opengl
TARGET = test
TEMPLATE = app
SOURCES = main.cpp
HEADERS = main.h
main.h
#include <QGLWidget>
#include <QGLFunctions>
class glview : public QGLWidget, protected QGLFunctions
{
Q_OBJECT
public:
explicit glview(QWidget *parent = 0);
~glview();
QSize sizeHint() const;
protected:
void initializeGL();
void resizeGL(int w, int h);
void paintGL();
private:
quint32 vbo_id[2], texture_id;
};
main.cpp
#include <QApplication>
#include <QGLFramebufferObject>
#include <QPainter>
#include "main.h"
struct vrtx {
GLint x;
GLint y;
GLubyte r;
GLubyte g;
GLubyte b;
}__attribute__((packed)) line_geo[] = {
// x, y, r, g, b
{1, 1, 255, 0, 0},
{1, 2, 0, 255, 0},
{1, 2, 0, 255, 0},
{2, 2, 255, 0, 0},
{2, 2, 255, 0, 0},
{2, 1, 0, 255, 0},
{2, 1, 0, 255, 0},
{1, 1, 255, 0, 0},
};
struct txtr_vrtx {
GLint x;
GLint y;
GLint tx;
GLint ty;
}__attribute__((packed)) txtr_geo[] = {
// x, y, tx,ty
{3, 1, 0, 0},
{3, 2, 0, 1},
{4, 2, 1, 1},
{4, 1, 1, 0},
};
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
glview widget;
widget.show();
return app.exec();
}
glview::glview(QWidget *parent) : QGLWidget(parent)
{
}
glview::~glview()
{
}
QSize glview::sizeHint() const
{
return QSize(500, 300);
}
void glview::initializeGL()
{
initializeGLFunctions();
qglClearColor(Qt::white);
glGenBuffers(2, vbo_id);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(line_geo), line_geo, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(txtr_geo), txtr_geo, GL_STATIC_DRAW);
QGLFramebufferObject fbo(100, 100, QGLFramebufferObject::CombinedDepthStencil/*GL_TEXTURE_2D*/);
fbo.bind();
texture_id = fbo.texture();
QPainter painter(&fbo);
painter.fillRect(0, 0, 100, 100, Qt::blue);
painter.end();
fbo.release();
}
void glview::resizeGL(int w, int h)
{
glViewport(0, 0, w, h);
}
void glview::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glOrtho(0, 5, 0, 3, -1, 1);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[0]);
glVertexPointer(2, GL_INT, sizeof(struct vrtx), 0);
glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(struct vrtx), ((char*)NULL + 8));
glDrawArrays(GL_LINES, 0, sizeof(line_geo) / sizeof(struct vrtx));
//glColor4ub(0, 0, 255, 255);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glDisableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[1]);
glBindTexture(GL_TEXTURE_2D, texture_id);
glVertexPointer(2, GL_INT, sizeof(struct txtr_vrtx), 0);
glTexCoordPointer(2, GL_INT, sizeof(struct txtr_vrtx), ((char*)NULL + 8));
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisable(GL_TEXTURE_2D);
//glDisable(GL_BLEND);
glFlush();
}
The QGLFramebufferObject instance is destroyed when leaving initializeGL(). This results in deleting the texture too. You need to keep the QGLFramebufferObject alive until the texture is no longer needed.
Here is the corrected code, now also with alpha blending.
main.h
#include <QGLWidget>
#include <QGLFunctions>
#include <QGLFramebufferObject>
#include <QFont>
class glview : public QGLWidget, protected QGLFunctions
{
Q_OBJECT
public:
explicit glview(QWidget *parent = 0);
~glview();
QSize sizeHint() const;
protected:
void initializeGL();
void resizeGL(int w, int h);
void paintGL();
private:
QGLFramebufferObject *fbo;
QFont font;
quint32 vbo_id[2], texture_id;
};
main.cpp
#include <QApplication>
#include <QPainter>
#include "main.h"
struct vrtx {
GLint x;
GLint y;
GLubyte r;
GLubyte g;
GLubyte b;
}__attribute__((packed)) line_geo[] = {
// x, y, r, g, b
{1, 1, 255, 0, 0},
{1, 2, 0, 255, 0},
{1, 2, 0, 255, 0},
{2, 2, 255, 0, 0},
{2, 2, 255, 0, 0},
{2, 1, 0, 255, 0},
{2, 1, 0, 255, 0},
{1, 1, 255, 0, 0},
};
struct txtr_vrtx {
GLint x;
GLint y;
GLint tx;
GLint ty;
}__attribute__((packed)) txtr_geo[] = {
// x, y, tx,ty
{3, 1, 0, 0},
{3, 2, 0, 1},
{4, 2, 1, 1},
{4, 1, 1, 0},
};
int main(int argc, char *argv[])
{
QApplication app(argc, argv);
glview widget;
widget.show();
return app.exec();
}
glview::glview(QWidget *parent) : QGLWidget(parent)
{
font.setFamily("Helvetica");
}
glview::~glview()
{
delete fbo;
}
QSize glview::sizeHint() const
{
return QSize(500, 300);
}
void glview::initializeGL()
{
initializeGLFunctions();
qglClearColor(Qt::white);
glGenBuffers(2, vbo_id);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(line_geo), line_geo, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(txtr_geo), txtr_geo, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0); // must unbind for QPainter
fbo = new QGLFramebufferObject(100, 100, GL_TEXTURE_2D);
fbo->bind();
texture_id = fbo->texture();
QPainter painter(fbo);
painter.setPen(Qt::blue);
font.setPointSize(20);
painter.setFont(font);
painter.drawText(0, 60, "FBO");
painter.end();
fbo->release();
}
void glview::resizeGL(int w, int h)
{
glViewport(0, 0, w, h);
}
void glview::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glOrtho(0, 5, 0, 3, -1, 1);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[0]);
glVertexPointer(2, GL_INT, sizeof(struct vrtx), 0);
glColorPointer(3, GL_UNSIGNED_BYTE, sizeof(struct vrtx), ((char*)NULL + 8));
glDrawArrays(GL_LINES, 0, sizeof(line_geo) / sizeof(struct vrtx));
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
glDisableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, vbo_id[1]);
glBindTexture(GL_TEXTURE_2D, texture_id);
glVertexPointer(2, GL_INT, sizeof(struct txtr_vrtx), 0);
glTexCoordPointer(2, GL_INT, sizeof(struct txtr_vrtx), ((char*)NULL + 8));
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);
glFlush();
}
I'm trying to replicate the iOS "suck" effect using the CSS3 -webkit-transform: matrix3d() property.
However, I can't manage the curved edges like in the picture. The closest I have come on my own is the following:
-webkit-transform: matrix3d(0.85, 0.0678, 0, 0, 2.37, 0.85, -1.36, -0.0019, 0, 0, -1.53, -3.73, 0, 0, 0.34, 1);
Here is the jsfiddle result.
How can I do the transformation like in the picture? Note how the right and left edges are curved.
I've done some research on CSS3 transformations. With the matrix3d property you can only apply linear transformations, which don't let you curve anything; they cover shear, scale, and translation.
However, an experimental technology lets you apply non-linear transformations, so you can warp, curl, etc. any object. This requires writing shaders, so you are writing code for the GPU.
Adobe has CSS FilterLab to demonstrate this. Thanks to it, I managed to apply the transformation I wanted. Here is the screenshot:
And here is the code that achieves it:
-webkit-filter:
custom(url(shaders/vertex/warp.vs) mix(url(shaders/fragment/warp.fs) normal source-atop), 20 20 border-box, k array(-0.429, -0.471, 467, -0.286, -0.507, 0, -0.086, -0.507, 0, 0.15, -0.514, 0, -0.407, -0.086, 0, -0.021, -0.171, 0, 0.193, -0.171, 0, 0.364, -0.171, 0, 0.036, 0.179, 0, 0.179, 0.171, 0, 0.35, 0.179, 0, 0.464, 0.171, 0, 0.2, 0.5, 0, 0.279, 0.5, 0, 0.414, 0.493, 0, 0.5, 0.5, 0), matrix perspective(1000) scale(1) rotateX(0deg) rotateY(0deg) rotateZ(0deg), useColoredBack 1, backColor 1 1 1 1);
You can test it yourself after enabling experimental features with this link: http://html.adobe.com/webplatform/graphics/customfilters/cssfilterlab/#suckeffect
Can I change the size of a polygon which I defined by points?
var pierwszy = new Kinetic.Polygon({
points: [0, 0, 150, 0, 80, 150, 0, 150],
fillPatternImage: images.img1,
stroke: 'black',
strokeWidth: 5,
});
I have tried just changing the points and adding them to the tween attributes, but it doesn't work.
scaleX and scaleY work very well, but the background image gets blurry.
Any ideas?
I don't know if this is useful for you. In my situation, I wanted to change the shape of the polygon with a tween:
poly1= new Kinetic.Polygon({
points: [0, 0, 150, 0, 80, 150, 0, 150],
fill: shadowLightColor,
stroke: '#bbbbbb',
strokeWidth: 1
});
layer.add(poly1);
poly1.tween = new Kinetic.Tween({
node: poly1,
duration: 1,
points: [0, 0, 300, 0, 200, 150, 0, 150],
easing: Kinetic.Easings.StrongEaseInOut
}).play();