I would like to display sample6 of the OptiX SDK in a QGLWidget.
My application has only three QSliders for the rotation around the X, Y and Z axes, plus the QGLWidget.
From my understanding, paintGL() gets called whenever updateGL() is triggered by my slider or mouse events. Then I build a rotation matrix and apply it to the PinholeCamera in order to trace the scene with the transformed camera coordinates, right?
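My slider wiring looks roughly like this (a sketch; the slot and member names are just placeholders):
// Connected in the widget's constructor:
//   connect( m_sliderX, SIGNAL(valueChanged(int)), this, SLOT(setRotationX(int)) );
void MyGLWidget::setRotationX( int degrees )
{
    m_rotX = degrees; // stored angle, later used to build the rotation matrix
    updateGL();       // schedules a repaint, which calls paintGL()
}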
When tracing is finished, I fetch the output buffer and use it to draw the pixels with glDrawPixels(), just like in GLUTDisplay.cpp from the OptiX framework.
But my issue is that the image is skewed/distorted. For example, I wanted to display a ball, but the ball comes out extremely flattened, although the rotation works fine.
When I zoom out, the image seems to scale much more slowly horizontally than vertically.
I am almost sure (or at least hope) that it has something to do with the gl...() functions not being used properly. What am I missing? Can someone help me out?
For completeness, here are my initializeGL() and paintGL() functions.
void MyGLWidget::initializeGL()
{
    glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );

    m_scene = new MeshViewer();
    m_scene->setMesh( ( std::string( sutilSamplesDir() ) + "/ball.obj" ).c_str() );

    int buffer_width, buffer_height;

    // Set up scene
    SampleScene::InitialCameraData initial_camera_data;
    m_scene->setUseVBOBuffer( false );
    m_scene->initScene( initial_camera_data );

    int m_initial_window_width  = 400;
    int m_initial_window_height = 400;
    if( m_initial_window_width > 0 && m_initial_window_height > 0 )
        m_scene->resize( m_initial_window_width, m_initial_window_height );

    // Initialize camera according to scene params
    m_camera = new PinholeCamera( initial_camera_data.eye,
                                  initial_camera_data.lookat,
                                  initial_camera_data.up,
                                  -1.0f, // hfov is ignored when using keep vertical
                                  initial_camera_data.vfov,
                                  PinholeCamera::KeepVertical );

    Buffer buffer = m_scene->getOutputBuffer();
    RTsize buffer_width_rts, buffer_height_rts;
    buffer->getSize( buffer_width_rts, buffer_height_rts );
    buffer_width  = static_cast<int>( buffer_width_rts );
    buffer_height = static_cast<int>( buffer_height_rts );

    float3 eye, U, V, W;
    m_camera->getEyeUVW( eye, U, V, W );
    SampleScene::RayGenCameraData camera_data( eye, U, V, W );

    // Initial compilation
    m_scene->getContext()->compile();

    // Accel build
    m_scene->trace( camera_data );
    m_scene->getContext()->launch( 0, 0 );

    // Initialize state
    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glOrtho( 0, 1, 0, 1, -1, 1 );
    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();
    glViewport( 0, 0, buffer_width, buffer_height );
}
And here is paintGL()
void MyGLWidget::paintGL()
{
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    glLoadIdentity();

    float3 eye, U, V, W;
    m_camera->getEyeUVW( eye, U, V, W );
    SampleScene::RayGenCameraData camera_data( eye, U, V, W );

    {
        nvtx::ScopedRange r( "trace" );
        m_scene->trace( camera_data );
    }

    // Draw the resulting image
    Buffer buffer = m_scene->getOutputBuffer();
    RTsize buffer_width_rts, buffer_height_rts;
    buffer->getSize( buffer_width_rts, buffer_height_rts );
    int buffer_width  = static_cast<int>( buffer_width_rts );
    int buffer_height = static_cast<int>( buffer_height_rts );
    RTformat buffer_format = buffer->getFormat();

    GLvoid* imageData = buffer->map();
    assert( imageData );

    GLenum gl_data_type = GL_UNSIGNED_BYTE; // defaults; the real values
    GLenum gl_format    = GL_RGBA;          // are chosen in the switch below
    switch( buffer_format ) {
        /* ... set gl_data_type and gl_format ... */
    }

    RTsize elementSize = buffer->getElementSize();
    int align = 1;
    if     ( ( elementSize % 8 ) == 0 ) align = 8;
    else if( ( elementSize % 4 ) == 0 ) align = 4;
    else if( ( elementSize % 2 ) == 0 ) align = 2;
    glPixelStorei( GL_UNPACK_ALIGNMENT, align );

    NVTX_RangePushA( "glDrawPixels" );
    glDrawPixels( static_cast<GLsizei>( buffer_width ),
                  static_cast<GLsizei>( buffer_height ),
                  gl_format, gl_data_type, imageData );
    NVTX_RangePop();

    buffer->unmap();
}
After hours of debugging, I found out that I had forgotten to set the camera parameters correctly; it had nothing to do with the OpenGL calls.
My U coordinate, the horizontal axis of the view plane, was messed up, while the V, W and eye coordinates were right.
After I added these lines in initializeGL(),
m_camera->setParameters( initial_camera_data.eye,
                         initial_camera_data.lookat,
                         initial_camera_data.up,
                         initial_camera_data.vfov,
                         initial_camera_data.vfov,
                         PinholeCamera::KeepVertical );
everything was right.
I used Qwt for my project, with QwtPlotMagnifier for zooming. I want to zoom in relative to the mouse cursor. Can you help me?
I had the same problem and could not find any answer, so here is mine.
Based on this post: Calculating view offset for zooming in at the position of the mouse cursor
In order to implement a Google-Maps-style zoom, you have to inherit from QwtPlotMagnifier and reimplement widgetWheelEvent(), to store the cursor position whenever a scroll happens, and rescale(), to change the behavior of the zoom.
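The class declaration might look like this (a sketch; the stored cursor position is the only extra state needed):
class CenterMouseMagnifier : public QwtPlotMagnifier
{
public:
    explicit CenterMouseMagnifier( QWidget *canvas )
        : QwtPlotMagnifier( canvas ) {}

protected:
    virtual void widgetWheelEvent( QWheelEvent *wheelEvent );
    virtual void rescale( double factor );

private:
    QPoint cursorPos; // cursor position captured on each wheel event
};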
// widgetWheelEvent method
void CenterMouseMagnifier::widgetWheelEvent( QWheelEvent *wheelEvent )
{
    this->cursorPos = wheelEvent->pos();
    QwtPlotMagnifier::widgetWheelEvent( wheelEvent );
}
For the rescale method, I took the original source code and modified it. You need the canvas' QwtScaleMap object to transform the mouse cursor coordinates into the axis coordinates of your plot. Finally, you just apply the formula given in the other post.
// rescale method
void CenterMouseMagnifier::rescale( double factor )
{
    QwtPlot* plt = plot();
    if ( plt == nullptr )
        return;

    factor = qAbs( factor );
    if ( factor == 1.0 || factor == 0.0 )
        return;

    bool doReplot = false;
    const bool autoReplot = plt->autoReplot();
    plt->setAutoReplot( false );

    for ( int axisId = 0; axisId < QwtPlot::axisCnt; axisId++ )
    {
        if ( isAxisEnabled( axisId ) )
        {
            const QwtScaleMap scaleMap = plt->canvasMap( axisId );

            double v1 = scaleMap.s1(); // v1 is the bottom value of the axis scale
            double v2 = scaleMap.s2(); // v2 is the top value of the axis scale

            if ( scaleMap.transformation() )
            {
                // the coordinate system of the paint device is always linear
                v1 = scaleMap.transform( v1 ); // scaleMap.p1()
                v2 = scaleMap.transform( v2 ); // scaleMap.p2()
            }

            double c = 0; // position of the cursor in axis coordinates
            if ( axisId == QwtPlot::xBottom ) // we only work with these two axes
                c = scaleMap.invTransform( cursorPos.x() );
            if ( axisId == QwtPlot::yLeft )
                c = scaleMap.invTransform( cursorPos.y() );

            const double center  = 0.5 * ( v1 + v2 );
            const double width_2 = 0.5 * ( v2 - v1 ) * factor;
            const double newCenter = c - factor * ( c - center );

            v1 = newCenter - width_2;
            v2 = newCenter + width_2;

            if ( scaleMap.transformation() )
            {
                v1 = scaleMap.invTransform( v1 );
                v2 = scaleMap.invTransform( v2 );
            }

            plt->setAxisScale( axisId, v1, v2 );
            doReplot = true;
        }
    }

    plt->setAutoReplot( autoReplot );
    if ( doReplot )
        plt->replot();
}
This works fine for me.
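To use it, attach the magnifier to the plot's canvas, something like this (a sketch, assuming a QwtPlot* named plot):
// The magnifier becomes a child of the canvas, so no explicit delete is needed.
new CenterMouseMagnifier( plot->canvas() );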
Based on this forum post:
bool ParentWidget::eventFilter( QObject *o, QEvent *e )
{
    if ( e->type() == QEvent::MouseButtonPress ) // do zoom on a mouse click
    {
        QMouseEvent *mouseEvent = static_cast<QMouseEvent*>( e );
        if ( ( mouseEvent->buttons() & Qt::LeftButton ) == Qt::LeftButton )
        {
            // build a rectangle around the mouse cursor position
            QRectF widgetRect( mouseEvent->pos().x() - 50, mouseEvent->pos().y() - 50, 100, 100 );

            const QwtScaleMap xMap = plot->canvasMap( zoom->xAxis() );
            const QwtScaleMap yMap = plot->canvasMap( zoom->yAxis() );

            // translate the mouse rectangle to a zoom rectangle
            QRectF scaleRect = QRectF(
                QPointF( xMap.invTransform( widgetRect.x() ),     yMap.invTransform( widgetRect.y() ) ),
                QPointF( xMap.invTransform( widgetRect.right() ), yMap.invTransform( widgetRect.bottom() ) ) );

            zoom->zoom( scaleRect );
        }
    }
    return QWidget::eventFilter( o, e ); // pass other events on
}
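For the filter to be called at all, it has to be installed on the plot's canvas, e.g. (a sketch, assuming ParentWidget holds the QwtPlot* plot used above):
// Route the canvas' mouse events through ParentWidget::eventFilter().
plot->canvas()->installEventFilter( this );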
When I try to create a 2x8 R8 texture in WebGL2, I get an error. This doesn't happen for a 4x8 texture. If I double the size of the input buffer compared to what I expect, the 2x8 texture succeeds.
Does WebGL2 have a 'column alignment' of 4 when creating/reading textures?
Here is some code that reproduces the issue. I tested it on Windows in both Chrome and Firefox:
function test_read(w) {
  let gl = document.createElement('canvas').getContext('webgl2');
  let h = 8;
  let data = new Uint8Array(w * h);
  data[5] = 5;

  let texture = gl.createTexture();
  let frameBuffer = gl.createFramebuffer();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, w, h, 0, gl.RED, gl.UNSIGNED_BYTE, data);

  if (gl.getError() !== gl.NO_ERROR) {
    return 'bad w=' + w;
  }
  return 'good w=' + w;
}

console.log(test_read(4)); // good w=4
console.log(test_read(2)); // bad w=2
The error code coming out is 0x502 (INVALID_OPERATION). A similar issue happens when reading textures that were created by expanding the buffer: it seems to expect a 'column alignment' of 4.
You need to set gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1).
The default UNPACK_ALIGNMENT is 4, which means WebGL expects every row of pixels to start on a multiple of 4 bytes. Since you're using R8 (1 byte per pixel) and a width of 2, each row is only 2 bytes long, so WebGL expects roughly 4 × 8 = 32 bytes of data while you supply only 16, hence the INVALID_OPERATION. When you change the width to 4 (or double the buffer), it starts working.
function test_read(w) {
  let gl = document.createElement('canvas').getContext('webgl2');
  let h = 8;
  let data = new Uint8Array(w * h);
  data[5] = 5;

  // ---=== ADDED ===---
  gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);

  let texture = gl.createTexture();
  let frameBuffer = gl.createFramebuffer();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, w, h, 0, gl.RED, gl.UNSIGNED_BYTE, data);

  if (gl.getError() !== gl.NO_ERROR) {
    return 'bad w=' + w;
  }
  return 'good w=' + w;
}

console.log(test_read(4)); // good w=4
console.log(test_read(2)); // good w=2
I use an OpenGL shader to apply a median filter to an image. I copy the input image into the in_fbo buffer. That all works fine.
QGLFramebufferObject *in_fbo, *out_fbo;

painter.begin( in_fbo ); // copy QImage to QGLFramebufferObject
painter.drawImage( 0, 0, image_in, 0, 0, width, height );
painter.end();

out_fbo->bind();
glViewport( 0, 0, nWidth, nHeight );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho( 0.0, nWidth, 0.0, nHeight, -1.0, 1.0 );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
glEnable( GL_TEXTURE_2D );
out_fbo->drawTexture( QPointF( 0.0, 0.0 ), in_fbo->texture(), GL_TEXTURE_2D );
But in the shader code I need to divide the vertex position by the width and height of the image, because texture coordinates are normalized to a range between 0 and 1.
How do I calculate the texture coordinates correctly?
// vertex shader
varying vec2 pos;
void main( void )
{
    pos = gl_Vertex.xy;
    gl_Position = ftransform();
}

// fragment shader
#extension GL_ARB_texture_rectangle : enable
uniform sampler2D texture0;
uniform int imgWidth;
uniform int imgHeight;
uniform int len;
varying vec2 pos;
#define MAX_LEN (100)
void main()
{
    float v[ MAX_LEN ];
    for ( int i = 0; i < len; i++ ) {
        vec2 posi = pos + float( i );
        posi.x = posi.x / float( imgWidth );
        posi.y = posi.y / float( imgHeight );
        v[i] = texture2D( texture0, posi ).r;
    }
    //
    // .... calculating new value m
    //
    gl_FragColor = vec4( m, m, m, 1.0 );
}
I did this before in OpenFrameworks, but the shader that works for a texture in OF does not work for a texture in Qt. I suppose that is because OF creates textures with textureTarget = GL_TEXTURE_RECTANGLE_ARB. The result of applying the shader above isn't correct: it isn't identical to the result of the old shader (there are a few pixels with different colors), and I don't know how to modify the shader above.
Old shaders:
// vertex
#version 120
#extension GL_ARB_texture_rectangle : enable
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}

// fragment
#version 120
#extension GL_ARB_texture_rectangle : enable
uniform sampler2D texture0;
uniform int len;
#define MAX_LEN (100)
void main() {
    vec2 pos = gl_TexCoord[0].xy;
    pos.x = int( pos.x );
    pos.y = int( pos.y );
    float v[ MAX_LEN ];
    for ( int i = 0; i < len; i++ ) {
        vec2 posi = pos + i;
        posi.x = int( posi.x + 0.5 ) + 0.5;
        posi.y = int( posi.y + 0.5 ) + 0.5;
        v[i] = texture2D( texture0, posi ).r;
    }
    //
    // .... calculating new value m
    //
    gl_FragColor = vec4( m, m, m, 1.0 );
}
OpenGL code from the OpenFrameworks lib:
texData.width = w;
texData.height = h;
texData.tex_w = w;
texData.tex_h = h;
texData.textureTarget = GL_TEXTURE_RECTANGLE_ARB;
texData.bFlipTexture = true;
texData.glType = GL_RGBA;
// create & setup FBO
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
// Create the render buffer for depth
glGenRenderbuffersEXT(1, &depthBuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, texData.tex_w, texData.tex_h);
// create & setup texture
glGenTextures(1, (GLuint *)(&texData.textureID)); // could be more than one, but for now, just one
glBindTexture(texData.textureTarget, (GLuint)(texData.textureID));
glTexParameterf(texData.textureTarget, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(texData.textureTarget, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameterf(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(texData.textureTarget, 0, texData.glType, texData.tex_w, texData.tex_h, 0, texData.glType, GL_UNSIGNED_BYTE, 0);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
// attach it to the FBO so we can render to it
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, texData.textureTarget, (GLuint)texData.textureID, 0);
I do not think you actually want to use the texture's dimensions to do this. From the sounds of things, this is a simple fullscreen image filter and you really just want the fragment coordinates mapped into the range [0.0, 1.0]. If this is the case, then gl_FragCoord.xy / viewport.xy, where viewport is a 2D uniform that holds the width and height of your viewport, ought to work for your texture coordinates (in the fragment shader).
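On the application side, that viewport uniform could be fed from Qt like this (a sketch; 'program' is assumed to be your linked and bound QGLShaderProgram, and nWidth/nHeight are the viewport size from your code above):
// Supply the viewport size so the fragment shader can normalize gl_FragCoord.
program.setUniformValue( "viewport", QVector2D( nWidth, nHeight ) );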
vec2 texCoord = vec2 (transformed_pos.x, transformed_pos.y) / transformed_pos.w * vec2 (0.5, 0.5) + vec2 (0.5, 0.5) may also work using the same principle: clip-space coordinates transformed into NDC and then mapped to texture-space. This approach will not properly account for texel centers ((0.5, 0.5) rather than (0.0, 0.0)), however, and can present problems when texture filtering is enabled and the wrap mode is not GL_CLAMP_TO_EDGE.
Hi, I am using QwtPlotSpectrogram to plot my array data, but the problem I am having is scaling. The qwt scales are distributed over the x and y intervals, and the axes depend on the pixels. For example, if I have a data set of 500x500 values and I want to plot it over 500mm x 500mm, it is perfect, because the values map directly onto the pixels. But if I want to plot the same 500x500 points over 300mm x 300mm, it is a mess, and the plot still shows 500x500. I know I have to manage the x and y axes, but I have no idea how to do that. I managed to play around a little and display 500x500 for a 250mm x 250mm area, but I cannot do it for other sizes.
My code is shown below:
class mydata: public QwtRasterData
{
    char filepath[35];
    QFile myfile;
    QVector<qint16> fileBuf;
    int pixel = 500; // the number of pixels in one row or column
    int dial  = 1000;

public:
    mydata()
    {
        setInterval( Qt::XAxis, QwtInterval( 0, pixel - 1 ) );
        setInterval( Qt::YAxis, QwtInterval( 0, pixel - 1 ) );
        setInterval( Qt::ZAxis, QwtInterval( -dial, dial ) );

        sprintf_s( filepath, "c:\\myfile.bin" );
        myfile.setFileName( filepath );
        if( !myfile.open( QIODevice::ReadOnly ) ) return;
        QDataStream data( &myfile );
        data.setByteOrder( QDataStream::LittleEndian );
        while( !data.atEnd() ) {
            qint16 x;
            data >> x;
            fileBuf.append( x );
        }
        myfile.close();
    }

    virtual double value( double x, double y ) const // shifted
    {
        int x_pos = static_cast<int>( x );
        int y_pos = static_cast<int>( y );
        // flip the y axis; (pixel - 1 - y_pos) keeps the index inside the buffer
        double c = fileBuf[ x_pos + ( pixel - 1 - y_pos ) * pixel ];
        return c;
    }
};
In short, I have the same number of pixels for different areas, but the axes are always represented the same way in the plot.
Here is a class that converts the double value of the scale into a string and displays that instead:
class MyScaleDraw: public QwtScaleDraw
{
public:
    MyScaleDraw()
    {
        setTickLength( QwtScaleDiv::MajorTick, 10 );
        setTickLength( QwtScaleDiv::MinorTick, 2 );
        setTickLength( QwtScaleDiv::MediumTick, 0 );

        setLabelRotation( 0 );
        setLabelAlignment( Qt::AlignLeft | Qt::AlignVCenter );
        setSpacing( 10 );
    }

    virtual QwtText label( double value ) const
    {
        return QwtText( QString::number( value * 0.75 ) ); // use any scaling factor you want
    }
};
Include the following code after creating the plot. For instance, a factor of 0.6 would relabel a 0..500 sample axis as 0..300 mm:
d_plot->setAxisScaleDraw( QwtPlot::xBottom, new MyScaleDraw() );
d_plot->setAxisScaleDraw( QwtPlot::yLeft, new MyScaleDraw() );
This solved my problem.
I want to draw a plane like in that picture.
Right now I am trying a vertex buffer and DrawPrimitive with D3DPT_LINESTRIP, but it is not the effect I want. Is there a more effective way to do this?
Please give me some advice. Thank you.
This could be an option. It is not optimal, but it would achieve that grid:
void DrawGrid( float32 Size, CColor Color, int32 GridX, int32 GridZ )
{
    // Check if the size of the grid is null
    if( Size <= 0 )
        return;

    // Calculate the data
    DWORD grid_color_aux = Color.GetUint32Argb();
    float32 GridXStep = Size / GridX;
    float32 GridZStep = Size / GridZ;
    float32 halfSize  = Size * 0.5f;

    // Set the attributes to the paint device
    m_pD3DDevice->SetTexture( 0, NULL );
    m_pD3DDevice->SetFVF( CUSTOMVERTEX::getFlags() );

    // Draw the lines of the X axis
    for( float32 i = -halfSize; i <= halfSize; i += GridXStep )
    {
        CUSTOMVERTEX v[] =
            { { i, 0.0f, -halfSize, grid_color_aux }, { i, 0.0f, halfSize, grid_color_aux } };
        m_pD3DDevice->DrawPrimitiveUP( D3DPT_LINELIST, 1, v, sizeof(CUSTOMVERTEX) );
    }

    // Draw the lines of the Z axis
    for( float32 i = -halfSize; i <= halfSize; i += GridZStep )
    {
        CUSTOMVERTEX v[] =
            { { -halfSize, 0.0f, i, grid_color_aux }, { halfSize, 0.0f, i, grid_color_aux } };
        m_pD3DDevice->DrawPrimitiveUP( D3DPT_LINELIST, 1, v, sizeof(CUSTOMVERTEX) );
    }
}
The CUSTOMVERTEX struct:
struct CUSTOMVERTEX
{
    float32 x, y, z;
    DWORD color;

    static unsigned int getFlags()
    {
        return D3DFVF_XYZ | D3DFVF_DIFFUSE; // position + diffuse color
    }
};
Note: this is only a grid of lines, so you still need to draw a solid plane in order to get a result that looks like what you want.
You can use DrawPrimitive with D3DPT_TRIANGLESTRIP for the plane, then draw the indexed lines afterwards with D3DPT_LINELIST and a depth bias. This way, even if the lines lie on the plane, you won't get any z-fighting.
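A sketch of the depth-bias part (assuming an IDirect3DDevice9* like the m_pD3DDevice above; the bias value usually needs tuning):
// Draw the solid plane first, then pull the grid lines slightly toward
// the camera so they pass the depth test without z-fighting.
float bias = -0.00002f;
m_pD3DDevice->SetRenderState( D3DRS_DEPTHBIAS, *reinterpret_cast<DWORD*>( &bias ) );
// ... DrawPrimitiveUP( D3DPT_LINELIST, ... ) calls for the grid go here ...
m_pD3DDevice->SetRenderState( D3DRS_DEPTHBIAS, 0 ); // restore the default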
I can recommend the book Introduction to 3D Programming with DirectX; it covers how to do this in great detail in Chapter 8, Section 4.