I use the MSER algorithm with OpenCV and find some rectangular regions,
and then I want to blur the inside of those rectangles.
My rectangles are a vector of (x, y, width, height), but dilate and erode need an InputArray src.
How can I transform the vector into an InputArray src?
Here is my code.
vector< vector< Point> > contours;
vector< Rect> bboxes;
Rect MserROI;
Ptr< MSER> mser = MSER::create(21, (int)(0.00002*textImg.cols*textImg.rows), (int)(0.05*textImg.cols*textImg.rows), 1, 0.7);
mser->detectRegions(textImg, contours, bboxes);
for (int i = 0; i < bboxes.size(); i++)
{
cout << bboxes[i] << '\n';
rectangle(inImg, bboxes[i], CV_RGB(0, 0, 0));
MserROI = bboxes[i];
dilate(MserROI, Mser_dil, Mat(), Point(-1, -1), 2); // error: MserROI is a Rect, not a Mat/InputArray
}
I infer that you want to blur the part of the image that is inside the rectangle.
If that's the case, then you need to correct the way you're declaring your ROI.
If inImg is a Mat, then you can declare your ROI as follows:
for (int i = 0; i < bboxes.size(); i++)
{
rectangle(inImg, bboxes[i], CV_RGB(0, 0, 0));
Mat MserROIimg = inImg(bboxes[i]);
dilate(MserROIimg, Mser_dil, Mat(), Point(-1, -1), 2); // the ROI Mat is a valid InputArray
}
In your code, you haven't mentioned where you've declared Mser_dil, but if your error pertains to the ROI declaration, then this should work for you.
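As for the actual blurring: since a Mat ROI is just a view into the full image, you can blur it in place. A minimal sketch, assuming inImg is a CV_8UC3 Mat and bboxes comes from detectRegions as above (the 15x15 kernel size is an arbitrary choice):
// GaussianBlur works in place, and the ROI Mat shares memory with inImg,
// so the blurred pixels are written straight back into the full image.
for (size_t i = 0; i < bboxes.size(); i++)
{
    Mat roi = inImg(bboxes[i]);
    GaussianBlur(roi, roi, Size(15, 15), 0);
}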
I'm applying a rotation matrix to the Stewart platform joints to get their positions referenced to the base.
The code used is here:
for (int i=0; i<6; i++) {
float mx = baseRadius*cos(radians(baseAngles[i]));
float my = baseRadius*sin(radians(baseAngles[i]));
baseJoint[i] = new PVector(mx, my, 0);
}
for (int i=0; i<6; i++) {
float mx = platformRadius*cos(radians(platformAngles[i]));
float my = platformRadius*sin(radians(platformAngles[i]));
platformJoint[i] = new PVector(mx, my, 0);
q[i] = new PVector(0, 0, 0);
l[i] = new PVector(0, 0, 0);
A[i] = new PVector(0, 0, 0);
}
for (int i=0; i<6; i++) {
// rotation
q[i].x = cos(rotation.z)*cos(rotation.y)*platformJoint[i].x +
(-sin(rotation.z)*cos(rotation.x)+cos(rotation.z)*sin(rotation.y)*sin(rotation.x))*platformJoint[i].y +
(sin(rotation.z)*sin(rotation.x)+cos(rotation.z)*sin(rotation.y)*cos(rotation.x))*platformJoint[i].z;
q[i].y = sin(rotation.z)*cos(rotation.y)*platformJoint[i].x +
(cos(rotation.z)*cos(rotation.x)+sin(rotation.z)*sin(rotation.y)*sin(rotation.x))*platformJoint[i].y +
(-cos(rotation.z)*sin(rotation.x)+sin(rotation.z)*sin(rotation.y)*cos(rotation.x))*platformJoint[i].z;
q[i].z = -sin(rotation.y)*platformJoint[i].x +
cos(rotation.y)*sin(rotation.x)*platformJoint[i].y +
cos(rotation.y)*cos(rotation.x)*platformJoint[i].z;
// translation
q[i].add(PVector.add(translation, initialHeight));
l[i] = PVector.sub(q[i], baseJoint[i]);
}
Each point is correctly initialized, getting its position from the angles. Given a rotation vector, the result is a platform rotated about its own center.
What I'd like to do is perform the rotation about an arbitrary point (offset by x, y, z from the platform center) that I provide.
I've thought about 2 methods:
Geometric calculation of each joint with respect to the provided point (complex if it's also not on the same level), applying this calculation to the base as well so that their centers stay perpendicular.
Apply an additional rotation matrix translated by a vector.
I'd like a tip on which approach is correct (and computationally light), or whether there is already a known way to move the rotation center of these points.
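For what it's worth, the usual trick for rotating about an arbitrary pivot is: translate the point into the pivot's frame, rotate, translate back. A minimal C++ sketch (Vec3 and rotateAboutPivot are hypothetical names, and R stands for the same 3x3 rotation matrix built from rotation.x/y/z above):
#include <array>

struct Vec3 { float x, y, z; };

// R is a 3x3 rotation matrix in row-major order, p the point to rotate,
// pivot the arbitrary rotation center.
Vec3 rotateAboutPivot(const std::array<float, 9>& R, Vec3 p, Vec3 pivot) {
    // 1) express p relative to the pivot
    Vec3 d { p.x - pivot.x, p.y - pivot.y, p.z - pivot.z };
    // 2) rotate that offset
    Vec3 r {
        R[0]*d.x + R[1]*d.y + R[2]*d.z,
        R[3]*d.x + R[4]*d.y + R[5]*d.z,
        R[6]*d.x + R[7]*d.y + R[8]*d.z
    };
    // 3) translate back into world coordinates
    return { r.x + pivot.x, r.y + pivot.y, r.z + pivot.z };
}
Computationally this adds only one subtraction and one addition per joint, so it is essentially as cheap as the rotation you already perform.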
Update
See rationale at the end of my question below
Using WebGL2 I can access a texel by its denormalized coordinates (sorry, I don't know the right lingo for this). That means I don't have to scale them down to 0-1 like I do in texture2D().
However, the inputs to the fragment shader are still vec2/vec3 in normalized values.
Is there a way to declare in/out variables in the Vertex and Frag shaders so that I don't have to scale the coordinates?
somewhere in vertex shader:
...
out vec2 TextureCoordinates;
somewhere in frag shader:
...
in vec2 TextureCoordinates;
I would like for TextureCoordinates to be ivec2 and already scaled.
This question and all my other questions on WebGL relate to general-purpose computing using WebGL. We are trying to do tensor (multi-D matrix) operations using WebGL.
We map our data in a few ways to a texture. The simplest approach we follow is -- assuming we can access our data as a flat array -- to lay it out along the texture's width and go up the texture's height until we're done.
Since our thinking, logic, and calculations are all based on tensor/matrix indices -- inside the fragment shader -- we have to map back and forth between the X-Y texture coordinates and those indices. The intermediate step is to calculate an offset for a given texel position; from that offset we can calculate the matrix indices using its strides.
Calculating an offset in WebGL 1 for very large textures seems to take much longer than in WebGL 2 with integer coordinates. See below:
WebGL 1 offset calculation
int coordsToOffset(vec2 coords, int width, int height) {
float s = coords.s * float(width);
float t = coords.t * float(height);
int offset = int(t) * width + int(s);
return offset;
}
vec2 offsetToCoords(int offset, int width, int height) {
int t = offset / width;
int s = offset - t*width;
vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height);
return coords;
}
WebGL 2 offset calculation in the presence of int coords
int coordsToOffset(ivec2 coords, int width) {
return coords.t * width + coords.s;
}
ivec2 offsetToCoords(int offset, int width) {
int t = offset / width;
int s = offset - t*width;
return ivec2(s,t);
}
It should be clear that for a series of large texture operations we're saving hundreds of thousands of operations just on the offset/coords calculation.
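For concreteness, the offset-to-indices step (once we have the flat offset) is plain row-major stride math; a small C++ sketch with a hypothetical shape:
#include <vector>

// Convert a flat row-major offset into tensor indices.
// E.g. for shape {2, 3, 4}, offset 17 maps to indices {1, 1, 1}.
std::vector<int> offsetToIndices(int offset, const std::vector<int>& shape) {
    std::vector<int> idx(shape.size());
    for (int d = static_cast<int>(shape.size()) - 1; d >= 0; --d) {
        idx[d] = offset % shape[d];
        offset /= shape[d];
    }
    return idx;
}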
It's not clear why you want to do what you're trying to do. It would be better to ask something like "I'm trying to draw an image / implement post-processing glow / do ray tracing / ... and to do that I want to use un-normalized texture coordinates because ..." and then we can tell you if your solution is going to work and how to solve it.
In any case, passing int or unsigned int or ivec2/3/4 or uvec2/3/4 as a varying is supported, but they are not interpolated; you have to declare them as flat.
Still, you can pass un-normalized values as float or vec2/3/4 and then convert to int or ivec2/3/4 in the fragment shader.
The other issue is that you'll get no filtering with texelFetch, the function that takes texel coordinates instead of normalized texture coordinates. It just returns the exact value of a single texel and does not support filtering like the normal texture function.
Example:
function main() {
const gl = document.querySelector('canvas').getContext('webgl2');
if (!gl) {
return alert("need webgl2");
}
const vs = `#version 300 es
in vec4 position;
in ivec2 texelcoord;
out vec2 v_texcoord;
void main() {
v_texcoord = vec2(texelcoord);
gl_Position = position;
}
`;
const fs = `#version 300 es
precision mediump float;
in vec2 v_texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main() {
outColor = texelFetch(tex, ivec2(v_texcoord), 0);
}
`;
// compile shaders, link program, look up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
// create buffers via gl.createBuffer, gl.bindBuffer, gl.bufferData)
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
position: {
numComponents: 2,
data: [
-.5, -.5,
.5, -.5,
0, .5,
],
},
texelcoord: {
numComponents: 2,
data: new Int32Array([
0, 0,
15, 0,
8, 15,
]),
}
});
// make a 16x16 texture
const ctx = document.createElement('canvas').getContext('2d');
ctx.canvas.width = 16;
ctx.canvas.height = 16;
for (let i = 23; i > 0; --i) {
ctx.fillStyle = `hsl(${i / 23 * 360 | 0}, 100%, ${i % 2 ? 25 : 75}%)`;
ctx.beginPath();
ctx.arc(8, 15, i, 0, Math.PI * 2, false);
ctx.fill();
}
const tex = twgl.createTexture(gl, { src: ctx.canvas });
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// no need to set uniforms since they default to 0
// and only one texture which is already on texture unit 0
gl.drawArrays(gl.TRIANGLES, 0, 3);
}
main();
<canvas></canvas>
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
So in response to your updated question it's still not clear what you want to do. Why do you want to pass varyings to the fragment shader? Can't you just do whatever math you want in the fragment shader itself?
Example:
uniform sampler2D tex;
out float result;
// sum all the values in the texture
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(tex, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
sum4 += texelFetch(tex, ivec2(x, y), 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Example 2:
uniform isampler2D indices;
uniform sampler2D data;
out float result;
// sum only the values in data pointed to by indices
vec4 sum4 = vec4(0);
ivec2 texDim = textureSize(indices, 0);
for (int y = 0; y < texDim.y; ++y) {
for (int x = 0; x < texDim.x; ++x) {
ivec2 index = texelFetch(indices, ivec2(x, y), 0).xy;
sum4 += texelFetch(data, index, 0);
}
}
result = sum4.x + sum4.y + sum4.z + sum4.w;
Note that I'm also not an expert in GPGPU, but I have a hunch the code above is not the fastest way because I believe parallelization happens based on output. The code above has only 1 output, so no parallelization? It would be easy to change it so that it takes a block ID, tile ID, or area ID as input and computes just the sum for that area. Then you'd write out a larger texture with the sum of each block and finally sum the block sums.
Also, dependent and non-uniform texture reads are a known perf issue. The first example reads the texture in order, which is cache friendly. The second example reads the texture in a random order (specified by indices), which is not cache friendly.
I'm trying to blur a QImage alpha channel. My current implementation uses the deprecated alphaChannel() method and is slow.
QImage blurImage(const QImage & image, double radius)
{
QImage newImage = image.convertToFormat(QImage::Format_ARGB32);
QImage alpha = newImage.alphaChannel();
QImage blurredAlpha = alpha;
for (int x = 0; x < alpha.width(); x++)
{
for (int y = 0; y < alpha.height(); y++)
{
uint color = calculateAverageAlpha(x, y, alpha, radius);
blurredAlpha.setPixel(x, y, color);
}
}
newImage.setAlphaChannel(blurredAlpha);
return newImage;
}
I also tried to implement it using QGraphicsBlurEffect, but it doesn't affect alpha.
What is the proper way to blur a QImage alpha channel?
I have faced a similar question about pixel read/write access:
Invert your loops. An image is laid out in memory as a succession of rows, so you should iterate first over the height, then over the width.
Use QImage::scanLine to access the data, rather than the expensive QImage::pixel and QImage::setPixel. Pixels in a scanline (i.e. a row) are guaranteed to be consecutive.
Your code will look like:
for (int ii = 0; ii < image.height(); ii++) {
uchar* scan = image.scanLine(ii);
int depth =4;
for (int jj = 0; jj < image.width(); jj++) {
//it is in fact an rgba
QRgb* rgbpixel = reinterpret_cast<QRgb*>(scan + jj*depth);
QColor color = QColor::fromRgba(*rgbpixel); // fromRgba keeps the alpha; the QColor(QRgb) constructor drops it
int alpha = calculateAverageAlpha(ii, jj, color, image);
color.setAlpha(alpha);
//write
*rgbpixel = color.rgba();
}
}
You can go further and optimize the computation of the alpha average. Let's look at the sum of the pixels within a radius. The sum of the alpha values in the radius around (x,y) is s(x,y). When you move one pixel in either direction, a single line is added while a single line is removed. Let's say you move horizontally: if l(x,y) is the sum of the vertical line of length 2*radius centered around (x,y), you have
s(x + 1, y) = s(x, y) + l(x + r + 1, y) - l(x - r, y)
This allows you to efficiently compute a matrix of sums (then averages, by dividing by the number of pixels) in a single first pass.
I suspect this kind of optimization is already implemented in a much better way in libraries such as OpenCV, so I would encourage you to use existing OpenCV functions if you wish to save time.
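For reference, a rough sketch of that running-window idea reduced to one dimension (the names and the border clamping are just illustrative; the 2-D version keeps the vertical line sums l(x, y) up to date the same way):
#include <algorithm>
#include <cstdint>
#include <vector>

// Running box sum over one row of alpha values: sums[x] holds the sum of
// row[x - radius .. x + radius], clamped to the row; each step adds one
// entering value and removes one leaving value instead of re-summing.
std::vector<int> runningBoxSum(const std::vector<uint8_t>& row, int radius)
{
    const int w = static_cast<int>(row.size());
    std::vector<int> sums(w, 0);
    if (w == 0)
        return sums;
    int s = 0;
    for (int x = 0; x <= std::min(radius, w - 1); ++x)
        s += row[x];                                        // initial window around x = 0
    sums[0] = s;
    for (int x = 1; x < w; ++x) {
        if (x + radius < w)      s += row[x + radius];      // entering pixel
        if (x - radius - 1 >= 0) s -= row[x - radius - 1];  // leaving pixel
        sums[x] = s;
    }
    return sums;
}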
I made a simple graphical user interface with Qt and I use OpenCV to do processing on the webcam stream, e.g. Canny edge detection.
I'm trying to implement a switch between two displays of the webcam:
1) "normal mode": a grayscale display where the webcam shows the border-detection video in grayscale
2) "greenMode": a green-and-black display where the webcam shows the same "border detected" output, but in green and black.
The first one (grayscale) works. Here's the result:
Now I have problems with the second one. Here's the part of the code where I can't find a solution:
// Init capture
capture = cvCaptureFromCAM(0);
first_image = cvQueryFrame( capture );
// Init current qimage
current_qimage = QImage(QSize(first_image->width,first_image->height),QImage::Format_RGB32);
IplImage* frame = cvQueryFrame(capture);
int w = frame->width;
int h = frame->height;
if (greenMode) // greenMode : black and green result
{
current_image = cvCreateImage(cvGetSize(frame),8,3);
cvCvtColor(frame,current_image,CV_BGR2RGB);
for(int j = 0; j < h; j++)
{
for(int i = 0; i < w; i++)
{
current_qimage.setPixel(i,j,qRgb(current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1]));
}
}
}
else // normal Mode : grayscale result WHICH WORKS
{
current_image = cvCreateImage(cvGetSize(frame),8,1);
cvCvtColor(frame,current_image,CV_BGR2GRAY);
for(int j = 0; j < h; j++)
{
for(int i = 0; i < w; i++)
{
current_qimage.setPixel(i,j,qRgb(current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1]));
}
}
}
gaussianfilter(webcam_off);
border_detect(webcam_off);
cvReleaseImage(&current_image);
repaint();
The "greenMode" doesn't seem to put good pixels with this "setPixel" (I take the middle rgb value : current_image->imageData[i+j*w+1]) :
current_image = cvCreateImage(cvGetSize(frame),8,3);
cvCvtColor(frame,current_image,CV_BGR2RGB);
for(int j = 0; j < h; j++)
{
for(int i = 0; i < w; i++)
{
current_qimage.setPixel(i,j,qRgb(current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1]));
}
}
Here's what I get :
Firstly, the output is not green and black, and secondly, it's zoomed compared to the grayscale image.
Do you have any clues on how to get the greenMode working?
qRgb(current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1],current_image->imageData[i+j*w+1])
You're using an identical value for all three RGB color components. R == G == B will always result in grey.
To convert an RGB value to green/black, you could for example convert to greyscale (using the luminosity method) and then tint it green:
const int v = qRound( 0.21 * pixel.red() + 0.71 * pixel.green() + 0.07 * pixel.blue() );
setPixel( i, j, qRgb( 0, v, 0 ) );
(There are probably more sophisticated methods for the tinting).
For the scaling, I assume the error occurs when calculating the index into current_image. You're using the same (i+j*w+1) for both images, but the grey image has 1 channel and the second one has 3 (the third cvCreateImage argument), so the latter has two more values per pixel.
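A rough sketch of what the greenMode loop could look like with 3-channel indexing (using widthStep for the row stride; this is only an illustration against the snippet above, not tested code):
// 3 bytes per pixel in the RGB image, and rows may be padded,
// so step through rows with widthStep instead of i + j*w.
for (int j = 0; j < h; j++)
{
    const uchar* row = reinterpret_cast<const uchar*>(current_image->imageData)
                       + j * current_image->widthStep;
    for (int i = 0; i < w; i++)
    {
        const uchar r = row[i * 3 + 0];
        const uchar g = row[i * 3 + 1];
        const uchar b = row[i * 3 + 2];
        const int v = qRound(0.21 * r + 0.71 * g + 0.07 * b); // luminosity
        current_qimage.setPixel(i, j, qRgb(0, v, 0));         // black-to-green tint
    }
}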
I am trying to display a mathematical surface f(x,y) defined on a regular XY mesh using OpenGL and C++ in an efficient manner:
struct XYRegularSurface {
double x0, y0;
double dx, dy;
int nx, ny;
XYRegularSurface(int nx_, int ny_) : nx(nx_), ny(ny_) {
z = new float[nx*ny];
}
~XYRegularSurface() {
delete [] z;
}
float& operator()(int ix, int iy) {
return z[ix*ny + iy];
}
float x(int ix, int iy) {
return x0 + ix*dx;
}
float y(int ix, int iy) {
return y0 + iy*dy;
}
float zmin();
float zmax();
float* z;
};
Here is my OpenGL paint code so far:
void color(QColor & col) {
float r = col.red()/255.0f;
float g = col.green()/255.0f;
float b = col.blue()/255.0f;
glColor3f(r,g,b);
}
void paintGL_XYRegularSurface(XYRegularSurface &surface, float zmin, float zmax) {
float x, y, z;
QColor col;
glBegin(GL_QUADS);
for(int ix = 0; ix < surface.nx - 1; ix++) {
for(int iy = 0; iy < surface.ny - 1; iy++) {
x = surface.x(ix,iy);
y = surface.y(ix,iy);
z = surface(ix,iy);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix + 1, iy);
y = surface.y(ix + 1, iy);
z = surface(ix + 1,iy);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix + 1, iy + 1);
y = surface.y(ix + 1, iy + 1);
z = surface(ix + 1,iy + 1);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
x = surface.x(ix, iy + 1);
y = surface.y(ix, iy + 1);
z = surface(ix,iy + 1);
col = rainbow(zmin, zmax, z);color(col);
glVertex3f(x, y, z);
}
}
glEnd();
}
The problem is that this is slow: with nx = ny = 1000, the fps is roughly 1.
How do I optimize this to be faster?
EDIT: following your suggestion (thanks!) regarding VBOs,
I added:
float* XYRegularSurface::xyz() {
float* data = new float[3*nx*ny];
long i = 0;
for(int ix = 0; ix < nx; ix++) {
for(int iy = 0; iy < ny; iy++) {
data[i++] = x(ix,iy);
data[i++] = y(ix,iy);
data[i++] = z[ix*ny + iy]; // z is indexed per vertex, not per float
}
}
return data;
}
I think I understand how I can create a VBO, initialize it with xyz() and send it to the GPU in one go, but how do I use the VBO when drawing? I understand that this can be done either in the vertex shader or via glDrawElements? I assume the latter is easier? If so: I do not see any QUAD mode in the documentation for glDrawElements!?
Edit2:
So I can loop through all nx*ny quads and draw each one by:
GLuint indices[4];
// ... set indices
glDrawElements(GL_QUADS, 4, GL_UNSIGNED_INT, indices);
?
1/. Use display lists to cache GL commands, avoiding recalculation of the vertices and the expensive per-vertex call overhead. If the data is updated, you need to look at client-side vertex arrays (not to be confused with VAOs). Now ignore this option...
2/. Use vertex buffer objects. Available as of GL 1.5.
Since you need VBOs for core profile anyway (i.e., modern GL), you can at least get to grips with this first.
Well, you've asked a rather open-ended question. I'd suggest using modern (3.0+) OpenGL for everything. The point of just about any new OpenGL feature is to provide a faster way to do things. Like everyone else is suggesting, use array (vertex) buffer objects and vertex array objects. Use an element array (index) buffer object too. Most GPUs have a 'post-transform cache', which stores the last few transformed vertices, but this can only be used when you call the glDraw*Elements family of functions. I also suggest you store a flat mesh in your VBO, where y=0 for each vertex, and sample the y from a heightmap texture in your vertex shader. If you do this, whenever the surface changes you will only need to update the heightmap texture, which is easier than updating the VBO. Use one of the floating point or integer texture formats for the heightmap, so you aren't restricted to having your values be between 0 and 1.
If so: I do not see any QUAD mode in the documentation for glDrawElements!?
If you want quads make sure you're looking at the GL 2.1-era docs, not the new stuff.
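Since GL_QUADS is gone from the core profile, the usual replacement is to index the grid as triangles, two per cell. A rough sketch of building such an index buffer for the layout produced by xyz() above (vertex (ix, iy) sits at index ix*ny + iy); buffer creation and attribute setup are omitted:
#include <vector>
#include <GL/gl.h>

// Two triangles per grid cell; neighbouring cells share vertex indices,
// which is what lets the post-transform cache mentioned above do its job.
std::vector<GLuint> buildGridIndices(int nx, int ny) {
    std::vector<GLuint> indices;
    indices.reserve(6 * (nx - 1) * (ny - 1));
    for (int ix = 0; ix < nx - 1; ++ix) {
        for (int iy = 0; iy < ny - 1; ++iy) {
            GLuint i00 = ix * ny + iy;
            GLuint i10 = (ix + 1) * ny + iy;
            GLuint i11 = (ix + 1) * ny + (iy + 1);
            GLuint i01 = ix * ny + (iy + 1);
            indices.insert(indices.end(), { i00, i10, i11 }); // first triangle
            indices.insert(indices.end(), { i00, i11, i01 }); // second triangle
        }
    }
    return indices;
}

// After uploading the vertices and these indices to a VBO / element buffer
// and setting up the attribute pointers, the whole surface is one call:
// glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);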