I've got some coordinates of a region of interest for object detection (they come from a model), and I want to draw the bounding box using these coordinates:
(x, y): coordinates of the top-left vertex
(x+w, y+h): coordinates of the lower-right vertex
But when I pass these parameters to cv2.rectangle(), it says that it needs int values, and I need to draw with the exact values, not the rounded ones.
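Pixel positions are discrete, so the coordinates do have to become integers at some point. OpenCV's rectangle does, however, support sub-pixel positions through its shift parameter, which treats the integer inputs as fixed point. Here is a minimal sketch of that approach using the C++ API (the Python cv2.rectangle accepts the same trailing arguments); draw_bbox is just a hypothetical helper name:

#include <opencv2/opencv.hpp>
#include <cmath>

// Draw a float-precision bounding box. With 'shift' fractional bits the
// integer inputs are interpreted as fixed point, so sub-pixel positions
// survive (visible when combined with anti-aliased lines).
void draw_bbox(cv::Mat &img, float x, float y, float w, float h)
    {
    const int shift = 4;                            // 4 bits -> 1/16 pixel precision
    const float s = float(1 << shift);
    cv::Point p1(int(std::lround(x * s)),       int(std::lround(y * s)));
    cv::Point p2(int(std::lround((x + w) * s)), int(std::lround((y + h) * s)));
    cv::rectangle(img, p1, p2, cv::Scalar(0, 255, 0), 1, cv::LINE_AA, shift);
    }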
How can I draw a filled elliptical sector using Bresenham's algorithm and a bitmap object with a DrawPixel method?
I have written a method for drawing an ellipse, but it uses symmetry and only passes through the first quadrant, so the algorithm is not suitable for sectors. Of course, I could write 8 loops, but I don't think that is the most elegant solution to the task.
With integer math the usual parametrization uses limiting edge lines (in CW or CCW direction) instead of your angles. So if you can convert the angles to such lines (you need sin and cos for that, but just once), you can use integer-math-based rendering. As mentioned in the comment, Bresenham is not a good approach for a sector of an ellipse: you would need to compute the internal iterator and counter state for the start point of the interpolation, and it would give you just the circumference points instead of a filled shape.
There are many approaches out there for this; here is a simple one:

1. convert the ellipse to a circle: simply rescale the smaller-radius axis
2. loop through the bbox of that circle: two simple nested for loops covering the square outscribed around the circle
3. check if the point is inside the circle: simply test x^2 + y^2 <= r^2 while the circle is centered at (0,0)
4. check if the point lies between the edge lines: it should be CW with respect to one edge and CCW with respect to the other. You can exploit the cross product for this (the polarity of its z coordinate tells you whether the point is CW or CCW against the tested edge line). This works only for slices up to 180 deg, so you also need to check the quadrants to avoid false negatives, but those are just a few ifs on top of it.
5. if all conditions are met, convert the point back to the ellipse and render
Here is a small C++ example of this:
void elliptic_arc(int x0,int y0,int rx,int ry,int a0,int a1,DWORD c)
    {
    // variables
    int x,y,r,          // point and radius in circular space
        xx,yy,rr,       // their squares
        xa,ya,xb,yb,    // a0,a1 edge points with radius r
        mx,my,          // circle -> ellipse scales (10-bit fixed point)
        cx,cy,          // point in elliptic space
        sx,sy,          // point in screen space
        a;              // quadrant angle
    // my pixel access (you can ignore it and use your style of gfx access)
    int **Pixels=Main->pyx;     // Pixels[y][x]
    int xs=Main->xs;            // resolution
    int ys=Main->ys;
    // init variables
    r=rx; if (r<ry) r=ry; rr=r*r;               // r=max(rx,ry)
    mx=(rx<<10)/r;                              // scale from circle to ellipse (fixed point)
    my=(ry<<10)/r;
    xa=+double(r)*cos(double(a0)*M_PI/180.0);   // a0,a1 edge line endpoints
    ya=+double(r)*sin(double(a0)*M_PI/180.0);
    xb=+double(r)*cos(double(a1)*M_PI/180.0);
    yb=+double(r)*sin(double(a1)*M_PI/180.0);
    // render
    for (y=-r,yy=y*y,cy=(y*my)>>10,sy=y0+cy;y<=+r;y++,yy=y*y,cy=(y*my)>>10,sy=y0+cy) if ((sy>=0)&&(sy<ys))
     for (x=-r,xx=x*x,cx=(x*mx)>>10,sx=x0+cx;x<=+r;x++,xx=x*x,cx=(x*mx)>>10,sx=x0+cx) if ((sx>=0)&&(sx<xs))
      if (xx+yy<=rr)                            // inside circle
        {
        // actual quadrant
        if ((cx>=0)&&(cy>=0)) a=  0;
        if ((cx< 0)&&(cy>=0)) a= 90;
        if ((cx>=0)&&(cy< 0)) a=270;
        if ((cx< 0)&&(cy< 0)) a=180;
        if ((a   >=a0)||((cx*ya)-(cy*xa)<=0))   // x,y is at or after a0 in CW direction
         if ((a+90<=a1)||((cx*yb)-(cy*xb)>=0))  // and at or before a1
          Pixels[sy][sx]=c;
        }
    }
Beware that both angles must be in the <0,360> range. My screen has y pointing down, so if a0<a1 the direction is CW, which matches the routine. If you use a1<a0 then the range is skipped and the rest of the ellipse is rendered instead.
This approach uses a0,a1 as real angles !!!
To avoid divisions inside the loop I used 10-bit fixed-point scales instead.
You can simply split this into 4 quadrant cases to avoid the 4 ifs inside the loop and improve performance.
x,y is the point in circular scale centered at (0,0)
cx,cy is the point in elliptic scale centered at (0,0)
sx,sy is the point in elliptic scale translated to the ellipse center position
Beware my pixel access is Pixels[y][x], but most APIs use Pixels[x][y], so do not forget to change it to match your API, to avoid access violations or a 90deg rotation of the result ...
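For example, assuming the gfx access above is in place, a quarter slice might be rendered like this (the values here are just hypothetical):

elliptic_arc(320,240,150,100,0,90,0x00FF0000);  // CW quarter slice from 0 to 90 deg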
I have the azimuth, elevation, and direction vector of the sun. I want to place a view point along the sun ray direction at some distance. Can anyone describe or provide a link to a resource that will help me understand and implement the required steps?
I used a Cartesian coordinate system to find the direction vector from the azimuth and elevation, and then to find the viewport origin:
x = distance
y = distance * tan(azimuth)
z = distance * tan(elevation)
I want to find that distance value... how?
The azimuthal coordinate system references the NEH (geometric North, East, High(Up)) reference frame !!!
In your linked image it references the -Y axis, which is not right unless you are not rendering the world but doing some nonlinear graph-plot projection instead, so which one is it?
btw here ECEF/WGS84 and NEH you can find out how to compute NEH for WGS84
As far as I can see you have a bad conversion between the coordinates, so just to be clear, this is how it looks:
On the left is the global Earth view with one NEH frame computed for its position (its origin). In the middle is a surface-aligned side view and on the right a surface-aligned top view. Blue, magenta, green are the input azimuthal coordinates; brown are the x,y,z Cartesian projections (where the coordinate lies on its axis), so:
Dist'= Dist *cos(Elev );
z = Dist *sin(Elev );
x = Dist'*cos(Azimut);
y =-Dist'*sin(Azimut);
If you use a different reference frame or axis orientation then change it accordingly ...
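A minimal sketch of this conversion (the function name is mine), assuming the angles are in radians:

#include <math.h>

// Convert azimuth/elevation/distance to NEH-style Cartesian coordinates,
// matching the formulas above (x = North, y = East negated, z = Up).
void azel_to_xyz(double dist, double azimut, double elev,
                 double &x, double &y, double &z)
    {
    double dist2 = dist * cos(elev);    // projection onto the ground plane
    z =  dist  * sin(elev);
    x =  dist2 * cos(azimut);
    y = -dist2 * sin(azimut);
    }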
I suspect you use 4x4 homogeneous transform matrices for representing coordinate systems and also to hold your view-port, so look here:
transform matrix anatomy
constructing the view-port
You need the X,Y,Z axis vectors and the O origin position. O you already have (at least you think so), and the Z axis is the ray direction, so you should have that too. Now just compute X,Y as an alignment to something (otherwise the view will rotate around the ray). I use NEH for that, so:
view.Z=Ray.Dir          // ray direction
view.Y=NEH.Z            // NEH up vector
view.X=view.Y x view.Z  // cross product makes view.X perpendicular to Y and Z
view.Y=view.Z x view.X  // recompute so all three axes are perpendicular to each other
view.O=ground position - (distance*Ray.Dir);
To make it a valid view-port you have to compute:
view = inverse(view)*projection_matrix;
You need inverse matrix computation for that.
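Here is a minimal sketch of those cross-product steps, assuming normalized inputs (the vec3 type and helper names are mine, not a fixed API):

#include <math.h>

struct vec3 { double x, y, z; };

vec3 cross(vec3 a, vec3 b)
    {
    vec3 c;
    c.x = a.y*b.z - a.z*b.y;
    c.y = a.z*b.x - a.x*b.z;
    c.z = a.x*b.y - a.y*b.x;
    return c;
    }

vec3 normalize(vec3 a)
    {
    double l = sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    vec3 c = { a.x/l, a.y/l, a.z/l };
    return c;
    }

// Build the view basis: Z = ray direction, Y aligned to 'up', X orthogonal.
void build_view_basis(vec3 rayDir, vec3 up, vec3 &X, vec3 &Y, vec3 &Z)
    {
    Z = normalize(rayDir);
    X = normalize(cross(up, Z));    // perpendicular to up and Z
    Y = cross(Z, X);                // re-orthogonalize so all three are perpendicular
    }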
if you want the whole thing
Then you also want to add the Sun/Earth position computation; in that case look here:
complete Earth-Sun position by Kepler's equation
The distance
Now that what is behind this is clear, you just need to set the distance. If you want to place the view point at the Sun, then distance = 1.0 AU (astronomical unit), but that is a huge distance, and with perspective your Earth would be very small. Instead use some closer distance that matches your view size; look here:
How to position the camera so that the object always has the same size
Given that:
the shape is a regular polygon in 3D space
the start point (the position of one arbitrary vertex of the shape) is known
the point in the middle of the shape (not on an edge; equidistant from all corners) is known
the angle at each corner (((numEdges-2)*PI)/numEdges), the radius of the shape (the distance from a corner to the midpoint, sqrt(dx^2 + dy^2 + dz^2)), and the length of each edge (radius*2*sin(PI/numEdges)) can all be calculated.
Given all this information, is it possible to fill in the blanks, if you like, and work out the rest of the start/end points for each vertex of the shape?
I can sort of see the beginnings of the logic in 2D, but in 3D I'm lost.
I'm thinking it can't be done, since your knowns do not uniquely identify the polygon. The two points you do know define a unique line, but I can provide infinitely many congruent polygons sharing that vertex and center, all rotations of one another about this line.
I'd like to map from normalized device coordinates back to view space.
The other way around works like this:
view space -> clip space: multiply the homogeneous coordinates by the projection matrix
clip space -> normalized device coordinates: divide (x,y,z,w) by w
Now, in normalized device coordinates, all coordinates that were within the view frustum fall into the cube x,y,z ∈ [-1,1] with w=1.
I'd like to transform some points on the boundary of that cube back into view coordinates. The projection matrix is nonsingular, so I can use its inverse to get from clip space to view space, but I don't know how to get from normalized device space to clip space, since I don't know how to calculate the w I need to multiply the other coordinates by.
Can someone help me with that? Thanks!
Unless you actually want to recover your clip-space values for some reason, you don't need to calculate w. Multiply your NDC point (with w = 1) by the inverse of the projection matrix and then divide the result by its w component to get back to view space.
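A minimal sketch of that, assuming column vectors and that invProj (a name chosen here) already holds the inverse of your projection matrix:

struct vec4 { double x, y, z, w; };

// Unproject an NDC point (ndc.w must be 1) back to view space.
vec4 ndc_to_view(const double invProj[4][4], vec4 ndc)
    {
    vec4 v;
    v.x = invProj[0][0]*ndc.x + invProj[0][1]*ndc.y + invProj[0][2]*ndc.z + invProj[0][3]*ndc.w;
    v.y = invProj[1][0]*ndc.x + invProj[1][1]*ndc.y + invProj[1][2]*ndc.z + invProj[1][3]*ndc.w;
    v.z = invProj[2][0]*ndc.x + invProj[2][1]*ndc.y + invProj[2][2]*ndc.z + invProj[2][3]*ndc.w;
    v.w = invProj[3][0]*ndc.x + invProj[3][1]*ndc.y + invProj[3][2]*ndc.z + invProj[3][3]*ndc.w;
    // the divide by w recovers view-space coordinates (w becomes 1 again)
    v.x /= v.w; v.y /= v.w; v.z /= v.w; v.w = 1.0;
    return v;
    }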
The flow graph at the top, and the formulas described on the following page, might help you: http://www.songho.ca/opengl/gl_transform.html
Given a list of points that form a simple 2D polygon oriented in 3D space, and a normal for that polygon, what is a good way to determine which points are specific 'corner' points?
For example, which point is at the lower left, or the lower right, or the topmost point? The polygon may have any 3D orientation, so I'm pretty sure I need to do something with the normal, but I'm having trouble getting the math right.
Thanks!
You would need more information in order to make that decision. A set of (co-planar) points and a normal is not enough to give you a concept of "lower left" or "top right" or any such relative identification.
Viewing the polygon from the direction of the normal (so that it appears as a simple 2D shape) is a good start, but that shape could be rotated to any arbitrary angle.
Is there some other information in the 3D world that you can use to obtain a coordinate-system reference?
What are you trying to accomplish by knowing the extreme corners of the shape?
Are you looking for a bounding box?
I'm not sure the normal has anything to do with what you are asking.
To get a bounding box, keep 4 variables: MinX, MaxX, MinY, MaxY.
Then loop through all of your points, checking the X values against MaxX and MinX and the Y values against MaxY and MinY, updating them as needed.
When the loop is complete, your box is defined by (MinX, MinY) as the upper left, (MaxX, MinY) as the upper right, and so on...
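A minimal sketch of that loop (the names are mine), assuming the points are already the 2D values you want to box:

#include <float.h>

struct pt2 { double x, y; };

// Single pass over n points, tracking the min/max on each axis.
void bounding_box(const pt2 *pts, int n, pt2 &mn, pt2 &mx)
    {
    mn.x = mn.y = +DBL_MAX;
    mx.x = mx.y = -DBL_MAX;
    for (int i = 0; i < n; i++)
        {
        if (pts[i].x < mn.x) mn.x = pts[i].x;
        if (pts[i].x > mx.x) mx.x = pts[i].x;
        if (pts[i].y < mn.y) mn.y = pts[i].y;
        if (pts[i].y > mx.y) mx.y = pts[i].y;
        }
    }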
Response to your comment:
If you want your box after a projection, you need the "transformed" points; then apply the bounding-box loop as stated above.
"Transformed" usually means 2D screen coordinates after a projection (scene render), but it can also mean the 2D points on whatever plane you projected onto.
A possible algorithm would be:

1. Find the normal, which you can do by taking the cross product of vectors connecting two pairs of different corners.
2. Create a transformation matrix to rotate the polygon so that it is planar in XY space, i.e. with the normal aligned along the Z axis (see the sketch after this list).
3. Calculate the coordinates of the bounding box, or whatever other definition of corners you are using (as the polygon is now aligned in 2D space, this is a considerably simpler problem).
4. Apply the inverse of the transformation matrix from step 2 to transform these coordinates back to 3D space.
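Here is a minimal sketch of steps 1-2 under stated assumptions (the helper names, array layout, and the requirement that n is normalized are mine): expressing each point in an orthonormal basis {u, v, n} built around the normal is equivalent to rotating the polygon into the XY plane.

#include <math.h>

static void cross3(const double a[3], const double b[3], double c[3])
    {
    c[0] = a[1]*b[2] - a[2]*b[1];
    c[1] = a[2]*b[0] - a[0]*b[2];
    c[2] = a[0]*b[1] - a[1]*b[0];
    }

static void normalize3(double a[3])
    {
    double l = sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
    a[0] /= l; a[1] /= l; a[2] /= l;
    }

// n: unit normal, o: any reference point in the polygon's plane, p: the point
// to convert. out[0],out[1] are its 2D plane coordinates; out[2] should be ~0.
void to_plane_coords(const double n[3], const double o[3], const double p[3], double out[3])
    {
    double u[3], v[3], d[3], any[3] = { 1, 0, 0 };
    if (fabs(n[0]) > 0.9) { any[0] = 0; any[1] = 1; }   // avoid a helper axis parallel to n
    cross3(any, n, u); normalize3(u);                   // u perpendicular to n
    cross3(n, u, v);                                    // v completes the basis
    d[0] = p[0]-o[0]; d[1] = p[1]-o[1]; d[2] = p[2]-o[2];
    out[0] = d[0]*u[0] + d[1]*u[1] + d[2]*u[2];         // dot products = rotated coordinates
    out[1] = d[0]*v[0] + d[1]*v[1] + d[2]*v[2];
    out[2] = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
    }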
I believe that your question requires some additional information, namely the coordinate system with respect to which any point could be considered "topmost" or "leftmost".
Don't forget that whilst the normal tells you which way the polygon is facing, it doesn't on its own tell you which way is "up". It's possible to rotate (or "roll") around the normal vector and still be facing in the same direction.
This is why most 3D rendering systems have a camera which contains not only a "view" vector, but also "up" and "right" vectors. Changes to the latter two achieve the effect of the camera "rolling" around the view vector.
Project it onto a plane and get the bounding box.
I have a silly idea, but at the risk of gaining a negative point, I'll give it a try:

1. Get the minimum/maximum value on each of the three axes over every point of your 2D polygon. A single pass with a loop/iterator over the list of points will suffice, simply replacing the minimum and maximum values as you go. The end result is the "lowest" X, Y, Z coordinates and the "highest" X, Y, Z coordinates.
2. Combine these min/max values to create each corner point of a "bounding box" around the object. The result is a box that always contains the object regardless of the axis examined or the orientation (no point on the polygon will ever exceed the maxima or minima you collected).
3. Then get the distance of each "2D polygon" point to each corner of the "bounding box"; the shorter the distance between the points, the "closer" that point is to that "corner".

Far from optimal, certainly crummy, but certainly quick. You could probably capture this during the object's rotation, simply by tracking the min/max of each rotated x/y/z value and retaining a list of those values ahead of time.
If you can assume some constraints regarding the shapes, then you might be able to get away with knowing less information. For example, if your shape is the composition of a small square with a long thin triangle on one side (i.e. a simple geometry), then you can compare the distance from each listed point to the "center of mass": the largest distance identifies the tip of the cone, the second largest identifies the two points farthest from the tip, and so on. If there is some order to the list, e.g. points entered in counter-clockwise order (about the normal), you can identify all the points.

This sounds like a fair bit of computation, so it might be reasonable to include some extra info with your shapes, like the "center of mass" and a reference point located "up" above the COM (but not along the normal). This gives you an "up" vector that you can cross with the normal to define some body coordinates, for example. Also, the normal can be defined by the ordering of the point list.

If you can't assume anything about the shapes (or if the shapes are symmetrical, for example), then you will need more data. It depends on your constraints.
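As a sketch of the center-of-mass idea (hypothetical helper, points as plain double[3] arrays): the vertex farthest from the centroid would identify the "tip" of such a shape.

// Return the index of the point farthest from the centroid of p[0..n-1].
int farthest_from_centroid(const double (*p)[3], int n)
    {
    double c[3] = { 0, 0, 0 };
    for (int i = 0; i < n; i++)
        for (int j = 0; j < 3; j++) c[j] += p[i][j];
    for (int j = 0; j < 3; j++) c[j] /= n;              // centroid ("center of mass")
    int best = 0; double bestd = -1.0;
    for (int i = 0; i < n; i++)
        {
        double dx = p[i][0]-c[0], dy = p[i][1]-c[1], dz = p[i][2]-c[2];
        double d = dx*dx + dy*dy + dz*dz;               // squared distance is enough
        if (d > bestd) { bestd = d; best = i; }
        }
    return best;
    }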
If you know that the polygon in 3D is "flat", you can use the normal to transform all 3D vertex points to a 2D representation (of the points with respect to the plane in which the polygon lies). But this still leaves you with defining the origin of this coordinate system (which doesn't really matter for your problem) and the orientation of at least one of the axes (if you want orthogonal axes you can still rotate them around your chosen origin), and this is where the trouble starts.
I would recommend taking the Y axis of your 3D coordinate system, projecting it onto your plane and using the resulting direction as "up", but then you are in trouble when your plane is orthogonal to the Y axis (in that case you might use the projected Z axis as "up" instead).
The math is rather simple: you can use the inner product (a.k.a. scalar product) for the projection onto your plane, and some matrix work to convert to the 2D coordinate system; you can find all of it by googling for raytracer algorithms for polygons.
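A minimal sketch of that fallback logic, assuming a normalized plane normal n (the array layout and helper name are mine): project the world Y axis onto the plane via up' = Y - (Y.n)n, switching to the Z axis when the plane is (nearly) orthogonal to Y.

#include <math.h>

// Compute a normalized "up" direction lying in the plane with unit normal n.
void plane_up(const double n[3], double up[3])
    {
    double axis[3] = { 0, 1, 0 };       // world Y axis
    double d = n[1];                    // dot(Y, n)
    if (fabs(d) > 0.99)                 // plane nearly orthogonal to Y
        {
        axis[1] = 0; axis[2] = 1;       // use world Z axis instead
        d = n[2];                       // dot(Z, n)
        }
    up[0] = axis[0] - d*n[0];           // subtract the component along n
    up[1] = axis[1] - d*n[1];
    up[2] = axis[2] - d*n[2];
    double l = sqrt(up[0]*up[0] + up[1]*up[1] + up[2]*up[2]);
    up[0] /= l; up[1] /= l; up[2] /= l;
    }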