How can I find the size of a geometry for iteration in Google Earth Engine? - google-earth-engine

I want to find the number of pixels in my rectangle bound. For example, I should be able to see that the bound is MxN pixels in size.
My code is:
https://code.earthengine.google.com/11cafc2e8de63293000ee2a3c2036ae4
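One way to get that count (a minimal sketch using the Earth Engine Python API, since the linked script isn't reproduced here; the SRTM image, rectangle coordinates, and 30 m scale are placeholder assumptions, not values from the script) is to run a count reducer over the bound:

```python
# Minimal sketch (Earth Engine Python API). The image, rectangle
# coordinates and 30 m scale are placeholder assumptions.
import ee

ee.Initialize()

rect = ee.Geometry.Rectangle([29.0, 40.0, 29.1, 40.1])  # hypothetical bound
img = ee.Image('USGS/SRTMGL1_003')                       # any single-band image

# Number of pixels of the image that fall inside the bound at 30 m scale.
count = img.reduceRegion(
    reducer=ee.Reducer.count(),
    geometry=rect,
    scale=30,
    maxPixels=1e9,
).getInfo()

print(count)  # e.g. {'elevation': <pixel count inside the rectangle>}
```

For an explicit MxN, you can also divide the projected width and height of the rectangle by the scale you reduce at.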

Related

Getting pixel position from relative position to camera

I am looking for a way to find the (x, y) pixel position of a point in an image taken by a camera. I know the physical position of the object (distance - width, height and depth), the resolution of the image and probably the focal distance (maybe I could also get some other camera parameters - but I want as little information as possible).
In case I am not being clear: I want a formula/algorithm/procedure to map from (width, height, depth) to (x_pixel_position_in_image, y_pixel_position_in_image) - to connect the physical coordinates with the pixel ones.
Thank you very much.
If you check the diagram linked below, the perspective projection of a 3D point with a camera depends on two main sources of information.
Diagram
Camera parameters (intrinsics) and where the camera is in a fixed world coordinate frame (extrinsics). Since you want to project points in the camera coordinate system, you can assume the world coordinate frame coincides with the camera. Hence the extrinsic matrix [R|t] can be expressed as,
R = eye(3); and t = [0; 0; 0].
Therefore, all you need to know is the camera parameters (focal length and optical center location). You can read more about this here.
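As a concrete illustration of that projection (a small Python sketch; the focal lengths and optical centre below are placeholder values you would replace with your camera's actual intrinsics):

```python
# Pinhole projection of a point given in the camera frame (R = I, t = 0).
# fx, fy, cx, cy are placeholder intrinsics, not real camera values.
def project_point(X, Y, Z, fx, fy, cx, cy):
    """Map a 3D camera-frame point to pixel coordinates."""
    x_pixel = fx * X / Z + cx
    y_pixel = fy * Y / Z + cy
    return x_pixel, y_pixel

# Example: a point 2 m in front of the camera, 0.5 m to the right.
print(project_point(0.5, 0.0, 2.0, fx=800, fy=800, cx=320, cy=240))
```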

GLSL Shader - 2D Rim Lighting Effect

How can I create a gradient like the one seen in the image below using GLSL?
What is the best method for achieving a smooth transition from opaque at the edges of the polygon being drawn to transparent at its center?
The image you referred to is achieved through what is called a distance transform. It is a very useful and common operation widely applied in image processing, computer vision, robot path planning, etc. What it does is, for each pixel of an image, compute the 2D Euclidean distance from that pixel to the nearest edge of the polygon. The output is an image whose pixel values indicate the minimum distance. To visualize the result, we map the distance to grey scale. In particular, in your reference image, the bright white ridge has the largest distance to the boundary, while the dark areas contain much smaller values because they are very close to the polygon boundary.
In terms of implementation, a brute-force approach is to draw the 2D image you want to transform and, in the fragment shader, compute the distance from the current fragment position to every edge of the polygon, outputting the minimum value to a framebuffer. The geometry of the polygon can be stored in another texture. Eventually, you end up with a 2D texture whose pixel values encode the shortest distance to the edges of the polygon.
You can also find an implementation of this common transform in the OpenCV library.
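For reference, here is a rough sketch of the same idea using OpenCV's distance transform on the CPU rather than a shader; the polygon vertices and image size are made up for illustration:

```python
# Rough sketch with OpenCV; polygon vertices and image size are arbitrary.
import cv2
import numpy as np

mask = np.zeros((256, 256), dtype=np.uint8)
poly = np.array([[40, 60], [220, 40], [200, 200], [60, 220]], dtype=np.int32)
cv2.fillPoly(mask, [poly], 255)

# For each non-zero pixel, the Euclidean distance to the nearest zero pixel,
# i.e. (approximately) to the nearest polygon edge.
dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)

# Normalise to [0, 1]; 1 - gradient gives "opaque at the edges, transparent
# at the centre" when used as an alpha map.
gradient = dist / dist.max()
alpha = 1.0 - gradient
```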

Maximum Projection Field of View

What is the maximum field of view that can be accomplished via a projection matrix with no distortion? There is a hard limit of < 180 degrees before the math completely breaks down, but experimenting with 170-180 degrees leads me to believe that distortion and deviation from reality begins prior to the hard limit. Where does the point at which the projection matrix begins to distort the view lie?
EDIT: Maybe some clarification is in order. As I increased the FOV angle toward 180 with a fixed render size, I observed objects getting smaller much faster than they should in reality. With a fixed render size and the scene/camera being identical, the diameter of objects should be inversely proportional to the field of view size, if I'm not mistaken. Yet I observed them shrinking exponentially, down to zero size at 180 degrees. This is undoubtedly due to the fact that X and Y scaling in a projection matrix are proportional to cot(FOV / 2). What I'm wondering is exactly when this distortion effect begins.
Short answer: There is no deviation from reality and there is always distortion.
Long answer: Common perspective projection matrices project a 3D scene onto a 2D plane with respect to a camera position. If you consider a fixed distance of the plane from the camera, then the field of view defines the plane's size. Larger angles define larger planes. If you fix the size, then the field of view defines the distance. Larger angles define a smaller distance.
Viewed from the camera, the image does not change whether it sees the original scene or the plane with the projected scene (i.e. there is no deviation from reality).
Problems occur when you look at the plane from a different view point. E.g. when the projected plane is displayed on the screen (fixed size), there is only one position of the camera (your eye) from which the image is realistic. For very large field of view angles, you'll need to be very close to the screen to find that position. All other positions will not result in the correct image. For small field of view angles, the resulting distortion is very small and users will mostly consider it a realistic projection. That's because for small angles, the projected image does not change significantly if you change the distance slightly (changing the distance from 1 meter to 1.1 meters (10%) with a small fov is less problematic than changing the distance from 0.1 meters to 0.2 meters (100%) with a large fov). The most extreme case is an orthographic projection with virtually zero fov. Then, the projection does not depend on the distance at all.
And there is always distortion if objects are not on the projection axis (i.e. for any fov greater than zero). This results in spheres not projecting to perfect circles. This effect also happens with small fovs, but there it is less obvious.
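To see the cot(FOV / 2) scaling from the question numerically (a small Python sketch; the object radius, distance, and screen height are arbitrary values chosen for illustration):

```python
# Approximate on-screen radius of a sphere centred on the view axis for a
# standard perspective projection, where the Y scale is cot(fov / 2).
import math

def projected_radius_px(radius, distance, fov_deg, screen_height_px=600):
    scale = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # cot(fov / 2)
    return (radius / distance) * scale * (screen_height_px / 2.0)

# The projected size collapses towards zero as the fov approaches 180 degrees.
for fov in (45, 90, 120, 160, 175, 179):
    print(fov, round(projected_radius_px(1.0, 10.0, fov), 2))
```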

In OpenGL, How can I determine the bounds of the view at a given depth

I'm playing around with OpenGL and I've got a question that I haven't been able to find an answer to, or at least haven't found the right way to ask search engines. I have a pretty simple setup: an 800x600 viewport and a projection matrix with a 45 degree field of view and near and far planes of 1.0 and 200.0. For the sake of discussion, the modelview matrix is the identity matrix.
What I'm trying to determine is the bounds of the view at a given depth. For example, (0,0,0) is the center of the screen. And I'm looking in the -Z direction.
I want to know: if I draw geometry on a plane 100 units into the screen (0,0,-100), what are the bounds of the view? How far in the x and y directions can I draw in this plane and still have the geometry be visible?
More generically: given a plane parallel to the near and far planes (and between them), what are the visible bounds of that plane?
Also, if what I'm trying to determine has a common name or is a common operation, what is it called? That way I can track down more reading material.
Your view angle is 45 degrees, and you have a plane at a distance a away from the camera with an unknown height h. The whole thing looks like this:
Note that the angle here is half of your field of view.
Dusting off the high-school maths books, we get:
tan(angle) = h / a
Rearrange for h and substitute the half field of view:
h = tan(FieldOfView / 2) * a;
This is how much your plane extends upwards along the Y axis.
Since screens aren't square, the width of your plane is different to the height. More exactly, the width is the aspect ratio times the height. I.e. w = h * aspectRatio
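A quick sketch of those two formulas with the numbers from the question (45 degree vertical FOV, 800x600 viewport, plane 100 units in front of the camera):

```python
# Half-extents of the visible region on a plane `depth` units in front of
# the camera, using h = tan(fov/2) * depth and w = h * aspect.
import math

def view_bounds_at_depth(fov_y_deg, aspect, depth):
    half_h = math.tan(math.radians(fov_y_deg) / 2.0) * depth
    half_w = half_h * aspect
    return half_w, half_h

half_w, half_h = view_bounds_at_depth(45.0, 800.0 / 600.0, 100.0)
print(half_w, half_h)  # roughly +/-55.2 units in x, +/-41.4 units in y
```
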
I hope this answers your question.

Converting x/y values in camera view to pan/tilt values

If I have a camera which gives out 360 degree pan (x) and tilt (y) values, and I want to get the pan and tilt values of where I have my cursor in the camera's view, how would I convert that?
More info:
It's a Flash/AS3 project.
The pan and tilt values are from the center of the camera view.
Camera view size is 960x540.
You gave the "view" size in pixels. What you need to know is the Field of View (FOV), which is measured in degrees. From that you can tell the number of degrees from center to the image edges.
You might be able to find the FOV in your camera's technical specifications. (It's determined by the detector array size and the focal length). Alternatively, you could try measuring it. Here's a webpage that explains how:
http://www.panohelp.com/lensfov.html
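As a sketch of that conversion once the FOV is known (in Python for brevity rather than AS3; the 60 degree horizontal FOV below is an assumed value you would replace with the figure from the specs or your own measurement):

```python
# Convert a cursor position in a 960x540 view to pan/tilt angles from the
# view centre, using a simple pinhole model. HFOV_DEG is an assumed value.
import math

VIEW_W, VIEW_H = 960, 540
HFOV_DEG = 60.0  # assumed; take this from the camera specs or measure it

# Focal length in pixels implied by the horizontal field of view.
focal_px = (VIEW_W / 2.0) / math.tan(math.radians(HFOV_DEG) / 2.0)

def cursor_to_pan_tilt(x, y):
    """Angles in degrees from the view centre for a cursor position."""
    pan = math.degrees(math.atan((x - VIEW_W / 2.0) / focal_px))
    tilt = math.degrees(math.atan((VIEW_H / 2.0 - y) / focal_px))
    return pan, tilt

print(cursor_to_pan_tilt(960, 0))  # top-right corner of the view
```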
