How to check whether a given coordinate (lat, long) is inside a raster image file (tif) and how to extract an NxN window around it? - raster

Hi, I am doing research for a project.
The aim is, given a coordinate, to extract the corresponding image from among the raster files.
How do I check whether a coordinate, for example (51.3334198, 3.2973934), is inside a raster image file such as k_01.tif? And if it is, how do I extract a small NxN window from it?
My code:
import rasterio
src = rasterio.open('k_01.tif')
src.bounds
# BoundingBox(left=145000.0, bottom=238000.0, right=162000.0, top=247000.0)
src.crs
# CRS.from_epsg(31370)
Any ideas? Thanks in advance.

To check whether a location (lat, long) is within the raster image's bounds, first reproject it into the raster's CRS (EPSG:31370):

import pyproj
pp = pyproj.Proj('epsg:31370')
x, y = pp(3.2973934, 51.3334198)  # long, lat order

Then compare (x, y) with src.bounds (note that bottom < top, so the y test runs from bottom to top):

src = rasterio.open(your_tiffile)
bnd = src.bounds
if bnd.left < x < bnd.right and bnd.bottom < y < bnd.top:
    print("inside")
else:
    print("outside")
To slice part of the array:

band1 = src.read(1)  # read band 1 as a (height, width) array
# row 0 is the top row of the image; flip vertically only if you
# need row 0 at the bottom instead
band1_flip = band1[::-1, :]
# extract part of the image array
fr_row, to_row, fr_col, to_col = 0, 100, 0, 100  # set your own values
aslice = band1_flip[fr_row:to_row, fr_col:to_col]

Related

Detecting a line in an image and save its coordinates

I have an image representing an intensity graph.
In order to multiply two intensity graphs I need to save the coordinates of this graph. Thus, I first want to find the line (its middle or one of its borders) and then get its coordinates. So far I have tried a few things. What came nearest to a solution was using the LineSegmentDetector package:
library(pixmap)
library(image.LineSegmentDetector)
image <- read.pnm(file = "graph.pgm", cellres = 1)
x <- image@grey * 255
linesegments <- image_line_segment_detector(x)
linesegments
plot(image)
plot(linesegments, add = TRUE, col = "red")
This gives me a couple of line segments.
However, the aim is to get a single line of 1-pixel width.
Subsequently, I would need the coordinates of this graph: one y value for every pixel in the x direction.
I hope my problem is clear, and I am thankful for any help!

Is there a function that gives the size of a pixel (not in meters) of an "ee.Image" in google earth engine?

I have an ee.Image that I export to TFRecord. I follow this tutorial (https://developers.google.com/earth-engine/guides/tfrecord).
I use this function:
ee.batch.Export.image.toDrive(
    image = image,
    description = name,
    folder = folder,
    fileNamePrefix = name,
    region = region,
    scale = 30,
    fileFormat = 'TFRecord',
    formatOptions = {
        'patchDimensions': [128, 128],
        'kernelSize': [1, 1],
        'compressed': True,
    }
)
After classifying my image, I want to convert it to KML. For that, I need the geodesic coordinates of my image's corners.
Normally, I would get them using ee.image.geometry().bounds(). However, when converting the ee.Image to TFRecord, the patch dimensions (128, 128) do not evenly divide the bounding box, so the border tiles along the greatest x/y edges are dropped. Hence, the coordinates of the four corners of my image change (except for the top-left corner).
So, given the coordinates of the top-left corner of my image, and knowing the number of pixels (128,128), I want to recover the coordinates (geodesic) of the four corners.
How do I get the geodesic size of my pixel?
i.e.:
x2 = x1 + size*128
y2 = y1 + size*128
Note: I know that my pixel is 30 meters !
Can anyone help? Thanks

How to compute the volume of a single voxel of nifti medical image?

I have loaded a NIfTI image file using the nibabel tool and have played with some of its properties.
But I have no idea how to compute the volume (in mm³) of a single voxel.
Here's the answer using NiBabel, as OP asked:
import nibabel as nib
nii = nib.load('t1.nii.gz')
sx, sy, sz = nii.header.get_zooms()
volume = sx * sy * sz
I am not a NiBabel expert, but I can instead recommend the SimpleITK package for Python. I often use it for reading NifTi image files. It has a method GetSpacing() which returns the pixel spacing in mm.
import SimpleITK as sitk
# read image
im = sitk.ReadImage("/path/to/input/image.nii")
# get voxel spacing (for 3-D image)
spacing = im.GetSpacing()
spacing_x = spacing[0]
spacing_y = spacing[1]
spacing_z = spacing[2]
# determine volume of a single voxel
voxel_volume = spacing_x * spacing_y * spacing_z
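Either way, the volume is just the product of the three spatial spacings. A tiny helper capturing both answers (the only assumption is that for 4-D images the zooms tuple may carry a fourth, temporal value, as NiBabel's get_zooms() does):

```python
import numpy as np

def voxel_volume_mm3(zooms):
    """Volume of a single voxel in mm^3 from a spacing/zoom tuple.
    For 4-D images the tuple may include a time step, so keep only
    the first three (spatial) values."""
    return float(np.prod(zooms[:3]))

print(voxel_volume_mm3((0.9, 0.9, 1.2)))   # 0.9 * 0.9 * 1.2
```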

How to extract small dataset images from a large image using box in R

I would like to extract from an image a dataset of smaller images, as illustrated below. I would like to create a box of 64x64 pixels, translate it in x and y, and save each image as JPEG.
Could you suggest a function in R to do this? I cannot find a way to create the box.
You can use the magick-package for this. It has many functions for image-manipulation.
Basically, what I did in the following code is read in the image and then, based on the image size, create a list of coordinates which spans most of the image (a part of the edges might be missing). This is only an example and you can modify it to the coordinates you need. Each point refers to the top-left corner of one box you want to crop.
Afterwards, in the loop, I specify the 64x64 box size and use the coordinates from earlier to offset this box in each iteration and to give each cropped image a unique name.
install.packages("magick")
library(magick)
# read in image
im <- image_read("example.jpg")
# get image size
im_dim <- dim(image_data(im))
# create offsets for cropping image
coords <- expand.grid(x = seq(0, im_dim[2] - 64, by = 64),
                      y = seq(0, im_dim[3] - 64, by = 64))
coords$offset <- paste0("+", coords$x, "+", coords$y)
# crop and save
for (i in coords$offset) {
  cropped <- image_crop(im, paste0("64x64", i))
  image_write(cropped, paste0("example", i, ".jpg"))
}

Kitti Velodyne point to pixel coordinate

Given a Velodyne point, how do I get the pixel coordinates for each camera?
Using pykitti
point_cam0 = data.calib.T_cam0_velo.dot(point_velo)
We can get the projection on the image which is equation 7 of the Kitti Dataset paper:
y = P_rect(i) * R_rect(0) * T_velo_cam * x
But from there, how to get the actual pixel coordinates on each image?
Update: PyKitti version 0.2.1 exposes projection matrices for all cameras.
I recently faced the same problem. For me, the problem was that pykitti didn't expose the Prect and Rrect matrices for all cameras.
For pykitti >= 0.2.1, use Prect and Rrect from the calibration data.
For previous versions, you have two options:
Enter the matrices by hand (data is in the .xml calibration file for each sequence).
Use this fork of pykitti: https://github.com/Mi-lo/pykitti/
Then, you can use equation 7 to project a velodyne point into an image. Note that:
You will need 3D points as a 4xN array in homogeneous coordinates. Points returned by pykitti are a Nx4 numpy array, with the reflectance in the 4th column. You can prepare the points with the prepare_velo_points function below, which keeps only points with reflectance > 0, then replaces reflectance values with 1 to get homogeneous coordinates.
The velodyne is 360°. Equation 7 will give you a result even for points that are behind the camera (they will get projected as if they were in front, but vertically mirrored). To avoid this, you should project only points that are in front of the camera. For this, you can use the function project_velo_points_in_img below. It returns 2d points in homogeneous coordinates so you should discard the 3rd row.
Here are the functions I used:
def prepare_velo_points(pts3d_raw):
    '''Replaces the reflectance value by 1 and transposes the array, so
    points can be directly multiplied by the camera projection matrix.'''
    pts3d = pts3d_raw
    # keep only points with reflectance > 0
    pts3d = pts3d[pts3d[:, 3] > 0, :]
    pts3d[:, 3] = 1
    return pts3d.transpose()

def project_velo_points_in_img(pts3d, T_cam_velo, Rrect, Prect):
    '''Project 3D points into the 2D image. Expects pts3d as a 4xN
    numpy array. Returns the 2D projection of only the points that
    are in front of the camera, and the corresponding 3D points.'''
    # 3D points in the camera reference frame
    pts3d_cam = Rrect.dot(T_cam_velo.dot(pts3d))
    # before projecting, keep only points with z >= 0
    # (points that are in front of the camera)
    idx = (pts3d_cam[2, :] >= 0)
    pts2d_cam = Prect.dot(pts3d_cam[:, idx])
    return pts3d[:, idx], pts2d_cam / pts2d_cam[2, :]
Hope this helps!
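As a sanity check on equation 7 itself: the chain is just a matrix product followed by dividing by the third homogeneous component. With toy calibration matrices (values assumed purely for illustration, not real KITTI calibration):

```python
import numpy as np

# toy matrices: identity rectification/extrinsics, simple pinhole intrinsics
P_rect = np.array([[700.0,   0.0, 600.0, 0.0],
                   [  0.0, 700.0, 180.0, 0.0],
                   [  0.0,   0.0,   1.0, 0.0]])
R_rect = np.eye(4)
T_velo_cam = np.eye(4)

x = np.array([2.0, -1.0, 10.0, 1.0])     # one velodyne point, homogeneous
y = P_rect @ R_rect @ T_velo_cam @ x     # equation 7
u, v = y[0] / y[2], y[1] / y[2]          # divide by the 3rd component
print(u, v)                              # 740.0 110.0
```

With real data, P_rect, R_rect and T_velo_cam come from the sequence calibration, and the division by the third row is exactly the normalisation step mentioned above.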
