Julia - converting latitude, longitude in Lambert conformal projection

I have downloaded data from the HRRR (High-Resolution Rapid Refresh) model, similar to the grib2 file from this notebook:
https://nbviewer.org/github/microsoft/AIforEarthDataSets/blob/main/data/noaa-hrrr.ipynb
I now wish to use the data for a specific longitude and latitude, but I do not know how to convert my (longitude, latitude) pair to grid coordinates in the data matrix.
The notebook mentions that “the HRRR data comes in the Lambert conformal projection.” (See cell 8).
I have looked at the GMT package, and it seems to handle the Lambert conformal projection: https://docs.juliahub.com/GMT/EoU0j/0.30.1/proj_examples/.
But how can I convert the coordinates?
The following code seems to perform a conversion, but I don't think it uses Lambert, and after looking at the GMT documentation I have not been able to adjust the settings in the command.
using GMT
lat = 37.0; lon = -119.0;
gmt("mapproject -J+proj=merc", [lat; lon])
Vector{GMTdataset} with 1 segments
First segment DATA
Global BoundingBox: [-1.3247019404399555e7, 4.118821159351122e6]
First seg BoundingBox: [-1.3247019404399555e7, 4.118821159351122e6]
2×1 Matrix{Float64}:
4.118821159351122e6
-1.3247019404399555e7

I found out that the longitudes and latitudes are actually stored in the grib file, so there is no need to convert:
using GRIB
f = GribFile(grb2_filename)
# data() returns the longitude, latitude and value arrays of a GRIB message
lons, lats, values = data(Message(f))
# lons in range [225.90452026573686, 299.0828072281622] = [-134.09547973426314, -60.91719277183779]
# lats in range [21.138123000000018, 52.61565330680793]
So we can just look for the indices of the closest longitude and latitude, and read the corresponding value in values.
Since a degree of latitude is about 110 kilometers and a degree of longitude is of the same order (it shrinks with the cosine of the latitude, but that is close enough for a nearest-neighbor lookup), I will just minimize the L1 distance as follows:
# lons are stored in [0, 360); shift them by 360 to compare with lon in [-180, 180)
(min_error, coord) = findmin(abs.(lats .- lat) .+ abs.(lons .- 360 .- lon))
(0.020456228700048484, CartesianIndex(269, 548))
values[coord]
294.8936767578125
While this does not actually answer the title question, it answers my current need and will perhaps be useful for someone else.

Related

Create buffers in km units for global point dataset in R?

I'm new to using spatial data so this probably seems like a very simple question, but something I'm struggling to get my head around.
I have a global dataset of sample sites and corresponding coordinates. I am using st_buffer in the sf package to create buffers of different sizes around these points. However, I need these buffers to be in km (for example 2, 10, or 50 km radius) rather than in the units of the CRS (currently long/lat, WGS84). As I understand it, UTM is the only planar projection, but does this mean I have to split my global dataset into each of the UTM zones before converting to UTM, and then create the buffers for each of these separately?
Is it possible to then convert the buffers back to my previous CRS projection?
Thank you!
It is not necessary to step out of the comfort of WGS84 to do a metric buffer; most of the tools are ready to combine a longlat CRS with a metric definition of distance (it is a very common use case).
When buffering in WGS84 I kind of prefer terra::buffer() to sf::st_buffer(), as it is likely to produce a smoother shape - the S2 functions that work under the hood of unprojected {sf} do not work smoothly enough for me, and the outcome is somewhat grainy. But I digress...
Consider this piece of code; what it does is:
looks up the coordinates of a semi-random landmark (the Greenwich Observatory in London)
buffers it by 25 kilometers
displays the result
Note how the terra documentation states that the buffer is in meters for unprojected coordinates.
library(nominatimlite)
library(sf)
library(terra)

# geocode the landmark and confirm it comes back in WGS84
a_point <- geo_lite_sf("Royal Observatory, Greenwich")
st_crs(a_point) # WGS84

terra_buffer <- a_point |>
  vect() |>
  buffer(width = 25000) |> # 25 kilometers
  st_as_sf()

mapview::mapview(terra_buffer)
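For comparison, sf::st_buffer also accepts a metric distance directly on unprojected data; a minimal sketch reusing a_point from above, assuming a recent sf with s2 enabled:

library(units)
# 25 km buffer straight on the WGS84 point; s2 does the spherical math
sf_buffer <- st_buffer(a_point, dist = set_units(25, "km"))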

R coordinates digits save to dataframe

Let's assume we have a point (described by latitude and longitude, WGS84) and we form a SpatialPointsDataFrame (gData.init). I would like to change the projection (transform) and then use the planar coordinates to estimate distances and intersection points using simple line-point methods. I use the following code to perform the transformation.
library(rgeos)
library(sp)
longitude = 22.954638
latitude = 40.617048
gData.init = data.frame(longitude,latitude)
gData.init$id <- as.numeric(rownames(gData.init))
coordinates(gData.init) <- gData.init[c("longitude", "latitude")]
proj4string(gData.init) <- "+proj=longlat +datum=WGS84"
gDataIn2100 <- spTransform( gData.init, CRS("+init=epsg:2100") )
Now I want to save the coordinates in a data object; when I do this using the following code
gDataIn2100@coords
I get at most one decimal place:
longitude latitude
[1,] 411425.8 4496486
However, when I print the coordinates (because, let's say, I'd like them to be more precise):
print(coordinates(gDataIn2100), digits = 12)
Then the resulting coordinates are somewhat different:
longitude latitude
[1,] 411425.810118 4496486.37561
This, I think, causes different estimates of the minimum distance between a line and my point when using gDistance versus estimating the distance with LinkPointMinDistance.
What do I do wrong?
gDataIn2100@coords is equivalent to print(gDataIn2100@coords, digits = getOption("digits")).
The decimals are only dropped when rendered to the screen. The values are stored as numeric and keep full double-precision floating-point accuracy.
Note that coordinates(gDataIn2100) is the recommended way to get the coordinates.
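A minimal standalone illustration (my own sketch, not from the original post) that only the rendering rounds:

x <- 411425.810118
print(x)               # 411425.8 - rounded by the default getOption("digits") of 7
print(x, digits = 12)  # 411425.810118
sprintf("%.6f", x)     # "411425.810118" - explicit control over decimals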

Unit length in spatstat

I have what may be a very simplistic question on the Kest function in spatstat. I'm using Kest to assess spatial randomness in a dataset. I have loaded lat and long values spread over London and converted them to a ppp object, using the ripras function to specify the spatial domain. When I run my Kest analysis on my ppp object and plot the graph, I end up with an r value on the x axis; I know this is a distance measurement, but I don't know what units it's using. I get this summary output:
Planar point pattern: 113 points
Average intensity 407.9378 points per square unit
Coordinates are given to 9 decimal places
Window: polygonal boundary
single connected closed polygon with 14 vertices
enclosing rectangle: [-0.5532963, 0.3519148] x [51.2901, 51.7022] units
Window area = 0.277003 square units
with the max r on the x axis being 0.1 units, and the K(r) on the y axis being 0.04. How do I figure out what unit of distance these equate to?
Your lat,lon coordinates correspond to points on a sphere (or ellipsoid or whatever) used as a model for planet Earth. Essentially, spatstat assumes you are using coordinates projected on a flat map. This conversion could be done with e.g. the sp package (using Buckingham Palace as an example):
library(sp)
lat = c(51.501476)
lon = c(-0.140634)
xy = data.frame(lon, lat)
coordinates(xy) <- c("lon", "lat")
proj4string(xy) <- CRS("+proj=longlat +datum=WGS84")
NE <- spTransform(xy, CRS("+proj=utm +zone=30 +ellps=WGS84"))
NE <- as.data.frame(NE)
The result is a data.frame with projected coordinates (easting, northing) in metres, and you can continue your analysis from there. To assign a unit label like "m" for prettier labels in figures, use the function unitname on your ppp object (assuming the object is called X): unitname(X) <- "m"
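Since sp and rgdal have since been retired, here is a hedged modern equivalent with sf (assuming UTM zone 30N, EPSG:32630, which covers London):

library(sf)
# the same Buckingham Palace point, as an sfc geometry in WGS84 (lon, lat order)
xy <- st_sfc(st_point(c(-0.140634, 51.501476)), crs = 4326)
NE <- st_transform(xy, 32630) # UTM zone 30N; coordinates in metres
st_coordinates(NE)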
If a function is able to accept geographic coordinates, then it is using a great-circle equation to calculate distance. This normally results in units of kilometers.
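sp::spDists follows this convention, for example; a small sketch with two arbitrary London points (my own illustration):

library(sp)
pts <- cbind(c(-0.1406, -0.1276), c(51.5015, 51.5072)) # lon, lat
spDists(pts, longlat = TRUE) # great-circle distance matrix, in kilometres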
It is not very good practice to perform point pattern analysis (PPA) on unprojected data. If possible, you should project your data into a coordinate system whose units are distances. I believe that most of the functions in spatstat use Euclidean distance, which is quite inappropriate when the units are decimal degrees. Since there is no latlong argument in the Kest function, I do not believe that your results are valid.
The K function itself (i.e. the theoretical K-function, not just the computer code) assumes that the space is flat rather than curved.
This would probably be a reasonable approximation in your case (points scattered over a few dozen kilometres) but not for a point pattern scattered over a continent. That is, in general the planar K-function should not be used for point patterns on a sphere.
The other posts are correct. The Kest function expects the coordinates to be given in an isometric coordinate system. You just need to express the spatial locations in a coordinate system in which the x and y coordinates are measured in the same distance units. Longitude and latitude are not measured in the same distance units because one degree (say) of longitude does not represent the same distance as one degree of latitude. Ege Rubak's example using spTransform is probably the best way to go.

Assigning spatial coordinates to an array in R

I have downloaded a text file of data from the following link: http://radon.unibas.ch/files/us_rn_50km.zip
After unzipping I use the following lines of code to plot up the data:
# load libraries
library(fields)
# function to rotate a matrix (and transpose)
rotate <- function(x) t(apply(x, 2, rev))
# read data
data <- as.matrix(read.table("~/Downloads/us_rn_50km.txt", skip=6))
data[data<=0] <- NA
# rotate data
data <- rotate(data)
# plot data
mean.rn <- mean(data, na.rm = TRUE)
image.plot(data, main = paste("Mean Rn emissions =", sprintf("%.3f", mean.rn)))
This all looks OK, but I want to be able to plot the data on a lat-long grid. I think I need to convert this array into an sp class object, but I don't know how. I know the following (from the web site):

"The projection used to project the latitude and longitude coordinates is that used for the Decade of North American Geology (DNAG) maps. The projection type is Spherical Transverse Mercator with a base latitude of zero degrees and a reference longitude of 100 degrees W. The scale factor used is 0.926 with no false easting or northing. The longitude-latitude datum is NAD27 and the units of the xy-coordinates are in meters. The ellipsoid used is Clarke 1866. The resolution of the map is 50x50km."

But I don't know what to do with this information. I tried:
proj4string(data)=CRS("+init=epsg:4267")
data.sp <- SpatialPoints(data, CRS("+proj=longlat +ellps=clrk66 +datum=NAD27") )
But I had various problems (with NAs), and fundamentally I think the data isn't in the right format. I think the SpatialPoints function wants data on locations (in 2-D) plus a third array of values associated with those locations (x, y, z data) - I guess my problem is working out the x's and y's from my data!
Any help greatly appreciated!
Thanks,
Alex
The file in question is an ASCII raster grid. Coordinates are implicit in this format; a header describes the position of the (usually) lower left corner, as well as the grid dimensions and resolution. After this header section, values separated by white space describe how the variable varies across the grid, with values given in row-major order. Open it in a text editor if you're interested.
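For illustration, a minimal header of this kind might look like the following (values made up, not those of the radon grid):

ncols 10
nrows 8
xllcorner -2050000
yllcorner -1800000
cellsize 50000
NODATA_value -9999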
You can import such files to R with the fantastic raster package, as follows:
library(raster)
download.file('http://radon.unibas.ch/files/us_rn_50km.zip',
              destfile = {f <- tempfile()})
unzip(f, exdir = tempdir())
r <- raster(file.path(tempdir(), 'us_rn_50km.txt'))
You can plot it immediately with plot(r), without assigning the projection. If you didn't want to transform it to another CRS, you wouldn't necessarily need to assign the current coordinate system at all. But since you do want to transform it to WGS84 (geographic), you first need to assign the CRS:
proj4string(r) <- '+proj=tmerc +lon_0=-100 +lat_0=0 +k_0=0.926 +datum=NAD27 +ellps=clrk66 +a=6378137 +b=6378137 +units=m +no_defs'
Unfortunately I'm not entirely sure whether this proj4string correctly reflects the info given at the website that provided the data (it would be great if they actually provided the definition in a standard format).
After assigning the CRS, you can project the raster with projectRaster:
r.wgs84 <- projectRaster(r, crs='+init=epsg:4326')
And if you want, write it out to a raster format of your choice, e.g.:
writeRaster(r.wgs84, filename='whatever.tif')
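Incidentally, the raster package has since been superseded by terra; a hedged sketch of the same workflow (reusing the proj4string from above, with the same caveat about its correctness):

library(terra)
r <- rast(file.path(tempdir(), 'us_rn_50km.txt'))
# same (uncertain) proj4string as above
crs(r) <- '+proj=tmerc +lon_0=-100 +lat_0=0 +k_0=0.926 +datum=NAD27 +ellps=clrk66 +a=6378137 +b=6378137 +units=m +no_defs'
r.wgs84 <- project(r, 'EPSG:4326') # reproject to WGS84 (geographic)
writeRaster(r.wgs84, filename = 'whatever.tif')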

location data format for adehabitat package

I have a file in ASCII grid format. The first rows look like this:
ncols 1440
nrows 720
xllcorner -180.0
yllcorner -90
cellsize 0.25
NODATA_value -9999
Basically I have the world with 1440 'tiles' in the x direction (longitude) and 720 'tiles' in the y direction (latitude). Each 'tile' is a square with a side length of 0.25 degrees. I think I have xllcorner and yllcorner correct. I can draw this map in R like this:
library("adehabitat")
bio1 <- import.asc("D:/ENFA/data.asc")
maps <- as.kasc(list(data = bio1))
image(maps, col = cm.colors(256), clfac = list(Aspect = cl))
The map looks fine.
I would like to perform some ecological niche factor analysis (ENFA) using the adehabitat package and am not too sure about the location data. Basically I have them as longitudes and latitudes at the moment, but I could also generate them as a 'tile index' (e.g. the lower left corner has latitude -90 and longitude -180, so its 'tile index' would be 0, 0 - right?). Which is the correct location data format? I would use ENFA code like this:
locs <- read.table("D:/ENFA/Locs.txt", header = TRUE, sep="\t")
dataenfa1 <- data2enfa(maps, locs)
pc <- dudi.pca(dataenfa1$tab, scannf = FALSE)
enfa1 <- enfa(pc, dataenfa1$pr, scannf = FALSE)
hist(enfa1)
I would appreciate any comments please. Thanks in advance.
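As an aside on the 'tile index' idea above: the column/row of the 0.25-degree cell containing a lon/lat point follows directly from the header values. A hypothetical helper (0-based, counting from the lower-left corner):

tile_index <- function(lon, lat, xll = -180, yll = -90, cellsize = 0.25) {
  c(col = floor((lon - xll) / cellsize),
    row = floor((lat - yll) / cellsize))
}
tile_index(-180, -90) # col 0, row 0: the lower-left tile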
The problem with leaving your coordinates in lat-long form is that, at most places on earth, a degree of longitude has a different length than a degree of latitude. This might distort your ENFA by exaggerating distances in some directions relative to those in others.
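To see the size of the effect, a quick back-of-the-envelope sketch (spherical Earth of radius 6371 km; my own illustration):

# length of one degree of longitude at a given latitude; a degree of
# latitude stays roughly 111 km everywhere
deg_lon_km <- function(lat_deg) (pi / 180) * 6371 * cos(lat_deg * pi / 180)
deg_lon_km(c(0, 40, 60)) # ~111 km at the equator, ~85 km at 40 deg, ~56 km at 60 deg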
Especially if your data are from a relatively small area, I'd suggest re-expressing the coordinates in meters along a W/E x-axis and a S/N y-axis. If all of your points fall inside a single UTM zone, then you could do the conversion within R, using project() in the rgdal package. Here's one example:
library(rgdal)
# Make a two-column matrix, col1 = long, col2 = lat
xy <- cbind(c(118, 119), c(10, 50))
# Convert it to UTM coordinates (in units of meters)
project(xy, "+proj=utm +zone=51 +ellps=WGS84")
[,1] [,2]
[1,] -48636.65 1109577
[2,] 213372.05 5546301
Much more info about how to manipulate spatial data is available in Applied Spatial Data Analysis with R by Bivand, Pebesma, and Gómez-Rubio. If you need more specific assistance, try the R-sig-Geo mailing list.
Hope this helps.
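(rgdal has since been retired; sf::sf_project is a close drop-in for rgdal::project if you want to reproduce this example today. A sketch with the same points:)

library(sf)
xy <- cbind(c(118, 119), c(10, 50)) # col1 = long, col2 = lat
sf_project("+proj=longlat +datum=WGS84", "+proj=utm +zone=51 +ellps=WGS84", xy)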
Maybe you want to convert the coordinates into GHAM (Global, Hierarchical, Alphanumeric, and Morton-encoded), which represents the globe by cells of arbitrary precision (as fine or coarse as you wish), so any lat/lon has a single alphanumeric address that remains sortable.
Here's the abstract from "GHAM: A compact global geocode suitable for sorting" by Duncan Agnew:
The GHAM code is a technique for labeling geographic locations based on their positions. It defines addresses for equal-area cells bounded by constant latitude and longitude, with arbitrarily fine precision. The cell codes are defined by applying Morton ordering to a recursive division into a 16 by 16 grid, with the resulting numbers encoded into letter-number pairs. A lexical sort of lists of points so labeled will bring near neighbors (usually) close together; tests on a variety of global datasets show that in most cases the actual closest point is adjacent in the list 50% of the time, and within 5 entries 80% of the time.
Source code is in the IAMG repository, but if you can't access it, I'm sure he would provide it.
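To make the 'Morton ordering' idea concrete, here is a toy bit-interleaving sketch in R (an illustration of Z-ordering only, not Agnew's actual encoding):

# interleave the bits of a column and a row index; cells that are close in
# space tend to end up close together in the resulting sort order
morton <- function(col, row, bits = 8) {
  z <- 0
  for (i in 0:(bits - 1)) {
    z <- z + bitwShiftL(bitwAnd(bitwShiftR(col, i), 1), 2 * i) +
      bitwShiftL(bitwAnd(bitwShiftR(row, i), 1), 2 * i + 1)
  }
  z
}
morton(3, 5) # 39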
