Copy raster EXIF in GDAL - raster

I have two images of the same size: the first is an image from a drone and the second is a mask. I need to subtract one image from the other and, of course, carry the EXIF data from the original image over into the resulting one. Is this possible in GDAL or rasterio?
Thanks.

In my experience with DJI drones, the aerial imagery is often captured in JPEG format. The metadata can be accessed through GDAL like this:
from osgeo import gdal
im = gdal.Open('ImageName.jpg', 0)  # 0 = GA_ReadOnly
exif = im.GetMetadata()  # EXIF tags as a Python dictionary
This returns a Python dictionary containing the EXIF metadata. To copy the metadata to a new image, you can simply do this:
outimg = im.GetDriver().CreateCopy('NewImageName.jpg', im, callback=gdal.TermProgress_nocb)
outimg.SetMetadata(exif)
del im, outimg, exif # close datasets to commit changes to disk
You can then edit the newly created image as required (e.g., multiply the pixel values by a binary mask).
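For the masking step itself, something like the following might work (a minimal sketch, not tested on your data: 'MaskName.tif' is a hypothetical single-band 0/1 mask of the same dimensions, and the output is written as a GeoTIFF because the JPEG driver does not support rewriting pixel values in an existing dataset):
from osgeo import gdal

im = gdal.Open('ImageName.jpg', 0)
mask = gdal.Open('MaskName.tif', 0)  # hypothetical 0/1 mask, same dimensions as the image
exif = im.GetMetadata()

# Multiply every band by the mask in memory; for a multi-band image,
# ReadAsArray returns a (bands, rows, cols) array that broadcasts over the 2D mask
data = im.ReadAsArray() * mask.GetRasterBand(1).ReadAsArray()

# Write the result and attach the original metadata
drv = gdal.GetDriverByName('GTiff')
outimg = drv.Create('NewImageName.tif', im.RasterXSize, im.RasterYSize,
                    im.RasterCount, gdal.GDT_Byte)
for i in range(im.RasterCount):
    outimg.GetRasterBand(i + 1).WriteArray(data[i])
outimg.SetMetadata(exif)
del im, mask, outimg  # close datasets to commit changes to disk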

Related

R terra function classify creates very large files

I have a habitat classification map of Iceland (https://vistgerdakort.ni.is/) with 72 classes in a tif file of 5m*5m pixel size. I want to simplify it so that there are only 14 classes. I open the files (a tif file and a text file containing the reclassification rules) and use the classify function from the terra package as follows on a subset of the map:
raster <- rast("habitat_subset.tif")
reclass_table<-read.table("reclass_habitat.txt")
habitat_simple<-classify(raster, reclass_table, othersNA=TRUE)
It does exactly what I need it to do and I am able to save the file back to tif using
writeRaster(habitat_simple, "reclass_hab.tif")
The problem is that my initial tif file was 105MB and my new reclassify tif file is 420MB. Since my goal is to reclassify the whole extent of the country, I can't afford to have the file become so big. Any insights on how to make it smaller? I could not find any comments online in relation to this issue.
You can specify the datatype; in your case you should be able to use "INT1U" (i.e., byte values between 0 and 254; 255 is used for NA, at least that is the default). That should give a file that is four times smaller than when you write it with the default "FLT4S". Based on your question, the original data came with that datatype. In addition you could use compression; I am not sure how well it works with "INT1U". You could have found out about this in the documentation; see ?writeRaster:
writeRaster(habitat_simple, "reclass_hab.tif",
wopt=list(datatype="INT1U", gdal="COMPRESS=LZW"))
With terra >= 1.1-4 you can also skip the writeRaster step and just do:
habitat_simple <- classify(raster, reclass_table, othersNA=TRUE,
datatype="INT1U", gdal="COMPRESS=LZW")
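As a quick sanity check (a hedged sketch; same file name as above), you can confirm that the written file really carries the byte datatype and the smaller size:
library(terra)
r <- rast("reclass_hab.tif")
datatype(r)                   # should report "INT1U"
file.size("reclass_hab.tif")  # in bytes; roughly a quarter of the FLT4S size, less with LZW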

Merging multiple orthophoto tifs to gpkg with gdalwarp - artefacts, transparent background (nodata value)

I am trying to merge multiple orthophotos (tif format) into GeoPackage (gpkg) format for use in the QField app. I found an answer describing how to do this in R with the gdalUtils package.
I used this part of the code:
gdalwarp(of = "GPKG", srcfile = l[1:50], dstfile = "M:/qfield/merge.gpkg", co = c("TILE_FORMAT=PNG_JPEG", "TILED=YES"))
The process finished successfully, but when I looked at the results I found some artefacts.
In the screenshot you can see the merged gpkg file with an added fishnet layer (tif sheets) and the artefacts. The artefacts look like small pieces of the original tif sheets that have been reordered and probably also overlapped.
At first I thought there was some error in the original orthophoto tifs. But then I created a raster mosaic and a raster catalog, merged the tifs into a new tif dataset, published the raster catalog to an Esri server, and created a tpk file from it, and the artefacts did not show in any of these. Is the problem my code? Thank you.
Edit:
I found a solution to my problem. If I create a VRT first, the artefacts do not show, so I used this code:
gdalbuildvrt(gdalfile = ll, output.vrt = "C:/Users/..../dmk_2017_2019.vrt")
gdalwarp(of="GPKG",srcfile="C:/Users/..../dmk_2017_2019.vrt",
dstfile="C:/Users/..../dmk_2017_2019.gpkg")
gdaladdo(filename="C:/Users/..../dmk_2017_2019.gpkg",r="average",levels=c(2,4,8,16,32,64,128,256),verbose=TRUE)
I have another question: what should I do to get a transparent nodata value (background)?
I tried srcnodata="255 255 255", but that only turned the black background white. I also tried the dstalpha argument, but without success. In QGIS this is possible with the transparency settings.
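For what it's worth, passing both arguments in a single call might be worth trying (a hedged sketch with the same gdalUtils wrapper; the output name is a placeholder, and whether the GeoPackage tiles preserve the alpha band depends on the tile format):
library(gdalUtils)
# Treat white source pixels as nodata and request an alpha band in the output,
# so that nodata areas become transparent instead of white
gdalwarp(of = "GPKG",
         srcfile = "C:/Users/..../dmk_2017_2019.vrt",
         dstfile = "C:/Users/..../dmk_2017_2019_alpha.gpkg",
         srcnodata = "255 255 255",
         dstalpha = TRUE)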

How to batch edit a field in a jpg EXIF header?

I am busy with some drone mapping. However, the altitude values in the images are very inconsistent between repeated flight missions (up to 120 m). The program I use to stitch my drone images into an orthomosaic thinks the drone is flying underground, as the image altitude is lower than the actual ground elevation.
To rectify this issue, I want to batch edit the altitude values of all my images by adding the difference between actual ground elevation and the drone altitude directly into the EXIF of the images.
e.g.
Original image altitude = 250m. Edited image altitude = 250m+x
I have found the exiftoolr R package, which allows you to read and write EXIF data using the standalone ExifTool and Perl programs (see here: https://github.com/JoshOBrien/exiftoolr)
This is my code so far:
library(exiftoolr)
#Object containing images in directory
image_files <-dir("D:/....../R/EXIF_Header_Editing/Imagery",full.names=TRUE)
#Reading info
exif_read(image_files, tags = c("filename", "AbsoluteAltitude")) #Only interested in "filename" and "AbsoluteAltitude"
#Saving to new variable
altitude<-list(exif_read(image_files, tags=c("filename","AbsoluteAltitude")))
This is what some of the output looks like:
      FileName AbsoluteAltitude
1 DJI_0331.JPG          +262.67
2 DJI_0332.JPG          +262.37
3 DJI_0333.JPG          +262.47
4 DJI_0334.JPG          +262.57
5 DJI_0335.JPG          +262.47
6 DJI_0336.JPG          +262.57
etc.
I now need to add x to every "AbsoluteAltitude" entry in the list, and then overwrite the existing image altitude value with this new adjusted altitude value, without editing any other important EXIF information.
Any ideas?
I have a program that allows me to batch edit EXIF altitude, but it makes all the values the same, and I need to keep the variation between the values.
Thanks in advance
Just a follow-up to StarGeek's answer. I managed to figure out the R equivalent. Here is my solution:
#Installing package from GitHub
if(!require(devtools)) {install.packages("devtools")}
devtools::install_github("JoshOBrien/exiftoolr",force = TRUE)
#Installing/updating ExifTool program into exiftoolr directory
exiftoolr::install_exiftool()
#Loading packages
library(exiftoolr)
#Set working directory
setwd("D:/..../R/EXIF_Header_Editing")
#Object containing images
image_files <- dir("D:/..../R/EXIF_Header_Editing/Imagery",full.names = TRUE)
#Editing "GPSAltitude" by adding 500m to Altitude value
exif_call(args = "-GPSAltitude+=500", path = image_files)
And when opening the .jpg properties, the adjusted Altitude shows.
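To verify this in R rather than through the file properties, the edited tag can simply be re-read with the same exif_read call as before:
#Confirming the edit: each altitude should now be 500 higher than before
exif_read(image_files, tags = c("FileName", "GPSAltitude"))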
Thanks StarGeek
If you're willing to try to just use exiftool, you could try this command:
exiftool -AbsoluteAltitude+=250 <DIRECTORY>
I'd first test it on a few copies of your files to see if it works to your needs.

How to load a geospatial pdf in R?

I am new to handling spatial data, and newer still to doing it in R.
My latest attempt was trying to read data in geospatial PDF format. It is information about Mexican political boundaries, so polygons; the file is CED06_CARTA_110614.pdf.
I tried to use the rgdal package to read the data. After typing ogrDrivers()[40:45,], which shows the available drivers, I got:
         name write
40     PCIDSK  TRUE
41        PDF  TRUE
42        PDS FALSE
43     PGDump  TRUE
44       PGeo FALSE
45 PostgreSQL  TRUE
The result shows that there is a driver for PDFs, but trying the usual way to read files, readOGR(dsn = "data source name", layer = "LAYER"), produces:
Error in ogrInfo(dsn = dsn, layer = layer, encoding = encoding, use_iconv = use_iconv, :
Cannot open file
The function's help does not say what values are expected for dsn or layer when the file is in geospatial PDF format.
Does anybody know a way to import data from a geospatial PDF? I would appreciate any answer.
By the way, I have Ubuntu 14.04.3 with QGIS installed, and the latest versions of R and rgeos.
The dsn is the file path, and the layer name is internal to the PDF. You can get a list of layers with ogrListLayers on the file name:
> ogrListLayers("foo.pdf")
[1] "polys"
attr(,"driver")
[1] "PDF"
attr(,"nlayers")
[1] 1
Ugly output, but that's one layer called polys. So I can read it like this:
> polys = readOGR("./foo.pdf","polys")
OGR data source with driver: PDF
Source: "./foo.pdf", layer: "polys"
with 9 features
It has 1 fields
Note that this only applies to a special class of PDF files with the map data stored in a particular way. Just because your PDF has a map in it doesn't make it a Geospatial PDF. Here's the command-line test on my Geospatial PDF:
$ ogrinfo Monaco/foo.pdf
Had to open data source read-only.
INFO: Open of `Monaco/foo.pdf'
using driver `PDF' successful.
1: polys (Polygon)
and here's the test on yours:
$ ogrinfo CED06_CARTA_110614.pdf
FAILURE:
Unable to open datasource `CED06_CARTA_110614.pdf' with the following drivers.
-> ESRI Shapefile
-> MapInfo File
[etc etc]
-> PDF
[etc etc]
So you don't have a Geospatial PDF.
Your options are, in possible order of simplicity, something like:
Get the boundary data in a GIS data format (e.g. shapefile, GeoPDF)
Save as an image, load into a GIS, georeference and trace it (QGIS can do this)
Get the raw PDF vectors out of the PDF, assuming they are vectors (first glance shows me the map isn't an image), then find the right transformation to whatever coordinate system, then possibly rebuild the topology if all you have is line segments...
I've had a little bit of success using pstoedit to convert the PDF to a DXF file, which can be loaded into QGIS, but then you have to clean it up and reconstruct the polygons, and then it's still not in the right geographical location. It would be much simpler if you can get a shapefile of the regions you are interested in.
If what you want is a raster version of the PDF, then you can either use raster::stack("file.pdf") or readGDAL("file.pdf"). But you'll get an image with no georeferencing (it'll just have a bounding box of the number of pixels) since there's no coordinate system with the PDF.
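For instance (a minimal sketch; the PDF is rendered to a plain pixel grid, so the result has no CRS attached):
library(raster)
img <- stack("CED06_CARTA_110614.pdf")  #bands are just pixel values; extent is in pixels
plotRGB(img)                            #displays the rendered page as an RGB image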

Creating Shapefiles in R

I'm trying to create a shapefile in R that I will later import to either Fusion Table or some other GIS application.
To start, I imported a blank shapefile containing all the census tracts in Canada. I have attached other data (in tabular format) to the shapefile based on the unique ID of the CTs, and I have mapped my results. At the moment, I only need the tracts in Vancouver, so I would like to export a shapefile that contains only the Vancouver CTs as well as my newly attached attribute data.
Here is my code (some parts omitted due to privacy reasons):
shape <- readShapePoly('C:/TEST/blank_ct.shp') #Load blank shapefile
shape@data = data.frame(shape@data, data2[match(shape@data$CTUID, data2$CTUID),]) #data2 is my created attributes that I'm attaching to the blank file
shape1 <- shape[shape$CMAUID == 933,] #selecting the Vancouver CTs
I've seen other examples using writePolyShape to create the shapefile. I tried it, and it worked to an extent: it created the .shp, .dbf, and .shx files. But I'm missing the .prj file, and I'm not sure how to go about creating it. Are there better methods out there for creating shapefiles?
Any help on this matter would be greatly appreciated.
Use rgdal and writeOGR; rgdal will preserve the projection information. Something like:
library(rgdal)
shape <- readOGR(dsn = 'C:/TEST', layer = 'blank_ct')
# do your processing
shape@data = data.frame(shape@data, data2[match(shape@data$CTUID, data2$CTUID),]) #data2 is my created attributes that I'm attaching to the blank file
shape1 <- shape[shape$CMAUID == 933,]
writeOGR(shape1, dsn = 'C:/TEST', layer ='newstuff', driver = 'ESRI Shapefile')
Note that the dsn is the folder containing the .shp file, and the layer is the name of the shapefile without the .shp extension. It will read (readOGR) and write (writeOGR) all the component files (.dbf, .shp, .prj, etc.)
Problem solved! Thank you again to those who helped!
Here is what I ended up doing:
As Mnel wrote, this line will create the shapefile.
writeOGR(shape1, dsn = 'C:/TEST', layer ='newstuff', driver = 'ESRI Shapefile')
However, when I ran this line, it came back with this error:
Can't convert columns of class: AsIs; column names: ct2,mprop,mlot,mliv
This is because my attribute data were not numeric but character. Luckily, my attribute data are all numbers, so I ran transform() to fix the problem.
shape2 <- shape1
shape2@data <- transform(shape1@data, ct2 = as.numeric(ct2),
                         mprop = as.numeric(mprop),
                         mlot = as.numeric(mlot),
                         mliv = as.numeric(mliv))
I tried the writeOGR() command again, but I still didn't get the .prj file I was looking for. The problem was that I hadn't specified the coordinate system for the shapefile when importing it. Since I already knew what the coordinate system was, all I had to do was define it on import:
shape <- readShapePoly('C:/TEST/blank_ct.shp', proj4string = CRS("+proj=longlat +datum=WGS84"))
After that, I re-ran all the things I wanted to do with the shapefile, and the writeOGR line for exporting. And that's it!
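Put together, the sequence that finally produces the .prj file looks something like this (a sketch assuming, as above, that the source data really are unprojected WGS84 longitude/latitude):
library(maptools)
library(rgdal)
#Read the shapefile, declaring the CRS it is known to be in
shape <- readShapePoly('C:/TEST/blank_ct.shp',
                       proj4string = CRS("+proj=longlat +datum=WGS84"))
shape1 <- shape[shape$CMAUID == 933,]  #subset to the Vancouver CTs as above
#writeOGR now writes the .prj alongside the .shp, .shx, and .dbf
writeOGR(shape1, dsn = 'C:/TEST', layer = 'newstuff', driver = 'ESRI Shapefile')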
