I am busy with some drone mapping. However, the altitude values in the images are very inconsistent between repeated flight missions (up to 120m). The program I use to stitch my drone images into an orthomosaic thinks the drone is flying underground, as the image altitude is lower than the actual ground elevation.
To rectify this issue, I want to batch edit the altitude values of all my images by adding the difference between actual ground elevation and the drone altitude directly into the EXIF of the images.
e.g.
Original image altitude = 250m. Edited image altitude = 250m+x
I have found the exiftoolr R package, which allows you to read and write EXIF data using the standalone ExifTool and Perl programs (see here: https://github.com/JoshOBrien/exiftoolr)
This is my code so far:
library(exiftoolr)
#Object containing images in directory
image_files <-dir("D:/....../R/EXIF_Header_Editing/Imagery",full.names=TRUE)
#Reading info
exif_read(image_files, tags = c("filename", "AbsoluteAltitude")) #Only interested in "filename" and "AbsoluteAltitude"
#Saving to a new variable (exif_read already returns a data frame, so wrapping it in list() is unnecessary)
altitude <- exif_read(image_files, tags = c("FileName", "AbsoluteAltitude"))
This is what some of the output looks like:
FileName AbsoluteAltitude
1 DJI_0331.JPG +262.67
2 DJI_0332.JPG +262.37
3 DJI_0333.JPG +262.47
4 DJI_0334.JPG +262.57
5 DJI_0335.JPG +262.47
6 DJI_0336.JPG +262.57
etc.
I now need to add x to every "AbsoluteAltitude" entry in the list, and then overwrite the existing image altitude value with this new adjusted altitude value, without editing any other important EXIF information.
Any ideas?
I have a program that allows me to batch edit EXIF altitude, but this makes all the values the same, and I need to keep the variation between the values.
Thanks in advance
Just a follow-up to #StarGeek's answer. I managed to figure out the R equivalent. Here is my solution:
#Installing package from GitHub
if(!require(devtools)) {install.packages("devtools")}
devtools::install_github("JoshOBrien/exiftoolr",force = TRUE)
#Installing/updating ExifTool program into exiftoolr directory
exiftoolr::install_exiftool()
#Loading packages
library(exiftoolr)
#Set working directory
setwd("D:/..../R/EXIF_Header_Editing")
#Object containing images
image_files <- dir("D:/..../R/EXIF_Header_Editing/Imagery",full.names = TRUE)
#Editing "GPSAltitude" by adding 500m to Altitude value
exif_call(args = "-GPSAltitude+=500", path = image_files)
And when opening the .jpg properties, the adjusted Altitude shows.
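To double-check the result without opening each file's properties, the tags can simply be re-read (a sketch reusing the image_files vector from above; every image should keep its original variation, shifted by the 500 m offset):

```r
library(exiftoolr)

# Re-read the altitude tags after the edit to confirm per-image variation survived
check <- exif_read(image_files, tags = c("FileName", "GPSAltitude"))
head(check)
```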
Thanks StarGeek
If you're willing to try to just use exiftool, you could try this command:
exiftool -AbsoluteAltitude+=250 <DIRECTORY>
I'd first test it on a few copies of your files to see if it works to your needs.
I have a habitat classification map of Iceland (https://vistgerdakort.ni.is/) with 72 classes in a tif file of 5m*5m pixel size. I want to simplify it so that there are only 14 classes. I open the files (a tif file and a text file containing the reclassification rules) and use the classify function from the terra package as follows on a subset of the map.
raster <- rast("habitat_subset.tif")
reclass_table<-read.table("reclass_habitat.txt")
habitat_simple<-classify(raster, reclass_table, othersNA=TRUE)
It does exactly what I need it to do and I am able to save the file back to tif using
writeRaster(habitat_simple, "reclass_hab.tif")
The problem is that my initial tif file was 105MB and my new reclassified tif file is 420MB. Since my goal is to reclassify the whole extent of the country, I can't afford to have the file become so big. Any insights on how to make it smaller? I could not find any comments online about this issue.
You can specify the datatype; in your case you should be able to use "INT1U" (i.e., byte values between 0 and 254; 255 is used for NA, at least that is the default). That should give a file four times smaller than when you write it with the default "FLT4S". Based on your question, the original data come with that datatype. In addition you could use compression, though I am not sure how well it works with "INT1U". You could have found out about this in the documentation; see ?writeRaster
writeRaster(habitat_simple, "reclass_hab.tif",
            wopt=list(datatype="INT1U", gdal="COMPRESS=LZW"))
You could also skip the writeRaster step; with terra >= 1.1-4 you can just do
habitat_simple <- classify(raster, reclass_table, othersNA=TRUE,
                           datatype="INT1U", gdal="COMPRESS=LZW")
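To confirm the smaller datatype actually ended up on disk, a quick check can be run afterwards (a sketch; the path is assumed to be the file written above):

```r
library(terra)

# Re-open the written file and inspect how it was stored
r <- rast("reclass_hab.tif")
datatype(r)                    # should report "INT1U"
file.size("reclass_hab.tif")   # bytes on disk; roughly 4x smaller than FLT4S, less with compression
```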
I want to use the sen2r toolbox to process Sentinel-2 L2A data in R. I have already manually downloaded the images in .SAFE format.
I have used s2_translate() to convert the .SAFE format to GeoTIFF:
in_dir <- "D:/data/s2"
out_dir <-"D:/s2_geotifs"
## translate .safe to geotif
s2_example <- file.path(
  in_dir,
  "S2B_MSIL2A_20200525T104619_N0214_R051_T31UFT_20200525T133932.SAFE")
s2_raster_dir <- s2_translate(s2_example,
                              format = "GTiff",
                              outdir = out_dir)
This results in a raster brick with 11 layers, all corresponding to the optical bands of Sentinel-2 as far as I can see.
Now I want to apply the s2_mask function (specifically to bands 4 and 8, because I want to compute NDVI), but the documentation says you need the SCL product as input. The SCL product consists of bands with the classified cloud pixels used for masking. If I load the .SAFE image into SNAP, for example, I can see the SCL products. However, I cannot find the SCL in my s2_translate() output, or in the original .SAFE for that matter.
According to the documentation the input should be as follows:
So the issue is that I cannot find the SCL product anywhere. I have applied s2_translate() as required.
By default, s2_translate() only generates BOA output. I think you need to explicitly generate the SCL file from the SAFE as well, again using s2_translate(), with something along the lines of:
s2_translate(s2_example,
             prod_type = "SCL",
             format = "GTiff",
             outdir = out_dir)
see documentation here:
http://sen2r.ranghetti.info/reference/s2_translate.html
http://sen2r.ranghetti.info/reference/safe_shortname.html
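Once both products exist, the cloud mask and NDVI can also be done with terra alone. This is only a sketch: the file names below are hypothetical (check what s2_translate() actually wrote to out_dir), and the assumption that red (B4) and NIR (B8) sit at layers 4 and 8 of the BOA stack should be verified against your data.

```r
library(terra)

# Hypothetical file names -- substitute the actual s2_translate() outputs
boa <- rast(file.path(out_dir, "sentinel_BOA.tif"))
scl <- rast(file.path(out_dir, "sentinel_SCL.tif"))

# SCL comes at 20 m; resample to the 10 m BOA grid with nearest neighbour
scl10 <- resample(scl, boa, method = "near")

# SCL classes commonly masked: 3 = cloud shadow, 8/9 = cloud medium/high prob., 10 = thin cirrus
cloudy <- scl10 %in% c(3, 8, 9, 10)
boa_masked <- mask(boa, cloudy, maskvalue = TRUE)

# NDVI = (NIR - red) / (NIR + red); layers 8 and 4 assumed -- verify the band order first
ndvi <- (boa_masked[[8]] - boa_masked[[4]]) / (boa_masked[[8]] + boa_masked[[4]])
```

sen2r itself ships s2_mask() and s2_calcindices(), which can wrap these steps; the terra version above is just the manual equivalent.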
I am trying to merge multiple orthophotos (tif format) into GPKG format for use in the QField app. I found an answer on how to do this in R with the gdalUtils package.
I used this part of code:
gdalwarp(of="GPKG", srcfile=l[1:50], dstfile="M:/qfield/merge.gpkg",
         co=c("TILE_FORMAT=PNG_JPEG", "TILED=YES"))
The process finished successfully, but when I looked at the results I found some artefacts.
In the photo you can see the merged gpkg file with an added fishnet layer (tif sheets) and the artefacts. The artefacts look like small parts of the original tif sheets, reordered and probably also overlapping.
At first I thought there was some error in the original orthophoto tifs. But then I created a raster mosaic and a raster catalog, merged the tifs into a new tif dataset, published the raster catalog to an Esri server and created a tpk file from it, and the artefacts did not show. Is the problem my code? Thank you
Edit:
I found a solution to my problem. If I create a VRT first, the artefacts do not show, so I used this code:
gdalbuildvrt(gdalfile = ll, output.vrt = "C:/Users/..../dmk_2017_2019.vrt")
gdalwarp(of = "GPKG", srcfile = "C:/Users/..../dmk_2017_2019.vrt",
         dstfile = "C:/Users/..../dmk_2017_2019.gpkg")
gdaladdo(filename = "C:/Users/..../dmk_2017_2019.gpkg", r = "average",
         levels = c(2, 4, 8, 16, 32, 64, 128, 256), verbose = TRUE)
I have another question: what can I do to get a transparent nodata value (background)?
I tried with -srcnodata "255 255 255", but that only turned the black background white. I also tried the dstalpha argument, but without success. In QGIS this is possible with a transparency setting:
I want to read 12 images at a time in R.
I don't know how to do it. I am completely new to working with images in R.
How can I read couple of images from a folder in my system?
I am using windows10 operating system. RAM 8 gb. CORE i5 processor.
GPU is Intel(R) HD Graphics 620.
I am able to read only a single image in R, and that image is displayed as numeric values. I tried to convert it into raster format and then print it to view the image. But I still see colour codes in the values, not the image itself, when printing.
Can anyone help me on this?
Thanks a lot.
install.packages("magick")
library(magick)
install.packages("rsvg")
install.packages("jpeg")
library(jpeg)
img <- readJPEG("C:/Users/folder/Abc.jpg", native = FALSE)
img1 <- as.raster(img, interpolate = F)
print(img1)
I want to read a couple of images at a time into the R console and view or print them.
The suggested duplicate gives you the basics for how to read in a number of files at once, but there are a few potential gotchas, and it won't help you with displaying the images.
This first bit is purely to set up the example
library(jpeg)
library(grid)
# Create a new directory and move to it
tdir <- "jpgtest"
dir.create(tdir)
setwd(tdir)
# Copy the package:jpeg test image twice, once as .jpg and once as .jpeg
# to the present working directory
file.copy(system.file("img", "Rlogo.jpg", package="jpeg"),
          to=c(file.path(getwd(), "test.jpg"), file.path(getwd(), "test.jpeg")))
Then we can list the files, either using a regex match, or choose them interactively, then read and store the images in a list.
# Matches any file name ending in .jpg or .jpeg
(flist <- list.files(pattern="\\.jpe?g$"))
# Interactive selection
flist <- file.choose()
jpglist <- lapply(flist, readJPEG)
To display the images I tend to use grid, but there are a number of alternatives.
grid.raster(jpglist[[1]], interpolate=FALSE)
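Since the question already installs magick, the same list of files can be read and displayed with it as well (a sketch assuming the flist vector from above):

```r
library(magick)

# image_read() accepts a vector of paths and returns one image stack
imgs <- image_read(flist)
print(imgs)   # shows the images in the RStudio viewer
```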
Remove temporary directory
setwd("..")
unlink(tdir)
I've made different plots (more than a hundred) for a project and I haven't captured them along the way (yes, it's bad, I know). Now I need to save them all at once, but without running my script again (which takes hours). Is there a way to do so within RStudio?
Edit: All the plots are already there and I don't want to run them again.
In RStudio, every session has a temporary directory that can be obtained using tempdir(). Inside that temporary directory, there is another directory that always starts with "rs-graphics" and contains all the plots saved as ".png" files. Therefore, to get the list of ".png" files you can do the following:
plots.dir.path <- list.files(tempdir(), pattern="rs-graphics", full.names = TRUE)
plots.png.paths <- list.files(plots.dir.path, pattern=".png", full.names = TRUE)
Now, you can copy these files to your desired directory, as follows:
file.copy(from=plots.png.paths, to="path_to_your_dir")
Additional feature:
As you will notice, the .png file names are automatically generated (e.g., 0078cb77-02f2-4a16-bf02-0c5c6d8cc8d8.png). So if you want to number the .png files according to their plotting order in RStudio, you may do so as follows:
plots.png.details <- file.info(plots.png.paths)
plots.png.details <- plots.png.details[order(plots.png.details$mtime),]
sorted.png.names <- gsub(plots.dir.path, "path_to_your_dir", row.names(plots.png.details), fixed=TRUE)
numbered.png.names <- paste0("path_to_your_dir/", 1:length(sorted.png.names), ".png")
# Rename all the .png files as: 1.png, 2.png, 3.png, and so on.
file.rename(from=sorted.png.names, to=numbered.png.names)
Hope it helps.
Although this discussion has been inactive for a while, some people, like myself, still come across the same problem, and the other solutions don't really seem to address what the actual question is.
So, hands on. Your plot history gets saved in a variable called .SavedPlots. You can access it directly, assign it to another variable in code, or do the latter from the Plots window.
# ph for plot history
ph <- .SavedPlots
In R 3.4.2, I could index ph to reproduce the corresponding plot in a device. What follows is rather straightforward:
Open a new device (png, jpeg, pdf...).
Reproduce your plot ph[index_of_plot_in_history].
Close the device (or keep plotting if it is a pdf with multiple pages).
Example:
# lastplot = number of plots in the history
for(i in 1:lastplot) {
  png(paste0("plot_", i, ".png"))  # a distinct file name per plot, or each one overwrites the last
  print(ph[i])
  dev.off()
}
Note: Sometimes plots are missing from the history because of poor programming. For instance, I was using the mice package to impute many datasets with a large number of variables, and plotting as shown in section 4.3 of this paper. The problem was that only three variables per plot were displayed, and if I used a png device in my code, only the last plot of each dataset would be saved. However, if the plots were printed to a window, all the plots of each dataset would be recorded.
If your plots are 3D (made with the rgl package), you can take a snapshot of all your plots and save them in .png format.
snapshot3d(filename = '../Plots/SnapshotPlots.png', fmt = 'png')
Otherwise, the best way is to create a multi-panelled plotting window using the par(mfrow=...) setting. Try the following:
plotsPath = "../Plots/allPlots.pdf"
pdf(file = plotsPath)
par(mfrow = c(2, 1))   # set the two-panel layout once, before plotting
for (x in seq(1, 100)) {
  p1 = rnorm(x)
  p2 = rnorm(x)
  plot(p1, p2)
}
dev.off()
You can also use the png, bmp, tiff, and jpeg functions instead of pdf. You can read about their advantages and disadvantages and choose the one that fits your needs.
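If you prefer one raster file per figure instead of a multi-page pdf, the same idea works with numbered png files; this sketch uses only base R:

```r
# Write each figure to its own numbered png file
for (x in seq(1, 10)) {
  png(sprintf("plot_%02d.png", x))   # plot_01.png, plot_02.png, ...
  plot(rnorm(x), rnorm(x))
  dev.off()
}
```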
I am not sure how RStudio opens the device where the plots are drawn, but I guess it uses dev.new(). In that case, one quick way to save all open graphs is to loop through all the devices and write them out using dev.print.
Something like :
lapply(dev.list(), function(d) {
  dev.set(d)
  dev.print(pdf, file = file.path(folder, paste0("graph_", d, ".pdf")))
})
where folder is the path of the folder where you want to store your graphs (for example folder = "~" if you are on Linux and want to store them all in your home folder).
If you call the following function, everything plotted afterwards will be saved into one document:
pdf("nameofthedocument.pdf")
plot(x~y)
plot(...
dev.off()
You can also use tiff(), jpeg()... see ?pdf