Batch fill values by expression - raster

I have a raster and a polygon shapefile (5 polygons in it, named "Gitter").
I would like to clip the raster with each polygon into a separate file. Every polygon has an attribute "id", running from 1 to 5.
I chose the tool Clip raster by mask layer as a batch process. For the mask layer I chose Fill values by expression, but what is the right expression?
I have tried 'Gitter' "id" = #row_number, $currentfeature, and 'Gitter'$currentfeature.
Nothing worked.

I have a bit of experience with QGIS and Python. I once tested something like what you need, with the difference that I didn't mind naming the raster files after an attribute of the vector features. If your 'id' attribute is the same as the feature id, you can use the vector iterator button and set a file prefix that will be followed by the id of the mask feature. You can find more info here: Vector iterator button.
If your id field is not the same as the feature id, you can open the QGIS Python console and paste this code, replacing the file paths and the field name:
from os import path

import processing
# these classes are pre-loaded in the QGIS Python console; the imports are for clarity
from qgis.core import (QgsProject, QgsVectorLayer, QgsRasterLayer,
                       QgsProcessingFeatureSourceDefinition)

vector_layer_path = 'vector/path'   # replace with the full path of the vector layer, including the file extension
raster_layer_path = 'raster/path'   # replace with the full path of the raster layer, including the file extension
field = 'name_of_your_field'        # replace with the field that holds the file names for the output rasters
output_folder = 'folder/path'       # replace with the path where the output rasters will be stored

project = QgsProject.instance()
vector_layer = QgsVectorLayer(vector_layer_path, '', 'ogr')
raster_layer = QgsRasterLayer(raster_layer_path, '')
project.addMapLayer(vector_layer)

for feature in vector_layer.getFeatures():
    # select one feature at a time and clip the raster using only the selection
    vector_layer.select(feature.id())
    processing.run('gdal:cliprasterbymasklayer', {
        'INPUT': raster_layer_path,
        'MASK': QgsProcessingFeatureSourceDefinition(vector_layer.id(), True),
        'KEEP_RESOLUTION': True,
        'OUTPUT': path.join(output_folder, str(feature[field]) + '.tiff')
    })
    vector_layer.removeSelection()

Related

"%s" random concatenation in R

I am looking for a way to concatenate a random string or a number (at least 3 digits) into a save file name.
For instance, in Python I can use something like '%s%s' % (name, number) together with a random generator to build the csv file name. How do I generate a random number into a file name format in R?
This is my save file, and I want to add a random string or number at the end of the file name:
file = paste(path, 'group1N[ADD FORMAT HERE].csv', sep = '')
so that
file = paste(path, 'group1N.csv', sep = '')
becomes
file = paste(path, 'group1N212.csv', sep = '') or file = paste(path, 'group1Nkut.csv', sep = '')
after using a random generator of strings or numbers and appending the result to the saved .csv file name each time it is saved.
You could use the built-in tempfile() function:
tempfile(pattern="group1N", tmpdir=".", fileext=".csv")
[1] "./group1N189d494eaaf2ea.csv"
(if you don't specify tmpdir the results go to a session-specific temporary directory).
This won't write over existing files; given that there are 14 hex digits in the random component, the "very likely to be unique" in the documentation (quoted below) is an understatement (at a rough guess, the probability of a collision is something like 16^-14):
The names are very likely to be unique among calls to ‘tempfile’
in an R session and across simultaneous R sessions (unless
‘tmpdir’ is specified). The filenames are guaranteed not to be
currently in use.
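If you want the random name under your own path rather than in a temporary directory, a minimal sketch building on the question's paste() call (path is the asker's directory variable):
file <- file.path(path, basename(tempfile(pattern = "group1N", fileext = ".csv")))
# note: tempfile() only checks for collisions inside its own tmpdir, not inside path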

R - extract data which changes position from file to file (txt)

I have a folder with many txt files from which I have to extract specific data. The problem is that the file format changed at some point, and with it the position of the data I need, so I have to deal with files in two different formats.
To make it clearer: in column 4 I have the name of the variable and in column 5 its value, but sometimes these sit in a different row. Is there a way to find the row containing the variable name and then extract its value?
Thanks in advance.
EDIT:
In some files I will have the data like this:
Column 1-------Column 2.
Device ID------A.
Voltage------- 500.
Current--------28
But at some point a change was made to the software to add another variable, and the new file looks like this:
Column 1-------Column 2.
Device ID------A.
Voltage------- 500.
Error------------5.
Current--------28
So I need to deal with these 2 types of data, extracting the same variables which are in different rows.
If these files can't be read with read.table, use readLines and then find the lines that start with the keyword you need.
For example:
Sample file 1 (with the dashes included and extra line breaks):
Column 1-------Column 2.
Device ID------A.
Voltage------- 500.
Error------------5.
Current--------28
Sample file 2 (with a comma as separator):
Column 1,Column 2.
Device ID,A.
Current,555
Voltage, 500.
Error,5.
For both cases do:
text = readLines("your filename here")
curr = text[grepl("^Current", text, ignore.case = TRUE)]
Which returns:
for file 1:
[1] "Current--------28"
for file 2:
[1] "Current,555"
Then use gsub to remove anything that is not a number.
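For example, a minimal sketch assuming the values are integers (keeping only the digits; the dash padding rules out simply preserving '-' and '.'):
as.numeric(gsub("[^0-9]", "", curr))  # "Current--------28" -> 28, "Current,555" -> 555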

Exif data for camera trapping image management. Creating new names for images based on date-time and folder names

I am sorry for my stupid questions, but I am struggling to write the code I want. I am working with code found in the blog post Fish and Whistle by Dewey Dunnington (http://apps.fishandwhistle.net/archives/956).
I am trying to write a loop that renames all images in all folders (recursively) with the name of the folders and the "CreateDate" date-time from the EXIF data. Images are stored in camera-station folders (e.g. station "A") and then in camera folders (two cameras per station, e.g. camera "1").
So an individual image directory would be:
H:/GitHub/CT_Mara/images/raw_images/A/1/....jpg....jpg....jpg....etc.
Ideally, I would want my images renamed to "A1_2017-05-03 15-45-13.jpg", and if two or more photos end up with the same name they should be called "A1_2017-05-03 15-45-13(1).jpg", "A1_2017-05-03 15-45-13(2).jpg", and so on.
What I am trying to accomplish:
- rename all images according to the date and time in exifdata$CreateDate
- attach (1), (2), etc. to images with the same name
- attach the name of the station and camera folders to the image's name
- lastly, as a separate function, it would be nice to know how I could create a new column in the exifdata frame, for example a "species" column where animals can be identified
This is the code I am using:
library(lubridate)

# define exif function
exifRip <- function(filename) {
  command <- paste("exiftool -n -csv",
                   paste(shQuote(filename), collapse = " "))
  read.csv(textConnection(system(command, intern = TRUE)),
           header = TRUE,
           sep = ",",
           quote = "",
           stringsAsFactors = FALSE)
}

# load exif data from my directory
exifdata <- exifRip(list.files(path = "H:/GitHub/CT_Mara/images/raw_images"))
View(exifdata)

# set output directory
outdir <- dir.create("H:/GitHub/CT_Mara/images/raw_images/EXIFdata")
Everything runs perfectly except for this loop:
for (i in 1:nrow(exifdata)) {
  row <- exifdata[i, ]
  d <- ymd_hms(row$CreateDate)
  ext <- tools::file_ext(row$SourceFile)  # maintain the file extension
  newname <- file.path(outdir,
                       sprintf("%04d-%02d-%02d %02d.%02d.%02d.%s",
                               year(d), month(d), day(d), hour(d), minute(d),
                               second(d), ext))
  file.copy(row$SourceFile, newname)
}
I get the following error message:
Error in sprintf("%04d-%02d-%02d %02d.%02d.%02d.%s", year(d), month(d), :
invalid format '%04d'; use format %f, %e, %g or %a for numeric objects
In addition: Warning message:
All formats failed to parse. No formats found.
Any advice on how to clean this up would be highly appreciated. Thanks in advance.
Kind Regards,
Philip
The following exiftool command gives you almost exactly what you want without the need to write a script. The only difference is that duplicate files will be named like "NAME_1", "NAME_2" instead of "NAME(1)", "NAME(2)":
exiftool '-filename<%-1:1D%-1:D_${createdate}%+c.%e' -d "%Y-%m-%d %H-%M-%S" -r DIR
Where DIR is the name of the directory containing the images. If you are in Windows, you should use double quotes instead of single quotes around the first argument.
Replace "filename" with "testname" in this command for a dry-run test to see what the file names will be before actually doing the renaming.

Is it possible to have a variable range and columns using csvRead in Scilab 5.5.2

I am a fairly novice, self-taught programmer using Scilab. I have .csv files that I want to read. They mix text and numerical values and have variable numbers of columns and rows. The part of each file I am interested in has a fixed number of columns but not a fixed number of rows. I can skip the first part using the header argument, but there are also cells at the bottom that I do not need. An example of what a file could look like:
DATA,1,0,3,3960.4,3236,3373,-132
DATA,1,0,4,4544.5,3530,3588,-76
RANDOM TEXT,0
INFO,1,0,#+BHO0 _:WRF&-11,S%00-0-03-1
INFO,2,1,#*BHO0 _8WRF&-11,NAS%00-0-15-1
I am only interested in the lines that start with DATA. If I try to run csvRead without removing the lines below them, I get this error:
Warning: Inconsistency found in the columns. At line 4993, found 2 columns
while the previous had 8.
I currently have a program that reads the file and manipulates it as required, but I have to go into each file and delete the bottom rows by hand. Is there a way to get around this?
My current program looks something like this:
D = uigetfile([".csv"],"path", "Choose a file name", %t);
filename = fullfile(D);
sub = ["DATA" "0"];
//Import data
data = csvRead(filename, ',', [], 'string', sub, [], [], 34);
edit(filename)
//determine # of rows
data_size = size(data);
limit = data_size(1);
Any ideas?
It is not possible to tell csvRead to ignore lines with fewer columns, or to fill them with a default value (which would be nice).
A workaround in your case is to parse only the lines starting with DATA. This can be accomplished with regular expressions.
The regexpcomments argument of csvRead lets you ignore lines in the csv file that match a given regular expression. Conveniently, it is possible to write a regular expression that matches all strings not containing a certain pattern:
/^(?:(?!PATTERN).)*$/
Applying this regex here, all lines not containing PATTERN are treated as comments and thus ignored.
In code, that means something like the following.
filename = fullfile('data.csv');
sub = ["DATA" "0"];
// Import data, treating every line that does not start with DATA as a comment
number_of_header_lines = 1;
read_only_lines_starting_with = 'DATA';
regexp_magic = '/^(?:(?!' + read_only_lines_starting_with + ').)*$/';
data = csvRead(filename, ',', [], 'string', sub, regexp_magic, [], number_of_header_lines);
disp(data)

Reading configuration from a text file

I have a txt file with entries like:
indexUrl=http://192.168.2.105:9200
jarFilePath = /home/soumy/lib
How can I read this file from R and get the value of jarFilePath?
I need this to set .jaddClassPath(). I have a problem copying the jar to the classpath because of the difference in slashes between Windows and Linux.
On Linux I want to use
.jaddClassPath(dir("target/mavenLib", full.names = TRUE))
but on Windows
.jaddClassPath(dir("target\\mavenLib", full.names = TRUE))
so I am thinking of reading the location of the jar from a property file. If there is any other alternative, please let me know that as well.
As of Sept 2016, CRAN has the package properties.
It handles = in property values correctly (but does not handle spaces after the first = sign).
Example:
Contents of properties file /tmp/my.properties:
host=123.22.22.1
port=798
user=someone
pass=a=b
R code:
install.packages("properties")
library(properties)
myProps <- read.properties("/tmp/my.properties")
Then you can access the properties like myProps$host, etc. In particular, myProps$pass is "a=b" as expected.
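Tying this back to the original question, a small sketch (the config path is hypothetical, and per the caveat above the file should have no spaces around the = signs):
library(properties)
library(rJava)
props <- read.properties("path/to/config.txt")  # hypothetical config path
.jaddClassPath(dir(props$jarFilePath, full.names = TRUE))  # forward slashes work on both Windows and Linux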
I do not know whether a package offers a specific interface.
If not, I would first load the data into a data frame using read.table:
myProp <- read.table("path/to/file/filename.txt", header = FALSE, sep = "=", row.names = 1, strip.white = TRUE, na.strings = "NA", stringsAsFactors = FALSE)
sep="=" is obviously the separator; this will nicely split your property names and values.
row.names=1 says the first column contains your row names, so you can index the remaining value column by property name.
For instance, myProp["jarFilePath", 1] will return "/home/soumy/lib".
strip.white=TRUE strips the leading and trailing spaces you probably don't care about.
One could conveniently convert the loaded data frame into a named vector for a cleaner way to retrieve the property values: myPropVec <- setNames(myProp[[1]], rownames(myProp)).
Then to retrieve a property value by name: myPropVec["jarFilePath"] will return "/home/soumy/lib" as well.
