Selecting frequency range on audio files with fir {seewave} - r

I'm very new to audio-related work in R!
I have to process a bunch of files and extract a certain frequency range, let's say from 500 to 2000 Hz.
Given a certain working directory I have:
library(tuneR)
library(seewave)
myFiles <- list.files()
for (i in seq_along(myFiles)) {
  track <- readWave(myFiles[[i]])
  track <- fir(track, from = 500, to = 2000, output = "Wave")
  track <- normalize(track, unit = as.character(track@bit))
  assign(paste0("pista", i), track)
}
I think fir from seewave is the right function for this, but I have two additional doubts:
How can I include a line of code here to create wav files in my working directory instead of R objects? I don't mind swapping to lapply if necessary.
Something is wrong with my code, as I am not able to open the resulting audio files afterwards in Raven (though I can in QuickTime!). Any suggestions?
Thanks!

Here's an example using lapply.
library(tuneR)
library(seewave)
# Make some files to test with
writeWave(noise(kind='pink'), filename = 'example1.wav')
writeWave(noise(kind='white'), filename = 'example2.wav')
myFiles <- list.files(pattern = 'example')
myfilterandsave <- function(files, index) {
  track <- readWave(files[index])
  filtered <- fir(track, from = 500, to = 2000, output = 'Wave')
  normalized <- normalize(filtered, unit = as.character(filtered@bit))
  name <- paste0('filtered', index, files[index])
  writeWave(object = normalized, filename = name)
  cat(name, '\r\n')
}
lapply(seq_along(myFiles), function(i) myfilterandsave(myFiles, i))
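The Raven problem from the question isn't addressed above, so here is a guess worth testing rather than a confirmed diagnosis: tuneR's writeWave writes an extensible-format WAV header by default (extensible = TRUE), and some analysis programs reportedly cannot parse that header even though general-purpose players like QuickTime can. If that is the cause, writing a plain PCM header may fix it:

```r
# Assumption: Raven rejects WAVE_FORMAT_EXTENSIBLE headers.
# tuneR::writeWave() can write a plain PCM header instead:
writeWave(object = normalized, filename = name, extensible = FALSE)
```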

Related

How to load .png images with image names listed in a .csv file to R

I am using a simple code below to append multiple images together with the R magick package. It works well, however, there are many images to process and their names are stored in a .csv file. Could anyone advise on how to load the image names to the image_read function from specific cells in a .csv file (see example below the code)? So far, I was not able to find anything appropriate that would solve this.
library (magick)
pic_A <- image_read('A.png')
pic_B <- image_read('B.png')
pic_C <- image_read('C.png')
combined <- c(pic_A, pic_B, pic_C)
combined <- image_scale(combined, "300x300")
image_info(combined)
final <- image_append(image_scale(combined, "x120"))
print(final)
image_write(final, "final.png") #to save
Something like this should work. If you load the csv into a dataframe, it's then straightforward to point image_read towards the appropriate elements.
And the index (row number) is included in the output filename so that things are not overwritten each iteration.
library (magick)
file_list <- read.csv("your.csv",header = F)
names(file_list) <- c("A","B","C")
for (i in 1:nrow(file_list)) {
  pic_A <- image_read(file_list$A[i])
  pic_B <- image_read(file_list$B[i])
  pic_C <- image_read(file_list$C[i])
  combined <- c(pic_A, pic_B, pic_C)
  combined <- image_scale(combined, "300x300")
  image_info(combined)
  final <- image_append(image_scale(combined, "x120"))
  print(final)
  image_write(final, paste0("final_", i, ".png")) # to save
}
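For reference, the loop above assumes your.csv has no header and one triplet of image file names per row. A minimal sketch of that assumed layout, using read.csv's text argument so it runs without a file on disk (the file names here are made up):

```r
# Hypothetical your.csv contents (no header, one image triplet per row):
#   A1.png,B1.png,C1.png
#   A2.png,B2.png,C2.png
csv_text <- "A1.png,B1.png,C1.png\nA2.png,B2.png,C2.png"
file_list <- read.csv(text = csv_text, header = FALSE)
names(file_list) <- c("A", "B", "C")
file_list$A[1]  # "A1.png" -- what image_read() receives on the first iteration
```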

Extract image name metadata from a mask in an ImageJ batch (.tiff) using r

I am attempting to write a for loop in r that will extract image name metadata from a number of ImageJ batches (saved as .tiff files) each containing 1-10 individual masks. So far my code for achieving this for a single mask within a batch is as follows:
library(OpenImageR)
library(ijtiff)
tifflist <- list.files(pattern = ".tiff", recursive = T, full.names = T) #save all .tiff files from working directory to a list
batch <- readImage(tifflist[1], all = T) #read in one batch within the list
pixels <- sum(batch[[1]]) #count black pixels of one mask within the batch
name <- read_tags(tifflist[1], frames = "all")[1] #extract the name of that same mask
I've been attempting to use the "ijtiff" package to extract the metadata, and, while I'm able to see a bunch of the other metadata with read_tags(), I have not been able to locate the name. Is there another package I could use to achieve this? Or another function? I know there is an image name associated with each mask, because they can still be found through ImageJ.
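Not a confirmed answer, but one place worth checking: ImageJ usually stores slice labels in the TIFF ImageDescription tag, and ijtiff's read_tags() exposes a description element per frame. Whether your masks' names live there is an assumption you'd need to verify against your own files:

```r
library(ijtiff)
# Assumption: the mask/slice name sits in the ImageDescription tag,
# exposed by read_tags() as the `description` element of each frame's tag list.
tags <- read_tags(tifflist[1], frames = "all")
str(tags[[1]])        # inspect all tags of the first frame
tags[[1]]$description # candidate location for the image name
```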
The eventual for loop would scale up something like this:
results <- data.frame(matrix(nrow = 0, ncol = 2))
colnames(results) <- c("image name", "pixel count")
row <- 0
for (i in seq_along(tifflist)) {
  batch <- readImage(tifflist[i], all = T)
  for (j in seq_along(batch)) {
    row <- row + 1
    pixels <- sum(batch[[j]])
    name <- NA #edit once solution is found
    results[row, 1] <- name
    results[row, 2] <- pixels
  }
}
Any guidance would be much appreciated! I am new to working with ImageJ and .tiff files and have limited experience scaling my code up into for loops.

Loop through subfolders and extract data from CSV files

I am trying to loop through all the subfolders of my wd, list their names, open 'data.csv' in each of them and extract the second and last value from that csv file.
The df would look like this :
Name_folder_1 2nd value Last value
Name_folder_2 2nd value Last value
Name_folder_3 2nd value Last value
For now, I have managed to list the subfolders and each of the files (thanks to this thread: read multiple text files from multiple folders), but I struggle to implement (what I'm guessing should be) a nested loop to read and extract data from the csv files.
parent.folder <- "C:/Users/Desktop/test"
setwd(parent.folder)
sub.folders1 <- list.dirs(parent.folder, recursive = FALSE)
r.scripts <- file.path(sub.folders1)
files.v <- list()
for (j in seq_along(r.scripts)) {
  files.v[j] <- dir(r.scripts[j], "data$")
}
Any hints would be greatly appreciated !
EDIT :
I'm trying the solution detailed below but there must be something I'm missing as it runs smoothly but does not produce anything. It might be something very silly, I'm new to R and the learning curve is making me dizzy :p
lapply(files, function(f) {
  dat <- fread(f) # faster
  dat2 <- c(basename(dirname(f)), head(dat$time, 1), tail(dat$time, 1))
  write.csv(dat2, file = "test.csv")
})
Not easy to reproduce but here is my suggestion:
library(data.table)
files <- list.files("PARENTDIR", full.names = TRUE, recursive = TRUE, pattern = "\\.csv$")
lapply(files, function(f) {
  dat <- fread(f) # faster
  # Do whatever, e.g. get the subfolder name
  basename(dirname(f))
})
You can simply look recursively for all CSV files in your parent directory and still recover each file's parent folder.
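To turn that skeleton into the data frame described in the question (folder name, 2nd value, last value), you can collect one row per file and bind them together. A minimal self-contained sketch in base R, assuming each data.csv has a time column as in the OP's edit (the toy directory tree below exists only to make the example runnable):

```r
# Build a toy directory tree to demonstrate: two subfolders, one data.csv each
parent <- file.path(tempdir(), "test")
for (sub in c("Name_folder_1", "Name_folder_2")) {
  dir.create(file.path(parent, sub), recursive = TRUE, showWarnings = FALSE)
  write.csv(data.frame(time = c(10, 20, 30, 40, 50)),
            file.path(parent, sub, "data.csv"), row.names = FALSE)
}

# One row per file: subfolder name, 2nd value, last value
files <- list.files(parent, pattern = "\\.csv$", full.names = TRUE, recursive = TRUE)
rows <- lapply(files, function(f) {
  dat <- read.csv(f)
  data.frame(name   = basename(dirname(f)),
             second = dat$time[2],
             last   = tail(dat$time, 1))
})
result <- do.call(rbind, rows)
result  # second = 20, last = 50 for each toy folder
```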

How to process numerous files from two folders in a loop using R

I am working on Lidar (Light Detection and Ranging) data to produce an output called a CHM (Canopy Height Model). I need two types of files with the same extension, stored in two different folders. These two file types don't have the same characteristics, so I am trying to apply a different function to the files in each folder. For example, here is the code that I want to run.
Setting the directory for one type of files
setwd("D:\\Raw_RS_Data\\LiDAR_Ground")
getwd()
fileList <- list.files(path = "D:\\Raw_RS_Data\\LiDAR_Ground", pattern = ".las")
fileList
for (i in seq_along(fileList)) { # apply loop function for all the files in this folder
  MyLas <- readLAS(fileList[i]) # read all the las files in the directory
  MyDTM <- grid_terrain(MyLas, res = 0.5, method = "knnidw", k = 6) # create DTM from the las files
  # Need to change the directory for the files stored in "D:\Raw_RS_Data\LiDAR_Non_Ground"
  # and execute lasnormalize in the same loop
  MyNorm <- lasnormalize(MyLas, MyDTM) # normalize the las files
The final output I need is CHM and here is the function for CHM
  MyCHM = grid_canopy(MyNorm, res = 0.5, start = c(0, 0))
}
Alternatively, if I could combine the files stored in the different folders, then I could apply the functions in a single loop. However, I also don't know how to combine files from different folders into one.
Thanks and Regards,
Yogendra
Would the following work? The question is a bit ambiguous but here's my shot at the interpretation:
#note that in general using "/" is better than "\"
file_path.1 = "D:/folder/non_ground"
file_path.2 = "D:/folder/ground"
#get the files of interest with full path name
files_non_ground = paste(file_path.1, dir(file_path.1, ".las"), sep = '/')
files_ground = paste(file_path.2, dir(file_path.2, ".las"), sep = '/')
#concatenate them into a single vector
files = c(files_non_ground, files_ground)
#initiate some storage container
out.list = list()
#iterate over files:
for (f in files) {
  MyLas <- readLAS(f)
  MyDTM <- grid_terrain(MyLas, res = 0.5, method = "knnidw", k = 6)
  MyNorm <- lasnormalize(MyLas, MyDTM)
  writeLAS(MyNorm, sub(".las", "norm.las", f))
  #out.list[f] = MyNorm
}
Just please be advised that I am not sure what's happening inside the for loop, the above is just my guess. Also, I think it is generally preferable to use full file path names instead of setting directories, especially when dealing with multiple folders.
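Since the question mentions applying a different function to each folder, the single loop above can also branch on each file's parent directory. A sketch of just the dispatch logic; the returned strings are placeholders standing in for the real lidR calls, which I've left out:

```r
# Branch on the parent folder name; the processing itself is a placeholder.
process_one <- function(f) {
  folder <- basename(dirname(f))
  if (folder == "ground") {
    "ground-processing"       # e.g. grid_terrain() would run here
  } else {
    "non-ground-processing"   # e.g. lasnormalize() against a DTM would run here
  }
}
process_one("D:/folder/ground/tile1.las")      # "ground-processing"
process_one("D:/folder/non_ground/tile1.las")  # "non-ground-processing"
```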

Applying an R script to multiple files

I have an R script that reads a certain type of file (nexus files of phylogenetic trees), whose name ends in *.trees.txt. It then applies a number of functions from an R package called bGMYC, available here and creates 3 pdf files. I would like to know what I should do to make the script loop through the files for each of 14 species.
The input files are in a separate folder for each species, but I can put them all in one folder if that facilitates the task. Ideally, I would like to output the pdf files to a folder for each species, different from the one containing the input file.
Here's the script
# Call Tree file
trees <- read.nexus("L_boscai_1411_test2.trees.txt")
# To use with different species, substitute "L_boscai_1411_test2.trees.txt" by the path to each species tree
#Store the number of tips of the tree
ntips <- length(trees$tip.label[[1]])
#Apply bgmyc.single
results.single <- bgmyc.singlephy(trees[[1]], mcmc=150000, burnin=40000, thinning=100, t1=2, t2=ntips, start=c(1,1,ntips/2))
#Create the 1st pdf
pdf('results_single_boscai.pdf')
plot(results.single)
dev.off()
#Sample 50 trees
n <- sample(1:length(trees), 50)
trees.sample <- trees[n]
#Apply bgmyc.multiphylo
results.multi <- bgmyc.multiphylo(trees.sample, mcmc=150000, burnin=40000, thinning=100, t1=2, t2=ntips, start=c(1,1,ntips/2))
#Create 2nd pdf
pdf('results_boscai.pdf') # Substitute 'results_boscai.pdf' by "*speciesname.pdf"
plot(results.multi)
dev.off()
#Apply bgmyc.spec and spec.probmat
results.spec <- bgmyc.spec(results.multi)
results.probmat <- spec.probmat(results.multi)
#Create 3rd pdf
pdf('trees_boscai.pdf') # Substitute 'trees_boscai.pdf' by "trees_speciesname.pdf"
for (i in 1:50) plot(results.probmat, trees.sample[[i]])
dev.off()
I've read several posts with a similar question, but they almost always involve .csv files, refer to multiple files in a single folder, have a simpler script or do not need to output files to separate folders, so I couldn't find a solution to my specific problem.
Should I use a for loop, or could I create a function out of this script and use lapply or another sort of apply? Could you provide me with sample code for your proposed solution or point me to a tutorial or another reference?
Thanks for your help.
It really depends on the way you want to run it.
If you are using linux / command line job submission, it might be best to look at
How can I read command line parameters from an R script?
If you are using a GUI (RStudio...) you might not be familiar with this, so I would solve the problem as a function or a loop.
First, get all your file names.
files = list.files(path = "your/folder")
# Now you have list of your file name as files. Just call each name one at a time
# and use for loop or apply (anything of your choice)
And since you would need to name the pdf files, you can use your file name or an index (e.g. the loop counter) and append it to the desired file name (e.g. paste0("single_boscai_", i, ".pdf")).
In your case,
files = list.files(path = "your/folder")
# Use pattern = "" if you want to do string matching, and extract
# only matching files from the source folder.
genPDF = function(input) {
  # Read the file
  trees <- read.nexus(input)
  # Store the index (numeric)
  index = which(files == input)
  # Store the number of tips of the tree
  ntips <- length(trees$tip.label[[1]])
  # Apply bgmyc.single
  results.single <- bgmyc.singlephy(trees[[1]], mcmc=150000, burnin=40000, thinning=100, t1=2, t2=ntips, start=c(1,1,ntips/2))
  # Create the 1st pdf
  outname = paste('results_single_boscai', index, '.pdf', sep = "")
  pdf(outname)
  plot(results.single)
  dev.off()
  # Sample 50 trees
  n <- sample(1:length(trees), 50)
  trees.sample <- trees[n]
  # Apply bgmyc.multiphylo
  results.multi <- bgmyc.multiphylo(trees.sample, mcmc=150000, burnin=40000, thinning=100, t1=2, t2=ntips, start=c(1,1,ntips/2))
  # Create 2nd pdf
  outname = paste('results_boscai', index, '.pdf', sep = "")
  pdf(outname)
  plot(results.multi)
  dev.off()
  # Apply bgmyc.spec and spec.probmat
  results.spec <- bgmyc.spec(results.multi)
  results.probmat <- spec.probmat(results.multi)
  # Create 3rd pdf
  outname = paste('trees_boscai', index, '.pdf', sep = "")
  pdf(outname)
  for (i in 1:50) plot(results.probmat, trees.sample[[i]])
  dev.off()
}
for (i in seq_along(files)) {
  genPDF(files[i])
}
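The question also asked for the PDFs to land in a separate folder per species, which genPDF() above does not handle. One way, assuming the species name can be derived from the input file name (here, the first two underscore-separated tokens, which is an assumption about your naming scheme): build the output path with file.path() and pass it to pdf() instead of a bare file name.

```r
# Assumption: species name = first two underscore-separated tokens of the file name,
# e.g. "L_boscai" from "L_boscai_1411_test2.trees.txt". Adjust the regex as needed.
output_path <- function(input, pdfname, out_root = "output") {
  species <- sub("^([^_]+_[^_]+).*$", "\\1", basename(input))
  dir.create(file.path(out_root, species), recursive = TRUE, showWarnings = FALSE)
  file.path(out_root, species, pdfname)  # pass this to pdf()
}
output_path("L_boscai_1411_test2.trees.txt", "results_single.pdf")
# "output/L_boscai/results_single.pdf"
```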
