How to merge a TPS file with a CSV file - r

I'm doing my thesis using R and geomorph, and I need to produce some PCAs for an analysis. I've got a TPS file whose specimen IDs correspond to their photo numbers, and a CSV file in which the information is keyed by the same photo numbers. How can I produce a TPS dataset with the CSV information attached to each specimen?
So far I have only managed to select, in the CSV, the specimens present in the TPS file, but I don't know how to merge them.
tps <- readland.tps("data.TPS", specID = "imageID")  # landmark array, specimens in 3rd dimension
dat <- read.csv2("CSV.csv")                          # specimen metadata
e <- match(dimnames(tps)[[3]], dat$foto)             # match each specimen to a CSV row
table(is.na(e))                                      # how many specimens failed to match?
dimnames(tps)[[3]][is.na(e)]                         # which specimen IDs are missing from the CSV?
View(dat)
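A minimal sketch of the actual merge step, building on the match() call above. It assumes the CSV's photo-number column is called foto (as in the code) and that some classifier column, here hypothetically named group, should travel with each specimen; substitute your own column names:

```r
# Align the CSV rows to the specimen order of the TPS array, then
# bundle landmarks and metadata into one geomorph.data.frame.
library(geomorph)

tps <- readland.tps("data.TPS", specID = "imageID")
dat <- read.csv2("CSV.csv")

idx <- match(dimnames(tps)[[3]], dat$foto)
dat_aligned <- dat[idx, ]            # one CSV row per specimen, in TPS order

gpa <- gpagen(tps)                   # Procrustes superimposition of raw landmarks
gdf <- geomorph.data.frame(coords = gpa$coords, group = dat_aligned$group)

pca <- gm.prcomp(gpa$coords)         # PCA on the aligned shape data
plot(pca, col = as.factor(dat_aligned$group))
```

Because a TPS file only stores landmarks, the usual approach is not to write the metadata back into a new TPS file but to keep the two objects aligned in R, as above.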

Related

How to read macro-enabled Excel files in R?

I have two Excel files that contain macros; their extensions are .xlsb and .xlsm. I want to read these files into R and have R do exactly what Excel does with them in terms of data inputs. What is the way to go about it?
For example: if the excel file calculates house prices in sheet 2 based on data input in sheet 1, how can the same results for house price calculation be obtained in R?
You might take a look at the R package RDCOMClient:
https://github.com/omegahat/RDCOMClient
A nice example is shown here:
https://www.r-bloggers.com/2021/07/rdcomclient-read-and-write-excel-and-call-vba-macro-in-r/
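A rough sketch of that approach, for the house-price scenario in the question. RDCOMClient is Windows-only and drives Excel itself, so the workbook's own formulas and macros do the calculating; the file path, cell addresses, and macro name below are placeholders:

```r
# Open the workbook via COM, set an input on sheet 1, run the macro,
# and read the computed result from sheet 2. All names are hypothetical.
library(RDCOMClient)

xl <- COMCreate("Excel.Application")
wb <- xl$Workbooks()$Open("C:/models/house_prices.xlsm")

wb$Worksheets(1)$Range("A1")[["Value"]] <- 250      # data input on sheet 1
xl$Run("RecalcHousePrices")                         # placeholder macro name
price <- wb$Worksheets(2)$Range("B2")[["Value"]]    # result on sheet 2

wb$Close(FALSE)
xl$Quit()
```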

Is there a way to compare the structure/architecture of .nc files in R?

I have a sample .nc file that contains a number of variables (5 to be precise) and is being read into a program. I want to create a new .nc file containing different data (and different dimensions) that will also be read into that program.
I have created a .nc file that looks the same as my sample file (I have included all of the necessary attributes for each of the variables that were included in the original file).
However, my file is still not being ingested.
My question is: is there a way to test for differences in the layout/structure of .nc files?
I have examined each of the variables/attributes within RStudio, and I have also opened the files in Panoply, and they look the same. There are obviously differences (besides the actual data they contain), since the file is not being read.
I see that there are options to compare the actual data within .nc files online (Comparison of two netCDF files), but that is not what I want. I want to compare the variable/attributes names/states/descriptions/dimensions to see where my file differs. Is that possible?
The ideal situation here would be to create a .nc template from the variables that exist within the original file and then fill in my data. I could do this by defining the dimensions (ncdim_def), creating the file (nc_create), getting my data (ncvar_get), and putting it in the file (ncvar_put), but that is what I have done so far, and it is too reliant on me not making an error (which I obviously have, as the files are not the same).
If you are on Unix, this is more easily achieved using CDO. See the Information section of the reference card: https://code.mpimet.mpg.de/projects/cdo/embedded/cdo_refcard.pdf.
For example, to check whether the grid descriptions of two files match, run:
cdo griddes example1.nc
cdo griddes example2.nc
You can easily wrap these calls with system() in R.
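A minimal sketch of that wrapping, assuming cdo is on the PATH and the two files sit in the working directory:

```r
# Capture cdo's structural description of each file as character
# vectors, then diff the lines to find where the layouts disagree.
grid1 <- system("cdo griddes example1.nc", intern = TRUE)
grid2 <- system("cdo griddes example2.nc", intern = TRUE)

setdiff(grid1, grid2)  # description lines present only in file 1
setdiff(grid2, grid1)  # description lines present only in file 2
```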

Writing a CSV file that is too large in R

I saved some data as a CSV file on my computer. It has 581 rows, but when I open the saved file on my Mac, the data frame has been altered, and the Numbers app I am viewing the CSV in says some data was deleted. Is there a way to fix this? Or is there a different type of file I could save my data as that would handle this number of rows?
This is how I am writing the CSV. I'm trying to manually add the file to a GitHub repo after it has been saved to my computer.
write.csv(coords, 'Top_50_Distances.csv', row.names = FALSE)

Is there a way to read multiple Excel files into R, but only up to a certain creation date? (Note: the date does not exist within the actual Excel files.)

I have multiple Excel files in multiple directories that I am reading into R. However, I don't want to read in EVERY Excel file; I only want the most recent ones (for example, only those created in the last month). Is there a way to do this?
Currently I am using this to read in all of the Excel files, which is working just fine:
filenames <- Sys.glob(file.path('(name of dir)', "19*", "Electrode*02.xlsx"))
elecsheet <- do.call("cbind", lapply(filenames, read_excel))
Somewhere in this second line of code (I think), I need to tell R to look at the file metadata and only read in the Excel files that have been created since a certain date.
Thank you!
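A sketch of that filter using file.info(), which reads file-system metadata rather than anything inside the workbooks. Note that R only portably exposes modification time (mtime); on Windows, ctime is the creation time, so pick whichever field fits your setup:

```r
# Keep only files whose modification time falls in the last 30 days,
# then read those as before.
library(readxl)

filenames <- Sys.glob(file.path("(name of dir)", "19*", "Electrode*02.xlsx"))

info <- file.info(filenames)                                     # file-system metadata
recent <- filenames[info$mtime >= Sys.time() - 30 * 24 * 3600]   # last 30 days

elecsheet <- do.call("cbind", lapply(recent, read_excel))
```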

Read a selected column from multiple CSV files and combine into one large file in R

Hi,
My task is to read selected columns from over 100 identically formatted .csv files in a folder and cbind them into one large file using R. I have attached a screenshot of a sample data file in this question.
This is the code I'm using:
filenames <- list.files(path = "G:\\2014-02-04", full.names = TRUE)
mydata <- do.call("cbind", lapply(filenames, read.csv, skip = 12))
My problem is that for each .csv file, the first column is the same, so this code creates a big file with duplicate first columns. How can I create a big file with just a single copy of column A (no duplicates)? I would also like to name the column read from each .csv file using the value of cell B7, which is the specific timestamp of that file.
Can someone help me on this?
Thanks.
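A sketch of one way to do this, assuming (as the skip = 12 in the question implies) that the data start at row 13, that the wanted column is the second one, and that cell B7 holds the per-file timestamp:

```r
# Read column 2 of each CSV, label it with the value of cell B7, and
# keep a single copy of the shared first column.
filenames <- list.files("G:\\2014-02-04", pattern = "\\.csv$", full.names = TRUE)

read_one <- function(f) {
  # row 7, column 2 = cell B7 (the per-file timestamp)
  label <- as.character(read.csv(f, header = FALSE, skip = 6, nrows = 1)[1, 2])
  d <- read.csv(f, skip = 12)
  setNames(d[2], label)                 # keep only column 2, renamed
}

first_col <- read.csv(filenames[1], skip = 12)[1]   # shared column A, taken once
mydata <- cbind(first_col, do.call(cbind, lapply(filenames, read_one)))
```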
