I have multiple global climate model (GCM) data files. I successfully cropped one file, but cropping over a thousand files one by one takes far too long. How can I do multiple crops? Thanks
What you need is a loop combined with some patience... ;-)
for file in *.nc ; do
cdo sellonlatbox,lon1,lon2,lat1,lat2 "$file" "${file%???}_crop.nc"
done
the ${file%???} strips the last three characters (the ".nc") from the file name, and then I add "_crop.nc" to form the output file name...
I know I am a little late to add to the answer, but I still wanted to share my findings for those who come across this question.
I followed the code given by Adrian Tompkins and it works exceptionally well, but there are some things worth highlighting. Because I am still a novice at programming, please forgive my limited answer. Here is what I found necessary for the code above to work:
1. The code calls CDO (Climate Data Operators), a non-GUI standalone program that can be used in a Linux terminal. In my case, I used it on Windows 10 through WSL (Ubuntu 20.04 LTS). There are good videos on YouTube about setting up WSL.
2. The code initially did not work for me until I made a slight change. The code
for file in *.nc ; do
cdo sellonlatbox,lon1,lon2,lat1,lat2 "$file" "${file%???}_crop.nc"
done
worked for me when I wrote it as
for file in *.nc ; do
cdo sellonlatbox,lon1,lon2,lat1,lat2 "$file" "${file%???}_crop.nc";
done
note the ; added at the end of the second line of the loop.
3. The entire code (in my case) was put in a text file (it can also be saved as a shell script) and executed as a script in the Linux terminal. The procedure to execute the script file is as follows:
3.a) create the '.txt' file containing the script above. Do note that the working directory should be checked at every step.
3.b) make the file executable by running the command chmod +x <name_of_the_textfile_with_extension> in the terminal.
3.c) run the script (in my case the text file) by running the command ./<name_of_the_textfile_with_extension>
The above procedure will give you a cropped netCDF file for each corresponding netCDF file in the same folder.
cheers!
How can others who run my R program read a file (e.g. a csv) used in my R code without having to change the working directory in setwd()?
I suggest you use the here() function from the here package in your code like this (it builds paths relative to the project root, so the working directory no longer matters):
library(here)
library(readr)  # read_csv() comes from readr
Data1 <- read_csv(here("test_data.csv"))
read.csv has a file argument and, to quote from the built-in R help about file:
If it does not contain an absolute path, the file name is relative to
the current working directory, getwd().
So, providing the absolute path of the file inside the file argument solves this problem.
In Windows
Suppose your file is named test.csv and it's located in D:\files\test_folder (you can get the location of any file from its properties in Windows).
To read this file, run:
df<-read.csv('D:\\files\\test_folder\\test.csv')
or
df<-read.csv('D:/files/test_folder/test.csv')
Suggested reading: Why \\ instead of \ and Paths in programming languages
Haven't used R in Linux but maybe Getting a file path in Linux might help
Read from web
Just pass the web address of the dataset to the file argument. Try:
df<-read.csv('https://raw.githubusercontent.com/AdiPersonalWorks/Random/master/student_scores%20-%20student_scores.csv')
Note: This link contains a list of 25 students with their study hours and marks. I used this dataset myself for one of my earlier tasks and it's perfectly safe.
My question is exactly like this question. I tried all the solutions listed there, but they didn't work :(
The only difference is that I am not sourcing other R files; I want to read csv files that are in the same location as the current R script.
I need this feature so that I can transfer the R file easily to other PCs/systems.
I want the solution to work in RStudio and on the command line, on both Windows and Linux.
I would like to offer a bounty of 50 credits
How about adding this to your script?
currentpath <- getwd()
Then you can read the csv file foo.csv with
read.csv(paste0(currentpath,'/','foo.csv'))
To make the code more platform independent you can explore the normalizePath function.
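For example, a minimal sketch along those lines (foo.csv is just an illustrative file name):
# Build the path relative to the working directory, then normalize it
# so the separators are correct for the current platform
currentpath <- getwd()
csv_path <- normalizePath(file.path(currentpath, 'foo.csv'), mustWork = FALSE)
df <- read.csv(csv_path)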
I'm writing an R script whose contents can change from time to time. It would be really helpful if I could insert a command that would copy the current contents of the script to a file, so I can go back later and see exactly what commands I executed during that run of the code.
How can I do this?
You can do this with the TeachingDemos package:
install.packages("TeachingDemos")
library(TeachingDemos)
#Will write to a file in the working directory
txtStart("captureCode.txt")
#This comment will not appear in the file, but all commands and output will
Sys.Date()
#This command ends writing to the file
txtStop()
I asked this question last week, looking for how to do it with a batch script. I think it might be possible to do with R, but I'm not very experienced using it. However, after doing some research, I'm pretty sure it's impossible to do with a batch script alone, so I think I'll need to use R or a VBA script. I don't want someone to just throw the solution at me; if it requires using R or VBA, then I'm only interested in links to some good resources. I've been scouring the interwebs but have found nothing so far.
I need some sort of script that will:
Take the two folders as inputs
Generate a list of all the files in one of them.
For each file:
Read in the data from columns D-G
Find the matching file in the other folder and read in the same data
Compare each cell and verify that the two files match exactly
If they don’t match, report what data doesn’t match
This is what I was asked to do verbatim.
This is what I've done so far.
#echo off
setlocal disableDelayedExpansion
cls
rmdir /s /q c:\LocalDirectory
mkdir c:\LocalDirectory
xcopy "\\SERVER\Path\to\the\files" c:\LocalDirectory
cd c:\LocalDirectory
dir /b /a-d
as you can see, it's not much. I can make a list of the files, but I need something that can compare them. I know I could do the comparison manually in Excel, but I need to do it for several files. So I'm trying to write some kind of script that takes a specific column of data in a specific Excel file in a directory, compares all the rows of that column to a specific column in another xlsx file, and then reports either pass or fail. Once that's done, it should move on to the next Excel file. The files containing the data being compared have exactly the same names.
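In case it helps to see what I mean, here is a rough sketch of the comparison loop in R (the folder paths are made up, and reading columns D-G via the readxl package is just my assumption of how it could be done):
library(readxl)
# Hypothetical folder paths -- replace with the real ones
folder_a <- 'C:/LocalDirectory/setA'
folder_b <- 'C:/LocalDirectory/setB'
for (name in list.files(folder_a, pattern = '\\.xlsx$')) {
  # Read only columns D-G from each workbook
  a <- read_excel(file.path(folder_a, name), range = cell_cols('D:G'))
  b <- read_excel(file.path(folder_b, name), range = cell_cols('D:G'))
  if (!identical(dim(a), dim(b))) {
    message(name, ': FAIL (different dimensions)')
    next
  }
  # Compare cell by cell; note that NA cells are skipped by this test
  mismatch <- which(as.matrix(a) != as.matrix(b), arr.ind = TRUE)
  if (nrow(mismatch) == 0) {
    message(name, ': PASS')
  } else {
    message(name, ': FAIL at row(s) ', paste(unique(mismatch[, 'row']), collapse = ', '))
  }
}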
I'd like to be able to write the contents of a help file in R to a file from within R.
The following works from the command line:
R --slave -e 'library(MASS); help(survey)' > survey.txt
This command writes the help file for the survey dataset, where:
--slave hides both the initial prompt and commands entered from the
resulting output
-e '...' sends the command to R
> survey.txt writes the output of R to the file survey.txt
However, this does not seem to work:
library(MASS)
sink("survey.txt")
help(survey)
sink()
How can I save the contents of a help file to a file from within R?
Looks like the two functions you would need are tools:::Rd2txt and utils:::.getHelpFile. This prints the help file to the console, but you may need to fiddle with the arguments to get it to write to a file in the way you want.
For example:
hs <- help(survey)
tools:::Rd2txt(utils:::.getHelpFile(as.character(hs)))
Since these functions aren't currently exported, I would not recommend you rely on them for any production code. It would be better to use them as a guide to create your own stable implementation.
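For example, Rd2txt accepts an out argument naming a destination file, so a minimal sketch for writing the page to survey.txt could be:
hs <- help(survey)
rd <- utils:::.getHelpFile(as.character(hs))
# 'out' sends the rendered text to a file instead of the console
tools:::Rd2txt(rd, out = 'survey.txt')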
While Joshua's instructions work perfectly, I stumbled upon another strategy for saving an R help file, so I thought I'd share it. It works on my computer (Ubuntu), where less is the R pager. It essentially just involves saving the file from within less.
help(survey)
Then follow these instructions to save the less buffer to a file,
i.e., type g|$tee survey.txt
g goes to the top of the less buffer if you aren't already there
| pipes the text in the range starting at the current mark
and ending at $ (which indicates the end of the buffer)
to the shell command tee, which writes its input to the given file as well as to standard output