I have many files (around 600) with names like these:
x2008_1_3.txt
x2008_1_4.txt
x2008_1_5.txt
x2008_1_6.txt
x2008_1_7.txt
x2008_1_8.txt
.
.
.
.
x2009_1_3.txt
x2009_1_4.txt
x2009_1_5.txt
x2009_1_6.txt
x2009_1_7.txt
x2009_1_8.txt
.
.
.
.
I have tried many ways to read them all together into R as my input, but I still cannot get all of them. I also want the output names to match the input names. Any suggestions?
You can pass a pattern to list.files to get a list of the files:
list.files(path, pattern = "^x[0-9]{4}_1_[0-9]+[.]txt$", full.names = TRUE)
Set recursive = TRUE if your files are in different directories.
I am not the best at R, but this may help. It is a version of a script I use for something similar with CSVs.
Set the directory, and remember to use double backslashes:
directory = "Location of files you want imported" # e.g. c:\\Folder1\\Folder2
files = list.files(path = directory, pattern = "[.]txt$") # list the files, assuming you want all .txt files in that folder
for (i in seq_along(files)) { # loop through all files; assign creates one data frame per file (swap in read.csv, append, etc. if needed)
  file = files[i]
  assign(file, read.table(file.path(directory, file), sep = "\t"))
}
I hope this helps a little!
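If R is not a hard requirement, the same read-everything-and-write-outputs-with-matching-names idea can be sketched in Python with only the standard library. The directory names and the pass-through processing step are placeholders; adjust the pattern to your actual file names:

```python
import os
import re

# files named like x2008_1_3.txt: "x", a 4-digit year, "_1_", a number, ".txt"
PATTERN = re.compile(r'^x\d{4}_1_\d+\.txt$')

def matching_files(directory):
    """Return the names in `directory` that match the x<year>_1_<n>.txt pattern."""
    return sorted(f for f in os.listdir(directory) if PATTERN.match(f))

def process_all(in_dir, out_dir):
    """Read each matching file and write an output with the same name."""
    os.makedirs(out_dir, exist_ok=True)
    for name in matching_files(in_dir):
        with open(os.path.join(in_dir, name)) as src:
            data = src.read()
        # ... do the real work on `data` here ...
        with open(os.path.join(out_dir, name), 'w') as dst:
            dst.write(data)
```

Because the output name is simply the input name reused, the "same name as input" requirement falls out for free.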
I have over 7,000 .wav files in one folder which need to be split up into groups of 12 and placed into separate smaller folders.
The files correspond to 1-minute recordings taken every 5 minutes, so every 12 files corresponds to 1 hour.
The files are stored on my PC in the working directory: "E:/Audiomoth Files/Winter/Rural/Emma/"
Examples of the file names are as follows:
20210111_000000.wav
20210111_000500.wav
20210111_001000.wav
20210111_001500.wav
20210111_002000.wav
20210111_002500.wav
20210111_003000.wav
20210111_003500.wav
20210111_004000.wav
20210111_004500.wav
20210111_005000.wav
20210111_005500.wav
which would be one hour, then
20210111_010000.wav
20210111_010500.wav
20210111_011000.wav
and so on.
I need the files split into groups of 12 and then I need a new folder to be created in: "E:/Audiomoth Files/Winter/Rural/Emma/Organised Files"
With the new folders named 'Hour 1', 'Hour 2' and so on.
What is the exact code I need to do this?
As is probably very obvious I'm a complete beginner with R so if the answer could be spelt out in layman's terms that would be brilliant.
Thank you in advance
Something like this?
I intentionally used copy instead of cut in order to prevent data from being lost. I edited the answer so the files keep their old names. In order to give them new names, replace name in the last line with "Part_", i, ".wav", for example.
# get a list of the paths to all the files
old_files <- list.files("E:/Audiomoth Files/Winter/Rural/Emma/", pattern = "\\.wav$", full.names = TRUE)
# create the new directory
dir.create("E:/Audiomoth Files/Winter/Rural/Emma/Organised Files")
# start a loop, one pass per group of 12 files (ceiling covers a final, partial hour)
for (i in 1:ceiling(length(old_files) / 12)) {
  # create a directory for the hour
  directory <- paste0("E:/Audiomoth Files/Winter/Rural/Emma/Organised Files/Hour_", i)
  dir.create(directory)
  # select the files to copy: group 1 is files 1 to 12, group 2 is 13 to 24, and so on;
  # min() keeps the last group from running past the end of the list
  filesToCopy <- old_files[(i * 12 - 11):min(i * 12, length(old_files))]
  # for those files run another loop:
  for (file in seq_along(filesToCopy)) {
    # get the name of the file
    name <- basename(filesToCopy[file])
    # copy the file into the hour's directory
    file.copy(filesToCopy[file], paste0(directory, "/", name))
  }
}
When you're not entirely sure, I'd recommend copying the files instead of moving them directly (which is what this script does). You can delete the originals manually later, after you have checked that everything worked and all the data is where it should be. Otherwise data can be lost through even small errors, which we do not want to happen.
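The grouping arithmetic above (files 1-12, 13-24, and so on) is the part that is easiest to get wrong, so here is the same chunking logic as a small Python sketch. The Hour_<i> folder-naming scheme is taken from the question; the actual copying is left out:

```python
import math

def hour_groups(files, size=12):
    """Split a sorted list of file names into consecutive groups of `size`.

    The last group may be shorter if the total is not a multiple of `size`.
    Returns a list of (folder_name, files_in_group) pairs.
    """
    n_groups = math.ceil(len(files) / size)
    return [('Hour_%d' % (i + 1), files[i * size:(i + 1) * size])
            for i in range(n_groups)]
```

Using ceil of the count divided by the group size (rather than round plus one) guarantees exactly one group per started hour, with no empty trailing group.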
So I have a large number of databases (82) in Stata, that each contain around 1300 variables and several thousand observations. Some of these databases contain variables that give the mean or standard deviation of certain concepts. For example, a variable in such a dataset could be called "leverage_mean". Now, I want to know which datasets contain variables called concept_mean or concept_sd, without having to go through every dataset by hand.
I was thinking that maybe there is a way to loop through the databases looking for variables containing "mean" or "sd", but unfortunately I have no idea how to do this. I'm using R and Stata data files.
Yes, you can do this with a loop in Stata as well as in R. First, you should check out the Stata command ds and the package findname, which will do many of the things described here and much more. But to show you what is happening "under the hood", here is Stata code that can achieve this:
/*Set your current directory to the location of your databases*/
cd "[your cd here]"
Save the names of the 82 databases to a local macro called "filelist" using Stata's dir extended macro function. NOTE: you don't specify what kind of files your databases are, so I'm assuming .xls. This command saves all files with the extension ".xls" into the list; what type of file you save into the list and how you import your databases will depend on what type of files you are reading in.
local filelist : dir . files "*.xls"
Then loop over all files to show which ones contain variables that end with "_sd" or "_mean".
foreach file of local filelist {
/*import the data*/
import excel "`file'", firstrow clear case(lower)
/*produce a list of the variables that end with "_sd" and "_mean"*/
cap quietly describe *_sd *_mean, varlist
if length("`r(varlist)'") > 0 {
/*If the database contains variables of interest, display the database file name and variables on screen*/
display "Database `file' contains variables: `r(varlist)'"
}
}
Final note, this loop will only display the database name and variables of interest contained within it. If you want to perform actions on the data, or do anything else, those actions need to be included in the position of the final "display" command (which you may or may not ultimately actually need).
You can use filelist (from SSC) to create a dataset of files. To install filelist, type in Stata's Command window:
ssc install filelist
With a list of datasets in memory, you can then loop over each file and use describe to get a list of variables for each file. You can store this list of variables in a single string variable. For example, the following will collect the names of all Stata datasets shipped with Stata and then store for each the variables they contain:
findfile "auto.dta"
local base_dir = subinstr("`r(fn)'", "/a/auto.dta", "", 1)
dis "`base_dir'"
filelist, dir("`base_dir'") pattern("*.dta")
gen variables = ""
local nmatch = _N
qui forvalues i = 1/`nmatch' {
local f = dirname[`i'] + "/" + filename[`i']
describe using "`f'", varlist
replace variables = " `r(varlist)' " in `i'
}
leftalign // also from SSC, to install: ssc install leftalign
Once you have all this information in the data in memory, you can easily search for specific variables. For example:
. list filename if strpos(variables, " rep78 ")
+-----------+
| filename |
|-----------|
13. | auto.dta |
14. | auto2.dta |
+-----------+
The lookfor_all package (SSC) is there for that purpose:
cd "pathtodirectory"
lookfor_all leverage_mean
Just make sure the file extensions are in lowercase (.dta) and not uppercase.
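If the datasets were exported to a text format such as CSV, the same "which files contain *_mean or *_sd variables" scan can be sketched in Python using only the standard library. The file layout is hypothetical; for native .dta files you would need a reader such as pandas.read_stata instead:

```python
import csv
import glob
import os

def interesting_columns(header):
    """Return the column names that end in _mean or _sd."""
    return [c for c in header if c.endswith('_mean') or c.endswith('_sd')]

def scan_directory(directory):
    """Map each CSV file name to its _mean/_sd columns, skipping files with none."""
    result = {}
    for path in glob.glob(os.path.join(directory, '*.csv')):
        with open(path, newline='') as f:
            header = next(csv.reader(f), [])  # first row = variable names
        hits = interesting_columns(header)
        if hits:
            result[os.path.basename(path)] = hits
    return result
```

Only the header row of each file is read, so this stays fast even with several thousand observations per file.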
Based on the count from some other file, I need to rename all the files extensions.
Ex: If the count is 10 and 5 files exist, I need to rename all the files as below.
from File_1.txt to File_11.txt,
from File_2.txt to File_12.txt,
from File_3.txt to File_13.txt,
from File_4.txt to File_14.txt,
from File_5.txt to File_15.txt
Can I do this with one Unix command? I appreciate your help with this.
Regards,
NPK
With standard UNIX tools you'd need more than one command, e.g.
count=10
for file in File_*.txt
do augend=`echo "$file" | sed 's/File_\(.*\)\.txt/\1/'`
   mv "$file" File_`expr $augend + $count`.txt
done
But if you have a system with this rename available, you can
rename 's/File_(.*)\.txt/"File_".($1+$ENV{count}).".txt"/e' File_*.txt
(assuming count has been exported to the environment) or
rename 's/File_(.*)\.txt/"File_".($1+'$count').".txt"/e' File_*.txt
as well.
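The same offset-rename can also be sketched in Python, which sidesteps shell quoting and lets you inspect the name mapping before touching anything. The File_<n>.txt naming is taken from the question; the helper names are made up:

```python
import os
import re

def shifted_name(name, count):
    """File_3.txt -> File_13.txt when count is 10; None if the name doesn't match."""
    m = re.match(r'^File_(\d+)\.txt$', name)
    if not m:
        return None
    return 'File_%d.txt' % (int(m.group(1)) + count)

def rename_all(directory, count):
    """Rename every File_<n>.txt in `directory` to File_<n+count>.txt."""
    # rename highest names first to reduce the chance of a shifted name
    # colliding with a not-yet-renamed file
    for name in sorted(os.listdir(directory), reverse=True):
        new = shifted_name(name, count)
        if new:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new))
```

With 5 files and a count of 10 the old and new name ranges do not overlap, so there is no collision risk; for other counts, check the mapping first.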
I am writing a CGI script in Perl with a section of embedded R script which produces a graph. The original data filename is unknown as it has been uploaded by the CGI script and is stored in a Perl variable called $filename.
My question is that I now would like to open that file in R using read.table(). I am using Statistics::R and so I have tried:
my $R = Statistics::R->new();
$R->set('filename',$filename);
my $out1 = $R->run(
q`rm(list=ls())`,
# Fetch data
q`setwd("/var/www/uploads")`,
q`peakdata<-read.table(filename, sep="",col.names=c("mz","intensity","ionsscore","matched","query","index","hit"))`,
q`attach(peakdata)` ...etc
I can get this to work ONLY if I change $filename into something static and known like 'data.txt' before trying to open the file in read.table - is there a way for me to open a file with a variable for a name?
Thank you in advance.
One possible way to do this is by doing a little more work in Perl.
This is untested code, to give you some ideas:
my $filename = 'fileNameIGotFromSomewhere.txt';
my $source_dir = '/var/www/uploads';
my $file = "$source_dir/$filename";
# make sure we can read it
unless ( -r $file ) {
    die "cannot read that data file: $!";
}
Then instead of $R->set, you could interpolate the file name into the R program. Where you've used the single-quote operator, use the double-quote operator instead:
So instead of:
q`peakdata<-read.table(filename, sep="",col.names= .... )`
Use:
qq`peakdata<-read.table("$filename", sep="",col.names= .... )`
Now, this looks like it would be inviting problems similar to SQL/code injection, which is why I put in the logic to ensure that the file exists and is readable. You might be able to think of other checks to add to safeguard your use of user-supplied input.
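For comparison, the existence-and-readability guard the answer writes in Perl looks like this in Python. The upload directory is the one from the question; the helper name is made up, and the path-escape check is an extra safeguard in the same injection-wary spirit:

```python
import os

UPLOAD_DIR = '/var/www/uploads'

def checked_path(filename, base_dir=UPLOAD_DIR):
    """Join a user-supplied file name onto base_dir and verify it is safe to read.

    Rejects names that escape the directory (e.g. '../etc/passwd') with
    ValueError, and missing or unreadable files with IOError.
    """
    path = os.path.realpath(os.path.join(base_dir, filename))
    if not path.startswith(os.path.realpath(base_dir) + os.sep):
        raise ValueError('file name escapes the upload directory: %r' % filename)
    if not os.access(path, os.R_OK):
        raise IOError('cannot read that data file: %s' % path)
    return path
```

Only the vetted absolute path would then be handed on to read.table, never the raw user input.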
I cd into a folder and start python. I want to apply a script to fix filenames in a directory and in sub folders.
import os
for dirname, subdirs, files in os.walk('.'):
os.rename(file, file.replace('\r', '').replace('\n', '').replace(' ', '_'))
print 'Processed ' + file.replace('\r', '').replace('\n', '')
I get the error "AttributeError: 'list' object has no attribute 'replace'". Help, please?
os.walk yields a 3-tuple for each directory it visits: the directory's path, a list of its subdirectories, and a list of its files. You unpacked the 3-tuple in the for loop, and you're calling replace on the list of files.
You may want something like this:
for dirname, subdirs, files in os.walk('.'):
    for file in files:
        new_name = file.replace('\r', '').replace('\n', '').replace(' ', '_')
        os.rename(os.path.join(dirname, file), os.path.join(dirname, new_name))
        print 'Processed ' + new_name
You want to iterate through the list of files and do your replacing on those individual files.