I have a folder containing subfolders named *_1, *_2, *_3, *_4 ... *_1000.
Then there is another set of folders named Destination_folder1, Destination_folder2, Destination_folder3, ... Destination_folder10.
I would like to move (or copy) the subfolders in groups of 100 into the Destination_folder* directories, so that Destination_folder1 contains subfolders *_1 to *_100, Destination_folder2 contains subfolders *_101 to *_200, and so on and so forth. I tried to use:
for i in {1..100}
do
cp -r *_$((i)) Destination_folder$i/
done
but unfortunately the folders are not copied in groups of 100; each one is copied to its own destination folder. Can anyone help me, please?
Best regards
Use a second loop (remove the word echo when you are happy):
for i in {1..10}; do
    for j in {1..100}; do
        (( dir = 100 * (i - 1) + j ))
        echo cp -r *_$((dir)) Destination_folder${i}/
    done
done
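Equivalently, a single loop can compute each subfolder's destination with integer ceiling division; a minimal sketch assuming the same *_N naming scheme (again, remove the echo when you are happy with the printed commands):

```shell
for i in {1..1000}; do
    (( dest = (i + 99) / 100 ))   # ceiling division: 1..100 -> 1, 101..200 -> 2, ..., 901..1000 -> 10
    echo cp -r *_"$i" "Destination_folder$dest/"
done
```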
Related
I have 300 directories/folders; each directory holds a single two-column file (xxx.gz). I want to merge all the files from all the folders into a single file. In all files the first column is an identifier (ID), which is the same across files.
How do I merge all the files into a single file?
And I want the header for each column to be the name of the file in the respective directory.
Directory names are like 68a7eb0a-123, b5694957-764, etc., and file names are like a5c403c2, 292c4a2f, etc.;
the directory name and the respective file name are not the same, and I want the file name as the header.
All the directories:
ls
6809b1c3-75a5
68e9b641-0cc9
71ae07b8-8bde
b7815cd2-1e69
..
..
Each directory contains a single file:
cd 6809b1c3-75a5
ls bd21dc2e.txt.gz
Try this:
for i in * ; do for j in "$i"/*.gz ; do echo "$j" >> ../final.txt ; gunzip -c "$j" >> ../final.txt ; done ; done
Annotated version:
for i in *                        # for each directory under the current working directory
do                                # (assumes nothing else is in there)
    for j in "$i"/*.gz            # for each gzipped file under those directories
    do
        echo "$j" >> ../final.txt       # write path/file as a header line into the final file
        gunzip -c "$j" >> ../final.txt  # append the gunzipped contents to the final file
    done
done
Result:
$ head -8 ../final.txt
6809b1c3-75a5/bd21dc2e.txt.gz
blabla
whatever
you
have
in
those
files
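If you want just the bare file name as each header line (not the dir/file path, and without the suffix), basename can strip both; a variant sketch of the same loop, assuming every file ends in .txt.gz:

```shell
for i in */ ; do                                # each subdirectory of the current directory
    for j in "$i"*.gz ; do                      # each gzipped file inside it
        [ -e "$j" ] || continue                 # skip if the glob matched nothing
        basename "$j" .txt.gz >> ../final.txt   # header: bare file name, no path or suffix
        gunzip -c "$j" >> ../final.txt          # then the uncompressed contents
    done
done
```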
I remade a previous question so that it is clearer. I'm trying to search files in two directories and print matching character strings (plus the line immediately following) into a new file, taking them from the second directory only if they match a record in the first directory. I have found similar examples, but nothing quite the same. I don't know how to use awk on multiple files from different directories, and I've tortured myself trying to figure it out.
Directory 1, 28,000 files, formatted viz.:
>ABC
KLSDFIOUWERMSDFLKSJDFKLSJDSFKGHGJSNDKMVMFHKSDJFS
>GHI
OOILKJSDFKJSDFLMOPIWERIOUEWIRWIOEHKJTSDGHLKSJDHGUIYIUSDVNSDG
Directory 2, 15 files, formatted viz.:
>ABC
12341234123412341234123412341234123412341234123412341234123412341234
>DEF
12341234123412341234123412341234
>GHI
12341234123412341234123412341234123412341234123412341234123412341234123412341234
Desired output:
>ABC
12341234123412341234123412341234123412341234123412341234123412341234
>GHI
12341234123412341234123412341234123412341234123412341234123412341234123412341234
Directories 1 and 2 are located in my home directory: (./Test1 & ./Test2)
If anyone could advise a command to specify the different directories, I'd be immensely grateful! Currently, when I include a file path (e.g., /Test1/*.fa) I get the following error:
awk: can't open file /Test1/*.fa
You'll want something like this (untested):
awk '
FNR==1 {
    dirname = FILENAME
    sub("/.*","",dirname)
    if (NR==1) {
        dirname1 = dirname
    }
}
dirname == dirname1 {
    if (FNR % 2) {
        key = $0
    }
    else {
        map[key] = $0
    }
    next
}
FNR % 2 {
    key = $0
}
!(FNR % 2) && (key in map) && !seen[key,$0]++ {
    print key ORS $0
}
' Test1/* Test2/*
Given that you're getting the error message /usr/bin/awk: Argument list too long, which means you're exceeding your shell's maximum argument length for a command, and given that 28,000 of your files are in the Test1 directory, try this (the - in the argument list tells awk to read the concatenated Test1 data from standard input first):
find Test1 -type f -exec cat {} \; |
awk '
NR == FNR {
    if (FNR % 2) {
        key = $0
    }
    else {
        map[key] = $0
    }
    next
}
FNR % 2 {
    key = $0
}
!(FNR % 2) && (key in map) && !seen[key,$0]++ {
    print key ORS $0
}
' - Test2/*
Solution in TXR:
Data:
$ ls dir*
dir1:
file1 file2
dir2:
file1 file2
$ cat dir1/file1
>ABC
KLSDFIOUWERMSDFLKSJDFKLSJDSFKGHGJSNDKMVMFHKSDJFS
>GHI
OOILKJSDFKJSDFLMOPIWERIOUEWIRWIOEHKJTSDGHLKSJDHGUIYIUSDVNSDG
$ cat dir1/file2
>XYZ
SDOIWEUROIUOIWUEROIWUEROIWUEROIWUEROUIEIDIDIIDFIFI
>MNO
OOIWEPOIUWERHJSDHSDFJSHDF
$ cat dir2/file1
>ABC
12341234123412341234123412341234123412341234123412341234123412341234
>DEF
12341234123412341234123412341234
>GHI
12341234123412341234123412341234123412341234123412341234123412341234123412341234
$ cat dir2/file2
>STP
12341234123412341234123412341234123412341234123412341234123412341234123412341234
>MNO
123412341234123412341234123412341234123412341234123412341234123412341234
$
Run:
$ txr filter.txr dir1/* dir2/*
>ABC
12341234123412341234123412341234123412341234123412341234123412341234
>GHI
12341234123412341234123412341234123412341234123412341234123412341234123412341234
>MNO
123412341234123412341234123412341234123412341234123412341234123412341234
Code in filter.txr:
#(bind want #(hash :equal-based))
#(next :args)
#(all)
#dir/#(skip)
#(and)
# (repeat :gap 0)
#dir/#file
# (next `#dir/#file`)
# (repeat)
>#key
# (do (set [want key] t))
# (end)
# (end)
#(end)
#(repeat)
#path
# (next path)
# (repeat)
>#key
#datum
# (require [want key])
# (output)
>#key
#datum
# (end)
# (end)
#(end)
To separate the dir1 paths from the rest, we use an #(all) match (try multiple pattern branches, which must all match) with two branches. The first branch matches one #dir/#(skip) pattern, binding the variable dir to the text preceding a slash and ignoring the rest. The second branch matches a whole consecutive sequence of #dir/#file patterns via #(repeat :gap 0). Because the same dir variable appears, and it already has a binding from the first branch of the all, this constrains the matches to the same directory name.

Inside this repeat we recurse into each file via next and gather the >-delimited keys into the want hash.

After that, we process the remaining arguments as path names of files to process; they don't all have to be in the same directory. We scan through each one for the >#key pattern followed by a line of #datum. The #(require ...) directive will fail the match if key is not in the want hash; otherwise we fall through to the #(output).
I have many files like ABC_Timestamp.txt and RAM_Timestamp.txt; here the timestamp will be different every time. I want to copy these files into another directory, but while copying I want to append one string at the end, so the resulting names will be ABC_Timestamp.txt.OK and RAM_Timestamp.txt.OK. How do I append the string for a dynamically named file? Please suggest.
My 2 pence:
(cat file.txt; echo "append a line"; date +"perhaps with a timestamp: %T") > file.txt.OK
Or more complete for your filenames:
while sleep 3
do
    for a in ABC RAM
    do
        (echo "appending one string at the end of the file" | cat ${a}_Timestamp.txt -) > ${a}_Timestamp.txt.OK
    done
done
Execute this on the command line (note that the opening brace must be on the same line as the pattern, otherwise awk treats the pattern and the block as two separate rules):
ls -1 | awk '/ABC_.*\.txt/ || /RAM_.*\.txt/ {
    old = $0
    new = "/new_dir/" old ".OK"
    system("cp " old " " new)
}'
Taken from here
You can say:
for i in *.txt; do cp "${i}" targetdirectory/"${i}".OK ; done
or
for i in ABC_*.txt RAM_*.txt; do cp "${i}" targetdirectory/"${i}".OK ; done
How about first dumping the names of the files into another file and then copying the files one by one:
find . -name "*.txt" > fileNames
while read -r line
do
    newName="${line}appendText"
    echo "$newName"
    cp "$line" "$newName"
done < fileNames
I want to develop a script that copies, verifies, and then deletes files (over x days old) from one network location to another.
Here is my algorithm:
Recursively traverse a network location ($movePath)
for all files $_.LastWriteTime >= x days | forEach {
    xcopy or robocopy $FileName = $_.FullName.Replace($movePath, $newPath)
    if (the files were written correctly) {
        (delete) Remove-Item $Filename from $movePath
    }
}
Can I combine the xcopy /v (verify) with robocopy?
Do you want to maintain the subfolder structure (i.e. files from a subfolder in the source go into the same subfolder in the destination)? If so, this should suffice:
$src = 'D:\source\folder'
$dst = '\\server\share'
$age = 10 # days
robocopy $src $dst /e /move /minage:$age
robocopy can handle verification (done automatically) and deletion by itself.
Doug McCune created something that was exactly what I needed (http://dougmccune.com/blog/2007/05/10/analyze-your-actionscript-code-with-this-apollo-app/) but alas, it was for AIR beta 2. I would just like some tool that I can run that provides some decent metrics... any ideas?
There is a Code Metrics Explorer in the Enterprise Flex Plug-in below:
http://www.deitte.com/archives/2008/09/flex_builder_pl.htm
A simple tool called LocMetrics can work for .as files too...
Or
find . -name '*.as' -or -name '*.mxml' | xargs wc -l
Or if you use zsh
wc -l **/*.{as,mxml}
It won't give you what fraction of those lines are comments, or blank lines, but if you're only interested in how one project differs from another and you've written them both, it's a useful metric.
Here's a small script I wrote for finding the total numbers of occurrence for different source code elements in ActionScript 3 code (this is written in Python simply because I'm familiar with it, while Perl would probably be better suited for a regex-heavy script like this):
#!/usr/bin/python
import sys, os, re

# might want to improve on the regexes used here
codeElements = {
    'package':{
        'regex':re.compile('^\s*[(private|public|static)\s]*package\s+([A-Za-z0-9_.]+)\s*', re.MULTILINE),
        'numFound':0
    },
    'class':{
        'regex':re.compile('^\s*[(private|public|static|dynamic|final|internal|(\[Bindable\]))\s]*class\s', re.MULTILINE),
        'numFound':0
    },
    'interface':{
        'regex':re.compile('^\s*[(private|public|static|dynamic|final|internal)\s]*interface\s', re.MULTILINE),
        'numFound':0
    },
    'function':{
        'regex':re.compile('^\s*[(private|public|static|protected|internal|final|override)\s]*function\s', re.MULTILINE),
        'numFound':0
    },
    'member variable':{
        'regex':re.compile('^\s*[(private|public|static|protected|internal|(\[Bindable\]))\s]*var\s+([A-Za-z0-9_]+)(\s*\\:\s*([A-Za-z0-9_]+))*\s*', re.MULTILINE),
        'numFound':0
    },
    'todo note':{
        'regex':re.compile('[*\s/][Tt][Oo]\s?[Dd][Oo][\s\-:_/]', re.MULTILINE),
        'numFound':0
    }
}

totalLinesOfCode = 0

filePaths = []
for i in range(1, len(sys.argv)):
    if os.path.exists(sys.argv[i]):
        filePaths.append(sys.argv[i])

for filePath in filePaths:
    thisFile = open(filePath, 'r')
    thisFileContents = thisFile.read()
    thisFile.close()
    totalLinesOfCode = totalLinesOfCode + len(thisFileContents.splitlines())
    for codeElementName in codeElements:
        matchSubStrList = codeElements[codeElementName]['regex'].findall(thisFileContents)
        codeElements[codeElementName]['numFound'] = codeElements[codeElementName]['numFound'] + len(matchSubStrList)

for codeElementName in codeElements:
    print str(codeElements[codeElementName]['numFound']) + ' instances of element "' + codeElementName + '" found'

print '---'
print str(totalLinesOfCode) + ' total lines of code'
print ''
Pass paths to all of the source code files in your project as arguments for this script to get it to process all of them and report the totals.
A command like this:
find /path/to/project/root/ -name "*.as" -or -name "*.mxml" | xargs /path/to/script
Will output something like this:
1589 instances of element "function" found
147 instances of element "package" found
58 instances of element "todo note" found
13 instances of element "interface" found
2033 instances of element "member variable" found
156 instances of element "class" found
---
40822 total lines of code
CLOC - http://cloc.sourceforge.net/. Even though it is Windows command-line based, it works with AS3.0, has all the features you would want, and is well documented. Here is the BAT file setup I am using:
REM =====================
echo off
cls
REM set variables
set ASDir=C:\root\directory\of\your\AS3\code\
REM run the program
REM See docs for different output formats.
cloc-1.09.exe --by-file-by-lang --force-lang="ActionScript",as --exclude_dir=.svn --ignored=ignoredFiles.txt --report-file=totalLOC.txt %ASDir%
REM show the output
totalLOC.txt
REM end
pause
REM =====================
To get a rough estimate, you could always run find . -type f -exec cat {} \; | wc -l in the project directory if you're using Mac OS X.
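If you also want to skip blank lines in that rough count, grep can filter them out first; a sketch along the same lines, restricted to the .as and .mxml sources:

```shell
find . -type f \( -name '*.as' -o -name '*.mxml' \) -exec cat {} + |
    grep -vc '^[[:space:]]*$' || true   # -v drops blank/whitespace-only lines, -c counts the rest; grep exits 1 on a zero count
```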