How to find files in a directory using grep with wildcards? - unix

I have several hundred files with file names such as:
20110404_091415-R1-sometext
Another file name might be named:
20110404_091415-R1.2-sometext
What I would like to do is use the Unix grep tool in the terminal to find files that start with 2011 and also contain -R1 within the file name. Unfortunately, I have no idea how to find files that satisfy both criteria. I have tried to figure out a regex that would match this, but I am only a beginner programmer. Can anyone help, please? Thanks in advance for your time.

Why even use grep? I think ls 2011*R1* should suffice.

ls | grep "^2011.*-R1.*"
Should do the job

Just to find files, you can use ls 2011*R1* or echo 2011*R1*. To do something with the files, use a loop (generally):
for file in 2011*R1*
do
....
done
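For example, a minimal sketch that copies every matching file into a hypothetical matched/ directory (the directory name is only an illustration; the quoting around $file keeps names with spaces intact):
for file in 2011*R1*
do
  cp "$file" matched/   # matched/ is a placeholder destination
done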

Related

Copy folders to new folder with different ending

I have a huge number of folders all with different names but same ending.
Like this:
blabla_ending1
Now I want to copy all those folders and give them another ending (ending2). I tried this, but it did not work the way I wanted:
cp -r *_ending1 *_ending2
Somehow I need to specify that the second * depends on the first one. Maybe I am also unaware of the precise meaning of *. I know it's very basic, but I could not find any help yet.
I can't think of a simple command to achieve that. However, the following will achieve the desired result:
for path in *_ending1; do
newpath=$(echo "$path" | sed 's/_ending1$/_ending2/')
cp -r "$path" "$newpath"
done
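A variant of the same loop, assuming a POSIX-style shell, that derives the new name with parameter expansion instead of spawning echo and sed:
for path in *_ending1; do
  cp -r "$path" "${path%_ending1}_ending2"   # strip the old suffix, append the new one
done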

Split files linux and then grep

I'd like to split a file and grep each piece without writing them to individual files.
I've attempted a couple variations of split and grep and no such luck; any suggestions?
Something along the lines of:
split -b SIZE filename | grep "string"
I've attempted grep/fgrep to find the string but my shell complains that the files are too large. See: use fgrep instead
There is no point in splitting the file if you plan to [linearly] search each of the pieces anyway (assuming that's the only thing you are doing with it). Consider running grep on the entire file.
If, however, you plan to make use of the split pieces later on, then the typical way would be:
Create a temporary directory and step into it
Run split/csplit on the original file
Use a for loop over the written fragments to do your processing (see the sketch below).
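A rough sketch of those steps; the fragment size, the chunk_ prefix, and the grep pattern are only placeholders:
mkdir tmpsplit && cd tmpsplit
split -l 1000000 ../bigfile chunk_     # pieces of 1,000,000 lines each (split by lines, so no line is cut in half)
for fragment in chunk_*; do
  grep "string" "$fragment"            # per-fragment processing goes here
done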

Loop "paste" function over multiple files in the same folder

I'm trying to horizontally concatenate a number of *.txt files (about 1000) in a folder.
How can I loop over the files using the "paste" function?
NB: all the *.txt files are in the same directory.
Why loop? You can use wildcards.
paste *.txt > combined.txt
In general, it would be a question of just calling paste *.txt (and redirecting the output: paste *.txt > output.txt, as #zx did). Try it, but you'll be generating some enormously long lines. If paste can't handle the line lengths you'll be generating, you'll have to reproduce its effect using a scripting language that has no line-length limit, like Perl or Python.
Another possible sticking point is if your shell can't handle that many arguments in the expansion of the glob *.txt. Again, you can solve that with a script; it's easy to do, so if that's your situation, let us know here.
PS. Given what paste does, looping is not going to do it for you: You (presumably) need the file contents side by side in the output, not one after the other.
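If the argument list really is too long, one workaround (a sketch, and not the naive append loop the PS warns against) is to fold the files in two at a time, so paste never sees more than a pair of names:
out=combined.out              # chosen so the *.txt glob does not pick it up
first=1
for f in *.txt; do
  if [ "$first" -eq 1 ]; then
    cp "$f" "$out"            # seed the output with the first file's column
    first=0
  else
    paste "$out" "$f" > "$out.tmp" && mv "$out.tmp" "$out"   # append one more column
  fi
done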

How to use mv command to rename multiple files in unix?

I am trying to rename multiple files with extension xyz[n] to extension xyz
example :
mv *.xyz[1] to *.xyz
but the error comes up as "*.xyz No such file or directory"
I don't know if mv can work directly using *, but this would:
find ./ -name "*.xyz\[*\]" | while read -r line
do
mv "$line" "${line%.*}.xyz"
done
Let's say we have some files as shown below. Now I want to remove the part -(ab...) from those files.
> ls -1 foo*
foo-bar-(ab-4529111094).txt
foo-bar-foo-bar-(ab-189534).txt
foo-bar-foo-bar-bar-(ab-24937932201).txt
So the expected file names would be :
> ls -1 foo*
foo-bar-foo-bar-bar.txt
foo-bar-foo-bar.txt
foo-bar.txt
>
Below is a simple way to do it.
> ls -1 | nawk '/foo-bar-/{old=$0;gsub(/-\(.*\)/,"",$0);system("mv \""old"\" "$0)}'
In short: for every name containing foo-bar-, it strips the -(...) part with gsub and then invokes mv on the old and new names via system().
Here is another way, using the automated tools of StringSolver. Let us say your first file is named abc.xyz[1], a second def.xyz[1], and a third ghi.jpg (not the same extension as the previous two).
First, filter the files you want by giving examples (ok and notok are any words such that the first describes the accepted files):
filter abc.xyz[1] ok def.xyz[1] ok ghi.jpg notok
Then perform the move with the filter it created:
mv abc.xyz[1] abc.xyz
mv --filter --all
The second line generalizes the first transformation on all files ending with .xyz[1].
The last two lines can also be abbreviated in just one, which performs the moves and immediately generalizes it:
mv --filter --all abc.xyz[1] abc.xyz
DISCLAIMER: I am a co-author of this work for academic purposes. Other examples are available on youtube.
I think mv can't operate on multiple files like that directly without a loop.
Use the rename command instead. It uses regular expressions, but it is easy to use once mastered and more powerful.
rename 's/^text-to-replace/new-text-you-want/' text-to-replace*
e.g. to rename all .jar files in a directory to .jar_bak:
rename 's/\.jar$/.jar_bak/' *.jar
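Applied to the question's files, a sketch assuming this Perl-based rename is installed (the brackets have to be escaped both in the regex and in the shell glob):
rename 's/\.xyz\[1\]$/.xyz/' *.xyz\[1\]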

How to quickly rename a bunch of files in the folder

I have a bunch of files that are named 'something_12345.doc' (any 5-digit number, not necessarily 12345). I need to rename them all to just 'something.doc'. This is a Unix filesystem, and I suspect there's a way to do this with just one command... Can any Unix regular expressions guru help?
Thanks!
#OP, the shell has already expanded your pattern for you; in your mv statement you don't have to specify the pattern for the 5 digits again.
for file in *_[0-9][0-9][0-9][0-9][0-9].doc
do
echo mv "$file" "${file%_*}.doc"
done
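(The echo makes this a dry run: it prints the mv commands instead of executing them. Remove the echo once the output looks right.)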
This question has been asked many times on SO:
bash script to rename all files in a directory?
bash Linux - Massive folder rename
How to do a mass rename?
https://stackoverflow.com/questions/7137/replacing-one-string-in-a-bunch-of-file-names-with-another
My personal preference goes to mmv. But see "Mass Rename/copy/link Tools".
rename 's/_[0-9][0-9][0-9][0-9][0-9]//' *.doc
Use sed to generate the mv commands and pipe them to a shell:
ls *.doc | sed 's:\(.*\)_[0-9][0-9]*\.doc:mv "&" "\1.doc":' | /bin/bash
Yes, rename takes Perl-style regular expressions. Do a man rename.
On FreeBSD, you might be interested in the sysutils/renameutils port. The command qmv opens your $EDITOR and allows you to specify all file renames in a reasonably comfortable environment. I personally prefer the qmv -fdo (single column) format.
