I am using the following code to convert DOS line endings to Unix (dos2unix-style). Every time I execute it, it gets stuck at the command prompt. What is wrong with the command below?
for i in `find . -type f \( -name "*.c" -o -name "*.h" \)`; do sed -i 's/\r//' $i ; done
In Ubuntu, dos2unix and unix2dos are implemented as todos and fromdos respectively. They are available in the package tofrodos.
I suggest using
find . -type f \( -name "*.c" -o -name "*.h" \) -print0 | xargs -0 fromdos
I suggest confirming that your find command and for loop work properly.
You can do this by simply using an echo statement to print each file's name.
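For example, using the loop from the question (note that the unquoted backtick expansion still splits file names containing whitespace):
for i in `find . -type f \( -name "*.c" -o -name "*.h" \)`; do echo "$i"; done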
Depending on your platform (and how many .c and .h files you have) you might need to use xargs instead of directly manipulating the output from find. It's hard to say, because you still haven't told us which platform you're on.
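As a sketch of the xargs variant, assuming GNU find and GNU sed (both -print0/-0 and a suffix-less -i are GNU extensions, and GNU sed understands \r):
find . -type f \( -name "*.c" -o -name "*.h" \) -print0 | xargs -0 sed -i 's/\r$//'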
Also, depending on your platform, different versions of sed work differently with the -i option.
Sometimes you MUST specify a file extension to use for the backup file, sometimes you don't have to.
All of the above are reasons that I suggest testing your command piece by piece.
You should read the man pages for each command you're trying to use on the system on which you're trying to use it.
Regarding the sed portion of your command, you should test that on a single file to make sure it works.
You can use the following sed command to fix your newlines:
sed 's/^M$//' input.txt > output.txt
You can type the ^M by typing Ctrl-V Ctrl-M.
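Alternatively, if your shell supports ANSI-C quoting ($'...', as bash and ksh93 do), you can avoid typing the literal control character; a sketch:
sed $'s/\r$//' input.txt > output.txt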
Like I said before, the -i option works differently on different platforms.
If you have trouble getting that to work, you could have sed output a new file and then overwrite the original file afterwards.
This would be very simple to do inside your for loop.
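A minimal sketch of that approach, reusing the loop from the question (it still assumes file names without whitespace, and GNU sed's understanding of \r):
for i in `find . -type f \( -name "*.c" -o -name "*.h" \)`; do
    sed 's/\r$//' "$i" > "$i.tmp" && mv "$i.tmp" "$i"
done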
I'm converting Unix shell scripts into PowerShell scripts, and I want to know the PowerShell equivalent of the Unix test -f command.
test -f FILE exits with a success code if "FILE exists and is a regular file". For PowerShell, you probably want to use Test-Path -Type Leaf FILE. We need the -Type Leaf to make sure that Test-Path doesn't return $true for directories.
test -f and Test-Path -Type Leaf aren't going to be 100% identical. The fine differences between them may or may not matter, so I'd audit the script just to be sure. For example, test -f some_symlink is not true, but Test-Path -Type Leaf some_symlink is true. (Well, it was when I tested with an NTFS symlink.)
NB: test may be a built-in command in whichever shell you are using. I assume it has the semantics I quoted from the man page for test that I found.
I have multiple files in a Unix directory.
The file names are as follows:
EnvName.Fullbkp.schema_121212_1212_Part1.expd
EnvName.Fullbkp.schema_121212_1212_Part2.expd
EnvName.Fullbkp.schema_121212_1212_Part3.expd
Each of the above files contains similar lines. For example,
EnvName.Fullbkp.schema_121212_1212_Part1.expd
contains:
Log=EnvName.Fullbkp.schema_10022012_0630_Part1.log
file=EnvName.Fullbkp.schema_10022012_0630_Part1.lst
and EnvName.Fullbkp.schema_121212_1212_Part2.expd
contains:
Log=EnvName.Fullbkp.schema_10022012_0630_Part2.log
file=EnvName.Fullbkp.schema_10022012_0630_Part2.lst
I want to replace 10022012_0630 with 22052013_1000 in the EnvName.Fullbkp.schema_121212_1212_Part*.expd files without actually opening those files. The change should happen in all EnvName.Fullbkp.schema_121212_1212_Part*.expd files in the directory at once.
Assuming you mean you don't want to manually open the files:
sed -i 's/10022012_0630/22052013_1000/' EnvName.Fullbkp.schema_121212_1212_Part*.expd
Update: since the -i switch is not available on AIX, here is an alternative, assuming you have ksh (or a compatible shell):
mkdir modified
for file in EnvName.Fullbkp.schema_121212_1212_Part*.expd; do
    sed 's/10022012_0630/22052013_1000/' "$file" > modified/"$file"
done
Now the modified files will be in the modified directory.
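If the goal is to change the files in place, you can then move the modified copies back over the originals once you've spot-checked them (note this overwrites the original files):
mv modified/* . && rmdir modified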
It's some kind of extreme optimist who suggests sed -i on AIX.
It's a bit more likely that perl will be installed.
perl -pi -e 's/10022012_0630/22052013_1000/' EnvName.Fullbkp.schema_121212_1212_Part*.expd
If no perl, then you'll just have to do it like a Real Man:
for i in EnvName.Fullbkp.schema_121212_1212_Part*.expd
do
ed -s "$i" <<'__EOF'
1,$s/10022012_0630/22052013_1000/g
wq
__EOF
done
Have some backups ready before trying these.
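For example, a simple copy into a scratch directory would do (the directory name backup is arbitrary):
mkdir backup && cp EnvName.Fullbkp.schema_121212_1212_Part*.expd backup/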
I am trying to put the result of a find command into a text file in a Unix bash shell.
Using:
find ~/* -name "*.txt" -print > list_of_txt_files.list
However, list_of_txt_files.list stays empty, and I have to kill the find to get the command prompt back. I do have many .txt files in my home directory.
Alternatively, how do I save the result of a find command to a text file from the command line? I thought this should work.
The first thing I would do is use single quotes (some shells will expand the wildcards, though I don't think bash does, at least by default), and the first argument to find is a directory, not a list of files:
find ~ -name '*.txt' -print > list_of_txt_files.list
Beyond that, it may just be taking a long time, though I can't imagine anyone having that many text files (you say you have a lot but it would have to be pretty massive to slow down find). Try it first without the redirection and see what it outputs:
find ~ -name '*.txt' -print
You can redirect output to a file and console together by using tee.
find ~ -name '*.txt' -print | tee result.log
This will send output to the console and to a file, so you don't have to guess whether the command is actually executing.
Here is what worked for me:
find . -name '*.zip' -exec echo {} \; > zips.out
I've been stuck on a little unix command line problem.
I have a website folder (4 GB) I need to grab a copy of, but just the .php, .html, .js and .css files (which total only a couple of hundred KB).
Ideally, there would be a way to zip or tar the whole folder while grabbing only certain file extensions and retaining the subfolder structure. Is this possible, and if so, how?
I did try doing a whole zip, then going through and excluding certain files but it seemed a bit excessive.
I'm kinda new to unix.
Any ideas would be greatly appreciated.
Switch into the website folder, then run
zip -R foo '*.php' '*.html' '*.js' '*.css'
You can also run this from outside the website folder:
zip -r foo website_folder -i '*.php' '*.html' '*.js' '*.css'
You can use find and grep to generate the file list, then pipe that into zip
e.g.
find . | egrep "\.(html|css|js|php)$" | zip -@ test.zip
(-@ tells zip to read the file list from stdin)
This is how I managed to do it, but I also like ghostdog74's version.
tar -czvf archive.tgz `find test/ | egrep ".*\.html|.*\.php"`
You can add extra extensions by adding them to the regex.
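For example, to also pick up .css files (with the same caveat that the backtick expansion breaks on file names containing whitespace):
tar -czvf archive.tgz `find test/ | egrep ".*\.html|.*\.php|.*\.css"`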
I liked Nick's answer, but, since this is a programming site, why not use Ant to do this? :)
Then you can put in a parameter so that different types of files can be zipped up.
http://ant.apache.org/manual/Tasks/zip.html
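For example, a minimal build.xml using the zip task might look like this (a sketch; site.zip and website_folder are placeholder values):
<project name="zipsite" default="zip">
  <target name="zip">
    <!-- Zip only the wanted extensions, keeping the folder structure -->
    <zip destfile="site.zip" basedir="website_folder"
         includes="**/*.php,**/*.html,**/*.js,**/*.css"/>
  </target>
</project>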
You may want to use (GNU) find to find all your php, html, etc. files, then tar them up:
find /path -type f \( -iname "*.php" -o -iname "*.css" -o -iname "*.js" -o -iname "*.html" \) -exec tar -rf test.tar {} +
after that you can zip it up
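For instance, compressing the archive with gzip (this produces test.tar.gz):
gzip test.tar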
You could write a shell script to copy files based on a pattern/expression into a new folder, zip the contents and then delete the folder. Now, as for the actual syntax of it, I'll leave that to you. :D
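That said, a rough sketch of the idea (assuming GNU cp for --parents; staging and site.zip are placeholder names):
mkdir staging
find . -path ./staging -prune -o -type f \
    \( -name '*.php' -o -name '*.html' -o -name '*.js' -o -name '*.css' \) \
    -exec cp --parents {} staging \;
(cd staging && zip -r ../site.zip .)
rm -rf staging
The -prune keeps find from descending into the staging directory it just created.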
I know that to find all the .h files I need to use:
find . -name "*.h"
but how do I find all the .h AND .cpp files?
find . -name \*.h -print -o -name \*.cpp -print
or
find . \( -name \*.h -o -name \*.cpp \) -print
find -name "*.h" -or -name "*.cpp"
Paul Tomblin has already provided a terrific answer, but I thought I saw a pattern in what you were doing.
Chances are you'll be using find to generate a file list to process with grep one day, and for such a task there exists a much more user-friendly tool, ack.
It works on any system that supports Perl, and searching through all C++-related files in a directory recursively for a given string is as simple as
ack "int\s+foo" --cpp
"--cpp" by default matches .cpp .cc .cxx .m .hpp .hh .h .hxx files
(It also skips repository directories by default, so it won't match files that happen to be inside them.)
A short, clear way to do it with find is:
find . -regex '.*\.\(cpp\|h\)'
From the man page for -regex: "This is a match on the whole path, not a search." Hence the need to prefix with .* to match the beginning of the path ./dir1/dir2/... before the filename.
find . -regex ".*\.[cChH]\(pp\)?" -print
This tested fine for me in cygwin.
You can use find in this short form:
find \( -name '*.cpp' -o -name '*.h' \) -print
-print can be omitted. Using -o between expressions is especially useful when you want to find multiple types of files and run the same job on all of them (say, calculating an md5sum of each).
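For instance, a sketch using md5sum (from GNU coreutils), with the starting path given explicitly:
find . \( -name '*.cpp' -o -name '*.h' \) -exec md5sum {} +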