Editing/replacing content in multiple files in Unix (AIX) without opening them

I have multiple files in a Unix directory. The file names are as follows:
EnvName.Fullbkp.schema_121212_1212_Part1.expd
EnvName.Fullbkp.schema_121212_1212_Part2.expd
EnvName.Fullbkp.schema_121212_1212_Part3.expd
Each of the above files contains similar lines. For example,
EnvName.Fullbkp.schema_121212_1212_Part1.expd
contains:
Log=EnvName.Fullbkp.schema_10022012_0630_Part1.log
file=EnvName.Fullbkp.schema_10022012_0630_Part1.lst
EnvName.Fullbkp.schema_121212_1212_Part2.expd
contains:
Log=EnvName.Fullbkp.schema_10022012_0630_Part2.log
file=EnvName.Fullbkp.schema_10022012_0630_Part2.lst
I want to replace the 10022012_0630 in the EnvName.Fullbkp.schema_121212_1212_Part*.expd files with 22052013_1000, without actually opening those files. The changes should happen in all EnvName.Fullbkp.schema_121212_1212_Part*.expd files in the directory at once.

Assuming you mean you don't want to manually open the files:
sed -i 's/10022012_0630/22052013_1000/' EnvName.Fullbkp.schema_121212_1212_Part*.expd
Update: the "-i" switch is not available in AIX's sed, but assuming you have ksh (or a compatible shell):
mkdir modified
for file in EnvName.Fullbkp.schema_121212_1212_Part*.expd; do
  sed 's/10022012_0630/22052013_1000/' "$file" > modified/"$file"
done
Now the modified files will be in the modified directory.
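If the modified copies look right, you can then move them back over the originals (a small follow-up sketch; check the modified files first):
for file in EnvName.Fullbkp.schema_121212_1212_Part*.expd; do
  mv modified/"$file" "$file"
done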

It's some kind of extreme optimist who suggests sed -i on AIX.
It's a bit more likely that perl will be installed.
perl -pi -e 's/10022012_0630/22052013_1000/' EnvName.Fullbkp.schema_121212_1212_Part*.expd
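If you'd rather keep the originals around, perl lets you give -i a backup suffix, so each original is saved with a .bak extension:
perl -pi.bak -e 's/10022012_0630/22052013_1000/' EnvName.Fullbkp.schema_121212_1212_Part*.expd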
If no perl, then you'll just have to do it like a Real Man:
for i in EnvName.Fullbkp.schema_121212_1212_Part*.expd
do
ed -s "$i" <<'__EOF'
1,$s/10022012_0630/22052013_1000/g
wq
__EOF
done
Have some backups ready before trying these.
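For example, a quick way to take those backups first (this assumes no .bak copies exist yet):
for i in EnvName.Fullbkp.schema_121212_1212_Part*.expd; do
  cp -p "$i" "$i.bak"   # -p preserves timestamps and permissions
done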

Related

How to extract a .tar.gz file on UNIX

I have a file (reviews_dataset.tar.gz) that contains many files of data. I am required to extract the files in this archive and then perform some basic commands on them. So far I have created a directory named CW and found the command tar zxvf fileNameHere.tgz, but when I run it, it of course cannot find my file, as I have not "downloaded" it into my directory yet. How do I get this file into my directory so that I can then extract it? Sorry if this is poorly worded, I am extremely new to this.
You must either run the command from the directory your file exists in, or provide a relative or absolute path to the file. Let's do the latter:
cd /home/jsmith
mkdir cw
cd cw
tar zxvf /home/jsmith/Downloads/fileNameHere.tgz
You should use the command with the options preceded by a dash, like this:
tar -zxvf filename.tar.gz
If you want to specify the directory to extract all the files into, use -C:
tar -zxf filename.tar.gz -C /root/Desktop/folder
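If you are not sure what an archive contains, you can also list its contents first without extracting anything; -t lists instead of extracting:
tar -tzf filename.tar.gz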

Unix tail script with grep

Can you help me with a shell script that watches a folder live, tails every new file that arrives, greps the incoming lines for specific content, and writes the matches to a file? For example:
Find the latest file in the folder,
then read it (as cat would), grep out specific lines with something like grep -A3 "some word", and append those lines to some other file with >>.
To detect file-system changes, you might want to have a look at inotify.
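A minimal sketch of that idea using inotifywait from the inotify-tools package (Linux-specific; the watched folder, search word, and output file here are placeholders):
inotifywait -m -e create --format '%w%f' /path/to/folder |
while read -r newfile; do
  # follow each new file as it grows; append matching lines plus
  # 3 lines of trailing context to the output file
  tail -f "$newfile" | grep -A3 "some word" >> matches.out &
done
Each new file gets its own tail process in the background, so a long-running watcher would eventually want to clean those up.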

Replacing DOS line endings (dos2unix)

I am using the following code to convert DOS line endings to Unix ones. Every time I execute it, it gets stuck at the command prompt. What is wrong with the command below?
for i in `find . -type f \( -name "*.c" -o -name "*.h" \)`; do sed -i 's/\r//' $i ; done
In Ubuntu, dos2unix and unix2dos are implemented as todos and frodos respectively. They are available in the package tofrodos.
I suggest using
find . -type f \( -name "*.c" -o -name "*.h" \) -print0 | xargs -0 frodos
I suggest confirming that your find command and for loop work properly; you can do this by simply using echo to print each file's name.
Depending on your platform (and how many .c and .h files you have) you might need to use xargs instead of directly manipulating the output from find. It's hard to say, because you haven't told us which platform you're on.
Also depending on your platform, different versions of sed handle the -i option differently: sometimes you must specify a file extension to use for the backup file, sometimes you don't have to.
All of the above are reasons to test your command piece by piece, and to read the man pages for each command on the system where you're trying to use it.
Regarding the sed portion of your command, test it on a single file first to make sure it works.
You can use the following sed command to fix your newlines:
sed 's/^M$//' input.txt > output.txt
You can type the ^M by typing Ctrl-V Ctrl-M.
As mentioned above, the -i option works differently on different platforms. If you have trouble getting it to work, you can have sed write to a new file and then overwrite the original afterwards; this is very simple to do inside your for loop.
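For example, a sketch of that approach which avoids -i entirely and copes with spaces in file names (test it on a copy of the tree first):
find . -type f \( -name "*.c" -o -name "*.h" \) | while read -r f; do
  # \r is understood by GNU sed; on other systems type a literal
  # ^M (Ctrl-V Ctrl-M) in its place, as described above
  sed 's/\r$//' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done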

How do I zip a whole folder tree in Unix, but only certain files?

I've been stuck on a little unix command line problem.
I have a website folder (4gb) I need to grab a copy of, but just the .php, .html, .js and .css files (which is only a couple hundred kb).
I'm thinking ideally, there is a way to zip or tar a whole folder but only grabbing certain file extensions, while retaining subfolder structures. Is this possible and if so, how?
I did try doing a whole zip, then going through and excluding certain files but it seemed a bit excessive.
I'm kinda new to unix.
Any ideas would be greatly appreciated.
Switch into the website folder, then run
zip -R foo '*.php' '*.html' '*.js' '*.css'
You can also run this from outside the website folder:
zip -r foo website_folder -i '*.php' '*.html' '*.js' '*.css'
You can use find and grep to generate the file list, then pipe that into zip
e.g.
find . | egrep "\.(html|css|js|php)$" | zip -@ test.zip
(-@ tells zip to read the file list from stdin)
This is how I managed to do it, but I also like ghostdog74's version.
tar -czvf archive.tgz `find test/ | egrep "\.(html|php)$"`
You can add extra extensions by adding them to the regex.
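For example, the same command extended to also pick up .css and .js files:
tar -czvf archive.tgz `find test/ | egrep "\.(html|php|css|js)$"`
(Note that the backtick substitution splits on whitespace, so this assumes there are no spaces in the file names.)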
I liked Nick's answer, but, since this is a programming site, why not use Ant to do this? :)
Then you can put in a parameter so that different types of files can be zipped up.
http://ant.apache.org/manual/Tasks/zip.html
You may want to use GNU find to find all your .php, .html, etc. files, then tar them up:
find /path -type f \( -iname "*.php" -o -iname "*.css" -o -iname "*.js" -o -iname "*.ext" \) -exec tar -rf test.tar {} +
After that you can compress the tar file with gzip.
You could write a shell script to copy files based on a pattern/expression into a new folder, zip the contents and then delete the folder. Now, as for the actual syntax of it, I'll leave that to you :D.
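As a starting point, here is one rough sketch of that approach (the paths are made up, and cp --parents is GNU-specific):
# copy matching files into a scratch tree, preserving the subfolder structure
mkdir /tmp/site_copy
cd /path/to/website
find . -type f \( -name "*.php" -o -name "*.html" -o -name "*.js" -o -name "*.css" \) \
  -exec cp --parents {} /tmp/site_copy \;
# zip up the scratch tree, then remove it
(cd /tmp/site_copy && zip -r /tmp/site.zip .)
rm -rf /tmp/site_copy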

Have rsync only report files which were updated

When rsync prints out the details of what it did for each file (using one of the verbose flags) it seems to include both files that were updated and files that were not updated. For example a snippet of my output using the -v flag looks like this:
rforms.php is uptodate
robots.txt is uptodate
sorry.html
thankyou.html is uptodate
I'm only interested in the files that were updated. In the above case that's sorry.html. It also prints out directory names as it enters them, even if no file in that directory is updated. Is there a way to filter the up-to-date files, and the directories with no updated files, out of this output?
You can pipe it through grep:
rsync -vv (your other rsync options here) | grep -v 'uptodate'
Rsync's output can be extensively customized; take a look at rsync --info=help. -v is a fairly coarse way to get information from a modern rsync.
In your case, I'm not sure exactly what you consider "updated" to mean. For example, deleted on the receiver too? Only files/dirs, but also pipes and symlinks too? Mod/access times or only content?
As a simple test I suggest you look at: rsync --info=name1 <other opts>.
Here's my take (proven in daily use, and I'm very happy with it):
rsync -arzihv --stats --progress \
  /media/frank/foo/ \
  /mnt/backup_drive/ | grep -E '^[^.]|^$'
The important bit is the -i for itemize.
The grep lets all output lines pass (including the summary produced by -h --stats, and the empty lines before it, which helps legibility) except those starting with a dot, which describe unchanged files. As the rsync man page puts it:
A . means that the item is not being updated (though it might have attributes that are being modified).
