Viewing the updated data in a file - Unix

If I have a file, e.g. a log file created by some background process on Unix, how do I view the data that is being appended to it over time?
I know that I can use the tail command to see the file. But let's say I have used tail -10 file.txt: it will give the last 10 lines. Now suppose that at one point 10 lines got added, and at the next instant only 2 lines were added; the tail command will then also show me the previous 8 lines. I don't want this; I want only those two lines which were added.
In short, I want to view only those lines which were appended. How can I do this?

tail -f file.txt
That outputs each newly added line immediately.
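If you also want to skip the last 10 lines that tail prints before it starts following, you can tell it to start at the very end of the file, so only lines appended from that moment on are shown:
tail -n 0 -f file.txt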


Inplace editing of DNS records via SSH with sed/awk or the likes

I have many files in a single directory, each with 3 lines I need to edit, where lines 2 and 3 should only be edited if line 1 was, but I cannot wrap my head around how to do it.
The point is to change my MX records from pointing at my server to pointing at mail. instead, and to have that name point to the server's IP. This makes migration in cPanel a lot easier. I have thousands of records to go through.
DNS updates
Line 1:
example.com. 14400 IN MX 0 servername.com.
If the last block is "servername.com", it should be changed to mail.example.com instead, like this:
example.com. 14400 IN MX 0 mail.example.com.
If nothing is changed here, I want to skip the next parts and just go to the next file in the directory.
Line 2:
mail IN CNAME servername.com
For any record where line 1 was changed, this line should be changed to
mail IN A 8.8.8.8
Line 3:
Increment the serial number for the DNS record in any file where Line 1 was changed. The 10 digits can be anything, but they are always the first thing on that line, and that line is always line #5.
2015061800 ; serial, todays date+todays
-OR-
2015061800 ;Serial Number
I need to set the digits to today's date in the format YYYYMMDD09.
Here is what I have so far:
#! /bin/sh
while IFS= read -r dname
do
sed -i -r "s/MX\s*0\s*servername.com/MX 0 mail.$dname/g" $dname.db
done < dirlist.txt
This takes dirlist.txt, which contains a list of domain names, and for each file domainname.com.db it finds servername.com and replaces it with mail.domainname.com. This works, BUT the original field can have different numerical values between MX and servername.com. I would like to ignore that field, but writing
MX\s**\s*servername.com
instead does not seem to work. So how do I ignore that value, whether it is 0 or 100 or anything else?
Also, IF an edit was made above, I would like to run the statement below on the same file. If an edit was NOT made, I would like to break out and continue to the next file.
#! /bin/sh
while IFS= read -r dname
do
sed -i -r "s/mail\s*IN*\sCNAME\s*servername.com/mail IN A 8.8.8.8/g" $dname.db
done < dirlist.txt
After that is done, again assuming the first edit was made, I would like to change the first block of line 5 of the file to the value YYYYMMDD09.
#! /bin/sh
while IFS= read -r dname
do
sed -i '5s/.*/\t\t2015121009 ; serial, todays date/' "$dname.db"
done < dirlist.txt
That seems to do the trick, so it's really just the looping and the wildcard/ignoring of the numerical value that doesn't work as intended.
Your specification is not very clear (actually confusing), but for the first line you can do the following, and perhaps it will get you started on a prototype for the rest.
$ awk 'FNR==1 {sub("servername.com.$","mail."$1)}1' file
This replaces the final field "servername.com." with "mail." prefixed to the first field, but only on the first line. If you provide multiple input files, it will do the same for all of them (although the output will be a single stream, which can be solved). Note that this preserves the input file's spacing.
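For example, with a hypothetical one-line zone file containing the MX record from the question:
$ cat file
example.com. 14400 IN MX 0 servername.com.
$ awk 'FNR==1 {sub("servername.com.$","mail."$1)}1' file
example.com. 14400 IN MX 0 mail.example.com.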
I would like to ignore the field, but writing
MX\s**\s*servername.com
instead does not seem to work.
Between the two \s* you have only a *, but to match any run of characters you need .*, so MX\s*.*\s*servername.com works.
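You can check this on a sample line without touching any file, here with a value of 100 (a stricter alternative would be [0-9]* in place of .*):
$ echo 'example.com. 14400 IN MX 100 servername.com.' | sed -r 's/MX\s*.*\s*servername.com/MX 0 mail.example.com/'
example.com. 14400 IN MX 0 mail.example.com.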
If an edit is NOT done, I would like to break the loop and continue to
the next file.
You can combine all three edits into one sed call. Since you want the second and third substitutions to take place only if an edit was made on line 1, you wrap everything in a command block for line 1, 1{…}, and advance inside this block to the next lines if so (with n, see below).
I would like to edit line 5, first block of the file to the value
YYYYMMDD09.
…
sed -i '5s/.*/\t\t2015121009 ; serial, todays date/' $dname.db
To automatically insert today's date, you could use
s/.*/\t\t`date +%Y%m%d09` ; serial, todays date/
(but within double quotes, so that the shell performs the command substitution).
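Applied to your standalone third loop, the sed line would become the following (GNU sed turns \t in the replacement into a tab, and the shell runs date before sed ever sees the script):
sed -i "5s/.*/\t\t`date +%Y%m%d09` ; serial, todays date/" "$dname.db"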
Altogether:
while IFS= read -r dname
do
sed -i "1{s/MX\s*.*\s*servername.com/MX 0 mail.$dname/;tcontinue;b
:continue;n;s/mail\s*IN*\sCNAME\s*servername.com/mail IN A 8.8.8.8/
n;n;n;s/.*/\t\t`date +%Y%m%d09` ; serial, todays date/}" $dname.db
done < dirlist.txt
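Note that \s, -i, and labels terminated by ; rather than by a newline are GNU sed extensions, so this one-call version assumes GNU sed (as your original snippets already do with -i -r).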

How to display the first line for a set of files (say, 10) in Unix?

I don't know how to do that, guys. I only know how to get the first line of an individual file.
First I listed only the files that have ssa as part of their name. I used the command
ls | grep ssa
This command gives me 10 files; now I want to display only the first line of each of those 10 files. I don't know how to do that. Can anyone help me with that?
The head command can accept multiple input files. So if you suppress the header output and limit the number of lines to 1, that should be what you are looking for:
head -qn 1 *
If you want to combine this with other commands, you have to take care to really hand all the input file names over to a single call of head:
ls | xargs head -qn 1
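To restrict this to the ssa files from the question, feed the filtered list into the same pipeline (this assumes none of the file names contain spaces or newlines):
ls | grep ssa | xargs head -qn 1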

tail -f did not continue to output new lines added using vi

When I invoked "tail -f myfile.txt", a new line appended with the following command was output, but a line added and saved using vi was not. Does anyone know why?
$ echo "this is new line" >> myfile.txt
Thanks.
It has something to do with the fact that while you are editing the file, vi keeps your changes in a second file (.myfile.txt.swp in this case).
When you save the changes, vi likely replaces the original file with that second file. This means the file that tail was watching is no longer valid.
To prove this, try your echo command after saving the file with vi. When you do that, the output won't be displayed by tail.
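Another way to see it (the inode numbers below are invented for illustration) is to compare the file's inode before and after saving in vi; if the number changes, tail is still reading the old, replaced file:
$ ls -i myfile.txt        # note the inode number
1234567 myfile.txt
$ vi myfile.txt           # make a change and save with :wq
$ ls -i myfile.txt        # a different inode means a new file
1234589 myfile.txt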
The tail program opens a file, seeks to the end, and (with "-f") waits, then checks again whether that open file has anything new to read.
vi does not append to a file. It makes a copy (not a "swap", which is something else altogether), writes it out, and then moves the new file into place under the same name as the old one.
tail is still watching the old file; it does not look the file up by name again each time.
In addition, tail tracks its position in the file, so if you delete 10 characters and add 15, the next loop of tail will emit the 5 it thinks are new, because they lie after its placeholder.
Run 'tail --follow=name ...' to get tail to look up the file every loop by name, instead of watching the location on disk of a file it opens at start.
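On GNU systems, 'tail -F' is shorthand for '--follow=name --retry', so either of these keeps following even across renames and replacements:
tail --follow=name myfile.txt
tail -F myfile.txt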

How can I tail -f but only in whole lines?

I have a constantly updating huge log file (MainLog).
I want to create another file which contains only the last n lines of the log file BUT which also keeps updating.
If I use:
tail -f MainLog > RecentLog
I get ALMOST what I want, except that RecentLog is written as MainLog data becomes available, so at any point it may contain only part of the last MainLog line.
How can I specify to tail that I only want it to write when a WHOLE line is available?
By default, tail outputs whole lines unless you use the -c switch to count characters. Something like
tail -n 20 -f MainLog > RecentLog
(substituting the number of lines you want prepended to the second file for "20") should work as you want.
But if it doesn't, it is possible that using grep to line-buffer your output will fix this condition. See this question.
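A minimal sketch of that grep-based line buffering: the empty pattern '' matches every line, and --line-buffered makes grep flush after each one, so RecentLog only ever receives whole lines:
tail -n 20 -f MainLog | grep --line-buffered '' > RecentLog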
After many attempts, the only solution for multiple files that worked (fantastically well) for me is the fdlinecombine command. It's a small binary that reads multiple file descriptors and prints data to stdout linewise.
My use case is spawning multiple long-running ssh commands in the background and following their output, without having the lines garbled or interrupted in between.

Using the mv command to move a file to a destination stored in a variable

I want to move a file called dog to $HOME/deleted2.
The unix command I use is:
mv dog $HOME/deleted2
However, I want to move it to the exact same destination, but this time with $HOME/deleted2 stored in a hidden file called .rm.cfg.
I want to extract the location from .rm.cfg; the file contains one line, which says $HOME/deleted2.
Here is what I did:
pathname=$(cat $HOME/.rm.cfg)
mv dog $pathname
However, this time I get an error saying $HOME/deleted2 does not exist. Why is this?
Sorry for not putting it in code format; I tried to indent by four spaces, but it did not work.
cat $HOME/.rm.cfg only outputs the raw file content; it does not evaluate variables.
To put the fully interpreted string into your pathname variable, you need to evaluate it:
pathname=$(eval echo $(cat $HOME/.rm.cfg))
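Putting it together (eval executes whatever is in the file, so only do this if you trust the contents of .rm.cfg; the /home/user path below is just an example):
$ cat $HOME/.rm.cfg
$HOME/deleted2
$ pathname=$(eval echo $(cat $HOME/.rm.cfg))
$ echo "$pathname"
/home/user/deleted2
$ mv dog "$pathname"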
