Can you help me with a shell script that live-tails every new file that appears in a folder, reads the lines as they come in, greps for specific lines, and writes them to a file? For example:
Find the latest file in the folder,
then read that newest file (as with cat), grep out specific lines (e.g. grep -A3 "some word"), and append those lines to another file with >> someotherfile.
To detect file-system changes, you might want to have a look at inotify.
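For example, a minimal sketch using inotifywait from the inotify-tools package (the watched directory, the search word, and the output file are placeholders to adapt):
WATCH_DIR=/var/log/incoming   # hypothetical folder to watch
OUT=/tmp/matches.log          # hypothetical output file
inotifywait -m -e create --format '%w%f' "$WATCH_DIR" |
while read -r newfile; do
    # follow each new file as it grows; keep matching lines plus 3 lines of context
    tail -F "$newfile" | grep -A3 "some word" >> "$OUT" &
done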
I am trying to take the list of files that I get with PuTTY's Plink command and save the file names to a text file, so that I can then pull only those files with PSFTP. Or can this be done without a temporary text file?
The files I want are the ones modified in the last 15 minutes. I am new to PuTTY and FTP in general. I searched everywhere but cannot find anything that helps.
Any help is appreciated,
Thank you
You have to generate the PSFTP script file dynamically by reading the Plink output file line by line and producing a get command for each line.
See Batch files: How to read a file?
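For example, if a Unix-like shell is available on your side (say Cygwin or Git Bash), a rough sketch, assuming the Plink output file filelist.txt holds one remote path per line and the host is a placeholder:
sed 's/^/get /' filelist.txt > psftp_script.txt
psftp -b psftp_script.txt user@example.com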
Or use an SFTP client that can directly download only files created in the last 15 minutes.
For example with WinSCP scripting:
winscp.com /command ^
"open sftp://username:password#example.com/ -hostkey=""fingerprint""" ^
"get /path/*>15N c:\path\" ^
"exit"
Read about file masks with a time constraint.
(I'm the author of WinSCP)
I'm writing a script that will print the file names of every file in a subdirectory of my home directory. My code is:
foreach file (`~/.garbage`)
echo "$file"
end
When I try to run my script, I get the following error:
home/.garbage: Permission denied.
I've tried setting permissions to 755 for the .garbage directory and my script, but I can't get over this error. Is there something I'm doing incorrectly? It's a tcsh script.
Why not just use ls ~/.garbage
or if you want each file on a separate line, ls -1 ~/.garbage
Backticks will try to execute whatever is inside them. You are getting this error because you are giving a directory name inside backticks.
You can use ls ~/.garbage in backticks as mentioned by Mark, or use ~/.garbage/* (unquoted) and rely on the shell to expand the glob for you. If you want only the filename from a full path, use the basename command or some sed/awk magic.
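For example, a corrected version of the loop, letting tcsh expand the glob instead of using backticks:
foreach file (~/.garbage/*)
    echo "$file"
end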
I have a simple egrep command that searches through all the files in the current directory for lines that contain the word "error":
egrep -i "error" *
This command will also go through the sub-directories. Here is a sample of what the whole folder looks like:
/Logfile_20120630_030000_ID1.log
/Logfile_20120630_030001_ID2.log
/Logfile_20120630_030005_ID3.log
/subfolder/Logfile_20120630_031000_Errors_A3.log
/subfolder/Logfile_20120630_031001_Errors_A3.log
/subfolder/Logfile_20120630_031002_Errors_A3.log
/subfolder/Logfile_20120630_031003_Errors_A3.log
The logfiles in the top directory contain "error" lines, but the logfiles in the "subfolder" directory do not contain any lines with "error" (the word appears only in their filenames).
So the problem is that the egrep command also seems to be looking at something inside "subfolder": my result starts with a chunk of what looks like a binary block, followed by the text lines containing the word "error" from the top-folder logfiles.
If I deleted all the files underneath "subfolder", but did not delete the folder itself, I get the exact same results.
So does Unix keep file history information inside a folder??
The problem was corrected by running:
find . -type f | egrep -i "error" *
But I still don't understand why it was a problem. I'm running the C shell on SunOS.
egrep -i error *
The * metacharacter matches ANY file name, and directories are files, too. * is expanded by the shell into any and all files in the current directory; this is traditionally called globbing. On an older system like SunOS, egrep can open a directory named on its command line and read it like an ordinary file, and the raw directory data contains the names of its entries (filenames such as Logfile_20120630_031000_Errors_A3.log), which is why you saw binary-looking blocks that matched "error". Deleted entries also linger in that directory data until their slots are reused, so Unix is not keeping file history; you were simply reading the directory itself.
set noglob
turns off that behavior. However, it is unlikely there are files named * in your directory, so in this example the command would then find no files of any kind. BTW, do not create a file named * to test this, because files named * can cause all kinds of interesting and unwanted things to happen. Think about what would happen when you tried to delete it: rm '*' would be the right command, but if you or someone else ran rm * unthinkingly, you would have problems...
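For the original goal, searching only the regular files in the top directory and skipping "subfolder", one portable sketch (older SunOS find has no -maxdepth option, hence the -prune idiom):
find . ! -name . -prune -type f | xargs egrep -i "error" /dev/null
The extra /dev/null argument just forces egrep to print file names even when xargs hands it a single file.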
I made a Unix command, macmac2unix, which converts Mac Word files for Unix platforms.
I would like to run the command as
$macmac2unix file1 file2 file3 ...
Problem:
How can I run this command from any directory?
I added the following to .bashrc unsuccessfully
CDPATH=:/Users/Sam/Documents/Unix
Try adding
export PATH=$PATH:/Users/Sam/Documents/Unix
to your .bashrc
Make your script executable and be sure it's located in /Users/Sam/Documents/Unix.
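For example:
chmod +x /Users/Sam/Documents/Unix/macmac2unix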
You could reread your .bashrc with:
~> . ~/.bashrc
But if you have already played around with your environment variables, restarting your terminal would be cleaner.
Add it to PATH, not CDPATH.
Try adding it in PATH like this:
PATH=/Users/Sam/Documents/Unix:$PATH
I need to regularly send a collection of log files that can grow quite large, so I would like to send only the last n lines of each of the files.
for example:
/usr/local/data_store1/file.txt (500 lines)
/usr/local/data_store2/file.txt (800 lines)
Given a file with a list of needed files named files.txt, I would like to create an archive (tar or zip) with the last 100 lines of each of those files.
I can do this by creating a separate directory structure with the tail-ed files, but that seems like a waste of resources when there is probably some piping magic that can accomplish it. The full directory structure must also be preserved, since files can have the same names in different directories.
I would like the solution to be a shell script if possible, but perl (without added modules) is also acceptable (this is for Solaris machines that don't have ruby/python/etc. installed on them).
You could try
tail -n 10 your_file.txt | while read line; do zip /tmp/a.zip $line; done
where a.zip is the zip file and 10 is n or
tail -n 10 your_file.txt | xargs tar -czvf test.tar.gz --
for tar.gz
You are focusing on a specific implementation instead of looking at the bigger picture.
If the final goal is to have an exact copy of the files on the target machine while minimizing the amount of data transferred, what you should use is rsync, which automatically sends only the parts of the files that have changed, and can also compress while sending and decompress while receiving.
Running rsync doesn't need any more daemons on the target machine than the standard sshd, and to set up automatic transfers without passwords you just need to use public key authentication.
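For example, a hedged sketch (user, host and paths are placeholders; -a preserves the directory structure, -z compresses during the transfer):
rsync -az -e ssh /usr/local/data_store1/ user@receiver.example.com:/backups/data_store1/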
There is no piping magic for that; you will have to create the folder structure you want and zip that.
mkdir tmp
for i in /usr/local/*/file.txt; do
    # recreate the original directory structure under tmp/ (strip the leading /)
    mkdir -p "$(dirname "tmp/${i:1}")"
    # keep only the last 100 lines of each file
    tail -n 100 "$i" > "tmp/${i:1}"
done
zip -r zipfile tmp/*
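If the list of files lives in files.txt as described in the question, a similar sketch can read it instead of globbing (assuming one absolute path per line):
mkdir tmp
while read -r f; do
    # recreate the directory structure under tmp/ (strip the leading /)
    mkdir -p "$(dirname "tmp/${f#/}")"
    tail -n 100 "$f" > "tmp/${f#/}"
done < files.txt
zip -r zipfile tmp/*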
Use logrotate.
Have a look inside /etc/logrotate.d for examples.
Why not put your log files in SCM?
Your receiver creates a repository on their machine, from which they retrieve the files by checking them out.
You send the files just by committing them. Only the diff will be transmitted.
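For example, with Subversion (a sketch; the repository URL and local paths are placeholders, and the log directories are assumed to already be working copies):
# receiver, once:
svn checkout http://svnserver.example.com/repos/logs /local/logs
# sender, on every send:
svn commit -m "log update" /usr/local/data_store1/file.txt /usr/local/data_store2/file.txt
# receiver, on every retrieval:
svn update /local/logs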