UNIX script for searching in logs and files and extracting lines

I'm trying to write a script that can search log files for a specific text and write the matching lines to a .txt file. The log files are archived once every two days, so I need to search the archived files as well.
Something like:
-bash-3.2$ ssh server.com
-bash-3.2$ cd test/log/
-bash-3.2$ less server.log.2012-06-19.gz | grep "text" > ~/test.txt
I'm kind of a newbie in UNIX.
Thanks!

Like this?
zgrep text server.log* >~/test.txt

gzcat <your_gz_file> | grep string > output_file
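Putting the two together, a minimal sketch of the kind of script you describe (the pattern, paths, and file names are placeholders; adjust them to your layout):
#!/bin/sh
# Search the live log and the rotated .gz archives for a pattern
# and collect all matching lines in one output file.
pattern="text"
logdir="$HOME/test/log"
out="$HOME/test.txt"

: > "$out"                                              # start with an empty output file
grep  -h "$pattern" "$logdir"/server.log      >> "$out" 2>/dev/null
zgrep -h "$pattern" "$logdir"/server.log.*.gz >> "$out" 2>/dev/null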

Related

Finding the files that use a particular account name on a UNIX server

I have to prepare a sheet like the one below. I want to find the files where we have used the EDW account. But when I use the find command, it returns everything that contains the word EDW, including files I don't have permission to open (printing unnecessary lines).
I only need to print the files using the account names EDW, GDW, etc., with the path name and file name, so that I can prepare a sheet like the one below.
AccountName  Server       Path name                         File name
XCM          uk0300uv550  /home/super/MKBP/scripts/xtc/rap  proc_build.sql
Can anyone please help me?
If I understand your question correctly, you want a list of all files in which "EDW" occurs? Try -l to list just the filenames and -r to grep recursively:
echo "Text with EDW inside." > file.txt
grep -lr EDW .
If you would like to find files that are owned by a specific user/group, you could simply use:
find /path/ -user EDW -group EDW
If you would like to find only files that are both owned by your user and contain the word EDW in their name, you could use:
find /path/ -user EDW -name "*EDW*"
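To also cut out the "Permission denied" noise and build the sheet itself, here is a hedged sketch (the account list and the search root are assumptions; adjust as needed):
#!/bin/sh
# For each file containing an account name, print account, server,
# directory, and file name as tab-separated columns.
for acct in EDW GDW XCM; do
    grep -rl "$acct" /home/super 2>/dev/null |
    while read -r f; do
        printf '%s\t%s\t%s\t%s\n' "$acct" "$(hostname)" "$(dirname "$f")" "$(basename "$f")"
    done
done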

UNIX: How to change all instances of a string within all files in a directory

My question is basically the same as this previous question...
How to change all occurrences of a word in all files in a directory
...except I'm trying to change the reference to a header file.
For example, I'm trying to change <abc/filename.h> to "filename.h". Is this even possible using the same syntax, or should I be looking to whip up a quick program to do it?
Thanks
You can do it easily with sed:
sed -i -e 's,<abc/filename.h>,"filename.h",' *
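If the sources are spread across subdirectories, you can pair it with find; a hedged sketch, assuming GNU sed's -i option:
# Apply the same substitution to every .c and .h file below the
# current directory (GNU sed assumed for in-place editing).
find . -type f \( -name '*.c' -o -name '*.h' \) \
    -exec sed -i 's,<abc/filename.h>,"filename.h",g' {} +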

How to generate translation file (.po, .xliff, .yml,...) from a Symfony2/Silex project?

I'm going to build a Silex/Symfony2 project and I have been looking around for a method to generate XLIFF/PO/YAML translation files based on the texts-to-be-translated inside the project, but I haven't found any instructions or documentation on it.
My question is: Is there an automated way to generate translation file(s) in specific format for a Symfony2/Silex project?
If yes, please tell me how to generate the file and then how to update the translations after that.
If no, please tell me how to create the translation file(s) and then add more text for my project. I am looking for a desktop-based or web-based editor, rather than a service such as Transifex or GetLocalization (they don't have an option to create a new file or add more text).
After a long time searching the internet, I found a good one:
https://github.com/schmittjoh/JMSTranslationBundle
I see you've found a converter, but to answer your first question about generating your initial translation file -
If you have Gettext installed on your system you could generate a PO file from your "texts-to-be-translated inside the project". The command line program xgettext will scan the source files looking for whatever function you're using.
Example:
To scan PHP files for instances of the trans method call as shown here, you could use the following command:
find . -name "*.php" | xargs xgettext --language=PHP --keyword=trans --output=messages.pot
To your question about editors:
You could use any PO editor, such as POEdit, to manage your translations, but as you say you eventually need to convert the PO file to either an XLIFF or YAML language pack for Symfony.
I see you've already found a converter tool. You may also like to try the one I wrote for Loco. It supports PO to YAML, and PO to XLIFF
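If you'd rather convert on the command line, the Translate Toolkit (assuming you have it installed) ships a po2xliff converter; a minimal sketch:
# Convert the extracted PO file into XLIFF for Symfony's translator.
po2xliff messages.po messages.en.xlf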
Workaround for busy people (UNIX)
You can run the following command in the Terminal:
$ grep -rEo --no-filename "'.+'\|\btrans\b" templates/ > output.txt
This will output the list of messages to translate:
'Please provide your email'|trans
'Phone'|trans
'Please provide your phone number'|trans
...
Well, almost. But you can usually do some useful work from here.
Obviously you must tweak the command to your liking (transchoice, double quotes instead of single quotes, and so on); one such variant is sketched below.
Not ideal but can help!
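For instance, a hedged variant that also catches transchoice and double-quoted messages might look like this:
# Match '...' or "..." followed by |trans or |transchoice (GNU grep assumed).
grep -rEoh "('[^']+'|\"[^\"]+\")\|(trans|transchoice)\b" templates/ > output.txt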
grep options
grep -R, -r, --recursive: Read all files under each directory, recursively. This is equivalent to the -d recurse option.
grep -E, --extended-regexp: Interpret PATTERN as an extended regular expression.
grep -o, --only-matching: Show only the part of a matching line that matches PATTERN.
grep -h, --no-filename: Suppress the prefixing of filenames on output when multiple files are searched.

How to find files in a directory using grep with wildcards?

I have several hundred files with file names such as:
20110404_091415-R1-sometext
Another file name might be named:
20110404_091415-R1.2-sometext
What I would like to do is use the Unix grep tool in the terminal to find files that start with 2011 and also contain -R1 within the file name. Unfortunately, I have no idea how to find files that satisfy both of these criteria. I have tried to figure out a regex that would match this, but I am only a beginner programmer. Can anyone help, please? Thanks in advance for your time.
Why even use grep? I think ls 2011*R1* should suffice.
ls | grep "^2011.*-R1.*"
Should do the job.
Just to find files, you can use ls 2011*R1* or echo 2011*R1*. To do something to files, use a loop (generally)
for file in 2011*R1*
do
....
done
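For example, a hedged sketch that moves every match into an archive directory (the directory name is a placeholder):
mkdir -p archive
for file in 2011*R1*
do
    [ -e "$file" ] || continue   # the glob matched nothing
    mv -- "$file" archive/
done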

Locating most recently updated file recursively in UNIX

For a website I'm working on I want to be able to automatically update the "This page was last modified:" section in the footer as I'm doing my nightly git commit. Essentially I plan on writing a shell script to run at midnight each night which will do all of my general server maintenance. Most of these tasks I already know how to automate, but I have a file (footer.php) which is included in every page and displays the date the site was last updated. I want to be able to recursively look through my website and check the timestamp on every file, then if any of these were edited after the date in footer.php I want to update this date.
All I need is a UNIX command that will recursively iterate through my files and return ONLY the date of the last modification. I don't need file names or what changes were made, I just need to know a single day (and hopefully time) that the most recently updated file was changed.
I know that using "ls -l" and "cut" I could iterate through every folder to do this, but I was hoping for a quicker-running and easier command. Preferably a single-line shell command (possibly with a -R parameter).
The find outputs all the modification times in Unix epoch format; then sort and take the biggest.
Converting into whatever date format is wanted is left as an exercise for the reader:
find /path -type f -iname "*.php" -printf "%T@\n" | sort -n | tail -1
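For the impatient, a hedged version of that exercise (GNU find and GNU date assumed; /path is a placeholder):
# Newest modification time below /path, printed as a readable date.
ts=$(find /path -type f -iname "*.php" -printf "%T@\n" | sort -n | tail -1 | cut -d. -f1)
date -d "@$ts" "+%Y-%m-%d %H:%M"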
GNU find
find /path -type f -iname "*.php" -printf "%T+\n" | sort | tail -1
check the find man page to play with other -printf specifiers.
You might want to look at an inotify script that updates the footer every time any other file is modified, instead of looking all through the file system for new updates.
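A minimal sketch of that idea, using inotifywait from the inotify-tools package (assuming it is installed; the paths and the sed pattern are placeholders, and GNU sed's -i is assumed):
#!/bin/sh
# Watch the web root and rewrite the footer date on every change.
# footer.php is excluded from the watch so updating it does not retrigger.
inotifywait -m -r -e modify,create,delete --exclude 'footer\.php' /var/www/site |
while read -r dir event file; do
    sed -i "s/This page was last modified: .*/This page was last modified: $(date '+%Y-%m-%d %H:%M')/" /var/www/site/footer.php
done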
