I have a grep command that I run daily to find an entry in a huge logfile.
This command works fine in our Development environment. But in our Production environment, it outputs a response that is different from the entry in the logfile.
Here's the command:
logentry=$(grep -m1 -oP '.*(?<=Reset\s).*' $log)
Actual entry in log file:
******Reset Counter:[Total:1849766] [Success:1849766] [Insert:102] [Update:1848861] [Delete:803] [Key:0]
Command output:
******Reset Counter:[Total:1849766] 1 [Insert:102] [Update:1848861] [Delete:803] [Key:0]
Expected output:
******Reset Counter:[Total:1849766] [Success:1849766] [Insert:102] [Update:1848861] [Delete:803] [Key:0]
What could be the reason behind this inconsistent behavior of the grep command?
Thanks @Ed Morton for the comment. The fix is working fine.
Root cause:
The variable is not quoted, so it is subject to word splitting and then filename expansion (globbing), and the net result depends on the files in your directory. After splitting, [Success:1849766] stands alone as a word, and as a glob it is a bracket expression matching any single-character filename from that set; in Production it evidently matched a file (the 1 in your output), which is why that field was replaced.
Solution: Use echo "$logentry" instead, and ALWAYS quote your shell variables unless you have a specific reason not to and fully understand all of the implications.
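A minimal reproduction of the globbing effect (the file name S is hypothetical; any single-character name from the bracket set triggers it):
$ var='[Success:1849766]'
$ echo $var        # no matching file: the failed glob is printed as-is
[Success:1849766]
$ touch S          # create a file whose one-character name is in the bracket set
$ echo $var        # unquoted: word splitting, then filename expansion against the directory
S
$ echo "$var"      # quoted: no splitting, no globbing
[Success:1849766]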
Security implications of forgetting to quote a variable in bash/POSIX shells
I am attempting to bend zsh, my shell of choice, to my will, and am completely at a loss on the syntax and operation of completions.
My use case is this: I wish to have completions for 'ansible-playbook' under the '-e' option support three variations:
Normal file completion: ansible-playbook -e vars/file_name.yml
Prepended file completion: ansible-playbook -e @vars/file_name.yml
Arbitrary strings: ansible-playbook -e key=value
I started out with https://github.com/zsh-users/zsh-completions/blob/master/src/_ansible-playbook which worked decently, but required modifications to support the prefixed file pathing. To achieve this I altered the following lines (the -e line):
...
"(-D --diff)"{-D,--diff}"[when changing (small files and templates, show the diff in those. Works great with --check)]"\
"(-e --extra-vars)"{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:(EXTRA_VARS)"\
'--flush-cache[clear the fact cache]'\
to this:
...
"(-D --diff)"{-D,--diff}"[when changing (small files and templates, show the diff in those. Works great with --check)]"\
"(-e --extra-vars)"{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:__at_files"\
'--flush-cache[clear the fact cache]'\
and added the '__at_files' function:
__at_files () {
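  # strip the literal @ prefix from the word being completed, then fall through to normal filename completion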
  compset -P @; _files
}
This may be very noobish, but for someone who has never encountered this before, I was pleased that this solved my problem, or so I thought.
This fails me if I have multiple '-e' parameters, which is totally a supported model (similar to how docker allows multiple -v or -p arguments). What this means is that the first '-e' parameter will have my prefixed completion work, but any '-e' parameters after that point become 'dumb' and only allow for normal '_files' completion from what I can tell. So the following will not complete properly:
ansible-playbook -e key=value -e @vars/file
but this would complete for the file itself:
ansible-playbook -e key=value -e vars/file
Did I mess up? I see the same type of behavior for this particular completion plugin's '-M' option (it also becomes 'dumb' and does basic file completion). I may have simply not searched for the correct terminology or combination of terms, or perhaps missed what covers this in the rather complicated documentation, but again, with only a few days' experience digging into this, I'm lost.
If multiple -e options are valid, the _arguments specification should start with *, so instead of:
"(-e --extra-vars)"{-e,--extra-vars}"[EXTR ....
use:
\*{-e,--extra-vars}"[EXTR ...
The (-e --extra-vars) part indicates a list of options that cannot follow the one being specified. That exclusion isn't needed anymore, because it is presumably valid to do, e.g.:
ansible-playbook -e key=value --extra-vars @vars/file
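Putting both changes together, the -e line from the question would then look something like this (keeping the same description text and the __at_files action):
\*{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:__at_files"\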
I've been using this utility successfully for many years, in many environments. But I'm noticing that in one particular environment, it produces very unexpected results.
grep -r 'search-term1' . | grep 'search-term2'
The above command greps recursively for all instances of search-term1 in the current directory. The results are then piped to another grep, which selects only those lines that also contain search-term2. This works exactly as I would expect.
grep -r 'search-term1' . | grep -r 'search-term2'
The only difference in the above code is that the -r recursive flag is specified in both grep commands. I would expect the behavior not to change for this particular case. After all, the input to the second grep comes from a pipe, and there's nothing further to be found recursively.
I have been using the command successfully, for many years, in many different environments (both Unix and macOS). However, the most recent environment that I started working in (Unix) breaks the above behavior. The second piped grep searches for all instances of search-term2, not only in the piped input, but also in all files in my current directory. Because of this, instead of getting only the results that contain both search terms, I get all results in the current directory that contain the second search term.
Is there any reason why this one particular environment produces this odd behavior? Is there any way I can avoid this, while still preserving the -r flag?
FAQ:
Q: Why am I using the -r flag on a piped input?
Ans: I actually have grep saved as an alias, with many different options and flags that I always want to use as a default. The recursive flag is one of them. I would like to always use this alias, instead of having to type out all the flags every time.
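(For illustration, such an alias might look something like this; the exact flags are hypothetical:)
alias grep='grep -rn --color=auto'
With -r baked in, every grep in a pipeline inherits the recursive flag.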
Q: If you want to search for all instances matching both search terms, why not do (insert-superior-method-here) instead?
Ans: You're probably right. I'm sure there are things I can change in my usual habits that would work around this issue. However, out of intellectual curiosity, I would like to find out why recursive greps on pipes work as intended in most environments but not all, and whether that can somehow be resolved.
The -r flag to grep changed in grep version 2.11 (see the release notes) to implicitly use the working directory as the input if no file arguments are given:
If no file operand is given, and a command-line -r or equivalent option is given, grep now searches the working directory.
You aren't giving the second grep any file arguments so it defaults to the current directory despite there being pipe input.
Try grep -r 'search-term1' . | grep -r 'search-term2' - as a workaround (the trailing - explicitly names standard input as a file operand).
grep -r 'search-term1' . | grep -r -d skip 'search-term2' may also work around the problem.
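A quick way to confirm which behavior you have (assuming GNU grep; the test text is arbitrary):
$ grep --version | head -n1               # 2.11 or later has the new default
$ echo 'term1 term2' | grep -r 'term2'    # >= 2.11: ignores the pipe, searches . instead
$ echo 'term1 term2' | grep -r 'term2' -  # the - operand forces reading standard input
term1 term2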
I use rsync to back up a few thousand files and pipe the output to a file.
Given the number of files, I'd like to see a list of only those transfers that had issues, as well as a summary showing what completed.
So, using the -q flag nicely reports by exception: only errors are displayed.
Using --stats shows a helpful summary at the end.
The problem is that I cannot combine them because it appears that -q suppresses the stats output.
Any ideas welcome.
This did the trick for me:
rsync -azh --stats <source> <destination>
-a/--archive: archive mode; equals -rlptgoD (no -H,-A,-X)
-z/--compress: compress file data during the transfer
-h/--human-readable: output numbers in a human-readable format
--stats: give some file-transfer stats
Perhaps this will help someone else. In the end, the only thing that worked was to swap the output redirections as suggested here.
So in my case it was simply redirecting as follows:
2>> /output.log >> /output.log
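For reference, the complete invocation then looks something like this (source, destination, and log path are placeholders):
rsync -azh --stats /source/dir/ user@host:/dest/dir/ 2>> /output.log >> /output.log
Appending both streams to the same file this way is effectively the same as the more familiar >> /output.log 2>&1 form.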
I have a query regarding the execution of a complex command in the makefile of the current system.
I am currently using the shell function in the makefile to execute the command. However, my command fails, as it is a combination of many commands whose execution collects a huge amount of data. The makefile content is something like this:
variable=$(shell ls -lart | grep name | cut -d/ -f2- )
However, the make execution fails with an execvp failure, since the file listing is huge and I need to parse all of it.
Please suggest ways to overcome this issue. Basically, I would like to execute a complex command and assign its output to a makefile variable that I want to use later in the program.
(This may take a few iterations.)
This looks like a limitation of the architecture, not a Make limitation. There are several ways to address it, but you must show us how you use variable, otherwise even if you succeed in constructing it, you might not be able to use it as you intend. Please show us the exact operations you intend to perform on variable.
For now I suggest you do a couple of experiments and tell us the results. First, try the assignment with a short list of files (e.g. three) to verify that the assignment does what you intend. Second, in the directory with many files, try:
variable=$(shell ls -lart | grep name)
to see whether the problem is in grep or cut.
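If the variable assignment itself turns out to be the limit, one workaround (a generic sketch, not specific to your setup; the target names are hypothetical) is to build the list in a file at recipe time instead of in a make variable:
filelist.txt:
	ls -lart | grep name | cut -d/ -f2- > $@

process: filelist.txt
	while IFS= read -r entry; do \
	  echo "$$entry"; \
	done < filelist.txt
Note the $$ doubling in recipes: make consumes one $, and the shell sees $entry.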
Rather than store the list of files in a variable, you can easily use shell functionality to get the same result. It's a bit odd that you're flattening a recursive ls to get only the leaves, and then running mkdir -p, which is really only useful if the parent directory doesn't exist; but if you know which depths you want to cover (for example the current directory and all subdirectories one level down) you can do something like this:
directories:
	for path in ./*name* ./*/*name*; do \
	  mkdir "/some/path/$$(basename "$$path")" || exit 1; \
	done
or even
	find . -name '*name*' -exec sh -c 'mkdir "/some/path/$$(basename "$$1")"' _ {} \;
The following statement works at the command prompt, but it does not work in a cron job.
myvar=`date +'%d%m'`; echo $myvar >> append.txt
The cron log shows that only part of the date statement is run.
How do I use it in a cron?
Escape the percent signs with a backslash (\%).
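For example, a crontab entry like this (the schedule and path are placeholders) runs correctly:
0 1 * * * myvar=`date +'\%d\%m'`; echo "$myvar" >> /path/to/append.txt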
My general rule of thumb is "do not write scripts in the crontab file". That means I don't place anything other than a simple script name (with absolute path) and possibly some control arguments in the crontab file. In particular, I do not place I/O redirection or variable evaluations in the crontab file; such things go in a (shell) script run by the cron job.
This avoids the trouble - and works across a wide variety of variants of cron, both ancient and modern.
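As a sketch (the script name and paths are hypothetical), the crontab entry shrinks to:
0 1 * * * /usr/local/bin/append-date.sh
and the script holds everything else:
#!/bin/sh
# no escaping needed here; cron never sees the %
myvar=$(date +'%d%m')
echo "$myvar" >> /path/to/append.txt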
from man 5 crontab:
The sixth field (the rest of the line) specifies the command to be run. The entire command portion of the line, up to a newline or % character, will be executed by /bin/sh or by the shell specified in the SHELL variable of the cronfile. Percent signs (%) in the command, unless escaped with backslash (\), will be changed into newline characters, and all data after the first % will be sent to the command as standard input.
Your %s are being changed to newlines, and everything after the first % is being fed to the command as standard input. As Ignacio says, you need to escape the %s with a \.