I want to pipe the output of a command to something able to color the occurrences of a specific word.
Example: echo "ABC DEF GHI" | magic_color_thing("DEF") should print out ABC DEF GHI with DEF colored.
I want to do it with ZSH and I want to preserve all the lines as well as the carriage returns.
Thank you in advance for any help!
If you have (a recent version of) GNU grep, use its --color option. To have it print non-matching lines as well, use a pattern that matches the empty string.
… | grep --color -E '|DEF'
If you want to do it entirely within zsh, make it iterate over the lines, surrounding DEF with color codes.
autoload colors; colors
while IFS= read -r line; do
  print -r -- "${line//DEF/$fg[red]DEF$fg[default]}"
done
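To get something closer to the magic_color_thing from the question, you could wrap that loop in a function; this is just a rough sketch (the function name only mirrors the question's placeholder, and red is an arbitrary choice):
autoload -U colors; colors
# color every occurrence of the first argument on each line of stdin
magic_color_thing() {
  local word=$1 line
  while IFS= read -r line; do
    print -r -- "${line//$word/$fg[red]$word$fg[default]}"
  done
}
echo "ABC DEF GHI" | magic_color_thing DEF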
See also How to have tail -f show colored output, and a few other questions tagged color.
Does
echo "...... DEF....." | grep --color "DEF"
do the job for you?
It would help if you said more about the kind of data you were piping in.
(And also whether lines without matches are important or not)
I'm trying to remove lines from the output that contain "xyz" in the 1st column, using awk like this:
grep -H --with-filename "Rule" *| awk '$1!=" \*xyz\* "{print $0}'
but I'm not having any success with this.
For example after doing grep -H --with-filename "Rule" I'm getting the output as
file_xyz.log abc p12r
file1.log asd ef23t
fi_xyz.log gbc or26r
file1.log asd ef2t
but I want to remove all lines which contain xyz.
Some notes on your question:
What you have isn't very close to valid awk syntax; you should find an intro-to-awk tutorial or read a few pages of the manual to get started. I highly recommend everyone get the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
A glance at the grep man page will tell you that -H and --with-filename are the short and long versions of exactly the same option - you don't need to use both, just one of them.
The string Rule doesn't appear anywhere in the output you say you get when grepping for Rule, and grep -H will output a : after the file name while you show a blank. Make sure your input, output, and code are consistent and correct when asking a question.
The approach you're trying to use will fail for filenames that contain spaces.
You never need grep when you're using awk.
This is probably all you need:
awk '(FILENAME !~ /xyz/) && /Rule/{print FILENAME, $0}' *
but there are also ways in some shells (see https://unix.stackexchange.com/q/164025/133219 and https://unix.stackexchange.com/q/335484/133219 for bash examples) to specify a globbing pattern that excludes some strings so then you never open them to search inside in the first place.
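For example, with bash's extglob option you could skip the xyz files entirely, so they are never opened for searching. A sketch (here !(*xyz*) expands to every name in the current directory that does not contain xyz):
shopt -s extglob
awk '/Rule/{print FILENAME, $0}' !(*xyz*)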
Try:
grep -H "Rule" * | awk '$1 !~ /xyz/'
This might be the worst example ever given on Stack Overflow, but my purpose is to remove every line in File2 that matches a word from File1, ignoring case and matching anywhere in the line. For example, Cats#123:bob would be removed from File2 because the word Cat appears in File1. So regardless of case, if a matching word is found it should eradicate the entire line.
Input (File1):
Cat
Dog
Horse
Wheel
MainFile (File2)
Cats#123:bob
dog#1:truth
Horse-1:fairytale
Wheel:tremendous
Divination:maximus
Desired output
Divination:maximus
As the output shows, only "Divination:maximus" should be output, as no matching word from File1 appears in it. I generally prefer to use sed or awk since I use Cygwin, but any suggestions are welcome; I can answer any questions you may have, thanks.
Here's what I've tried so far, but it's not working: the wrong lines end up in the output. I'm fairly inexperienced, so I don't know how to develop the syntax below further, and maybe it's completely irrelevant to the job at hand.
grep -avf file1.txt file2.txt > output.txt
The grep command can do that for you:
grep -v -i -f file1 file2
The -f file1 tells grep to use the patterns in file1
The -i flag means case insensitive
The -v flag inverts the match, selecting lines that do not contain any of those patterns
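Run against the sample files from the question, it keeps only the line that matches none of the words (note that file1 must not contain blank lines, since an empty pattern matches every line):
grep -v -i -f file1 file2
Divination:maximus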
Real beginner in Unix commands here, so not sure if the following is actually possible, but here goes.
Is it possible to highlight just one item in a ls output?
I.e.: in a directory I use the following
ls -l --color=auto
this lists 4 items in green
file1.xls
file2.xls
file3.xls
file4.xls
But I want to highlight a specific item, in this case file2.
Is this possible?
The ls program will not do this for you. But you could filter the results from ls through a custom script which modifies the text to highlight just one item. It would be simpler if no color was originally given; then you could match on the given filename (for example as the pattern in an awk script, or in a sed script) and modify just that one item, adding colors.
That is, certainly it is possible. Writing a sample script is a different question.
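Still, a minimal sketch of the idea with sed might look like this (the escape sequences are plain ANSI red and reset, and & in the replacement stands for whatever the pattern matched):
ls -l | sed "s/file2\.xls/$(printf '\033[31m')&$(printf '\033[0m')/"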
How you approach the problem depends on what you want from the output. If that is (literally) the output from ls with a single filename in color, then a script would be the normal approach. You could use grep as suggested in the other answer, which raises a few issues:
commenting on ls -l --color=auto makes it sound as if you are using GNU ls, hence likely using Linux. An appropriate tag for the question would be linux rather than unix. If you ask for unix, the answers should differ.
supposing that you are using Linux. Then likely you have GNU grep, which can do colors. That would let you do something like this:
ls -l | grep --color=always file2 | less -R
however, there is a known bug in GNU grep's use of color (see xterm FAQ "grep --color" does not show the right output).
using grep like this shows only the matching lines. For ls that might be a good choice. For matches in a manual page -- definitely not.
Alternatively, less (which is found more often on Unix systems than GNU grep) also can highlight matches (not in color) and would show the file you are looking for in context. You could do this:
ls -l | less -p file2
(Both grep and less use patterns aka regular expressions, but I left the example simple — read the documentation to learn more).
If you're a beginner I would strongly suggest you learn the grep command if you want to filter results - a Unix user's best friend (mine, anyway).
Use grep to only display the list items you want to see...
ls -l | grep "file2"
NOTE: For this example it is much the same as typing ls -l file2, but your pattern could be expanded based on what you actually want displayed on the screen.
So if you had a directory full of ".txt", ".xls", and ".doc" files and you wanted to only see the ".txt" files with the word "work" in the name (work1.txt), you could write:
ls -ls | grep "work" | grep "txt"
This would list work1.txt, work2.txt, work3.txt and so on.
This is a very basic example but I use grep extensively whilst in the unix shell and would advise using this to filter all results instead of colours.
A little side note using grep -v will show you everything but the pattern you give it
ls -l | grep -v ".txt" will show everything BUT .txt files.
I have a dictionary (not a Python dict) consisting of many text files like this:
##Berlin
-capital of Germany
-3.5 million inhabitants
##Earth
-planet
How can I show one entry of the dictionary with the facts?
Thank you!
You can't. grep doesn't have a way of showing a variable amount of context. You can use -A to show a set number of lines after the match, such as -A3 to show three lines after a match, but it can't be a variable number of lines.
You could write a quick Perl program to read from the file in "paragraph mode" and then print blocks that match a regular expression.
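A sketch of that idea as a one-liner: -00 turns on paragraph mode, so each blank-line-separated entry becomes one record, and the entry name is hard-coded here for illustration:
perl -00 -ne 'print if /^##Berlin/i' *.dict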
As Andy Lester pointed out, you can't have grep show a variable amount of context, but a short awk statement might do what you're hoping for.
If your example file were named file.dict:
awk -v term="earth" 'BEGIN{IGNORECASE=1}{if($0 ~ "##"term){loop=1} if($0 ~ /^$/){loop=0} if(loop == 1){print $0}}' *.dict
returns:
##Earth
-planet
Just change the variable term to the entry you're looking for.
This assumes two things:
dictionary files have same extension (.dict for example purposes)
dictionary files are all in same directory (where command is called)
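The same command, spread out for readability (behavior unchanged; note that IGNORECASE is a GNU awk extension, so this needs gawk):
awk -v term="earth" '
  BEGIN { IGNORECASE = 1 }      # case-insensitive matching (gawk only)
  $0 ~ "##" term { loop = 1 }   # entry header found: start printing
  /^$/ { loop = 0 }             # blank line ends the entry
  loop { print }                # print while inside the entry
' *.dict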
If your grep supports perl regular expressions, you can do it like this:
grep -iPzo '(?s)##Berlin.*?\n(\n|$)'
See this answer for more on this pattern.
You could also do it with GNU sed like this:
query=berlin
sed -n "/$query/I"'{ :a; $p; N; /\n$/!ba; p; }'
That is, when case-insensitive $query is found, print until an empty line is found (/\n$/) or the end of file ($p).
Output in both cases (minor difference in whitespace):
##Berlin
-capital of Germany
-3.5 million inhabitants
In Unix, how would one do this?
#!/bin/sh
x=echo "Hello" | grep '^[A-Z]'
I want x to take the value "Hello", but this script does not seem to work. What would be the proper way of spelling something like the above out?
You can use command substitution as:
x=$(echo "Hello" | grep '^[A-Z]')
You could also use the outdated back-quote style as:
x=`echo "Hello" | grep '^[A-Z]'`
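For example, to check that the variable really took the value:
x=$(echo "Hello" | grep '^[A-Z]')
echo "$x"   # prints: Hello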
You can also use the shell's built-in constructs without calling external tools, e.g. case/esac:
str="Hello"
case "$str" in
[A-Z]* ) x=$str;;
esac
Be sure the grep you are using supports the regex syntax you expect; grep has many variants across Unix systems.