For example, I have a git branch named feature/ABC-123-my-stuff.
I want to capture just the ABC-123 part.
I tried
cut -d "/" -f2 <<< "$branchName"
results in
ABC-123-my-stuff
but I want to keep only the string right after the / and before the 2nd -.
What do I add / modify to achieve that?
NOTE: I am using zsh on macOS
Use cut twice:
(cut -d"/" -f2 | cut -d"-" -f1,2) <<< $branchName
or with echo:
echo $branchName | cut -d"/" -f2 | cut -d"-" -f1,2
Another way is to use grep:
echo $branchName | egrep -o '[A-Z]{3}-[0-9]{3}'
Note: this solution only works if the value is always 3 capital letters, then -, then 3 digits.
All solutions give me the output:
ABC-123
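If the project key or number can be a different length, a slightly looser pattern should still work (a sketch, assuming the ID is always capital letters, a dash, then digits):
echo $branchName | grep -Eo '[A-Z]+-[0-9]+'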
Use regexp matching:
if [[ $branchName =~ /([^-]+-[^-]+)- ]]
then
    desired_part=$match[1]
else
    echo "$branchName does not have the expected format"
fi
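For example, with the branch name from the question (this assumes zsh's default behaviour of putting capture groups into the $match array):
branchName=feature/ABC-123-my-stuff
[[ $branchName =~ /([^-]+-[^-]+)- ]] && echo $match[1]    # prints ABC-123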
Related
cat DecisionService.txt
/MAGI/Household/MAGI_EDG_FLOW.erf;/Medicaid/MAGI_EDG_FLOW;4;4
/VCL/VCL_Ruleflow_1.erf;/VCL/VCL1_EBDC_FLOW;4;4
/VCL/VCL_Ruleflow_2.erf;/VCL/VCL2_EBDC_FLOW;4;4
I tried this:
cat DecisionService.txt | cut -d ';' -f2 | cut -d '/' -f2 | tr -s ' ' '\n'
My output is:
$i=Medicaid
VCL
VCL
Whereas I need the output to be:
$a=Medicaid
$b=VCL
If you just want the unique values then:
awk -F'/' 'NF&&!a[$(NF-1)]++{print $(NF-1)}' file
Medicaid
VCL
If you actually want the output to contain prefixed incremental variables then:
awk -F'/' 'NF&&!a[$(NF-1)]++{printf "$%c=%s\n",i++,$(NF-1)}' i=97 file
$a=Medicaid
$b=VCL
Note: If your input may contain more than 26 unique values you will need to do something cleverer to avoid output such as $|=VCL.
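If that is a concern, one option (just a sketch) is to print numbered names instead of single letters:
awk -F'/' 'NF&&!a[$(NF-1)]++{printf "$v%d=%s\n",++i,$(NF-1)}' file
$v1=Medicaid
$v2=VCL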
Well, from the question it's not very clear what exactly you want, but I guess you don't want the repeated VCL in the output. Try adding sort and uniq at the end.
cat DecisionService.txt
/MAGI/Household/MAGI_EDG_FLOW.erf;/Medicaid/MAGI_EDG_FLOW;4;4
/VCL/VCL_Ruleflow_1.erf;/VCL/VCL1_EBDC_FLOW;4;4
/VCL/VCL_Ruleflow_2.erf;/VCL/VCL2_EBDC_FLOW;4;4
cat DecisionService.txt | cut -d ';' -f2 | cut -d '/' -f2 | tr -s ' ' '\n'|sort|uniq
Medicaid
VCL
I'm trying to extract an address from a file.
grep keyword /path/to/file
is how I'm finding the line of code I want. The output is something like
var=http://address
Is there a way I can get only the part directly after the =, i.e. http://address, considering the keyword I'm grepping for is in both the var and the http://address parts?
grep keyword /path/to/file | cut -d= -f2-
Just pipe to cut:
grep keyword /path/to/file | cut -d '=' -f 2
You can avoid the needless pipes:
awk -F= '/keyword/{print $2}' /path/to/file
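One difference worth knowing: with -F=, awk's $2 stops at the next =, while cut's -f2- keeps the rest of the line. If the address itself can contain an = sign, a variant like this (a sketch) keeps everything after the first = instead:
awk '/keyword/{sub(/^[^=]*=/,""); print}' /path/to/file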
This question already has answers here:
How to pass command output as multiple arguments to another command
(5 answers)
Closed 5 years ago.
I have this for:
for i in `ls -1 access.log*`; do tail $i |awk {'print $4'} |cut -d: -f 1 |grep - $i > $i.output; done
ls will give access.log, access.log.1, access.log.2 etc.
tail will give me the last line of each file, which looks like: 192.168.1.23 - - [08/Oct/2010:14:05:04 +0300] etc. etc. etc
awk+cut will extract the date (08/Oct/2010 - but different in each access.log), which will allow me to grep for it and redirect the output to a separate file.
But I cannot seem to pass the output of awk+cut to grep.
The reason for all this is that those access logs include lines with more than one date (06/Oct, 07/Oct, 08/Oct) and I just need the lines with the most recent date.
How can I achieve this?
Thank you.
As a sidenote, tail displays the last 10 lines.
A possible solution would be to grep this way:
for i in `ls -lf access.log*`; do grep $(tail $i |awk {'print $4'} |cut -d: -f 1| sed 's/\[/\\[/') $i > $i.output; done
Why don't you break it up into steps?
for file in access.log*
do
    # date part of the timestamp in the last lines, e.g. [08/Oct/2010
    what=$(tail "$file" | awk '{print $4}' | cut -d: -f1)
    # -F matches the pattern literally, since it starts with [
    grep -F "$what" "$file" >> output
done
You shouldn't use ls that way. Also, ls -l gives you information you don't need. The -f option to grep will allow you to pipe the pattern to grep. Always quote variables that contain filenames.
for i in access.log*; do awk 'END {sub(":.*","",$4); print substr($4,2)}' "$i" | grep -f - "$i" > "$i.output"; done
I also eliminated tail and cut since AWK can do their jobs.
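To see what pattern gets handed to grep, you can run just the awk part on one of the logs; with the sample line from the question it should print only the date:
awk 'END {sub(":.*","",$4); print substr($4,2)}' access.log
08/Oct/2010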
Umm...
Use xargs or backticks.
man xargs
or
http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html , section 3.4.5. Command substitution
you can try:
grep "$(stuff to get piped over to be grep-ed)" file
I haven't tried this, but applied to the loop in the question it would look something like this:
for i in access.log*; do grep -F "$(tail "$i" | awk '{print $4}' | cut -d: -f1)" "$i" > "$i.output"; done
I've got strange problem with cut
I wrote script, there I have row:
... | cut -d" " -f3,4 >! out
cut receives this data (I checked it with echo)
James James 033333333 0 0.00
but I receive empty lines in out. Can somebody explain why?
You need to compress out the sequences of spaces, so that each string of spaces is replaced by a single space. The tr command's -s (squeeze) option is perfect for this:
$ ... | tr -s " " | cut -d" " -f3,4 >! out
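For example, assuming the real input has runs of spaces between the columns (which is what the symptom suggests):
$ echo 'James  James  033333333  0  0.00' | tr -s " " | cut -d" " -f3,4
033333333 0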
If you want fields from a text file, awk is almost always the answer:
... | awk '{print $3" "$4}'
For example:
$ echo 'James James 033333333 0 0.00' | cut -d" " -f3,4
$ echo 'James James 033333333 0 0.00' | awk '{print $3" "$4}'
033333333 0
cut doesn't treat a run of spaces as a single delimiter, so it sees empty fields between consecutive spaces.
Do you get empty lines when you leave out the >! out part? I.e., are you targeting the correct fields?
If your input uses fixed spacing, you might want to use cut -c 4-10,15-20 | tr -d ' ' to extract character ranges 4-10 and 15-20 and remove spaces from them.
... | grep -o "[^ ]*"
will extract the fields, each on a separate line. Then you might head/tail them. Not sure about putting them back on the same line, though.
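One way to put selected fields back on a single line is to pick the lines you want and rejoin them (a sketch using sed and paste):
... | grep -o "[^ ]*" | sed -n '3p;4p' | paste -s -d ' ' -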
If I want to cut a list of text using a string as a delimiter, is that possible?
For example I have a directory where a list of shell scripts call same perl script say
abc.pl
So when I do
$grep abc.pl *
in that directory, it gives me following results
xyz.sh: abc.pl 1 2
xyz2.sh: abc.pl 2
mno.sh: abc.pl 3
pqr.sh: abc.pl 4 5
I basically want all the output after "abc.pl" (to check what range of arguments is currently being passed to the perl script).
When I tried
$grep abc.pl * | cut -d'abc.pl' -f2
OR
$grep abc.pl * | cut -d'abc\.pl' -f2
it's giving me
cut: invalid delimiter
When I read the man page for cut, it states
delim can be a multi-byte character.
What am I doing/interpreting wrong here?
Try using this.
$grep abc.pl * | awk -F 'abc.pl' '{print $2}'
-F fs, --field-separator fs
    Use fs for the input field separator (the value of the FS predefined variable).
When I read man for cut it states ... delim can be a multi-byte character.
Multi-byte, but just one character, not a string.
canti:~$ ll | cut --delimiter="delim" -f 1,2
cut: the delimiter must be a single character
Try `cut --help' for more information.
canti:~$ cut --version
cut (GNU coreutils) 5.97
You can only specify the output delimiter as a string (which doesn't help here):
--output-delimiter=STRING
use STRING as the output delimiter; the default is to use the input delimiter
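A common workaround is to first turn the multi-character delimiter into a single character and then let cut split on that (a sketch; it assumes the lines don't already contain a | character):
grep abc.pl * | sed 's/abc\.pl/|/' | cut -d'|' -f2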
Why not use grep abc.pl * | awk '{print $3, $4}'?
$ grep abc.pl * | cut -d' ' -f3-999
In that case just use the space character as the delimiter.
Or you can try e.g. Ruby:
grep abc.pl * | ruby -ne 'p $_.chomp.split("abc.pl").last'