I'm currently using a compiler that constantly emits warnings that I don't want to see. I've noticed that every warning begins with the string "Note :", so I figured it should be possible to filter out those lines.
I compile with
jrc *.jr
Is there a Unix command that filters the output so that lines beginning with "Note :" are not printed?
grep -v "^Note:"
Also, you may want to redirect stderr to stdout:
command 2>&1 | grep -v "^Note:"
Another way would be to use sed.
sed '/^Note:/d'
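Putting it together with the compile command from the question (just a sketch; this assumes the messages go to stderr and start exactly with "Note :", space included, so adjust the pattern to whatever your compiler actually prints):
jrc *.jr 2>&1 | grep -v "^Note :"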
I have this line inside a file:
ULNET-PA,client_sgcib,broker_keplersecurities
,KEPLER
I'm trying to get rid of that ^M (carriage return) character, so I used:
sed 's/^M//g'
However, this removes everything after the ^M:
[root@localhost tmp]# vi test
ULNET-PA,client_sgcib,broker_keplersecurities^M,KEPLER
[root@localhost tmp]# sed 's/^M//g' test
ULNET-PA,client_sgcib,broker_keplersecurities
What I want to obtain is:
[root@localhost tmp]# vi test
ULNET-PA,client_sgcib,broker_keplersecurities,KEPLER
Use tr:
tr -d '^M' < inputfile
(Note that the ^M character can be entered by typing Ctrl+V Ctrl+M)
EDIT: As suggested by Glenn Jackman, if you're using bash, you could also say:
tr -d $'\r' < inputfile
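Since tr writes to standard output rather than editing the file in place, a common pattern (a sketch; test.clean is an arbitrary temporary name, and test is the file name from the question) is:
tr -d $'\r' < test > test.clean && mv test.clean test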
Still the same sed substitution, this time editing the file in place:
sed -i 's/^M//g' file
When you type the command, enter the ^M by pressing Ctrl+V Ctrl+M.
If you already have the file open in vim, you can just do:
:%s/^M//g
As above, type the ^M with Ctrl+V Ctrl+M.
You can simply use dos2unix, which is available on most Unix/Linux systems. However, I found the following sed command to be better, as it removed ^M where dos2unix couldn't:
sed 's/\r//g' < input.txt > output.txt
Hope that helps.
Note: ^M is actually the carriage return character, which is written in code as \r
What dos2unix does is most likely equivalent to:
sed 's/\r\n/\n/g' < input.txt > output.txt
It only removes a \r when it is immediately followed by \n, replacing the pair with a plain \n. That fails with certain kinds of files, such as one I just tested.
alias dos2unix="sed -i -e 's/'\"\$(printf '\015')\"'//g' "
Usage:
dos2unix file
If Perl is an option:
perl -i -pe 's/\r\n$/\n/g' file
-i edits the file in place (add a suffix, e.g. -i.bak, to keep a backup of the original)
\r = carriage return
\n = linefeed
$ = end of line
s/foo/bar/g = globally substitute "foo" with "bar"
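For example, to keep a backup copy of the original (a sketch; the .bak suffix is arbitrary):
perl -i.bak -pe 's/\r\n$/\n/g' file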
In awk:
sub(/\r/,"")
If it is in the end of record, sub(/\r/,"",$NF) should suffice. No need to scan the whole record.
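As a complete one-liner that might look like this (a sketch; input.txt and output.txt are placeholder names, and the trailing 1 just prints every record after the substitution):
awk '{ sub(/\r$/, "") } 1' input.txt > output.txt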
This is a better way to achieve it:
tr -d '\015' < inputfile_name > outputfile_name
Then rename the output file back to the original file name.
I agree with @twalberg (see the accepted answer's comments, above): dos2unix on Mac OS X covers this. Quoting man dos2unix:
To run in Mac mode use the command-line option "-c mac" or use the commands "mac2unix" or "unix2mac"
I settled on 'mac2unix', which got rid of the '^M' entries I was seeing in less, introduced by an Apple 'Messages' transfer of a bash script between two Yosemite (OS X 10.10) Macs!
I installed 'dos2unix' trivially on Mac OS X using the popular Homebrew package manager; I highly recommend it and its companion command, Cask.
This is clean and simple and it works:
sed -i 's/\r//g' file
where \r is, of course, the equivalent of ^M.
Simply run the following command:
sed -i -e 's/\r$//' input.file
I verified that this works on macOS Monterey.
To remove any \r:
nawk 'NF+=OFS=_' FS='\r'
gawk 3 ORS= RS='\r'
To remove only an end-of-line \r:
mawk2 8 RS='\r?\n'
mawk -F'\r$' NF=1
Let's say that you would like to remove comments from a datafile using one of two methods:
cat file.dat | sed -e "s/\#.*//"
cat file.dat | grep -v "#"
How do these individual methods work, and what is the difference between them? Would it also be possible to write the clean data to a new file, while preventing any warnings or error messages from ending up in that datafile? If so, how would you go about doing this?
How do these individual methods work, and what is the difference between them?
They give similar results even though sed and grep are two different commands, but they are not identical: your sed command substitutes everything from the # to the end of the line with nothing (the rest of the line is kept, possibly left empty), while grep -v simply skips any line that contains a #.
You can get more information on these from the man pages, as follows:
man grep:
-v, --invert-match
Invert the sense of matching, to select non-matching lines. (-v is specified by POSIX.)
man sed:
s/regexp/replacement/
Attempt to match regexp against the pattern space. If successful, replace that portion matched with replacement. The replacement may contain the special character & to refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding matching sub-expressions in the regexp.
Would it also be possible to write the clean data to a new file, while preventing any warnings or error messages from ending up in that datafile?
Yes, we can redirect the errors by using 2>/dev/null with both commands.
If so, how would you go about doing this?
You could append something like 2>/dev/null 1>output_file to the command.
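For example, with the sed command from the question (clean.dat is just a placeholder output name):
sed -e "s/\#.*//" file.dat 2>/dev/null 1>clean.dat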
Explanation of the sed command: the breakdown below is only for understanding purposes. Also, there is no need to use cat and then pipe to sed; you could simply run sed -e "s/\#.*//" Input_file instead.
sed -e " ##Initiating sed command here with adding the script to the commands to be executed
s/ ##using s for substitution of regexp following it.
\#.* ##telling sed to match a line if it has # till everything here.
//" ##If match found for above regexp then substitute it with NULL.
That grep -v will lose all the lines that have a # on them. For example:
$ cat file
first
# second
thi # rd
so
$ grep -v "#" file
first
It drops all lines that have a # on them, which is not what you want. Rather you should:
$ grep -o "^[^#]*" file
first
thi
so
much like the sed command does, but this way you won't get empty lines. From man grep:
-o, --only-matching
Print only the matched (non-empty) parts of a matching line,
with each such part on a separate output line.
I have a text file with a million lines to process on UNIX, like this:
"item"
"item"
"item"
"item"
And I use sed -i "s/$/,/g" filename > new_file to add a comma at the end of each line.
What I expect is this:
["item",
"item",
"item",
"item"]
Right now I am just editing manually in Vim. Is there any way to automatically add a bracket at the beginning and end, and remove the comma on the last line? That way I could write a bash script to process these text files neatly.
Thanks!
sed -e '1s/^/[/' -e 's/$/,/' -e '$s/,$/]/' file_name > new_file
The only funny bit is replacing the comma added to the last line with the close square bracket.
Also note that using -i means there will be no output to standard output. Either use -i or use I/O redirection but not both. (And if you're a portability nut — like me — note that Mac OS X or BSD sed supports -i but requires a suffix for the backup. It will quite happily use -e as the suffix, if there's a -e after the -i, or use the sed script if you don't specify a -e — but then it complains about the file name not being a valid sed script).
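For reference, running the command above on the four-line sample from the question should produce:
$ sed -e '1s/^/[/' -e 's/$/,/' -e '$s/,$/]/' file_name
["item",
"item",
"item",
"item"]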
I have a file test.sh from which I am executing the following awk command.
awk -f x.awk < result/output.txt >>difference.txt
x.awk
while (getline < result/$bld/$DeviceType)
The variables DeviceType and bld are available in test.sh. I have declared them as exported variables:
export DeviceType=$line
Even then, while executing the test.sh file, the script stops at the following line
awk -f x.awk < result/output.txt >>difference.txt
and I am getting this error:
awk: x.awk:4: (FILENAME=- FNR=116) fatal: division by zero attempted
The awk script is read by awk, not touched by the shell. Inside an awk script, $bld means 'the field designated by the number in the variable bld' (that's the awk variable bld).
You can set awk variables on the command line (officially with the -v option):
awk -v bld="$bld" -v dev="$DeviceType" -f x.awk < result/output.txt >> difference.txt
Whether that does what you want is still debatable. Most likely you need x.awk to contain something like:
BEGIN { file = sprintf("result/%s/%s", bld, dev); }
{ while ((getline < file) > 0) print }
awk is not shell just like C is not shell. You should not expect to be able to access shell variables within an awk program any more than you can access shell variables within a C program.
To pass the VALUE of shell variables to an awk script, see http://cfajohnson.com/shell/cus-faq-2.html#Q24 for details but essentially:
awk -v awkvar="$shellvar" '{ ... use awkvar ...}'
is usually the right approach.
Having said that, whatever you're trying to do it looks like the wrong approach. If you are considering using getline, make sure to read http://awk.freeshell.org/AllAboutGetline first and understand all of the caveats but if you tell us what it is you're trying to do with sample input and expected output we can almost certainly help you come up with a better approach that has nothing to do with getline.
I'm trying to get the following script to work, but I'm having some issues:
g++ -g -c $1
DWARF=echo $1 | sed -e `s/(^.+)\.cpp$/\1/`
and I'm getting:
./dcompile: line 3: test3.cpp: command not found
./dcompile: command substitution: line 3: syntax error near unexpected token `^.+'
./dcompile: command substitution: line 3: `s/(^.+)\.cpp$/\1/'
sed: option requires an argument -- 'e'
and then a bunch of stuff about sed usage. What I want to do is pass in a .cpp file, extract the file name without the .cpp, and put it into the variable DWARF. I would also like to later use the variable DWARF to do the following:
readelf --debug-dump=info $DWARF+".o" > $DWARF+".txt"
But I'm not sure how to actually do on-the-fly string concatenation, so please help with both of those issues.
You actually need to execute the command (and use -E so that the group and the + are treated as extended-regex operators):
DWARF=$(echo "$1" | sed -E 's/^(.+)\.cpp$/\1/')
The error message is a shell error because your original statement
DWARF=echo $1 | sed -e `s/(^.+)\.cpp$/\1/`
is actually parsed like this
run s/(^.+)\.cpp$/\1/
set DWARF=echo
run the command $1 | ...
So when it says test3.cpp: command not found, I assume that you are running it with the argument test3.cpp, and it is literally trying to execute that file.
You also need to wrap the sed script in single quotes, not backticks.
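For the string concatenation part, bash simply joins adjacent strings, so no + operator is needed; once DWARF is set, something like this should work (a sketch based on the readelf line in the question):
readelf --debug-dump=info "${DWARF}.o" > "${DWARF}.txt"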
In bash you can crop the extension off $1 with
${1%*.cpp}
If you need to set the DWARF variable, use
DWARF="${1%*.cpp}"
or just reference $1 as
readelf --debug-dump=info "${1%*.cpp}.o" > "${1%*.cpp}.txt"
which will chop off the rightmost .cpp, so test.cpp.cpp becomes test.cpp.
You can use awk for this:
$ var="testing.cpp"
$ DWARF=$(awk -F. '{print $1}' <<< $var)
$ echo "$DWARF"
testing