It's a simple question, but it is giving me a lot of trouble.
All the solutions I found work with ksh93. Unfortunately I use ksh88, and I am unable to extract a substring from a string.
I am trying to get the year part of a string, but I am getting an error. The cut syntax seems fine, as does the assignment to the variable.
cut: The list arguments following the c option are not correct.
Here is the statement used.
typeset -i dt_year=`echo 201610118 | cut -c1-4`
In ksh88, I would split your line into two steps:
typeset -i dt_year=0
dt_year=`echo "201610118" | cut -c1-4`
You can also try the equivalent shorter form cut -c-4.
And check with alias whether cut has been aliased to something else.
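A sketch of both approaches, using the sample value from the question (the expr fallback is an alternative that avoids cut entirely, in case the aliased cut is the culprit):

```shell
# Declare the integer variable first, then assign in a separate step (ksh88-friendly):
typeset -i dt_year=0
dt_year=`echo "201610118" | cut -c1-4`
echo $dt_year    # 2016

# A cut-free fallback: expr's ':' operator matches the first four characters.
dt_year=`expr "201610118" : '\(....\)'`
echo $dt_year    # 2016
```

Both forms stick to Bourne-era syntax, so neither depends on ksh93 features such as `${var:0:4}`.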
I reduced my problem to this minimal example:
# This is script myscript
echo $ZSH_VERSION
zparseopts -D vxa:=opt_a vx:=opt_b
echo $1
I'm calling the script with
zsh myscript -vxa xx -vx zz a
I would expect that after processing the options, $1 would output a, but I get
5.8
xx
being printed. Why is this the case?
I am aware of a problem with my script: the option vx is a prefix of vxa. When I modify the script to
zparseopts -D vxa:=opt_a vxb:=opt_b
and call it accordingly, I indeed get the expected result (a) in the output. So, I can fix my script by just renaming the options so that neither one is a prefix of a different option.
However, I would also like to understand why I see the xx with my original code. There is no error in the invocation (if I replace the -D with -F, I don't get an error message). I could understand it if my original script simply filled the wrong variable (taking vx as an abbreviation for vxa), but I don't understand why the script silently stops parsing after consuming the -vxa option without even picking up the mandatory argument xx. What is going on here, and why?
Yes, it's due to the overlapping names plus the ambiguity over how -vxa should be parsed versus -vx a: both forms end up being taken as a -vx option with option-argument a. Its mandatory argument has therefore been consumed, and the unhyphenated xx looks like a non-option, so parsing stops without any error.
man zshmodules has some useful info (bold emphasis added):
In all cases, option-arguments must appear either immediately following the option in the same positional parameter or in the next one. ...
When the names of two options that take no arguments overlap, the longest one wins, so that parsing for the specs -foo -foobar (for example) is unambiguous. However, due to the aforementioned handling of option-arguments, ambiguities may arise when at least one overlapping spec takes an argument, as in -foo: -foobar. In that case, the last matching spec wins.
If the specs are swapped around so that the shorter of the overlapping names are placed before longer ones (i.e. zparseopts -D vx:=opt_b vxa:=opt_a) it should work as you expect. For example:
#!/bin/zsh -
echo $ZSH_VERSION
zparseopts -D vx:=opt_b vxa:=opt_a
print -r -- arg:$^# opt_a:$^opt_a opt_b:$^opt_b
$ zsh thatScript -vxa xx -vx zz a
5.9
arg:a opt_a:-vxa opt_a:xx opt_b:-vx opt_b:zz
I'm trying to find the density of a word by finding the number of lines containing a word and the total number of lines. I tried this:
echo $((grep 'word' filename | wc -l)/(wc -l filename))
But it's throwing me a syntax error. I'm sure it's something basic, but I'm pretty new so any help would be appreciated!
grep and expr are the wrong tools. You want a simple awk script:
awk '/word/{count++} END { print count/NR}' input-file
Note that you're not even explicitly calling expr, but your attempt to use / as a division operator implies that was your intent. In that context, though, / is integer division, so very likely not what you want. Using grep piped to wc will work, but that forces you to read the input multiple times, which you don't want. Using awk to scan the file once, counting the lines that match, seems like your best bet.
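For instance, with a hypothetical four-line file in which two lines contain the word, the one-pass awk script reports a density of 0.5 (the file name and contents below are made up):

```shell
# Build a small sample file: 2 of its 4 lines contain 'word'.
printf '%s\n' 'a word here' 'nothing' 'word again' 'more text' > /tmp/sample.txt

# One pass over the file: count matching lines, divide by total lines (NR).
awk '/word/{count++} END { print count/NR }' /tmp/sample.txt   # → 0.5
```

Because awk's arithmetic is floating-point, 2/4 comes out as 0.5 rather than the 0 that shell integer division would give.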
I've been using this utility successfully for many years, in many environments. But I'm noticing that in one particular environment, it produces very unexpected results.
grep -r 'search-term1' . | grep 'search-term2'
The above code greps recursively for all instances of search-term1, in the current-dir. The results are then piped to another grep, which selects only those lines that also contain search-term2. This works exactly as I would expect.
grep -r 'search-term1' . | grep -r 'search-term2'
The only difference in the above code is that the -r recursive flag is specified in both grep commands. I would expect the behavior not to change in this particular case. After all, the input to the second grep is a pipe, and there's nothing further to be found recursively.
I have been using the command successfully, for many years, in many different environments (both unix and mac-os). However, the most recent environment that I started working in (unix), breaks the above behavior. The second piped grep searches for all instances of search-term2, not only in the piped-input, but also all files in my current directory. Because of this, instead of getting only results that contain both search-terms, I get all results in current-dir that contain the 2nd search term.
Is there any reason why this one particular environment produces this odd behavior? Is there any way I can avoid this, while still preserving the -r flag?
FAQ:
Q: Why am I using the -r flag on a piped input?
Ans: I actually have grep saved as an alias, with many different options and flags that I always want to use as a default. The recursive flag is one of them. I would like to always use this alias, instead of having to type out all the flags every time.
Q: If you want to search for all instances matching both search terms, why not do (insert-superior-method-here) instead?
Ans: You're probably right. I'm sure there are things I can change in my usual habits that would workaround this issue. However, as intellectual curiosity, I would like to find out why recursive-greps-on-pipes work as intended on most environments, but not all, and if that can somehow be resolved.
The -r flag to grep changed in grep version 2.11 (see the release notes) to implicitly use the working directory as the input if no file arguments are given:
If no file operand is given, and a command-line -r or equivalent
option is given, grep now searches the working directory.
You aren't giving the second grep any file arguments so it defaults to the current directory despite there being pipe input.
Try grep -r 'search-term1' . | grep -r 'search-term2' - as a workaround.
grep -r 'search-term1' . | grep -r -d skip 'search-term2' may also work around the problem.
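A quick sketch of why the trailing `-` helps: it gives the second grep an explicit file operand (standard input), so the post-2.11 "no operand means search the working directory" default never kicks in. The sample lines below are made up:

```shell
# Only the line containing both terms should survive the pipeline;
# the explicit '-' pins the second grep to its stdin even with -r in effect.
printf '%s\n' 'foo bar' 'foo only' 'bar only' |
  grep 'foo' | grep -r 'bar' -
```

With the `-` omitted on a grep >= 2.11, the second grep would instead scan the current directory and ignore the piped input entirely.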
I'm trying to run this terminal command from within R using system(mess):
mess <- "sed -i -e '62i\ \\\usepackage[margin=2cm]{geometry}' intro-spatial-rl.tex"
But it keeps failing with the following error:
Error: '\u' used without hex digits in character string starting ""sed -i -e '62i\ \\\u"
I've seen paste used to build system commands too, but that fails as well.
I could use a different regex program, but I thought this might be useful to others and improve my understanding of how R deals with characters. Thank you!
Your problem is the unequal number of \ in your escape sequence.
R sees two escape sequences here: \\ and \u. The second one is invalid and gives an error. You probably want to escape that backslash as well, yielding \\\\. Likewise, you probably meant to escape the earlier backslash in '\ ', leaving you with '\\ '.
All that being said, I would replace the sed invocation entirely with R code in this instance. The way I understand it, you just want to insert a line of text. That's easy in R (although it's not clear what your input and output are here).
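For reference, this is roughly what the command has to look like at the shell level once R has unescaped the string (a sketch using GNU sed's -i; the scratch file and line number here are made up). Every backslash below must then be doubled inside the R string, giving `'1i\\ \\\\usepackage...'`:

```shell
# Create a scratch .tex file, then insert the geometry line before line 1.
# In sed's insert text, '\\' produces a single literal backslash.
printf '%s\n' '\documentclass{article}' > /tmp/demo.tex
sed -i -e '1i\ \\usepackage[margin=2cm]{geometry}' /tmp/demo.tex
head -n 1 /tmp/demo.tex
```

So there are two layers of unescaping (R's string parser, then sed), which is why the backslash count in the R source ends up doubled relative to what sed sees.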
I have a simple thing to do, but I'm a novice in UNIX.
So, I have a file and on each line I have an ID.
I need to go through the file and put all ID's into one variable.
I've tried something like I would in Java, but it does not work.
for variable in `cat myFile.txt`
do
param=`echo "${param} ${variable}"`
done
It does not seem to add all the values into param.
Thanks.
I'd use:
param=$(<myFile.txt)
The parameter has white space (actually newlines) between the values. When used without quotes, the shell will expand those to spaces, as in:
cat $param
If used with quotes, the values will remain on separate lines, as in:
echo "$param"
Note that the Korn shell special-cases the '$(<file)' notation and does not fork and execute any command.
Also note that your original idea can be made to work more simply:
param=
for variable in `cat myFile.txt`
do
param="${param} ${variable}"
done
This introduces a blank at the front of the parameter; it seldom matters. Interestingly, you can avoid the blank at the front by having one at the end, using param="${param}${variable} ". This also works without messing things up, though it looks as though it jams things together. Also, the '${var}' notation is not necessary, though it does no harm either.
And, finally for now, it is better to replace the back-tick command with '$(cat myFile.txt)'. The difference becomes crucial when you need to nest commands:
perllib=$(dirname $(dirname $(which perl)))/lib
vs
perllib=`dirname \`dirname \\\`which perl\\\`\``/lib
I know which I prefer to type (and read)!
Try this:
param=`cat myFile.txt | tr '\n' ' '`
The tr command translates all occurrences of \n (new line) to spaces. Then we assign the result to the param variable.
Lovely.
param="$(< myFile.txt)"
or
while read line
do
param="$param$line"$'\n'
done < myFile.txt
awk
var=$(awk '1' ORS=" " file)
ksh
while read -r line
do
t="$t $line"
done < file
echo $t