Unix command to extract part of a hostname

I would like to extract the first part of this hostname testsrv1
from testsrv1.main.corp.loc.domain.com in UNIX, within a shell script.
What command can I use? It would be anything before the first period (.).

Do you have the server name in a shell variable? Are you using a sh-like shell? If so,
${SERVERNAME%%.*}
will do what you want.
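For example, a quick sketch using the hostname from the question (SERVERNAME is just whatever variable you already have the name in):
SERVERNAME=testsrv1.main.corp.loc.domain.com
echo "${SERVERNAME%%.*}"    # prints testsrv1
The %% form strips the longest suffix matching .*, i.e. everything from the first dot onward.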

You can use cut:
echo "testsrv1.main.corp.loc.domain.com" | cut -d"." -f1

To build upon pilcrow's answer, there's no need for a new variable; just use the built-in $HOSTNAME.
echo $HOSTNAME          # --> my.server.domain
echo ${HOSTNAME%%.*}    # --> my
Tested on two fairly different Linux systems:
2.6.18-371.4.1.el5, GNU bash, version 3.2.25(1)-release (i386-redhat-linux-gnu)
3.4.76-65.111.amzn1.x86_64, GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)

try the -s switch:
hostname -s

I use cut, awk, sed, or bash variables.
Operation
Via cut
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | cut -d. -f1
testsrv1
[flying@lempstacker ~]$
Via awk
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | awk -v FS='.' '{print $1}'
testsrv1
[flying@lempstacker ~]$
Via sed
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | sed -r 's#([^.]*).(.*)#\1#g'
testsrv1
[flying@lempstacker ~]$
Via Bash Variables
[flying@lempstacker ~]$ hostName='testsrv1.main.corp.loc.domain.com'
[flying@lempstacker ~]$ echo ${hostName%%.*}
testsrv1
[flying@lempstacker ~]$

You could have used "uname -n" to get just the hostname.

You can use IFS to split text by whichever token you want. For domain names, we can use the dot/period character.
#!/usr/bin/env sh
shorthost() {
# Set IFS to dot, so that we can split $@ on dots instead of spaces.
local IFS='.'
# Break up arguments passed to shorthost so that each domain zone is
# a new index in an array.
zones=($@)
# Echo out our first zone
echo ${zones[0]}
}
If this is in your script then, for instance, you'll get test when you run shorthost test.example.com. You can adjust this to fit your use case, but knowing how to break the zones into the array is the big thing here, I think.
I wanted to provide this solution, because I feel like spawning another process is overkill when you can do it easily and completely within your shell with IFS. One thing to watch out for is that some users will recommend doing things like hostname -s, but that doesn't work in the BSD userland. For instance, macOS users don't have the -s flag, I don't think.
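For instance, with the function loaded into a bash session (a quick check; the second call assumes the current host is the testsrv1 machine from the question):
$ shorthost test.example.com
test
$ shorthost "$(hostname)"
testsrv1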

Assuming the variable $HOSTNAME exists, you can try echo ${HOSTNAME%%.*} to get the leading part of the fully qualified hostname. Hope it helps.
If you're interested, the hint comes from the excerpt of /etc/bashrc quoted below, from a RHEL7 host:
if [ -e /etc/sysconfig/bash-prompt-screen ]; then
PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
else
PROMPT_COMMAND='printf "\033k%s#%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
fi

Related

mutt command with multiple attachments in a single mail (unix)

My requirement is to attach all the .csv files in a folder and send them in a single mail.
Here is what I have tried:
mutt -s "subject" -a *.csv -- abc#gmail.com < subject.txt
The above command is not working (It's not recognizing multiple files) and throwing the error
Error sending message, child exited 67 (User unknown.).
Could not send the message.
Then I tried using multiple -a option as follows,
mutt -s "subject" -a aaa.csv -a bbb.csv -- abc#gmail.com < subject.txt
This works as expected.
But this is not feasible for, say, 100 files. I should be able to use a file mask (like *.csv to take all CSV files). Is there any way to use something like *.csv in a single command?
Thanks
Mutt doesn't support such syntax, but it doesn't mean it's impossible. You just have to build the mutt command.
mutt -s "subject" $( printf -- '-a %q ' *.csv ) ...
The command in $( ... ) produces something like this:
-a aaa.csv -a bbb.csv -a ...
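So a complete command following the question's pattern might look like this (a sketch; the address and body file are the ones from the question):
mutt -s "subject" $( printf -- '-a %q ' *.csv ) -- abc@gmail.com < subject.txt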
Here is an example of sending multiple files using a single command:
mutt -s "Subject" -i "Mail_body text" email_id@abc.com -c email_cc_id@abc.com -a attachment1.pdf -a attachment2.pdf
Put the -a options for the attachments at the end of the command line.
Note that some Linux systems limit the attachment size, and the limit is often fairly small.
I'm additionally getting backslashes ( \ ) in the output. For files named:
Daily_Batch_Status{20131003}.PDF
Daily_System_Monitoring{20131003}.PDF
the command printf -- '-a %q ' *.PDF produces:
-a Daily_Batch_Status\{20131003\}.PDF -a Daily_System_Monitoring\{20131003\}.PDF
#!/bin/bash
from="me#address.com"
to="target#address.com"
subject="pdfs $(date +%B) $(date +%Y)"
body="You can find the pdfs from $(date +%B) $(date +%Y)"
# here comes the attachments
mutt -s "$subject" $( printf -- ' -a %q' $PWD/*.pdf ) -- $to <<EOF
Dear Mr and Ms,
$(echo $body)
$(cat ~/.signature)
EOF
but it does not work with escaped characters in file names, like "\[5\]", which can occur on macOS.
I set this up as a script: I collect the needed PDFs in a folder and just run the script from that location, so the monthly reports get sent. It does not matter how many PDFs there are (the number can vary), but the file names must not contain whitespace.
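If the attachment names may contain whitespace or characters that %q escapes awkwardly, one alternative sketch is to collect the -a options in a bash array, which preserves each file name as a single argument (the subject, recipient and body file here are placeholders):
#!/bin/bash
attachments=()
for f in *.pdf; do
    attachments+=(-a "$f")    # one -a per file; spaces and braces survive the quoting
done
mutt -s "Monthly reports" "${attachments[@]}" -- someone@example.com < body.txt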

sed edit file in place

I am trying to find out if it is possible to edit a file in a single sed command without manually streaming the edited content into a new file and then renaming the new file to the original file name.
I tried the -i option but my Solaris system said that -i is an illegal option. Is there a different way?
The -i option streams the edited content into a new file and then renames it behind the scenes, anyway.
Example:
sed -i 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' filename
while on macOS you need:
sed -i '' 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' filename
On a system where sed does not have the ability to edit files in place, I think the better solution would be to use perl:
perl -pi -e 's/foo/bar/g' file.txt
Although this does create a temporary file, it replaces the original because an empty in place suffix/extension has been supplied.
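If you want to keep a backup with the perl approach as well, an extension can be attached directly to -i (a small sketch; .orig is just an arbitrary suffix):
perl -pi.orig -e 's/foo/bar/g' file.txt    # original kept as file.txt.orig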
Note that on OS X you might get errors like "invalid command code", or other strange messages, when running this command. To fix this issue try
sed -i '' -e "s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g" <file>
This is because on the OSX version of sed, the -i option expects an extension argument so your command is actually parsed as the extension argument and the file path is interpreted as the command code. Source: https://stackoverflow.com/a/19457213
The following works fine on my Mac:
sed -i.bak 's/foo/bar/g' sample
This replaces foo with bar in the file sample. A backup of the original file is saved as sample.bak.
For editing in place without a backup on the Mac, pass the empty suffix as a separate argument:
sed -i '' 's/foo/bar/g' sample
One thing to note: this approach uses sed purely as an editor on the "stream" (i.e. pipelines of stdin, stdout, stderr, and other >&n buffers, sockets and the like). With this in mind you can use another command, tee, to write the output back to the file. Another option is to create a patch by piping the content into diff.
Tee method
sed '/regex/' <file> | tee <file>
Patch method
sed '/regex/' <file> | diff -p <file> /dev/stdin | patch
UPDATE:
Also, note that patch does not need to be told which file to change; it takes that from the first line of the diff output:
$ echo foobar | tee fubar
$ sed 's/oo/u/' fubar | diff -p fubar /dev/stdin
*** fubar 2014-03-15 18:06:09.000000000 -0500
--- /dev/stdin 2014-03-15 18:06:41.000000000 -0500
***************
*** 1 ****
! foobar
--- 1 ----
! fubar
$ sed 's/oo/u/' fubar | diff -p fubar /dev/stdin | patch
patching file fubar
Versions of sed that support the -i option for editing a file in place write to a temporary file and then rename the file.
Alternatively, you can just use ed. For example, to change all occurrences of foo to bar in the file file.txt, you can do:
echo ',s/foo/bar/g; w' | tr \; '\012' | ed -s file.txt
Syntax is similar to sed, but certainly not exactly the same.
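The same edit can also be written as a here-document, which avoids the tr trick (a sketch equivalent to the command above):
ed -s file.txt <<'EOF'
,s/foo/bar/g
w
EOF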
Even if you don't have a sed that supports -i, you can easily write a script to do the work for you. Instead of sed -i 's/foo/bar/g' file, you could do inline file sed 's/foo/bar/g'. Such a script is trivial to write. For example:
#!/bin/sh
IN=$1
shift
trap 'rm -f "$tmp"' 0
tmp=$( mktemp )
<"$IN" "$#" >"$tmp" && cat "$tmp" > "$IN" # preserve hard links
should be adequate for most uses.
You could use vi
vi -c '%s/foo/bar/g' my.txt -c 'wq'
sed supports in-place editing. From man sed:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if extension supplied)
Example:
Let's say you have a file hello.txt with the text:
hello world!
If you want to keep a backup of the old file, use:
sed -i.bak 's/hello/bonjour/' hello.txt
You will end up with two files: hello.txt with the content:
bonjour world!
and hello.txt.bak with the old content.
If you don't want to keep a copy, just don't pass the extension parameter.
If you are replacing the same number of characters, and after carefully reading "In-place" editing of files...
You can also use the redirection operator <> to open the file for reading and writing:
sed 's/foo/bar/g' file 1<> file
See it live:
$ cat file
hello
i am here # see "here"
$ sed 's/here/away/' file 1<> file # Run the `sed` command
$ cat file
hello
i am away # this line is changed now
From Bash Reference Manual → 3.6.10 Opening File Descriptors for Reading and Writing:
The redirection operator
[n]<>word
causes the file whose name is the expansion of word to be opened for
both reading and writing on file descriptor n, or on file descriptor 0
if n is not specified. If the file does not exist, it is created.
Like Moneypenny said in Skyfall: "Sometimes the old ways are best."
Kincade said something similar later on.
$ printf ',s/false/true/g\nw\n' | ed {YourFileHere}
Happy editing in place.
Added '\nw\n' to write the file. Apologies for delay answering request.
You didn't specify what shell you are using, but with zsh you could use the =( ) construct to achieve this. Something along the lines of:
cp =(sed ... file; sync) file
=( ) is similar to >( ) but creates a temporary file which is automatically deleted when cp terminates.
mv file.txt file.tmp && sed 's/foo/bar/g' < file.tmp > file.txt
Should preserve all hardlinks, since output is directed back to overwrite the contents of the original file, and avoids any need for a special version of sed.
To resolve this issue on a Mac, I had to install the GNU versions of these tools with Homebrew:
brew install grep
==> Caveats
All commands have been installed with the prefix "g".
If you need to use these commands with their normal names, you
can add a "gnubin" directory to your PATH from your bashrc like:
PATH="/usr/local/opt/grep/libexec/gnubin:$PATH"
Call gsed instead of sed. The Mac default sed doesn't like how grep -rl displays file names with ./ prepended.
~/my-dir/configs$ grep -rl Promise . | xargs sed -i 's/Promise/Bluebird/g'
sed: 1: "./test_config.js": invalid command code .
I also had to use xargs -I{} sed -i 's/Promise/Bluebird/g' {} for files with a space in the name.
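If both GNU grep and GNU sed are installed (ggrep as above, gsed from brew install gnu-sed), a null-delimited pipeline is a sketch that handles spaces in file names without -I:
ggrep -rlZ Promise . | xargs -0 gsed -i 's/Promise/Bluebird/g'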
Very good examples. I had to edit many files in place, and the -i option seems to be the only reasonable solution when used within the find command. Here is the script to add "version " in front of the first line of each file:
find . -name pkg.json -print -exec sed -i '.bak' '1 s/^/version /' {} \;
In case you want to replace strings containing '/', you can use '?' as the delimiter instead, e.g. to replace '/usr/local/bin/python' with '/usr/bin/python3' in all *.py files:
find . -name \*.py -exec sed -i 's?/usr/local/bin/python?/usr/bin/python3?g' {} \;

Pipe output of cat to cURL to download a list of files

I have a list of URLs in a file called urls.txt. Each line contains one URL. I want to download all of the files at once using cURL. I can't seem to get the right one-liner down.
I tried:
$ cat urls.txt | xargs -0 curl -O
But that only gives me the last file in the list.
This works for me:
$ xargs -n 1 curl -O < urls.txt
I'm in FreeBSD. Your xargs may work differently.
Note that this runs sequential curls, which you may view as unnecessarily heavy. If you'd like to save some of that overhead, the following may work in bash:
$ mapfile -t urls < urls.txt
$ curl ${urls[@]/#/-O }
This saves your URL list to an array, then expands the array with options to curl to cause targets to be downloaded. The curl command can take multiple URLs and fetch all of them, recycling the existing connection (HTTP/1.1), but it needs the -O option before each one in order to download and save each target. Note that characters within some URLs may need to be escaped to avoid interacting with your shell.
Or if you are using a POSIX shell rather than bash:
$ curl $(printf ' -O %s' $(cat urls.txt))
This relies on printf's behaviour of repeating the format pattern to exhaust the list of data arguments; not all stand-alone printfs will do this.
Note that this non-xargs method also may bump up against system limits for very large lists of URLs. Research ARG_MAX and MAX_ARG_STRLEN if this is a concern.
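A quick way to inspect the relevant limit on your system (the value varies):
getconf ARG_MAX    # maximum combined length of exec arguments, in bytes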
A very simple solution would be the following:
If you have a file 'file.txt' like
url="http://www.google.de"
url="http://www.yahoo.de"
url="http://www.bing.de"
Then you can use curl and simply do
curl -K file.txt
curl will then fetch all of the URLs contained in your file.txt!
So if you have control over your input file format, maybe this is the simplest solution for you!
Or you could just do this:
cat urls.txt | xargs curl -O
You only need to use the -I parameter when you want to insert the cat output in the middle of a command.
xargs -P 10 | curl
GNU xargs -P can run multiple curl processes in parallel. E.g. to run 10 processes:
xargs -P 10 -n 1 curl -O < urls.txt
This will speed up the download up to 10x if your maximum download speed is not reached and if the server does not throttle IPs, which is the most common scenario.
Just don't set -P too high or your RAM may be overwhelmed.
GNU parallel can achieve similar results.
The downside of those methods is that they don't use a single connection for all files, which is what curl does if you pass multiple URLs to it at once, as in:
curl -o out1.txt http://exmple.com/1 -o out2.txt http://exmple.com/2
as mentioned at https://serverfault.com/questions/199434/how-do-i-make-curl-use-keepalive-from-the-command-line
Maybe combining both methods would give the best results? But I imagine that parallelization is more important than keeping the connection alive.
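One hedged sketch of such a combination, assuming your curl supports --remote-name-all (curl 7.19.0 or newer): have xargs hand each curl process a batch of URLs, so each process reuses one connection for its whole batch. The batch size (-n 20) and process count (-P 4) are arbitrary values to tune.
xargs -P 4 -n 20 curl --remote-name-all < urls.txt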
See also: Parallel download using Curl command line utility
Here is how I do it on a Mac (OSX), but it should work equally well on other systems:
What you need is a text file that contains your links for curl
like so:
http://www.site1.com/subdirectory/file1-[01-15].jpg
http://www.site1.com/subdirectory/file2-[01-15].jpg
.
.
http://www.site1.com/subdirectory/file3287-[01-15].jpg
In this hypothetical case, the text file has 3287 lines and each line codes for 15 pictures.
Let's say we save these links in a text file called testcurl.txt on the top level (/) of our hard drive.
Now we have to go into the terminal and enter the following command in the bash shell:
for i in "`cat /testcurl.txt`" ; do curl -O "$i" ; done
Make sure you are using back ticks (`)
Also make sure the flag (-O) is a capital O and NOT a zero.
With the -O flag, the original filename will be used.
Happy downloading!
As others have rightly mentioned:
-cat urls.txt | xargs -0 curl -O
+cat urls.txt | xargs -n1 curl -O
However, this paradigm is a very bad idea, especially if all of your URLs come from the same server -- you're not only going to be spawning another curl instance, but will also be establishing a new TCP connection for each request, which is highly inefficient, and even more so with the now ubiquitous https.
Please use this instead:
-cat urls.txt | xargs -n1 curl -O
+cat urls.txt | wget -i/dev/fd/0
Or, even simpler:
-cat urls.txt | wget -i/dev/fd/0
+wget -i/dev/fd/0 < urls.txt
Simplest yet:
-wget -i/dev/fd/0 < urls.txt
+wget -iurls.txt

Passing Local IP as argument when running command line application in Unix

I have a command line application which I use and also have to pass my local ip address as an argument, like:
jekyll --url 'http://192.168.1.2:3000' --pygments --safe --server 3000 --auto
I would like the url argument to pick up my IP automatically, since I am always on different networks and get different local IP addresses.
so I can use this alias in my .bashrc
alias jkl="jekyll --url 'http://$IP:3000' --pygments --safe --server 3000 --auto"
where $IP would be my local IP address acquired dynamically.
Is there any way to do it?
First, use double quotes instead of single quotes around your $IP variable, or else the value won't be interpolated.
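To see the difference (a quick illustration with a made-up address):
IP=192.168.1.2
echo 'http://$IP:3000'     # single quotes: prints http://$IP:3000 literally
echo "http://$IP:3000"     # double quotes: prints http://192.168.1.2:3000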
#!/bin/bash
# tested on bash 4
while read -r line
do
case "$line" in
"inet "* )
line="${line/inet /}"
line="${line%% *}"
if [[ ! $line =~ ^(127|172) ]] ;then
IP="$line"
echo "IP: $IP"
fi
;;
esac
done < <(ifconfig)
echo jekyll --url "http://$IP:3000" --pygments --safe --server 3000 --auto
Note that you will have a few different IPs in the output. Choose the one that fits your requirement most.
A computer does not necessarily have "a local IP address", there are often several. For instance, you typically have the localhost address (127.0.0.1), and one or more "true" externally visible addresses. It's hard for an automated solution to know which one to pick.
One easy solution is perhaps to hard-code the "eth0" interface (or whatever the name is of your most typical interface).
On Linux, you could use something like this:
$ ifconfig | grep -A1 eth0 | cut -d: -f2 | cut -d ' ' -f1 | grep \\.
192.168.0.8
So to stuff this into a variable (assuming bash) you would use
MY_IP=$(ifconfig | grep -A1 eth0 | cut -d: -f2 | cut -d ' ' -f1 | grep \\.)
Note that this hard-codes the interface name as eth0.
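On Linux systems whose hostname supports the -I flag (not available in the BSD/macOS userland), a shorter sketch avoids parsing ifconfig output altogether:
MY_IP=$(hostname -I | awk '{print $1}')    # first of possibly several addresses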

Determining age of a file in shell script

G'day,
I need to see if a specific file is more than 58 minutes old from a sh shell script. I'm talking straight vanilla Solaris shell with some POSIX extensions it seems.
I've thought of doing a
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
where the timestamp is 58 minutes ago and then doing a
find ./logs_dir \! -newer /var/tmp/toto -print
We need to postprocess some log files that have been retrieved from various servers using mirror. Waiting for the files to be stable is the way this team decides if the mirror is finished and hence that day's logs are now complete and ready for processing.
Any suggestions gratefully received.
cheers,
I needed something to test the age of a specific file, so as not to re-download it too often. Using GNU date and bash:
# if file's modtime hour is less than current hour:
[[ $(date +%k -r GPW/mstall.zip) -lt $(date +%k) ]] && \
wget -S -N \
http://bossa.pl/pub/metastock/mstock/mstall.zip
Update: this version works much better for me, and is more accurate and understandable:
[[ $(date +%s -r mstall.zip) -lt $(date +%s --date="77 min ago") ]] && echo File is older than 1hr 17min
The BSD variant (tested on a Mac) is:
[[ $(stat -f "%m" mstall.zip) -lt $(date -j -v-77M +%s) ]] && echo File is older than 1hr 17min
You can use different time units with some versions of the find command (e.g. BSD find), for example:
find . -mtime +0h55m
Will return any files with modified dates older than 55 minutes ago.
This is now an old question, sorry, but for the sake of others searching for a good solution as I was...
The best method I can think of is to use the find(1) command which is the only Un*x command I know of that can directly test file age:
if [ "$(find $file -mmin +58)" != "" ]
then
... regenerate the file ...
fi
The other option is to use the stat(1) command to return the age of the file in seconds and the date command to return the time now in seconds. Combined with bash shell arithmetic, working out the age of the file becomes quite easy:
age=$(stat -c %Y $file)
now=$(date +"%s")
if (( (now - age) > (58 * 60) ))
then
... regenerate the file ...
fi
You could do the above without the two variables, but they make things clearer, as does use of bash math (which could also be replaced). I've used the find(1) method quite extensively in scripts over the years and recommend it unless you actually need to know age in seconds.
A piece of the puzzle might be using stat. You can pass -r or -s to get a parseable representation of all file metadata.
find . -print -exec stat -r '{}' \;
AFAICR, the 10th column will show the mtime.
Since you're looking to test the time of a specific file you can start by using test and comparing it to your specially created file:
test /path/to/file -nt /var/tmp/toto
or:
touch -t YYYYMMDDHHmm.SS /var/tmp/toto
if [ /path/to/file -nt /var/tmp/toto ]
...
You can use ls and awk to get what you need as well. Awk has a C-ish printf that will allow you to format the columns any way you want.
I tested this in bash on linux and ksh on solaris.
Fiddle with the options to get the best values for your application, especially "--full-time" (GNU ls on Linux) and "-E" (Solaris ls).
bash
ls -l foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37
ls --full-time foo | awk '{printf "%3s %1s\n", $6, $7}'
2011-04-19 11:37:51.211982332
ksh
ls -l bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
May 3 11:19
ls -E bar | awk '{printf "%3s %1s %s\n", $6, $7, $8}'
2011-05-03 11:19:23.723044000 -0400
