single quotes not working in shell script - unix

I have a .bash_profile script and I can't get the following to work
alias lsls='ls -l | sort -n +4'
when I type the alias lsls
it does the sort but then posts this error message
"-bash: +4: command not found"
How do I get the alias to work with '+4'?
It works when I type ls -l | sort -n +4 on the command line.
I'm in OS X 10.4
Thanks for any help

bash-4.0$ ls -l | sort -n +4
sort: open failed: +4: No such file or directory
You need ls -l | sort -n -k 5; GNU sort is different from BSD sort.
alias lsls='ls -l | sort -n -k 5'
Edit: updated to reflect change from 0 based indexing to 1 based indexing, thanks Matthew.

alias lsls='ls -l | sort -n +4' should work fine with the sort in OS X 10.4 (which does support that syntax).
when I type the alias lsls it does the sort but then posts this error message "-bash: +4: command not found"
Is it possible that you inserted a stray newline when editing your .bash_profile? e.g. if you ended up with something like this:
alias lsls='ls -l | sort -n
+4'
...that might explain the error message.
As an aside, you can get the same effect without piping through sort at all, using:
ls -lrS

This link discusses a very similar alias containing a pipe.
The problem may not have been the pipe, but the interesting solution was to use a function.
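For illustration, a minimal sketch of that function approach (using the corrected -k 5 field from the answer above), which could go in .bash_profile in place of the alias:
lsls() { ls -l | sort -n -k 5; }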

Related

sed edit file in place

I am trying to find out if it is possible to edit a file in a single sed command without manually streaming the edited content into a new file and then renaming the new file to the original file name.
I tried the -i option but my Solaris system said that -i is an illegal option. Is there a different way?
The -i option streams the edited content into a new file and then renames it behind the scenes, anyway.
Example:
sed -i 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' filename
while on macOS you need:
sed -i '' 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' filename
On a system where sed does not have the ability to edit files in place, I think the better solution would be to use perl:
perl -pi -e 's/foo/bar/g' file.txt
Although this does create a temporary file, it replaces the original because an empty in place suffix/extension has been supplied.
Note that on OS X you might get errors like "invalid command code" or other odd messages when running this command. To fix this issue try
sed -i '' -e "s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g" <file>
This is because on the OS X version of sed, the -i option expects an extension argument, so your substitution command is actually parsed as the extension argument and the file path is interpreted as the command code. Source: https://stackoverflow.com/a/19457213
The following works fine on my mac
sed -i.bak 's/foo/bar/g' sample
We are replacing foo with bar in the file sample. A backup of the original file will be saved as sample.bak.
For editing in place without a backup, use the following command (note the space before the empty quotes on macOS):
sed -i '' 's/foo/bar/g' sample
One thing to note: sed cannot write files on its own, as its sole purpose is to act as an editor on the "stream" (i.e. pipelines of stdin, stdout, stderr, and other >&n buffers, sockets and the like). With this in mind, you can use another command, tee, to write the output back to the file. Another option is to create a patch by piping the content into diff.
Tee method
sed '/regex/' <file> | tee <file>
Patch method
sed '/regex/' <file> | diff -p <file> /dev/stdin | patch
UPDATE:
Also, note that patch does not need to be told which file to change; it picks that up from the first line of the diff output:
$ echo foobar | tee fubar
$ sed 's/oo/u/' fubar | diff -p fubar /dev/stdin
*** fubar 2014-03-15 18:06:09.000000000 -0500
--- /dev/stdin 2014-03-15 18:06:41.000000000 -0500
***************
*** 1 ****
! foobar
--- 1 ----
! fubar
$ sed 's/oo/u/' fubar | diff -p fubar /dev/stdin | patch
patching file fubar
Versions of sed that support the -i option for editing a file in place write to a temporary file and then rename the file over the original.
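Roughly speaking, that is equivalent to doing the following by hand (a sketch; the temporary file name is just illustrative):
sed 's/foo/bar/g' file.txt > file.txt.tmp && mv file.txt.tmp file.txt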
Alternatively, you can just use ed. For example, to change all occurrences of foo to bar in the file file.txt, you can do:
echo ',s/foo/bar/g; w' | tr \; '\012' | ed -s file.txt
Syntax is similar to sed, but certainly not exactly the same.
Even if you don't have a sed that supports -i, you can easily write a script to do the work for you. Instead of sed -i 's/foo/bar/g' file, you could do inline file sed 's/foo/bar/g'. Such a script is trivial to write. For example:
#!/bin/sh
IN=$1
shift
trap 'rm -f "$tmp"' 0
tmp=$( mktemp )
<"$IN" "$#" >"$tmp" && cat "$tmp" > "$IN" # preserve hard links
should be adequate for most uses.
You could use vi
vi -c '%s/foo/bar/g' my.txt -c 'wq'
sed supports in-place editing. From man sed:
-i[SUFFIX], --in-place[=SUFFIX]
edit files in place (makes backup if extension supplied)
Example:
Let's say you have a file hello.txt with the text:
hello world!
If you want to keep a backup of the old file, use:
sed -i.bak 's/hello/bonjour/' hello.txt
You will end up with two files: hello.txt with the content:
bonjour world!
and hello.txt.bak with the old content.
If you don't want to keep a copy, just don't pass the extension parameter.
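For example, with a sed whose -i works without an argument (GNU sed), that would simply be:
sed -i 's/hello/bonjour/' hello.txt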
If you are replacing the same number of characters, and after carefully reading “In-place” editing of files...
You can also use the redirection operator <> to open the file to read and write:
sed 's/foo/bar/g' file 1<> file
See it live:
$ cat file
hello
i am here # see "here"
$ sed 's/here/away/' file 1<> file # Run the `sed` command
$ cat file
hello
i am away # this line is changed now
From Bash Reference Manual → 3.6.10 Opening File Descriptors for Reading and Writing:
The redirection operator
[n]<>word
causes the file whose name is the expansion of word to be opened for
both reading and writing on file descriptor n, or on file descriptor 0
if n is not specified. If the file does not exist, it is created.
Like Moneypenny said in Skyfall: "Sometimes the old ways are best."
Kincade said something similar later on.
$ printf ',s/false/true/g\nw\n' | ed {YourFileHere}
Happy editing in place.
Added '\nw\n' to write the file. Apologies for the delay in answering the request.
You didn't specify what shell you are using, but with zsh you could use the =( ) construct to achieve this. Something along the lines of:
cp =(sed ... file; sync) file
=( ) is similar to >( ) but creates a temporary file which is automatically deleted when cp terminates.
cp file.txt file.tmp && sed 's/foo/bar/g' < file.tmp > file.txt
This should preserve all hard links, since the output is directed back to overwrite the contents of the original file, and it avoids any need for a special version of sed.
To resolve this issue on a Mac I had to install the GNU versions of some of the core utilities, following this.
brew install grep
==> Caveats
All commands have been installed with the prefix "g".
If you need to use these commands with their normal names, you
can add a "gnubin" directory to your PATH from your bashrc like:
PATH="/usr/local/opt/grep/libexec/gnubin:$PATH"
Call it as gsed instead of sed. The Mac default sed doesn't like how grep -rl displays file names with ./ prepended.
~/my-dir/configs$ grep -rl Promise . | xargs sed -i 's/Promise/Bluebird/g'
sed: 1: "./test_config.js": invalid command code .
I also had to use xargs -I{} sed -i 's/Promise/Bluebird/g' {} for files with a space in the name.
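Putting those pieces together, a sketch of the full pipeline (assuming GNU sed is installed as gsed; -I{} makes xargs treat each whole line as a single argument, which copes with simple spaces but not with newlines in names):
grep -rl Promise . | xargs -I{} gsed -i 's/Promise/Bluebird/g' {}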
Very good examples. I had the challenge of editing many files in place, and the -i option seems to be the only reasonable solution when used within a find command. Here is the script to add "version" in front of the first line of each file:
find . -name pkg.json -print -exec sed -i '.bak' '1 s/^/version /' {} \;
In case you want to replace strings containing '/', you can use '?' as the delimiter, e.g. to replace '/usr/local/bin/python' with '/usr/bin/python3' for all *.py files:
find . -name \*.py -exec sed -i 's?/usr/local/bin/python?/usr/bin/python3?g' {} \;

Pipe output of cat to cURL to download a list of files

I have a list of URLs in a file called urls.txt. Each line contains one URL. I want to download all of the files at once using cURL. I can't seem to get the right one-liner down.
I tried:
$ cat urls.txt | xargs -0 curl -O
But that only gives me the last file in the list.
This works for me:
$ xargs -n 1 curl -O < urls.txt
I'm in FreeBSD. Your xargs may work differently.
Note that this runs sequential curls, which you may view as unnecessarily heavy. If you'd like to save some of that overhead, the following may work in bash:
$ mapfile -t urls < urls.txt
$ curl ${urls[@]/#/-O }
This saves your URL list to an array, then expands the array with options to curl so that each target is downloaded. The curl command can take multiple URLs and fetch all of them, reusing the existing connection (HTTP/1.1), but it needs the -O option before each one in order to download and save each target. Note that characters within some URLs may need to be escaped to avoid interacting with your shell.
Or if you are using a POSIX shell rather than bash:
$ curl $(printf ' -O %s' $(cat urls.txt))
This relies on printf's behaviour of repeating the format pattern to exhaust the list of data arguments; not all stand-alone printfs will do this.
Note that this non-xargs method also may bump up against system limits for very large lists of URLs. Research ARG_MAX and MAX_ARG_STRLEN if this is a concern.
A very simple solution would be the following:
If you have a file 'file.txt' like
url="http://www.google.de"
url="http://www.yahoo.de"
url="http://www.bing.de"
Then you can use curl and simply do
curl -K file.txt
And curl will fetch all URLs contained in your file.txt!
So if you have control over your input-file-format, maybe this is the simplest solution for you!
Or you could just do this:
cat urls.txt | xargs curl -O
You only need to use the -I parameter when you want to insert the cat output in the middle of a command.
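As a hypothetical illustration of when -I is needed, here each URL has to be inserted into the middle of the command line (printing the HTTP status code next to each URL) rather than simply appended at the end:
cat urls.txt | xargs -I{} curl -s -o /dev/null -w "%{http_code} {}\n" {}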
xargs -P 10 | curl
GNU xargs -P can run multiple curl processes in parallel. E.g. to run 10 processes:
xargs -P 10 -n 1 curl -O < urls.txt
This will speed up the download by up to 10x if your maximum download speed is not reached and if the server does not throttle IPs, which is the most common scenario.
Just don't set -P too high or your RAM may be overwhelmed.
GNU parallel can achieve similar results.
The downside of those methods is that they don't use a single connection for all files, which is what curl does if you pass multiple URLs to it at once, as in:
curl -o out1.txt http://example.com/1 -o out2.txt http://example.com/2
as mentioned at https://serverfault.com/questions/199434/how-do-i-make-curl-use-keepalive-from-the-command-line
Maybe combining both methods would give the best results? But I imagine that parallelization is more important than keeping the connection alive.
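One way to sketch that combination (untested): -n hands each curl invocation a batch of URLs to fetch over one connection, -P still runs a few such invocations in parallel, and --remote-name-all (curl 7.19+) applies the -O behaviour to every URL in the batch:
xargs -P 3 -n 20 curl --remote-name-all < urls.txt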
See also: Parallel download using Curl command line utility
Here is how I do it on a Mac (OSX), but it should work equally well on other systems:
What you need is a text file that contains your links for curl
like so:
http://www.site1.com/subdirectory/file1-[01-15].jpg
http://www.site1.com/subdirectory/file2-[01-15].jpg
.
.
http://www.site1.com/subdirectory/file3287-[01-15].jpg
In this hypothetical case, the text file has 3287 lines and each line is coding for 15 pictures.
Let's say we save these links in a text file called testcurl.txt on the top level (/) of our hard drive.
Now we have to go into the terminal and enter the following command in the bash shell:
for i in "`cat /testcurl.txt`" ; do curl -O "$i" ; done
Make sure you are using back ticks (`)
Also make sure the flag (-O) is a capital O and NOT a zero
with the -O flag, the original filename will be taken
Happy downloading!
As others have rightly mentioned:
cat urls.txt | xargs -n1 curl -O
However, this paradigm is a very bad idea, especially if all of your URLs come from the same server -- you're not only going to be spawning another curl instance, but will also be establishing a new TCP connection for each request, which is highly inefficient, and even more so with the now ubiquitous https.
Please use this instead:
cat urls.txt | wget -i/dev/fd/0
Or, even simpler:
wget -i/dev/fd/0 < urls.txt
Simplest yet:
wget -i urls.txt

Pipe Find command "stderr" to a command

Hi I have a peculiar problem and I'm trying hard to find (pun intended) a solution for it.
$> find ./subdirectory -type f 2>>error.log
I get an error, something like, "find: ./subdirectory/noidea: Permission denied" from this command and this will be redirected to error.log.
Is there any way I can pipe the stderr to another command before the redirection to error.log?
I want to be able to do something like
$> find ./subdirectory -type f 2 | sed "s#\(.*\)#${PWD}\1#" >> error.log
where I want to pipe only the stderr to the sed command and get the whole path of the find command error.
I know piping doesn't work here and is probably not the right way to go about.
My problem is that I need both stdout and stderr, and both have to be processed through different things simultaneously.
EDIT:
Ok. A slight modification to my problem.
Now, I have a shell script, solve_problem.sh
In this shell script, I have the following code
ErrorFile="error.log"
for directories in `find ./subdirectory -type f 2>> $ErrorFile`
do
field1=`echo $directories | cut -d / -f2`
field2=`echo $directories | cut -d / -f3`
done
Same problem but inside a shell script. The "find: ./subdirectory/noidea: Permission denied" error should go into $ErrorFile and stdout should get assigned to the variable $directories.
Pipe stderr and stdout simultaneously - idea taken from this post:
(find /boot | sed s'/^/STDOUT:/' ) 3>&1 1>&2 2>&3 | sed 's/^/STDERR:/'
Sample output:
STDOUT:/boot/grub/usb_keyboard.mod
STDERR:find: `/boot/lost+found': Permission denied
Bash redirections like 3>&1 1>&2 2>&3 swap stderr and stdout.
I would modify your sample script to look like this:
#!/bin/bash
ErrorFile="error.log"
(find ./subdirectory -type f 3>&1 1>&2 2>&3 | sed "s#^#${PWD}: #" >> $ErrorFile) 3>&1 1>&2 2>&3 | while read line; do
field1=$(echo "$line" | cut -d / -f2)
...
done
Notice that I swapped stdout & stderr twice.
Small additional comment - look at the -printf option in the find manual page. It might be useful to you.
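For example (GNU find only), -printf can emit full paths for the normal output directly, without the extra sed; note that it has no effect on the error messages:
find ./subdirectory -type f -printf "$PWD/%p\n"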
If you need to redirect stderr to stdout so that the following command in the pipe gets it as its input, then you can use 2>&1.
For more information, please have a look at the all about redirection how-to.
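For instance, a quick sketch showing the error lines flowing through the pipe once 2>&1 is in place (the edit below refines this so normal output and errors are handled separately):
find ./subdirectory -type f 2>&1 | grep "Permission denied" >> error.log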
Edit: Even if you need to pass stdout further down the pipe, you can use sed to filter error messages and write them to a file:
$ find . -type f 2>&1 | sed '/^find:/{
> w error.log
> d
> }'
In this example:
in the find command, stderr is redirected to stdout
in the sed command, errors from find that match a regular expression are written to a file (w error.log) and removed from the output (d).
any command following sed in the pipeline will receive the remaining output from find.
Note: This will work as long as all the error messages from find start with find:. Otherwise, the regular expression in sed should be modified to properly match all cases.
You can try this (on bash), which appears to work:
find ./subdirectory -type f 2> >(sed "s#\(.*\)#${PWD}\1#" >> error.log)
This does the following:
2> redirects stderr to
>(...) a process substitution (running sed, which appends to error.log)

unix command to extract part of a hostname

I would like to extract the first part of this hostname testsrv1
from testsrv1.main.corp.loc.domain.com in UNIX, within a shell script.
What command can I use? It would be anything before the first period .
Do you have the server name in a shell variable? Are you using a sh-like shell? If so,
${SERVERNAME%%.*}
will do what you want.
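For example (assuming the name is in a variable called SERVERNAME):
SERVERNAME=testsrv1.main.corp.loc.domain.com
echo "${SERVERNAME%%.*}"   # prints: testsrv1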
You can use cut:
echo "testsrv1.main.corp.loc.domain.com" | cut -d"." -f1
To build upon pilcrow's answer, there is no need for a new variable, just use the built-in $HOSTNAME:
echo $HOSTNAME        --> my.server.domain
echo ${HOSTNAME%%.*}  --> my
Tested on two fairly different Linux systems:
2.6.18-371.4.1.el5, GNU bash, version 3.2.25(1)-release (i386-redhat-linux-gnu)
3.4.76-65.111.amzn1.x86_64, GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
try the -s switch:
hostname -s
I use the cut, awk, or sed commands, or Bash variables.
Operation
Via cut
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | cut -d. -f1
testsrv1
[flying@lempstacker ~]$
Via awk
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | awk -v FS='.' '{print $1}'
testsrv1
[flying@lempstacker ~]$
Via sed
[flying@lempstacker ~]$ echo "testsrv1.main.corp.loc.domain.com" | sed -r 's#([^.]*).(.*)#\1#g'
testsrv1
[flying@lempstacker ~]$
Via Bash Variables
[flying@lempstacker ~]$ hostName='testsrv1.main.corp.loc.domain.com'
[flying@lempstacker ~]$ echo ${hostName%%.*}
testsrv1
[flying@lempstacker ~]$
You could have used "uname -n" to just get the hostname only.
You can use IFS to split text by whichever token you want. For domain names, we can use the dot/period character.
#!/usr/bin/env sh
shorthost() {
# Set IFS to dot, so that we can split $@ on dots instead of spaces.
local IFS='.'
# Break up arguments passed to shorthost so that each domain zone is
# a new index in an array.
zones=($@)
# Echo out our first zone
echo ${zones[0]}
}
If this is in your script then, for instance, you'll get test when you run shorthost test.example.com. You can adjust this to fit your use case, but knowing how to break the zones into the array is the big thing here, I think.
I wanted to provide this solution, because I feel like spawning another process is overkill when you can do it easily and completely within your shell with IFS. One thing to watch out for is that some users will recommend doing things like hostname -s, but that doesn't work in the BSD userland. For instance, MacOS users don't have the -s flag, I don't think.
Assuming the variable $HOSTNAME exists, you can try echo ${HOSTNAME%%.*} to get the leftmost part of the fully-qualified hostname. Hope it helps.
If interested, the hint comes from the partial /etc/bashrc quoted below, from a RHEL7 host:
if [ -e /etc/sysconfig/bash-prompt-screen ]; then
PROMPT_COMMAND=/etc/sysconfig/bash-prompt-screen
else
PROMPT_COMMAND='printf "\033k%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"'
fi
;; ...

Most powerful examples of Unix commands or scripts every programmer should know

There are many things that all programmers should know, but I am particularly interested in the Unix/Linux commands that we should all know. For accomplishing tasks that we may come up against at some point such as refactoring, reporting, network updates etc.
The reason I am curious is that, having previously worked as a software tester at a software company while studying for my degree, I noticed that all of the developers (who were developing Windows software) had two computers.
To their left was their Windows XP development machine, and to the right was a Linux box; I think it was Ubuntu. Anyway, they told me that they used it because it provided powerful Unix operations that Windows couldn't offer in their development process.
This makes me curious to know, as a software engineer what do you believe are some of the most powerful scripts/commands/uses that you can perform on a Unix/Linux operating system that every programmer should know for solving real world tasks that may not necessarily relate to writing code?
We all know what sed, awk and grep do. I am interested in some actual Unix/Linux scripting pieces that have solved a difficult problem for you, so that other programmers may benefit. Please provide your story and source.
I am sure there are numerous examples like this that people keep in their 'Scripts' folder.
Update: People seem to be misinterpreting the question. I am not asking for the names of individual unix commands, rather UNIX code snippets that have solved a problem for you.
Best answers from the Community
Traverse a directory tree and print out paths to any files that match a regular expression:
find . -exec grep -l -e 'myregex' {} \; >> outfile.txt
Invoke the default editor (Nano/Vim)
(works on most Unix systems including Mac OS X)
The default editor is whatever your EDITOR environment variable is set to, e.g. export EDITOR=/usr/bin/pico, which is set in ~/.profile under Mac OS X.
Ctrl+x Ctrl+e
List all running network connections (including which app they belong to)
lsof -i -nP
Clear the terminal's command history (another of my favourites)
history -c
I find commandlinefu.com to be an excellent resource for various shell scripting recipes.
Examples
Common
# Run the last command as root
sudo !!
# Rapidly invoke an editor to write a long, complex, or tricky command
ctrl-x ctrl-e
# Execute a command at a given time
echo "ls -l" | at midnight
Esoteric
# output your microphone to a remote computer's speaker
dd if=/dev/dsp | ssh -c arcfour -C username@host dd of=/dev/dsp
How to exit VI
:wq
Saves the file and ends the misery.
Alternative of ":wq" is ":x" to save and close the vi editor.
grep
awk
sed
perl
find
A lot of Unix power comes from its ability to manipulate text files and filter data. Of course, you can get all of these commands for Windows. They are just not native in the OS, like they are in Unix.
Much of that power also comes from the ability to chain commands together with pipes etc. This can create extremely powerful single lines of commands from simple tools.
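As a classic illustration of that kind of chaining (assuming a plain-text file.txt), a one-liner that lists the ten most frequent words in a file:
tr -cs 'A-Za-z' '\n' < file.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head -10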
Your shell is the most powerful tool you have available
being able to write simple loops etc
understanding file globbing (e.g. *.java etc.)
being able to put together commands via pipes, subshells, redirection, etc.
Having that level of shell knowledge allows you to do enormous amounts on the command line, without having to record info via temporary text files, copy/paste etc., and to leverage off the huge number of utility programs that permit slicing/dicing of data.
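For instance, a throwaway sketch combining a simple loop with file globbing (hypothetical: back up every .java file in the current directory):
for f in *.java; do cp "$f" "$f.bak"; done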
Unix Power Tools will show you so much of this. Every time I open my copy I find something new.
I use this so much I am actually ashamed of myself. Remove spaces from all filenames and replace them with an underscore:
[removespaces.sh]
#!/bin/bash
find . -type f -name "* *" | while read file
do
mv "$file" "${file// /_}"
done
My personal favorite is the lsof command.
"lsof" can be used to list opened file descriptors, sockets, and pipes.
I find it extremely useful when trying to figure out which processes have used which ports/files on my machine.
Example: List all internet connections without hostname resolution and without converting port numbers to port names.
lsof -i -nP
http://www.manpagez.com/man/8/lsof/
If you make a typo in a long command, you can rerun the command with a substitution (in bash):
mkdir ~/aewseomeDirectory
You can see that "awesome" is misspelled; you can type the following to rerun the command with the typo corrected:
^aew^awe
it then outputs what it substituted (mkdir ~/aweseomeDirectory) and runs the command. (don't forget to undo the damage you did with the incorrect command!)
The tr command is the most under-appreciated command in Unix:
#Convert all input to upper case
ls | tr a-z A-Z
#take the output and put into a single line
ls | tr "\n" " "
#get rid of all numbers
ls -lt | tr -d 0-9
When solving problems on faulty Linux boxes, by far the most common key sequence I end up typing is Alt+SysRq R E I S U B.
The power of these tools (grep, find, awk, sed) comes from their versatility, so giving a particular case seems quite useless.
man is the most powerful command, because then you can understand what you type instead of just blindly copy-pasting from Stack Overflow.
Examples are welcome, but there are already topics for this.
My most used :
grep something_to_find * -R
which can be replaced by ack and
find | xargs
find with results piped into xargs can be very powerful
Some of you might disagree with me, but nevertheless, here's something to talk about. If one learns gawk (or the other variants) thoroughly, one can skip learning and using grep/sed/wc/cut/paste and a few other *nix tools. All you need is one good tool to do the job of many combined.
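For example, small sketches where gawk stands in for grep piped into wc -l (count matching lines) and for cut (print a field); logfile is just a placeholder name:
gawk '/error/ { n++ } END { print n }' logfile
gawk -F: '{ print $1 }' /etc/passwd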
Some way to search (multiple) badly formatted log files, in which the search string may be found on an "orphaned" next line. For example, to display both the 1st, and a concatenated 3rd and 4th line when searching for id = 110375:
[2008-11-08 07:07:01] [INFO] ...; id = 110375; ...
[2008-11-08 07:07:02] [INFO] ...; id = 238998; ...
[2008-11-08 07:07:03] [ERROR] ... caught exception
...; id = 110375; ...
[2008-11-08 07:07:05] [INFO] ...; id = 800612; ...
I guess there must be better solutions (yes, add them...!) than the following concatenation of the two lines using sed prior to actually running grep:
#!/bin/bash
if [ $# -ne 1 ]
then
echo "Usage: `basename $0` id"
echo "Searches all myproject's logs for the given id"
exit -1
fi
# When finding "caught exception" then append the next line into the pattern
# space by using "N", and next replace the newline with a colon and a space
# to ensure a single line starting with a timestamp, to allow for sorting
# the output of multiple files:
ls -rt /var/www/rails/myproject/shared/log/production.* \
| xargs cat | sed '/caught exception$/N;s/\n/: /g' \
| grep "id = $1" | sort
...to yield:
[2008-11-08 07:07:01] [INFO] ...; id = 110375; ...
[2008-11-08 07:07:03] [ERROR] ... caught exception: ...; id = 110375; ...
Actually, a more generic solution would append all (possibly multiple) lines that do not start with some [timestamp] to its previous line. Anyone? Not necessarily using sed, of course.
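One possible sketch in awk (untested against real logs): buffer each line that starts with a [timestamp]-style "[" and glue any following lines that don't onto it; production.log is a placeholder name:
awk '/^\[/ { if (buf != "") print buf; buf = $0; next } { buf = buf ": " $0 } END { if (buf != "") print buf }' production.log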
for card in `seq 1 8` ;do
for ts in `seq 1 31` ; do
echo $card $ts >>/etc/tuni.cfg;
done
done
was better than writing the silly 248 lines of config by hand.
Needed to drop some leftover tables that were all prefixed with 'tmp':
for table in `echo show tables | mysql quotiadb |grep ^tmp` ; do
echo drop table $table
done
Review the output, then rerun the loop and pipe it to mysql.
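That final step might look something like this (a sketch; note the added terminating semicolon so mysql accepts each statement):
for table in `echo show tables | mysql quotiadb | grep ^tmp` ; do
echo "drop table $table;"
done | mysql quotiadb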
Finding PIDs without the grep itself showing up
export CUPSPID=`ps -ef | grep cups | grep -v grep | awk '{print $2;}'`
Repeat your previous command in bash using !!. I oftentimes run chown otheruser: -R /home/otheruser and forget to use sudo. If you forget sudo, using !! is a little easier than arrow-up and then home.
sudo !!
I'm also not a fan of automatically resolved hostnames and names for ports, so I keep an alias for iptables mapped to iptables -nL --line-numbers. I'm not even sure why the line numbers are hidden by default.
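Something along these lines (the alias name here is only a guess, for illustration):
alias ipt='iptables -nL --line-numbers'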
Finally, if you want to check if a process is listening on a port as it should, bound to the right address you can run
netstat -nlp
Then you can grep the process name or port number (-n gives you numeric).
I also love to have the aid of colors in the terminal. I like to add this to my bashrc to remind me whether I'm root without even having to read it. This actually helped me a lot, I never forget sudo anymore.
red='\033[1;31m'
green='\033[1;32m'
none='\033[0m'
if [ $(id -u) -eq 0 ];
then
PS1="[\[$red\]\u\[$none\]#\H \w]$ "
else
PS1="[\[$green\]\u\[$none\]#\H \w]$ "
fi
Those are all very simple commands, but I use them a lot. Most of them even deserved an alias on my machines.
Grep (try Windows Grep)
sed (try Sed for Windows)
In fact, there's a great set of ports of really useful *nix commands available at http://gnuwin32.sourceforge.net/. If you have a *nix background and now use windows, you should probably check them out.
You would be better off if you keep a cheatsheet with you... there is no single command that can be termed the most useful. If a particular command does your job, it's useful and powerful.
Edit: you want powerful shell scripts? Shell scripts are programs. Get the basics right, build on individual commands and you'll get what is called a powerful script. The one that serves your need is powerful; otherwise it's useless. It would have been better had you mentioned a problem and asked how to solve it.
Sort of an aside, but you can get PowerShell on Windows. It's really powerful and can do a lot of the *nix-type stuff. One cool difference is that you work with .NET objects instead of text, which can be useful if you're using the pipeline for filtering etc.
Alternatively, if you don't need the .NET integration, install Cygwin on the Windows box. (And add its directory to the Windows PATH.)
The fact you can use -name and -iname multiple times in a find command was an eye opener to me.
[findplaysong.sh]
#!/bin/bash
cd ~
echo Matched...
find /home/musicuser/Music/ -type f -iname "*$1*" -iname "*$2*" -exec echo {} \;
echo Sleeping 5 seconds
sleep 5
find /home/musicuser/Music/ -type f -iname "*$1*" -iname "*$2*" -exec mplayer {} \;
exit
When things work on one server but are broken on another the following lets you compare all the related libraries:
export MYLIST=`ldd amule | awk ' { print $3; }'`; for a in $MYLIST; do cksum $a; done
Compare this list with the one between the machines and you can isolate differences quickly.
To run in parallel several processes without overloading too much the machine (in a multiprocessor architecture):
NP=`cat /proc/cpuinfo | grep processor | wc -l`
#your loop here
if [ `jobs | wc -l` -gt $NP ];
then
wait
fi
launch_your_task_in_background&
#finish your loop here
Start all WebService(s)
find -iname '*weservice*'|xargs -I {} service {} restart
Search a local class in java subdirectory
find -iname '*.java'|xargs grep 'class Pool'
Find all items from a file recursively in subdirectories of the current path:
cat searches.txt| xargs -I {} -d, -n 1 grep -r {}
P.S searches.txt: first,second,third, ... ,million
:() { :|: &} ;:
Fork bomb without root access.
Try it at your own risk.
You can do anything with this...
gcc

Resources