Is there any utility in Solaris/AIX to package a shell script?

I have a Bash shell script which takes three input files as arguments. I would like to package them all, so I can place that package on any UNIX machine and run it.

There is an ancient technique, well known to grey-haired admins, for embedding a tar file in an executable ;)
First, create a script and put it into a file named script:
#!/bin/bash
# Find the line number at which the archive starts (the line after the marker).
TAR_STARTS=`awk '/^__TARMAN BEGINS__/ { print NR + 1; exit 0; }' "$0"`
NAME_OF_SCRIPT=`pwd`/$0
# Everything from that line onward is the gzipped tar archive appended below.
tail -n +"$TAR_STARTS" "$NAME_OF_SCRIPT" | gunzip -c | tar -xvf -
# Insert commands to execute after untarring here, with relative
# pathnames (e.g. if there is a directory "bin" with a file "executable",
# insert a line bin/executable).
exit
__TARMAN BEGINS__
Note: there must be no newline after the last __.
This script is of course derived from somewhere on the internet; it's not mine, and I just cannot remember where I found it to give proper kudos.
Then create your tar file and append it to the end of the script. This is the reason why it's necessary that there is no newline after the __:
$ cat script test.tar.gz > selfexploding.sh
Now you can just try it
$ bash ./selfexploding.sh
tar: blocksize = 9
x testtar, 0 bytes, 0 tape blocks
x testtar/test2, 1024 bytes, 2 tape blocks
x testtar/test1, 1024 bytes, 2 tape blocks
You could of course put the name of a script created by the unpacking before the exit; the path must be relative to the pwd of the execution. I don't know if this works with AIX, but it works at least with Solaris 11.3, and since it only uses standard commands, it should work everywhere. Besides this, you could of course create native packages for Solaris and AIX.
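To package the three input files from the question, the steps above can be automated with a small builder script. A minimal sketch, assuming the runner above is saved as script and the three inputs are named input1, input2 and input3 (hypothetical names):
#!/bin/sh
# Sketch: bundle the runner script plus its three input files into a
# single self-extracting shell script. All file names are placeholders.
tar -cf payload.tar input1 input2 input3
gzip -9 payload.tar                        # produces payload.tar.gz
cat script payload.tar.gz > selfexploding.sh
chmod +x selfexploding.sh
Copy selfexploding.sh to the target machine and run it with bash; it unpacks the three files into the current directory and then runs whatever commands you inserted before the exit.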

Related

Piping the results of *nix commands into Vim's set of open files

I have a folder resembling this structure:
nietzsche.txt
kant.org
buddha.txt
kierkegaard.org
aristotle.txt
plato.org
I wish to read the text files that have the *.org extension, so I use the command:
ls | grep .org
The above command neatly sends the following to stdout:
> kant.org
> kierkegaard.org
> plato.org
I would like to open the files listed above in vim all at once - with the above given example, this is trivial; it would just mean typing out the list of files prefixed with "vim", for example:
vim kant.org kierkegaard.org plato.org
...but in my actual folder of articles there are several hundred plain text files, with both the *.org and the *.txt extension. It isn't a matter of converting the org files to true plain text; it's about getting vim to use the output of other commands through pipes. In reality, the conditions for generating the "books-to-read" list are far more complicated (i.e. using date last read, author, date written, etc.), so a simple find-and-replace of org-to-txt wouldn't work, as I currently have a bash script to generate the list and spit it to stdout.
How would I get vim to accept the output of a command like grep as a list of files to open immediately?
In this specific example, ls | grep .org is pointless since you can simply do:
$ vim *.org
As for the general case, you would use xargs (see man xargs) on Unix-like systems:
$ <command that generates a list of files> | xargs -o vim
or:
$ <command that generates a list of files> | xargs vim --not-a-term
Note that xargs' -o and Vim's --not-a-term are more or less the opposite of each other. The former ensures that xargs passes a proper tty to Vim, while the latter ensures that Vim doesn't complain if there is no attached tty.
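If the list generator is a script of your own, plain command substitution is another option; a minimal sketch, where ./generate-list.sh is a hypothetical stand-in for whatever produces the list:
$ vim $(./generate-list.sh)
The caveat is that the unquoted expansion splits on whitespace, so this breaks on file names containing spaces; the xargs variants above are more robust.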
You can use command mode completion inside vim:
:e *.org<C-a>
Read more at :h c_CTRL-A.

Grep: could not find file

In a Unix environment, I need to write a report to an x_out file, and at the end of the process the file needs to be removed. But it always throws the following error:
grep: can't open /XYZ/123/Tmp/x_out
rm: /XYZ/123/Tmp/x_out non-existent
But I can find the file x_out at the corresponding location, and I'm able to open and view its contents too. I have found that sometimes the file name changes, with some '~'-like characters appended to it. Is there a way to resolve this?
Edit: I don't have any '~' appended to it, but I suspect some unreadable characters like that may have been appended.
Edit: I have added the actual error here.
Edit: the command I used:
grep "Report_values" ${REPORTOUT}|cut -d "|" -f 6
rm ${REPORTOUT}
Well, there are two possibilities I can see off the top of my head. There are undoubtedly more but the top of my head isn't a very big space :-)
The first is that the file doesn't exist despite your assertions.
The second is that it does exist but you're looking for it in the wrong place (for example, you've changed into a different directory).
If you place a line similar to:
( pwd ; cd ../.. ; pwd ; ls )
in your script before the grep/rm, it should tell you if either of those two possibilities is correct.
It will give your current directory, the directory you're looking in for the file and the files in that directory.
Just check whether you have a non-printable/non-graphic character in the filename. Use the -Q or -q flag of ls to see it; the demo session below shows how it looks.
flag description from ls man page
-q, --hide-control-chars
print ? instead of non graphic characters
--show-control-chars
show non graphic characters as-is (default unless program is `ls' and output is a terminal)
-Q, --quote-name
enclose entry names in double quotes
--quoting-style=WORD
use quoting style WORD for entry names: literal, locale, shell, shell-always, c, escape
Demo Session
$ ls
demo.txt test.dat
$ ls -1
demo.txt
test.dat
$ cat demo.txt
cat: demo.txt: No such file or directory
$ rm demo.txt
rm: cannot remove `demo.txt': No such file or directory
$ ls -Q
"demo.txt " "test.dat"
$ ls -1Q
"demo.txt "
"test.dat"
$ rm "demo.txt "
$
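Once the real name is visible, one way to deal with such a file without typing the invisible character is to match it with a glob, or to inspect the exact bytes of the name; a sketch using the demo file above (? matches exactly one character; use demo.txt* if more junk may be appended):
$ rm -- demo.txt?
$ ls | od -c | head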

Converting this code from R to Shell script?

So I'm running a program that works, but the issue is that my computer is not powerful enough to handle the task. I have the code written in R, but I have access to a supercomputer that runs a Unix system (as one would expect).
The program is designed to read a .csv file and find everything with the unit ft3(monthly total) in the "Units" column and select the value in the column before it. The files are charts that list things in multiple units.
The program to convert, in R:
getwd()
setwd("/Users/youruserName/Desktop")
myData= read.table("yourFileName.csv", header=T, sep=",")
funData= subset(myData, units=="ft3(monthly total)", select=units:value)
write.csv(funData, file="funData.csv")
To a program in Shell Script, I tried:
pwd
cd /Users/yourusername/Desktop
touch RunThisProgram
nano RunThisProgram
(((In nano, I wrote)))
if
grep -r yourFileName.csv ft3(monthly total)
cat > funData.csv
else
cat > nofun.csv
fi
control+x (((used control x to close nano)))
chmod -x RunThisProgram
./RunThisProgram
(((It runs for a while)))
We get a funData.csv file as output, but that file is empty.
What am I doing wrong?
It isn't actually running, because there are a couple of problems with your script:
grep needs the pattern first, and quoted; -r is for recursing a directory...
if without a then
cat is called wrong so it is actually reading from stdin.
You really only need one line:
grep -F "ft3(monthly total)" yourFileName.csv > funData.csv
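The grep line keeps the matching rows whole. If you also need to pull out the value in the column before the units column, as the R version does, awk can filter and extract in one pass; a sketch, assuming a comma-separated file where the exact unit string appears as its own field (no particular column position is assumed):
# Print the value column and the unit column for every matching row.
awk -F',' '{
    for (i = 2; i <= NF; i++)
        if ($i == "ft3(monthly total)")
            print $(i-1) "," $i
}' yourFileName.csv > funData.csv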

Using the same file for stdin and stdout with redirection

I'm writing an application that acts like a filter: it reads input from a file (stdin), processes it, and writes output to another file (stdout). The input file is completely read before the application starts to write the output file.
Since I'm using stdin and stdout, I can run it like this:
$ ./myprog <file1.txt >file2.txt
It works fine, but if I try to use the same file as input and output (that is: read from a file, and write to the same file), like this:
$ ./myprog <file.txt >file.txt
it cleans file.txt before the program has the chance to read it.
Is there any way I can do something like this in a command line in Unix?
There's a sponge utility in the moreutils package:
./myprog < file.txt | sponge file.txt
To quote the manual:
Sponge reads standard input and writes it out to the specified file. Unlike a shell redirect, sponge soaks up all its input before opening the output file. This allows constructing pipelines that read from and write to the same file.
The shell is what clobbers your output file, as it's preparing the output filehandles before executing your program. There's no way to make your program read the input before the shell clobbers the file in a single shell command line.
You need to use two commands, either moving or copying the file before reading it:
mv file.txt filecopy.txt
./myprog < filecopy.txt > file.txt
Or else outputting to a copy and then replacing the original:
./myprog < file.txt > filecopy.txt
mv filecopy.txt file.txt
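A slightly safer variant of the same idea writes to a temporary file created with mktemp, so a half-written copy never replaces the original if ./myprog fails (a sketch, not part of the original answer):
tmp=$(mktemp) &&
./myprog < file.txt > "$tmp" &&
mv "$tmp" file.txt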
If you can't do that, then you need to pass the filename to your program, which opens the file in read/write mode, and handles all the I/O internally.
./myprog file.txt # reads and writes according to its own rules
For a solution of a purely academic nature:
$ ( unlink file.txt && ./myprog >file.txt ) <file.txt
Possibly problematic side-effects are:
If ./myprog fails, you destroy your input. (Naturally...)
./myprog runs from a subshell (use { ... ; } instead of ( ... ) to avoid; see the sketch after this list).
file.txt becomes a new file with a new inode and file permissions.
You need +w permission on the directory housing file.txt.
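For completeness, a minimal sketch of the { ... ; } variant mentioned above, which keeps ./myprog in the current shell instead of a subshell:
$ { unlink file.txt && ./myprog >file.txt ; } <file.txt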

Why did my use of the read command not do what I expected?

I wreaked some havoc on my computer when I played with the commands suggested by vezult [1]. I expected the one-liner to ask for the file names to be removed. However, it immediately removed my files in a folder:
> find ./ -type f | while read x; do rm "$x"; done
I expected it to wait for me to type input on stdin [2]. I cannot understand its action. How does the read command work, and where do you use it?
What happened there is that read reads from stdin. When you put it at the end of a pipe, it reads from that pipe.
So the output of your find becomes
file1
file2
and so on; read reads that and replaces x successively with file1 then file2, and so your loop becomes
rm "file1"
rm "file2"
and sure enough, that rm's every file starting at the current directory ".".
A couple of hints.
You didn't need the "/".
It's better and safer to say
find . -type f
because should you happen to type ". /" (ie, dot SPACE slash) find will start at the current directory and then go look starting at the root directory. That trick, given the right privileges, would delete every file in the computer. "." is already the name of a directory; you don't need to add the slash.
The find or rm commands will do this for you
It sounds like what you wanted to do was go through all the files in all the directories starting at the current directory ".", and have it ASK if you want to delete it. You could do that with
find . -type f -exec rm -i {} \;
or
find . -type f -ok rm {} \;
and not need a loop at all. You can also do
rm -r -i *
and get nearly the same effect, except that it will try to delete directories too. If the directory is empty, that'll even work.
Another thought
Come to think of it, unless you have a LOT of files, you could also do
rm -i `find . -type f`
Now the find in backquotes will become a bunch of file names on the command line, and the '-i' interactive flag on rm will ask the yes or no question.
Charlie Martin gives you a good dissection and explanation of what went wrong with your specific example, but doesn't address the general question of:
When should you use the read command?
The answer to that is - when you want to read successive lines from some file (quite possibly the standard output of some previous sequence of commands in a pipeline), possibly splitting the lines into several separate variables. The splitting is done using the current value of '$IFS', which normally means on blanks and tabs (newlines don't count in this context; they separate lines). If there are multiple variables in the read command, then the first word goes into the first variable, the second into the second, ..., and the residue of the line into the last variable. If there's only one variable, the whole line goes into that variable.
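A quick illustration of that splitting behaviour, assuming the default $IFS of blanks and tabs:
$ echo "one two three four" | { read a b c; echo "a=$a b=$b c=$c"; }
a=one b=two c=three four
The first two words land in $a and $b, and the residue of the line goes into $c.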
There are many uses. This is one of the simpler scripts I have that uses the split option:
#!/bin/ksh
#
# #(#)$Id: mkdbs.sh,v 1.4 2008/10/12 02:41:42 jleffler Exp $
#
# Create basic set of databases
MKDUAL=$HOME/bin/mkdual.sql
ELEMENTS=$HOME/src/sqltools/SQL/elements.sql
cat <<! |
mode_ansi with log mode ansi
logged with buffered log
unlogged
stores with buffered log
!
while read dbs logging
do
    if [ "$dbs" = "unlogged" ]
    then bw=""; cw=""
    else bw="-ebegin"; cw="-ecommit"
    fi
    sqlcmd -xe "create database $dbs $logging" \
        $bw -e "grant resource to public" -f $MKDUAL -f $ELEMENTS $cw
done
The cat command with a here-document has its output sent to a pipe, so the output goes into the while read dbs logging loop. The first word goes into $dbs and is the name of the (Informix) database I want to create. The remainder of the line is placed into $logging. The body of the loop deals with unlogged databases (where begin and commit do not work), then runs a program called sqlcmd (completely separate from the Microsoft newcomer of the same name; it's been around since about 1990) to create a database and populate it with some standard tables and data - a simulation of the Oracle 'dual' table, and a set of tables related to the 'table of elements'.
Other scripts that use the read command are bigger (by far), but generally read lines containing one or more file names and some other attributes of relevance, and then apply an appropriate transform to the files using the attributes.
Osiris JL: file * | grep 'sh.*script' | sed 's/:.*//' | xargs wgrep read
esqlcver:read version letter
jlss: while read directory
jlss: read x || exit
jlss: read x || exit
jlss: while read file type link owner group perms
jlss: read x || exit
jlss: while read file type link owner group perms
kb: while read size name
mkbod: while read directory
mkbod:while read dist comp
mkdbs:while read dbs logging
mkmsd:while read msdfile master
mknmd:while read gfile sfile version notes
publictimestamp:while read name type title
publictimestamp:while read name type title
Osiris JL:
'Osiris JL: ' is my command line prompt; I ran this in my 'bin' directory. 'wgrep' is a variant of grep that only matches entire words (to avoid words like 'already'). This gives some indication of how I've used it.
The 'read x || exit' lines are for an interactive script that reads a response from standard input, but exits if the command gets EOF (for example, if standard input comes from /dev/null).
